XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web
Topics: convolutional-neural-network, convolutional-neural-networks, cpu, inference, inference-optimization, matrix-multiplication, mobile-inference, multithreading, neural-network, neural-networks
Created: 2019-09-14T07:48:37
Updated: 2025-03-27T10:14:16
Stars: 2.1K