Accelerated vector search using RAPIDS cuVS.
Tensors and dynamic neural networks in Python with strong GPU acceleration
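A minimal sketch of the tensor API described above, assuming a CUDA device may or may not be present; it creates a tensor, moves it to the GPU when one is available, and runs a matrix product:

```python
import torch

# Create a random 3x3 tensor on the CPU.
x = torch.randn(3, 3)

# Move it to the GPU if CUDA is available; otherwise stay on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)

# Matrix product; autograd tracks operations dynamically when requires_grad is set.
y = x @ x.T
print(y.device, y.shape)
```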
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
The fastai deep learning library
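A minimal sketch in the spirit of fastai's vision quickstart, assuming the v2 high-level API; the pets dataset, the filename-based labeling rule, and the single fine-tuning epoch are illustrative choices, not recommendations:

```python
from fastai.vision.all import *

# Download a small illustrative dataset (Oxford-IIIT Pets images).
path = untar_data(URLs.PETS) / "images"

# In this dataset, cat images have filenames starting with an uppercase letter.
def is_cat(x):
    return x[0].isupper()

# Build dataloaders with a 20% validation split and a simple resize transform.
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# Fine-tune a pretrained ResNet for one epoch.
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```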
A WebGL-accelerated JavaScript library for training and deploying ML models.
A high-performance, zero-overhead, extensible Python compiler with built-in NumPy support
Open3D: A Modern Library for 3D Data Processing
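A small, hedged example of the kind of 3D data handling Open3D covers; the NumPy points and the voxel size below are synthetic stand-ins for real sensor data:

```python
import numpy as np
import open3d as o3d

# Synthetic Nx3 point set standing in for real sensor data.
points = np.random.rand(1000, 3)

# Wrap the raw array in an Open3D point cloud.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Basic processing: estimate per-point normals, then voxel-downsample.
pcd.estimate_normals()
down = pcd.voxel_down_sample(voxel_size=0.05)

# Persist the result; PLY is one of the supported formats.
o3d.io.write_point_cloud("downsampled.ply", down)
```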
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
NumPy & SciPy for GPU
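Because CuPy mirrors the NumPy/SciPy interface, a drop-in sketch like the one below is often all that changes when moving an array computation to the GPU; it assumes a CUDA device is present, and the array sizes are arbitrary:

```python
import numpy as np
import cupy as cp

# Allocate on the GPU with the familiar NumPy-style API.
x = cp.arange(12, dtype=cp.float32).reshape(3, 4)

# Reductions and elementwise math run on the device.
row_sums = x.sum(axis=1)
scaled = cp.sqrt(x) * 2.0

# Copy results back to host memory as NumPy arrays when needed.
host = cp.asnumpy(row_sums)
print(isinstance(host, np.ndarray), host)
```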
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
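A client-side sketch, assuming a Triton server is already running on localhost:8000 and serves a hypothetical model "my_model" with an FP32 input "INPUT0" and an output "OUTPUT0"; all of those names and shapes are assumptions for illustration:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request; the model name, tensor names, shape, and dtype are illustrative.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)
out = httpclient.InferRequestedOutput("OUTPUT0")

# Send the inference request and read back the result as a NumPy array.
response = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(response.as_numpy("OUTPUT0"))
```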