Triton-TensorRT-Inference-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server, multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
Topics: inference, inference-engine, inference-server, nvidia-docker, onnx, onnx-torch, pytorch, tensorrt, tensorrt-conversion, text-detection
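Serving the converted TensorRT engine through Triton requires a model configuration file. The fragment below is a sketch of such a `config.pbtxt` for a TensorRT plan; the model name, tensor names, and dimensions are assumptions for illustration (CRAFT typically emits a two-channel region/affinity score map at half the input resolution), not values taken from this repository.

```protobuf
# Hypothetical Triton model configuration for a TensorRT engine.
# Directory layout assumed: models/craft_trt/1/model.plan
name: "craft_trt"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input"          # assumed tensor name
    data_type: TYPE_FP32
    dims: [ 3, 736, 1280 ] # assumed input resolution
  }
]
output [
  {
    name: "output"         # assumed tensor name
    data_type: TYPE_FP32
    dims: [ 368, 640, 2 ]  # half-resolution score maps (assumption)
  }
]
```

For the TorchScript and ONNX variants mentioned above, the same file structure applies with `platform: "pytorch_libtorch"` or `platform: "onnxruntime_onnx"` respectively.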
Created: 2021-07-13T22:02:24
Updated: 2024-11-14T15:35:53
Stars: 33