vllm (Public)

A high-throughput and memory-efficient inference and serving engine for LLMs

Created: 2023-02-09T19:23:20
Updated: 2024-05-09T16:46:28
Docs: https://docs.vllm.ai
Stars: 51.9K
Stars increase: 60
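
A minimal offline-inference sketch using vLLM's documented Python quickstart API; the model name and sampling values here are illustrative assumptions, not recommendations:

    # Offline batched inference with vLLM (sketch).
    from vllm import LLM, SamplingParams

    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # LLM() loads the model weights and allocates the paged KV cache.
    llm = LLM(model="facebook/opt-125m")  # illustrative model choice

    # generate() serves all prompts in a single batched run.
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)

For online serving, the project also ships an OpenAI-compatible HTTP server; see the docs URL above.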

Related projects