
LvLLM

Public

LvLLM is an extension of vLLM that makes full use of CPU and memory resources, reduces GPU memory requirements, and features an efficient GPU-parallel and NUMA-parallel architecture, supporting hybrid inference for large MoE models.

Created: 2025-09-26
Updated: 2025-11-07

Stars: 84 (+4)
