local-llm-server
A containerized FastAPI server for running open source Large Language Models locally using llama.cpp
Created: 2025-06-21T10:37:48
Updated: 2025-06-21T10:38:54
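The repository's own code is not shown here, but a minimal sketch of the described architecture, a FastAPI endpoint wrapping llama.cpp through the llama-cpp-python bindings, might look like the following. The model path, endpoint name, and parameters are illustrative placeholders, not the project's actual implementation.

```python
# Hypothetical sketch: a FastAPI server serving a local GGUF model via llama.cpp.
# Paths and parameter defaults are assumptions for illustration only.
from fastapi import FastAPI
from llama_cpp import Llama
from pydantic import BaseModel

app = FastAPI()

# Load the local model once at startup (model path is an assumption).
llm = Llama(model_path="/models/model.gguf", n_ctx=2048)

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

@app.post("/generate")
def generate(req: GenerateRequest):
    # Run a completion through llama.cpp and return the generated text.
    result = llm(req.prompt, max_tokens=req.max_tokens)
    return {"text": result["choices"][0]["text"]}
```

In a containerized setup, a server like this would typically be started with uvicorn (for example, `uvicorn main:app --host 0.0.0.0 --port 8000`) and the model file mounted into the container as a volume; the exact Dockerfile and run commands would depend on the repository's own configuration.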