local-llm-workbench
A comprehensive toolkit for benchmarking, optimizing, and deploying local Large Language Models. It includes performance-testing tools, optimized configurations for CPU, GPU, and hybrid setups, and detailed guides for maximizing LLM performance on your hardware.
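At its core, LLM benchmarking means timing generation and computing token throughput. A minimal sketch of that idea, assuming a hypothetical `generate(prompt, n_tokens)` callable (the `dummy_generate` stand-in below simulates per-token latency; in practice you would plug in a real backend such as llama-cpp-python):

```python
import time

def measure_throughput(generate, prompt, n_tokens):
    """Time one generation call and return tokens per second.

    `generate` is any callable(prompt, n_tokens) -> list of tokens;
    this helper is illustrative, not part of the repository's API.
    """
    start = time.perf_counter()
    tokens = generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

def dummy_generate(prompt, n_tokens):
    # Stand-in backend: emits placeholder tokens with a small delay
    # to simulate per-token decoding latency.
    out = []
    for _ in range(n_tokens):
        time.sleep(0.001)
        out.append("tok")
    return out

tps = measure_throughput(dummy_generate, "Hello", 32)
print(f"{tps:.1f} tokens/sec")
```

Running the same measurement across CPU-only, GPU-offloaded, and hybrid configurations is what lets a toolkit like this compare setups on the same hardware.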
Topics: context-window-scaling, cpu-inference, cuda, gpu-acceleration, hybrid-inference, inference-optimization, llama-cpp, llm-benchmarking, llm-deployment, local-llm
Created: 2025-03-27T09:13:00
Updated: 2025-03-27T09:47:23
Stars: 2