KVQuant
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
compression, efficient-inference, efficient-model, large-language-models, llama, llm, localllama, localllm, mistral, model-compression
Created: 2024-02-01T01:30:10
Updated: 2025-03-25T10:33:15
https://arxiv.org/abs/2401.18079
Stars: 363
Stars increase: 1