LQ-LoRA is a memory-efficient variant of LoRA for fine-tuning large language models. It decomposes each pretrained weight matrix into a fixed quantized component plus a trainable low-rank component, and uses integer linear programming to choose quantization configurations under a memory budget. Experiments show it outperforms competitive baseline methods. The approach is significant for adapting language models to new datasets under tight memory and cost constraints.
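The quantized-plus-low-rank decomposition can be sketched with a simple alternating procedure: fit a low-rank factor to the quantization residual via truncated SVD, then re-quantize what the low-rank part cannot capture. The sketch below is a minimal illustration under simplifying assumptions, not the paper's implementation — it uses plain uniform quantization as a stand-in for the NF-style quantizer, and the function names (`quantize_uniform`, `lq_decompose`) are hypothetical.

```python
import numpy as np

def quantize_uniform(x, bits=4):
    # Stand-in quantizer (assumption): symmetric uniform quantization,
    # not the NormalFloat scheme used in practice.
    levels = 2 ** (bits - 1) - 1
    max_abs = np.abs(x).max()
    scale = max_abs / levels if max_abs > 0 else 1.0
    return np.round(x / scale) * scale

def lq_decompose(W, rank=8, bits=4, iters=10):
    """Alternate between a rank-`rank` fit of the quantization residual
    and re-quantizing the rest, approximating W ~ Q + L1 @ L2."""
    Q = np.zeros_like(W)
    for _ in range(iters):
        # Low-rank fit of W - Q via truncated SVD.
        U, S, Vt = np.linalg.svd(W - Q, full_matrices=False)
        L1 = U[:, :rank] * S[:rank]
        L2 = Vt[:rank]
        # Quantize the part the low-rank factors cannot capture.
        Q = quantize_uniform(W - L1 @ L2, bits)
    return Q, L1, L2

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
Q, L1, L2 = lq_decompose(W)
err = np.linalg.norm(W - Q - L1 @ L2) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

In fine-tuning, `Q` would stay frozen in low precision while only the small factors `L1` and `L2` receive gradient updates, which is where the memory savings come from.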