This repository provides static quantized versions of the Qwen-4B-Instruct-2507-Self-correct model, which supports tasks such as text generation, bias mitigation, and self-correction. Based on the Qwen-4B architecture, the model has undergone instruction fine-tuning and self-correction training, and multiple quantization levels are offered to suit different hardware requirements.
Tags: Natural Language Processing · Transformers · English
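As a rough usage sketch (assuming the weights can be loaded through the Transformers library; the repository ID below is a placeholder, not confirmed by this card), inference might look like the following:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID -- replace with the actual Hub path or local directory.
model_id = "Qwen-4B-Instruct-2507-Self-correct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt using the model's chat template.
messages = [{"role": "user", "content": "Summarize the benefits of self-correction training."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and print only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For the lower-bit quantized files, a GGUF-compatible runtime such as llama.cpp is typically used instead of Transformers; check the file listing of this repository to see which formats are actually provided.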