Vi-Qwen2-7B-RAG is a large language model fine-tuned from Qwen2-7B-Instruct, designed specifically for Vietnamese retrieval-augmented generation (RAG) tasks. The model is trained on a Vietnamese dataset and supports a context length of up to 8192 tokens. Its capabilities include multi-context processing, negative-information filtering, information integration, and positive/negative judgment.
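As a minimal sketch of how the multi-context capability might be exercised, the snippet below assembles several retrieved passages and a question into a single prompt string. The numbering scheme and instruction wording are assumptions for illustration, not the model's official prompt template; consult the model card for the exact recommended chat format.

```python
def build_rag_prompt(question: str, contexts: list[str]) -> str:
    """Join retrieved passages and a user question into one RAG prompt.

    Passages are numbered so the model can weigh each one separately
    (multi-context processing and negative-information filtering).
    """
    numbered = "\n\n".join(
        f"Context {i + 1}: {passage}" for i, passage in enumerate(contexts)
    )
    return (
        "Answer the question based only on the contexts below.\n\n"
        f"{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Example: one relevant passage and one distractor the model should ignore.
prompt = build_rag_prompt(
    "What is the capital of Vietnam?",
    [
        "Hanoi is the capital of Vietnam.",
        "Da Nang is a coastal city in central Vietnam.",
    ],
)
print(prompt)
```

The resulting string can be passed to the model through the usual `transformers` chat/generation APIs; keeping distractor passages in the prompt is one way to probe the negative-information-filtering behavior described above.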