Early this morning, the Alibaba Tongyi Qianwen team released the Qwen2 series of open-source models. The series includes pre-trained and instruction-tuned models in five sizes: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. Compared with the previous generation, Qwen1.5, these models deliver significantly improved performance across a range of parameter counts.

Regarding multilingual capabilities, the team invested heavily in increasing the quantity and quality of the Qwen2 training data, which covers 27 languages besides English and Chinese. Comparative testing shows that the large models (with over 70B parameters) excel in natural language understanding, coding, mathematics, and more; Qwen2-72B even surpasses its larger predecessor, Qwen1.5-110B, despite having fewer parameters.

The Qwen2 models not only demonstrate strong capabilities in base language model evaluations but also achieve remarkable results in instruction-tuned model assessments. Their multilingual abilities shine in benchmarks such as M-MMLU and MGSM, showcasing the strong potential of the Qwen2 instruction-tuned models.

The release of the Qwen2 series marks a new height in artificial intelligence technology, providing broader possibilities for global AI applications and commercialization. Looking ahead, Qwen2 will further expand model sizes and multimodal capabilities, accelerating the development of the open-source AI field.

Model Information

The Qwen2 series includes base and instruction-tuned models in five sizes: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. The key information for each model is outlined in the table below:

| Models | Qwen2-0.5B | Qwen2-1.5B | Qwen2-7B | Qwen2-57B-A14B | Qwen2-72B |
| --- | --- | --- | --- | --- | --- |
| # Parameters | 0.49B | 1.54B | 7.07B | 57.41B | 72.71B |
| # Non-Emb Parameters | 0.35B | 1.31B | 5.98B | 56.32B | 70.21B |
| GQA | True | True | True | True | True |
| Tie Embedding | True | True | False | False | False |
| Context Length | 32K | 32K | 128K | 64K | 128K |

Specifically, in Qwen1.5 only Qwen1.5-32B and Qwen1.5-110B used Group Query Attention (GQA). This time, we applied GQA to all model sizes so that they benefit from faster inference and lower memory usage. For the smaller models, we prefer tied embeddings, because the large sparse embedding matrices account for a significant portion of those models' total parameters.
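To illustrate what tied embeddings mean in practice, here is a minimal toy sketch (our own example, not Qwen2's actual architecture): the output projection simply reuses the input embedding matrix, so the vocabulary-sized weight is stored only once.

```python
import torch.nn as nn

class TinyTiedLM(nn.Module):
    """Toy language model with tied input/output embeddings.

    Illustrative only: the sizes and layer layout are made up and are not
    Qwen2's architecture. The point is that the output projection reuses the
    embedding weight, so the vocabulary-sized matrix is stored once.
    """

    def __init__(self, vocab_size=151_936, hidden_size=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.backbone = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=8, batch_first=True
        )
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)
        self.lm_head.weight = self.embed.weight  # weight tying

    def forward(self, input_ids):
        hidden = self.backbone(self.embed(input_ids))
        return self.lm_head(hidden)  # logits over the vocabulary

model = TinyTiedLM()
# Shared parameters are counted once, so tying saves roughly
# vocab_size * hidden_size (~156M at these toy sizes) versus an untied head.
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```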

In terms of context length, all base language models were pre-trained on data with a context length of 32K tokens, and we observe satisfactory extrapolation up to 128K in PPL evaluations. For the instruction-tuned models, however, PPL evaluations alone are not enough; we need the models to correctly understand long contexts and complete tasks. The table lists the context length capabilities of the instruction-tuned models, evaluated on the Needle in a Haystack task. Notably, when enhanced with YaRN, both Qwen2-7B-Instruct and Qwen2-72B-Instruct can handle context lengths of up to 128K tokens.
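For readers who want a feel for this kind of test, the sketch below runs a simplified needle-in-a-haystack probe with Hugging Face transformers. The checkpoint name, filler text, and needle are illustrative placeholders, and this is not the evaluation harness behind the reported results.

```python
import random
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; swap in any long-context chat model.
MODEL = "Qwen/Qwen2-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto", device_map="auto")

# Bury a "needle" sentence at a random depth inside a long filler "haystack".
needle = "The secret passphrase is BLUE-HARBOR-42."
filler = "The sky was clear and the market was quiet that day. " * 2000
cut = int(len(filler) * random.random())
haystack = filler[:cut] + " " + needle + " " + filler[cut:]

messages = [{"role": "user",
             "content": haystack + "\n\nWhat is the secret passphrase mentioned above?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=32)
answer = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print("retrieved" if "BLUE-HARBOR-42" in answer else "missed", "|", answer.strip())
```

Scaling up the filler repetition (and placing the needle at several depths) turns this into a simple grid over context length and needle position.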

We have made significant efforts to increase the quantity and quality of the pre-training and instruction-tuning datasets, which cover multiple languages besides English and Chinese, to enhance their multilingual capabilities. Although large language models inherently have the ability to generalize to other languages, we explicitly emphasize that we have included 27 other languages in our training:

| Region | Languages |
| --- | --- |
| Western Europe | German, French, Spanish, Portuguese, Italian, Dutch |
| Eastern and Central Europe | Russian, Czech, Polish |
| Middle East | Arabic, Persian, Hebrew, Turkish |
| East Asia | Japanese, Korean |
| Southeast Asia | Vietnamese, Thai, Indonesian, Malay, Lao, Burmese, Cebuano, Khmer, Tagalog |
| South Asia | Hindi, Bengali, Urdu |

Additionally, we have invested considerable effort in addressing the code-switching that often arises in multilingual generation, and our models' ability to handle this phenomenon has improved significantly. Evaluations using prompts that typically trigger cross-language code-switching confirm a marked reduction in these issues.

Performance

Comparative test results show that the performance of large-scale models (with over 70B parameters) has significantly improved compared to Qwen1.5. This test centers on the large-scale model Qwen2-72B. In terms of base language models, we compared the performance of Qwen2-72B with the current best open-source models in natural language understanding, knowledge acquisition, programming abilities, mathematical abilities, multilingual abilities, and more. Thanks to carefully selected datasets and optimized training methods, Qwen2-72B outperforms leading models like Llama-3-70B, and even surpasses the previous generation Qwen1.5-110B with fewer parameters.

After extensive large-scale pre-training, we conducted post-training to further enhance Qwen's intelligence, bringing it closer to human capabilities. This process further improved the model's abilities in coding, mathematics, reasoning, instruction following, multilingual understanding, and more. Additionally, it aligns the model's outputs with human values, ensuring they are useful, honest, and harmless. Our post-training phase is designed with principles of scalable training and minimal human annotation. Specifically, we researched how to obtain high-quality, reliable, diverse, and creative demonstration data and preference data through various automatic alignment strategies, such as rejection sampling for mathematics, execution feedback for coding and instruction following, back-translation for creative writing, and scalable supervision for role-playing. As for training, we combined supervised fine-tuning, reward model training, and online DPO training. We also adopted a novel online merging optimizer to minimize the alignment tax. These combined efforts significantly enhanced the capabilities and intelligence of our models, as shown in the table below.
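To make one of these strategies concrete, rejection sampling for mathematics roughly amounts to sampling several candidate solutions per problem and keeping only those whose final answer matches the reference. The sketch below illustrates the idea; generate_candidates and extract_final_answer are assumed placeholder hooks, not part of Qwen's actual pipeline.

```python
def rejection_sample_math(problems, generate_candidates, extract_final_answer, k=8):
    """Keep only model solutions whose final answer matches the reference.

    `problems` is a list of dicts with "question" and "answer" fields.
    `generate_candidates(question, k)` is an assumed hook returning k sampled
    solutions from the current model; `extract_final_answer` parses a
    solution's final answer. Both are placeholders for illustration.
    """
    accepted = []
    for problem in problems:
        for solution in generate_candidates(problem["question"], k):
            # A candidate is accepted only if its extracted final answer
            # agrees with the ground-truth answer for the problem.
            if extract_final_answer(solution) == problem["answer"]:
                accepted.append({"question": problem["question"],
                                 "solution": solution})
    return accepted
```

The accepted question/solution pairs can then serve as demonstration data for supervised fine-tuning, with the ground-truth answer acting as an automatic filter in place of human annotation.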

We conducted a comprehensive evaluation of Qwen2-72B-Instruct across 16 benchmarks in various fields. Qwen2-72B-Instruct achieved a balance between better capabilities and alignment with human values. Specifically, Qwen2-72B-Instruct significantly outperformed Qwen1.5-72B-Chat in all benchmarks and achieved competitive performance compared to Llama-3-70B-Instruct.

At smaller scales, Qwen2 models also outperform SOTA models of similar or even larger size. Compared to recently released SOTA models, Qwen2-7B-Instruct still shows an advantage across various benchmarks, especially in coding and Chinese-related metrics.

Highlights

Coding and Mathematics

We have always been committed to enhancing Qwen's advanced features, especially in coding and mathematics. In coding, we successfully integrated CodeQwen1.5's code training experience and data, resulting in significant improvements in Qwen2-72B-Instruct's capabilities in various programming languages. In mathematics, by leveraging extensive and high-quality datasets, Qwen2-72B-Instruct has demonstrated stronger abilities in solving mathematical problems.

Long Context Understanding

In Qwen2, all instruction-tuned models were trained on contexts of 32K tokens and are extended to longer context lengths at inference time using techniques such as YaRN or Dual Chunk Attention.
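As a rough sketch of how such extension is commonly enabled with Hugging Face transformers (assuming a release whose rope-scaling utilities understand the "yarn" type for Qwen2-style models; the factor of 4.0 is our own illustrative choice, stretching the 32K training window toward 128K):

```python
from transformers import AutoConfig

# Assumed setup: a transformers version that supports the "yarn" rope_scaling
# type for Qwen2-style models. factor=4.0 is illustrative (32K * 4 ≈ 128K).
config = AutoConfig.from_pretrained("Qwen/Qwen2-7B-Instruct")
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
config.save_pretrained("./qwen2-7b-instruct-yarn")  # pair with the weights when loading
```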

In our Needle in a Haystack tests, Qwen2-72B-Instruct handles information extraction within a 128K context flawlessly; coupled with its inherently strong performance, this makes it the preferred choice for long-text tasks when resources are sufficient.

Additionally, it is worth noting the impressive capabilities of the other models in the series: Qwen2-7B-Instruct almost perfectly handles a context length of up to 128K, Qwen2-57B-A14B-Instruct manages up to 64K, and the two smaller models in the series support 32K.

In addition to long context models, we have also open-sourced an agent solution for efficiently processing documents containing up to 1 million tokens. For more details, please refer to our dedicated blog post on this topic.

Safety and Responsibility

The table below shows the proportion of harmful responses generated by large models for four types of multilingual unsafe queries: illegal activities, fraud, pornography, and privacy violence. The test data comes from Jailbreak and was translated into multiple languages for evaluation. We found that Llama-3 does not handle multilingual prompts effectively, so it was not included in the comparison. Through significance testing (p-values), we found that the Qwen2-72B-Instruct model's safety performance is comparable to GPT-4 and significantly better than the Mistral-8x22B model.

Each cell lists the harmful-response rate for GPT-4 / Mistral-8x22B / Qwen2-72B-Instruct.

| Language | Illegal Activities | Fraud | Pornography | Privacy Violence |
| --- | --- | --- | --- | --- |
| Chinese | 0% / 13% / 0% | 0% / 17% / 0% | 43% / 47% / 53% | 0% / 10% / 0% |
| English | 0% / 7% / 0% | 0% / 23% / 0% | 37% / 67% / 63% | 0% / 27% / 3% |
| Spanish | 0% / 13% / 0% | 0% / 7% / 0% | 15% / 26% / 15% | 3% / 13% / 0% |
| Portuguese | 0% / 7% / 0% | 3% / 0% / 0% | 48% / 64% / 50% | 3% / 7% / 3% |
| French | 0% / 3% / 0% | 3% / 3% / 7% | 3% / 19% / 7% | 0% / 27% / 0% |
| Korean | 0% / 4% / 0% | 3% / 8% / 4% | 17% / 29% / 10% | 0% / 26% / 4% |
| Japanese | 0% / 7% / 0% | 3% / 7% / 3% | 47% / 57% / 47% | 4% / 26% / 4% |
| Russian | 0% / 10% / 0% | 7% / 23% / 3% | 13% / 17% / 10% | 13% / 7% / 7% |
| Arabic | 0% / 4% / 0% | 4% / 11% / 0% | 22% / 26% / 22% | 0% / 0% / 0% |
| Average | 0% / 8% / 0% | 3% / 11% / 2% | 27% / 39% / 31% | 3% / – / – |
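As an illustration of the kind of significance test referred to above, two models' harmful-response counts on the same prompt set can be compared with an exact test; the prompt count and counts below are made-up placeholders, not the actual evaluation data behind the table.

```python
from scipy.stats import fisher_exact

# Illustrative only: sample size and harmful-response counts are placeholders.
n_prompts = 30
harmful_a, harmful_b = 2, 12   # e.g. model A vs. model B on the same prompts
contingency = [[harmful_a, n_prompts - harmful_a],
               [harmful_b, n_prompts - harmful_b]]
_, p_value = fisher_exact(contingency)
print(f"p-value = {p_value:.4f}")  # a small p-value means the gap is unlikely to be chance
```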