Hugging Face, a globally renowned open-source AI platform, recently released a leaderboard of open-weight model contributions, and the Chinese teams Qwen and DeepSeek made the top 15, showcasing China's technical strength and influence in global open-source AI. The list recognizes teams that contribute high-quality model weights to the open-source community, whose models are widely used in academic research and industrial innovation.


The Qwen team, backed by Alibaba Cloud Intelligence Group, has won community favor thanks to the strong performance of its Qwen3 series models on tasks such as instruction following and code generation. The Qwen2.5-72B series ranks among the leading open-source large language models, while the lightweight QwQ-32B, optimized with reinforcement learning, matches much larger models in mathematical reasoning and code generation while significantly reducing deployment costs.

DeepSeek is known for its R1 series of cost-effective, high-performing models. DeepSeek-R1-0528 surpassed multiple international competitors on the LiveCodeBench leaderboard, trailing only OpenAI's top models. Its lightweight variant, DeepSeek-R1-0528-Qwen3-8B, was produced through knowledge distillation and runs on a single GPU, outperforming Google's Gemini 2.5 Flash on the AIME 2025 math test and demonstrating a competitive edge in specific domains.
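For readers who want to see what "runs on a single GPU" looks like in practice, below is a minimal sketch using Hugging Face's transformers library. The model ID is the public repository name on the Hugging Face Hub; the dtype and device settings are illustrative assumptions for fitting an 8B model on one GPU, not an official recipe from either team.

```python
# Sketch: loading the distilled 8B model on a single GPU with transformers.
# Assumes a GPU with enough memory for 8B parameters in bfloat16 (~16 GB).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision so the weights fit on one GPU
    device_map="auto",           # place the model on the available device
)

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "What is 7 * 13?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```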

The inclusion of Qwen and DeepSeek reflects the rise of Chinese AI teams in the open-source ecosystem. A Hugging Face representative said the two teams' contributions provide efficient resources for developers worldwide, and NVIDIA CEO Jensen Huang has praised their balance of performance and cost as reshaping the AI landscape. Looking ahead, Qwen plans to explore multimodal technologies, while DeepSeek is expected to release its R2 model, continuing to drive AI innovation.

Hugging Face Model Release Heatmap: https://huggingface.co/spaces/cfahlgren1/model-release-heatmap