December 1 — Chinese AI startup DeepSeek released the DeepSeek-V3.2 series of models, comprising DeepSeek-V3.2 and a high-compute variant, DeepSeek-V3.2-Speciale. The new models feature an innovative sparse attention mechanism, DeepSeek Sparse Attention (DSA), and enhanced agent capabilities, positioning the series to challenge top global AI models, including OpenAI's GPT-5 and Google's Gemini 3.0 Pro.

The core of the DeepSeek-V3.2 series is its DeepSeek Sparse Attention (DSA) architecture. According to DeepSeek, this mechanism is the first to achieve fine-grained sparsity in attention, reducing computational complexity and memory usage in long-context scenarios while maintaining performance comparable to dense-attention models. The innovation brings concrete efficiency gains (a toy sketch of the idea follows the list below):

  • Inference on long-context tasks is 2 to 3 times faster.
  • API prices have been cut by more than 50%, according to the official announcement.
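
To make the idea concrete, here is a minimal top-k sparse attention sketch in PyTorch. It illustrates fine-grained sparsity in general, not DeepSeek's actual DSA kernel: each query attends only to its highest-scoring keys, and a production kernel would avoid materializing the full score matrix that this toy version merely masks.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=64):
    """Toy fine-grained sparse attention: each query keeps only its
    top_k highest-scoring keys, cutting effective attention cost from
    O(L^2) toward O(L * top_k). Illustrative only -- NOT DeepSeek's DSA.

    q, k, v: (batch, heads, seq_len, head_dim)
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5      # (B, H, L, L)

    # Select the top_k scores per query and mask out everything else.
    # (A real sparse kernel would never build the full matrix.)
    top_k = min(top_k, scores.size(-1))
    vals, idx = scores.topk(top_k, dim=-1)
    masked = torch.full_like(scores, float("-inf"))
    masked.scatter_(-1, idx, vals)

    weights = F.softmax(masked, dim=-1)              # sparse rows
    return weights @ v                               # (B, H, L, d)

# Toy usage: 2 heads over a 128-token sequence.
q = torch.randn(1, 2, 128, 64)
k = torch.randn(1, 2, 128, 64)
v = torch.randn(1, 2, 128, 64)
print(topk_sparse_attention(q, k, v, top_k=32).shape)  # (1, 2, 128, 64)
```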

DeepSeek-V3.2 is positioned as an "agent-first" model, deeply integrating reasoning capabilities with tool-use workflows. The model was trained with a large-scale agent-task synthesis pipeline to improve its generalization in real-world application scenarios. It also introduces a "thinking mode" that lets the model perform chain-of-thought reasoning before executing complex tasks, improving problem-solving accuracy (a minimal API call sketch follows the version list below). In a series of agent evaluations, V3.2 reached the highest level among open-source models. The release includes two core versions:

  1. DeepSeek-V3.2: Now available on DeepSeek's web portal, app, and API services, this version balances efficiency and performance, making it suitable for everyday reasoning-assistant and development tasks.
  2. DeepSeek-V3.2-Speciale: A high-compute variant focused on maximum reasoning capability, currently available only through a temporary API service. DeepSeek reports that Speciale outperformed GPT-5 on certain high-difficulty reasoning tasks and achieved gold-medal-level results at the 2025 International Mathematical Olympiad (IMO) and International Olympiad in Informatics (IOI).
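
For developers, the publicly available V3.2 version can be reached through DeepSeek's OpenAI-compatible API. The sketch below uses DeepSeek's documented base URL and SDK convention; the model identifier "deepseek-reasoner" follows the naming DeepSeek has used for its reasoning endpoint, and whether V3.2's thinking mode is exposed under that exact name is an assumption to verify against the API documentation.

```python
# Minimal sketch: calling DeepSeek's OpenAI-compatible chat API.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder key
    base_url="https://api.deepseek.com",  # documented base URL
)

response = client.chat.completions.create(
    # "deepseek-reasoner" is DeepSeek's existing reasoning-model name;
    # its mapping to V3.2's thinking mode is an assumption.
    model="deepseek-reasoner",
    messages=[
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
)
print(response.choices[0].message.content)
```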

DeepSeek has released the V3.2 weights on Hugging Face along with open-source kernels and demonstration code, enabling researchers and companies to deploy the model commercially. Analysts see the release as a step toward models that combine deep reasoning with practical tool use, further narrowing the gap between open-source models and closed-source giants. Developers can consult the DeepSeek API documentation for technical details and usage guides.
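
For local experimentation, the open weights can in principle be loaded with Hugging Face transformers, as sketched below. The repository id is inferred from the release naming and should be verified on Hugging Face; the full model is far too large for a single consumer GPU, so treat this as the shape of the workflow rather than a turnkey script.

```python
# Sketch: loading the open V3.2 weights with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-V3.2"  # assumed repo id -- verify on the Hub

# trust_remote_code is needed when a repo ships custom modeling code,
# as DeepSeek's releases typically do.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,
    device_map="auto",   # shard across available GPUs (needs accelerate)
    torch_dtype="auto",  # use the checkpoint's native precision
)

inputs = tokenizer("Hello, DeepSeek!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```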