On December 4th, DeepSeek released two major updates without prior notice: the official version of DeepSeek-V3.2 and the reasoning-focused DeepSeek-V3.2-Speciale. The official website, app, and API have all been switched over seamlessly. With this release, DeepSeek once again stakes its claim to the title of "strongest open-source model."
DeepSeek-V3.2: The first open-source large model that "thinks and uses tools"
The biggest highlight of V3.2 is that it is the first to combine the "thinking process" with tool calls, supporting two modes (an illustrative API sketch follows the list):
- Thinking mode: the model performs long-chain reasoning before making precise tool calls;
- Non-thinking mode: the model answers directly, keeping lightning-fast response times.
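To make the two modes concrete, here is a minimal sketch of a tool call against DeepSeek's OpenAI-compatible API. The model identifier, the way thinking mode is selected, and the weather tool are illustrative assumptions, not details from the announcement; consult the official API documentation for the exact names.

```python
# Minimal sketch of tool calling via DeepSeek's OpenAI-compatible API.
# Assumptions: the V3.2 update is served under the existing model names,
# and thinking mode is selected by choosing a reasoning-oriented model.
# The get_weather tool below is purely hypothetical.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

# Standard OpenAI-style function schema for a hypothetical weather lookup tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed identifier; swap in the thinking-mode model if needed
    messages=[{"role": "user", "content": "Should I bring an umbrella in Hangzhou today?"}],
    tools=tools,
)

# In thinking mode the model reasons first and then, if useful, emits a tool call;
# in non-thinking mode it answers or calls the tool directly with lower latency.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print("Tool requested:", call.function.name, call.function.arguments)
else:
    print(message.content)
```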
Trained on large volumes of synthetic agent data, V3.2 tops the current public agent benchmarks without any benchmark-specific tuning, easily taking first place among open-source models and even approaching some top closed-source models.

DeepSeek-V3.2-Speciale: The ultimate form of a reasoning monster
The Speciale version can be understood as an enhanced variant with "thinking turned up to maximum." Its sole goal is to push the reasoning capability of open-source models to its limits.
It inherits DeepSeek-Math-V2's top-tier ability in mathematical theorem proving and shows remarkable stability in long-chain logic, complex problem decomposition, and multi-step planning.
Test results show that on tasks requiring more than 30 steps of deep reasoning, Speciale significantly outperforms all existing open-source models, earning it the community nickname "open-source o3/o4 killer."
Seamless platform-wide update, user experience remains uninterrupted
Released and immediately available! The web chat interface, mobile app, and API service have all been upgraded to the official V3.2. Users need not do anything; simply refresh the page to try the new capabilities, truly "waking up to a stronger model."
AIbase Exclusive Comments
With 2025 not yet over, DeepSeek has raised the bar for the open-source community twice in quick succession: the flexible, efficient V3.2 on one hand and the reasoning-driven Speciale on the other, pushing both the performance ceiling and everyday usability to the extreme.
More striking still, this is only the "point two" release of the DeepSeek-V3 series.
While others are still competing on parameters and context length, DeepSeek is already competing on whether the model can think.
This move has once again pushed the ceiling for Chinese open-source models several floors higher.
Who will take up the challenge next?





