Recently, Liang Wenfeng, founder of DeepSeek, revealed in an internal communication that the company's next-generation flagship large model, DeepSeek V4, is scheduled for official release in late April 2026. The news marks a key breakthrough for domestic large models in the trillion-parameter race. DeepSeek's web version has already rolled out "Fast Mode" and "Expert Mode," serving as a practical preview ahead of the V4 release through differentiated interactions: the former for simple searches, the latter for long, complex questions.


On the technical front, DeepSeek V4 is expected to achieve a leap in scale, reaching a trillion parameters and a million-token context window. Particularly notable is that the model has, for the first time, achieved deep compatibility with domestic chips such as Huawei's Ascend. This strategic move is seen as a key milestone in China's AI industry reducing its reliance on the CUDA ecosystem and building an independent computing foundation. Driven by this expectation, the domestic computing market has reacted strongly: tech giants such as Alibaba, ByteDance, and Tencent have pre-ordered hundreds of thousands of new AI chips, aiming to quickly integrate the V4 model through their cloud services, which has pushed AI chip prices up roughly 20% in recent weeks.

As the DeepSeek V4 release approaches, competition among large models has evolved from a pure algorithm contest into a comprehensive confrontation of "model + computing power + ecosystem." By adapting deeply to domestic computing power, DeepSeek not only improves the cost-effectiveness of model inference but also opens a sustainable growth path for domestic large models under compute constraints.