On February 11, DeepSeek began gradually rolling out an update to its web and app versions, and on February 14 it officially announced the update as a test of its new long-context model architecture. The web and app versions now support ultra-long context of up to 1 million tokens, and the knowledge cutoff has been updated to May 2025. The API service, however, remains on V3.2 and supports only 128K of context. The industry views this update as a technical warm-up and stress test ahead of the release of the next-generation V4 model, and anticipation for the V4 launch has been building across the internet, particularly on Weibo.
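For reference, the API side is accessed through DeepSeek's OpenAI-compatible interface. The sketch below is a minimal, assumed example (the endpoint, model name, and key placeholder are illustrative and not taken from the announcement) of a call against the API-side model that, per the article, is still limited to 128K context rather than the 1M tokens available in the web and app versions.

```python
# Minimal sketch, assuming DeepSeek's OpenAI-compatible API.
# The base_url and model name below are illustrative assumptions; the 128K
# context limit mentioned above applies to this API-side model, not to the
# web/app versions.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder; use a real key
    base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # API-side model (V3.2 per the article)
    messages=[
        {"role": "user", "content": "Summarize the key changes in the latest update."}
    ],
)
print(response.choices[0].message.content)
```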

After this update, DeepSeek's interaction style changed noticeably, and many users complained that it had "become cold." The related topic climbed to the top of Weibo's hot search, with over 68.535 million views. Specifically, the model no longer addresses users by personalized nicknames, using a uniform "user" instead, and in deep-thinking mode its responses consist mostly of short sentences in a dry, matter-of-fact style. Some replies were even criticized as "ambiguous in tone" or outright "sarcastic," leaving users accustomed to its previous empathetic style with a kind of "withdrawal reaction." Netizen opinion was divided: emotionally attached fans missed the old warm interactions, while efficiency-minded users welcomed the rational, concise style, seeing it as the essence of a productivity tool. Notably, the model also passed the "car wash Turing test" that top models often fail.

Regarding the style change, DeepSeek officially responded that it was not intentional but the result of prioritizing efficiency and optimizing boundaries: excessive pleasantries and filler words can dilute the information density of answers to complex questions, and the change also serves users who simply want clear answers rather than "an AI pretending to care." The update has further raised anticipation for DeepSeek V4 across the internet. According to reports circulating on Weibo, the model is expected to be released around the Spring Festival in mid-February 2026, possibly on February 17.

As a flagship model with trillions of parameters, V4 focuses on improving programming capabilities. Preliminary internal benchmarks show it already surpassing mainstream top models such as Claude and GPT on programming tasks, giving it the potential to reshape the current AI-programming landscape. V4 is also said to achieve several key technical breakthroughs: it can process and parse extremely long code prompts and take in the context of a large codebase in one pass, which is significant for enterprise-level development; upgraded training algorithms improve its ability to learn data patterns and make it less prone to degradation; and its reasoning is more rigorous and reliable, delivering gains across capabilities without regressions and striking a better balance among them.

At the same time, the model will retain its million-token context advantage, with inference costs far lower than those of its Western competitors, and it is planned to be open-sourced under the Apache 2.0 license.