Anthropic has released an incident report confirming that its Claude Opus 4.1 and Opus 4 models did experience a "dumbing down" between August 25 and 28. During that window, users may have seen noticeably degraded response quality, including inaccurate answers, formatting errors, and problems with tool calls.
According to the official statement, the issue was caused by an update to the inference stack. Although the update was intended to improve the models' efficiency and throughput, it clearly did not achieve the expected results. The Anthropic team responded quickly, rolling back Claude Opus 4.1 to restore its original performance. They then found that Claude Opus 4.0 was also affected and deployed a corresponding fix.
Anthropic emphasized that it prioritizes user experience when updating models and pledged that response quality will not be allowed to degrade in this way. The incident appears to have underscored for the company that, in pursuing technical improvements, preserving the actual user experience matters even more.
As AI technology develops rapidly, model quality and stability have become especially important. User trust in AI tools comes not only from powerful capabilities but also from reliably delivered, high-quality service. Anthropic's quick response to this incident demonstrates its emphasis on product quality and sets a good example for other technology companies.
Going forward, Anthropic says it will continue optimizing its AI models so that users of the Claude series can count on smarter, more efficient service.