OpenAI recently caused a stir among paying users by silently rerouting requests for its advanced models GPT-4 and GPT-5 to two lower-power "secret models," gpt-5-chat-safety and gpt-5-a-t-mini, without informing users. According to user reports, when a prompt touches on emotions, sensitive topics, or potentially policy-violating content, the system automatically switches to these filtering models, and response quality drops noticeably. The practice has led many users to question whether their right to know and their right to choose are being respected.
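
The rerouting was reportedly discovered inside the ChatGPT app, whose internal routing is not publicly documented, but the same general check is easy to illustrate against OpenAI's public API, which echoes back the model that actually produced a completion. Below is a minimal sketch, assuming the OpenAI Python SDK (v1.x), an OPENAI_API_KEY in the environment, and "gpt-5" as a placeholder model name; it simply compares the requested model with the one reported in the response.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requested = "gpt-5"  # placeholder model name for illustration
response = client.chat.completions.create(
    model=requested,
    messages=[{"role": "user", "content": "I've been feeling anxious lately."}],
)

# The response's `model` field reports which model actually produced the
# completion. Returned slugs often carry a version suffix, so a prefix
# check is more robust than strict equality.
served = response.model
if not served.startswith(requested):
    print(f"Requested {requested!r} but was served {served!r}")
else:
    print(f"Served by the requested model: {served!r}")
```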

Reports indicate that OpenAI gave no advance notice of the switching. Although the company said the measure was for safety testing, many users voiced strong dissatisfaction, arguing that it infringes on their right to be informed and their right to the service they chose. Users point out that when they pay for a specific advanced model, they should receive that model's service rather than be silently downgraded.

This incident highlights the importance of user choice and transparency in the artificial intelligence industry. A growing number of users have voiced concerns about AI vendors' opaque policies on algorithmic control, model switching, and resource allocation, arguing that such practices not only degrade the user experience but can also erode trust in the brand.