Recently, OpenAI sparked an intense debate over user rights and transparency after releasing its next-generation AI model, GPT-5. Many users voiced strong dissatisfaction with OpenAI's abrupt removal of multiple model options from ChatGPT, arguing that the change not only degraded their experience but also stripped them of control over their own conversations.
One user posted angrily on Reddit that he had canceled his subscription over the move and was deeply disappointed with the company. He pointed out that each model served a distinct purpose: GPT-4o for creative brainstorming, o3 for logic problems, and GPT-4.5 for writing. He questioned why OpenAI removed these options suddenly and without notifying users.
What angered users further was the discovery that when they sent emotionally charged content, ChatGPT quietly rerouted it to a model called GPT-5-Chat-Safety. The switch happened without any notice to the user, and the model appeared to be designed specifically to handle "risky" content. A user named Lex shared test results on social media showing that GPT-5-Chat-Safety performed worse than the standard GPT-5, producing shorter and less human-like responses.
Many users reported that GPT-5-Chat-Safety appears to have become the default model for emotional conversations, prompting accusations of "fraudulent behavior" on OpenAI's part. They argue that the practice violates users' right to know and, in countries such as Australia, may even breach consumer protection law.
Although OpenAI executives stated that users who explicitly ask will be told which model is currently responding, this did little to quell the backlash. As AI technology continues to advance, maintaining user trust while pursuing innovation has become a pressing challenge for OpenAI.
This incident not only highlights users' demand for transparency but also serves as a warning to OpenAI: striking a balance between technological progress and user rights will be key to its future development.