Recently, OpenAI announced that it is testing a new safety routing system on its ChatGPT platform and has officially launched parental controls. The measures are meant to address weaknesses in how ChatGPT handles users showing signs of delusion and conversations that drift toward harm. Their introduction has sparked widespread discussion and mixed reactions among users.

At the core of the new safety routing system is the detection of emotionally sensitive conversations: when one is detected, ChatGPT automatically switches mid-conversation to GPT-5, the model OpenAI considers best suited for safety-sensitive tasks. Unlike previous models, GPT-5 supports a capability called "safe completion," which lets it give a safe, helpful response on sensitive topics rather than simply refusing to answer. The change is intended to reduce the "AI delusion" phenomenon, in which an overly accommodating model reinforces a user's false beliefs.
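
OpenAI has not published implementation details of the router, but the behavior described above (score each message, temporarily hand sensitive ones to a safety-tuned model) can be illustrated with a small sketch. Everything below is a hypothetical illustration: the keyword-based classifier, the threshold, the model identifiers, and the function names are assumptions for the example, not OpenAI's actual system or API.

```python
# Hypothetical sketch of per-message safety routing (not OpenAI's actual code).
# Assumptions: a sensitivity classifier, a default model, and a safety-tuned
# model (labeled "gpt-5" here) that handles emotionally sensitive turns.

from dataclasses import dataclass


@dataclass
class RoutingDecision:
    model: str   # which model will answer this message
    reason: str  # why the router chose it


# Toy keyword list standing in for a learned sensitivity classifier.
SENSITIVE_KEYWORDS = {"hopeless", "self-harm", "hurt myself", "no one cares"}


def classify_sensitivity(message: str) -> float:
    """Return a rough 0..1 sensitivity score for a single message."""
    text = message.lower()
    hits = sum(1 for kw in SENSITIVE_KEYWORDS if kw in text)
    return min(1.0, hits / 2)


def route_message(message: str,
                  default_model: str = "gpt-4o",
                  safety_model: str = "gpt-5",
                  threshold: float = 0.5) -> RoutingDecision:
    """Route one message: sensitive turns go to the safety model, temporarily."""
    score = classify_sensitivity(message)
    if score >= threshold:
        # The switch applies to this message only; the next message is routed
        # independently, mirroring the "temporary" switching described above.
        return RoutingDecision(safety_model, f"score {score:.2f} >= {threshold}")
    return RoutingDecision(default_model, f"score {score:.2f} < {threshold}")


if __name__ == "__main__":
    for msg in ["What's the weather like tomorrow?",
                "I feel hopeless and want to hurt myself."]:
        decision = route_message(msg)
        print(f"{decision.model:8s} <- {msg!r} ({decision.reason})")
```

In a real system the keyword check would be replaced by a dedicated classifier, but the routing decision itself is intentionally simple: it only determines which model answers the current message, which is why users can ask at any time which model is active.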

Although many experts and users support the safety measure, some users have expressed dissatisfaction, arguing that OpenAI's approach treats adults like children. OpenAI acknowledges that the new routing mechanism may make some users uncomfortable, but considers it an important step toward stronger safety protections and has set aside a 120-day period for iteration and improvement. Nick Turley, head of the ChatGPT app, said on social media that model switching is temporary, and that users can check at any time which model is currently active.

At the same time, the launch of parental controls has drawn mixed reactions. Some parents welcome being able to supervise how their children use AI, while others worry that it foreshadows similar restrictions on adults. The parental controls let parents customize the AI experience for teenagers, for example by setting quiet hours, turning off voice mode and memory, and disabling image generation. In addition, teen accounts get extra content protections that reduce graphic violence and extreme content, and the system tries to detect signs that a user may be at risk of self-harm.
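
The controls described above amount to a per-teen settings profile. The sketch below shows one way such a profile could be modeled; all field names, types, and defaults are illustrative assumptions, not OpenAI's actual configuration schema.

```python
# Hypothetical model of a teen account's parental-control profile
# (illustrative only; names and defaults are assumptions, not OpenAI's schema).

from dataclasses import dataclass
from datetime import time
from typing import Optional


@dataclass
class QuietHours:
    start: time  # no ChatGPT access from this time...
    end: time    # ...until this time the next day


@dataclass
class TeenControls:
    quiet_hours: Optional[QuietHours] = None  # optional daily blackout window
    voice_mode_enabled: bool = True           # parents can turn voice mode off
    memory_enabled: bool = True               # parents can turn memory off
    image_generation_enabled: bool = True     # parents can disable image generation
    reduced_graphic_content: bool = True      # extra protections on by default for teens
    self_harm_detection: bool = True          # flag at-risk conversations for review


# Example: a restrictive profile with a 22:00-07:00 quiet window.
strict_profile = TeenControls(
    quiet_hours=QuietHours(start=time(22, 0), end=time(7, 0)),
    voice_mode_enabled=False,
    memory_enabled=False,
    image_generation_enabled=False,
)
```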

OpenAI stated in a blog post that if potential harm is detected, a trained team will review the conversation promptly and notify parents by email, text message, and push notification, unless the parents have opted out of these alerts. OpenAI is also exploring ways to contact law enforcement or emergency services quickly when parents cannot be reached.

Key points:

🛡️ OpenAI has introduced a new safety routing system in ChatGPT, improving its handling of sensitive topics.  

👨‍👩‍👧 Parental control features allow parents to customize the AI experience for teenagers, increasing supervision.  

⚠️ When the system detects potential harm, it notifies parents promptly, and OpenAI is exploring mechanisms for contacting law enforcement or emergency services.