Sam Altman, CEO of OpenAI, recently published a lengthy article detailing the company's new teen-safety initiatives. He stated that OpenAI is striving to balance safety, freedom, and privacy for teenagers, ensuring that ChatGPT takes a more cautious approach when interacting with young users.

Altman mentioned that OpenAI is developing an "age prediction system" that estimates a user's age by analyzing how they interact with ChatGPT. Under this system, whenever a user's age is in doubt, ChatGPT will default to treating them as under 18. Altman also noted that in some cases, or in specific countries, OpenAI may ask users to provide identification to verify their age.

For young users, OpenAI will apply stricter rules: for example, ChatGPT will avoid flirtatious conversations with minors and will not discuss sensitive topics such as suicide and self-harm. Altman emphasized that these topics will be strictly controlled even in creative-writing contexts. He said, "If a user under 18 expresses thoughts of suicide, we will try to contact their parents; if we cannot reach them, we will report to the relevant authorities to prevent potential harm."

Notably, Altman published the article on the same day the U.S. Senate held a hearing examining the potential risks of AI chatbots. Among those attending were parents of teenagers who had died by suicide after interacting with chatbots, underscoring for the public the serious harms these technologies can cause.

OpenAI's new initiatives aim to better protect the safety and privacy of young users and to foster a healthier digital environment. As information technology advances rapidly, balancing technological progress with user protection will remain an important challenge for companies and society alike.