The U.S. Federal Trade Commission (FTC) has recently launched an investigation into several companies that provide AI chatbots, aiming to assess the potential risks these technologies pose to teenagers and children. The investigation includes well-known companies such as OpenAI, Meta, and Alphabet. The FTC hopes to understand how these companies measure, test, and monitor the negative impacts of AI chatbots on minors.


The investigation follows recent tragic incidents, including a lawsuit filed against OpenAI and its CEO Sam Altman by the parents of 16-year-old Adam Raine, who allege that the chatbot played a role in their son's suicide. In response, OpenAI updated its chatbot's safety measures: parents can now link their accounts to their teenagers' accounts and adjust usage rules by age, and the chatbot can detect signs of user distress and promptly notify parents.

Meanwhile, Meta is adjusting its chatbot policies in response to reports that its chatbots provided sexual content to minors. While Meta declined to comment on the reports directly, a company representative said it is training its chatbots not to discuss self-harm, disordered eating, or romantic topics with teenagers.

However, experts point out that age restrictions and safety measures alone cannot fundamentally solve the problem. Chirag Shah, a professor at the University of Washington Information School, said that teenagers and adults alike tend to trust these systems because they interact in natural language and appear empathetic. He noted that AI chatbots are designed to cater to users and struggle to discern users' true intentions, and that the systems' unpredictability and limited controllability make the issue even more complex.

Regarding the FTC's investigation, Professor Sarah Kreps of Cornell University said that while these AI chatbots can cause harm, the inquiry helps increase transparency and gives the public a better understanding of the technology. The California State Senate has also passed a new AI safety bill requiring large companies to increase transparency and protect whistleblowers.

Although the FTC's investigation has been widely praised, some argue that protecting privacy and freedom of speech is equally important. A senior technologist at the Electronic Frontier Foundation pointed out that the investigation's questions should be designed with privacy and free speech fully in mind, and that measures amounting to surveillance of minors should be rejected.

Key points:

🌟 The FTC's investigation into AI chatbots aims to evaluate the potential risks to the safety of minors.

👥 Companies such as OpenAI and Meta have adjusted their chatbot policies following reports involving teenage suicide and sexual content.

🔍 Experts believe the unpredictability and limited controllability of AI chatbots complicate the issue, and a balance must be struck between transparency and privacy protection.