Recently, OpenAI revealed in a blog post that it will begin scanning users' ChatGPT conversations to detect potentially harmful content. The move has drawn widespread attention because it contradicts the company's previous commitments to user privacy.

OpenAI said that when users show signs of posing a threat to others, their conversations will be referred to a dedicated team for review. This team has the authority to take action, including banning the accounts involved. If reviewers believe a case involves an imminent threat of serious physical harm to others, OpenAI may report it to law enforcement authorities.


In the statement, OpenAI listed several prohibited uses, including using ChatGPT to promote suicide or self-harm, to develop or use weapons, to harm others, or to damage property. However, OpenAI also acknowledged that cases involving only self-harm are not currently reported to law enforcement, out of respect for user privacy. Given the repeated "jailbreak" exploits that have coaxed ChatGPT into producing instructions for self-harm or for harming others, it remains unclear how this new policy will be enforced.

More notably, even while emphasizing privacy, OpenAI has acknowledged that it monitors user chat records and may share that information with the police. Previously, OpenAI fought publishers such as The New York Times in court, opposing their requests for access to large volumes of ChatGPT logs on the grounds of protecting user privacy. During that litigation, OpenAI CEO Sam Altman also noted that using ChatGPT as a therapist or lawyer does not carry the same confidentiality protections as a conversation with an actual professional.

This series of actions leaves OpenAI looking caught between protecting user privacy and ensuring user safety. As more users experience mental health crises while using AI chat tools, the company feels compelled to implement stricter safeguards to prevent tragedies. Yet these measures conflict with its previous privacy policies, making OpenAI appear inconsistent in the eyes of the public.

Key Points:  

🛡️ OpenAI will monitor user conversations with ChatGPT, especially content that may pose a threat to others.  

🚓 If an urgent threat of harm is identified, OpenAI may report the relevant information to the police.  

🔒 OpenAI acknowledges a dilemma between protecting user privacy and ensuring safety, and the measures it has taken have raised public concern.