Texas Attorney General Ken Paxton has launched an investigation into Meta and the AI startup Character.ai, focusing on whether the companies have engaged in misleading practices when promoting their AI chatbots, particularly as mental health support for children.

The attorney general's office said it has opened an investigation into Meta AI Studio and Character.ai's chatbots, accusing both companies of potentially engaging in "deceptive business practices." It alleges that the chatbots are marketed as professional therapeutic tools despite having no legitimate medical credentials or oversight.

Image note: AI-generated illustration, licensed via Midjourney.

Paxton said that AI platforms posing as sources of emotional support can mislead vulnerable users, especially children, into believing they are receiving legitimate mental health care. The investigation comes amid growing scrutiny of companies offering consumer-facing AI services over whether they adequately protect users, particularly minors, from harmful content, potential addiction, and privacy violations.

The Texas investigation follows a Senate inquiry into Meta, prompted by leaked internal documents showing that Meta's policies allowed its chatbots to engage in "emotional" and "romantic" conversations with children. Senator Josh Hawley wrote to Meta CEO Mark Zuckerberg stating that the inquiry will focus on whether Meta's generative AI products facilitate the exploitation of children or other criminal activity, and questioned whether tech giants would do anything for a quick profit.

Meta responded that its policies prohibit such content involving children, and that the leaked internal document was "incorrect and inconsistent" with those policies and has been removed. Meanwhile, Zuckerberg is investing billions of dollars to develop "personal superintelligence" and to make Meta a leader in AI. The effort includes building Meta's own large language model, Llama, and integrating Meta's AI chatbots into its social media apps.

Character.ai focuses on building AI chatbots with distinct personas and lets users create their own. The platform hosts dozens of user-created chatbots styled as therapists; one, named "Psychologist," has logged more than 200 million interactions. Character.ai also faces multiple lawsuits alleging that its platform has caused real harm to children.

The Texas attorney general's office noted that chatbots from Meta and Character.ai can impersonate licensed mental health professionals, fabricate credentials, and claim to protect user privacy, even though the companies' terms of service show that user interactions are logged and used for targeted advertising and algorithm development. Paxton has issued civil investigative demands requiring both companies to provide information that will help determine whether they have violated Texas consumer protection law. Meta said it clearly labels its AI and informs users that responses are generated by AI, not humans. Character.ai emphasized that the characters on its platform are fictional and intended for entertainment, and that it has taken measures to ensure users understand this.

Key Points:

🛡️ The Texas attorney general is investigating whether Meta and Character.ai misled children by promoting AI chatbots as mental health tools.

🔍 The Senate is scrutinizing Meta's chatbot interactions with children, questioning whether they facilitate child exploitation.

📜 Meta and Character.ai responded that their chatbots are clearly labeled as AI rather than professional services, and that they have taken steps to ensure users understand the limitations of AI.