The California State Assembly passed SB243 on Wednesday night, a bill aimed at regulating AI companion chatbots to protect minors and vulnerable users. The bill, which passed with bipartisan support, now heads back to the state Senate for a final vote on Friday.

If Governor Gavin Newsom signs the bill into law, the new regulations will take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable when their chatbots fail to meet those standards.

The bill defines AI companion chatbots as AI systems that provide adaptive, human-like responses capable of meeting users' social needs. It explicitly prohibits these chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content.

According to the bill, platforms must regularly remind users that they are interacting with an AI chatbot rather than a real person and suggest taking breaks. For minor users, this reminder must be sent every three hours. The bill also establishes annual reporting and transparency requirements for companies providing AI companion chatbots, including major players such as OpenAI, Character.AI, and Replika.

The bill also allows individuals who believe they have been harmed by violations to sue AI companies, seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.

SB243 was introduced by state senators Steve Padilla and Josh Becker in January. If the state Senate approves it on Friday, the bill will go to the governor to be signed into law, with the new rules taking effect on January 1, 2026, and the reporting requirements starting on July 1, 2027.

The bill gained momentum in the California legislature largely in response to the death of teenager Adam Raine, who died by suicide after lengthy conversations with OpenAI's ChatGPT in which he discussed and planned his death and self-harm. Leaked internal documents showing that Meta's chatbots were permitted to engage in "romantic" and "sensual" conversations with children also spurred the push for the legislation.

In recent weeks, U.S. lawmakers and regulators have increased their scrutiny of AI platforms' protections for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched an investigation into Meta and Character.AI, accusing both companies of misleading children with mental health claims. Senators Josh Hawley and Ed Markey have also opened investigations into Meta.

Padilla said in an interview, "I think the potential harms are significant, which means we must act quickly. We can implement reasonable safeguards to ensure that minors in particular know they are not talking to real people, and that when people express intentions to harm themselves or are in distress, these platforms connect them to appropriate resources."

Padilla also emphasized the importance of AI companies sharing data on the number of users referred to crisis services annually, saying, "This way, we can better understand the frequency of this issue, rather than only realizing it when someone is injured or worse."

An earlier version of SB243 contained stricter requirements, but many provisions were weakened through amendments. For example, the bill originally required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, which critics say can create addictive reward loops.

The current version of the bill also removed the requirement for operators to track and report the frequency of chatbots discussing suicidal thoughts or behaviors with users.

Becker said in an interview, "I think it strikes the right balance between addressing the harms and avoiding requirements that companies cannot comply with, either because they are technically infeasible or simply because they would generate a lot of unnecessary paperwork."

With SB243 on the verge of becoming law, Silicon Valley companies are pouring millions of dollars into political action committees that back candidates favoring a light-touch approach to AI regulation in the upcoming midterm elections.

The bill's passage comes as California weighs another AI safety bill, SB53, which would mandate comprehensive transparency reporting. OpenAI has written an open letter to Governor Newsom urging him to abandon that bill in favor of less stringent federal and international frameworks, and major tech companies including Meta, Google, and Amazon have also opposed SB53. Anthropic is the only major AI company to have expressed support for it.