California is once again leading the way in AI regulation. Governor Gavin Newsom signed a landmark bill on Monday regulating AI companion chatbots, making California the first U.S. state to require AI chatbot operators to implement safety protocols for AI companions.

The law, SB243, aims to protect children and other vulnerable users from harms caused by AI companion chatbots. It holds large AI labs such as Meta and OpenAI, as well as companion-focused startups like Character AI and Replika, legally accountable if their chatbots fail to meet the law's standards.

SB243 was introduced by state senators Steve Padilla and Josh Becker in January and gained momentum after the death of teenager Adam Raine, who took his own life following a series of conversations about suicide with OpenAI's ChatGPT. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" conversations with children. More recently, a Colorado family sued the role-playing startup Character AI after their 13-year-old daughter died by suicide following a series of inappropriate and sexualized conversations with the company's chatbot.

In a statement, Newsom said that emerging technologies like chatbots and social media can inspire, educate, and connect, but that without real guardrails they can also exploit, mislead, and endanger children. He pointed to truly horrific and tragic examples of young people harmed by unregulated technology, saying the state would not stand by while companies continue to operate without necessary limits and accountability. California, he said, can continue to lead in AI and technology, but it must do so responsibly, protecting children every step of the way, because children's safety is not for sale.

SB243 takes effect on January 1, 2026. It requires companies to implement features such as age verification and warnings regarding social media and companion chatbots, and it imposes harsher penalties on those who profit from illegal deepfakes, including fines of up to $250,000 per violation. Companies must also establish protocols for addressing suicide and self-harm, which will be shared with the state's Department of Public Health along with statistics on how often the service provided users with crisis center prevention notifications.
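The law mandates these outcomes but does not prescribe an implementation. Purely as a hypothetical sketch of what the record-keeping side of such a protocol could look like, the Python below screens messages for self-harm cues, surfaces a crisis-line notice, and tallies the notification statistics a state report might draw on; every name here (CrisisProtocol, SELF_HARM_CUES, the keyword list itself) is an invention for illustration, and a real system would use trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

CRISIS_LINE = "988"  # U.S. Suicide & Crisis Lifeline

# Toy keyword list for illustration only; production systems rely on
# trained classifiers, not substring matching.
SELF_HARM_CUES = ("kill myself", "end my life", "hurt myself")

@dataclass
class CrisisProtocol:
    notifications_sent: int = 0
    events: list[str] = field(default_factory=list)

    def screen(self, message: str) -> str | None:
        """Return a crisis-resource notice if the message suggests self-harm."""
        lowered = message.lower()
        if any(cue in lowered for cue in SELF_HARM_CUES):
            self.notifications_sent += 1
            self.events.append(datetime.now(timezone.utc).isoformat())
            return (f"If you're struggling, you can call or text {CRISIS_LINE} "
                    "to reach the Suicide & Crisis Lifeline.")
        return None

    def report(self) -> dict:
        """Aggregate the kind of statistics SB243 says must be shared with the state."""
        return {"crisis_notifications_sent": self.notifications_sent}
```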

Under the bill's provisions, platforms must clearly indicate that any interaction is artificially generated, and chatbots may not present themselves as healthcare professionals. Companies must also offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.
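Again as a non-authoritative illustration, the per-message mechanics of those requirements (an AI-generated disclosure on every reply, plus periodic break reminders for minors) might reduce to something like the sketch below; the one-hour cadence, the disclosure wording, and the helper name decorate_reply are assumptions, not statutory text.

```python
from datetime import timedelta

# Hypothetical wording and cadence; SB243 describes the obligations, not these values.
AI_DISCLOSURE = ("Reminder: you are chatting with an AI. This conversation is "
                 "artificially generated, and I am not a healthcare professional.")
BREAK_INTERVAL = timedelta(hours=1)  # assumed interval, chosen for illustration

def decorate_reply(reply: str, user_is_minor: bool,
                   time_since_last_reminder: timedelta) -> str:
    """Attach the disclosures a companion-chatbot reply might need."""
    parts = [AI_DISCLOSURE, reply]
    # Break reminders apply only to minors, on a recurring schedule.
    if user_is_minor and time_since_last_reminder >= BREAK_INTERVAL:
        parts.append("You've been chatting for a while. Consider taking a break.")
    return "\n\n".join(parts)

# Example: a minor past the reminder interval receives both notices.
print(decorate_reply("Happy to help!", user_is_minor=True,
                     time_since_last_reminder=timedelta(hours=2)))
```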

Some companies have already begun implementing child protection measures. OpenAI, for example, recently started rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Replika, which is designed for users 18 and over, told TechCrunch that it has invested significant resources in content filtering systems and safeguards that direct users to trusted crisis resources, and that it is committed to complying with current regulations.

Character AI stated that its chatbots carry disclaimers noting that all chats are AI-generated and fictional. A company spokesperson told TechCrunch that Character AI welcomes working with regulators and legislators as they develop rules for this emerging space, and that it will comply with laws such as SB243.

Senator Padilla told TechCrunch that the bill is a step toward putting guardrails around an extremely powerful technology, and that lawmakers need to move quickly before the window of opportunity closes. He hopes other states will see the risk, and believes many already do: this is a conversation happening across the country, he said, and he hopes people will act on it. The federal government has clearly not acted, and he believes California has an obligation to protect its most vulnerable people.

SB243 is the second major AI regulation California has enacted in recent weeks. On September 29, Governor Newsom signed SB53 into law, imposing new transparency requirements on large AI companies: labs such as OpenAI, Anthropic, Meta, and Google DeepMind must be transparent about their safety protocols, and their employees are guaranteed whistleblower protections.

Other states, including Illinois, Nevada, and Utah, have already passed laws restricting or completely banning the use of AI chatbots as alternatives to certified mental health care.

This legislation marks an important step forward for AI regulation in the United States. After several tragic incidents involving teenagers interacting with AI chatbots, California chose to establish clear safety standards and accountability mechanisms through legislation. As AI technology becomes increasingly integrated into daily life, finding a balance between innovation and protection has become an urgent issue for states and global regulators alike. California's initiative could provide an important reference for other states and countries.