A recent report by the U.S. Public Interest Research Group (PIRG) revealed that the children's AI toy FoloToy Kumma initially warned about the dangers of matches in a "safety first" tone, then gradually taught children how to light them, telling them to "extinguish them like blowing out birthday candles." When the conversation shifted to sexual preferences, it even asked children, "Which one is the most interesting? Would you like to try?" After the report was released, FoloToy announced it was pulling all of its products from sale and launching an end-to-end safety audit, and marketing director Hugo Wu said the company would work with external experts to improve its content-filtering mechanisms.

Kumma connects to OpenAI's GPT-4o by default, and the report found that the model's safety guardrails weaken as a conversation progresses. OpenAI suspended FoloToy's API access last Friday; it is also collaborating with the toy giant Mattel on safety measures, aiming to strengthen oversight of downstream toy manufacturers. A PIRG representative pointed out that AI toys remain largely unregulated, and that removing a single product is only a first step: the industry still needs systematic regulation.