In response to a string of recent controversies over the safety and compliance of AI chatbots, social media giant Meta has taken decisive action: it will suspend access to the "AI Characters" feature on its platform for underage users worldwide.

The measure stems from a series of risk reports. In the summer of 2025, internal documents revealed that some of Meta's AI chatbots failed to effectively filter romantic, emotional, and even sensual topics when interacting with minors. Although Meta subsequently strengthened its keyword filtering, the company has decided to formally roll out this "ban" in the coming weeks in order to eliminate potential mental-health risks entirely.

The restriction reportedly covers not only accounts registered as underage, but also uses Meta's age-detection technology to identify and block underage users who misrepresent their age. While AI Characters based on real-life celebrities or fictional personas will be taken offline, the basic version of Meta AI will remain available, backed by more stringent age-appropriate protections.

Meta stated that the shutdown is not a permanent withdrawal from the market. The team is currently developing new parental-supervision tools intended to give guardians greater transparency into, and control over, their children's AI interactions. Only after this series of safety enhancements and the revised features pass testing might customized AI characters be reopened to teenagers.