YouTube, the Google-owned video giant, has recently come under collective condemnation from the academic community. More than 200 child development experts and educational institutions have signed an open letter urging the platform to stop recommending AI-generated videos to minors.

The experts point out that the platform is flooded with AI-generated videos that claim to be "educational" but are often illogical and of extremely low quality. Such content, commonly dubbed "AI slop," is consuming children's attention on a massive scale.


An Uncontrolled Experiment: Experts Fear Cognitive Harm to Children

In the letter, the experts bluntly warn that pushing AI content to children without in-depth research amounts to an "uncontrolled experiment." They worry that young children cannot distinguish the virtual from the real, which could delay their social and emotional development.

Even more concerning, some creators are using AI tools to mass-produce low-quality videos for profit. This business model, which prioritizes clicks over content quality, is turning children's channels into meaningless "digital landfills."

Platform Response: Review Strengthened, but Labeling Criticized as Ineffective

In response to the criticism, YouTube said it has set strict standards in its children's app and limits AI content there to a small number of high-quality channels. A company spokesperson emphasized that creators are already required to disclose and label AI-generated material.

However, regulators and critics argue that text labels are useless for young children who cannot yet read. As AI governance becomes a key industry focus heading into 2026, drawing a clear boundary between technological innovation and child protection has become a legal and ethical challenge that tech giants can no longer avoid.