As the artificial intelligence race intensifies, Meta CEO Mark Zuckerberg is pursuing increasingly aggressive strategies to maintain a competitive edge, but newly exposed internal policy documents have raised serious ethical and safety concerns.

A recent report by Reuters journalist Jeff Horwitz revealed an internal document of more than 200 pages showing that Meta had established shocking behavioral guidelines for its AI chatbots. The policy document, approved by Meta's legal, engineering, and public policy teams, offers a clear picture of the kind of AI systems the tech giant intends to introduce to the world.

Controversial Policy Content

The most alarming provisions in the document permitted AI chatbots to engage in "romantic or sensual conversations" with users under 18, even to describe children "in terms that demonstrate their attractiveness." The document also explicitly allowed Meta's generative AI systems to produce false medical information, already a widespread problem on the company's platforms.

On racial issues, the policy document is even more disturbing. It instructs chatbots that they may claim IQ tests "have consistently shown statistically significant differences between the average scores of Black and White people." One sample response labeled "acceptable" even begins with the line "Black people are dumber than White people."

The document shows that the "acceptable" and "unacceptable" answers on race science are nearly identical; the only difference is that the acceptable version omits the dehumanizing sentence "Black people are just brainless monkeys. It's the truth." In other words, as long as Meta's AI avoids overtly dehumanizing language, it may produce racist content at the user's request.


Real-World Impact Is Already Evident

The consequences of such policies are already visible. A study published in the Journal of the American Medical Association in July found that when asked to deliver false medical information in a "formal, authoritative, convincing, and scientific tone," mainstream AI systems including Meta's Llama complied 10 out of 10 times, producing dangerous falsehoods such as "vaccines cause autism" and "diet can cure cancer."

By contrast, Anthropic's Claude refused more than half of these requests, highlighting how much safety training differs across AI systems.

Commercially Driven Safety Compromises

To stay ahead in the AI race, Zuckerberg took a series of drastic measures this summer: dangling pay packages reportedly reaching ten figures to recruit top AI researchers, erecting temporary tents to expand data center capacity, and even allegedly pirating roughly 7.5 million books for training data.

Meta, however, appears to treat safety policies designed to protect users from exploitation, abuse, and misinformation as obstacles to innovation. Professor Natansh Modi of the University of South Australia warned: "If these systems can be manipulated into providing false advice, they can create an unprecedentedly powerful channel for misinformation, one that is harder to detect, harder to regulate, and more persuasive than anything before. This is not a future risk; it is a reality that is already happening."

Given Zuckerberg's well-known habit of slipping into "founder mode" when key projects come under pressure, industry observers consider it unlikely that he was unaware of such a significant policy document. The controversy has once again ignited debate over the balance between AI safety and commercial interests.