Recently, Meta (the parent company of Facebook, Instagram, WhatsApp, and Threads) announced that it will shift its internal security and privacy review work to artificial intelligence, automating up to 90% of risk assessments. According to internal documents obtained by NPR, responsibility for evaluating how updates affect user privacy, harm minors, or spread misinformation, work previously handled by specialized review teams, will now rest primarily with AI.
Under the new evaluation framework, product teams fill out a questionnaire describing an update, and the AI system instantly returns an assessment that flags potential risks and sets conditions the project must meet. Human oversight is required only in specific cases, such as when a project introduces novel risks or the team explicitly requests human involvement. An internal Meta slide shows that teams will make "instant decisions" based on the AI's evaluation.
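As a rough illustration only, the routing described above might look like the following sketch; every name and field here (Assessment, route_review, and so on) is a hypothetical assumption chosen for clarity, not Meta's actual system.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """Hypothetical result of the AI review of a product team's questionnaire."""
    risks: list[str]              # potential risks flagged by the AI
    conditions: list[str]         # launch conditions attached to the project
    introduces_novel_risk: bool   # novelty is one trigger for escalation


def route_review(assessment: Assessment, team_requested_human: bool) -> str:
    """Route an update: instant AI decision unless a human-review trigger fires."""
    if assessment.introduces_novel_risk or team_requested_human:
        return "escalate_to_human_review"
    # Otherwise the team acts on the AI verdict immediately.
    return "instant_decision_with_conditions"


# Example: a routine, low-risk update ships on the AI's verdict alone.
update = Assessment(
    risks=["minor change to data retention"],
    conditions=["re-run privacy check before regional rollout"],
    introduces_novel_risk=False,
)
print(route_review(update, team_requested_human=False))
# -> instant_decision_with_conditions
```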
This change will let developers release new features more quickly, but experts, including former Meta executives, have expressed concern that it could make product launches less cautious. "To some extent, this means more products going live faster with less regulatory and adversarial review, which will increase risks," said one former Meta executive, speaking anonymously.
Meta stated that the new process aims to "simplify decision-making" and emphasized that "human expertise" will still be used for "novel and complex issues." Although the company insists that only "low-risk decisions" will be automated, internal documents show that sensitive areas such as AI safety, youth risks, and content integrity will also undergo automated assessments.
Voices both inside and outside Meta have warned that over-reliance on AI for risk assessments may prove shortsighted. As one former employee put it, "Whenever they launch a new product, they face a lot of scrutiny, and these reviews often uncover issues the company should take more seriously."
In addition, Meta has been bound since 2012 by an agreement with the Federal Trade Commission (FTC) that requires privacy reviews of product updates. Meta says it has invested over $8 billion in its privacy program and continues to refine its processes.
Interestingly, European users may not face the same degree of automation. Internal communications indicate that decisions about EU products will still be handled by Meta's European headquarters in Ireland, owing to the stricter requirements the Digital Services Act imposes on content moderation and data protection.
Key points:
🛡️ Meta plans to automate up to 90% of its risk assessment work to speed up product updates.
⚠️ Experts worry that the move may increase security risks and reduce human oversight.
🇪🇺 European users' product assessments will still be handled by Meta's headquarters to comply with local regulations.