Internet safety advocates in the UK have written to Ofcom, the national communications regulator, urging it to restrict Meta (formerly Facebook) from using artificial intelligence (AI) in critical risk assessments. The call follows a report indicating that Meta plans to delegate up to 90% of its risk assessment work to AI, a change that has sparked widespread concern about user safety, particularly the protection of underage users.
Under the UK's Online Safety Act, social media platforms are responsible for assessing the risks their services may pose and implementing corresponding mitigation measures. This risk assessment process is considered a key component of the law, intended to ensure the safety of users, especially children. However, multiple organizations, including the Molly Rose Foundation, the children's charity NSPCC, and the Internet Watch Foundation, believe that letting AI take charge of risk assessments is "a step backward and extremely unsettling."
In a letter to Ofcom Chief Executive Melanie Dawes, the advocates urged the regulator to state clearly that risk assessments generated entirely or mainly by automation should not be considered "suitable and sufficient." The letter also called on Ofcom to challenge any assumption that platforms can lower their standards during the risk assessment process. In response, an Ofcom spokesperson said the regulator would carefully consider the concerns raised in the letter and respond in due course.
Meta responded to the letter by emphasizing its commitment to safety. A Meta spokesperson stated, "We are not using AI to make risk decisions; instead, we have developed a tool to help teams identify legal and policy requirements for specific products. Our technology is used under human supervision to enhance our ability to manage harmful content, and our technological advancements have significantly improved safety outcomes."
In organizing the letter, the Molly Rose Foundation cited a report from National Public Radio (NPR) stating that Meta's upcoming algorithm updates and new safety features will mostly be approved by AI systems rather than reviewed by human staff. An anonymous former Meta executive noted that such changes would allow the company to roll out app updates and new features on Facebook, Instagram, and WhatsApp more quickly, but would also carry "higher risks," since potential problems are less likely to be caught before new products are released.
NPR also reported that Meta is considering automating reviews in some sensitive areas, including risks to teenagers and the spread of misinformation.
Key Points:
📉 Internet safety organizations urge Ofcom to limit Meta’s use of AI in risk assessments, concerned about its impact on user safety.
🔍 Meta responds that it is not using AI to make risk decisions but is enhancing content management capabilities through tools supervised by humans.
⚠️ Reports indicate that Meta’s latest changes may increase risks when launching new features, making it harder to identify potential problems in advance.