Recently, the U.S. government signed agreements with tech companies including Google DeepMind, Microsoft, and xAI to review early versions of their new artificial intelligence models before they are released to the public. The collaboration, led by the Center for AI Standards and Innovation (CAISI) under the U.S. Department of Commerce, aims to balance the capabilities of powerful new AI models against national security concerns.


CAISI stated in a press release that this review process is crucial for understanding cutting-edge AI technologies and their potential impact on national security. CAISI Director Chris Ford noted that independent, rigorous measurement science can effectively identify risks related to cybersecurity, biosecurity, and chemical weapons, and said the agreements will help the federal government advance work in the public interest at a critical moment.

Additionally, CAISI emphasized that it has become common practice for developers to share information about unreleased AI models with the government, which helps officials comprehensively assess capabilities and risks relevant to national security. As concerns have grown in recent years over the dangers posed by next-generation AI models, such as Anthropic's Mythos, the importance of such agreements has become more apparent.

At the same time, AI safety experts in the technology industry and government officials have voiced concerns that these powerful models could be exploited by hackers. In response, Anthropic decided to limit the scope of Mythos's rollout and launched a project called "Glass Wings," which aims to work with multiple tech companies to protect the security of critical software worldwide.

On the regulatory front, the Trump administration was reported to have considered issuing an executive order to strengthen oversight of these tools, though the administration denied those reports. Meanwhile, Microsoft announced a similar agreement with a government-backed AI safety institute in the UK, emphasizing the need for close collaboration with governments to test for national security and public safety risks.

Key Points:

1️⃣ The U.S. government has reached an agreement with Google DeepMind, Microsoft, and xAI to review the national security risks of AI models.

2️⃣ CAISI emphasizes that independent measurement science is crucial for understanding the capabilities and risks of AI models.

3️⃣ AI safety experts are concerned about the potential risks of powerful models being exploited by hackers, and tech companies are working together to protect critical software security.