Recently, AI company Anthropic officially endorsed SB53, a bill introduced by California State Senator Scott Wiener. The bill aims to impose unprecedented transparency requirements on the world's largest AI model developers and represents one of the first state-level legislative attempts in the United States to address frontier AI safety. Many Silicon Valley tech companies and the federal government, however, have strongly opposed it.

Anthropic stated in a blog post: "Although we believe that frontier AI safety issues should be addressed at the federal level rather than by individual states, the development of powerful AI technology will not wait for consensus in Washington." The company emphasized that establishing AI governance standards is urgent, and SB53 provides a reasonable path.

If SB53 passes, AI model developers such as Anthropic, OpenAI, Google, and xAI will be required to develop safety frameworks and publish safety and security reports before deploying powerful AI models. The bill would also protect employees who report safety concerns.

The bill specifically targets AI models' contribution to "catastrophic risks," defined as events causing at least 50 deaths or more than $1 billion in damages. SB53 concentrates on preventing extreme AI risks, such as the use of AI models to develop biological weapons or conduct cyberattacks, rather than nearer-term concerns like deepfakes or excessive personalization.

The California Senate has already passed a preliminary version of SB53, but the bill still needs a final vote before it can be sent to the governor for signature. Governor Gavin Newsom has not yet taken a position on it; he previously vetoed a similar bill, SB1047.

Opposition comes mainly from Silicon Valley and the Trump administration, which argue that such bills could hamper American innovation in the U.S.-China AI competition. Investors such as Andreessen Horowitz and Y Combinator have strongly opposed the bill, arguing that state governments should not regulate AI safety and that the matter should be left to the federal government.

Despite these objections, policy experts consider SB53 relatively moderate compared with earlier AI safety bills, noting that California legislators showed respect for technical realities and a degree of legislative restraint in drafting it. Jack Clark, co-founder of Anthropic, said that while he would prefer a federal standard, the current bill offers an indispensable blueprint for AI governance.

Key Points:

📜 Anthropic supports the SB53 bill, emphasizing the importance of AI governance.  

⚖️ The bill requires large AI developers to create safety frameworks, release safety reports, and protect whistleblowers.  

🔍 SB53 focuses on preventing extreme AI risks and still needs a final vote to take effect.