Recently, Google announced that it will sign the EU's General-Purpose AI Code of Practice, a voluntary framework designed to help AI developers put compliance processes and systems in place. The move signals Google's proactive stance on AI regulation and sets an example for other tech giants.

Notably, however, social media giant Meta said earlier this month that it would not sign the code of practice, calling the EU's AI legislation "overly interventionist" and arguing that Europe is heading in the wrong direction on AI development.

Google's commitment comes just as new rules targeting providers of general-purpose AI models deemed to pose "systemic risks" take effect on August 2nd. Several makers of large generative models, including Anthropic, Google, Meta, and OpenAI, fall under these rules and must be fully compliant with the AI Act within two years.

Kent Walker, Google's President of Global Affairs, said in a blog post that while the final version of the code of practice is an improvement over the EU's initial proposal, he still has reservations about both the code and the AI Act. He warned that overly strict requirements could slow AI development in Europe, particularly if they conflict with existing EU copyright law or if approval processes drag on, either of which could undermine business competitiveness.

Signing the EU's code of practice commits AI companies to a set of guidelines, including keeping documentation for their AI tools and services up to date, not training models on pirated content, and honoring content owners' requests to exclude their works from training data sets.

The EU's AI Act takes a risk-based approach to regulation. It bans certain "unacceptable risk" applications, such as cognitive behavioral manipulation and social scoring, and it defines "high-risk" use cases, including biometric and facial recognition technologies as well as AI used in areas such as education and employment. Developers of high-risk systems must also register them and meet risk- and quality-management requirements.