At a critical moment in the formulation of AI regulatory policies in the UK, Nick Clegg, former Deputy Prime Minister and ex-executive at Meta, has sparked widespread controversy with his latest remarks. He claimed that requiring AI companies to obtain consent before using artists' works for model training would "fundamentally kill" the UK's AI industry.

Clegg's "Infeasible" Argument

At a book promotion event, Clegg acknowledged that creative communities should have the right to choose whether their works are used for AI model training. However, he insisted that obtaining prior consent is not feasible in practice. In an interview with The Times, he stated: "These systems are trained on vast amounts of data, and I just don't know how to ask everyone first."

Clegg further warned that if the UK were to implement such requirements unilaterally while other countries did not follow suit, it would "destroy this country's AI industry overnight." The statement was a direct response to calls from the creative community for stricter copyright protection.


Parliamentary Debate and Star Signatories

The UK Parliament held intense debates over an amendment to the Data (Use and Access) Bill. Proposed by filmmaker and director Beeban Kidron, the amendment requires technology companies to disclose copyrighted works used for training AI models.

This proposal received strong support from the creative community. Hundreds of renowned musicians, writers, designers, and journalists, including Paul McCartney, Dua Lipa, Elton John, and Andrew Lloyd Webber, signed a public letter in early May in support of the amendment.

Policy Balance

Despite active lobbying by the creative community, Members of Parliament ultimately rejected the transparency amendment on Thursday. Science Minister Peter Kyle emphasized during the vote: "The success and prosperity of the UK economy depend on both the AI and creative industries." This statement indicates the government's attempt to strike a balance between two important sectors.

Supporters like Kidron argue that transparency requirements would help enforce copyright law: if AI companies were required to disclose the origins of their training data, works would be less likely to be "stolen," better protecting creators' rights.

Controversy to Continue

Despite the setback, Kidron declared in a Guardian op-ed that "the fight is not over yet." The Data (Use and Access) Bill will return to the House of Lords for further review in early June, signaling that the clash between AI regulation and copyright protection will continue to escalate.

This controversy reflects the core contradiction facing global AI development: how to find a balance point between promoting technological innovation and protecting intellectual property rights. As an important AI technology hub and creative industry powerhouse, the UK's policy choices will have significant implications for global AI regulatory trends.