Recently, Demis Hassabis, co-founder and CEO of DeepMind, made alarming remarks in public. He admitted that the superintelligent AI now under development does pose a risk of human extinction, and that the global AI development race has slipped into an "irreversible" state of lost control.

Hassabis pointed out that under the pressure of commercial competition and technological rivalry, traditional external governance measures can no longer effectively regulate AI. The statement has deepened concerns within the technology community that the "window period" for AI safety is narrowing.


Image source note: The image is AI-generated, and the image licensing service provider is Midjourney.

The Systemic Defenses Have Collapsed: Safety Standards Are Being Sacrificed for Development Speed

Hassabis was once a firm advocate for AI safety, in his early days even attempting to build a "technical safety net" through independent oversight and confidential R&D. However, the sudden emergence of ChatGPT in 2022 upended that development rhythm, forcing companies such as Google to merge their R&D departments in order to compete, and the original safety review mechanisms gradually broke down.

He conceded that the idea of relying on ethics committees and external institutions to regulate AI has largely failed: in the face of ruthless commercial competition, non-profit governance mechanisms struggle to retain any substantive influence.

Shifting to Personal Influence: Holding the Last Line of Defense from Key Decision-Making Positions

Facing this loss of control, Hassabis has changed his approach to governance, seeking to exert personal influence by occupying key decision-making positions. He continues to drive the development of flagship models such as Gemini while using his technical authority to manage risk at critical decision points.