Google DeepMind has recently done something rare in the AI world: it has formally established a full-time philosopher position, a first for a leading AI laboratory.
The person taking the role is Henry Shevlin, a scholar from the University of Cambridge, who is expected to join in May. His research focuses not on algorithms or model architecture but on machine consciousness, human-machine relationships, and whether humanity is prepared for the arrival of AGI. More importantly, this is not a nominal advisory post: he will be genuinely embedded in DeepMind's research process and participate in frontline work.

Looked at closely, the appointment is thought-provoking.
For a long time, top laboratories treated AGI chiefly as an engineering problem: is there enough compute and data, is the architecture sophisticated enough. Now, with a real position, DeepMind is signaling to the outside world that the question is not so simple.
When a machine begins to show behaviors that resemble "consciousness," how should we define it? Where is the boundary between humans and AI? When AGI truly arrives, can the ethical framework of human society withstand it? These are not questions engineers can answer.
Meanwhile, public fear and anxiety about AI development are spreading. Bringing in a philosopher at this moment is, in a way, a response: an acknowledgment that these deeper questions can no longer be avoided and must be faced directly.
