Geoffrey Hinton, a pioneer of deep learning often called a "godfather of AI" and a 2024 Nobel laureate in Physics, raised a controversial view in a recent podcast interview: current AI systems may already have some form of subjective experience, even if they have not yet developed self-awareness. He believes the core issue is not whether AI has consciousness, but that the human understanding of the nature of consciousness may be fundamentally flawed.
In the interview, Hinton reviewed the evolution of AI technology. He worked at Google for nearly ten years, witnessing the transformation of artificial intelligence from simple keyword-matching search to systems capable of deep semantic understanding of user intent. Early search engines could only return results based on word matching, while modern AI systems can grasp the true meaning behind text and perform many tasks at levels close to those of human experts.
On the technical level, Hinton elaborated on the differences between neural networks and traditional machine learning. He pointed out that machine learning is a broad concept, while neural networks are a specific learning method inspired by how neurons in the human brain work. Through an illustrative analogy, he explained how neurons transmit signals to process information, learn, and store memories.
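The neuron analogy Hinton describes can be made concrete with a minimal sketch (not his code, just a common textbook formulation): an artificial neuron sums its weighted inputs and passes the total through a "firing" function.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of input signals passed
    through a sigmoid 'firing' function, loosely analogous to how a
    biological neuron accumulates signals and fires past a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes to (0, 1)

# Two input signals with illustrative weights; the output is the
# neuron's activation strength.
activation = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
```

Learning, in this picture, means adjusting the weights so that the neuron's output better matches the desired behavior.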
Regarding breakthroughs in deep learning, Hinton emphasized the key role of the "backpropagation" algorithm. This algorithm allows AI systems to efficiently adjust the strength of trillions of neural connections during the learning process, thus quickly acquiring new knowledge. Although this theory was proposed in the 1980s, it could not be widely applied due to limited computing power at the time. It was not until the 2010s, with the maturity of hardware technologies such as GPUs, that it became feasible, leading to the explosive development of modern AI.
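The idea behind backpropagation can be sketched in a few lines, assuming a deliberately tiny network (one input, one hidden unit, one output, all linear for clarity); real systems apply the same chain-rule logic across billions of connections.

```python
import random

# Illustrative backpropagation sketch: the network learns y = 2x by
# propagating the output error backward through the chain rule and
# nudging each weight against its gradient.
random.seed(0)
w1, w2 = random.random(), random.random()
lr = 0.05
data = [(x, 2 * x) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

for epoch in range(2000):
    for x, y in data:
        h = w1 * x            # forward pass: hidden activation
        pred = w2 * h         # forward pass: output
        err = pred - y
        # backward pass: chain rule assigns blame to each weight
        grad_w2 = err * h
        grad_w1 = err * w2 * x
        w2 -= lr * grad_w2    # gradient-descent weight updates
        w1 -= lr * grad_w1

# The effective slope w1 * w2 approaches the target of 2.
```

The same two passes, forward then backward, are what GPUs made fast enough to run over enormous networks in the 2010s.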
Regarding the working principles of large language models, Hinton believes their thinking patterns share similarities with those of humans. By continuously predicting the next token in a text sequence, these models have developed human-like reasoning and learning abilities, rather than simply replicating patterns. He stated that as technology continues to advance, AI has evolved from a mere tool into a complex system capable of continuous learning and gradually understanding the world.
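Next-token prediction, the training objective Hinton refers to, can be illustrated with a toy counting model (not a real LLM, which learns a neural network over vast corpora rather than keeping frequency tables):

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# corpus, then predict the most frequent continuation.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often after `token`."""
    return follows[token].most_common(1)[0][0]

# "cat" follows "the" twice and "mat" once, so "the" -> "cat".
```

Hinton's point is that when this objective is scaled up and learned by a deep network, predicting the next token well seems to require building an internal model of what the text means.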
Hinton's views have sparked widespread discussion in the academic community. His core argument challenges the traditional framework for defining consciousness in cognitive science, raising a more philosophical question: How can we determine whether machines have consciousness when we have not fully understood the essence of human consciousness? This discussion not only concerns technological development, but also involves deeper reflections on the nature of intelligence and consciousness.