Ilya Sutskever, a co-founder of OpenAI and now CEO of Safe Superintelligence, recently made notable comments in a three-thousand-word interview. He argued that AI's mainstream development path has hit a bottleneck, marking a return from an era of scaling to one of in-depth research.
Sutskever believes the AI industry went through a phase of rapid research from 2012 to 2020, followed by a stage of large-scale scaling. Now, despite continued growth in computing power, model performance is no longer improving significantly, and the line between productive scaling and wasted compute has blurred. This raises the question of whether AI research should once again focus on exploring fundamental theories and methods.
He also discussed the generalization ability of large models, noting that they perform well on specific benchmarks yet often make mistakes in practical applications. One likely cause, he suggested, is that the data selected for reinforcement learning training is too narrow, so the models fall short on complex real-world tasks. Sutskever illustrated this with a metaphor: today's AI models are like students who train intensively for programming competitions; they do well in contests but do not necessarily excel at real work.
Sutskever also emphasized the importance of emotions in decision-making. He proposed that human values and decision-making are regulated, to some extent, by emotions shaped through evolution, and that future AI systems will need to account for such emotional factors to better understand and adapt to a complex world.
Sutskever is not alone; other pioneers in the field have also raised doubts about the current direction of AI development. Turing Award winner Yann LeCun, for example, has argued that current language-model technology may be a "dead end" that cannot achieve true intelligence. He believes that "world models", which drive AI forward by simulating and understanding the world, will be the mainstream of future AI.
In short, the AI industry is at a major turning point. Relying solely on scaling compute and model size is no longer sufficient; new research paradigms must be re-examined and explored to pave the way toward Artificial General Intelligence (AGI).


