With the rapid development of artificial intelligence (AI), a growing number of researchers are studying large language models such as ChatGPT. Recently, a research team from Arizona State University posted a thought-provoking paper on the preprint server arXiv, arguing that we may be misinterpreting these models: rather than truly thinking or reasoning, they search for correlations.
In the paper, the researchers note that although these models often generate seemingly plausible intermediate steps before producing an answer, this does not mean they are reasoning. Anthropomorphizing such behavior, the team warns, risks misleading the public about how the models actually work. What looks like "thinking" in a large model, they argue, is computation that surfaces statistical correlations in the data rather than an understanding of causal relationships.
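To illustrate the distinction being drawn here (this toy sketch is ours, not from the paper, and has nothing to do with how production LLMs are built), the following Python snippet trains a tiny bigram model on a handful of sentences and then emits fluent-looking "step by step" text purely from word co-occurrence counts. Nothing in it represents or manipulates causal structure.

```python
# Toy illustration (not from the paper): a bigram model that produces
# plausible-sounding procedural text purely from co-occurrence statistics.
import random
from collections import defaultdict

corpus = (
    "first we add the numbers then we check the result "
    "then we add the remainder and we check the answer "
    "first we check the inputs then we add the totals"
).split()

# Count how often each word follows another (pure correlation, no causality).
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="first", length=12, seed=0):
    """Emit a word chain by sampling successors in proportion to how often they co-occur."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        choices = transitions.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate())  # e.g. "first we check the inputs then we add the numbers ..."
```

The generated chain reads like a sequence of procedural steps, yet the program never models what "adding" or "checking" actually does; it only reflects which words tend to follow which.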
To test this view, the researchers also examined reasoning-style models such as DeepSeek R1, which perform well on certain tasks. Strong performance, however, does not show that these models possess human-like thinking. According to the study, the intermediate text the models produce does not reflect a genuine reasoning process, so treating it as one can mislead users about what the models are actually capable of solving.
The study is a reminder that, in an era of growing reliance on AI, we should be cautious in judging what these systems can actually do. As the understanding of large models deepens, future AI research may move toward greater explainability, helping users better grasp how these systems really work.