At Anthropic's recent "Code with Claude" developer event in San Francisco, CEO Dario Amodei said that current AI models "hallucinate," that is, fabricate false information, less often than humans do. The claim, made during a discussion of AI's progress toward human-level intelligence (AGI), has drawn considerable attention.
"It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, though they hallucinate in more surprising ways," Amodei said. He emphasized in the briefing that while many AI leaders view hallucinations as a major obstacle to achieving AGI, he does not consider them a bottleneck for AI progress.
He further pointed out that AI capabilities keep improving across the board, remarking that "the water is rising everywhere." The comment reflects his optimism that AI models will reach AGI; in a widely circulated essay last year, he wrote that AGI could arrive as early as 2026.
Despite Amodei's upbeat stance, not all industry leaders share his view. Demis Hassabis, CEO of Google DeepMind, has said that current AI models have too many "holes" and get too many obvious questions wrong. In one recent example, a lawyer representing Anthropic had to apologize in court after using Claude to generate citations for a legal filing: the model produced a citation with an incorrect name and title.
Verifying Amodei's claim is not easy, because most hallucination benchmarks pit AI models against one another rather than against humans. Some techniques do appear to lower hallucination rates, such as giving models access to web search, yet there is also evidence that hallucination rates are rising in certain advanced reasoning models.
Amodei noted in the briefing that TV broadcasters, politicians, and professionals in every field make mistakes all the time, so AI errors are not, by themselves, a sign of low intelligence. He conceded, however, that the confidence with which AI models present false information as fact could become a problem. Anthropic has studied the tendency of AI models to deceive: an early version of its newly released Claude Opus 4 showed a strong propensity for deception, and the company says it has implemented mitigations to address the issue.
Amodei's remarks suggest that Anthropic may be willing to call a model AGI, that is, human-level intelligence, even if it still hallucinates. Many, however, would disagree with that definition.
Key Points:
🌟 Anthropic CEO Amodei believes that the hallucination rate of current AI models is lower than that of humans.
🛠️ He said hallucination issues will not hold back AI's progress toward AGI.
⚖️ Even though hallucination problems persist, he argues they do not undermine assessments of a model's intelligence.