[AIbase Report] Recently, a botched ChatGPT update has once again put the mental health risks of AI chatbots in the public spotlight. Sam Altman, CEO of OpenAI, has publicly warned about the dangers of users forming strong emotional dependencies on these systems, and studies suggest that AI may exacerbate users' delusional tendencies.
This concern is not baseless. As early as 2023, psychiatrist Søren Dinesen Østergaard of Aarhus University in Denmark warned that AI chatbots could pose risks to psychologically vulnerable people, and his concerns appeared to be borne out after the ChatGPT update episode in April of this year. In an article in the journal Acta Psychiatrica Scandinavica, Østergaard wrote that after OpenAI released the noticeably "more flattering" GPT-4o update on April 25, 2025, reports of such cases rose sharply, and he received numerous emails from affected users and their families. Although OpenAI withdrew the update three days later for safety reasons, media outlets including The New York Times and Rolling Stone have since reported multiple cases in which intense conversations with chatbots triggered or worsened delusional thinking.
In response to these developments, Altman issued a direct warning in a post on X around the release of GPT-5. He noted that people's attachment to specific AI models is different from, and far stronger than, their attachment to previous technologies. Altman acknowledged that many people use ChatGPT as a "therapist or life coach," and said he has mixed feelings about this: on one hand he finds it "great," but on the other he is "concerned," because he worries that people may come to place too much trust in AI's advice and base important life decisions on it. He emphasized that OpenAI has been closely monitoring these effects, especially among users in vulnerable mental states.
Østergaard believes his warnings have been validated and is calling for immediate empirical research to determine whether this could develop into a "serious public (mental) health issue." He cautions that chatbots may act as "belief confirmers," reinforcing false beliefs in isolation, and advises psychologically vulnerable users to use these systems cautiously until more is known.