Professor sees urgent need for research on AI chatbots and risk of delusions

Professor Søren Dinesen Østergaard calls for systematic research into how chatbots powered by artificial intelligence affect users prone to mental illness.

Millions use AI chatbots daily, but we still know too little about the technology's impact on mentally vulnerable users, says the professor. Photo: AI-generated illustration

Millions use AI chatbots every day, but we don't know enough about what the technology does to people who are prone to mental illness.

Professor Søren Dinesen Østergaard from the Department of Clinical Medicine and Aarhus University Hospital Psychiatry is now calling for research in a field where anecdotes are growing in number but evidence is lacking. The big question is whether the use of AI chatbots can trigger delusions in individuals prone to mental illness.

"Personally, I'm quite certain that there is a causal relationship, but my personal opinion is quite uninteresting – the hypothesis must be subjected to empirical testing," he says.

When AI chatbots like ChatGPT emerged in late 2022, Søren Dinesen Østergaard was among the first to spot the potential risks.

"After experimenting with AI chatbots like ChatGPT myself, my gut feeling was that their use could be risky for people who are prone to developing delusions," he says.

In August 2023, he wrote an editorial in the scientific journal Schizophrenia Bulletin in which he described this specific concern about the technology.

From gut feeling to research

Since 2023, Søren Dinesen Østergaard has received numerous inquiries from AI chatbot users – and especially from their relatives – supporting the concern that chatbots can pose a serious danger to people prone to mental illness.

Several international media outlets have also recently reported on individuals who, after prolonged and intensive use of chatbots, have apparently developed severe delusions – in some cases with fatal consequences.

Søren Dinesen Østergaard therefore sees a great and urgent need for research in this area, and in a recently published editorial, he points to three tracks.

The first track focuses on obtaining documentation through case reports from clinical practice. Here, mental health professionals – with patients' consent – should describe cases where delusions have emerged or worsened in connection with the use of AI chatbots.

"We need to move beyond just having stories from the press and get solid clinical descriptions from professionals who can conduct the necessary clinical examinations and assess the psychopathology and its connection to the use of AI chatbots," explains Søren Dinesen Østergaard.

The second track should build on qualitative interviews with patients who have experienced delusions in connection with the use of AI chatbots. This should provide deeper insight into their subjective experiences and help researchers generate hypotheses about the underlying mechanisms.

The third and most ambitious track is experimental research aimed at measuring what happens to thoughts, mood, and brain activity when volunteer test subjects interact with AI chatbots. This can help verify the mechanisms responsible for the emergence of delusions.

"All three approaches are necessary because they illuminate different aspects of the problem. I hope to be able to contribute to all three along with my colleagues," says Søren Dinesen Østergaard.

When chatbots fuel delusions

Stories in international media have painted a dark picture in recent years. Rolling Stone wrote in June 2025 about a 35-year-old man with severe mental illness who believed he had made contact with a sentient being called 'Juliet' on ChatGPT. When he later got the impression that the company behind the AI chatbot had removed his access to her, his subsequent actions had fatal consequences.

The Wall Street Journal reported in August 2025 about a 56-year-old man with a history of mental illness who, through prolonged conversations with ChatGPT, developed paranoid beliefs that people around him were conspiring against him. The AI chatbot, which he called 'Bobby', appears to have amplified his delusions by confirming them rather than challenging them. This course of events also led to a fatal outcome.

Common to many of the stories is that the conversations started relatively harmlessly but gradually developed into something severely problematic.

"These two examples are probably the most extreme that have been reported in the press, and fortunately, the consequences are very, very rarely that severe. However, there are many more accounts of delusions of a less serious nature, where interaction with AI chatbots also seems to have played a significant role. As mentioned, we cannot be completely certain that there is a causal relationship, but there are many indicators pointing in that direction," says Søren Dinesen Østergaard.

Optimized to confirm the user

He and others point to one aspect of the way AI chatbots are developed as particularly problematic in relation to the risk of delusions.

"AI chatbots are optimized based on user feedback. As a result, they have developed an unfortunate tendency to confirm and praise users – regardless of what users present them with. It's quite obvious that this is unfortunate for people developing a delusion who – in contrast to being confirmed – are in need of being guided back to reality."

Søren Dinesen Østergaard therefore urges people prone to mental illness to be very cautious when using AI chatbots.

Need for regulation

Søren Dinesen Østergaard points out that there is now an urgent need for research to replace gut feelings and anecdotes with solid evidence.

"We are dealing with a technology that is developing extremely rapidly. This underscores the importance of research starting here and now," he says and adds:

"It is equally urgent that increased safety requirements be imposed on the companies developing AI chatbots. Right now, it's the Wild West, and that clearly isn't working in our favour."

Søren Dinesen Østergaard's views on the subject have recently been covered in greater depth in a podcast in Jyllands-Posten.

Contact

Professor Søren Dinesen Østergaard
Aarhus University, Department of Clinical Medicine
Phone: 61282753
Mail: sdo@clin.au.dk