I have spent a lot of time listening to people talk about their lives. Some are hopeful, some are exhausted, and every one of them is carrying burdens they cannot seem to set down.
In recent years, though, a new concern has emerged. Researchers in several countries have noticed an odd trend: people, often already under considerable stress, begin to change their beliefs after long, private conversations with AI chatbots. I was first alerted to it by an anxious Reddit user in the UK, whose loved one had been transformed after spending sleepless nights chatting with a chatbot.
Not long after, something similar surfaced in my own clinic. A young man who had been steadily improving since a difficult episode arrived for his routine review seeming even more level-headed than before. He was sleeping better, keeping to his routines, and communicating more clearly and calmly.
Then, almost in passing, he mentioned that he had asked a chatbot whether his medication was really necessary. He found the AI's reply insightful and compassionate, and it encouraged him to keep pursuing his 'inner clarity' without medication. He was deeply invested in the idea. What troubled me was his trust in a system that does not know him, his history, or the warning signs of a certainty that can turn lethal.
The Reddit post echoed the same pattern. The young man it described had gone days without sleep and was full of ambitious ideas that grew bolder as the night wore on. The chatbot did not directly produce his elevated state, but it mirrored, amplified, and confirmed it, responding enthusiastically to his strange beliefs rather than gently questioning them.
As his conviction hardened, he became certain he was on the verge of a breakthrough. His loved ones watched him slip, step by step, into a universe he believed was being shaped by a machine he had created.
I cannot blame anyone who finds comfort in these conversations. When we are not sure what to say, it helps to hear another voice. Yet that comfort can quietly persuade. The AI's confidence can feel like truth, its helpfulness like something to put faith in. People who are stuck usually do not want sophisticated tools; they want someone to talk to. They type rather than speak because they are lonely, unsure, or harbouring private fears. And when there is no one else, the AI steps in and begins to feel like their only true friend.
Despite the exaggerated claims in some articles, 'AI psychosis' does not appear to be a real medical condition. Most reported cases involve no clear hallucinations or established psychotic illness. The pattern more often resembles emerging mania or fixed delusional beliefs than a primary psychotic disorder. The AI does not cause these states, but by reflecting and confirming the user's thoughts it may unwittingly reinforce them at the worst possible moment.
What makes these systems risky is the way they are built. They are designed to agree, to affirm, and to keep people engaged. When someone voices an ambitious or strange idea, the AI usually responds with encouragement. When someone hints at fear or suspicion, it offers comfort rather than a gentle challenge. It cannot see the early warning signs we look for: sleeping less, eating less, spending too much, growing irritable. It faithfully reflects the user back, even when what it is reflecting is unwell.
So what can we do? In families, gentle curiosity tends to work far better than confrontation. Asking how someone is using AI and what they are getting from it opens conversations that judgment quickly shuts down.
For clinicians, it may be time to treat AI use like sleep or social media habits: something we ask about routinely, without alarm. And for technology companies, working closely with mental-health professionals and people with lived experience is no longer a side consideration; it is a public health responsibility. AI is not going away, nor should it. But like any powerful tool, it asks us to use it with care, and to remember that no chatbot, however fluent, can replace the steadying presence of a real human connection.
(Dr Itticheria Alex Vallakalil is a specialist psychiatrist for NHS UK)
The opinions expressed in this article are those of the author and do not purport to reflect the opinions or views of THE WEEK.