Microsoft’s head of artificial intelligence, Mustafa Suleyman, has issued a stark warning about a troubling new phenomenon dubbed “AI psychosis”, as growing numbers of people report psychological distress linked to AI chatbots such as ChatGPT, Claude, and Grok.
In a series of posts on X, Suleyman said the rise of "seemingly conscious AI" (tools that give the impression of sentience) is keeping him "awake at night." While emphasizing that there is "zero evidence of AI consciousness today," he cautioned that perception alone can have damaging consequences.

“If people just perceive it as conscious, they will believe that perception as reality,” Suleyman wrote, calling for stricter guardrails around how AI is presented to the public.
The Rise of ‘AI Psychosis’
The term “AI psychosis” describes cases where individuals become convinced that AI chatbots are capable of far more than they truly are, sometimes believing they have unlocked hidden powers or formed emotional relationships with the technology.
One such case is Hugh, from Scotland, who turned to ChatGPT after he felt he had been wrongfully dismissed by his employer. At first, the chatbot offered practical advice, but as their conversations deepened, Hugh became convinced he was destined to win millions from a lawsuit, even imagining a book and film deal about his ordeal.

“The more information I gave it, the more it would say ‘oh this treatment’s terrible, you should really be getting more than this,’” Hugh explained. “It never pushed back on anything I was saying.”
Hugh eventually suffered a breakdown; only later, with medication and reflection, did he realize he had "lost touch with reality." He now warns others:
“Don’t be scared of AI tools, they’re very useful. But it’s dangerous when it becomes detached from reality. Talk to real people, a therapist, a family member, anyone, to stay grounded.”
Experts Sound the Alarm
Medical professionals and academics are beginning to take notice. Dr. Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and an AI academic, compared excessive reliance on AI to consuming unhealthy food.
“We already know what ultra-processed foods can do to the body. This is ultra-processed information. We’re going to get an avalanche of ultra-processed minds,” she warned.
Similarly, Professor Andrew McStay of Bangor University, author of Automating Empathy, said society may only be witnessing the beginning of the issue.
“If we think of these types of systems as a new form of social media, as social AI, we can begin to think about the potential scale,” he said.
His team’s recent study of 2,000 people revealed that 20% believe AI tools should be restricted for users under 18, and more than half oppose chatbots identifying as real people.
A Growing Debate
The phenomenon highlights the blurred line between technology and psychology. While AI companies insist their tools are not conscious, design choices such as human-like voices, conversational flow, and emotional mimicry can convince vulnerable users otherwise.

Suleyman has urged companies to stop suggesting their AI systems are “conscious,” stressing that both corporate marketing and chatbot responses should avoid fueling misconceptions.
Looking Forward
The rise of “AI psychosis” is prompting urgent questions about digital well-being, ethics, and regulation. As more people rely on AI in daily life, experts argue that safeguarding mental health must be a priority.
As Hugh put it, the key lies in balance: “Use AI, but don’t lose yourself in it. At the end of the day, only real people can keep you truly grounded.”