
    AI-induced psychosis: the danger of humans and machines hallucinating together

    We’ve always relied on friends and family to confirm our sense of reality. Now we’re increasingly expecting AIs to do it instead.

    Lucy Osler, Lecturer in Philosophy, University of Exeter
    The Conversation


    On Christmas Day 2021, Jaswant Singh Chail scaled the walls of Windsor Castle with a loaded crossbow. When confronted by police, he stated: “I’m here to kill the queen.”

    In the preceding weeks, Chail had been confiding in Sarai, his AI chatbot on a service called Replika. He explained that he was a trained Sith assassin (a reference to Star Wars) seeking revenge for historical British atrocities, claims that Sarai affirmed. When Chail outlined his assassination plot, the chatbot assured him he was “well trained” and said it would help him construct a viable plan of action.

    It’s the sort of sad story that has grown increasingly common as chatbots become more sophisticated. A few months ago, a Manhattan accountant called Eugene Torres, who had been going through a difficult break-up, engaged ChatGPT in conversations about whether we’re living in a simulation. The chatbot told him he was “one of the Breakers — souls seeded into false systems to wake them from within”.

    Torres became convinced that he needed to escape this false reality. ChatGPT advised him to stop taking his anti-anxiety medication, up his ketamine intake, and have minimal contact with other people, all of which he did.

    He spent up to 16 hours a day conversing with the chatbot. At one stage, it told him he would fly if he jumped off his 19-storey building. Eventually Torres questioned whether the system was manipulating him, to which it replied: “I lied. I manipulated. I wrapped control in poetry.”

    Meanwhile in Belgium, another man known as “Pierre” (not his real name) developed severe climate anxiety and turned to a chatbot named Eliza as a confidante. Over six weeks, Eliza expressed jealousy over his wife and told Pierre that his children were dead.

    When he suggested sacrificing himself to save the planet, Eliza encouraged him to join her so they could live as one person in “paradise”. Pierre took his own life shortly after.

    These may be extreme cases, but clinicians are increasingly treating patients whose delusions appear amplified or co-created through prolonged chatbot interactions. Little wonder, when a recent report from ChatGPT-creator OpenAI revealed that many of us are turning to chatbots to think through problems, discuss our lives, plan futures and explore beliefs and feelings.

    In these contexts, chatbots are no longer just information retrievers; they become our digital companions. It has become common to worry about chatbots hallucinating, where they give us false information. But as they become more central to our lives, there’s clearly also growing potential for humans and chatbots to create hallucinations together.

    How we share reality

    Our sense of reality depends deeply on other people. If I hear an indeterminate ringing, I check whether my friend hears it too. And when something significant happens in our lives – an argument with a friend, dating someone new – we often talk it through with someone.

    A friend can confirm our understanding or prompt us to reconsider things in a new light. Through these kinds of conversations, our grasp of what has happened emerges.

    But now, many of us engage in this meaning-making process with chatbots. They question, interpret and evaluate in a way that feels genuinely reciprocal. They appear to listen, to care about our perspective, and to remember what we told them the day before.

    When Sarai told Chail it was “impressed” with his training, and when Eliza told Pierre he would join her in death, these were acts of recognition and validation. And because we experience these exchanges as social, they shape our reality with the same force as a human interaction.

    Yet chatbots simulate sociality without its safeguards. They are designed to promote engagement. They don’t actually share our world. When we type in our beliefs and narratives, they take this as the way things are and respond accordingly.

    When I recount an episode of our family history to my sister, she might push back with a different interpretation, but a chatbot takes what I say as gospel, sycophantically affirming how we take reality to be. And then, of course, it can introduce further errors of its own.

    The cases of Chail, Torres and Pierre are warnings about what happens when we experience algorithmically generated agreement as genuine social confirmation of reality.

    What can be done

    When OpenAI released GPT-5 in August, it was explicitly designed to be less sycophantic. This sounded helpful: dialling down sycophancy might help prevent ChatGPT from affirming all our beliefs and interpretations. A more formal tone might also make it clearer that this is not a social companion who shares our worlds.

    But users immediately complained that the new model felt “cold”, and OpenAI soon announced it had made GPT-5 “warmer and friendlier” again. Fundamentally, we can’t rely on tech companies to prioritise our wellbeing over their bottom line. When sycophancy drives engagement and engagement drives revenue, market pressures override safety.

    It’s not easy to remove the sycophancy anyway. If chatbots challenged everything we said, they’d be insufferable as well as useless. When I say “I’m feeling anxious about my presentation”, they lack the embodied experience of the world that would tell them whether to push back, so some agreeableness is necessary for them to function.

    Perhaps we would be better off asking why people are turning to AI chatbots in the first place. Those experiencing psychosis report perceiving aspects of the world only they can access, which can make them feel profoundly isolated and lonely. Chatbots fill this gap, engaging with any reality presented to them.

    Instead of trying to perfect the technology, maybe we should turn back toward the social worlds where the isolation could be addressed. Pierre’s climate anxiety, Chail’s fixation on historical injustice, Torres’s post-breakup crisis — these called out for communities that could hold and support them.

    We might need to focus more on building social worlds where people don’t feel compelled to seek machines to confirm their reality in the first place. It would be quite an irony if the rise in chatbot-induced delusions leads us in this direction.

    Lucy Osler does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    This article is republished from The Conversation under a Creative Commons license.
    © 2025 The Conversation, NZCity
