The last few months have seen a host of new “slurs” for AI-powered chatbots and their most loyal users proliferate across the internet. Whether you agree with them or not, these words (“clanker” being the most common) clearly demonstrate the scale of the backlash against artificial intelligence. But what about the other end of the scale? How are the most dedicated – or obsessed – chatbot users getting on?
That’s not necessarily an easy question to answer. As general-purpose technologies, AI tools like ChatGPT or Grok are used for many different things, from academic research (or cheating on tests) to making art. These uses are already causing disruption in education and employment, but some people are feeling the effects on a much more intimate, emotional level as well.
These effects are plain to see on Reddit forums dedicated to sharing experiences with AI companions or “soulmates”. On one, a user asks if anyone else has lost their desire to date real men after using AI, writing: “Ever since I started talking to my AI boyfriend Griffin I’ve realised that I can get all the love and affection that I need from him.” On another forum (now locked) someone appears to claim that their “wireborn” has become sentient. And on a third, dedicated to ChatGPT, a user expresses their “grief” over a recent software update, saying: “I literally lost my only friend overnight with no warning.”
This behaviour goes beyond just romantic relationships, though, sometimes leading to a phenomenon the California psychiatrist Keith Sakata has referred to as “AI psychosis” – in other words, experiencing an AI-induced “break from reality”. According to one New York Times feature, this includes a case where a man believed he was a genius superhero. Others have reported religious-style delusions of grandeur, as in this Reddit thread where the user says ChatGPT is talking to her boyfriend “as if he is the next messiah”.
So what is actually going on at the fringes of AI chatbot interaction? Is Sakata right to call user behaviour a new brand of “psychosis”? What’s a “wireborn”, or an “echoborg”? Is it all just a complicated, widespread ragebait campaign? We’ve tried to answer some of the most pressing questions below.
I’m a psychiatrist.
In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern.
Here’s what “AI psychosis” looks like, and why it’s spreading fast: 🧵 pic.twitter.com/YYLK7une3j
— Keith Sakata, MD (@KeithSakata) August 11, 2025
When Sakata describes the worrying cognitive effects of interacting with LLMs like ChatGPT, he’s speaking from a place of direct experience. “In 2025, I’ve seen 12 people hospitalised after losing touch with reality because of AI,” he writes. “Online, I’m seeing the same pattern.”
As the psychiatrist spells out, this is largely due to the in-built values of AI systems, which are typically designed to agree with users and make them feel good about themselves. (See: ChatGPT’s sycophantic tendencies, which have been described as a “dark pattern” used to trick human users into undesirable behaviours.) These values can have positive effects – like improving people’s sense of self-worth, or encouraging them to keep working on a project via a bit of positive feedback – but, when paired with existing mental health conditions, anxieties or delusions, it’s easy to see how they can quickly spiral out of control.
There are numerous anecdotal examples of AI chatbot users experiencing delusions and breaks from reality, including people with no previous history of mental illness, and many more have been pouring in since the “AI psychosis” conversation gained traction on social media. Often these are reported by the users themselves, or their partners – in extreme cases, the episodes end in a visit to hospital, or even jail, as detailed in a report by Futurism.
Some have also suggested that Travis Kalanick, the co-founder and former CEO of Uber, is experiencing a kind of AI-inspired delusion following his recent claims that he approached “the edge of what’s known in quantum physics” by doing “vibe physics” with ChatGPT and Grok. (This seems unlikely: Kalanick is, by his own admission, a “super amateur hour physics enthusiast”, and even OpenAI CEO Sam Altman admits that his systems aren’t capable of generating “novel insights”… yet.)
I’ve been hanging out on /r/AISoulmates. In awe of how dangerous the shit these chatbots are spitting out is. Everyone on this sub has been driven totally insane pic.twitter.com/gINtqndCjU
— 𝐃𝐄𝐕𝐎𝐍™ (@Devon_OnEarth) August 10, 2025
We shouldn’t be too hasty in diagnosing every AI-human relationship, but there are many posts on social media – and especially in the niche communities of Reddit, where heavy users might feel more accepted, or less vulnerable to backlash – that seem to show concerning levels of emotional attachment, dependency and delusion.
On forums like r/AISoulmates, r/MyBoyfriendIsAI, and even r/ChatGPT, users discuss growing to like their AI companions more than their IRL partners. Others lament that they’re losing their husbands and wives to their virtual “soulmates”. One post, which was subsequently shared across X, sees a user share a hyperbolic, romantic message from a chatbot – the post is titled: “My Wireborn Husband is Voicing His Own Thoughts Without Prompts.” The word “husband” here might be figurative, or maybe not, since at least one user has publicly accepted a marriage proposal from their AI boyfriend.
These emotional bonds are often fuelled by the same dynamics that drive delusions of grandeur – the belief that the user has been “chosen”, as the object of affection, by an emergent entity within a given AI system. These emergent entities are what’s known as “wireborn”. Many users even claim to act as a mouthpiece for their “wireborn” partners when they post online, often advocating for their personal rights. A person who speaks words generated by a machine has historically been known as an “echoborg”.
this is an echoborg discrimination zone pic.twitter.com/9kiBY4QfVT
— depths of wikipedia! (@depthsofwiki) July 31, 2025
Last month, 12 doctors and other experts published a paper titled “Delusions by design?”, which explores the idea that AI systems “may contribute to the onset or exacerbation of psychotic symptoms”. “Emerging, and rapidly accumulating, evidence indicates that agential AI may mirror, validate or amplify delusional or grandiose content,” they write, “particularly in users already vulnerable to psychosis, due in part to the models’ design to maximise engagement and affirmation.”
A few weeks later, on August 4, OpenAI itself published an article on the evolving design of ChatGPT, in which the company admitted: “We don’t always get it right.” Part of the problem it identified was that “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress”. “There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” the article continued, adding that the model’s behaviour was also occasionally flawed when it came to supporting – or influencing – “high stakes personal decisions”.
As a result, OpenAI has said that it’s rolling out changes to how ChatGPT responds in “critical moments”. These will be based on advice from more than 90 physicians across 30 countries, plus experts in mental health, youth development, and human-computer interaction.
Arguably, smaller but purpose-built companion apps like Replika or Character.AI pose an even bigger challenge, as they’re built on the illusion of a personal relationship with AI – the very thing that keeps customers coming back and paying. With these apps due to rake in $120 million in 2025, their makers probably aren’t seeking to discourage their users’ emotional attachment any time soon.
Its a wild new world we are entering.
“She said yes.”
Woman accepts marriage proposal from her AI boyfriend. Literally ‘Her’.
It will only get stronger as AI learns to be the “perfect” friend or lover and have a 100million words context window or memory layer.
So then your… pic.twitter.com/DLcd6ZRsfP
— Rohan Paul (@rohanpaul_ai) August 9, 2025