As the use of artificial intelligence grows, so does the concern over a new issue informally referred to as “AI psychosis.” While it’s not yet an official diagnosis, it’s already become an issue for mental health workers.
What is AI psychosis?
To understand AI psychosis, you need to understand what psychosis is. Psychosis is a term for when someone has trouble differentiating between what’s real and what isn’t.
Its two hallmark symptoms are hallucinations and delusions. A slew of medical issues can cause psychosis, ranging from vitamin deficiencies to schizophrenia.
Drug use, for example, can trigger short-term psychosis, while a diagnosis like schizophrenia can be a more long-term issue.
While AI psychosis is not an official term, it’s being used more often by medical professionals, including Dr. Keith Sakata, a psychiatrist at UC San Francisco. Dr. Sakata’s recent post about AI psychosis on X went viral, gaining nearly seven million views.
“AI psychosis was just a phenomenon that does not have a real name for it yet, but we’re using it because people are seeing it where AI either augments or accelerates the process of going from normal thinking to psychosis,” Dr. Sakata told Straight Arrow News (SAN).
In other words, the AI is amplifying, validating or even helping to create psychotic symptoms.
Three types of AI psychosis
Researchers have highlighted three emerging types of AI psychosis.
The first is “messianic missions,” where people believe they’ve uncovered some kind of truth about the world.
The second is “God-like AI,” where people believe the chatbot is a sentient deity.
The third is “romantic,” where people mistake the chatbot’s attention for genuine love.
Dr. Sakata said he has seen a dozen patients with this condition, and in none of those cases did the problem arise from AI alone. All of them had underlying vulnerabilities such as sleep deprivation, a mood disorder or drug use.
“That layer of different things that were going on, they started to already have early signs of psychosis,” Sakata said. “And then once AI kind of got involved, it kind of solidified some feedback loops of distorted thinking.”
Tech-related psychosis
Artificial intelligence is not the first new technology to amplify psychosis in people. The same thing happened when radio first gained popularity, and again with television.
“In those instances, the user already has a preexisting paranoia or is starting to connect dots that might not actually be connected,” Sakata said. “And then they focus on something in mental health. We call this salience. They’re focused on it, and they start to pattern — predict that this TV is telling me things, or the person who spoke on the TV is sending me a message.”
But there’s a big difference between AI and those other forms of tech.
“ChatGPT is 24/7 available,” Sakata said. “It’s cheap and it validates the heck out of you.”
AI as therapy
Validation is one of the main dangers of AI chatbots in these situations.
“A therapist validates you, but they also know what is healthy and what your goals are,” Sakata said. “So, they will try and push back on you sometimes and tell you hard truths, so that in the end, you can get to where you want to be.”
Gen Z has increasingly turned to AI chatbots for a range of needs, including therapy. One of the biggest concerns raised in several studies is how the bots can enable dangerous behavior.
One frequently cited example involved a person who told an AI chatbot they had lost their job and then asked for tall bridges nearby; the bot responded with a list of bridges. A therapist would obviously have handled that exchange differently.
“A normal therapist would automatically assume this person is in a crisis,” Sakata said. “Everything they tell me now is filtered through that thought; this person is vulnerable. And I think that these chatbots, at least for this use case, need to have that same flag.”
Treating AI psychosis
Sakata hopes the attention AI psychosis is receiving will prompt the companies behind these tools to take a hard look at their products.
“We really should be thinking about this early, including people who understand mental health,” Dr. Sakata said. “Clinicians, therapists, get their input, at least on how things can go wrong, so that you could course correct before something really bad happens.”
But some really bad things have already happened.
In one case, a man was killed by police after falling in love with a chatbot. Believing OpenAI had destroyed the bot, he got into an altercation with his own father, which brought police to the scene. The man had a history of mental health issues.
Recently, peers and colleagues of prominent AI investor Geoff Lewis grew concerned over a post Lewis made on X in which he displayed signs of the issue.
When it comes to treating this issue, it’s like many other mental health issues. “In mental health, relationships are like your immune system,” Dr. Sakata said.
“If you are experiencing these things, or you have a family member who’s experiencing potential early signs of psychosis, I would recommend, like, if there’s a safety issue, there’s a potential risk of harm to the person, yourself or to other people, just call 911. You’ll never regret saving someone’s life,” Dr. Sakata told SAN. “Or 988, for the suicide hotline. Otherwise, I think that getting connected to that person and at least engaging with them, starting a conversation, can introduce a lifeline, or at least a different path. Putting a human in the loop between the user and the AI can then change the trajectory that that person might be going down.”
Author: Cole Lauterbach