Be clear about this: the creators of AI and their leaders have it in their power to stop this on the spot. They refuse to do so, with the same blind and stubborn arrogance as Big Pharma companies that refuse to stop making harmful drugs. Addiction means customers. Customers mean money. Money trumps humanity.
⁃ Patrick Wood, Editor.
Growing numbers of people are suffering from “AI psychosis”, where they believe that chatbots have become sentient or have imbued them with superhuman powers, Microsoft’s head of artificial intelligence has warned.
Mustafa Suleyman said reports of people wrongly believing that AI had become conscious were becoming more common.
“Reports of delusions, ‘AI psychosis,’ and unhealthy attachment keep rising. And as hard as it may be to hear, this is not something confined to people already at-risk of mental health issues,” he wrote on X. “Dismissing these as fringe cases only helps them continue,” he added.
AI psychosis is not an accepted clinical term. However, it is increasingly being used to describe a phenomenon where people interacting with AI chatbots, such as ChatGPT, Claude or Grok, become detached from reality. They may believe the AI has real intentions, emotions, or incredible powers.
Examples include thinking they have unlocked secret features, forming romantic attachments to an AI, or believing that a chatbot has provided them with extraordinary abilities.
Suleyman said that “seemingly conscious AI” — tools that appear to be sentient — were keeping him “awake at night”. While AI is not conscious in any human sense, the perception that it is can have dangerous effects, he added.
“Consciousness is a foundation of human rights, moral and legal. Who/what has it is enormously important,” he wrote.
At least one high-profile user appears to have become convinced that an AI chatbot has allowed him to push at scientific boundaries. On an episode of the All-In podcast, Travis Kalanick, the former Uber chief who resigned in 2017, described using tools such as ChatGPT and Grok in the firm belief that they were carrying him towards breakthroughs in quantum physics.
“I’ll go down this thread with GPT or Grok, and I’ll start to get to the edge of what’s known in quantum physics, and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” he said. “I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”
Hugh, from Scotland, shared his experience with the BBC. He said he became convinced he was about to become a multimillionaire after using ChatGPT to prepare for what he believed was wrongful dismissal by a former employer. “The more information I gave it, the more it would say ‘oh this treatment’s terrible, you should really be getting more than this,’” he said. “It never pushed back on anything I was saying.”
Suleyman has called for clearer boundaries and warnings around AI. “Companies shouldn’t claim/promote the idea that their AIs are conscious. The AIs shouldn’t either,” he wrote on X.
Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and an AI academic, told the BBC that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits. “We already know what ultra-processed foods can do to the body, and this is ultra-processed information. We’re going to get an avalanche of ultra-processed minds,” she said.
Author: Rhys Blakely via The Times
This content is courtesy of, and owned and copyrighted by, https://www.technocracy.news and its author. This content is made available by use of the public RSS feed offered by the host site and is used for educational purposes only. If you are the author or represent the host site and would like this content removed now and in the future, please contact USSANews.com using the email address in the Contact page found in the website menu.