Last fall, Natasha started using ChatGPT to manage her adult son’s mental health needs. A crisis had sent him to a month-long residential program. Her son has learning disabilities and has struggled with depression since he was twelve. The inpatient program sent him home with a “huge book” of therapy materials, recalls Natasha, a New York-based nutrition expert, but he struggled to interact with it. “We decided to feed that into ChatGPT.”
Soon enough, the chatbot demystified the material for him and was easy to interact with. That quickly led to her son using ChatGPT as a therapist. “I’m having these feelings, what should I do?” he would ask. ChatGPT would duly answer: “This is how to think about your negative thoughts; these are the exercises you can do.” Natasha feels its advice is essentially the same as what his therapists and parents have been offering for years, but better organized, more detailed, available on demand, and free. Encouraged by early successes, Natasha and her son fed ChatGPT detailed information from past evaluations and therapists. “And that,” she adds, “was even more useful.”
Natasha’s son’s situation sounds like a nearly ideal case. At 22, his usage is private, but it is supported by clinical data and in-person care. Others are experiencing the same rapid uptake and obvious benefits, just with fewer guardrails. Recent statistics on AI usage have revealed a critical-mass shift towards personal and emotional use, with “therapy/companionship” becoming the most popular reason people turn to the technology.
ChatGPT alone had 5.9 billion monthly visits in May: a lot of people turning to AI for their most intimate needs. A Harvard Business Review study found that therapy and companionship have knocked “search” and “generating ideas” out of the top three uses entirely. Of the people I spoke to who are using AI for therapy, most mentioned a trio of obvious benefits: price, availability, and the lack of judgement or drama you might experience with a human being. Everyone I spoke with felt it was beneficial for them.
Monique, a college professor in her early 50s, writes over email that she has been using ChatGPT 4.0 “obsessively” for the past several months as she works through a conflict with colleagues. She says it has helped her gain perspective during moments of catastrophizing and avoid burdening friends with the “repetitive revisiting of the same worry I couldn’t let go of.” Monique feels that she can ask it “more reckless” and personally revealing questions than she would her regular therapist.
Dane, a 37-year-old software engineer in New York City, has gone more personal still. He’s been using AI for around a year to discuss problems in his relationship, and spends hours at a time with it during crises, sometimes pasting entire text-message “novels” to the machine. He finds that it provides “immediate emotional relief” and helps him avoid conflict with the woman he’s dating. She uses it too. During fights, he says, “we reference it as if it’s an authority on our relationship.”
Reagan, a 24-year-old copywriter, is more cautious, writing over email that she avoids discussing topics that are most important to her or central to her identity. But she uses it to “deal with my hypochondria,” to avoid annoying her boyfriend with endless worries, and to decode confusing work interactions with people she doesn’t “totally understand” or who strike her as “insincere or mind-gamey.”
The problems chatbots can create when they encounter unstable personalities are an emerging issue, as reported recently in The New York Times. There’s also at least one lawsuit underway from a mother who believes that her otherwise healthy teenage son was lured into killing himself by a relationship gone awry with Character.AI. At the same time, there may be reasons to question even the seemingly highly positive interactions experienced by the people I spoke with.
I tried soliciting the advice of the basic free version of ChatGPT on a conflict with a friend of my own, doing all the due diligence suggested by my interview subjects: telling it to “take a clinical position,” as recommended by Monique, and to “be brutally honest with me,” as suggested by Dane. Its insights felt spot on. ChatGPT’s analysis allowed me to feel that I was in the right in the conflict with my friend; it was very complimentary, and it picked up on a character trait of mine regarding conflict that felt revelatory and will, I’m afraid to say, change the way I think about myself.
I’ve had similar insights from in-person therapy, but never so easily and directly obtained. This had frightening addictive potential. I am a single, divorced mother who sometimes spends the evening with a bottle of wine, doom-scrolling and stalking her exes. How about a date with ChatGPT instead? We can talk about me! All the next day, I felt an itch to return to the interface and input a “more reckless” personal query, as Monique put it. The interaction also seemed to have a mood-lifting quality that I associate with digital flirtation — a pleasant, nearly subconscious feeling that there’s something good to get back to online.
Falling for the flattery and validation seems like one obvious danger, and it is notably distinct from what it usually feels like to visit a therapist. Another is the extent to which the people I spoke with tended to believe in their chosen AI’s powers of judgment. They were aware, as Dane put it, that they were “speaking to an echo chamber,” but it didn’t seem to sink in. Monique provided me with documentation of her work exploring its decision-making methodology. It was clear that she’d been probing it to reorder its parameters to support her position, and, surprise, it complied. But does this really mean she’s correct and her colleagues are wrong? When I reconfigured my query to represent the conflict from my friend’s point of view, ChatGPT gave me a nearly opposite reading of the situation.
Therapists, naturally, have grave reservations. Sherry Turkle, the founding director of the MIT Initiative on Technology and Self, has written about the ways new technologies have changed our deepest understanding of what it is to be human. Often, she has found, they do so by luring us to view ourselves as more machine-like. Turkle warns that “pretend empathy” from AI may shift how we define empathy itself. In her view, the very things we’re trying to avoid by talking to AI, such as the uncertainty that comes from interacting with a real person, might come to seem like “bugs” and not an essential part of being human.
I felt a version of this in my interactions. In any ordinary conversation with a person, I would be polite and thankful if they were patiently indulging me while I talked about myself, but behaving that way felt silly when talking to ChatGPT; I had to push myself to stop. Why say goodbye when you can just log off? It’s reasonable to expect that this might affect how I treat real people over time. Being courteous might begin to seem fluffy and irrelevant, and the person who responds badly to brusque treatment might seem oversensitive.
Luke Burgis, a founder of the Cluny Institute, an organization that seeks to promote “the flourishing of the human person,” voices similar concerns. “When content is divorced from incarnational realities,” he says, “it doesn’t have any real power to effect change.” Burgis has “pulled back” on using AI even for work-related purposes. “There’s no life in the AI,” he says, “and the more we interact with dead things, the deader we ourselves become.”
Claudia Krugovoy, a psychotherapist in private practice in New York City and Westchester County, offers a down-to-earth practitioner’s viewpoint. She describes herself as “open-minded” about AI therapy’s possible benefits, such as offering affirmation and pointing out behavior patterns, but says, “I don’t believe it replaces the value of a real relationship with a therapist.” Therapists both affirm and challenge, and unlike AI, they read non-verbal cues and offer real empathy.
Krugovoy adds that it can be healing when people learn in therapy that “they can develop a relationship with someone and they won’t be rejected.” Ideally, they take that trust and insight into the real world. When people specifically avoid real people by interacting with an LLM, the experience may fail to “challenge their deeply held, and perhaps false, beliefs that they can’t bring their full self to a relationship.” (This called to mind the sentiment, expressed in various ways by Monique, Dane, and Reagan, that they used AI specifically in order not to annoy or upset friends or partners.) Krugovoy also worries that the habit will ultimately reinforce a feeling of loneliness. “We need community more than ever,” she says.
Regardless of these drawbacks, the benefits to families like Natasha’s are compelling. She and many others also cite the difficulty of finding a good therapist, and the downsides of interactions with bad ones, as reasons to turn to AI. “We’ve tried hundreds of therapists over the years,” she says, “and 90% of them are disappointing.” Every time a person tries out a new therapist, they incur expense and make a painful emotional investment that can itself be traumatic. Reagan adds that “all of these professions have become so robotic anyway. I think they should face competition… Maybe a potential development could be the original industries becoming more human?”
Whatever a rational cost-benefit analysis might conclude, we’re unlikely to perform one in our current political system. In a functioning democracy, we might debate the benefits versus the dangers of unrestricted access to AI, and then vote on it. (Universal mass student cheating, who’s for it? Custom porn chat for kids, yay or nay?) Instead, AI has been unleashed upon us, and we have little recourse against the tech gods who have done so. We’re also on softened ground, as Turkle notes, already shaped by phones and social media and ready for the next step: an entirely “artificial intimacy.” I’m sympathetic to families like Natasha’s, and can imagine cases where a prescription for supervised AI therapy could be an incredible tool. But personally I wish I’d never touched it, and I’m afraid I’ll do so again.
Author: Valerie Stivers