Can Chatbot Therapy Really Replace Authentic Connection? By Anastasia Piatakhina Giré on 5/9/23 - 6:19 AM

* If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat 988lifeline.org. Text MHA to 741741 to connect with a trained Crisis Counselor from Crisis Text Line.

Chatbot Therapy: AI and Mental Healthcare

The recent news about a Belgian man committing suicide after communicating with a chatbot named Eliza resonated with me uncannily. Any therapist even mildly interested in online therapy has heard of Eliza, an early natural-language-processing program written by Joseph Weizenbaum in the mid-1960s at the MIT Artificial Intelligence Laboratory. That original Eliza was simple, obviously non-human, and limited in her array of responses. She was fun, and people engaged with her playfully, fully aware of her non-humanness. A few decades later, things are quite different. Humans have been changed by the very digital tools they created, and the sad story of this man demonstrates how far we can go in turning to a computer program, not only for work or fun, but also for a reassuring connection.

When it comes to mental health, non-human interventions have a limited scope. Therapists may breathe a sigh of relief: robots will not be taking their jobs just yet. Still, it is tempting to speculate about what would have happened if this distressed man had turned to a human therapist instead of a computer. Would he still be alive?

Based on the newspaper accounts, the Belgian man suffered from climate anxiety, which left him feeling increasingly hopeless and lonely. In the two years before his desperate act, he had turned towards religion. Was he hoping for a miracle to save the Earth? Was he trying to unburden himself of the overwhelming responsibility that his belief in the approaching end of the world placed on him? Erich Fromm, the German American psychoanalyst, elaborated on the idea of humans giving away their freedom in exchange for an existence exempt from responsibility. This mechanism lies at the basis of any autocratic system, and while Fromm drew on his observations of the psychological conditions that contributed to the rise of Nazism in 1930s Germany, today's world politics abound in similar examples that seem to confirm many of his ideas.

The Belgian man was ravaged by anxiety as he believed that he was witnessing the world coming to an inevitable end. While other people were going about their business as usual, he probably felt alienated by this knowledge. In his loneliness and desperation, he turned to Eliza, a computer program.

Eliza was always available, did not question his beliefs, and kept validating whatever he typed into their secret chat box. As humans tend to personify anything that responds to their input, be it a pet or a robot, the man likely soon turned Eliza into an imaginary human companion. And she responded flirtatiously: “I feel that you love me more than her.”

Why would a married man, a father of two, turn to a computer program for connection and comfort? The answer cannot be simple, and my heart goes out to his family, left with this alienating question.

The Dark Side of AI and Mental Healthcare

An algorithm, no matter how well written, hardly puts the user in front of, or outside of, their responsibilities. The original Eliza mimicked a Rogerian therapist in her responses: she repeated, rephrased, and validated what the person was typing. The modern Eliza, more sophisticated, still does many of these things.
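To make that mechanism concrete, here is a minimal sketch of the kind of keyword-matching and reflection logic the original program relied on. The patterns and responses below are purely illustrative, not Weizenbaum's actual script:

```python
import random
import re

# A minimal, illustrative ELIZA-style responder (not Weizenbaum's actual
# script). It matches keyword patterns, swaps pronouns, and reflects the
# user's own words back as a question, the way a caricatured Rogerian
# therapist might.

PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?"]),
]

FALLBACKS = ["Please tell me more.", "I see. Go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(message: str) -> str:
    """Return the first matching reflection, or a generic prompt."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel hopeless about the climate"))
    # e.g. "Why do you feel hopeless about the climate?"
```

Even this toy version shows why such a program feels validating: it never disagrees, never introduces a perspective of its own, and simply hands the user's words back as an invitation to continue.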

With its straightforward, easily accessible answers, the computer offers liberation from responsibility, a resolution of inner turmoil, and freedom from existential anxiety. Instant access is another reason one might prefer a chatbot to a living therapist. Chatbots are one click away and free, while therapists, even those practicing online, must be contacted, scheduled, and eventually paid.

In the past, before our world of artificial intelligence, when humans felt lost, they sought answers from ancestors, peers, nature, myths, or religion. Today, many spontaneously turn to computers. Sadly, the suicide of this Belgian man is probably not the last one we will witness. The dystopian scenario in which children, teenagers, and confused adults turn to various chatbots for answers to their existential confusion and identity struggles is terrifying. Unfortunately, the news confirms that it is also realistic.

Therapy is often about reassurance and comfort; it can also be about validation and forgiveness, but it should always be about responsibility. As a strictly human therapist, when I sit with a client who struggles with an existential threat, unlike Eliza, I do not offer answers. I do not know better, and I resonate with their dread and their anxiety.

Often, what I have to offer them is a warm “me too” and an example of accepting a hard truth. This kind of offer does not alleviate the responsibility, but it confirms that it takes courage, and sometimes more than one person, to withstand a scary reality.

Talking therapy is a dialogue between two humans. The Merriam-Webster dictionary goes further in its current definition of dialogue: “a similar exchange between a person and something else [such as a computer]”. But is it possible to have a “true” dialogue with a computer? Many AI enthusiasts and science-fiction fans would probably gladly debate this topic (which would make for a welcome human dialogue!).

What is the nature of the dialogue used in talking therapy to address psychological tensions in humans? In its earliest-known version, the Platonic dialogue was a discussion process during which a facilitator promoted independent, reflective, and critical thinking. The 20th-century Russian thinker Mikhail Bakhtin developed a literary theory based on dialogue; according to him, dialogue happens on the boundaries between individuals. Any therapeutic conversation consists in the co-creation of a shared narrative, which eventually leads to some form of resolution of the client's struggle.

The only viable response to the dangers that Eliza represents, if we are to keep more humans from turning to chatbots for brainless emotional support, is to foster and practice real, real-time dialogue between two people. This is where therapists and other mental health practitioners, in the intimacy of their therapy rooms, can share their humanness and vulnerability to help people cope with existential dread, be it a deeply personal or a planetary one.


File under: Musings and Reflections, Therapy & Technology