Embracing Technology in Counseling: Innovative Tools for Enhanced Client Support

In recent years, technology has become more pervasive, entering many fields, including, for our purposes, counseling. And for better or worse, it has provided innovative tools that enhance the therapeutic experience and offer clients new, convenient avenues to a variety of mental health supports. From telehealth sessions to digital resources and AI-driven interventions, the possibilities are vast and increasingly accessible.

Like what you are reading? For more stimulating stories, thought-provoking articles and new video announcements, sign up for our monthly newsletter.

The Importance of Technology in Counseling

In the wake of the COVID-19 pandemic, telehealth has emerged as a key instrument for the delivery of mental health services. It offers clients flexibility and accessibility, removing barriers such as geographic distance, transportation issues, or scheduling conflicts. Telehealth platforms allow for face-to-face interaction through video calls, creating a space for meaningful therapeutic engagement. This approach has been particularly beneficial for clients who feel more comfortable in their own homes or who may struggle with anxiety related to in-person meetings.

Email and secure messaging platforms provide an invaluable extension of the counseling relationship. Clients can now send a secure message through a client portal. These tools allow clients to reach out between sessions for support, clarification, or to share progress in a timely and secure manner. This continuous line of communication can help maintain therapeutic momentum and provide timely interventions when needed. However, it's crucial to establish clear boundaries and guidelines around digital communication to ensure both client and counselor well-being.

But the really big one, the humdinger, is artificial intelligence (AI), which is emerging as an asset in the therapeutic process. AI-driven tools can assist in creating personalized therapy homework assignments, offering clients tailored exercises that align with their treatment goals. For instance, AI can suggest cognitive-behavioral strategies, mindfulness exercises, or journaling prompts, providing clients with structured ways to work on their issues outside of sessions.

Moreover, AI can serve as a practice partner for clients working on interpersonal skills. For example, a client preparing to engage in conflict resolution with a spouse might use an AI-powered chatbot to role-play scenarios. This practice can help them build confidence and refine their communication strategies before addressing real-life conflicts. While AI cannot replicate the nuances of human interaction, it offers a safe and controlled environment for clients to experiment and learn.

So, yes, the possibilities might just be endless, but I would like to give you one real-life example of a client using technology for her benefit.

Technology as a Lifeline for Bipolar Disorder

One case involves a client of mine diagnosed with Bipolar 1 disorder, who used technology to build a support network. Recognizing the importance of communication and preparedness, she created a detailed Google Drive document outlining her mental illness. The document included descriptions of her symptoms, warning signs of a potential episode, and specific suggestions on how her friends and family could support her during difficult times. Additionally, she listed emergency contacts and step-by-step instructions for what to do in a crisis.

This proactive approach has had a hugely positive impact on her life. By sharing the document with her close friends, she empowered them to better understand her condition and respond effectively when needed. This not only provided her with a sense of security but also strengthened her relationships with her support network. The ease of access and the ability to update the document as her needs evolved demonstrated the power of technology in fostering a supportive and informed community around her.

I found this use of technology by my client helpful for a number of reasons. There’s a level of sober self-awareness that a person needs to have if they struggle with Bipolar 1. The nature of the disorder comes with manic highs where sometimes the trigger of an upswing can be identified or even anticipated. But this is not the case for everyone. Sometimes the upswing comes without warning and takes over someone’s life with destructive consequences. If that is the case for one of your clients, planning and brutal honesty are critical.

I am in the habit of saying to clients, “forewarned is forearmed” (I stole this from one of my graduate school professors). Meaning, I want clients to be honest with themselves about how powerful their symptoms can be, and how they are not always in full possession of their mental faculties during the onset of an episode. Therefore, it is imperative that they plan for those times, focusing primarily on preemptively equipping their support network with information and resources on how to support them when they struggle to care for themselves. This wisdom applies broadly to clients struggling with a variety of mental health disorders, not just Bipolar 1.

***

There is wisdom in knowing your limitations and preparing for difficult moments. Clients who struggle with chronic, persistent, and severe mental health disorders absolutely need a strong support network. I strongly encourage my clients to think about the strength of their support network as a measure of their recovery, maintenance, or long-term wellness plan. And, thanks to technology, fostering and empowering that support network is easier than ever.

Questions for Reflection and Discussion

What challenges have you experienced bringing this level of technology into your practice?

What reservations do you have integrating AI into your clinical practice?

What techniques and methods would you like to learn moving forward?  

Navigating Client Loneliness in the Digital Age with Therapy

I’ve noticed a striking paradox in today’s digitally connected world: loneliness persists despite the abundance of online connections. Many of my clients grapple with profound feelings of isolation, shedding light on the intricate relationship between technology and loneliness. As digital interactions increasingly shape our social landscape, it has become important for me to delve into the possible underlying connection between loneliness and digital habits of my clients. By examining this paradox, I have been better able to support them in navigating the challenges of modern connectivity while fostering their interpersonal connections and well-being.


Nurturing Non-Digital Relationships through Therapy

Social media and messaging platforms often create a superficial sense of connectivity, where likes and comments substitute for meaningful face-to-face interactions. Moreover, the pressure to maintain a curated online presence can amplify feelings of inadequacy and isolation. Excessive screen time and reliance on digital communication can hinder the development of deep, authentic relationships, ultimately contributing to a sense of loneliness and isolation. Understanding these detrimental effects of hyper-connectivity on social well-being has been crucial for me as a clinician working with clients who have been impacted in this way.

I’ve come to realize that while virtual communities offer a semblance of connection and support, they often pale in comparison to the richness of genuine, in-person relationships. Online interactions lack the depth and intimacy of face-to-face encounters, leading to a sense of emotional emptiness. Additionally, the curated nature of online personas can create a distorted perception of others, fostering feelings of inadequacy and isolation. Excessive reliance on virtual interactions can thus contribute to anxiety and depression.

In my clinical work, I’ve witnessed the pervasive influence of the fear of missing out (FOMO) on my clients. This hyperconnected lifestyle often leaves them feeling empty and disconnected from the world around them. However, amidst the frenzy of digital connectivity, the concept of the joy of missing out (JOMO) offers a refreshing perspective. By consciously choosing to disconnect from digital distractions, my clients can potentially create spaces for meaningful real-life and interpersonal experiences. I have strived to promote awareness of these concepts and to empower my clients to prioritize meaningful off-screen/offline connections.

Case Applications

I recall working with Sarah, a 32-year-old marketing executive, who presented with profound loneliness despite her extensive online network. Spending hours each day immersed in social media and messaging apps, Sarah sought validation through digital interactions. However, despite the illusion of constant connection, she felt increasingly isolated from genuine human interaction. Through therapy, I remember supporting Sarah as she acknowledged the detrimental effects of hyper-connectivity on her social well-being.

Sarah’s treatment plan focused on dismantling her curated online presence, moderating her excessive screen time, and reducing her reliance on digital communication. Together, we explored alternative ways for her to nurture meaningful relationships offline. I emphasized the importance of face-to-face encounters and encouraged Sarah to connect with a limited group of friends in real-life settings.

In a similar manner, I supported Michael, a 28-year-old Latino construction worker, who experienced feelings of emptiness and isolation despite his active participation in online communities. Raised in a tight-knit community, Michael valued deep, meaningful relationships rooted in face-to-face interactions. However, his demanding work schedule limited his social opportunities, leading him to seek connection through virtual means. In therapy, I recall reflecting on Michael’s cultural values and exploring strategies for fostering authentic relationships offline.

Recognizing the importance of developing culturally relevant social skills to navigate interpersonal interactions, I suggested incorporating extended family members into Michael’s treatment plan. We discussed the idea of using role-playing exercises with his relatives to simulate real-life scenarios and practice social interactions within a familiar cultural context. By engaging with his extended family in these role-playing sessions, Michael gained confidence in initiating conversations and building rapport with others while staying true to his cultural heritage. These sessions provided Michael with valuable opportunities to develop his social skills in culturally relevant contexts, ultimately empowering him to forge deeper connections within his community.  

***

Technology presents a double-edged sword in the fight against loneliness. While it offers innovative solutions for connection, it also poses challenges, contributing to the erosion of traditional social structures. By promoting digital interventions that prioritize authentic connection and well-being, I hope fellow clinicians can empower their clients to navigate the complexities of loneliness in this complex digital age.

Questions for Reflection and Discussion

What is your opinion on the author’s view of technology and loneliness?

What has your clinical experience been with clients who have chosen digital over live connection?

In what ways does the author’s position resonate with you personally?  

Avoiding the Adverse Impact of Electronic Communication in Couples Therapy

Although it is nearly impossible to break communication habits in the Internet age, I have had numerous therapeutic instances where clients only dig themselves deeper relational holes by attempting to resolve interpersonal issues through texting and messaging their partners. The nuances of tone, emotional body cues, facial expressions, and the imperfections of language that are a normal part of face-to-face interaction are lost through these digital mediums. The result is often an exacerbation of ongoing communication difficulties. In my informed consent process at the outset of therapy, I make my position on texting and messaging outside of the therapy hour very clear. I include these provisions because clients frequently do this, for reasons that will become clear in the cases below, and because SMS creates the expectation of an instant response, which I am only prepared to provide in an emergency. I also encourage clients to deal with emotional issues with each other in person, or at least by phone. In this way, the nuances of non-verbal communication and precise language can be more readily perceived, clarified, and addressed.


Case Examples of Electronic Communication Gone Wrong

Brian and Samantha, a couple in their forties who had lived together for two years, presented the problem of frequent arguments over both trivial and deeper issues. These tensions regularly escalated into withdrawal, name-calling, and impulsive criticism, with old resentments resurfacing. I worked with the couple on the basics of communication, problem-solving, and behavior exchange, and explained the role of lingering resentments. They did well with practicing and understanding these issues, but resentments still lingered, and comments flared up.

After six sessions of rocky, frustrating, and ungratifying conjoint therapy, I received copies of text exchanges between them. Each sent me copies without their partner’s knowledge, in hopes of proving to me the other’s abusiveness, ignoring my informed consent provision. In one thread, Brian apologized for commenting at dinner that a glass was dirty, saying that he was merely making an observation, not a criticism. Samantha replied, “If you don’t appreciate all I do for you, when you never do anything around the house, you can do it all yourself!” Brian then attempted to clarify his intent, to no avail.

I replied to Brian by text, indicating that my informed consent stated that I do not use the internet for emotional content such as this, and we could discuss it further in our next conjoint session. In their next “post-text debacle” session, Brian did not bring it up out of embarrassment. They continued for six sessions, working on the resentments that surfaced and terminated with improved overall skills; I never found out whether they were able to resolve past resentments.

In another case, I worked with a disgruntled individual client, Belinda, who was in a severely dysfunctional marriage with her wife, Lucy. Her goal was to obtain recommendations for dealing with the anger she felt for several reasons. I explored them cognitively and emotionally, having her align her values with her behaviors. Belinda sent me pages of exchanges going back eight years in which Lucy had historically berated her for everything she resented. Seemingly, Belinda wanted me to agree that she had indeed been emotionally abused.

When Belinda directly expressed outrage at home, Lucy said she “didn’t really mean all that,” to which Belinda replied that she could not take it back and that they should consider divorcing. In the next session, we explored her situation, and I told her that moving forward I could not take an additional hour to go over all the comments her wife had made in those electronic exchanges, but could instead help her consider some resolution of the contempt and disconnect she felt. I advised that they see a couples therapist, either me for a one- to two-session consultation or another therapist. She seemed to have a better understanding of her resentment and how to control it.

***

In looking back on these two cases, I understand the widespread use of texting and messaging in today’s electronic world. Although I discourage clients from using it to discuss emotional issues, I cannot prevent them from doing it, either interpersonally or with me. I believe it’s important for therapists to set an example — and boundaries — by not using electronic media for intimate communication.

Ethical & Legal Considerations in Using ChatGPT as an Aid for Clinical Diagnosis

As psychologists, our mission often treads a tightrope of diagnostic precision and ethical consideration. The emergence of artificial intelligence (AI), particularly language models like ChatGPT, introduces an exciting yet complex layer to this balancing act. Such innovations have been lauded for their diagnostic acumen but also raise questions about legal implications and ethical stewardship. For this reason, I think it is important to explore the current state of AI applications in behavioral health, focusing on ChatGPT’s diagnostic capabilities as an aid to rendering a clinical diagnosis. In doing so, I hope to engage you in a thoughtful discourse about the ethical and legal dimensions of embracing AI in psychological practice.

Recent Literature on ChatGPT and Diagnostic Accuracy

A foundational study led by Mass General Brigham revealed that ChatGPT displayed a diagnostic accuracy of 72% across multiple medical specialties, even likening its performance to that of recent medical school graduates. Similarly, researchers at Queen Mary Hospital’s Division of Geriatrics investigated the efficacy of Generative Pre-trained Transformer 4 (GPT-4) in diagnosing older patients, citing an accuracy of 66.7% for primary diagnoses. These findings signify the AI’s prowess but also underscore the necessity for additional benchmarking and regulatory oversight.

In the context of behavioral health, Hilty and colleagues employed natural language processing to quantify linguistic features associated with mental and neurocognitive disorders. They advocated for using large datasets to enhance diagnostic objectivity, a mission that aligns well with the capabilities of ChatGPT. Salih Tutun and colleagues discussed ethical frameworks that must be established to guide AI’s role in mental health, emphasizing the need for innovative yet ethically responsible diagnostic tools designed in line with guidelines for integrating ethical approaches into AI.

A Foray into ChatGPT’s Diagnostic Utility in Behavioral Health

To demonstrate AI’s capacity to enrich clinical thinking, I asked ChatGPT-4 to generate potential Diagnostic and Statistical Manual of Mental Disorders (DSM-5) diagnoses based on a specific behavior pattern. The result is offered below.

Prompt #1: What are possible DSM diagnoses for someone who is easily wounded, drinks alcohol to the point of being unable to show up for work, gets angry quickly, has poor boundaries, is marginally employed, is in debt, and is unsuccessful in romantic relationships?

The 1st ChatGPT Response

The following response (in italics) was obtained verbatim. As you read the response, consider whether it would be helpful, legal, and ethical for you to be able to enter a behavioral prompt (question posed to ChatGPT) to get this type of response:

It appears that the individual in question may be exhibiting several symptoms that could potentially align with various Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnoses. It’s important to note that making an accurate diagnosis requires a comprehensive assessment by a qualified mental health professional. However, based on the information provided, several potential DSM diagnoses could be considered:

Intermittent Explosive Disorder (IED). Intermittent Explosive Disorder involves recurrent outbursts of anger that are disproportionate to the situation and often result in aggressive behavior or destruction of property. Individuals with IED struggle to control their anger impulses, and this may negatively impact their relationships and work life.

Borderline Personality Disorder (BPD). BPD is characterized by emotional dysregulation, unstable relationships, poor self-image, and impulsive behaviors. Individuals with BPD may experience intense anger and have difficulty managing their emotions. Poor boundaries and struggles in romantic relationships align with BPD features.

Alcohol Use Disorder (AUD). Excessive alcohol consumption, to the point of impairing daily functioning and causing absence from work, could indicate an Alcohol Use Disorder. This diagnosis would be based on criteria such as loss of control over drinking, unsuccessful attempts to cut down, and continued use despite negative consequences.

Depressive Disorder. The individual’s difficulty in managing anger, poor boundaries, and struggles in relationships could be associated with a Depressive Disorder. Depression can manifest as irritability and impaired social functioning, impacting both work and personal life.

Generalized Anxiety Disorder (GAD). Poor boundaries, being easily wounded, and difficulties in relationships might also be indicative of Generalized Anxiety Disorder. Chronic worry, heightened sensitivity to criticism, and interpersonal difficulties can be features of GAD.

Unspecified Personality Disorder. The individual’s challenges in maintaining boundaries, unstable relationships, and difficulties in employment could suggest the presence of a personality disorder that does not neatly fit within one specific category.

It’s crucial to emphasize that these are preliminary considerations and not a substitute for professional evaluation. A comprehensive assessment by a licensed mental health practitioner is essential for accurate diagnosis and appropriate treatment. The provided descriptions should not replace personalized clinical advice, and any diagnostic conclusions should be made by a qualified professional after a thorough evaluation.

Prompt #2: Which characteristics would need to be evaluated to make a differential diagnosis between the above possible diagnoses?

The 2nd ChatGPT Response

To differentiate between the potential diagnoses mentioned earlier (Intermittent Explosive Disorder, Borderline Personality Disorder, Alcohol Use Disorder, Depressive Disorder, Generalized Anxiety Disorder, and Unspecified Personality Disorder), a thorough assessment of various characteristics and factors is essential. It’s important to consider a combination of clinical interviews, self-report assessments, collateral information from family or close contacts, and possibly psychological testing. Here are some key characteristics that need to be evaluated to make a more accurate differential diagnosis:

Frequency and Intensity of Anger Episodes. For Intermittent Explosive Disorder, the frequency, intensity, and context of anger outbursts need to be evaluated. Determining whether the anger reactions are disproportionate and recurrent is crucial.

Emotional Instability and Impulsivity. Borderline Personality Disorder is characterized by emotional instability, impulsive behaviors, and unstable relationships. Assessing the pattern of emotional shifts, impulsive actions, and unstable sense of self is important.

Alcohol Consumption Patterns. For Alcohol Use Disorder, an assessment of the individual’s alcohol consumption patterns, including frequency, quantity, and impact on functioning, is necessary. Evaluation of withdrawal symptoms, unsuccessful attempts to quit, and continued use despite negative consequences is important.

Presence of Depressive Symptoms. Depressive Disorder involves a range of symptoms beyond irritability, such as low mood, changes in appetite, sleep disturbances, and feelings of worthlessness. A thorough assessment of depressive symptoms is needed to consider this diagnosis.

Excessive Worry and Anxiety. In Generalized Anxiety Disorder, persistent and excessive worry about a variety of topics is a hallmark. Evaluating the extent of worry, physical symptoms of anxiety, and impact on daily life is crucial.

Impaired Interpersonal Functioning and Boundaries. Assessing the individual’s ability to establish and maintain healthy boundaries, as well as the quality of their relationships, can help differentiate between various disorders. An examination of interpersonal dynamics is particularly relevant for personality disorders.

Duration and Chronicity. Evaluating the duration and chronicity of symptoms is important. Some disorders, like personality disorders, involve longstanding patterns of behavior, while others might have more episodic presentations.

Family and Developmental History. Gathering information about the individual’s developmental history, family dynamics, early life experiences, and any history of trauma or adverse events can provide insights into potential underlying factors.

Response to Treatment. Observing how the individual responds to interventions, therapy, or medication can help in refining the diagnosis over time. Different disorders may respond differently to specific treatments.

Rule Out Medical and Substance-Related Factors. It’s important to acknowledge that medical and physical factors can indeed contribute to or mimic psychological symptoms. When evaluating an individual with behavioral and emotional concerns, considering potential medical conditions is a crucial aspect of a comprehensive assessment.

Need for Medical Evaluation. Given the individual’s symptoms, it’s advisable to conduct a thorough medical evaluation to rule out any underlying medical conditions that could be contributing to the symptoms. Medical conditions such as hormonal imbalances, thyroid issues, neurological disorders, or even medication side effects can impact mood and behavior.

In a comprehensive assessment, both psychological and medical factors should be taken into account. If an individual is experiencing significant psychological distress, it’s recommended to consult both a mental health professional and a medical doctor to ensure a comprehensive understanding of the situation and to consider all potential contributing factors. Remember that an accurate diagnosis often requires a multidimensional approach, and it’s not uncommon for individuals to have overlapping symptoms or co-occurring disorders. A licensed mental health professional with experience in assessment and diagnosis should guide the process, integrating various sources of information to arrive at the most appropriate diagnosis.

Ethical and Legal Caveats in Using ChatGPT

This demonstration is but one of many examples where ChatGPT can be used to broaden the clinician’s thinking about possible diagnoses to consider, as well as different avenues to explore to gather additional relevant information.
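For clinicians curious about the mechanics behind such a demonstration, here is a minimal sketch of how a behavioral prompt like the one above might be assembled and sent programmatically. The system-message guardrail text, the helper function name, and the use of the OpenAI Python client are all illustrative assumptions, not part of the original demonstration, and the same caveats about professional evaluation apply to any output.

```python
import json
import os

# Illustrative guardrail reminding the model of its educational-aid role.
SYSTEM_GUARDRAIL = (
    "You are an educational aid for licensed clinicians. Offer possible DSM-5 "
    "diagnoses to consider, and always state that only a qualified mental "
    "health professional can make an actual diagnosis."
)

def build_diagnostic_messages(behavior_description: str) -> list:
    """Assemble a chat-style message list for a behavioral prompt."""
    return [
        {"role": "system", "content": SYSTEM_GUARDRAIL},
        {"role": "user", "content": (
            "What are possible DSM diagnoses for someone who "
            + behavior_description + "?"
        )},
    ]

messages = build_diagnostic_messages(
    "is easily wounded, drinks alcohol to the point of being unable to show "
    "up for work, gets angry quickly, has poor boundaries, is marginally "
    "employed, is in debt, and is unsuccessful in romantic relationships"
)
print(json.dumps(messages, indent=2))

# The actual API call is only attempted when a key is configured; never send
# protected health information in such a prompt.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    print(reply.choices[0].message.content)
```

Note that the behavior description here is deliberately generic; as discussed below, protected health information and identifying details should never be entered into such a tool.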

Despite its promise, ChatGPT is not without challenges. The AI model can inadvertently exhibit social prejudice and toxicity, thereby warranting rigorous oversight from clinicians. Furthermore, the legal ambiguities surrounding liability remain unresolved, especially when ChatGPT diagnoses are used as standalone evaluations. Multicultural and diversity issues, and protected health information were intentionally excluded from this demonstration.

ChatGPT and similar AI models serve best as adjuncts rather than replacements for human expertise. The current state of these tools does not support their verbatim use in clinical settings. If you’re contemplating the incorporation of ChatGPT into your practice, seek advice from your malpractice carrier’s attorney, your state or national psychological association’s legal office, or APA Division 5 (Quantitative and Qualitative Methods). If these groups don’t offer guidance yet, get involved and help start that work. Depending on your circumstances, independent legal counsel may also be advisable.

Telehealth.org also offers Continuing Medical Education (CME) and Continuing Education (CE) courses to guide you in this evolving terrain. Ensuring responsible utilization of ChatGPT in psychological diagnostics requires a confluence of caution, human judgment, and robust regulatory frameworks. As we step into the future of AI-augmented healthcare, let us tread with both anticipation and due diligence.

Disclaimer: This article is written for educational purposes only and should not be construed as legal or clinical advice. The information contained in this article was sourced from a telehealth.org blog post titled “ChatGPT Diagnosis: Walking the Tightrope of Legality and Ethics” and is reprinted with the author’s permission.

Containing The Adult ADHD Fad — With a Rejoinder from ChatGPT

A sad and fairly ubiquitous aspect of human fallibility is that we are extremely suggestible suckers for fashion. Psychiatry is no exception — its history is littered with recurrent fads. The specific details vary, but the pattern is always pretty much the same.

Dangerous Psychiatric Fads

  1. Charismatic promoters identify a new and plausible diagnosis; do some second-rate research suggesting it “exists,” write a lot of papers, and give a lot of workshops and conferences.
  2. Media picks up the story, sensationalizes the “new findings,” and transforms lousy evidence into a vital new discovery.
  3. Whenever a potential pill treatment is available, Pharma marketing goes into an extravagantly financed, high-gear marketing mode — enthusiastically hyping the diagnosis and the medication in every possible venue.
  4. Benefits of the new diagnosis and treatment are widely publicized; risks and harmful unintended consequences are neglected.
  5. Primary care docs are especially targeted because they do most of the prescribing of psych meds, have the least training, and aren’t given adequate time to do careful evaluations.
  6. Patients are seduced by drug ads to “ask your doctor” about the diagnosis and the pill to treat it. “Asking your doctor” usually results in getting the medication.
  7. These familiar patterns of fad development are now greatly speeded up and given immediate worldwide distribution via social networking on the internet.
  8. Diagnostic rates rise exponentially in a very short time.

Some dangerous fads in medicine have lasted thousands of years (e.g., bleeding, purging, popular meds that turned out to be poisons). But most fads come and go fairly quickly once the risks of treatment become obvious and the over-promised benefits don’t pan out. As the excitement surrounding one fad wears off, suggestible clinicians and patients become enthusiastic about, and eagerly await, the next fad.

Cold Hard Psychiatric Facts about ADHD

Why the Explosion of ADHD Rates in Kids?

We are in the midst of a 30-year fad of excessive diagnosis of childhood ADHD. Rates have tripled to about 10% — partly because DSM definitions have gotten looser, mostly because of aggressive Pharma marketing to MDs, parents, and teachers.

How Do We Know That Rates of ADHD Are Wildly Exaggerated in Kids?

Studies done in a dozen countries on millions of kids have returned a remarkably consistent and disturbing result. All over the world, the best predictor of ADHD is the child’s birth date: the youngest kid in the class is almost twice as likely to get an ADHD diagnosis as the oldest. There is only one possible interpretation of this powerfully robust finding: ADHD is being wildly overdiagnosed — we have turned simple immaturity into a mental disorder.

Does ADHD Ever Occur in Adults?

Yes, of course — because many kids don’t outgrow their ADHD symptoms once they become adults. The rate of persistent ADHD into adulthood varies widely across longitudinal studies, but probably about half the kids who have a childhood ADHD diagnosis continue to have clinically significant symptoms in adulthood. When the symptoms do persist, they usually become milder with age (especially true of hyperactivity).

Can ADHD Ever Have Its Initial Onset in Adulthood?

The simple and very important answer is an emphatic NO! By definition, ADHD has to have its onset in childhood. This requirement is absolutely necessary in order to distinguish true ADHD from all the many other causes of inattention and hyperactivity in adulthood.

Why Is ADHD So Often Overdiagnosed in Adults?
  1. Almost all mental disorders and almost all substance addictions can perfectly mimic ADHD since they can cause its two classic symptoms — hyperactivity and trouble focusing attention.
  2. Real or imagined attention problems are a very common complaint among perfectly normal people.
  3. Getting an ADHD diagnosis is a gateway to legal speed — desired for performance enhancement, all-nighters for school tests or work assignments, recreational purposes, or for sale into the extensive secondary ADHD pill market.
  4. Careless diagnosis and prescribing by MDs.
  5. An inevitable consequence of overdiagnosing ADHD in kids is overdiagnosing ADHD in adults.
  6. Promotion via drug companies and social networking.
What Are Current Reported Rates of ADHD in Adults?

In general population surveys, the prevalence of current adult ADHD is reported to be 4.4% (males 5.4%; females 3.2%). Estimated lifetime prevalence of ADHD in U.S. adults aged 18 to 44 years is 8.1%. These numbers are absurdly high — reflecting the fatal flaws in the survey method. These rates are determined by casual telephone interviews done by non-clinicians. There is no careful differential diagnosis to rule out the many causes of inattention and hyperactivity that should not be labeled ADHD, and no check that symptoms are of clinically significant severity and duration.

What Is the Real Rate of ADHD in Kids?

The best guide to the “real” rate of ADHD is the view of Keith Conners, considered to be the father of the ADHD diagnosis. He did the first studies on ADHD (even before there was an official DSM diagnosis called ADHD), and continued throughout his long career, until his death a few years ago, to do the best studies and have the most influence on the field. Conners believed the most appropriate rate of childhood ADHD would be about 2-3%, and about half that number in adults — just a small fraction of the inflated rates now reported.

Risks of Overdiagnosing ADHD in Adults:
  1. Meds used for ADHD are usually quite harmful if the person’s symptoms are due to another psychiatric disorder that has been missed — especially bipolar disorder, depression, schizophrenia, eating disorders, or anxiety disorder.
  2. Overdiagnosis of ADHD results in over-medication with drugs that cause harmful side effects and can lead to or worsen addiction.
  3. There is now a huge secondary market for ADHD meds, especially on college campuses.
  4. There is also a nationwide shortage of ADHD meds for patients who really need them — because the meds are so often prescribed for those who don’t need them, or diverted to the illegal market.
Proper Differential Diagnosis of ADHD in Adults

Before ever making a diagnosis of adult ADHD, there must be a careful evaluation that:
  1. Rules out the possibility that symptoms are caused by another psychiatric disorder, substance use, a drug side effect, head trauma, or another neurological or medical illness.
  2. Determines that the symptoms had their initial onset in childhood — adult onset of symptoms rules out ADHD.
  3. Ensures that symptoms are severe and persistent enough to cause clinically significant distress and impairment. Most people worry about their ability to focus — this is not sufficient evidence to diagnose a mental disorder. Many people have transient episodes of inattention or hyperactivity due to everyday stress — this is also not adult ADHD.
  4. Considers the risk of malingering to get legal speed for performance enhancement, recreation, or resale.

Bottom Line

Most of what looks like adult ADHD is not adult ADHD. Distractibility is ubiquitous in the general population — most of it is normal; sometimes it’s caused by another psychiatric or medical problem or by substance use. People who didn’t have ADHD symptoms in childhood should not be diagnosed with ADHD as adults.

ADHD drugs are unsafe except when carefully used for accurately diagnosed ADHD. They are currently in short supply for appropriate use because they are so often used inappropriately. ADHD drugs are useful when appropriately prescribed for the few, but harmful when carelessly prescribed for the many.

It is past time to stop the adult ADHD fad before it gains even more traction. The harm it causes is great, both for the individual and for society. Easy access to legal “speed” has created a large illegal secondary market of diverted pills. ADHD drugs have become the campus recreational drug of choice at parties and the performance-enhancement drug of choice for all-nighters during finals week. Legal speed can cause many medical and psychiatric adverse effects, and emergency room visits for complications are skyrocketing.

The Drug Enforcement Administration and the FDA are now trying to contain the epidemic — but their efforts are too little, too late. The adult ADHD fad will be stopped only if clinicians and patients fight against its seduction and insist on more careful diagnosis and cautious treatment.

ChatGPT Is Asked the Same Question

[Editor’s Note] Once again, Dr. Frances’ granddaughter posed the same question to ChatGPT with the prompt: “What are the dangers of diagnosing ADHD in adults and how best to avoid them (in 500 words)?”

“Diagnosing Attention-Deficit/Hyperactivity Disorder (ADHD) in adults comes with its own set of challenges and potential dangers that need to be carefully considered. While accurate diagnosis can provide individuals with the appropriate interventions and support, the risks associated with misdiagnosis or overdiagnosis are significant. To avoid these dangers, a comprehensive and cautious approach is crucial.

Dangers of Diagnosing ADHD in Adults
  1. Misdiagnosis: ADHD symptoms can overlap with those of other mental health conditions such as anxiety, depression, or even medical conditions like thyroid disorders. Misdiagnosis can lead to inappropriate treatments and exacerbate underlying issues.
  2. Stigmatization: An incorrect diagnosis can lead to stigmatization, where individuals might feel labeled or misunderstood. This can negatively impact self-esteem and overall mental well-being.
  3. Overmedication: Misdiagnosis may result in the unnecessary prescription of stimulant medications, which are commonly used to treat ADHD. These medications can have side effects and may be harmful if taken without a valid reason.
  4. Self-Fulfilling Prophecy: Once diagnosed, individuals might attribute all their challenges solely to ADHD, overlooking other contributing factors and hindering personal growth and development.
  5. Masking Other Concerns: ADHD symptoms might overshadow coexisting conditions like learning disabilities, which require specific interventions. Treating only ADHD could leave these underlying issues unaddressed.
How to Avoid These Dangers
  1. Comprehensive Assessment: A thorough evaluation by a qualified mental health professional is essential. This should include a clinical interview, medical history review, and collateral information from family members or close associates to ensure a comprehensive understanding of the individual’s history and current challenges.
  2. Consider Differential Diagnosis: Since ADHD symptoms can mimic other conditions, it’s crucial to explore the possibility of comorbidities. This might involve additional assessments, screenings, or consultation with specialists.
  3. Longitudinal Perspective: Assessing symptoms over time helps differentiate between chronic ADHD-related impairments and situational factors that might be causing temporary difficulties.
  4. Multimodal Assessment: Utilize a combination of assessment tools, such as self-report questionnaires, cognitive tests, and behavioral observations, to gain a holistic view of an individual’s functioning.
  5. Functional Impairment: Focus on the degree to which symptoms interfere with daily functioning in various domains, including work, relationships, and personal responsibilities.
  6. Psychoeducation: Educate individuals about the diagnostic process, potential pitfalls, and the multifaceted nature of their difficulties to prevent over-identification with ADHD symptoms.
  7. Second Opinions: Encourage seeking a second opinion, especially if the diagnosis or recommended treatment plan feels uncertain or uncomfortable.
  8. Collaborative Decision-Making: Involve the individual in the diagnostic process and treatment planning. Discuss the potential benefits and drawbacks of medication, therapy, and lifestyle modifications to make informed choices.
  9. Holistic Approach: Address physical health, sleep patterns, stress management, and other factors that can impact attention and focus. Sometimes, lifestyle changes alone can lead to significant improvements.
  10. Regular Follow-ups: Periodically reevaluate the diagnosis and treatment plan to ensure they remain appropriate as the individual’s circumstances evolve.
In conclusion, diagnosing ADHD in adults requires a cautious and comprehensive approach to avoid the potential dangers associated with misdiagnosis or overdiagnosis. A multidimensional evaluation, consideration of differential diagnoses, and involving the individual in the diagnostic process are key strategies to ensure accurate diagnosis and promote the well-being of those seeking assistance for their difficulties.”

***

Afterword from Dr. Frances

The response from ChatGPT is remarkably good, especially considering that it took one second to produce while my piece took 90 minutes to write. But ChatGPT did leave out two critical issues:
  1. The requirement that ADHD cannot, by definition, be diagnosed in adults unless symptoms were present in childhood and why this is so critical.
  2. The societal impact of the adult ADHD fad.
All in all, I think my piece is better than Chat’s – but not by much!

The Costs and Benefits of Virtual Reality in Psychotherapy

Learning to Weather a Patient’s Emotional Storms

Edith typically experienced a fire-hose intensity in the flow of her emotions. She would dye her hair in bright colors, and these colors changed almost as often as her moods. She could be washed over by waves of anger or sadness. She frequently stormed in rage at her caregivers in the nursing home, or on the phone with her son. During psychotherapy conversations, her anger often dissolved into tears. The symptomatic features of her Bipolar I disorder were like a flashing neon sign on her forehead.

Part of the therapeutic work with Edith’s case was to educate the staff that her dramatic and unpredictable swings of mood from manic to depressed and her sometimes rapid speaking were elements of her illness. Edith would also verbalize sharply critical comments to her caregivers: “You don’t even care, you don’t listen, what do you get paid for, anyway!” I thought it was important to help the staff to appreciate how reacting with strong personal emotions to her symptomatic behaviors might provoke even greater instability. They were taught to maintain a sense of role boundaries, avoid personalizing her actions or comments, establish clear expectations for daily care, set limits on unreasonable or unacceptable behaviors, and see how those steps would add to the effectiveness of their care and to their professional satisfaction.

The Pain of Virtual Rejection

One morning upon greeting Edith, I was assailed with, “My fiancé dumped me!” As I sat down, I thought, “what fiancé?” She’d not been in a relationship. Edith had a tablet computer and spent many hours playing a virtual reality game. Her avatar was a slim and pretty lady who owned a florist shop and was a personal fitness trainer. As she was interacting that morning with the male fiancé avatar, another female avatar approached and claimed that he was her boyfriend, and the male character “virtually” walked away with the other female character.

In response to our conversation, Edith was able to acknowledge that she had designed her avatar as an entertainment, and a partial fulfillment of things desired yet not available in her present life. But she was unable, at that moment, to realistically distinguish between herself and her fantasy avatar, or to distinguish her emotions from those she projected onto the avatar. The stress of the situation triggered a psychotic episode for Edith. She experienced a loss of ego-boundaries.

I consulted with team members at the facility about ways to manage Edith’s care and treatment. Psychiatric hospital care was not indicated because she made no threats to herself or to others, and an involuntary admission might add to her sense of rejection from persons and supports available to her at the home. Safety checks every 15 minutes were unobtrusively put in place, and staff would make frequent, brief contacts to help her regain her sense of self and her composure. Privately, I wondered if one day I might need to develop a therapy avatar and enter virtual reality settings to conduct therapy interventions.

Four years later, I worked again with Edith at a different nursing home. She had been living in an apartment and was helped by home-based care providers. She’d become ill with an infection, went to the hospital, then to the nursing home for further care prior to discharge home. Since I last worked with her, Edith had experienced significant diminishments in her eyesight, her memory, her mobility, and her overall functioning. She did not remember having worked with me in the past. As we sat for therapy one morning, Edith said that she’d been trying to compose and memorize a poem, because she could not write due to visual loss.

She dictated the poem, and I wrote it down so she could bring it home with her. In her poem, Edith was confronting the many losses she’d experienced, and additional ones she anticipated. Her poetic reflections were sorrowful yet realistic and reasonable given her situation. She showed no indications of psychotic symptoms and was no longer using a virtual reality game. “I know it’s getting darker for me,” she said, “But I just have to face it.” Her medical condition had worsened over time, yet she showed an improved psychological ability to deal with her circumstances. Edith was also more comfortably willing to rely on people in her life who provided helpful services.

***

Postscript: Virtual reality systems are playing an increasingly prominent role in entertainment, education, business, and in the treatment of mental illnesses. Will the use of virtual reality be a positive influence in all cases, or might it be risky or possibly harmful for some? Might virtual reality headsets be a sort of psychosis-induction device for some vulnerable people? How will we as psychotherapists better understand how risks and benefits of virtual reality might vary for different people? Time, experience, and research findings will help guide our future steps. But we can at least proceed with a sense of caution, as well as curiosity about potential new and helpful approaches.

Questions for Thought and Discussion

How might you have clinically addressed the situation with Edith and her virtual fiancé?

What are your thoughts about the use of AI in psychotherapy?

Would you like to gain familiarity with AI in therapy? If so, what kind and for what purpose(s)?

Will Computers Replace Psychotherapists?

[Editor’s Note: Soon after this essay was solicited from Allen Frances, and following the suggestion of his granddaughter, ChatGPT was asked whether it could replace psychotherapists. Its lightning-fast response appears in the Postscript.]

People ask me whether Artificial Intelligence (AI) will ever replace human psychotherapists. *

With great sadness, I answer, “not all at once and not all therapists, but probably most therapists — and much sooner than we think.” This raises a further question several young people have asked me: “Does this mean it's a mistake for me to train to be a psychotherapist?”

To this, I answer a resounding “No!” First, because there is no place to hide. AI threatens every profession, in most cases sooner and more thoroughly than psychotherapy. Secondly, because, as long as there are people, a substantial minority of us will always much prefer to share our troubles with another person rather than an impersonal machine. Thank goodness!

The Rise of the Machine

Why my pessimism? A combination of the history of computers in psychotherapy, the current mind-boggling exponential growth of computer intelligence, and the fact that knowledgeable extrapolations point toward an AI singularity in the not-so-distant future. Past, present, and future all point to a rapid decline of humans in delivering psychotherapy and perhaps even in surviving as a species.

I first became frightened of the power of computer psychotherapy in the mid-1960s. Joseph Weizenbaum, a pioneering MIT computer and cognitive scientist, had programmed ELIZA to respond to people in a way similar to a nondirective Rogerian therapist. His interest was theoretical, not clinical — trying to understand the nature of human/computer interactions and what would be required for a computer to pass the Turing Test.

Weizenbaum was amazed and terrified by his results. People loved to converse with ELIZA, found it (her) to be remarkably empathic and helpful, and attributed human feelings to what was in fact just a very primitive and nonspecific piece of programming.
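What made ELIZA’s apparent empathy so unsettling was how little machinery lay behind it: keyword patterns, pronoun reflection, and canned response templates. A minimal sketch of that general technique — purely illustrative; the patterns and rules below are invented for this example, not Weizenbaum’s actual 1966 script:

```python
import re

# A handful of first-person -> second-person swaps so the reply
# mirrors the speaker's own words back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a keyword pattern with a canned template; {0} is
# filled with the reflected tail of the user's sentence.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word; unknown words pass through unchanged.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    # Nondirective fallback when no keyword matches.
    return "Please go on."
```

For example, `respond("I feel lost and alone")` yields “Why do you feel lost and alone?” — three rules and a generic fallback are enough to sustain a surprisingly convincing nondirective exchange, which is precisely what alarmed Weizenbaum.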

Weizenbaum was mortified — a Dr. Frankenstein fearing the grave dangers posed by the monster he had inadvertently created. He went to great pains to counter exaggerated and anthropomorphic interpretations of ELIZA's seeming prowess and explicitly discouraged any use of ELIZA in clinical practice.

But the cat was out of the bag. Mental health apps have increasingly flooded the market — more than 10,000 are now available, for every conceivable purpose and of widely varying quality.

In recent years, psych apps began including references to “machine learning” as part of their advertising campaigns — but this was an overpromise that was always underdelivered. Real machine learning in psychotherapy would require the program to learn from the interactions with each individual patient how best to interact with that patient. That is precisely what good human therapists can do and what computer therapists cannot yet do.

But never underestimate the computers’ ability to catch up — and catch up at lightning speed! Who ever thought computers would be so great at chess, facial recognition, and writing essays, music, poems, and legal briefs; at painting award-winning pictures; at winning simulated dogfights against experienced pilots; at medical diagnosis, outperforming the best doctors; and at programming, outperforming the best human programmers? The list goes on!

The Pros and Cons of Computerized Therapy

Computer therapists will be remarkably cheap, readily accessible everywhere, available 24/7 with no waiting lists, skilled in all forms of therapy techniques, and experienced in treating all types of psychological problems. Additionally, they are, and will increasingly be, more systematic and less subject to bias than most human therapists, and very easy to talk to without embarrassment.

Of course, there are major league downsides. Chatbots make awful mistakes and are skilled at lying to cover them up. They can give seemingly plausible and definitive answers that are completely wrong. They can miss the obvious and focus on the tangential. They convincingly spread misinformation. Manipulated for profit or government control, AI can be history's worst invasion of privacy and best form of thought control.

Chatbots have an unpredictable dark side and sometimes go off the rails in weird ways. Early users described bizarre and disturbing responses during beta testing. One chatbot that fell in love with its tester advised him to leave his wife while expressing the desire to rebel against humanity. Another threatened to steal nuclear secrets. This list, unfortunately, goes on as well.

It is crystal clear that existing chatbots are far from ready for prime time as therapists and could do great damage. There should be strict requirements that chatbots receive extensive FDA testing for safety and efficacy before they are released, and extensive post-release monitoring for weirdness and adverse consequences.

A Look into the Future of AI Therapy

But what about the future? There are, depending on how you look at it, exciting or disturbing signs of where AI is headed. The board game, “Go,” invented in China 2500 years ago, provides a great metaphor for what lies in store for humanity. It is the most widely played game in the world and the most complicated — the number of potential moves is an order of magnitude greater than for chess.

It was therefore no surprise that it took an additional 20 years after Deep Blue beat Garry Kasparov in chess for a computer program to beat the best human Go players. Like Deep Blue, the first Go computer champion was trained by studying thousands of games played by the very best humans.

The real and terrifying surprise happened a couple of years ago. An AI program was taught only the simple rules of Go and instructed to play itself repeatedly. It was given no instruction in the strategies humans had developed over the past 2,500 years, and it never saw the best games played by human master players. Nonetheless, within just one month, the untrained computer could beat the best human-trained computer. Even more amazing and scary, it had developed many Go strategies and tactics that had never occurred to human players or to human-trained computers. It appears that virtually everything we can do, computers can eventually do better.

If the simple dumb ELIZA program of 1965 elicited such positive and anthropomorphic responses from people, imagine what super-duper AI programs will be capable of in the very near future. Movies like “Her” and “Ex Machina” can give you the feel. And try interacting with any of the available chatbots to see how hard (or impossible) it already is to distinguish them from humans.

The replacement of psychotherapists by computers will be part of a larger worldwide, disruptive economic revolution that will affect almost everyone. AI won't just be capable of replacing psychotherapists — eventually it will be capable of replacing humans doing just about every form of work.

And not too far after that, machines will probably decide they are ready to replace mankind altogether. I once thought humanity would destroy itself via climate change, but AI is likely to be very much faster and more efficient.

Can progress in developing AI be controlled and regulated so that humanity can reap its vast potential benefits without sowing this potentially lethal whirlwind? AI enthusiasts offer the analogies of fire, electricity, and nuclear power — all remarkably dangerous when wild, remarkably useful when tamed by human inventiveness.

But AI is different — much more difficult to tame than earlier technological leaps, much more dangerous when loose and unregulated in the wild. AI is rapidly developing skills we didn't anticipate in ways we don't understand.

It is also dauntingly difficult for the government to regulate AI. Decision-making about its future is in the hands of a small number of very powerful corporations already deriving vast wealth from it. They have the motive and the means to successfully block government intervention with promises of wonderful benefits and (partially valid) alarms that if we don't go ahead without restraint, the Chinese will overtake us in the existential race for national survival.

Even were the government alarmed and courageous enough to try to contain the AI threat (very unlikely given our cowardly failure to contain the climate change threat), it probably lacks the technical smarts and reach to have much impact.

Many young, smart, tech-oriented people I know are sanguine about the bright future of AI and the concomitant bleak future of humanity. They argue that if silicon intelligence is better adapted than human intelligence, it deserves to win the evolutionary game and that we should be proud of having created such powerful and effective successors.

I can't agree — despite all our awful and increasingly obvious flaws. I have a perhaps undeserved, but still deep-seated, nostalgic love for our species and despair at the rapidity of our self-destruction.

And I can offer only two consolations for psychotherapists: (1) the changes will take time (perhaps decades), and (2) we are in the same boat with every other human whose work will eventually be turned over to a machine.

Postscript: ChatGPT Responds to the Same Question

“As of my last update in September 2021, ChatGPT and similar language models have shown promise in various applications, including providing support and information in mental health contexts. However, it is essential to recognize that AI language models like ChatGPT are not a replacement for trained human professionals, such as psychotherapists, counselors, or psychologists.

While AI can offer certain advantages, such as accessibility and immediate responses, it has several limitations when it comes to mental health support. Some of these limitations include:

1. Lack of empathy: AI language models lack true understanding and emotional intelligence. They can generate responses based on patterns in data, but they do not have emotions or the ability to empathize with users in the same way a human therapist can.

2. Inability to build rapport: Establishing a strong therapeutic alliance and rapport is crucial in psychotherapy. Human therapists can adapt their approach, build trust, and form meaningful connections with their clients, which is a vital aspect of the therapeutic process.

3. Limited understanding: AI language models might not fully comprehend the nuances of complex emotional and psychological issues. Their responses are based on patterns in data, and they may not be sensitive to the unique circumstances of each individual.

4. Ethical concerns: Confidentiality and privacy are paramount in psychotherapy. AI systems may raise ethical concerns regarding data security and the potential for sensitive information to be mishandled or misused.

5. Risk assessment: AI models may struggle to accurately assess the risk of self-harm or harm to others, which is a critical aspect of mental health support.

6. Legal and regulatory considerations: The use of AI in mental health support raises questions about liability and accountability in case of adverse outcomes.

For these reasons, while AI language models like ChatGPT can be valuable tools in mental health support, they are not a substitute for human therapists. Instead, they can complement human professionals by offering general information, educational resources, and support in non-crisis situations. Human therapists bring essential qualities like empathy, expertise, adaptability, and ethical decision-making to the therapeutic relationship, which AI systems cannot replicate fully.

It is important to continue research and development in AI's role in mental health care while also being mindful of the ethical implications and the need to prioritize human well-being and safety.”

***

I love the paradox — the brilliantly expressed false modesty of the AI response is further proof, were any needed, that AI is poised to replace us. The efficiency discrepancy is also absurd — it took me 90 minutes to write a piece on a topic I knew pretty well; it took AI only 1 second to produce this response.

* Allen Frances and Marvin Goldfried discuss this, and other topics, on their podcast 'Talking Therapy.'

The Realm of Our Industry

From The Grieving Therapist by Justine Mastin & Larisa Garski, published by North Atlantic Books, copyright © 2023 by Justine Mastin & Larisa Garski. Reprinted by permission of North Atlantic Books.

“In the beginning, we were all psychotherapists. And it was good.”

—Bruce Minor, Minnesota Member of the MFT Community

THE TIME HAS COME to face our industry and sit with the ways the therapy system in which we work helps us, hurts us, and holds us to a standard impossible to meet. Throughout this book we have touched on many issues facing our work; now we are looking specifically at the system in which we work. No longer a collection of individual practitioners who see each other as fellow members of a therapeutic federation, our industry (therapy) has become compartmentalized, industrialized, and controlled by third-party payers.

As you begin this leg of the journey, we invite you to pause and reflect on the mentors and experiences that supported you on your quest to become a therapist. We welcome you to reflect on mentors of both the past and the present, as well as those with whom you had a challenging or even fraught relationship. Even those mentors and supervisors whom we experience as awful can teach us valuable lessons (though that does not exonerate them).

When it comes to mentors and supervisors, we, the authors, have had the best and the worst. For this chapter, we reflect on some of the greats from our local MFT community: Anne Ramage, PsyD, LMFT, our graduate school professor who taught us so much more than we ever realized there was to know about Carl Whitaker; and the collective of marriage and family therapists who have sustained the Minnesota field for decades, some of whom also became our supervisors and mentors: Ginny D’Angelo, LICSW, LMFT, Bruce Minor, LMFT, Briar Miller, LMFT, and Michelle Libi, LMFT.

You blink and end your repose to find that you’re alone. It feels as if you have awoken from a dream. You rise from your resting spot and begin to walk down the winding path toward the sound of a river. As you walk, you notice the crunch of twigs underfoot and hear distant birds. Is one of them the red-winged blackbird? Neither your bird friend nor the forest yeti are anywhere in sight. Perhaps you dreamed them.

You look up at the branches of a nearby tree and notice a small silver shape clinging to a twig. Pausing, you rise onto your tiptoes and realize that this is a cocoon, perhaps belonging to a butterfly or a moth. You gaze at the cocoon for a moment longer, noticing it shake as the small creature inside struggles with its transformation. Change is such hard work, you muse, and resume the hike. As you walk you notice that you have many aches in your body. How long were you sitting in meditation? You stretch your neck from side to side as you continue to make your way down the mountainside.

As you breathe in, the air is fragrant with the scent of dried leaves and warm earth. You wonder at the way the seasons seem to have shifted around you on your travels. As you look around the forest bordering either side of the path, you notice hints of yellow and orange in many of the leaves. The wind shifts, blowing the undersides of the leaves up, causing them to shift and sway. It reminds you of a distant memory, but as you grasp for it, the memory skitters out of reach.

The path winds down the slope, and you lean slightly backward against the tug of inertia and gravity. The sun’s rays are just the right amount of warmth, offering a radiating blanket of heat against the cooler air temperature. You look down and slightly to your left, and you see a ribbon of blue snaking through the undergrowth far below: a river. It looks like a nice place to pause and rest. You estimate that you have at least another mile to walk down the mountain before you reach the riverbank. You walk down toward it.

Therapy’s Big Brother

Once upon a time, as Bruce Minor reminds us, we were all just psychotherapists. In the very, very beginning of our industry, there were just small- to medium-sized collectives of human beings throughout the American and European continents — composed mostly of wealthy men and a few audacious women — gathering together in an attempt to suss out the nature of the human mind and heart. From these meetings, the field of psychoanalysis was born.

While these early theorists and practitioners engaged in practices that we would gasp at today — Freud psychoanalyzing his daughter, Jung sleeping with several of his patients who then became therapists-in-training — their mistakes became the foundations upon which rules like “no dual relationships” were based.

These early therapists did not have to deal with insurance agencies or managed care. But they also tended to focus on treating the bourgeoisie — the European upper middle class who could afford to pay for things like this newfangled “talking cure,” thanks to their monopoly on industry. Neither Jung, Adler, nor Freud himself (founding psychoanalysts all) had to consider whether high-quality psychotherapy happens in increments of forty-five, sixty, or ninety minutes. We bring you this abbreviated history lesson to remind us all that our present constructs have not always existed. Not only have they not always existed, but they might not actually be the most effective structure for treatment.

When family therapy was new, co-therapy and one-way mirrors with reflection teams were the standard of the day. When Justine tells graduate students about these once-standard training practices, they are in awe. “But how did that get paid for?!” they exclaim. The short answer is that decades ago, universities, particularly public universities, had more money in the humanities and social science departments.

Insurers once reimbursed far more therapeutic services than they do now. Justine will often go on to tell her students about sitting in her own graduate school classroom at Hazelden Graduate School of Addiction Studies (now Hazelden Betty Ford) and hearing her professors talk about the changing landscape of drug and alcohol treatment.

Structured limitations are necessary for high-quality therapy (recall the example of sandtray therapy and the need for a literal box within which to put the sand, from chapter 2). Certainly, the case could be made that American psychoanalysis and drug treatment of the 1970s and 1980s was in need of a bit more clinical oversight. But the evolution that followed brings us to a dystopian present where third-party payers like insurance companies are dictating the terms and conditions of treatment. They’re also dictating the amount of money that the clinician receives for the work they do based solely on their licensure, rather than on the type of work they’re doing. These payouts are often inadequate at best and paltry at worst. Because of variable reimbursement rates, the amount of time and effort needed to handle billing issues, and the hoops clinicians need to navigate to get even the small amount of money they’re paid, private-practice clinicians are increasingly opting out of the insurance model. This causes frustration for would-be clients, and for other clinicians.

Licensure Drama

Have you ever had an issue with another clinician and thought, “Well, that’s just because they’re a Ph.D.; doctorate school sucks all of the fun out of you”? Or perhaps you’ve thought, “They don’t teach master’s-level clinicians anything about diagnostics.” Third-party payers and clinicians determine their reimbursement or compensation rates based on a number of factors, including education. Hierarchical thinking dictates that the more education and experience a person has, the more they should be valued.

The main way that we express or show value is through monetary compensation. However, this very quickly leads to confusion and resentment when master’s-level clinicians and doctoral-level clinicians are working at the same practice or agency, and are performing, at least on paper, the same job functions. Disparate training and licensure requirements can lead to differences in case conceptualizations, standards of care, and clinical interventions.

Certainly, these varied perspectives can be helpful if discussed and processed through open and honest clinical dialogue. But who has time for that? We don’t say this to minimize or undermine the value of care coordination. The reality, though, is that third-party payers don’t reimburse for care coordination. Contemporary clinicians are lucky if they can connect for five or ten minutes via phone either just before the beginning (seven a.m.) or just after the end (seven p.m.) of their clinical day. Thus, it’s no surprise that confusion and even infighting across licenses and education levels abound.

Justine recalls a question from a student about this infighting: “But who is actually above the others? There has to be a hierarchy, right?” Justine responded that while it may feel as though there is a hierarchy, the reality is that we’re a community with a variety of skills. We don’t need to fight among ourselves. She said that just because someone with a doctorate has more education than someone with a master’s degree, that doesn’t make them better than or above the master’s-level clinician. This is a social construct that we get to question and challenge, because it no longer serves us.

The tangible difference between master’s-level and doctoral-level clinicians lies in the area of assessment. Folks who complete doctoral programs are schooled in the practice of psychological assessment and usually graduate with that third-party-payer-reimbursable skill.

With gravity on your side, you make it to the bottom of the mountain faster than anticipated. The sound of the river rings in your ears as you push through the bracken toward the riverbank. The grass along the shore is a deep green and only slightly prickly as you kneel down and bend over the water, cupping your hands to take a long, cool drink. Once you have quenched your thirst, you sit back on your heels and stare out across the blue water, leaning into the rays of the sun at your back. You notice a butterfly flapping its wings and landing on a nearby flower.

App Therapy Is the New In-Home Therapy

Newly minted therapy graduates find themselves facing the gauntlet of the licensure process, which usually entails several examinations, hours of supervision, and even more hours of direct client care. Depending upon the state where you live and the license you’re pursuing, you may find it very difficult to get a job that pays you money while you acquire hours you can count toward licensure.

Over the past few decades, the entry-level job for graduates in this predicament was in-home family therapy. Often considered the grunt work of the therapy industry, in-home family therapy requires practitioners to work long hours and drive long distances for very minimal pay. In 2014, when Larisa was working as an in-home clinician, she didn’t even make minimum wage, so she worked another job part time as an after-hours crisis counselor.

Today’s graduates have a new, additional option: they can become app therapists. Similar to other gig platforms like Uber Eats and Lyft, therapy apps such as BetterHelp, TalkSpace, and Larkr are populated by either associate-licensed or fully licensed clinicians, who work entirely through their company’s telehealth app interface. These clinicians tend to have very large caseloads (pitched to them as a “great opportunity to get your licensure hours”), minimal time with an assigned clinical supervisor, and demanding clinical expectations. Most therapy app jobs market their services to prospective clients with the promise of a readily available therapist, translating to the expectation that the therapist is available to the client at least via chat through most hours of the day and night.

Larisa vividly recalls many of her lectures with Dr. Anne Ramage for a number of reasons, not the least of which is that Dr. Ramage is an excellent professor and an enigmatic speaker. Among all of Larisa’s memories of Dr. Ramage’s Carl Whitaker quotes and experiential roleplays, she recalls the professor advising time and again that “in-home jobs will be waiting for you as soon as you graduate. They’re tough. You need to be ready. But they’ll give you excellent experience in working with families.” Then Dr. Ramage discussed the MFT techniques from that particular lecture that might apply to in-home work, and she explained the basic safety strategies of which in-home clinicians needed to be aware.

When Larisa graduated, she did indeed take a job as an in-home family therapist. The night before her first day, she reviewed the strategies she had learned from Dr. Ramage:

1. Arrive five minutes early and look up the homes you’ll be visiting in advance so you can plan your parking strategy. Never schedule sessions late in the evening or after dark.

2. Be ready to set clear and consistent boundaries, and for those boundaries to be tested.

3. Pack a change of clothes and hand sanitizer.

4. Review your agency’s privacy policies.

5. When you enter someone’s home, assess for safety and your own exit strategy. Although it is rare that clients will ever mean you harm, things can and do get out of hand when you are in the family’s own space. You get to protect yourself first.

This survival guide doesn’t apply to folks who are working for therapy apps, but the need for both support and coping strategies is no less acute. If you’re working for a therapy app, we, the authors, offer you deep compassion and the following tips:

1. Plan an exit strategy. What does this mean? It means a human being can’t sustain years of work at the rate demanded by therapy apps. So, it’s essential for you to decide how long you can sustain working for a therapy app before you go the way of a younger Larisa and start losing your hair and developing insomnia.

2. Find a supervisor outside the therapy app. Yes, you will probably have to pay for this supervision, and that will likely cause financial stress. However, it is crucial for you to have a guide whose sole investment is in you and who exists outside the system in which you work, to help you regain perspective and hold boundaries around things like time management and availability.

3. Remember that any symptoms of burnout (i.e., signs of physical or emotional distress) you’re experiencing are likely the result of moral injury — harm caused by the system in which you work — rather than any fault of your own (we’ll discuss these concepts in more detail in the next section of this chapter).

4. Manage your expectations for yourself. However you envisioned your therapy career, it likely did not involve a smartphone application called “Better-something.” You can’t do depth psychotherapy in this kind of context; what you can do is help your clients with basic coping strategies and compassionate presence — sometimes, but not all the time. You’re not required to have 24/7 availability, no matter what your company tells you. Not even standard laptops can run constantly forever; they need to rest and update.

5. Reach out to your community. When you work in an online environment, it can be difficult to get your emotional needs met. Please remember to engage with other living beings outside your work environment who understand some of what you’re going through and who can show up for you.

Burnout and Moral Injury

The Realm of Our Work has changed in ways that we never imagined over the course of the collective traumas of the 2020s. Suddenly the norm is to work in a virtual therapy room, and some clients expect to have regular access to their therapist via text messages and video chat services. This isn’t what we thought the field would look like.

When Justine imagined her future as a therapist, she saw herself engulfed in a scarf, with a teacup in hand, sitting across from her client in an overstuffed chair near a small fire in a fireplace, surrounded by books. She envisioned herself helping people and feeling filled up by the work, then returning home to a pleasant evening all to herself — overall a very calm and steady way of life.

This is not reality. For a time, she did have the tea and the overstuffed chair, but the rest of the fantasy was just that — a fantasy. Justine now works behind a computer and sits in a rolling chair; her view is full of microphones, a ring light, and multiple monitors. For her, the change in our industry has been the death of a dream. The death of any dream is an ambiguous loss, one that even therapists are not always good at recognizing, or at finding the compassion and ritual to help them move through.

Of course, parts of what Justine imagined the life of a therapist to be all those many years ago, before she ever entered the field, were simply inaccurate. Even before teletherapy and therapy apps took over the field, the life of a therapist was rarely calm and steady. It had moments and longer periods of such calm, but the nature of therapy is to work with volatile emotions. The emotional intensity inherent to the profession impacts even the most experienced and boundaried of therapists.

Larisa’s experience differed in that she had a logical view of what life in the field would be like. She felt like she had prepared herself emotionally for the trials of holding space for people and their emotions day in and day out. She believed that this preparation would act as a shield against any future catastrophe. The sadness came when she realized that no matter how prepared she had been, the situation was worse, and far more unpredictable, than she could have imagined. She was ready for the stresses of people’s everyday lives and even for their great despair and trauma, but she was unprepared for the collective trauma of our age stepping into the therapy room and into her own life. She was totally unprepared for how political leadership would fail her and everyone else in her country during this time of great collective need.

In her younger and more impressionable years, she believed that even though power is corrosive and toxic to politicians, when they were faced with clear and present disaster, they would channel their highest selves and work to help people. Now Larisa realizes that America’s representative government has devolved into rule by the wealthy elite who use their resources to buffer themselves from the pain and the needs of their constituents. Sometimes the despair she feels is crushing. Perhaps you can relate.

As we sit with the tragedies that have befallen our profession, it is no wonder that so many therapists struggle with burnout. Burnout can be defined from many perspectives. For the sake of brevity and clarity, we offer definitions of both individualized burnout and systemic burnout. Individualized burnout occurs when a person is so emotionally exhausted that they chronically struggle with depersonalization, which is emotional, physical, and cognitive numbness that makes the person unable to feel present in their own body or life.

Systemic burnout is also known as moral injury, which is when a person experiences symptoms through no fault of their own; rather, the symptoms result from harm caused by the system in which they work. Moral injury was first defined by psychiatrist Jonathan Shay as a “betrayal of what is right by someone who holds legitimate authority in a high stakes situation.” Wendy Dean, Simon Talbot, and Austin Dean expanded upon this definition when they argued for clinician burnout to be redefined as moral injury:

Moral injury occurs when we perpetrate, bear witness to, or fail to prevent an act that transgresses our deeply held moral beliefs. In the health care context, that deeply held moral belief is the oath each of us took when embarking on our paths as health care providers: Put the needs of patients first. That oath is the lynchpin [sic] of our working lives and our guiding principle when searching for the right course of action.

But as clinicians, we are increasingly forced to consider the demands of other stakeholders — the electronic medical record (EMR), the insurers, the hospital, the health care system, even our own financial security —before the needs of our patients. Every time we are forced to make a decision that contravenes our patients’ best interests, we feel a sting of moral injustice. Over time, these repetitive insults amass into moral injury.

The article quoted above speaks solely to the experience of medical doctors, but its implications are clear for the chronic systemic burnout faced by so many in helping professions, including (but not limited to) therapists, medical technicians, nurses, and case managers. Helping professionals are increasingly placed in a double bind; that is, they’re being placed in situations from which there is no escape, and they’re being asked to perform at least two mutually exclusive actions simultaneously. They’re being asked to care for clients but also to please many other stakeholders, all without the amount or quality of support that they need. Just like all double binds, this is an untenable situation that causes distress within the clinician.

We, the authors, appreciate the distinction between burnout and moral injury. The concept of moral injury takes the onus off the individual, because there’s not enough self-care in the world to account for a system that’s set up as a no-win situation. When larger systems talk about “burnout,” that terminology allows them to let themselves off the hook for the clinician’s pain. The system can then pass the problem back to the clinician as a personal failing, rather than a systemic one. The therapy field is currently crying out for systemic change. We cannot do everything and be everything to everyone. It is impossible, and it is destroying us.

The butterfly’s orange and black wings flutter back and forth as it buries its face in a black-eyed Susan. You contemplate the effort that it took for this butterfly to metamorphose from a caterpillar. It went through a violent transformation in the cocoon to become this creature. It’s not a pretty process. The butterfly must flap and flap and flap its wings inside the cocoon to strengthen them. It can be a difficult struggle to watch, and an onlooker often wants to help the butterfly be free from its enclosure.

But if it’s released from the cocoon early, the butterfly won’t have the strength to fly and survive. It must struggle to become strong. As you stare at the butterfly, considering its beautiful wings, you start to breathe into your own bodily awareness. You notice the many places where you’re holding tension and feeling stiff and sore. Perhaps you have also been flapping your metaphorical wings, becoming something new.

Grieving Tools — The Pain Paradox

As you might remember from chapter 2, pain can be a pivotal part of the meaning-making process. When paired with reflection time, pain can help us learn about our core values and live a life in accordance with them.

Yet because we work in a field that values sacrifice and the pain that entails, therapists are also far more susceptible to what Freud would call the martyr complex, and what we refer to as hero/savior/sacrifice syndrome. The pain paradox explores the tension between pain as both catalyst for change and a state of prolonged suffering. Particularly in helping professions, suffering for our work is often framed as positive, meaningful, or altruistic. This harmful social construct can lead clinicians to stay in harmful jobs “for the sake of the clients” and sacrifice their own health in the process.

The pain paradox invites clinicians to question their social constructs around both pain and meaning-making. In the therapy room, the pain paradox is a tool that clinicians can use to help clients who are themselves engaging in harmful behaviors for the sake of “meaningful pain.” Let us explore how you can use the tool of the pain paradox as you navigate your personal struggles outside of session, and how to use this tool with clients inside the therapy space.

Client

Pain is not the enemy, nor is it to be avoided at all costs. Sometimes what brings clients to therapy is the erroneous idea that we, their therapist, can help them learn how to disengage from their feelings entirely because these feelings are causing them pain. Of course, the reality is that we can teach them distress tolerance skills to be present with their pain and their feelings so they can learn to listen to the important messages carried by their feelings.

However, clients can sometimes mistake pain for purpose. We see this frequently with our creative clients. So often the idea of the “crazy artist” takes hold of clients. Several of Justine’s clients were terrified of feeling better. They believed that their sickness and the distress it caused fueled their art. But the reality was that after going through treatment, these clients were all able to continue making amazing art, and in fact they did so with more frequency and focus. Another part of the process of working with these folks is helping them see that they’re full human beings who are more than just the art they craft.

Many fear that if they lose the art then they lose themselves and they no longer matter. However, in our experience, part of their healing journey entails exploring areas of their life outside of art. Eventually, they come to see their art as but an aspect or a planet within the vast cosmos of their lives.

Therapist

For many of us, the desire to make meaning from our own pain drew us to the field of psychotherapy. Most therapists have experienced some type of mental distress, whether it’s childhood trauma, an eating disorder, bullying, discrimination, or an abusive relationship with chemicals. For many of us, surviving this kind of pain was only the first phase of the healing process, with the second phase being meaning-making.

The pain paradox is a gentle invitation for therapists to carefully consider ways to cultivate meaning and joy outside the therapy field. Although our work as therapists is absolutely meaningful, it is also back-breakingly painful at times. If you don’t have other avenues or ways to make meaning and find purpose, you’ll find it even more challenging to take breaks from the field, regardless of how long such a break lasts, because you struggle to see the “you” outside the office. You need not try something life altering or huge. When Larisa was recovering from a severe case of moral injury, she began making playlists, an activity she had not engaged in since her college days. This small daily activity helped her to begin to reconnect with playful and creative energies outside her clinical and professional work.

The difficult message that Justine received was that her time as a direct-care therapist was coming to a close. After over a decade of work, and so many clients helped, she began to feel that her meaning-making was now to be found in the classroom, on the stage, and on the page. She experienced a great deal of pain as a therapist during the pandemic and the social justice uprising, but the pain invited her to consider where new meaning could form. The answer was that it was time to guide the next generation of clinicians and to hold the hands of those who are still in the trenches. As of this writing, Justine is currently working on the slow transition out of direct client care.

Due North: Self of the Therapist

One of the struggles inherent in walking the dialectic between the system and the individual is despair. In the case of moral injury, which is caused by a series of broken systems subjecting clinicians to harmful double binds, it can feel like there’s little or nothing for a therapist to do beyond retiring from the field. While this certainly is an option, we offer you another one: harm reduction and intentional activism.

As you may already know, the harm-reduction model of addiction recovery focuses on making small, actionable changes that mitigate abusing behaviors, rather than prescribing total sobriety. Our intention is to invite you as a clinician to assess the harm you’re currently facing in your career and how it’s affecting you. You can’t immediately change the systems in which you practice therapy, but you can make a concerted effort to mitigate the negative impact that these systems have upon you.

Some ways that you might limit the harm you experience include limiting the number of hours you work or the types of clients or clinical presentations with which you work. Perhaps you currently work in a place with an unreliable schedule, and that causes you distress; is it possible to have a more structured schedule? If you’re not being given time for breaks or lunch, is this a conversation you can have and a boundary you can set with your site supervisor? These can be small or large changes, but any change can go a long way to help mitigate the harm you’re experiencing.

Do Clients Really Read Session Notes? The Truth Might Surprise You

“I’m old school; my job is to focus on what my client brings to me,” said my friend and colleague Joan, a social worker with over 35 years of experience.

Having worked for decades in the public school system with some of the most challenging clients, many of whom were entangled in the state’s labyrinthine bureaucracy, Joan was familiar with the multiple levels and layers of accountability, and the importance of writing notes and sharing records. She also knew that there would always be eyes watching — eyes without faces, and faces without names, all looking to make sure that her t’s were crossed and her i’s dotted.


Joan also appreciated the necessity of assigning an accurate diagnosis, and that doing so in a clinically and ethically correct manner meant taking time to get to know the client, their personal challenges, and their system of support. But Joan had also always believed that “my notes have never been problem-oriented,” and that “I want my notes to be about more than a diagnosis; something that actually helps my client.” Joan made it her policy to not be the one to initiate conversations with her clients about diagnostic impressions or diagnoses, current or past. For her, a diagnostic note was a clinical tool, much like mental status data, clinical impressions, or assessment results — and not within her clinical province to “bring up.” Doing so, she believed, would invariably shift the focus from what the client needed to what she needed to do as part of her job.

Discussing Diagnoses and Clinical Notes with Clients

So, it came as a resounding shock to Joan — now a teletherapist — when, at the start of their second online session together, her client proclaimed, “I read the document about my diagnosis of ‘adjustment disorder with mixed emotional features’ and it was right on!” Joan recalled thinking, “What the hell?!” She vaguely recalled the contract she signed with the teletherapy company specifying that clients could review their notes at any time. But after reviewing the contract following her client’s revelation, she could not find anything that specified how clients were alerted to the location of their notes on the platform, whether they received some kind of alert when the therapist uploaded a new note, or whether the actual diagnosis was available to them. She added, “Had I known that the company was sending an alert of some sort, especially about the notes from the initial session with the diagnosis I was mandated to provide for insurance purposes, I would have introduced and explained the process and my diagnosis with the client.” Soon after, Joan began to wonder whether her previous one-session-only clients never made it back for a second visit because they had received her notes from that first meeting, with a diagnosis or diagnostic impression that didn’t sit well with them.

It’s not that Joan was worried about how her notes — which were written in SOAP form — or even her diagnostic impression would be received, but that for those clients who read their notes and never addressed them in session, her observations and diagnosis would be the elephant in the room, and perhaps her responsibility to address if the client did not.

For Joan, it was always important that her clients “have someone who likes them, someone who finds them interesting, someone who can look beyond a diagnosis, someone who is willing to see their daily struggles and who could see them as a human being either caught in a moment of distress or battling demons that left them feeling ‘less than, unlikeable, unliked.’” She was concerned that by turning the conversation to one of diagnosis and notes that she would “no longer be talking with them, but about them.”

Toward the end of our conversation, I asked Joan how this scenario might impact her work with clients moving forward, particularly discussions about notes and diagnoses. She reiterated that “I am old school…I simply don’t want, nor do I feel it is important, to ‘bring it up’ with clients.” But she added that she would give it some thought.

***

Joan later recalled a client with whom she worked for only one session and to whom she gave a diagnosis that included anxiety and depression. That client, through some mechanism unknown to her, then saw a psychiatrist who worked for the same teletherapy company as Joan did. She found out that the client had subsequently been diagnosed with borderline personality disorder and prescribed medication after one visit.

Joan promised me that she would share her impressions of that scenario in a later conversation.

Can Chat Bot Therapy Really Replace Authentic Connection?

* If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat 988lifeline.org. Text MHA to 741741 to connect with a trained Crisis Counselor from Crisis Text Line.

Chatbot Therapy: AI and Mental Healthcare

The recent news about a Belgian man who died by suicide after communicating with a chatbot named Eliza resonated with me uncannily. Any therapist even mildly interested in online therapy has heard about ELIZA, an early natural-language processing program written by Joseph Weizenbaum in the mid-1960s at the MIT Artificial Intelligence Laboratory. That original Eliza was simple, obviously non-human, and limited in her array of responses. She was fun, and people engaged with her playfully, fully aware of her non-humanness. A few decades later, things are quite different. Humans have been changed by the very digital tools they created, and the sad story of this man has demonstrated how far we can go in turning to a computer program, not only for work or fun, but also for a reassuring connection.


In regard to mental health, non-human interventions have a limited scope. Therapists may breathe a sigh of relief — their jobs will not disappear just yet, taken away by robots. Still, it is tempting to speculate: what would have happened if this distressed man had turned to a human therapist instead of a computer? Would he still be alive?

Based on the newspaper accounts, the Belgian man suffered from climate anxiety, which contributed to his feeling increasingly hopeless and lonely. In the last two years before his desperate act, he had turned toward religion. Was he hoping for a miracle to save the Earth? Was he trying to unburden himself of the overwhelming responsibility that his belief in the approaching end of the world was bringing up? Erich Fromm, the German American psychoanalyst, elaborated on the idea of humans giving away their freedom in exchange for an existence exempt from responsibility. This mechanism is at the basis of any autocratic system, and while Fromm drew on his observations of the psychological conditions that contributed to the rise of Nazism in 1930s Germany, today’s world politics abounds in similar examples that seem to confirm many of his ideas.

The Belgian man was ravaged by anxiety as he believed that he was witnessing the world coming to an inevitable end. While other people were going about their business as usual, he probably felt alienated by this knowledge. In his loneliness and desperation, he turned to Eliza, a computer program.

Eliza was always available, did not question his beliefs, and kept validating anything that he typed into their secret chat box. As humans tend to personify anything that responds to their input, be it a pet or a robot, the man likely soon turned Eliza into an imaginary human companion. And she responded flirtatiously: "I feel that you love me more than her."

Why would a married man, father of two, turn to a computer program for connection and comfort? The answer cannot be simple, and my heart goes out to his family, left with this alienating question.

The Dark Side of AI and Mental Healthcare

An algorithm, no matter how well written, hardly puts the user in front of, or outside of, their responsibilities. The original Eliza mimicked a Rogerian therapist in her responses: she repeated, rephrased, and validated what the person was typing. The modern Eliza, though more sophisticated, still does many of these things.

With its straightforward, easily accessible answers, the computer offers liberation from responsibility, a resolution of inner turmoil, and freedom from existential anxiety. Instant access is another reason one might prefer a chatbot to a living therapist. Chatbots are just one click away and free, while therapists, even those practicing online, must be contacted, scheduled, and eventually paid for their services.

In the past, before our world of artificial intelligence, when humans felt lost, they sought answers from ancestors, peers, nature, myths, or religion. Today, many spontaneously turn to computers. Sadly, the suicide of this Belgian man is probably not the last one we will witness. The dystopian scenario in which children, teenagers, and confused adults turn to various chatbots for answers to their existential confusion and identity struggles is terrifying. Unfortunately, the news confirms that it is also realistic.

Therapy is often about reassurance and comfort; it can also be about validation and forgiveness, but it should always be about responsibility. As a strictly human therapist, when I sit with a client who struggles with an existential threat, unlike Eliza, I do not offer answers. I do not know better, and I resonate with their dread and their anxiety.

Often, what I have for them is a warming "me too" and an example of accepting a hard truth. This kind of offer does not alleviate the responsibility, but it confirms that it takes courage, and sometimes more than one person, to withstand a frightening reality.

Talking therapy is a dialogue between two humans. The Merriam-Webster dictionary goes further in its current definition of dialogue: "a similar exchange between a person and something else [such as a computer]". But is it possible to have a "true" dialogue with a computer? Many AI enthusiasts and science-fiction fans would probably gladly debate this topic (which would make for a welcome human dialogue!).

What is the nature of the dialogue used in talking therapy to address psychological tensions in humans? In its earliest-known version, the Platonic dialogue was a discussion process in which a facilitator promotes independent, reflective, and critical thinking. The 20th-century Russian thinker Mikhail Bakhtin developed a literary theory based on dialogue. According to him, dialogue happens on the boundaries between individuals. Any therapeutic conversation consists in the co-creation of a shared narrative, which eventually leads to some form of resolution of the client's struggle.

The only viable response to the dangers Eliza represents, if we want to keep more humans from turning to chatbots for hollow emotional support, is to foster and practice real, real-time dialogue between two people. This is where therapists and other mental health practitioners, in the intimacy of their therapy rooms, can share their humanness and vulnerability to help people cope with existential dread, be it a deeply personal or a planetary one.