What happens when the worlds of AI and therapy collide?

“Gradually, and then suddenly.” The line from Hemingway’s The Sun Also Rises — a character’s summary of how he went bankrupt — has found new purpose lately in capturing how huge technological shifts take place: incremental steps until the shock of realizing that we’re in new territory altogether. In the field of artificial intelligence (AI), the gradual part has played out over the last few decades. We saw machines steadily improve at playing chess, eventually besting the human world champion. We saw AI get better at recognizing voices and faces, and generating them. We saw AI improve at breast cancer diagnosis and protein structure prediction. We’ve grown accustomed to AI getting better and better at individual tasks.

But last week, we felt the ground shift. For the first time, we were suddenly confronted with an AI, in the form of OpenAI’s ChatGPT, that can do so many things we had previously thought of as uniquely human. Chatting as if with a friend. Writing a poem or song or essay or short play on any requested topic, with meaning, emotional nuance, and narrative cohesion. Writing software code, given instructions in natural English. Answering questions about quantum physics, human psychology, and politics. Making up jokes, impersonating Twitter personalities, engaging in creative role play. You can go online and talk with it yourself. It’s amazing, even if highly imperfect.

For the first time, it’s not just theoretically possible, but palpable: an AI within arm’s reach of passing the Turing Test, a bar for machine intelligence in which a person can’t reliably tell whether their chat partner is a computer or another human being. ChatGPT has its limitations and foibles, but it’s a publicly accessible, preconception-shattering experience of a possible world in which no realm of human capability is off-limits for an AI — even what we’ve regarded as the most deeply and uniquely human ways of relating.

As someone who has researched and helped to build AI systems, and who is now a psychotherapist, I find it increasingly urgent to understand what happens if — or really, when — the worlds of AI and therapy come together in increasingly intertwined and sophisticated ways. It’s scary, but also perhaps inspiring, to contemplate.

The Risks

First, let’s get clear about the risks. AIs like ChatGPT have impressive, even awe-inspiring, capabilities, but they’re also still quite limited and very error-prone. ChatGPT, for example, will give plausible-sounding answers to almost any question, including when it doesn’t (or can’t possibly) know the answer. It has a major confabulation problem. Integrity and humility are paramount in therapy, and on both counts the AIs clearly have a ways to go.

Trained on the huge corpus of data that is the internet, AIs are also subject to all the biases to be found there. While researchers have attempted to put safeguards in place, there are still situations in which the AI would be at risk of reproducing bias and discrimination, and recapitulating hate speech. That is of course a disaster in any context, especially in therapy.

There is good reason to be concerned about the data that would be collected by an AI in the context of having therapeutic conversations, and how that data would be used. Therapy is, and should be, an inherently confidential endeavor, and extreme care will need to be taken to preserve each client’s right to privacy.

Finally, and most importantly, there will be some people, perhaps most people, who simply don’t want to interact with a machine intelligence, let alone in the virtual therapy room. They will understandably want respite from a world in which we already arguably spend too much time interfacing with computers. They will crave the healing inherent in connecting with another fallible and authentic human. And given what we know about how therapy works, it may well be that this authentic connection is not just a nice-to-have, but absolutely crucial for genuine healing.

All this to say: AI replacing therapists doesn’t seem to be on the near-term horizon. But could AI helpfully augment human therapists? With the above risks in mind, and with humility about how far we still have to go technologically, I think it’s worth really digging into this question. If there is the possibility of genuine benefit, then there is meaningful work we can get started on — in fact, some startups have already begun to build in the directions outlined below — and more time to address the possible risks.

Augmenting Therapy: Possibilities

Let’s start with feelings as an “input.” Specifically, what if the client’s emotional state could be better understood by the therapist, and even by the client, with the help of AI? Empathy is of course crucial to effective therapy, and therapists, like all humans, are prone to overestimating their ability to gauge a client’s nuanced feelings by intuition alone. While some approaches to therapy incorporate routine, structured assessments of the client’s emotional state, this is far from standard across modalities, and such assessments depend on the client’s willingness and ability to accurately report how they’re feeling.

We know that speech patterns and vocal acoustics, facial expressions and microexpressions, and even patterns of movement are all predictive of cognitive and emotional state. An AI could measure these signals and track them over time, giving therapist and client more data to better understand how the client is doing and how the therapy may be helping or falling short. Even in cases where the client is providing detailed survey responses, the AI could assist by highlighting important trends and summarizing qualitative themes. The opportunity here is to establish a responsive and reliable feedback loop, so that client and therapist get more visibility into how therapy and life circumstances are affecting the client.
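To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. It assumes per-session “valence” scores already exist, whether from client self-report or from some hypothetical model scoring vocal and facial signals; the scale, names, and numbers are invented. The only point is that trends, rather than single data points, get surfaced:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionRecord:
    session_number: int
    valence: float          # hypothetical scale: 0 = very distressed, 10 = doing well
    notes: str = ""

def trend_summary(records: list[SessionRecord], window: int = 3) -> str:
    """Compare the average of the most recent sessions against the earlier baseline."""
    if len(records) < window * 2:
        return "Not enough sessions yet to estimate a trend."
    recent = mean(r.valence for r in records[-window:])
    baseline = mean(r.valence for r in records[:-window])
    delta = recent - baseline
    if delta > 0.5:
        direction = "improving"
    elif delta < -0.5:
        direction = "declining"
    else:
        direction = "holding steady"
    return f"Recent average {recent:.1f} vs. earlier average {baseline:.1f}: {direction}."

# Seven sessions of invented valence scores, oldest first.
history = [SessionRecord(i, v) for i, v in enumerate([4, 5, 4, 6, 6, 7, 7], start=1)]
print(trend_summary(history))  # Recent average 6.7 vs. earlier average 4.8: improving.
```

A real system would of course need validated measures and clinical oversight, but even this toy framing shows how a running summary could sit alongside the therapist’s intuition rather than replace it.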

It’s possible that this kind of data, aggregated and analyzed across many clients in an “outer loop,” could even enable individualized matching at the outset of therapy, helping to direct a client to the most effective approach, and even therapist, for that client’s unique emotional and behavioral “signature.” This matching could extend beyond talk therapy to include responses to (and individualized predictions about) psychopharmacological and other interventions as well.
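Here is one hedged sketch of what such matching might look like under the hood, assuming a client’s “signature” could be summarized as a small numeric vector. The approach names, features, and numbers are all invented; the mechanism shown is simply comparing a new client against profiles of past responders:

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Invented average "signatures" of clients who responded well to each approach.
# Features (also invented): rumination, avoidance, somatic activation.
approach_profiles = {
    "CBT":             [0.8, 0.2, 0.4],
    "Exposure":        [0.3, 0.9, 0.5],
    "Emotion-focused": [0.4, 0.3, 0.8],
}

new_client = [0.7, 0.3, 0.5]
best_match = max(approach_profiles, key=lambda name: cosine(new_client, approach_profiles[name]))
print(best_match)  # "CBT" -- the responder profile most similar to this invented client
```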

As important “outputs” of a therapy session, consider the client’s insights, shifts in perspective, adaptive beliefs, and more genuinely felt and congruent emotions. Yet as anyone who has ever been to therapy can attest, by the time we’ve driven home, or simply switched tabs in the browser, such changes often begin to decay and fade into the background. What if, instead of leaving sessions full-hearted but empty-handed, we had access to a kind of “highlight reel” of each session?

Such a tool could help us recall our learnings and emotional insights, even in our own words. Over time, with access to the corpus of our own accumulated insights, we could even ask the AI for support with particular challenges, and be reminded of the progress we’ve made and what we’ve learned along the way. The therapist would support us in charting new territory, while the AI would draw on session content to help us remember — in an ever-evolving, dynamically generated way — where we’ve been.
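As a toy illustration of that “highlight reel” idea, consider retrieving a client’s own past insights that seem most relevant to a present challenge. A real system would likely use a language model or semantic embeddings; in this sketch, simple word overlap stands in for that, and the example insights are invented:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def relevant_insights(question: str, insights: list[str], top_k: int = 2) -> list[str]:
    """Return the stored insights sharing the most words with the current question."""
    q = tokens(question)
    return sorted(insights, key=lambda s: len(q & tokens(s)), reverse=True)[:top_k]

# A client's own (invented) insights, captured in earlier sessions.
past_insights = [
    "When I notice the urge to cancel plans, naming the anxiety out loud helps.",
    "My harsh inner critic gets louder when I am sleep deprived.",
    "Asking for help is not the same as being a burden.",
]

print(relevant_insights("I'm feeling anxious about weekend plans again", past_insights))
```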

Additionally, there is all the work that happens between sessions. Whether written exercises, role plays, exposure work, or behavioral interventions, most therapists agree that change is faster, deeper, and more enduring when clients engage with therapeutic work between sessions. What if an AI could intelligently guide clients through written work, respond interactively in role plays, provide encouragement and support during exposure work, and offer accountability and advice for behavioral interventions? Beyond helping the client in those moments, the AI could also retain the client’s questions, challenges, and insights during these exercises. This could be useful material to discuss in session, as well as the basis for an interactive record that deepens the client’s understanding of their individual therapeutic journey.

Finally, whatever patterns emerge at the individual level might also be of use to other clients. Properly anonymized, and with carefully obtained, voluntary consent, it’s possible to imagine clients opting to share the broad outlines of their initial symptoms, challenges, therapeutic insights, and victories with others. Clients of the future might be able to “search” for, or even be recommended, therapy journeys that resonate with them, and then integrate some of the lessons learned into their own work.

Looking Back, and Looking Ahead

It turns out that AI and therapy together go way back. In 1964, Joseph Weizenbaum, a computer scientist at the MIT Artificial Intelligence Laboratory, created a natural language processing program called ELIZA. Using an algorithm that is incredibly naive by today’s standards, it matched the user’s written input against a set of internal rules and responded in a simple, open-ended way that nonetheless suggested the program had understood, even empathized with, what they had just typed. Though it was meant as a parody of non-directive therapists like Carl Rogers, many who played with the program couldn’t help but feel they were interacting with a real intelligence.
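That rule-matching idea is simple enough to sketch in a few lines of Python. This is not Weizenbaum’s original script, just an invented, minimal illustration of the technique: match the input against a pattern and echo part of it back inside an open-ended prompt.

```python
import re

# A few invented rules in the spirit of ELIZA: match part of the input,
# then echo it back inside an open-ended, Rogerian-style prompt.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.IGNORECASE), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel stuck and alone"))   # Why do you feel stuck and alone?
print(respond("It's been a hard week"))    # Please tell me more.
```

Even this trivially thin mechanism can feel surprisingly personal in conversation, which is exactly the effect Weizenbaum’s users reported.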

Now, more than half a century later, we have far more powerful algorithms like deep learning, and mind-bogglingly large data sets to train them on. The “illusion” that these programs are intelligent, even emotionally intelligent, is far more compelling. And our thoughts reasonably turn to what can happen if this intelligence is misunderstood, misapplied, or even misappropriated. There is even a recently produced, wonderfully immersive computer game and visual novel called Eliza, in homage to the now-ancient algorithm, that imagines a dystopian future in which a venture capital-funded startup abuses AI therapy for its own financial benefit. I played Eliza, and found it emotionally rich, visually compelling, and sadly all-too-plausible.

So what do we do? It’s tempting to close our eyes and unask the question. We don’t yet really understand, let alone have a plan for, the disruption that will surely come about as AI remakes industries and automates away jobs, creating financial and social upheaval. It can be scary to imagine — as therapists, as clients, as human beings — that what we think and feel and do can be algorithmically understood and influenced, even if for our own benefit.

I suggest that we use this anxiety to drive a conversation in which we clarify our goals, our ethics, and our vision for how this might look. We don’t know when, but there is little doubt that truly intelligent AI is coming. The sooner we can have a more informed conversation about AI and therapy, the more likely it is that we wind up in the future we actually want.

Note: This post is part of the broader Therapy Science blog by Feeling Good therapist Brad Dolin. Brad's previous career in machine learning, artificial intelligence, and software development continues to inform his curiosity, and he's fascinated by the ways that technology and therapy intersect. The original article is published here.
