Michael Lambert on Preventing Treatment Failures (and Why You're Not as Good as You Think)

by Tony Rousmaniere
Dr. Michael Lambert's groundbreaking work on tracking client outcomes has revealed a huge blind spot for psychotherapists: we don't notice when our patients are getting worse. But he's got the solution, if you're willing to try something new.


The Blind Spot

Tony Rousmaniere: Let’s jump right in. You’re a leading researcher in the field of helping clinicians track their clients’ outcomes.
Michael Lambert: Right.
TR: Despite a quickly growing body of evidence that tracking outcomes can really help clinical practice, there are still many clinicians who don’t do it or who don’t want to do it. How would you make the case to these clinicians that tracking outcomes can be beneficial for their practice and for their clients?
ML: Well, the system we developed, the OQ (Outcome Questionnaire) Analyst, essentially monitors people's mental health by asking 45 questions about how they're doing. Clinicians can't gather that much information in every session because it takes too much time, so the best way to do it is through a client self-report measure that asks very specific questions about different areas of functioning. It's important to use a self-report measure and to tap into a broad range of symptoms that wouldn't normally come up in a session, since sessions usually focus on what happened last week. It's like taking a patient's blood pressure and checking their vital signs at each visit. It gives you a much more precise measure of how they're doing over time.

We developed the measure essentially to reduce treatment failure. It came out of the problem of managed care bothering clinicians with management bureaucracy around cases they knew nothing about. And so the idea was to stop managed care from managing all the patients in the clinician’s caseload and to focus on the management of patients not responding to treatment. So it’s not for all patients. It’s not necessary for the majority of the patients, actually—but it is necessary for patients who are not progressing or are getting worse. 
Our estimate is that about 8 percent of adult patients actually deteriorate by the time they leave treatment, and with kids it's at least double that. So 15 to 24 percent of child and adolescent clients actually leave treatment worse off than when they started, and that doesn't include people who simply aren't improving. But in our survey with clinicians we asked what percent of their patients were improving in psychotherapy, and they estimated 85 percent. This is a major blind spot for clinicians. They're not good at identifying cases where patients are not progressing or are getting worse. Even in clinical trials where you're delivering evidence-based psychotherapy and have well-trained clinicians who are following protocol, you're only getting about two-thirds of those patients responding to treatment. And in routine care, the percentage of responders is closer to one-third. So clinicians' estimates are way overstated.

In many ways, I think it's a necessary distortion for clinicians; in order for us to remain optimistic, dedicated, committed and engaged, we have to look for the silver lining even when patients are overall not changing or outright worsening. It's kind of a defensive posture, and it generally serves both clients and clinicians well, because the more success we see in our patients, the happier we are in our jobs. But the downside is for the subset of patients who are not on track for a positive outcome. The distortion doesn't work in their favor.
 

We Are the 90 Percent

TR: So are you saying that therapists are kind of inherently optimistic and positive, which helps them with most clients, but creates a blind spot for clients who are possibly deteriorating?
ML: Exactly. The evidence for that comes from a few studies we've done. Ever since it was first studied in the 1970s, individual private-practice clinicians have been shown to overestimate treatment effects. This has been going on for 40 or 50 years that we know of, probably forever, and it goes on today.


So if you’re in that world of overestimating the successes, then you’re not going to be motivated to adopt what we’ve developed because you can just stay in the happy world of optimism. But if you actually measure people’s symptoms and their interpersonal relationships and their functioning at work or homemaking or study, then the patients aren’t reporting the same thing that clinicians are reporting. That’s a problem.

Another related problem is just how good clinicians think they are at having success compared to other clinicians. Ninety percent of us who practice—I'm one of those 90 percent—think our patients' outcomes are better than our peers' outcomes. So 90 percent of us think we're above the 75th percentile. And in our survey not a single clinician rated themselves below average compared to their peers, whereas 50 percent of us have to be below average because it's normally distributed. So we live in this world where we not only think our patients are having excellent success, but we think we're having greater success than our peers.
 
That's one line of evidence to support formal measurement. Another comes from a guy named Hatfield in Pennsylvania, who did a study comparing patients' mental health with clinicians' case notes, and the clinicians missed 75 percent of the people who were getting worse.

In the study we did, we asked 20 clinicians, all doctoral-level psychologists, and 20 trainees getting doctorate degrees to identify the cases they were treating where patients were getting worse and who they predicted would leave treatment worse off. The patients answered a questionnaire at the end of every session, and we identified 40 out of about 350 patients who got worse over the course of their treatment. Of the clinicians in the study, one trainee identified one of those 40 as being worse at the end of the treatment. The licensed professionals didn't identify a single case.

They did identify about 16 people who were worse off in a particular session than they were when they entered treatment, so if they had just used that information alone, they would have increased their predictability a lot. We thought maybe licensed professionals would be better than trainees, but there was absolutely no difference. It’s a blind spot. We’re just ignoring it.
 

The Moneyball Approach to Therapy

TR: This reminds me of that movie, “Moneyball,” where they talk about using statistics to improve baseball outcomes. It’s like a Moneyball approach to therapy.
ML: Exactly. And if you listen to any recent talks by Bill Gates about improving the health of kids in underdeveloped nations and teaching in the U.S., he’s advocating essentially the same thing we’re advocating. You’ve got to measure it. You’ve got to identify the problems because you can’t solve the problem unless you can identify the problem.
The way to identify it is not to ask clinicians. We are optimistic. We have to be. I want clinicians to continue thinking that they’re better than their peers. I want them to continue to have huge impacts on their patients. But there are some patients for whom it just isn’t true. So clinicians can’t do it with their intuition.

In our statistical algorithms, we look for the 10 percent of clients that are furthest off track and then we tell clinicians, “This patient is not on track.” That’s what clinicians can't do on their own. That’s information they need. They don’t actually get better at this over time. Our clinicians are no better now than they were before we started doing this research. They actually have to use the data.
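To make that concrete, here is a minimal sketch, in Python, of what flagging the worst-off decile of a caseload could look like. This is not the OQ Analyst's actual algorithm, which relies on empirically derived expected-recovery curves; the linear expected-score model, the names, and the exact cutoff below are assumptions chosen only to mirror the logic described above.

```python
# Illustrative sketch only: flag the clients whose scores deviate most from an
# expected recovery trajectory. The expected-score model and the 10-percent
# cutoff are assumptions, not the real OQ Analyst rules.
import numpy as np

def expected_score(intake: float, session: int, weekly_gain: float = 2.0) -> float:
    """Toy expectation: distress declines gently from the intake score."""
    return intake - weekly_gain * (session - 1)

def flag_worst_decile(caseload: dict[str, list[float]]) -> set[str]:
    """Return the ~10% of clients furthest above their expected trajectory."""
    deviations = {
        client: scores[-1] - expected_score(scores[0], len(scores))
        for client, scores in caseload.items()
    }
    cutoff = np.percentile(list(deviations.values()), 90)
    return {client for client, d in deviations.items() if d >= cutoff}

# Example caseload: session-by-session scores (higher = more distress).
caseload = {
    "A": [80, 76, 72, 70],   # improving roughly as expected
    "B": [65, 66, 64, 63],   # roughly flat
    "C": [82, 85, 90, 94],   # getting worse -> should be flagged
}
print(flag_worst_decile(caseload))  # {'C'}
```

In a real system the expectation would come from normative data on thousands of similar clients rather than a straight line, but the basic shape is the same: compare observed scores to expected ones and alert the clinician about the worst deviations.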
TR: So this isn’t something that therapists should hope to improve, like getting rid of this blind spot?
ML: No. All our data suggests they don’t improve. 

But Therapy is So Complicated and Nuanced...

TR: We use the OQ Analyst here at my clinic and we find it really helpful. When I talk about it with other clinicians, one thing I hear a lot is, “Therapy is so complicated and nuanced and subtle. How could a computer program possibly understand that?” What would you say to them?
ML: I’d say that computers weigh evidence properly and clinicians don’t. Clinicians don’t know what evidence is relevant to predicting failure and they don’t weigh it. A statistical system actually gives things weight. 
TR: Are you a practicing therapist yourself?
ML: Yes, and I think I’m better than 90 percent of other therapists [laughs].
TR: I’m sure you are! So how has using the OQ affected your personal practice?
ML: Well, I pay attention to it. I realize that it’s much more accurate than I am. So when somebody goes off track I take that seriously. I say, “Well, whatever is causing this—whether it’s something about our therapy or something in the outside world—something is making them deviate from the usual course to recovery.”

The second part of what we developed was a clinical support tool for identifying what might be going on that's causing the deterioration. We have a 40-item measure, the ASC, the Assessment for Single Cases, that measures generic problems in psychotherapy like the therapeutic alliance, negative life events, social support outside of therapy and motivation. And there's a prompt to consider referral for medication. If a patient is getting worse and we're working hard in therapy, then maybe they need to consider being on a medication.

There's also a prompt for a change in therapy tactics, like delivering a more structured psychotherapy—you start increasing the directiveness of the therapy for the off-track cases. If you've ever read any of Luborsky's stuff, they do brief psychodynamic psychotherapy of about 20-25 sessions and they divide what they're doing into supportive tactics and expressive tactics. One goes into deeper exploration of a person and the other offers a more supportive environment. So you might shift from an expressive tactic to a supportive tactic when people go off track, instead of pushing harder to break down defenses. You start to try to strengthen the defenses that are there.

For example, if I were treating a posttraumatic stress disorder patient and we were doing exposure and I was tracking their mental health status and they were going off track, I'd think about giving them coping strategies to deal with their anxiety. We might back off from exposure and make sure they have the tools they need to deal with the anxiety that's provoked by the exposure. They should get more anxious, they should become more disturbed, but it shouldn't last every day of the week after an exposure session. You might think you've got them in the habit of breathing, but they're actually not breathing, and you have to go back to basics and make sure they're taking some time to breathe when they get panicked. So the problem could be anything from a technique that's being misapplied, like exposure therapy, to the need for medication because they're not really able to make use of the therapy and they're decompensating.

Another blind spot for clinicians is the therapeutic alliance. Clinicians tend to overrate it as positive, but it really does correlate with outcome if it’s based on client self-report. We’ve looked at studies where clients are interviewed about the course of therapy and in that case they lie to protect their therapists. But when they take a self-report measure, they’re inclined to give a more honest appraisal. 

My Therapist Was Glad to See Me

TR: What do you use to measure the alliance?
ML: We use the ASC for that, too. Eleven of the 40 items are alliance items and they’re based on traditional conceptions of therapeutic alliance, but with 11 specific items like “my therapist was glad to see me.”
It would be nice if therapists knew when patients didn't think they were glad to see them. That's something that therapists can take action on pretty fast, unless there are strong countertransference problems, in which case they probably need to seek supervision and figure out why they don't like a client.

It might be the time of day, for example. If you see somebody at 5:00, you may not be as perky as at 4:00. Or it may be certain client characteristics, like they're intellectualizing and boring. So we just try to give clinicians individual item feedback on whichever of the 11 alliance items fall below average. But it's only for the 20 percent or so of clients who go off track.
TR: What about dropouts? That’s a pretty chronic, widespread problem in our field that we generally don’t like to talk about. Did OQ help clinicians with that at all?
ML: Yes. What it tends to do in our feedback studies is keep the patients who go off track in treatment longer, with much better outcomes at the end. And it tends to shorten treatment for people who are responding well, because it presumably facilitates the discussion of ending treatment. So overall you get about the same treatment lengths, but you've got more treatment aimed at people who are having a problematic response and less treatment for people who are responding. We actually find that about half the dropouts are completely satisfied with treatment. They quit because they felt better. And that can happen really fast, so not all dropouts are a bad thing; about half of them are.

Suicide and Substance Abuse

TR: You mentioned earlier that the OQ assesses for suicide and drinking and other red flags. Maybe you could just speak to that and how it can help clinicians dealing with these issues.
ML: Well, there are three subscales. There's the symptom distress subscale, which is mainly anxiety and depression with some physical anxiety symptoms. Then there's one on interpersonal relations and one on social role functioning. The role of adults is often to go to work, do their job, get raises and advance their careers. If you're a student, it's succeeding in college or some training program. You can look at those three areas and sort of calibrate where the problems are. Is it across the board, or is it in just one of the three? Then you can focus your treatment based on where the problems are. And there are critical items within those subscales that cover substance abuse and suicide.
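As a rough illustration of how a three-subscale measure with embedded critical items could be scored, here is a small Python sketch. The item-to-subscale mapping, the item numbers, and the alert threshold are invented placeholders, not the actual OQ-45 scoring key.

```python
# Illustrative only: the item groupings, indices, and thresholds below are
# invented placeholders, not the real OQ-45 scoring key.

SUBSCALES = {
    "symptom_distress": [1, 2, 3, 4],        # hypothetical anxiety/depression items
    "interpersonal_relations": [5, 6, 7],    # hypothetical relationship items
    "social_role": [8, 9, 10],               # hypothetical work/school items
}
CRITICAL_ITEMS = {3: "suicidal ideation", 7: "substance use"}  # hypothetical

def score_questionnaire(responses: dict[int, int]) -> dict:
    """Return the total score, subscale totals, and any critical-item alerts."""
    subscale_totals = {
        name: sum(responses.get(i, 0) for i in items)
        for name, items in SUBSCALES.items()
    }
    alerts = [
        label for item, label in CRITICAL_ITEMS.items()
        if responses.get(item, 0) >= 3   # endorsed at a worrying frequency
    ]
    return {
        "total": sum(subscale_totals.values()),
        "subscales": subscale_totals,
        "critical_alerts": alerts,
    }

# Example: elevated symptom distress plus a flagged suicide item.
print(score_questionnaire({1: 4, 2: 3, 3: 3, 5: 1, 8: 2}))
```

The point of keeping critical items visible alongside the subscale totals, as Lambert describes, is that an overall score can hide a spike on a single item, like suicidal ideation, that a clinician would want to see session by session.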

We find clinicians tend to underestimate the problems people have with substances. They're underreported, and even when they are reported they're often not addressed, because people underestimate the negative consequences of substance use. With suicide, no clinician asks patients at every session how suicidal they were this last week, but that can spike quickly. A patient can go from not thinking of suicide much at all to thinking of it almost daily over the last week. One item on suicide isn't a predictor of suicide, but of course predicting suicide is sort of beyond us generally speaking. So it's important to ask about it more frequently.

When I see a client and I give them the OQ45, it gives me right off the bat a gauge of just how unhappy they are, but I don't find it a rich diagnostic instrument. It's more like a blood pressure test. Some people come in with a really high score. If they score 100, then I'm really alert, because if that doesn't come down they're going to do something stupid. They're going to attempt suicide, or drink too much, or be too promiscuous, or they're going to end up in the hospital. So if I was tracking somebody with a score of 100 and we'd had three weeks of therapy and their score hadn't come down, I'd be thinking about medication if they were depressed, more than I would for somebody with a score of 70, which is mildly or moderately disturbed.

For people scoring really high, they’ll likely have a better outcome if they’re not just relying on psychotherapy. So it could prompt a referral, but certainly it’s going to prompt you to be very alert. I usually have a good sense in the first session without the OQ45 of how disturbed people are—unless they’re that exceptional person that doesn’t want to admit to anything, but has plenty of problems. They may not trust you and they may not trust the system and they may not want to report stuff. You find that a lot in the military. When they start to trust you they’re more open.

I saw a borderline patient who didn’t look very borderline on the surface, and it took six months for me to learn that she was cutting herself. I gave her the MMPI as well and she scored quite normally on the MMPI and then was within the average range with OQ45. She presented herself with a simple phobia, a driving phobia. So we were concentrating on the phobia, but there was all kinds of stuff that came out once she felt more trusting. So if there’s a discrepancy between the score on the test and your own intuition, then that tells you the patient may be too ashamed or distrustful to tell you.
 

When Confidence Hinders Us

TR: It seems that a real crux of this is therapists being willing to acknowledge their own limits or blind spots. I came across the outcome measurement before I was licensed. I was a beginner, so it was pretty easy for me to acknowledge. Do you find that more experienced clinicians have a harder time acknowledging that they have blind spots and might need something like the OQ45 to help find them?
ML: I think people trained in CBT and behavior therapies would be open to measurement, although in routine practice they don't really do it the way it's supposed to be done and start relying on their intuition. But CBT therapists generally are more open to it. If you get somebody who's psychodynamic, they're very, very resistant. I've found that it does depend on theoretical orientation. I think also in certain community mental health settings, where the patients are so disturbed, it can be quite disheartening to see the slow rate of change, if there's any change at all. So you'd just rather not see the bad news because you're kind of used to people not responding very much.

So it’s a lot harder to sell with psychodynamic therapists and maybe post-modern therapy. Even though client-centered approaches have a long history of studying the effects of psychotherapy and the process of psychotherapy, they still see simple self-report measures as easily faked.
Psychodynamic therapists are usually overly confident in their clinical judgment, so they see defenses at work everywhere and don’t trust self-report measures. But I think underneath all of that is that once we get into a routine and we develop confidence, we think there is no reason to give new interventions a try. You just hear all kinds of excuses for why people can’t do this and they usually don’t hold water. For example, patients don’t mind doing it at all. They like it.

It’s true across all of medicine, where people are really slow to take advantage of innovations. They only adopt new innovations when the gal in the office adopts it. So you’ve got to get people doing it around you before you decide you’ll give it a try. In our very first study, we only got half the therapists to participate. And then by the time we did our third study, all but one participated. And now if the computer system goes down, people get really upset. They don’t want to work without it. But it took two or three years to get all of them into it.

Innovations are a hard sell. Unfortunately, the way most clinicians get exposed to this is through administrators who make them do it, and then their general attitude is distrust of the way the information is being used. Clinicians passively-aggressively don’t participate, and as a result they sabotage the whole effort. It ends up being a power struggle between clinicians and administrators.
 
TR: This brings up a question I wanted to ask you, which is about using the OQ to compare therapists. I think I’ve heard you say that you don’t think it or other outcome measures should be used to compare therapists. Is that accurate?
ML: Yes. I think you end up on thin ice except in settings where patients are assigned randomly. In most settings, like private practice, they're not assigned randomly, so you can't assume that clinicians have equivalent caseloads. Plus we find most clinicians are in the middle. But you can see a big difference between clinicians at the extremes. The average deterioration rate at the institute is about two to three percent, and then we'll find a clinician who has a deterioration rate of 17 percent. We had one clinician in our center whose patients on average got worse. So I think you can do something with that data. But you wouldn't want to make too much of it, because most of us can't be distinguished. Our patients do well. And our student therapists do as well as our licensed, supervising professionals. That's very disturbing [laughs].

The only thing we can find is that when you see somebody with a lot of experience, their patients get better faster. But the overall outcome is the same. Even the stuff on paraprofessionals doesn’t show a huge difference between professionals and paraprofessionals.

If you go to a conference where people present outcome data on borderlines, they spend half their time arguing that the patients in their setting are real borderlines and the patients in the other people’s settings are mild borderlines or not real borderlines. Everybody always wants to say, “I have tougher cases,” but it’s not true all that often.
 
TR: Well, that's how I personally know I'm in the top 10 percent of therapists, because I'm getting average results, but with really tough cases [laughs].
ML: But the really tough cases, from the point of view of measuring outcomes, are patients who aren't disturbed. If I was going to fill my caseload to make my data look good, I'd go for the moderately disturbed patients. I would not want patients who were close to the norm, because those people are not going to change. They have nowhere to go. Whereas for the people who are admitting a lot of disturbance, it's harder for them to get worse and there's a lot of room for them to improve. Does that make sense?
TR: Absolutely.
ML: They would change a lot. They may never enter the ranks of normal functioning, but they would definitely improve.

The Fact Is, We're All About Average

TR: There’s a handful of therapists, including myself, who have been making our outcome data available to the general public, to prospective clients. Do you think that’s a legitimate use of the outcome data?
ML: I have some concerns about it, so I guess it depends on how it's used. Because in some ways you don't want patients to know the truth that they have, say, a 50 percent chance of recovering. And if it's in comparison to other therapists, then you've got to make sure there's some way of making the cases equivalent. Individual clinicians can't do this, unless they're gifted with statistics. What we're doing in managed care is calculating the expected level of success for a clinician based on their mix of clients. So if you had one kind of mix, the expectations would be higher than if you had a different mix. And then you can see how they perform in relation to the expected treatment response for their mix.
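The case-mix adjustment described here can be pictured roughly as follows. This is a toy sketch only: the severity-based expectation is a made-up stand-in for the statistical benchmarking Lambert describes, and the numbers are invented.

```python
# Toy case-mix adjustment: compare each clinician's observed change to what
# would be expected given how disturbed their clients were at intake.
# The expected_change() formula is invented for illustration only.
from statistics import mean

def expected_change(intake_score: float) -> float:
    """Made-up expectation: more disturbed clients have more room to improve."""
    return max(0.0, 0.3 * (intake_score - 40))   # 40 ~ a notional 'normal' score

def clinician_report(cases: list[tuple[float, float]]) -> dict:
    """cases: (intake_score, final_score) pairs for one clinician's caseload."""
    observed = [intake - final for intake, final in cases]
    expected = [expected_change(intake) for intake, _ in cases]
    return {
        "observed_mean_change": mean(observed),
        "expected_mean_change": mean(expected),
        "relative_performance": mean(observed) - mean(expected),
    }

# Clinician A sees a milder caseload; Clinician B sees a more disturbed one.
# B shows bigger raw change, but relative to expectation A is doing better.
print(clinician_report([(55, 48), (60, 50)]))   # A: +3.25 vs expectation
print(clinician_report([(95, 80), (100, 88)]))  # B: -3.75 vs expectation
```

The design point is that raw change scores reward clinicians who happen to see more disturbed clients, so each clinician is compared against the outcome expected for their own mix rather than against a single average.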

The fact is we're all just about average, so we have no unique claim to effectiveness unless we're the outlier. So it might be good for outliers on the positive side. The average clinician is just able to say, "My outcomes are as good as anyone else's."
 
TR: Our outcomes, as a field, are pretty good, though, especially when you compare them to medical outcomes.
ML: Yes, I think we have a lot to be proud of. 
TR: So your average clinic therapist is actually pretty good.
ML: Yes, I think so. But knowing routine care clinics, the average number of sessions is three or four. So that’s a dose of therapy that’s good for 25 percent of people, not 75 percent. 
TR: What about for therapists who do want to get better? I know a lot of the Psychotherapy.net readers are there to learn new techniques and broaden their skills and knowledge. Can the OQ help people become better therapists?
ML: Maybe in the long, long run, but I don’t think there’s any evidence for it. I think you’ve got to go through the procedures, get the feedback and figure out a way to make it work for the patient. But if they don’t get feedback, they’re not going to be able to identify problem cases and make appropriate adjustments.

What's true is that you need to be measuring patients on an ongoing basis and get feedback when clients are failing. I don't think there's much effect from giving feedback to clinicians whose patients are progressing well. They may like it, but as far as improving their outcomes, most of the bang for the buck is when the therapy has gone off track. That's the novel information.
Feedback helps when it's novel, when it's giving you information that you didn't know about.
 
TR: It sounds like what you are saying is that the way we improve is by really recognizing our blind spots and finding tools to help us there, rather than thinking we're going to overcome them.
ML: Yes. The practice of medicine is a good analogy. I don’t think my doctor is any better at guessing my blood pressure after measuring everybody’s blood pressure and getting feedback. I just don’t think he can operate without a lab test. I don’t think we want people managing medical illnesses without lab tests. And they don’t feel any shame at all. They feel like they really get good information and they wouldn’t dream of managing a disease without that information. They don’t expect themselves to be able to do it or learn from it.

If you look at psychoactive medications—I'm just shocked at how poorly their use is managed. If you work at UCLA, you believe one thing is best practice, and if you work at NYU, you've got a completely different set of practices. And it's not like it's based on how your patients are responding to the drugs, because it's very poorly monitored.

I hope this is not too disappointing.
 
TR: How so?
ML: Well just that the feedback is absolutely essential. Therapists can’t just “get good.”
TR: I actually find it liberating because it means I don’t have to try to become good at something that I’m just inherently not good at. So it kind of takes the load off. I just hope we can find more things like this in the future to point out our blind spots and help us so we don’t have to run around pretending they’re not there.
ML: We’ve confirmed our findings in study after study—and now there are more studies coming out of Europe—but it’s really hard to get clinicians to do it. There are people who adopt this early in their careers, but many people are pretty closed and defensive.
TR: Well, I'm a psychodynamic therapist—I do short-term dynamic work and I'm part of a psychodynamic community—and I have found that newer therapists are just a lot more open to it and are kind of growing up with it.
ML: And they’re not so afraid of technology.
TR: Yeah, that too. So I’m really hoping that the psychodynamic community can start to embrace this instead of resisting it.
ML: It’s not an easy sell, but we’ll see.
TR: Well, it’s been a really fascinating conversation. Thank you so much for taking the time to talk about your work. 
ML: It was my pleasure.



Bios
Michael J. Lambert, PhD is a professor of Psychology at Brigham Young University and has been in private practice as a psychotherapist throughout his career. His research spans 30 years and has emphasized psychotherapy outcome, process, and the measurement of change. He has edited, authored, or co-authored nine academic research-based books and 40 book chapters, while publishing over 150 scientific articles on treatment outcomes. He is the co-author of the Outcome Questionnaire, a measure of treatment effects that is growing in popularity.
Tony Rousmaniere, PsyD is Clinical Faculty at the University of Washington and has a private practice in Seattle. He hosts the clinical training website www.dpfortherapists.com, and is the author/editor of four books on clinical training: Deliberate Practice for Psychotherapists, The Cycle of Excellence: Using Deliberate Practice to Improve Supervision and Training, Using Technology to Enhance Counseling Training and Supervision: A Practical Handbook, and the forthcoming Mastering the Inner Skills of Psychotherapy: A Deliberate Practice Handbook. In 2017 Dr. Rousmaniere published an article in The Atlantic Monthly, "What Your Therapist Doesn't Know." Dr. Rousmaniere provides workshops, webinars, and advanced clinical training and supervision to clinicians in the United States, the United Kingdom, Europe, Asia, and Australia. He was previously Associate Director of Counseling and Director of Training at the University of Alaska Fairbanks Student Health and Counseling Association. More about Dr. Rousmaniere can be found at www.drtonyr.com.
