We developed the measure essentially to reduce treatment failure. It came out of the problem of managed care bothering clinicians with management bureaucracy around cases they knew nothing about. And so the idea was to stop managed care from managing all the patients in the clinician’s caseload and to focus on the management of patients not responding to treatment. So it’s not for all patients. It’s not necessary for the majority of the patients, actually—but it is necessary for patients who are not progressing or are getting worse.
Our estimate is that about 8 percent of adult patients actually deteriorate by the time they leave treatment, and with kids it’s at least double that. So 15 to 24 percent of child and adolescent clients actually leave treatment worse off than when they started, and that doesn’t include people who simply aren’t improving. But in our survey of clinicians we asked what percent of their patients were improving in psychotherapy, and they estimated 85 percent. This is a major blind spot for clinicians. They’re not good at identifying cases where patients are not progressing or are getting worse. Even in clinical trials where you’re delivering evidence-based psychotherapy with well-trained clinicians who are following the protocol, you’re only getting about two-thirds of those patients responding to treatment. And in routine care, the percentage of responders is closer to one-third. So clinicians’ estimates are way overstated.
In many ways, I think it’s a necessary distortion for clinicians; in order for us to remain optimistic, dedicated, committed, and engaged, we have to look for the silver lining even when patients are not changing overall or are outright worsening. It’s kind of a defensive posture, and generally it serves both clients and clinicians well, because the more success we see in our patients the happier we are in our jobs. But the downside falls on the subset of patients who are not on track for a positive outcome. The distortion doesn’t work in their favor.
So if you’re in that world of overestimating the successes, then you’re not going to be motivated to adopt what we’ve developed because you can just stay in the happy world of optimism. But if you actually measure people’s symptoms and their interpersonal relationships and their functioning at work or homemaking or study, then the patients aren’t reporting the same thing that clinicians are reporting. That’s a problem.
Another related problem is just how good clinicians think they are at having success compared to other clinicians. Ninety percent of us who practice—I’m one of those 90 percent—think our patients’ outcomes are better than our peers’ outcomes.
That’s one line of evidence to support formal measurement. Another comes from a researcher named Hatfield in Pennsylvania, who did a study comparing patients’ mental health with clinicians’ case notes; the clinicians missed 75 percent of the people who were getting worse.
In the study we did, we asked 20 licensed clinicians, all doctoral-level psychologists, and 20 trainees working toward their doctorates to identify the cases they were treating where patients were getting worse and to predict who would leave treatment worse off. The patients answered a questionnaire at the end of every session, and we identified 40 out of about 350 patients who got worse over the course of their treatment. Of the clinicians in the study, one trainee identified one of those 40 as being worse at the end of treatment. The licensed professionals didn’t identify a single case.
They did identify about 16 people who were worse off in a particular session than they were when they entered treatment, so if they had just used that information alone, they would have improved their predictive accuracy considerably. We thought licensed professionals might be better than trainees, but there was absolutely no difference. It’s a blind spot. We’re just ignoring it.
In our statistical algorithms, we look for the 10 percent of clients who are furthest off track, and then we tell clinicians, “This patient is not on track.” That’s what clinicians can’t do on their own. That’s information they need. They don’t actually get better at this over time; our clinicians are no better now than they were before we started doing this research. They actually have to use the data.
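As a rough illustration only: the real OQ-45 feedback system uses empirically derived expected-recovery curves, which are not described here, so this sketch substitutes a made-up linear expectation. The shape of the logic, though, matches what’s described above: compare each patient’s latest score to what would be expected given their intake score, and flag the worst-deviating 10 percent of the caseload.

```python
# Illustrative sketch -- NOT the actual OQ-45 algorithm. The linear
# "expected_score" below is an assumption standing in for the real
# empirically derived expected-recovery curves.

def expected_score(intake, session):
    """Hypothetical expected score after `session` sessions.

    Assumes a modest average improvement per session; higher OQ
    scores mean more distress, so the expected score declines.
    """
    return intake - 1.5 * session  # 1.5 points/session is invented

def flag_off_track(patients, worst_fraction=0.10):
    """Flag the fraction of the caseload deviating most above expectation.

    `patients` maps patient id -> (intake score, latest score, session number).
    A positive deviation means the patient is doing worse than expected.
    """
    deviations = {
        pid: latest - expected_score(intake, session)
        for pid, (intake, latest, session) in patients.items()
    }
    n_flag = max(1, round(len(deviations) * worst_fraction))
    ranked = sorted(deviations, key=deviations.get, reverse=True)
    return set(ranked[:n_flag])

caseload = {
    "A": (80, 60, 6),   # improving as expected
    "B": (70, 72, 6),   # slightly worse than intake
    "C": (65, 85, 6),   # markedly worse -- the off-track case
}
print(flag_off_track(caseload))  # → {'C'}
```

The point of the sketch is that flagging is relative to an expected trajectory, not to the intake score alone, which is why clinicians eyeballing session-to-session change miss most of these cases.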
The second part of what we developed was a clinical support tool for identifying what might be causing the deterioration. We have a 40-item measure, the ASC, the Assessment for Signal Clients, that measures generic problems in psychotherapy like the therapeutic alliance, negative life events, social support outside of therapy, and motivation. And there’s a prompt to consider a referral for medication: if a patient is getting worse and we’re working hard in therapy, then maybe they need to consider being on a medication. And there’s a prompt for a change in therapy tactics, like delivering a more structured psychotherapy—you start increasing the directiveness of the therapy for the off-track cases. If you’ve ever read any of Luborsky’s work, they do brief psychodynamic psychotherapy of about 20-25 sessions and divide what they’re doing into supportive tactics and expressive tactics. One goes into deeper exploration of the person and the other offers a more supportive environment. So you might shift from an expressive tactic to a supportive tactic when people go off track, instead of pushing harder to break down defenses. You start trying to strengthen the defenses that are there.
For example, if I were treating a posttraumatic stress disorder patient and we were doing exposure, and I was tracking their mental health status and they were going off track, I’d think about giving them coping strategies to deal with their anxiety. We might back off from exposure and make sure they have the tools they need to deal with the anxiety that’s provoked by the exposure. They should get more anxious, they should become more disturbed, but it shouldn’t last every day of the week after an exposure session. You might think you’ve got them in the habit of breathing, but they’re actually not using the breathing, and you have to go back to basics and make sure they take some time to breathe when they get panicked. So the problem could be anything from a technique that’s being misapplied, like exposure therapy, to the need for medication because they’re not really able to make use of the therapy and they’re decompensating.
Another blind spot for clinicians is the therapeutic alliance. Clinicians tend to overrate it as positive, but it really does correlate with outcome if it’s based on client self-report. We’ve looked at studies where clients are interviewed about the course of therapy and in that case they lie to protect their therapists. But when they take a self-report measure, they’re inclined to give a more honest appraisal.
It might be the time of day, for example. If you see somebody at 5:00, you may not be as perky as you were at 4:00. Or it may be certain client characteristics, like they’re intellectualizing and boring. So we provide clinicians with item-level feedback on whichever of the 11 items fall below average. But it’s only for the 20 percent or so of clients who go off track.
When I see a client and give them the OQ45, it gives me right off the bat a gauge of just how unhappy they are, but I don’t find it a rich diagnostic instrument. It’s more like a blood pressure test. Some people come in with a really high score. If they score 100, then I’m really alert, because if that doesn’t come down, they’re going to do something stupid. They’re going to attempt suicide, or drink too much, or be too promiscuous, or they’re going to end up in the hospital. So if I were tracking somebody with a score of 100 and after three weeks of therapy their score hadn’t come down, I’d be thinking about medication if they were depressed, more than I would for somebody with a score of 70, which is moderately or mildly disturbed.
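The blood-pressure analogy can be made concrete with a tiny triage rule. The cutoffs here (100 as the high-alert score, 70 as moderate/mild) are taken from the anecdote above, not from the published OQ-45 scoring manual, and the three-week window is likewise just the example given:

```python
# Triage sketch using the thresholds from the anecdote above (100 and 70).
# These are NOT the official OQ-45 clinical cutoffs -- consult the scoring
# manual for real interpretation; this only illustrates the reasoning.

def triage(intake_score, weekly_scores, high=100, moderate=70):
    """Return an alert level given an intake score and weekly follow-ups."""
    latest = weekly_scores[-1] if weekly_scores else intake_score
    if intake_score >= high and latest >= high:
        # Severe distress that hasn't come down after tracking:
        # the anecdote's prompt to consider a medication referral.
        return "high alert: consider medication referral"
    if intake_score >= moderate:
        return "moderate: continue monitoring"
    return "mild: routine care"

print(triage(100, [102, 99, 101]))  # high alert: no movement in 3 weeks
print(triage(70, [65, 62, 60]))     # moderate: improving, keep monitoring
```

The design point is that the same follow-up scores mean different things depending on where the patient started, which is why the rule keys on the intake score first.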
For people scoring really high, they’ll likely have a better outcome if they’re not just relying on psychotherapy. So it could prompt a referral, but certainly it’s going to prompt you to be very alert. I usually have a good sense in the first session without the OQ45 of how disturbed people are—unless they’re that exceptional person that doesn’t want to admit to anything, but has plenty of problems. They may not trust you and they may not trust the system and they may not want to report stuff. You find that a lot in the military. When they start to trust you they’re more open.
I saw a borderline patient who didn’t look very borderline on the surface, and it took six months for me to learn that she was cutting herself. I gave her the MMPI as well and she scored quite normally on the MMPI and then was within the average range with OQ45. She presented herself with a simple phobia, a driving phobia. So we were concentrating on the phobia, but there was all kinds of stuff that came out once she felt more trusting. So if there’s a discrepancy between the score on the test and your own intuition, then that tells you the patient may be too ashamed or distrustful to tell you.
So it’s a lot harder to sell to psychodynamic therapists and maybe post-modern therapists. Even though client-centered approaches have a long history of studying the effects and process of psychotherapy, they still see simple self-report measures as easily faked.
It’s true across all of medicine, where people are really slow to take advantage of innovations. They only adopt new innovations when the gal in the office adopts it. So you’ve got to get people doing it around you before you decide you’ll give it a try. In our very first study, we only got half the therapists to participate. And then by the time we did our third study, all but one participated. And now if the computer system goes down, people get really upset. They don’t want to work without it. But it took two or three years to get all of them into it.
Innovations are a hard sell. Unfortunately, the way most clinicians get exposed to this is through administrators who make them do it, and then their general attitude is distrust of the way the information is being used. Clinicians passive-aggressively don’t participate, and as a result they sabotage the whole effort. It ends up being a power struggle between clinicians and administrators.
The only thing we can find is that when you see somebody with a lot of experience, their patients get better faster. But the overall outcome is the same. Even the stuff on paraprofessionals doesn’t show a huge difference between professionals and paraprofessionals.
If you go to a conference where people present outcome data on borderlines, they spend half their time arguing that the patients in their setting are real borderlines and the patients in the other people’s settings are mild borderlines or not real borderlines. Everybody always wants to say, “I have tougher cases,” but it’s not true all that often.
The fact is we’re all just about average, so we have no unique claim to effectiveness unless we’re an outlier. Measurement might be good for outliers on the positive side; the average clinician is just able to say, “My outcomes are as good as others’.”
What’s true is you need to be measuring patients on an ongoing basis and get feedback when clients are failing. I don’t think there’s much effect from giving feedback to clinicians whose patients are progressing well. They may like it, but as far as improving outcomes, most of the bang for the buck comes when the therapy has gone off track. That’s the novel information.
If you look at the psychoactive medications—I’m just shocked at how poorly it’s managed. If you work at UCLA, you believe one thing’s the best practice and if you work at NYU, you’ve got a completely different set of practices. And it’s not like it’s based on how your patients are responding to the drugs because it’s very poorly monitored.
I hope this is not too disappointing.