Daryl Chow on Reigniting Clinical Supervision

Supervision at the Crossroads

Lawrence Rubin: Good morning Daryl. Thanks for sharing your time with our readers. Your research and writing suggest that supervision as it has traditionally been practiced is in crisis. What is the crisis in the field of supervision that you are responding to in your work?
Daryl Chow: I think there are weaknesses in the status quo practice of supervision, and that is something we should pay attention to and do something about. I think change needs to grow from what we know from the research, as well as from clinical practice in supervision. We need to move closer to two domains: helping therapists improve their performance and, while they're doing that, emphasizing what they are learning. So,
it's not just helping supervisees with what they're doing on a case-by-case basis, but also helping them to develop and evolve through time.
LR: What does it mean to help supervisees or therapists grow and develop, as opposed to just performing in supervision?
DC: In my online course, Reigniting Clinical Supervision, we make an important distinction from the get-go between coaching for performance and coaching for development and learning. Coaching for performance is one way of doing clinical supervision where we help each therapist improve in the stuck cases they are presenting in supervision. This is indeed important in helping them work through the clinical issues that may be blocking progress or preventing them from making inroads in their work with clients.

But I also think what supervisors need to support is an undulating process of helping clinicians with their stuck cases, while also trying to glean general principles that help clinicians identify the patterns showing up across these stuck cases. It is a matter of looking closely at the cases in which the clinician is not making progress in order to help them in their own personal and professional development. This transcends case-by-case supervisory discussion in order to focus on the therapist’s growth edge: those skills and characteristics that are generalizable, or what Wendell Berry, writing about agriculture, calls solving for pattern. So, these two worlds of coaching, or supervising, for performance and for development need to come together in the supervisory relationship.

If you look at the literature right now from Edward Watkins and others who have done great work in the study of clinical supervision, we have not made any progress. If the outcome of effective supervision is reflected or measured in client improvement, we have not actually moved the needle.

Tony Rousmaniere and his colleagues wrote a paper in which they concluded that
the variance in client outcome accounted for by clinical supervision is less than 1%, which means not much, right? That's concerning, because we put so much time, effort, and money into supervision. So, while I don't think I would use such a strong word as crisis to describe the field of clinical supervision, there is definitely a need for change. I really think that we are seeing things slowly changing on the ground level and there are people who are trying to change what we have come to accept as standard practice in supervision. 

Supervising for Development

LR: Okay, so what is the supervisor actually working on when she is focused on the supervisee's development?
DC: Well, the short answer is specific stuff such as the supervisee’s learning objectives. And their learning objectives are based on their performance. I will give you an example. If a clinician was to seek help from a clinical supervisor, that clinician (the supervisee) would first need to have a baseline of their performance, not just at the client-by-client level, but based on a composite of cases that they're seeing that provides them with enough reliable client outcome data.

And then, from those results, they would try to figure out where they're at before deciding where they need to go and what issues they need to address in supervision. I think that's a critical first step, because better results in clinical supervision, as measured by client outcome, are obtained sequentially, not simultaneously. By that I mean we need to figure out where the supervisee is at. If their clinical outcomes are average, that really doesn’t say much about what they need to do in order to improve their performance. It is a matter of taking the second step, which is zooming in on those areas of clinical practice and the therapeutic relationship where that clinician needs to improve. Simply noting that the clinician is “average regarding their clinical outcomes” doesn’t tell the supervisor where she needs to focus her lens regarding the supervisee’s skills and development.

So, as an example, if a clinician’s performance was average compared to international benchmarks, the supervisor would then focus in on those cases in which the clinician was stuck. They might listen to some recordings of the clinician’s work to discover that the clinician and the client did not develop therapeutic goal consensus. And it is often the case that
goal consensus is one area that's not often fleshed out or verified in the first or even in subsequent sessions. You and I both know that the goalpost changes as we go, right?

Sometimes the goal is to figure out the goal, to figure out what is or should be the focus of the session. Then the therapist and supervisor work on that one specific area. And then—and this is the critical piece—if the clinician and client are indeed working on goal consensus, it's important for both the therapist and the client, as well as the therapist and the supervisor, to follow through with the work towards that goal and then determine if doing so actually had an impact on therapeutic outcome.  
LR: And just to define the outcomes variables you're talking about—are you talking about outcomes in the client progress, or in the supervisee’s behavior?
DC: I think you hit on an important note, because the feeling of benefit for the therapist does not mean actual benefit for the client that they work with. Remember, we're dealing with two steps removed from the office, so we need to make sure that the work we are doing with the supervisee translates into positive outcome for the client. It's almost like a paradox if you see two overlapping circles. Yes, it's about the supervisee’s performance, but if you focus purely on their performance, you're not going to go anywhere with the client. You're going to be riddled with anxiety. "Am I doing well? Am I doing badly?" And there's so much judgment involved.

We need to see the impact on our clients and see if our learning leads to impact on the people we're working with. If the learning was focused on goal consensus, we want to see that it actually translates into impact on the clients you're working with, one client at a time. But we also want to see if that helps you move your effectiveness above your baseline.
LR: It seems you're saying that, if a supervisor is good at his or her job and guiding the supervisee effectively in the deliberate practice of therapy, then the client will by definition improve.
DC: Wouldn't you expect that?
LR: I would, but isn't it possible that—and I'm not trying to be provocative—but that a supervisor may be very effective in guiding the supervisee or the clinician in deliberately practicing their craft, but the client doesn't improve? Does that mean that the supervision failed? Or might it just be that something was missed? In other words, can you have good supervision and still poor therapeutic outcomes? Or do poor outcomes in therapy mean that the supervision was not effective?
DC: That's a really good point that world-champion poker player, Annie Duke, talks about in her book, Thinking in Bets. She makes a very important distinction which I think we need to think about slowly and carefully. And the point that she was making is:
we tend to conflate outcomes with process.

She says that when we get a poor outcome, let's say in the game of poker, we assume our process is responsible for that outcome; we conflate the two. If you have taken the time to think carefully about how you're making decisions, how you're building the process and making a good plan, then, if the outcome is bad, don't make that conflation too quickly.

Because in the game of poker, just like in the game of life, there's a lot of random noise, a lot of things that are beyond your and my control. But if you understand, with the help of a supervisor, that you are working on something critical (in our case, goal consensus, because we know the effect size for goal consensus is huge), then it becomes a matter of focusing more directly on building that particular skill in supervision, not other skills unrelated to goal consensus.

And if goal consensus is indeed important—even if one client doesn't work out well, you don't want to go and throw the baby out with the bathwater. You want to just go back and refine goal consensus building skills again. Close the loop. And this is one thing supervisors and therapists can do, is to make sure that, after a discussion, they close the loop.

It sounds so plain and simple, but I think it's really something that's lacking in supervision as well as clinical practice, that people don't really close the loop by figuring out ways to refine the important skills in supervision that actually impact client outcome. If you continue doing this with other clients, will this have an impact as well? 

Deliberate Practice

LR: Along these lines, you have an upcoming book, Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness, with Scott Miller and Mark Hubble. How can supervisors use deliberate practice to improve not only their supervisee's performance but their own performance as supervisors?
DC: It's a brilliant question, and I know, Lawrence, we've talked about this. My belief at this point is that it is critical. We are really in the early days of this type of investigation, but I think it's an important area to work on, and here's why.

My belief is that knowledge is multilevel. When we are working in supervision, we are really working within a multi-tiered structure that includes the supervisor, supervisee and the client. Let me use an analogy from the world of music. I'm impressed not just by what a musician does in a music studio or how they work; I'm always interested in who else is in the room. And one of the things that comes up very often for me is the role of the producer. Sometimes it's the group of artists itself, and sometimes it's someone else.

And a couple of people that stick out to me are Brian Eno, who has worked with Talking Heads, Madonna, U2, and Rick Rubin who has worked with death-metal bands like Slayer. He's worked with many Hip Hop artists. He's also worked with the late Johnny Cash. There’s something about being in the presence of these types of producers that brings out the best in the musicians.

My question is twofold. One, what the hell are these producers doing that brings out the best in the musicians? But I'm also interested in how I can help others, and myself, become more like a coach or mentor in the mold of college basketball’s John Wooden. And the one thing that is becoming a little bit clearer as I go is that we really need a system of practice, a way to systematically organize ourselves around how we think about supervision. So, when I say system, it can be as simple as: how do we track outcomes?

My mentor and collaborator, Scott Miller, talks a lot about feedback-informed treatment. To me, measuring what we value is key, because measurement precedes professional development, so it is critical to help people, supervisees in this case, to systematically track their outcomes and to have a system of coaching already in place by the time they come into supervision.

And then we develop a taxonomy of deliberate practice activities, so we know where they're at in the baseline, and how to help them figure out a way to deconstruct the therapy hour and then pick up little things that they can work on. So, I guess my short answer, or rather my long answer, is really to figure out a system that can function as a platform from which we can begin to work on the more nuanced stuff in the role of supervisor. Am I making sense about this?

A Portfolio of Mentors

LR: You are indeed, Daryl, and related to this notion of the producer and artist working in collaboration, you recommended that clinicians build a portfolio of mentors. Does that mean that, even though supervision is, as you call it, a signature pedagogy, that clinicians should build a production studio of sorts with other professionals? 
DC: As much as supervision is a signature pedagogy for our field, what's interesting to me of late is how often people reach out for consults or coaching only after having given up, for various reasons, on working with a supervisor, unless they are in an agency setting where that is provided. But, yes, I think the idea of a portfolio of mentors is to say that
if you can figure out your leading edge, or the gap that you're trying to work on, your default supervisor may or may not have the knowledge to help you.

And what you want to do is to create a community of people that you can turn to, that you can talk with, and then maybe a certain person you turn to more routinely. For instance, I've known a supervisor for more than a decade, and I always return to her. But if there was something else that was missing, or I wanted to stretch out and pick another mind to think of it from a different perspective, I would reach out to other people, even people who are so-called experts, and send them an email. I would ask them, "What's the fee? Can I come talk with you?" And most people are friendly. 
LR: In a way, isn’t that what you are trying to provide through your online supervision training, Reigniting Clinical Supervision?
DC: My focus for Reigniting Clinical Supervision is to help clinical supervisors design better learning environments that sustain real development for therapists, so as to achieve better client outcomes. The choice of an online learning platform is not a mere substitute for live teaching. Instead, drawing on the best of what we know about optimizing learning, a “one idea at a time,” drip-based method of delivering content that maintains learner engagement helps the busy practitioner weave what they learn into practice, and return to renew and consolidate new knowledge as a result of being in the course with me and other clinicians and supervisors.

Here’s how I think about the difference between a live training and how Reigniting Clinical Supervision is designed: A real-time training or workshop is like a river. It is a constantly flowing torrent of ideas. If the learner steps out of the river for a few minutes, or needs some time to think, he is now behind. The learner may be able to ask questions, but needs to constantly try to catch up and not fall behind. Revisiting the content after some time for reflection is not possible; all you have are the notes or slides you've captured.
Online learning, on the other hand, is like a lake. The learner can step in and out of the water at her own time, and pace herself as she moves along; the water remains the same. This stillness allows for pausing, revisiting the material, reflecting, and connecting with past knowledge. Online learning at its best allows for the learner to ask questions, revisit the materials, and for the person to master a difficult segment before moving on.
LR: Within this community of mentors model, there are different factors that predict therapeutic outcome. They include goal consensus, alliance and repairing therapeutic ruptures. Can the same principles be applied to improve supervisor performance and development?
DC: Hopefully, that's paralleled or modeled within the supervisory work. I would encourage supervisors to also elicit feedback within the supervision. Most of us do that, but it is also important to make it a bit more of a ritual. This would mean using some quick check-ins that give the supervisee some space to think about it, and then exploring the nuances of the supervisor/supervisee relationship. It's much harder to give feedback when you really know somebody well, as the supervisor and supervisee know each other.
LR: Have you experienced working with expert clinicians who are lousy supervisors?
DC: I'm thinking of the converse. So, let me look back in my mind. I don't mean this in any disrespectful way because I really respect this person's work. Jay Haley of the strategic school of family therapy talked about this and said that he was really good as a supervisor, but not as good as a therapist [laughs].
LR: I think of myself as being a better supervisor and teacher than therapist. In your language, perhaps that’s because I have not deliberately practiced therapy.
DC: Yes, right.
LR: I've performed therapy, but in the words of Scott Miller, I've not deliberately practiced it. So, it's interesting that just because someone may be a very competent clinician, it doesn't mean that they have the patience or skill to guide a fellow clinician as a supervisee, and vice versa.
DC: This harkens back to your question about the role of training supervisors in how they do deliberate practice, because, to me, there are overlaps, of course, but there are also distinct skills required in their roles as supervisors and therapists.
The role of a supervisor requires some skill to be able to articulate the concepts without getting lost in the weeds of abstraction.

Cardinal Supervision Mistakes

LR: Talking about getting lost in the weeds, you wrote an article for us about seven mistakes in clinical supervision. If you were to pick the top two cardinal mistakes from that list of seven that supervisors make, which ones flash red to you, and what can supervisors do about them?
DC: This is tough because the language around mistakes is all negative. I think, for me, the one that I've seen in my own experience and through my own mistakes is that of too much theory talk.
I think we talk too much. On the ladder of abstraction, talk is quite high up there. Bear in mind, when we're in supervision and in the absence of the actual client, we spend all our time talking in abstractions, at the level of theories about the client rather than about the therapeutic relationship.

When we're doing that, we've got to bear that in mind, that we don't have that person there, and we're talking at the level of theoretical abstraction, so many steps removed from what is occurring between the supervisee and the client. It's very easy to speak of it from whatever orientation or whatever philosophy you hold, without joining the dots of what's going to ripple down into the actual therapeutic relationship where the real work is happening.

Another big mistake in supervision is that when the clinical work is stuck and the supervisee and client are not making progress, the supervisor may say something in an attempt at being supportive to the supervisee like, "Well, at least they keep coming back, right?" In this instance, the supervisor is doing little more than what I call, patting them on the back–encouraging the supervisee without giving her any clear direction out of the stuck situation.

I'm really conflicted about that statement that I hear very often. Is that good enough for you, that they still come back? Or what else? What else can we be thinking of? How do we escape this domain of just talking at that level and make some real impact?
LR: I know that being able to effectively conceptualize a clinical case, to think about it from different theoretical perspectives, is important. But you're saying, Daryl, that sometimes we err on the side of overthinking the theory at the expense of guiding the supervisee in building the relationship with their client, and then we congratulate the therapist for minimal progress? Seems like damning with faint praise.
DC: Yes and no. I think all prudent supervisors know that therapeutic relationship really matters. And by therapeutic relationship, let's be clear, it's not just about the emotional bond, even though that is one critical part. But the other part is the focus, which is about the goals, the directionality, where it's going. The next is also about whether there is a cogent method for both the therapist and the client. Are we in agreement? Is there a fit in where we're going? All those things relate to the therapeutic alliance.

I think most people are focused on that. But as you will see in the upcoming blog that I am writing for Psychotherapy.net, I will be talking about three types of supervisory knowledge. The first is content knowledge: knowledge about the clinical case, about the psychopathology. Those things are necessary but not sufficient. The second is process knowledge: how do you engage with somebody who is, say, depressed? How do you engage with somebody who's anxious? That's a process, or relating, kind of knowledge. How do you have that kind of conversation? As David Whyte, the poet and philosopher, would say, "the conversational nature of reality." How do you engage in that? How do you come into being with another person in that field? The third is conditional knowledge: if you're working with somebody who's depressed due to bereavement, it's going to be very different than when you're working with somebody who is also depressed, but due to, say, domestic violence. The context is very different, and you need to figure out a way of relating with them given the different situation.

So, by considering all three of these in supervision, content knowledge, process knowledge and conditional knowledge, I think the supervisor can synergize them for the benefit of both the therapeutic work and the development of the supervisee. The supervisor and supervisee having this multi-level conversation will benefit both the client and the supervisee.

The Humble Teacher

LR: What do you see as some of the important personal qualities of an effective supervisor or a clinician who might become an effective supervisor?
DC: For me, of course,
a good teacher is somebody who is willing to be a good student. If I'm picking a supervisor for myself, I'm always looking for somebody who, implicitly (it's not something that people would say explicitly), is willing to be wrong, willing to seek the counterfactuals, and who has by default a stance of humility, not because they're trying to act humbly or bragging about their humility.

This humble teacher will say, “Hmm. Oh, hang on a second. I've really never thought of that.” And they're rethinking. That, to me, is interesting. And it's not because they don't have a wealth of knowledge. It's because this is dis-confirming what they know. And that's so exciting. That's like fresh air, you know, when you're working with somebody that way.

Additionally, somebody who has mental models or mental representations and concepts in their head about different ways to think about clinical situations and suggestions for the supervisee. They know that when they're facing this kind of situation, they have what Gerd Gigerenzer calls fast and frugal heuristics. They have little maps of how they will approach stuff. You know, they've thought it through before. They have ideas in their memory bank that they will pull into their working memory.

And you can tell, because when they're just giving off-the-fly statements, you know that it's off the fly. But if you know that they've thought about it, you realize their mental networks are vast. They know that it's an “if-then” situation, and they're thinking about it across all kinds of communications. That excites me, because it shows that this person has done some thinking before meeting with you.
LR: Is this what you refer to when you say that true experts think like novices, or beginning therapists, while true novices think they're experts? Is it related?
DC: I think so. [chuckles] I think so.
LR: I like that idea that the expert supervisor, who may or may not be an expert clinician, has these—what did you call them—fast and frugal heuristics? Was that the term that you used?
DC: That's right, and that's the term from Gerd Gigerenzer, who studies cognitive science. He talks of the importance of having these sorts of heuristics. The way we've been terming it is mental representations. What happens in a session might not be easily explained using therapeutic models alone, but by different ways of thinking. Like, what do you do if you meet somebody who is angry or depressed in the session? These heuristics or maps are not stock answers, but are based on clear principles that flow from these mental representations. What do you do with somebody who doesn't have a goal? How do you work with them? They have a rough and ready guide.

At the Cutting Edge

LR: So, the supervisor should aspire to flexible thinking, drawing on different belief systems, different ways of looking at the human condition, different interpretations of the same clinical presentation? It sounds like the advanced supervisor is out at this cutting edge of creativity, untethered to any one way of thinking.
DC: Yes.

This domain of creativity is something I'm really interested in. I think one thing we need to remember about creativity is that it's about something novel and something useful coming together. Wouldn't it be great if supervisors were not restricted to thinking solely in terms of the field of psychotherapy in the course of doing their supervision, and could bring in greater creativity?

Just thinking about architecture, music, art—thinking about other aesthetic forms and how all of these can inform ways of thinking. Coming back again to the example about goal consensus, why do we need to only think about this within the domain of psychotherapy? Why don't we learn about how other fields and business organizations think about creating focus? 
LR: So, we should consider using a flexible system of metaphors that transcend psychology and psychotherapy. When we first contacted each other, I mentioned that there seemed to be almost a spiritual undertone to the way that you described your personal philosophy of living and helping. Am I seeing it correctly, that there's a certain spirituality or spiritual dimension to your work as a clinician and a supervisor, and perhaps we should embrace that as well?
DC: Well, I'm grateful that you picked that up. To me, the answer is yes. And I think that's something deeply embedded in my life. I was raised a freethinker from my Singaporean days; this means I'm free to think, or whatever that means. But I converted to become a Catholic when I was 21. When everybody else was running out of the Church, I was going back in. So, to me, that was my start.

But I think, fundamentally beyond religion, what's really driving me on a first principle level is human dignity. And the way I think about this is that
if a person comes to seek help and opens up to another person, that's a sacred moment. We need to honor that. We need to figure out a way that we can help each other come alive, because it's not just about creating purpose and meaning, but it's really to help each other come alive. And the therapist needs to come alive. The therapist needs to be alive and kicking and playful and to be able to ignite that. And the therapist also needs help and guidance from a supervisor. And for the supervisor to do that, the supervisor also needs to come alive. 
LR: I remember Bill Moyers' interview with Joseph Campbell at George Lucas’ Skywalker Ranch. He said to Joseph Campbell, “So, you're saying that people are searching for the meaning of life?” And Campbell said, “No. People are searching for the experience of being alive.” How does that find its way into the world of supervision, that tripartite relationship between supervisor, supervisee, and client? Where does that element of being alive get infused in that three-level process? And whose responsibility is it?
DC: Sounds like a family.
LR: Yeah, doesn't it?
DC: Yeah. I think everybody is going to come into play. I think it is the interaction. It's this ecology, a systemic perspective, that's going to be important. How does it come alive? You know, I think we need some kind of platform for this to work, which we have talked about. But I think it is critical to keep this conversation going. Once we see that therapists are working hard to improve at what they are doing, once they figure out the baseline and what to work on based on the baseline, then they develop a system to help them do their practice on an ongoing basis. And then they see the payoff of what they're doing.

It's like your child who's worked hard for the math test and starts seeing the results. There's the real payoff. I mean the whole temperature of the room changes. Their focus becomes more intrinsic. And at that point, the role of the guidance is going to evolve as well. There's always going to be a state of change. You were right to bring up that quote from Joseph Campbell. That's something I'm very familiar with, and I think it's important that we continue to keep the conversation alive within clinical supervision as well as at the level of the therapist and client.

Fanning the Flames

LR: So, just as we encourage clinicians to take care of themselves and to grow and to rest and to seek meaning and a reason for being alive, so too must supervisors continually replenish and rest and grow and seek internal expansion, because if they wither, then the supervisee withers and the client withers. Who are the roots, and who are the leaves in this tree? It's a quite interconnected system.
DC: [chuckles] It is. It's just like our world now, isn't it? I'm suddenly reminded of the teenager from Sweden whose work has really struck me. I don't know if you follow the news about Greta Thunberg, how she's protesting about climate change and rallying a million teens around the world to tell the adults in this world that they had better take this seriously. And she's been speaking about this at global forums.

And I heard one of her speeches which she starts by saying, “Our house is on fire. What would you do if your house was on fire?” And she expands on that. And I think that's so important, that somebody her age is speaking about this. 
LR: So, supervisees must find ways to, in your words, reignite supervision. I have one last question. You were born in Singapore, you live and practice in Australia, and you've traveled the world doing training in therapy and supervision. What have you noticed about teaching and supervising cross-culturally?
DC: I think the first thing that comes to my mind is how similar we are across cultures in terms of helping people, trainings, and our roles as therapists and supervisors. But, of course, each culture has its own subcultures that you're dealing with. To me, what's really striking is how much similarity there is. We're all in the same boat.
LR: What do you mean, the same boat, Daryl?
DC: We're all struggling to get better. We all want to. I mean all therapists and all supervisors want to do a better job. And that propels us. That makes us stay hopeful. It makes us invest time, money, and effort to go and do CPD [continuing professional development] activities. You know, we're all trying to get better. But what's implicitly underneath that wish to get better is worry. We do worry: "Am I getting any better? Is what I'm doing really translating into help?"

And people are asking this question as they are looking deep, long, and hard. And I think the onus is on us as a collective, as a field, to start to come together, to start to build this brick-by-brick, to help out from the therapist's level and the supervisor's level, and to help us build this house, build it up again, and to help us to get just that 1-2% better each step of the way. Because the payoff and the morale that comes with that is going to move us even further. 
LR: So, if everyone in that multilevel relationship strives to be a little bit better, then the whole system becomes better.
DC: That's right.
LR: If client outcome improves, then that goodwill is shared beyond the therapeutic space. If the supervisor is dedicated to practicing their craft, then they are in a better position to teach clinicians. And if clinicians practice deliberately, they are in a better position to help their client. And that is consistent across cultures.
DC: That's right. And, you know, I'm not the only one who is doing this, but I think I've started doing this whole thing about clinical supervision because I think we are a critical piece to the puzzle. And I think this one little story might help to illuminate this. You know, this gentleman, he knocks on his son's door, and he says, “Jamie, wake up, please. Wake up. You've got to get to school.”

Jamie then says, “I'm not going.” And the father says, “Why not?” He says, “Well, Dad, there are three reasons. First, school is so dull. And second, the kids tease me. And third, I hate school anyway.” And the father says, “Well, I'm going to give you three reasons why you must go to school. First, because it's your duty. And second, because you're 41 years old. And third, because you are the headmaster.”
LR: [laughs]
DC: I think we play that critical role. We do need to show up. And when we show up, we then need to think about what's our status quo and what's the one thing we need to start in order to refine our work to bring us alive again.
LR: To play that instrument a little better, to hit that tennis ball a little straighter, to run a little bit more efficiently. The supervisor must have a commitment to continued growth and development if the supervisee and the client are to improve.
DC: Yes, and I will say one last thing, if I may, Lawrence.
LR: Of course.
DC: If we use the musician analogy, I don't think it's to play the instrument a bit better.
LR: No?
DC: I think it's to play the instrument well enough but to be able to become better songwriters. I think that's a tougher job, because you can get technically better as a musician, but to write the next Hard Day's Night or Yesterday or Bohemian Rhapsody, I think that's a different skill. And I think we need to find a way to become better songwriters in our field.
LR: So, we can make better music together and because the audience is indeed listening.
DC: That's it.
LR: I think on that note, Daryl, I'm going to say goodbye, and on behalf of our readers, thank you so very much.
DC: Thank you.

Seven Mistakes in Clinical Supervision and How to Avoid Them

Clinical supervision is the “signature pedagogy” of choice in psychotherapy (1). I’ve benefited a great deal from the lessons of my supervisors. Some of their words from a decade ago not only still echo but have become first principles I keep close in my own clinical and supervisory work and teaching. Most of us regard clinical supervision as highly integral to our professional development. It’s hard to imagine not having someone to turn to for case consultation and guidance, especially when stuck in a rut and not making expected or desired progress with a particular client.

Supervision and Clinical Impact

Given the benefit we often feel from clinical supervision, the logical next question is whether clinical supervision actually translates into meaningful impact on our clients' wellbeing. About 8 years ago, Edward Watkins Jr., a researcher from the University of North Texas, conducted a review of 18 empirical studies that examined the impact of supervision on client outcomes. Based on the big-picture analysis, Watkins concluded, "…the collective data appears to shed little new light on the matter. We do not seem to be able to say anything new now, (as opposed to 30 years ago), that psychotherapy supervision contributes to client outcomes." (2)

More recently, a team of researchers investigated this question using a large five-year dataset comprising 6,521 clients seen in naturalistic settings by 175 therapists under the guidance of 23 clinical supervisors (3). Not only did factors such as supervisors' experience level, profession (social work vs. psychology), and qualifications fail to predict differences between supervisors, but the supervisor a therapist worked with explained less than 1% of the variance in client outcomes. Said another way, and contrary to expectations, clinical supervision as we know it has little to no significant impact on improving the outcomes in our clients' lives.

Taken together, we may very well feel the benefit from clinical supervision, but it doesn’t seem to translate into improved clinical outcomes.

Rethinking Clinical Supervision

This raises the question: why is clinical supervision not translating into actual improvement of client outcomes? Given that we invest so much time and effort in our "signature pedagogy," perhaps we need to rethink our current practices in supervision. Drawing from the existing psychotherapy evidence and the expertise-development literature outside of our field (4), here are seven supervisory mistakes I see us making, along with speculation on how they relate to apparent clinical stalemate:

1. Too Much Theory Talk

2. Pat-on-the-Back

3. Lack of Monitoring Client Progress

4. Lack of Monitoring Engagement Level in Supervision

5. Not Analyzing the Game

6. Overemphasis on the Self and Neglecting the Impact on Client

7. Lack of Focus on Therapist’s Learning Objectives

1. Too Much Theory Talk

Often, the clinical supervision encounter revolves around case discussion, case formulation, and theorizing about clinical pathology. This fits under the umbrella of clinical conceptual knowledge and does not actually delve into the moment-by-moment interactional patterns that unfold in a therapy hour. We often end up waxing lyrical on how a case might be conceptualized in a psychodynamic framework, or from an emotion-focused or CBT perspective. Not only does this disembody the conversational nature of therapy, it assumes that the key is to obtain a thorough case formulation of the problem at hand. In 1939, Carl Rogers aptly pointed out, "…A full knowledge of psychiatric and psychological information, with a brilliant intellect capable of applying this knowledge, is of itself no guarantee of therapeutic skill." (5)

2. Pat-on-the-Back

In my work with supervisors and therapists, I often hear this refrain: "…But your client still comes back to see you, right?" In actuality, a small percentage of clients (~10%) account for the largest share (~60-70%) of behavioral health care expenditures, reflecting continued use of services without successful outcomes (6).

While it is vital to take care of the supervisee’s sense of self, what feels good doesn’t equate to what helps us grow. About a third of our clients continue therapy without experiencing reliable improvement in their well-being. If we continue to bolster their esteem with praises or consolations without helping them identify their growth edge and improve the outcomes of “stuck” cases, we are doing our therapists and clients a disservice.

3. Lack of Monitoring Client Progress

We therapists are an optimistic bunch. In the absence of real-time, session-by-session monitoring of outcomes and engagement, we fail to detect deterioration and dropout. A groundswell of studies now shows that the use of real-time feedback tools not only reduces deterioration in client well-being by a third, but cuts drop-out by half and as much as doubles the overall effectiveness of therapy (7). Even when we use routine outcome monitoring measures, like the Outcome Rating Scale (ORS) and Session Rating Scale (SRS), the Outcome Questionnaire (OQ-45), or the Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM), we fail to meaningfully integrate them into the supervisory process. We stick to using the measures as assessment tools, not as conversational tools.

4. Lack of Monitoring Engagement Level in Supervision

For those of you who are already using routine outcome measures as a source of feedback, you know that it's hard for clients to give feedback to the therapist. It's just as hard, if not harder, for a supervisee to provide feedback about engagement levels in supervision — especially if the supervisor is a colleague.

The reality is, supervisors have a tough enough job ensuring that their input has a ripple effect not only on the therapist, but also on the therapist's clients. Having some kind of formal procedure to elicit what's been working for the learner can help focus the process. In addition, given that supervisors and supervisees might have overlapping roles or collegial bonds outside of supervision, a formalized feedback procedure allows both parties to take a pit stop and address, in real time rather than six months down the road when it's too late, issues that might otherwise be brushed aside.

5. Not Analyzing the Game

In any other domain of performance (e.g., sports, music), if one were to seek a coach's help in improving one's game, it would be unheard of for the performer not to analyze her performance. Yet in psychotherapy, we spend less time examining the moment-by-moment dynamics of the therapy hour and more time theorizing (see point #1). Most supervisors do not watch snippets of session video recordings to highlight specific areas the therapist can work on.

Much like other fields (music, sports), it’s important to record sessions in order to receive feedback about actual performance rather than feedback about a perceived or reported performance. Feedback is useful when it’s based on a well-defined objective, observables, and specifics.

6. Overemphasis on the Self and Neglecting the Impact on Client

You may not agree with this point, but there is an overemphasis on the self of the therapist at the expense of the impact on the client. Too much supervisory time is spent on superfluous issues such as patting the supervisee on the back (see #2), while not enough time is spent using real-time progress monitoring to guide the conversation (see #3).

7. Lack of Focus on Therapist’s Learning Objectives

Finally, I would argue that there is a lack of focus on the therapist's learning objectives. This is one of the four tenets of deliberate practice (8). (Stay tuned, as we will cover this in future blog posts.) This may be the most vital yet most lacking element in a practitioner's professional development. Too often, we engage in clinical supervision on a case-by-case basis, with no coherent thread weaving together the therapist's learning needs and clinical case concerns. Even when we do, there is often a lack of systematic tracking of the supervisee's development. As useful as client feedback is to clinical practice — spotting anything glaring or missing and pointing out whether the session is on track — it does not by itself help therapists improve their therapeutic skill in a way suited to their stage of professional development.

Consider another example: a top musical performer does not benefit from the feedback of the crowd (the decibels of the audience's applause, the verbal comments about the performance, etc.) as much as from the nuanced and specific feedback they might receive from their maestro or producer.

***

In the upcoming blog posts, I will cover each of the seven points raised about the flaws in our default ways in clinical supervision, and I will provide specific pathways out for each of them.

References

(1) Watkins, C. E. (2010). Psychotherapy Supervision Since 1909: Some Friendly Observations About its First Century. Journal of Contemporary Psychotherapy, 1-11

(2) Watkins, C. E. (2011). Does Psychotherapy Supervision Contribute to Patient Outcomes? Considering Thirty Years of Research. The Clinical Supervisor, 30(2), 235-256.

(3) Rousmaniere, T. G., Swift, J. K., Babins-Wagner, R., Whipple, J. L., & Berzins, S. (2016). Supervisor variance in psychotherapy outcome in routine practice. Psychotherapy Research, 26(2), 196-205.

(4) Ericsson, K. A., Hoffman, R. R., Kozbelt, A., & Williams, A. M. (Eds.). (2018). The Cambridge Handbook of Expertise and Expert Performance (2nd ed.). Cambridge University Press; Ericsson, A., & Pool, R. (2016). Peak: Secrets from the New Science of Expertise. Houghton Mifflin Harcourt; Miller, S. D., Hubble, M. A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.

(5) Rogers, C. R. (1939). The Clinical Treatment of the Problem Child, p. 284.

(6) Lambert, M. J., Whipple, J. L., Hawkins, E. J., Vermeersch, D. A., Nielsen, S. L., & Smart, D. W. (2003). Is It Time for Clinicians to Routinely Track Patient Outcome? A Meta-Analysis. Clinical Psychology: Science and Practice, 10(3), 288-301.

(7) Schuckard, E., Miller, S. D., & Hubble, M. A. (2017). Feedback-informed treatment: Historical and empirical foundations. In D. S. Prescott, C. L. Maeschalck, & S. D. Miller (Eds.), Feedback-informed treatment in clinical practice: Reaching for excellence (pp. 13-35). American Psychological Association.

(8) Chow, D. (2017). The practice and the practical: Pushing your clinical performance to the next level. In D. S. Prescott, C. L. Maeschalck, & S. D. Miller (Eds.), Feedback-informed treatment in clinical practice: Reaching for excellence (pp. 323-355). American Psychological Association.

Questions for Thought and Discussion

What kind of clinical supervision do you value and why?

Which of the author’s seven mistakes have you or do you currently engage in?

What have you done recently to improve the quality of your clinical skills?

What style of supervision do you practice, or would like to practice?

Scott Miller on Why Most Therapists Are Just Average (and How We Can Improve)

Escape from Babel

Tony Rousmaniere: Many people know you as a Common Factors researcher, but recently you’ve transitioned away from that. Could you explain both what Common Factors is and your transition away from it?
Scott Miller: Sure. As old-fashioned as it sounds, I’m interested in the truth—what it is that really matters in the effectiveness of treatment. Early on in my career, I learned and promoted and helped develop a very specific model of treatment, solution-focused therapy. We had some researchers come in near the end of my tenure at the Family Therapy Center in Milwaukee who found that, while what we were doing was effective, it wasn’t any more effective than anything else. Now, for somebody who had been running around claiming that doing solution-focused work would make you more effective in a shorter period of time, that was a huge shock.

It was at that point that I started to cast about looking for an alternate explanation for the findings, which showed that virtually everything clinicians did, however it was named, seemed to work despite the differences. That led back to the Common Factors—the theory that there are components shared by the various psychotherapy methodologies, and that those shared components account for positive therapy outcomes more than any components unique to an approach. It was something that one of my college professors, Mike Lambert, had talked about, but that I had dismissed as not very sexy or interesting. I thought, how could that possibly be true?

It was at that time that I ran into a couple of people I went on to work with for some time, Mark Hubble and Barry Duncan, and we wrote several books about this together. If you read Escape from Babel, which we coauthored, the argument wasn't that the Common Factors were a way of doing therapy, but rather a frame for people—therapists speaking different languages—to share and meet with each other. They were a common ground.

But by 1999, it was very clear to me that Common Factors were being turned into a model by folks, including members of our own team, and viewed as a way to do therapy. But you can’t do a Common Factors model of therapy—it’s illogical. The Common Factors are based on all models. This caused a large amount of consternation and difficulty, numerous discussions, and eventually I suggested to the team that the way therapists work didn’t make much of a difference.

What was critical was whether it worked with a particular client and a particular therapist at a particular time. Mike Lambert was already moving in this direction and said, “Let’s just measure them. Let’s find out. Who cares what model you use? Let’s make sure that the client is engaged by it and that it’s helping them.” So we began measuring, and what became clear very quickly was that some therapists were better at it than others.

So, since about 2004, Mark Hubble and others at the International Center for Clinical Excellence (ICCE) have been researching the practice patterns of top performing therapists. It’s not that I don’t believe, and in fact know, that the Common Factors are what accounts for effective psychotherapy. It’s just that an explanation is not the same as a strategy for effecting change. And the Common Factors can never be used as such. All models are equivalent. Pick one that appeals to you and your client.

The Siren Song

TR: So Common Factors are a way of studying the effects of psychotherapy, but not a way of actually implementing it.
SM: Well, by definition, you can’t do a Common Factors model because then it’s a specific factor. I’m not saying the Common Factors don’t matter—what I’m saying is that they are a therapeutic dead end. They will not help you do therapy. You still have to have a method for doing the therapy, and the Common Factors are not a method. Why?
All treatment approaches return equal efficacy when the data are aggregated and methods are compared in randomized controlled trials. So you still need some kind of way to operationalize the Common Factors.

Since we have 400 or so different models of therapy, why invent a new one? It seems to be because in our field, each person has to have it their own way. The promise of a new model is a siren song in our profession that we have a hard time not turning our ship towards. What I say is, pick one of the 400 that appeals to you and then measure and see: Does your client like it, too? If not, then it’s time for you to change, not your client.
TR: You have an article out in Psychotherapy where you mentioned three keys for therapists to improve their work. Your major focus now seems to be how therapists improve their work with each client. Can you describe those three keys?
SM: The first one is knowing your baseline. You can’t get any better at an activity until you actually know how good you are at it now. We therapists think we know, but it turns out that data indicates that we generally, as a group, inflate our effectiveness by as much as 65%. So you really have to know just how effective you are in the aggregate. That means you’re going to have to use some kind of outcome tool to measure the effectiveness of your work with clients over time.

The second step is to get deliberate feedback. So once you know how effective you are, then it’s time to get some coaching, get some feedback, and you can do that in two ways. Number one, you can use the very same measures that you used to determine your effectiveness to get feedback from your clients on a case-by-case basis. Meaning that you can actually see when you’re helping and when you’re not, and use that to alter the course of the services provided to that individual client.

The second kind of feedback to get is from somebody whose work you admire, who has a slightly broader skill base than you do, and have them look at your work and comment specifically on those particular cases where your work falls short. In other words, you begin to look for patterns in your data about when it is you're not particularly helpful to people, and seek out somebody who can provide you with coaching. It's like golf: once you know your handicap, you can hire a coach who can look at your game and make fine tweaks. It's not about revamping your whole style, or about learning an entirely new method of treatment, but pushing your skills and abilities to the next level of performance.

The third piece is deliberate practice. The key word in that expression is “deliberate.” All of us practice. We go to work. But it turns out the number of hours spent on a job is not a good predictor. In fact, it’s a poor predictor of treatment effectiveness. So what you have to do is identify the edge of your current realm of reliable performance. In other words, where’s the next spot where you don’t do your work quite as well? And then develop a plan, acquire the skills, practice those skills and then put them into place. Then measure again to see, have you made any improvement?

I can’t take credit for coming up with these three steps. We’ve simply borrowed them lock, stock, and barrel from the performance literature, and in particular, Anders Ericsson’s work, which has been applied in fields like the training of pilots, chess masters, computer programmers, surgeons, etc. If we have any sort of claim to fame, it’s that we’ve begun applying these to psychotherapy for the first time.
TR: One of my first reactions to this is, aren’t some people just born better therapists?
SM: Well, Ericsson notes that the search for genetic factors responsible for the performance of eminent individuals has been surprisingly unsuccessful. In sports we often think, "Oh, there must be some genetic component involved here," or "he just has the gift of music." But it turns out that for virtually everyone researchers have looked at where a "gift" is implied, practice tells the story. Even with Mozart: he had been playing the piano for 17 years before he wrote anything that was unique, which happened at about age 21. He'd been playing since he was 4. His father had been doing music scales with him since he was in the crib. So once you remove the practice component, you just don't find any evidence for genetic factors—with very few exceptions.

For example, in boxing it appears that people with a slightly longer reach have a slight advantage. But we also know that if baseball pitchers don’t start pitching at a particular age, their arms will not make the adjustment required to throw the ball as fast and accurately as professional pitchers do.

There was another study that looked at social skills. In addition to the genetic claims, you will often hear that "good therapists just have great social skills." Well, they've measured that. It turns out not to be the case, and the reason is that these kinds of ideas sit at too high and general a level of abstraction. The real difference between the best and the rest is that the best possess more deep, domain-specific knowledge. They have a highly contextualized knowledge base that is much thicker than that of average performers, much more accessible to them, and more responsive to contextual cues.

Deep Contextual Knowledge

TR: Could you give a specific example of what a deep contextual knowledge would look like in a therapy room?
SM: Well the classic one—and I say it to make fun of it—is suicide contracting. Or the suicide prevention interview.
Somebody comes in and says, “I’m going to commit suicide.” And we respond with, “Do you have a plan? Have you ever attempted this before?” Blah, blah, blah. That’s decontextualized knowledge. You could ask those questions to a stick.

What a top performer does is ask those questions very differently, nuanced by the client’s presentation, in ways that the rest of us can’t see. Because of their more complex and well-organized knowledge, they can actually see patterns in what clients present that the rest of us would miss and respond to in a much more generic fashion. Is this making sense?
TR: Absolutely.
SM: So the real question is how to help clinicians develop that highly contextualized knowledge. Because once you have it, not only can you retrieve that knowledge at the appropriate moment, but it turns out you can make unique combinations and use them in novel ways that would never occur to the rest of us, or would only occur to the rest of us by chance.
TR: This also doesn’t suggest that treatment manuals are necessarily the best way to train therapists.
SM: We know that following a treatment manual doesn’t result in better outcomes and it doesn’t decrease variability among clinicians using the same manual. So you still get a spread of outcomes, even when everybody is doing the same treatment.

At the same time, I think it’s critical that therapists learn a way of working, and, in the beginning at least, they hew to that approach. Why? Well, if you begin to introduce variation in your performance early on, you will not have the same ability to extend your performance in the future.

Let me give you an example. The first time I had a guitar lesson, I was taking classical guitar with this really interesting teacher. We spent the entire first lesson on how he wanted me to hold the neck of the guitar with my left hand—and I’m right-handed. He said, “If you try to vary your hand grip from the outset, you’ll never have the same reach and ability to vary reliably when you need to in the future. So start with a common foundation, and then when we need to introduce variations later, we will.” My sense is that therapists instead begin in a highly complex, nuanced way and introduce variations into their style randomly and without much thought.
TR: So it would be better to begin with a frame or structure that provides a stable base, and then develop the deep contextualized knowledge later on.
SM: And to vary your work in ways that allow you to measure the impact of your variation against what you usually do. This is the key. Otherwise, what you have is a bag of tricks. You can do them all, but there’s no cohesiveness to it, and you can’t explain why you vary at certain times rather than others.
TR: Starting with a manual isn’t necessarily a bad idea then.
SM: Absolutely not. In fact, I would suggest grabbing a manual and going to a place where they are teaching a specific approach that will allow you to practice and also watch others in a two-way mirror. Once you have that foundation down, you can introduce your own variations.
TR: I hear therapists say, “I have 20 years experience,” or “I have 30 years experience.” Does this research find that experience, itself, makes someone better?
SM: No, it doesn’t. We know that not only in therapy, but in a variety of activities. If you think about it, you’ll understand why. While you’re doing your work, you don’t have time enough to correct your mistakes thoughtfully.
So what we found, which I think is quite shocking, is that the difference between the best and the rest is what they do before they meet a client and after they’ve met them, not what they’re doing when they’re with them.  Let me give you an example from a field that is similar—figure skating. If you watch a championship figure skater perform a gold medal winning performance, you can describe what they did, but it won’t tell you how to do it yourself. Do you follow me?
TR: Yeah.
SM: In order to be able to accomplish that performance, that figure skater must do something before they go on the ice, and after they leave the ice. It’s that time that leads to superior performance. You can go out and try to turn triple axels during the performances as much as you want. That experience will not make you better. You have to plan, practice, perform, and then reflect. Most of us don’t see all of the effort that goes into that great performance. We just appreciate how good it is.
TR: But one of the tricky differences is that we’re trying to help each client. And if we’re practicing new skills, invariably we’re going to make mistakes. And that’s emotionally harder because you’re making a mistake with a real person sitting across from you.
SM: Well, number one, we’re all already making these mistakes. And the ones that I’m referring to are generally small and not fatal. So your performance doesn’t improve by isolating gross mistakes, or gross skills. Your performance improves when your usual skills begin to break down—meaning they don’t deliver—and remembering those, thinking about them after the session, and making a plan for what to do instead. That’s where improvement takes place.

When I hear people mention this kind of objection, I think they’re imagining errors far grosser than what I’m talking about. Once therapists assess their baseline, most are going to find out, perhaps to their surprise, that they’re average in terms of their outcomes, or slightly below average. So if we’re average, then it’s not about bringing your game up to the average level. It’s about extending it to the next. That requires a focus on small process errors.

Let me give you another example. We have a pianist come and perform at one of our conferences. She is eight years old and she is really unbelievably able as a concert pianist. She plays a very difficult piece. I ask her if she made any mistakes. She says, “Of course, I made a lot.” I tell her I didn’t hear any, to which she says, “Well, that’s because you’re no good at this.”

I then say, “What do you mean? And what do you do about your mistakes?”

She says, “Look. I made lots of mistakes, but you cannot get better at playing the piano while you’re performing.” This is an eight-year-old.

I say, “So what do you do?”

She says, “Well, I hear these small errors. I remember them. My coach in the audience remembers them, and then that’s what I isolate for periods of practice between performances.”

Most of Us Are Average

TR: How many therapists really practice between sessions? I mean, that’s pretty rare, isn’t it?
SM: Most of us are average.
TR: Right.
SM: And 50% of us are below average, right?
So very few people do it, and this is the real mystery of expertise and excellence. Why do some go this extra mile? There’s no financial pay-off. I think this will change in the future, but at the present time, you don’t get paid one dime more if you’re average, crappy, or really good. The fees are set by the service provided.
TR: That is a great problem with our field and I hope that does change in the future.
SM: I think that we’re seeing movement in that direction. I think that our field will become like other fields, where outcome of the process is what leads to payment, rather than the delivery of it.
TR: So back to practicing. Therapists read books and go to workshops, but that’s kind of passive learning. What are your thoughts about that?
SM: That’s a component of practicing. A graduate student I’ve been working with, Daryl Chow, who just finished his PhD at the University of Perth in Australia, did his dissertation on this topic and found that the best performers spend significantly more time reading books and articles. We also know that the best performers spend more time reviewing basic therapeutic texts.

Therapists are often in search of the variation from their performance that will allow them to reach an individual client they’re struggling with. Top performers not only do that, but they’re also constantly going back to basics to make sure they’ve covered those. They spend time reading basic books that may be hugely boring but are nonetheless really helpful. Gerard Egan’s The Skilled Helper, Corey Hammond’s book on therapeutic communication—these are basic texts that remind us of things we often forget in the flurry of cases we see every week.
TR: So reading counts. What about workshops?
SM: We don’t know about workshops. I’m cynical about them, simply because they’re not set up in a way that respects any principles of the last 30 years of research on human learning. Six hours, chosen by the person who needs the continuing education, with no testing of skills, no assessment of skill acquisition, and no awareness of particular deficits in practice. Greg Neimeyer has done a fair bit of research on this and he finds no evidence that our current CE standards lead to improved performance. None.
TR: There’s a psychotherapy instructor I know, Jon Frederickson, who has his students go through psychotherapy drills, kind of like role-playing drills in a circle. Would that count as practice?
SM: It depends, but I like the sound of it. Not a scrimmage, where you do a whole game, but rather drilling people in very specific small skill sets again and again. That aligns with the principles of Ericsson's research.

If you’re an experienced professional, your motivation for going to a CE event can be really varied. I know for me, I’m often just grateful to have a day off and hang out with friends. The particular content of the workshop, I’m ashamed to admit, is less important. The incentives are just all wrong.
TR: It goes back to your motivation question.
SM: I don’t think our field incentivizes that kind of stuff. In fact, you can be punished.
TR: Well, one incentive I discovered myself in my own private practice was my drop-out rate. That motivated me to get further training. Maybe other therapists don’t have the same problem I had, but I know that was a powerful motivation.
SM: Drop-out can be both a good and a bad thing. For example, our current system incentivizes therapists to have a butt in the seat every available, billable hour. What that means is that therapists may be incentivized—we have some data about this, too—to keep clients, whether they are changing or not. That’s what I mean when I say that the incentives are all screwed up. There are, every once in a while, motivated people like yourself who say, “Wait a second. There has to be something beyond this.” But that requires a degree of reflection that may be difficult for most of us, especially if we are well defended. For well-defended therapists, clients drop out because the clients are in denial about their own problems, not because of anything the therapist might be doing.

You put those things together and it can be a fatal combination. We need to take a step back as payers for services and as consumers of services and think about the incentives in our current system. I know this sounds terribly economic, but I think it’s important for our field.
TR: That sounds sensible to me. What about watching psychotherapy videos by psychotherapy experts like the ones psychotherapy.net produces. Would that count as practice?
SM: Yes it would. Especially in the beginning, when you have identified a particular area or weakness in your skill set that you may need some help with. In essence, you’re spending more time swimming in it while reflecting, which is the key part.
TR: Do you have other examples of deliberate practice that you’ve heard of therapists engaging in?
SM: Well, there’s the stop-start strategies that Daryl Chow has been talking about. And Chris Hall is doing a study at UNC that we’re involved with, where therapists watch short segments of a video and then have to respond in the moment in a way that is maximally empathic, collaborative, and non-distancing. So they’re training therapists to develop a certain degree of proficiency with fairly straightforward clients.

Then you begin to vary the emotional context, or the physical context, in which the service is delivered. So now the client’s not just saying, “Hey, I feel sad.” They’re threatening to drop out or to commit suicide. More difficult and challenging things. And then simply spending time outside of the office planning and discussing individual particular cases with peers or consultants is another strategy.

In Daryl Chow’s research, which I think is the most exciting stuff, he found that within the first eight years of practice, therapists with the best outcomes spend approximately seven times more hours than the bottom two-thirds of clinicians engaged in these kinds of activities. Seven times.
TR: Wow.
SM: The good news is, now that we know this, we can start this process earlier. The bad news is, if you’ve been at this for a while, it becomes impossible to catch up with the best. We just age out. We can’t do it. The key to this is really starting early and investing a little bit at a time. It’s sort of like how you’re advised to save for your retirement. Not in the last five years. Not in the first five years, but a little bit every year.
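Miller's retirement comparison can be made concrete with a back-of-the-envelope sketch. All of the figures below (career length, annual hours) are invented for illustration; the interview specifies none of them.

```python
# Illustrative sketch of the "invest a little bit every year" analogy.
# All numbers are assumptions, not data from the interview.

def cumulative_hours(hours_per_year, career_years):
    """Return a year-by-year running total of deliberate-practice hours.

    hours_per_year: function mapping a career year (1-based) to the
    hours invested that year.
    """
    total = 0
    trajectory = []
    for year in range(1, career_years + 1):
        total += hours_per_year(year)
        trajectory.append(total)
    return trajectory

career = 25  # assumed career length in years

# "A little bit every year": 100 hours annually from year one.
steady = cumulative_hours(lambda year: 100, career)

# "In the last five years": 500 hours annually, but only at the end.
crammer = cumulative_hours(lambda year: 500 if year > career - 5 else 0, career)

print(steady[-1], crammer[-1])  # 2500 2500 -- identical lifetime totals
print(steady[9], crammer[9])    # 1000 0    -- very different at year ten
```

The lifetime totals match, but the steady investor has two decades of reflected-upon practice behind them by the time the crammer begins, which is the point of the retirement comparison.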
TR: One advantage that great athletes have is that their coaches get to determine day by day what moves or what performances they’re going to practice. I run a training program here at the University of Alaska, Fairbanks, at the University Center for Student Health and Counseling, and I don’t get to pick what clients come in day to day. It could be anxiety, depression, any number of different things, so I’ll do a training on, let’s say, working with anxiety, but the client that comes in will have depression. So what do you do about that?
SM: Well, in essence, we’re violating John Wooden’s primary rule, which is, we are allowing students to scrimmage before they drill. And I have to tell you, all students want to scrimmage, but what you need to do more of, before and during, is drilling. The kind of drilling that I think your colleague was talking about. Or you go back to, “Here’s how we hold the guitar.” And we play very simple songs and then we begin varying the drill with greater degrees of complexity once easier tasks are managed.
TR: So you’d recommend a longer period of training and practice and drills before seeing clients.
SM: I’d want to see that kind of mastery. Let me give you an example. Do you want the pilot to be proficient at flying in fair weather, as demonstrated on the simulator, before they fly a plane?
TR: Yes.
SM: You want them to be prepared for all the complications: “Wait a minute, it’s raining,” “Wait a minute, you’ve got problems with your rudder.” These are complex skills and, yes, we can teach people to manage them as one-offs, but then they never integrate them into a coherent package that makes it easier to retrieve from memory later on when they need that skill. If it’s viewed as a one-off—“With the anxiety client, I did this”—it’s not integrated into an organized structure for retrieval later on.
TR: So on a therapist’s resume, you’d want to see not just hours of direct service provided, but also hours spent practicing and learning.
SM: Or, better yet, somebody who has measured results, like yourself. All I need is an average pilot. I don’t need the best pilot in the world, because most of the time there aren’t huge challenges. If you can document your results, and if you’re checking in with me, we’re going to catch most of the errors anyway. And then I want a therapist who has a professional development plan that’s working on the aggregation of small improvements over a long period of time.
TR: So for tracking results, I know you recommend quantitative outcome measures, like the Outcome Rating Scale or the Outcome Questionnaire. But I have found that there are certain clients that quantitative measures just don’t seem valid for. It’s not a large percentage of clients, but there are some that underreport problems at first. So it can look like they’re deteriorating even while they’re improving. Can you recommend any kind of qualitative methods or other methods of trying to accurately assess outcome in addition to those measures?
SM: I don’t buy it. Personally, I just don’t see that stuff and I would offer a very different explanation for it. Let me give you an example.

We know that each time there is a deterioration in scores, the probability of client drop-out goes up, whether or not the therapist thinks that it’s a good sign that the client is “getting in touch with reality and finally admitting their issues,” or had inflated how they really were doing for the first visit. So the key task here is not to say, “There must be another measure,” but to figure out what skills are required for me to get a higher score.
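The session-by-session monitoring Miller describes can be sketched in a few lines. This is a toy illustration, not the published scoring rules of the Outcome Rating Scale or the Outcome Questionnaire; the score range and the five-point drop threshold are assumptions made for the example.

```python
# Toy outcome-monitoring sketch. The threshold and scores are assumed
# for illustration, not the instruments' published values.

def flag_deterioration(scores, threshold=5):
    """Return indices of sessions whose score dropped by more than
    `threshold` points from the previous session."""
    return [
        i
        for i in range(1, len(scores))
        if scores[i - 1] - scores[i] > threshold
    ]

# Hypothetical client scores across six sessions (higher = better).
sessions = [18, 22, 25, 17, 19, 26]
print(flag_deterioration(sessions))  # [3] -- the drop from 25 to 17
```

In Miller's terms, a flag like this is a prompt to ask which skill would raise the next score, and a warning that the probability of drop-out has just gone up, rather than a cue to go shopping for a different measure.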

Dig Into the One You Know

TR: That’s a new perspective. To look at what I can change about my performance, rather than a new measure to assess it.
SM: Now you see why I think our field is forever chasing its tail. Because instead of becoming fully connected to our performance, we are constantly looking for the trick that will make us great.
It’s like a singer looking for the song that will make them famous rather than learning how to sing. We’re forever going to workshops, and the level of the workshops is often so basic, even when they claim to be advanced. The truth is, you can’t do an advanced workshop on psychotherapy for 100 people. You can’t do it. The content is too abstract and too general. You need to see a clinician’s performance and fine-tune it. So therapists go around and around, constantly picking up these techniques that they use in an unreliable fashion, and their outcomes don’t improve, but their confidence does.
TR: So instead of picking up a new modality every year, dig into the one you know, preferably with a real expert, and get individualized or maybe small group training and practice.
SM: I think that once you’ve achieved a level of proficiency, the only hope for improvement is to get feedback on your specific deficits. And yours will be different from mine.
TR: It sounds like you’d definitely be a fan of videotaping sessions and reviewing them and that kind of thing.
SM: Not alone—with an expert eye reviewing small segments. Otherwise the flood of information from video will have you second-guessing yourself, which can actually interrupt the way you work in an unhelpful way.
TR: What about live supervision?
SM: I’m not averse to it, but I think it’s a little bit like a GPS—it can correct your moves in the moment, but you become GPS-dependent and you don’t learn the territory. What’s required in learning is reflection. If you don’t reflect, you can’t learn. As my uncle used to say, “You got to study that thang.”

I actually had great opportunities with live supervision when I was at the Family Therapy Center and got corrected in the moment by two really masterful clinicians. But I also think that what really made a difference was sitting behind a mirror, without any financial worries, watching endless hours of psychotherapy being done, and then talking about it afterwards. “This was said. What could you have said? How come we said this? What do you need to do?” It was a heavenly experience and as a result, I came away with a very highly nuanced and contextualized way of delivering that particular model.

And today, when I’m doing my Scott Miller way of working and I notice that a particular client wasn’t engaged or interested at a particular moment, I think, “What could I have said differently?” It’s at that small micro level that improved outcome is likely to be found, as opposed to the gross, generic level.

People go to workshops and say, “I’ve had some traumatized clients. Maybe I’ll learn that EMDR thing.”

“Really?” I think. “Do you know how effective you are in working with these clients already?”

“No, I don’t.”

“What makes you think you need to do EMDR?”

“Well, it just seems so interesting.”

And I think, “Oh, you’re doomed.” Not that there’s anything wrong with EMDR, but I have to tell you, I watched Francine Shapiro do it and it looks a lot different than some other people I’ve seen doing it.
TR: So the problem there is switching modalities rather than getting a lot better at the one you’re currently using.
SM: It’s looking for a trick rather than thinking through, what else could I have said? What else could I have done that I already know how to do? Or getting a little bit of tweaking from a trusted mentor.
TR: I know you present this information all over the world. Do you find therapists are open and receptive to these ideas?
SM: Yes. I think that there are some very real barriers that we need to address, but yes, I do.
TR: This has been a really fascinating conversation. Thank you for making the time.
SM: I like this stuff. I’m fascinated by it and I’m very hopeful about the direction we’re going research-wise, so thank you for giving me the opportunity.

Supershrinks: What is the secret of their success?

Clients of the best therapists improve at a rate at least 50 percent higher and drop out at a rate at least 50 percent lower than those of average clinicians. What is the key to superior performance? Are "supershrinks" made or born? Is it a matter of temperament or training? Have they discovered a secret unknown to other clinicians or are their superior results simply a fluke, more measurement error than reality? We know that who provides the therapy is a much more important determinant of success than what treatment approach is provided. The age, gender, and diagnosis of the client have no impact on the treatment success rate, nor do the experience, training, and theoretical orientation of the therapist. In attempting to answer these questions, Miller, Hubble, and Duncan have found that the best of the best simply work harder at improving their performance than others and that attentiveness to feedback is crucial. When a measure of the alliance is used with a standardized outcome scale, available evidence shows clients are less likely to deteriorate, more likely to stay longer, and twice as likely to achieve a change of clinical significance.

Boisea trivittata, better known as the box elder bug, emerges from the recesses of homes and dwellings in early spring. While feared neither for its bite nor its sting, most people consider the tiny insect a pest. The critter comes out by the thousands, resting in the sun and staining upholstery and draperies with its orange-colored wastes. Few find it endearing, with the exception perhaps of entomologists. It doesn't purr and won't fetch the morning paper. What is more, you will be sorry if you step on it. When crushed, the diminutive creature emits a putrid odor worthy of an animal many times its size.

For as long as anyone could remember, Boisea trivittata was an unwelcome yet familiar guest in the offices and waiting area of a large Midwestern, multicounty community mental health center. Professional exterminators did their best to keep the bugs at bay, but inevitably many eluded the efforts to eliminate them. Tissues were placed strategically throughout the center for staff and clients to dispatch the escapees. In time, the arrangement became routine. Out of necessity, everyone tolerated the annual annoyance—with one notable exception.

Dawn, a 12-year veteran of the center, led the resistance to what she considered "insecticide." In a world turned against the bugs, she was their only ally. To save the tiny beasts, she collected and distributed old mason jars, imploring others to catch the little critters so that she could release them safely outdoors.

Few were surprised by Dawn's regard for the bugs. Most people who knew her would have characterized her as a holdout from the "Summer of Love." Her VW microbus, floor-length tie-dyed skirts, and Birkenstock sandals—combined with the scent of patchouli and sandalwood that lingered after her passage—solidified everyone's impression that she was a fugitive of Haight-Ashbury. Rumor had it that she'd been conceived at Esalen.

Despite these eccentricities, Dawn was hands-down the most effective therapist at the agency. This finding was established through a tightly controlled, research-to-practice study conducted at her agency. As part of this study of success rates in actual clinical settings, Dawn and her colleagues administered a standardized measure of progress to each client at every session.

What made her performance all the more compelling was that Dawn was the top performer seven years running. Moreover, factors widely believed to affect treatment outcome—the client's age, gender, diagnosis, level of functional impairment, or prior treatment history—did not affect her results. Other factors not correlated with her outcomes were her age, gender, training, professional discipline, licensure, or years of experience. Even her theoretical orientation proved inconsequential.

Contrast Dawn with Gordon, who could not have been more different. Rigidly conservative and brimming with confidence bordering on arrogance, Gordon managed to build a thriving private practice in an area where most practitioners were struggling to stay afloat financially. Many in the professional community sought to emulate his success. In the hopes of learning his secrets or earning his acknowledgment, they competed hard to become part of his inner circle.

Whispered conversations at parties and local professional meetings made clear that others regarded Gordon with envy and enmity. "Profits talk, patients walk," was one comment that captured the general feeling about him. And the critics could not have been more wrong. The people Gordon saw in his practice regarded him as caring and deeply committed to their welfare. Furthermore, he achieved outcomes that were far superior to those of the clinicians who carped about him. In fact, the same measures that confirmed Dawn's superior results placed Gordon in the top 25 percent of psychotherapists studied in the United States.

In 1974, researcher D. F. Ricks coined the term supershrink to describe a class of exceptional therapists—practitioners who stood head and shoulders above the rest. His study examined the long-term outcomes of "highly disturbed" adolescents. When the research participants were later examined as adults, he found that a select group, treated by one particular provider, fared notably better. In the same study, boys treated by another provider, dubbed the pseudoshrink, demonstrated alarmingly poor adjustment as adults.

The fact that therapists differ in their ability to effect change is hardly a revelation. All of us have participated in hushed conversations about colleagues whose performance we feel falls short of the mark. We also recognize that some practitioners are a cut above the rest. With rare exceptions, whenever they take aim, they hit the bull's-eye. Nevertheless, since Ricks's first description, little has been done to further the investigation of super- and pseudoshrinks. Instead, professional time, energy, and resources have been directed exclusively toward identifying effective therapies. Trying to identify specific interventions that could be dispensed reliably for specific problems has a strong common-sense appeal. No one would argue with the success of the idea of problem-specific interventions in the field of medicine. But the evidence is incontrovertible. “Who provides the therapy is a much more important determinant of success than what treatment approach is provided.”

Consider a recent study conducted by Bruce Wampold and Jeb Brown in 2006 and published in the Journal of Consulting and Clinical Psychology. Briefly, the study included 581 licensed providers, including psychologists, psychiatrists, and master's-level providers, who were treating a diverse sample of over 6,000 clients. The therapists, the clientele, and the presenting complaints were not different in any meaningful way from clinical settings nationwide. As was the case with Dawn and Gordon, the clients' age, gender, and diagnosis had no impact on the treatment success rate and neither did the experience, training, or theoretical orientation of the therapists. However, clients of the best therapists in the sample improved at a rate at least 50 percent higher and dropped out at a rate at least 50 percent lower than those assigned to the average clinicians in the sample.

Another important finding emerged: in those cases in which psychotropic medication was combined with psychotherapy, the drugs did not perform consistently. As with talk therapy, effectiveness depended on who prescribed the drug. People seen by top providers achieved gains from the drugs 10 times greater than those seen by the less effective practitioners. Among the latter group, the drugs made virtually no difference. So, in the chemistry of mental health treatment, orientations, techniques, and even medications are inert. The clinician is the catalyst.

The making of a Supershrink

For the past eight years the Institute for the Study of Therapeutic Change (ISTC), an international group of researchers and clinicians dedicated to studying what works in psychotherapy, has been tracking the outcomes of thousands of therapists treating tens of thousands of clients in myriad clinical settings across the United States and abroad. Like D. F. Ricks and other researchers, we found wide variations in effectiveness among practicing clinicians. Intrigued, we decided to try to determine why.

We began our investigation by looking at the research literature. The Institute has earned its reputation in part by reviewing research and publishing summaries and critical analyses on its website (www.talkingcure.com). We were well aware at the outset that little had been done since D. F. Ricks's original paper to deepen the understanding of super- and pseudoshrinks. Nevertheless, a massive amount of research had been conducted on what in general makes therapists and therapy effective. When we attempted to determine the characteristics of the most effective practitioners using our national database, with the hypothesis that therapists like Dawn and Gordon must simply do or embody more of "it," we smacked head-first into a brick wall. Neither the person of the therapist, nor technical prowess, separated the best from the rest.

Frustrated, but undeterred, we retraced our steps. Maybe we had missed something, a critical study, a nuance, a finding that would steer us in the right direction. We returned to our own database to take a second look, reviewing the numbers and checking the analyses. We asked consultants outside the Institute to verify our computations. We invited others to brainstorm possible explanations. Opinions varied from many of the factors we had already considered and ruled out to "it's all a matter of chance, noise in the system, more statistical artifact than fact." Put another way, supershrinks were not real and their emergence in any data analysis was entirely random. In the end, there was nothing we could point to that explained why some clinicians achieved consistently superior results. Seeing no solution, we gave up and turned our attention elsewhere.

The project would have remained shelved indefinitely had one of us not stumbled on the work of Swedish psychologist K. Anders Ericsson. Nearly two years had passed since we had given up. Then Scott, returning to the U.S. after providing a week of training in Norway, came across an article published in Fortune magazine. Weary from the road and frankly bored, he had taken the periodical from the passing flight attendant more for the glossy pictures and factoids than for intellectual stimulation. In short order, however, the article's title seized his attention—in big bold letters, "What it takes to be great." The subtitle cinched it: "Research now shows that the lack of natural talent is irrelevant to great success." Although the lead article itself was a mere four pages in length, the content kept him occupied for the remaining eight hours of the flight.

Ericsson, Scott learned, was considered to be "the expert on experts." For the better part of two decades, he had studied the world's best athletes, authors, chess players, dart throwers, mathematicians, pianists, teachers, pilots, physicians, and others. He was also a bit of a maverick. In a world prone to attribute greatness to genetic endowment, Ericsson did not mince words, "The search for stable heritable characteristics that could predict or at least account for superior performance of eminent individuals [in sports, chess, music, medicine, etc.] has been surprisingly unsuccessful . . . Systematic laboratory research . . . provides no evidence for giftedness or innate talent."

Should Ericsson's bold and sweeping claims prove difficult to believe, take the example of Michael Jordan, regarded widely as the greatest basketball player of all time. When asked, most would cite natural advantages in height, reach, and leap as key to his success. Nevertheless, few know that "His Airness" was cut from his high school varsity basketball team! So much for the idea of being born great. It simply does not work that way.

The key to superior performance? As absurd as it sounds, the best of the best simply work harder at improving their performance than others. Jordan, for example, did not give up when thrown off the team. Instead, his failure drove him to the courts, where he practiced hour after hour. As he put it, "Whenever I was working out and got tired and figured I ought to stop, I'd close my eyes and see that list in the locker room without my name on it, and that usually got me going again."

Such deliberate practice, as Ericsson goes to great lengths to point out, isn't the same as the number of hours spent on the job, but rather the amount of time devoted specifically to reaching for objectives "just beyond one's level of proficiency." He chides anyone who believes that experience creates expertise, saying, "Just because you've been walking for 50 years doesn't mean you're getting better at it." Of interest, he and his group have found that elite performers across many different domains engage in the same amount of such practice, on average, every day, including weekends. In a study of 20-year-old musicians, for example, Ericsson and colleagues found that the top violinists spent twice as much time (10,000 hours on average) working to meet specific performance targets as the next best players and 10 times as much time as the average musician.

As time consuming as this level of practice sounds—and it is—it is not enough. According to Ericsson, to reach the top level, attentiveness to feedback is crucial. Studies of physicians with an uncanny ability to diagnose baffling medical problems, for example, show that they act differently than their less capable, but equally well-trained, colleagues. In addition to visiting, examining, taking careful notes, and reflecting on their assessment of a particular patient, they take one additional critical step. They follow up. Unlike their "proficient" peers, they do not settle. Call it professional compulsiveness or pride, but these physicians need to know whether they were right, even though finding out is neither required nor reimbursable. "This extra step," Ericsson says, "gives the superdiagnostician a significant advantage over his peers. It lets him better understand how and when he's improving."

Within days of touching down, Scott had shared Ericsson's findings with Mark and Barry. An intellectual frenzy followed. Articles were pulled, secondary references tracked down, and Ericsson's 918-page Cambridge Handbook of Expertise and Expert Performance purchased and read cover to cover. In the process, our earlier confusion gave way to understanding. With considerable chagrin, we realized that what therapists per se do is irrelevant to greatness. The path to excellence would never be found by limiting our explorations to the world of psychotherapy, with its attendant theories, tools, and techniques. Instead, we needed to redirect our attention to superior performance, regardless of calling or career.

Knowing what you don't know

Ericsson's work on practice and feedback also explained the studies that show how most of us grow continually in confidence over the course of our careers, despite little or no improvement in our actual rates of success. Hard to believe but true. On this score, the experience of psychologist Paul Clement is telling. Throughout his years of practice, he kept unusually thorough records of his work with clients, detailing hundreds of cases falling into 84 different diagnostic categories. "I had expected to find," he said in a quantitative analysis published in the peer-reviewed journal Professional Psychology, "that I had gotten better and better over the years . . . but my data failed to suggest any . . . change in my therapeutic effectiveness across the 26 years in question."

Contrary to conventional wisdom, the culprit behind such mistaken self-assessment is not incompetence, but rather proficiency. Within weeks and months of first starting out, noticeable mistakes in everyday professional activities become increasingly rare, and thereby make intentional modifications seem irrelevant, increasingly difficult, and costly in time and resources. Once more, this is human nature, a process that dogs every profession. Add to this the custom in our profession of conflating success with a particular method or technique, and the door to greatness for many therapists is slammed shut early on.

During the last few decades, for example, more than 10,000 "how-to" books on psychotherapy have been published. At the same time, the number of treatment approaches has mushroomed, going from around 60 in the early days to more than 400 psychological treatment models today. At present, there are 145 officially approved, manualized, evidence-based treatments for 51 of the 397 possible DSM diagnostic groups. Based on these numbers alone, one would be hard pressed to not believe that real progress has been made by the field. More than ever before, we know what works for whom. Or do we?

Comparing the success rates of today with those of 10, 20, or 30 years ago is one way to find out. One would expect that the profession is progressing in a manner comparable to the Olympics. Fans know that during the last century, the best performance for every event has improved—in some cases, by as much as 50 percent. What is more, excellence at the top has had a trickle-down effect, improving performance at every level. For example, the fastest time clocked for the marathon in the 1896 Olympics was just one minute faster than the time that is required now just to participate in the most competitive marathons like Boston and Chicago. By contrast, no measurable improvement in the effectiveness of psychotherapy has occurred in the last 30 years.

The time has come to confront the unpleasant truth: our tried-and-true strategies for improving what we do have failed. Instead of advancing as a field, we have stagnated, mistaking our feverish pedaling on a stationary bicycle for progress in the Tour de Therapy. This is not to say that therapy is ineffective. Quite to the contrary, the data are clear and unequivocal: psychotherapy works. Studies conducted over the last three decades show effects equal to or greater than those achieved by a host of well-accepted medical procedures, such as coronary artery bypass surgery, the pharmacological treatment of arthritis, and AZT for AIDS. At issue, however, is how we can learn from our experiences and "improve" our rate of success, both as a discipline and in our individual practices.

Incidentally, psychotherapists are not alone in this struggle to increase our expertise. During our survey of the literature on greatness, we came across an engaging and provocative article published in the New Yorker magazine. Using the treatment of cystic fibrosis (CF) as an example, science writer Atul Gawande showed how the same processes that undermine excellence in psychotherapy play out in medicine. Since 1964, medical researchers have been tracking the outcomes of patients with CF, a genetic disease striking 1,000 children yearly. The disease is progressive and, over time, mucus fills, hardens, and eventually destroys the lungs.

As is the case with psychotherapy, the evidence indicates that standard CF treatment works. With medical intervention, life expectancy is on average 33 years; without care, few patients survive infancy. The real story, as Gawande points out, is not that patients with CF live longer when treated, but that, as with psychotherapy, there is a significant variation in treatment success rates. At the best treatment centers, survival rates are 50 percent higher than the national average, meaning that patients live to be 47 on average.

Such differences, however, have not been achieved through standardization of care and the top-down imposition of the "best" practices. Indeed, Cincinnati Children's Hospital (CCH), one of the nation's most respected treatment centers—which employs two of the physicians responsible for preparing the national CF treatment guidelines—produced only average to poor outcomes. In fact, on one of the most critical measures, lung functioning, this institution scored in the bottom 25 percent.

It is a small comfort to know that our counterparts in medicine, a field celebrated routinely for its scientific rigor, stumble and fall just as much as we "soft-headed" psychotherapists do in the pursuit of excellence. But Gawande's article, available for free at the Institute for Healthcare Improvement website (www.ihi.org), provides so much more than an opportunity to commiserate. His piece confirms what our own research revealed to be the essential first step in improving outcomes: knowing your baseline performance. It just stands to reason. If you call a friend for directions, her first question will be, "Where are you?" The same is true of Rand McNally, Yahoo!, and every other online mapping service. To get where you want to go, you first have to know where you are—a fact the clinical staff at CCH put to good use.

In truth, most practicing psychotherapists have no hard data on their success rates with clients. Fewer still have any idea how their outcomes compare to those of other clinicians or to national norms. Unlike therapists, though, the staff at CCH not only determined their overall rate of effectiveness, they were able to compare their success rates with other major CF treatment centers across the country. With such information in hand, the medical staff acted to push beyond their current standard of reliable performance. In time, their outcomes improved markedly.

A formula for success

Turning to specifics, the truth is we have yet to discover how supershrinks like Dawn and Gordon ascertain their baseline. Our experience leads us to believe that they do not know either. What is clear is that their appraisal, intuitive though it may be, is more accurate than that of average practitioners. It is likely, and our analysis thus far confirms, that the methods they employ will prove to be highly variable, defying any simple attempt at classification. Despite such differences in approach, the supershrinks without exception possess a keen "situational awareness": they are observant, alert and attentive. They constantly compare new information with what they already know.

For the rest of us mere mortals, a shortcut to supershrinkdom exists. It entails using simple paper and pencil scales and some basic statistics to compute your baseline, a process we discuss in detail in what follows. In the end, you may not become the Frank Sinatra, Tiger Woods, or Melissa Etheridge of the therapy world, but you will be able to sing, swing and strum along with the best.
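To make the "basic statistics" concrete, here is a minimal sketch of one common way to summarize baseline effectiveness: a pre-post effect size, computed as the average change on an outcome scale divided by the spread of intake scores. This is an illustration only; the specific formula, the function name, and the sample numbers are assumptions for demonstration, not the authors' prescribed method or real client data.

```python
from statistics import mean, stdev

def baseline_effect_size(intake_scores, final_scores):
    """Illustrative pre-post effect size: mean client change divided by
    the sample standard deviation of the intake scores."""
    changes = [post - pre for pre, post in zip(intake_scores, final_scores)]
    return mean(changes) / stdev(intake_scores)

# Hypothetical scores on an outcome scale (higher = better functioning)
intake = [14.0, 18.5, 12.0, 20.0, 16.5, 11.0, 19.0, 15.5]
final = [22.0, 25.0, 15.0, 28.5, 21.0, 17.5, 24.0, 18.0]

print(round(baseline_effect_size(intake, final), 2))  # prints 1.68
```

A number like this becomes meaningful only in comparison, for instance against published norms or the aggregate results of other clinicians, which is precisely the step the CCH staff took with their survival data.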

The prospect of knowing one's true rate of success can provoke anxiety even in the best of us. For all that, studies of working clinicians provide little reason for concern. To illustrate, the outcomes reported in a recent study of 6,000 practitioners and 48,000 clients were as good as or better than those typically reported in tightly controlled studies. These findings are especially notable because clinicians, unlike researchers, do not have the luxury of handpicking the clients they treat. Most clinicians do good work most of the time, and do so while working with complex, difficult cases.

At the same time, you should not be surprised or disheartened when your results prove to be average. As with height, weight, and intelligence, success rates of therapists are normally distributed, resembling the all-too-familiar bell curve. It is a fact that, in nearly all facets of life, most of us cluster tightly around the mean. As the research by Hiatt and Hargrave shows, a more serious problem is when therapists do not know how they are performing or, worse, think they know their effectiveness without outside confirmation.

Unfortunately, our own work tracking the outcomes of thousands of therapists in diverse clinical settings has exposed a consistent and alarming pattern: those who are the slowest to adopt a valid and reliable procedure for establishing their baseline performance typically have the poorest outcomes of the lot.

Should any doubt remain with regard to the value and importance of determining one's overall rate of success, let us underscore that the mere act of measuring yields improved outcomes. In fact, it is the first and among the most potent forms of feedback available to clinicians seeking excellence. Several recent studies demonstrate convincingly that monitoring client progress on an ongoing basis improves effectiveness dramatically. Our own study published last year in the Journal of Brief Therapy found that providing therapists with real-time feedback improved outcomes by nearly 65 percent. No downside exists to determining your baseline effectiveness. One either is proven effective or becomes more effective in the process.

There is more good news on this score. Share your baseline—good, bad, or average—with clients and the results are even more dramatic. Dropouts, the single greatest threat to therapeutic success, are cut in half. At the same time, outcomes improve yet again, in particular among those at greatest risk for treatment failure. Cincinnati Children's Hospital provides a case in point. Although surprised and understandably embarrassed about their overall poor national ranking, the medical staff nonetheless resolved to share the results with the patients and families. Contrary to what might have been predicted, not a single family chose to leave the program.

That everyone decided to remain committed rather than bolt should really come as no surprise. Across all types of relationships—business, family and friendship, medicine—success depends less on a connection during the good times than on maintaining engagement through the inevitable hard times. The fact the CCH staff shared the information about their poor performance increased the connection their patients felt with them and enhanced their engagement. It is no different in psychotherapy. Where we as therapists have the most impact on securing and sustaining engagement is through the relationship with our clients, what is commonly referred to as the "alliance." When it works well, client and therapist reach and maintain agreement about where they are going and the means by which they will get there. Equally important is the strength of the emotional connection—the bond.

Supershrinks, as our own research shows, are exquisitely attuned to the vicissitudes of client engagement. In what amounts to a quantum difference between themselves and average therapists, they are more likely to ask for and receive negative feedback about the quality of the work and their contribution to the alliance. We have now confirmed this finding in numerous independent samples of practitioners working in diverse settings with a wide range of presenting problems. The best clinicians, those falling in the top 25 percent of treatment outcomes, consistently achieve lower scores on standardized alliance measures at the outset of therapy, enabling them to address potential problems in the working relationship. By contrast, median therapists commonly receive negative feedback later in treatment, at a time when clients have already disengaged and are at heightened risk for dropping out.

How do the supershrinks use feedback with regard to the alliance to maintain engagement? A session conducted by Dawn, rescuer of the box elder bugs, is representative of the work done by the field's most effective practitioners. At the time of the visit, we were working as consultants to her agency, teaching the staff to use the standardized outcome and alliance scales, and observing selected clinical interviews from behind a one-way mirror. She had been meeting with an elderly man for the better part of an hour. Although the session initially had lurched along, an easy give and take soon developed between the two. Everyone watching agreed that, overall, the session had gone remarkably well.

At this point, Dawn gave the alliance measure to the client, saying "This is the scale I told you about at the beginning of our visit. It's something new we're doing here. It's a way for me to check in, to get your feedback or input about what we did here today."

Without comment, the man took the form, and after quickly completing it, handed it back to Dawn.

"Oh, wow," she remarked, after rapidly scoring the measure, "you've given me, or the session at least, the highest marks possible."

With that, everyone behind the one-way mirror began to stir in their chairs. Each of us was expecting Dawn to wrap up the session—even, it appeared, the client who was inching forward on his chair. Instead, she leaned toward him.

"I'm glad you came today," she said.

"It was a good idea," he responded, "um, my, uh, doctor told me to come, in, and . . . I did, and, um . . . it's been a nice visit."

"So, will you be coming back?"

Without missing a beat, the man replied, "You know, I'm going to be all right. A person doesn't get over a thing like this overnight. It's going to take me a while. But don't you worry."

Behind the mirror, we and the staff were surprised again. The session had gone well. He had been engaged. A follow-up appointment had been made. Now we heard ambivalence in his voice.

For her part, Dawn was not about to let him off the hook. "I'm hoping you will come back."

"You know, I miss her terribly," he said, "it's awfully lonely at night. But, I'll be all right. As I said, don't worry about me."

"I appreciate that, appreciate what you just said, but actually what I worry about is that I missed something. Come to think about it, if we were to change places, if I were in your shoes, I'd be wondering, 'What really can she know or understand about this, and more, what can she possibly do?'"

A long silence followed. Eventually, the man looked up, and with tears in his eyes, caught her gaze.

Softly, Dawn continued, "I'd like you to come back. I'm not sure what this might mean to you right now, but you don't have to do this alone."

Nodding affirmatively, the man stood, took Dawn's hand, and gave it a squeeze. "See you, then."

Several sessions followed. During that period his scores on the standardized outcome measure improved considerably. At the time, the team was impressed with Dawn. Her sensitivity and persistence paid off, keeping the elderly man engaged, and preventing his dropping out. The real import of her actions, however, did not occur to any of us until much later.

All therapists experience similar incisive moments in their work with clients; times when they are acutely insightful, discerning, even wise. However, such experiences are actually of little consequence in separating the good from the great. Instead, superior performance is found in the margins—the small but consistent difference in the number of times corrective feedback is sought, successfully obtained, and then acted on.

Most therapists, when asked, report that they check in routinely with their clients and know when to do so. But our own research found this to be far from the case. In early 1998, we initiated a study to investigate the impact on treatment outcome of seeking client feedback. Several formats were included. In one, therapists were supposed to seek informal client input on their own. In another, standardized, client-completed outcome and alliance measures were administered and the results shared with fellow therapists. Treatment-as-usual served as a third, control group.

Initial results of the study pointed to an advantage for the feedback conditions. Ultimately, however, the entire project had to be scrapped as a review of the videotapes showed that the therapists in the informal group failed routinely to ask clients for their input—even though, when later queried, the clinicians maintained they had sought feedback.

For their part, supershrinks consistently seek client feedback about how the client feels about them and their work together; they don't just say they do. Dawn perhaps said it best: "I always ask. Ninety-nine percent of the time, it doesn't go anywhere—at least at the moment. Sometimes I'll get a call, but rarely. More likely, I'll call, and every so often my nosiness uncovers something, some, I don't know quite how to say it, some barrier or break, something in the way of our working together." Such persistence in the face of infrequent payoff is a defining characteristic of those destined for greatness.

Whereas birds can fly, the rest of us need an airplane. When a simple measure of the alliance is used in conjunction with a standardized outcome scale, available evidence shows clients are less likely to deteriorate, more likely to stay longer, and twice as likely to achieve a change of clinical significance. What is more, when applied on an agency-wide basis, tracking client progress and experience of the therapeutic relationship has an effect similar to the one noted earlier in the Olympics: across the board, performance improves; everyone gets better. As John F. Kennedy was fond of saying, "A rising tide lifts all boats."

While it is true that the tide raises everyone, we have observed that supershrinks continue to beat others out of the dock. Two factors account for this. As noted earlier, superior performers engage in significantly more deliberate practice. That is, as Ericsson, the expert on experts, says, "effortful activity designed to improve individual target performance." Specific methods of deliberate practice have been developed and employed in the training of pilots, surgeons, and others in highly demanding occupations. Our most recent work has focused on adapting these procedures for use in psychotherapy.

In practical terms, the process involves three steps: think, act, and, finally, reflect. This approach can be remembered by the acronym T.A.R. To prepare for moving beyond the realm of reliable performance, the best of the best engage in forethought. This means they set specific goals and identify the particular means by which they will reach those goals. It is important to note that superior performance depends on simultaneously attending to both the ends and the means.

To illustrate, suppose a therapist wanted to improve the engagement level of clients mandated into treatment for substance abuse. First, they would need to define in measurable terms how they would know, what they would see, that would tell them the client is engaged actively in the treatment (e.g., attendance, dialog, eye contact, posture, etc.). Following this, the therapist would develop a step-by-step plan to achieve the specific objectives. Because therapies that focus on client goals result in greater participation, the therapist might, for example, create a list of questions designed to elicit and confirm what the client wants. Not only this, but time would be spent in anticipating what the client might say and planning a strategy for each response.

In the act phase, successful experts track their performance. They monitor on an ongoing basis whether they used each of the steps or strategies outlined in the thinking phase and the quality with which each step was executed. The sheer volume of detail gathered in assessing their performance distinguishes the exceptional from their more average counterparts.

During the reflection phase, top performers review the details of their performance, and identify specific actions and alternate strategies for reaching their goals. Where unsuccessful learners paint with broad strokes, and attribute failure to external and uncontrollable factors (e.g., "I had a bad day," "I wasn't with it"), the experts know exactly what they do, more often citing controllable factors (e.g., "I should have done x instead of y," or "I forgot to do x and will do x plus y next time"). In our work with psychotherapists, for example, we have found that average practitioners are more likely to spend time hypothesizing about failed strategies, believing perhaps that understanding the reasons why an approach did not work will lead to better outcomes, and less time thinking about strategies that might be more effective.

Returning to the example above, an average therapist would be more likely to attribute failure to engage the mandated substance abuser to denial, resistance, or lack of motivation. The expert, on the other hand, would say, "Instead of organizing the session around 'drug use,' I should have emphasized what the client wanted—getting his driver's license back. Next time, I will explore in detail what the two of us need to do right now to get him back in the driver's seat."

The penchant for seeking explanations for treatment failures can have life-and-death consequences. In the 1960s, the average lifespan of children with cystic fibrosis treated by "proficient" pediatricians was three years. The field as a whole attributed the high mortality rate routinely to the illness itself, a belief which, in retrospect, can only be viewed as a self-fulfilling prophecy. After all, why search for alternative methods if the disease invariably kills? Although certainly less dramatic, psychologist William Miller makes a similar point about psychotherapy, noting that most models do not account for how people change, but rather why they stay the same. In our experience, diagnostic classifications often serve a similar function by attributing the cause of a failing or failed therapy to the disorder.

By comparison, deliberate practice bestows clear advantages. In place of static stories and summary conclusions, options predominate. Take chess, for example. The unimaginable speed with which master players intuit the board and make their moves gives them the appearance of wizards, especially to dabblers. Research proves this to be far from the case. In point of fact, they possess no unique or innate ability or advantage in memory. Far from it. Their command of the game is simply a function of numbers: they have played this game and a thousand others before. As a result, they have more means at their disposal.

The difference between average and world-class players becomes especially apparent when stress becomes a factor. Confronted by novel, complex, or challenging situations, the focus of the merely proficient performers narrows to the point of tunnel vision. In chess, these people are easy to spot. They are the ones sitting hunched over the board, their finger glued to a piece, contemplating the next move. But studies of pilots, air traffic controllers, emergency room staff, and others in demanding situations and pursuits show that superior performers expand their awareness, availing themselves of all the options they have identified, rehearsed, and perfected over time.

Deliberate practice, to be sure, is not for the harried or hassled. Neither is it for slackers. Yet the willingness to engage in deliberate practice is what separates the "wheat from the chaff." The reason is simple: doing it is unrewarding in almost every way. As Ericsson notes, "Unlike play, deliberate practice is not inherently motivating; and unlike work, it does not lead to immediate social and monetary rewards. In addition, engaging in [it] generates costs." No third party (e.g., client, insurance company, or government body) will pay for the time spent to track client progress and alliance, identify at-risk cases, develop alternate strategies, seek permission to record treatment sessions, ensure HIPAA compliance and confidentiality, systematically review the recordings, evaluate and refine the execution of the strategies, and solicit outside consultation, training, or coaching specific to particular skill sets. And, let's face it, few of us are willing to pay for it out of pocket. But this, and all we have just described, is exactly what the supershrinks do. In a word, they are self-motivated.

What leads people, children and adults, to devote the time, energy, and resources necessary to achieve greatness is poorly understood. Even when the path to improved performance is clear and requires little effort, most do not follow through. As recently reported in The New York Times, a study of 12 highly experienced gastroenterologists, each having performed a minimum of 3,000 colonoscopies, found that some were 10 times better at finding precancerous polyps than others. An extremely simple solution, one involving no technical skill or diagnostic prowess, was found to increase the polyp-detection rate by 50 percent. Sadly, despite this dramatic improvement, most of the doctors stopped using the remedy the moment the clinical trial ended.

Ericsson and colleagues believe that future studies of elite performers will give us a better idea of how motivation is promoted and sustained. Until then, we know that deliberate practice works best when done multiple times each day, including weekends, for short periods, interrupted by brief rest breaks. "Cramming" or "crash courses" don't work and increase the likelihood of exhaustion and burnout.

The Institute for the Study of Therapeutic Change is developing a web-based system to facilitate deliberate practice. The system is patterned after similar programs in use with pilots, surgeons, and other professionals. The advantage here is that the steps to excellence are automated. At www.myoutcomes.com, clinicians are already able to track their outcomes, establish their baseline, and compare their performance to national norms. The system also provides feedback to therapists when clients are at risk for deterioration or drop-out.

At present, we are testing algorithms that identify patterns in the data associated with superior outcomes. Such formulas, based on thousands of clients and therapists, will enable us to identify when an individual's performance is at variance with the pattern of excellence. When this happens, the clinician will be notified by e-mail of an online deliberate practice opportunity. Such training will differ from traditional continuing education in two critical ways. First, it will be targeted to the development of skill sets specific to the needs of the individual clinician. Second, and of greater consequence in the pursuit of excellence, the impact on outcome can be measured immediately. It is our hope that such a system will make the process of deliberate practice more accessible, less onerous, and more efficient.

The present era in psychotherapy has been referred to by many leading thinkers as the "age of accountability." Everyone wants to know what they are getting for their money. But it is no longer a simple matter of cost and the bottom line. People are looking for value. As a field, we have the means at our disposal to demonstrate the worth of psychotherapy in the eyes of consumers and payers and to increase its value. The question is, will we?

References

Clement, P. (1994). Quantitative Evaluation of 26 Years of Private Practice. Professional Psychology: Research and Practice, 25, 2, 173-76.

Colvin, G. (2006, October 19). What It Takes to Be Great. Fortune.

Ericsson, K. A. (2006). Cambridge Handbook of Expertise and Expert Performance. United Kingdom: Cambridge University Press.

Gawande, A. (2004, December 6). The Bell Curve. The New Yorker.

Hiatt, D. & Hargrave, G. E. (1995). The Characteristics of Highly Effective Therapists in Managed Behavioral Provider Networks. Behavioral Healthcare Tomorrow, 4, 19-22.

Miller, S., Duncan, B., Brown, J., Sorrell, R., & Chalk, M. (2007). Using Formal Client Feedback to Improve Retention and Outcome. Journal of Brief Therapy, 5, 19-28.

Ricks, D.F. (1974). Supershrink: Methods of a therapist judged successful on the basis of adult outcomes of adolescent patients. In D. F. Ricks, M. Roff (Eds.), Life History Research in Psychopathology. Minneapolis: University of Minnesota Press, 275-297.

Villarosa, L. (2006, December 19). Done Right, Colonoscopy Takes Time, Study Finds. The New York Times, Health Section.

Wampold, B. E. & Brown, J. (2005). Estimating Variability in Outcomes Attributable to Therapists: A Naturalistic Study of Outcomes in Managed Care. Journal of Consulting and Clinical Psychology, 73, 5, 914-23.

“When I’m good, I’m very good, but when I’m bad I’m better”: A New Mantra for Psychotherapists

Current estimates suggest that nearly 50 percent of therapy clients drop out and at least one third, and up to two thirds, do not benefit from our usual strategies. Barry Duncan and Scott Miller provide a comprehensive summary of the Outcome-Informed, Client-Directed approach and a detailed, practical overview of its application in clinical practice. Through case examples they demonstrate how most practitioners can increase their therapeutic effectiveness substantially through accurate identification of those clients who are not responding, and addressing the lack of change in a way that keeps clients engaged in treatment and forges new directions.

Introduction

At first blush, Mae West's famous words 'When I'm good, I'm very good, but when I'm bad I'm better' hardly seem like a guide for therapists to live by—but, as it turns out, they could be. Research demonstrates consistently that who the therapist is accounts for far more of the variance of change (6 to 9 percent) than the model or technique administered (1 percent). In fact, therapist effectiveness ranges from a paltry 20 percent to an impressive 70 percent. A small group of clinicians—sometimes called 'supershrinks'—obtain demonstrably superior outcomes in most of their cases, while others fall predictably on the less-exalted sections of the bell-shaped curve. However, most practitioners can join the ranks of supershrinks, or at least increase their therapeutic effectiveness substantially.
 
Consider Matt, a twenty-something software whiz who was on the road frequently to trouble-shoot customer problems. Matt loved his job but travelling was an ordeal—not because of flying but because of another, far more embarrassing problem. Matt was long past feeling frustrated about standing and standing in public restrooms trying to 'go.' What started as a mild discomfort and inconvenience easily solved by repeated restroom visits had progressed to full-blown anxiety attacks, an excruciating pressure, and an intense dread before each trip. Feeling hopeless and demoralized, Matt considered changing jobs but as a last resort decided instead to see a therapist.
 
Matt liked the therapist and it felt good finally to tell someone about the problem. The therapist worked with Matt to implement relaxation and self-talk strategies. Matt practiced in session and tried to use the ideas on his next trip, but still no 'go.' The problem continued to get worse. Now three sessions in, Matt was at significant risk for a negative outcome—either dropping out or continuing in therapy without benefit.
 
We have all encountered clients unmoved by treatment. Therapists often blame themselves. The overwhelming majority of psychotherapists, as cliched as it sounds, want to be helpful. Many of us answered "I want to help people" on graduate school applications as the reason we chose to be therapists. Often, some well-meaning person dissuaded us from that answer because it didn't sound sophisticated or appeared too 'co-dependent.' Such aspirations, we now believe, are not only noble but can provide just what is needed to improve clinical effectiveness. After all, there is not much financial incentive for doing better therapy—we don't do this work because we thought we would acquire the lifestyles of the rich and famous.
 
Unfortunately, the altruistic desire to be helpful sometimes leads us to believe that if we were just smart enough or trained correctly, clients would not remain inured to our best efforts—if we found the Holy Grail, that special model or technique, we could once and for all defeat the psychic dragons that terrorize clients. Amid explanations and remedies aplenty, therapists search courageously for designer explanations and brand-name miracles, but continue to observe that clients drop out, or even worse, continue without benefit. Current estimates suggest that nearly 50 percent of our clients drop out and at least one third, and up to two thirds, do not benefit from our usual strategies.
 
So what can we do to channel our healthy desire to be helpful? If we listen to the lessons of the top performers, the first thing we should do is step outside of our comfort zones and push the limits of our current performance—to identify accurately those clients not responding to our therapeutic business as usual, and address the lack of change in a way that keeps clients engaged in treatment and forges new directions.
 
To recapture those clients who slip through the cracks, we need to embrace what is known about change. First, many studies reveal that the majority of clients experience change in the first six visits—clients reporting little or no change early on tend to show no improvement over the entire course of therapy, or wind up dropping out. Early change, in other words, predicts engagement in therapy and ongoing benefit. This doesn't mean that a client is 'cured' or the problem is totally resolved, but rather that the client has a subjective sense that things are getting better. And second, a mountain of studies has long demonstrated another robust predictor—that reliable, tried-and-true but taken-for-granted old friend—the therapeutic alliance. Clients who rate the relationship with their therapist highly tend to be those who stick around in therapy and benefit from it.
 
Next we need to measure those known predictors in a systematic way with reliable and valid instruments. So instead of regarding the first few therapy sessions as a 'warm-up' period or a chance to try out the latest technique, we engage the client in helping us judge whether therapy is providing benefit. Obtaining feedback on standardized measures about success or failure during those initial meetings provides invaluable information about the match between ourselves, our approach, and the client—enabling us to know when we are bad, so we can be even better. The only way we can improve our outcomes is to know, very early on, when the client is not benefiting—we need something akin to an early warning signal.
 
Using standardized measures to monitor outcome may make your skin crawl and bring to mind torture devices like the Rorschach or MMPI. But the forms for these measures are not used to pass judgment, diagnose or unravel the mysteries of the human psyche. Rather, these measures invite clients into the inner circle of mental health and substance abuse services—they involve clients collaboratively in monitoring progress toward their goals and the fit of the services they are receiving, and amplify their voices in any decisions about their care.

The Outcome Rating Scale (ORS)

You might also think that the last thing you need is to add more paperwork to your practice. But finding out who is and isn't responding to therapy need not be cumbersome. In fact, it only takes a minute. Dissatisfied with the complexity, length, and user-unfriendliness of existing outcome measures, we developed the Outcome Rating Scale (ORS) as a brief clinical alternative. The ORS (child measures also available) and all the measures discussed here are available for free download at talkingcure.com. The ORS assesses three dimensions:
  1. Personal or symptomatic distress (measuring individual well-being)
  2. Interpersonal well-being (measuring how well the client is getting along in intimate relationships)
  3. Social role (measuring satisfaction with work/school and relationships outside of the home)
Changes in these three areas are widely considered to be valid indicators of successful outcome. The ORS simply translates these three areas and an overall rating into a visual analog format of four 10-cm lines, with instructions to place a mark on each line with low estimates to the left and high to the right. The four 10-cm lines add to a total score of 40. The score is simply the summation of the marks made by the client to the nearest millimeter on each of the four lines, measured by a centimeter ruler or available template. A score of 25, the clinical cutoff, differentiates those who are experiencing enough distress to be in a helping relationship from those who are not. Because of its simplicity, ORS feedback is available immediately for use at the time the service is delivered. Rated at an eighth-grade reading level, the ORS is easily understood, and clients have little difficulty connecting it to their day-to-day lived experience.
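For readers who like to see the arithmetic spelled out, the scoring just described can be sketched in a few lines of code. This is purely illustrative, not part of any published ORS software; the function names are ours, and treating scores of 25 and above as falling at or above the cutoff is our reading of the text.

```python
# Illustrative sketch of ORS scoring (not official software).
# Each of the four scales is a 10-cm line; the client's mark is read
# to the nearest millimeter, so each scale contributes 0.0 to 10.0.

ORS_CLINICAL_CUTOFF = 25  # adult cutoff described in the text

def score_ors(marks_cm):
    """Sum the four scale marks (in cm) into a 0-40 total."""
    if len(marks_cm) != 4:
        raise ValueError("The ORS has exactly four scales")
    if any(m < 0 or m > 10 for m in marks_cm):
        raise ValueError("Each mark must fall on a 10-cm line")
    return round(sum(marks_cm), 1)

def above_cutoff(total):
    """True when the total falls at or above the clinical cutoff."""
    return total >= ORS_CLINICAL_CUTOFF

# A client entering at 18 (like Matt, below) is under the cutoff of 25.
print(score_ors([3.5, 5.0, 4.5, 5.0]))  # 18.0
print(above_cutoff(18.0))               # False
```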
 
Matt completed the ORS before each session. He entered therapy with a score of 18, about average for those attending outpatient settings, but continued to hover at that score. At the third session, when the ORS reflected no change, it was not front-page news to Matt. But a different process ensued. In the same spirit of collaboration as the assessment process, Matt and his therapist brainstormed ideas, a free-for-all of unedited speculations and suggestions of alternatives, from changing nothing about the therapy to taking medication to shifting treatment approaches. During this open exchange Matt intimated that he was beginning to feel angry about the whole thing—real angry. The therapist noticed that when Matt worked himself up to a good anger—about how his problem interfered with his work and added a huge hassle in any extended situation away from his own bathroom—he became quite animated, a stark contrast to the passively resigned person who had characterized their previous sessions. One of them, which one remains a mystery, mentioned the words 'pissed off' and both broke into raucous laughter. Subsequently, the therapist suggested that instead of responding with hopelessness when the problem occurred, Matt work himself up to a good anger about how this problem made his life miserable. Matt added (he was a rock-and-roll buff) that he could also sing the Tom Petty song "I Won't Back Down" during his tirade at the toilet. Matt allowed himself, when standing in front of the urinal, to become incensed—downright 'pissed off'—and amused. And he started to go.
 
This process, the delightful creative energy that emerges from the wonderful interpersonal event we call therapy, could have happened to any therapist working with Matt. The difference is that the use of the outcome measure spotlighted the lack of change and made it impossible to ignore. The ORS brought the risk of a negative outcome front and center and allowed the therapist to enact the second characteristic of supershrinks, to be exceptionally alert to the risk of dropout and treatment failure. In the past, we might have continued with the same treatment for several more sessions, unaware of its ineffectiveness or believing (hoping, even praying) that our usual strategies would eventually take hold, but the reliable outcome data pushed us to explore different treatment options by the end of the third visit.
 
Pushing the limits of one's performance requires monitoring the fit of your service with the client's expectations about the alliance. The ongoing assessment of the alliance enables therapists to identify and correct areas of weakness in the delivery of services before they exert a negative effect on outcome.
 

The Session Rating Scale (SRS)

Research shows repeatedly that clients' ratings of the alliance are far more predictive of improvement than the type of intervention or the therapist's ratings of the alliance. Recognizing these much-replicated findings, we developed the Session Rating Scale (SRS) as a brief clinical alternative to longer research-based alliance measures to encourage routine conversations with clients about the alliance. The SRS also contains four items. First, a relationship scale rates the meeting on a continuum from "I did not feel heard, understood, and respected" to "I felt heard, understood, and respected." Second is a goals and topics scale that rates the conversation on a continuum from "We did not work on or talk about what I wanted to work on or talk about" to "We worked on or talked about what I wanted to work on or talk about." Third is an approach or method scale (an indication of a match with the client's theory of change) requiring the client to rate the meeting on a continuum from "The approach is not a good fit for me" to "The approach is a good fit for me." Finally, the fourth scale looks at how the client perceives the encounter in total along the continuum: "There was something missing in the session today" to "Overall, today's session was right for me."
 
The SRS simply translates what is known about the alliance into four visual analog scales, with instructions to place a mark on a line with negative responses depicted on the left and positive responses indicated on the right. The SRS allows alliance feedback in real time so that problems may be addressed. Like the ORS, the instrument takes less than a minute to administer and score. The SRS is scored similarly to the ORS, by adding the total of the client's marks on the four 10-cm lines. The total score falls into three categories:
  • An SRS score of 0–34 reflects a poor alliance,
  • a score of 35–38 reflects a fair alliance,
  • a score of 39–40 reflects a good alliance.
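The three bands above amount to a simple lookup, sketched here for illustration only; the function name is ours and is not part of the published measure.

```python
# Illustrative sketch of the SRS alliance bands described in the text.

def srs_alliance_band(total):
    """Map a 0-40 SRS total to the alliance band from the text."""
    if not 0 <= total <= 40:
        raise ValueError("SRS totals range from 0 to 40")
    if total <= 34:
        return "poor"
    if total <= 38:
        return "fair"
    return "good"

print(srs_alliance_band(30))  # poor
print(srs_alliance_band(36))  # fair
print(srs_alliance_band(39))  # good
```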

The SRS allows the implementation of the final lesson of the supershrinks—seek, obtain, and maintain more consumer engagement. Clients drop out of therapy for two reasons: one is that therapy is not helping (hence monitoring outcome) and the other is alliance problems—they are not engaged or turned on by the process. The most direct way to improve your effectiveness is simply to keep people engaged in therapy.

 
A frequent alliance problem emerges when clients' goals do not fit our own sensibilities about what they need. This may be particularly true if clients carry certain diagnoses or problem scenarios. Consider 19-year-old Sarah, who lived in a group home and received social security disability for mental illness. Sarah was referred for counseling because others were concerned that she was socially withdrawn. Everyone was also worried about Sarah's health because she was overweight and spent much of her time watching TV and eating snack foods.
 
In therapy Sarah agreed that she was lonely, but expressed a desire to be a Miami Heat cheerleader. Perhaps understandably, that goal was not taken seriously. After all, Sarah had never been a cheerleader, was 'schizophrenic,' and was not exactly in the best of shape. So no one listened, or even knew why Sarah had such an interesting goal. And the work with Sarah floundered. She spoke rarely and gave minimal answers to questions. In short, Sarah was not engaged and was at risk for dropout or a negative outcome.
 
The therapist routinely gave Sarah the SRS, and she reported that everything was going swimmingly, although the goals scale was an 8.7 out of 10, instead of 9 or above like the rest.
 
Sometimes it takes a bit more work to create the conditions that allow clients to be forthright with us, to develop a culture of feedback in the room. The power disparity combined with any socioeconomic, ethnic, or racial differences make it difficult to tell authority figures that they are on the wrong track. Think about the last time you told your doctor that he or she was not performing well. Clients, however, will let us know subtly on alliance measures far before they will confront us directly.
 
At the end of the third session, the therapist and Sarah reviewed her responses on the SRS. Did she truly feel understood? Was the therapy focused on her goals? Did the approach make sense to her? Such reviews are helpful in fine-tuning the therapy or addressing problems in the therapeutic relationship that have been missed or gone unreported. When asked about her goals, Sarah, avoiding eye contact and nearly whispering, repeated her desire to be a Miami Heat cheerleader.
 
The therapist looked at the SRS and the lights came on. The slight difference on the goals scale told the tale. When the therapist finally asked Sarah about her goal, she told the story of growing up watching Miami Heat basketball with her dad who delighted in Sarah's performance of the cheers. Sarah sparkled when she talked of her father, who passed away several years previously, and the therapist noted that it was the most he had ever heard her speak. He took this experience to heart and often asked Sarah about her father. The therapist also put the brakes on his efforts to get Sarah to socialize or exercise (his goals), and instead leaned more toward Sarah's interest in cheerleading. Sarah watched cheerleading contests regularly on ESPN and enjoyed sharing her expertise. She also knew a lot about basketball.
 
Sarah's SRS score improved on the goal scale and her ORS score increased dramatically. After a while, Sarah organized a cheerleading squad for her agency's basketball team, which played local civic organizations to raise money for the group home. Sarah's involvement with the team ultimately addressed the referral concerns about her social withdrawal and lack of activity. The SRS helps us take clients and their engagement more seriously, like the supershrinks do. Walking the path cut by client goals often reveals alternative routes that would have never been discovered otherwise.
 
Providing feedback to clinicians on the clients' experience of the alliance and progress has been shown to result in significant improvements in both client retention and outcome. “We found that clients of therapists who opted out of completing the SRS were twice as likely to drop out and three times more likely to have a negative outcome.” In the same study of over 6000 clients, effectiveness rates doubled. As incredible as the results appear, they are consistent with findings from other researchers.
 
In a 2003 meta-analysis of three studies, Michael Lambert, a pioneer of using client feedback, reported that helping relationships at risk for a negative outcome that received formal feedback were, at the conclusion of therapy, better off than 65 percent of those without information regarding progress. Think about this for a minute. Even if you are one of the most effective therapists, for every cycle of 10 clients you see, three will go home without benefit. Over the course of a year, for a therapist with a full caseload, this amounts to a lot of unhappy clients. This research shows that you can recover a substantial portion of those who don't benefit by first identifying who they are, keeping them engaged, and tailoring your services accordingly.
 

The Nuts and Bolts

Collecting data on standardized measures and using what we call 'practice-based evidence' can improve your effectiveness substantially. "Wait a minute," you say, "this sounds a lot like research!" Given the legendary schism between research and practice, sometimes getting therapists to do the measures is indeed a tall order because it does sound a lot like the 'R' word.
 
A story illustrates the sentiments that many practitioners feel about research. Two researchers were attending an annual conference. Although enjoying the proceedings, they decided to find some diversion to combat the tedium of sitting all day and absorbing vast amounts of information. They settled on a hot air balloon ride and were quite enjoying themselves until a mysterious fog rolled in. Hopelessly lost, they drifted for hours until a clearing in the fog finally appeared and they saw a man standing in an open field. Joyfully, they yelled down at the man, "Where are we?" The man looked at them, and then down at the ground, before turning a full 360 degrees to survey his surroundings. Finally, after scratching his beard and what seemed to be several moments of facial contortions reflecting deep concentration, the man looked up and said, "You are above my farm."
 
The first researcher looked at the second researcher and said, "That man is a researcher—he is a scientist!" To which the second researcher replied, "Are you crazy, man? He is a simple farmer!" "No," answered the first researcher emphatically, "that man is a researcher and there are three facts that support my assertion: First, what he said was absolutely 100% accurate; second, he addressed our question systematically through an examination of all of the empirical evidence at his disposal, and then deliberated carefully on the data before delivering his conclusion; and finally, the third reason I know he is a researcher is that what he told us is absolutely useless to our predicament."
 
But unlike much of what is passed off as research, the systematic collection of outcome data in your practice is not worthless to your predicament. It allows you the luxury of being useful to clients who would otherwise not be helped. And it helps you get out of the way of those clients you are not helping and connect them to more likely opportunities for change.
 
First, collaboration with clients to monitor outcome and fit actually starts before formal therapy. This means that clients are informed when scheduling the first contact about the nature of the partnership and the creation of a 'culture of feedback' in which their voice is essential.
 
"I want to help you reach your goals. I have found it important to monitor progress from meeting to meeting using two very short forms. Your ongoing feedback will tell us if we are on track, or need to change something about our approach, or include other resources or referrals to help you get what you want. I want to know this sooner rather than later, because if I am not the person for you, I want to move you on quickly and not be an obstacle to you getting what you want. Is that something you can help me with?"
 
We have never had anyone tell us that keeping track of progress is a bad idea. There are five steps to using practice-based evidence to improve your effectiveness.
 

Step One: Introducing the ORS in the First Session

The ORS is administered prior to each meeting and the SRS toward the end. In the first meeting, the culture of feedback is continually reinforced. It is important to avoid technical jargon, and instead explain the purpose of the measures and their rationale in a natural commonsense way. Just make it part of a relaxed and ordinary way of having conversations and working. The specific words are not important—there is no protocol that must be followed. This is a clinical tool! Your interest in the client's desired outcome speaks volumes about your commitment to the client and the quality of service you provide.
 
"Remember our earlier conversation? During the course of our work together, I will be giving you two very short forms that ask how you think things are going and whether you think things are on track. To make the most of our time together and get the best outcome, it is important to make sure we are on the same page with one another about how you are doing, how we are doing, and where we are going. We will be using your answers to keep us on track. Will that be okay with you?"
 

Step Two: Incorporating the ORS in the first session

The ORS pinpoints where the client is and allows a comparison for later sessions. Incorporating the ORS entails simply bringing the client's initial and subsequent results into the conversation for discussion, clarification and problem solving. The client's initial score on the ORS is either above or below the clinical cutoff. You need only mention the client's score as it relates to the cutoff. Keep in mind that the use of the measures is 100-percent transparent. There is nothing the measures tell you that you cannot share with the client. It is the client's interpretation that ultimately counts.
 
"From your ORS it looks like you're experiencing some real problems." Or: "From your score, it looks like you're feeling okay." "What brings you here today?" Or: "Your total score is 15—that's pretty low. A score under 25 indicates people who are in enough distress to seek help. Things must be pretty tough for you. Does that fit your experience? What's going on?"
 
"The way this ORS works is that scores under 25 indicate that things are hard for you now or you are hurting enough to bring you to see me. Your score on the individual scale indicates that you are really having a hard time. Would you like to tell me about it?"
 
Or if the ORS is above 25: "Generally when people score above 25, it is an indication that things are going pretty well for them. Does that fit your experience? It would be really helpful for me to get an understanding of what it is that brought you here now."
 
Because the ORS has face validity, clients usually mark lowest the scale that represents the reason they are seeking therapy, and often connect that reason to the mark they've made without prompting from the therapist. For example, Matt marked the Individual scale the lowest with the Social scale coming in a close second. As he was describing his problem in public restrooms, he pointed to the ORS and explained that this problem accounted for his mark. Other times, the therapist needs to clarify the connection between the client's descriptions of the reasons for services and the client's scores. The ORS makes no sense unless it is connected to the described experience of the client's life. This is a critical point because clinician and client must know what the mark on the line represents to the client and what will need to happen for the client to both realize a change and indicate that change on the ORS.
 
At some point in the meeting, the therapist needs only to pick up on the client's comments and connect them to the ORS:
 
"Oh, okay, it sounds like dealing with the loss of your brother (or relationship with wife, sister's drinking, or anxiety attacks, etc.) is an important part of what we are doing here. Does the distress from that situation account for your mark here on the individual (or other) scale on the ORS? Okay, so what do you think will need to happen for that mark to move just one centimeter to the right?"
 
The ORS, by design, is a general outcome instrument and provides no specific content other than the three domains. The ORS offers only a bare skeleton to which clients must add the flesh and blood of their experiences, into which they breathe life with their ideas and perceptions. At the moment in which clients connect the marks on the ORS with the situations that are distressing, the ORS becomes a meaningful measure of their progress and potent clinical tool.
 

Step Three: Introducing the SRS

The SRS, like the ORS, is best presented in a relaxed way that is integrated seamlessly into your typical way of working. The use of the SRS continues the culture of client privilege and feedback, and opens space for the client's voice about the alliance. The SRS is given at the end of the meeting, with enough time left to discuss the client's responses.
 
"Let's take a minute and have you fill out the form that asks for your opinion about our work together. It's like taking the temperature of our relationship today. Are we too hot or too cold? Do I need to adjust the thermostat? This information helps me stay on track. The ultimate purpose of using these forms is to make every possible effort to make our work together beneficial. Is that okay with you?"
 

Step Four: Incorporating the SRS

Because the SRS is easy to score and interpret, you can do a quick visual check and integrate it into the conversation. If the SRS looks good (9 cm or more on every scale), you need only comment on that fact and invite any other comments or suggestions. If the client marks any scale lower than 9 cm, you should definitely follow up. Clients tend to score all alliance measures highly, so the practitioner should address any hint of a problem. Anything less than a total score of 36 might signal a concern, and therefore it is prudent to invite clients to comment. Keep in mind that a high rating is a good thing, but it doesn't tell you very much. Always thank the client for the feedback and continue to encourage their open feedback. Remember that unless you convey you really want it, you are unlikely to get it.
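As a rough illustration of these Step Four checks (the function and scale names here are ours, not part of the published SRS), the follow-up rule might be sketched as:

```python
# Illustrative sketch of the SRS quick check described in the text:
# follow up on any scale marked below 9 cm, and treat a total under
# 36 as a possible signal of concern.

def srs_flags(marks_cm, scale_names=("relationship", "goals/topics",
                                     "approach", "overall")):
    """Return the scales (and the total, if low) worth asking the client about."""
    flags = [name for name, mark in zip(scale_names, marks_cm) if mark < 9.0]
    if sum(marks_cm) < 36.0:
        flags.append("total")
    return flags

# A goals scale of 8.7 while the others sit at 9 or above (as with
# Sarah, below) is the only hint of a problem.
print(srs_flags([9.5, 8.7, 9.2, 9.4]))  # ['goals/topics']
```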
 
And know for sure that there is no 'bad news' on these forms. Your appreciation of any negative feedback is a powerful alliance builder. In fact, alliances that start off negatively but result in your flexibility to client input tend to be very predictive of a positive outcome. When you are bad, you are even better! In general, a score:
  • that is poor and remains poor predicts a negative outcome,
  • that is good and remains good predicts a positive outcome,
  • that is poor or fair and improves predicts a positive outcome even more,
  • that is good and decreases is predictive of a negative outcome.
The SRS allows the opportunity to fix any alliance problems that are developing and shows that you do more than give lip service to honoring the client's perspectives.
 
"Let me just take a look at this SRS—it's like a thermometer that takes the temperature of our meeting here today. Great, looks like we are on the same page, that we are talking about what you think is important and you believe today's meeting was right for you. Please let me know if I get off track, because letting me know would be the biggest favor you could do for me."
 
"Let me quickly look at this other form here that lets me know how you think we are doing. Okay, seems like I am missing the boat here. Thanks very much for your honesty and giving me a chance to address what I can do differently. Was there something else I should have asked you about or should have done to make this meeting work better for you? What was missing here?"
 
Graceful acceptance of any problems and responding with flexibility usually turns things around. Again, clients reporting alliance problems that are addressed are far more likely to achieve a successful outcome—up to seven times more likely! Negative scores on the SRS, therefore, are good news and should be celebrated. Practitioners who elicit negative feedback tend to be those with the best effectiveness rates. Think about it—it makes sense that if clients are comfortable enough with you to express that something isn't right, then you are doing something very right in creating the conditions for therapeutic change.
 

Step Five: Checking for change in subsequent sessions

With the feedback culture set, the business of practice-based evidence can begin, with the client's view of progress and fit really influencing what happens. Each subsequent meeting compares the current ORS with the previous one and looks for any changes. The ORS can be made available in the waiting room or via electronic software (ASIST) and web systems (MyOutcomes.com). Many clients will complete the ORS (some will even plot their scores on provided graphs) and greet the therapist, already discussing the implications. Using a scale that is simple to score and interpret increases client engagement in the evaluation of the services. Anything that increases participation is likely to have a beneficial impact on outcome.
 
The therapist discusses if there is an improvement (an increase in score), a slide (a decrease in score), or no change at all. The scores are used to engage the client in a discussion about progress, and more importantly, what should be done differently if there isn't any.
 
"Your marks on the personal well-being and overall lines really moved—about 4 cm to the right each! Your total increased by 8 points to 29 points. That's quite a jump! What happened? How did you pull that off? Where do you think we should go from here?"
 
If no change has occurred, the scores invite an even more important conversation.
 
"Okay, so things haven't changed since the last time we talked. How do you make sense of that? Should we be doing something different here, or should we continue on course steady as we go? If we are going to stay on the same track, how long should we go before getting worried? When will we know when to say 'when?' "
 
The idea is to involve the client in monitoring progress and the decision about what to do next. The discussion prompted by the ORS is repeated in all meetings, but later ones gain increasing significance and warrant additional action. We call these later interactions either checkpoint conversations or last-chance discussions. In a typical outpatient setting, checkpoint conversations are usually conducted at the third meeting and last-chance discussions are initiated in the sixth session. Put simply, based on over 300,000 administrations of the measures, most clients who do benefit from services show some benefit on the ORS by the third encounter; if change is not noted by meeting three, the client is at risk for a negative outcome. Ditto for session six, except that everything just mentioned has an exclamation mark.

Different settings could have different checkpoint and last-chance numbers. Determining these highlighted points of conversation requires only that you collect the data. The calculations are simple and directions can be found in our book, The Heroic Client. Establishing these two points helps evaluate whether a client needs a referral or other change based on a typical successful client in your specific setting. The same thing can be accomplished more precisely by available software or web-based systems that calculate the expected trajectory or pattern of change based on our database of ORS administrations. These programs compare a graph of the client's session-by-session ORS results to the expected amount of change for clients in the database with the same intake score, serving as a catalyst for conversation about the next step in therapy.
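The checkpoint logic described above can be sketched as a simple comparison of session-by-session ORS totals. This is an illustrative sketch only, not the published trajectory software; in particular, the five-point 'reliable change' margin is our assumption for the example, not a figure from the text, and real settings should derive their own numbers from their own data.

```python
# Hedged sketch of the checkpoint / last-chance logic described in
# the text. The 5-point change margin is an illustrative assumption.

def conversation_due(ors_totals, checkpoint=3, last_chance=6, margin=5):
    """Return which feedback conversation, if any, the scores call for."""
    session = len(ors_totals)
    changed = (ors_totals[-1] - ors_totals[0]) >= margin
    if session >= last_chance and not changed:
        return "last-chance discussion"
    if session >= checkpoint and not changed:
        return "checkpoint conversation"
    return None

# A client hovering at 18 for three sessions: time for the checkpoint talk.
print(conversation_due([18, 18, 18]))  # checkpoint conversation
```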
 
If change has not occurred by the checkpoint conversation, the therapist responds by going through the SRS item by item. Alliance problems are a significant contributor to a lack of progress. Sometimes it is useful to say something like, "It doesn't seem like we are getting anywhere. Let me go over the items on this SRS to make sure you are getting exactly what you are looking for from me and our time together." Going through the SRS and eliciting client responses in detail can help the practitioner and client get a better sense of what may not be working. Sarah, the woman who aspired to be a Miami Heat cheerleader, exemplifies this process.
 
Next, a lack of progress at this stage may indicate that the therapist needs to try something different. This can take as many forms as there are clients: inviting others from the client's support system, using a team or another professional, a different approach; referring to another therapist, religious advisor, or self-help group—whatever seems to be of value to the client. Any ideas that surface are then implemented, and progress is monitored via the ORS. Matt and the idea of encouraging his anger illustrate this kind of discussion.
 

The Importance of Referrals

If the therapist and client have implemented different possibilities and the client is still without benefit, it is time for the last-chance discussion. As the name implies, there is some urgency for something different because most clients who benefit have already achieved change by this point, and the client is at significant risk for a negative conclusion. A metaphor we like is that of the therapist and client driving into a vast desert and running on empty, when a sign appears on the road that says 'last chance for gas.' The metaphor depicts the necessity of stopping and discussing the implications of continuing without the client reaching a desired change.
 
This is the time for a frank discussion about referral and other available resources. If the therapist has created a feedback culture from the beginning, then this conversation will not be a surprise to the client. There is rarely justification for continuing work with clients who have not achieved change in a period typical for the majority of clients seen by a particular practitioner or setting.
 
Why? Because research shows no correlation between a therapy with a poor outcome and the likelihood of success in the next encounter. Although we've found that talking about a lack of progress turns most cases around, we are not always able to find a helpful alternative.
 
“Where in the past we might have felt like failures when we weren't being effective with a client, we now view such times as opportunities to stop being an impediment to the client and their change process.” Now our work is successful when the client achieves change and when, in the absence of change, we get out of their way. We reiterate our commitment to help them achieve the outcome they desire, whether by us or by someone else. When we discuss the lack of progress with clients, we stress that failure says nothing about them personally or their potential for change. Some clients terminate and others ask for a referral to another therapist or treatment setting. If the client chooses, we will meet with her or him in a supportive fashion until other arrangements are made. Rarely do we continue with clients whose ORS scores show little or no improvement by the sixth or seventh visit.
 
Ending with clients who are not making progress does not mean that all therapy should be brief. On the contrary, our research and the “findings of virtually every study of change in therapy over the last 40 years provide substantial evidence that more therapy is better than less therapy for those clients who make progress early in treatment” and are interested in continuing. When little or no improvement is forthcoming, however, this same data indicates that therapy should, indeed, be as brief as possible. Over time, we have learned that explaining our way of working and our beliefs about therapy outcomes to clients avoids problems if therapy is unsuccessful and needs to be terminated.
 
Barry Duncan writes: But it can be hard to believe that stopping a great relationship is the right thing to do.
 
Alina sought services because she was devastated and felt like everything important to her had been savagely ripped apart—because it had. She worked her whole life for but one goal, to earn a scholarship to a prestigious Ivy League university. She was captain of the volleyball team, commanded the first position on the debating team, and was valedictorian of her class. Alina was the pride of her Guatemalan community—proof positive of the possibilities her parents always envisioned in the land of opportunity. Alina was awarded a full ride in minority studies at Yale University. But this Hollywood caliber story hit a glitch. Attending her first semester away from home and the insulated environment in which she excelled, Alina began hearing voices.
 
She told a therapist at the university counseling center and before she knew it she was whisked away to a psychiatric unit and given antipsychotic medications. Despondent about the implications of this turn of events, Alina threw herself down a stairwell, prompting her parents to bring her home. Alina returned home in utter confusion, still hearing voices, and with a belief that she was an unequivocal failure to herself, her family, and everyone else in her tightly knit community whose aspirations rode on her shoulders.
 
Serendipity landed Alina in my office. I was the twentieth therapist the family called and the first who agreed to see Alina without medication. Alina's parents were committed to honoring her preference to not take medication. We were made for each other and hit it off famously. I loved this kid. I admired her intelligence and spunk in standing up to psychiatric discourse and the broken record of medication. I couldn't wait to be useful to Alina and get her back on track. When I administered the ORS, Alina scored a 4, the lowest score I'd ever had.
 
We discussed her total demoralization and how her episodes of hearing voices and confusion led to the events that took everything she had always dreamed of from her—the life she had worked so hard to prepare for. I did what I usually did that is helpful—I listened, I commiserated, I validated, and I worked hard to recruit Alina's resilience to begin anew. But nothing happened.
 
By session three, Alina remained unchanged in the face of my best efforts. Therapy was going nowhere and I knew it because the ORS makes it hard to ignore—that score of 4 was a rude reminder of just how badly things were going.
 
At the checkpoint session, I went over the SRS with her, and unlike many clients, Alina was specific about what was missing and revealed that she wanted me to be more active, so I was. She wanted ideas about what to do about the voices, so I provided them—thought stopping, guided imagery, content analysis. But, no change ensued and she was increasingly at risk for a negative outcome. Alina told me she had read about hypnosis on the internet and thought that might help. Since I had been around in the '80s and couldn't escape that time without hypnosis training, I approached Alina from a couple of different hypnotic angles—offering both embedded suggestions as well as stories intended to build her immunity to the voices. She responded with deep trances and gave high ratings on the SRS. But the ORS remained a paltry 4.
 
At the last-chance conversation, I brought up the topic of referral but we settled instead on a consult from a team (led by Jacqueline Sparks). Alina, again, responded well, and seemed more engaged than I had noticed with me—she rated the session the highest possible on the SRS. The team addressed topics I hadn't, including differentiation from her family, as well as gender and ethnic issues. Alina and I pursued the ideas from the team for a couple more sessions. But her ORS score was still a 4.
 
Now what? We were in session nine, well beyond the point at which clients typically change in my practice. After collecting data for several years, I know that 75 percent of clients who benefit from their work with me show it by the third session; a full 98 percent of my clients who benefit do it by the sixth session. So was it right for me to continue with Alina? Was it even ethical?
 
Despite our mutual admiration society, it wasn't right to continue. A good relationship in the absence of benefit is a good definition of dependence. So I shared my concern that her dream would be in jeopardy if she continued seeing me. I emphasized that the lack of change had nothing to do with either of us, that we had both tried our best, and for whatever reason, it just wasn't the right mix for change. We discussed the possibility that Alina see someone else. If you watch the video, you will be struck, as many are, by the decided lack of fun Alina and I have during this discussion.
 
Finally, after what seemed like an eternity, including Alina's assertion that she wanted to keep seeing me, we started to talk about who she might see. She mentioned she liked someone from the team, and began seeing our colleague Jacqueline Sparks.
 
By session four, Alina had an ORS score of 19 and enrolled to take a class at a local university. Moreover, she continued those changes and re-enrolled at Yale the following year with her scholarship intact! When I wrote a required recommendation letter for the Dean, I administered the ORS to Alina and she scored a 29. By my getting out of her way and allowing her and myself to 'fail successfully,' Alina was given another opportunity to get her life back on track—and she did. Alina and Jacqueline, for reasons that escape us even after poring over the video, just had the right chemistry for change.
 
This was a watershed client for me. Although I believed in practice-based evidence, especially how it puts clients center stage and pushes me to do something different when clients don't benefit, I always struggled with those clients who did not benefit, but who wanted to continue with me nevertheless. This was more difficult when I really liked the client and had become personally invested in them benefiting. Alina awakened me to the pitfalls of such situations and showed a true value-added dimension to monitoring outcome—namely the ability to fail successfully with our clients. Alina was the kind of client I would have seen forever. I cared deeply about her and believed that surely I could figure out something eventually.
 
But such is the thinking that makes 'chronic' clients—an inattention to the iatrogenic effects of the continuation of therapy in the absence of benefit. Therapists, no matter how competent or trained or experienced, cannot be effective with everyone, and other relational fits may work out better for the client. Although some clients want to continue in the absence of change, far more do not want to continue when given a graceful way to exit. The ORS allows us to ask ourselves the hard questions when clients are not, by their own ratings, seeing benefit from services. The benefits of increased effectiveness of my work, and feeling better about the clients that I am not helping, have allowed me to leave any squeamishness about forms far behind.
 
Practice-based evidence will not help you with the clients you are already effective with; rather, it will help you with those who are not benefiting by enabling an open discussion of other options and, in the absence of change, the ability to honorably end and move the client on to a more productive relationship. The basic principle behind this way of working is that our day-to-day clinical actions are guided by reliable, valid feedback about the factors that account for how people change in therapy. These factors are the client's engagement and view of the therapeutic relationship, and—the gold standard—the client's report of whether change occurs. Monitoring the outcome and the fit of our services helps us know that when we are good, we are very good, and when we are bad, we can be even better.