Psychotherapy outcomes: The best therapy or the best therapist?

I’m often asked, “What’s the best therapy for anxiety/depression/trauma/etc?”  CBT, EMDR, ISTDP, ACT, DBT – the alphabet soup of therapies – how do we (and our clients) choose?  Research shows that psychotherapy outcomes often vary more between therapists than therapies, suggesting that picking the right therapy may actually be the wrong approach. In other words, choosing the most effective psychotherapist is more important than choosing the most effective therapy.   

How can our clients pick the most effective therapist? They can’t. There is no industry standard for tracking and reporting psychotherapy outcomes. This won’t last. Regulators and consumers are going to demand public accounting of treatment effectiveness. If I have the right to ask my surgeon for their success rate, then why can’t my clients ask for mine?

In a recent panel, the eminent psychotherapy researcher David Barlow noted the “inexorable trend” toward outcomes measurement. He believes it will bring “enormous benefit for all of us,” by improving the connection between clinical research and the effectiveness of actual clinical practice.

Many therapists, however, dread the movement towards measuring outcomes. They raise important concerns about the ability of outcome measures to assess subtle nuances of psychotherapy in long-term treatment. Other concerns include paperwork hassles, and the danger of “therapist profiling” by outcome. (You can join a lively discussion of these concerns in the forums here.)

However, the benefits of embracing outcomes far outweigh the concerns. I’d like to suggest four major benefits to tracking psychotherapy outcome:

  1. Measuring outcomes will help us become better therapists. How else can we know if all the workshops, trainings and supervision we do are actually helping?
  2. If we get out in front of this movement, then we will have a stronger hand in designing it. If we resist the push towards accountability, it will be forced upon us. (For example, the Los Angeles Times recently published a report on the outcomes of public school teachers in Los Angeles County, by teacher name.)
  3. Online therapist-review websites (such as yelp.com or healthgrades.com) let one or two disgruntled clients hurt your reputation. A public system for reporting outcomes would give a fairer perspective on your work.
  4. Most importantly, our clients deserve to know about the treatment they are getting. Research consistently shows that most therapy is very successful. Dodging accountability can foster the impression that our failures are more common than our successes.
One good example of a therapist who has embraced outcome measurement is Allan Abbass. He tracked and reported his therapy outcomes for his first six years in private practice, and then published the results.

How can a therapist start tracking their outcomes?  I use the Outcome Rating Scale, which takes about one minute at the beginning of each therapy session. The free scale and instructions can be downloaded here  and here. There are also three online services that help therapists track their outcomes: myoutcomes, oqmeasures, and core-net.

[This blog is dedicated to exploring training tools and techniques that help us become better therapists. Please email me at trousmaniere@yahoo.com if you have any feedback or new psychotherapy training techniques you would like to share.] 

Preventing Psychotherapy Dropouts with Client Feedback

“You understand me thirty percent of the time.”

“I need you to slow down.”

“I was sad and you cut me off.”

These words of dissatisfaction are from my clients. They weren’t easy to hear, but they have changed how I practice psychotherapy and have significantly reduced my dropout rate.

Anne: A Case Study

I had been treating Anne, a Latin-American woman in her early 20s, in psychotherapy for six months. She presented with weekly panic attacks, daily cutting, severe sleep disturbances, a range of somatic symptoms that she attributed to her anxiety, and persistent interpersonal difficulties. She presented as attentive and likeable, though beneath her mask of smiling and compliance she clearly hid a tremendous amount of pain. Anne had a history of sexual abuse by multiple family members over a six-year period starting before age four. Her mother had been a prostitute for most of Anne’s life, and both her biological father and stepfather were in prison for sexual assault. Despite these and many other challenges, Anne demonstrated tremendous resiliency and had just graduated from college with a very strong GPA.

Anne had been in individual and group therapy for much of her childhood and teens, but by her own report she had never really tried to make it work. After graduating from college, Anne decided she wanted to find a solution to her anxiety, sought out individual therapy, and found me.

Anne’s treatment progressed well at first. In the first few months her panic attacks stopped, her general anxiety decreased, she stopped cutting, her somatic symptoms decreased, and her sleep gradually improved. Anne’s interpersonal difficulties, however, persisted. We had been digging into that material for a few months but had made little progress. In fact, her social and romantic life was getting worse. Anne was becoming restless and frustrated. I pulled out my two favorite “getting therapy unstuck” tools: consultation groups and additional training. Neither helped. As a dynamic therapist, I knew what I was supposed to do: work in the transference, bring insight to the dynamics in the room, monitor my counter-transference, and above all hold the frame. But the frame of a therapy case cannot be stronger than the frame of a therapy practice, and mine was starting to splinter.

Existential Threat

In the same month that my treatment of Anne was getting stuck, I had two new clients drop out after one session in the same week. I knew the research we are all told about in graduate school: that the modal number of psychotherapy sessions nationwide is one, that not every client and therapist is a good match, and yada yada. But for a new therapist trying to build a practice during a recession, having two new clients drop out in one week is an existential threat. I decided something had to change.

On my commute home one evening that week, I listened to a recording of Scott Miller’s presentation at the 2009 Evolution of Psychotherapy Conference regarding his pioneering work on feedback-informed psychotherapy. Scott got my attention when he referred to dropouts as the “largest threat to outcome facing behavioral health” in the United States and Canada. He was talking about my practice! I realized that I was not the only therapist with a dropout problem, and there was no reason to hide it out of embarrassment. I resolved to seek counsel from my colleagues and mentors.

The Ubiquitous Scourge

In the first, difficult year of building my private practice, I ate a lot of lunch. Networking lunches are like lottery tickets: one in ten results in a few referrals, and every referral was worth its weight in gold in that difficult first year. I enjoy networking lunches, because it’s fun to meet senior clinicians and hear their war stories. They tell me that they enjoy the lunches because they get to pass on the gift of mentoring that was once given to them. Senior clinicians are a generally calm, relaxed and self-assured bunch; they have established referral sources and can easily afford to lose a client here and there. Want to make some highly regarded pillars of the therapeutic community stop eating their free lunch and sweat a bit? Ask about their dropout rate. It’s as if you’re asking what sexually transmitted diseases they may have. It’s not polite. Never mind that dropouts are one of the ubiquitous scourges of our profession, affecting all diagnoses and treatment modalities. Therapy dropouts are the dirty secret of our profession: everyone has them yet few want to talk about them. Unfortunately, avoidance has not proven to be an effective solution to the problem. With few exceptions, the overall psychotherapy dropout rate is as bad now as it was fifty years ago, despite decades of treatment research and empirical certification.

What Counts as a Dropout?

For 2010, the overall dropout rate for my private practice was 37%. Unfortunately, it is hard to know whether this number is good, average or poor, because there is no general consensus in the literature on what exactly constitutes a “dropout.” The average psychotherapy dropout rate has been reported to be from 15% to 60%, or higher, depending upon whether you define dropout as quitting therapy before all treatment goals were achieved, terminating without the therapist’s agreement, or a variety of other definitions. For my own practice, I define dropout as any time a client terminates therapy without telling me that they are stopping because they have achieved enough positive results. I chose this definition because I think it points most directly to the problem I want to resolve: clients who could benefit from more therapy but choose to not be in treatment with me anymore. Of course, this definition is not precise and won’t work for all therapists. If a client terminates due to factors that make continued treatment impossible, such as moving out of town, then I do not count it as a dropout; but if the given reason is that he or she cannot afford therapy anymore, yet isn’t interested in discussing a sliding scale, then I do count it as a dropout.
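To make the bookkeeping concrete, here is a minimal sketch in Python of the classification rules above. The reason codes and function names are my own hypothetical illustration, not a standard coding scheme or anything the author actually uses.

```python
# Hypothetical sketch of the dropout definition described above.
# Reason codes ("goals_achieved", "treatment_impossible", "cost", etc.)
# are illustrative labels, not a standard taxonomy.

def is_dropout(reason: str, declined_sliding_scale: bool = False) -> bool:
    """Classify one termination under the rules described in the text."""
    if reason == "goals_achieved":
        return False  # client reported enough positive results: not a dropout
    if reason == "treatment_impossible":
        return False  # e.g., moved out of town: not a dropout
    if reason == "cost":
        # counts only if the client wasn't interested in discussing a sliding scale
        return declined_sliding_scale
    return True       # any other unexplained termination counts as a dropout

def dropout_rate(terminations: list) -> float:
    """Fraction of terminations classified as dropouts."""
    flags = [is_dropout(reason, declined) for reason, declined in terminations]
    return sum(flags) / len(flags)

cases = [
    ("goals_achieved", False),
    ("treatment_impossible", False),
    ("cost", True),       # declined sliding scale -> dropout
    ("no_show", False),   # unexplained termination -> dropout
]
print(dropout_rate(cases))  # 0.5
```

The point of the sketch is simply that a dropout rate is only as meaningful as the classification rules behind it; two therapists using different rules will report incomparable numbers.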

Of course, there are many reasons a client may drop out. Most of the research on dropouts has focused on what we call client factors, such as the client’s diagnosis, demographics, rate of progress in therapy, etc. But this research doesn’t help my dropout problem because I’m trying to keep my practice full, and I don’t have the luxury of excluding clients who are at high risk of dropout. So instead I have to focus on therapist factors: what can I change about how I work to reduce my dropout rate?

Insisting on Feedback

“Of course I ask for feedback from my clients. I do it every session!” Every therapist believes they ask for client feedback. True for you too? Then tell me why your last three dropouts happened. Sure, we ask for feedback, in the same way that my previous dentists asked—as an offhand, pro-forma fly-by at the end of the root canal. “Was that ok?” And the information we get is usually as meaningful as the effort we expend asking. “Yeah, that was great,” or “You’re a great therapist,” or “I’m really feeling better.” Vague and general; even worse, polite. Just enough for the client to think that they have satisfied the therapist and just enough for the therapist to keep the specter of dropout in the closet. It’s a mutual con-job—a wink and a nod to accountability. But if we don’t embrace accountability in the therapy room, then it will make itself known in dropouts.

Sure, some clients are tripping all over themselves to give you feedback. Sometimes you can’t stop the feedback. But those aren’t the clients I’m worried about losing to dropout. Maybe some therapists are able to get meaningful information through informal soliciting of feedback, but I’ve found the hard way that if I don’t make a Big Formal Procedure out of it, I end up with empty, vague generalities.

Another fruitless session had just ended with Anne, and I was pretty sure that she was about to drop out. I handed her a feedback form and asked her to complete it. She looked at the piece of paper, snorted, and said, “Are you kidding me?” As a beginning therapist, I have a lot of practice hiding my nervousness. I replied, “I need your feedback in order to learn how to help you better, but also to become a better therapist overall, so I appreciate your time and candor in filling this out.” Anne snorted again, rolled her eyes, and completed the Session Rating Scale, an ultra-brief tool that measures the working alliance along four dimensions. She handed the form back to me, and I saw that our working alliance was, as I would have guessed, a sinking ship. I asked what specifically I could do to help her better. Anne replied, “You could listen.”

I said, “More specifically, tell me how I don’t listen and how I can help you better.”

She gave me the look clients give you when they’re not sure if you really mean what you say or if you’re just doing a canned intervention. “You understand me thirty percent of the time,” she said, visibly angry. I asked for an example. “When I mentioned my cousin you cut me off,” Anne said. “That was important.”

I couldn’t remember Anne mentioning her cousin. “What else?” I said.

“You tuned out two or three times this session. I can always tell you’re tired when we meet this time of day.” I thought I had managed to hide my mid-afternoon fatigue.

“What else?”

“There are times when I am sad that you really don’t understand how I’m feeling—even though I can tell that you think you do.”

None of Anne’s feedback struck me as accurate. Above all, I pride myself on accurate empathy. What kind of therapist am I if I don’t feel a client’s sadness?

Four Rules for Receiving Feedback

We all have areas of known weakness. Take cultural diversity, for example. I am a straight, white, middle-aged male. Anne is a young bisexual Latina. I would expect her to tell me about culturally based misunderstandings. This would be ego-syntonic for me and not cause anxiety. But tuning out or missing sadness—that’s not me!

The feedback I get from clients that is confusing or seems inaccurate is the most important feedback I get. Why is it that we trust our supervisors to point out our blind spots, but not the people who are actually in the room with us? It’s odd how we spend so much effort and money getting feedback from peers and experts, yet so little effort on getting formal feedback from our customers.

I’ve come to see that there were two major problems with how I had been using feedback. First, my collection of feedback was pro-forma. I wasn’t invested in getting it, and my clients could tell. Second, I interpreted the feedback. I conceptualized it as part of the therapeutic process, which meant that it was ultimately about the client, not about me. Of course, getting and using feedback affects and informs the therapeutic process. I needed to learn, however, to set aside the process for a moment to accurately hear the feedback as it pertained to me.

Since then I have developed four rules of feedback. First, I make a Big Deal out of it. I use a paper form (the Session Rating Scale) because the act of pulling out the paper and pen serves as a symbolic shift in focus away from the client’s process towards my performance. If a client always gives me high marks on the form, or responds with platitudes like, “Tony, everything is great,” I’ll say, “Well, there’s always something I can improve. Can you give me one or two specific ideas on what I could be doing better?” In therapy, it’s all about the client. In feedback, it’s all about me—I’m downright selfish!

The second rule of feedback is that I don’t interpret. If I make the feedback about the therapeutic process then I am missing the actual feedback. As a dynamic therapist, all my training was telling me to interpret Anne’s response as transference or a projection: she was reliving her past pathological attachments in our relationship. But I’m convinced this approach would have caused Anne to drop out, because she would have seen (correctly) that I was ignoring her.

Scott Miller calls this kind of attribution “burden shifting”—when we misattribute our mistakes to client factors. He warns therapists that blaming dropouts on client demographics or diagnostic categories can block our insight into our own mistakes.

The American Psychological Association is moving towards requiring trainees to learn how to collect clinical outcome data. Likewise, Michael Lambert1 and others have developed tools to predict and reduce dropout by tracking clients’ session-by-session clinical progress throughout treatment. This data is valuable, but still focuses on client factors, and thus can miss important information that only the client has on what the therapist is doing wrong. I need to know my part in the story so I can stay ahead of potential dropouts. Without session-by-session feedback, when a client drops out, it is already too late to find out why.

As therapists we claim clinical legitimacy by using empirically certified treatments. We advertise our professional trainings and certifications proudly. But just as important are our personal treatment data, including our dropout rate, which we generally hide in the closet. Krause, Lutz and Saunders2 have argued that instead of having empirically certified therapies, we should have empirically certified psychotherapists. As public health providers, we have an ethical responsibility to assess outcomes. If we continue to hide our mess, then we run the risk of others exposing it for us. (For example, teachers’ unions across the country are getting clobbered for their resistance to incorporating meaningful outcome evaluations into their work.)

Incorporating Feedback

How do I actually use feedback? Sometimes it is easy. For example, in response to Anne’s feedback, I moved her appointment to a time of day when I wouldn’t be tired. (Now I use her previous time for a midday nap, so other afternoon clients are benefiting from Anne’s feedback as well.) Other feedback can be harder to use, especially when it is about my own unconscious behaviors. Anne insisted that I cut her off when she had brought up her cousin, but I couldn’t remember doing so. Likewise, I had no awareness of avoiding her sadness. While I did want to take her comments seriously, I also didn’t want to automatically assume her perceptions were correct.

However, feedback that points to my unconscious behaviors is also the most valuable. This is the third rule of feedback, and the hardest to follow: focus most on the feedback that seems inaccurate, confusing, or anxiety-provoking. This is where the treasure is buried.

When I’m unsure about the accuracy of the feedback I am getting, I use a strategy I call perspective triangulation. First, I videotape my sessions with that client and review the video myself. I then review it with colleagues in consultation groups. Comparing the perspectives of the client, myself and my colleagues usually results in a definitive answer.

In my experience, the client’s perceptions are correct at least two-thirds of the time, and I make consequent course corrections in their treatment. It is important to note, however, that even when I think the client’s perceptions are incorrect, I still have to substantively address their feedback, or else there is a growing risk of dropout.

My review of the video showed that, yes, I had cut Anne off. Colleagues in a consultation group watched the video and pointed out multiple instances where Anne was about to have a rise of sadness, but I had blocked it by refocusing on her anger. (Later sessions revealed that the two were in fact connected: her sadness was about being unable to protect her cousin from abuse.) This was the hardest feedback for me to receive; I never would have believed it had it not been clear as day on the video. Reviewing videos revealed that I had an unconscious pattern of redirecting away from sadness with a range of other clients in addition to Anne. I never would have found out had I not insisted on feedback.

The fourth rule of feedback brings it back to the client. If I agree with their comments, then I make appropriate course corrections in our work. If I disagree, then we discuss our different points of view. Either way, I make sure to be clear and transparent in my process, and to let clients know that I take their feedback seriously. So in this case Anne and I had a discussion about her feedback. I agreed to be more attentive to not cutting off her sadness. She agreed to let me know, in the moment, if she saw me doing it.

I was trained to get a review of my clinical weaknesses from my trainers and supervisors. Now I also get it from my clients. They have given me an amazing gift: an empirically validated list of my clinical weaknesses. I can’t think of a better resource to prevent dropouts.

Now, six months later, Anne has made significant progress on her interpersonal challenges. She has improved her relationships with friends, roommates and employers. She started setting firm boundaries with previously abusive family members. Her sleep, anxiety and somatic symptoms all continue to improve. Every session Anne teaches me how to better help her.

Before using feedback, I had one to three dropouts per month. Since getting serious about feedback, I’ve had only one dropout in over three months. While this is too soon to draw definitive conclusions, the results so far are very encouraging.

The client sitting across from me knows something about my dropout problem that I don’t. All I have to do is ask, and listen.

2011 Update

I am pleased to report that my dropout rate for 2011 was 18%, roughly half what it was in 2010. I'm confident that getting serious about client feedback contributed to this improvement. This raises the question: how low can a dropout rate realistically go? Besides improving as a therapist, what else can help lower the rate further? (One of my clients recently suggested offering coffee in the waiting room for night sessions!) Hopefully we will find answers to these questions from future research.

Footnotes

1. Lambert, M. J., Harmon, C., Slade, K., Whipple, J. L., & Hawkins, E. J. (2005). Providing feedback to psychotherapists on their patients' progress: Clinical results and practice suggestions. Journal of Clinical Psychology, 61, 165–174.

2. Krause, M. S., Lutz, W., & Saunders, S. M. (2007). Empirically certified treatments or therapists: The issue of separability. Psychotherapy: Theory, Research, Practice, Training, 44, 347–353.

Further Reading

“When I’m Good I’m Very Good, but When I’m Bad I’m Better”: A New Mantra for Psychotherapists, by Barry Duncan, PhD, and Scott Miller, PhD.

What if It’s All Been a Big Fat Psychotherapeutic Lie?

In the early ’90s I developed a classroom exercise to teach my students an important academic lesson. This is one of those experiential exercises where the professor feels holier-than-thou because he or she knows the outcome in advance. First, I placed the students in pairs and asked one student to play the part of the helper while the other played the part of the client, who tells a real or fictitious problem.

Next I pulled the helpers into the hallway. During the first trial the helpers were merely instructed to give the clients advice and suggestions, ask lots of questions, be extremely directive, and provide psychological interpretations. There was absolutely no empathy, warmth, or relationship building . . . I repeat, no relationship building. This session was a strict Rogerian’s worst nightmare.

I then gave the helpers and the helpees a five- to ten-minute session together, then pulled the folks playing the helpers out into the hall once more and explained that during trial number two they were forbidden to give any advice, interpretations, or suggestions. They were also told not to ask the person playing the client any questions. Instead, they were merely instructed to be totally nondirective, paraphrase, reflect, and make statements that conveyed a high degree of empathy. Using the same partner with the same problem, the students were given another five minutes together.

Next, using a scale of 0 to 100 (in which 0 is terrible, 50 is average, and 100 is perfection), the students playing the part of the client rated their helpers. Needless to say, I knew that the clients would rate their helpers higher during trial two; except for one thing: it didn’t happen! The ratings for the first session, devoid of empathy, were significantly higher. In fact, it was a blow-away landslide in favor of the directive approach. Say what?

I mentally scratched my head and made a joke out of the whole experience, convinced the results in this class were merely an anomaly. "Listen," I told the class, "I knew you guys were strange, but I didn't know how strange." I then explained that exercises in class often do not parallel what transpires in the real world of therapy. Secretly, I also told myself that these were undergraduate students who most likely hadn’t done the interventions correctly.

There is only one problem: I have now been doing this experiential exercise (switching the order of the trials) for approximately 17 years, and I can't remember a single trial in which the relationship-building, non-directive approach won when I looked at the results for the entire class! And while no self-respecting researcher would be impressed by my experimental rigor, they would be impressed by my N; over 1,000 individuals have now participated in my therapeutic scenario. Since that first trial I've added grad students, probation and parole officers, guidance counselors, therapists in training seminars, and therapeutic supervisors to the ranks of participants.

How can this be? Many, if not most, research studies insist empathy is the most important trait for a counselor. I nearly always use what I consider a Rogerian, person-centered, non-directive, heavy on the empathy approach during my initial sessions with a client even if I plan to switch to more directive interventions during subsequent sessions. Heck, it has to be true, it says so in most counseling books, including some I have penned! So what is the explanation for these seemingly contradictory results?

1. Well, there's the rationale (or should I say rationalization?) I've been giving to my classes and in seminars for years now: simply that students and workshop participants are not like real clients, and this exercise would turn out differently if we used real clients. In other words, the folks in my classes or seminars are training to work in the field, or they are working in the field, and therefore believe in suggestions and advice . . . no empathy necessary! The problem with this explanation is that often students are real clients; otherwise we wouldn't have college and university counseling centers. In the case of therapists, many do seek treatment from other helpers. Indeed, if my armchair experiments are on target, then relationship-building, non-directive, empathy-laden initial sessions should not be used with those in the field or folks planning to go into the field.

2. Students, grad students, or helpers in the field don't really know how to perform person-centered, Rogerian slanted interventions. Maybe it's just too complicated. Although this is theoretically possible, the eminent psychologist Ray Corsini once told me that Rogers confided in him that he could teach anybody to do client-centered therapy in two weeks.

3. The paraphrasing, reflecting, and rating responses on an empathy scale paradigm we use to teach this approach actually bears little or no resemblance to what Carl R. Rogers was actually doing with his clients. Hmm, that's certainly conceivable. Or . . .

4. What if it has all been a big fat psychotherapeutic lie?

As for me, well, at this point I guess I must admit that despite a wealth of experience and knowledge, I remain a psychotherapeutic agnostic. You decide.

Methinks Jay Haley Hit the Bull’s Eye

My client began her session with an interesting saga. In an attempt to improve her health she began each day by ingesting a nutritional drink that was loaded with nearly 100 superfoods. Since I personally take enough vitamin and mineral supplements a day to capsize a small battleship, I was all ears. Unfortunately, my client lamented that the supplement seemed counter-productive. That is to say, instead of having unlimited energy, she was nearly falling asleep at the wheel on the way to work. The client was quite savvy when it came to nutrition and therefore hypothesized that the product was excellent, but it needed more protein.  In other words, the high carbohydrate formula was the problem.

Truth is always stranger than fiction, and the very next week — as if the supplement company had a bug or a webcam in my office — they released the identical drink in a high-protein, low-carb version. Problem solved? Well, to use the oft-quoted phraseology of our times: not so much. The client reported that she was dragging through the morning just as badly as ever. Her dilemma was solved quite by accident when one day she discovered she was out of her superfood protein drink, and thus she began the day with a banana, a slice of white devitalized bread, and a low-tech multiple vitamin. (Sheer blasphemy, incidentally, for nutritional zealots like myself or my poor client.) The verdict: she had boundless energy and felt terrific. After that day she continued with the banana-and-bread regimen with excellent results.

Along these same lines, another client was telling me about how he became very serious about his golf game. The golf pro felt his swing was sound, but he almost fell over laughing when he saw my client's antiquated clubs. The pro promised to set him up with some serious equipment. The irony, however, was that his golf game suffered markedly when he began using the new high-tech, high-price-tag, custom-fit clubs. My client became somewhat obsessive, and in the years that followed he secured club recommendations from golf pro after golf pro and purchased set after set, to no avail. Finally, one day, just as a joke, he pulled out his early-1970s aluminum-shafted clubs and shot the best round he had in years. He decided to stick with the zero-tech clubs of yesteryear, and his game continued to improve.

Like most therapists, I have literally heard hundreds of stories like this including:
• Men who gave their wives flowers or compliments based on the recommendation of some self-improvement expert, an Oprah approved bibliotherapeutic work, or a well-credentialed psychotherapist, and the relationship deteriorated.
• Parents who followed the behavior modification instructions to reinforce their child's behavior and saw the behavior stay the same or perhaps get worse.
• Clients who were told to wear orthotics in their shoes to take their comfort to a whole new level and now had pain in their feet or legs that never existed prior to wearing the devices and
• People who jogged extremely long distances every day to "do something good for themselves and to ward off old age" and now look considerably older than their peers (yes, there is even some scientific research that seems to be backing up this one) . . .  to name a few.

So what in the world is going on here? At least for me, the riddle was solved in an instant when I attended a lecture of Jay Haley's several years before he passed away. An audience participant asked Haley to spell out what caused most people's discord and Haley remarked, "The solution to the problem is the problem." I'll leave it up to historians of psychotherapy to discern whether Haley really came up with this on his own or whether he lifted the idea from the great Milton H. Erickson or perhaps Gregory Bateson.

In any event, the key point is that often the very strategies that the client is using to make his or her life better are at the root of the problem. But I ask you: how often as therapists do we investigate this dynamic? In all probability, not nearly enough. We like it and get excited when clients seemingly do good things. Nevertheless, the message to take back to the therapy room is that something that appears positive is not always positive. The protein shake, the orthotics, and giving a spouse flowers could be the culprit. Most of us would never suggest that the client give up the protein shake, or stop complimenting a spouse. Instead, many therapists will gloss right over these behaviors and look elsewhere for the root of the problem. In essence, the solution to the problem — even when it appears to be a good one — can be the problem. Jay Haley hit the bull's eye. Now it's your turn.

How Therapists Fail: Why Too Many Clients Drop Out of Therapy Prematurely

Depending on which study you read, between 20 and 57 percent of therapy clients do not return after their initial session. Another 37 to 45 percent attend therapy a total of only two times. Although many factors contribute to premature client termination, the number one reason cited by clients is dissatisfaction with the therapist. The problem of the “disappearing client” is what Arnold Lazarus has called “the slippery underbelly to the successful practice of psychotherapy that is almost never discussed in graduate programs or medical schools.”

As clinical supervisors of interns at a university community clinic, we are painfully aware of the high rate of client dropout, and thus the idea for our book How to Fail as a Therapist was born. What we found in doing the research for the book is that high dropout rates are not just common amongst interns, but are equally prevalent among experienced therapists regardless of training and clinical orientation.

When clients drop out early, everyone loses. We clinicians lose a chance to help someone in need and our wallets and reputation suffer as well. The consequences for clients are even more dire. Those clients who drop out early display poor treatment outcomes, over-utilize mental health services, and demoralize clinicians.

Now the good news (after all, therapists should be optimistic): there are a number of well-researched strategies which have been proven to reduce dropout rates and increase positive treatment outcomes. For example, in one study a simple phone call to confirm a client’s first appointment resulted in a two-thirds reduction in dropouts. Unfortunately, it is often labor intensive to seek out and review much of the relevant research because it is scattered throughout the literature–a journal article here, a chapter in a book there. And, unfortunately, most mental health clinicians, with and without a PhD, rate reading research as a very low clinical priority.

Thus, a major task in writing the book How to Fail as a Therapist was to assemble, organize and condense the vast body of research addressing therapeutic effectiveness. Of the 50 therapeutic errors described in the book, here we present five of the most common ones made by clinicians–both beginners and “master” therapists.

The “Infallibility Error”

One of the most distinguishing characteristics of therapists who have low dropout rates is that they actively seek feedback–both positive and negative–regarding the effectiveness of their clinical work. On the other hand are those therapists who believe that after years and years of study, comprehensive exams, postgraduate supervision, and licensing exams, they do or should have all of the answers to clinical matters. So when their clients voice concerns about their progress, or worse yet, when they drop out or deteriorate under the therapists’ care, there is a tendency to avoid accepting responsibility for committing a possible therapeutic error. It is easier to point the finger elsewhere: “maybe the problems were too severe”; “the patient was not ready or willing to change”; “there was too much transference operating.” The possibility for rationalization and denial is endless. These explanations, even when partially valid, may soothe the ego, but they prevent clinicians from engaging in an honest and comprehensive exploration of what might have gone wrong in a particular case.

A group of interns were asked to describe a case in which a client of theirs terminated early in therapy. One intern described the case of a 10-year-old male client, who had been referred by his teacher because he seemed disconsolate over his parents’ divorce. When, in the first session, the intern probed about the effect of the parents’ separation, the client became emotional and wanted to change the subject. The intern persisted, however. The client stood up, tears falling, and refused thereafter to return to therapy. The supervisor responded to the case presentation by emphasizing the need for therapists to be very cautious during early sessions, particularly when eliciting difficult material from clients. Before the supervisor could get very far, the intern interrupted by stating: “I am already discussing this case with my other supervisor, so I probably shouldn’t get input from both of you.”

Clearly, this intern was desperate to avoid facing the possibility that he did not handle the case as delicately as perhaps he should have. None of us really relishes the idea that we may have blundered, but if we deny this possibility, we deny ourselves the chance to grow as clinicians.

One way to avoid the infallibility error is to seek feedback from clients who have dropped out prematurely. Arnold Lazarus describes in his book, Multimodal Behavior Therapy, how he has gained great insights by writing to “early terminators” and suggesting that they come in for a “feedback session” for which he doesn’t charge. In one such case, a client reported that she felt the therapist had not been sympathetic when she was recounting the loss of a beloved pet. The therapist apologized for the insensitivity and the client decided to continue in therapy.

One crucial statistic to keep in mind is that the majority of clients who drop out do so after the first or second session. Thus, we must elicit client feedback, positive and negative, early on to head off any misunderstandings or negative feelings about the therapist, the therapeutic process, or therapy itself. Clients can be asked directly at the end of the first session if they feel therapy is on track and if they feel liked, understood and respected. Asking for direct feedback may feel a little awkward; however, a little awkwardness is better than losing a client before he or she can be helped.

The “Pathology Orientation” Error

In the field of psychotherapy, the term “The Bible” has become synonymous with the Diagnostic and Statistical Manual of Mental Disorders. This definitive compendium of emotional disorders was first published in 1952. Since that time, the Manual has gone through a number of revisions (four major and several minor ones) and has continued to add new diagnostic categories. In addition, it has really bulked up over the decades, growing from a mere 138 pages at the outset to over 800 pages in its most recent incarnation.

Currently every student entering the field of psychiatry, psychology, social work or counseling is required to virtually memorize the DSM-IV-TR, and thus professionals in our field have greatly increased their knowledge base of diagnostic criteria, demographics and prognoses of emotional disorders. Alas, these advances have a downside as well: they have created an overemphasis on pathology to the near exclusion of what is healthy, resilient, and capable in the clients that we treat.

At the same time that the fields of diagnosis and assessment were becoming more sophisticated, an alternative view of human potential was also advancing. Theorists such as Carl Rogers, Abraham Maslow and Viktor Frankl were among the forerunners of those who tended to take a broader view of the client, looking beyond pathology toward human capability. Milton Erickson’s work, which emphasized client resources, was in the vanguard of this new perspective.

Following Erickson’s lead, a number of other clinicians and researchers have explored the idea of utilizing client strengths as a resource in the treatment of emotional problems. Narrative Therapy avoids the exclusive focus on problems and pathology by instead exploring clients’ alternative stories–occasions in which healthy, productive behaviors were enacted instead of the usual counter-productive responses.

Ryan was described as “incorrigible” by his teachers. He spent as much time in the principal’s office as he did in the classroom. His main transgressions revolved around aggressive and bullying behavior. Ryan’s counselor applied a narrative approach by first asking Ryan about his “problem story”–the things that get him in trouble. They then gave a name to his problem story–“Mr. Trouble.” In addition to gathering the nasty details of his misbehavior, the counselor also inquired about occasions when a different Ryan, a kinder Ryan, surfaced. The question itself seemed to shock the 10-year-old. However, after reflection he confessed that on occasion he had shown care to his younger brother when he was ill, or was lonely and needed a playmate. The counselor then asked follow-up questions to explore the way “Kind Ryan” felt after demonstrating care to his brother.

“What did you think of yourself for being helpful to your brother?”
“How did your brother respond to your help?”
“What did your parents think of you?”
“What does it say about you that you show care to your brother?”

Unfortunately, despite the advent of “positive psychological” approaches to therapy, we have been programmed to look more at what clients are lacking and less at client strengths. Most intake forms have a space in which the client’s clinical diagnosis is supposed to be entered. To avoid the pathology orientation, we need to expand the initial interview to include a thorough assessment of clients’ skills, talents and resources. We need to know what challenges they have surmounted, what kinds of accomplishments they have attained, what special abilities they have developed. When therapists and clients shift their focus from the pathologized victim to the heroic victor, therapy becomes a much more creative and productive process.

Emphasizing Therapeutic Techniques Over Relationship Building

One of the best things about attending continuing education seminars is learning about the latest therapeutic interventions. And every year or so, such new “breakthroughs” arrive—EMDR, DBT, ACT—you name it. We rush home from the seminars, and can hardly wait for the first patient that we can try out our newfound knowledge on. Many of these innovations do have credibility, but there is one glitch in all of the focus on techniques. Decades of research have consistently demonstrated that the most powerful predictor of positive therapeutic outcome depends less on what type of therapeutic interventions you employ, and more on what kind of therapist-client bond you develop.

An intern related to her ever-patient supervisor that she had been learning about the use of “paradoxical intentions” in her advanced counseling class. She was hoping to try out this new dramatic technique with one of her clients, and did so with a patient during their very first session. The patient had returned to school after a recent divorce, and complained of being totally overwhelmed. She couldn’t get herself to do any homework and was no longer the organized housewife she used to be–failing to do even the simplest of chores like laundry or dishes. The intervention the intern tried was to “join the symptom” and prescribe the homework assignment to do “absolutely no work at all this week,” then report back at the next session about how this went.

Unfortunately, there was no next session–the client was never heard from again. The lesson here is one that is all too commonly missed: the therapeutic relationship trumps technique. To be more precise, no other single factor affects therapy outcomes more than the quality of the client-therapist relationship. Although exact percentages of therapeutic effect are difficult to ascertain, one study did attempt to do just that. After reviewing over a hundred outcome studies, Lambert and Barley1 derived an estimate of the relative contribution of the myriad factors which have been studied in outcome research. Strikingly, the specific techniques employed by therapists (cognitive, psychodynamic, etc.) accounted for only about 15 percent of therapeutic outcome, while the quality of the client-therapist relationship and related common factors accounted for roughly 30 percent–twice as much.

In the case discussed above, the paradoxical intervention might have proven effective in the long run, if the therapist and client had developed enough rapport and a trusting relationship before implementing the approach. The tendency to rush into the therapist’s tool kit and resolve the problem quickly is of course exacerbated by the current emphasis on brief or time-limited therapy. Suffice it to say, this bottom-line, time-is-money orientation is not always in the patient’s best interests. Relationship building begins with the first hello and handshake. In fact, in one study of medical doctors, the handshake was cited by patients on an exit questionnaire as the most positive factor in the office visit.

One of the best (and least utilized) methods to ensure that the therapist and client are on the same page is to employ a relationship assessment tool such as the Working Alliance Inventory developed by Horvath and Greenberg. This user-friendly tool predicts with a high degree of accuracy whether or not a client is at risk of dropping out of therapy. It also points to the areas of disconnect which can be addressed sympathetically with the client.

The Homework Assignment Trap

Providing clients with opportunities to apply what they have learned in therapy is one of the keys to therapeutic effectiveness. This makes good sense, given that clients spend only an hour or two per week in therapy and 166+ hours in the real world. So it would stand to reason that the majority of therapists would regularly utilize out-of-session activities as part of their therapeutic arsenal. However, the sad truth is that the majority of therapists report never using such assignments. Why would there be this disconnect between what the research shows and what most therapists do?

What the research doesn’t show is that creating homework assignments that clients actually comply with is a tricky business–and there are a multitude of therapeutic errors that can interfere with the process.

A case history will help illustrate:

Dr. Doom was working with Sabrina, whom he diagnosed as socially phobic. Sabrina had particular difficulty in her college classes, worrying excessively about bringing attention to herself. To avoid the possibility of embarrassment, she always arrived early to class, sat in the last row, and never raised her hand. After several weeks of therapy in which he gave her no assignments, Dr. Doom decided it was time for action and suggested that Sabrina arrive five minutes late to her next class meeting. At her next session, Sabrina at first told her therapist that she forgot to do the assignment but later admitted that she was able to comply with the first part of the assignment–being late–but could not muster the courage to actually enter the classroom, so she ended up cutting class.

Was Sabrina’s case just another example of client resistance, lack of commitment, or lack of readiness to change? In fact, a careful analysis of the approach the therapist used reveals several therapeutic errors that greatly decrease the likelihood of compliance.

Unilateral Assignments (“Here’s what you need to do…”)
For starters, Dr. Doom “decided” on his own, without input from his client, that it was time for action, and then he chose what that action should be. This one-sided approach helped guarantee noncompliance. Just as the entire therapeutic process should be collaborative, each assignment needs to be arrived at by a joint meeting of the minds. Thus, the term “assignment” is not really appropriate at all because it connotes one person doing the assigning and the other person complying. Far better are concepts such as “experiments,” “activities,” or “tasks.” Therapists certainly can take the lead in developing possible strategies, but clients must be encouraged to provide their input and feedback as the tasks are developed. Clients who feel they have participated in the process of generating the activity are more likely to attempt it, complete it, and maintain whatever they have learned from it. Leaving the client out of the decision-making process increases the likelihood that the task may be beyond the reach of the client’s capabilities. In this case, suggesting the client arrive late to class was an attempt to hit a home run on the first pitch instead of moving gradually toward the ultimate goal.

Failing to Prepare Clients for the Assignment
All too often, clinicians employ a “take two aspirin and stay out of drafts” approach to therapy. That is, they act as if mental health work is identical to the medical model in which clients ask the all-knowing physician for a diagnosis, prognosis, and treatment recommendations. In reality, most therapy clients need information about the efficacy of specific interventions. In the course of Dr. Doom’s assignment-giving, he neither sought Sabrina’s input nor gave her even a clue what this fear-inducing activity was supposed to accomplish. What might have seemed obvious to the therapist was probably not at all clear to the client. For those with phobias such as Sabrina’s, education about the efficacy of gradual exposure should have preceded any specific homework recommendations.

Failing to Provide Backup Support to Increase Compliance
As any therapist quickly learns, just because clients say they will perform an activity outside of session, this does not mean they will actually follow through with the commitment. Getting clients to comply with homework (even those assignments they have helped design) is about as difficult as getting students to complete school assignments on time. Understanding this, successful therapists utilize a wide array of approaches designed to overcome the numerous obstacles to completing out-of-session activities.

1. Use Post-it notes. At the conclusion of a session, suggest that the client write down the assignment and then post it at home in a convenient location. The therapist should also make a note of the assignment so it can be reviewed at the next session.

2. Encourage the client to tell a trusted individual about the task, asking the friend to check back and see how the assignment is going. This person should not be a guilt inducer or have any vested interest in the activity other than the welfare of the client. Typically spouses, children, and parents are not useful choices.

3. Determine whether the client has a buddy who is also willing to engage in the desired activity. This can be especially helpful with assignments such as increased exercise or attending classes or support groups.

4. Frame the assignments as a way to learn about oneself while trying new things. Emphasize the possibility of enjoying the opportunity to develop new skills that could be beneficial for a lifetime.

5. Leave little or nothing to chance by carefully clarifying the how, when, and where components of the assignment.

6. Do a thorough assessment of any and all obstacles which might prevent the client from following through with the assignment. Make no assumptions. For example, one client committed to doing an online search for employment during the week. However, an inspection of barriers revealed that the client had never used the internet and in fact did not even have an internet connection for his computer!

Underutilizing Clinical Assessment Instruments

Assessment tools, used early in therapy to measure the type and intensity of the presenting problem and periodically during the course of treatment, can improve treatment effectiveness, bolster client morale, and reduce premature termination by resistant clients.

Despite this, clinicians by and large are often skeptical about the value of utilizing assessment tools. For example, one clinical supervisor described a case where a postdoctoral intern was not following agency policy to administer a well-known and highly validated instrument. The trainee stated that she did not “believe in” the assessment because it was not particularly useful and took a lot of time to score–despite the fact that the specific instrument had proven its validity and utility in dozens of studies.

There are a number of factors that contribute to the effectiveness of utilizing assessment instruments:

1. The therapist gains information from a source that allows comparisons to other clients regarding the severity of the problem.

2. Repeating the test at periodic intervals can help demonstrate to the therapist and client whether treatment is being effective.

3. If the results indicate improvement, positive expectations are reinforced. If there is no improvement, the client and therapist can adjust the treatment approach appropriately.

4. Clients tend to see assessment utilization by the therapist as an act of caring, and it enhances client regard for a clinician’s expertise.

All of this and more–and yet clinicians often avoid assessment tools like the plague. Two common reasons for the underutilization of these instruments are the perceptions that they require a lot of time to take and score, and that they cost an arm and a leg. To counter this problem we have compiled a list of short, easy-to-score tests which are in the public domain–meaning they are free for the taking. (These are listed at the end of this article.)

While utilizing assessment tools is a good starting point for improving therapeutic outcome, there are two other factors which can enhance their use. First it is crucial to explain to clients that just like medical doctors, therapists utilize assessments in order to pinpoint possible problem areas. Lastly, results of assessments should not be kept secret from the client. It would seem quite odd if your medical doctor did not provide any feedback after a patient had a series of tests such as blood work or X-rays. Similarly, several studies have shown that an open discussion of the results of psychological tests enhances therapeutic outcome by increasing client engagement in the therapeutic process.

A Final Note

All clinicians have no doubt experienced something like the following scenario: You provide your client with some helpful information–“for all the reasons we have discussed, maybe now is not the time to start a new romantic relationship”; your client nods his head in agreement; and at the following session the client announces that he has fallen head over heels in love. The helpful information somehow went in one ear and out the other. Our hope in writing this article and the book upon which it is based is that it will actually impact clinician behavior, that readers will not just nod their heads in agreement, but also put one or two concepts into practice.

Helping clinicians move beyond the conceptual to the behavioral involves some self-assessment. This means taking a few minutes to answer the following questions: What is your clinical batting average? In other words, what percentage of your clients are dropping out prematurely? What type of clients are the dropouts? What is it about those clients that makes them more difficult to work with? What type of clients do you tend to do well with?

Addressing questions such as these enables us to take stock of our clinical strengths and weaknesses and can help us locate the therapeutic errors we may be making with clients – errors such as the ones discussed in this article. This in turn can lead to the implementation of new therapeutic practices and better outcomes for clients and ourselves.

Public Domain Assessment Tools

Following is a list of just a few of the many public domain assessment tools available:
Depression: Center for Epidemiologic Studies Depression Scale (CES-D)
Eating Disorders (Anorexia Nervosa): Eating Attitudes Test (EAT)
Social Anxiety: Fear of Negative Evaluation (FNE)
Post-Traumatic Stress Disorder: Impact of Event Scale – Revised (IES-R)
Substance Abuse (Alcohol): Michigan Alcoholism Screening Test (MAST)

1Lambert, M. J., & Barley, D. E. (2001). Research summary on the therapeutic relationship and psychotherapy outcome. Psychotherapy, 38(4), 357-361.

Transition Into Sports Psychology

Coming Home to Sports Psychology

Sports involvement has been an integral part of my life since childhood. As a psychologist, expanding my private practice and my teaching at the University of California, Berkeley, to include sports psychology has been a natural process. When searching for a dissertation topic 18 years ago, I had considered studying marathon runners but instead chose a "practical" topic, employee assistance programs. Interestingly enough, both of these areas of interest were directly shaped by my childhood experiences.

As a child, I participated in a wide array of sports and grew up in a corporate family that was often moved to different locations in the United States. Sports became a mainstay for meeting people and establishing relationships wherever we lived, a familiar and comfortable venue for connection. I participated in such sports as swimming, golf, equestrian, canoeing, tennis, and badminton. In elementary school, I competed in hunter jumper events with horses. As a high school student, I played on both the tennis and badminton teams. Entering high school in the sixties, I encountered resistance from my parents to my participation in non-traditional women's sports. I tried out for the school's first girls' cross-country team and was asked to join, but my parents didn't allow me to participate. Their (mostly my mother's) rationale was that the sport wasn't ladylike. I thought of this in particular as I was running the Western States 100-mile race across the Sierras in 1993. As you might imagine, sports have remained an integral part of my life as an adult. Thus, in the last several years, as I've shifted the focus of my practice to include a greater sports orientation, I've felt a sense of coming home.

Building a Practice

Working with both active and injured athletes, I've seen individuals from such sports as running, track and field, cycling, golf, tennis, and equestrian events, to name a few. To begin the shift toward a more sports-oriented clientele, I brainstormed ways to promote my sports psychology services and selected several directions to take. Since I have been a runner for over 20 years, I first reached out to the running community to offer my sports psychology expertise. For several years I volunteered my time working with the cross-country and track and field teams at San Francisco City College. I knew the coach through my personal involvement and suggested this pro bono service to him. He had me speak at an afternoon meeting with his track and field team and immediately seized upon the value of sports psychology. In addition, I joined the Association for the Advancement of Applied Sport Psychology (the major association of sport psychology professionals) and began attending their conferences. I also approached my boss at the University of California, Berkeley Extension, where I had been teaching in the Alcohol and Drug Studies Program since 1986, and suggested offering an Introduction to Sports Psychology class, which I still teach.

When I did my first doctoral internship at Cal State Hayward Counseling Center in 1982-83, I was lucky to obtain supervision with Dr. Betty Wenz, one of the grandmothers of the sports psychology movement. Dr. Wenz was instrumental in teaching me basic sports psychology principles and brought me along to assist in some of her work with synchronized swimmers. She also gave me guidance about the fundamental skills essential for providing thorough and competent sport psychology services, as well as the specific areas of knowledge that I needed to acquire and develop. The next two years of internships were in places where I could build a repertoire of skills that formed a foundation for the later application of sports psychology principles. I learned about using biofeedback for managing stress and enhancing performance, as well as the extensive use of cognitive-behavioral techniques. Training in group dynamics helped prepare me for working with team sports, and a general knowledge of the physiology of sports was essential. Beyond this specific clinical training, each psychologist needs an intimate and complete understanding, knowledge, and appreciation of sports and athletes, whether at the recreational, competitive, or elite level.

Working with Athletes

When dealing directly with athletes, you may need to be flexible, varying your work settings for individual sessions and presentations to groups or teams. I've often presented in gyms, on playing fields, in parks in the howling wind, or even gone out to where an individual athlete is competing to observe them directly during practice or competition. One factor I usually emphasize is that the primary focus of our work will be the mental skills applicable to the sport, not the technical skills that are the domain of their coaches.

One client I worked with was an accomplished Ironman-level triathlete who appeared intimidated at the prospect of running the Western States 100-mile race, even though she had trained fully for the event and remained relatively pain- and injury-free. By reviewing her past accomplishments, recalling previous successful performances, and reconnecting with the feelings and thoughts associated with them, she was able to regain her sense of self-confidence and have a great time at Western States, successfully completing the race in just over 26 hours.

Another client was an older scratch golfer who was considering retiring from his current job to play golf professionally. He had been plagued for years by his short game (particularly putting). In gathering information about his current approach, we discovered that when he putted he powered into the stroke just as he did his long game (irons and woods on the fairway). He thought about putts the way he thought about long 250-yard drives down the fairway: Power! Power! Power! We worked on reframing putting as a strategy-driven rather than a power-driven part of the game. His new thought: Contain and Direct! Needless to say, adjusting to this difference in the game took focus and concentration, which also helped him improve.

Training Requirements

As you might have noticed, I've referred several times to psychologists working with athletes. This is due primarily to the criteria that the Association for the Advancement of Applied Sport Psychology has established: they require a doctoral degree as part of their criteria for becoming a certified consultant. The general feeling is that the necessary skills lie within the scope of an individual trained at this level. A large number of sports psychology professionals work within academic or organizational settings and are involved in both applied and research work. They view sports psychology as a specialty only for doctoral-level therapists, who must have the aforementioned skills and training as well as enthusiasm, excitement, and a positive manner toward athletes.

Sport psychology is an exciting area of specialty that is in a period of new and challenging growth. Part of our task as sports psychology professionals is to educate the public about the usefulness and applicability of our skills for athletes of every caliber. To further educate yourself about "fitness," you might utilize University of California, Berkeley Extension's offerings in Fitness or even take the Introduction to Sports Psychology class next spring. In addition, to learn more about the Association for the Advancement of Applied Sport Psychology, you can go to their web site at www.aaasponline.org and possibly attend their next conference, which is in Nashville, Tennessee in late September.

Supershrinks: What is the secret of their success?

Clients of the best therapists improve at a rate at least 50 percent higher, and drop out at a rate at least 50 percent lower, than those of average clinicians. What is the key to superior performance? Are "supershrinks" made or born? Is it a matter of temperament or training? Have they discovered a secret unknown to other clinicians, or are their superior results simply a fluke, more measurement error than reality? We know that who provides the therapy is a much more important determinant of success than what treatment approach is provided: the age, gender, and diagnosis of the client have no impact on the success rate, nor do the experience, training, and theoretical orientation of the therapist. In attempting to answer these questions, Miller, Hubble, and Duncan have found that the best of the best simply work harder at improving their performance than others, and that attentiveness to feedback is crucial. When a measure of the alliance is used alongside a standardized outcome scale, available evidence shows clients are less likely to deteriorate, more likely to stay longer, and twice as likely to achieve a change of clinical significance.

Boisea trivittatus, better known as the box elder bug, emerges from the recesses of homes and dwellings in early spring. While feared neither for its bite nor its sting, most people consider the tiny insect a pest. The critter comes out by the thousands, resting in the sun and staining upholstery and draperies with its orange-colored wastes. Few find it endearing, with the exception perhaps of entomologists. It doesn't purr and won't fetch the morning paper. What is more, you will be sorry if you step on it. When crushed, the diminutive creature emits a putrid odor worthy of an animal many times its size.

For as long as anyone could remember, Boisea trivittatus was an unwelcome yet familiar guest in the offices and waiting area of a large Midwestern, multicounty community mental health center. Professional exterminators did their best to keep the bugs at bay, but inevitably many eluded the efforts to eliminate them. Tissues were placed strategically throughout the center for staff and clients to dispatch the escapees. In time, the arrangement became routine. Out of necessity, everyone tolerated the annual annoyance—with one notable exception.

Dawn, a 12-year veteran of the center, led the resistance to what she considered "insecticide." In a world turned against the bugs, she was their only ally. To save the tiny beasts, she collected and distributed old mason jars, imploring others to catch the little critters so that she could release them safely outdoors.

Few were surprised by Dawn's regard for the bugs. Most people who knew her would have characterized her as a holdout from the "Summer of Love." Her VW microbus, floor-length tie-dyed skirts, and Birkenstock sandals—combined with the scent of patchouli and sandalwood that lingered after her passage—solidified everyone's impression that she was a fugitive of Haight-Ashbury. Rumor had it that she'd been conceived at Esalen.

Despite these eccentricities, Dawn was hands-down the most effective therapist at the agency. This finding was established through a tightly controlled, research-to-practice study conducted at her agency. As part of this study of success rates in actual clinical settings, Dawn and her colleagues administered a standardized measure of progress to each client at every session.

What made her performance all the more compelling was that Dawn was the top performer seven years running. Moreover, factors widely believed to affect treatment outcome—the client's age, gender, diagnosis, level of functional impairment, or prior treatment history—did not affect her results. Other factors not correlated with her outcomes were her age, gender, training, professional discipline, licensure, or years of experience. Even her theoretical orientation proved inconsequential.

Contrast Dawn with Gordon, who could not have been more different. Rigidly conservative and brimming with confidence bordering on arrogance, Gordon managed to build a thriving private practice in an area where most practitioners were struggling to stay afloat financially. Many in the professional community sought to emulate his success. In the hopes of learning his secrets or earning his acknowledgment, they competed hard to become part of his inner circle.

Whispered conversations at parties and local professional meetings made clear that others regarded Gordon with envy and enmity. "Profits talk, patients walk," was one comment that captured the general feeling about him. And the critics could not have been more wrong. The people Gordon saw in his practice regarded him as caring and deeply committed to their welfare. Furthermore, he achieved outcomes that were far superior to those of the clinicians who carped about him. In fact, the same measures that confirmed Dawn's superior results placed Gordon in the top 25 percent of psychotherapists studied in the United States.

In 1974, researcher D. F. Ricks coined the term supershrink to describe a class of exceptional therapists—practitioners who stood head and shoulders above the rest. His study examined the long-term outcomes of "highly disturbed" adolescents. When the research participants were later examined as adults, he found that a select group, treated by one particular provider, fared notably better. In the same study, boys treated by the pseudoshrink demonstrated alarmingly poor adjustment as adults.

The fact that therapists differ in their ability to effect change is hardly a revelation. All of us have participated in hushed conversations about colleagues whose performance we feel falls short of the mark. We also recognize that some practitioners are a cut above the rest. With rare exceptions, whenever they take aim, they hit the bull's-eye. Nevertheless, since Ricks's first description, little has been done to further the investigation of super- and pseudoshrinks. Instead, professional time, energy, and resources have been directed exclusively toward identifying effective therapies. Trying to identify specific interventions that could be dispensed reliably for specific problems has a strong common-sense appeal. No one would argue with the success of the idea of problem-specific interventions in the field of medicine. But the evidence is incontrovertible. “Who provides the therapy is a much more important determinant of success than what treatment approach is provided.”

Consider a recent study conducted by Bruce Wampold and Jeb Brown in 2006 and published in the Journal of Consulting and Clinical Psychology. Briefly, the study included 581 licensed providers, including psychologists, psychiatrists, and master's-level providers, who were treating a diverse sample of over 6,000 clients. The therapists, the clientele, and the presenting complaints were not different in any meaningful way from clinical settings nationwide. As was the case with Dawn and Gordon, the clients' age, gender, and diagnosis had no impact on the treatment success rate and neither did the experience, training, or theoretical orientation of the therapists. However, clients of the best therapists in the sample improved at a rate at least 50 percent higher and dropped out at a rate at least 50 percent lower than those assigned to the average clinicians in the sample.

Another important finding emerged: in those cases in which psychotropic medication was combined with psychotherapy, the drugs did not perform consistently. As with talk therapy, effectiveness depended on who prescribed the drug. People seen by top providers achieved gains from the drugs 10 times greater than those seen by the less effective practitioners; among the latter group, the drugs made virtually no difference. So, in the chemistry of mental health treatment, orientations, techniques, and even medications are inert. The clinician is the catalyst.

The making of a Supershrink

For the past eight years the Institute for the Study of Therapeutic Change (ISTC), an international group of researchers and clinicians dedicated to studying what works in psychotherapy, has been tracking the outcomes of thousands of therapists treating tens of thousands of clients in myriad clinical settings across the United States and abroad. Like D. F. Ricks and other researchers, we found wide variations in effectiveness among practicing clinicians. Intrigued, we decided to try to determine why.

We began our investigation by looking at the research literature. The Institute has earned its reputation in part by reviewing research and publishing summaries and critical analyses on its website (www.talkingcure.com). We were well aware at the outset that little had been done since D. F. Ricks's original paper to deepen the understanding of super- and pseudoshrinks. Nevertheless, a massive amount of research had been conducted on what in general makes therapists and therapy effective. When we attempted to determine the characteristics of the most effective practitioners using our national database, with the hypothesis that therapists like Dawn and Gordon must simply do or embody more of "it," we smacked head-first into a brick wall. Neither the person of the therapist, nor technical prowess, separated the best from the rest.

Frustrated, but undeterred, we retraced our steps. Maybe we had missed something, a critical study, a nuance, a finding that would steer us in the right direction. We returned to our own database to take a second look, reviewing the numbers and checking the analyses. We asked consultants outside the Institute to verify our computations. We invited others to brainstorm possible explanations. Opinions varied from many of the factors we had already considered and ruled out to "it's all a matter of chance, noise in the system, more statistical artifact than fact." Put another way, supershrinks were not real and their emergence in any data analysis was entirely random. In the end, there was nothing we could point to that explained why some clinicians achieved consistently superior results. Seeing no solution, we gave up and turned our attention elsewhere.

The project would have remained shelved indefinitely had one of us not come across the work of Swedish psychologist K. Anders Ericsson. Nearly two years had passed since we had given up. Then Scott, returning to the U.S. after providing a week of training in Norway, stumbled on an article published in Fortune magazine. Weary from the road and frankly bored, he had taken the periodical from the passing flight attendant more for the glossy pictures and factoids than for intellectual stimulation. In short order, however, the cover story seized his attention—in big bold letters, "What it takes to be great." The subtitle cinched it: "Research now shows that the lack of natural talent is irrelevant to great success." Although the article itself was a mere four pages long, its content kept him occupied for the remaining eight hours of the flight.

Ericsson, Scott learned, was considered to be "the expert on experts." For the better part of two decades, he had studied the world's best athletes, authors, chess players, dart throwers, mathematicians, pianists, teachers, pilots, physicians, and others. He was also a bit of a maverick. In a world prone to attribute greatness to genetic endowment, Ericsson did not mince words, "The search for stable heritable characteristics that could predict or at least account for superior performance of eminent individuals [in sports, chess, music, medicine, etc.] has been surprisingly unsuccessful . . . Systematic laboratory research . . . provides no evidence for giftedness or innate talent."

Should Ericsson's bold and sweeping claims prove difficult to believe, take the example of Michael Jordan, widely regarded as the greatest basketball player of all time. When asked, most would cite natural advantages in height, reach, and leap as key to his success. Yet few know that "His Airness" was cut from his high school varsity basketball team! So much for the idea of being born great. It simply does not work that way.

The key to superior performance? As absurd as it sounds, the best of the best simply work harder at improving their performance than others. Jordan, for example, did not give up when thrown off the team. Instead, his failure drove him to the courts, where he practiced hour after hour. As he put it, "Whenever I was working out and got tired and figured I ought to stop, I'd close my eyes and see that list in the locker room without my name on it, and that usually got me going again."

Such deliberate practice, as Ericsson goes to great lengths to point out, is not the same as the number of hours spent on the job, but rather the amount of time devoted specifically to reaching for objectives "just beyond one's level of proficiency." He chides anyone who believes that experience creates expertise, saying, "Just because you've been walking for 50 years doesn't mean you're getting better at it." Interestingly, he and his group have found that elite performers across many different domains engage in the same amount of such practice, on average, every day, including weekends. In a study of 20-year-old musicians, for example, Ericsson and colleagues found that the top violinists spent twice as much time (10,000 hours on average) working to meet specific performance targets as the next-best players and 10 times as much as the average musician.

As time-consuming as this level of practice sounds—and it is—it is not enough. According to Ericsson, to reach the top level, attentiveness to feedback is crucial. Studies of physicians with an uncanny ability to diagnose baffling medical problems, for example, show that they act differently from their less capable, but equally well-trained, colleagues. In addition to visiting, examining, taking careful notes, and reflecting on their assessment of a particular patient, they take one additional critical step: they follow up. Unlike their "proficient" peers, they do not settle. Call it professional compulsiveness or pride; these physicians need to know whether they were right, even though finding out is neither required nor reimbursable. "This extra step," Ericsson says, gives the superdiagnostician "a significant advantage over his peers. It lets him better understand how and when he's improving."

Within days of touching down, Scott had shared Ericsson's findings with Mark and Barry. An intellectual frenzy followed. Articles were pulled, secondary references tracked down, and Ericsson's 918-page Cambridge Handbook of Expertise and Expert Performance purchased and read cover to cover. In the process, our earlier confusion gave way to understanding. With considerable chagrin, we realized that what therapists per se do is irrelevant to greatness. The path to excellence would never be found by limiting our explorations to the world of psychotherapy, with its attendant theories, tools, and techniques. Instead, we needed to redirect our attention to superior performance, regardless of calling or career.

Knowing what you don't know

Ericsson's work on practice and feedback also explained the studies that show how most of us grow continually in confidence over the course of our careers, despite little or no improvement in our actual rates of success. Hard to believe but true. On this score, the experience of psychologist Paul Clement is telling. Throughout his years of practice, he kept unusually thorough records of his work with clients, detailing hundreds of cases falling into 84 different diagnostic categories. "I had expected to find," he said in a quantitative analysis published in the peer-reviewed journal Professional Psychology, "that I had gotten better and better over the years . . . but my data failed to suggest any . . . change in my therapeutic effectiveness across the 26 years in question."

Contrary to conventional wisdom, the culprit behind such mistaken self-assessment is not incompetence, but rather proficiency. Within weeks and months of first starting out, noticeable mistakes in everyday professional activities become increasingly rare, and thereby make intentional modifications seem irrelevant, increasingly difficult, and costly in time and resources. Once more, this is human nature, a process that dogs every profession. Add to this the custom in our profession of conflating success with a particular method or technique, and the door to greatness for many therapists is slammed shut early on.

During the last few decades, for example, more than 10,000 "how-to" books on psychotherapy have been published. At the same time, the number of treatment approaches has mushroomed, going from around 60 in the early days to more than 400 psychological treatment models today. At present, there are 145 officially approved, manualized, evidence-based treatments for 51 of the 397 possible DSM diagnostic groups. Based on these numbers alone, one would be hard-pressed not to believe that real progress has been made by the field. More than ever before, we know what works for whom. Or do we?

Comparing the success rates of today with those of 10, 20, or 30 years ago is one way to find out. One would expect the profession to be progressing in a manner comparable to the Olympics. Fans know that during the last century, the best performance in every event has improved—in some cases, by as much as 50 percent. What is more, excellence at the top has had a trickle-down effect, improving performance at every level. For example, the fastest time clocked for the marathon in the 1896 Olympics was just one minute faster than the time now required merely to qualify for the most competitive marathons, like Boston and Chicago. By contrast, no measurable improvement in the effectiveness of psychotherapy has occurred in the last 30 years.

The time has come to confront the unpleasant truth: our tried-and-true strategies for improving what we do have failed. Instead of advancing as a field, we have stagnated, mistaking our feverish pedaling on a stationary bicycle for progress in the Tour de Therapy. This is not to say that therapy is ineffective. Quite the contrary: the data are clear and unequivocal. Psychotherapy works. Studies conducted over the last three decades show effects equal to or greater than those achieved by a host of well-accepted medical procedures, such as coronary artery bypass surgery, the pharmacological treatment of arthritis, and AZT for AIDS. At issue, however, is how we can learn from our experiences and "improve" our rate of success, both as a discipline and in our individual practices.

Incidentally, psychotherapists are not alone in this struggle to increase our expertise. During our survey of the literature on greatness, we came across an engaging and provocative article published in the New Yorker magazine. Using the treatment of cystic fibrosis (CF) as an example, science writer Atul Gawande showed how the same processes that undermine excellence in psychotherapy play out in medicine. Since 1964, medical researchers have been tracking the outcomes of patients with CF, a genetic disease striking 1,000 children yearly. The disease is progressive and, over time, mucus fills, hardens, and eventually destroys the lungs.

As is the case with psychotherapy, the evidence indicates that standard CF treatment works. With medical intervention, life expectancy is on average 33 years; without care, few patients survive infancy. The real story, as Gawande points out, is not that patients with CF live longer when treated, but that, as with psychotherapy, there is a significant variation in treatment success rates. At the best treatment centers, survival rates are 50 percent higher than the national average, meaning that patients live to be 47 on average.

Such differences, however, have not been achieved through standardization of care and the top-down imposition of the "best" practices. Indeed, Cincinnati Children's Hospital (CCH), one of the nation's most respected treatment centers—which employs two of the physicians responsible for preparing the national CF treatment guidelines—produced only average to poor outcomes. In fact, on one of the most critical measures, lung functioning, this institution scored in the bottom 25 percent.

It is a small comfort to know that our counterparts in medicine, a field routinely celebrated for its scientific rigor, stumble and fall just as much as we "soft-headed" psychotherapists do in the pursuit of excellence. But Gawande's article, available for free at the Institute for Healthcare Improvement website (www.ihi.org), provides much more than an opportunity to commiserate. His piece confirms what our own research revealed to be the essential first step in improving outcomes: knowing your baseline performance. It just stands to reason. If you call a friend for directions, her first question will be, "Where are you?" The same is true of Rand McNally, Yahoo!, and every other online mapping service. To get where you want to go, you first have to know where you are—a fact the clinical staff at CCH put to good use.

In truth, most practicing psychotherapists have no hard data on their success rates with clients. Fewer still have any idea how their outcomes compare to those of other clinicians or to national norms. Unlike therapists, though, the staff at CCH not only determined their overall rate of effectiveness, they were able to compare their success rates with other major CF treatment centers across the country. With such information in hand, the medical staff acted to push beyond their current standard of reliable performance. In time, their outcomes improved markedly.

A formula for success

Turning to specifics, the truth is we have yet to discover how supershrinks like Dawn and Gordon ascertain their baseline. Our experience leads us to believe that they do not know either. What is clear is that their appraisal, intuitive though it may be, is more accurate than that of average practitioners. It is likely, and our analysis thus far confirms, that the methods they employ will prove to be highly variable, defying any simple attempt at classification. Despite such differences in approach, the supershrinks without exception possess a keen "situational awareness": they are observant, alert and attentive. They constantly compare new information with what they already know.

For the rest of us mere mortals, a shortcut to supershrinkdom exists. It entails using simple paper and pencil scales and some basic statistics to compute your baseline, a process we discuss in detail in what follows. In the end, you may not become the Frank Sinatra, Tiger Woods, or Melissa Etheridge of the therapy world, but you will be able to sing, swing and strum along with the best.
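
The "basic statistics" involved need be nothing more than averaging each client's pre-to-post change on a standardized outcome measure and scaling it. As a minimal, hypothetical sketch (the scores, the 0-40 scale, and the effect-size formula below are illustrative assumptions, not the authors' actual procedure):

```python
# Hypothetical sketch: a therapist's baseline as a simple effect size.
# Assumes each client has an intake score and a final score on some
# standardized outcome measure (higher = better functioning).
from statistics import mean, stdev

def baseline_effect_size(intake_scores, final_scores):
    """Average pre-to-post change, expressed in units of the intake-score
    standard deviation (a rough Cohen's d-style effect size)."""
    changes = [post - pre for pre, post in zip(intake_scores, final_scores)]
    return mean(changes) / stdev(intake_scores)

# Invented caseload: eight clients' intake and final scores on a 0-40 scale.
intake = [14, 18, 22, 11, 25, 16, 19, 13]
final = [24, 25, 23, 20, 31, 22, 28, 18]

print(f"Baseline effect size: {baseline_effect_size(intake, final):.2f}")
```

Tracked over a year or two of cases, a number like this becomes the "Where are you?" answer that any route to improvement requires.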

The prospect of knowing one's true rate of success can provoke anxiety even in the best of us. For all that, studies of working clinicians provide little reason for concern. To illustrate, the outcomes reported in a recent study of 6,000 practitioners and 48,000 clients were as good as or better than those typically reported in tightly controlled studies. These findings are especially notable because clinicians, unlike researchers, do not have the luxury of handpicking the clients they treat. Most clinicians do good work most of the time, and do so while working with complex, difficult cases.

At the same time, you should not be surprised or disheartened if your results prove to be average. As with height, weight, and intelligence, the success rates of therapists are normally distributed, resembling the all-too-familiar bell curve. It is a fact that in nearly all facets of life, most of us cluster tightly around the mean. As the research by Hiatt and Hargrave shows, a more serious problem arises when therapists do not know how they are performing or, worse, think they know their effectiveness without outside confirmation.
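
If success rates really do follow a bell curve, then locating yourself on it is elementary statistics: given a field-wide mean and standard deviation of therapist effect sizes, a percentile falls straight out of the normal distribution. The norms below are invented for illustration; real ones would have to come from a large outcome database.

```python
# Hypothetical sketch: where a given effect size falls among therapists,
# assuming effectiveness is normally distributed. The mean and standard
# deviation here are invented, not published norms.
from statistics import NormalDist

FIELD_NORMS = NormalDist(mu=0.8, sigma=0.3)

def percentile(effect_size):
    """Fraction of therapists a given effect size outperforms."""
    return FIELD_NORMS.cdf(effect_size)

print(f"d = 0.80 -> {percentile(0.80):.0%}")  # exactly average
print(f"d = 1.25 -> {percentile(1.25):.0%}")  # well above average
```

The point is not the arithmetic but the comparison: without outside norms, most of us are simply guessing at our own effectiveness.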

Unfortunately, our own work with regard to tracking the outcomes of thousands of therapists working in diverse clinical settings has exposed a consistent and alarming pattern: those who are the slowest to adopt a valid and reliable procedure to establish their baseline performance typically have the poorest outcomes of the lot.

Should any doubt remain with regard to the value and importance of determining one's overall rate of success, let us underscore that the mere act of measuring yields improved outcomes. In fact, it is the first and among the most potent forms of feedback available to clinicians seeking excellence. Several recent studies demonstrate convincingly that monitoring client progress on an ongoing basis improves effectiveness dramatically. Our own study, published last year in the Journal of Brief Therapy, found that providing therapists with real-time feedback improved outcomes by nearly 65 percent. No downside exists to determining your baseline effectiveness: one either is proven effective or becomes more effective in the process.

There is more good news on this score. Share your baseline—good, bad, or average—with clients and the results are even more dramatic. Dropouts, the single greatest threat to therapeutic success, are cut in half. At the same time, outcomes improve yet again, in particular among those at greatest risk for treatment failure. Cincinnati Children's Hospital provides a case in point. Although surprised and understandably embarrassed about their overall poor national ranking, the medical staff nonetheless resolved to share the results with the patients and families. Contrary to what might have been predicted, not a single family chose to leave the program.

That everyone decided to remain committed rather than bolt should really come as no surprise. Across all types of relationships—business, family and friendship, medicine—success depends less on a connection during the good times than on maintaining engagement through the inevitable hard times. The fact the CCH staff shared the information about their poor performance increased the connection their patients felt with them and enhanced their engagement. It is no different in psychotherapy. Where we as therapists have the most impact on securing and sustaining engagement is through the relationship with our clients, what is commonly referred to as the "alliance." When it works well, client and therapist reach and maintain agreement about where they are going and the means by which they will get there. Equally important is the strength of the emotional connection—the bond.

Supershrinks, as our own research shows, are exquisitely attuned to the vicissitudes of client engagement. In what amounts to a quantum difference between themselves and average therapists, they are more likely to ask for and receive negative feedback about the quality of the work and their contribution to the alliance. We have now confirmed this finding in numerous independent samples of practitioners working in diverse settings with a wide range of presenting problems. The best clinicians, those falling in the top 25 percent of treatment outcomes, consistently achieve lower scores on standardized alliance measures at the outset of therapy, enabling them to address potential problems in the working relationship. By contrast, median therapists commonly receive negative feedback later in treatment, at a time when clients have already disengaged and are at heightened risk for dropping out.

How do the supershrinks use feedback with regard to the alliance to maintain engagement? A session conducted by Dawn, rescuer of the box elder bugs, is representative of the work done by the field's most effective practitioners. At the time of the visit, we were working as consultants to her agency, teaching the staff to use the standardized outcome and alliance scales, and observing selected clinical interviews from behind a one-way mirror. She had been meeting with an elderly man for the better part of an hour. Although the session initially had lurched along, an easy give and take soon developed between the two. Everyone watching agreed that, overall, the session had gone remarkably well.

At this point, Dawn gave the alliance measure to the client, saying "This is the scale I told you about at the beginning of our visit. It's something new we're doing here. It's a way for me to check in, to get your feedback or input about what we did here today."

Without comment, the man took the form, and after quickly completing it, handed it back to Dawn.

"Ohm wow," she remarked, after rapidly scoring the measure, "you've given me, or the session at least, the highest marks possible."

With that, everyone behind the one-way mirror began to stir in their chairs. Each of us was expecting Dawn to wrap up the session—even, it appeared, the client who was inching forward on his chair. Instead, she leaned toward him.

"I'm glad you came today," she said.

"It was a good idea," he responded, "um, my, uh, doctor told me to come, in, and . . . I did, and, um . . . it's been a nice visit."

"So, will you be coming back?"

Without missing a beat, the man replied, "You know, I'm going to be all right. A person doesn't get over a thing like this overnight. It's going to take me a while. But don't you worry."

Behind the mirror, we and the staff were surprised again. The session had gone well. He had been engaged. A follow-up appointment had been made. Now we heard ambivalence in his voice.

For her part, Dawn was not about to let him off the hook. "I'm hoping you will come back."

"You know, I miss her terribly," he said, "it's awfully lonely at night. But, I'll be all right. As I said, don't worry about me."

"I appreciate that, appreciate what you just said, but actually what I worry about is that I missed something. Come to think about it, if we were to change places, if I were in your shoes, I'd be wondering, 'What really can she know or understand about this, and more, what can she possibly do?'"

A long silence followed. Eventually, the man looked up, and with tears in his eyes, caught her gaze.

Softly, Dawn continued, "I'd like you to come back. I'm not sure what this might mean to you right now, but you don't have to do this alone."

Nodding affirmatively, the man stood, took Dawn's hand, and gave it a squeeze. "See you, then."

Several sessions followed. During that period his scores on the standardized outcome measure improved considerably. At the time, the team was impressed with Dawn. Her sensitivity and persistence paid off, keeping the elderly man engaged, and preventing his dropping out. The real import of her actions, however, did not occur to any of us until much later.

All therapists experience similar incisive moments in their work with clients; times when they are acutely insightful, discerning, even wise. However, such experiences are actually of little consequence in separating the good from the great. Instead, superior performance is found in the margins—the small but consistent difference in the number of times corrective feedback is sought, successfully obtained, and then acted on.

Most therapists, when asked, report that they check in routinely with their clients and know when to do so. But our own research found this to be far from the case. In early 1998, we initiated a study to investigate the impact on treatment outcome of seeking client feedback. Several formats were included. In one, therapists were supposed to seek informal client input on their own. In another, standardized, client-completed outcome and alliance measures were administered and the results shared with fellow therapists. Treatment-as-usual served as a third, control group.

Initial results of the study pointed to an advantage for the feedback conditions. Ultimately, however, the entire project had to be scrapped as a review of the videotapes showed that the therapists in the informal group failed routinely to ask clients for their input—even though, when later queried, the clinicians maintained they had sought feedback.

For their part, supershrinks consistently seek client feedback about how the client feels about them and their work together; they don't just say they do. Dawn perhaps said it best: "I always ask. Ninety-nine per cent of the time, it doesn't go anywhere—at least at the moment. Sometimes I'll get a call, but rarely. More likely, I'll call, and every so often my nosiness uncovers something, some, I don't know quite how to say it, some barrier or break, something in the way of our working together." Such persistence in the face of infrequent payoff is a defining characteristic of those destined for greatness.

Whereas birds can fly, the rest of us need an airplane. When a simple measure of the alliance is used in conjunction with a standardized outcome scale, available evidence shows clients are less likely to deteriorate, more likely to stay longer, and twice as likely to achieve a change of clinical significance. What is more, when applied on an agency-wide basis, tracking client progress and experience of the therapeutic relationship has an effect similar to the one noted earlier in the Olympics: across the board, performance improves; everyone gets better. As John F. Kennedy was fond of saying, "A rising tide lifts all boats."

While it is true that the tide raises everyone, we have observed that supershrinks continue to beat others out of the dock. Two factors account for this. As noted earlier, superior performers engage in significantly more deliberate practice. That is, as Ericsson, the expert on experts says, "effortful activity designed to improve individual target performance." Specific methods of deliberate practice have been developed and employed in the training of pilots, surgeons, and others in highly demanding occupations. Our most recent work has focused on adapting these procedures for use in psychotherapy.

In practical terms, the process involves three steps: think, act, and, finally, reflect. This approach can be remembered by the acronym T.A.R. To prepare for moving beyond the realm of reliable performance, the best of the best engage in forethought. This means they set specific goals and identify the particular means they will use to reach them. It is important to note that superior performance depends on simultaneously attending to both the ends and the means.

To illustrate, suppose a therapist wanted to improve the engagement level of clients mandated into treatment for substance abuse. First, they would need to define in measurable terms how they would know, what they would see, that would tell them the client is engaged actively in the treatment (e.g., attendance, dialog, eye contact, posture, etc.). Following this, the therapist would develop a step-by-step plan to achieve the specific objectives. Because therapies that focus on client goals result in greater participation, the therapist might, for example, create a list of questions designed to elicit and confirm what the client wants. Not only this, but time would be spent in anticipating what the client might say and planning a strategy for each response.

In the act phase, successful experts track their performance. They monitor on an ongoing basis whether they used each of the steps or strategies outlined in the thinking phase and the quality with which each step was executed. The sheer volume of detail gathered in assessing their performance distinguishes the exceptional from their more average counterparts.

During the reflection phase, top performers review the details of their performance and identify specific actions and alternate strategies for reaching their goals. Where unsuccessful learners paint with broad strokes, attributing failure to external and uncontrollable factors (e.g., "I had a bad day," "I wasn't with it"), the experts know exactly what they do, more often citing controllable factors (e.g., "I should have done x instead of y," or "I forgot to do x and will do x plus y next time"). In our work with psychotherapists, for example, we have found that average practitioners are more likely to spend time hypothesizing about failed strategies, believing perhaps that understanding the reasons why an approach did not work will lead to better outcomes, and less time thinking about strategies that might be more effective.

Returning to the example above, an average therapist would be more likely to attribute the failure to engage the mandated substance abuser to denial, resistance, or lack of motivation. The expert, on the other hand, would say, "Instead of organizing the session around 'drug use,' I should have emphasized what the client wanted—getting his driver's license back. Next time, I will explore in detail what the two of us need to do right now to get him back in the driver's seat."

The penchant for seeking explanations for treatment failures can have life-and-death consequences. In the 1960s, the average lifespan of children with cystic fibrosis treated by "proficient" pediatricians was three years. The field as a whole attributed the high mortality rate routinely to the illness itself, a belief which, in retrospect, can only be viewed as a self-fulfilling prophecy. After all, why search for alternative methods if the disease invariably kills? Although certainly less dramatic, psychologist William Miller makes a similar point about psychotherapy, noting that most models do not account for how people change, but rather why they stay the same. In our experience, diagnostic classifications often serve a similar function by attributing the cause of a failing or failed therapy to the disorder.

By comparison, deliberate practice bestows clear advantages. In place of static stories and summary conclusions, options predominate. Take chess, for example. The unimaginable speed with which master players intuit the board and make their moves gives them the appearance of wizards, especially to dabblers. Research proves this to be far from the case. In point of fact, they possess no unique or innate ability or advantage in memory. Far from it. Their command of the game is simply a function of numbers: they have played this game and a thousand others before. As a result, they have more means at their disposal.

The difference between average and world-class players becomes especially apparent when stress becomes a factor. Confronted by novel, complex, or challenging situations, the focus of the merely proficient performers narrows to the point of tunnel vision. In chess, these people are easy to spot. They are the ones sitting hunched over the board, their finger glued to a piece, contemplating the next move. But studies of pilots, air traffic controllers, emergency room staff, and others in demanding situations and pursuits show that superior performers expand their awareness, availing themselves of all the options they have identified, rehearsed, and perfected over time.

Deliberate practice, to be sure, is not for the harried or hassled. Neither is it for slackers. Yet the willingness to engage in deliberate practice is what separates the "wheat from the chaff." The reason is simple: doing it is unrewarding in almost every way. As Ericsson notes, "Unlike play, deliberate practice is not inherently motivating; and unlike work, it does not lead to immediate social and monetary rewards. In addition, engaging in [it] generates costs." No third party (e.g., client, insurance company, or government body) will pay for the time spent to track client progress and alliance, identify at-risk cases, develop alternate strategies, seek permission to record treatment sessions, ensure HIPAA compliance and confidentiality, systematically review the recordings, evaluate and refine the execution of the strategies, and solicit outside consultation, training, or coaching specific to particular skill sets. And, let's face it, few of us are willing to pay for it out of pocket. But this, and all we have just described, is exactly what the supershrinks do. In a word, they are self-motivated.

What leads people, children and adults, to devote the time, energy, and resources necessary to achieve greatness is poorly understood. Even when the path to improved performance is clear and requires little effort, most do not follow through. As recently reported in The New York Times, a study of 12 highly experienced gastroenterologists, each having performed a minimum of 3,000 colonoscopies, found that some were 10 times better at finding precancerous polyps than others. An extremely simple solution, one involving no technical skill or diagnostic prowess, was found to increase the polyp-detection rate by 50 percent. Sadly, despite this dramatic improvement, most of the doctors stopped using the remedy the moment the clinical trial ended.

Ericsson and colleagues believe that future studies of elite performers will give us a better idea of how motivation is promoted and sustained. Until then, we know that deliberate practice works best when done multiple times each day, including weekends, for short periods, interrupted by brief rest breaks. "Cramming" or "crash courses" don't work and increase the likelihood of exhaustion and burnout.

The Institute for the Study of Therapeutic Change is developing a web-based system to facilitate deliberate practice. The system is patterned after similar programs in use with pilots, surgeons, and other professionals. The advantage here is that the steps to excellence are automated. At www.myoutcomes.com, clinicians are already able to track their outcomes, establish their baseline, and compare their performance to national norms. The system also provides feedback to therapists when clients are at risk for deterioration or drop-out.

At present, we are testing algorithms that identify patterns in the data associated with superior outcomes. Such formulas, based on thousands of clients and therapists, will enable us to identify when an individual's performance is at variance with the pattern of excellence. When this happens, the clinician will be notified by e-mail of an online deliberate practice opportunity. Such training will differ from traditional continuing education in two critical ways. First, it will be targeted to the development of skill sets specific to the needs of the individual clinician. Second, and of greater consequence in the pursuit of excellence, the impact on outcome can be measured immediately. It is our hope that such a system will make the process of deliberate practice more accessible, less onerous, and more efficient.

The present era in psychotherapy has been referred to by many leading thinkers as the "age of accountability." Everyone wants to know what they are getting for their money. But it is no longer a simple matter of cost and the bottom line. People are looking for value. As a field, we have the means at our disposal to demonstrate the worth of psychotherapy in the eyes of consumers and payers and increase its value. The question is, will we?

References

Clement, P. (1994). Quantitative Evaluation of 26 Years of Private Practice. Professional Psychology: Research and Practice, 25, 2, 173-76.

Colvin, G. (2006, October 19). What It Takes to Be Great. Fortune.

Ericsson, K. A. (2006). Cambridge Handbook of Expertise and Expert Performance. United Kingdom: Cambridge University Press.

Gawande, A. (2004, December 6). The Bell Curve. The New Yorker.

Hiatt, D. & Hargrave, G. E. (1995). The Characteristics of Highly Effective Therapists in Managed Behavioral Provider Networks. Behavioral Healthcare Tomorrow, 4, 19-22.

Miller, S., Duncan, B., Brown, J., Sorrell, R., & Chalk, M. (2007). Using Formal Client Feedback to Improve Retention and Outcome. Journal of Brief Therapy, 5, 19-28.

Ricks, D.F. (1974). Supershrink: Methods of a therapist judged successful on the basis of adult outcomes of adolescent patients. In D. F. Ricks, M. Roff (Eds.), Life History Research in Psychopathology. Minneapolis: University of Minnesota Press, 275-297.

Villarosa, L. (2006, December 19). Done Right, Colonoscopy Takes Time, Study Finds. The New York Times, Health Section.

Wampold, B. E. & Brown, J. (2005). Estimating Variability in Outcomes Attributable to Therapists: A Naturalistic Study of Outcomes in Managed Care. Journal of Consulting and Clinical Psychology, 73, 5, 914-23.

“When I’m good, I’m very good, but when I’m bad I’m better”: A New Mantra for Psychotherapists

Current estimates suggest that nearly 50 percent of therapy clients drop out and at least one third, and up to two thirds, do not benefit from our usual strategies. Barry Duncan and Scott Miller provide a comprehensive summary of the Outcome-Informed, Client-Directed approach and a detailed, practical overview of its application in clinical practice. Through case examples they demonstrate how most practitioners can substantially increase their therapeutic effectiveness by accurately identifying those clients who are not responding, and by addressing the lack of change in a way that keeps clients engaged in treatment and forges new directions.

Introduction

At first blush, Mae West's famous words 'When I'm good, I'm very good, but when I'm bad I'm better' hardly seem like a guide for therapists to live by—but, as it turns out, they could be. Research demonstrates consistently that who the therapist is accounts for far more of the variance of change (6 to 9 percent) than the model or technique administered (1 percent). In fact, therapist effectiveness ranges from a paltry 20 percent to an impressive 70 percent. A small group of clinicians—sometimes called 'supershrinks'—obtain demonstrably superior outcomes in most of their cases, while others fall predictably on the less-exalted sections of the bell-shaped curve. However, most practitioners can join the ranks of supershrinks, or at least increase their therapeutic effectiveness substantially.
 
Consider Matt, a twenty-something software whiz who was on the road frequently to trouble-shoot customer problems. Matt loved his job but travelling was an ordeal—not because of flying but because of another, far more embarrassing problem. Matt was long past feeling frustrated about standing and standing in public restrooms trying to 'go.' What started as a mild discomfort and inconvenience easily solved by repeated restroom visits had progressed to full-blown anxiety attacks, an excruciating pressure, and an intense dread before each trip. Feeling hopeless and demoralized, Matt considered changing jobs but as a last resort decided instead to see a therapist.
 
Matt liked the therapist and it felt good finally to tell someone about the problem. The therapist worked with Matt to implement relaxation and self-talk strategies. Matt practiced in session and tried to use the ideas on his next trip, but still no 'go.' The problem continued to get worse. Now three sessions in, Matt was at significant risk for a negative outcome—either dropping out or continuing in therapy without benefit.
 
We have all encountered clients unmoved by treatment. Therapists often blame themselves. The overwhelming majority of psychotherapists, as clichéd as it sounds, want to be helpful. Many of us answered "I want to help people" on graduate school applications as the reason we chose to be therapists. Often, some well-meaning person dissuaded us from that answer because it didn't sound sophisticated or appeared too 'co-dependent.' Such aspirations, we now believe, are not only noble but can provide just what is needed to improve clinical effectiveness. After all, there is not much financial incentive for doing better therapy—we don't do this work because we expect to acquire the lifestyles of the rich and famous.
 
Unfortunately, the altruistic desire to be helpful sometimes leads us to believe that if we were just smart enough or trained correctly, clients would not remain inured to our best efforts—if we found the Holy Grail, that special model or technique, we could once and for all defeat the psychic dragons that terrorize clients. Amid explanations and remedies aplenty, therapists search courageously for designer explanations and brand-name miracles, but continue to observe that clients drop out, or even worse, continue without benefit. Current estimates suggest that nearly 50 percent of our clients drop out and at least one third, and up to two thirds, do not benefit from our usual strategies.
 
So what can we do to channel our healthy desire to be helpful? If we listen to the lessons of the top performers, the first thing we should do is step outside of our comfort zones and push the limits of our current performance—to identify accurately those clients not responding to our therapeutic business as usual, and address the lack of change in a way that keeps clients engaged in treatment and forges new directions.
 
To recapture those clients who slip through the cracks, we need to embrace what is known about change. First, many studies reveal that the majority of clients experience change in the first six visits—clients reporting little or no change early on tend to show no improvement over the entire course of therapy, or wind up dropping out. Early change, in other words, predicts engagement in therapy and ongoing benefit. This doesn't mean that a client is 'cured' or the problem is totally resolved, but rather that the client has a subjective sense that things are getting better. And second, a mountain of studies has long demonstrated another robust predictor—that reliable, tried-and-true but taken-for-granted old friend—the therapeutic alliance. Clients who rate the relationship with their therapist highly tend to be those who stick around in therapy and benefit from it.
 
Next we need to measure those known predictors in a systematic way with reliable and valid instruments. So instead of regarding the first few therapy sessions as a 'warm-up' period or a chance to try out the latest technique, we engage the client in helping us judge whether therapy is providing benefit. Obtaining feedback on standardized measures about success or failure during those initial meetings provides invaluable information about the match between ourselves, our approach, and the client—enabling us to know when we are bad, so we can be even better. The only way we can improve our outcomes is to know, very early on, when the client is not benefiting—we need something akin to an early warning signal.
 
Using standardized measures to monitor outcome may make your skin crawl and bring to mind torture devices like the Rorschach or MMPI. But the forms for these measures are not used to pass judgment, diagnose or unravel the mysteries of the human psyche. Rather, these measures invite clients into the inner circle of mental health and substance abuse services—they involve clients collaboratively in monitoring progress toward their goals and the fit of the services they are receiving, and amplify their voices in any decisions about their care.

The Outcome Rating Scale (ORS)

You might also think that the last thing you need is to add more paperwork to your practice. But finding out who is and isn't responding to therapy need not be cumbersome. In fact, it only takes a minute. Dissatisfied with the complexity, length, and user-unfriendliness of existing outcome measures, we developed the Outcome Rating Scale (ORS) as a brief clinical alternative. The ORS (child measures also available) and all the measures discussed here are available for free download at talkingcure.com. The ORS assesses three dimensions:
  1. Personal or symptomatic distress (measuring individual well-being)
  2. Interpersonal well-being (measuring how well the client is getting along in intimate relationships)
  3. Social role (measuring satisfaction with work/school and relationships outside of the home)
Changes in these three areas are considered widely to be valid indicators of successful outcome. The ORS simply translates these three areas and an overall rating into a visual analog format of four 10-cm lines, with instructions to place a mark on each line with low estimates to the left and high to the right. The four 10-cm lines add to a total score of 40. The score is simply the summation of the marks made by the client to the nearest millimeter on each of the four lines, measured by a centimeter ruler or available template. A score of 25, the clinical cutoff, differentiates those who are experiencing enough distress to be in a helping relationship from those who are not. Because of its simplicity, ORS feedback is available immediately for use at the time the service is delivered. Rated at an eighth-grade reading level, the ORS is understood easily and clients have little difficulty connecting it to their day-to-day lived experience.
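The scoring arithmetic just described (four 10-cm analog lines summed to a 0–40 total, with a clinical cutoff of 25) can be sketched in a few lines of code. This is purely an illustration of the arithmetic in the text; the function name, validation, and the per-line breakdown of Matt's score are our own assumptions, not part of the published ORS materials.

```python
def score_ors(marks_cm):
    """Sum a client's marks on the four ORS lines.

    marks_cm: four measurements in centimeters (0-10 each), read to the
    nearest millimeter with a ruler or template, as the text describes.
    """
    if len(marks_cm) != 4 or any(not 0 <= m <= 10 for m in marks_cm):
        raise ValueError("expected four marks between 0 and 10 cm")
    return round(sum(marks_cm), 1)

# Clinical cutoff from the text: below 25 suggests enough distress
# to warrant a helping relationship.
CLINICAL_CUTOFF = 25

# A hypothetical per-line breakdown totaling 18, Matt's intake score:
total = score_ors([4.2, 5.1, 4.0, 4.7])      # 18.0
in_clinical_range = total < CLINICAL_CUTOFF  # True
```

The point of the sketch is how little machinery is involved: one summation and one comparison, cheap enough to do in session before the conversation begins.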
 
Matt completed the ORS before each session. He entered therapy with a score of 18, about average for those attending outpatient settings, but continued to hover at that score. At the third session, when the ORS reflected no change, it was not front-page news to Matt. But a different process ensued. In the same spirit of collaboration as the assessment process, Matt and his therapist brainstormed ideas, a free-for-all of unedited speculations and suggestions of alternatives, from changing nothing about the therapy to taking medication to shifting treatment approaches. During this open exchange Matt intimated that he was beginning to feel angry about the whole thing—real angry. The therapist noticed that when Matt worked himself up to a good anger—about how his problem interfered with his work and added a huge hassle in any extended situation away from his own bathroom—he became quite animated, a stark contrast to the passively resigned person who had characterized their previous sessions. One of them, which one remains a mystery, mentioned the words 'pissed off' and both broke into raucous laughter. Subsequently, the therapist suggested that instead of responding with hopelessness when the problem occurred, Matt work himself up to a good anger about how this problem made his life miserable. Matt, a rock-and-roll buff, added that he could also sing the Tom Petty song "I Won't Back Down" during his tirade at the toilet. Matt allowed himself, when standing in front of the urinal, to become incensed—downright 'pissed off,' and amused. And he started to go.
 
This process, the delightful creative energy that emerges from the wonderful interpersonal event we call therapy, could have happened to any therapist working with Matt. The difference is that the use of the outcome measure spotlighted the lack of change and made it impossible to ignore. The ORS brought the risk of a negative outcome front and center and allowed the therapist to enact the second characteristic of supershrinks, to be exceptionally alert to the risk of dropout and treatment failure. In the past, we might have continued with the same treatment for several more sessions, unaware of its ineffectiveness or believing (hoping, even praying) that our usual strategies would eventually take hold, but the reliable outcome data pushed us to explore different treatment options by the end of the third visit.
 
Pushing the limits of your performance requires monitoring the fit of your services with the client's expectations about the alliance. The ongoing assessment of the alliance enables therapists to identify and correct areas of weakness in the delivery of services before they exert a negative effect on outcome.
 

The Session Rating Scale (SRS)

Research shows repeatedly that clients' ratings of the alliance are far more predictive of improvement than the type of intervention or the therapist's ratings of the alliance. Recognizing these much-replicated findings, we developed the Session Rating Scale (SRS) as a brief clinical alternative to longer research-based alliance measures to encourage routine conversations with clients about the alliance. The SRS also contains four items. First, a relationship scale rates the meeting on a continuum from "I did not feel heard, understood, and respected" to "I felt heard, understood, and respected." Second is a goals and topics scale that rates the conversation on a continuum from "We did not work on or talk about what I wanted to work on or talk about" to "We worked on or talked about what I wanted to work on or talk about." Third is an approach or method scale (an indication of a match with the client's theory of change) requiring the client to rate the meeting on a continuum from "The approach is not a good fit for me" to "The approach is a good fit for me." Finally, the fourth scale looks at how the client perceives the encounter in total along the continuum: "There was something missing in the session today" to "Overall, today's session was right for me."
 
The SRS simply translates what is known about the alliance into four visual analog scales, with instructions to place a mark on a line with negative responses depicted on the left and positive responses indicated on the right. The SRS allows alliance feedback in real time so that problems may be addressed. Like the ORS, the instrument takes less than a minute to administer and score. The SRS is scored similarly to the ORS, by adding the total of the client's marks on the four 10-cm lines. The total score falls into three categories:
  • An SRS score of 0–34 reflects a poor alliance,
  • an SRS score of 35–38 reflects a fair alliance,
  • an SRS score of 39–40 reflects a good alliance.
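The same line-summing used for the ORS applies to the SRS, and the cutoff bands can be captured in a small helper. Again a sketch under assumptions: the function name is ours, and the text does not say how fractional totals between bands (e.g., 34.5) should be classified, so this version assigns them to the lower band.

```python
def classify_srs(total):
    """Map an SRS total (0-40, the sum of four 10-cm lines) to the
    alliance bands given in the text: 0-34 poor, 35-38 fair, 39-40 good.
    """
    if not 0 <= total <= 40:
        raise ValueError("SRS total must be between 0 and 40")
    if total < 35:
        return "poor"
    if total < 39:
        return "fair"
    return "good"

# A session totaling 36 would fall in the "fair" band, prompting a
# conversation with the client about the fit of the work.
band = classify_srs(36)  # "fair"
```

Note how narrow the "good" band is: anything under 39 out of 40 is a cue to ask the client what could be different, which matches the chapter's point that clients signal problems subtly on the measure long before they say so directly.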

The SRS allows the implementation of the final lesson of the supershrinks—seek, obtain, and maintain more consumer engagement. Clients drop out of therapy for two reasons: one is that therapy is not helping (hence monitoring outcome) and the other is alliance problems—they are not engaged or turned on by the process. The most direct way to improve your effectiveness is simply to keep people engaged in therapy.

 
An alliance problem that occurs frequently emerges when clients' goals do not fit our own sensibilities about what they need. This may be particularly true if clients carry certain diagnoses or problem scenarios. Consider 19-year-old Sarah, who lived in a group home and received social security disability for mental illness. Sarah was referred for counseling because others were concerned that she was socially withdrawn. Everyone was also worried about Sarah's health because she was overweight and spent much of her time watching TV and eating snack foods.
 
In therapy Sarah agreed that she was lonely, but expressed a desire to be a Miami Heat cheerleader. Perhaps understandably, that goal was not taken seriously. After all, Sarah had never been a cheerleader, was 'schizophrenic,' and was not exactly in the best of shape. So no one listened, or even knew why Sarah had such an interesting goal. And the work with Sarah floundered. She spoke rarely and gave minimal answers to questions. In short, Sarah was not engaged and was at risk for dropout or a negative outcome.
 
The therapist routinely gave Sarah the SRS, and she had reported that everything was going swimmingly, although the goals scale was an 8.7 out of 10, rather than a 9 or above like the rest.
 
Sometimes it takes a bit more work to create the conditions that allow clients to be forthright with us, to develop a culture of feedback in the room. The power disparity, combined with any socioeconomic, ethnic, or racial differences, makes it difficult to tell authority figures that they are on the wrong track. Think about the last time you told your doctor that he or she was not performing well. Clients, however, will let us know subtly on alliance measures far before they will confront us directly.
 
At the end of the third session, the therapist and Sarah reviewed her responses on the SRS. Did she truly feel understood? Was the therapy focused on her goals? Did the approach make sense to her? Such reviews are helpful in fine-tuning the therapy or addressing problems in the therapeutic relationship that have been missed or gone unreported. Sarah, when asked the question about goals, all the while avoiding eye contact and nearly whispering, repeated her desire to be a Miami Heat cheerleader.
 
The therapist looked at the SRS and the lights came on. The slight difference on the goals scale told the tale. When the therapist finally asked Sarah about her goal, she told the story of growing up watching Miami Heat basketball with her dad who delighted in Sarah's performance of the cheers. Sarah sparkled when she talked of her father, who passed away several years previously, and the therapist noted that it was the most he had ever heard her speak. He took this experience to heart and often asked Sarah about her father. The therapist also put the brakes on his efforts to get Sarah to socialize or exercise (his goals), and instead leaned more toward Sarah's interest in cheerleading. Sarah watched cheerleading contests regularly on ESPN and enjoyed sharing her expertise. She also knew a lot about basketball.
 
Sarah's SRS score improved on the goal scale and her ORS score increased dramatically. After a while, Sarah organized a cheerleading squad for her agency's basketball team who played local civic organizations to raise money for the group home. Sarah's involvement with the team ultimately addressed the referral concerns about her social withdrawal and lack of activity. The SRS helps us take clients and their engagement more seriously, like the supershrinks do. Walking the path cut by client goals often reveals alternative routes that would have never been discovered otherwise.
 
Providing feedback to clinicians on the clients' experience of the alliance and progress has been shown to result in significant improvements in both client retention and outcome. We found that clients of therapists who opted out of completing the SRS were twice as likely to drop out and three times more likely to have a negative outcome. In the same study of over 6000 clients, effectiveness rates doubled. As incredible as the results appear, they are consistent with findings from other researchers.
 
In a 2003 meta-analysis of three studies, Michael Lambert, a pioneer of using client feedback, reported that those helping relationships at risk for a negative outcome which received formal feedback were, at the conclusion of therapy, better off than 65 percent of those without information regarding progress. Think about this for a minute. Even if you are one of the most effective therapists, for every cycle of 10 clients you see, three will go home without benefit. Over the course of a year, for a therapist with a full caseload, this amounts to a lot of unhappy clients. This research shows that you can recover a substantial portion of those who don't benefit by first identifying who they are, keeping them engaged, and tailoring your services accordingly.
 

The Nuts and Bolts

Collecting data on standardized measures and using what we call 'practice-based evidence' can improve your effectiveness substantially. "Wait a minute," you say, "this sounds a lot like research!" Given the legendary schism between research and practice, sometimes getting therapists to do the measures is indeed a tall order because it does sound a lot like the 'R' word.
 
A story illustrates the sentiments that many practitioners feel about research. Two researchers were attending an annual conference. Although enjoying the proceedings, they decided to find some diversion to combat the tedium of sitting all day and absorbing vast amounts of information. They settled on a hot air balloon ride and were quite enjoying themselves until a mysterious fog rolled in. Hopelessly lost, they drifted for hours until a clearing in the fog appeared finally and they saw a man standing in an open field. Joyfully, they yelled down at the man, "Where are we?" The man looked at them, and then down at the ground, before turning a full 360 degrees to survey his surroundings. Finally, after scratching his beard and what seemed to be several moments of facial contortions reflecting deep concentration, the man looked up and said, "You are above my farm."
 
The first researcher looked at the second researcher and said, "That man is a researcher—he is a scientist!" To which the second researcher replied, "Are you crazy, man? He is a simple farmer!" "No," answered the first researcher emphatically, "that man is a researcher and there are three facts that support my assertion: First, what he said was absolutely 100% accurate; second, he addressed our question systematically through an examination of all of the empirical evidence at his disposal, and then deliberated carefully on the data before delivering his conclusion; and finally, the third reason I know he is a researcher is that what he told us is absolutely useless to our predicament."
 
But unlike much of what is passed off as research, the systematic collection of outcome data in your practice is not worthless to your predicament. It allows you the luxury of being useful to clients who would otherwise not be helped. And it helps you get out of the way of those clients you are not helping and connect them to more likely opportunities for change.
 
Collaboration with clients to monitor outcome and fit actually starts before formal therapy. Clients are informed, when scheduling the first contact, about the nature of the partnership and the creation of a 'culture of feedback' in which their voice is essential.
 
"I want to help you reach your goals. I have found it important to monitor progress from meeting to meeting using two very short forms. Your ongoing feedback will tell us if we are on track, or need to change something about our approach, or include other resources or referrals to help you get what you want. I want to know this sooner rather than later, because if I am not the person for you, I want to move you on quickly and not be an obstacle to you getting what you want. Is that something you can help me with?"
 
We have never had anyone tell us that keeping track of progress is a bad idea. There are five steps to using practice-based evidence to improve your effectiveness.
 

Step One: Introducing the ORS in the First Session

The ORS is administered prior to each meeting and the SRS toward the end. In the first meeting, the culture of feedback is continually reinforced. It is important to avoid technical jargon, and instead explain the purpose of the measures and their rationale in a natural commonsense way. Just make it part of a relaxed and ordinary way of having conversations and working. The specific words are not important—there is no protocol that must be followed. This is a clinical tool! Your interest in the client's desired outcome speaks volumes about your commitment to the client and the quality of service you provide.
 
"Remember our earlier conversation? During the course of our work together, I will be giving you two very short forms that ask how you think things are going and whether you think things are on track. To make the most of our time together and get the best outcome, it is important to make sure we are on the same page with one another about how you are doing, how we are doing, and where we are going. We will be using your answers to keep us on track. Will that be okay with you?"
 

Step Two: Incorporating the ORS in the First Session

The ORS pinpoints where the client is and allows a comparison for later sessions. Incorporating the ORS entails simply bringing the client's initial and subsequent results into the conversation for discussion, clarification, and problem solving. The client's initial score on the ORS is either above or below the clinical cutoff. You need only mention the client's score as it relates to the cutoff. Keep in mind that the use of the measures is 100-percent transparent. There is nothing the measures tell you that you cannot share with the client. It is the client's interpretation that ultimately counts.
 
"From your ORS it looks like you're experiencing some real problems." Or: "From your score, it looks like you're feeling okay." "What brings you here today?" Or: "Your total score is 15—that's pretty low. A score under 25 indicates people who are in enough distress to seek help. Things must be pretty tough for you. Does that fit your experience? What's going on?"
 
"The way this ORS works is that scores under 25 indicate that things are hard for you now or you are hurting enough to bring you to see me. Your score on the individual scale indicates that you are really having a hard time. Would you like to tell me about it?"
 
Or if the ORS is above 25: "Generally when people score above 25, it is an indication that things are going pretty well for them. Does that fit your experience? It would be really helpful for me to get an understanding of what it is that brought you here now."
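The arithmetic behind these openers is deliberately trivial. As a minimal sketch, assuming the standard ORS format of four 10-cm scales summed to a 0–40 total with the clinical cutoff of 25 mentioned above (the function name, scale labels, and feedback strings are illustrative, not part of the instrument):

```python
# Illustrative sketch of interpreting an ORS total against the clinical
# cutoff discussed above. Only the 0-40 total and the cutoff of 25 come
# from the text; names and wording here are hypothetical.

CLINICAL_CUTOFF = 25  # totals below this suggest distress typical of people seeking help

def interpret_ors(scale_scores):
    """scale_scores: the four 10-cm scales, each marked 0-10."""
    total = round(sum(scale_scores.values()), 1)
    lowest = min(scale_scores, key=scale_scores.get)  # often tied to the presenting concern
    if total < CLINICAL_CUTOFF:
        status = "below cutoff: distress consistent with seeking help"
    else:
        status = "above cutoff: things may be going fairly well"
    return total, lowest, status

total, lowest, status = interpret_ors(
    {"Individual": 2.1, "Interpersonal": 4.5, "Social": 3.0, "Overall": 5.4}
)
# total is 15.0 with the Individual scale lowest, matching the kind of
# opener quoted above ("Your total score is 15—that's pretty low.")
```

None of this replaces the conversation, of course; the point is only that the check a therapist does by eye is a one-line comparison against the cutoff.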
 
Because the ORS has face validity, clients usually mark lowest the scale that represents the reason they are seeking therapy, and often connect that reason to the mark they've made without prompting from the therapist. For example, Matt marked the Individual scale the lowest with the Social scale coming in a close second. As he was describing his problem in public restrooms, he pointed to the ORS and explained that this problem accounted for his mark. Other times, the therapist needs to clarify the connection between the client's descriptions of the reasons for services and the client's scores. The ORS makes no sense unless it is connected to the described experience of the client's life. This is a critical point because clinician and client must know what the mark on the line represents to the client and what will need to happen for the client to both realize a change and indicate that change on the ORS.
 
At some point in the meeting, the therapist needs only to pick up on the client's comments and connect them to the ORS:
 
"Oh, okay, it sounds like dealing with the loss of your brother (or relationship with wife, sister's drinking, or anxiety attacks, etc.) is an important part of what we are doing here. Does the distress from that situation account for your mark here on the individual (or other) scale on the ORS? Okay, so what do you think will need to happen for that mark to move just one centimeter to the right?"
 
The ORS, by design, is a general outcome instrument and provides no specific content other than the three domains. The ORS offers only a bare skeleton to which clients must add the flesh and blood of their experiences, into which they breathe life with their ideas and perceptions. At the moment in which clients connect the marks on the ORS with the situations that are distressing, the ORS becomes a meaningful measure of their progress and potent clinical tool.
 

Step Three: Introducing the SRS

The SRS, like the ORS, is best presented in a relaxed way that is integrated seamlessly into your typical way of working. The use of the SRS continues the culture of client privilege and feedback, and opens space for the client's voice about the alliance. The SRS is given toward the end of the meeting, with enough time left to discuss the client's responses.
 
"Let's take a minute and have you fill out the form that asks for your opinion about our work together. It's like taking the temperature of our relationship today. Are we too hot or too cold? Do I need to adjust the thermostat? This information helps me stay on track. The ultimate purpose of using these forms is to make every possible effort to make our work together beneficial. Is that okay with you?"
 

Step Four: Incorporating the SRS

Because the SRS is easy to score and interpret, you can do a quick visual check and integrate it into the conversation. If the SRS looks good (each scale marked above 9 cm), you need only comment on that fact and invite any other comments or suggestions. If the client marks any scale lower than 9 cm, you should definitely follow up. Clients tend to score all alliance measures highly, so the practitioner should address any hint of a problem. Anything less than a total score of 36 might signal a concern, and therefore it is prudent to invite clients to comment. Keep in mind that a high rating is a good thing, but it doesn't tell you very much. Always thank the client for the feedback and continue to encourage their open feedback. Remember that unless you convey you really want it, you are unlikely to get it.
 
And know for sure that there is no 'bad news' on these forms. Your appreciation of any negative feedback is a powerful alliance builder. In fact, alliances that start off poorly but improve in response to your flexibility toward client input tend to be very predictive of a positive outcome. When you are bad, you are even better! In general, a score:
  • that is poor and remains poor predicts a negative outcome,
  • that is good and remains good predicts a positive outcome,
  • that is poor or fair and improves predicts a positive outcome even more,
  • that is good and decreases is predictive of a negative outcome.
The SRS allows the opportunity to fix any alliance problems that are developing and shows that you do more than give lip service to honoring the client's perspectives.
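The visual check described above reduces to the two thresholds named in the text: 9 cm per scale and a total of 36. A hypothetical sketch, with the scale labels, dictionary structure, and function name as illustrative assumptions:

```python
# Hypothetical sketch of the quick SRS check described above. The 9-cm
# per-scale threshold and the 36-point total come from the text;
# everything else is illustrative.

SCALE_THRESHOLD = 9.0   # any scale marked below this deserves follow-up
TOTAL_THRESHOLD = 36.0  # totals below this might signal a concern

def srs_check(scale_scores):
    """scale_scores: the four SRS scales, each a 0-10 cm mark."""
    low_scales = [name for name, cm in scale_scores.items() if cm < SCALE_THRESHOLD]
    total = round(sum(scale_scores.values()), 1)
    return {
        "total": total,
        "low_scales": low_scales,
        "follow_up": bool(low_scales) or total < TOTAL_THRESHOLD,
    }

result = srs_check({"Relationship": 9.5, "Goals and Topics": 8.2,
                    "Approach or Method": 9.1, "Overall": 9.4})
# One scale falls below 9 cm, so this session warrants a follow-up
# conversation even though the total (36.2) clears the 36-point line.
```

The sketch encodes the point made above: a single low scale is worth discussing even when the overall total looks acceptable, because clients tend to rate alliance measures highly.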
 
"Let me just take a look at this SRS—it's like a thermometer that takes the temperature of our meeting here today. Great, looks like we are on the same page, that we are talking about what you think is important and you believe today's meeting was right for you. Please let me know if I get off track, because letting me know would be the biggest favor you could do for me."
 
"Let me quickly look at this other form here that lets me know how you think we are doing. Okay, seems like I am missing the boat here. Thanks very much for your honesty and giving me a chance to address what I can do differently. Was there something else I should have asked you about or should have done to make this meeting work better for you? What was missing here?"
 
Graceful acceptance of any problems and responding with flexibility usually turns things around. Again, clients reporting alliance problems that are addressed are far more likely to achieve a successful outcome—up to seven times more likely! Negative scores on the SRS, therefore, are good news and should be celebrated. Practitioners who elicit negative feedback tend to be those with the best effectiveness rates. Think about it—it makes sense that if clients are comfortable enough with you to express that something isn't right, then you are doing something very right in creating the conditions for therapeutic change.
 

Step Five: Checking for Change in Subsequent Sessions

With the feedback culture set, the business of practice-based evidence can begin, with the client's view of progress and fit really influencing what happens. Each subsequent meeting compares the current ORS with the previous one and looks for any changes. The ORS can be made available in the waiting room or via electronic software (ASIST) and web systems (MyOutcomes.com). Many clients will complete the ORS (some will even plot their scores on provided graphs) and greet the therapist already discussing the implications. Using a scale that is simple to score and interpret increases client engagement in the evaluation of the services. Anything that increases participation is likely to have a beneficial impact on outcome.
 
The therapist discusses if there is an improvement (an increase in score), a slide (a decrease in score), or no change at all. The scores are used to engage the client in a discussion about progress, and more importantly, what should be done differently if there isn't any.
 
"Your marks on the personal well-being and overall lines really moved—about 4 cm to the right each! Your total increased by 8 points to 29 points. That's quite a jump! What happened? How did you pull that off? Where do you think we should go from here?"
 
If no change has occurred, the scores invite an even more important conversation.
 
"Okay, so things haven't changed since the last time we talked. How do you make sense of that? Should we be doing something different here, or should we continue on course steady as we go? If we are going to stay on the same track, how long should we go before getting worried? When will we know when to say 'when?' "
 
The idea is to involve the client in monitoring progress and the decision about what to do next. The discussion prompted by the ORS is repeated in all meetings, but later ones gain increasing significance and warrant additional action. We call these later interactions either checkpoint conversations or last-chance discussions. In a typical outpatient setting, checkpoint conversations are conducted at the third meeting and last-chance discussions are initiated in the sixth session. Put simply, based on over 300,000 administrations of the measures, most clients who benefit from services show some benefit on the ORS by the third encounter; if change is not noted by meeting three, the client is at risk for a negative outcome. Ditto for session six, except that everything just mentioned carries an exclamation mark.

Different settings may have different checkpoint and last-chance numbers. Determining these highlighted points of conversation requires only that you collect the data. The calculations are simple, and directions can be found in our book, The Heroic Client. Establishing these two points helps you evaluate whether a client needs a referral or another change, based on the trajectory of a typical successful client in your specific setting. The same thing can be accomplished more precisely with available software or web-based systems that calculate the expected trajectory or pattern of change based on our database of ORS administrations. These programs compare a graph of the client's session-by-session ORS results to the expected amount of change for clients in the database with the same intake score, serving as a catalyst for conversation about the next step in therapy.
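The checkpoint and last-chance logic can be sketched in a few lines. The session numbers (3 and 6) come from the text; the reliable-change threshold of 5 ORS points is an illustrative assumption only, since the actual cutoffs should be derived from data in your own setting:

```python
# Hypothetical sketch of flagging checkpoint and last-chance sessions.
# CHANGE_THRESHOLD is an assumed figure for illustration; real cutoffs
# should come from your own outcome data or from the available software.

CHECKPOINT_SESSION = 3
LAST_CHANCE_SESSION = 6
CHANGE_THRESHOLD = 5  # ORS points of improvement counted as meaningful change

def progress_alert(ors_totals):
    """ors_totals: session-by-session ORS totals; the first entry is intake."""
    session = len(ors_totals)
    change = ors_totals[-1] - ors_totals[0]
    if change >= CHANGE_THRESHOLD:
        return "on track"
    if session >= LAST_CHANCE_SESSION:
        return "last-chance discussion"
    if session >= CHECKPOINT_SESSION:
        return "checkpoint conversation"
    return "continue monitoring"

progress_alert([15, 16, 15])              # "checkpoint conversation"
progress_alert([15, 16, 15, 14, 16, 15])  # "last-chance discussion"
progress_alert([15, 22, 25])              # "on track"
```

The software and web systems mentioned above do the more precise version of this, comparing each client's graph against the expected trajectory for the same intake score rather than against a single fixed threshold.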
 
If change has not occurred by the checkpoint conversation, the therapist responds by going through the SRS item by item. Alliance problems are a significant contributor to a lack of progress. Sometimes it is useful to say something like, "It doesn't seem like we are getting anywhere. Let me go over the items on this SRS to make sure you are getting exactly what you are looking for from me and our time together." Going through the SRS and eliciting client responses in detail can help the practitioner and client get a better sense of what may not be working. Sarah, the woman who aspired to be a Miami Heat cheerleader, exemplifies this process.
 
Next, a lack of progress at this stage may indicate that the therapist needs to try something different. This can take as many forms as there are clients: inviting others from the client's support system, using a team or another professional, trying a different approach, or referring to another therapist, religious advisor, or self-help group—whatever seems to be of value to the client. Any ideas that surface are then implemented, and progress is monitored via the ORS. Matt and the idea of encouraging his anger illustrate this kind of discussion.
 

The Importance of Referrals

If the therapist and client have implemented different possibilities and the client is still without benefit, it is time for the last-chance discussion. As the name implies, there is some urgency for something different because most clients who benefit have already achieved change by this point, and the client is at significant risk for a negative conclusion. A metaphor we like is that of the therapist and client driving into a vast desert and running on empty, when a sign appears on the road that says 'last chance for gas.' The metaphor depicts the necessity of stopping and discussing the implications of continuing without the client reaching a desired change.
 
This is the time for a frank discussion about referral and other available resources. If the therapist has created a feedback culture from the beginning, then this conversation will not be a surprise to the client. There is rarely justification for continuing work with clients who have not achieved change in a period typical for the majority of clients seen by a particular practitioner or setting.
 
Why? Because research shows no correlation between a poor outcome in one course of therapy and the likelihood of success in the next. Although we've found that talking about a lack of progress turns most cases around, we are not always able to find a helpful alternative.
 
“Where in the past we might have felt like failures when we weren't being effective with a client, we now view such times as opportunities to stop being an impediment to the client and their change process.” Now our work is successful when the client achieves change and when, in the absence of change, we get out of their way. We reiterate our commitment to help them achieve the outcome they desire, whether by us or by someone else. When we discuss the lack of progress with clients, we stress that failure says nothing about them personally or their potential for change. Some clients terminate and others ask for a referral to another therapist or treatment setting. If the client chooses, we will meet with her or him in a supportive fashion until other arrangements are made. Rarely do we continue with clients whose ORS scores show little or no improvement by the sixth or seventh visit.
 
Ending with clients who are not making progress does not mean that all therapy should be brief. On the contrary, our research and the “findings of virtually every study of change in therapy over the last 40 years provide substantial evidence that more therapy is better than less therapy for those clients who make progress early in treatment” and are interested in continuing. When little or no improvement is forthcoming, however, this same data indicates that therapy should, indeed, be as brief as possible. Over time, we have learned that explaining our way of working and our beliefs about therapy outcomes to clients avoids problems if therapy is unsuccessful and needs to be terminated.
 
Barry Duncan writes: But it can be hard to believe that stopping a great relationship is the right thing to do.
 
Alina sought services because she was devastated and felt like everything important to her had been savagely ripped apart—because it had. She had worked her whole life toward one goal: to earn a scholarship to a prestigious Ivy League university. She was captain of the volleyball team, commanded the first position on the debating team, and was valedictorian of her class. Alina was the pride of her Guatemalan community—proof positive of the possibilities her parents always envisioned in the land of opportunity. Alina was awarded a full ride in minority studies at Yale University. But this Hollywood-caliber story hit a glitch. During her first semester away from home and the insulated environment in which she excelled, Alina began hearing voices.
 
She told a therapist at the university counseling center and before she knew it she was whisked away to a psychiatric unit and given antipsychotic medications. Despondent about the implications of this turn of events, Alina threw herself down a stairwell, prompting her parents to bring her home. Alina returned home in utter confusion, still hearing voices, and with a belief that she was an unequivocal failure to herself, her family, and everyone else in her tightly knit community whose aspirations rode on her shoulders.
 
Serendipity landed Alina in my office. I was the twentieth therapist the family called and the first who agreed to see Alina without medication. Alina's parents were committed to honor her preference to not take medication. We were made for each other and hit it off famously. I loved this kid. I admired her intelligence and spunk in standing up to psychiatric discourse and the broken record of medication. I couldn't wait to be useful to Alina and get her back on track. When I administered the ORS, Alina scored a 4, the lowest score I'd ever had.
 
We discussed her total demoralization and how her episodes of hearing voices and confusion led to the events that took everything she had always dreamed of from her—the life she had worked so hard to prepare for. I did what I usually did that is helpful—I listened, I commiserated, I validated, and I worked hard to recruit Alina's resilience to begin anew. But nothing happened.
 
By session three, Alina remained unchanged in the face of my best efforts. Therapy was going nowhere and I knew it because the ORS makes it hard to ignore—that score of 4 was a rude reminder of just how badly things were going.
 
At the checkpoint session, I went over the SRS with her, and unlike many clients, Alina was specific about what was missing and revealed that she wanted me to be more active, so I was. She wanted ideas about what to do about the voices, so I provided them—thought-stopping, guided imagery, content analysis. But no change ensued, and she was increasingly at risk for a negative outcome. Alina told me she had read about hypnosis on the internet and thought that might help. Since I had been around in the '80s and couldn't escape that time without hypnosis training, I approached Alina from a couple of different hypnotic angles—offering both embedded suggestions as well as stories intended to build her immunity to the voices. She responded with deep trances and gave high ratings on the SRS. But the ORS remained a paltry 4.
 
At the last-chance conversation, I brought up the topic of referral but we settled instead on a consult from a team (led by Jacqueline Sparks). Alina, again, responded well, and seemed more engaged than I had noticed with me—she rated the session the highest possible on the SRS. The team addressed topics I hadn't, including differentiation from her family, as well as gender and ethnic issues. Alina and I pursued the ideas from the team for a couple more sessions. But her ORS score was still a 4.
 
Now what? We were in session nine, well beyond the point at which clients typically change in my practice. After collecting data for several years, I know that 75 percent of clients who benefit from their work with me show it by the third session; a full 98 percent of my clients who benefit do so by the sixth session. So was it right for me to continue with Alina? Was it even ethical?
 
Despite our mutual admiration society, it wasn't right to continue. A good relationship in the absence of benefit is a good definition of dependence. So I shared my concern that her dream would be in jeopardy if she continued seeing me. I emphasized that the lack of change had nothing to do with either of us, that we had both tried our best, and for whatever reason, it just wasn't the right mix for change. We discussed the possibility that Alina see someone else. If you were to watch the video, you would be struck, as many are, by the decided lack of fun Alina and I have during this discussion.
 
Finally, after what seemed like an eternity, including Alina's assertion that she wanted to keep seeing me, we started to talk about who she might see. She mentioned she liked someone from the team, and began seeing our colleague Jacqueline Sparks.
 
By session four, Alina had an ORS score of 19 and enrolled to take a class at a local university. Moreover, she continued those changes and re-enrolled at Yale the following year with her scholarship intact! When I wrote a required recommendation letter for the Dean, I administered the ORS to Alina and she scored a 29. By my getting out of her way and allowing her and myself to 'fail successfully,' Alina was given another opportunity to get her life back on track—and she did. Alina and Jacqueline, for reasons that escape us even after poring over the video, just had the right chemistry for change.
 
This was a watershed client for me. Although I believed in practice-based evidence, especially how it puts clients center stage and pushes me to do something different when clients don't benefit, I always struggled with those clients who did not benefit, but who wanted to continue with me nevertheless. This was more difficult when I really liked the client and had become personally invested in them benefiting. Alina awakened me to the pitfalls of such situations and showed a true value-added dimension to monitoring outcome—namely the ability to fail successfully with our clients. Alina was the kind of client I would have seen forever. I cared deeply about her and believed that surely I could figure out something eventually.
 
But such is the thinking that makes 'chronic' clients—an inattention to the iatrogenic effects of the continuation of therapy in the absence of benefit. Therapists, no matter how competent or trained or experienced, cannot be effective with everyone, and other relational fits may work out better for the client. Although some clients want to continue in the absence of change, far more do not want to continue when given a graceful way to exit. The ORS allows us to ask ourselves the hard questions when clients are not, by their own ratings, seeing benefit from services. The benefits of increased effectiveness of my work, and feeling better about the clients that I am not helping, have allowed me to leave any squeamishness about forms far behind.
 
Practice-based evidence will not help you with the clients you are already effective with; rather, it will help you with those who are not benefiting by enabling an open discussion of other options and, in the absence of change, the ability to honorably end and move the client on to a more productive relationship. The basic principle behind this way of working is that our day-to-day clinical actions are guided by reliable, valid feedback about the factors that account for how people change in therapy. These factors are the client's engagement and view of the therapeutic relationship, and—the gold standard—the client's report of whether change occurs. Monitoring the outcome and the fit of our services helps us know that when we are good, we are very good, and when we are bad, we can be even better.