"When I'm good, I'm very good, but when I'm bad I'm better": A New Mantra for Psychotherapists

"When I'm good, I'm very good, but when I'm bad I'm better": A New Mantra for Psychotherapists

by Barry Duncan, PhD and Scott Miller, PhD
Barry Duncan and Scott Miller provide a comprehensive summary of the Outcome-Informed, Client-Directed approach and a detailed, practical overview of its application in clinical practice.

Current estimates suggest that nearly 50 percent of therapy clients drop out, and somewhere between one third and two thirds do not benefit from our usual strategies. Through case examples, Duncan and Miller demonstrate how most practitioners can increase their therapeutic effectiveness substantially by accurately identifying those clients who are not responding, and by addressing the lack of change in a way that keeps clients engaged in treatment and forges new directions.

 

Introduction

At first blush, Mae West's famous words 'When I'm good, I'm very good, but when I'm bad I'm better' hardly seem like a guide for therapists to live by—but, as it turns out, they could be. Research demonstrates consistently that who the therapist is accounts for far more of the variance of change (6 to 9 percent) than the model or technique administered (1 percent). In fact, therapist effectiveness ranges from a paltry 20 percent to an impressive 70 percent. A small group of clinicians—sometimes called 'supershrinks'—obtain demonstrably superior outcomes in most of their cases, while others fall predictably on the less-exalted sections of the bell-shaped curve. However, most practitioners can join the ranks of supershrinks, or at least increase their therapeutic effectiveness substantially.
 
Consider Matt, a twenty-something software whiz who was on the road frequently to trouble-shoot customer problems. Matt loved his job but travelling was an ordeal—not because of flying but because of another, far more embarrassing problem. Matt was long past feeling frustrated about standing and standing in public restrooms trying to 'go.' What started as a mild discomfort and inconvenience easily solved by repeated restroom visits had progressed to full-blown anxiety attacks, an excruciating pressure, and an intense dread before each trip. Feeling hopeless and demoralized, Matt considered changing jobs but as a last resort decided instead to see a therapist.
 
Matt liked the therapist and it felt good finally to tell someone about the problem. The therapist worked with Matt to implement relaxation and self-talk strategies. Matt practiced in session and tried to use the ideas on his next trip, but still no 'go.' The problem continued to get worse. Now three sessions in, Matt was at significant risk for a negative outcome—either dropping out or continuing in therapy without benefit.
 
We have all encountered clients unmoved by treatment. Therapists often blame themselves. The overwhelming majority of psychotherapists, as cliched as it sounds, want to be helpful. Many of us answered "I want to help people" on graduate school applications as the reason we chose to be therapists. Often, some well-meaning person dissuaded us from that answer because it didn't sound sophisticated or appeared too 'co-dependent.' Such aspirations, we now believe, are not only noble but can provide just what is needed to improve clinical effectiveness. After all, there is not much financial incentive for doing better therapy—we don't do this work because we thought we would acquire the lifestyles of the rich and famous.
 
Unfortunately, the altruistic desire to be helpful sometimes leads us to believe that if we were just smart enough or trained correctly, clients would not remain inured to our best efforts—if we found the Holy Grail, that special model or technique, we could once and for all defeat the psychic dragons that terrorize clients.
Amid explanations and remedies aplenty, therapists search courageously for designer explanations and brand-name miracles, but continue to observe that clients drop out, or even worse, continue without benefit. Current estimates suggest that nearly 50 percent of our clients drop out and at least one third, and up to two thirds, do not benefit from our usual strategies.
 
So what can we do to channel our healthy desire to be helpful? If we listen to the lessons of the top performers, the first thing we should do is step outside of our comfort zones and push the limits of our current performance—to identify accurately those clients not responding to our therapeutic business as usual, and address the lack of change in a way that keeps clients engaged in treatment and forges new directions.
 
To recapture those clients who slip through the cracks, we need to embrace what is known about change. First, many studies reveal that the majority of clients experience change in the first six visits; clients reporting little or no change early on tend to show no improvement over the entire course of therapy, or wind up dropping out. Early change, in other words, predicts engagement in therapy and ongoing benefit. This doesn't mean that a client is 'cured' or the problem is totally resolved, but rather that the client has a subjective sense that things are getting better. Second, a mountain of studies has long demonstrated another robust predictor, that reliable, tried-and-true but taken-for-granted old friend: the therapeutic alliance. Clients who rate the relationship with their therapist highly tend to be the clients who stick around in therapy and benefit from it.
 
Next we need to measure those known predictors in a systematic way with reliable and valid instruments. So instead of regarding the first few therapy sessions as a 'warm-up' period or a chance to try out the latest technique, we engage the client in helping us judge whether therapy is providing benefit. Obtaining feedback on standardized measures about success or failure during those initial meetings provides invaluable information about the match between ourselves, our approach, and the client—enabling us to know when we are bad, so we can be even better. The only way we can improve our outcomes is to know, very early on, when the client is not benefiting—we need something akin to an early warning signal.
 
Using standardized measures to monitor outcome may make your skin crawl and bring to mind torture devices like the Rorschach or MMPI. But the forms for these measures are not used to pass judgment, diagnose or unravel the mysteries of the human psyche. Rather, these measures invite clients into the inner circle of mental health and substance abuse services—they involve clients collaboratively in monitoring progress toward their goals and the fit of the services they are receiving, and amplify their voices in any decisions about their care.

The Outcome Rating Scale (ORS)

You might also think that the last thing you need is to add more paperwork to your practice. But finding out who is and isn't responding to therapy need not be cumbersome. In fact, it only takes a minute. Dissatisfied with the complexity, length, and user-unfriendliness of existing outcome measures, we developed the Outcome Rating Scale (ORS) as a brief clinical alternative. The ORS (child measures also available) and all the measures discussed here are available for free download at talkingcure.com. The ORS assesses three dimensions:
  1. Personal or symptomatic distress (measuring individual well-being)
  2. Interpersonal well-being (measuring how well the client is getting along in intimate relationships)
  3. Social role (measuring satisfaction with work/school and relationships outside of the home)
Changes in these three areas are widely considered to be valid indicators of successful outcome. The ORS simply translates these three areas and an overall rating into a visual analog format of four 10-cm lines, with instructions to place a mark on each line, with low estimates to the left and high to the right. The four 10-cm lines add up to a total score of 40. The score is simply the summation of the marks made by the client, measured to the nearest millimeter on each of the four lines with a centimeter ruler or available template. A score of 25, the clinical cutoff, differentiates those who are experiencing enough distress to be in a helping relationship from those who are not. Because of its simplicity, ORS feedback is available immediately for use at the time the service is delivered. Rated at an eighth-grade reading level, the ORS is easily understood, and clients have little difficulty connecting it to their day-to-day lived experience.
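The scoring arithmetic is simple enough to express in a few lines of code. Below is a minimal sketch in Python, with illustrative function names that are not part of any published ORS software, showing one way to sum the four marks and check them against the clinical cutoff described above.

# Minimal sketch of ORS scoring as described above; names are illustrative only.
# Each of the four scales is a mark on a 10-cm line, read to the nearest millimeter.

ORS_CLINICAL_CUTOFF = 25.0  # totals below 25 suggest enough distress to be in a helping relationship

def score_ors(marks_cm):
    """Sum four visual-analog marks (each 0-10 cm) into a 0-40 total."""
    if len(marks_cm) != 4:
        raise ValueError("The ORS has exactly four scales.")
    for mark in marks_cm:
        if not 0.0 <= mark <= 10.0:
            raise ValueError("Each mark must fall on its 10-cm line (0-10 cm).")
    return round(sum(marks_cm), 1)

def above_clinical_cutoff(total):
    """True if the total falls at or above the cutoff of 25."""
    return total >= ORS_CLINICAL_CUTOFF

# Example: Matt entered therapy with a total of about 18, well below the cutoff
# (the individual marks used here are hypothetical).
intake = score_ors([3.2, 5.6, 4.1, 5.1])
print(intake, above_clinical_cutoff(intake))  # 18.0 False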
 
Matt completed the ORS before each session. He entered therapy with a score of 18, about average for those attending outpatient settings, but continued to hover at that score. At the third session, when the ORS reflected no change, it was not front-page news to Matt. But a different process ensued. In the same spirit of collaboration as the assessment process, Matt and his therapist brainstormed ideas, a free-for-all of unedited speculations and suggestions of alternatives, from changing nothing about the therapy, to taking medication, to shifting treatment approaches. During this open exchange Matt intimated that he was beginning to feel angry about the whole thing—real angry. The therapist noticed that when Matt worked himself up to a good anger about how his problem interfered with his work and added a huge hassle to any extended situation away from his own bathroom, he became quite animated, a stark contrast to the passively resigned person who had characterized their previous sessions. One of them, which one remains a mystery, mentioned the words 'pissed off' and both broke into raucous laughter. Subsequently, the therapist suggested that instead of responding with hopelessness when the problem occurred, Matt work himself up to a good anger about how this problem made his life miserable. Matt, a rock-and-roll buff, added that he could also sing the Tom Petty song "I Won't Back Down" during his tirade at the toilet. Matt allowed himself, when standing in front of the urinal, to become incensed—downright 'pissed off,' and amused. And he started to go.
 
This process, the delightful creative energy that emerges from the wonderful interpersonal event we call therapy, could have happened to any therapist working with Matt. The difference is that the use of the outcome measure spotlighted the lack of change and made it impossible to ignore. The ORS brought the risk of a negative outcome front and center and allowed the therapist to enact the second characteristic of supershrinks, to be exceptionally alert to the risk of dropout and treatment failure. In the past, we might have continued with the same treatment for several more sessions, unaware of its ineffectiveness or believing (hoping, even praying) that our usual strategies would eventually take hold, but the reliable outcome data pushed us to explore different treatment options by the end of the third visit.
 
Pushing the limits of one's performance requires monitoring the fit of your service with the client's expectations about the alliance. The ongoing assessment of the alliance enables therapists to identify and correct areas of weakness in the delivery of services before they exert a negative effect on outcome.
 

The Session Rating Scale (SRS)

Research shows repeatedly that clients' ratings of the alliance are far more predictive of improvement than the type of intervention or the therapist's ratings of the alliance. Recognizing these much-replicated findings, we developed the Session Rating Scale (SRS) as a brief clinical alternative to longer research-based alliance measures to encourage routine conversations with clients about the alliance. The SRS also contains four items. First, a relationship scale rates the meeting on a continuum from "I did not feel heard, understood, and respected" to "I felt heard, understood, and respected." Second is a goals and topics scale that rates the conversation on a continuum from "We did not work on or talk about what I wanted to work on or talk about" to "We worked on or talked about what I wanted to work on or talk about." Third is an approach or method scale (an indication of a match with the client's theory of change) requiring the client to rate the meeting on a continuum from "The approach is not a good fit for me" to "The approach is a good fit for me." Finally, the fourth scale looks at how the client perceives the encounter in total along the continuum: "There was something missing in the session today" to "Overall, today's session was right for me."
 
The SRS simply translates what is known about the alliance into four visual analog scales, with instructions to place a mark on each line, with negative responses depicted on the left and positive responses on the right. The SRS allows alliance feedback in real time so that problems may be addressed. Like the ORS, the instrument takes less than a minute to administer and score. The SRS is scored in the same way as the ORS, by adding the client's marks on the four 10-cm lines. The total score falls into three categories (a simple scoring sketch follows the list):
  • A score of 0–34 reflects a poor alliance,
  • a score of 35–38 reflects a fair alliance,
  • a score of 39–40 reflects a good alliance.
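As with the ORS, the arithmetic is easy to make explicit. Here is a minimal sketch in Python with illustrative names only; how fractional totals that fall between the published integer ranges are banded is an assumption of the sketch, not something the measure specifies.

def score_srs(marks_cm):
    """Sum the four SRS visual-analog marks (each 0-10 cm) into a 0-40 total."""
    assert len(marks_cm) == 4, "The SRS has exactly four scales."
    return round(sum(marks_cm), 1)

def alliance_band(total):
    """Map an SRS total onto the bands above (0-34 poor, 35-38 fair, 39-40 good).
    Fractional totals between bands are assigned to the lower band here; that handling is an assumption."""
    if total < 35:
        return "poor"
    if total < 39:
        return "fair"
    return "good"

# Example: a client who marks one scale noticeably lower than the rest
# (much like Sarah's 8.7 on the goals scale, described below) lands in the 'fair' band.
print(alliance_band(score_srs([9.5, 8.7, 9.6, 9.4])))  # fair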
The SRS allows the implementation of the final lesson of the supershrinks—seek, obtain, and maintain more consumer engagement. Clients drop out of therapy for two reasons: one is that therapy is not helping (hence monitoring outcome) and the other is alliance problems—they are not engaged or turned on by the process. The most direct way to improve your effectiveness is simply to keep people engaged in therapy.
 
A frequent alliance problem emerges when clients' goals do not fit our own sensibilities about what they need. This may be particularly true if clients carry certain diagnoses or problem scenarios. Consider 19-year-old Sarah, who lived in a group home and received social security disability for mental illness. Sarah was referred for counseling because others were concerned that she was socially withdrawn. Everyone was also worried about Sarah's health because she was overweight and spent much of her time watching TV and eating snack foods.
 
In therapy Sarah agreed that she was lonely, but expressed a desire to be a Miami Heat cheerleader. Perhaps understandably, that goal was not taken seriously. After all, Sarah had never been a cheerleader, was 'schizophrenic,' and was not exactly in the best of shape. So no one listened, or even knew why Sarah had such an interesting goal. And the work with Sarah floundered. She spoke rarely and gave minimal answers to questions. In short, Sarah was not engaged and was at risk for dropout or a negative outcome.
 
The therapist routinely gave Sarah the SRS and she had reported that everything was going swimmingly, although the goals scale was an 8.7 out of 10, instead of a 9 or above out of 10 like the rest.
 
Sometimes it takes a bit more work to create the conditions that allow clients to be forthright with us, to develop a culture of feedback in the room. The power disparity combined with any socioeconomic, ethnic, or racial differences make it difficult to tell authority figures that they are on the wrong track. Think about the last time you told your doctor that he or she was not performing well. Clients, however, will let us know subtly on alliance measures far before they will confront us directly.
 
At the end of the third session, the therapist and Sarah reviewed her responses on the SRS. Did she truly feel understood? Was the therapy focused on her goals? Did the approach make sense to her? Such reviews are helpful in fine-tuning the therapy or addressing problems in the therapeutic relationship that have been missed or gone unreported. Sarah, when asked the question about goals, all the while avoiding eye contact and nearly whispering, repeated her desire to be a Miami Heat cheerleader.
 
The therapist looked at the SRS and the lights came on. The slight difference on the goals scale told the tale. When the therapist finally asked Sarah about her goal, she told the story of growing up watching Miami Heat basketball with her dad who delighted in Sarah's performance of the cheers. Sarah sparkled when she talked of her father, who passed away several years previously, and the therapist noted that it was the most he had ever heard her speak. He took this experience to heart and often asked Sarah about her father. The therapist also put the brakes on his efforts to get Sarah to socialize or exercise (his goals), and instead leaned more toward Sarah's interest in cheerleading. Sarah watched cheerleading contests regularly on ESPN and enjoyed sharing her expertise. She also knew a lot about basketball.
 
Sarah's SRS score improved on the goal scale and her ORS score increased dramatically. After a while, Sarah organized a cheerleading squad for her agency's basketball team who played local civic organizations to raise money for the group home. Sarah's involvement with the team ultimately addressed the referral concerns about her social withdrawal and lack of activity. The SRS helps us take clients and their engagement more seriously, like the supershrinks do. Walking the path cut by client goals often reveals alternative routes that would have never been discovered otherwise.
 
Providing feedback to clinicians on the clients' experience of the alliance and progress has been shown to result in significant improvements in both client retention and outcome.
We found that clients of therapists who opted out of completing the SRS were twice as likely to drop out and three times more likely to have a negative outcome. In the same study of over 6,000 clients, effectiveness rates doubled. As incredible as the results appear, they are consistent with findings from other researchers.
 
In a 2003 meta-analysis of three studies, Michael Lambert, a pioneer of using client feedback, reported that those helping relationships at risk for a negative outcome which received formal feedback were, at the conclusion of therapy, better off than 65 percent of those without information regarding progress. Think about this for a minute. Even if you are one of the most effective therapists, for every cycle of 10 clients you see, three will go home without benefit. Over the course of a year, for a therapist with a full caseload, this amounts to a lot of unhappy clients. This research shows that you can recover a substantial portion of those who don't benefit by first identifying who they are, keeping them engaged, and tailoring your services accordingly.
 

The Nuts and Bolts

Collecting data on standardized measures and using what we call 'practice-based evidence' can improve your effectiveness substantially. "Wait a minute," you say, "this sounds a lot like research!" Given the legendary schism between research and practice, sometimes getting therapists to use the measures is indeed a tall order, because it does sound a lot like the 'R' word.
 
A story illustrates the sentiments that many practitioners feel about research. Two researchers were attending an annual conference. Although enjoying the proceedings, they decided to find some diversion to combat the tedium of sitting all day and absorbing vast amounts of information. They settled on a hot air balloon ride and were quite enjoying themselves until a mysterious fog rolled in. Hopelessly lost, they drifted for hours until a clearing in the fog appeared finally and they saw a man standing in an open field. Joyfully, they yelled down at the man, "Where are we?" The man looked at them, and then down at the ground, before turning a full 360 degrees to survey his surroundings. Finally, after scratching his beard and what seemed to be several moments of facial contortions reflecting deep concentration, the man looked up and said, "You are above my farm."
 
The first researcher looked at the second researcher and said, "That man is a researcher—he is a scientist!" To which the second researcher replied, "Are you crazy, man? He is a simple farmer!" "No," answered the first researcher emphatically, "that man is a researcher and there are three facts that support my assertion: First, what he said was absolutely 100% accurate; second, he addressed our question systematically through an examination of all of the empirical evidence at his disposal, and then deliberated carefully on the data before delivering his conclusion; and finally, the third reason I know he is a researcher is that what he told us is absolutely useless to our predicament."
 
But unlike much of what is passed off as research, the systematic collection of outcome data in your practice is not worthless to your predicament. It allows you the luxury of being useful to clients who would otherwise not be helped. And it helps you get out of the way of those clients you are not helping, and connect them to more likely opportunities for change.
 
First, collaboration with clients to monitor outcome and fit actually starts before formal therapy. This means that they are informed when scheduling the first contact about the nature of the partnership and the creation of a 'culture of feedback' in which their voice is essential.
 
"I want to help you reach your goals. I have found it important to monitor progress from meeting to meeting using two very short forms. Your ongoing feedback will tell us if we are on track, or need to change something about our approach, or include other resources or referrals to help you get what you want. I want to know this sooner rather than later, because if I am not the person for you, I want to move you on quickly and not be an obstacle to you getting what you want. Is that something you can help me with?"
 
We have never had anyone tell us that keeping track of progress is a bad idea. There are five steps to using practice-based evidence to improve your effectiveness.
 

Step One: Introducing the ORS in the First Session

The ORS is administered prior to each meeting and the SRS toward the end. In the first meeting, the culture of feedback is continually reinforced. It is important to avoid technical jargon, and instead explain the purpose of the measures and their rationale in a natural commonsense way. Just make it part of a relaxed and ordinary way of having conversations and working. The specific words are not important—there is no protocol that must be followed. This is a clinical tool! Your interest in the client's desired outcome speaks volumes about your commitment to the client and the quality of service you provide.
 
"Remember our earlier conversation? During the course of our work together, I will be giving you two very short forms that ask how you think things are going and whether you think things are on track. To make the most of our time together and get the best outcome, it is important to make sure we are on the same page with one another about how you are doing, how we are doing, and where we are going. We will be using your answers to keep us on track. Will that be okay with you?"
 

Step Two: Incorporating the ORS in the First Session

The ORS pinpoints where the client is and allows a comparison in later sessions. Incorporating the ORS entails simply bringing the client's initial and subsequent results into the conversation for discussion, clarification, and problem solving. The client's initial score on the ORS is either above or below the clinical cutoff. You need only mention the client's score as it relates to the cutoff. Keep in mind that the use of the measures is 100-percent transparent. There is nothing they tell you that you cannot share with the client. It is the client's interpretation that ultimately counts.
 
"From your ORS it looks like you're experiencing some real problems." Or: "From your score, it looks like you're feeling okay." "What brings you here today?" Or: "Your total score is 15—that's pretty low. A score under 25 indicates people who are in enough distress to seek help. Things must be pretty tough for you. Does that fit your experience? What's going on?"
 
"The way this ORS works is that scores under 25 indicate that things are hard for you now or you are hurting enough to bring you to see me. Your score on the individual scale indicates that you are really having a hard time. Would you like to tell me about it?"
 
Or if the ORS is above 25: "Generally when people score above 25, it is an indication that things are going pretty well for them. Does that fit your experience? It would be really helpful for me to get an understanding of what it is that brought you here now."
 
Because the ORS has face validity, clients usually mark lowest the scale that represents the reason they are seeking therapy, and often connect that reason to the mark they've made without prompting from the therapist. For example, Matt marked the Individual scale the lowest, with the Social scale coming in a close second. As he was describing his problem in public restrooms, he pointed to the ORS and explained that this problem accounted for his mark. Other times, the therapist needs to clarify the connection between the client's descriptions of the reasons for services and the client's scores. The ORS makes no sense unless it is connected to the described experience of the client's life. This is a critical point, because clinician and client must know what the mark on the line represents to the client, and what will need to happen for the client to both realize a change and indicate that change on the ORS.
 
At some point in the meeting, the therapist needs only to pick up on the client's comments and connect them to the ORS:
 
"Oh, okay, it sounds like dealing with the loss of your brother (or relationship with wife, sister's drinking, or anxiety attacks, etc.) is an important part of what we are doing here. Does the distress from that situation account for your mark here on the individual (or other) scale on the ORS? Okay, so what do you think will need to happen for that mark to move just one centimeter to the right?"
 
The ORS, by design, is a general outcome instrument and provides no specific content other than the three domains. The ORS offers only a bare skeleton to which clients must add the flesh and blood of their experiences, into which they breathe life with their ideas and perceptions. At the moment in which clients connect the marks on the ORS with the situations that are distressing, the ORS becomes a meaningful measure of their progress and potent clinical tool.
 

Step Three: Introducing the SRS

The SRS, like the ORS, is best presented in a relaxed way that is integrated seamlessly into your typical way of working. The use of the SRS continues the culture of client privilege and feedback, and opens space for the client's voice about the alliance. The SRS is given toward the end of the meeting, leaving enough time to discuss the client's responses.
 
"Let's take a minute and have you fill out the form that asks for your opinion about our work together. It's like taking the temperature of our relationship today. Are we too hot or too cold? Do I need to adjust the thermostat? This information helps me stay on track. The ultimate purpose of using these forms is to make every possible effort to make our work together beneficial. Is that okay with you?"
 

Step Four: Incorporating the SRS

Because the SRS is easy to score and interpret, you can do a quick visual check and integrate it into the conversation. If the SRS looks good (more than 9 cm on every scale), you need only comment on that fact and invite any other comments or suggestions. If the client marks any scale lower than 9 cm, you should definitely follow up. Clients tend to score all alliance measures highly, so the practitioner should address any hint of a problem. Anything less than a total score of 36 might signal a concern, and therefore it is prudent to invite clients to comment. Keep in mind that a high rating is a good thing, but it doesn't tell you very much. Always thank the client for the feedback and continue to encourage their open feedback. Remember that unless you convey that you really want it, you are unlikely to get it.
 
And know for sure that there is no 'bad news' on these forms. Your appreciation of any negative feedback is a powerful alliance builder. In fact, alliances that start off negatively but improve in response to your flexibility with client input tend to be very predictive of a positive outcome. When you are bad, you are even better! In general, a score:
  • that is poor and remains poor predicts a negative outcome,
  • that is good and remains good predicts a positive outcome,
  • that is poor or fair and improves predicts a positive outcome even more,
  • that is good and decreases is predictive of a negative outcome.
The SRS allows the opportunity to fix any alliance problems that are developing and shows that you do more than give lip service to honoring the client's perspectives.
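These follow-up thresholds and trajectory rules are simple enough to sketch in code. The snippet below is a rough Python illustration with hypothetical names: it flags an SRS that warrants a conversation and gives a crude reading of the trajectory rules listed above.

def srs_follow_up_flags(marks_cm):
    """Flag the follow-up cues described above: any scale under 9 cm, or a total under 36."""
    total = sum(marks_cm)
    return {
        "scales_under_9cm": [i for i, mark in enumerate(marks_cm) if mark < 9.0],
        "total_under_36": total < 36.0,
    }

def band(total):
    # Bands from the text: 0-34 poor, 35-38 fair, 39-40 good
    # (fractional in-between totals fall to the lower band here; an assumption of the sketch).
    if total < 35:
        return "poor"
    return "fair" if total < 39 else "good"

def read_trajectory(session_totals):
    """Crude, illustrative reading of the four trajectory rules quoted above."""
    first, last = band(session_totals[0]), band(session_totals[-1])
    if first == "good" and last == "good":
        return "good and remains good: predicts a positive outcome"
    if first == "good":
        return "good but decreasing: predictive of a negative outcome"
    if last == "good":
        return "poor or fair and improving: predicts a positive outcome even more"
    if first == "poor" and last == "poor":
        return "poor and remains poor: predicts a negative outcome"
    return "not clearly improving: worth a direct conversation about the alliance"

print(srs_follow_up_flags([9.5, 8.7, 9.6, 9.4]))  # {'scales_under_9cm': [1], 'total_under_36': False}
print(read_trajectory([33.0, 36.5, 39.5]))         # poor or fair and improving: predicts a positive outcome even more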
 
"Let me just take a look at this SRS—it's like a thermometer that takes the temperature of our meeting here today. Great, looks like we are on the same page, that we are talking about what you think is important and you believe today's meeting was right for you. Please let me know if I get off track, because letting me know would be the biggest favor you could do for me."
 
"Let me quickly look at this other form here that lets me know how you think we are doing. Okay, seems like I am missing the boat here. Thanks very much for your honesty and giving me a chance to address what I can do differently. Was there something else I should have asked you about or should have done to make this meeting work better for you? What was missing here?"
 
Gracefully accepting any problems and responding with flexibility usually turns things around. Again, clients whose reported alliance problems are addressed are far more likely (up to seven times more likely!) to achieve a successful outcome. Negative scores on the SRS, therefore, are good news and should be celebrated. Practitioners who elicit negative feedback tend to be those with the best effectiveness rates. Think about it: it makes sense that if clients are comfortable enough with you to express that something isn't right, then you are doing something very right in creating the conditions for therapeutic change.
 

Step Five: Checking for Change in Subsequent Sessions

With the feedback culture set, the business of practice-based evidence can begin, with the client's view of progress and fit really influencing what happens. Each subsequent meeting compares the current ORS with the previous one and looks for any changes. The ORS can be made available in the waiting room or via electronic software (ASIST) and web systems (MyOutcomes.com). Many clients will complete the ORS (some will even plot their scores on provided graphs) and greet the therapist already discussing the implications. Using a scale that is simple to score and interpret increases client engagement in the evaluation of the services. Anything that increases participation is likely to have a beneficial impact on outcome.
 
The therapist discusses if there is an improvement (an increase in score), a slide (a decrease in score), or no change at all. The scores are used to engage the client in a discussion about progress, and more importantly, what should be done differently if there isn't any.
 
"Your marks on the personal well-being and overall lines really moved—about 4 cm to the right each! Your total increased by 8 points to 29 points. That's quite a jump! What happened? How did you pull that off? Where do you think we should go from here?"
 
If no change has occurred, the scores invite an even more important conversation.
 
"Okay, so things haven't changed since the last time we talked. How do you make sense of that? Should we be doing something different here, or should we continue on course steady as we go? If we are going to stay on the same track, how long should we go before getting worried? When will we know when to say 'when?' "
 
The idea is to involve the client in monitoring progress and in the decision about what to do next. The discussion prompted by the ORS is repeated in all meetings, but later ones gain increasing significance and warrant additional action. We call these later interactions either checkpoint conversations or last-chance discussions. In a typical outpatient setting, checkpoint conversations are usually conducted at the third meeting and last-chance discussions are initiated at the sixth session. This simply reflects that, based on over 300,000 administrations of the measures, most clients who ultimately benefit from services show some of that benefit on the ORS by the third encounter; if change is not noted by meeting three, the client is at risk for a negative outcome. Ditto for session six, except that everything just mentioned has an exclamation mark. Different settings could have different checkpoint and last-chance numbers. Determining these highlighted points of conversation requires only that you collect the data. The calculations are simple and directions can be found in our book, The Heroic Client. Establishing these two points helps evaluate whether a client needs a referral or other change, based on a typical successful client in your specific setting. The same thing can be accomplished more precisely by available software or web-based systems that calculate the expected trajectory or pattern of change based on our database of ORS administrations. These programs compare a graph of the client's session-by-session ORS results to the expected amount of change for clients in the database with the same intake score, serving as a catalyst for conversation about the next step in therapy.
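For readers who want the session-by-session logic laid out explicitly, here is a minimal sketch in Python. It assumes a checkpoint at session three and a last-chance point at session six, as in the typical outpatient setting described above; the five-point threshold standing in for 'meaningful change' is purely an assumption of the sketch, since your own cutoffs should come from your own data or from the available software and web-based systems.

CHECKPOINT_SESSION = 3    # typical outpatient checkpoint described in the text
LAST_CHANCE_SESSION = 6   # typical outpatient last-chance point described in the text
CHANGE_THRESHOLD = 5.0    # stand-in for 'meaningful change'; an assumption, not a figure from the text

def session_signal(ors_totals):
    """Compare the latest ORS total with intake and name the conversation suggested in the text."""
    session = len(ors_totals)
    changed = (ors_totals[-1] - ors_totals[0]) >= CHANGE_THRESHOLD
    if changed:
        return "on track: discuss what worked and where to go from here"
    if session >= LAST_CHANCE_SESSION:
        return "last-chance discussion: talk frankly about referral and other resources"
    if session >= CHECKPOINT_SESSION:
        return "checkpoint conversation: go through the SRS item by item and brainstorm alternatives"
    return "keep monitoring: too early to tell"

# Alina hovered at 4 on the ORS session after session:
print(session_signal([4, 4, 4]))              # checkpoint conversation
print(session_signal([4, 4, 4, 4, 4, 4]))     # last-chance discussion
# A client whose score climbs early on is on track (scores hypothetical):
print(session_signal([18, 21, 26]))           # on track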
 
If change has not occurred by the checkpoint conversation, the therapist responds by going through the SRS item by item. Alliance problems are a significant contributor to a lack of progress. Sometimes it is useful to say something like, "It doesn't seem like we are getting anywhere. Let me go over the items on this SRS to make sure you are getting exactly what you are looking for from me and our time together." Going through the SRS and eliciting client responses in detail can help the practitioner and client get a better sense of what may not be working. Sarah, the woman who aspired to be a Miami Heat cheerleader, exemplifies this process.
 
Next, a lack of progress at this stage may indicate that the therapist needs to try something different. This can take as many forms as there are clients: inviting others from the client's support system, using a team or another professional, trying a different approach, or referring the client to another therapist, religious advisor, or self-help group, whatever seems to be of value to the client. Any ideas that surface are then implemented, and progress is monitored via the ORS. Matt and the idea of encouraging his anger illustrate this kind of discussion.
 

The Importance of Referrals

If the therapist and client have implemented different possibilities and the client is still without benefit, it is time for the last-chance discussion. As the name implies, there is some urgency for something different because most clients who benefit have already achieved change by this point, and the client is at significant risk for a negative conclusion. A metaphor we like is that of the therapist and client driving into a vast desert and running on empty, when a sign appears on the road that says 'last chance for gas.' The metaphor depicts the necessity of stopping and discussing the implications of continuing without the client reaching a desired change.
 
This is the time for a frank discussion about referral and other available resources. If the therapist has created a feedback culture from the beginning, then this conversation will not be a surprise to the client. There is rarely justification for continuing work with clients who have not achieved change in a period typical for the majority of clients seen by a particular practitioner or setting.
 
Why? Because research shows no correlation between a poor outcome in one therapy and the likelihood of success in the next. Although we've found that talking about a lack of progress turns most cases around, we are not always able to find a helpful alternative.
 
Where in the past we might have felt like failures when we weren't being effective with a client, we now view such times as opportunities to stop being an impediment to the client and their change process. Now our work is successful when the client achieves change and when, in the absence of change, we get out of their way. We reiterate our commitment to help them achieve the outcome they desire, whether by us or by someone else. When we discuss the lack of progress with clients, we stress that failure says nothing about them personally or their potential for change. Some clients terminate and others ask for a referral to another therapist or treatment setting. If the client chooses, we will meet with her or him in a supportive fashion until other arrangements are made. Rarely do we continue with clients whose ORS scores show little or no improvement by the sixth or seventh visit.
 
Ending with clients who are not making progress does not mean that all therapy should be brief. On the contrary, our research and the findings of virtually every study of change in therapy over the last 40 years provide substantial evidence that more therapy is better than less for those clients who make progress early in treatment and are interested in continuing. When little or no improvement is forthcoming, however, the same data indicate that therapy should, indeed, be as brief as possible. Over time, we have learned that explaining our way of working and our beliefs about therapy outcomes to clients avoids problems if therapy is unsuccessful and needs to be terminated.
 
Barry Duncan writes: But it can be hard to believe that stopping a great relationship is the right thing to do.
 
Alina sought services because she was devastated and felt like everything important to her had been savagely ripped apart—because it had. She had worked her whole life for but one goal: to earn a scholarship to a prestigious Ivy League university. She was captain of the volleyball team, commanded the first position on the debating team, and was valedictorian of her class. Alina was the pride of her Guatemalan community—proof positive of the possibilities her parents always envisioned in the land of opportunity. Alina was awarded a full ride in minority studies at Yale University. But this Hollywood-caliber story hit a glitch. Attending her first semester away from home and the insulated environment in which she had excelled, Alina began hearing voices.
 
She told a therapist at the university counseling center and before she knew it she was whisked away to a psychiatric unit and given antipsychotic medications. Despondent about the implications of this turn of events, Alina threw herself down a stairwell, prompting her parents to bring her home. Alina returned home in utter confusion, still hearing voices, and with a belief that she was an unequivocal failure to herself, her family, and everyone else in her tightly knit community whose aspirations rode on her shoulders.
 
Serendipity landed Alina in my office. I was the twentieth therapist the family called and the first who agreed to see Alina without medication. Alina's parents were committed to honoring her preference not to take medication. We were made for each other and hit it off famously. I loved this kid. I admired her intelligence and spunk in standing up to psychiatric discourse and the broken record of medication. I couldn't wait to be useful to Alina and get her back on track. When I administered the ORS, Alina scored a 4, the lowest score I'd ever seen.
 
We discussed her total demoralization and how her episodes of hearing voices and confusion led to the events that took everything she had always dreamed of from her—the life she had worked so hard to prepare for. I did what usually helps: I listened, I commiserated, I validated, and I worked hard to recruit Alina's resilience to begin anew. But nothing happened.
 
By session three, Alina remained unchanged in the face of my best efforts. Therapy was going nowhere and I knew it because the ORS makes it hard to ignore—that score of 4 was a rude reminder of just how badly things were going.
 
At the checkpoint session, I went over the SRS with her, and unlike many clients, Alina was specific about what was missing. She revealed that she wanted me to be more active, so I was. She wanted ideas about what to do about the voices, so I provided them—thought stopping, guided imagery, content analysis. But no change ensued, and she was increasingly at risk for a negative outcome. Alina told me she had read about hypnosis on the internet and thought that might help. Since I had been around in the '80s and couldn't escape that time without hypnosis training, I approached Alina from a couple of different hypnotic angles, offering embedded suggestions as well as stories intended to build her immunity to the voices. She responded with deep trances and gave high ratings on the SRS. But the ORS remained a paltry 4.
 
At the last-chance conversation, I brought up the topic of referral, but we settled instead on a consult from a team (led by Jacqueline Sparks). Alina, again, responded well, and seemed more engaged than she had ever been with me—she rated the session the highest possible on the SRS. The team addressed topics I hadn't, including differentiation from her family, as well as gender and ethnic issues. Alina and I pursued the ideas from the team for a couple more sessions. But her ORS score was still a 4.
 
Now what? We were in session nine, well beyond the point at which clients typically change in my practice. After collecting data for several years, I know that 75 percent of clients who benefit from their work with me show it by the third session; a full 98 percent of my clients who benefit do so by the sixth session. So is it right that I continue with Alina? Is it even ethical?
 
Despite our mutual admiration society, it wasn't right to continue. A good relationship in the absence of benefit is a good definition of dependence. So I shared my concern that her dream would be in jeopardy if she continued seeing me. I emphasized that the lack of change had nothing to do with either of us, that we had both tried our best, and that, for whatever reason, it just wasn't the right mix for change. We discussed the possibility that Alina see someone else. If you watched the video, you would be struck, as many are, by the decided lack of fun Alina and I had during this discussion.
 
Finally, after what seemed like an eternity, including Alina's assertion that she wanted to keep seeing me, we started to talk about who she might see. She mentioned she liked someone from the team, and began seeing our colleague Jacqueline Sparks.
 
By session four, Alina had an ORS score of 19 and enrolled to take a class at a local university. Moreover, she continued those changes and re-enrolled at Yale the following year with her scholarship intact! When I wrote a required recommendation letter for the Dean, I administered the ORS to Alina and she scored a 29. By my getting out of her way and allowing her and myself to 'fail successfully,' Alina was given another opportunity to get her life back on track—and she did. Alina and Jacqueline, for reasons that escape us even after poring over the video, just had the right chemistry for change.
 
This was a watershed client for me. Although I believed in practice-based evidence, especially how it puts clients center stage and pushes me to do something different when clients don't benefit, I always struggled with those clients who did not benefit, but who wanted to continue with me nevertheless. This was more difficult when I really liked the client and had become personally invested in them benefiting. Alina awakened me to the pitfalls of such situations and showed a true value-added dimension to monitoring outcome—namely the ability to fail successfully with our clients. Alina was the kind of client I would have seen forever. I cared deeply about her and believed that surely I could figure out something eventually.
 
But such is the thinking that makes 'chronic' clients—an inattention to the iatrogenic effects of the continuation of therapy in the absence of benefit. Therapists, no matter how competent or trained or experienced, cannot be effective with everyone, and other relational fits may work out better for the client. Although some clients want to continue in the absence of change, far more do not want to continue when given a graceful way to exit. The ORS allows us to ask ourselves the hard questions when clients are not, by their own ratings, seeing benefit from services. The benefits of increased effectiveness of my work, and feeling better about the clients that I am not helping, have allowed me to leave any squeamishness about forms far behind.
 
Practice-based evidence will not help you with the clients you are already effective with; rather, it will help you with those who are not benefiting by enabling an open discussion of other options and, in the absence of change, the ability to honorably end and move the client on to a more productive relationship. The basic principle behind this way of working is that our day-to-day clinical actions are guided by reliable, valid feedback about the factors that account for how people change in therapy. These factors are the client's engagement and view of the therapeutic relationship, and—the gold standard—the client's report of whether change occurs. Monitoring the outcome and the fit of our services helps us know that when we are good, we are very good, and when we are bad, we can be even better.


Copyright © 2008 Psychotherapy in Australia. Reprinted with Permission.

Bios
Barry Duncan and Scott Miller are co-founders of the Institute for the Study of Therapeutic Change. Together, they have authored and edited numerous professional articles and books, including The Heart and Soul of Change: What Works in Therapy, Escape from Babel, Psychotherapy with Impossible Cases, and The Heroic Client. Recently, they released self-help books, Staying on Top and Keeping the Sand Out of Your Pants: A Surfer's Guide to the Good Life, and What's Right with You: Debunking Dysfunction and Changing Your Life. On the web, they can be found at Better Outcomes Now and Outcome Measurement Tool In Mental Health.
