Psychotherapy and Applied Psychology: Conversations with research experts about mental health and psychotherapy for those interested in research, practice, and training

Improving treatment with feedback: Feedback-informed care and deliberate practice with Dr. Scott Miller

Season 1 Episode 6

In this conversation, Dan speaks with Dr. Scott Miller about improving psychotherapy using feedback and using feedback to facilitate deliberate practice.

The conversation covers Scott's journey in the field of therapy, his experiences in different settings, and the development of feedback-informed treatment (FIT). FIT involves soliciting formal feedback from clients about their progress and engagement in therapy. The therapist uses this feedback to tailor the treatment to better fit the client's needs. The conversation also touches on the importance of monitoring outcomes and the challenges of implementing feedback-informed care.

In this part of the conversation, Scott and Dan discuss the importance of measuring progress and using feedback to inform treatment decisions. They emphasize the need to consider adding or removing elements from the treatment plan if there is no progress after a certain number of sessions. They also discuss the concept of outcome-focused treatment and the importance of involving the client in the decision-making process. Scott shares an analogy from medicine to illustrate the need for a collaborative approach to treatment.

They also discuss the challenges and objections to implementing measurement-based care, including concerns about burdening clients and the need for training and support for clinicians. They highlight the benefits of using feedback measures to facilitate conversations about termination and the potential for improving outcomes through deliberate practice.

Special Guest:
Dr. Scott Miller: Make sure to check out Scott's website which is full of useful resources!

💬 Click here to text the show!

🎞️ Video version of the show: @PsychotherapyAppliedPsychology on YouTube
🛜 Check out the website: Listen to every episode on your podcast player of choice

Connect with Dan
Leave a voice message on Speakpipe
🔗 LinkedIn
🐥 @TheAPPod on twitter
📬 TheAppliedPsychologyPodcast@gmail.com
📸 Instagram

Broadcasting from the most beautiful city in the world, I'm your host, Dr. Dan Cox, a professor of counseling psychology at the University of British Columbia. Welcome to episode number six of Psychotherapy and Applied Psychology. Here we dive deep with the world's leading researchers to uncover practical insights, pull back the curtain, and hopefully have some fun along the way.
If you find the show enjoyable, kindly consider sharing it. You can conveniently tap the share button on your phone and send it via text, share on social media or any other method that you prefer. Your support in spreading the word is greatly appreciated.
On today's show, I am so excited to have this conversation with my excellent guest, who's a leader in the area of feedback-informed treatment and deliberate practice. In our conversation, we discuss therapists using feedback, measuring progress and making adjustments, the success probability index, and challenges to feedback-informed treatment and deliberate practice. We begin this conversation with my guest answering my question about a time that he felt insecure as a practitioner.
So without further ado, I'm thrilled to have one of the world's foremost experts in feedback-informed treatment and deliberate practice, Dr. Scott Miller.
I think my entire professional training and experience is one of continuously asking, am I cut out for this? So as a beginning graduate student, I was in a cohort of very talented, very smart people, generally older than me. I came straight out of a bachelor's program into a PhD program.
All my classmates had master's degrees and were working as therapists. Not only that, but translating the ideas that I was hearing in classrooms and in consultation with professors into actual work with real people who were really suffering was a fraught activity for me. So I've been an anxious practitioner from the outset.
I would still describe myself in that way. I am always full of doubt about whether or not I've said the right thing, done enough, studied enough in order to do the work that we do. Several times along my developmental path, so to speak, I've chucked the whole enterprise and gone in a very different direction based on what the research seemed to be saying.
So I left California, a very comfortable practice environment where I went right after graduate school to do a postdoc, and moved to Milwaukee to work with Insoo Kim Berg and Steve de Shazer. I was there five years. It was a phenomenal experience.
A lot of the anxiety I had at one time about being a practitioner came under better control. I was given things to do, things to say, a way to act with clients. And then five years in, our data came back and indicated that what we were promising really didn't happen.
We weren't doing care more briefly. We weren't having more single session cures. It wasn't any better than anything else that someone could do, which really shook me to my core.
And that's just been the pathway that I've been on from the beginning. And I would describe it as an exciting one, not as a discouraging one. Right when we build this foundation and the building on top of it, something comes along that says, this isn't exactly right.
And I've followed that over the course of my career and ended up here today.
You were practicing in California, and then you... So, can you dig into that a little bit? That decision? That sounds like a huge transition and decision.
So I grew up in Southern California in a small town called Glendora. I went to high school there, left the state to go to graduate school, and was, I think like many Californians, looking for some pathway back. And I ended up with a postdoc position in Palm Springs, started to establish myself in the community.
But I knew that there was something more that I wanted out of my life and my career than just doing the work on a day-to-day basis. And if I get real specific, I'll tell you that I was sitting opposite a client, and as this person spoke about things that were a big concern to that person, I thought to myself, if I do this every day for the next 40 years, I won't live that long. I'll take my own life.
And I had, in the process, been corresponding with lots of people whose ideas I was interested in. Two of them happened to be Insoo Kim Berg and Steve de Shazer. At one point, I wrote an article that was published, or headed for publication, in the Journal of Systemic Therapies, and I sent it to her, and she said, why don't you come and talk to us?
So, I got on a plane, left my comfortable environment in Southern California, my home where my parents and my brothers were, flew to Milwaukee, and it was a dump. This place was a dump. I had imagined this giant, shiny building with the latest computer equipment.
They were using headphones behind the mirror that you would see in 1950s film, these little black things. It was the most exciting three days I had experienced in clinical work ever. And as I got ready to leave, Insoo said, I hope you'll think about coming to work here.
And within about a month and a half, I decided to go work there. It was a hard transition, not just culturally, because LA was different than Milwaukee, but also the person that I was living with at the time didn't want to go to Milwaukee. And even though I was doing a postdoc and making hardly any money, I was going to make less money moving to Milwaukee.
So I ended up finding a room in a rooming house. I had never lived in a rooming house; a dorm, an apartment, yes, but never a rooming house. And I lived in that place for about five years while I worked with Insoo and Steve.
So what were they doing there that was so exciting?
They were watching what they did, not just doing what they did. They were reflecting on it, thinking about it, asking difficult questions. They were teaching other people about their ideas, publishing, writing.
And that added an element or a dimension to my professional work that I think had been missing while I was doing my postdoc. And I had a great postdoc experience. It wasn't that it was bad.
I just knew that I wasn't cut out to do that full time every day. And I also wanted a different kind of clientele. So at Brief, which was in Milwaukee, we were working with people on the margins, folks who didn't have the means to pay for services, lots of people on the streets.
We had relationships with a couple of Capuchin brothers who were bringing us clients who were living on the streets, to see if we could help and work with them. And that also had the additional advantage of forwarding our research. I could see that showing the kind of work we were doing with the kind of folks we were working with, folks who weren't the traditional, quote unquote, psychotherapy patient, really had a persuasive effect.
If you can use these ideas with this population, then maybe some of them might be applicable to my own clients who are much more privileged.
What were some of the ideas?
This was really the early days or maybe the middle period of solution-focused therapy. So we were really trying to develop a structure that led to shorter episodes of care with clients. We were looking at the types of questions that we were asking and the interventions, the homework assignment that we gave at the end.
And seeing, number one, did clients return, and number two, were they feeling better? So this was the era that the miracle question emerged from, that scaling questions became prominent. Steve had already been writing about exceptions, but the exceptions and the miracle question and the homework and scaling questions all started to fit into a framework that was easy to teach and to utilize in day-to-day clinical services.
And you had said, like, they were... One of the things that I heard is that they're actually watching therapy happen, and they're asking questions about what is happening. And my guess is: what's working?
What's not working? Am I in the ballpark?
I, you know, I had a fantastic opportunity while I was in graduate school to work with another clinician. His name was Lynn Johnson. Lynn was the person who actually referred me to Insoo.
That's how I first got in touch with Insoo. Lynn had a one-way mirror in his office in a city just south of Salt Lake City, where I was going to graduate school at the University of Utah. And so he offered to allow me to sit behind his mirror, and then he and I would talk about the cases afterwards.
This was that on steroids. So I'm a clinician who could probably count on my two hands and two feet the number of times that I have worked without being observed by someone. There was always a team behind the one-way mirror, and we were looking at the process.
What were we asking? How could we tweak the questions so that hopefully the client would be more engaged and have more ideas come to them about how they might solve their particular problem? We'd then talk about it as a team.
We'd try out new ideas the next time. There was this constant iterative process of trying to improve the work that we were doing. And then, I think critically, also teaching that process, as we understood it at any given time, to others.
So the other really important thing that I think happened during that time is we had people from everywhere, all over the United States and really all over the world that were coming and spending three or four days with us. During the summer months, we had multiple residency programs so they would come and spend an entire month. They brought something entirely different to the process because they were coming from different cultures and working in different settings.
So it really was a very rich experience.
As I'm listening to you describe this, I'm simultaneously experiencing excitement and envy. That's such a unique and wonderful experience, it sounds like.
I feel very fortunate that I had this opportunity and there aren't many opportunities to do that nowadays. At the time that I was coming up in the field, so to speak, there were several locations that offered a chance to sit behind the one-way mirror. And by the way, not only was I doing the work, but I was watching other people do it and having input from two people whose clinical styles I really admired.
That was Insoo and Steve, and all the folks from MRI were coming. John Weakland was there a fair bit. Neil Jacobson was there.
So there were lots of folks that came through that I got to interact with and see, work with, have input from. So it was an amazing experience, really.
How come there aren't places like that? If I say there aren't any, I'm sure that I'll be wrong. But in general, how come there aren't many, if any, places like that anymore?
I don't know. I think these were even unique at the time. Most people, if they do any mirror work, they might do it while they're at university.
But we're talking about something quite different here. Insoo and Steve's Brief Family Therapy Center was not affiliated with the university, although there were university professors who were regular visitors and participants in the trainings and in teaching. So it was about being dedicated to this mission.
And the mission was, let's figure out how to improve the service that we're delivering without all of the obligations of being in a university. So we weren't really training students, graduate students. We were working with fully fledged, fully licensed professionals.
It's a different environment. And we funded that. So the income that we made from doing trainings funded the operation of the clinic, because many of the clients that were seen there weren't paying anything for the service.
So it was really, I think, a dedication to the mission. That's what attracted me to the Brief Family Therapy Center. Around the same time, I'd written to another person whose work I'd seen and admired; that was Bill O'Hanlon.
And at the time, he was married to Pat. And Pat had a clinic that had been started by her father, who was a very well-known marriage and family therapist. And I had gone up there to spend several days as well, to see if I might want to work there in Omaha.
And it was just a completely different atmosphere. One was a working clinic. The other one was a clinic that was working at perfecting or studying the process of therapy.
And then you said a couple of minutes ago that when you guys started looking at some of the data, you didn't see what you thought you would see.
So we invited some outside researchers to come in. This would have been around late 1991, 1992. And they did some follow up with a huge number of our clients.
And we were really hoping that they would find that what we were doing was more effective than what other people were doing. In a way, that's what we were claiming, brief therapy, solution-focused therapy. Well, there would be more solutions.
It would be shorter term. And it wasn't. The average number of sessions was about the same.
It had been about the same since the 1940s, when health care statistics first began to be gathered on the length of mental health care. And the modal number of sessions was one, which is what it's always been in our field. So that really rocked our foundation.
What we were claiming simply didn't make a difference. And you get hints of this in de Shazer's last, and what I thought was his most important work, which was called Words. We spent a lot of time trying to be very clever, trying to identify the invariant process that, if we helped our clients through it, would result in a good outcome.
And that just didn't seem to be the case. And for me, that meant that I needed to go someplace else. We needed to do something else.
And it was about that time that I met two people at a conference: Mark Hubble and Barry Duncan. Those two were in much the same place that I was, in a state of transition and flux.
We had all the training. We had all the ideas. We were very devoted to improving psychotherapy.
And we were looking, casting about, so to speak, for some way to explain a couple of competing facts. One is that psychotherapy, this activity we call psychotherapy, in its many and varied forms, works. There's absolutely no question in my mind about that.
The data is convincing. We're not in the 1950s, when Eysenck said, hey, it's actually worse than nothing at all. We now know that it works.
But the paradox is that it doesn't matter which approach you use for the most part. And that was very hard to wrap your head around. You could just do anything.
That was the usual conclusion. Just do anything and people would help. It's not exactly like that.
So we spent the next five or six years studying what we called the therapeutic factors. What common elements did all therapies share that were responsible for a successful outcome in psychotherapy? Wrote several books, published lots of papers, and interestingly enough, returning to a theme throughout my career, it wasn't successful.
The main reason was, well, in fact, I got a phone call late at night from my colleague and co-author Mark, and Mark was in a bit of a panic. We were then writing our second edition of The Heart and Soul of Change, which was a big book about these common therapeutic factors and how, regardless of whatever approach you used, if you could leverage these factors, your outcome would be better. Sort of like the nouns and the verbs, the sentence structure, rather than the particular language that you might use. He calls me up and he says, Scott, what we're saying makes no sense. And I said, why not?
And he says, well, think about this. If all approaches are equal in terms of outcome, why would anybody want to learn about these therapeutic factors? And I can remember thinking, shit, you're right.
What is the advantage of learning about these factors if whatever you do, as long as it's a bona fide therapy model, one that contains a theory of change and tries to secure client engagement in a set of rituals that, if engaged in, promise to lead to a better... Why would anybody learn this? It didn't make any sense.
We chucked the whole thing for the time; set it on the shelf is probably a better statement. And we moved on to something that I'd been exposed to because of a former professor of mine and this character, Lynn Johnson, whose mirror I mentioned I'd spent time behind. The idea behind all this was that, hey, I may not be able to learn the right way to do treatment, but I can know if my way with this client is making a difference.
Michael Lambert had developed a measurement tool called the OQ45. During my graduate school years, I was a research assistant for Michael for a number of years, and he was obsessed with deterioration in psychotherapy, something that at the time no one talked about. No one talked about it.
Michael said, we better measure because one out of ten clients is at risk for being made worse while they're in care with us. That seems like a pretty big deal. So the OQ was right there.
I started applying that in my regular clinical work. Meanwhile, Lynn Johnson had been writing me, and we'd published an article about this other simple measurement tool that would be given at the end of the session about the relationship. Since the relationship, as we'd written about in two editions of The Heart and Soul, was such a potent predictor of outcome and engagement, maybe we should ask clients, how was it at the end of the visit?
And I'm not talking in an indirect way, but in a formal way, asking them to actually rate how we did in various domains or dimensions of the relationship. So we started using the OQ and the 10-item SRS in combination at each session, mostly with the intent of finding out was I helping this particular person or not. And if I wasn't, then maybe I could make some small subtle changes to the work we were doing, or at least talk about it openly with the client so that they wouldn't drop out before maybe I could at least attempt to fix what we were doing that wasn't helping them.
And that for me is where feedback-informed treatment began to take place. Now Michael was several steps down that road already, along with other people that were just amazing figures in routine outcome monitoring and measurement. Ken Howard was at Northwestern University.
He published this article, which I like to joke sometimes that nobody read nor understood when it first came out in the 1980s, about the dose-effect relationship. And that was that there was a kind of sensitive period in therapeutic relationships. When two people meet, there was a period of time during which change probably should be happening.
And if it didn't, that that particular relationship was at risk for a negative or null outcome. So that started happening back in the late 1980s. His article is still, when you read it, it's just an amazing piece of scholarship.
Where was the field in 1986? It was worrying about psychodynamic therapy and also the rise of managed care. Where was Ken Howard?
Ken Howard was saying, I wonder how clients experience the benefit of psychotherapy. He was going direct to the source and asking, are you being helped? That was a pretty novel idea.
Take it out of our clinical judgment, solely our clinical judgment, and let's have a discussion around some formal structure process for evaluating the effectiveness of the care we're doing together.
I want to go back to when you're in Milwaukee, and then you get these outcomes, and then do you stay in Milwaukee while you're starting to do this work? Do you go somewhere else to work with others? Where's your next move?
So, I left Brief in 1993, and I worked with a couple of former colleagues from Brief. One was Larry Hopwood, a delightful human being. He was a doctoral-level microbiologist at the Medical College of Wisconsin, and we started a free clinic in a state-funded, or federally-funded, homeless shelter in downtown Milwaukee.
And around that time, I met the person who I eventually married. And so, I was going back and forth between Chicago, where she lived and worked and working in Milwaukee. We did that for a number of years.
And then, as I transitioned full-time to Chicago, I really started working together with Mark and Barry to develop this set of measurement tools and the process for understanding the results, or the experiences, that clients were sharing with us.
And did you guys develop a center? What did that look like?
In Milwaukee, we had a small clinic that was called Problems to Solutions. This was in a homeless shelter, so the space to meet with clients was donated to us by that shelter, and we staffed the clinic two or three times a week.
We did not form an agency or a clinic once I left Milwaukee permanently and went to Chicago. I just started doing this in the work that I was doing privately, and then Barry and Mark and I were talking about it constantly.
So in thinking about when you first moved to Milwaukee and the work that they were doing there, and how those sorts of places were unique then and are basically extinct now, and then continuing: you start at this other place, this homeless shelter, then you go to Chicago, and you're working with your collaborators, somehow integrating this work into your practice and really focusing on it. This sort of hero's journey is unique, that you're chasing these ideas and these ways of practice in your thinking, in your practice, and in your geographical location. It really jumps out at me, and that's why I keep going back to it as you talk through the different ideas and the work that you did over a decade or two or whatever it happens to be.
But that experience, that journey that you went on, I don't know if it's just something that my generation doesn't do very much, and maybe it's something that happened more frequently in the past, or if it's just totally unique. And I think it's pretty fascinating to listen to.
Well, I think that one of the benefits of being at Brief is I learned a different way to make a living than charging my clients. So one of the things that happened at Brief is I began to teach, not in a university context, but workshops and consultations, etc. And that was instead of charging the clients.
Some of the clients at Brief were charged, but many, many of the people we saw paid nothing, and we never even asked about payment. There was never any assumption that they could actually pay. So instead, I've focused on writing and teaching in terms of making a living, and that's a model that I've carried through to today.
So I'm one of the, I would say, a handful of people who's been able to eke out a living without, depending on reimbursement by third-party payers or out-of-pocket payments by clients.
This was one of the questions I was going to ask you was, how are you not an academic? You know, how have you been able to do so much research, to write so much, to have the time and the energy to, you know, have ideas, put them into practice to further them without having a salary from some sort of an institution? And I think you're answering this question without me having to ask it.
I have many friends who are employed in the university context, and much of their time is spent not pursuing ideas that they find interesting. They're eking that out on the weekend. They're having the same kind of conversations I did.
What they get in exchange for the university context, I suppose, is a guaranteed salary and in some cases a pension and health insurance. All of that stuff I sort of had to figure out how to do on my own, and I was greatly helped by the model provided by Steve and Insoo at the Brief Family Therapy Center. But it's been great if you're willing and interested in pursuing ideas.
So this morning, it is now about one o'clock or so Eastern time. I've been up since 6 a.m. and writing. And the reason I'm writing is because there is a book due that will in turn, hopefully drive consultations and teaching at some point in the future.
Or that's sort of been the formula. It's never been as crass as that in my mind. But that is the formula.
So if you're able to do that, sometimes it means you're kind of living on the edge. As I said, I lived in a rooming house. The person that owned the home was not a pleasant person to live with.
But I also had this model for me. The first week I was at the Brief Family Therapy Center, on Friday evening, we saw clients late into the evening, 8 or 9 o'clock.
I can remember, like it was yesterday, I turned to them and said, Wow, we're done, I'm going home. She said, See you tomorrow. And I said, Tomorrow is Saturday.
This was my first week there. And she goes, Yes, 8 a.m. Same thing happened on Saturday night.
So we literally worked all the time. There was this kind of dedication to the mission idea that just was pervasive in the work that we did. And I think that's something that's carried me through to the present time, late in my career now.
So why don't we go into describing feedback-informed treatment? You can sort of decide how you want to describe this. I was thinking it might be helpful to describe what we sometimes call routine outcome monitoring, or ROM, sort of the generic, non-specific routine outcome or process monitoring.
Or maybe it would make more sense to describe feedback-informed treatment, FIT, which is more your system as far as I can tell, right, that you and your colleagues have developed. I'm not sure which way you want to go with that, but to sort of explain what that is, what that looks like.
Well, I do think there's sort of a 20,000-foot view that is really about soliciting formal feedback from the people that we are engaged with around their progress and engagement levels in care. And I don't think that I started any of that. I trace it to people like Lynn Johnson and Michael Lambert, who pursued this for their own reasons.
But I was certainly the beneficiary of that. And nowadays, and we've sort of been, I would say, practicing in the wilderness for a very long time. There weren't many big fans of let's monitor and measure every session.
There were some, but not a lot of big fans. And academia sort of pooh-poohed the whole thing. APA this last year had a committee that got together and decided to call this measurement-based care.
And personally, in a response that I wrote that was published alongside Boswell's piece describing what the committee had decided, I said I don't like this term measurement-based care, mostly because it turns the attention back on what the therapist does, or in this case, what a psychologist does. What do psychologists do? Well, we measure.
That's our whole raison d'être: to measure things. That goes back to the very early days of the field. Hence the other terms: feedback-informed care, patient-reported outcomes, or routine outcome monitoring.
Routine outcome monitoring doesn't really say that we're doing anything. We're just monitoring the outcomes. So I suppose this would be like physicians measuring your blood pressure and then sending you on your way, no matter what the readings were.
Feedback-informed meant that, for me at least, that we monitored and then we did something with the feedback. And namely, that was to discuss it, in particular whether or not that client felt like we'd understood what they were there for, given them what they wanted and in a way that made sense to them and was actionable. And that when they returned, that what we were doing had resulted in some measurable benefit to them.
And initially, we were simply, as I said earlier, using the OQ45 and the 10-item session rating scale that Lynn had developed. The population of people that I was working with continued to be folks on the margins, and many of them had literacy challenges.
The reading level of the OQ45 was at about the eighth to ninth grade level, and with many of the clients I was working with at the time, we had to read the entire measure to them in order to get the items filled out. 45 items.
So it wasn't really very realistic. On a teaching trip to Israel, I met an Israeli psychologist who listened to me complain constantly about this 45-item and 10-item measure. His name was Haim Omer, a very well-known Israeli psychologist who revolutionized the whole school system in that country.
And he said to me, why wasn't I using a visual analog measure? And I said, I don't even know what that is. And he said, I'll send you some stuff.
So he sent me some materials. We converted the OQ45 into a four-item visual analog tool. Four lines.
Marked left to right, low to high. And it was a much simpler piece. After that followed validation studies.
And then randomized trials. And in time, with the gathering of data from diverse practitioners around the world, we were actually able to create a database of outcomes and begin looking more carefully at when a lack of progress should begin to concern the therapist, and what kinds of changes on the individual items or in the total score of the shorter version of the SRS, the four-item session rating scale, indicated that engagement was at risk.
The client might not show up for the next visit. And that's just turned into scores of research papers with little tidbits of information about how therapists can optimize and understand the feedback that their clients are giving them.
So I'm a clinician, and I give the outcome rating scale or some measure of how a client is doing. I get that information, so I give it to my client. Different systems do it differently.
I have that information. What do I then do with that information?
Well, it depends on a couple of things. If you're early in the process of implementing FIT or any other system of feedback, the chances are you'll do nothing with it. This is Wolfgang Lutz's data.
Therapists get feedback, they don't do anything with it. But if you stick with it long enough and you get some consultation, some training and support, you start to see patterns in the data. So, for example, a single point decline on the session rating scale.
This is a 40-point measure, four items, ten points each item. A single point decline on that tool is associated with a greater risk of slowed progress or deterioration in the subsequent sessions. So what I would do with that now is I would lean forward and say, help me understand the score.
Last week it was this, this week it's this. What was missing? And see what the client said.
If they say nothing, which happens a fair bit of the time, it's sort of like, as I say in trainings, going to a restaurant. I don't know if you've experienced this, but oftentimes people say, well, how was the food? And I go, yeah, you know, it was good. It's rare that I say, oh my God, it's the best, unless I'm lying, which I do as well.
You know, I lie about it, oh, it's the best place I've ever been. Big fat lie. But really, most of the time it's kind of like, you know, it was okay.
It was okay. If the server came up and said to me, what could we change? I'd go, you know, I don't know.
You know, I have no idea. It's rare I can put my finger on something exactly. I'm not a chef.
I don't know how to make it better. Their interest in knowing is what's key: that they would even ask, and that they might have some targeted areas to ask about.
Is it okay if I just ask a couple of follow up questions here? Did it take too long to get your meal? Were the different courses bunched up or come too quickly?
Did we fill your water glass enough? Was the temperature right? Did it have the kind of flavor that you appreciate?
So two or three or four follow up questions that they know contribute to the better experience at a restaurant would help me. Same thing in therapy. So if I say, what can I do better?
Most clients go, they're not a therapist. How do they know? They might be able to reflect on something for a couple of days and then come to it, which is another thing I do, by the way.
If something comes to you, you don't have to wait. Here's my number, text it to me, and I'll do my best to see what we can do. And I may even follow up with a call to see if we can flesh this out further, if that's okay with you.
Otherwise, I'm going to have to give them some specific guidance. Did we talk about what you were hoping to talk about? At the conclusion of the session, are you thinking, geez, I wish we'd had time to pursue this, but we spent so much time on that.
Is there anything that you think, I'd like to tell Scott this, but I don't feel ready yet. You don't have to tell me what it is. But is there?
So I'm using some of my knowledge and experience to see if there's anything that I could lift up to have the client feel like we're more connected, I understand them better, and therefore I can fine tune and tailor the care to better fit them.
So what you're describing here, within FIT, is what would come from the session rating scale. That's a very, very brief measure of the alliance, of the relationship, how things are going in therapy, sort of the traditional goals, tasks, and bond, that in FIT is given at the end of the session or towards the end of the session. So what about the outcome data, in terms of how the client is doing, that one would typically collect at the beginning of the session or maybe even right before it?
What would a therapist perhaps do with that information?
Right, you're exactly right. Right at the beginning, I'm looking at progress. How do I know what to talk about this visit if I don't know whether what we talked about at the last visit made a difference in their life?
So I'm not having casual conversations at the beginning. There's a friendly maybe interchange before we each take our seat, but I'm going to look at the measure and say, hey, the scores are improving. What happened?
What role did you play in that? What, if anything, did you take from the prior visit that made a difference or didn't make a difference? If there is progress, I'm leaning forward and I'm saying, talk to me about that.
If there's a deterioration, I'm going to be right on top of that. I see that the scores have gone down some. Can you tell me what's going on?
And I'm going to tie it to, as well, the relationship variables. What do we need to be talking about during today's visit that might have an impact on your progress between this and the next visit? Is there something about, if I find out, for example, that the counsel I gave, the advice I gave, the homework I gave wasn't done, I'm going to take that under advisement and try to figure out how do I need to fine tune and tailor that, as well.
I'm listening for client's preferences. Did I see them in a way that was incompatible or incongruent with the way they want to see themselves? This is a fairly common error, especially given that we think about and describe the work in terms friendly to our models and our diagnostic system, which may not fit the client's theories, ideas and experiences, so I'm going to try to push us closer in terms of our understanding of one another.
First thing I'm going to do in the absence of change is look at the what. What have we done together? What can we change about it?
Second thing, if that goes on for two or three more visits and I'm still not seeing progress via the measure, the client filled out the measure, they're saying there's no difference, then I'm going to start to think about what can we add to this service or take away. If I'm seeing people as a couple, do I need to separate them? If I'm seeing an individual, do I need to bring in the family?
Do I need to augment this with a group or a book? Is this time to consider referral to a prescriber? If we get out 8, 10, 12 visits and there's still no change, then I'm starting to think about me as the problem.
Somehow or other, adding me into the mix hasn't served as a catalyst for improvement. And so I'm grateful when me and the client both know that, and I'm going to talk about where else and who else might be more beneficial or helpful.
I think this is tremendously important, and I think it harkens back to your comment earlier and your noting of the limitation of the term measurement-based care, right? That it's not just measurement for the sake of measurement, it's feedback that I then do something with clinically based on the very specific situation of me and this person in this context at this time.
And I'm often using an analogy that I think is easier to understand and accept for some reason in medicine. I think they've just been better at doing this than we have. And that is, you go see your physician, they tell you what to do, they give you a script or some counsel or advice, and it doesn't help.
You go back, typically you go back to the same person: hey, I took this pill, I tried this salve, it's still here. They say, hmm, well, let's try this. Maybe I got the diagnosis wrong, maybe it's a little more complicated, maybe you need a stronger version of this.
You leave happy, and it still doesn't work. What are you starting to think then? Hmm, maybe I need a second opinion or a specialist.
The physician doesn't look at you and go, what, you don't like me anymore? Our relationship isn't good? No, your relationship is good.
That's why I'm telling you that what you're doing with me isn't helping you. And I'm trusting that what you will do is have my outcome in mind, not just more of the process. And I will tell you, there are multiple threats against this, against taking those steps.
One is not involving the client in that process all along. So if you get out to session 10 or 12 and you say, hey, you know, it's not working, let's get you to my colleague over here, people are going to be upset and feel abandoned. But if they have been brought along the entire process with me saying at session 3 and 5, hey, it's not working, let's try a little bit of this, oh, it's still not working, let's involve these other people, still not working, hmm, maybe we need different eyes looking at that.
And for me, by the way, that includes frequently having a referral to somebody who's going to do a full physical workup of the client. Many, many times getting the client in front of a sophisticated medical team to do a complete evaluation has saved me and the client, because they've discovered something that I never would have found on my own that is at the core of the problem that I'm seeing the client for. So the first thing is you got to bring the client along.
The second, I think, is you have to explain this in a simple term, because clients will also choose a relationship over outcome. It's nice to have company when you're miserable. It's a lot better than being miserable on your own.
That means it's incumbent upon me to help them make that step to the next provider, treatment or setting that might be of more help to them. We know, for example, that clients are willing to trade outcome for propinquity. They're willing to see somebody closer because they're easier to get to than somebody who is actually more effective.
Now, you and I can sit here and go, how could that possibly be? But we all do this. We go back to the same providers that didn't help us before because we know them, we get into them, there's a friendly conversation with them.
Is it the best choice? No, but there are lots of costs associated with finding a new provider. And, of course, there's a risk.
They won't take me and they may not help me either. At least this person is friendly, has an open door and tries. But if outcome is the objective rather than process, if outcome is the objective, that's what I got to keep front and center from a feedback informed treatment perspective.
So, one, I just learned a new word, but I don't know how to spell it, propinquity. Okay, I'll have to look that one up later. And, I mean, I got the definition, but never heard the word before.
Second, I think it'd be worth spending 30 to 90 seconds on empirical support for the value of outcome or process monitoring.
Yeah. So, now there are scores of randomized trials that say that doing this in an organized and systematic way improves outcomes. Various figures have been bandied about over the years, but Bram Bovendeerd, a Dutch researcher, published a recent study, and I think his figures are about right.
Once the system is fully implemented with feedback, meaning not just when you've started learning about the scales, but when you have embraced the changes in practice policy required to be a FIT practitioner. So, for example, I have to be willing to refer. I have to have a referral network.
I have to facilitate clients moving from one treatment to the next. Because between 25% and 35% of the people I see are not going to benefit from me. And that's not just me, that's you too.
And that's that person over there and that person. What happens to these people? Well, generally, they sort of disappear into the fabric or they keep going to the same provider without measurable benefit.
And the clients can sometimes swear, Oh no, it helps every time. I love coming here. You know, I feel relaxed every time.
And I think, yeah, it's a lot like getting a massage once a month. You know, it feels great. I love getting a massage.
It feels great. But to say it's led to systematic improvement in my overall life? I think something much more focused and planful needs to take place in order for that to happen. So your question was?
Evidence that doing this, looking at outcome, engaging in the process that you're describing is beneficial.
Right. So once you're fully implemented, what is the benefit? About a 25% improvement in outcome, and in particular among those who are not on track for a positive outcome. Here's something that I just read actually yesterday in a study by the person I mentioned earlier, Wolfgang Lutz.
Again, one of my heroes runs a very different set of feedback measures. It doesn't matter, we're all in this for the same objective. Clients who have a single warning that they're off track, a single warning, are 50% less likely to end treatment with a reliable improvement in their well-being.
That means greater than chance, greater than passage of time, maturation effects, and greater than measurement error. 50% less. So I really need to hear from those clients that aren't making progress, and I probably need to double down on my efforts to see what can I do or where can we do it or who can do it better than me.
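"Reliable improvement" here refers to the standard reliable change criterion: change that exceeds what measurement error alone could plausibly produce. A minimal sketch of that calculation follows; the reliability and standard-deviation values are hypothetical placeholders for illustration, not the official norms of any of the scales discussed in the episode.

```python
import math

def reliable_change_index(pre: float, post: float, sd: float, reliability: float) -> float:
    """Jacobson-Truax reliable change index: the raw change divided by the
    standard error of the difference between two administrations of a scale."""
    s_diff = math.sqrt(2) * sd * math.sqrt(1 - reliability)
    return (post - pre) / s_diff

# Hypothetical values for a 0-40 outcome scale (illustrative only).
rci = reliable_change_index(pre=18.0, post=26.0, sd=7.0, reliability=0.85)
print(rci > 1.96)  # an RCI beyond +/-1.96 exceeds chance and measurement error
```

An RCI past 1.96 corresponds to change unlikely (p < .05) to be measurement noise, which is the sense in which a client "ends treatment with a reliable improvement."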
So I realized that one thing I think that we should hit is in these systems, these monitoring systems that have been developed, there are two things, they're related. One is trajectories of change, and two is those sort of red flags or green flags, on track, off track kind of indicators.
So in our system and in some of the systems, sometimes when I explain this, I use another medical analogy. If you go to the physician, heaven forbid, and they say, hey, we did some tests and it turns out you got cancer, that's nothing anybody wants to hear. And one of the first things people ask in response to hearing this is, what are my chances?
And then the physician says something that sounds like it makes sense, and we all go, ah, right? That actually tells you nothing. So let me tell you what it is.
It's just junk information. They'll say, well, you have a 60% chance of survival at five years. And here's the way most of us think of that.
Oh, 60%. Well, that's better than 50. Not so good as 75.
I wish I had a 90% chance. But what they didn't tell you was, are you part of the 60 or the 40?
Are you part of the 60% who survive? That's what FIT can do. So what we've done is we've created these trajectories, and we call them treatment response trajectories.
It takes your initial score and it plots the minimum amount of change you, that client with that score, needs to make from visit to visit in order to be on track for a successful outcome at the end. So it's telling you, as long as you're at this line, you're in the green, you are among the 60%. Critically, what it also does is if you're not in the green, if you're in the red zone, that's the most amount of change you can experience and still not be on track for a successful outcome.
So you get a red flag, it's saying, well, right now you're at the 40%. That doesn't mean, because you're not dead yet, going back to our medical analogy, the cancer hasn't killed you yet. But if we keep doing what we're doing, there's a high likelihood it will, which means we need to get you out of the 40%, figure out, is there anything we can do that would make a difference here?
That gives me a crucial opportunity in that sensitive phase, as I described, that first two, two and a half months of care to make adjustments, increases, decreases, additional resources that might push us out of the red or the 40% and into the 60%.
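The green-zone/red-zone logic Scott describes can be sketched in a few lines. The real trajectories are derived empirically from a large outcome database; this toy version instead assumes a simple straight-line minimum-change rule from the intake score toward a hypothetical target, purely to make the on-track/off-track idea concrete.

```python
def on_track(intake_score: float, session: int, current_score: float,
             target: float = 25.0, sessions_to_target: int = 10) -> bool:
    """Toy treatment-response trajectory: the minimum expected score at each
    visit lies on a straight line from the intake score to a target score.
    At or above the line -> 'green zone'; below it -> 'red flag'.
    (Real trajectories are fit empirically, not assumed to be linear.)"""
    if intake_score >= target:  # already at or above target: expect no decline
        minimum = intake_score
    else:
        per_session = (target - intake_score) / sessions_to_target
        minimum = intake_score + per_session * min(session, sessions_to_target)
    return current_score >= minimum

# Intake score of 15; by session 5 this toy line expects at least 20.
print(on_track(15.0, 5, 21.0))  # True: in the green zone
print(on_track(15.0, 5, 17.0))  # False: red flag, time to adjust the work
```

The clinical payoff is the red flag: below the line, even visible improvement may be too little, too slowly, to end successfully without a change in approach.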
So, the idea is these trajectories are then used as a bit of a guide, to give information on whether your actual, let's say...
Amount of progress from session to session is on track or sufficient enough to end successfully.
And historically, those data, that information, those conclusions (conclusions seems a little too strong, inferences might be better) were based on initial scores. And you guys have just developed this success probability index, which I think fits in here.
Can you describe that a bit?

The trajectories were static, and they're still static. They're plotted based on a single score.
And it's true to say that if you were at or above the green line, you were progressing, although at the slowest and least amount possible to still be on track at the end. In our system, for example, with millions of cases, every single case with that start score that was above the green line at the 10th visit ended successfully. So it's a pretty powerful predictor.
But what we wanted was to provide a dynamic figure, one that gave you an opportunity to make adjustments sooner. Because let me tell you what we therapists do. We suffer from terminal hope disorder.
And that's a good thing in the room with a client. It's a bad thing to look at the client and go, that's it, it's done, I can't help you, you're done. That's not so good.
But we also have to have our judgments chastened by reality. So this figure takes into account prior progress levels. You don't get a static prediction from session to session; it uses the pattern of results on both the ORS and the SRS, and the combination of the two, to give you a prediction of the likelihood of success by the end of treatment.
So it's giving you a finer-grained tool. I like to use the idea that the trajectories are the boundary or border of the country: as long as you're in the green, you're in the land of the successful. The SPI, or success probability index, however, is a little bit more like a GPS.
It's giving you explicit directions about where to turn in order to optimize the journey to the final outcome that you're looking for and hoping for.
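The actual SPI model is fit to a very large case database, and its details are not described in the episode. The sketch below only illustrates the core idea of a dynamic index: a session-by-session probability that updates with the whole ORS/SRS history rather than the intake score alone. The weights, and the function itself, are invented for illustration.

```python
import math

def success_probability(ors_history: list[float], srs_history: list[float]) -> float:
    """Illustrative dynamic predictor: combines the current outcome level,
    the most recent session-to-session slope, and average alliance scores
    into a logistic probability. All weights are made up for this sketch;
    a real model would be estimated from a large outcome database."""
    recent_slope = ors_history[-1] - ors_history[-2] if len(ors_history) > 1 else 0.0
    level = ors_history[-1]
    alliance = sum(srs_history) / len(srs_history)
    z = -6.0 + 0.18 * level + 0.35 * recent_slope + 0.08 * alliance
    return 1 / (1 + math.exp(-z))

# Steady gains with a strong alliance versus a gradual decline.
improving = success_probability([15, 18, 21, 24], [36, 37, 38, 38])
declining = success_probability([24, 23, 21, 20], [38, 36, 35, 33])
print(improving > declining)  # the updating index separates the two patterns
```

Because the index re-weights the full pattern each session, a one-off dramatic drop that quickly reverts moves it far less than a slow, steady slide, which matches the behavior Scott describes a few turns later.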
So let me attempt to summarize this because there's a lot here, and then you edit what I say. So the idea was or has been, based on a client's initial score, you'll get to see, so if I'm now at session five with this client, is my client, based on that initial score, the system will tell me, is my client on track for having a successful outcome, or are they off track? And then if they're on track, that's great.
I might do a little work to figure out what is keeping us on track, what's going well here. If they're off track, okay, now I have that information that's going to facilitate conversations with my client consistently, changes in what we're doing, and then potentially, of course, changes in all that we're doing. In other words, I should no longer be doing it with you, you should be doing it with somebody else or something else because this isn't working.
Now, that historically has always been based on that initial score, but now what you all have developed is that whenever data is collected, so let's just say it's session by session, sometimes folks will do it every other session or whatever it happens to be, that what that score needs to be at session 10 is now influenced by all of the sessions, all of the data that has come before, not just from that initial score. So it updates regularly. So I have more information.
My guess is that for most clients, that change based on that update is going to be small, but it's still a more precise, but for some clients, I'm sure it's not. I'm sure it's notable that it gives me at session 10, when I'm looking at my data with my client, I have a more accurate representation of, am I on track for a successful outcome, or am I on track for a not successful outcome, so I should do something different?
Yeah, let me give an example that I think has a bit of intuitive appeal. The SPI, let's say that between sessions 2 and 3, there is a dramatic steep drop in the outcome scale. Now, most therapists intuitively know that, well, probably something outside of therapy happened.
Lost my job, got ripped off, my child is sick, etc. Most of us are not going to look at that downward turn and make our first assumption, oh my god, the client is going to commit suicide. You wouldn't think that.
And you would be right most of the time. The system is going to know that. Dramatic drops, because it's taking into account all of the combination of factors, are not really predictors of poor outcomes at the end, because most dramatic drops tend to revert very quickly.
However, your SPI is not going to be gravely affected by this dramatic drop. But if your outcome rating scale drops two or three points after continuously moving up, then you might see your SPI say, hello, wake up, this is something to pay attention to, because something is not right. Energy is dissipating slowly, and you don't want that to sneak up on you.
So it would give you a sharper piece of feedback about that gradual deterioration, given that it now knows which patterns of scores are associated with poorer and better outcomes at the end. It knows, in quotes, so to speak.
So I did my internship at BYU's Counseling Center, and so we used the OQ45. That was one of the reasons I went there, because I was like, I need to go to sort of the center of this work. And occasionally, a client would really underreport their initial distress on the measure.
And I always felt like, you know, it didn't happen often, but it did, and it makes sense that it would happen, because they don't understand why they're doing this, they don't necessarily trust it yet, whatever. And then I would always feel like, well, we need to reset here in the data, because that's not a real score.
And, you know, but after, you know, in the first session, second session, every session, at the beginning of the session, I sit down, I pull up my computer, I show them the data, and I say to them, first, does this seem about right? You know? And then particularly session to session, what's going on?
You know, was this—does this change seem about right? Is this—is there something—what happened in your life? What happened in here that maybe influenced it?
So when I read about the success probability index, I was like, oh, this is lovely, because this can help deal with this problem of...

Exactly.

Yeah, 10% of clients, 5% of clients, whatever it is, that really underreport. Now that gets to update, and so now I have something that's really usable.
And it would also mean that once it saw that decrease between the first and the second session, with a high score moving slightly lower, you're probably not going to get a super low success probability index, because it knows that a certain percentage of clients, these clients that self-correct at the second visit, are likely to have just as good trajectories of change. Whereas I think earlier we had to guess: wait a second, they started high, now they're moving low. What happened?
Was it something that we missed, etc.? Did we tip them into oblivion, into a more serious concern or disorder, because we all question them in a particular way? It's going to take that into account.
I think it's a real useful technological success that you all have done in developing this. I can't imagine the amount of work that went into it and the amount of clients and data points that were needed to do this.
That's the thing to be in awe of, the amount of clients and data points. Because as you know, the preponderance of people who go into treatment, about 90% in the United States, are done in 12 sessions or less. But many therapies go longer than that.
So to fill those sessions with enough data at session 13, 15, 20, so that we can make successful predictions, that takes a fair bit of time. And none of this should supplant conversation with the client. This is, to me, to be seen as an adjunct.
None of this obviates the need for client choice and clinical wisdom about what to do. So if I have a client whose success probability is low, we're at the sixth or seventh visit, and they say, please, don't cut me loose now. I feel like I'm on the verge of something.
Instead of saying, that's it, the numbers say you're out. I'm going to say, how many times should we give this before we revisit this? If they say three or four, I say fine.
And then I'm not bringing it up till the third or fourth visit again.
So this is a perfect lead in. So this is my, for anybody who's watching this, this is my copy from, well, so it was published in 2004. I think this is the first edition of The Heroic Client, your book.
Are we a couple of editions into this? Where are we with this book?
Is this the version with Jackie Sparks?
Duncan, Miller and Sparks.
Yeah, so this is the second edition.
Oh, this is the second edition?
Of that book. And what's happened since then is that we've moved from a focus on feedback really to deliberate practice. So that's been our most recent work.
Feedback-informed care, I think, really answered a puzzle. The puzzle was, given the diverse ways that we are working, how could we know whether or not we're helping a client, and what do we do when we're not? Measuring your results now opens up a door to look at your own deficits.
Who do I help? Who do I help less? What kinds of things do I do on a day-to-day basis that get in the way of optimal client progress?
We call those non-random errors. Are there particular types of people? Are there particular interactional patterns?
By looking at our data, we're able to develop a picture of our strengths and our deficits. And deliberate practice is a process that we've been researching and trying to describe: how do you fill those deficits?
How do you address them so that your outcomes can improve?
And high level, what are some of the major conclusions at this point? You know, sort of where are you now about how to deal with those deficits?
So, two kinds, random and non-random errors or deficits. Non-random errors really do require deliberate practice, focused training, helping you address whatever that deficit may be. Generally, that means I'm going to have to develop what we describe in the second to most recent book, called Better Results, a learning project.
So, a learning project is my plan for addressing this particular deficit. I find out, for example, that I tend to have slightly poor outcomes with men who happen to be angry and who are looking for specific direction and advice. Now, as a very traditional practitioner, I rarely was trained to give direct advice.
Stop doing that, start doing this. Many of my clients thought I was giving advice, but I secretly believed that I had not given a word of advice. That was their interpretation.
How do I learn to give advice? I create this learning project and I'm going to have to create a plan for filling that deficit, get some expert consultation, and then playfully experiment and monitor my results and see what, if anything, begins to make a difference.
So when I was a relatively, I don't know, a couple of years in as a graduate student and read your book, and then some of my colleagues, my peers, did as well, and we thought, hey, this would be great. Let's just, because the session rating scale and the outcome rating scale are, they take a minute. You know, they're so fast.
And we were, I won't say where the exact content, I won't say where we were working, but we were working in a context together, and we were like, oh, this is great. And so, you know, I went and I printed out, I don't know, 50 of each or whatever, and sort of we just put it on the tables because we were all, you know, in the same place, but on different days, so we could just share it and use it with our clients, and we got our little rulers so we could measure. And because we were naive, idealist graduate students, we were just like, oh, let's just do this.
This is fantastic. The person who was in charge of training wasn't very happy about this when this person found out about it.
That you were using the measures, or?
And so, you know, it resulted in an uncomfortable meeting with all of us. And, you know, we were just like, what? I mean, it was one of those experiences, you know, before the term gaslighting existed, at least, that was the experience.
Like, what the hell? What? What?
What? We did something wrong? Like, how is this wrong?
I don't understand. This is so noninvasive. This is like, what?
So, but I don't think that experience was unique. And so I'm curious about whether it's simply collecting these sort of routine data from clients and using them in practice, whether it's talking about deliberate practice or collecting these data to use them to facilitate deliberate practice. What is the pushback that you've gotten?
I think I can answer this question both personally and then with some research experience that we have. We're writing up a study right now, actually, with Joan Andrian as the lead researcher on an implementation project. So the first objections are always theoretical, but the second are really practical, because people are trying to make sense of it and fit it into their heads and into their way of practice.
So there are things like, well, when will I do this? And won't this interrupt the relationship? And what happens if I'm in the middle of processing?
And won't clients use this to avoid? Are clients truthful is another big question. Weren't they just going to lie?
And once you've started to use them, there is a critical thing that has to happen in order for people to continue for any length of time and begin to use them usefully. And that is they have to have at least one ratifying experience. And we know this from a big implementation project out of Vermont that Joan Andrian is writing up right now.
If you don't have one of those critical experiences where you go, oh my, I would have never known this, even if you don't say it to the client, but you realize that you got some crucial bits of information, then about half of the people stop using it at that point. And the reason is that you are, I think, soliciting a great deal of information for a very small return. This is protecting you against black swan events.
That's the whole point of it. 35% of clients, 25 to 35% are not going to make any progress. Between 5 and 10% are going to get worse.
So we're talking about a fraction of your actual clientele that are going to deteriorate. But that means that you have to administer it 100 times for it to be useful 5 times.
That's all upside down in terms of reinforcement contingencies. For me, it's just part of the process. It's the same thing as when I reach for my appointment book or open my calendar on my cell phone to write down the next appointment.
It's that integrated into the process. I would never try to keep my appointment straight in my head. Could I?
Maybe with a lot of effort, but why? This is so easy. And 90% of clients, the latest data say, like it.
This was going to be my next question, which is, what do you say when clinicians say, you know, this is too much to put on the clients, clients won't like it? Then I think the other thing, another thing would probably be, I have so much I'm doing already, how do I balance this as well?
Yeah, two different questions. The second one is easy to address. And the answer is, this one will actually help.
And let's figure out how to help you. Most of the time, in an agency setting, clinicians are exactly what you describe, overworked and running and buried in paperwork. That means that implementing isn't about management saying, oh, we're going to do this measurement-based care thing.
And I tell you, every week I get phone calls to do training, and here's a typical call: we are going to implement FIT. I say, okay, have you attended any training? No, but you know, we've seen the measures and they're not that hard.
I say, it's not about the measures, after which there's this long pause. And they say, well, what do you mean, it's not about the measures? I say, it's what you do about the measures.
Oh, they then say, our clinicians are very willing to do anything. I say, it's not about willingness, it's about way. Is there a way for them to do what the outcome data say?
What are you talking about, they say? Well, let's say transferring a case from one therapist to another. Oh, we have a policy against that. That, I say, is a big barrier to FIT.
So, the second question is very easy to answer: let's get you implementation support, because you're not lying to me when you say that you're inundated with bureaucratic nonsense. The first question is the one about this being a burden to the client.
And I say, have you had an experience? Can you tell me more about that? What are you talking about?
If they say, I can just imagine, then I say, you know, I don't have to imagine. I know what the data actually say. And by the way, and then I lean forward with a little guilt intended, I say, it's the same thing about cultural differences.
Clients want to talk about their culture. They want to, but we therapists have avoided it. So let's talk about the cultural difference between us right now, right here.
It will be okay. We can tolerate it. We will figure it out.
And until we get that out, it could be a big barrier to us connecting with one another, because it's like neither of us are saying the thing that might be the most obvious thing in the room. Same thing with measurement or with using the outcome tools. Most of the clients say that they actually like it.
But then, crucially, I have to give them some help putting it into practice, because the data get mixed up with our theories and our thinking. So let me give you an example. A client comes in with a high score on this scale, meaning that they're doing just fine.
We were trained to think of that as denial and resistance, when I think of it in a very different way. Clients come in and score high on the ORS; that is a non-problematic score.
And I say, you're doing actually really well. Yes, they say. And I ask, why are you here in my office?
My mom made me come, the teachers, my employer, the police, the judge, etc. Perfect. So you think you're doing well, they have a problem with you.
Yes. I turn the measure around and I say, fill it out as if you were them. We can use your estimate of them, because you're the weak person.
You're the person with no power in this situation. You just had to do what they said. So let's figure out what's going to mollify them.
So I have to give them some skills to use the measures usefully. Here's another thing. Most of us were not trained, at least I wasn't.
And I think, by the way, younger grad students, the current crop, are wicked smart, the most wicked smart generation of practitioners ever, and also exposed to feedback in a way that I never was growing up. But most therapists are not very good at soliciting feedback. Our questions are very limited, like, how was it?
Did you like it? Did you think we talked about what you wanted? And then our questions run out of steam.
And so we have to help people with skills about pulling apart clients' feedback and getting some action steps out of that feedback. And that takes time and some specific skill training.
So one of the things that you said earlier was having that, I can't remember exactly the language you used, but having that event that really helped you see the benefit of this. In addition to what you were talking about, the event where you're really seeing a person is off track and getting information, one of the things that I found it most helpful for was facilitating conversations about termination. Because you can see that line where a person becomes subclinical, right? I never took getting below that line in a black-and-white way, but I thought of it as conversation time: now I can say to the client, hey, this is what this means.
What are your thoughts about where we should go from here and what you want? And I have to tell you, it was always a little bit of a shot to my ego how frequently this happened. My thinking going into the conversation would typically be, ah, the client will want to keep coming because they're getting so much out of this, we're doing so much great work, and, you know, they're wonderful.
But the vast majority of times it was, oh, yeah, I feel great, I think we could stop doing this. And, you know, that was always a little shot to my ego.
But I always think that, and I don't mean it would have been bad for the client to continue, but in an agency where there are other clients, where there's a wait list, if we spend a couple fewer sessions that aren't beneficial to the client, and an agency with 30 therapists does that regularly, we could see hundreds, or whatever the number would be, of clients in a year who wouldn't otherwise have been seen. And I felt that constantly using these tools.
Yeah, and I appreciate very much your description of how this can facilitate conversations that are hard for both therapists and their clients, which is how do we say goodbye? At the end of an office visit with my dentist, I don't just linger around standing there. The dentist looks at me and says, okay, that's it for now, you can expect some numbness, probably here for a little bit longer, if that doesn't go away, or if you have any more tooth pain, call me back.
Did you understand what I said? Yes, boom, and it's over. Now, the other thing: I don't think, well, I guess I can never go back if I have another tooth problem, because we terminated and it's the end of it forever.
I mean, that's kind of silly. So the implicit message, which is not stated explicitly in therapy very often, is: you come back if you have another issue. You have this conversation when you can see that one more session is not additive.
It's not going to add anymore. Is now a time we should think about spacing out? Another thing I do quite frequently is I'll use an analogy again to learning to play a musical instrument.
See, in the beginning, weekly lessons are good. If you're still doing weekly lessons two years from now, there's something wrong. Because now you have to practice more in between, simply because the pieces are far more challenging, difficult, lengthy, et cetera.
Same thing in therapy. Should we start to taper here? Should we space out here?
And I, like you, am often surprised by clients. By the way, even when the client's outcomes are negative, not positive, and I say, geez, you know, this hasn't been helpful so far, I've been surprised by the number of clients who say, yeah, don't you think it's time to talk with your team about this and what we're doing here?
Is there somebody else who might help? You know, because at that point, it's not personal. It's professional.
I'm hiring you for a job. It's not that they don't care or don't like me, or that I don't have a true affection for them. Most of them I do.
So why don't we end on this? So if a person has listened to this conversation, and they're in practice, and they're saying, all right, I really am attracted to this, I buy what they're selling, what would be a resource that I could put in the show notes that they could go to to get started?
I think Better Results, which is a book that was released in May of 2020, two months into the pandemic, is probably the most current coverage of FIT, and it adds this whole piece about how to use the feedback for professional development. There's a book that follows that called The Field Guide, but I would say give yourself a chance to really digest the Better Results book before you go on to The Field Guide. Secondly, go to my website.
I have 10 years of blog posts, videos, and a YouTube channel featuring lots of different people. Every three or four weeks, I put out what we call a FIT Tip, which is a two-minute video saying, here's how to get the most out of the measures or the feedback process, et cetera. All of that's there on my website.
And just to be clear to the listener: Scott has a system, a computer system and things that you can pay for, but the measures themselves are just free.
As are all the videos on the website, they're all free.
So you can get started with this without spending a penny, and you can spend an entire career helping your clients with the resources they've developed without spending a penny. And if you want to get one of these systems, which are lovely, absolutely lovely, then you can pay for those as well. But I just want to make clear for folks that the barrier to entry here is very low.
So anyway, so Scott, let's do a real fake goodbye and thank you. This has been wonderful. I can't tell you how much I appreciate it.
It's my pleasure.
That's a wrap on today's conversation with Dr. Scott Miller, to whom I want to send my sincere appreciation. We dove deep into some truly fascinating topics, and I hope you found it as enlightening as I did. Don't forget to subscribe to the podcast for more episodes like this, and to share it with anyone who might enjoy it as well.
I appreciate your support. Until next time!
