Psychotherapy and Applied Psychology: Conversations with research experts about mental health and psychotherapy for those interested in research, practice, and training

Do therapists get better over time? Discussing therapist expertise with Dr. Terence Tracey

April 09, 2024 Season 1 Episode 1

In this conversation, Dan and Dr. Terence Tracey talk about therapist expertise. 

Terry explains the importance of routine outcome monitoring (ROM) and the limitations of therapist expertise, and he and Dan delve into hypothesis testing and the value of seeking disconfirming evidence. The conversation covers therapy effectiveness, growth as a therapist, and the role of feedback in improving therapists' outcomes, highlighting professional self-doubt, the willingness to ask questions, and the need to challenge and let go of ideas. They examine the distinction between structured and unstructured professions, with a focus on the ill-defined nature of psychotherapy, and explore the role of deliberate practice, supervision, and ongoing feedback. They discuss the lack of improvement in therapists over time, the potential impact of training programs on therapist growth, reactions to these findings, and the importance of minimizing confirmatory biases. Finally, they turn to reflective practice: the barriers that keep therapists from engaging in it, ideas for incentivizing it and building in time to reflect and generate hypotheses, the challenges of integrating reflection into training and certification, the performative nature of expertise, differentiating good and bad therapists, and the value of outcome data.


💬 Text the show!

☎ Leave a voice message on Speakpipe
🎞️ Video version of the show: @PsychotherapyAppliedPsychology on YouTube
🎧 Listen on your podcast player of choice
Connect with Dan
🔗 LinkedIn
🐥 @TheAPPod on Twitter
📬 TheAppliedPsychologyPodcast@gmail.com





Transcript


This is it. This is the first episode of the Applied Psychology Podcast. Welcome. And I have to admit that I'm feeling pretty nervous. I'm sitting here in my office by myself. I can re-record this as many times as I want. And I'm still feeling pretty nervous. Funny how these things work. But hopefully this will be okay. It'll be enjoyable. We're gonna go through this together. So if it's alright, if it's tolerable, do me a favor, please subscribe to the podcast. And if it's not, if it stinks, do me a favor, if you're willing, come back in five or ten episodes, because hopefully it'll be a little bit better by then.

So today I am very excited, very excited to be inviting in my first guest. And you know, one of the things about being in this racket for a while as a researcher, as a professor in applied psychology, one of the nice things is that you get to meet and know a lot of people, and you get to meet and know a lot of your heroes, and you get to even be friends with some of your heroes. And that's the case with my first guest today. Today we're going to be covering research on therapist expertise. And this is kind of a contentious topic. In particular, my guest, with several of his colleagues, has written a series of papers critiquing the idea that simply by being a therapist for a while, simply by putting the reps in, you get better: that you as a therapist, your expertise grows, and that you become more effective. Rather, he argues, and we get into this in the conversation, he and his colleagues argue that therapists need to be engaging in certain specific activities to get better. And frankly, what he argues, and they have good data to support this, is that therapists on average aren't engaging in those activities. And frankly, that it's not as much a part of therapist training programs as we would like to see. And again, we get into this in the conversation.

Because of the nature of our conversation, I want to make sure to bang a couple of things home. First, and we have tons of evidence to support this, therapy is effective. Therapy, on average, is actually quite effective and is often just as effective, if not more effective, than medications. So let's not lose track of that in this conversation. Secondly, therapists can get better over time. If they do engage in certain activities, certain behaviors, and sort of push themselves in certain ways, they can become better. They can become more effective. So I think that this conversation can feel a little bit pessimistic in a way, but I sort of left it actually feeling rather optimistic. Like there are lots of things that we know how to do, that we can do, to facilitate therapists becoming more effective. So it thrills me to no end to have with us for our very first episode, Professor Emeritus from Arizona State University, Dr. Terry Tracey.

So trying to get into this, I was like, all right, what's a cool way to enter into this conversation? Because, you know, obviously, like, I read your stuff, I know your stuff. But again, for the handful of people that listen to this, they don't know your stuff. Right. So like, it's got multiple purposes. Yeah. Right. So like, how to have a conversation so that the handful of people who listen to it, who haven't read your work, will be able to follow along. Yeah. So one of the ways I thought we would get this started: I've been playing around with all the different AI chatbot things.
And so I used Google's NotebookLM, their new AI assistant. And I pointed it to, I think it was five papers that you guys have done on expertise. Well, I went through several prompts, messing around with it, but I said: explain the primary arguments that the authors are making, like you're explaining it to a child. So I pointed to the five papers, or six papers, and I said, hey, explain the primary arguments to a child. So what I'm going to do is I'm going to read you the primary arguments that it gave me. Yeah, let me, I'd be curious. And then you tell me what you think. And it's kind of fun, because it is sort of a little kindergarten. So it gave me four things.

So it said, the first thing is: proving therapists get better with experience is tough. There's no surefire link between experience and how well a therapist performs their job. All right, so the second thing: some therapists may not be great at the things they say they're good at. Just because someone calls themselves a therapist doesn't mean they're an expert. Third one: therapists can improve their abilities by working with other therapists and practicing specific skills. Doing these things can make therapists better at their jobs. Then the fourth and final one: it's super important to know what good therapy looks like. This helps us understand what therapists should focus on to improve their skills and give clients the best care. So those are the four things that it gave. So in terms of your primary arguments, what are your thoughts?

Yeah, not bad, I think. Let me see if I could rephrase them so I'd be happier. First, and this sometimes gets missed, therapy is effective. No one disputes that. But there's a lot of variance across therapists. And so how should we evaluate treatment? Our argument is the only criterion that matters is outcome. There are many other things that therapists differ on, but the one we think that matters is outcome. Does the client get better? Can we document that? Given that, therapists do not get better over time, or with practice, or with training, or with awards. And so that calls into question the idea of expertise, because expertise is associated with lots of practice, lots of experience, and you master it and you can get better over time. But we don't document that. It doesn't happen. Certainly some therapists get better, some therapists get worse, but on average we stay the same, or actually there's data to say that we get a little bit worse. And what can we do? I think at this point, a lot of that's just conjecture. We have a lot of good ideas, but there aren't any data to show that these work.

So there's a lot you said there, and I want to get more into it. So let's just back up at the start. Two questions. How did you get into this area of research? And then when you first got into it, what did you think that you would find versus what did you actually find?

Um, so I can point to a time when I said, oh, let me kind of look at this, but really, throughout my career, I have been fascinated, both as a scholar and as a therapist, by clinical decision-making. So I took several courses back when I was a graduate student on that and was fascinated, and I kind of got in with Meehl's book and with Robyn Dawes's book, and all of their work, and Kahneman and Tversky's work. I always thought these were important pieces that
very few people in therapy knew about. So I kind of had that in mind all along and tried to think about things in that way for my own therapy. I was always struck by the lack of feedback I got about how clients were doing. You know, when we would terminate, I kind of went by what they said: well, I'm doing a little bit better. But that didn't really help me do anything. I mean, it's very vague in general. Because that was certainly before we were collecting any kind of data. I would make up my own instruments and collect them with my clients just because I wanted something. Um, so I kind of started a session-by-session, very imprecise, more of a how-are-we-doing kind of questionnaire, five questions. And I just gave it to them so I could see. And I found that very helpful, because that told me things that I might not have picked up otherwise.

And was that like, how's therapy going? What was that like?

How are we doing? Are you feeling like we're communicating well? So kind of a little evaluation, this was before the Working Alliance measure was around, something like that. And the other thing I did was I contacted my clients six months later because I wanted to find out how they were doing. And what I did with that was I made some predictions about how I thought they would do. And in some cases they were confirmed, and in a few others I was kind of like, what? I never saw it coming. And it told me that there's just a whole lot of stuff I didn't have information on and access to. And so clearly some of the assumptions I made about some clients were just wrong, because they did not turn out the way I thought they were going to turn out. And so that made me really do some thinking about how I could improve that. And that's when I also started looking more closely at the heuristic literature and the clinical decision-making literature.

So when you're saying stuff that you didn't expect, it sounds like you're alluding to, like, you thought that they would be doing better than they ended up doing.

Or I thought that they would be kind of stuck in this rut, or either way. And so some people I thought, you know, we did okay, I didn't think they would do that well. I thought they would continue having relationship problems. And lo and behold, some of them didn't. And so, you know, that raises questions. Maybe something happened afterwards, but what did I miss? Or in the ones that didn't get as well as I thought, what did I miss? So I found those cases more interesting, and they helped me think about my approach much more than the cases that were confirmed. You know, because I had to think about them. If this client I thought would kind of go out and make very slow, inroad kind of progress on, say, starting relationships, and indeed they did, kind of hit and miss, that's fine. It kind of confirms my ideas, but when I see that, I kind of stop thinking about it. For the other ones, those are the ones that, well, what did I miss? And I started to rethink the things I could have asked or tried to pick up that I didn't. And so that's what got me into thinking about hypothesis testing and disconfirmation. And that's clearly what the decision-making literature was saying.

So, all right. So one thing I'm realizing is, you know, I think I sent you a bunch of questions
in like an order. I'm quickly realizing that it's good I made the questions, but if I stick to this order, like, I'm not gonna be paying attention to you. And no, you say what you want. Like, I'm going to bail on that. And I'll probably try to hit most of them, but I think it's more interesting for me to pay attention to you. Because, all right, there are several things. So one thing you said there, and I think that this is an important thing to sort of hit home: in your work, you talk about how routine outcome monitoring, some sort of outcome assessment, how there's value in that, and you talk about the limitations in terms of the research, but just sort of the idea that there's evidence that when you do routine outcome monitoring of your clients, it results in better outcomes, that on average clients are going to do better.

So one of the problems with that is we then assume that that makes me a better therapist.

That's what I wanted to follow up on, which is differentiating those two things.

Yeah. Because the literature says, yes, if I apply this, particularly in the signal way that Lambert does, it has an impact. You get a message saying something's not right here, do something. You get a message saying things aren't going as well as they could or should, and you do something. We don't know what people do. We don't know what's effective in those moments. We just know you give them something, you scare them, and they then change approach and they get a better outcome. The other part of that is, we are kind of dependent upon that. People don't get any better in the future having done that. But with those cases, they're better. So it helps me get better outcomes, but I'm still kind of wedded to that system. You take that away, I'm not going to do anything. I'm not learning.

I was thinking about it when I was reading your stuff. The question is, what is therapist expertise? And the term that I use is growth, right? That therapists are going to be getting better, that there's that trajectory that goes up, that over hours, client hours, over months, years, decades, on average, my clients are going to be getting better and better. And that's probably not a linear slope, right? I would guess. But right, so that's how I thought about expertise: it's about growth over time, not just outcome, but outcome growth over time for my clients as a whole.

Right. No, the idea is that I'm learning, and I've learned these things over a long time. Time, practice, extra things. Nobody talks about people having expertise without practice.

In your work, you talk about how feedback is necessary but not sufficient. Yeah. So that's what I was talking about with respect to, you know, routine outcome measures. So how is it helpful? How does it help me grow as a therapist?

Well, the literature isn't showing that it does. But I think you can't do it without the information. So I think it has potential, and I think it's how it gets used. I mean, this kind of comes down to solutions to this. One of them is really focusing on and using this material in a very, again, I think, planful kind of way. So: why is this working? Why is it not working? One of the things you see in the routine outcome measurement literature is, even if people do it, it doesn't mean they pay any attention to it.
Or they don't really utilize it, or they don't ask a lot of questions about what the implications of this are. So I think that's where it's necessary. The information is there, but we can't make people use it. And I think also what using it means is you really have to have some hypotheses. So I need to know, okay, what am I trying to do here? What specifically? And then what effects should that have on, say, my routine outcome measures or other behaviors that the client demonstrates? So I need to have specific things I'm going to look for that are indicative of what my hypothesis is. And then ideally, I have alternative hypotheses. So if I'm thinking, well, I'm having my client engage in this kind of homework, just as an example, I should see the scores on the routine outcome measurement, or ROM, I'm just going to go ROM, get better in this area here specifically, and not in these other areas. Because that's what we're working on. And I could use that for feedback about whether what we're working on is actually improving. And if it's not, I really need to think about something else.

So there again, okay, so that helps me understand: is what I'm doing with my current client helping them in the ways I would expect it to? And if it isn't, then change things up or whatever. So that's going to help with my client right here. So how does that facilitate my growth overall as a therapist?

Well, okay. As you start to do that and as you start using this, I think it will, one, pay off with better outcomes, and your behaviors are going to be rewarded because they are working. And so you will see in documented evidence that this stuff is working. My clients are getting better. But just applying the ROM now, you're getting that panic of, I've got to do something. But it's not necessarily planful. It's not necessarily well thought out. It could be shooting from the hip. Probably with me, if I got that, it'd be, oh my god, and I'd just kind of panic and do something different. It's not necessarily planned for, and I think if it's not planned for, what am I to learn from that, other than when I'm getting this message, panic, and then that my panic pays off?

So then the idea is that if, you know, I'm doing this hypothesis testing with all of my clients regularly, I'm going to be getting feedback on my hypotheses. And I'm going to start to see patterns over time. Like, oh, my hypothesis that when I would do this, it would facilitate this change in my client: when that's not supported again and again and again, then I'm going to say, okay, I need to take this out of my repertoire, or add this other thing into my repertoire. So it's not just about getting the feedback, like creating a hypothesis and having evidence supporting it or refuting it for this particular client; it's the pattern across clients over time that lets me grow as a therapist.

Yes, and further refine your hypothesis testing. You'll come up with newer, better hypotheses over time because you can see more of the nuances of them. And yeah, so that's where you're going to get it. ROM just provides data that you can use. Certainly, there are other data you can use, too, that are actually more immediate interaction with the client. If I have hypotheses about how things are going and that there are certain issues, I can.
So generally, what we do, maybe I'll speak for myself, with clients is I'll go in, I have a vague idea of what we're going to do. I might have some broad treatment plans, and I go in and see where the client comes from, and I kind of play off of that and come up with a few ideas. And maybe I'll make them formal or not. But if I don't make them formal, I'm never really testing my ideas. And I think the only way for me to know I'm doing something wrong is for me to actually test it.

So when you say formal, what does that mean?

Actually saying, this is what I expect to happen, and this is how I'm going to get evidence that that is what is happening.

It almost sounds like, if I was a clinician and I was adopting this approach, that would be something I would put in my weekly note. Yeah. Like, what's my hypothesis? How can I look at the evidence? You know, just a little.

Yes. And I think you have to make it explicit. Writing it down is one form of doing that, versus kind of having a fuzzy idea that isn't particularly well articulated. So I've supervised a lot of people. So a client comes in, and after a lot of interaction, I notice that the client is very flat in affect whenever we start getting around to talking about the father. And there is a literature that says that could be associated with abuse. It's also an example of conditional probability, but other than that. So, okay, maybe I think, well, maybe there's some abuse here. How would I go about finding that out? So I could decide, I'll do a little bit of confronting to see the effect that this would have on the client. And if I'm right, what should the client do in response to that? So if this abuse thing is a correct assumption, what should the client do? What would I expect to see? And then I look for those behaviors. And that's a good thing to do. But beyond that, I think I need to really think about alternative hypotheses, because I could work real hard at confirming the idea that this young woman was abused by her father, because I'm only asking questions that are going to confirm that. What is also important is, whatever my hypothesis is, what should I look for that would tell me, no, I'm wrong? Because it's really easy for us to confirm our hypotheses. So I think, yeah, the three steps: you specify hypotheses; you specify things that you look for, and it could be in ROM, it could be in the behaviors of the client; and then I look for alternative hypotheses and alternative indicators, roughly, that would disconfirm it.

So, let me tell you one of my big faux pas which brought this home to me. I'm working at a counseling center. Well, actually, I was an intern at a counseling center. And I'm meeting this client that comes in, and she looks to me kind of classically depressed. She's really kind of low affect, very kind of low energy. She's kind of flat. And so I'm looking at her going, kind of a classic depressive. Probably, I'm thinking, we might need to get her into some programs, to join clubs or things like that, work on social skills, things that would kind of get her out and with others, because it didn't seem like she had a lot. Fine. And so we're meeting. We have a few sessions, and I think I'm doing a hell of a job. So, I had to leave quickly after our session, and I wound up following her, about 20 yards behind her, as she went down kind of the main street.
On the main streets on campuses, there are bars.

Not on purpose following her, just to clarify.

Not on purpose, because I had to go, we were going in the same direction. No, I'm not stalking her. Just want to make that clear. So I'm walking behind her, and she walks past this one bar, it has a big kind of picture window, and all of a sudden there's banging, and apparently people inside the bar are banging to get her attention. They come running out the door of the bar and they go, oh yeah, Mabel! And they're all excited, and she starts beaming, and she's really kind of talking heavy, and then she goes into the bar with them, and they're sitting at a table in the window, and she's just going off and talking to all the people there. And I just stood at the window with my mouth open, because I did not see this as being part of this woman's constitution at all. I didn't ask any questions about those sorts of things, because I thought she really was a classic depressive, didn't have a whole lot of social skills. I'm watching her run the whole group. There are eight people sitting around the table. She was running the group. And I'm like, wow, did I miss the boat? And I missed the boat because I just didn't ask questions. And I didn't ask questions that would try and disconfirm what I thought. And that's when I learned the value of looking to try and disconfirm your ideas, because it's too easy to go down an inaccurate line of thought.

So you thought that this client, that this sort of depressive appearance that she had when she came into your office, that this was sort of how she lived her life. This was everything in her life. And she had really no social skills.

Right. And it's a reasonable, yeah, it's a very common picture among depressives. And I just didn't ask otherwise. And so obviously our treatment took a big shift after that. Because I was thinking, yeah, I need to get her engaged, I need to get her some social skills. No.

Right. Oh, I see. Right. Because earlier you said you were thinking of getting her involved in programs, or more interpersonal. And in hindsight, after that, and this is kind of an interesting example, because typically the typical therapist isn't going to see your client out in the social world. Right. And that's kind of why, but yeah, I hope not. But you did. And it gave you the information that your approach, what you were doing with this client, was just wrong.

Yeah. So yes, I can't rely upon following clients. But I can rely on looking for information that disconfirms my ideas.

Right. So you could have asked her about her social life experience, whatever, to collect some data on that and realize, oh, perhaps she was in a depressive episode, perhaps, but it was much more nuanced than just the general things that I was thinking of.

Right. And this is something that you talk about in your work, which I think is important, which is, you're generating hypotheses to take a disconfirmatory approach.

This is what really brought it home, yeah.

Right. So: what data would tell me that I am wrong? Right. Because, one, we don't like that. And so we don't naturally ask those. If we ask questions, we ask them in ways that will show that I'm right. Right.
And so again, getting back to sort of the routine outcome monitoring, this idea of growth. The idea is that we probably do this all of the time in life, and so as a therapist it would be the same thing: I'm going to have certain assumptions I'm going to make. And it's not just about allowing these assumptions to be disconfirmed for this particular client. Because again, let's make the assumption that that helped that particular client. But it's that if I do this again and again and again with my clients, I'm going to learn, oh, these assumptions that I'm making, these intuitions I have, these seem to be on track, these seem to not be on track. I need to reframe how I think about my clients in general, which leads to the growth.

Yeah. And the more I do the disconfirmation approach, the more I see the benefits of it. And in a sense, I'm being reinforced for it. So that's going to increase over time, because this is not a natural thing we do. We do not say, I want to go see where I'm wrong. Right. And so it's hard.

Yeah. So I was just listening to an interview with Danny Kahneman, of course one of the fathers of this whole sort of heuristics stuff. And one of the things that he talked about was, he said, you know, ideas are cheap. But that's not the human intuition. And of course I'm paraphrasing here, I'm probably getting some of it wrong, but this is what I took away from it: our human intuition is to hold onto them, that our ideas are good and valuable. And so by creating these alternative hypotheses, even just the act of doing that helps us to find them less valuable and be willing for them to be disconfirmed and to throw them away. Yes. Just by coming up with those alternatives, that sort of natural thing happens. He's talking more in the area of research, but yeah, I think it's an interesting idea as a therapist: instead of, I'm gonna have this sort of intuition, this is the case, just by coming up with alternative possibilities, I might hold on to that initial one less tightly.

One of the things, you know, there's been literature on how therapists who characteristically have good outcomes are different from the rest of us. Right. Because there's a lot of variance. And there are not that many things. But one of them, and there are just a few studies that find this, but it makes sense given this other context, is kind of a professional self-doubt. And so they're much more willing to ask, is this right? Am I doing this right? Is this the best way to think about my work with this client? One of the things we do know about experience with therapists is that many things change. One of them is confidence. As therapists have more experience, they get much more confident. But what we're finding is that that is unrelated to outcome. I mean, in some cases, even contraindicated for outcome. And the people who have more doubts or are less confident are the ones that are the best therapists. And, you know, I think what we're talking about with doubt is that ability to look for disconfirming evidence. Like, well, I'm not sure about this. What else can I see? Where is it? Or a willingness to admit or look for things that they're wrong on. I think that's really what that professional doubt is capturing.
It sort of makes me think that this idea of confidence or doubt, that it's complicated. So if we just measure therapist self-efficacy and therapist doubt, that's probably not going to tell us the whole story, because it's probably dynamic. Well, therapists have confidence sometimes, but then question themselves. I would imagine there's a threshold there, where you can't question yourself all of the time, right? It's sometimes. And there's probably value in having some confidence: you have to have some confidence to be willing to, you know, have a somewhat uncomfortable interaction with a client, right? To be able to bring this thing up, I have to have some confidence in my ability. But if I have too much confidence, I'm not going to question myself. So it seems it's probably a delicate dance.

Well, yeah, I think clearly you need some baseline confidence in doing things. But I think what I'm talking about here is confidence less so in me and confidence more in the process. So if I'm asking these questions, even though I'm doubting myself, I'm not really doubting myself, because that's part of who I am. That's just part of how therapy is done. I ask questions, I look for these things, I look for examples of where I'm wrong. I really think there is a willingness to look for errors.

So, following up on what you said about the process, one of the more striking findings to me as a researcher that I got from your work is that, sort of, in the wild, we don't see these... Okay, so therapist experience has been associated with things like competency using certain therapeutic techniques and even relationship skills and... go ahead, okay.

No data there. No data there.

Okay, so fill in the blank here. There is an association between experience and certain therapeutic things I do, getting better at certain things I do in therapy.

A lot of other things, again, change with experience. Self-confidence goes up with experience. What you were talking about is kind of the performative functions of what we do, the performance. With more experience, if I am a cognitive behavioral therapist, I start doing things that are more cognitive behavioral. Experts could look at that and go, ooh, you are getting better at doing cognitive behavioral therapy things. And so all of these are kind of, you are engaging in the behaviors you're supposed to be engaging in, and that improves over time. Those behaviors are not associated with outcome. So we have a whole bunch of things we change on. And that's nice. And indeed, much of the literature supports that. And if you look at where much of the literature is, it's in supervision and training, because we have a captive audience. And so you look at that: people change in terms of self-efficacy, people change in terms of usage of different basic counseling skills. They get better at the basic counseling skills. But all of these are kind of performance things. And they're all judged by our picture of what a good behavior is. Ostensibly, those of us who are longer in the tooth or more expert, we look at these videos and go, oh, that one's better, this one's not so good. So there are some ostensible standards there. It has been difficult for us as a field to come up with real explicit standards. I mean, just the whole idea of judging fidelity to treatment, that took us like 30 years to come up with.
We have a few measures, but even those are kind of gross. But it took a lot of effort to get this far, because, one, we don't agree on what that is. There's a whole lot of difference. And then when you start talking about what good therapist behaviors are across different theoretical sets, good golly. There's no consensus. And so it becomes difficult to judge what the best stuff is. With experience, we do tend to better approximate whatever this ideal is. But that ideal is nice, and it's not related to outcomes.

So what you're saying then is, over time, we see therapists getting better at a lot of performative things, but not so much the stuff like the therapeutic alliance. Of course, that's the one that's constantly brought up as, you know, if we can count on anything, we can count on a correlation between the therapeutic alliance and outcome. Yes. Stuff like the therapeutic alliance, we don't see that getting better over time. Or there are others, I think about the edited book, was it Psychotherapy Relationships That Work? Is that what it's called? Yeah, yeah, yeah. I think it's a great book. Each chapter in there, they take one of those constructs, and they have meta-analyses, but they really flesh out how that construct is related to outcome. So, you know, empathy, of course, is in there, congruence, these sorts of things. So we don't see therapists getting better over time, on average, on those variables; we see them getting better more at the performative stuff. So that would be, if I'm an analyst, that would be dream interpretation. If I'm a cognitive behaviorist, it could be thought challenging, or it could be behavioral experiments, or whatever. But not those constructs that actually are associated with outcome.

Right. And another thing: we get much better, depending on how you evaluate it, at, say, case conceptualizations over time. Our case conceptualizations get much more sophisticated, coherent. But they're not necessarily, see, I would argue that they don't have the hypotheses. They could have hypotheses, but they don't. But we can talk a much better game about clients with experience. And I can do it much more efficiently with experience. Show me a bunch of clients, I can go, okay, I quickly get the details and can make a nice picture out of that. We get much better at that. You get better at talking about clients, better at thinking about clients. There is even evidence in some studies that we think about therapy differently with experience. So stuff is going on, but it just doesn't relate to outcome.

So this gets to, you talk about this coherence versus correspondence criteria. And these terms aren't terms that you came up with. These are terms that exist in the literature. So the coherence criterion is things like your ability to do stuff that is consistent with sort of common theory or practice. Yes. A consensus best-practice thing. Right. But what you argue is, yeah, that's not really that helpful in this context, because if it's not associated with some sort of external evaluation of client improvement, then who cares?

Roughly, yeah. So the literature on expertise across professions says there are two criteria you can use. So just to summarize it for those of you that don't know these terms.
Coherence, which is: the profession agrees to some strictures or some guidelines for how things should be done, and that's how you evaluate good practice. And the other one is called correspondence, which is: there is some external reality that these things can be compared to. And that's what we're arguing for. The only external reality is outcome. Does the client actually get better?

The coherence one, if I could give an example, two examples. Medicine for years struggled with the same issue. So go back a few hundred years. The best practice for easily 200 years was bloodletting and leeches. And that was defined as the best practice, anything else wasn't. And a few of the practitioners during that era that had done some studying in some of the Arabian models, where they really thought about things differently, were banished, because it was bad practice, even though their clients were getting better. And so here you have an ostensibly rational, well-thought-out logic behind treatment that lasted for several hundred years in spite of data saying that this is wrong. So it was clearly a coherence criterion. But there is an external reality: if someone's getting sick, I let blood, they die. So you have some pretty good indicators things aren't working. So what you need is a good external criterion. And so for us as a field, what we would need, one, is to agree to what that is. And that's where the problem with some of those performance measures comes in. We can't agree across theoretical schools. There's lots of writing on what good practice is and what it should look like, but we can't even agree on that. So we can't even get out of the gate with regard to that.

The other one, and I will give a plug to your research, is, you know, if you think about some of the stuff you find in suicide, there are very clear guidelines for best practices and what should be done as soon as anybody even mentions suicide or has any kind of tinkering near it. And your data show that that doesn't work. But we hold on very dearly to those guidelines.

Right. No, yeah, actually, that makes a lot of sense. Even in reading this, I was thinking about all sorts of things, but I wasn't thinking back to that, to my own work, actually.

You're probably presenting one of the best arguments for it, because the coherence of the suicide recommendation is so universal.

Right. Do risk assessment. Yeah. Yes. I mean, it's banged into people's heads. And if you challenge it, that's clearly bad practice.

Right. And so sort of the idea, too, with your disconfirmatory approach is, whatever it is that's the standard in the field, or even within your particular orientation, if I actually create a hypothesis and allow that to be disconfirmed, I'm going to get evidence that supports or refutes my hypothesis. And then I can actually move forward with that.

Right. Yeah. But we don't naturally do that. We don't like to do that.

Yeah. So let's spin into the deliberate practice piece. I think that this is very interesting. So, okay, there's probably a lot here. Well, of course, there are many books written about this and, you know, much ink spilled. It's a hot one now. Yeah, it is. So just real quickly to give the background, I sort of pulled, for folks who don't know, what is deliberate practice. These are, I think, the four pieces in one of your attempts to define it: a commitment of time
to improving, getting coaching, getting ongoing feedback on performance, and practicing and refining particular skills. And when I think about deliberate practice, I think the easiest place for me to see it is in sports. Yeah. Right. So you're thinking about music, too. Oh, sure. Right. And so you can think about, you know, a basketball player learning to, you know, just countless hours shooting free throws. But it's not just shooting. Well, one nice thing about shooting free throws is you get immediate feedback: did it go in or not? And I mean, you actually get better feedback than that, because if you're trying to hit it against the backboard versus not, right, if I'm trying to hit it against the backboard and it goes in but it didn't hit the backboard first, then yeah, it went in, which is great, but that's actually not what I'm trying to do. So you're getting the feedback on the outcome, but then through your own sort of appraisal or evaluation of what you're doing. And then often from external, from a coach, which could be anything or anyone giving you feedback: you bend your knees a little bit more, you flex your elbow this way, that way. So you're committing to improving your free throw and you're getting lots of coaching and feedback to practice that skill.

So, okay, you talk about how there's work on deliberate practice in structured and unstructured professions, and I think that's a nice way to think about how this fits into psychotherapy. So could you explain that, perhaps using a sports or music example, and then talk about where that fits into psychotherapy from that structured versus unstructured framework?

Well, you know, this whole idea kind of started with the original expertise research in chess. And the language there was, you know, chess is a very well-defined thing, you can only do certain things with certain pieces. Ill-defined was the term used for the other things. And one reviewer hated the use of the word ill-defined, so that's why we use structured and unstructured. But the literature typically uses the term ill-defined. And that just means that there aren't real clear guidelines for what is and what is not, constraints on the thing. So with basketball, it's fairly well articulated. You have to stay in certain areas of the floor. You can do certain things. There's a very clear rule structure about that. So that would be much more structured. Music, probably not quite so much, but still fairly structured. You have compositions, you have to kind of do that. And so these are fairly well structured. And the benefit of having a structured thing is typically you can get very explicit feedback like you're talking about: yes, this is working; this isn't working. And so it's much easier to use that. With ill-structured domains, the feedback is much less frequent, much less specific. So you don't always know. So that's why it's a bit harder. That's why, with expertise in ill-structured domains, you don't see it as much, at least in the expertise kind of literature. And where you do see it, it's defined, as we talked about before, in the coherence way: standards that we all can agree to, versus an explicit outcome or explicit result.
As you were talking, I was thinking about what the structure is in psychotherapy. We do things that are structured. So we do typically have a 50-minute hour, and of course there are other models, but we do typically have a therapist and a client sitting in an office or a room that's quiet. And we do have some general structured frameworks for thinking about it. But at the same time, unlike basketball or chess, those are, I don't want to say arbitrary, but somewhat. If there are data to suggest that, you know, going for a walk with your client, you know, there are all sorts of wacky models of therapy. And some of that, I'll stand by the idea that there are some wacky, goofy, and problematic things, but it's certainly also possible that there are some, what to me seem wacky, models of therapy that are tremendously effective. So in a way, the constraints that we do have, that we do place on ourselves, are kind of arbitrary, and there's nothing, other than obviously, you know, ethics codes and things like that, that actually does keep us restrained in the ways that we often are in practice.

Right. No, basically it's wide open. And we can do all kinds of things. And, you know, one of the things that we try and teach students is some limits on what might be appropriate and inappropriate. Right. And so I think that's one of the things we do. But even there, there's a lot of variance in things that are done. And once people get out of training, I have to admit, in some cases I am outright appalled at what people do. But it's viewed as appropriate or acceptable, or, well, we don't really have real clear guidance as to what is and is not. That's where it's very ill-defined. We can't agree on standards. As a field, we can't agree on much of anything.

So in this context, in the psychotherapeutic context, which is so ill-defined, where does deliberate practice fit in, in terms of helping facilitate therapist growth or expertise?

It's harder because it's ill-defined. But it's crucial. We talk about what and how. So I think this has been taken in the literature a few different ways. There is a big deliberate practice movement now where they are training in very specific skills in, say, certain treatment modalities. So it's defined by specificity: we're going to teach you just this skill. So cognitive behavioral decatastrophizing skills would be a good example. And we will have you practice it. So that's the other aspect of the current applications of deliberate practice. First it's kind of going in and getting a lecture, like the old continuing education kind of thing. And there's a big business in this now. It's all over the place. And so those are the two basic things. They're nice. They're nothing new. They are new for us, but that's how, you know, the literature on feedback and skills training says it should be done all the time anyway. I mean, that goes back 80 years. So we have these things, and a little bit of research has been done on them. And they say, yeah, you know, I think people get a little bit better in the demonstration of these skills than they would if they just sat in the class. Yes, and that's great. And I think these are fabulous things. These are very nice. Again, we have no data
that the demonstration of performance of those skills is related to outcome. So there's that. But what we are doing is getting people to do a very good job of coming up to standards, if you will, in the demonstration of these skills. So it's that performative thing. We're kind of getting that. We're training it very specifically. And since it's so specific and you have practice, it's pretty effective. Yeah, that's great. But given that structure, I have serious doubts about how it's going to show up in outcome, because we have no data to show that it does.

So what you're saying is, the skills that folks typically use deliberate practice for in psychotherapy are those behaviors that generally aren't correlated with outcome, or at least there isn't good evidence that they're correlated with outcome. So do you think that there's a way, and my sense is that you do, that we can modify that deliberate practice model to actually help us grow and get better as therapists?

Well, not to beat a dead horse, but I think honing these skills is a great idea. But I think what matters is when and how I use them. And that's an issue of, again, that clinical decision making. So if I decide with the client that this might be an appropriate skill or intervention, fine, what's leading me to that? What kind of evidence would I look for to show this is working? What kind of evidence would I look for to show this is a wrong move? And I think those are the things that matter. And so clearly, if I can do a nicer job of doing the skill, it'll grease the skids if it's appropriate. But if I just start plugging this in all over, it's not going to be good. In my example earlier, the woman I saw who was not just plain-Jane depressed: if I had kept going after enhancing her social skills and stuff, I would have been wasting my time. I may have learned how to do this, I hope I have good skills in terms of doing that, but I would have plugged them in in an inappropriate context, because I didn't ask more. So I think getting these skills is good, but knowing when and how to use them and evaluate them is the key.

So that becomes deliberate practice. Well, it's not currently, but I think that's what it should be. In your mind. Yes, that is in my mind.

So there was a very nice study, one of Goldberg's studies, at the Calgary Counseling Centre. You know, the two places where they got all the data, the longitudinal data across therapists, are BYU and Calgary. And one of the differences at Calgary was they had everybody get together and talk about the data they got from the ROM. And they really had everybody participating in this. The whole environment did this.

But by everybody, you mean all the therapists?

All the therapists. This was an agency-wide, structured intervention. Everybody would talk about their clients, talk about their ROMs, and talk about the implications that this had for the working alliance. And so what they're doing there is probably generating some things about what seems to be working, what seems not to be working. How can we do that? How can we evaluate it? I mean, they talked about it, and although they didn't put it in those terms, they kind of had to be including at least some of that.
And they found that over time, there was a little gain in terms of therapist outcome scores. It was the only one we have where there's a gain over time. And there, the authors attributed it, and I think it's a good attribution, to really the structure of the center. That helps people think about this stuff. They engaged in using the ROM, they engaged in how to think about it. And hopefully, I don't know the specifics, but, well, what difference could there be for this? Could it have been wrong? You know, if you get some of those questions going into that too, I mean, that's the stuff that I think we need to be doing more of. You need the basic information. So like deliberate practice, I need to be able to do the skills. And like ROM, I need to know how my client's doing week to week. But what do I do with that? We have to engage in the process of how to think about those data. So they are necessary, but not sufficient.

And so that's why, this sounds like, oh, sorry, go ahead. Yeah, that's it, go. Well, I mean, it sounds like, at least in training, a clear way of doing supervision. And I think the other implication is really that this could be done in supervision. Currently, supervision does not result in better outcomes. Overall, if you just randomly sample supervision, generally it doesn't help therapists have more effective outcomes. Right, right. But again, there's variance. Some people get better, some people stay the same, some people get worse, but on average.

So, okay, big picture argument here. Your argument is that on average therapists do not get better over time. Some therapists do, but on average they do not. Right. Oh, here, let me back up a second. And part of that not getting better over time is that there's not evidence that going through training programs helps therapists get better over time, right? We would expect, if all was right in the world, that if we took a first-year student seeing clients, a second year, a third year, a fourth year, a fifth year, a sixth year, that we would see their clients' outcomes improving. Right. Over time. Right. But we don't see that. No.

So when I was reading and thinking about this, I was thinking, well, part of that, right, is that overall effect. So it certainly could be the case that some training programs do result in better growth over time. And even if you get more specific than that, because within a training program, different students have different experiences: they have different clinical supervisors, different clinical contexts. So how much do you think that sort of variation between therapists who improve and therapists who don't improve, how much of that is a therapist issue, like some therapists are just going to do the things that help them get better over time, and how much of it is a training issue, that some training programs are going to facilitate that growth while other training programs won't or don't?

Well, I think it's kind of both, ideally.
But we have lots of training programs, and they don't tend to result in better outcomes. I remember, way back when I was a student, I came across one of those cross-sectional studies that looked at outcomes across people at different experience levels: first-year student, second-year student, intern, professional. And there were no differences. And I was going, why the hell am I in school? I brought it up to one of my professors, and he just pooh-poohed it: that's a stupid study. And he left it at that. But we see that result again and again and again, across many different schools. So there may be a few programs out there that do better, but again, that's an empirical issue.

So you're making some contentious arguments here.

Gee, no.

You're making several. What kind of reactions have you gotten?

Oh, yeah. Some are quite nasty.

Like what?

Well: how dare you, you're just ruining the profession, you're taking away the credibility of the profession. No. Psychotherapy works. That is, without a doubt, true. What I am perhaps getting people to think about is something else. One person who was very upset started his argument with, I've been doing this for 40 years, and I know a thing or two. And I said, yes, you do. But I bet your outcomes aren't any better than anybody else's. And that's what he took offense at, because that's what the data say. So we are challenging the very appealing idea that older clinicians are wiser clinicians. They are better at performative behaviors. They are better at talking about clients. They probably think about clients differently than novices do. But they don't get any better outcomes. If I could get people to do anything, it would be to step back and ask why. Why am I not getting better? Look for disconfirming evidence, and change how we go about doing this. The skills are there; it's just that I'm not using them appropriately, or at the right times, or in the right way. That professional doubt is something I think we need more of, rather than professional bravado. Several people have said that.

And here's one reaction. We gave a presentation of this at SPR.

The Society for Psychotherapy Research.

Yes. And people there were not real happy. It resulted in a rejoinder: several authors got together and wrote about why we were wrong. They suggested a whole bunch of different ways we could evaluate expertise, but all of them were performative and none of them were outcome. And yes, they cited data that we do get better with experience at all that stuff; there's a lot of it. Anyway, and I found this very disconcerting, one of the members of the audience, someone I know and consider a friend, a top-of-the-line researcher, came up to me and said, do you really believe this? I said, yeah, this is what the data say. No, no, no, I know what the data say. Do you really believe it? And that took me aback. It's as if my beliefs take precedence over research results, and this is a well-respected researcher. Why are we doing research if, when something conflicts with my beliefs, I throw it away? I have a hard time with that, and I think that's a struggle a lot of us have. And I think it's a good representation of the confirmatory approach too.
So even in research, if I just go about looking for things I believe in, the only risk is that I get nonsignificant results. Generally I confirm the things I like, but I never really subject them to an attack, to evidence that I'm wrong. And we all know nonsignificant results don't really mean anything. So even researchers do this. It's a human-nature thing, so I don't think therapists are worse than anybody else. It's just that for us to get better, we need to minimize it as much as possible.

When I was reading this, I found myself having a couple of different reactions. One of them was pushing back; in my head I got a little defensive through different parts. But another part of me was thinking, therapists are just like anybody else: they get up, they go to work, they see their X many clients a day, and then they go home. It's not really built into the system to do stuff that's going to facilitate their growth. It's very human for therapists to have their shtick and mostly do their shtick, and disrupting that is disruptive and extremely time-consuming. Yes, I could do all this reflective work, but I have a family, I have a mortgage, I have all this stuff. If I have extra time and energy, I'm either going to spend it on family, friends, and hobbies, or I'm going to see more clients. It's not incentivized for me to spend a couple of extra unpaid hours a week evaluating my own practice. So is there a way for us to incentivize this? Is there a way to integrate it into the system, if you will?

Yeah, I agree with that entirely; it mirrors my personal experience. I remember working at a counseling center, cranking through eight clients a day. I would find that if I fell upon a particularly astute interpretation with my first client or two, damned if I didn't apply it all day long. What I found was that the more clients I saw, in part due to the time constraints you're talking about, the worse job I did. If I could cut back and take some time to think about them, I did a much better job. So generating hypotheses, thinking about how a session went and what I could have done better: those are the things necessary for us to develop expertise. But the field is not built that way. We're pushed to see so many clients that we don't have time to think about them, and we have to do all this paperwork. You could integrate it with the paperwork, but there's so much of it that there's no time; it's all I can do to get the facts down on paper. I don't have time to go to the bathroom. So we're expecting people to do things that nothing is built in to support. And I think that is one of the big reasons why we don't improve with time: we're not allowed to, in a sense. Building it in would take revision at many levels of practice, starting with individually setting aside time so that we can review our clients.
And that's what we found in the data on the best therapists. They're the ones who set aside the time to do that. They had that professional doubt; they were examining their practice in that way. So yes, it's a huge commitment. The best approach I can see is building in procedures for talking about clients, kind of like the Calgary Counseling Center: build it in so pretty much everybody's doing it, everybody has to do it. We learn from each other. We're building models of how to think about it. When I watch you go through it and say, I said this and it was wrong, that's helpful to me too, because I can use that. Maybe not your conclusions, but I can see the process you went through and go, oh, that's how you do it. So I think that's a way we could do it. But again, both of these take time, and we're not set up to devote that time.

Well, it does. So a couple of thoughts. One is that an obvious place to start is in training.

Yeah.

You could build this model into training, because you're doing supervision anyway.

Right.

So in a way it's not more time than you're already spending; it's just structured in specific ways.

Right.

And I'm also thinking there are ways a clinician could integrate this into their practice without spending too much time. One could just take an extra 60 seconds for each client and say, okay, what's a hypothesis that I have? How would I test it in a way that could disconfirm it? It's not as sophisticated as the center-wide supervision stuff, but it's a small thing one could do. It might take a little time when you first start, to get your head wrapped around it. But I bet you could just do it in your note, real fast: what's the hypothesis I have? How would I test it? And then revisit it at the beginning of the next session: okay, I'm going to try this.

Yeah, the more you can do, the better, but we're not naturally wired to do that. That's why building it into training, where we do have some control, might be a good way to set this as a standard. If I'm used to the process, I know how to do it, and I've been through it a bunch of times, then I can pare it down and be more efficient. But I need to go through it a bunch to get there, and training would be the place to do that. Someone picking it up and just doing the quickie version without that background, it's not going to be much, because they won't be used to asking in a way that will get them the data they need.

Right. Right. Got it. Is there something you would suggest as we wrap up here, a framework people could use to create a hypothesis in a way that would start to send them down that path?

Well, as with everything, do it. The biggest thing is to come up with a hypothesis and come up with how I can disprove it. That, I think, is the big question to always ask, and we don't ask it. Because as soon as I ask it, I'm looking at a whole bunch of different stuff, and that always has benefits, everywhere. That's a general truism of psychology, and there are not many.

Well folks, we did it.
We made it to the end of the first episode of the Applied Psychology Podcast. How was it? I don't know, I thought it was a good conversation. How was it for you? Let me know. We'll work on getting better together. I want to give a special thanks to Dr. Terry Tracey for joining me, and I will see you all next week on the Applied Psychology Podcast.