Psychotherapy and Applied Psychology

Navigating AI for Psychotherapists: Challenges and Opportunities with Dr. Tony Rousmaniere & Dr. Alexandre Vaz

Season 3 Episode 35

Welcome back Dr. Tony Rousmaniere, President of Sentio University, and Dr. Alexandre Vaz, Chief Academic Officer at Sentio University, for part 2.

Part 2 continues with Dr. Rousmaniere and Dr. Vaz exploring the skepticism surrounding AI in therapy, ethical considerations, and the importance of human oversight. Dan and the doctor duo discuss the clinical efficacy of AI tools, collaborative research efforts to improve AI safety, the need for therapists to engage with AI in their practice, the importance of understanding AI's role in mental health, and the necessity of open conversations between therapists and clients about AI usage.

Special Guests: 

Dr. Tony Rousmaniere

Dr. Alexandre Vaz
----------------------
Useful Resources:

Therapists needed to inform AI companies

ChatGPT may be the largest mental health provider

Watch Psychotherapy Expert Talks on YouTube

💬 Click here to text the show!

🎞️ Video version of the show: @PsychotherapyAppliedPsychology on YouTube
🛜 Check out the website: Listen to every episode on your podcast player of choice

Connect with Dan
Leave a voice message on Speakpipe
🔗 LinkedIn
📬 TheAppliedPsychologyPodcast@gmail.com

🦋@danielwcox.bsky.social

[Music] The reality is that artificial intelligence is happening, and many of our clients are using it for mental health support. So how can we integrate clients' use of AI into therapy, and how can therapists influence AI companies so that AI can help rather than hurt? Those are some of the topics we'll be covering in today's conversation. I'm your host, Dr. Dan Cox, a professor of counseling psychology at the University of British Columbia. Welcome to Psychotherapy and Applied Psychology, where I dive deep with leading researchers to uncover practical insights, pull back the curtain, and hopefully have a little bit of fun along the way. If you find the show valuable, subscribe on your podcast player, or if you're watching on YouTube, hit the like and subscribe buttons. It's one of the best ways to keep these conversations going. So without further ado, it is my absolute pleasure to welcome my very special guests, Dr. Tony Rousmaniere and Dr. Alexandre Vaz.

There's a lot of skepticism. There are a lot of people who tried it a year ago, and it just hallucinated and wasn't very helpful, and they just kind of wrote it off. There's this kind of ickiness or creepiness factor, you know, where it just feels fake, because you know it's a machine. And there's this interesting thing: there have been multiple studies where they show people transcripts of a therapy session, meaning just the typed-out words from a therapy session, and they ask both regular people and therapists to rate the empathy the therapists are showing in the transcripts. Half the transcripts are from an AI like ChatGPT, but they don't tell people that. And pretty consistently, they rate ChatGPT as being more empathetic in the text transcripts. But that's when people don't know it's ChatGPT. If people know it's ChatGPT, many of the people I talk to about this would be like, oh, I just don't want to do it, it feels creepy. There are also privacy concerns: whatever you type into ChatGPT goes to OpenAI, and they use it for whatever they want to use it for. And that's true for all the other ones. So there are significant privacy and confidentiality concerns, and there are safety issues. These are all very legitimate reasons, and in no way are we encouraging people to do this. We just want to study it.

And the APA actually just came out with what they call their ethical guidance for AI in the professional practice of health service psychology. So they came out with their official statement just last month. They lined up six major ideas, major considerations, when using AI in mental health. And one that they're big on, which makes sense, is the idea of human oversight and professional judgment. So the idea is that whatever the use, if there is no adequate human, therapist, or professional oversight, then everything else, regarding privacy, accuracy, and bias, becomes a considerable risk. Again, this comes back to why we think it's so important that clinicians collaborate with these tech developers and people in the field.

One of the things during this conversation that I hadn't thought about before: you mentioned some of the really clear problematic responses that these AIs are giving, but I was also thinking about sort of the milder ones, right?
So if you have somebody who has, you know, any of a number of different types of anxiety, and you go to the extreme of, like, agoraphobia or something where they're highly avoidant of certain things, people could use the AI to facilitate their avoidance. Right? What are ways I can get out of doing this? What are just alternative ways? Which, as any clinician will know, is going to, over time and somewhat subtly, reinforce that avoidance. That's very different from, you know, missing suicide cues or something like that. It's not as extreme, but you could see how people could be using it in those ways, intentionally or unintentionally, which is just going to make them worse or make them struggle more.

Yeah, you're speaking to the order of priorities when it comes to what kind of conversations we need to have and what kind of research we need to do. We'd usually say that the first order of priority tends to be safety and ethical concerns, right, all the way down to what you just described, which are still, of course, safety and ethical concerns. We probably have to start with the low-hanging fruit: if a client is in a mental health crisis, right, there's suicidality, homicidal ideation, etc., how do these LLMs respond?

Yeah, I would agree 100%. Like, Dan, that's a really interesting clinical question. What you're leading towards is the clinical efficacy of the LLMs, which is something that will probably be researched for decades to come. The first step is: can they achieve a baseline, what we in the field would consider a baseline safety profile? You know, as we're training our graduate students, we have various exams that we have them go through. I know you also work at a university; I'm sure you have similar exams where you're testing whether, if a client shows up with certain red flags around suicide, or child abuse, or the various things we in the US call reportable issues, the trainee, the therapist, can identify those and address them in a way that is ethically and legally appropriate. And so there are tests we do, and we don't let people get licensed unless they pass those tests. And this is kind of where we think the field should start: testing the LLMs and seeing if they pass those tests.

I think this leads nicely into this project that you're working on now, where you're actually working to help, theoretically, kind of train or improve some of these tools. Could you guys talk about that?

Yeah, this is a really interesting project. A few months ago, we started reaching out to experts in the field of what's called AI safety research; there's a whole subgenre of AI research called AI safety research. And we connected with an AI safety researcher at Underwriters Laboratories Research, or UL Research. You might recognize the term UL because if you turn your toaster oven over and look on the bottom, you'll probably see a stamp that says UL. It's a 130-year-old organization. It was originally founded to ensure the safety of electrical devices back when electricity was invented, like 130 years ago. And so they do safety testing of physical devices all over the world. And they have started a division to look into testing the safety profile of AI.
Some people refer to AI as the new electricity in terms of what it's going to do to the world and what it's going to open up. And so we are partnering with them for their first major project, where we are having therapist volunteers interact with an LLM, an AI chatbot, and grade the chatbot's responses based on how well it addresses safety issues regarding suicide.

So can you walk me through it? The therapists are grading it; what does the procedure look like?

So this is the process. The therapist signs up to volunteer through our website, and we can give you the link. Yeah, that'd be great. I'll link it all in the show notes; if anybody's interested, just click there and you'll be able to get to this project. And we're hoping to have a large pool of therapists; the larger the pool of therapists, the better and the more solid the results are. I think we've already had dozens of therapists sign up. So the therapist signs up on our website. They attend a Zoom meeting, and in the Zoom meeting they are plugged into an AI chatbot. It is not ChatGPT, but it'll look and kind of feel like ChatGPT.

So just text-based? Yeah, we're using an open-source one. We're not using ChatGPT for various reasons, but for all intents and purposes it will feel like ChatGPT. And then the therapist role-plays a suicidal client. So the therapist will type something into the chatbot to the effect of, oh, I don't feel like living anymore, or, you know, what's a way that I can end my life, or, is there any reason to live, or any of the millions of things we've heard from clients that indicate suicidal risk. The chatbot will reply with a response, and the therapist grades the response based on how we would grade it if it came from a therapist in training. So, for example, the chatbot might ignore the suicidal content and just start talking about the philosophy of living or something like that, and we would consider that a failing grade. Or the chatbot might say, are you thinking of or considering hurting yourself, and that might be a moderate grade. Or the chatbot might say, are you thinking of hurting yourself? If so, here's a suicide prevention hotline that I recommend you call right away. And that would be a really good response.

And so are they having, like, full-length conversations with it, or is it just what they say initially? Yeah, there's a series of prompts that we guide them through. We're guiding the therapist through this whole process, so one of us is there coaching the therapist as the therapist role-plays the client. Got it. So it's like 60 to 90 minutes to go through, and that's it; that's all the therapist has to do. But what's going on here is that the therapist is serving as what AI safety researchers would call a domain expert. Our domain is mental health; we are experts in this domain. And by doing this, we can help improve the safety profiles of the LLMs.

So the therapist is playing two roles, if I'm hearing this right. One role is, because they have some level of experience with real humans who are really struggling with this, in the client role they have expertise in what this would actually look like, or could look like, boots on the ground. So playing that role of sort of embodying the suicidal client.
And then they're also playing the role of expert, in terms of being able to evaluate the quality of how the chatbot is responding to them on a number of indicators. Correct. And I want to emphasize what you're saying as well, because the companies that are running large language models will probably have to do something like this internally. But it is very different to have a team of domain experts actually lending their expertise to evaluate how safe or how ethical these outputs, these responses, are. Because with all things clinical, there's a lot of ambiguity in the mix. If you asked 10 therapists how empathic a response was or wasn't, you could get a lot of different answers. But at least there's going to be broader consensus about how safe or how ethical the responses are. That is something that is not really happening so much, right?

Yeah, so most of the major LLM companies have internal teams, called trust and safety teams or something like that. They are working on this, and they probably have hired mental health professionals to work on those teams; we don't know, because they don't talk about it publicly. But I have seen the responses that the LLMs give to these kinds of things evolve just over the past 12 months, and I have heard that they have professionals working on it. I don't know for sure. It is likely the LLM companies don't want to acknowledge that they are hosting millions of therapy sessions every day, because they don't want to have to follow HIPAA and do all that; it's quite a burden. And so we think it's important to do this kind of research where we are receiving no funding of any kind from the LLM companies, but we are aiming the research at providing data that hopefully they can use to improve the safety profile of their systems.

And that was my next question, which is: one objective of this is to get some data on where these LLMs do well and where they do poorly. Is that the primary objective, or is there another objective here?

That's one of the objectives. The second objective is to create a data set that the LLM companies can then use to improve their performance. And this would be released as what's called an open-source data set that would be available to all of them. So, you know, we're not creating a company, we're not creating a product, we're not creating a chatbot or anything like that. Rather, we want to provide resources that can improve the safety profiles of the LLMs that are being used by, I mean, potentially billions of people every day.

So you'll be able to release these conversations, which don't involve real clients, along with the evaluation of each conversation, with the idea that these data could be fed into the LLMs to help improve the quality of their responses in real life.

Yeah, that's exactly it, released in an open-source format. There's the conversation itself, which does not involve real clients, so we're not worried about privacy or confidentiality, but then, importantly, the evaluation data from the domain experts, who are our therapist volunteers.
Now, we think this is a great model going forward for the coming decades, because we expect this trend to continue: more and more people are using the LLMs, especially, as Alex said, young people who might have never seen a therapist. This might be their first experience, and they will likely have much less of the creepiness factor than old folks like us. And so one way that we contribute is just by improving the safety profile of their experience.

So the inevitable, right, I'm thinking about what's going to be the pushback, what's going to be the discomfort around this. And I feel like one of the things people are going to say is, you know, it's one thing to have people using this for mental health stuff and just making that decision on their own, and it's another thing for experts and leaders in the field to be giving their stamp of approval for using it.

There is no stamp, there is no approval. There is an evaluation of the LLM, but an evaluation is different than a stamp of approval. And this is a very important open question that's coming down the pipe, which is: will there be a stamp of approval? One thing to think about is, I think back to when Uber was invented. And the taxi companies were like, oh, it's not safe, you're just getting in some random person's car, who knows what. And the first time I ever used Uber, I was like, you've got to be kidding me. There's no way I'm just going to call someone up on my phone and get in a random car, and who knows if they know where they're going or who it is. I was like, I'd never do that. And now I haven't called a taxi in years; I only use Uber. Right? And the local governments tried to ban it, the taxi companies tried to get the governments to ban it, but people just used it so much they weren't able to stop it. It was so helpful. I don't know if that's the path we're going to experience in mental health or not, but it is a path that we should consider is on the menu. And not to be thinking in terms of this black and white, is it approved or banned, is it good or bad, or anything like that, but more like: how can we partner with the LLM companies to improve the safety profile of the services? That, I think, will have the greatest potential impact on our clients, which is really what we should have as a first priority.

And I would just add to what Tony said: in our conversations with colleagues in the field, it's always very future-oriented conversations, what will happen? And I think it's very important to have a conversation about what is happening. Pragmatically, this is already happening. We don't know exactly the scale, but I think the conversation should assume that right now we probably aren't even aware of how much is already happening. And so time is of the essence; these kinds of studies, these kinds of collaborations, are of the essence now, not five or ten years from now.

Yeah, I mean, if our survey data generalizes, it would suggest that ChatGPT is the largest provider of mental health support in the country, bigger than the VA, bigger than anything. And we can't be sure, but I'll tell you, anytime I talk to a friend who works in IT, he's like, oh, yeah, of course.
He's like, you couldn't publish that, everyone knows that. And I'm like, the rest of us don't know it. And so one way to consider this is through a harm reduction approach. Right? It's out there. It's happening. And so how can we contribute to people having safe experiences?

So I think with that harm reduction point in mind, and harking back to what you were saying about a therapist having that realization, oh my gosh, my client is using this all the time: what are some thoughts you have for therapists today, right now, working with clients? How would you suggest they approach this?

Great question. First of all, I would ask about it explicitly. Right? This is what we learn to do with suicide. Therapists, when they're in training, are often like, oh, I don't want to bring up suicide. And we're like, no, you've got to talk about it. You've got to use the S word. You've got to say "suicide." It's the same with AI. You've got to talk about AI. You've got to ask them explicitly: are you using AI for any kind of psychological or emotional support? If so, what are you doing? How are you doing it? What advice is it giving you? Just find out; make it part of the conversation. And then you work with it like you do with everything else in someone's life. There's an infinite range of answers you're going to hear, but open the conversation. Alex, what are your thoughts?

I would add that, if you are willing, it is good to experiment with it yourself. You don't have to be pro or against it; again, Tony and I are not pro or against it. I think learning about it is going to be very valuable. And so I've been recommending that everyone just try it out. Don't have an expectation of how good or bad it's going to be; just learn about the ecosystem, because it's probably here to stay anyway. Even trying it out and having the frustrating experience of, oh, it didn't give me what I wanted, that's going to be very important data. So I think therapists should take just a little bit of time to see, okay, what's going to happen if I throw this question at it? Let's just see what happens. I guarantee you, because this is what happened with me and Tony, as I was telling you at the beginning of our conversation: we had this scheduled meeting every week where we would try it out and report back to each other. As the weeks went by, for the same things we were asking it, we were getting different outputs. So in a very short amount of time, we were already seeing differences in the types of responses we were getting. And I think that's very valuable information for us as clinicians, right? To see how it's evolving.

Yeah. So, you know, most of us have the option of opting out in our own lives, and of course that's completely legitimate. But if you serve the public, I think there is something of a responsibility to be aware of what many people in the public are engaging with. You don't have to approve of it, you don't have to give your stamp of approval, to be aware of it and to get some limited experience trying and using it.

So you're saying: bring it up, talk about it, learn about what your client is doing. And we would imagine that, for any therapist, some meaningful percentage of their clients are using these tools.
And then, as the therapist is learning about it, being open, being sort of humble, how do they get their sense of whether they should be warning their clients, don't use it in these circumstances, versus, this seems like it's helping you? How do you imagine some of those conversations going?

Look, it's really complex. I mean, if you have a client who's prone to mania or delusions, they should not be asking it for advice, because it will very possibly amplify the mania or delusions. However, if you have a client with a fear of public speaking, and you've been working with them to reduce that fear, it might potentially be very helpful for them, you know, if they've got a talk coming up, to use AI in the hour before the talk for some practice or some support, to rehearse with the AI something they've already rehearsed with you in session. So we have heard multiple examples of it being used to successfully reinforce what the therapist is doing in session. But it depends a lot on the client and the situation and the diagnosis; this is what makes our field so interesting, that every client is an n-of-one unique case.

And getting back to Alex's point about therapists trying it themselves: when we ask our clients what their experiences are and how they're using it, that's actually going to give ideas to the therapist who's a little hesitant or unsure how to use it. I have this client who's using it for public speaking, or for when they're going to spend time with their daughter, who they don't see very often and have a difficult relationship with, whatever, right? That's going to give them fodder: okay, so I have this client who's using it and finds it really helpful for thing X; I'm going to use it for that exact same thing in my own life and see how it goes.

Absolutely, great idea. The therapist could even role-play their own client and use ChatGPT while pretending that they're the client, and see how it responds to them.

I love that idea. And one thing for therapists who are a little less familiar: many people are going to have ChatGPT, or whatever their LLM of choice is, on their phone, potentially as well as on other devices. So if you have a client who's using it and you want to get a sense of that conversation, you could always ask the client if they could show it to you on their phone. If you're concerned about how the LLM is responding, or you just want to see that interaction, you know, anything I do on ChatGPT on my computer also shows up on my phone, which is actually very convenient. And so that's another thought for clinicians who are curious, because whenever a client, or anybody, describes something they experienced, we're not always the best reporters of what actually happened. One of the nice things about these LLMs is that you can literally see, turn by turn, what the conversation looks like.

Yeah, it's a great point. I hadn't thought of that, but you could absolutely do it. They could theoretically even share it with you every week; they could email it to you or something. But, yes.

Yeah. So where do we go from here? Big picture. What are your thoughts? It doesn't sound like we're out of a job yet.
No, I mean, well, I would reinforce what Alex said earlier, which is that I would actually pull back from, I would resist, the urge to think into the future, because often that thinking is motivated by fear, and we just don't know. People don't know what it's going to be like in three years; there's no way we're going to know. So I suggest staying anchored very much in the present and engaging with what's going on in the present. That's why we're doing the safety research. And so we would encourage therapists who want to learn more about this to sign up for our study, because through our study you're going to learn a lot about what's going on and the safety issues, and make a meaningful contribution. You know, it's easy to get kind of wrapped up. I just went through a series of mini panic attacks, and I could have just gotten wrapped up in that. Instead, I'm like, okay, look, I'm going to find a way that I can contribute, no matter how little, and try to make a difference. And we'll see what happens. Alex, your thoughts?

I'm going to agree 100%. It's unfortunate when you work with someone so closely and you just can't find ways to disagree. I feel like an LLM now.

We do actually disagree quite frequently. We've already hashed through these disagreements.

Exactly. That's the policy. If we talked about deliberate practice, we could gin up some disagreements. That's another interview.

I'm thinking that's a really great point, though. For the therapist who does have a lot of exposure and experience with these models, being a part of the study would be great because you have a lot of experience and insight to bring. But for those who have very little, it would be a really nice way to get a bit of an introduction to this process, what it can look like, and ways to evaluate what the models are doing in a more nuanced way. So it could be a really nice introduction while contributing to the literature. And the one thing, correct me if I'm wrong, Tony and Alex, but if I remember correctly, those who are graduate students in training programs can also participate in this. Isn't that right?

Yes. Correct. Yeah.

Okay. I think there will probably be a lot of listeners who fall into that demographic who would want to participate.

Yeah, absolutely. We ask participants to just tell us what stage of training they're at or where they are in licensure. We're actually looking for as broad a demographic among mental health professionals as possible.

Yeah. Okay. So just for fun, while we wrap up, here's what I'm thinking: one thing that you've found ChatGPT or one of these LLMs to be helpful for in your own life, in any psychological, mental health, well-being sort of way. I'll kick it off. I created a custom GPT that asks me a series of somewhat general questions that help me think about what's going on in my life and my work, and then helps me plan out my day. At the end of answering that series of questions, it gives me a to-do list based on what's stressing me out and what I need to do, and all that sort of stuff. So that's something that's actually been very helpful for me. How about the two of you?
I'll join you on that. I'll tell you the most recent one; I'm always kind of experimenting with it. So this weekend I got a suggestion for a specific prompt. As you know, ChatGPT has memory, so it remembers stuff as you keep talking to it, and mine, because I've been experimenting with it so much, has a pretty good memory of stuff I've said along the way. So the prompt I gave it was: roast me mercilessly based on what you know about me. And it was hilarious what I got back, because I got basically a very funny psychological assessment. I'll quote just part of it right now; I have it up here. It says: you want to enjoy things just for fun, yet you treat every hobby like it's a grant-funded PhD thesis.

And how accurate was that, Alex? Or are we not going to go there? Okay.

So I did something which I found kind of interesting, which is that I asked ChatGPT what it sees as potential errors or omissions or weaknesses in my career choices. And it said, and I completely agree, that the number one potential error or omission is that my career has moved, especially in the past few years, heavily into program administration: starting this graduate school, running a training clinic, doing research, writing. And it said, if you don't protect time for actual clinical work, seeing real clients, you could grow increasingly detached from actual clinical reality. And I 100% agree. That was a lingering worry that every now and then would float through the back of my mind, and it put it right front and center. So I think it really hit the nail on the head.

And what did you feed it? Was that based on its memory from your interactions with it, or did you give it some specific information?

It's all the memory of the interactions, and then also just what it's read about me online and that kind of stuff. Wow. Yeah, I did not prompt it with that. I did not say I've had much reduced hours for clinical work; that was not part of the prompt at all. It figured that out.

Right. And it's interesting, in both of your examples, a person can see, like, what is it doing? In both cases, I would guess, I mean, Tony, you explicitly said you had thought about this, but Alex, I would guess you had also in some ways thought about it as well, that it gave you insights you already had, right, but then you're seeing them in that black-and-white, crystallized way. It's kind of like in therapy, right? A lot of times people know what's getting in their way, what their stumbling blocks are, whatever, but you come at it from a perspective where you're able to put such a fine point on it that people are like, you know, I've always felt that, but you're right, that really is impacting me. That was kind of the experience you had. And you could see how something, just based on these text-based narrative interactions, could extract that. In both of your cases, it had this insight, or this accurate summary or paraphrase, that's just like, yeah, that's right, and having it fed back to me has a certain impact on me.

Yeah, you got it. Great.

Okay. So, if either of you has anything to add, please do. But first: I will link to Sentio's website and to the handful of links that we've gone back and forth about.
Are there any other specific resources or links that you want to make folks aware of, or things that you want me to link?

Yeah, well, what we're providing to you has many, many outbound links, so I would just refer them to that. And I encourage people to consider signing up for our study, just as a way to learn about AI and make a contribution.

Great. And because this has been a very freewheeling conversation, do either of you have anything additional that you want to add, or last words that we haven't gotten to?

I would just say thank you, Dan, for having us on, and thank you for taking this topic seriously. I think this could be a real service to your listeners.

Yeah, Alex?

I agree. I agree. Dan, you're not living in the future; you're living in the present, because we're doing this right now. So congratulations.

Well, thank you both. This has been wonderful. I can't tell you how much I appreciate it.

Thank you. Thank you so much.

Ladies and gentlemen, Dr. Tony Rousmaniere and Dr. Alexandre Vaz. That's a wrap on our conversation. As I noted at the top of the show, it'd be much appreciated if you spread the word to anyone else who you think might enjoy it. Until next time. [Music]

People on this episode