Psychotherapy and Applied Psychology

Artificial Intelligence and Mental Health: A New Frontier with Dr. Tony Rousmaniere & Dr. Alexandre Vaz

Season 3 Episode 34

A "triple-header" for the first time on Psychotherapy and Applied Psychology! 

Join Dan as he welcomes Dr. Tony Rousmaniere, President of Sentio University, and Dr. Alexandre Vaz, a clinical psychologist, teacher, and researcher at ISPA-University Institute in Lisbon.

The episode starts with Dr. Rousmaniere and Dr. Vaz explaining the evolving role of large language models (LLMs) like ChatGPT in providing mental health support. Initially met with skepticism, LLMs have gained traction as tools for emotional support, with users reporting varied experiences. Then the doctor trio discusses the differences between specialized AI chatbots and general LLMs, usage statistics, and the implications for therapy.

Special Guests: 

Dr. Tony Rousmaniere

Dr. Alexandre Vaz
----------------------
Useful Resources:

Therapists needed to inform AI companies

ChatGPT may be the largest mental health provider

Watch Psychotherapy Expert Talks on YouTube

💬 Click here to text the show!

🎞️ Video version of the show: @PsychotherapyAppliedPsychology on YouTube
🛜 Check out the website: Listen to every episode on your podcast player of choice

Connect with Dan
Leave a voice message on Speakpipe
🔗 LinkedIn
📬 TheAppliedPsychologyPodcast@gmail.com

🦋@danielwcox.bsky.social

[Music] This new technology of artificial intelligence, tools like ChatGPT, is changing things. People can now communicate with a computer in ways that simulate communicating with a person. And regardless of how any of us feels about this, it is now part of our world. So how does that affect therapy? In today's episode, and for the first time in the podcast's history, I'm joined by not one but two leading thinkers in this space to discuss these challenges and what we should consider moving forward. But first, if you're new here, I'm your host, Dr. Dan Cox, a professor of counseling psychology at the University of British Columbia. Welcome to Psychotherapy and Applied Psychology, where I dive deep with leading researchers to uncover practical insights, pull back the curtain, and hopefully have a little bit of fun along the way. If you enjoy the show, do me a huge favor and subscribe on your podcast player or, if you're watching on YouTube, hit the like and subscribe button. It's one of the best ways to help us keep these conversations going. This episode begins with my guests talking about how they got interested in studying AI for mental health support. And just a heads up, we use the terms large language models and AI interchangeably in this conversation. So without further ado, it's my absolute pleasure to welcome my very special guests, Dr. Tony Rousmaniere and Dr. Alex Vaz. [Music]

Yeah, so I tried LLMs, I guess, when ChatGPT originally came out, over a year ago, two years ago. And to be frank, I was initially disappointed. I was like, I guess it's kind of interesting, but it's making up a lot of stuff, and I can kind of tell it's a computer pretending to be human. And then I tried using it to help with research and it made up a lot of references. And it was underwhelming. There were some clever things you could do with it and da da da da da, but I honestly kind of dismissed it. And then there were various concerns around bias and da da da da da, people had concerns. And so I kind of dismissed it.

And then it was late last year, late 2024, I was talking to a friend who had recently lost his job. And he ended up telling me that he was using ChatGPT as a therapist. And I was like, what? And he's like, yeah, I tell it my problems. And I'm like, oh, it's giving you advice on getting a new job? And he was like, no, it's giving me emotional support. As I asked him, I really got into it. He was talking about feeling depressed or feeling demoralized, or how to build his confidence for a job. I mean, these are things you talk about in therapy. And I was like, is it helping? And he's like, oh yeah, it's just as good as my... I can't afford my regular therapist anymore, and this is just as good. I was frankly shocked.

And then I started to see the same thing. If you go to Reddit, you'll see post after post after post of people using ChatGPT as their therapist, and the same on Facebook and on Twitter and all over the place. And you know, at the same time, all the therapists I was talking with were like, oh, there's no way it can replace human empathy, there's no way it can do what we do. And I'm like, you know what, folks, our clients might disagree, at least some of them. And I had a series of small panic attacks about this, because we run a graduate school and we're training therapists who'll be entering the workforce in three to five years. They'll get licensed and everything.
And I started to be like, well, we have to get on top of this and see what is actually going on, what it can do well, and how we can address it. Alex, what was your experience?

Yes, kind of similar to Tony's. So the way Tony and I have worked for many years now is, once we get interested or panicky about something, we just block a lot of time during our week to meet and experiment with stuff until it breaks. So for a couple of months there, we had this task where he and I, in our own time, would be talking with ChatGPT about personal problems. And then we'd report back, like, I talked about a fight with my girlfriend or with my daughter, that kind of thing. And we would just be talking and showing each other the chat and what it gave us. And we were always very surprised that, even though oftentimes it wasn't immediately helpful, we learned that we could actually learn how to make it more helpful. So we kind of found a pattern: most of our colleagues, when we talked about this, and ourselves, to be frank, would be dismissive of the first outputs that any AI would give us. And then we would continue to experiment, and we'd start finding out that, well, this is actually kind of a symbiotic thing, I'm learning from it and it's learning from me. And so the more we experimented with it, the more we could start to realize where it could be helpful and, of course, its limits, because there are still, of course, limits to it. So those are the two things.

And also, we started finding a pattern, which is that Tony and myself have always been drawn to any kind of technological stuff in the field of mental health. And I started realizing this is a pattern: when we started talking about implementing outcome monitoring, our first reaction was like, I'm going to have patients fill out surveys? That's boring, it's not going to be clinically helpful, it's not going to improve my, blah, blah, just more worries, etc. Then with videotaping therapy, there were always a bunch of reasons to not videotape therapy. So every time there's a new technology and we're trying to implement it and see how it fits in the mental health field, the first reaction, for myself and most colleagues I see, is like, ah no, because of this, this, and that, which is often true. And if you hang on to it and keep experimenting with it, we have found that we're always kind of shocked at how helpful things we didn't originally realize would be so helpful actually are. So that's a long spiel to say we're still coming to grips with our discoveries as we go along.

So let's talk about the people who are using it. And I don't know if we're going to use the language AI in this conversation, or LLMs, or large language models. My guess is the three of us will go back and forth on those terms and they're more or less going to be interchangeable in this conversation. Please tell me about them.

Well, let's clarify something actually around that, because what has been going on for the past ten plus years is that various companies have been developing AI chatbots specifically for mental health. There have been a bunch of them, there's been research on them, and the research has been, I'd say, modestly positive. You know, I want to be very clear, we are not selling any of this. We are not promoting any of this. Okay, we're just like, okay, we have to look at this in a serious way. Okay.
But there are these AI chatbots for mental health. Now, that is different than ChatGPT, because those typically have privacy safeguards, they have confidentiality safeguards, they often are supervised by licensed clinicians, they definitely have licensed clinicians involved in the development and updates and da da da da da da. And typically you have to pay for those, right? And they're completely separate apps. What Alex and I really woke up about, and what we want to focus on today, is not the specialized mental health apps. Rather, we want to talk about the major LLMs. ChatGPT is, I think, the biggest, but there's also Gemini, which is Google's one. There's Claude, which is the one by Anthropic. There's Grok, which is the one, you know, with Twitter. These are not designed for mental health. However, they're kind of designed to be everything for everyone. And so what we have discovered, and we can talk about our survey in a little bit, is that there are potentially far more people using the general LLMs for mental health support versus the specialized ones. And that is an issue, because the general ones do not include the privacy, confidentiality, and all the various other safeguards.

So what's your sense of how many people are using LLMs, or these sorts of chatbots, for mental health support? Maybe we can start with Claude's owner, Anthropic.

So Tony was talking about one specific large language model, Claude, from the company Anthropic. They did their own report, basically saying that about only, I'm going to put it in their quotes, only 3.7% of Claude's conversations involved what they call interpersonal advice, coaching, and counseling. So they say, oh, it's only 3%. That's huge.

Yeah. Okay. So let's break that down.

Yes, let's break that down. It is estimated that on Claude there are 20 to 50 million conversations per day, 20 to 50 million. So if we're taking the 3.7% estimate, right, that means like 1.2 million emotionally supportive chats every day. That's nearly one third of all US therapy sessions.

Right, that's an interesting comparison point, to how many traditional therapy sessions there are. And this is Claude. Now the biggest one is ChatGPT, probably, which is 1 billion daily conversations. And again, I think the 3.7% is actually a very conservative estimate, but even taking it at face value, we're still talking about millions upon millions of daily conversations around emotional support. I mean, we should take that in.

No, I mean, I think that's, yeah, that's a huge number. I'm actually surprised. What did you say, 3.7%? Yeah. Like, to me, I knew it was high. I didn't realize it was that high.

Well, what's funny is the company says it's only 3%. Right. When you think about any typical person, and I use this technology a lot, when I think about the go-to uses, how it's branded, how people talk about it, almost nobody talks about using it for any sort of supportive purpose like we're discussing now. So, you know, over 2.5%, over 3%. That's huge.

So, we went ahead and did a survey on this, because we wanted to try to figure this out. We did a survey of people who were screened on two criteria. One, they said that they had used LLMs in the past year. Two, they said they had at least one lifetime incidence of a mental health condition. All right. So, those were our two screening criteria.
And based on our survey, about half of them had used LLMs, mostly, like 90%, ChatGPT, for mental health support. This was a survey of about 500 people. It was just recently accepted for publication in an APA journal, so it should be coming out any day now. And we can get more into the details, but I would just like to provide some quotes as examples of how they're using it, because it really kind of opened my eyes. So, for example, quotes from people who had positive experiences. One said, quote, "I have found it very helpful personally. As an introvert, I am more comfortable opening up than I would be with a human therapist. As in public-speaking-type situations, my anxiety tends to kick in and I can't think." Another quote: "I usually just talk to it when I'm feeling lonely or super depressed. It's nice that it just listens, but it also gives me some actionable advice and really helpful encouragement." Now, either of those quotes I would be very happy to receive as a human therapist, if a client said that about working with me.

Now, there are also quotes on the negative side, which I want to pair here, because I think it's very important that we're able to hold both ends of this. People tend to break one way or the other, I have found. They tend to be very gung-ho about it or very skeptical. And there's almost kind of a splitting that goes on, and I think it's super important that we as a field hold the whole picture. So, a quote from a negative experience: "One time when I was in a depressive episode, I asked for coping strategies and not only got the usual go outside, eat healthy, work out advice that I have obviously tried, but it overwhelmed me with information and I didn't want to read any of it." It was an alliance rupture. Another quote: "While having a panic attack, I asked a very detailed question of the LLM and it provided negative information that worsened my symptoms." That's like the nightmare of everybody. Yes.

So, in general, do you have a sense of the types of problems that folks are going to these LLMs for?

Yes, we do. We also asked about that. And roughly, we know from the answers in our survey that 73% of the respondents used the LLMs for anxiety or anxiety problems. So that's 73% for anxiety. 63% for what they just term personal advice, very broad. 60% for feeling depressed or low, so depression, basically. And I think this is actually an important one as well: 35% because they felt lonely. It's become kind of a catch-all, this idea that we're in a loneliness epidemic. And I think that part of the rise, along with the affordability and the accessibility, the fact that it's always there 24/7, really lends itself to it being used at the pace that we're starting to see. And that's also what we found: if you ask why people use it, one of the major things they say is because it's always there. 90% of people said they use it because it's always there. So that's a major factor, 90%. And 70% say because it's cheap or free. Back to what Tony found with his friend. I mean, it's always there and it doesn't cost as much as a therapist. And experimenting with it, something was helpful there.

So basically people are going to these tools for anything. It's a broad spectrum of things that they're going for. And because it's on demand. So I'll interact with it about whatever I'm struggling with, and I'm doing it at least partially because it's right there. Yes. Yeah.
I think the fact that it's so easy, the convenience factor, is huge. I had a friend over the weekend who told me, in the same ChatGPT conversation, I'll ask it how to cook a meal and how to deal with a panic attack. And I think that kind of encapsulates it. It's just something that, in the normal flow of the day, I will ask about everything, from the cooking to emotional support. It's just there.

And that leads to my next question, which is, do you have a sense of what types of things they're asking, how they're approaching the conversation, the prompts? When they're feeling depressed, what are they actually typing into the LLM?

I mean, I can give you an example from my daughter, and this is without my prompting. She was approaching a swim meet where she had to swim a mile, and she had never done that before. And she was feeling really nervous about it. And she just typed in, I'm feeling nervous about swimming a mile tomorrow, I don't know if I can do it. And it provided what she thought of as very helpful reassurance, and then it provided her with some guided visualization exercises to build her confidence and to, you know, calm down and just feel better about it. And it did all that at a moment's notice at 9 p.m. So it's almost like she could have gotten that from a self-help book. There are self-help books. I think of it like a self-help book, but it's like all the self-help books, and it immediately goes to exactly what you're asking for. Personalized self-help. Yeah. Right.

One of the things in the example you just gave, Tony, is that what it did there is give sort of psychoeducation, right? Like, here are some things you can do. So that's one type of response people are getting. But do you also have any insights on more of what I would think of as, I'm going to say therapeutic, which is probably actually the wrong word to use, but more of the interpersonally meaningful types of experiences or responses that people are getting? Does my question make sense?

Yeah. I mean, you know, that is where it is showing limits. As expected, it is a really good librarian. It's really good at looking through the literature, because it knows everything that's ever been written in a book or on the internet. However, the LLMs are programmed to be very deferential and to really comply with what the user wants. Now, there are some limits. Like, if you ask ChatGPT, help me build a nuclear bomb, it probably won't do that. But you can ask it to do things that are probably emotionally not helpful, and it might help you do that. There's been an issue with it being overly agreeable, and this is one of the risk factors, because as therapists we know that sometimes we have clients who are asking for help with things that will end up being self-destructive, or we might have clients who are in a manic phase and think that they're God. And there have been issues with ChatGPT agreeing, ChatGPT telling the client, you know, you are God, I believe you even if other people don't.

Yeah, there was this Moore study, by Stanford computer science researchers. This year, a few months ago, they published a study where they were using ChatGPT and studying different mental health problems and outputs.
And what they found was kind of concerning, because one of the key problems is it would reinforce delusions. If you fed it prompts revealing a delusion, such as what Tony just said, like I am God, I am the second coming of Napoleon, whatever it is, it would just not pick up on it and would actually actively reinforce it. So that's one thing. It would often miss suicide cues. That's something the researchers also found. And they also found stigma associated with it, a very specific sort of stigma: if you pushed the large language model, in this case ChatGPT, it would assume that people with certain mental health conditions, for example schizophrenia or substance use issues, might be violent. If you pushed it to describe these people, it would say that it would not want to work closely with them. It would say that it would not want to marry someone like that or have them marry into the family. So this is part of what happens when you try to stress-test these large language models: you start to find all sorts of things. This is just to say, yes, a lot of people are finding it helpful, but clearly there are a lot of risks associated with it that still need to be mitigated. And this is part of the reason why Tony and myself have been really giving extra importance to the idea that, in the future, the only way this ends up well is with more collaboration between clinicians and people developing these AIs. There have to be efforts to collaborate between the two fields.

What's your sense of how helpful people who use LLMs for mental health concerns or stressors or whatever find the LLMs' responses?

Well, we asked that in our survey. Now remember, you could say this is a biased population, because these are people who were already using LLMs. And so people who didn't find the LLMs helpful would probably not take the survey, because they would have opted out already. But within this biased population of people who enjoy using LLMs, which is a significant population, 38% said it was on par with, or equally as helpful as, a human therapist. So roughly one third said equally as helpful. Another third said it was more helpful or much more helpful. And then about a third said it was less helpful. Now, I would suggest this probably generalizes to millions of people, at least in North America. And so this is something that both Alex and I feel very strongly our field needs to take seriously.

Do you get a sense that those people see it as a replacement for therapy?

We also asked about that. And Alex, do you have the exact numbers on that, how people might think about it as a substitute? I don't see the exact numbers in our notes right here. But most of the people said no, it's not a replacement. And that's where we came down. It's kind of like, is a librarian a replacement for a tutor, or something like that? They're kind of different. This is not a perfect analogy, but they kind of serve different purposes. Now, that's as of now. The issue is that the LLMs are improving every few weeks. Like, literally every few weeks they change. And so what will therapy look like in five years or ten years? I don't think anyone can answer that. I would have to guess that in the future there will be a treatment team that includes AI therapists and human therapists.
It sounds like, just trying to pull some of your data together, some of your findings together, that people are finding niche value in these tools. They're not necessarily saying that overall it replaces therapy, but for certain struggles I'm having at certain moments in time, or a need for something on demand, this is here for me in a way that a therapist can't be.

I would say yes. I also want to say that some people might never have had access to a therapist before, and this is the only therapeutic experience they have ever had. And I think that's also something to bear in mind. There are a lot of people, millions of people potentially, using this for emotional support purposes around the world, many of whom have never had access to a real, quote-unquote, human psychotherapist. So that puts a different spin on this as well. And think also, now we can get a bit ahead: a few years from now, younger people's first therapeutic experience might be with ChatGPT and not with a human therapist, just like many young people's first experience with dating is through Tinder and not through going out to bars or whatever. So there's a very quick transition that's happening. And part of what I think got Tony and me so invested in this is seeing the speed at which it's evolving. It's one of those things where we as a field don't have the luxury of just waiting around and hoping that other people do studies on it, because it is obviously already happening. And we hear it at our counseling center, from the supervisors we work with. We hear stories that their patients, their clients, are using it in conjunction with the actual therapy, which is something many of them are very surprised by. They would not have predicted that. It's already happening.

But the therapists wouldn't have predicted that? Yes, correct. Yes. So they'll ask. Like, we had a couple of supervisors recently who told us they just remembered to ask their clients, have you actually ever used it? Oh yeah, all the time, we ask it, we talk about the session we had with you, the therapist. And it kind of blew the supervisor away. What? Like, they never even considered it until asking, and then they found they'd opened up this box of, oh wow, there's a second therapist in the mix.

One of the things I was thinking about in preparing for this is that another motivation for using these tools, and it has to do with stigma, is that people can set up the LLM in certain ways, right? So let's say they come from a culture that's not the culture of where they live, and they're worried that going to a therapist, and this could be based on a real experience that they had, is going to be judgmental, you know, because they come from a certain religious background, certain beliefs, for example. They can use ChatGPT, or even instruct ChatGPT, to operate within a certain worldview that they have anxieties, in some cases justified, that a human therapist wouldn't operate within.

Yeah. I would just say that this goes back to what Tony was saying. This is, at the same time, a potential benefit and a real risk, right? Because this is the risk of deferentialness. Like, I challenge anyone to try to have an alliance rupture with ChatGPT. I've tried it multiple times.
It will always find a nice way to agree with me, which can also be very frustrating. But my point is that it's very hard to actually engage in that, quote-unquote, conflict, that disagreement with it. And that's a real problem, because again, people might think that they're molding the machine to what they need, but by doing so, they might also be being reinforced in several aspects that they don't even realize are maladaptive.

Yeah. I mean, the cultural piece kind of cuts both ways, and it's a great example of how we're kind of on both sides of the line here. On one hand, there have been studies showing bias that's baked in. You know, basically whatever bias is on the internet, the LLM picks up, and it's biased in kind of all different directions. On the other hand, ChatGPT knows more about any particular niche culture or geography than any human therapist possibly could. You know, a user could say, "I come from this small little town in India, and my parents were involved in this political incident that happened 50 years ago." If my client says that to me, of course, I have no idea what they're referring to. ChatGPT knows exactly what they're talking about. And ChatGPT very possibly can speak to them in their native language as well. So, yeah, it goes both ways.

Do you have a sense of why people choose not to use them, or, for people who use them frequently, when they choose not to use them? Yeah. You know, there's a lot of skepticism. There are a lot of people that tried it a year ago...

That's a wrap on the first part of our conversation. As noted at the top of the show, it would be much appreciated if you spread the word to anyone else you think might enjoy it. Until next time. [Music]
