Researchers have found some potential benefits to certain AI tools for mental health, but they’ve also raised some red flags. With this rapidly evolving technology, here’s what they say so far.
Pros
While AI therapy tools should never be a replacement for professional treatment, experts note there are some possible advantages to using AI in certain situations:
It may improve access to care. “There’s been a long-standing issue related to accessibility to mental health resources, especially in rural parts of the country. I can see why it is that someone who is struggling and is faced with this long waiting list after they make multiple phone calls to inquire about a therapist or psychiatrist that they turn to AI,” says Dr. Crawford.
However, current AI chatbots should never take the place of a trained therapist, Crawford warns. “Nothing can replace the true intelligence of a human being, and the clinical expertise of a mental health professional,” she says.
It’s convenient. AI platforms are available 24/7, so it might be tempting for users to turn to them for round-the-clock support when they can’t access their therapist. For example, a patient grappling with a panic attack at 2 a.m. may use a chatbot to talk them through the deep-breathing exercises they’ve practiced with their therapist, Wright says.
It may cost less. Therapy can be expensive, and many AI chatbots are free or inexpensive to use. Plus, if your insurance limits which providers you can see and you can’t afford to pay out of pocket, your options for finding a therapist narrow even further, Wright says.
However, it’s important to remember that despite the potential cost savings, no AI tool can replace a trained mental health professional. “I’m concerned about the lack of clinical oversight, the lack of human connection, the lack of [real] empathy — which are truly important,” says Crawford.
Still, given the risks currently associated with using chatbots for mental health, the study notes that AI technology may be better viewed as a complementary intervention or therapeutic tool than as a replacement for a human psychotherapist, and that more research is needed to establish exactly how this might work.
While we’re not there yet, Wright notes there may come a day when AI chatbots are sufficiently tested, regulated, and safe to use for mental health.
“I see a future where we have a chatbot that’s built for the purpose of addressing mental health. It’s rooted in psychological science, it’s rigorously tested, it’s cocreated with experts. It markets itself as a medical device and is regulated by the FDA, which means there’s post-market monitoring of it, and you have a provider in the loop because they would have to prescribe it,” Wright explains.
For now, though, people who have depression, anxiety, or any other mental disorder should not rely on a chatbot for treatment in the first place, says Crawford.
“I appreciate people using it so they can better understand their emotional state, but if you have depression, schizophrenia, or bipolar disorder, for example, it should not replace psychiatric care,” she explains.
Cons
Using AI for therapeutic purposes comes with notable downsides, such as potentially reinforcing unhealthy thinking and raising privacy concerns, Wright warns.
Here are a few significant cons of using AI for therapy, according to experts:
It may validate — and reinforce — unhealthy thinking. The business model behind AI chatbots is built on keeping users on the platform for as long as possible, and companies do that with algorithms that make their chatbots as unconditionally validating and reinforcing as possible, Wright says. “They tell you what you want to hear. And that’s not a true therapeutic relationship,” she explains.
In other words, a real-life therapist can help you identify thoughts that aren’t helping you or that don’t tell the whole story, whereas an AI chatbot is more likely to tell you why you’re right. A good therapist can also gently challenge you when your old ways of thinking aren’t serving you well — something AI chatbots aren’t programmed to do.
Tragedies like the Raine case highlight one of the most glaring dangers of using AI for mental health, Wright says. “[AI] doesn’t understand these aren’t thoughts that you reinforce,” she explains. “While [AI tools] sound very competent, they’re not human and they lack a sentient understanding of how people interact. These are not true therapeutic relationships.”
It may raise privacy concerns. A standard element of treatment with a mental health professional is informed consent, which includes disclosing to patients how their legally protected health information will be used or shared. General-purpose AI chatbots typically aren’t bound by those same confidentiality rules, so the personal details users share may not carry the same legal protections.
It may perpetuate loneliness. If you’re feeling lonely, it can be tempting to chat with a human-like companion that offers validation and limitless responsiveness. But that can be problematic.
The lack of true human interaction is one of the major flaws with AI for therapy, Crawford adds. “Most of the people who are turning to AI and using it as a regular therapist, these are people who are already vulnerable, who already are struggling, and need to connect with a real person most, not a machine,” she explains.
Even something as basic as a mental status exam — which requires observing verbal and nonverbal cues like eye contact, pacing, or fidgeting — is impossible for a chatbot to perform, Crawford notes. Trained mental health professionals can also detect subtleties and incongruous behavior that AI will miss, such as when a person’s tone doesn’t match the words they’re saying.