AI could be your next therapist for loneliness and anxiety

When you wake up in the middle of the night feeling overwhelmed due to the recent death of a loved one, your new shrink could be an artificial intelligence (AI) chatbot.

It offers advice and suggests techniques to improve your mood. And it’s almost like you’re texting with a human therapist.

Opening your heart to an AI-enabled empathetic app or text bot doesn’t replicate lying down on a couch to confide in a human psychotherapist, or even calling a crisis line. But with proper guardrails, some mental health professionals see a supportive role for AI, at least for certain people in certain situations.

“I don’t think [AIs] will ever replace therapy or the role a therapist can play,” says C. Vaile Wright, senior director of the office of healthcare innovation at the American Psychological Association in Washington, DC. “But I think they can fill a need that reminds someone to engage in coping skills if they’re feeling lonely, sad, or stressed.”

AI could help with the shortage of therapists


In 2021, nearly 1 in 5 U.S. adults sought help for a mental health issue, but more than a quarter of them felt they weren’t getting the help they needed, according to an annual federal survey from the Substance Abuse and Mental Health Services Administration. About 15 percent of adults age 50 and older sought such services, and 1 in 7 of them felt they weren’t getting what they needed.

Globally, the World Health Organization (WHO) estimated in 2019 that approximately 14.6 percent of all adults age 20 and older were living with mental disorders, roughly the same as the proportion among adults between the ages of 50 and 69. About 13 percent of adults age 70 and older had mental health problems.

The pandemic increased anxiety and depression worldwide by more than a quarter, according to the WHO. Even before that 2020 surge in need, most people diagnosed with mental health problems never received treatment. Others, seeing few services offered in their countries or a stigma attached to seeking help, choose not to attempt treatment.

A March poll of 3,400 U.S. adults by CVS Health and The Harris Poll in Chicago showed that 95 percent of people ages 65 and older believe society should take mental health and illness more seriously, says Taft Parsons III, chief psychiatric officer at Woonsocket, Rhode Island-based CVS Health. But erasing the stigma that seniors in this country often feel about seeking treatment will only increase the need for therapists.

AI can be leveraged to help patients deal with stressful situations or support primary care providers treating mild illnesses, which could reduce the strain on the healthcare system, he says.

ChatGPT will not replace Freud just yet

Hiring a digital therapist might not mean spilling your guts to the headline-grabbing AIs. Google’s Bard, Microsoft’s new Bing and especially OpenAI’s ChatGPT, whose launch last fall sparked a tsunami of interest in all things AI, are called generative AIs.

They have been trained on large amounts of human-generated data and can create new material from what has been fed in, but they are not necessarily grounded in clinical evidence. While ChatGPT and its ilk can help you plan your child’s wedding, write letters of complaint or generate computer code, serving as your personal psychiatrist or psychotherapist may not be one of their strong suits, at least for now.

“The problem with this space is that it’s completely unregulated. There is no regulatory body that guarantees that these AI products are safe or effective.”

C. Vaile Wright, American Psychological Association

Nicholas C. Jacobson, an assistant professor of biomedical data science and psychiatry at Dartmouth in Lebanon, New Hampshire, has been exploring both generative and rule-based AI bots for about three years. He sees potential benefits and dangers in each.

“The gist of what we’ve found with the pre-scripted ones is that they’re quite clunky,” he says. “On the other hand, there’s far less that can go off the rails with them. And they can provide access to interventions that many people might not otherwise be able to access, particularly in a short amount of time.”

Jacobson says he is both very excited and very scared about what people want to do with generative AI, especially the bots likely to be introduced soon, a feeling echoed Tuesday when more than 350 executives, researchers and engineers working in AI released a statement through the nonprofit Center for AI Safety raising red flags about the technology.

Generative AI bots can provide helpful advice to someone asking for help. But because these bots can be easily confused, they can also spit out misleading, distorted or dangerous information, or give directions that seem plausible but are wrong. That could make mental health problems worse in vulnerable people.

Strict oversight is required

How you can reach human help

If you or a loved one is considering self-harm, go to the nearest crisis center or hospital or call 911.

The 988 Suicide & Crisis Lifeline, formerly known as the National Suicide Prevention Lifeline, is the federal government’s toll-free 24-hour hotline. The nonprofit Crisis Text Line also has counselors available 24/7. Both employ trained volunteers nationwide, are confidential and can be reached in whatever way is most convenient for you:

  • Dial or text 988. A phone call to 988 offers interpreters in more than 240 languages.
  • Call 800-273-TALK (8255), a toll-free number, to reach the same services as 988.
  • For the hearing-impaired with a TTY phone, call 711, then 988.
  • Text HOME to 741741, the Crisis Text Line.
  • On WhatsApp, message 443-SUP-PORT.
  • Go to crisistextline.org on your laptop, choose the Chat with us button and stay on the website.

“The only way this field should go forward, and I’m afraid it probably won’t go forward this way, is with care,” Jacobson says.

The World Health Organization’s prescription is strict oversight of AI health technologies, according to a statement released by the UN agency in mid-May.

“While WHO is enthusiastic about the appropriate use of technologies, including [large language model (LLM) tools], to support healthcare professionals, patients, researchers and scientists, there is concern that the caution that would normally be exercised for any new technology is not being exercised consistently with LLMs.”

The hundreds of people working in AI who signed the statement this week say vigilance should be a global priority. Wright of the American Psychological Association raises similar concerns:

“The problem with this space is that it’s completely unregulated,” she says. “There is no regulatory body that guarantees that these AI products are safe or effective.”

