Can chatbots become psychotherapists?


Only if you want them to be.


Recently, a manager at artificial intelligence company OpenAI wrote that she had just had a "very emotional and personal conversation" with ChatGPT, the company's viral chatbot.


Lilian Weng posted on X, formerly Twitter: "Never tried psychotherapy before, but this could be it?"


This sparked a series of negative comments accusing her of downplaying the seriousness of mental illness.


Weng's take on her interaction with ChatGPT may be explained by a version of the placebo effect examined in a study published this week in the journal Nature Machine Intelligence.


A team from the Massachusetts Institute of Technology (MIT) and Arizona State University asked more than 300 participants to interact with an AI mental health program, priming them beforehand on what to expect from it.


Some were told the chatbot had empathic abilities, others were told it was manipulative, and a third group was told it was neutral.


Those who were told they were talking to a caring chatbot were more likely than the other groups to see their chatbot therapist as trustworthy.


"From this study, we see that to some extent, the AI is the AI of the beholder," said study co-author Pat Pataranutaporn.


In recent years, several high-profile startups have pushed AI apps offering psychotherapy, companionship, and other mental health support: a huge business opportunity. But the field remains mired in controversy. As in other industries that AI threatens to upend, critics worry that bots will eventually replace human workers rather than complement them.


And in mental health specifically, there are concerns that bots are unlikely to do the job well.


"Psychotherapy is for mental health and it's hard work," programmer Cher Scarlett wrote in response to Weng's initial post on X.


In addition to general concerns about AI, some apps in the mental health space have a troubled recent history.


Users of Replika, a popular AI companion app sometimes touted as delivering mental health benefits, have long complained that the bot can be sex-obsessed and abusive.


The MIT and Arizona State researchers said society in general needs to get a grip on the narratives around AI.


"The way AI is presented to society is important because it changes the experience of AI," the paper contends. "It may be necessary to lead users to lower more negative expectations."