Should chatbot psychologists be part of the health system?


By Amy Sarcevic
Monday, 08 April, 2024


This year, an announcement that chatbot psychologists could become part of Australia’s healthcare system within the next two years sparked controversy in medical and public arenas.

The bots, which mimic conversation with users via a voice- or text-based interface, can be programmed with human-like qualities and deliver structured mental health programs.

But is the interaction meaningful enough to generate therapeutic outcomes?

Dr Cathy Kezelman AM, CEO and Executive Director of Blue Knot Foundation, believes AI has a place in health care, but is concerned about its impact on people with complex trauma.

“Complex trauma results from harmful interpersonal interactions and, for this reason, requires safe human relationships to facilitate the healing process.

“Supporting people to feel heard, and regain any trust that has been lost from their primary betrayal, is, in my view, only possible when a committed human being walks alongside them.

“My concern with machines is that they will miss out on a lot of the sensitivities that are important for the therapeutic alliance and minimising the risk of additional trauma.”

Too attached?

Digital mental health specialist Dr Simon D’Alfonso from the University of Melbourne shares similar concerns, but warns that the opposite scenario, in which users become too attached, could prove more harmful.

When the world’s first chatbot, ‘ELIZA’, was introduced in 1966, even its creator, Joseph Weizenbaum, was surprised to see people attribute human-like feelings to it.

Researchers have since warned about the dangers of projecting empathy and semantic comprehension onto programs with a textual interface, a phenomenon now known as the ‘ELIZA effect’.

“The trouble with some of the less structured bots is that people can get tangled in unconstrained conversations and be led into an emotional rabbit hole,” D’Alfonso said.

“They can go too far in projecting human characteristics onto a non-sentient system, and often the depth of their exchanges isn’t justified by what the chatbot is capable of.”

Another danger of getting too wrapped up with a bot is a sense of loss when its functionality changes.

“Sometimes, manufacturers can suddenly change the parameters of these platforms and the user ends up feeling devastated,” D’Alfonso said.

Lacking substance?

Even with more structured bot varieties, D’Alfonso is concerned about the potential for adverse consequences.

“As natural language processing models develop, we are seeing increased scope for bots to converse in an open-ended fashion. But it’s unlikely you’ll ever find something with the cognitive complexity, semantic sophistication and emotional richness to carry out anything like a substantive psychotherapy dialogue.

“A lot of psychotherapy is reliant on facial exchanges, interpersonal presence and non-verbal cues. Acoustic and paralinguistic qualities — like a person’s pitch and intonation — all convey important information and play a role in the therapeutic alliance.”

That said, the notion of a digital therapeutic alliance (DTA) has recently emerged as a research topic. Although the DTA differs from the traditional therapeutic alliance, research suggests it is a genuine phenomenon that can develop with mental health apps or chatbots under the right conditions.

Studies show that by offering self-guided therapy, these technologies can effectively treat anxiety and depression.

D’Alfonso said the best results are likely to come if bots encourage users to set goals and tasks and offer a personalised experience. This type of structured intervention could even foster an emotional connection with the bot, he said.

“There are going to be instances where a human client wants to interact with a chatbot and might even develop a bond with it.

“Of course, it won’t be an authentic two-way bond, because the bots aren’t capable of that. But it might just be enough to deliver therapeutic results, when used in conjunction with a goal-based framework.”

The slot machine effect?

Kezelman agrees that chatbots could elicit feelings of connection, but draws a parallel with social media to highlight the risks.

“You only have to walk past a bus stop to see how attached people can get to their technologies. But we know that many of these attachments can also be harmful.”

Indeed, research shows that social media has addictive properties, meaning some people continue to use it despite negative consequences.

One study found that time spent on social media was linked with depression and suicidality; despite this, some users remained hooked, given its dopaminergic effects.

D’Alfonso agrees and warns about the potential for a ‘slot machine effect’, where people keep using a bot to feed their curiosity about its next move.

An adjunct solution?

Despite the potential for harm, both experts agree that chatbots could play a useful role in a resource-constrained healthcare system.

“It’s certainly an appealing option when there are challenges around availability and cost with traditional therapy — I’m just not sure it should be the patient’s primary relationship,” Kezelman said.

Meanwhile, D’Alfonso sees the bots as more of an interim solution.

“Someone could chat a bit with the bot and then go and see their therapist. I don’t see them as a comprehensive replacement.”
