Deepfakes mimicking health professionals pose real risk


Monday, 07 July, 2025

AI ‘deepfake’ videos that mimic reputable health professionals pose a real public health risk, the Australian Medical Association (AMA) has warned. Calling on the Australian Government to crack down on “this dangerous practice”, AMA President Dr Danielle McMullen has written to Communications Minister Anika Wells urging the introduction of clear and enforceable regulations on health-related advertising online.

“We are now living in an age where any video that appears online has to be questioned — is it real, or is it a deepfake?” McMullen said. “Deepfake videos are becoming more and more convincing, and this technology is being exploited by dodgy companies peddling snake oil to vulnerable people who are dealing with serious health issues.”

Dr Norman Swan, former AMA president Professor Kerryn Phelps and Professor Jonathan Shaw are among the trusted clinicians to have had their identities misused in deepfake videos that promote unproven products, the AMA said.

The deepfake video in Shaw’s case advertised an unproven dietary supplement as a treatment for type 2 diabetes. In Swan’s case, deepfakes sold supplements and weight-loss products purporting to treat diabetes, heart disease or obesity, while denigrating scientific evidence as “stupid”.

“These videos encourage consumers to abandon clinically validated therapies in favour of unscientific alternatives,” McMullen said. “Disturbingly, many health professionals only become aware they have been impersonated when patients raise questions about discontinuing their prescribed treatments or request information about where to purchase so-called ‘miracle cures’.

“I first discovered a fake profile impersonating me when a family member called to ask if I really believed in what they were selling,” McMullen added. “In addition to the very serious health risks, these scams also pose a financial risk to vulnerable Australians.”

McMullen’s letter to Wells urged the Australian Government to introduce clear and enforceable guardrails governing the use of AI in health-related advertising and communications, particularly content hosted by digital platforms and social media providers operating in Australia.

The AMA recommends a regulatory framework that includes the following:

  • Mandatory identification of the individual or company responsible for any online material promoting a medical product or service.
  • An accessible portal for individuals to report fake or misleading content, whether AI-generated or otherwise.
  • Unsubscribing mechanisms to allow users to opt out of unsolicited medical advertising.
  • Takedown requirements mandating platforms to remove harmful content within a specified period after a complaint is lodged.
  • Enforcement powers, including the ability to issue infringement notices for non-compliance.

“Social media giants also need to do everything in their power to stamp out these dangerous videos,” McMullen said. “At the moment, this is an endless game of whack-a-mole, so it is important to implement strict and enforceable rules and deterrents.”

Image credit: iStock.com/Laurence Dutton



All content Copyright © 2025 Westwick-Farrow Pty Ltd