How are AI models approved for use in health care?
Before a new healthcare intervention can be brought to market, it must first undergo a rigorous approval process with the Therapeutic Goods Administration (TGA). For a medicine, this often means presenting evidence from clinical trials to prove it alleviates symptoms or treats a health condition with minimal risk. But what about AI models for use in health care? Hospital + Healthcare speaks with the TGA to find out.
From apps that diagnose melanoma to chatbots that suggest treatments, the sector is not short of AI solutions.
But who decides which are safe, and by which criteria are they assessed for use in mainstream clinical practice?
AI approvals
According to the TGA, AI falls under its remit when it is intended for “diagnosis, prevention, monitoring, prediction, prognosis, treatment and alleviation of disease, injury or disability”.
It is treated and regulated as a medical device, meaning its approval process differs slightly from that of a medicine or biological.
“To obtain approval for a [device], an Australian sponsor must submit an application to the TGA and provide the relevant clinical and other evidence that demonstrates the product is safe and performs its intended use. The benefits of the AI model must outweigh any undesirable effects [and] risks […] must be minimised,” a TGA spokesperson told Hospital + Healthcare.
“The applicant must also outline how the sponsor will continue to monitor the [device] for its ongoing performance and be responsible for the product while it is in the market, including for any recalls.”
For AI and other connected medical devices, there are also requirements around design, development, production, testing and maintenance, cybersecurity, and the management of data and information.
For example, manufacturers will need to continually review the cybersecurity threat landscape to reduce the risk of their products being compromised by a malicious actor or their data being intercepted.
Approach varies
The assessment pathway also depends on the AI model’s level of risk.
“For lower-risk products, sponsors and manufacturers can self-certify compliance, whereas higher-risk products require an independent assessment of safety, performance and how the product is manufactured,” the TGA said.
For any type of medical device, including AI, the TGA can also accept regulatory approvals from comparable overseas regulators, including the US FDA, Health Canada and European Notified Bodies.
The level of additional scrutiny it applies to products supported by an overseas regulatory approval is based on risk and any “Australian-specific requirements or concerns”.
“We apply more scrutiny for some higher-risk software and AI with the potential to cause harm by providing incorrect information to patients and health workers,” it said.
Post-market obligations
For AI models, post-market obligations are particularly important. Sponsors must demonstrate how they propose to manage risks such as unintended bias, performance degradation and off-label use (that is, where the AI is being used for purposes not specified by the developer).
After the product is brought to market, they must also report adverse events and comply with recall action if it experiences a problem. This means immediately notifying end users and following strict TGA instructions.
Regardless of whether there is a problem, manufacturers must provide information and samples to the TGA on request and, for higher-risk devices, report on safety and performance annually.
The TGA can also conduct a post-market review or investigation of a medical device at any time.
“For AI, we specifically review the algorithm and model design, training and testing methodology and evidence, accuracy, sensitivity and specificity,” it said.
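To illustrate what those last two metrics capture (the figures here are hypothetical, not drawn from any TGA assessment): sensitivity is the proportion of true cases a model correctly flags, calculated as true positives ÷ (true positives + false negatives), while specificity is the proportion of non-cases it correctly clears, calculated as true negatives ÷ (true negatives + false positives). A melanoma-detection model that correctly flags 90 of 100 cancerous lesions and correctly clears 180 of 200 benign ones would therefore have a sensitivity of 90% and a specificity of 90%.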
Interventions are not mandated
The TGA does not regulate the choice of interventions in health care. Instead, this is largely at the discretion of hospital and healthcare executives.
When deciding whether an AI model is right for your organisation, the Australian Commission on Safety and Quality in Health Care makes several recommendations.
It advises that AI should solve a clear problem, integrate with existing workflows and deliver benefits that outweigh its risks, which include the potential for bias and inequity.
Healthcare providers should confirm the model’s evidence base, discuss its use with patients and educate themselves on its functionality.
Healthcare providers that use AI will also need to comply with relevant obligations. For smaller organisations, this could mean establishing governance and processes to ensure its safe implementation.
TGA approval is not the final safety check
While TGA approval is crucial, it is not the final check and balance — healthcare providers must recognise their own accountability when implementing AI.
As the Australian Health Practitioner Regulation Agency states on its website, “approval of a tool does not change a practitioner’s responsibility to apply human oversight and judgment to their use of AI”.
A TGA stamp of approval also doesn’t negate the ethical issues AI may raise.
To use AI ethically, healthcare providers need to be transparent with patients about its use and obtain informed consent.
In sum, healthcare AI that meets the definition of a medical device needs TGA approval, but not all TGA-approved AI is appropriate for every organisation.
