Generative AI in health care — how to support responsible deployment
Interest in and adoption of generative AI within the Australian healthcare sector are increasing rapidly, a trend largely driven by the transformative capabilities of large language models (LLMs) such as ChatGPT in areas like patient engagement, medical research, clinician experience and workflow optimisation.
In general, applications of AI in health care have a critical impact on patient care and clinical outcomes and, as a result, must meet high standards for safety, efficacy, equity and usability. From a generative AI perspective, the most significant limitations and risks centre around privacy, bias, accuracy, explainability and recency.
However, regulation isn’t keeping pace, so it’s critical for healthcare organisations in Australia to put guardrails in place to ensure generative AI is used in a responsible manner. Responsible AI makes AI a positive force, rather than a threat to society and to itself. It covers many aspects of making the right business and ethical choices when adopting AI that organisations often address independently, such as business and societal value, risk, trust, transparency and accountability.
The value potential
Currently, the potential of generative AI spans patient care, research, clinical education and workflow optimisation. For example, it could improve clinician efficiency with administrative tasks such as drafting clinical notes and letters, responding to patient queries and producing patient information and educational material. Generative AI could also help clinicians find patient information more easily by generating targeted summaries of a patient's health record.
Another potential use is to enhance patient-facing chatbots and conversational assistants supporting activities such as triage, care navigation and answering administrative queries, such as those about billing. Generative AI could also enable clinical decision support by answering questions specific to differential diagnoses and treatment options.
In addition, generative AI could be used to classify large volumes of unstructured text within electronic health records (EHRs) for multiple purposes, such as making it available for research and data analysis, enabling identification of patients for clinical trials and facilitating clinical coding for billing purposes. It could also be used for sentiment analysis of patient feedback and reviews.
A foundation for governance
To enable these developments, an appropriate governance framework is essential in ensuring the responsible deployment of generative AI. Data, algorithms and people can be biased — an absence of policies and procedures that guide ethical use and best practices can lead to significant clinical, financial and reputational risks.
Ensure protocols are in place to monitor the use and performance of any deployed generative AI solutions, as well as an action plan to appropriately respond to issues as they are identified.
A critical component of generative AI deployments is ensuring end users have the knowledge and skills to use it responsibly. Generative AI models are fallible: risks include bias, inappropriate recommendations, factual inaccuracies and fabricated outputs. The longer-term impacts of reduced human interaction that may result from adoption of generative AI are unclear at this stage and must be carefully monitored.
In addition, the sophistication and linguistic fluency of these models can give the illusion of comprehension and expertise. This increases the risk of automation bias — the tendency to favour the output of automated decisions, overlooking critical information or dismissing the user’s own professional judgment, even when the system is incorrect. This has been demonstrated to have negative impacts on clinical decisions.
Going beyond the hype
The responsible deployment of generative AI requires a value proposition that is aligned to the strategic goals of your organisation, measured by its impact on intended outcomes and weighed against potential negative effects and risks.
Start by running an ideation workshop with broad stakeholder representation to discuss and identify how generative AI can support your organisation’s strategic goals.
Prioritise use cases by scoring them on business value (for example, patient outcomes, patient experience, improved efficiency and revenue) and feasibility (technical, change-management and regulatory considerations). Prioritised use cases should deliver business value directly and also demonstrate the longer-term strategic potential of generative AI across your organisation.
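The scoring approach above can be sketched in a few lines of code. This is a minimal illustration only: the use cases, criteria and weights are hypothetical examples chosen for the sketch, not recommended values.

```python
# Illustrative sketch of a weighted scoring model for ranking candidate
# generative AI use cases. All use cases, scores and weights below are
# hypothetical examples, not recommendations.

# Each use case is scored 1-5 against two criteria: business value and
# feasibility.
use_cases = {
    "Clinical note drafting": {"value": 4, "feasibility": 4},
    "Patient triage chatbot": {"value": 5, "feasibility": 2},
    "Clinical coding support": {"value": 3, "feasibility": 4},
}

# Assumed weighting: business value counts slightly more than feasibility.
WEIGHTS = {"value": 0.6, "feasibility": 0.4}

def score(criteria: dict) -> float:
    """Weighted sum of criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in criteria.items())

# Rank use cases from highest to lowest priority score.
ranked = sorted(use_cases.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, criteria in ranked:
    print(f"{name}: {score(criteria):.1f}")
```

In practice the criteria would be broader (patient outcomes, experience, revenue; technical, change-management and regulatory feasibility) and scored by a cross-functional group, but the ranking mechanics are the same.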
Don’t get caught up in the hype. The long-term financial costs of generative AI applications are currently opaque, so you need to have a clear understanding of the business or clinical value being delivered.
*Sharon Hakkennes is a VP analyst in health care at Gartner, focused on virtual care, EHR implementation and optimisation, clinical engagement, change management and strategy development. Sharon will be presenting on digital innovation in health care and life sciences at the Gartner IT Symposium/Xpo on the Gold Coast, 11–13 September.