Artificial intelligence (AI) seems to be everywhere these days, and healthcare is no exception.

There are computer vision tools that can detect suspicious skin lesions as well as a specialist dermatologist can. Other tools can predict coronary artery disease from scans. There are also data-driven robots that guide minimally invasive surgery.

To precisely diagnose diseases and guide treatment decisions, AI is used to analyse patients’ genomic and molecular data. For instance, machine learning has been applied to detect Alzheimer’s disease and to help select the best antidepressant medication for patients with major depression.

Deep learning methods have been used to model electronic health record data to predict health outcomes for patients and provide early estimates of treatment cost.



With new language-based generative AI technologies like ChatGPT, the clinical world is abuzz with talk of chatbots for answering patient questions, helping doctors take better notes, and even explaining a diagnosis to a concerned grandchild.

There is little doubt that when it comes to patient health, workflows and system efficiency, AI will benefit the health system.

But there are legitimate concerns about the accuracy of such tools, including how well they work in new settings (such as a different country, or even a different hospital, from where they were created), and whether they “hallucinate”, or make things up.

Robot-assisted surgery is already a reality in some technologically equipped hospitals.

Developing ‘medical grade’ tools

In our recent article in the Medical Journal of Australia, we argue that using AI effectively in healthcare will require retraining the workforce, retooling health services, and transforming workflows.

Critically, we also need to gather evidence that AI tools are “medical grade” before we use them on patients.

Many claims made by the developers of medical AI may lack appropriate scientific rigour, and evaluations of AI tools may suffer from a high risk of bias. This means the tests run to ensure their accuracy are too narrow.

AI tools can make errors, or stop working when the application context changes. Conversational agents such as chatbots may produce misleading medical information that could delay patients seeking care. They may also make inappropriate recommendations.

All this means we need standards for AI tools that affect the diagnosis and treatment of patients. Clinicians should be trained in how to critically assess AI applications to understand their readiness for routine care.

We should expect to be able to replicate the results from one context to another, under real-world conditions. For example, a tool developed using historical data from a hospital in New York should be carefully trialled with live patient data in Broome before we trust it.

Randomised controlled trials of AI tools, where these differences are controlled for, would represent the gold standard of evidence for their use.



We can’t just copy what other countries do

It is important to carefully examine how AI tools are embedded into workflows to support clinical decisions. The benefits and risks of a tool will depend on precisely how the human clinician and the tool work together.

There’s a view that all we need to do in Australia is adopt the best of what’s produced internationally, and that we don’t need deep sovereign capabilities.

Perhaps we can rely on the regulation of AI tools under way through the European Union’s AI Act, or the United States Food and Drug Administration’s processes for assessing Software as a Medical Device.

Nothing could be further from the truth.

AI requires local customisation to support local practices, and to reflect diverse populations or health service differences. We don’t want to simply export our clinical datasets and import back the models built with them, without adapting them to our contexts and workflows. We also need to monitor the clinical deployment of AI tools in our settings.

Without a degree of algorithmic sovereignty (the capability to produce or modify AI in Australia), the nation is exposed to new risks and the benefits of the technology will be limited.



A roadmap for AI in Australian healthcare

The Australian Alliance for Artificial Intelligence in Healthcare has produced a roadmap for future development.

It identifies gaps in Australia’s capability to translate AI into effective and safe clinical services, and provides guidance on key issues such as workforce, industry capability, implementation, regulation, and cybersecurity.

These recommendations offer a path toward an AI-enabled Australian healthcare system capable of delivering personalised and patient-focused healthcare, safely and ethically.

The plan also envisages a vibrant AI industry sector that creates jobs and exports to the world, working side by side with an AI-aware workforce and AI-savvy consumers.

AI has the potential to transform medicine. It can do so by harnessing computational power to discern subtle patterns in complex data spanning biology, images, sensory and experiential data, and more.

With care and strategic investment, innovations in AI will certainly benefit clinicians and patients alike. Now is the time to act to ensure Australia is well placed to benefit from one of the most significant industrial revolutions of our time.

This article was originally published at theconversation.com