Will AI Replace Doctors’ ‘Gut Instincts’?

Doctors’ intuition plays a key role in health care, even when computers suggest another treatment approach. But with AI advancing, is that all about to change?

By Michelle Lazarus, Monash University

MELBOURNE, Dec 18 – The value of health care workers’ intuition in effective clinical care has been demonstrated in reports around the world again and again.

From doctors’ ability to spot sepsis in critically ill children, to ‘nurse worry’ as a ‘vital sign’ predictive of patient deterioration, to helping general practitioners navigate complex patient care, intuition appears to play a large role in supporting high-risk patients, even when data or computer outputs suggest another treatment approach.

Artificial Intelligence (AI) has already begun to transform health care, and the health sector will only continue to consider AI innovations in 2024 and beyond.

In this increasingly technological world, questions swirl about the role of these human hunches in health care practice, and whether AI is about to overtake doctors’ ‘gut feelings’ entirely.

As Thomas Davenport from Babson College and Deloitte consultant Ravi Kalakota explain elsewhere, AI health care includes ‘rule-based expert systems’, which use prescribed knowledge-based rules to solve a problem, and ‘robotic process automation’, which uses automation technologies to mimic some tasks of human workers.

Such technology can help with automated patient monitoring, where alerts are signalled once a rule criterion is met, patient scheduling reminders and medicine management.
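An automated monitoring system of this kind can be sketched in a few lines: each rule is a prescribed threshold, and an alert fires once a criterion is met. The vital-sign names and cut-offs below are illustrative assumptions, not clinical guidance.

```python
# Rule-based alerting sketch: every rule is a fixed, human-prescribed
# criterion. The thresholds here are made up for illustration only.
RULES = {
    "heart_rate": lambda v: v > 120,      # illustrative tachycardia threshold
    "temperature": lambda v: v >= 38.0,   # illustrative fever threshold
    "spo2": lambda v: v < 92,             # illustrative low oxygen saturation
}

def check_vitals(vitals):
    """Return the names of any vital signs that trip a rule."""
    return [name for name, rule in RULES.items()
            if name in vitals and rule(vitals[name])]

alerts = check_vitals({"heart_rate": 130, "temperature": 37.2, "spo2": 95})
print(alerts)  # ['heart_rate']
```

The point of the sketch is that nothing here is learned: the system can only ever flag what its authors wrote into the rules.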

Other forms of AI used in health care include robots, natural language processing and machine learning.

Robots can help move and stock medical supplies, lift and reposition patients and assist surgeons. One Finnish hospital has launched a €7 billion project, set to be completed in 2028, which will engage robots to collect patient data typically reliant on human physical touch: from measuring pulse, to taking temperature and calculating oxygen saturation.

The release of ChatGPT in late 2022 marked a leap forward for AI in popular consciousness. This type of AI, which requires training on large data sets (supported by human feedback), focuses on giving computers the ability to read, support and manipulate human language. Such natural language processing has changed the communication landscape with its language mimicry.

While some note that the hype hasn’t quite been reflected in reality, professionals in a range of sectors — including health care — now use ChatGPT for correspondence, such as drafting “sick notes”, for medication management, or to manage health care information.

There are predictions that health care Natural Language Processing will be a US$7.2 billion business by 2028, with this type of AI being deployed to help translate complex published papers for public consumption, for analysis of electronic health records to help identify at-risk patients, and to interact with patients to help with triage or answer health care questions.

In 2024, some say this type of AI is likely to focus on more sophisticated language models that power chatbots and virtual assistants, and will be built into word processing programs.

Machine Learning gives computers the ability to learn without explicitly being programmed for a given task. The algorithms driving these types of AI are based on statistical and predictive models. Like Natural Language Processing, Machine Learning often relies on ‘training’ from existing data sets, which have been human reviewed and annotated.

Essentially, Machine Learning doesn’t automatically know what to look for, and without human-informed training this type of AI tends to provide lots of noise and useless predictions. Once trained, Machine Learning can take previously unseen patient information and apply its prior ‘training’ to analyse the data and predict outcomes, or make recommendations.
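That train-then-predict pattern can be illustrated with a toy model: the algorithm only “knows” what the human-annotated examples teach it, and a new, unseen case is labelled by its resemblance to that training data. All feature values and labels below are invented for the example.

```python
# Toy train-then-predict sketch (1-nearest-neighbour). The "training" is
# a set of human-reviewed, annotated cases; prediction applies that prior
# annotation to an unseen patient. Numbers are fabricated illustrations.
import math

# (features = (heart rate, temperature), human-assigned label)
training = [
    ((130, 39.1), "deteriorating"),
    ((125, 38.8), "deteriorating"),
    ((72, 36.8), "stable"),
    ((80, 37.0), "stable"),
]

def predict(features):
    """Label a new patient by the closest annotated training case."""
    _, label = min(training, key=lambda ex: math.dist(ex[0], features))
    return label

print(predict((128, 39.0)))  # prints deteriorating
```

Without those annotated examples the function has nothing to compare against, which is the “lots of noise and useless predictions” problem in miniature.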

In health care, Machine Learning can recognise patterns that humans may miss, as described for AI’s role in predicting patient survival in gastric cancer, identifying the primary causes of cancers, and reducing breast cancer false positives.

In 2024, Machine Learning algorithms are likely to continue to be used for probing health care data analytics, with vast medical data provided by wearables, medical devices and electronic health records.

Across all forms of health care AI, it is clear that humans are still needed for AI training, evaluation of outputs, and consideration of the impacts of AI recommendations.

Given the global health care workforce shortages, the proverb may prove true: necessity is the mother of invention.

It may not be long until health care includes integrated AI, where a robot greets you in your native language for your annual check-up using Natural Language Processing, takes your vital signs, and, using Machine Learning algorithms that analyse those vital signs, sends the doctor a recommendation on which patients to prioritise and what investigations to order.

What AI can’t do is replace the natural ‘gut feel’ of a health care professional. And this won’t change in 2024.

The clinical reasoning and thinking processes that health care providers engage in are highly complex, and the sources of information the human brain considers in patient care are too numerous to capture with current algorithms. The implicit knowledge an expert relies on for effective clinical care is so deeply embedded in automatic human processes that methods to capture these data points often fail.

On top of this, AI-accessible data and the AI algorithms themselves can have flaws.

Machine learning can be overly sensitive, leading to over-diagnoses in some patients. Natural Language Processing AIs can act as health care trojan horses, where the technology is so convincing in its communication approaches that it tricks the user into thinking it is knowledgeable in the same way a human is.

There are also privacy concerns with such AI applications.

AI typically relies on data input to continue learning — and what happens to confidential patient information once it is entered into AI remains an open question for many platforms.

There are also challenges the health care AI field is tackling related to bias and responsibility, and questions about which ‘mundane’ and ‘repetitive’ tasks AI can truly take off humans’ hands.

In reality (ironically), all existing AI is devoid of the necessary context within which health care occurs. It misses the complexity, the empathy, and important data points that human intelligence has access to, and can therefore only replicate specific human tasks.

A better description of the role of AI in health care in the future might be: “AI won’t replace the doctors, but those doctors will be replaced who don’t use Artificial Intelligence,” as Dr Sangeeta Reddy, director at India’s Apollo, has put it.

Health care AI is increasingly taking on roles of “clinical decision-making support”, meaning the health care provider is in charge and human intelligence is prioritised, while AI augments this.

Under this model, AI could be programmed to alert the health care provider to all the variables not considered by its algorithm, to help the provider explore to what extent the AI recommendations are valuable in a specific context or with a specific patient.

For instance, if an AI is recommending antidepressants for a patient who is also pregnant, it would alert the doctor that such medications aren’t yet tested in this population — allowing the doctor to consider other data points in deciding the next best step.
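A minimal sketch of that decision-support framing might pair each recommendation with the variables the model never considered, so the clinician can judge whether it applies. The model inputs, flags and scenario below are hypothetical, not medical advice, and the “recommendation” is a hard-coded stand-in for a real model’s output.

```python
# Decision-support sketch: surface the recommendation *and* the patient
# variables the algorithm did not consider. All names are hypothetical.
MODEL_INPUTS = {"symptom_score", "age"}  # variables the model was trained on

def recommend(patient):
    recommendation = "consider antidepressant"   # stand-in model output
    unconsidered = set(patient) - MODEL_INPUTS   # context the model ignored
    alerts = []
    if patient.get("pregnant") and "pregnant" not in MODEL_INPUTS:
        alerts.append("recommendation not validated in pregnancy")
    return {"recommendation": recommendation,
            "unconsidered_variables": sorted(unconsidered),
            "alerts": alerts}

result = recommend({"symptom_score": 18, "age": 31, "pregnant": True})
print(result["alerts"])  # ['recommendation not validated in pregnancy']
```

The design choice matters: the system does not overrule itself or the doctor; it hands the human the gaps in its own reasoning.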

To support this model — in which humans lead and AI supports — the AI-integrated future shouldn’t just focus on improving AI. An AI-augmented health care system must include health care worker training and support in both AI literacy and human intelligence literacy.

Education can also focus on helping health care providers recognise when to question patterns; how to challenge predictions; and the value of trusting intuition — essentially building capacity to tolerate uncertainty.

This future-proofing health care education would help reduce the risk of “automation bias”, whereby humans allow the AI to work autonomously and trust the algorithm even in the face of clear evidence that it’s wrong.

AI has given us a valuable gift: the opportunity to explore what aspects of human intelligence are critical in health care, and which tasks can be enhanced with technology.

After all, both human brains and AI are wired for prediction, but humans have the power to interrogate, question and evaluate these predictions while AI has computing power beyond a human brain.

Both are needed to manage the complexities of patient care.

Michelle Lazarus is the director of the Centre for Human Anatomy Education and deputy director of the Centre for Scholarship in Health Education at Monash University.

Article courtesy of 360info.
