
Automation bias: the risk of over-reliance on artificial intelligence

A study by Dratsch et al., published in Radiology earlier this month, found that incorrect advice from a purported AI-based decision support system impaired the performance of radiologists across all levels of expertise when interpreting mammograms.1 The accuracy of ‘very experienced’ radiologists fell from 82% to 45% when the purported AI recommended an incorrect interpretation, whilst the accuracy of ‘inexperienced’ radiologists fell from almost 80% to less than 20%.


This phenomenon – termed ‘automation bias’ – reflects the propensity of humans to become overly reliant on AI systems, manifesting as errors of commission (following incorrect AI advice) and errors of omission (inaction due to a lack of AI prompting). Concerningly, the likelihood of such errors has been found to increase in contexts of greater task complexity and higher workload – both characteristics of the medical profession.2 Automation bias also poses a risk of ‘de-skilling’, whereby certain physician skills may be lost, or never acquired in the first place, owing to reliance on automated clinical advice.


How can these risks be guarded against? In response to the findings of Dratsch et al., Dr Pascal Baltzer of the Department of Biomedical Imaging and Image-guided Therapy at the Medical University of Vienna put forward four key strategies to mitigate the impact of automation bias in AI-assisted radiology:3


(1) Knowledge: Provide continuous training and education to prevent complacency and empower radiologists to make more informed decisions when using AI tools;


(2) Accountability: Promote accountability for radiologists' decisions, such as by benchmarking overall performance and providing continuous feedback;


(3) Transparency: Ensure that algorithms are transparently developed and validated to enhance radiologists’ understanding of, and reduce inappropriate levels of trust in, AI technologies;


(4) Context-selectivity: Implement AI as a background triage system that surfaces algorithm recommendations only in specific situations likely to benefit from AI input (one possible gating rule is sketched below).
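
To make the fourth strategy concrete, below is a minimal sketch in Python of how a context-selective gating rule might look. The AIResult object, the should_display_recommendation function and the confidence threshold are purely illustrative assumptions; none of these names or values come from the studies cited here.

from dataclasses import dataclass

@dataclass
class AIResult:
    birads_suggestion: int  # BI-RADS category proposed by the algorithm (hypothetical field)
    confidence: float       # model confidence score in [0, 1] (hypothetical field)

def should_display_recommendation(result: AIResult, flagged_for_triage: bool = False) -> bool:
    # Surface the AI suggestion only when the algorithm is highly confident
    # or a background triage rule has flagged the case; otherwise the
    # radiologist reads the study unprompted.
    CONFIDENCE_THRESHOLD = 0.9  # illustrative cut-off, not taken from the cited work
    return flagged_for_triage or result.confidence >= CONFIDENCE_THRESHOLD

# Example: a low-confidence, unflagged case yields no on-screen prompt.
print(should_display_recommendation(AIResult(birads_suggestion=4, confidence=0.62)))  # prints False

The point of such gating is that the default reading experience remains unassisted, so the appearance of a prompt itself signals that the algorithm's input is likely to add value.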


While the above strategies appear useful for reducing the risk of over-reliance on AI in radiology, research is needed to assess their effectiveness. Moreover, automation bias represents just one potential impact of implementing AI within healthcare. Greater efforts to identify and address other potential effects of human-machine interaction are needed to ensure the safe integration of AI systems into clinical workflows.


References

1. Dratsch, T. et al. Automation bias in mammography: The impact of artificial intelligence BI-RADS suggestions on reader performance. Radiology 222176 (2023).

2. Goddard, K., Roudsari, A. & Wyatt, J. C. Automation bias: a systematic review of frequency, effect mediators, and mitigators. J. Am. Med. Inform. Assoc. 19, 121–127 (2012).

3. Baltzer, P. A. T. Automation bias in breast AI. Radiology 230770 (2023).


Article written by Dr Fazal Shah

Fazal is an academic foundation doctor working in the Oxford deanery. His interests lie in the ethics and regulation of artificial intelligence for healthcare.


