Abstract
Background
Deep neural networks have shown impressive results in a variety of medical image classification tasks. However, for real-world applications, the network's uncertainty needs to be estimated alongside its prediction.
Objective
In this review, we investigate in what form uncertainty estimation has been applied to the task of medical image classification. We also investigate which metrics are used to describe the effectiveness of the applied uncertainty estimation.
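One family of metrics that recurs in this context is network calibration, with expected calibration error (ECE) being a common choice. The sketch below is illustrative only and is not taken from any particular study in the review; the bin count and the use of equal-width confidence bins are assumptions for the example.

```python
# Minimal sketch of expected calibration error (ECE): the weighted average gap
# between predicted confidence and observed accuracy across confidence bins.
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    correct = (predictions == labels).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()    # empirical accuracy in this bin
            conf = confidences[in_bin].mean()  # mean confidence in this bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```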
Methods
Google Scholar, PubMed, IEEE Xplore, and ScienceDirect were screened for peer-reviewed studies, published between 2016 and 2021, that deal with uncertainty estimation in medical image classification. The search terms “uncertainty,” “uncertainty estimation,” “network calibration,” and “out-of-distribution detection” were used in combination with the terms “medical images,” “medical image analysis,” and “medical image classification.”
Results
A total of 22 papers were selected for detailed analysis through the systematic review process. This paper provides a table that systematically compares the included works with respect to the method applied for estimating uncertainty.
Conclusions
The applied methods for estimating uncertainty are diverse, but the sampling-based methods Monte-Carlo Dropout and Deep Ensembles are used most frequently. We conclude that future work could investigate the benefits of uncertainty estimation in collaborative settings of artificial intelligence systems and human experts.
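For illustration, a minimal sketch of Monte-Carlo Dropout at inference time, one of the sampling-based methods named above, is given below. The toy architecture, dropout rates, and number of forward passes are assumptions made for the example, not details taken from the reviewed studies.

```python
# Minimal sketch of Monte-Carlo Dropout: average the softmax outputs of T
# stochastic forward passes (dropout kept active) and use the predictive
# entropy of the mean as an uncertainty score.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Toy image classifier whose dropout layers stay active at test time."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.25),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(p=0.5), nn.Linear(16, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, T: int = 20):
    # train() keeps dropout stochastic; this toy model has no batch-norm layers,
    # which would otherwise also be affected by train mode.
    model.train()
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return mean_probs, entropy

# Usage with a dummy batch of grayscale images
model = SmallClassifier(num_classes=2)
x = torch.randn(4, 1, 64, 64)
mean_probs, uncertainty = mc_dropout_predict(model, x, T=20)
```

Deep Ensembles follow the same averaging pattern, except that the samples come from independently trained networks rather than repeated stochastic forward passes through a single network.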
International Registered Report Identifier (IRRID)
RR2-10.2196/11936
Subject
Health Information Management, Health Informatics