2022
DOI: 10.1038/s41467-022-34025-x

Uncertainty-informed deep learning models enable high-confidence predictions for digital histopathology

Abstract: A model’s ability to express its own predictive uncertainty is an essential attribute for maintaining clinical user confidence as computational biomarkers are deployed into real-world medical settings. In the domain of cancer digital histopathology, we describe a clinically-oriented approach to uncertainty quantification for whole-slide images, estimating uncertainty using dropout and calculating thresholds on training data to establish cutoffs for low- and high-confidence predictions. We train models to ident…
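The approach described in the abstract (repeated stochastic forward passes with dropout kept active, followed by an uncertainty cutoff derived from training data) can be illustrated with a short sketch. This is a minimal, hedged example in PyTorch/NumPy and is not the authors' implementation; the threshold rule (the largest uncertainty cutoff whose retained predictions reach a target accuracy) and all function and variable names are illustrative assumptions.

```python
# Minimal sketch of Monte Carlo dropout uncertainty estimation with a
# training-derived confidence cutoff. Illustrative only; details differ
# from the published method.
import numpy as np
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Run repeated stochastic forward passes with dropout kept active."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_prob = probs.mean(dim=0)                       # predictive probability
    uncertainty = probs.std(dim=0).max(dim=-1).values   # dispersion across passes
    return mean_prob, uncertainty

def calibrate_threshold(uncertainties, labels, preds, target_accuracy=0.95):
    """Pick the largest uncertainty cutoff (on training/validation data) for
    which the retained, lower-uncertainty predictions reach the target
    accuracy. This specific rule is an assumption for illustration."""
    order = np.argsort(uncertainties)
    correct = (preds[order] == labels[order]).astype(float)
    running_acc = np.cumsum(correct) / np.arange(1, len(correct) + 1)
    keep = np.where(running_acc >= target_accuracy)[0]
    return uncertainties[order][keep[-1]] if len(keep) else -np.inf
```

At inference, a prediction whose uncertainty falls below the calibrated cutoff would be reported as high-confidence; anything above it would be flagged as low-confidence.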

Cited by 45 publications (33 citation statements)
References 45 publications
“…Previous work in DL uncertainty estimation has been extensively investigated in segmenting lung-related 19-21 and brain-related 22-24 structures. While DL uncertainty estimation has been applied to a broad range of HNSCC-related classification tasks 25-28 and dose prediction 29, only a limited number of studies have investigated uncertainty estimation for 3-dimensional HNSCC medical image segmentation, predominantly for nasopharyngeal cancer 30 or organs at risk 31,32; to our knowledge only one study has attempted to investigate segmentation uncertainty estimation in OPC 33. Therefore, there exists a significant gap in knowledge about how to construct DL auto-segmentation models that lend themselves to uncertainty estimation and subsequently how to quantify the model uncertainty at individual patient and voxel-wise levels for OPC GTVp segmentation.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
“…A common approach has been to mark the most uncertain predictions for a manual review by a physician. 17,18 This strategy aims to reduce the workload of the medical doctors by enabling them to spend less time on the reliable predictions and focus on the complicated or unreliably-predicted cases. Alternatively, other researchers have argued for a manual visual review of generated uncertainty heatmaps that are overlayed on the original input image.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
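The triage strategy quoted above, routing only the most uncertain predictions to a physician, can be sketched in a few lines. This is a hypothetical illustration, not code from any of the cited studies, and assumes per-case uncertainty scores and a cutoff are already available.

```python
# Illustrative only: split cases into auto-reported and manual-review groups
# by comparing each case's uncertainty score against a chosen cutoff.
def triage_for_review(case_ids, uncertainties, cutoff):
    auto_accepted, needs_review = [], []
    for case_id, u in zip(case_ids, uncertainties):
        (needs_review if u > cutoff else auto_accepted).append(case_id)
    return auto_accepted, needs_review

# Example: only the high-uncertainty case is routed to a physician.
auto, review = triage_for_review(["case-1", "case-2"], [0.02, 0.31], cutoff=0.1)
```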
“…For effective integration into pathologists' workflows, AI models should provide a confidence estimate for each prediction. While the concept of AI model uncertainty has been studied extensively in the past years, 27-29 its application to computational pathology has been sparse. 30,31 Tempering the AI model's predictions with a measure of its confidence could help pathologists to better blend them into their routine grading, thereby fostering model acceptance.…”
Section: Introduction (citation type: mentioning, confidence: 99%)