2020
DOI: 10.1007/978-3-030-60365-6_9
Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior

Cited by 30 publications (43 citation statements). References 18 publications.
“…In addition, due to the lack of ground truth, we were unable to make concrete conclusions about the performance of our CNN on test patient data. But the uncertainty of our CNN can be quantified by generating confidence maps [34–36] using Bayesian networks [37], an ensemble of multiple networks [38], or an extension of the probabilistic U‐Net [39], which can be one direction to investigate in the future.…”
Section: Discussion (mentioning)
confidence: 99%
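The ensemble option mentioned in this statement can be sketched as follows: several independently trained denoising networks predict on the same input, and the pixel-wise spread of their outputs serves as a confidence map. This is only an illustrative sketch, not code from the cited works; the model list `models` and the input tensor `noisy` are hypothetical placeholders.

import torch

@torch.no_grad()
def ensemble_confidence_map(models, noisy):
    # `models`: list of independently trained denoising networks (hypothetical)
    # `noisy`: input tensor of shape (1, C, H, W)
    preds = torch.stack([m(noisy) for m in models], dim=0)  # (M, 1, C, H, W)
    mean = preds.mean(dim=0)  # ensemble denoised estimate
    std = preds.std(dim=0)    # pixel-wise spread: high std = low confidence
    return mean, std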
“…SGLD DIP has already been applied to PET image reconstruction (Carrillo et al, 2021). Prior to (Tölle et al, 2021), we have shown that DIP with SGLD shows almost unchanged overfitting behavior in the case of medical images (Laves et al, 2020b). As a solution, we proposed a variational inference (VI) approach to DIP using Monte Carlo dropout (MCD) (Gal and Ghahramani, 2016).…”
Section: Related Work (mentioning)
confidence: 99%
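As a rough illustration of the Monte Carlo dropout idea referenced here (not the cited authors' actual implementation): the dropout layers of the fitted DIP network are kept stochastic at inference time, several forward passes are drawn from the same fixed input, and the per-pixel variance acts as an uncertainty map. The network `net` and the fixed DIP input `z` are assumed, hypothetical names.

import torch

def mc_dropout_predict(net, z, n_samples=25):
    # Assumes `net` contains nn.Dropout layers and has already been fitted
    # to the noisy image from the fixed random input `z` (DIP setting).
    net.eval()
    for m in net.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()  # keep dropout stochastic while other layers stay in eval mode
    with torch.no_grad():
        samples = torch.stack([net(z) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0)  # denoised estimate, per-pixel uncertainty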
“…These artifacts are composed out of learned image statistics, which can lead to false anatomical structures being embedded in the reconstruction that are not present in the imaged object (Bhadra et al, 2020). This phenomenon is referred to as hallucination and is not limited to tomographic reconstruction but also happens in other deep-learning-based inverse image tasks (Laves et al, 2020b). Hallucinations can result in misdiagnosis and must be avoided at all costs in medical imaging.…”
Section: Introduction (mentioning)
confidence: 99%
“…This is a commonly observed scenario in medical imaging applications due to variations among patients, image acquisition, and reconstruction protocols [8,9,12]. For example, when applying denoising ConvNets on unseen features, it may cause artifacts in the denoised images as demonstrated in both Ultrasound [14] and Positron Emission Tomography (PET) applications [5]. To generalize a trained ConvNet to different image distributions, one has to include images sampled from the new distribution (task) in the training dataset and retrain the ConvNet.…”
Section: Introduction (mentioning)
confidence: 99%
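The retraining step described in this statement can be sketched as follows, assuming paired noisy/clean training samples. The dataset objects, the model, and the loss/optimizer choices are hypothetical placeholders, not taken from the cited works.

import torch
from torch.utils.data import ConcatDataset, DataLoader

def retrain_on_new_distribution(model, old_dataset, new_dataset, epochs=10, lr=1e-4):
    # Mix images from the new acquisition/reconstruction protocol into the
    # original training set, then train the denoiser again on the union.
    loader = DataLoader(ConcatDataset([old_dataset, new_dataset]), batch_size=8, shuffle=True)
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for noisy, clean in loader:  # assumes each dataset yields (noisy, clean) pairs
            optim.zero_grad()
            loss = loss_fn(model(noisy), clean)
            loss.backward()
            optim.step()
    return model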