2020
DOI: 10.1073/pnas.1907377117

On instabilities of deep learning in image reconstruction and the potential costs of AI

Abstract: Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with potential to change the field. In this paper, we demonstrate a crucial phenomenon: Deep learning typically yields unstable methods for image reconstruction. The instabilities usually occur in several forms: 1) Certain tiny, almost undetectable perturbations, both in the image and sampling domain, may result in severe artefacts in the reconstruction; 2) a small structural…
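The perturbation experiment the abstract describes can be made concrete. Below is a minimal, hypothetical PyTorch sketch of a worst-case stability test of the kind the paper reports: gradient ascent for a small measurement-domain perturbation r that maximizes the change in a learned reconstruction. The names f (reconstruction network), A (forward/sampling operator), and x (image) are placeholders, not the authors' code.

```python
import torch

def worst_case_perturbation(f, A, x, eps=0.01, steps=100, lr=1e-3):
    """Search for ||r|| <= eps * ||A(x)|| maximizing ||f(A(x) + r) - f(A(x))||."""
    y = A(x)                      # noiseless measurements of the image x
    budget = eps * y.norm()       # perturbation budget relative to the data
    base = f(y).detach()          # reference reconstruction
    r = torch.zeros_like(y, requires_grad=True)
    opt = torch.optim.Adam([r], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -(f(y + r) - base).norm()   # ascend on the reconstruction error
        loss.backward()
        opt.step()
        with torch.no_grad():              # project r back onto the norm ball
            if r.norm() > budget:
                r.mul_(budget / r.norm())
    return r.detach()
```

A large reconstruction error under a perturbation whose norm is a small fraction of the data norm is exactly the instability the abstract calls "tiny, almost undetectable perturbations" producing "severe artefacts".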

Cited by 529 publications (467 citation statements)
References 27 publications
“…This concerning effect has also been noted in a recently published article [9] and could be indicative of misclassification of small pathological features as artifact, in addition to the inherent blurring of filter‐based techniques. Future work should be encouraged to adopt stability tests [9] to address this phenomenon in detail.…”
supporting
confidence: 66%
“…It is known that adding white noise to the input helps against adversarial attacks for classifiers (Cohen, Rosenfeld and Kolter 2019). Moreover, in inverse problems one always has noisy data, so it remains unclear whether the computed perturbation in Antun et al. (2019) actually acts as an adversarial example when noise is added.…”
Section: Robustness Against Adversarial Attacks
mentioning
confidence: 99%
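The question this excerpt raises, whether a computed perturbation remains adversarial once measurement noise is present, suggests a simple empirical check. The following is a hypothetical sketch under the same placeholder assumptions as above (f is a reconstruction network, y the measurements, r a precomputed perturbation): it compares the average reconstruction error under Gaussian noise alone against noise plus the perturbation.

```python
import torch

def perturbation_survives_noise(f, y, r, sigma=0.01, trials=32):
    """Average reconstruction error with noise alone vs. noise plus r."""
    clean = f(y).detach()                      # reference reconstruction
    err_noise, err_both = 0.0, 0.0
    for _ in range(trials):
        n = sigma * torch.randn_like(y)        # i.i.d. Gaussian measurement noise
        err_noise += (f(y + n) - clean).norm().item()
        err_both += (f(y + n + r) - clean).norm().item()
    # A small gap between the two averages suggests that noise alone explains
    # most of the degradation, i.e. r adds little adversarial effect on top.
    return err_both / trials, err_noise / trials
```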
“…Furthermore, properties inherent to image processing may cause misclassifications. For instance, DNN-based image reconstructions are often performed to purify adversarial examples [28]; however, this purification can introduce image artifacts, resulting in misclassifications by DNNs [29]. It may be difficult to completely avoid security concerns caused by adversarial attacks.…”
Section: Discussion
mentioning
confidence: 99%
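As a concrete illustration of the pipeline this excerpt describes, here is a hypothetical sketch in which a reconstruction network g "purifies" an input before classification. Comparing predictions on the raw and purified inputs exposes cases where the purification artifacts themselves flip the label; classifier and g are placeholders, not APIs from the cited works.

```python
import torch

def predict_with_purification(classifier, g, x):
    """Predicted classes on the raw input x and on the purified input g(x)."""
    with torch.no_grad():
        raw_pred = classifier(x).argmax(dim=-1)
        purified_pred = classifier(g(x)).argmax(dim=-1)
    # Disagreement between the two predictions may indicate that artifacts
    # introduced by the reconstruction changed the classifier's decision.
    return raw_pred, purified_pred
```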