2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)
DOI: 10.1109/isbi48211.2021.9433761

Defending Against Adversarial Attacks On Medical Imaging AI System, Classification Or Detection?

Cited by 22 publications (17 citation statements) | References 11 publications
“…In some studies, adversarial training improved DL model robustness for multiple medical imaging modalities such as lung CT and retinal optical coherence tomography. 40, 42-44 By contrast, Hirano et al 45 found that adversarial training generally did not increase model robustness for classifying dermatoscopic images, optical coherence tomography images, and chest x-ray images. The difference in effectiveness of adversarial training can be attributed to differences in adversarial training protocols (eg, single-step vs iterative approaches).…”
Section: Discussion
Confidence: 98%
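
The contrast drawn above between single-step and iterative adversarial-training protocols can be made concrete with a minimal sketch. The PyTorch code below is illustrative only: the perturbation budget eps, the step size alpha, and the step count are assumed values, not settings taken from the cited studies.

```python
# Minimal sketch (assumed PyTorch usage, illustrative hyperparameters) contrasting
# single-step (FGSM-style) and iterative (PGD-style) adversarial example crafting
# inside an adversarial-training step.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Single-step protocol: one gradient-sign step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_perturb(model, x, y, eps, alpha, steps):
    """Iterative protocol: several small steps, projected back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball around x
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=2/255, iterative=True):
    """One adversarial-training step: craft perturbed inputs, then fit on them."""
    x_adv = (pgd_perturb(model, x, y, eps, alpha=eps / 4, steps=10)
             if iterative else fgsm_perturb(model, x, y, eps))
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Whether the single-step or the iterative branch is used is exactly the kind of protocol difference the quoted discussion points to when explaining the mixed robustness results.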
“…One reason for this behavior could be that medical images are highly standardized, and small adversarial perturbations dramatically distort their distribution in the latent feature space. 40, 41 Another factor could be the overparameterization of DL models for medical image analysis, as sharp loss landscapes around medical images lead to higher adversarial vulnerability. 14 …”
Section: Discussion
Confidence: 99%
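
The two vulnerability factors quoted above, latent-feature distortion and sharp local loss landscapes, can be probed empirically. The sketch below is an illustration under assumed names: `model` is a trained classifier and `feature_extractor` its penultimate layer; neither probe is taken from the cited references.

```python
# Illustrative probes (not from the cited papers): how far a small perturbation
# moves an image in the latent feature space relative to pixel space, and how
# much small random perturbations raise the loss (a crude sharpness proxy).
import torch
import torch.nn.functional as F

@torch.no_grad()
def latent_distortion(feature_extractor, x, x_adv):
    """Ratio of feature-space displacement to pixel-space displacement."""
    f_clean = feature_extractor(x).flatten(1)
    f_adv = feature_extractor(x_adv).flatten(1)
    feat_shift = (f_adv - f_clean).norm(dim=1)
    pixel_shift = (x_adv - x).flatten(1).norm(dim=1)
    return (feat_shift / pixel_shift.clamp_min(1e-12)).mean().item()

@torch.no_grad()
def loss_sharpness(model, x, y, eps=1/255, trials=8):
    """Average loss increase under small random perturbations around x."""
    base = F.cross_entropy(model(x), y).item()
    rises = []
    for _ in range(trials):
        noise = torch.empty_like(x).uniform_(-eps, eps)
        rises.append(F.cross_entropy(model((x + noise).clamp(0, 1)), y).item() - base)
    return sum(rises) / trials
```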
“…Defence Method. In [41], the authors perform both prevention and detection in their solution. For prevention, the DNN is trained on both normal and adversarial samples to make it more robust to attacks, an idea that has been shown to be effective in the past [8].…”
Section: GMM Methods
Confidence: 99%
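
A hedged sketch of the detection half of such a defence, assuming, as the surrounding "GMM Methods" section suggests, that clean latent features are modelled with a Gaussian mixture and low-likelihood inputs are flagged. The component count and rejection quantile are assumptions, not the authors' exact recipe, and the features are assumed to come from the victim classifier's penultimate layer.

```python
# Sketch of GMM-based adversarial-input detection (illustrative, not the method
# of [41]): fit a Gaussian mixture on clean-sample features, then reject test
# inputs whose log-likelihood falls below a threshold chosen on clean data.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_detector(clean_features, n_components=5, quantile=0.01):
    """Fit the GMM and pick a rejection threshold from clean-data likelihoods."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(clean_features)
    threshold = np.quantile(gmm.score_samples(clean_features), quantile)
    return gmm, threshold

def flag_adversarial(gmm, threshold, test_features):
    """Boolean mask: True where the input looks adversarial (low likelihood)."""
    return gmm.score_samples(test_features) < threshold
```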
“…For binary classifiers we use accuracy and for multi-class classifiers we use average-accuracy. We compute GMM's accuracy using the inverse of the author's metric called 'adversarial risk', a combined performance measure of the detector and the victim's classifier [41]. Additionally, for MGM [40] and GMM [41], which used an evaluation different from accuracy, we performed the exact same evaluation made in the original papers.…”
Section: Metrics
Confidence: 99%
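
A minimal sketch of the two standard metrics named in the quote, plain accuracy and per-class average accuracy. The paper-specific 'adversarial risk' measure of [41] is deliberately not reconstructed here, since its exact definition is given only in the original paper.

```python
# Illustrative metric helpers: accuracy for binary classifiers and
# macro-averaged per-class accuracy for multi-class classifiers.
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correctly predicted labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def average_accuracy(y_true, y_pred):
    """Mean of per-class accuracies (balanced accuracy for multi-class labels)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(per_class))
```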