2020
DOI: 10.1097/md.0000000000023568

Adversarial attack on deep learning-based dermatoscopic image recognition systems

Abstract: Deep learning algorithms have shown excellent performance in the field of medical image recognition, and practical applications have been made in several medical domains. Little is known about the feasibility and impact of an undetectable adversarial attack, which can disrupt an algorithm by modifying a single pixel of the image to be interpreted. The aim of the study was to test the feasibility and impact of an adversarial attack on the accuracy of a deep learning-based dermatoscopic image recognition system…
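For concreteness, the single-pixel attack the abstract describes is usually cast as a black-box search over one pixel's position and color, in the spirit of Su et al.'s one-pixel attack. The sketch below is a minimal illustration of that formulation, not the study's implementation; the model, image, and search budget are hypothetical placeholders.

import torch
import torch.nn as nn
from scipy.optimize import differential_evolution

# Hypothetical stand-ins: any image classifier and any RGB image work here.
# A tiny random model over the 7 HAM10000 classes keeps the sketch self-contained.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 7))
model.eval()
image = torch.rand(3, 224, 224)  # placeholder dermatoscopic image, values in [0, 1]
true_label = 0

def true_class_confidence(params):
    # Apply one (x, y, r, g, b) pixel change and return the model's confidence
    # in the true class -- the quantity the black-box search minimizes.
    x, y, r, g, b = params
    perturbed = image.clone()
    perturbed[:, int(y), int(x)] = torch.tensor([r, g, b], dtype=image.dtype)
    with torch.no_grad():
        probs = torch.softmax(model(perturbed.unsqueeze(0)), dim=1)
    return probs[0, true_label].item()

# Search over pixel location and color with differential evolution; the attack
# succeeds if the confidence drops far enough for the predicted label to flip.
bounds = [(0, 223), (0, 223), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(true_class_confidence, bounds,
                                maxiter=20, popsize=10, seed=0)
print("true-class confidence after one-pixel attack:", result.fun)

Because the search only queries model outputs, this style of attack needs no access to gradients or weights, which is what makes it plausible against deployed diagnostic systems.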

Cited by 14 publications (5 citation statements)
References 26 publications (17 reference statements)

Citation statements
“…Although previous research has highlighted the efficacy of adversarial training in bolstering deep neural networks against adversarial noise for tasks like natural image segmentation (22) or detection (23), adversarial noise for medical images (e.g., MRI data) has not received sufficient attention because of the way medical images are produced and the substantial differences between medical image noise and natural noise (24, 25). A previous tentative study indicated that adversarial training could enhance the robustness and generalizability of models for csPCa, leading to improved diagnostic accuracy across varied testing datasets (26).…”
Section: Discussion
confidence: 99%
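The statement above credits adversarial training with the robustness gains; the core recipe is to solve an inner maximization (find a worst-case perturbation) before each weight update. A minimal PGD-style sketch, assuming a generic PyTorch classifier and [0, 1]-scaled inputs (all names are placeholders, not the cited authors' code):

import torch
import torch.nn as nn

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=5):
    # Inner maximization: take `steps` signed-gradient ascent steps on the loss,
    # projecting back into the L-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)  # keep valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # Outer minimization: update weights on the perturbed batch, not the clean one.
    x_adv = pgd_perturb(model, x, y)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()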
“…Lastly, they demonstrated that the success rate of the attacks is not influenced by the size of the training set. Allyn et al. [27] performed adversarial attacks on dermoscopic imaging: they perturbed the test set of the HAM10000 dataset and evaluated it with DenseNet201.…”
Section: Existing Adversarial Attacks On Medical Images
confidence: 99%
“…Papers [38,46,48] investigated attacks that fool the segmentation task, using UNet to generate perturbed masks. In the classification task, papers [35] and [41–44] employed the FGSM attack, [35,41,44] the PGD attack, [39,40] the UAP attack, [37] the One Pixel attack, and [46,48,60] GAN-based attacks. DeepFake attacks, by contrast, generate fake data, e.g., inserting a malignant tumor into a medical image that is supposed to be benign.…”
Section: Highlighted Strategies Of Security In Machine Learning For H…
confidence: 99%
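Of the classification attacks listed above, FGSM is the simplest and makes the family concrete: a single signed-gradient step of size eps. A minimal sketch, again assuming a generic differentiable PyTorch classifier with [0, 1]-scaled inputs:

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=4/255):
    # Fast Gradient Sign Method: perturb each pixel by +/- eps in the
    # direction that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

PGD is the iterated version of this step, UAP searches for a single perturbation that transfers across many images, and the GAN-based attacks instead learn a generator that produces the perturbation.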