2020 International Conference on Data Science and Its Applications (ICoDSA)
DOI: 10.1109/icodsa50139.2020.9212850
Analysis of Adversarial Attacks on Skin Cancer Recognition

Cited by 10 publications (8 citation statements)
References 18 publications
“…In clinical applications, it has been found that noise that is difficult for humans to detect frequently causes significant interference to the diagnostic model, limiting the utility of deep learning in the real world. To improve model robustness, (97) performed adversarial training on MobileNet and VGG-16 using the attacking models FGSM and PGD for skin cancer classification. First, two white-box attacks based on Projected Gradient Descent (PGD) and the Fast Gradient Sign Method (FGSM) were used to test the robustness of these models.…”
Section: Methods For Typical and Frontier Problems In Skin Cancer Cla...
confidence: 99%
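The FGSM attack quoted above perturbs an input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a toy logistic-regression "model" (the weights, input, and epsilon below are illustrative, not from the paper):

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: one step of size eps along sign(grad),
    then clip back into the valid pixel range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: p = sigmoid(w @ x); cross-entropy loss for true label y = 1.
w = np.array([1.0, -2.0, 0.5])   # illustrative weights
x = np.array([0.6, 0.2, 0.9])    # illustrative "image"

p = sigmoid(w @ x)
grad_x = (p - 1.0) * w           # d(loss)/dx for y = 1

x_adv = fgsm(x, grad_x, eps=0.1)
print(x_adv)                     # each pixel shifted by +/- eps
```

Each coordinate moves by exactly ±eps, which is why FGSM perturbations are nearly invisible for small eps yet can flip the model's prediction.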
“…FGSM reduced the model’s accuracy by 36%, while the One-pixel attack reduced it by only 2–3%. Huq and Pervin [ 18 ] applied the FGSM and PGD attacks to dermoscopic images for skin cancer recognition. The model’s performance decreased by up to 75%.…”
Section: Related Work
confidence: 99%
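PGD, the stronger of the two attacks discussed here, is essentially iterated FGSM: it takes several small signed-gradient steps and, after each one, projects the adversarial example back into an L-infinity ball of radius eps around the original input. A hedged sketch (the gradient function, step size, and epsilon are illustrative placeholders, not values from the paper):

```python
import numpy as np

def pgd(x, grad_fn, eps, alpha, steps):
    """Projected Gradient Descent attack: repeat small signed steps of
    size alpha, projecting into the L-infinity eps-ball around x and
    into the valid pixel range [0, 1] after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

# Illustrative gradient: pretend the loss grows with every coordinate
# (a stand-in for autograd on a real classifier).
grad_fn = lambda x: np.ones_like(x)
x = np.array([0.2, 0.5, 0.9])
x_adv = pgd(x, grad_fn, eps=0.05, alpha=0.02, steps=5)
print(x_adv)  # five steps of +0.02, clipped to within eps of x
```

Because the projection caps the total perturbation at eps regardless of the number of steps, PGD stays as imperceptible as FGSM while searching the eps-ball far more thoroughly, which is why it typically degrades accuracy much more.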
“…In addition, they used two different architectures, a DNN and a hybrid DNN model combined with segmentation techniques, and they showed that the hybrid model is much more robust than a conventional DNN against adversarial attacks. Huq et al [72] analyzed adversarial attacks for skin cancer recognition. They experimented with VGG16 and MobileNet on the HAM10000 dataset, classifying each image into one of seven categories.…”
Section: Existing Adversarial Attacks On Medical Images
confidence: 99%