2019
DOI: 10.48550/arxiv.1907.13124
Preprint
Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

Cited by 3 publications (3 citation statements)
References 0 publications
“…Ozbulak et al. [41] demonstrated that deep-learning-based medical image segmentation models are susceptible to adversarial attacks (AAs). They focused on targeted attacks against optic disc segmentation in glaucoma imaging.…”
Section: Adaptive Segmentation Mask Attack
confidence: 99%
“…In some cases, the performance of the model decreased by 100% [19]. Ozbulak et al. [20] proposed a targeted attack for medical image segmentation named the Adaptive Segmentation Mask Attack (ASMA). This attack creates imperceptible adversarial samples and achieves high Intersection-over-Union (IoU) degradation.…”
Section: Related Work
confidence: 99%
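For context, the statement above describes a targeted attack that steers a segmentation model toward an attacker-chosen mask and reports the resulting IoU degradation. The following is a minimal PyTorch sketch of that general idea, not the authors' ASMA implementation: the model, tensor shapes, step count, and step size (steps, alpha) are illustrative assumptions.

import torch
import torch.nn.functional as F

def iou(pred_mask, target_mask):
    # Intersection-over-Union between two boolean masks of equal shape.
    inter = (pred_mask & target_mask).sum().item()
    union = (pred_mask | target_mask).sum().item()
    return inter / union if union > 0 else 1.0

def targeted_mask_attack(model, image, target_mask, steps=100, alpha=1e-3):
    # Iteratively perturb `image` so the model predicts the attacker-chosen
    # `target_mask`. Assumed shapes: image (N, C, H, W) with values in [0, 1];
    # target_mask (N, H, W), long dtype; model returns per-pixel logits
    # of shape (N, num_classes, H, W).
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(adv), target_mask)
        loss.backward()
        with torch.no_grad():
            adv -= alpha * adv.grad.sign()  # small signed-gradient step toward the target
            adv.clamp_(0.0, 1.0)            # keep the perturbed image in a valid range
        adv.grad.zero_()
    return adv.detach()

Degradation can then be reported as the drop in IoU, measured against the ground-truth mask, between the model's prediction on the clean image and its prediction on the adversarial image.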
“…This attack was tested on the InceptionResNetV2 model. The authors in [25] created a segmentation attack for fundoscopy and dermoscopy images using the U-Net model. In addition, [26] developed an attack for fundoscopy images that applies to both segmentation and classification tasks.…”
Section: Introduction
confidence: 99%