2020
DOI: 10.1109/access.2020.3030235

Adversarial Perturbation on MRI Modalities in Brain Tumor Segmentation

Abstract: Convolutional neural networks (CNNs) have been widely used in biomedical image segmentation applications. U-Net, as a semantic segmentation method, has become a mainstream approach to brain tumor segmentation. However, the intrinsic vulnerability of CNNs also brings potential risks to all CNN-based applications, including semantic segmentation. In this paper, we create a universal adversarial perturbation and apply it to every modality in order to investigate how the adversarial perturbation affects…
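As a rough illustration of the setup the abstract describes, the sketch below adds one image-agnostic (universal) perturbation to a BraTS-style four-modality volume. The function name apply_uap, the l_inf budget eps, the [0, 1] intensity range, and the modality ordering are illustrative assumptions rather than details taken from the paper; restricting the "which" argument to a single modality mirrors the per-modality experiments described in the citing statements below.

```python
import numpy as np

MODALITIES = ("T1", "T1ce", "T2", "FLAIR")

def apply_uap(volume, uap, which=MODALITIES, eps=0.03):
    """Add one universal perturbation to the selected MRI modalities.

    volume : float array, shape (4, D, H, W), intensities assumed in [0, 1]
    uap    : float array, shape (D, H, W), shared across inputs and modalities
    which  : modalities to perturb, e.g. ("FLAIR",) or all four
    eps    : l_inf budget that keeps the perturbation visually subtle
    """
    delta = np.clip(uap, -eps, eps)                  # enforce the budget
    adv = volume.copy()
    for name in which:
        i = MODALITIES.index(name)
        adv[i] = np.clip(adv[i] + delta, 0.0, 1.0)   # stay in the valid range
    return adv

# Toy usage: random volume, perturb every modality at once.
vol = np.random.rand(4, 8, 64, 64).astype(np.float32)
uap = np.random.uniform(-0.1, 0.1, size=(8, 64, 64)).astype(np.float32)
adv = apply_uap(vol, uap)
print(adv.shape, float(np.abs(adv - vol).max()))
```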

Cited by 16 publications (8 citation statements)
References 24 publications

“…Tables 4 and 5 provide a basic comparison of the vulnerability of various systems under different adversarial attacks. For instance, in the domain of healthcare applications, Cheng et al. (Cheng & Ji, 2020) exploit the vulnerability of a CNN model that performs tumor detection on brain MRIs. They also employed universal adversarial perturbations to create adversarial MRIs that fool the CNN.…”
Section: Discussion
confidence: 99%
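The statement above refers to universal adversarial perturbations (UAPs). The page does not spell out the construction procedure, so the sketch below shows only the generic recipe: accumulate FGSM-style gradient steps over a stream of inputs and project back onto the l_inf ball after every update. The model, data loader, input size, and step sizes are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

def build_uap(model, loader, eps=0.03, step=0.005, epochs=1):
    """Accumulate one image-agnostic perturbation with FGSM-style steps,
    projecting back onto the l_inf ball of radius eps after every update."""
    uap = torch.zeros(1, 4, 64, 64)               # shared across all inputs
    loss_fn = nn.CrossEntropyLoss()
    model.eval()
    for _ in range(epochs):
        for x, y in loader:                       # x: (B, 4, H, W) MRI slices
            delta = uap.clone().requires_grad_(True)
            loss = loss_fn(model(x + delta), y)   # push predictions off target
            loss.backward()
            with torch.no_grad():
                uap += step * delta.grad.sign()   # gradient-ascent step
                uap.clamp_(-eps, eps)             # project onto the budget
    return uap

# Toy usage: a 2-class stand-in segmenter and random 64x64 slices.
toy_model = nn.Conv2d(4, 2, kernel_size=3, padding=1)
toy_data = [(torch.rand(2, 4, 64, 64), torch.randint(0, 2, (2, 64, 64)))
            for _ in range(3)]
print(float(build_uap(toy_model, toy_data).abs().max()))
```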
“…The decline in image quality observed in these instances frequently arises from irregular illumination. In light of this concern, Cheng et al. [63] addressed the matter with an adversarial attack technique, presenting a new type of attack known as the "adversarial exposure attack".…”
Section: A Medical Image Adversarial Attack and Defense Classificatio...
confidence: 99%
“…MRI scans for brain tumor segmentation provide four different modalities (T1, T2, T1ce, and FLAIR) with different intensities so that the brain tumor can be detected and labeled more easily. Cheng et al. [69] investigated the effects of adversarial examples applied to each modality individually and to all modalities simultaneously. Experiments were carried out with an ensemble U-Net model and the MICCAI BraTS 2019 [70] dataset.…”
Section: Existing Adversarial Attacks On Medical Images
confidence: 99%
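Since the quoted experiments use BraTS 2019, where segmentation quality is conventionally reported with the Dice overlap, a minimal sketch of how the clean-versus-adversarial degradation could be quantified is shown below. The toy masks and the choice of metric are assumptions for illustration, not results from the paper.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice overlap between two binary tumor masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy masks standing in for predictions on clean vs. perturbed input;
# the drop in Dice against the ground truth quantifies the attack's effect.
gt = np.zeros((64, 64), dtype=bool); gt[20:40, 20:40] = True
clean_pred = np.zeros_like(gt);      clean_pred[21:41, 20:40] = True
adv_pred   = np.zeros_like(gt);      adv_pred[35:55, 35:55] = True
print("clean Dice:", round(dice(clean_pred, gt), 3))
print("adversarial Dice:", round(dice(adv_pred, gt), 3))
```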