2021
DOI: 10.1007/978-3-030-87199-4_4

A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks

Abstract: Deep learning based methods for medical images can be easily compromised by adversarial examples (AEs), posing a serious security flaw in clinical decision-making. It has been discovered that conventional adversarial attacks like PGD, which optimize the classification logits, are easy to distinguish in the feature space, resulting in accurate reactive defenses. To better understand this phenomenon and reassess the reliability of reactive defenses for medical AEs, we thoroughly investigate the characteristic of…
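For context, the abstract's "conventional adversarial attacks like PGD which optimize the classification logits" refers to projected gradient descent: repeatedly step along the sign of the loss gradient, then project back into a small L-infinity ball around the clean input. The sketch below is a minimal, hedged illustration on a toy differentiable classifier, not the paper's method; the linear model, `grad_fn`, and all parameter values are illustrative assumptions.

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=0.05, alpha=0.01, steps=10):
    """Minimal L-infinity PGD: ascend the loss by signed gradient steps,
    projecting back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

# Toy binary classifier: logit = w.x + b with a sigmoid and BCE loss.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def grad_fn(x, y):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid probability
    return (p - y) * w                       # gradient of BCE loss w.r.t. x

x = np.array([0.2, 0.5, 0.7])
x_adv = pgd_attack(x, y=1.0, grad_fn=grad_fn)
```

Because the attack optimizes only the output logits, the perturbed input's intermediate features can drift far from the normal feature distribution, which is exactly the detectability the paper exploits and then re-examines.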

Cited by 11 publications (2 citation statements)
References 46 publications
“…This approach proves to be advantageous and secure in the domain of biomedical image analysis. Most pre-processing-based defense approaches primarily focus on medical classification tasks, as evidenced by the majority of existing research [61,117]. Therefore, it is necessary to perform image-level preprocessing to preserve the identifiable components for further diagnostic purposes.…”
Section: Image-Level Preprocessing
confidence: 99%
“…Yao et al [42] presented a novel hierarchical feature constraint (HFC) to supplement current white-box attacks, allowing the adversarial representation to be concealed within the normal feature distribution. They examined the proposed approach using a fundoscopy image dataset.…”
Section: HFC
confidence: 99%
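The HFC idea described above — concealing the adversarial representation within the normal feature distribution — can be illustrated with a simplified, one-layer feature-space penalty. This is a hedged sketch, not the paper's hierarchical formulation: it fits a single Gaussian to "normal" feature vectors and scores adversarial features by squared Mahalanobis distance, which an attacker could add to the attack loss so perturbed features stay near the normal cloud. All names and data here are synthetic assumptions.

```python
import numpy as np

# Synthetic stand-in for feature vectors of clean (normal) inputs.
rng = np.random.default_rng(0)
normal_feats = rng.normal(loc=1.0, scale=0.5, size=(500, 8))

# Fit a single Gaussian to the normal features (the real HFC is
# hierarchical, constraining features at multiple network layers).
mu = normal_feats.mean(axis=0)
cov = np.cov(normal_feats, rowvar=False) + 1e-6 * np.eye(8)
cov_inv = np.linalg.inv(cov)

def feature_penalty(f):
    """Squared Mahalanobis distance of a feature vector from the
    fitted normal distribution; low values look 'normal'."""
    d = f - mu
    return float(d @ cov_inv @ d)

# Features sitting at the distribution mean score near zero; features
# pushed far away (as plain logit-only attacks tend to produce) score high.
inlier = mu.copy()
outlier = mu + 5.0
```

Adding `lambda * feature_penalty(features(x_adv))` to an attack objective would trade some attack strength for feature-space stealth, which is the trade-off the cited work evaluates on fundoscopy images.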