DeT: Defending Against Adversarial Examples via Decreasing Transferability
2019
DOI: 10.1007/978-3-030-37337-5_25

Cited by 7 publications (2 citation statements)
References 16 publications
“…Most adversarial examples are inherently unstable. Previous studies [23], [65], [28] have shown that adversarial examples experience a trade-off between transferability and imperceptibility, i.e., the imperceptible adversarial examples generated from the surrogate model can hardly fool the target model. To show that adversarial texts also have low transferability, we first train two models from the same training dataset, generate adversarial texts from one model, and then transfer them to another.…”
Section: A Design Intuition
Confidence: 99%
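The experiment described in the excerpt above — train two models on the same dataset, craft adversarial examples against one (the surrogate), then measure how often they fool the other (the target) — can be sketched in a few lines. The synthetic data, undertrained logistic-regression models, and FGSM-style perturbation below are illustrative stand-ins, not the cited papers' actual models or attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary-classification data standing in for a real dataset.
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, seed, epochs=30, lr=0.1):
    """Gradient-descent logistic regression; the seed only varies the init,
    so both models see the same training data (as in the excerpt)."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_surrogate = train_logreg(X, y, seed=1)  # model the attacker can query
w_target = train_logreg(X, y, seed=2)     # independently initialized twin

# FGSM-style perturbation from the surrogate's input gradient:
# for logistic loss, dL/dx = (p - y) * w, so we step along its sign.
eps = 0.5
p = sigmoid(X @ w_surrogate)
X_adv = X + eps * np.sign(np.outer(p - y, w_surrogate))

def fool_rate(w, X_adv, y):
    """Fraction of adversarial inputs the model misclassifies
    (includes the model's clean errors; fine for a rough sketch)."""
    pred = (sigmoid(X_adv @ w) > 0.5).astype(float)
    return float(np.mean(pred != y))

rate_surrogate = fool_rate(w_surrogate, X_adv, y)
rate_target = fool_rate(w_target, X_adv, y)
print(f"fool rate on surrogate: {rate_surrogate:.2f}, on target: {rate_target:.2f}")
```

Comparing the two rates gives a crude transferability measurement; how large the gap is depends heavily on model class, training, and perturbation budget, which is exactly the trade-off the excerpt discusses.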
“…Jiachen et al [28] proposed a novel black-box adversarial sensor attack targeting the security of autonomous driving perception models. To defend against adversarial examples, Changjiang et al [29] designed a transferability-based approach for both black and gray box attacks on deep neural networks.…”
Section: Attack and Defense in Other Layers
Confidence: 99%