2022
DOI: 10.1016/j.patter.2022.100472
Disrupting adversarial transferability in deep neural networks

Cited by 4 publications (2 citation statements)
References 11 publications
“…On the one hand, reducing the strength of the model's loss field can achieve the effect of enhancing the model's smoothness [75]. On the other hand, boosting the diversity of gradient orthogonality and reducing the magnitude of the gradient can also constrain the adversarial transferability [40,80], which can be explained as a special case of loss field orthogonality when the number of sampling points is m = 0.…”
Section: T ≤
Citation type: mentioning
Confidence: 99%
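The mechanism described in the statement above (orthogonal input gradients plus small gradient magnitude) can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration under assumed names, not the exact method of this paper or of references [40, 75, 80]: model_a, model_b, the loss function, and the weighting factor are placeholders. It simply penalizes the squared cosine alignment of two models' input gradients together with their norms.

```python
# Minimal sketch (assumed names, not the cited papers' exact method):
# penalize the alignment of two models' input gradients and their
# magnitudes, the two quantities the statement above associates with
# constrained adversarial transferability.
import torch
import torch.nn.functional as F

def gradient_alignment_penalty(model_a, model_b, x, y,
                               loss_fn=F.cross_entropy, weight=0.1):
    # Work on a differentiable copy of the inputs.
    x = x.clone().requires_grad_(True)

    grad_a = torch.autograd.grad(loss_fn(model_a(x), y), x,
                                 create_graph=True)[0]
    grad_b = torch.autograd.grad(loss_fn(model_b(x), y), x,
                                 create_graph=True)[0]

    ga, gb = grad_a.flatten(1), grad_b.flatten(1)

    # Squared cosine similarity -> 0 pushes the input gradients of the
    # two models toward orthogonality.
    alignment = F.cosine_similarity(ga, gb, dim=1).pow(2).mean()
    # Small gradient norms correspond to a smoother loss surface.
    magnitude = ga.norm(dim=1).mean() + gb.norm(dim=1).mean()

    # weight=0.1 is an arbitrary illustrative trade-off.
    return alignment + weight * magnitude
```

Adding such a term to the joint training loss of an ensemble would be one way to act on both quantities at once.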
“…[10]. In Reference [50], the authors proposed that transferability between seemingly different models is due to a high linear correlation between the feature sets extracted by different networks. In Reference [51], a systematic investigation of factors affecting adversarial examples' transferability for text classification models was explored.…”
Citation type: mentioning
Confidence: 99%
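As an aside on the claim attributed to Reference [50], one common way to quantify linear correlation between the feature sets of two networks is linear centered kernel alignment (CKA). The sketch below uses that measure purely for illustration; it is not necessarily the statistic used in Reference [50], and feats_a / feats_b are assumed (n_samples, dim) feature matrices extracted from the two networks on the same inputs.

```python
import torch

def linear_cka(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two (n_samples, dim) feature matrices.

    Values close to 1 indicate strongly linearly correlated feature
    sets; values close to 0 indicate little linear relationship.
    """
    # Center each feature matrix over the sample dimension.
    a = feats_a - feats_a.mean(dim=0, keepdim=True)
    b = feats_b - feats_b.mean(dim=0, keepdim=True)

    # ||B^T A||_F^2 / (||A^T A||_F * ||B^T B||_F)
    cross = (b.T @ a).norm() ** 2
    return cross / ((a.T @ a).norm() * (b.T @ b).norm())
```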