2021
DOI: 10.1016/j.mlwa.2020.100017

Scaleable input gradient regularization for adversarial robustness

Cited by 40 publications (55 citation statements)
References 4 publications
“…A set of 26 models trained without robustness penalties and 14 models trained using various robust optimization algorithms [18][19][20][21][37] were used in the analyses. We refer the reader to S1 Table for the complete list of models used in this work.…”
Section: Convolutional Neural Network Architectures
confidence: 99%
“…Finally, the weight decay was set to 0.0001. The model trained with this algorithm is denoted as trades robust resnet50 linf 4 in S1 Table. Input gradient regularization [21]: this algorithm seeks to improve the adversarial robustness of models by adding a regularization term to the cross-entropy loss. At a high level, the regularization term penalizes the gradient of the loss function with respect to the input.…”
Section: Robust Models
confidence: 99%
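The statement above describes input gradient regularization only at a high level. As a rough illustration, the double-backpropagation sketch below adds a penalty on the squared 2-norm of the input gradient of the cross-entropy loss. The weight `lam`, the squared-norm form, and the helper name are illustrative assumptions, not taken from the cited paper, whose scalable formulation may compute this term differently (e.g., by approximation).

```python
import torch
import torch.nn.functional as F

def input_gradient_regularized_loss(model, x, y, lam=0.1):
    # Illustrative sketch only: cross-entropy plus a penalty on the
    # gradient of the loss with respect to the input (double backprop).
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # create_graph=True keeps the input-gradient computation in the graph,
    # so the penalty remains differentiable w.r.t. the model parameters.
    (grad_x,) = torch.autograd.grad(ce, x, create_graph=True)
    # Mean squared 2-norm of the per-example input gradient (assumed form).
    penalty = grad_x.flatten(1).pow(2).sum(dim=1).mean()
    return ce + lam * penalty
```

In a training loop, this function would simply replace the plain cross-entropy loss before calling `backward()`; the optimizer then trades off accuracy against input-gradient smoothness via `lam`.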
“…In particular, these adversarial perturbations can cause the model to completely misclassify the image even though it could correctly classify the unperturbed image, resulting in poor adversarial robustness [15]. This is clearly an issue in safety-critical applications (e.g., self-driving cars), so the machine learning community has been developing techniques to train these models to be more robust to adversarial perturbations [16][17][18][19][20][21]. Robust optimization techniques have been shown to be able to defend against very strong adversarial attacks, although there still exist perturbations that can fool models trained with these techniques.…”
Section: Introduction
confidence: 99%
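To make the notion of an adversarial perturbation concrete, the sketch below uses the standard single-step Fast Gradient Sign Method (FGSM), which is a common example of an attack that flips the prediction of an otherwise-correct classifier. It is not necessarily one of the attacks referenced by the citing paper; the budget `eps` and the [0, 1] pixel range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    # Single-step FGSM: move each pixel by eps in the direction
    # that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```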