Curriculum Adversarial Training
2018 | Preprint
DOI: 10.48550/arxiv.1805.04807

Cited by 22 publications (21 citation statements)
References 6 publications
“…Defenses against Traditional Adversarial Examples. Most existing defenses focus on imperceptible adversarial perturbations [12], [37], [38], [10], [39], [40], [41], [42]. Papernot et al. [12] proposed defensive distillation, which extracts the key information from a pretrained DNN to improve a model's resilience to adversarial examples.…”
Section: Related Work
confidence: 99%
“…Similarly, PixelDefend [38] projects the adversarial input back to the training distribution. Moreover, adversarial training [10], [40], [37] improves the robustness of DNN models by training against known attacks. Certified robustness approaches [41], [42] give a lower bound on the adversarial accuracy.…”
Section: Related Work
confidence: 99%
“…Focusing on the worst-case loss over a convex outer region, Wong and Kolter [27] introduce a provably robust model. There are further improvements of PGD adversarial training techniques, including Lipschitz regularization [7] and curriculum adversarial training [3]. A recent study by Tsipras et al. [25] shows that there exists a trade-off between standard accuracy and adversarial robustness.…”
Section: Introduction
confidence: 99%
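The curriculum adversarial training idea cited above [3] trains against progressively stronger attacks rather than the strongest attack from the start. The following is a minimal sketch of that idea on a toy logistic-regression model with an L-infinity PGD attacker; all function names and hyperparameters here are illustrative assumptions, not the cited authors' code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def pgd(x, y, w, b, eps, k, alpha):
    """k-step PGD within an L-inf ball of radius eps (k=0 returns the clean inputs)."""
    x_adv = x.copy()
    for _ in range(k):
        p = sigmoid(x_adv @ w + b)
        grad_x = (p - y)[:, None] * w          # d(BCE loss)/dx for logistic regression
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

def curriculum_adv_train(x, y, eps=0.3, alpha=0.1, max_k=10,
                         epochs_per_level=50, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=x.shape[1])
    b = 0.0
    # Curriculum: start from clean training (k=0) and raise the
    # attack strength k one level at a time.
    for k in range(max_k + 1):
        for _ in range(epochs_per_level):
            x_adv = pgd(x, y, w, b, eps, k, alpha)
            p = sigmoid(x_adv @ w + b)
            w -= lr * (x_adv.T @ (p - y)) / len(y)  # gradient step on adversarial loss
            b -= lr * np.mean(p - y)
    return w, b
```

The attack-strength schedule (here, incrementing the PGD step count k) is the part that distinguishes this from plain PGD adversarial training, which would use k = max_k throughout.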
“…Out of the various approaches proposed to improve the robustness of deep neural network models, adversarial training has been found to be the most effective [5,20,10,23,2,6,4]. In a typical adversarial training procedure, adversarial versions of the training dataset are first generated and then used to train the model, increasing its robustness on such samples [5].…”
Section: Introduction
confidence: 99%
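The two-phase procedure described in this citation statement — generate adversarial versions of the training data, then train on them — can be sketched on a toy logistic-regression model. This is a minimal illustration assuming a one-step FGSM attacker; the names and hyperparameters are illustrative, not taken from any of the cited works:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: perturb x along the sign of the input gradient of the loss."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w       # d(BCE loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=x.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Step 1: craft an adversarial batch against the current model.
        x_adv = fgsm(x, y, w, b, eps)
        # Step 2: take a gradient step on the loss over the adversarial batch.
        p = sigmoid(x_adv @ w + b)
        w -= lr * (x_adv.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b
```

Regenerating the adversarial batch inside the training loop (rather than once up front) matters: the attack must track the current model parameters, or the model only learns to resist perturbations crafted against an earlier version of itself.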