Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/520

Curriculum Adversarial Training

Abstract: Recently, deep learning has been applied to many security-sensitive applications, such as facial authentication. The existence of adversarial examples hinders such applications. The state-of-the-art result on defense shows that adversarial training can be applied to train a robust model on MNIST against adversarial examples; but it fails to achieve a high empirical worst-case accuracy on more complex tasks, such as CIFAR-10 and SVHN. In our work, we propose curriculum adversarial training (CAT) to resolve this…
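Below is a minimal PyTorch sketch of the curriculum idea the abstract describes: adversarial training in which the attack strength (here, the number of PGD steps) is ramped up on a schedule as training progresses. The function names, the schedule, and all hyper-parameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of curriculum adversarial training: start near clean training and
# gradually increase the PGD step count used to craft training-time adversarial
# examples. All names and hyper-parameters here are illustrative.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, step_size=2 / 255, steps=1):
    """k-step PGD under an L-infinity budget; `steps` is the curriculum knob."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid pixel range
    return x_adv.detach()


def curriculum_adversarial_training(model, loader, epochs=30, max_steps=10, device="cpu"):
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for epoch in range(epochs):
        # Assumed curriculum schedule: ramp the PGD step count from 0 (clean
        # training) up to max_steps over the course of training.
        k = min(max_steps, epoch * max_steps // max(1, epochs - 1))
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_in = x if k == 0 else pgd_attack(model, x, y, steps=k)
            opt.zero_grad()
            F.cross_entropy(model(x_in), y).backward()
            opt.step()
    return model
```

Starting from weak (or no) attacks and strengthening them over time lets the model first fit the clean data before facing harder worst-case perturbations, which is the intuition behind the curriculum.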

Cited by 90 publications (71 citation statements). References 2 publications.
“…The outer minimization is the standard training procedure to minimize the loss of a DNN. Recent work shows that this straightforward method is one of the most effective defenses against adversarial samples (Madry et al. 2017; Tramèr et al. 2017; Cai et al. 2018).…”
Section: Adversarial Training
confidence: 99%
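For reference, the robust-optimization objective these citing works point to is commonly written as follows (a standard formulation in the style of Madry et al. 2017; the notation, with perturbation budget $\epsilon$, loss $L$, and model $f_W$, is ours):

$$\min_{W}\; \mathbb{E}_{(x,y)\sim D}\Big[\,\max_{\|\delta\|_\infty \le \epsilon} L\big(f_W(x+\delta),\, y\big)\Big]$$

The "outer minimization" quoted above is the $\min_W$ term; the inner maximization generates the adversarial perturbation $\delta$.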
“…Note that x_adv and x are vectors that contain multiple elements. Mathematically, x_adv is defined as follows [24]:…”
Section: A. Adversarial Examples
confidence: 99%
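The equation the citing paper refers to is not reproduced in the excerpt; a common definition consistent with the surrounding discussion (an assumption on our part, with $\epsilon$ the perturbation budget, $L$ the training loss, and $f_W$ the model) is:

$$x_{\mathrm{adv}} \;=\; \arg\max_{x'\,:\,\|x'-x\|_\infty \le \epsilon} L\big(f_W(x'),\, y\big)$$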
“…where D is the dataset, e.g., the training dataset. The idea of adversarial training is to solve (12) by iteratively executing the following two steps [24]: 1) with all given x_adv, find the optimal W for the outer minimization problem, and 2) with the given W, find the worst-case adversarial example x_adv in the dataset D for the inner maximization problem. The standard SGD method is used to train the network by estimating the weight matrix W.…”
Section: B. Adversarial Training
confidence: 99%
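A compact way to write the two alternating steps described in this excerpt (a sketch, not the citing paper's equation (12); $\eta$ denotes the SGD learning rate):

$$x_{\mathrm{adv}}^{(t)} = \arg\max_{\|x'-x\|_\infty \le \epsilon} L\big(f_{W^{(t)}}(x'),\, y\big), \qquad W^{(t+1)} = W^{(t)} - \eta\,\nabla_{W}\Big(\textstyle\sum_{(x,y)\in D} L\big(f_{W}(x_{\mathrm{adv}}^{(t)}),\, y\big)\Big)\Big|_{W=W^{(t)}}$$

The first update finds the worst-case example for the current weights (the inner maximization); the second takes an SGD step on the outer minimization with those examples held fixed.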
“…The effectiveness of model-strengthening methods is generally rooted in gradient obfuscation [20]. Adversarial training methods [21][22][23][24] aim to drive the target model's loss on these examples to zero, vanishing the gradient. Feature-nullification methods [25] try to mask the original gradients.…”
Section: Introduction
confidence: 99%