2015 IEEE International Conference on Data Mining (ICDM 2015)
DOI: 10.1109/icdm.2015.84

A Unified Gradient Regularization Family for Adversarial Examples

Abstract: Adversarial examples are augmented data points generated by imperceptible perturbation of input samples. They have recently drawn much attention within the machine learning and data mining community. Being difficult to distinguish from real examples, such adversarial examples can change the prediction of many of the best learning models, including state-of-the-art deep learning models. Recent attempts have been made to build robust models that take adversarial examples into account. However, these …

Cited by 160 publications (119 citation statements)
References 14 publications (44 reference statements)
“…As a result, adversarial examples with small perturbations were unlikely to modify the output of deep models, but this increases the training complexity by a factor of two. The notion of penalizing the gradient of the loss function of models with respect to the inputs for robustification has already been investigated in [191]. 4) Classifier Robustifying: In this method, classification models that are robust to adversarial attacks are designed from the ground up instead of detecting adversarial examples or transforming them.…”
Section: B. Modifying Model
confidence: 99%
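To make the regularization idea in the statement above concrete, here is a minimal sketch of penalizing the norm of the loss gradient with respect to the inputs. It is not the cited authors' implementation; the classifier `model`, the cross-entropy loss, and the weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_penalized_loss(model, x, y, lam=0.1):
    """Cross-entropy loss plus an L2 penalty on the gradient of the loss w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the input; create_graph=True keeps the penalty differentiable,
    # which is also why training cost roughly doubles (a second backward pass is needed).
    grad_x, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.pow(2).flatten(1).sum(dim=1).mean()
    return loss + lam * penalty
```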
“…, λ_m are the weight coefficients of the terms in the DataGrad loss function. Close to our work, (Lyu et al., 2015) present a heuristic way to optimize a special case of this objective. By directly providing an algorithm, our analysis can explain what their algorithm optimizes.…”
Section: The DataGrad Framework
confidence: 99%
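As an illustration of a weighted multi-term objective of the kind described above, the sketch below assumes just two terms: a base loss weighted by λ_0 and an L1 penalty on its input gradient weighted by λ_1. The exact terms of the DataGrad loss in the cited work may differ.

```python
import torch
import torch.nn.functional as F

def datagrad_style_loss(model, x, y, lambdas=(1.0, 0.01)):
    """Weighted sum of the base loss and an L1 penalty on its input gradient."""
    lam0, lam1 = lambdas
    x = x.clone().requires_grad_(True)
    base = F.cross_entropy(model(x), y)
    grad_x, = torch.autograd.grad(base, x, create_graph=True)
    penalty = grad_x.abs().flatten(1).sum(dim=1).mean()  # per-example L1 norm of dL/dx
    return lam0 * base + lam1 * penalty
```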
“…In an approach that ends up closely related to ours, (Lyu et al., 2015) consider the objective $\min_\theta \max_{r:\|r\|_p \le \sigma} L(x + r; \theta)$ and a linearized inner version $\max_{r:\|r\|_p \le \sigma} L(x) + \nabla_x L^T r$. They iteratively select r by optimizing the latter and θ by back-propagation on the former (with r fixed). Since the θ update does not directly minimize the linearized objective, (Lyu et al., 2015) claimed the procedure was only an approximation of what we call the DataGrad objective. However, their method devolves to training on adversarial examples, so as before, Equation 4 shows they are actually optimizing the DataGrad objective but with r = 1 and λ_0 and λ_1 carefully chosen to eliminate the…”
Section: How Prior Work Are Instances of DataGrad
confidence: 99%
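The alternating procedure quoted above can be sketched as follows, assuming the l∞ norm (p = ∞), for which the linearized inner maximum is attained at r = σ·sign(∇_x L). Function and parameter names are illustrative, not taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, sigma=0.05):
    # Step 1: choose r by maximizing the linearized objective L(x) + grad_x(L)^T r
    # subject to ||r||_inf <= sigma; the maximizer is sigma * sign(grad_x L).
    x_req = x.clone().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x_req), y)
    grad_x, = torch.autograd.grad(loss_clean, x_req)
    r = sigma * grad_x.sign()

    # Step 2: with r fixed, update theta by back-propagation on L(x + r; theta).
    optimizer.zero_grad()
    loss_adv = F.cross_entropy(model(x + r), y)
    loss_adv.backward()
    optimizer.step()
    return loss_adv.item()
```

As the quoted passage notes, each θ update here trains on an adversarial example built from the current gradient rather than minimizing the linearized objective directly.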
“…Biggio et al. [8] used a regularization method to limit the vulnerability of data when training an SVM model. The works [47][48][49] used regularization methods to improve the robustness of the algorithm and achieved good results. The second method is defensive distillation [14], which produces a model with a smoother output surface that is less sensitive to perturbations, thereby improving the robustness of the model and reducing the success rate of adversarial attacks by 90%.…”
Section: Introduction
confidence: 99%
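For context on the defensive distillation mentioned in the last statement, here is a minimal sketch (an assumed form, not the implementation in [14]) of the temperature-softened matching loss that produces the smoother output surface: a teacher's logits are softened at temperature T and the student is trained to match them.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Cross-entropy between temperature-softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)   # softened teacher labels
    log_probs = F.log_softmax(student_logits / T, dim=1)  # student at the same temperature
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return -(soft_targets * log_probs).sum(dim=1).mean() * (T * T)
```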