2019
DOI: 10.1609/aaai.v33i01.33012253
Distributionally Adversarial Attack

Abstract: Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., $\max/\min \, \mathbb{E}_{p(x)} L(x)$, with $p(x)$ some unknown data distribution and $L(\cdot)$ a loss function. However, since PGD gen…
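To make the risk-maximization objective above concrete, the following is a minimal sketch of the standard per-sample ℓ∞ PGD adversary referenced in the abstract, not the paper's DAA procedure (which is only partially described in the truncated abstract). It assumes a PyTorch-style differentiable model and loss; the function names, hyperparameters, and the [0, 1] pixel range are illustrative assumptions.

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=0.03, alpha=0.007, steps=40):
    """Illustrative l_inf PGD: per-sample gradient ascent on the loss,
    approximating max_{||x' - x||_inf <= eps} L(x', y) for each input
    independently (hypothetical names; not the DAA algorithm itself)."""
    # Random start inside the eps-ball, clamped to a valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step on L
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels valid
    return x_adv.detach()
```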


Cited by 84 publications (45 citation statements)
References 8 publications
“…With regard to DFR introduced in Section II-B1, it is important to prevent or detect adversarial attacks proactively. Many researchers have proposed defence methods that attempt to classify AEs correctly, but these methods have repeatedly been defeated by newly developed attacks [126]-[128].…”
Section: Adversarial Attack Detection (citation type: mentioning; confidence: 99%)
“…This greatly increases the computational cost of the attack and makes it infeasible for evaluating the robustness of large models. Other methods, such as the FGM [5], PGD [8,17] and DAA [74] attacks, reformulate the original norm-minimization problem with a non-convex misclassification constraint as non-convex surrogate-loss minimization with a convex ℓ_p-norm perturbation constraint, as in Equation (2.15). For this "simpler" problem, the projected gradient descent attack (PGD) is an optimal first-order adversary [17].…”
Section: Primal-Dual Gradient Descent Attack (citation type: mentioning; confidence: 99%)
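For readers without access to Equation (2.15) of the citing work, the two formulations contrasted in this statement can be sketched in generic notation (classifier f, label y, loss L, perturbation δ, budget ε; these symbols are assumptions, not the citing work's exact notation):

```latex
% Minimum-norm attack: smallest perturbation causing misclassification
\min_{\delta}\; \lVert \delta \rVert_p \quad \text{s.t.} \quad f(x+\delta) \neq y
% Surrogate reformulation used by FGM/PGD/DAA-style attacks:
% optimize a surrogate loss under a convex \ell_p-ball constraint
\max_{\lVert \delta \rVert_p \le \epsilon}\; L\bigl(f(x+\delta),\, y\bigr)
```

PGD addresses the second problem by alternating gradient steps on the surrogate loss with projection onto the ε-ball, which is why the statement describes it as an optimal first-order adversary for this constraint set.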
“…We reimplemented the Distributionally Adversarial Attack (DAA) [74]. We used the Lagrangian Blob Attack algorithm.…”
Section: Attack Parameters (citation type: mentioning; confidence: 99%)
“…Adversarial examples [40,47] are specifically manipulated inputs that can drastically change a model's predictions with insignificant perturbations that humans can barely observe. They are a thorn in the side of the modern machine learning community, undermining the reliability of DNN models in various domains [112-119] and settings [120-125] and causing serious security issues in computer vision systems such as face recognition [126] and autonomous vehicles [127]. Therefore, a line of related research [47,48,51,77,85,128-131] has attracted significant attention in the machine learning community.…”
Section: Adversarial Examples (citation type: mentioning; confidence: 99%)
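As a concrete instance of the barely observable perturbations this statement describes, a single gradient-sign step (in the style of FGSM) is often enough to change a model's prediction. The sketch below is illustrative only, with an assumed PyTorch-style model, loss, and budget eps; it is not a method attributed to the cited works.

```python
import torch

def fgsm_example(model, loss_fn, x, y, eps=0.03):
    """One-step sign perturbation: every pixel moves by at most eps in the
    direction that increases the loss (hypothetical names and budget)."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```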