Proceedings of the 26th ACM International Conference on Multimedia 2018
DOI: 10.1145/3240508.3240639
An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original legal inputs, can mislead a DNN into classifying them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with minimal added distortion. In the literature, the added distortions are usually measured by the L0, L1, L2, and L∞ norms, and the corresponding attacks are called L0, L1, L2, and L∞ attacks, respectively.…
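The objective sketched in the abstract, minimizing an Lp distortion subject to a targeted misclassification, can be illustrated on a toy linear classifier. The model, constants, and the CW-style margin loss below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

# Minimal sketch: minimize ||delta||_2^2 + c * max(max_{j!=t} z_j - z_t + kappa, 0)
# for logits z = W @ (x + delta). W, x, c, kappa, and lr are all assumed values.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))          # toy linear classifier: logits = W @ input
x = rng.standard_normal(4)               # original "legal" input
target = int(np.argmin(W @ x))           # attack toward the least-likely class

c, kappa, lr = 5.0, 1.0, 0.005
delta = np.zeros_like(x)
for _ in range(2000):
    z = W @ (x + delta)
    masked = z.copy()
    masked[target] = -np.inf
    j = int(np.argmax(masked))           # strongest competing class
    grad = 2.0 * delta                   # gradient of the L2 distortion term
    if masked[j] - z[target] + kappa > 0:
        grad += c * (W[j] - W[target])   # subgradient of the active margin term
    delta -= lr * grad

print(int(np.argmax(W @ (x + delta))))   # should now be the target class
```

The margin constant `kappa` keeps the optimizer from stopping exactly on the decision boundary, so the final prediction is the target with some confidence.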

Cited by 25 publications (18 citation statements); references 27 publications.
“…SparseFool [27] converts the ℓ0-constrained problem into an ℓ1-constrained problem and exploits the decision boundaries' low mean curvature to compute adversarial perturbations. ADMM-ℓ0 [44] utilizes the alternating direction method of multipliers (ADMM) [42] to separate the ℓ0 norm from the adversarial loss and thereby facilitate the optimization of the sparse attack. SAPF [14] formulates the sparse attack as a mixed-integer program that jointly optimizes the binary selection factors and the continuous perturbation magnitudes of all pixels, with a cardinality constraint on the selection factors to explicitly control the degree of sparsity.…”
Section: Related Work
confidence: 99%
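The ADMM splitting described in this excerpt, separating the smooth adversarial loss from the nonsmooth ℓ0 penalty via two coupled copies of the perturbation, can be sketched as follows. The least-squares loss stands in for the actual adversarial loss, and `lam`, `rho`, and the step sizes are assumed values:

```python
import numpy as np

# ADMM variable splitting: minimize g(delta) + lam * ||z||_0  s.t.  delta == z.
# g here is a stand-in smooth loss g(d) = ||A d - b||^2, not the paper's
# adversarial loss; all constants are illustrative assumptions.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10))
b = rng.standard_normal(6)

def grad_g(d):
    """Gradient of the smooth term g(d) = ||A d - b||^2."""
    return 2.0 * A.T @ (A @ d - b)

lam, rho, lr = 0.5, 2.0, 0.01
delta = np.zeros(10)
z = np.zeros(10)
u = np.zeros(10)                  # scaled dual variable
for _ in range(300):
    # delta-update: gradient steps on g(delta) + (rho/2)||delta - z + u||^2
    for _ in range(5):
        delta -= lr * (grad_g(delta) + rho * (delta - z + u))
    # z-update: proximal operator of (lam/rho)||z||_0 is hard thresholding
    v = delta + u
    z = np.where(v**2 > 2.0 * lam / rho, v, 0.0)
    # dual update: ascent on the consensus constraint delta == z
    u += delta - z

print(np.count_nonzero(z), float(np.linalg.norm(delta - z)))
```

The key point is that the ℓ0 term only ever appears through its proximal operator (hard thresholding), which is cheap, while the smooth loss is handled with ordinary gradient steps; the dual variable `u` pulls the two copies toward agreement.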
“…[44] also proposes a framework based on ADMM to generate ℓp adversarial examples. However, we note that our proposed attack is completely different from theirs in two aspects: First, the constraints we consider in this paper are much more complicated than the ℓp-norm constraints in [44]. Second, we formulate the problem in a very different manner.…”
Section: Adversarial Attacks
confidence: 99%
“…This work mainly investigates the first category to build the groundwork for developing potential defensive measures in reliable ML. However, most preliminary studies on this topic focus on the white-box setting, where the target DNN model is completely available to the attacker (Goodfellow, Shlens, and Szegedy 2015; Carlini and Wagner 2017; Zhao et al. 2018). More specifically, the adversary can compute the gradients of the output with respect to the input to identify the effect of perturbing certain input pixels, with complete knowledge of the DNN model's internal structure, parameters, and configurations.…”
Section: Introduction
confidence: 99%
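The white-box access described above, differentiating the loss with respect to the input rather than the weights, can be sketched with a one-step signed-gradient perturbation in the style of Goodfellow et al. (2015). The linear softmax model and `eps` are illustrative assumptions:

```python
import numpy as np

# White-box input gradient: with full knowledge of the model, differentiate the
# cross-entropy loss w.r.t. the input x and take one signed step (FGSM-style).
# The linear softmax model and eps below are illustrative assumptions.
rng = np.random.default_rng(2)
W = rng.standard_normal((5, 8))
x = rng.standard_normal(8)
y = int(np.argmax(W @ x))                 # currently predicted label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def xent(xx):
    """Cross-entropy of the prediction at input xx against label y."""
    return -np.log(softmax(W @ xx)[y])

# Analytic input gradient for logits = W @ x:
# d loss / d x = W.T @ (softmax(W x) - onehot(y))
grad_x = W.T @ (softmax(W @ x) - np.eye(5)[y])

eps = 0.5
x_adv = x + eps * np.sign(grad_x)         # untargeted signed-gradient step
print(xent(x), xent(x_adv))               # the loss increases at x_adv
```

Because the loss of a linear softmax model is convex in the input, stepping along the sign of the input gradient is guaranteed to increase the loss here; for deep nonlinear models the same step is only a first-order heuristic.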