2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00021
On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method

Abstract: Robust machine learning is currently one of the most prominent topics, which could potentially help shape a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations. Despite the long-term vision, however, existing studies on black-box adversarial attacks are still restricted to very specific settings of threat models (e.g., single distortion metric and restrictive assumption on target model's feedback to queries) and/or suffer from prohibitively…
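The combination named in the title, operator splitting plus gradient-free optimization, can be pictured with a minimal sketch: split the attack objective via ADMM into a distortion term (closed-form update) and a black-box attack-loss term (handled with zeroth-order gradient estimates). The toy loss, dimensions, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumptions throughout) of an ADMM-split black-box attack:
# distortion term solved in closed form, attack-loss term updated with a
# gradient-free (zeroth-order) estimator that only needs loss queries.
import numpy as np

rng = np.random.default_rng(0)
d = 100                      # dimensionality of the (flattened) input
x = rng.normal(size=d)       # clean example (placeholder data)

def attack_loss(delta):
    """Toy stand-in for a query-only attack loss on the target model."""
    return float(np.sum(np.tanh(x + delta)))  # real case would query the model

def zo_gradient(fn, z, mu=1e-2, n_samples=20):
    """Two-point zeroth-order gradient estimate of fn at z."""
    grad = np.zeros_like(z)
    for _ in range(n_samples):
        u = rng.normal(size=z.shape)
        grad += (fn(z + mu * u) - fn(z - mu * u)) / (2 * mu) * u
    return grad / n_samples

rho, lr = 1.0, 0.05
delta = np.zeros(d); z = np.zeros(d); u = np.zeros(d)

for it in range(50):
    # delta-step: argmin ||delta||^2 + (rho/2)||delta - z + u||^2 (closed form)
    delta = rho * (z - u) / (2.0 + rho)
    # z-step: a few zeroth-order descent steps on attack_loss + proximal term
    for _ in range(3):
        g = zo_gradient(attack_loss, z) + rho * (z - delta - u)
        z -= lr * g
    # dual update
    u += delta - z

print("distortion:", np.linalg.norm(delta), "loss:", attack_loss(delta))
```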

Cited by 44 publications (23 citation statements)
References 33 publications
“…Ilyas et al. [88] revisited zeroth-order optimization (ZOO) and proposed a query-based attack using bandit optimization that exploits prior information about the target model's gradient. From the ZOO perspective, Zhao et al. [89] also proposed to augment the optimization with an ADMM-based framework.…”
Section: B. Black-Box Attacks (mentioning, confidence: 99%)
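The bandit-with-priors idea mentioned in the excerpt above can be sketched as follows: keep a running prior of the target model's gradient, refine it with two-point finite-difference queries along random directions, and take signed steps on the input. This is a deliberately simplified illustration; the placeholder model_loss, step sizes, and projection are assumptions, not the reference implementation of Ilyas et al.

```python
# Simplified sketch (assumptions throughout) of a bandit-style black-box attack:
# a gradient prior g_prior is refined with two-point finite-difference queries
# and used for signed ascent steps on the input, projected into an l_inf ball.
import numpy as np

rng = np.random.default_rng(1)
d = 100
x_orig = rng.normal(size=d)            # clean input (placeholder data)
x_adv = x_orig.copy()

def model_loss(x):
    """Placeholder for the scalar loss returned by querying the black-box model."""
    return float(np.sum(np.sin(x)))

g_prior = np.zeros(d)                  # running estimate (prior) of the gradient
fd_delta, eta_prior, eta_img, eps = 0.1, 1.0, 0.01, 0.5

for step in range(100):
    u = rng.normal(size=d)             # random exploration direction
    # two-point finite-difference estimate of how the loss changes along u
    plus = model_loss(x_adv + fd_delta * (g_prior + u))
    minus = model_loss(x_adv + fd_delta * (g_prior - u))
    g_prior += eta_prior * (plus - minus) / (2 * fd_delta) * u
    # signed gradient step, then project back into the eps-ball around x_orig
    x_adv = np.clip(x_adv + eta_img * np.sign(g_prior), x_orig - eps, x_orig + eps)

print("final loss:", model_loss(x_adv))
```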
“…Finally, there has been some recent interest in leveraging Bayesian optimization (BO) for constructing adversarial perturbations. For example, Zhao et al. [36] use BO to solve a sub-step of an alternating direction method of multipliers approach, Co et al. [11] search within a set of procedural noise perturbations using BO, and Gopakumar et al. [15] use BO to find maximal distortion error by optimizing perturbations defined using 3 parameters. On the other hand, in prior work where Bayesian optimization plays a central role, the use cases and experiments are restricted to relatively low-dimensional settings, highlighting the main challenge of its application: Suya et al. [33] examine an attack on a spam email classifier with 57 input features, and in Co [10] image classifiers are attacked but, notably, the attack does not scale beyond MNIST classifiers.…”
Section: Related Work (mentioning, confidence: 99%)
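Since the excerpt stresses that BO-based attacks have only been demonstrated in relatively low-dimensional search spaces, a compact sketch of that setting may help: a Gaussian-process surrogate with an expected-improvement acquisition searches a handful of perturbation parameters. The toy objective, kernel length-scale, and bounds below are placeholders for illustration, not taken from any of the cited papers.

```python
# Bare-bones Bayesian optimization over a low-dimensional perturbation
# parameterization (e.g. a few procedural-noise parameters). GP posterior and
# expected-improvement acquisition are computed explicitly for transparency.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
dim = 3                                    # e.g. 3 noise parameters

def attack_objective(theta):
    """Placeholder: larger value = stronger attack according to model feedback."""
    return float(-np.sum((theta - 0.3) ** 2))  # toy surrogate, not a real model

def rbf(A, B, ls=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

# initial random design
X = rng.uniform(0, 1, size=(5, dim))
y = np.array([attack_objective(t) for t in X])

for it in range(20):
    K_inv = np.linalg.inv(rbf(X, X) + 1e-6 * np.eye(len(X)))
    cand = rng.uniform(0, 1, size=(256, dim))          # candidate pool
    Ks = rbf(cand, X)
    mu = Ks @ K_inv @ y                                 # GP posterior mean
    var = 1.0 - np.sum(Ks @ K_inv * Ks, axis=1)         # GP posterior variance
    sigma = np.sqrt(np.maximum(var, 1e-12))
    # expected-improvement acquisition (maximization)
    best = y.max()
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    theta_next = cand[np.argmax(ei)]
    X = np.vstack([X, theta_next])
    y = np.append(y, attack_objective(theta_next))

print("best parameters found:", X[np.argmax(y)], "objective:", y.max())
```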
“…Although the structure constraints make the problem nondifferentiable and more complicated, ADMM is able to split the original problem into several easier sub-problems and iteratively solve them until convergence [Zhao et al., 2019a]. We apply column pruning for style transfer and kernel pruning for coloring and super resolution.…”
Section: Structured Model Pruning (mentioning, confidence: 99%)
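The splitting described in this last excerpt can be pictured with a short sketch: ADMM alternates between a differentiable weight update (training loss plus a quadratic coupling term) and a projection of the auxiliary variable onto the column-sparsity constraint set. The toy regression loss, layer shapes, and keep-k-columns rule below are assumptions used purely to illustrate the decomposition.

```python
# Sketch of ADMM-based structured (column) pruning: W follows the loss gradient
# plus a coupling term, Z is the projection onto "at most k nonzero columns",
# and U is the scaled dual variable. Toy data and shapes are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
out_dim, in_dim, k = 16, 32, 8            # keep k of the in_dim columns
A = rng.normal(size=(64, in_dim))         # toy inputs
Y = rng.normal(size=(64, out_dim))        # toy targets

def loss_grad(W):
    """Gradient of a toy regression loss ||A W^T - Y||_F^2 (stand-in for training loss)."""
    return 2 * (A @ W.T - Y).T @ A

def project_columns(Z, k):
    """Euclidean projection onto matrices with at most k nonzero columns."""
    norms = np.linalg.norm(Z, axis=0)
    keep = np.argsort(norms)[-k:]
    out = np.zeros_like(Z)
    out[:, keep] = Z[:, keep]
    return out

rho, lr = 1.0, 1e-3
W = 0.1 * rng.normal(size=(out_dim, in_dim))
Z = W.copy()
U = np.zeros_like(W)

for it in range(100):
    # W-step: gradient descent on loss + (rho/2)||W - Z + U||_F^2
    for _ in range(5):
        W -= lr * (loss_grad(W) + rho * (W - Z + U))
    # Z-step: projection onto the column-sparsity constraint set
    Z = project_columns(W + U, k)
    # dual update
    U += W - Z

print("nonzero columns in Z:", int((np.linalg.norm(Z, axis=0) > 0).sum()))
```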