2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00496

The LogBarrier Adversarial Attack: Making Effective Use of Decision Boundary Information

Abstract: Adversarial attacks for image classification are small perturbations to images that are designed to cause misclassification by a model. Adversarial attacks formally correspond to an optimization problem: find a minimum norm image perturbation, constrained to cause misclassification. A number of effective attacks have been developed. However, to date, no gradient-based attacks have used best practices from the optimization literature to solve this constrained minimization problem. We design a new untargeted attack…
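As a sketch, in our own notation rather than the paper's, the constrained problem described in the abstract and a log-barrier relaxation of it can be written as

    \min_{x'} \; \|x' - x\| \quad \text{subject to} \quad \max_{i \neq c} f_i(x') \ge f_c(x')

    \min_{x'} \; \|x' - x\| \;-\; \frac{1}{t}\,\log\!\Big( \max_{i \neq c} f_i(x') - f_c(x') \Big)

where f_i are the classifier's per-class scores, c is the correct label of the clean image x, and the barrier parameter t > 0 is increased over iterations so that the relaxed problem approaches the original constrained one.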

Cited by 21 publications (13 citation statements); references 9 publications.
“…Adversarial examples were first observed in [27], and efficient gradient-based attacks such as FGSM [12] and PGD [19] were subsequently introduced. Stronger attacks have appeared more recently [6,9]; however, they are much slower than PGD and therefore impractical for adversarial training. Some recent works address the adversarial robustness of BNNs [5,10,17,18], but a strong consensus on the robustness properties of quantized networks is lacking.…”
Section: Related Work
confidence: 99%
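To illustrate the gradient-based attacks cited above, here is a minimal PGD-style sketch in PyTorch; the model, epsilon, step size, and iteration count are illustrative placeholders, not values taken from the cited papers:

    import torch

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Start from the clean image (a random start inside the eps-ball is also common).
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Take a signed gradient ascent step on the loss (L-infinity PGD).
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back onto the eps-ball around x and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
        return x_adv

FGSM corresponds to a single such step with alpha = eps and no projection loop.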
“…At test time it is possible to design a specific, invisible perturbation such that a targeted network predicts different outputs on the original and the perturbed input. Computer vision is especially affected, with the accuracy of an unprotected network dropping close to 0% under state-of-the-art attacks [12], but other fields are concerned as well (e.g. [13] highlights this issue in a cyber-security context, where the performance of a malware detector drops from 87% to 66% on adversarial malware).…”
Section: A. Adversarial Examples
confidence: 99%
“…Classification attacks. In [FPO19] we applied the barrier method from constrained optimization [NW06] to perform the classification attack (8.1). While attack vectors are normally small enough to be invisible, for some images gradient-based attacks are visible.…”
Section: Adversarial Attacks
confidence: 99%
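As a rough sketch of a barrier-method objective for such a classification attack, assuming the misclassification constraint is written as a positive margin of the best wrong class over the correct class (the function names and the hyperparameter t below are ours, not from [FPO19]):

    import torch

    def logbarrier_objective(model, x_adv, x, y, t=10.0):
        # Distance term: stay close to the original image (L2 norm per example).
        dist = torch.norm((x_adv - x).flatten(start_dim=1), dim=1)
        logits = model(x_adv)
        # Margin of the best wrong class over the correct class.
        correct = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        wrong = logits.clone()
        wrong.scatter_(1, y.unsqueeze(1), float('-inf'))
        margin = wrong.max(dim=1).values - correct
        # The log barrier keeps iterates strictly inside the misclassified region
        # (margin > 0); t is increased in an outer loop to tighten the barrier.
        return (dist - (1.0 / t) * torch.log(margin)).mean()

In practice one would start from a point that is already misclassified (for example after a large gradient step) and minimize this objective by gradient descent while gradually increasing t.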