2019
DOI: 10.1007/978-3-030-29135-8_5

Adversarial Vision Challenge

Abstract: The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. This document is an updated version of our competition proposal that was accepted in the competition track of the 32nd Conference on Neural Information Processing Systems (NIPS 2018). A related event, the NIPS 2017 Competition on adversarial attacks and defenses (co-organised by Alexey Kurakin), pitted models against attacks but only indirec…

Cited by 34 publications (34 citation statements) | References 19 publications

“…The methods presented in this paper were used in NIPS 2018 Adversarial Vision Challenge [3], ranking first in untargeted attacks, and third in targeted attacks and robust models (both attacks and defense in a black-box scenario). These results highlight the effectiveness of the defense mechanism, and suggest that attacks using adversarially-trained surrogate models can be effective in black-box scenarios, which is a promising future direction.…”
Section: Results
Mentioning confidence: 99%
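
The surrogate idea in the statement above reduces to a simple recipe: craft the perturbation with full gradient access on a surrogate model (adversarially trained, per the quote), then replay it against the black-box target, which is queried only for its predicted label. A minimal sketch follows; `surrogate` and `query_target` are illustrative placeholders, not code from the cited work, and single-step FGSM stands in for whatever attack the authors actually used.

```python
import torch
import torch.nn.functional as F

def fgsm_on_surrogate(surrogate, x, y, eps=8 / 255):
    """Craft an untargeted perturbation using the surrogate's gradients."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    loss.backward()
    # Step in the direction that increases the surrogate's loss.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_attack(surrogate, query_target, x, y):
    """Return True if the surrogate-crafted example also fools the target,
    which is queried only for its label (black-box scenario)."""
    x_adv = fgsm_on_surrogate(surrogate, x, y)
    return query_target(x_adv) != y
```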
“…As in [5], we generated attacks on the first 1,000 images of the test set for MNIST and CIFAR-10, while for ImageNet we randomly chose 1,000 images from the validation set that are correctly classified. For the untargeted attacks, we report the success rate of the attack (percentage of samples for which an attack was found), the mean L2 norm of the adversarial noise (for successful attacks), and the median L2 norm over all attacks while considering unsuccessful attacks as worst-case adversarial (distance to a uniform gray image, as in [3]). We also report the average number (for batch execution) of gradient computations and the total run-times (in seconds) on an NVIDIA GPU. Table 4: Comparison of the DDN attack to the C&W L2 attack on ImageNet.…”
Section: Attack Evaluation
Mentioning confidence: 99%
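
The three quantities quoted above are easy to misread, so the following sketch spells them out, assuming images scaled to [0, 1] and taking "uniform gray" to be pixel value 0.5; the function and its signature are illustrative, not the authors' evaluation code.

```python
import numpy as np

def attack_metrics(x, x_adv, success):
    """x, x_adv: (N, C, H, W) arrays in [0, 1]; success: (N,) boolean mask."""
    n = len(x)
    # Per-sample L2 norm of the adversarial noise.
    l2 = np.linalg.norm((x_adv - x).reshape(n, -1), axis=1)
    # Worst-case score for failed attacks: distance to a uniform gray image.
    worst = np.linalg.norm((x - 0.5).reshape(n, -1), axis=1)
    success_rate = success.mean()
    mean_l2_successful = l2[success].mean() if success.any() else float("nan")
    median_l2_all = np.median(np.where(success, l2, worst))
    return success_rate, mean_l2_successful, median_l2_all
```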
“…Adversarial attacks are an area of research which recently gained popularity after the seminal work of Szegedy et al. [68], showing that small perturbations in the input image can switch the neural network prediction outcome. There exist several works showing that CNN-based solutions for classification [69], segmentation [70], object detection [71] and image retrieval [72] are all prone to such attacks. The only paper about attacks on local feature matching is [73], which proposed to place special noisy patches on response peak locations, killing the matching process for matching pairs.…”
Section: Targeted Adversarial Attack on SIFT-Matching
Mentioning confidence: 99%
“…Yet, the authors do not know of any paper devoted to targeted adversarial attacks on local features-based image matching. Most attack methods are "white-box" [69], which means they require access to the model gradients w.r.t. the input. This makes them an excellent choice for a kornia.features differentiability demonstration.…”
Section: Targeted Adversarial Attack on SIFT-Matching
Mentioning confidence: 99%
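
The "white-box" requirement mentioned above comes down to one operation: the gradient of a loss with respect to the input image rather than the network weights. A minimal PyTorch sketch, with a stand-in linear classifier in place of the attacked network (or of a differentiable matching pipeline built from kornia.features):

```python
import torch
import torch.nn.functional as F

# Stand-in classifier; a real attack would use the target network here.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # the input image
y = torch.tensor([3])                             # an arbitrary label
loss = F.cross_entropy(model(x), y)
loss.backward()
input_grad = x.grad  # dLoss/dInput: the quantity white-box attacks exploit
```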
“…Evaluation setting. The AVC is an open competition between image classifiers and adversarial attacks in an iterative black-box decision-based setting [4]. Participants can choose between three tracks:…”
Section: Submission to NeurIPS 2018 Adversarial Vision Challenge
Mentioning confidence: 99%
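
For context on the "iterative black-box decision-based setting": the attacker may query the model repeatedly but observes only its final decision, the predicted label. A bare-bones loop in the spirit of the boundary attack is sketched below; `query_label` is a placeholder for the challenge's model API, and this is an illustration of the setting, not the official AVC harness.

```python
import numpy as np

def decision_based_attack(query_label, x, y_true, steps=1000, sigma=0.05):
    """Shrink the perturbation while every accepted iterate stays misclassified."""
    rng = np.random.default_rng(0)
    # Start from a point the model already misclassifies (retry random noise;
    # a real attack would start from an image of another class).
    x_adv = rng.uniform(0, 1, size=x.shape)
    while query_label(x_adv) == y_true:
        x_adv = rng.uniform(0, 1, size=x.shape)
    for _ in range(steps):
        # Propose a point slightly closer to the original, plus noise.
        cand = x_adv + 0.1 * (x - x_adv) + sigma * rng.standard_normal(x.shape)
        cand = np.clip(cand, 0, 1)
        # Accept the proposal only if the model's decision is still wrong.
        if query_label(cand) != y_true:
            x_adv = cand
    return x_adv
```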