2018
DOI: 10.48550/arxiv.1804.00097
Preprint
Adversarial Attacks and Defences Competition

Cited by 31 publications (52 citation statements) | References 0 publications
“…Among them, HGD [32] and R&P [61] are the rank-1 submission and rank-2 submission in the NeurIPS 2017 defense competition [30], respectively. FD [62] Implementation details.…”
Section: Methods
mentioning confidence: 99%
“…Comprehensive experiments are conducted to verify the effectiveness of the proposed regionally homogeneous perturbation (RHP). Under the black-box setting, RHP successfully attacks 9 recent defenses [21,26,32,37,56,61,62] and improves the top-1 error rates by 21.6% on average; three of these defenses are top submissions in the NeurIPS 2017 defense competition [30] and the Competition on Adversarial Attacks and Defenses 2018. Compared with state-of-the-art attack methods, RHP not only outperforms universal adversarial perturbations (e.g., UAP [38] by 19.2% and GAP [42] by 15.6%), but also outperforms image-dependent perturbations (FGSM [18] by 12.9%, MIM [14] by 12.6%, and DIM [14,63] by 9.58%).…”
Section: Introduction
mentioning confidence: 99%
“…Black-box attacks. In the white-box attack setting [22], the adversary has full knowledge of the model, including the model type, model architecture, and the values of all parameters and trainable weights. In the black-box setting [23], [24], [25], [26], the adversary has limited or no knowledge about the model under attack [27]. In this paper, we focus on white-box attacks on Siamese trackers.…”
Section: B. Adversarial Attacks
mentioning confidence: 99%
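The white-box setting quoted above, where the adversary can compute loss gradients through the model, is exactly what one-step attacks such as FGSM [18] exploit. A minimal NumPy sketch, assuming the attacker has already obtained the loss gradient from their framework (the gradient values and function name here are illustrative, not from any cited paper):

```python
import numpy as np

def fgsm_attack(x, grad, eps=0.03):
    """One-step FGSM: perturb the input in the sign direction of the
    loss gradient. `grad` is dLoss/dx, which white-box access to the
    model makes available to the attacker."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in a valid range

# Toy usage with a hypothetical 3-pixel input and gradient.
x = np.array([0.2, 0.5, 0.9])
g = np.array([0.1, -0.3, 0.0])
print(fgsm_attack(x, g, eps=0.05))  # → [0.25 0.45 0.9 ]
```

Black-box attacks, lacking this gradient access, must instead rely on query-based estimates or on transferring perturbations crafted against a substitute model.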
“…z is obtained from the gradient w.r.t. r (or x) by smoothing, much like how r is obtained from z in the forward pass (17).…”
Section: Attack Targeting Optimality
mentioning confidence: 99%
“…Adversarial examples were introduced by Szegedy et al. [34] as imperceptible perturbations of a test image that can change a neural network's prediction. This has spawned active research on adversarial attacks and defenses, with competitions among research teams [17]. Despite the theoretical and practical progress in understanding the sensitivity of neural networks to their input, assessing the imperceptibility of adversarial attacks remains elusive: user studies show that L_p norms are largely unsuitable, and more sophisticated measures are limited too [30].…”
Section: Introduction
mentioning confidence: 99%