2018
DOI: 10.48550/arxiv.1811.12641
Preprint

Transferable Adversarial Attacks for Image and Video Object Detection

Cited by 17 publications (35 citation statements)
References 24 publications
“…Since then, several works have focused on adversarial attacks. Early works [9,21,35,34,31] add perturbations in the digital domain, directly changing the pixel values that are fed into the networks. Later works focus on creating physical adversarial objects, such as eyeglasses [26], posters [32], and animals [1], in the real world, further broadening the influence of adversarial attacks.…”
Section: Adversarial Attack
Mentioning, confidence: 99%
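The digital-domain attacks mentioned in this excerpt perturb pixel values directly before they reach the network. As an illustration only (not the method of the paper indexed here), a minimal single-step sketch in PyTorch, assuming a generic classifier `model` and image batches `x` in [0, 1] with labels `y`, looks like this:

import torch.nn.functional as F

def digital_perturbation(model, x, y, epsilon=8 / 255):
    """Digital-domain attack sketch: shift every pixel by at most `epsilon`
    in the direction that increases the classification loss (FGSM-style)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Change the pixel values fed to the network, keeping them valid images.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()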
“…The former method [9,21,35] performs gradient ascent many times to maximize an adversarial objective function that deceives deep networks, and is usually time-consuming. The latter [34,31] instead uses large amounts of data to train an adversarial perturbation generator. The latter method is faster than the former because, after training, only a single forward pass is needed for each attack.…”
Section: Introduction
Mentioning, confidence: 99%
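The contrast drawn in this excerpt is between optimization-based attacks, which run gradient ascent anew for every input, and generator-based attacks, which pay the cost once at training time and then need only one forward pass per image. A rough sketch of the two per-image attack procedures, assuming a PyTorch classifier `model` and an already-trained perturbation generator `generator` (both hypothetical placeholders):

import torch
import torch.nn.functional as F

def optimization_based_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=40):
    """Many gradient-ascent steps on the adversarial objective per input (slow)."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)              # adversarial objective
        grad, = torch.autograd.grad(loss, x_adv)
        delta = (x_adv.detach() + alpha * grad.sign() - x).clamp(-epsilon, epsilon)
        x_adv = (x + delta).clamp(0.0, 1.0)                  # stay imperceptible and valid
    return x_adv

def generator_based_attack(generator, x, epsilon=8 / 255):
    """One forward pass through a trained perturbation generator per input (fast)."""
    with torch.no_grad():
        delta = epsilon * torch.tanh(generator(x))           # bounded perturbation
    return (x + delta).clamp(0.0, 1.0)

The speed difference is purely amortization: the generator absorbs the optimization cost during its own training, so attacking a new image no longer requires backpropagating through the target model.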
“…Adversarial Attacks and Defenses. When presented with adversarial samples, which are maliciously crafted with imperceptible perturbations [21,33,48], deep neural networks often suffer severe performance deterioration, e.g., [62,21,4,11] for classification models and [39,38,47,46,72,69,77,1,57] for detection/segmentation models. To address this notorious vulnerability, numerous defense mechanisms [76,55,61,50,60,52] have been proposed, such as input transformation [73,40,22,16], randomization [45,44,15], and certified defense approaches [9,51].…”
Section: Related Work
Mentioning, confidence: 99%
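Of the defense families listed here, input transformation and randomization are the easiest to illustrate: the input is randomly resized and padded at inference time so that a fixed adversarial perturbation no longer lines up with what the model sees. A minimal sketch, assuming a PyTorch model and image batches in [0, 1] (illustrative only, not any specific cited defense):

import random
import torch.nn.functional as F

def randomized_inference(model, x, max_shrink=16):
    """Randomization defense sketch: randomly resize the input, then randomly
    pad it back to its original size before classification."""
    _, _, h, w = x.shape
    new_h = random.randint(h - max_shrink, h)
    new_w = random.randint(w - max_shrink, w)
    x = F.interpolate(x, size=(new_h, new_w), mode="bilinear", align_corners=False)
    top = random.randint(0, h - new_h)
    left = random.randint(0, w - new_w)
    x = F.pad(x, (left, w - new_w - left, top, h - new_h - top))  # (left, right, top, bottom)
    return model(x)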
“…Adversarial vulnerability is a critical issue in the practical application of neural networks. Various attacks have been proposed to challenge visual recognition models for classification, detection, and segmentation [62,21,39,38,47,46,47,72,69,77,1,57]. Such susceptibility has motivated abundant studies on adversarial defense mechanisms for training robust neural networks [55,61,50,60,52,26,6,7,29], among which adversarial training based methods [48,76]…”
Section: Introduction
Mentioning, confidence: 99%
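Adversarial training, the family singled out at the end of this (truncated) excerpt, folds an attack into the training loop: each batch is first perturbed to maximize the loss, and the model is then updated on the perturbed batch. A condensed sketch of one training step, assuming a PyTorch classifier and a single-step inner attack for brevity (the cited works generally use stronger multi-step attacks):

import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """One adversarial-training update: attack the batch, then train on the result."""
    # Inner maximization: craft perturbed inputs that increase the loss.
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x + epsilon * x_pert.grad.sign()).clamp(0.0, 1.0).detach()

    # Outer minimization: a standard optimizer step on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()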