2021
DOI: 10.48550/arxiv.2107.07449
Preprint

Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving

Abstract: Deep neural networks (DNNs) have accomplished impressive success in various applications, including autonomous driving perception tasks, in recent years. On the other hand, current deep neural networks are easily fooled by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications. As a result, research into attacking and defending DNNs has gained much coverage. In this work, detailed adversarial attacks are applied on a diverse multi-task visual perceptio…

Cited by 3 publications (7 citation statements)
References 16 publications
“…(4) Untargeted perturbation. Unlike the targeted perturbation approaches [12] (e.g., by turning a sunny image into a rainy image such that the method is robust to the rainy image) which typically fail to adapt to 'unseen' conditions (e.g., snowy conditions), our approach uses untargeted perturbation [27]. Therefore, it can adapt to a wide variety of unseen weather conditions without specifying the target domain.…”
Section: B. Proposed Perturbation Mechanism
confidence: 99%
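The quote above contrasts targeted perturbation (pushing an input toward one chosen domain) with untargeted perturbation (simply making the input harder for the model, with no target specified). A minimal FGSM-style sketch of the untargeted idea, using a toy linear softmax classifier whose shapes, weights, and epsilon are illustrative assumptions and not the cited papers' setup:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def untargeted_fgsm(x, w, b, y_true, eps=0.1):
    """One-step untargeted perturbation: ascend the loss surface w.r.t. x.

    No target class or target domain is ever specified; the step direction
    comes only from the gradient of the true-class cross-entropy.
    """
    probs = softmax(w @ x + b)
    onehot = np.eye(len(b))[y_true]
    grad = w.T @ (probs - onehot)   # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad)  # move *up* the loss, bounded by eps per pixel

rng = np.random.default_rng(0)
w, b = rng.normal(size=(3, 4)), np.zeros(3)
x = rng.normal(size=4)
x_adv = untargeted_fgsm(x, w, b, y_true=0)
```

Because the perturbation only maximizes loss rather than imitating a particular condition, the same mechanism applies regardless of which "unseen" domain later appears, which is the adaptability argument the citing paper makes.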
“…One possible solution to improve the robustness of existing deep learning-based localization models is to use basic data augmentation to diversify the training data distribution. For example, [1], [3] shifted the RGB pixel values, [17] used Gaussian noise, and [27] employed blur, HSV shift, and other perturbations to simulate cross-weather interference. However, these domain adaptation techniques are data agnostic.…”
Section: Introduction
confidence: 99%
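The augmentations this quote lists (Gaussian noise, blur, HSV-style shifts) can be sketched in a few lines. This is a hedged illustration, not the cited papers' pipelines: the box blur stands in for a Gaussian blur, the brightness shift approximates the V channel of an HSV shift, and all magnitudes and kernel sizes are assumptions.

```python
import numpy as np

def gaussian_noise(img, sigma=0.05, rng=None):
    """Add pixel-wise Gaussian noise, keeping values in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def box_blur(img, k=3):
    """Mean filter as a simple stand-in for Gaussian blur."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def value_shift(img, dv=0.1):
    """Global brightness shift, approximating an HSV V-channel shift."""
    return np.clip(img + dv, 0.0, 1.0)

img = np.random.default_rng(1).random((8, 8, 3))  # toy 8x8 RGB image in [0, 1]
augmented = value_shift(box_blur(gaussian_noise(img)))
```

The "data agnostic" criticism in the quote is visible here: every transform is applied uniformly to all pixels, with no awareness of which regions carry the information the downstream task needs.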
“…One possible solution to tackle this problem is to use well-known basic data augmentation techniques, such as shifting RGB pixel values or adding Gaussian noise [2], [1], [7], [8], [9]. However, these "data agnostic" techniques may corrupt important information by perturbing all image pixels uniformly.…”
Section: Introduction
confidence: 99%
“…While GANs have been used for camera localization, they require some target-domain information [16], [17], [18], whereas AT can generalize without prior information about the target domain [13]. Furthermore, some other works also use the AT technique, but they focus on different tasks [19], [20], [21], [3], [8].…”
Section: Introduction
confidence: 99%
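The adversarial training (AT) contrast drawn in this last quote — no target-domain data needed, only self-generated worst-case perturbations — follows the usual min-max loop: an inner step attacks the input, an outer step trains on the attacked input. A hedged toy sketch on a linear softmax model; the model, epsilon, learning rate, and step count are illustrative assumptions, not any cited paper's configuration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(x, w, y):
    """Cross-entropy of the true class under the linear model w."""
    return -np.log(softmax(w @ x)[y])

def grad_x(x, w, y):
    """Gradient of the loss w.r.t. the input (used by the inner attack)."""
    return w.T @ (softmax(w @ x) - np.eye(w.shape[0])[y])

def grad_w(x, w, y):
    """Gradient of the loss w.r.t. the weights (used by the outer update)."""
    return np.outer(softmax(w @ x) - np.eye(w.shape[0])[y], x)

rng = np.random.default_rng(0)
w = 0.1 * rng.normal(size=(3, 4))
x, y = rng.normal(size=4), 1
loss_before = ce_loss(x, w, y)
for _ in range(100):
    x_adv = x + 0.05 * np.sign(grad_x(x, w, y))  # inner step: attack the input
    w -= 0.2 * grad_w(x_adv, w, y)               # outer step: train on the attack
loss_after = ce_loss(x, w, y)
```

Nothing in the loop references a target domain; the perturbations are generated from the model itself, which is the generalization property the citing paper credits to AT over GAN-based approaches.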