2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01028
Class-Aware Robust Adversarial Training for Object Detection

Cited by 36 publications (27 citation statements)
References 15 publications
“…Timing overhead is one of the most important factors that may limit the deployability of defenses for real-time systems such as AD systems. Among the existing defenses, 4 have negligible or no timing overhead by design or as shown in evaluation [57,81,105,107], and 1 fails to keep up with the camera and LiDAR frame rate in evaluation [104]. For the remaining ones, we cannot conclude their timeliness from their papers (e.g., no timing overhead evaluation).…”
Section: Systematization of AD AI Defenses
confidence: 97%
“…Several defenses try to improve the robustness of the AI component against attacks. For example, Chen et al [105] applied adversarial training [13] to make the camera object detection model more robust. Jia et al [106] improved the model robustness by predicting and removing potential adversarial perturbations from the model inputs.…”
Section: Systematization of AD AI Defenses
confidence: 99%
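The excerpt above mentions input purification, i.e., removing potential adversarial perturbations before the detector sees an image. As a rough illustration of that idea only (not the learned method of Jia et al. [106]), the following sketch smooths pixels that deviate sharply from their local neighborhood mean; the `threshold` parameter and the 3x3 averaging are illustrative assumptions:

```python
import numpy as np

def purify_input(image, threshold=0.15):
    """Crude input-purification sketch: replace pixels that deviate
    sharply from their 3x3 local mean. A stand-in for learned
    perturbation removal, not Jia et al.'s actual method."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    # 3x3 local mean computed via the nine shifted views of the padded image
    neigh = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    local_mean = neigh.mean(axis=0)
    suspicious = np.abs(image - local_mean) > threshold
    return np.where(suspicious, local_mean, image)
```

A lone high-magnitude pixel (a spike-like perturbation) gets pulled toward its neighborhood average, while smooth regions pass through unchanged; a real purification defense would instead predict the perturbation with a trained model.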
“…Such vulnerability has inspired increasing attention to adversarial robustness, mainly in the image classification task [29,3,16,36,22]. Nevertheless, with elaborate architectures that simultaneously recognize where objects are in an image and which category they belong to, object detectors also suffer from vulnerable robustness and are easily fooled by adversarial attacks [32,30,6,5,15]. As demonstrated in Fig.…”
Section: Introduction
confidence: 99%
“…To address this issue, MTD [34], as an earlier attempt, regards adversarial training for object detection as multi-task learning and chooses the adversarial images that have the largest impact on the total loss for learning. Subsequently, the second related work, CWAT [5], points out the problem of class imbalance in the attack and proposes to attack each category as evenly as possible to generate more reasonable adversarial images. In general, these existing methods suffer from a detection robustness bottleneck: significant degradation on clean images with only limited adversarial robustness, as shown in Fig.…”
Section: Introduction
confidence: 99%
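The adversarial training scheme that MTD [34] and CWAT [5] build on generates worst-case perturbed inputs with projected gradient descent (PGD) and then trains on them. As a minimal, self-contained sketch of that inner PGD loop only, the snippet below attacks a toy linear "detector head" with a logistic loss; the model, loss, and all hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the papers' actual configurations:

```python
import numpy as np

def pgd_attack(x, y, w, eps=0.1, alpha=0.02, steps=10):
    """PGD on a logistic loss log(1 + exp(-y * w.x)) for a linear
    model w. Generic adversarial-example sketch, not MTD/CWAT code."""
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (w @ x_adv)
        # d/dx of log(1 + exp(-margin)) = -y * w * sigmoid(-margin)
        grad = -y * w / (1.0 + np.exp(margin))
        x_adv = x_adv + alpha * np.sign(grad)      # gradient-ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
    return x_adv
```

In full adversarial training, each minibatch is first perturbed this way and the model parameters are then updated on the perturbed batch; MTD additionally selects which task loss drives the attack, and CWAT balances the attack across object categories.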