2022
DOI: 10.48550/arxiv.2207.06202
Preprint

Adversarially-Aware Robust Object Detector

Abstract: Object detection, as a fundamental computer vision task, has achieved remarkable progress with the emergence of deep neural networks. Nevertheless, few works explore the adversarial robustness of object detectors to resist adversarial attacks for practical applications in various real-world scenarios. Detectors have been greatly challenged by unnoticeable perturbations, with a sharp performance drop on clean images and extremely poor performance on adversarial images. In this work, we empirically explore the mo…

Cited by 2 publications (3 citation statements)
References 23 publications
“…We choose adversarial training (AT) as the last countermeasure for X-Adv. Although AT for image classification has been widely studied [25,30], only some preliminary studies have been devoted to object detection [6,10,52]. Here, we adopt RobustDet [10] as the AT method and use an SSD detector with a backbone of VGG-16.…”
Section: Countermeasures Against X-Adv
confidence: 99%
“…Although AT for image classification has been widely studied [25,30], only some preliminary studies have been devoted to object detection [6,10,52]. Here, we adopt RobustDet [10] as the AT method and use an SSD detector with a backbone of VGG-16. Specifically, we adversarially train two detectors using adversarial examples generated by (1) PGD attacks or (2) our X-Adv.…”
Section: Countermeasures Against X-Adv
confidence: 99%
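The statement above describes adversarial training with PGD-generated examples. As a point of reference for how such examples are produced, the following is a minimal, generic sketch of an L-infinity PGD attack in NumPy. It is not RobustDet's or X-Adv's actual procedure; the `grad_fn` callback, step size, and radius are illustrative assumptions — in practice `grad_fn` would return the gradient of the detection loss with respect to the input image.

```python
import numpy as np

def pgd_attack(grad_fn, x, eps=0.1, alpha=0.02, steps=10, rng=None):
    """Generic L-infinity PGD sketch: repeatedly step in the sign of the
    loss gradient, projecting back into the eps-ball around the clean
    input x and clipping to the valid pixel range [0, 1].

    grad_fn: callable returning dLoss/dx at a given input (assumed here).
    """
    rng = rng or np.random.default_rng(0)
    # Random start inside the eps-ball, a common PGD initialization.
    x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    for _ in range(steps):
        g = grad_fn(x_adv)                         # gradient of the loss
        x_adv = x_adv + alpha * np.sign(g)         # ascend the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay in pixel range
    return x_adv
```

In an adversarial-training loop, each batch would be perturbed with `pgd_attack` before the usual gradient update, so the detector is optimized on the worst-case inputs inside the eps-ball rather than on clean images alone.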
“…Adversarial training methods, such as AdvProp [28] for classification and Det-AdvProp [29] for detection, create adversarial samples and add them to the training process to improve the model's robustness. RobustDet [30] modifies the model framework to defend against attacks.…”
Section: Introduction
confidence: 99%