2020 · Preprint
DOI: 10.48550/arxiv.2007.03730

Detection as Regression: Certified Object Detection by Median Smoothing

Abstract: Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date. While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive. This work is motivated by recent progress on certified classification by randomized smoothing. We start by presenting a reduction from object detection to a regression problem. Then, to enable certified regression, where standard mean smoothing fails, we propose median smoothing…
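The construction the abstract outlines can be illustrated concretely. The following is a minimal NumPy sketch of median smoothing for a scalar regressor, not the paper's own code: the smoothed output is the median of f under Gaussian input noise, and the Gaussian smoothing argument confines the output under any ℓ2 perturbation of norm at most eps between two nearby percentiles. The function names and the toy regressor are illustrative, and a faithful implementation would replace the plug-in percentiles with order-statistic confidence bounds to account for Monte Carlo error.

import numpy as np
from scipy.stats import norm

def median_smooth(f, x, sigma=0.25, n=1000, eps=0.5, seed=0):
    # Sample the base regressor f under Gaussian input noise.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    ys = np.sort(np.array([f(x + d) for d in noise]))
    # Any ||delta||_2 <= eps can move the median of the smoothed output
    # at most to the Phi(Phi^{-1}(1/2) -/+ eps/sigma) percentiles.
    p_lo = norm.cdf(norm.ppf(0.5) - eps / sigma)
    p_hi = norm.cdf(norm.ppf(0.5) + eps / sigma)
    lo = ys[int(np.floor(p_lo * (n - 1)))]
    hi = ys[int(np.ceil(p_hi * (n - 1)))]
    return np.median(ys), lo, hi

# Toy base regressor standing in for one predicted box coordinate.
f = lambda z: float(np.sum(z))
med, lo, hi = median_smooth(f, np.zeros(4))
print(f"smoothed output {med:.3f}, certified range [{lo:.3f}, {hi:.3f}]")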

Cited by 2 publications (6 citation statements)
References 18 publications
“…For our certificates, we focus on the ℓ2 adversary described above: the goal of certification is to bound the worst-case decrease in trigger set accuracy, given that the model parameters do not move too far in ℓ2 distance. Doing this directly is in general quite difficult (Katz et al., 2019), but using techniques from (Chiang et al., 2020; Cohen et al., 2019), we show that by adding random noise to the parameters it is possible to define a smoothed version of the model and bound the change in its trigger set accuracy.…”
Section: Watermark Certification (mentioning)
confidence: 99%
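As a rough sketch of the construction this statement describes, one can estimate the smoothed model's trigger-set accuracy by averaging over Gaussian noise added to the parameters; the smoothing argument of Cohen et al. (2019) then bounds how much this expectation can change when the parameters move by at most eps in ℓ2. The predict(theta, x) interface and the toy linear model below are hypothetical stand-ins, not the citing paper's code.

import numpy as np

def smoothed_trigger_accuracy(predict, theta, trigger_x, trigger_y,
                              sigma=0.1, n=500, seed=0):
    # Monte Carlo estimate of trigger-set accuracy under Gaussian
    # noise on the flattened parameter vector theta.
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n):
        theta_noisy = theta + rng.normal(0.0, sigma, size=theta.shape)
        preds = np.array([predict(theta_noisy, x) for x in trigger_x])
        accs.append(np.mean(preds == trigger_y))
    return float(np.mean(accs))

# Hypothetical linear "model": label is the sign of <theta, x>.
predict = lambda th, x: int(th @ x > 0)
theta = np.ones(8)
trigger_x = [np.ones(8), -np.ones(8)]
trigger_y = np.array([1, 0])
print(smoothed_trigger_accuracy(predict, theta, trigger_x, trigger_y))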
“…Certified adversarial robustness involves not only training the model to be robust to adversarial attacks under particular threat models, but also proving that no possible attack under a particular constraint could succeed. Specifically, in this paper, we used the randomized smoothing technique first developed by (Cohen et al., 2019; Lecuyer et al., 2019) for classifiers, and later extended by (Chiang et al., 2020) to deal with regression models. However, as opposed to defending against an ℓ2-bounded threat model in the image space, we are now defending against an ℓ2-bounded adversary in the parameter space.…”
Section: Related Work (mentioning)
confidence: 99%
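For reference, the classifier-level certificate of Cohen et al. (2019) that this statement starts from can be sketched as follows: estimate which class the smoothed classifier predicts under Gaussian input noise, and if that class wins with probability p > 1/2, the prediction is certified against any ℓ2 perturbation of norm below sigma * Phi^{-1}(p). This sketch uses a plug-in estimate of p where the published procedure uses a Clopper-Pearson lower bound; the names are illustrative.

import numpy as np
from scipy.stats import norm

def certify(classifier, x, sigma=0.25, n=1000, seed=0):
    # Vote over the base classifier's predictions under Gaussian noise.
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n):
        c = classifier(x + rng.normal(0.0, sigma, size=x.shape))
        votes[c] = votes.get(c, 0) + 1
    top = max(votes, key=votes.get)
    # Plug-in estimate of the top-class probability, clamped so the
    # certified radius sigma * Phi^{-1}(p) stays finite.
    p_top = min(votes[top] / n, 1.0 - 1e-3)
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return top, radius

# Toy binary classifier on a 4-dimensional input.
clf = lambda z: int(np.sum(z) > 0)
print(certify(clf, np.full(4, 0.5)))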
“…In the domain of object detection, most existing defenses focus on global perturbations with an ℓp norm constraint [8,10,51], and only a few defenses [20,39,48] for patch attacks have been proposed. Saha [39] proposed Grad-defense and OOC defense for defending against blindness attacks, in which the detector is made blind to a specific object category chosen by the adversary.…”
Section: Defenses Against Patch Attacks (mentioning)
confidence: 99%