2017
DOI: 10.48550/arxiv.1710.03337
Preprint

Standard detectors aren't (currently) fooled by physical adversarial stop signs

Abstract: An adversarial example is an example that has been adjusted to produce the wrong label when presented to a system at test time. If adversarial examples existed that could fool a detector, they could be used to (for example) wreak havoc on roads populated with smart vehicles. Recently, we described our difficulties creating physical adversarial stop signs that fool a detector. More recently, Evtimov et al. produced a physical adversarial stop sign that fools a proxy model of a detector. In this paper, we show t…
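The "adjusted example" in the abstract is typically constructed by perturbing an input along the gradient of the model's loss. The sketch below shows the standard fast gradient sign method (FGSM) for an image classifier as a generic illustration; the names `model`, `image`, and `label` are assumed inputs, and this is not the physical stop-sign attack evaluated in the paper.

```python
# Minimal FGSM sketch (assumes a differentiable PyTorch classifier `model`,
# an input tensor `image` of shape [1, 3, H, W] with values in [0, 1],
# and its true class index `label` as a LongTensor of shape [1]).
# Illustrates the generic notion of an adversarial example from the abstract;
# it is NOT the physical stop-sign attack discussed in this paper.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, image, label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a small amount in the direction that increases the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```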

Cited by 26 publications (22 citation statements), published 2018–2023
References 22 publications

Citation statements
“…Furthermore, Evtimov et al. (15) and Eykholt et al. (16) designed a real-world stop sign using domain knowledge from adversarial examples and successfully caused the object recognition system to classify it as a speed limit sign. Several works, such as Lu et al. (33,34), showed that adversarial stop signs in the physical world will not fool a modern CAV perception system, because the CAV is continuously moving and detects the object at each time step. However, Chen et al. (14) showed that the perception system of a CAV is still vulnerable to specific adversarial examples.…”
Section: Corner Case Generation for Vehicle Perception (mentioning)
confidence: 99%
“…More recently, there has been increasing interest in the research community in exploring whether adversarial examples are also effective against more complex vision systems [34,33,32]. For instance, among the latest results in this debate, Lu et al. showed that adversarial examples previously constructed to fool CNN-based classifiers cannot fool state-of-the-art detectors [33]. In this work, we are interested in exploring whether language context in language-and-vision systems offers more resistance to adversarial…”
[Figure 1 of the citing paper: a source/target image example ("A black and white dog sits next to a bottle on the ground") with predicted adversarial captions at similarity thresholds τ > 0.25, τ > 0.20, τ > 0.15, and τ < 0.15.]
Section: Introduction (mentioning)
confidence: 99%
“…Since the first Adversarial Example (AE) against traffic sign image classification was discovered by Eykholt et al. [10], several research works in adversarial machine learning [9,30,15,16,35,6] have started to focus on the context of visual perception in autonomous driving and have studied AEs on object detection models. For example, Eykholt et al. [9] and Zhong et al. [36] studied AEs in the form of adversarial stickers on stop signs or the backs of front cars against YOLO object detectors [23], and performed indoor experiments to demonstrate the attack's feasibility in the real world.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, as shown by our analysis later in §4, an attack on object detection needs to succeed consecutively for at least 60 frames to fool a representative MOT process, which requires an at least 98% attack success rate (§4). To the best of our knowledge, no existing attacks on object detection can achieve such a high success rate [9,30,15,16,35,6].…”
Section: Introduction (mentioning)
confidence: 99%
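The 60-consecutive-frame requirement in the statement above compounds quickly. The sketch below is an illustrative calculation, not taken from the cited paper: assuming, for simplicity, that each frame is attacked independently with per-frame success rate p, all 60 frames are fooled with probability p**60, which makes clear why a very high per-frame rate is needed.

```python
# Illustrative arithmetic for the quoted claim: fooling a tracker that requires
# ~60 consecutive fooled detection frames. Assuming (for illustration) that each
# frame is fooled independently with probability p, all 60 succeed with p**60.
frames = 60
for p in (0.90, 0.95, 0.98, 0.99):
    print(f"per-frame success {p:.2f} -> {frames}-frame success {p**frames:.3f}")
# e.g. 0.98**60 ≈ 0.30, while 0.90**60 ≈ 0.002, hence the need for a very
# high per-frame attack success rate.
```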