2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019
DOI: 10.1109/cvprw.2019.00012
Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection

Abstract: Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the network can be swayed into producing a completely different result. The first attacks did this by slightly changing the pixel values of an input image to fool a classifier into outputting the wrong class. Other approaches have tried to learn "patches" that can be applied to an object to fool detectors and classifiers. Some of these appr…
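The pixel-level attacks the abstract alludes to can be illustrated with a short sketch along the lines of the fast gradient sign method; the classifier, input range, and epsilon below are assumptions made for the example, not the setup used in this paper (which optimizes a physical patch rather than per-image pixels).

```python
# Illustrative FGSM-style pixel attack (not this paper's patch method).
# The classifier choice and epsilon are assumptions made for this sketch.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Nudge `image` (1x3xHxW in [0, 1]) in the direction that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step along the sign of the gradient, then clamp back to a valid image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```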

Cited by 481 publications (443 citation statements); References 18 publications (28 reference statements)
“…Recently, many methods have been proposed for attacking object detection and semantic segmentation networks [26]. For example, Thys et al [27] developed a method to learn a patch that can be applied to an object to fool the YOLO [28] object detector and classifier.…”
Section: A. Attack Methods
Mentioning confidence: 99%
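The patch attack summarized in this excerpt is, at its core, an optimization of the patch pixels through the detector. A minimal sketch of the compositing step is given below; the one-box-per-image assumption, patch size, and optimizer settings are illustrative and not taken from the cited implementation.

```python
# Minimal sketch of optimizing a patch that gets pasted onto person boxes.
# One integer (x1, y1, x2, y2) box per image, no rotation/scale jitter.
import torch
import torch.nn.functional as F

patch = torch.rand(3, 300, 300, requires_grad=True)   # learnable patch pixels
optimizer = torch.optim.Adam([patch], lr=0.03)

def apply_patch(images, boxes, patch):
    """Paste a resized copy of the patch onto one box per image."""
    patched = images.clone()
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        resized = F.interpolate(patch.clamp(0, 1).unsqueeze(0),
                                size=(y2 - y1, x2 - x1),
                                mode="bilinear", align_corners=False)
        patched[i, :, y1:y2, x1:x2] = resized[0]
    return patched

# Each step: run the detector on apply_patch(...), take its person objectness
# as the loss, backpropagate into `patch`, and call optimizer.step().
```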
“…An example here is a single perturbation for any street sign that deceives a sensing system with high probability [2]. A subset of data-agnostic perturbations, called class-universal, is data-agnostic only for intra-class signals [13] (i.e., a single perturbation for every stop sign); that is, ∀x ∼ X_c, where c is the corresponding class. Data-agnostic and class-universal APs are critical for the robustness of AVs, since a single AP added to the physical world could potentially mislead any classifier of a specific modality, thus posing a safety threat.…”
Section: Robust Sensing
Mentioning confidence: 99%
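Written out, the class-universal setting described in this excerpt asks for one perturbation that works across a whole class; the symbols below (classifier f, budget ε, success rate p) are added here only for illustration.

```latex
% One perturbation \delta shared by all inputs of class c (e.g., all stop signs)
\[
\text{find } \delta \quad \text{s.t.} \quad
\Pr_{x \sim X_c}\big[\, f(x + \delta) \neq c \,\big] \ge p,
\qquad \|\delta\|_\infty \le \varepsilon .
\]
```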
“…Unlike attacks that focus on targets with no intra-class variety (i.e., only stop signs), target types with large intra-class variety (persons) are considered in [13]. The loss function used to hide people from detectors considers three factors, namely, a non-printability score (how well the colors of a patch can be reproduced by a printer); the total variation of the image (favoring a patch with smooth color transitions); and the maximum objectness score in the image (i.e., the effectiveness at hiding a person), which aims to minimize the object or class score output by the detector.…”
Section: Camera Attacks With Patches
Mentioning confidence: 99%
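The three terms named in this excerpt (non-printability score, total variation, and maximum objectness) are typically combined as a weighted sum. The sketch below shows one plausible formulation; the weights and helper interfaces are placeholders rather than the authors' released code.

```python
# Sketch of the three-term patch loss described above; weights and helper
# interfaces (objectness extraction, printable-color set) are assumptions.
import torch

def total_variation(patch):
    """Penalize abrupt color changes so the patch stays smooth and printable."""
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return dh + dw

def non_printability_score(patch, printable_colors):
    """Mean distance of each patch pixel to its nearest printer color (Cx3 tensor)."""
    pixels = patch.reshape(3, -1).T                    # (H*W, 3)
    dists = torch.cdist(pixels, printable_colors)      # (H*W, C)
    return dists.min(dim=1).values.mean()

def patch_loss(objectness_scores, patch, printable_colors, alpha=0.01, beta=2.5):
    """Suppress the detector's strongest person detection while keeping the
    patch smooth (TV) and restricted to printable colors (NPS)."""
    l_obj = objectness_scores.max()
    return l_obj + alpha * non_printability_score(patch, printable_colors) \
                 + beta * total_variation(patch)
```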
“…They claim it manages to achieve success rates of up to 74% and 57% in the digital and physical worlds, respectively, against the popular YOLOv2 model (see Figure 9). Thys et al [22] implemented a similar approach. They show how simple printed patterns can fool an AI system designed to recognize people in images (YOLOv2).…”
Section: Attacks in the Physical World
Mentioning confidence: 99%