2019 · Preprint
DOI: 10.48550/arxiv.1905.11026

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

Yunhan Jia, Yantao Lu, Junjie Shen, et al.

Abstract: Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models. However, in such a visual perception pipeline, the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly…
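The robustness the abstract alludes to typically comes from the tracker's data-association logic: a detection must match an existing track for several consecutive frames before it is trusted, and a track survives several unmatched frames before it is dropped. The sketch below is my own minimal illustration, not the paper's code; names and thresholds such as `max_age` and `hits_to_confirm` are assumptions. It shows why a detector fooled on one or two frames leaves the tracked trajectories intact.

```python
# Minimal sketch of IoU-based MOT association (illustrative, not the
# paper's implementation). A new detection only becomes a confirmed
# track after `hits_to_confirm` consecutive matches, and an existing
# track is only dropped after `max_age` consecutive misses, so a
# single fooled frame neither creates nor destroys a trajectory.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

class Track:
    def __init__(self, box):
        self.box = box
        self.hits = 1      # matched frames so far
        self.misses = 0    # consecutive unmatched frames

def update(tracks, detections, iou_thr=0.3, max_age=5, hits_to_confirm=3):
    """One frame of greedy IoU association; returns surviving tracks."""
    unmatched = list(detections)
    for t in tracks:
        best = max(unmatched, key=lambda d: iou(t.box, d), default=None)
        if best is not None and iou(t.box, best) >= iou_thr:
            t.box, t.hits, t.misses = best, t.hits + 1, 0
            unmatched.remove(best)
        else:
            t.misses += 1  # missed (or fooled) detection: keep coasting
    tracks = [t for t in tracks if t.misses <= max_age]
    tracks += [Track(d) for d in unmatched]  # tentative new tracks
    return tracks
```

Real trackers additionally predict each box forward with a motion model (e.g., a Kalman filter) while a track coasts, so a detector fooled on scattered frames barely perturbs the output trajectories.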

Cited by 2 publications (3 citation statements) · References 27 publications (80 reference statements)

Citation statements (ordered by relevance):
“…Taking the realm of physical world attacks into account, Eykholt et al. (2018) analyzed adversarial stickers on stop signs in the context of autonomous driving to fool YOLO (Redmon et al., 2016). Jia et al. (2019) proposed a 'tracking hijacking' technique to fool multiple object trackers with imperceptible perturbations computed for object detectors in the perceptual pipeline of autonomous driving. Meanwhile, Yan et al. (2020a) developed an attacking technique to deceive single object trackers based on SiamRPN++ (Li et al., 2018).…”
Section: Adversarial Attacks on Visual Object Tracking
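To make the quoted 'tracking hijacking' idea concrete, here is a hedged toy model (my own illustration, not the authors' implementation): the perturbation erases the true detection and fabricates one at a slightly shifted position, so a constant-velocity tracker absorbs an attacker-chosen velocity and then coasts along it once the fabricated box disappears.

```python
# Toy 1-D "tracking hijacking" demo (illustrative assumptions only:
# the gain `alpha` and all frame counts are made up, not the paper's).
# Fabricated detections, offset by `shift_per_frame`, teach the
# tracker a fake velocity; when the real object is then erased, the
# track coasts away from the true position on its own.

def hijack_demo(frames_fooled=3, shift_per_frame=8.0, coast_frames=5):
    x, v = 100.0, 0.0          # tracked center and estimated velocity
    alpha = 0.5                # toy gain blending prediction and measurement
    for _ in range(frames_fooled):
        fake = x + shift_per_frame      # fabricated, slightly shifted box
        pred = x + v
        v += alpha * (fake - pred)      # velocity absorbs induced motion
        x = pred + alpha * (fake - pred)
    for _ in range(coast_frames):       # detection erased: track coasts
        x += v                          # trajectory keeps drifting
    return x, v

print(hijack_demo())  # center drifts far from the true position, 100.0
```

Only a handful of fooled frames are needed because the tracker itself propagates the injected velocity afterwards, which is the asymmetry the quoted attack exploits.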
“…For example, (Szegedy et al. 2013) first shows that adversarial examples, generated by adding visually imperceptible perturbations to the original images, could make classification models predict a wrong label with high confidence. Further, (Thys, Van Ranst, and Goedemé 2019) successfully generates adversarial patches that can hide a person from a person detector, while (Jia et al. 2019) studies adversarial attacks against the visual perception pipeline in autonomous driving.…”
Section: Introduction
“…Using typical attack methods (Jia et al. 2019; Szegedy et al. 2013; Thys, Van Ranst, and Goedemé 2019) to generate adversarial perturbations against SOT is difficult, and we analyze and summarize several reasons for it. Firstly, SOT algorithms could handle information across frames in real time to locate the trajectory of the target in videos.…”
Section: Introduction