Adversarial T-Shirt! Evading Person Detectors in a Physical World
2020
DOI: 10.1007/978-3-030-58558-7_39

Cited by 205 publications (161 citation statements)
References 18 publications
“…Adversarial stickers and graffiti have also been used to attack traffic sign classifiers and ImageNet classifiers in physical-world scenarios [10]. Other physical-world attacks include adversarial eye-glass frames [23], vehicles [35], or t-shirts [32] that can fool face recognition systems or object detectors. All these physical-world attacks generate large perturbations to increase adversarial strength, which inevitably results in large and unrealistic distortions.…”
Section: Physical-world Attacks (mentioning)
confidence: 99%
“…This has already led to an incessant attacker-defender race in the fast-moving field of security for machine learning and adversarial examples [53][54][55][56]. In recent years, researchers have, among other things, developed attack schemes showing how to evade cybersecurity AI [57], e-mail protection, verification tools [58], forensic classifiers [59] and person detectors [60], how to elicit algorithmic biases [13,61], how to fool medical AI [62][63][64][65], law enforcement tools [66] as well as autonomous vehicles [67,68], how to perform denial-of-service and other adversarial attacks on commercial AI services [69][70][71], how to cause energy-intense and unnecessarily prolonged processing [72], and how to poison AI systems post-deployment [73].…”
Section: RDA for AI Risk Instantiations Ia and Ib - Examples (mentioning)
confidence: 99%
“…Adversarial example attacks have been studied broadly and are considered one of the most important attack models for exploring DNN vulnerability. Adversarial patch attacks add an input-independent patch perturbation to the input images to manipulate the victim model into outputting malicious results [39]. Compared to adversarial example attacks, patch attacks are more practical for several reasons: 1) better universality across different input images.…”
Section: Advanced Attacks With Extracted DNN Archs (mentioning)
confidence: 99%
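The excerpt above describes the input-independent nature of patch attacks. As a minimal sketch (not the cited paper's method), the snippet below shows one untargeted optimization step on a patch that is shared across a whole batch; `model` (a PyTorch image classifier), the image range of [0, 1], and the fixed paste location are assumptions, not details from the source.

```python
# Minimal patch-attack sketch: one shared patch, pasted at a fixed location,
# updated by gradient ascent on the classification loss.
import torch
import torch.nn.functional as F

def patch_attack_step(model, patch, images, labels, step=0.01, top=20, left=20):
    """One untargeted optimization step on a single patch shared by all images."""
    patch = patch.clone().detach().requires_grad_(True)
    h, w = patch.shape[-2:]

    patched = images.clone()
    # The same patch pixels are pasted onto every image in the batch.
    patched[:, :, top:top + h, left:left + w] = patch

    # Maximize the loss so predictions move away from the true labels.
    loss = F.cross_entropy(model(patched), labels)
    loss.backward()

    with torch.no_grad():
        # Signed-gradient ascent, clamped to keep valid pixel values.
        patch = (patch + step * patch.grad.sign()).clamp(0.0, 1.0)
    return patch.detach()
```

Repeating this step over many batches yields a single patch that is reused unchanged on new images, which is the universality property the excerpt contrasts with per-input adversarial examples.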
“…We additionally test the effectiveness of extracted network architectures on adversarial patch attacks. A patch attack generates a uniform adversarial patch for arbitrary inputs that manipulates the victim model into outputting targeted or untargeted labels [39]. The major difference between an adversarial example and a patch attack is that adversarial patches do not depend on the inputs, whereas every different input requires a customized adversarial example.…”
Section: Adversarial Patch Attack Effectiveness (mentioning)
confidence: 99%
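To illustrate the input-independence point in this excerpt, the short evaluation sketch below applies one already-optimized patch to arbitrary new inputs with no per-image optimization; `model`, `patch`, the data loader, and the paste location are the same hypothetical objects assumed in the previous sketch.

```python
# Sketch of measuring how often a fixed, shared patch flips the prediction
# on unseen inputs (a simple universality check, not the cited evaluation).
import torch

@torch.no_grad()
def patch_fooling_rate(model, patch, loader, top=20, left=20):
    """Fraction of inputs whose prediction changes once the shared patch is applied."""
    h, w = patch.shape[-2:]
    fooled, total = 0, 0
    for images, _ in loader:
        clean_pred = model(images).argmax(dim=1)
        patched = images.clone()
        patched[:, :, top:top + h, left:left + w] = patch
        adv_pred = model(patched).argmax(dim=1)
        fooled += (adv_pred != clean_pred).sum().item()
        total += images.size(0)
    return fooled / max(total, 1)
```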