2020
DOI: 10.1007/978-3-030-58520-4_8

Spatiotemporal Attacks for Embodied Agents

Abstract: Adversarial attacks are valuable for providing insights into the blind spots of deep learning models and for helping improve their robustness. Existing work on adversarial attacks has mainly focused on static scenes; however, it remains unclear whether such attacks are effective against embodied agents, which can navigate and interact with a dynamic environment. In this work, we take the first step toward studying adversarial attacks for embodied agents. In particular, we generate spatiotemporal perturbations to form 3D a…

Cited by 33 publications (14 citation statements)
References 23 publications
“…Zhang et al [186] used the movement patterns in the video frames to compute a noise prior that can help in gradient estimation for fooling video classifiers in the context of query-based attacks. A spatiotemporal attack is also introduced for embodied agents in [187]. Liu et al [188] proposed an FGSM-like attack to fool skeleton-based human action recognition models.…”
Section: G. Miscellaneous Attacks (mentioning, confidence: 99%)
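
For context on the "FGSM-like attack" mentioned above, FGSM (fast gradient sign method) perturbs an input by one signed-gradient step that increases the model's loss. A minimal PyTorch sketch; the model, loss function, and epsilon value are illustrative assumptions, not details from the cited papers:

import torch

def fgsm(model, loss_fn, x, y, epsilon=0.03):
    # Classic FGSM sketch: a single step along the sign of the input
    # gradient to increase the loss. All names/values here are
    # assumptions for exposition, not the cited papers' settings.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()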
“…Adversarial examples are elaborately designed perturbations that are imperceptible to humans but can mislead DNNs [41,22]. In the past few years, a long line of work has proposed adversarial attack strategies [25,13,30,43,11,29,48,19]. In general, there are several ways to categorize adversarial attack methods, e.g., targeted versus untargeted attacks, or white-box versus black-box attacks.…”
Section: Related Work (mentioning, confidence: 99%)
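
The targeted/untargeted distinction in that taxonomy amounts to a sign flip in the gradient step: an untargeted attack ascends the loss on the true label, while a targeted attack descends the loss on an attacker-chosen target label. A hedged sketch; all names and values are assumptions for exposition:

import torch

def gradient_sign_step(model, loss_fn, x, label, epsilon=0.03, targeted=False):
    # Untargeted: increase the loss on the true label.
    # Targeted: decrease the loss on the attacker-chosen target label
    # (pass the target label as `label` with targeted=True).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), label).backward()
    direction = -x_adv.grad.sign() if targeted else x_adv.grad.sign()
    return (x_adv + epsilon * direction).detach()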
“…By introducing a seed content patch P0, which has a strong perceptual correlation to the scenario context, the generated adversarial camouflage can be less suspicious and more natural to human perception. Since humans pay more attention to object shapes when making predictions [29], we further preserve the shape information of the seed content patch to improve the human attention correlations. The human-specific attention mechanism is thus evaded, leading to more natural camouflage.…”
Section: Framework Overview (mentioning, confidence: 99%)
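
One way to read that design is as a combined objective: maximize the attack loss while penalizing deviation of the patch from the seed content patch's shape (edges). This is a hedged sketch of such an objective under my own assumptions; the weight beta, the edge proxy, and the function names are illustrative, not the cited paper's actual formulation:

import torch
import torch.nn.functional as F

def shape_term(patch, seed_patch):
    # Crude shape-preservation proxy: match the spatial gradients
    # (edges) of the optimized patch to those of the seed patch P0.
    dx_p = patch[..., :, 1:] - patch[..., :, :-1]
    dy_p = patch[..., 1:, :] - patch[..., :-1, :]
    dx_s = seed_patch[..., :, 1:] - seed_patch[..., :, :-1]
    dy_s = seed_patch[..., 1:, :] - seed_patch[..., :-1, :]
    return F.mse_loss(dx_p, dx_s) + F.mse_loss(dy_p, dy_s)

def camouflage_objective(logits, y, patch, seed_patch, beta=1.0):
    # Minimizing this objective maximizes the classification loss
    # (attack strength) while keeping the patch shaped like P0.
    return -F.cross_entropy(logits, y) + beta * shape_term(patch, seed_patch)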
“…A recent work by Yao et al [39] leverages multi-view attacks inspired by EOT to devise 3D adversarial objects that perturb the texture space, investigating attack quality across multiple classifiers. Finally, Liu et al [40] consider the case of embodied agents performing navigation and question answering. To better attack the task at hand, the perturbations are focused on the salient stimuli characterizing the temporal trajectory the embodied agent follows to complete its task.…”
Section: Adversarial Objects (mentioning, confidence: 99%)
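
EOT (Expectation over Transformation) optimizes a perturbation against the expected loss under a distribution of transformations, e.g., random viewpoints, which is what makes multi-view attacks like the one above robust. A minimal sketch; `sample_transform` is an assumed callable returning a random differentiable transform, not an API from the cited works:

import torch

def eot_gradient(model, loss_fn, x, y, sample_transform, n_samples=8):
    # Average the loss over randomly sampled transformations (views),
    # then take a single gradient of that Monte Carlo expectation.
    x_adv = x.clone().detach().requires_grad_(True)
    total = 0.0
    for _ in range(n_samples):
        t = sample_transform()
        total = total + loss_fn(model(t(x_adv)), y)
    (total / n_samples).backward()
    return x_adv.grad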