2021
DOI: 10.48550/arxiv.2109.05830
Preprint
Adversarial Bone Length Attack on Action Recognition

Abstract: Skeleton-based action recognition models have recently been shown to be vulnerable to adversarial attacks. Compared to adversarial attacks on images, perturbations to skeletons are typically bounded to a lower dimension of approximately 100 per frame. This lower-dimensional setting makes it more difficult to generate imperceptible perturbations. Existing attacks resolve this by exploiting the temporal structure of the skeleton motion so that the perturbation dimension increases to thousands. In this paper, we …
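The low-dimensional setting the abstract describes can be illustrated with a minimal FGSM-style sketch. This is not the paper's actual attack: the skeleton dimensions, bone lengths, and the linear stand-in "classifier" below are all hypothetical, chosen so the gradient is exact and the example stays self-contained.

```python
import numpy as np

# Toy illustration (not the paper's method): an FGSM-style perturbation
# applied only to bone lengths. A skeleton has on the order of 25 joints,
# so the per-frame perturbation lives in a far lower dimension than an image.

rng = np.random.default_rng(0)

n_bones = 24                                # hypothetical bone count
lengths = rng.uniform(0.1, 0.5, n_bones)    # hypothetical bone lengths (m)

# Stand-in "classifier": a linear score over bone lengths. Its gradient
# w.r.t. the input is just the weight vector, which keeps the sketch exact.
w = rng.standard_normal(n_bones)
score = lambda x: float(w @ x)

eps = 0.01                                  # perturbation budget per bone (m)
grad = w                                    # d(score)/d(lengths) for the linear model
adv_lengths = lengths + eps * np.sign(grad) # FGSM step on the bone lengths

# The perturbation stays within the per-bone budget but still shifts the score.
assert np.all(np.abs(adv_lengths - lengths) <= eps + 1e-12)
print(score(adv_lengths) - score(lengths))
```

With a real recognizer the gradient would come from backpropagation rather than being read off a weight vector, but the dimensionality argument is the same: only `n_bones` numbers per frame are available to the attacker.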

Cited by 1 publication (2 citation statements)
References 9 publications (14 reference statements)
“…Adversaries have been developed in video-based recognition (Zhang et al. 2020a; Pony, Naeh, and Mannor 2021; Hwang et al. 2021; Wei et al. 2020) and multi-modal settings (Kumar et al. 2020). Very recently, skeleton-based HAR has been shown to be extremely vulnerable (Liu, Akhtar, and Mian 2020; Tanaka, Kera, and Kawamoto 2021; Wang et al. 2021a; Diao et al. 2021; Zheng et al. 2020). Adversarial examples can be generated by Generative Adversarial Networks (Liu, Akhtar, and Mian 2020), optimization based on a new perceptual metric (Wang et al. 2021a), or exploring the interplay between the classification boundary and the natural motion manifold under the hard-label black-box setting (Diao et al. 2021).…”
Section: Related Work: Adversarial Attack (mentioning)
confidence: 99%
“…We use it as one baseline. Standard AT (Madry et al. 2018) has recently been briefly attempted on HAR (Tanaka, Kera, and Kawamoto 2021), so we use it as a baseline. SMART-AT (Diao et al. 2022) employs SMART as the attacker. We also employ another two baseline methods, TRADES (Zhang et al. 2019b) and MART (Wang et al. 2020), which are the state-of-the-art defense methods on images.…”
Section: Experiments: Experimental Settings (mentioning)
confidence: 99%