2024
DOI: 10.1117/1.jei.33.1.013052
Adversarial attack on human pose estimation network

Zhaoxin Zhang,
Shize Huang,
Xiaowen Liu
et al.

Abstract: Real-time human pose estimation (HPE) using convolutional neural networks (CNNs) is critical for enabling machines to better understand human beings from images and videos, and for assisting supervisors in identifying human behavior. However, CNN-based systems are susceptible to adversarial attacks, and attacks specifically targeting HPE have received little attention. We present a gradient-based adversarial example generation method, named AdaptiveFool, which is designed to effectively perform a keypoint…
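The abstract is truncated above, so only the general approach is visible: a gradient-based, keypoint-targeted adversarial attack on an HPE network. The snippet below is a minimal FGSM-style sketch of that general idea, not the paper's AdaptiveFool algorithm; it assumes a PyTorch HPE model that maps an image to per-keypoint heatmaps, and all function and parameter names are hypothetical.

    # Minimal FGSM-style sketch of a gradient-based attack on keypoint heatmaps.
    # NOTE: this is NOT the AdaptiveFool method from the paper; the model
    # interface, heatmap shape, and loss choice are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def fgsm_keypoint_attack(model, image, clean_heatmaps, epsilon=8 / 255):
        """Perturb `image` within an L-infinity budget `epsilon` so the
        predicted keypoint heatmaps drift away from `clean_heatmaps`.

        model           : HPE network, (1, 3, H, W) -> heatmaps (1, K, h, w)
        image           : clean input tensor with values in [0, 1]
        clean_heatmaps  : heatmaps predicted on the clean image
        """
        image = image.clone().detach().requires_grad_(True)
        pred = model(image)                           # (1, K, h, w)
        loss = F.mse_loss(pred, clean_heatmaps)       # keypoint-level loss
        loss.backward()
        # Ascend the loss: push predictions away from the clean keypoints.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

    # Usage (hypothetical model and input):
    # model.eval()
    # clean_heatmaps = model(image).detach()
    # adv_image = fgsm_keypoint_attack(model, image, clean_heatmaps)

A single-step sign-of-gradient perturbation like this is the simplest gradient-based attack; the paper's method presumably adapts the perturbation to the keypoint predictions iteratively, but the truncated abstract does not specify the details.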

Cited by 1 publication (1 citation statement)
References: 38 publications
“…After that, DAG [24] completed the adversarial attack on the object detection and instance segmentation model by the dense attack method. Then, many studies extended adversarial attacks to models in different domains, including object detection [25], instance segmentation [26,27], human pose estimation [28], person re-identification [29], person detector [30], visual language model [31,32], remote sensing [33], and 3D point cloud processing [34,35]. These works show that adversarial attacks can threaten the security of various neural network-based application models.…”
Section: Adversarial Attack
Mentioning confidence: 99%