2018
DOI: 10.1109/access.2017.2732919
Bio-Inspired Human Action Recognition With a Micro-Doppler Sonar System

Cited by 10 publications (5 citation statements)
References 46 publications
“…Additionally, a human action recognition algorithm is presented in ref. [10]. Specifically, statistical models are trained jointly on the micro‐Doppler modulations induced by human actions and on symbolic representations of skeletal poses; this training enables the model to learn the relations between the rich temporal structure of the micro‐Doppler modulations and the high‐dimensional pose sequences of human action. An approach to real‐time human sensing based on micro‐Doppler estimation of radar targets is presented in ref.…”
Section: Introduction
confidence: 99%
“…The Thalmann model used in this research provides a very detailed movement description, but for the purpose it serves here a simpler, faster-to-evaluate approximation of the movement model would also suffice. One possibility for easily building movement models for many types of movement is to use a 3D point-cloud sensor, as described in (Murray et al, 2018).…”
Section: Future Work
confidence: 99%
“…Depending on the type of data used, human action recognition is usually divided into three main categories: vision-based, acoustics-based, and inertial-sensor-based recognition [13]. Vision-based action recognition extracts human action data from image or video data obtained by optical sensors [14, 15]; acoustic action recognition uses sound signals for high-precision hand tracking and gesture recognition [16, 17]; and inertial-sensor action recognition focuses on extracting human motion data from wearable inertial sensors [18, 19]. Liu Yutao [20] summarized the three methods, as shown in Table 1.…”
Section: Introduction
confidence: 99%