2019
DOI: 10.1109/tsmc.2018.2850149

Deep Convolutional Neural Networks for Human Action Recognition Using Depth Maps and Postures

Abstract: In this paper, we present a method for human action recognition from depth images and posture data using convolutional neural networks (CNN). Two input descriptors are used for action representation. The first input is a depth motion image (DMI) that accumulates consecutive depth images of a human action, whilst the second input is a proposed moving joints descriptor (MJD) which represents the motion of body joints over time. In order to maximize feature extraction for accurate action classification, three CNN…
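Below is a minimal sketch of how a DMI-style template could be built from a depth sequence. The abstract only states that the DMI accumulates consecutive depth images; the per-pixel maximum used here, the normalisation step, and the names (depth_motion_image, depth_frames) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def depth_motion_image(depth_frames):
    """Build a DMI-style template from a sequence of depth maps.

    depth_frames: array of shape (T, H, W) holding T consecutive depth frames.
    Assumption: accumulation is approximated by a per-pixel maximum over the
    sequence, which keeps the envelope swept by the moving body; the paper's
    exact accumulation rule may differ.
    """
    frames = np.asarray(depth_frames, dtype=np.float32)
    dmi = frames.max(axis=0)          # accumulate over time, pixel by pixel
    dmi -= dmi.min()                  # normalise to [0, 1] for CNN input
    if dmi.max() > 0:
        dmi /= dmi.max()
    return dmi


# Example: 30 synthetic 240x320 depth frames.
sequence = np.random.rand(30, 240, 320).astype(np.float32)
print(depth_motion_image(sequence).shape)   # (240, 320)
```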

Cited by 169 publications (91 citation statements)
References 56 publications
“…In Fig. 6(a) top, we fix the angle thresholds at THAngles = [12,12,10,10,8,5,8,5] and change the THiRT thresholds from TH1RT to TH7RT, then check the comparison results. We notice that from TH4RT to TH7RT, the algorithm generates correct judgments for the four action pairs.…”
Section: A. UTD-MHAD Dataset, 1) UTD-MHAD Results
Citation type: mentioning; confidence: 99%
“…Imitation can happen at different levels, such as at the action level or at the effect level [41]. Recently, advances in motion analysis and estimation have been proposed [13,14,15], and these techniques have also been applied to humanoid robot motion learning through sensorimotor representation and physical interactions [42]. In this paper, we use trajectory-level imitation as an instrumental example of the application of our proposed multimodal learning approach.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…learning a forward model) or actions of others (e.g. human trajectories from images or videos) [13,14,15]; in this paper the goal is to learn a model of the self that can be applied to predict and imitate the visual perception of another agent from an egocentric point of view. The proposed architecture is based on a self-learned model, which is built, trained and updated only using the experience accumulated by the agent.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
“…This represents how the body joints move during the action [2]. There are three convolutional neural networks: one takes DMI inputs only, one takes both DMI and MJD inputs, and the third takes MJD inputs only. Their outputs are merged to obtain the final result.…”
Section: Depth Maps and Postures Detection Using Convolutional Neural Networks
Citation type: mentioning; confidence: 99%
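The statement above outlines the paper's three-network design (DMI only, DMI plus MJD, and MJD only) whose outputs are merged. A minimal late-fusion sketch in that spirit follows; the layer sizes, the 27-class output (UTD-MHAD's action count, used purely as an example), and the probability-averaging fusion rule are assumptions, since the excerpt does not specify the actual architecture or merging scheme.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 27  # UTD-MHAD has 27 action classes; used here only as an example


class StreamCNN(nn.Module):
    """Placeholder per-stream network; the paper's actual layers are not
    given in this excerpt, so a tiny conv + pooling stack stands in."""

    def __init__(self, in_channels, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


# Three streams: DMI only, DMI + MJD stacked as channels, MJD only.
dmi_net = StreamCNN(in_channels=1)
both_net = StreamCNN(in_channels=2)
mjd_net = StreamCNN(in_channels=1)


def fused_prediction(dmi, mjd):
    """Late fusion by averaging the three streams' class probabilities
    (an assumed merging rule; the excerpt only says the results are merged)."""
    probs = (
        dmi_net(dmi).softmax(dim=1)
        + both_net(torch.cat([dmi, mjd], dim=1)).softmax(dim=1)
        + mjd_net(mjd).softmax(dim=1)
    ) / 3.0
    return probs.argmax(dim=1)


# Example with dummy single-channel 64x64 descriptor images, batch of 4.
dmi = torch.rand(4, 1, 64, 64)
mjd = torch.rand(4, 1, 64, 64)
print(fused_prediction(dmi, mjd))   # tensor of 4 predicted class indices
```

Averaging softmax scores is only one way to merge the streams; the paper may instead fuse features or learn the combination weights.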
“…In 1991 it was modified and used for medical image processing and automatic detection of breast cancer in mammograms. Another convolution-based design, proposed in 1988, was applied to decompose 1D electromyography convolved signals via deconvolution [2]. Another method for this is studying skeleton joints, called the temporal pyramid, on RGB-D datasets.…”
Section: Introduction
Citation type: mentioning; confidence: 99%