2020
DOI: 10.1007/s11071-020-05468-y
Human action recognition using Lie Group features and convolutional neural networks

Cited by 16 publications (9 citation statements) · References 33 publications
“…In the cross-subject experiment, the 40 subjects were divided into training and test sets: subjects numbered 1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17, 18, 19, 25, 27, 28, 31, 34, 35, and 38 formed the training set, and the rest formed the test set. In the cross-view experiment, the first camera was selected as the test set, and the rest were the training set.…”
Section: NTU RGB-D Dataset
confidence: 99%
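The quoted evaluation protocol can be sketched as code. This is a minimal, hypothetical illustration (function names and the `(id, data)` sample format are assumptions, not the cited paper's implementation); the training-subject IDs come directly from the quote above.

```python
# NTU RGB+D cross-subject split: the 20 subject IDs below (from the
# quoted protocol) form the training set; the other 20 of the 40
# subjects form the test set.
TRAIN_SUBJECTS = {1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17, 18, 19,
                  25, 27, 28, 31, 34, 35, 38}

def split_cross_subject(samples):
    """Partition (subject_id, data) pairs by subject ID."""
    train, test = [], []
    for subject_id, data in samples:
        (train if subject_id in TRAIN_SUBJECTS else test).append(data)
    return train, test

def split_cross_view(samples):
    """Cross-view protocol: camera 1 is the test set, the remaining
    cameras are the training set. `samples` holds (camera_id, data) pairs."""
    train = [d for cam, d in samples if cam != 1]
    test = [d for cam, d in samples if cam == 1]
    return train, test
```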
“…From Table 1, it is clear that literature [17] (based on variable-parameter related skeletons and dynamic skeletons built on 3D geometric relationships) does not take deep spatio-temporal information into account, resulting in low accuracy. Literature [18] mapped joints into 3D space and extracted depth features through a 3D CNN, effectively improving accuracy to 67.96% and 73.69%.…”
Section: NTU RGB-D Dataset
confidence: 99%
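The "joints mapped into 3D space, features extracted by a 3D CNN" idea in the quote can be illustrated with a toy sketch. This is an assumption-laden stand-in (grid size, normalization to [0, 1), and a single naive convolution layer are all hypothetical), not the cited method:

```python
import numpy as np

def voxelize_joints(joints, grid=8):
    """Map 3D joint coordinates (assumed normalized to [0, 1)) onto a
    grid x grid x grid occupancy volume."""
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = np.clip((joints * grid).astype(int), 0, grid - 1)
    for x, y, z in idx:
        vol[x, y, z] = 1.0
    return vol

def conv3d_valid(vol, kernel):
    """Naive 'valid'-mode 3D convolution: a stand-in for one 3D CNN
    layer sliding a cubic kernel over the voxel grid."""
    k = kernel.shape[0]
    out = np.zeros(tuple(s - k + 1 for s in vol.shape), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(vol[i:i+k, j:j+k, l:l+k] * kernel)
    return out
```

In a real pipeline the convolution would be a learned layer in a deep-learning framework; the point here is only the joints-to-volume-to-feature flow.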
“…In addition, we apply an attention mechanism in the decoder to learn good alignments between input and output sequences, guiding the decoder to focus on the corresponding parts of the input feature sequences when generating target Laban symbols. We also utilize the Lie group representation proposed in [14,15] and widely used in [13,16,17,18] as the input feature for the proposed seq2seq model. In general, by training the encoder and decoder networks together in the proposed seq2seq model, the Labanotation score can be Fig.…”
Section: Introduction
confidence: 99%
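The Lie group skeleton representation mentioned in the quote models the relative geometry between pairs of body parts as rotations in SO(3), which are then mapped to the Lie algebra so(3) to obtain flat feature vectors. A minimal sketch of that building block (my own illustration, not the cited implementation) using Rodrigues' formula and the SO(3) log map:

```python
import numpy as np

def rotation_between(u, v):
    """Rotation matrix in SO(3) taking unit vector u to unit vector v
    (Rodrigues' formula). Each pair of bone vectors contributes one
    such relative rotation to the skeleton representation."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s, c = np.linalg.norm(axis), np.dot(u, v)
    if s < 1e-12:                      # parallel bones: identity rotation
        return np.eye(3)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

def log_so3(R):
    """Log map SO(3) -> so(3): the axis-angle vector that serves as the
    actual feature, since it lives in a flat vector space."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * w
```

Stacking the so(3) vectors for all body-part pairs across frames yields the curve of Lie group features that downstream models (CNNs here, a seq2seq model in the citing work) consume.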