2020
DOI: 10.1155/2020/4358728
Using a Multilearner to Fuse Multimodal Features for Human Action Recognition

Abstract: The representation and selection of action features directly affect the performance of human action recognition methods. A single feature is often affected by human appearance, the environment, camera settings, and other factors. To address the problem that existing multimodal feature fusion methods cannot effectively measure the contribution of different features, this paper proposes a human action recognition method based on RGB-D image features, which makes full use of the multimodal information provide…

Cited by 6 publications (3 citation statements) · References 36 publications
“…Similarly, in Wu et al (2022a), the authors proposed a method for the defect identification of foundation piles under layered soil conditions. In Tang et al (2020), a human action recognition scheme was proposed that introduces the RGB-D image feature approach, a current research hotspot, to effectively resist the influence of external factors and improve the generalization ability of the classifier. The proposed scheme achieved excellent recognition results on the public CAD60 and G3D datasets, utilizing three different modalities for human action feature extraction: the RGB modal information, based on the histogram of oriented gradients (RGB-HOG); the depth modal information, based on space-time interest points (D-STIP); and the skeleton modal information, based on the joints' relative position feature (S-JRPF).…”
Section: Identification Methods
confidence: 99%
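The S-JRPF modality quoted above encodes each skeleton joint's position relative to a reference joint. A minimal sketch of that idea in NumPy (the 4-joint layout, the choice of joint 0 as reference, and the function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def joint_relative_position_features(skeleton, ref_joint=0):
    """Relative-position feature for one skeleton frame.

    skeleton: (J, 3) array of 3-D joint coordinates.
    Returns the flattened offsets of every joint from the
    reference joint (hypothetical choice: joint 0, e.g. the torso).
    """
    ref = skeleton[ref_joint]
    offsets = skeleton - ref           # (J, 3) positions relative to the reference
    return offsets.flatten()           # (J * 3,) feature vector

# Toy 4-joint skeleton frame
frame = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.5, 0.0],
                  [-0.1, 0.5, 0.0],
                  [0.0, 1.0, 0.1]])
feat = joint_relative_position_features(frame)
print(feat.shape)  # (12,)
```

Subtracting a reference joint makes the feature invariant to where the person stands in the scene, which is one reason skeleton-relative features generalize across camera placements.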
“…A human action recognition method based on RGB-D image features is proposed. The multimodal information provided by RGB-D sensors is fully utilized to extract valid human action features [27]. A multimodal and multi-level feature extraction method is proposed.…”
Section: Related Work
confidence: 99%
“…It takes action videos as input, then builds and trains a machine learning model, and finally classifies the results. Human behavior recognition is also such a process [39]. Therefore, the proposed method can be used in classification problems that take action videos as input, especially human behavior recognition.…”
Section: Practical Application
confidence: 99%
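The pipeline described in this statement (action videos in, trained model, class labels out) rests on combining the per-modality features before classification, and the paper's stated goal is to weight each modality's contribution. A hedged sketch of weighted feature fusion, the general idea behind contribution-aware fusion (the modality names, weights, and function are hypothetical illustrations, not the paper's multilearner):

```python
import numpy as np

def weighted_fusion(features, weights):
    """Concatenate per-modality feature vectors scaled by contribution weights.

    features: dict mapping modality name -> 1-D feature vector
    weights:  dict mapping modality name -> scalar contribution weight
    Modalities are processed in sorted-name order for a stable layout.
    """
    parts = [weights[m] * np.asarray(v, dtype=float)
             for m, v in sorted(features.items())]
    return np.concatenate(parts)

# Toy per-modality features (dimensions chosen arbitrarily for illustration)
feats = {"rgb_hog": np.array([0.2, 0.8]),
         "d_stip": np.array([0.5, 0.5, 0.1]),
         "s_jrpf": np.array([1.0])}
w = {"rgb_hog": 0.5, "d_stip": 0.3, "s_jrpf": 0.2}

fused = weighted_fusion(feats, w)
print(fused.shape)  # (6,)
```

In practice the weights would be learned (here that is what a multilearner could estimate) rather than fixed by hand, and the fused vector would feed a downstream classifier.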