2021
DOI: 10.1007/s00500-021-06149-7
Human action recognition using a hybrid deep learning heuristic

Cited by 13 publications (6 citation statements)
References 45 publications
“…To demonstrate the performance of the proposed processing units, Π ec , we use the KTH dataset (Schuldt et al, 2004). In this way, we make a coherent comparison between our approach and existing works (Baccouche et al, 2011; Ji et al, 2012; Grushin et al, 2013; Ali and Wang, 2014; Liu et al, 2014; Shu et al, 2014; Shi et al, 2015; Veeriah et al, 2015; Dash et al, 2021). This data set is composed of 6 videos involving human actions such as boxing (a), hand clapping (b), hand waving (c), jogging (e), running (d), and walking (f).…”
Section: Results
confidence: 99%
“…As a consequence, the detection and recognition of human motion in real time becomes infeasible, since the latency of the RPM system increases significantly. In the past decades, several approaches have been developed to efficiently perform human action recognition (Baccouche et al, 2011; Ji et al, 2012; Grushin et al, 2013; Ali and Wang, 2014; Liu et al, 2014; Shu et al, 2014; Shi et al, 2015; Veeriah et al, 2015; Dash et al, 2021). In general terms, these approaches aim to determine when an action is occurring and what that action is, by considering the starting and ending times of all action occurrences in the video.…”
Section: Introduction
confidence: 99%
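The excerpt above frames recognition as deciding when an action occurs (its start and end times) and what it is. A minimal, hypothetical sketch of the "when" step, assuming a per-frame action score has already been produced by some classifier; the function name, score values, and threshold are illustrative assumptions, not the paper's actual method:

```python
def localize_occurrences(frame_scores, threshold=0.5):
    """Merge consecutive above-threshold frames into (start, end) spans.

    frame_scores: per-frame confidence that the action is occurring
    (assumed to come from an upstream classifier). Returns half-open
    frame index ranges, one per detected occurrence.
    """
    spans = []
    start = None  # start index of the occurrence currently being tracked
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i                      # occurrence begins
        elif score < threshold and start is not None:
            spans.append((start, i))       # occurrence ends
            start = None
    if start is not None:                  # occurrence runs to the last frame
        spans.append((start, len(frame_scores)))
    return spans

# e.g. localize_occurrences([0.1, 0.9, 0.8, 0.2, 0.7]) -> [(1, 3), (4, 5)]
```

Each returned span would then be passed to the "what" step, i.e. classified into one of the action categories.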
“…[flattened table excerpt, reconstructed as Year | Features | Method | Dataset | Result | Ref; the first and last rows are truncated in the source]

…of 96.75% [172] (truncated leading row)

2021 | Automatically learned features | A deep bottleneck multimodal feature fusion (D-BMFF) framework that fuses three modalities (RGB, RGB-D depth, and 3D coordinate information) for activity classification | Four RGB-D datasets: UT Kinect, CAD-60, Florence 3D, and SBU Interaction | ARR of 99%, 98.50%, 98.10%, and 97.75%, respectively | [209]

2021 | Automatically learned features | Human activity recognition using ensemble learning of multiple convolutional neural network (CNN) models | (not given) | Ensemble of CNN models gives accuracy of 94% | [26]

2021 | Automatically learned features | Deep learning-based method using CNNs to automatically extract features from raw sensor data and classify six basic human activities | Diabetes dataset | Approximately 90% accuracy using CNN, random forest, and SVM | [135]

2021 | Automatically learned features | Deep architecture leveraging the feature extraction capability of CNNs and the temporal-sequence construction of recurrent neural networks to improve existing classification results | (not given) | Accuracy increased from 30% to 35% due to transfer learning | [57]

2021 | Automatically learned and handcrafted features | Framework extracting handcrafted high-level motion features and in-depth CNN features in parallel to recognize human action; SIFT is used as a handcrafted feature to encode high-level motion features from the maximum number of input video frames.…”
Section: Approaches of HAR
confidence: 99%
“…Moreover, the major limitations of the moving-joint descriptor are obstinacy and lower accuracy [39]. Similarly, a novel approach was designed in [40] that extracts high-level movement features via the scale-invariant feature transform together with deep features to classify activities. However, there were some errors in the corresponding feature points obtained by this method, because of which the accuracy gradually decreased [41].…”
Section: Literature Review
confidence: 99%