2018 IEEE 4th International Conference on Computer and Communications (ICCC)
DOI: 10.1109/compcomm.2018.8780777
A Vision-Based Human Action Recognition System for Companion Robots and Human Interaction

Cited by 13 publications (3 citation statements)
References 6 publications
“…Deep learning (CNNs and RNNs) addresses the critical task of human action recognition in computer vision, enhancing accuracy and optimizing performance. [9, 61–72] Attention-based LSTM for feature distinctions, incorporating a spatiotemporal saliency-based multi-stream network. [73] A hybrid deep learning model for human action recognition.…”
Section: Methods References
Mentioning confidence: 99%
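The attention-based LSTM referred to in this citation statement is only named, not specified. As a minimal sketch of the general idea (temporal attention pooling over per-frame CNN features feeding an LSTM classifier), the following illustration may help; the class name, dimensions, and toy input are assumptions for illustration, not the cited authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalAttentionLSTM(nn.Module):
    """Sketch of an attention-based LSTM action classifier.

    Per-frame feature vectors (e.g. from a CNN backbone) pass through an
    LSTM; a learned attention weight per time step pools the hidden states
    before classification. All dimensions are illustrative.
    """

    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)       # one score per frame
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frame_feats):                 # (batch, time, feat_dim)
        hidden, _ = self.lstm(frame_feats)          # (batch, time, hidden_dim)
        weights = torch.softmax(self.attn(hidden), dim=1)  # (batch, time, 1)
        pooled = (weights * hidden).sum(dim=1)      # attention-weighted pooling
        return self.classifier(pooled)              # (batch, num_classes)

# Toy usage: 4 clips of 16 frames with 512-dim per-frame features.
logits = TemporalAttentionLSTM()(torch.randn(4, 16, 512))
print(logits.shape)  # torch.Size([4, 10])
```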
“…The fusion of modalities like RGB and depth information further refines recognition. Recent strides in attention mechanisms and metaheuristic algorithms have optimized network architectures, emphasizing relevant regions for improved performance [9, 61–72].…”
Section: Human Action Recognition
Mentioning confidence: 99%
“…More specifically, in [52], [53], it was shown that higher recognition accuracies were reached when fusing RGB video and inertial sensing. Similarly, in [20], [54]–[58], higher recognition accuracies were obtained when fusing RGB video and depth sensing. In [59], [60], RGB, depth and inertial signals were simultaneously used to achieve higher recognition accuracies.…”
Section: Introduction
Mentioning confidence: 93%
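The fusion results surveyed in this statement combine RGB video with depth and inertial sensing; a common, simple realization is score-level (late) fusion of independently trained per-modality classifiers. The sketch below assumes each modality already yields a class-probability vector; the function name, weights, and class count are placeholders, not values from the cited works.

```python
import numpy as np

def late_fusion(prob_rgb, prob_depth, prob_inertial, weights=(0.5, 0.3, 0.2)):
    """Score-level fusion: weighted average of per-modality class probabilities.

    Each input is a (num_classes,) probability vector from an independently
    trained classifier; the weights are illustrative and would typically be
    tuned on a validation split.
    """
    fused = (weights[0] * prob_rgb
             + weights[1] * prob_depth
             + weights[2] * prob_inertial)
    return int(np.argmax(fused)), fused

# Toy usage with three hypothetical 5-class probability vectors.
rgb = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
depth = np.array([0.2, 0.3, 0.3, 0.1, 0.1])
inertial = np.array([0.1, 0.5, 0.2, 0.1, 0.1])
label, fused = late_fusion(rgb, depth, inertial)
print(label, fused)  # predicted class index and fused probabilities
```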