2018
DOI: 10.1016/j.compind.2018.03.015

Automated multi-feature human interaction recognition in complex environment

Cited by 17 publications (5 citation statements)
References 42 publications
“…In this system, some confusion was observed due to the similarities in angles and positions of various actions. Furthermore, Bibi et al [10] proposed an HIR system with local binary patterns using multi-view cameras. A high confusion rate was observed in similar interactions.…”
Section: HHI Recognition Systems
confidence: 99%
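The statement above attributes a local binary pattern (LBP) feature to Bibi et al. [10]. A minimal sketch of the basic 8-neighbour LBP descriptor follows; the 3x3 neighbourhood, NumPy implementation, and histogram feature are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal 8-neighbour local binary pattern (LBP) sketch (assumed setup,
# not the exact configuration used by Bibi et al. [10]).
import numpy as np

def lbp_8_neighbour(gray):
    """Compute the 3x3 LBP code for each interior pixel of a grayscale image."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # Neighbour offsets in clockwise order; each comparison contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    return codes

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes, a typical frame-level texture feature."""
    codes = lbp_8_neighbour(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```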
“…In skeleton point modeling, firstly, the contours of the human silhouette are calculated to detect the outer pixels of the human shape [30]. Secondly, the torso is traced by approximating the center point of the calculated human contours [31], which is depicted in Eq.…”
Section: Skeleton Point Modeling
confidence: 99%
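A minimal sketch of the two steps described in the statement above: extract the outer silhouette contour, then approximate the torso as the centroid of that contour. The use of OpenCV, a binary silhouette mask as input, and the largest-contour assumption are illustrative choices, not the authors' exact pipeline.

```python
# Silhouette contour + torso-as-centroid sketch (assumed setup).
import cv2
import numpy as np

def torso_from_silhouette(mask):
    """mask: binary (0/255) human-silhouette image; returns (contour, (cx, cy))."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None, None
    # Assume the largest external contour is the human silhouette.
    contour = max(contours, key=cv2.contourArea)
    # The centroid from spatial moments approximates the torso point.
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return contour, None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return contour, (cx, cy)
```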
“…Lee et al. used a stacked autoencoder network for speech feature coding, compressing the data to a preset length with minimal reconstruction error [10]. Bibi et al. have explored various deep learning (DL) frameworks for speech emotion tasks, and their experiments demonstrated that feedforward and recurrent neural network (RNN) structures and their variants can assist speech recognition, especially emotion recognition [11]. Fornaser et al. put forward a method for identifying isolated sound events using a deep belief network (DBN).…”
Section: MNFR-Related Research
confidence: 99%
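The statement above describes compressing speech features to a preset code length by minimizing reconstruction error. A minimal sketch of a stacked autoencoder in that spirit follows; the layer sizes, PyTorch implementation, and 39-dimensional (MFCC-like) input are assumptions, not the configuration of Lee et al. [10].

```python
# Stacked autoencoder sketch: compress a speech feature vector to a preset
# bottleneck length by minimizing reconstruction error (assumed setup).
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    def __init__(self, in_dim=39, code_dim=16):
        super().__init__()
        # Two stacked encoding layers down to the preset code length.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, code_dim),
        )
        # Mirrored decoder reconstructs the original feature vector.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Training loop minimizing mean-squared reconstruction error on feature batches.
model = StackedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
features = torch.randn(128, 39)  # placeholder batch of speech feature vectors
for _ in range(10):
    recon, _ = model(features)
    loss = loss_fn(recon, features)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```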