2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom)
DOI: 10.1109/coginfocom.2012.6422021

Cited by 12 publications (5 citation statements)
References 14 publications
“…More details about the action recognition step can be found in [16,58,62,97]. Similar work is also reported in [10,14,21,22,24,28,56,81,82,117,118,119].…”
Section: Automatic Gesture Recognition (supporting)
confidence: 55%
“…The research discussed in this chapter is published in the papers [1,5,6,7,22,24,28,31,33,34,35,40,41,43,46,48,49,50,51,54,55,57,58,61,63,66,68,69,70,71,73,75,77,81,82,84,85,87,88,90,92,93,96,99,101,104,111]. In 1992 he started as an Assistant Professor, and later Associate Professor of Artificial Intelligence, at Delft University of Technology (DUT) in the Knowledge Based Systems group headed by Prof. dr. H. Koppelaar.…”
Section: Results (mentioning)
confidence: 99%
“…Liu et al (2002) described a vision device that monitored the driver's face and used the yaw orientation angles to estimate the face position during driving. Toma et al (2012) developed an efficient approach for inexperienced drivers by using a finite state machine (FSM) and a rule-based system (RBS), using inputs from sensor data fusion. By evaluating the sequence of postures, we can determine whether the maneuvering of the inexperienced drivers is done correctly.…”
Section: Related Work (mentioning)
confidence: 99%
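The FSM-plus-rules idea referenced in the statement above can be illustrated with a minimal Python sketch: recognized postures are fed through a finite state machine, and any posture that breaks the expected transition order is flagged. The states, postures, and transitions here are hypothetical placeholders, not the actual model of Toma et al. (2012).

```python
# Minimal sketch: checking a posture sequence with an FSM plus a simple rule.
# All state and posture names are hypothetical illustrations.

# Expected transitions for one (hypothetical) maneuver: a lane change.
TRANSITIONS = {
    ("idle", "check_mirror"): "mirror_checked",
    ("mirror_checked", "look_over_shoulder"): "blind_spot_checked",
    ("blind_spot_checked", "hands_turning_wheel"): "steering",
    ("steering", "hands_neutral"): "done",
}

def evaluate_maneuver(postures):
    """Run recognized postures through the FSM; return (ok, last_state)."""
    state = "idle"
    for posture in postures:
        nxt = TRANSITIONS.get((state, posture))
        if nxt is None:
            # Rule: a posture that does not match the expected transition
            # marks the maneuver as incorrectly performed.
            return False, state
        state = nxt
    return state == "done", state

if __name__ == "__main__":
    good = ["check_mirror", "look_over_shoulder", "hands_turning_wheel", "hands_neutral"]
    bad = ["hands_turning_wheel", "check_mirror"]  # steered before checking mirrors
    print(evaluate_maneuver(good))  # (True, 'done')
    print(evaluate_maneuver(bad))   # (False, 'idle')
```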
“…Most traditional methods for driver upper-body posture recognition use color cameras or infrared depth cameras alone for classification of several specific driving poses and driving intention recognition. 11,12 Some studies have built a driver posture model and used it for driver monitoring, 13,14,15 but they did not use the depth information in combination with RGB information, nor did they extract the 3D coordinates of the keypoints for the model. Park et al 16,17 used the fusion of depth and RGB information for head keypoint 3D coordinate acquisition and motion trajectory recognition. However, the study was limited to the keypoints of the head and did not cover the remaining keypoints of the upper body.…”
Section: Kinect-openpose Posture Recognition Methods (mentioning)
confidence: 99%
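The depth/RGB fusion step mentioned in the statement above amounts to back-projecting a 2D keypoint (e.g. an OpenPose joint detected in the RGB image) into 3D camera coordinates using the registered depth value and the camera intrinsics. The sketch below assumes placeholder intrinsic values, not an actual Kinect calibration, and is not the cited authors' implementation.

```python
import numpy as np

# Minimal sketch: lifting a 2D keypoint to 3D with a registered depth map.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed placeholder values)
CX, CY = 319.5, 239.5   # principal point (assumed placeholder values)

def keypoint_to_3d(u, v, depth_map):
    """Back-project pixel (u, v) with its depth (in metres) to camera coordinates."""
    z = float(depth_map[int(v), int(u)])
    if z <= 0:            # missing or invalid depth reading
        return None
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

if __name__ == "__main__":
    depth = np.full((480, 640), 1.2)            # fake flat depth map at 1.2 m
    print(keypoint_to_3d(320.0, 240.0, depth))  # roughly [0.0, 0.0, 1.2]
```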