2016 IEEE International Conference on Signal and Image Processing (ICSIP)
DOI: 10.1109/siprocess.2016.7888345

Towards robust ego-centric hand gesture analysis for robot control

Cited by 5 publications (8 citation statements)
References 16 publications
“…Unlike pose estimation, only the fingertips of one or multiple fingers are detected. These key-points alone do not allow reconstructing the articulated hand pose, but can be used as input to HCI/HRI systems such as [105], [106], [107] (Section 5). If the objective is to estimate the key-points of a single finger, the most common solution is to regress the coordinates of these points (usually the tip and knuckle of the index finger) from a previously detected hand ROI.…”
Section: Fingertip Detection
Mentioning, confidence: 99%
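The single-finger regression approach this excerpt describes can be sketched in a few lines. The PyTorch model below, including the name FingertipRegressor, the layer sizes, and the [0, 1] coordinate normalization, is an illustrative assumption rather than the method of any cited paper: it regresses the (x, y) coordinates of the index fingertip and knuckle from a previously detected hand ROI.

```python
# Minimal sketch of single-finger keypoint regression from a hand ROI.
# All names, layer sizes, and conventions here are illustrative
# assumptions, not taken from the cited papers.
import torch
import torch.nn as nn

class FingertipRegressor(nn.Module):
    """Regress (x, y) for the index fingertip and knuckle from a hand ROI."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # 4 outputs: (x, y) for the fingertip and (x, y) for the knuckle,
        # normalized to [0, 1] relative to the ROI.
        self.head = nn.Linear(128, 4)

    def forward(self, roi: torch.Tensor) -> torch.Tensor:
        x = self.features(roi).flatten(1)
        return torch.sigmoid(self.head(x))

model = FingertipRegressor()
roi = torch.rand(1, 3, 64, 64)         # a previously detected hand crop
keypoints = model(roi).view(-1, 2, 2)  # (batch, 2 points, xy)
print(keypoints.shape)                 # torch.Size([1, 2, 2])
```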
“…However, two main differences exist between these two topics: 1) Grasp analysis looks at the hand posture during hand-object manipulations, whereas hand gesture recognition is usually performed on hands free of any manipulations; 2) grasp analysis aims at recognizing only static hand postures [110], whereas hand gesture recognition can also be generalized to dynamic gestures. According to the literature, hand gestures can be static or dynamic [70]: static hand gesture recognition (Section 4.1.3) aims at recognizing gestures that do not depend on the motion of the hands, thus relying on appearance and hand posture information only [33], [70], [105], [123], [124], [125], [126]; dynamic hand gesture recognition (Section 4.1.4) is performed using temporal information (e.g., hand tracking), in order to capture the motion cues that allow generating specific gestures [124], [126], [127], [128], [93], [94].…”
Section: Hand Grasp Analysis and Gesture Recognition
Mentioning, confidence: 99%
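The static/dynamic distinction drawn in this excerpt maps naturally onto two model shapes. The sketch below is a minimal illustration under assumed names and sizes (NUM_GESTURES, FEAT_DIM, a GRU aggregator), not the architecture of any cited work: a static gesture is classified from a single frame's appearance features, while a dynamic gesture is classified from temporal features aggregated over a frame sequence.

```python
# Sketch contrasting static vs. dynamic gesture recognition, following
# the distinction quoted above. Sizes and architectures are assumptions
# for illustration only.
import torch
import torch.nn as nn

NUM_GESTURES = 10  # hypothetical gesture vocabulary size
FEAT_DIM = 128     # hypothetical per-frame appearance feature size

# Static: classify a gesture from one frame's appearance/posture features.
static_head = nn.Linear(FEAT_DIM, NUM_GESTURES)

# Dynamic: aggregate motion cues over time, e.g. with a GRU over
# per-frame features, then classify the final hidden state.
class DynamicGestureClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.temporal = nn.GRU(FEAT_DIM, 64, batch_first=True)
        self.head = nn.Linear(64, NUM_GESTURES)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, time, FEAT_DIM)
        _, h_n = self.temporal(frame_feats)
        return self.head(h_n[-1])

frame = torch.rand(1, FEAT_DIM)
clip = torch.rand(1, 16, FEAT_DIM)             # 16-frame sequence
print(static_head(frame).shape)                # torch.Size([1, 10])
print(DynamicGestureClassifier()(clip).shape)  # torch.Size([1, 10])
```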
“…The simultaneous view of both real reality and VR through the optical see-through display preserves visual experience of the real world, even if the device malfunctions (Qian et al. 2017). Mid-air gestures, gaze and vocal commands reflect a more natural user interaction with reality than is represented by manipulation of a mouse (Hasan and Yu 2017; Muser 2015; Song et al. 2016). User's sense of presence in MR is enhanced by the HoloLens real-time processing of input (Hasan and Yu 2017).…”
Section: Human-Computer Interaction in Mixed Reality
Mentioning, confidence: 99%