2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA) 2016
DOI: 10.1109/etfa.2016.7733719
A human-robot interaction interface for mobile and stationary robots based on real-time 3D human body and hand-finger pose estimation


Cited by 20 publications (12 citation statements)
References 8 publications
“…Such techniques have recently been outperformed by modern deep learning ones like convolutional neural networks [38]. The authors of [39] propose a human-robot interaction system for the navigation of a mobile robot using Kinect V1. The point cloud acquired from Kinect V1 is fitted to a skeleton topology with multiple nodes to extract the human operator's pose.…”
Section: Gesture Detection In Human-robot Interactionmentioning
confidence: 99%
“…In [60], human body localization is performed using laser sensors, and its sub-parts are obtained through Kinect with the OpenNI library as in [61]. In [39], the authors localize the human body, inspired by [62], by merging clusters of the point cloud obtained from the Kinect V1 after voxel filtering and ground plane removal.…”
Section: Image Acquisition and Hand Localization Modulementioning
confidence: 99%
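The pipeline summarized in the statement above (voxel filtering, ground-plane removal, then merging nearby point clusters to localize the human body) can be sketched in plain NumPy. This is an illustrative reconstruction, not the code of [39]: the function names, thresholds, and the greedy region-growing clustering strategy are all assumptions chosen for clarity.

```python
import numpy as np

def voxel_filter(points, voxel=0.05):
    """Downsample: keep the first point seen in each occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def remove_ground(points, z_thresh=0.05):
    """Drop points near the lowest height, assuming a roughly level floor."""
    ground_z = points[:, 2].min()
    return points[points[:, 2] > ground_z + z_thresh]

def euclidean_clusters(points, radius=0.15):
    """Greedy region growing: merge points closer than `radius`."""
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            dists = np.linalg.norm(points - points[i], axis=1)
            near = [j for j in unassigned if dists[j] < radius]
            for j in near:
                unassigned.remove(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(points[cluster])
    return clusters
```

With a synthetic scene containing a flat ground and two person-sized blobs, chaining `remove_ground(voxel_filter(cloud))` and then `euclidean_clusters` yields one cluster per body candidate.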
“…However, close proximity would not be sufficient in redefining the interaction of today. For example, there are nano-robots taking on certain assignments in some patient's bloodstream to repair a damaged cell back to health (Felfoul et al, 2016). In these cases, an intact state is observed rather than some proximity.…”
Section: A Forms Of Proximate Interactionmentioning
confidence: 99%
“…Perhaps the discussion would not be complete without including not only the appearance and the movement, but also the learning and creating abilities. Robots' ability to learn quicker than an average human (Rossi & Lee, in press), including gestures (Ehlers & Brama, 2016) and social cues, often by mimicking Human-Human Interaction (Wehle, Weidemann, & Boblan, 2017), needs to be accounted for, too.…”
Section: B Anthropomorphic Featuresmentioning
confidence: 99%
“…The OpenNI and NITE middleware are used to extract skeleton information of the human user. The authors of [5] and [6] also propose HRI scenarios using Kinect. Most researchers have used OpenNI or the Microsoft SDK to extract the human skeleton; these are model-based skeleton trackers with several shortcomings, including the need for an initialization pose, the inability to detect gestures the model was not trained on, and noisy detections.…”
Section: Introductionmentioning
confidence: 99%