2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8594385

Towards Real-Time Physical Human-Robot Interaction Using Skeleton Information and Hand Gestures

Cited by 43 publications (22 citation statements)
References 15 publications (22 reference statements)

“…The use of multiple GPUs for the OpenPose library can enhance the temporal performance of our system. We explain our presented work in more detail in [10]. We plan to extend our work by developing a background-independent hand gesture detector by substituting backgrounds with rich-textured images.…”
Section: PHRI Experiments and Results
Mentioning, confidence: 99%
“…This paper is an extension of our previous work proposed in [55], which presented a tool handover task between a robot and a human coworker through static hand gestures. A convolutional neural network, inspired mainly by LeNet [47], was developed to classify four hand gestures.…”
Section: Our Contributions
Mentioning, confidence: 95%
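The excerpt above cites a LeNet-inspired convolutional network for classifying four static hand gestures. Below is a minimal sketch of such a network; the framework (PyTorch), input resolution, and layer sizes are assumptions for illustration and are not taken from the cited papers.

```python
# Minimal LeNet-style CNN sketch for four-class hand-gesture classification.
# Layer sizes and the 64x64 RGB input are illustrative assumptions.
import torch
import torch.nn as nn

class LeNetGesture(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # RGB hand crop -> 6 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120),     # 13x13 maps remain from a 64x64 input
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Forward pass on a batch of eight 64x64 hand crops.
model = LeNetGesture(num_classes=4)
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 4])
```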
“…We extend our work by training a hand gesture detector on ten gestures instead of the four presented in [55]. Moreover, the backgrounds are now replaced with random pattern/indoor-architecture images to make the detection robust and background-invariant.…”
Section: Our Contributions
Mentioning, confidence: 99%
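The background substitution described above can be sketched as a simple compositing step: paste the segmented hand onto randomly chosen pattern or indoor images so the detector cannot rely on the training background. The directory layout, mask format, and function name below are hypothetical, not the cited pipeline.

```python
# Sketch of background substitution for background-invariant gesture training.
# Assumes a binary hand mask is available; paths and mask format are hypothetical.
import glob
import random
import cv2
import numpy as np

def replace_background(hand_bgr: np.ndarray,
                       hand_mask: np.ndarray,
                       background_dir: str = "backgrounds/") -> np.ndarray:
    """Composite the masked hand onto a randomly chosen background image."""
    bg_paths = glob.glob(background_dir + "*.jpg")
    bg = cv2.imread(random.choice(bg_paths))
    bg = cv2.resize(bg, (hand_bgr.shape[1], hand_bgr.shape[0]))

    # Keep hand pixels where the mask is set; take everything else from bg.
    mask3 = np.repeat((hand_mask > 0)[:, :, None], 3, axis=2)
    return np.where(mask3, hand_bgr, bg)

# Usage: generate several background variants per training sample.
# hand, mask = load_sample(...)   # hypothetical loader
# variants = [replace_background(hand, mask) for _ in range(5)]
```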
“…Unlike in our work, in these two works the adaptation was done at the low-level control of the robot by a hybrid force/impedance controller, whereas we did it at the symbolic level of the task. A scenario where a human and a robot physically interact through the handover of an object was discussed by Mazhar et al. [18]. Force signals were used to identify different phases of the sequence of actions.…”
Section: Related Work
Mentioning, confidence: 99%
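The phase identification from force signals mentioned above can be illustrated with a simple hysteresis-threshold sketch over the measured interaction-force norm; the thresholds, phase names, and sampling below are assumptions for illustration, not the method of [18].

```python
# Sketch: segmenting a handover into phases from the wrist force norm.
# Threshold values (in newtons) and phase names are illustrative assumptions.
from enum import Enum

class Phase(Enum):
    APPROACH = 0   # no contact with the human yet
    CONTACT = 1    # human is pulling/pushing on the object
    RELEASED = 2   # force dropped back; the gripper may open

def classify_phase(force_norm: float,
                   prev: Phase,
                   contact_thresh: float = 5.0,
                   release_thresh: float = 1.0) -> Phase:
    """Map the current interaction-force norm to a handover phase."""
    if prev is Phase.APPROACH and force_norm > contact_thresh:
        return Phase.CONTACT
    if prev is Phase.CONTACT and force_norm < release_thresh:
        return Phase.RELEASED
    return prev

# Example: a short stream of force norms from a wrist force/torque sensor.
phase = Phase.APPROACH
for f in [0.2, 0.4, 6.3, 7.1, 4.0, 0.5]:
    phase = classify_phase(f, phase)
print(phase)  # Phase.RELEASED
```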