2016
DOI: 10.1007/978-3-319-48881-3_29
Human Joint Angle Estimation and Gesture Recognition for Assistive Robotic Vision

Abstract: We explore new directions for automatic human gesture recognition and human joint angle estimation as applied for human-robot interaction in the context of an actual challenging task of assistive living for real-life elderly subjects. Our contributions include state-of-the-art approaches for both low- and mid-level vision, as well as for higher level action and gesture recognition. The first direction investigates a deep learning based framework for the challenging task of human joint angle estimation on noisy …

Cited by 12 publications (8 citation statements) | References 25 publications
“…In references [ 28 , 29 ], features used are angles between joints so that features are scaled. In reference [ 29 ], features mainly used are hip and knee angles.…”
Section: Related Work
confidence: 99%
“…The algorithm uses information (time of flight, thermal, and 2D) from three cameras to detect and track the trunk, arms, and head; then, the position is classified by the body trunk lean direction (upright and forward) and the orientation patterns, like towards, neutral, or away from another person. Another example is presented by Guler et al in [ 64 , 65 ]; authors used an RGB-D camera and laser sensor to recognize the human gestures when a robot interacts with a person. The algorithm extracts the skeleton and estimates the joint angles using geometry.…”
Section: Algorithms Used for the Body
confidence: 99%
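The statement above notes that the cited approach extracts a skeleton and estimates joint angles geometrically. A minimal sketch of that computation, the angle at a joint formed by two limb segments, is shown below; the specific joint names and coordinates are hypothetical illustrations, not values from the cited papers:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, between segments b->a and b->c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical hip, knee, ankle positions (metres) -> knee flexion angle:
hip, knee, ankle = [0.0, 1.0, 0.0], [0.0, 0.5, 0.05], [0.0, 0.0, 0.0]
print(round(joint_angle(hip, knee, ankle), 1))
```

Because such angles depend only on the relative geometry of three keypoints, they are invariant to the subject's overall scale, which is the scaling property the first citation statement refers to.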
“…Two-stage approaches, e.g. [9,10,12], firstly detect joint positions in 2D and subsequently lift joints into 3D by relying on prior knowledge about the 3D human pose. The advantage of such approaches is that they can exploit large datasets constructed for the prediction of 2D landmarks; the disadvantage is that errors in the 2D stage can propagate to the 3D predictions and can often not be recovered from.…”
Section: Introduction
confidence: 99%
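The two-stage pipeline described above, and the way 2D-stage errors propagate into the lifted 3D output, can be illustrated with a toy linear lifting model on synthetic data; this is a sketch of the general idea, not any cited method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N poses of J joints in 3D; the "camera" simply drops depth.
N, J = 500, 14
poses_3d = rng.normal(size=(N, J, 3))
poses_2d = poses_3d[:, :, :2]          # stage 1 output (noise-free here)

# Stage 2: lift 2D -> 3D with a least-squares linear map learned from data.
X = poses_2d.reshape(N, -1)            # (N, 2J) inputs
Y = poses_3d.reshape(N, -1)            # (N, 3J) targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Error propagation: perturbing the 2D stage degrades the 3D predictions.
noise = rng.normal(scale=0.1, size=X.shape)
err_clean = np.abs(X @ W - Y).mean()
err_noisy = np.abs((X + noise) @ W - Y).mean()
print(err_noisy > err_clean)
```

The residual `err_clean` stays nonzero because depth is not recoverable from this toy projection without a pose prior, and adding 2D noise only increases the 3D error, mirroring the disadvantage the citation statement describes.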