2017 | DOI: 10.3389/frobt.2017.00008

A Human-Centered Approach to One-Shot Gesture Learning

Abstract: This article discusses the problem of one-shot gesture recognition using a human-centered approach and its potential application to fields such as human-robot interaction, where the user's intentions are indicated through spontaneous gesturing (one shot). Casual users have limited time to learn the gesture interface, which makes one-shot recognition an attractive alternative to interface customization. With the aim of natural interaction with machines, a framework must be developed to include the ability of human…

Cited by 16 publications (9 citation statements) | References 57 publications

Citation statements, ordered by relevance:
“…One possibility could be to incorporate on-the-fly gesture learning in the PIL framework. It is a topic of active research in gesture recognition (Fanello et al., 2013; Cabrera and Wachs, 2017; Cabrera et al., 2017). Once a gesture is recorded, the user can teach its association with a robot action.…”

Section: Discussion (mentioning) | confidence: 99%
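
The record-then-associate step quoted above can be pictured with a minimal sketch: one stored template per gesture, bound to a robot action, and matched by a plain Euclidean distance. The class name, matching rule, and threshold are illustrative assumptions, not the PIL framework's actual interface.

```python
from typing import Callable, Optional

class GestureTeacher:
    """Hypothetical one-shot gesture-to-action store (sketch only)."""

    def __init__(self) -> None:
        # Each entry pairs one recorded gesture template with an action.
        self._pairs: list[tuple[list[float], Callable[[], None]]] = []

    def teach(self, template: list[float], action: Callable[[], None]) -> None:
        """Bind a single recorded demonstration to a robot action."""
        self._pairs.append((template, action))

    def recognize(self, observed: list[float],
                  threshold: float = 1.0) -> Optional[Callable[[], None]]:
        """Return the action of the closest stored template, if any
        falls under the (assumed) distance threshold."""
        best, best_d = None, threshold
        for template, action in self._pairs:
            d = sum((a - b) ** 2 for a, b in zip(template, observed)) ** 0.5
            if d < best_d:
                best, best_d = action, d
        return best

teacher = GestureTeacher()
teacher.teach([0.0, 0.5, 1.0], lambda: print("wave back"))
matched = teacher.recognize([0.1, 0.5, 0.9])
if matched:
    matched()
```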
“…Most of the early methods required assistance in observing the demonstrations. This assistance was provided by motion capture systems (Ijspeert et al., 2001; Ijspeert et al., 2002; Field et al., 2009), visual detectors (Ramirez-Amaro et al., 2017; Sieb et al., 2020; Zhang and Nikolaidis, 2019), skeleton tracking (Cabrera and Wachs, 2017), trackers/markers (Dillmann, 2004; Dragan and Srinivasa, 2012; Gupta et al., 2016), or a combination of the above (Kuniyoshi et al., 1994). However, the entities to be tracked or detected must be known beforehand, and only demonstrations using these entities can be learned.…”

Section: Related Work (mentioning) | confidence: 99%
“…In order to distinguish between important movements and noise, and to also correct for differences in location due to the position or height of the participant, a pre-processing step is performed to identify salient features of the gestures, also known as primitives (Ramey et al., 2012). We based our approach on the work by Cabrera and Wachs (2017) by using the inflection points of the hands’ motion trajectories, combined with peaks in the hands’ position (Fig. 6 shows a time series trajectory where inflection points and peaks are marked).…”

Section: Technical Implementation (mentioning) | confidence: 99%
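
The primitive-extraction step quoted above, inflection points of the motion trajectory combined with peaks in position, can be sketched on a 1-D trajectory using sign changes of the first and second differences. This discretization is an assumption for illustration, not the cited authors' implementation.

```python
import numpy as np

def salient_points(traj: np.ndarray) -> np.ndarray:
    """Indices of candidate primitives in a 1-D position trajectory:
    local peaks plus inflection points. A sketch in the spirit of the
    pre-processing described above, not the authors' exact code."""
    d1 = np.diff(traj)   # discrete velocity
    d2 = np.diff(d1)     # discrete acceleration
    # Peak: velocity crosses from positive to non-positive.
    peaks = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0] + 1
    # Inflection: acceleration changes sign (curvature flips).
    inflections = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0] + 1
    return np.unique(np.concatenate([peaks, inflections]))

# e.g. one period of a sine wave yields a peak near index 25
# and an inflection near index 50
print(salient_points(np.sin(np.linspace(0, 2 * np.pi, 100))))
```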
“…For example, we calculate whether the hand was in front of or behind, and above or below, the shoulder. Cabrera and Wachs (2017) call the resulting sequence of inflection points and relative locations the gist of the gesture. One limitation that remains is that the same gesture could be performed at different positions relative to the body.…”

Section: Technical Implementation (mentioning) | confidence: 99%
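
A hedged sketch of the relative-location encoding described in this statement, paired with the salient points from the previous sketch to form a gist-like sequence. The axis convention (+y up, +z away from the body) and the labels are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def relative_location(hand: np.ndarray, shoulder: np.ndarray) -> str:
    """Coarse hand location relative to the shoulder. Assumes a camera
    frame with +y up and +z pointing away from the body; both the axis
    convention and the labels are illustrative."""
    vertical = "above" if hand[1] > shoulder[1] else "below"
    depth = "front" if hand[2] > shoulder[2] else "behind"
    return f"{vertical}/{depth}"

def gesture_gist(salient_idx, hand_traj, shoulder_traj):
    """Pair each salient point with its relative location, giving a
    sequence in the spirit of the 'gist of the gesture' quoted above."""
    return [(int(i), relative_location(hand_traj[i], shoulder_traj[i]))
            for i in salient_idx]
```

Encoding locations relative to the shoulder corrects for participant position and height, though, as the statement notes, the same gesture performed at different positions relative to the body remains ambiguous under this scheme.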