2017
DOI: 10.1007/978-3-319-57021-1_11
One-Shot Learning Gesture Recognition from RGB-D Data Using Bag of Features

Abstract: For one-shot learning gesture recognition, two important challenges are: how to extract distinctive features and how to learn a discriminative model from only one training sample per gesture class. For feature extraction, a new spatio-temporal feature representation called 3D enhanced motion scale-invariant feature transform (3D EMoSIFT) is proposed, which fuses RGB-D data. Compared with other features, the new feature set is invariant to scale and rotation, and has a more compact and richer visual representation…
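The bag-of-features pipeline the abstract builds on can be sketched in miniature: quantize local descriptors (3D EMoSIFT in the paper; random vectors here) against a k-means codebook, represent each gesture as a normalized codeword histogram, and classify a test gesture by nearest-neighbor comparison against the single training histogram per class. This is a hedged illustration of the general technique, not the paper's implementation; all function names and parameters are assumptions.

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Toy k-means over local descriptors (stand-in for 3D EMoSIFT features)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest codeword
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep empty clusters unchanged
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bof_histogram(descriptors, centers):
    """Normalized bag-of-features histogram for one gesture sample."""
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

def classify(test_descriptors, train_hists, centers):
    """One-shot classification: nearest training histogram wins."""
    h = bof_histogram(test_descriptors, centers)
    return min(train_hists, key=lambda c: np.linalg.norm(h - train_hists[c]))
```

With one training sample per gesture class, `train_hists` maps each class name to a single histogram; recognition reduces to a histogram-distance lookup, which is why descriptor distinctiveness matters so much in the one-shot setting.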



Cited by 53 publications (66 citation statements)
References 45 publications (68 reference statements)
“…Long Short-Term Memory (LSTM) networks have proven successful for this task [22]. With the release of the ChaLearn Gesture Challenge data set [23], there have been a number of works on one-shot learning, in which a single training example is used per gesture class [24], [25], [26]. A third focus is on developing methods that work on well-established gesture sets, such as sign languages.…”
Section: Related Work
confidence: 99%
“…Most of the gesture recognition approaches in the reviewed literature focus on gesture data from optical [107,31,96] or inertial sensors [52,18,37], or on the fusion of the two [21,65,20]. This work, however, explores a unified solution to the gesture recognition problem based on both types of sensors, widening the range of compatible gesture input devices.…”
Section: Contributions
confidence: 99%
“…Most of the reviewed recognition approaches [21,54,110] are tested with more than one training sample. Some works [96,105,52] are dedicated to approaches with only one training sample. Others [20,3] report impressive accuracy with user-independent approaches, which do not require training from new users.…”
Section: Contributions
confidence: 99%