2014 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2014.247
Gesture Recognition Portfolios for Personalization

Abstract: Human gestures, like speech and handwriting, are often unique to the individual. Training a generic classifier applicable to everyone can be very difficult, and as such, it has become standard to use personalized classifiers in speech and handwriting recognition. In this paper, we address the problem of personalization in the context of gesture recognition and propose a novel and extremely efficient way of doing personalization. Unlike conventional personalization methods, which learn a single classifier…

Cited by 31 publications (26 citation statements); references 19 publications.
“…We observe that personalization using BALD outperforms personalization using RAND when the number of personalization instances is greater than 1 for the MSRC-12 dataset (left), 3 for the ChaLearn 2013 dataset (middle), and 4 for the NATOPS dataset (right). Our results also compare favorably with the personalization methods presented by Yao et al. [34], who reported their results for the MSRC-12 and ChaLearn 2013 datasets. We compare the personalization results with a baseline BNN trained with all training data pooled into one group, whose mean is depicted in the figures as a dashed black line.…”
Section: Personalization (supporting; confidence: 81%)
“…Like Yao et al. [34], we experimented with the Training and Validation data containing ∼11,000 samples. The gestures in the dataset, recorded using the Microsoft Kinect, represent common communication signals used in the Italian language (Figure 3, middle).…”
Section: Results (mentioning; confidence: 99%)