2013 12th International Conference on Document Analysis and Recognition
DOI: 10.1109/icdar.2013.204
Using Confusion Reject to Improve (User and) System (Cross) Learning of Gesture Commands

Abstract: This paper presents a new method to help users define personalized gesture commands (on pen-based devices) that maximize the classifier's recognition performance. The use of gesture commands gives rise to a cross-learning situation: the user has to learn and memorize the command gestures, and the classifier has to learn and recognize the drawn gestures. The classification task associated with customized gesture commands is difficult because the classifier only has very few samples per class to sta…

Cited by 3 publications (3 citation statements). References 11 publications.
“…It allows us to flag gestures as confusing when confidence is below a defined confidence threshold (0.09 in our case). The choice of the threshold is explained in our paper [17].…”
Section: Conflict Detection With Recognition System
confidence: 99%
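The citation statement above describes a confusion-reject rule: a gesture is flagged as confusing whenever the classifier's confidence falls below a fixed threshold (0.09 in the cited work). A minimal sketch of such a rule is shown below; the function and class names are illustrative assumptions, not code from the original paper.

```python
def flag_confusing(scores, threshold=0.09):
    """Return the predicted class label, or None if the gesture is rejected.

    `scores` maps class labels to the classifier's confidence values.
    A gesture is rejected (flagged as confusing) when even the best
    class score falls below `threshold`.
    """
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None  # reject: gesture is too ambiguous to trust
    return label

# A low-confidence gesture is rejected; a clear one keeps its label.
print(flag_confusing({"circle": 0.05, "square": 0.04}))  # None
print(flag_confusing({"circle": 0.85, "square": 0.15}))  # circle
```

Rejected gestures can then trigger the feedback loop the paper advocates, prompting the user to redraw or redefine the ambiguous command.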
“…On the other hand, enabling users to choose their own gestures may lead to commands with similar or strange gestures that are hard for the classifier to recognize. We must help the user avoid similar gestures during this definition step by providing dynamic feedback on potential confusion risks [5]. Moreover, we can't expect users to draw more than a few gesture samples per class, so the classifier must be able to learn from very little data.…”
Section: Introduction
confidence: 99%
“…These devices rarely use a mouse or keyboard and employ natural interfaces like gesture and speech instead. Natural interfaces using depth sensors like the Kinect are becoming popular for gesture recognition [1,2,3,4], sign language recognition [5], signature-based authentication systems [6], and computer games. Wachs et al. [7] also discuss vision-based hand-gesture applications, which include medical systems and assistive technologies, crisis management and disaster relief, entertainment, and human-robot interaction.…”
Section: Introduction
confidence: 99%