2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2015.7301344

On-the-fly hand detection training with application in egocentric action recognition

Abstract: We propose a novel approach to segment hand regions in egocentric video that requires no manual labeling of training samples. The user wearing a head-mounted camera is prompted to perform a simple gesture during an initial calibration step. A combination of color and motion analysis that exploits knowledge of the expected gesture is applied on the calibration video frames to automatically label hand pixels in an unsupervised fashion. The hand pixels identified in this manner are used to train a statistical model…
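A minimal sketch of the idea, assuming OpenCV and scikit-learn: pixels that are both skin-colored and moving during the prompted calibration gesture are taken as positive hand samples, static non-skin pixels as background, so no manual labels are needed. The Farneback flow, the YCrCb skin bounds, the thresholds, and the random-forest pixel classifier are illustrative choices, not the paper's exact pipeline.

import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def label_calibration_frames(frames, flow_thresh=2.0):
    """Automatically label hand pixels in a calibration clip."""
    X, y = [], []
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow: the prompted gesture tells us where motion
        # is expected, so moving pixels are hand candidates.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        motion = np.linalg.norm(flow, axis=2) > flow_thresh
        # Coarse skin-color gate in YCrCb (illustrative bounds).
        ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135)) > 0
        pos = motion & skin    # moving and skin-colored -> hand
        neg = ~motion & ~skin  # static and non-skin -> background
        feats = ycrcb.reshape(-1, 3).astype(np.float32)
        X.append(feats[pos.ravel()]); y.append(np.ones(pos.sum()))
        X.append(feats[neg.ravel()]); y.append(np.zeros(neg.sum()))
        prev_gray = gray
    return np.concatenate(X), np.concatenate(y)

# Train a per-pixel color classifier from the automatic labels:
# X, y = label_calibration_frames(calibration_frames)
# clf = RandomForestClassifier(n_estimators=50).fit(X, y)

The key point, per the abstract, is that knowledge of the expected gesture lets the color and motion cues substitute for manual annotation.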

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
2
1

Citation Types

0
16
0

Year Published

2016
2016
2023
2023

Publication Types

Select...
5
2
1

Relationship

0
8

Authors

Journals

citations
Cited by 16 publications
(17 citation statements)
references
References 52 publications
0
16
0
Order By: Relevance
“…Usually, methods for online hand segmentation made assumptions about the hand motion [55], [56], [57], [58] and/or required the user to perform a calibration with pre-defined hand movements [59]. In this way, the combination of color and motion features facilitates the detection of hand pixels, which are then used to train segmentation models online.…”
Section: Lack of Pixel-level Annotations
confidence: 99%
“…Another approach dealing with hand detection and tracking in egocentric data is [4]. Hand detection and action recognition using data from Google Glass is considered in [17]. Finally, hand pose estimation using a wearable RGB-D camera is the topic of [32], which exploits synthetic training examples and multi-class rejection-cascade classifiers.…”
Section: Related Workmentioning
confidence: 99%
“…These methods reduce the false-positive rate of hand segmentation but also need offline training, which requires manually labeled data. Kumar et al. [10] present an on-the-fly hand detection training method that is initialized by a calibration gesture performed by the user. This simple preprocessing step saves a great deal of…”
Section: Related Workmentioning
confidence: 99%
“…But the assumption fails in many situations in which the hand is not in use, such as before or after the human-computer interaction. Subsequently, cascade detection methods have been put forward to remove the assumption by checking for hand presence before performing pixel-by-pixel classification [8][9][10]. However, these approaches rely on the existence of a large training set containing a broad variety of data collected from multiple users under diverse illumination conditions.…”
Section: Introduction
confidence: 99%
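The cascade idea in the last statement can be sketched as a two-stage pipeline: a cheap frame-level presence check gates the expensive pixel-by-pixel classifier, so frames without hands are rejected early. The histogram descriptor and the two injected classifiers below are hypothetical stand-ins, not the method of [8], [9], or [10].

import numpy as np

class HandCascade:
    def __init__(self, presence_clf, pixel_clf, presence_thresh=0.5):
        self.presence_clf = presence_clf  # frame-level classifier (assumed pre-trained)
        self.pixel_clf = pixel_clf        # per-pixel classifier (assumed pre-trained)
        self.presence_thresh = presence_thresh

    def frame_descriptor(self, frame):
        # Hypothetical global feature: a coarse 8x8x8 color histogram.
        hist, _ = np.histogramdd(frame.reshape(-1, 3),
                                 bins=(8, 8, 8), range=[(0, 256)] * 3)
        return (hist / hist.sum()).ravel()[None, :]

    def segment(self, frame):
        """Return a boolean hand mask, or None if no hand is present."""
        # Stage 1: reject frames unlikely to contain a hand at all.
        p_hand = self.presence_clf.predict_proba(
            self.frame_descriptor(frame))[0, 1]
        if p_hand < self.presence_thresh:
            return None
        # Stage 2: pixel-by-pixel classification only on accepted frames.
        h, w, _ = frame.shape
        feats = frame.reshape(-1, 3).astype(np.float32)
        return self.pixel_clf.predict(feats).reshape(h, w).astype(bool)

The early rejection is what lets such cascades avoid per-pixel work on hand-free frames; the trade-off, as the citing paper notes, is that the presence classifier itself needs a large and diverse offline training set.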