2017
DOI: 10.1145/3152129
Egocentric Hand Detection Via Dynamic Region Growing

Abstract: Egocentric videos, which mainly record the activities carried out by the users of wearable cameras, have drawn much research attention in recent years. Because of their lengthy content, a large number of ego-related applications have been developed to abstract the captured videos. As users are accustomed to interacting with target objects using their own hands, which usually appear within their visual field during the interaction, an egocentric hand detection step is involved in tas…

Cited by 10 publications (2 citation statements); references 44 publications (137 reference statements).
“…The combination of color and motion segmentation (Horn-Schunck optical flow [60]) with region growing allowed the hand regions to be located for training a GMM-based hand segmentation model. Region growing was also used by Huang et al. [55], [56]. The authors segmented the frames into super-pixels [38] and extracted ORB descriptors [28] from each super-pixel to find correspondences between regions of consecutive frames, which reflect the motion between the two frames.…”
Section: Lack of Pixel-level Annotations
confidence: 99%
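The region-growing step referenced above can be illustrated with a minimal sketch: starting from a seed pixel, neighbours are absorbed while their intensity stays close to the seed's. This is a simplified fixed-threshold version for intuition only; the paper's "dynamic" variant adapts its criteria per region, and the `region_grow` function and its `tol` parameter here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` by adding 4-connected neighbours whose
    intensity differs from the seed value by at most `tol`.
    (Illustrative fixed-threshold sketch, not the dynamic variant.)"""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    seed_val = float(img[seed])
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - seed_val) <= tol:
                    mask[ny, nx] = True
                    q.append((ny, nx))
    return mask

# Toy grayscale frame: a bright 3x3 "hand" blob on a dark background.
frame = np.zeros((6, 6), dtype=np.uint8)
frame[1:4, 1:4] = 200
hand = region_grow(frame, (2, 2), tol=30)
print(hand.sum())  # 9: only the blob pixels are absorbed
```

In practice the seed would come from a motion or colour cue rather than being hand-picked, and the grown masks would supply the training pixels for the GMM colour model mentioned above.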
“…Usually, methods for online hand segmentation made assumptions about the hand motion [55], [56], [57], [58] and/or required the user to perform a calibration with pre-defined hand movements [59]. In this way, the combination of color and motion features facilitates the detection of hand pixels, which are then used to train segmentation models online.…”
Section: Lack of Pixel-level Annotations
confidence: 99%
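The colour-plus-motion labelling idea described above can be sketched as follows. This is a toy stand-in: a simple intensity range replaces a real skin-colour model, a frame difference replaces Horn-Schunck optical flow, and the single-Gaussian fit at the end is a simplified placeholder for the GMM training these methods perform online. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def hand_pixel_candidates(prev, curr, color_lo, color_hi, motion_thresh):
    """Label pixels that are both 'skin-coloured' (inside an intensity range,
    a stand-in for a learned colour model) and moving (absolute frame
    difference as a crude proxy for dense optical flow)."""
    motion = np.abs(curr.astype(float) - prev.astype(float)) > motion_thresh
    color = (curr >= color_lo) & (curr <= color_hi)
    return motion & color

# Toy grayscale frames: a bright patch moves one column to the right.
prev = np.zeros((5, 5), dtype=np.uint8)
curr = np.zeros((5, 5), dtype=np.uint8)
prev[2, 1] = 180
curr[2, 2] = 180

labels = hand_pixel_candidates(prev, curr, color_lo=150, color_hi=255,
                               motion_thresh=50)
# Fit a single-Gaussian colour model from the labelled pixels
# (simplified stand-in for training a GMM online).
mu = curr[labels].mean()
print(labels.sum(), mu)  # 1 180.0
```

Only the pixel that is both moving and inside the colour range survives; in a real pipeline these self-labelled pixels would bootstrap the segmentation model without any manual pixel-level annotation.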