Ninth IEEE International Symposium on Wearable Computers (ISWC'05)
DOI: 10.1109/iswc.2005.57

Wearable Hand Activity Recognition for Event Summarization

Abstract: In this paper we take a first step towards recognizing hand activity by detecting the objects being manipulated, and we use the results to build a visual summary of events. The motivation is to extract information from hand activity without requiring the wearer to act explicitly, as in gesture-based interaction. Our method uses simple image measurements within a probabilistic framework and allows real-time implementation.
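The abstract does not spell out which image measurements or which probabilistic model are used. As a minimal sketch of the general idea only — assuming color-histogram object models, histogram intersection as a cheap likelihood, and a Bayes update over object hypotheses; every function name and parameter below is illustrative, not the authors' implementation — one could write:

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Quantize an RGB patch (uint8) into a normalized joint color histogram."""
    idx = (patch.reshape(-1, 3).astype(int) * bins) // 256
    hist = np.zeros((bins, bins, bins))
    for r, g, b in idx:
        hist[r, g, b] += 1
    return hist / hist.sum()

def posterior_over_objects(patch, models, prior=None):
    """Bayes update: P(object | patch) is proportional to P(patch | object) P(object).

    `models` maps object labels to reference histograms; the likelihood is
    approximated by histogram intersection, a common low-cost similarity.
    """
    labels = list(models)
    if prior is None:
        prior = np.full(len(labels), 1.0 / len(labels))
    h = color_histogram(patch)
    likelihood = np.array([np.minimum(h, models[k]).sum() for k in labels])
    post = likelihood * prior
    return dict(zip(labels, post / post.sum()))
```

Histogram intersection is chosen here only because it is cheap enough for the real-time constraint the abstract mentions; the paper's actual measurements may differ.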

Cited by 50 publications (33 citation statements)
References 17 publications

“…There are several threads of literature to cite. First, action recognition from first-person videos is itself a relatively recent but rapidly expanding area of exploration [2,3,7,38,39,40,41,42,43,44]. As far as the authors know, all existing techniques employ uniform spatiotemporal sampling of the egocentric video.…”
Section: Adaptive Sampling for Egocentric Action Recognition
confidence: 99%
“…We employ a similar idea but tailored to the egocentric setting. We draw inspiration from previous studies that highlight the importance of hands as salient cues towards action and activity recognition [4,7,10,38,39,40,41,42,43,44], and propose to directly modulate the density of the video sampling based on the detected hand regions. As in [45], we compare our approach with several masking schemes and feature extraction methods.…”
Section: Adaptive Sampling for Egocentric Action Recognition
confidence: 99%
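The excerpt above says sampling density is modulated by detected hand regions but does not detail the mechanism. One plausible reading, sketched under the assumption that a binary per-frame hand mask is available (everything below is hypothetical, not the cited papers' method), is to bias a fixed frame budget towards hand-heavy frames:

```python
import numpy as np

def adaptive_frame_sampling(hand_masks, budget, floor=0.1, seed=0):
    """Pick `budget` frame indices, biased towards frames with more hand pixels.

    hand_masks: list of binary (H, W) arrays, one per frame.
    floor: minimum weight so frames without visible hands can still be sampled.
    """
    weights = np.array([m.mean() for m in hand_masks]) + floor
    probs = weights / weights.sum()
    rng = np.random.default_rng(seed)
    picked = rng.choice(len(hand_masks), size=budget, replace=False, p=probs)
    return np.sort(picked)
```

The `floor` term keeps the sampler from starving hand-free segments entirely, which matters when actions begin before the hands enter the field of view.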
“…The use of object recognition in egocentric vision systems dates back to the DyPERS system of Schiele et al [23]. Mayol and Murray [13] recognized manipulation activities leveraging skin color and histogram-based object classification from a shoulder-mounted camera. Ren and Philipose [18] collected a large database of egocentric videos of objects to facilitate research in egocentric object recognition.…”
Section: Related Work
confidence: 99%
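The skin-color-plus-histogram pipeline attributed to Mayol and Murray is only named in this excerpt. A rough, purely illustrative reconstruction — the HSV thresholds and the cropping heuristic are assumptions, not the published method — might be:

```python
import numpy as np

def skin_mask(hsv_frame, h_max=25, s_min=40, v_min=60):
    """Very rough skin detector in HSV space; thresholds are illustrative only."""
    h, s, v = hsv_frame[..., 0], hsv_frame[..., 1], hsv_frame[..., 2]
    return (h <= h_max) & (s >= s_min) & (v >= v_min)

def region_near_hand(frame, mask, pad=32):
    """Crop a patch around the detected skin region, where a manipulated
    object is most likely to appear."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, frame.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, frame.shape[1])
    return frame[y0:y1, x0:x1]
```

The cropped patch could then be scored against stored object histograms, for example with the intersection-based matcher sketched after the abstract above.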
“…According to [20], known for being the first public FPV dataset for object recognition, hand-detection/segmentation methods can be grouped into two families: model-driven and data-driven. The former uses a computerized model of the hands to re-create their appearance in the video [30], while the latter exploits image features to infer hand location, shape, and position [19,27,21].…”
Section: Introduction
confidence: 99%