2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2009.5204360
Egocentric recognition of handled objects: Benchmark and analysis

Abstract: Recognizing objects being manipulated in hands can provide essential information about a person's activities and have far-reaching impacts on the application of vision in everyday life. The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions. We collect a comprehensive dataset and analyze the feasibility and challenges of the egocentric recognition of handled objects. We use a lapel-worn camera…

Cited by 69 publications (64 citation statements) · References 28 publications
“…Nowadays, the attention of the computer vision community is increasingly turning to new forms of video content, such as footage from wearable video cameras, i.e., the "egocentric" view of the world [8,9]. Some attempts to identify visual saliency, mainly on the basis of the frequency of repetition of visual objects and regions in wearable video content, have recently been made in [10]. We are specifically interested in building visual saliency maps by fusing all cues in the pixel domain for "egocentric" video content recorded with wearable cameras.…”
Section: Introduction
confidence: 99%
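The pixel-domain cue fusion mentioned in that excerpt can be illustrated with a minimal sketch: each saliency cue is assumed to be a 2D map over the frame, normalized to a common range and combined by a weighted average. The function name, the cue names, and the weights below are hypothetical illustrations, not the cited authors' actual method.

```python
import numpy as np

def fuse_saliency_maps(cue_maps, weights=None):
    """Fuse per-pixel saliency cue maps into a single map.

    Each cue is min-max normalized to [0, 1], then the cues are
    combined by a weighted average (a common, simple fusion scheme;
    weights here are assumptions, not values from the cited work).
    """
    if weights is None:
        weights = [1.0] * len(cue_maps)
    fused = np.zeros_like(cue_maps[0], dtype=np.float64)
    for cue, w in zip(cue_maps, weights):
        lo, hi = cue.min(), cue.max()
        # Guard against a flat cue map to avoid division by zero.
        norm = (cue - lo) / (hi - lo) if hi > lo else np.zeros_like(cue, dtype=np.float64)
        fused += w * norm
    return fused / sum(weights)

# Hypothetical usage with two made-up cue maps for one 480x640 frame:
color_contrast = np.random.rand(480, 640)   # stand-in for a color cue
motion_energy = np.random.rand(480, 640)    # stand-in for a motion cue
saliency = fuse_saliency_maps([color_contrast, motion_energy], weights=[0.6, 0.4])
```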
“…Papers reviewed using such techniques [93,111,112] emphasize how wearable cameras prevent the wearer's body from occluding what is being manipulated with the hands (see Fig. 7).…”
Section: Data Fusion
confidence: 99%
“…The first workshop, organized by Philipose, Hebert and Ren, featured topics such as object analysis [63,72], activity analysis [74,68] and scene understanding [25,16,20]. It helped to bring computer vision researchers together to develop more advanced component-level technologies and to understand the challenges of working with egocentric vision.…”
Section: A Brief History
confidence: 99%
“…To date, researchers have explored the use of egocentric vision for activity recognition [60,11,10], object recognition [63,14], summarization [31,35], temporal segmentation [68,26,61], scene understanding [65], interaction analysis [13,66], hand detection [33,32], gaze estimation [76], gaze analysis [54,80,34,10], visual saliency [78,79], social saliency [57] and motion capture [67].…”
Section: A Brief History
confidence: 99%