2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2014.82
Action and Interaction Recognition in First-Person Videos

Cited by 40 publications (34 citation statements)
References 15 publications
“…Considering the intrinsic differences between first- and third-person videos, several methods have been proposed specifically for first-person viewpoint videos [7, 20-25]. In [7], the combination of local and global features using a multi-channel kernel is investigated.…”
Section: Related Work
confidence: 99%
“…Furthermore, [7] proposed to model temporal structure explicitly through hierarchical structure learning. Narayan et al. extend the improved trajectory approach [11] by grouping trajectories with a motion pyramidal structure [22]. Kitani et al. [21] proposed a framework for ego-centric videos that uses a stacked Dirichlet process mixture model to automatically learn a motion codebook and ego-action categories.…”
Section: Related Work
confidence: 99%
“…One of the main areas investigated so far is activity recognition [9], where the focus is assessing interactions between the wearer and the environment, with particular attention to the manipulation of objects and hand movements, as in [3] and [14].…”
Section: Previous Work
confidence: 99%
“…Application domains that employ wearable cameras (Fig. 1) include life-logging and video summarization [3-7], activity recognition [8-21], and eye-tracking and gaze detection [22-25]. Human activities can be categorized as ambulatory (e.g., walk) [8-15]; person-to-object interactions (e.g., cook) [16-19]; and person-to-person interactions (e.g., handshake) [20, 21].…”
Section: Introduction
confidence: 99%