2022
DOI: 10.48550/arxiv.2207.11365
Preprint

Egocentric scene context for human-centric environment understanding from video

Abstract: First-person video highlights a camera-wearer's activities in the context of their persistent environment. However, current video understanding approaches reason over visual features from short video clips that are detached from the underlying physical space and only capture what is directly seen. We present an approach that links egocentric video and camera pose over time by learning representations that are predictive of the camera-wearer's (potentially unseen) local surroundings to facilitate human-centric …
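
The abstract describes learning representations from egocentric video and camera pose that predict the camera-wearer's local surroundings. The sketch below is not the authors' model; it is a minimal, hypothetical PyTorch illustration of that general idea, in which clip-level visual features and camera poses are fused over time and decoded into a coarse local map. All module names, feature dimensions, the pose format, and the grid-shaped output are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's architecture): fuse per-clip visual
# features with camera poses, aggregate over time, and predict a coarse
# top-down semantic grid of the local surroundings.
import torch
import torch.nn as nn


class LocalSurroundingsPredictor(nn.Module):
    def __init__(self, visual_dim=512, pose_dim=7, hidden_dim=256,
                 grid_size=8, num_classes=5):
        super().__init__()
        # Fuse clip-level visual features with the camera pose
        # (assumed here to be xyz + quaternion, hence pose_dim=7).
        self.fuse = nn.Linear(visual_dim + pose_dim, hidden_dim)
        # Aggregate fused features over time with a GRU.
        self.temporal = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Decode the final hidden state into a grid_size x grid_size map,
        # one class distribution per cell.
        self.decode = nn.Linear(hidden_dim, grid_size * grid_size * num_classes)
        self.grid_size = grid_size
        self.num_classes = num_classes

    def forward(self, visual_feats, poses):
        # visual_feats: (B, T, visual_dim) clip features from any video backbone
        # poses:        (B, T, pose_dim) camera poses aligned with the clips
        x = torch.relu(self.fuse(torch.cat([visual_feats, poses], dim=-1)))
        _, h = self.temporal(x)               # h: (1, B, hidden_dim)
        logits = self.decode(h.squeeze(0))    # (B, grid*grid*num_classes)
        return logits.view(-1, self.num_classes, self.grid_size, self.grid_size)


if __name__ == "__main__":
    model = LocalSurroundingsPredictor()
    feats = torch.randn(2, 16, 512)   # 2 videos, 16 clips each
    poses = torch.randn(2, 16, 7)
    print(model(feats, poses).shape)  # torch.Size([2, 5, 8, 8])
```

The key design point the abstract emphasizes is that the target (the local surroundings) can be partially unseen in the input clips, so the prediction head above would be trained against environment labels rather than reconstructed from visible pixels alone.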

Cited by 0 publications
References 57 publications