2017 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2017.21
First-Person Action Decomposition and Zero-Shot Learning

Cited by 7 publications (8 citation statements). References 39 publications.
“…Zero-shot learning models (Zhang, Li, and Rehg 2017;Jain et al 2015;Liu, Kuipers, and Savarese 2011) do not require as much supervision and learn semantic correspondences that extend beyond training classes to unseen test classes. The common approaches are to either use an attribute space or embedding space that captures the semantics of a scene and helps extend beyond the training label by exploiting the semantic correspondences across classes.…”
Section: Related Work
confidence: 99%
“…GTEA Gaze contains 10 different verbs and 38 different nouns, while GTEA Gaze+ contains 15 verbs and 27 nouns. We report results averaged over all subjects for a fair comparison with prior works (Ma, Fan, and Kitani 2016;Zhang, Li, and Rehg 2017), which use leave-one-out cross-validation. We also test our approach's generalization capability to scenes beyond egocentric videos for object detection with zero supervision.…”
Section: Experimental Evaluation Data
confidence: 99%
“…Also the recipes are not treated as a strictly ordered set since recipe steps can be done out of order. First Person Vision: Our system is created for first person (FP) videos which have become more prevalent in the computer vision community in recent years [18,16,22,13,33]. We utilize the egocentric cues proposed by [16] in our method for action proposal generation.…”
Section: Related Work
confidence: 99%