2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.575

Synthesized Classifiers for Zero-Shot Learning

Abstract: Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them with the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this e…
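The synthesis idea sketched in the abstract — building classifiers for unseen classes from seen-class classifiers via similarity in semantic space — can be illustrated with a toy example. This is a minimal sketch, not the authors' implementation: the attribute vectors, the stand-in seen-class classifiers, and the softmax-over-negative-squared-distance weighting are all assumptions made for illustration only.

```python
import numpy as np

# Toy semantic (attribute) vectors: three seen classes, one unseen class.
seen_attrs = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [1.0, 1.0]])
unseen_attr = np.array([0.9, 0.1])

# Stand-in linear classifiers learned for the seen classes
# (3 classes, 5-dimensional visual features; random for illustration).
rng = np.random.default_rng(0)
seen_classifiers = rng.normal(size=(3, 5))

# Similarity weights in semantic space:
# softmax over negative squared Euclidean distances (an assumed choice).
d2 = ((seen_attrs - unseen_attr) ** 2).sum(axis=1)
w = np.exp(-d2)
w /= w.sum()

# Synthesize the unseen class's classifier as a convex combination
# of the seen-class classifiers.
unseen_classifier = w @ seen_classifiers
```

Here the unseen class's attributes are closest to the first seen class, so that class receives the largest weight; the synthesized classifier therefore lies nearest to it on the "manifold" of seen-class classifiers.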

Cited by 689 publications (765 citation statements)
References 35 publications
“…Recent work [4], [33] combines the embedding-inferring procedure into a unified framework and empirically demonstrates better performance. The closest related work is [34], which goes one step further to synthesise classifiers for unseen classes. Our method is also different from DS-SJE [35], in terms of learning objective, regularisation, and the potential applications.…”
Section: Related Work
confidence: 99%
“…Many recent approaches adopt such an embedding manner and achieve promising results [13,4,33,15,7,19,39,8,23]. Besides, similarity-based frameworks also adopt the embedding approach [24,40,41,34,8,25]. But the semantic space aims to associate unseen to seen classes.…”
Section: Related Work
confidence: 99%
“…the trained model on seen classes is also effective on unseen classes; 2) visual-related, the gap between the semantic and visual spaces should be small enough to train a stable model. According to these requirements, learning visual attributes has gained the most popularity [21,29,38,27,14,16,8]. However, attribute annotations are very expensive, especially for image-level tasks.…”
Section: Related Work
confidence: 99%
“…[35,4] propose bilinear joint embeddings to mitigate the distribution difference between visual and semantic spaces. In [5], classifiers of unseen classes are directly estimated by aligning the manifolds of seen classes.
Section: Related Work
confidence: 99%