2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01052

F-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning

Abstract: When labeled training data is scarce, a promising data augmentation approach is to generate visual features of unknown classes using their attributes. To learn the class-conditional distribution of CNN features, these models rely on pairs of image features and class attributes. Hence, they cannot make use of the abundance of unlabeled data samples. In this paper, we tackle any-shot learning problems, i.e. zero-shot and few-shot, in a unified feature generating framework that operates in both inductive and transductive learning settings.
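The abstract describes learning a class-conditional generator of CNN features. Below is a minimal PyTorch sketch of such a conditional VAE-GAN, assuming 2048-d CNN features and per-class attribute vectors; all module names, layer sizes, and dimensions are illustrative assumptions, not the authors' released architecture.

```python
# Minimal sketch of a conditional VAE-GAN feature generator: it learns to
# synthesize CNN features from class attributes, so features for classes
# with no labeled images can be generated later. Dimensions and names are
# assumptions for illustration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image feature + class attribute to a latent Gaussian."""
    def __init__(self, feat_dim=2048, att_dim=85, z_dim=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(feat_dim + att_dim, 1024), nn.ReLU())
        self.mu = nn.Linear(1024, z_dim)
        self.logvar = nn.Linear(1024, z_dim)

    def forward(self, x, a):
        h = self.body(torch.cat([x, a], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """Decodes a latent code + class attribute into a synthetic CNN feature."""
    def __init__(self, feat_dim=2048, att_dim=85, z_dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(z_dim + att_dim, 1024), nn.ReLU(),
            nn.Linear(1024, feat_dim), nn.ReLU())  # non-negative, like post-ReLU CNN features

    def forward(self, z, a):
        return self.body(torch.cat([z, a], dim=1))

class Discriminator(nn.Module):
    """Scores a (feature, attribute) pair as real or generated."""
    def __init__(self, feat_dim=2048, att_dim=85):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(feat_dim + att_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1))

    def forward(self, x, a):
        return self.body(torch.cat([x, a], dim=1))
```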

Cited by 462 publications (389 citation statements). References 39 publications.
“…Recently, a popular approach to zero-shot classification is generating synthesized features for unseen categories. For example, the method in [44] first generated features using word embeddings and random vectors, which was further improved by later works [7,22,28,40,45]. These zero-shot classification methods generated image features without involving contextual information.…”
Section: Related Work (mentioning)
confidence: 99%
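The pipeline this statement describes, synthesizing features for unseen classes from attributes plus random noise and then training an ordinary classifier on them, can be sketched as follows. The `generator`, `unseen_attributes`, and all hyperparameters here are hypothetical placeholders for illustration, not code from [44] or its follow-ups.

```python
# Hedged sketch of the standard feature-synthesis pipeline: draw noise,
# condition on each unseen class's attribute vector, generate features,
# then fit a softmax classifier on the synthetic training set.
import torch
import torch.nn as nn

@torch.no_grad()
def synthesize(generator, unseen_attributes, per_class=300, z_dim=64):
    feats, labels = [], []
    for label, a in enumerate(unseen_attributes):   # a: attribute vector of one unseen class
        z = torch.randn(per_class, z_dim)           # random vectors, as in [44]
        a_rep = a.unsqueeze(0).expand(per_class, -1)
        feats.append(generator(z, a_rep))
        labels.append(torch.full((per_class,), label, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

def train_classifier(feats, labels, num_classes, epochs=20):
    clf = nn.Linear(feats.size(1), num_classes)     # plain softmax classifier
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(clf(feats), labels).backward()
        opt.step()
    return clf
```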
“…Transferring knowledge from seen categories to unseen categories is not a new idea and has been actively studied by zero-shot learning (ZSL) [2,10,21,45]. Most ZSL methods tend to learn the mapping between visual features and semantic word embeddings or synthesize visual features for unseen categories.…”
Section: Introduction (mentioning)
confidence: 99%
“…Amongst different GZSL approaches, non-generative models aim to learn deterministic or stochastic functions given the semantic and the visual spaces [9,10,15,21,23,24,29,31,37-39]. On the other hand, generative techniques for GZSL focus on combating the class-imbalance issue in GZSL by modeling the underlying data distributions [4,12,14,16,19,27,35,36]. Notably, [35] uses WGAN [6], which is trained to generate visual samples of the seen classes from the corresponding seen class-prototypes.…”
Section: Related Work (mentioning)
confidence: 99%
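For the conditional WGAN objective mentioned above ([35] building on WGAN [6]), here is a rough sketch of the critic and generator losses, using the gradient-penalty variant of the Lipschitz constraint. The variable names and penalty weight are assumptions for illustration, not the cited papers' code.

```python
# Sketch of a class-conditioned WGAN objective: the critic scores
# (feature, class-prototype) pairs; a gradient penalty on interpolates
# enforces the Lipschitz constraint (the WGAN-GP variant).
import torch

def wgan_critic_loss(critic, real_x, fake_x, a, lam=10.0):
    # fake_x should be detached from the generator when training the critic.
    # Wasserstein term: score real features high, generated features low.
    loss = critic(fake_x, a).mean() - critic(real_x, a).mean()
    # Gradient penalty on random interpolates between real and fake features.
    eps = torch.rand(real_x.size(0), 1)
    interp = (eps * real_x + (1 - eps) * fake_x).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp, a).sum(), interp, create_graph=True)[0]
    penalty = ((grad.norm(2, dim=1) - 1) ** 2).mean()
    return loss + lam * penalty

def wgan_generator_loss(critic, fake_x, a):
    # The generator tries to make the critic score its samples high.
    return -critic(fake_x, a).mean()
```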
“…[14] proposes to alleviate confusion between seen and unseen class samples and introduces feature confusion scores. [36] uses a GAN and a VAE to generate unseen class samples.…”
Section: Related Work (mentioning)
confidence: 99%
“…We compare our method, tuned-SAE, with the state-of-the-art method [36]; other works are compared in [24]. All compared research studies have used zero-shot learning (supervised learning) [31,16] and semi-supervised learning [28].…”
Section: Competitors (mentioning)
confidence: 99%