2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00844
Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders

Abstract: Many approaches in generalized zero-shot learning rely on cross-modal mapping between the image feature space and the class embedding space. As labeled images are expensive, one direction is to augment the dataset by generating either images or image features. However, the former misses fine-grained details and the latter requires learning a mapping associated with class embeddings. In this work, we take feature generation one step further and propose a model where a shared latent space of image features and c…

Cited by 582 publications (507 citation statements)
References 26 publications
“…We consider the simplified models above to describe CADA-VAE [4] and cycle-WGAN [3] as the latent space learning models.…”
Section: 2. Data Augmentation Framework
confidence: 99%
“…Encoder x for x), then the decoder of a different modality is used (e.g. Decoder a from z_x; see Fig. 2), which constrains the visual and semantic projections to lie in the same region of the latent space, represented by the mean µ and variance Σ of the samples produced by the encoder [4]. Figure 2: Depiction of the method CADA-VAE [4].…”
Section: CADA-VAE
confidence: 99%
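The cross-modal alignment described in the citation above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's actual architecture: it uses toy linear encoders/decoders and toy dimensions, and the specific loss weights and weight names are hypothetical. It shows the two ingredients the quote refers to: decoding each modality's latent code with the other modality's decoder (cross-reconstruction), and pulling both latent Gaussians, parameterized by their mean µ and variance Σ, into the same region of the shared latent space (distribution alignment).

```python
import numpy as np

rng = np.random.default_rng(0)
D_X, D_A, D_Z = 32, 16, 8  # toy image-feature, class-embedding, latent dims

# Linear maps stand in for the small MLP encoders/decoders of each modality.
W = {name: rng.standard_normal(shape) * 0.1 for name, shape in {
    "enc_x_mu": (D_X, D_Z), "enc_x_lv": (D_X, D_Z),   # image encoder
    "enc_a_mu": (D_A, D_Z), "enc_a_lv": (D_A, D_Z),   # attribute encoder
    "dec_x": (D_Z, D_X), "dec_a": (D_Z, D_A),         # per-modality decoders
}.items()}

def encode(v, mu_w, lv_w):
    """Map an input to the mean and log-variance of a diagonal Gaussian."""
    return v @ W[mu_w], v @ W[lv_w]

def sample(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def cross_reconstruction(x, a):
    """Decode each modality's latent code with the *other* decoder,
    e.g. Decoder_a applied to z_x."""
    mu_x, lv_x = encode(x, "enc_x_mu", "enc_x_lv")
    mu_a, lv_a = encode(a, "enc_a_mu", "enc_a_lv")
    z_x, z_a = sample(mu_x, lv_x), sample(mu_a, lv_a)
    a_from_x = z_x @ W["dec_a"]
    x_from_a = z_a @ W["dec_x"]
    return np.mean((a_from_x - a) ** 2) + np.mean((x_from_a - x) ** 2)

def distribution_alignment(x, a):
    """Squared 2-Wasserstein distance between the two diagonal Gaussians,
    forcing both modalities' codes into the same latent region."""
    mu_x, lv_x = encode(x, "enc_x_mu", "enc_x_lv")
    mu_a, lv_a = encode(a, "enc_a_mu", "enc_a_lv")
    return (np.sum((mu_x - mu_a) ** 2)
            + np.sum((np.exp(0.5 * lv_x) - np.exp(0.5 * lv_a)) ** 2))

x = rng.standard_normal(D_X)   # a fake image feature
a = rng.standard_normal(D_A)   # a fake class embedding
loss = cross_reconstruction(x, a) + distribution_alignment(x, a)
print(float(loss))
```

In training, both losses would be minimized jointly with the two per-modality VAE losses; here the weights are random, so the script only demonstrates the loss computation, not a trained model.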