2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01173
Hierarchical Disentanglement of Discriminative Latent Features for Zero-Shot Learning

Cited by 57 publications (41 citation statements). References 18 publications.
“…iii) In particular, for the two coarse-grained datasets (aPY and AWA2), our CPL achieves improvements of 0.5% and 5.6% over the strongest competitors, showing its advantage on coarse-grained object recognition problems. iv) Although DLFZRL [39] outperforms our CPL by 1.4% on CUB, our CPL still retains the advantage in most cases.…”
Section: Standard ZSL
confidence: 77%
“…By contrast, our framework does not need to predefine these relations. Third, a recent work, DLFZRL [39], aims to learn discriminative and generalizable representations from image features and thereby improve the performance of existing methods. In particular, they used DEVISE [11] as the embedding function and obtained very competitive results on four benchmark datasets.…”
Section: Standard ZSL
confidence: 99%
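A DEVISE-style embedding function scores an image by projecting its visual feature into the label-embedding space and training with a hinge rank loss, so the true label outscores every wrong one by a margin. A minimal numpy sketch of that loss follows; the projection matrix, margin value, and toy embeddings are illustrative assumptions, not the cited papers' actual setup:

```python
import numpy as np

def devise_hinge_loss(M, v, t_true, T_wrong, margin=0.1):
    """Hinge rank loss in the spirit of DeViSE: the projected image
    feature M @ v should score higher against the true label embedding
    t_true than against every row of T_wrong, by at least `margin`."""
    proj = M @ v                # map the image feature into label space
    s_true = proj @ t_true      # similarity to the correct label
    s_wrong = T_wrong @ proj    # similarities to all wrong labels
    return np.maximum(0.0, margin - s_true + s_wrong).sum()

# Toy example: 2-D feature, 2-D label embeddings, identity projection.
M = np.eye(2)
v = np.array([1.0, 0.0])
loss_ok = devise_hinge_loss(M, v, np.array([1.0, 0.0]),
                            np.array([[0.0, 1.0]]))   # correct label wins
loss_bad = devise_hinge_loss(M, v, np.array([0.0, 1.0]),
                             np.array([[1.0, 0.0]]))  # wrong label wins
```

When the correct label already wins by more than the margin the loss is zero, so only violating labels contribute gradient.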
“…There have been previous efforts to extract the semantic feature from the image feature (Tong et al. 2019; Han, Fu, and Yang 2020; Li et al. 2021; Chen et al. 2021). While our approach may seem somewhat similar to (Li et al. 2021) and (Chen et al. 2021), in that an autoencoder-based image feature decomposition is used for semantic feature extraction, our work is clearly distinct from those works in two respects.…”
Section: Comparison With Conventional Approaches
confidence: 99%
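The autoencoder-based decomposition mentioned above can be sketched minimally: a linear encoder splits the latent code into a semantic part (pushed toward class attributes) and a residual part, while a linear decoder reconstructs the original feature. The dimensions, the squared-error losses, and the plain gradient-descent loop below are illustrative assumptions, not the architecture of (Li et al. 2021) or (Chen et al. 2021):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the cited papers.
d_feat, d_sem, d_res, n = 16, 4, 4, 64
X = rng.normal(size=(n, d_feat))     # image features
A = rng.normal(size=(n, d_sem))      # class-attribute targets

We = rng.normal(scale=0.1, size=(d_feat, d_sem + d_res))  # encoder
Wd = rng.normal(scale=0.1, size=(d_sem + d_res, d_feat))  # decoder

def errors():
    Z = X @ We
    rec = np.mean((Z @ Wd - X) ** 2)        # reconstruction error
    sem = np.mean((Z[:, :d_sem] - A) ** 2)  # semantic-alignment error
    return rec, sem

rec0, sem0 = errors()
lr = 1e-2
for _ in range(500):
    Z = X @ We
    grad_rec = Z @ Wd - X                   # from 0.5 * ||Xhat - X||^2
    grad_sem = np.concatenate(              # only the semantic slice
        [Z[:, :d_sem] - A, np.zeros((n, d_res))], axis=1)
    gWd = (Z.T @ grad_rec) / n
    gWe = (X.T @ (grad_rec @ Wd.T + grad_sem)) / n
    Wd -= lr * gWd
    We -= lr * gWe

rec1, sem1 = errors()  # both should drop below their initial values
```

Only the first `d_sem` latent coordinates are tied to the attributes; the residual coordinates are free to absorb whatever else the reconstruction needs, which is the intuition behind the decomposition.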
“…Datasets: In our experiments, we evaluate the performance of our model using four benchmark datasets: AwA1, AwA2, … In dividing the total classes into seen and unseen classes, we adopt the conventional dataset split presented in (Xian, Schiele, and Akata 2017).…”
Section: Experiments, Experimental Setup
confidence: 99%
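The defining constraint of such a split is that the seen and unseen class sets are disjoint. The real splits of (Xian, Schiele, and Akata 2017) are fixed per-dataset class lists, so the random partition and the 40/10 sizes below are hypothetical stand-ins that only illustrate the constraint:

```python
import numpy as np

all_classes = np.arange(50)          # e.g. an AwA-style 50-class label set
rng = np.random.default_rng(42)
perm = rng.permutation(all_classes)
seen, unseen = perm[:40], perm[40:]  # hypothetical 40 seen / 10 unseen

# The defining ZSL requirement: no class appears on both sides.
assert set(seen.tolist()).isdisjoint(unseen.tolist())
```

At test time a zero-shot model is evaluated only on `unseen`, whose classes contributed no training images.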
“…Although remarkable results were obtained by these unsupervised DRL methods on toy datasets such as dSprites [22] and 3D Shapes [23], there is no guarantee that each latent variable corresponds to a single semantically meaningful factor of variation without any inductive bias [10], [24], [25]. Hence, recent DRL studies have focused on introducing into the model an explicit prior that imposes constraints or regularizations based on the underlying structure of complicated real-world images [26], [27], such as translation and rotation [2], [28], hierarchical features [8], [9], [29], and domain-specific knowledge [10].…”
Section: Introduction
confidence: 99%