2022
DOI: 10.3390/math10193587
Residual-Prototype Generating Network for Generalized Zero-Shot Learning

Abstract: Conventional zero-shot learning aims to train a classifier on a training set (seen classes) to recognize instances of novel classes (unseen classes) via class-level semantic attributes. In generalized zero-shot learning (GZSL), the classifier must recognize both seen and unseen classes, which poses a problem of extreme data imbalance. To address this problem, feature generative methods have been proposed to compensate for the lack of unseen classes. Current generative methods use class semantic attributes as the cu…

Cited by 4 publications (1 citation statement)
References 30 publications
“…In our analysis, we are still considering the methods quoted above, while also adding many other approaches benchmarked in the more challenging GZSL setup. In fact, we also consider approaches designed for few-shot learning and cast to the zero-shot setting (such as CRnet [46]). We account for approaches which tackle zero-shot learning by an attention mechanism over the attributes used to describe each category (LFGAA [27], ZSL-OCD [20] and DAZLE [17]).…”
Section: Comparisons With the State-of-the-art
Citation type: mentioning
Confidence: 99%