2022
DOI: 10.1109/tcyb.2020.3004641
Semantic-Guided Class-Imbalance Learning Model for Zero-Shot Image Classification

Cited by 20 publications (9 citation statements) · References 50 publications
“…Re-weighting [29,30] in class-sensitive learning is a simple yet effective way to address class imbalance. SCILM [31] assigned different weights to selected samples based on class representativeness to alleviate the class-imbalance issue. Ye et al. [32] presented a theoretical analysis of data imbalance and proposed a balanced mixup loss function.…”
Section: Long-tail Learning
confidence: 99%
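The re-weighting strategy summarized in this statement can be sketched as a frequency-based weighted cross-entropy; the inverse-class-frequency scheme below is a common illustrative choice, not the exact weighting used by SCILM [31]:

```python
from collections import Counter
import math

def inverse_frequency_weights(labels, num_classes):
    """Weight each class inversely to its sample count (illustrative scheme)."""
    counts = Counter(labels)
    n = len(labels)
    return [n / (num_classes * max(counts.get(c, 0), 1)) for c in range(num_classes)]

def weighted_cross_entropy(logits, label, weights):
    """Per-sample weighted loss: w_y * (-log softmax(logits)[y])."""
    m = max(logits)  # shift for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return weights[label] * (log_z - logits[label])

# Toy imbalanced labels: class 0 dominates, class 2 is rare.
labels = [0, 0, 0, 0, 0, 0, 1, 1, 2]
w = inverse_frequency_weights(labels, num_classes=3)
# w == [0.5, 1.5, 3.0]: an error on the rare class costs six times more.
```

Scaling by total count over `num_classes * count` keeps the average weight near 1, so the overall loss magnitude stays comparable to the unweighted case.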
“…Ji et al. [47] propose a simple balanced sampling approach on top of a prototypical model, then add a more complex "feature fusion" technique to account for the fact that certain instances may be more or less representative of their class prototype. This is done by aligning the semantic features of the class with the visual features of instances, creating semantic-guided prototypes for each class.…”
Section: Zero-shot Image Classification
confidence: 99%
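The semantic-guided prototype idea attributed to Ji et al. [47] can be sketched as follows; the fixed convex combination `alpha` used to fuse the visual prototype with an already-projected class semantic vector is a simplifying assumption for illustration, not the paper's learned feature-fusion module:

```python
def visual_prototype(features):
    """Mean of the visual features of a class's support instances."""
    d = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(d)]

def fuse(visual_proto, semantic_vec, alpha=0.7):
    """Convex combination of the visual prototype and the class semantic
    vector; alpha is a hypothetical mixing weight, not a learned one."""
    return [alpha * v + (1 - alpha) * s for v, s in zip(visual_proto, semantic_vec)]

def classify(query, prototypes):
    """Assign the query to the nearest fused prototype (squared Euclidean)."""
    def dist(p):
        return sum((q - x) ** 2 for q, x in zip(query, p))
    return min(prototypes, key=lambda c: dist(prototypes[c]))

# Two toy classes with 2-D visual features and matching semantic vectors.
protos = {
    "zebra": fuse(visual_prototype([[1.0, 0.0], [0.8, 0.2]]), [1.0, 0.0]),
    "horse": fuse(visual_prototype([[0.0, 1.0], [0.2, 0.8]]), [0.0, 1.0]),
}
# classify([0.9, 0.1], protos) → "zebra"
```

Blending in the semantic vector nudges each prototype toward the class's attribute description, which is what lets unrepresentative or scarce support instances be compensated for.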
“…In experiments on the three datasets listed above plus two others, the proposed model outperformed many SOTA GZSL models for the balanced datasets, and marginally outperformed all for the imbalanced datasets. On AWA2 [82], they report 62.2% accuracy on unseen classes, 76.7% on seen classes, and 68.7% on the harmonic mean of these two, significantly outperforming [47]. Additionally, on the imbalanced data, they report better training times than all but one compared model, CNZSL [83].…”
Section: Zero-shot Image Classification
confidence: 99%
“…Shigeto et al. [31] experimentally showed that semantic-to-visual embedding yields a more compact and separable visual feature distribution through its one-to-many correspondence, thereby mitigating the hubness issue. Ji et al. [32] also follow the inverse mapping direction from semantic space to visual space and propose a semantic-guided class-imbalance learning model that alleviates the class-imbalance issue in ZSIC. In addition, for the class-imbalance issue, generative models have been introduced that learn a semantic-to-visual mapping to generate visual features of unseen classes [33,34,35,36,37] for data augmentation.…”
Section: Related Work
confidence: 99%
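A minimal sketch of the semantic-to-visual mapping direction discussed above, assuming a plain ridge-regularized linear map fitted by gradient descent on seen classes (the cited works use more elaborate embedding models); an unseen class's attribute vector is then projected into visual space for nearest-neighbour matching:

```python
def fit_semantic_to_visual(S, V, lr=0.1, steps=500, lam=0.01):
    """Fit a linear map W (semantic -> visual) by gradient descent on the
    ridge-regularized squared error ||s W - v||^2 + lam ||W||^2.
    A minimal stand-in for the semantic-to-visual embeddings of [31,32]."""
    ds, dv = len(S[0]), len(V[0])
    W = [[0.0] * dv for _ in range(ds)]
    for _ in range(steps):
        # Regularization gradient, then accumulate the data term per sample.
        grad = [[2 * lam * W[i][j] for j in range(dv)] for i in range(ds)]
        for s, v in zip(S, V):
            pred = [sum(s[i] * W[i][j] for i in range(ds)) for j in range(dv)]
            for i in range(ds):
                for j in range(dv):
                    grad[i][j] += 2 * s[i] * (pred[j] - v[j])
        for i in range(ds):
            for j in range(dv):
                W[i][j] -= lr * grad[i][j] / len(S)
    return W

def project(s, W):
    """Map a semantic (attribute) vector into visual feature space."""
    return [sum(s[i] * W[i][j] for i in range(len(s))) for j in range(len(W[0]))]

# Seen classes: attribute vectors S mapped onto visual prototypes V.
S = [[1.0, 0.0], [0.0, 1.0]]
V = [[2.0, 0.0], [0.0, 2.0]]
W = fit_semantic_to_visual(S, V)
# project(unseen_attributes, W) lands in visual space, where classification
# reduces to nearest-neighbour search over instance features.
```

Mapping into the visual space (rather than the reverse) is what the quoted passage credits with producing compact, separable distributions and reducing hubness.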