Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467426

Generalized Zero-Shot Extreme Multi-label Learning

Abstract: Extreme Multi-label Learning (XML) involves assigning the subset of most relevant labels to a data point from millions of label choices. A hitherto unaddressed challenge in XML is that of predicting unseen labels with no training points. These form a significant fraction of total labels and contain fresh and personalized information desired by end users. Most existing extreme classifiers are not equipped for zero-shot label prediction and hence fail to leverage unseen labels. As a remedy, this paper proposes a…

Cited by 20 publications (32 citation statements). References 44 publications (54 reference statements).

“…Zero-shot multi-label text classification [50] aims to tag each document with labels that are unseen during training time but available for prediction. Most previous studies [4,16,34,38] assume that there is a set of seen classes, each of which has some annotated documents. Trained on these documents, their proposed text classifiers are expected to transfer the knowledge from seen classes to the prediction of unseen ones.…”
Section: Problem Definition
confidence: 99%
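The transfer from seen to unseen classes described in this statement is commonly realized by placing documents and label descriptions in a shared embedding space, so that an unseen label can be scored from its text alone. The sketch below is a generic illustration of that idea, not the method of this paper or of the citing work; encode_text is a hypothetical stand-in for any pretrained text encoder.

```python
import numpy as np

# Hypothetical stand-in for a pretrained text encoder (e.g. a sentence-embedding
# model). Random vectors are used only so the sketch runs end to end; a real
# encoder maps semantically related texts to nearby vectors.
def encode_text(texts, dim=384):
    rng = np.random.default_rng()
    return rng.normal(size=(len(texts), dim))

def rank_labels(document, label_texts, k=5):
    """Rank all labels, seen or unseen, for one document by cosine similarity
    between the document text and each label's textual description."""
    doc_vec = encode_text([document])[0]
    label_vecs = encode_text(label_texts)
    sims = label_vecs @ doc_vec
    sims /= np.linalg.norm(label_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-9
    return np.argsort(-sims)[:k]          # indices of the k best-matching labels

unseen_labels = ["zero-shot learning", "extreme classification", "cooking recipes"]
print(rank_labels("Predicting unseen labels with no training points", unseen_labels, k=2))
```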
“…Therefore, new scoring functions are designed to promote infrequent label prediction by giving the model a higher "reward" when it predicts a tail label correctly. Propensity-scored P@𝑘 (PSP@𝑘) and propensity-scored NDCG@𝑘 (PSNDCG@𝑘, abbreviated to PSN@𝑘 in this paper) are thus proposed in [18] and widely used in LMTC evaluation [16,32,40,55,62]. PSP@𝑘 and PSN@𝑘 are defined as follows.…”
Section: Performance on Tail Labels
confidence: 99%
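The quoted passage cuts off before the definitions it announces. For orientation, a minimal NumPy sketch of both metrics, following the conventions common in the XML literature (each correct label in the top-k is up-weighted by the inverse of its propensity p_l, and PSN@k adds the usual logarithmic position discount), is given below. Function names, signatures, and the log base are illustrative rather than taken from [18] or the citing paper.

```python
import numpy as np

def psp_at_k(scores, y_true, propensities, k=5):
    """Propensity-scored precision@k (PSP@k) for a single test point.

    scores       : (L,) predicted relevance scores over all L labels
    y_true       : (L,) binary ground-truth label indicator vector
    propensities : (L,) estimated propensity p_l per label; tail labels have
                   small p_l, so predicting them correctly earns a larger reward
    """
    top_k = np.argsort(-scores)[:k]                  # k highest-scoring labels
    return float(np.sum(y_true[top_k] / propensities[top_k]) / k)

def psn_at_k(scores, y_true, propensities, k=5):
    """Propensity-scored nDCG@k (PSN@k), assuming the common convention of a
    1 / log2(rank + 1) discount and normalization over the first k ranks."""
    top_k = np.argsort(-scores)[:k]
    discounts = 1.0 / np.log2(np.arange(2, k + 2))   # discount for ranks 1..k
    ps_dcg = np.sum(y_true[top_k] / propensities[top_k] * discounts)
    return float(ps_dcg / np.sum(discounts))
```

In reported results these per-point scores are averaged over the test set, and the propensities are usually estimated from training-set label frequencies as in [18].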