Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1484
Multi-Task Label Embedding for Text Classification

Abstract: Multi-task learning in text classification leverages implicit correlations among related tasks to extract common features and yield performance gains. However, most previous works treat the labels of each task as independent and meaningless one-hot vectors, which causes a loss of potential information and makes it difficult for these models to jointly learn three or more tasks. In this paper, we propose Multi-Task Label Embedding to convert labels in text classification into semantic vectors, thereby turning the ori…
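The core idea in the abstract can be illustrated with a minimal sketch: instead of one-hot label vectors, each label is represented by a semantic vector (here, the mean of toy word embeddings of its label text), and classification becomes a nearest-label search in the shared space. The embeddings and helper names below are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

# Toy word vectors (dimension 3); purely illustrative, not from the paper.
EMBED = {
    "good": np.array([0.9, 0.1, 0.0]),
    "great": np.array([0.8, 0.2, 0.0]),
    "bad": np.array([0.0, 0.1, 0.9]),
    "positive": np.array([1.0, 0.0, 0.0]),
    "negative": np.array([0.0, 0.0, 1.0]),
}

def embed_text(words):
    """Average the word embeddings; unknown words are skipped."""
    vecs = [EMBED[w] for w in words if w in EMBED]
    return np.mean(vecs, axis=0)

def classify(words, labels):
    """Return the label whose embedding is most similar (cosine) to the text."""
    t = embed_text(words)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(labels, key=lambda lab: cos(t, embed_text([lab])))

print(classify(["good", "great"], ["positive", "negative"]))  # → positive
```

Because labels live in the same semantic space as the input text, tasks with different label sets can share one scoring function, which is what makes jointly learning three or more tasks tractable in this framing.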

Cited by 69 publications (20 citation statements). References 17 publications (34 reference statements).
“…In the field of NLP, Tang et al. studied applications of label embeddings for text classification in heterogeneous networks [33]. Furthermore, multi-task learning has also made full use of label embeddings [34], which benefits experimental results. At present, more and more researchers are devoting themselves to improving the performance of related tasks.…”
Section: Label Embedding
confidence: 99%
“…The SSTb corpus was first used for sentiment analysis research by Pang et al. [47]. An extension of SSTb was used as a benchmark dataset by Socher et al. [48].…”
Section: Stanford Sentiment Treebank (SSTb)
confidence: 99%
“…We tested our baseline on the whole data set, and it performed similarly (95.3%), which serves as a validation of the baseline approach used in this study. Next, the accuracy on the Reuters data set was recently reported to be 80-85%, where multi-objective label encoders were used [54]. Our baseline implementation performs with 75% accuracy.…”
Section: Results
confidence: 99%