Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1044

Label-Specific Document Representation for Multi-Label Text Classification

Abstract: Multi-label text classification (MLTC) aims to tag the most relevant labels for a given document. In this paper, we propose a Label-Specific Attention Network (LSAN) to learn a new document representation. LSAN takes advantage of label semantic information to determine the semantic connection between labels and documents for constructing label-specific document representations. Meanwhile, a self-attention mechanism is adopted to identify the label-specific document representation from the document content information…
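The label-attention branch the abstract describes can be summarized in a few lines: score each word against each label embedding, normalize over words, and pool. Below is a minimal PyTorch sketch, assuming contextual word representations and trainable label embeddings of the same dimensionality; the function name and shapes are illustrative, not taken from the paper's released code.

import torch
import torch.nn.functional as F

def label_attention(word_repr, label_emb):
    """Build one document vector per label.

    word_repr: (batch, seq_len, hidden)   contextual word representations
    label_emb: (num_labels, hidden)       label semantic embeddings
    returns:   (batch, num_labels, hidden) label-specific document vectors
    """
    # Semantic connection between each label and each word: dot-product scores.
    scores = torch.einsum('bsh,lh->bls', word_repr, label_emb)
    attn = F.softmax(scores, dim=-1)                 # normalize over words
    # Weighted sum of word representations for every label.
    return torch.einsum('bls,bsh->blh', attn, word_repr)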

Cited by 103 publications (75 citation statements) · References 19 publications

Citation statements (ordered by relevance):
“…However, Wiegreffe and Pinter (2019) show through alternative tests that prior work does not discredit the usefulness of attention for interpretability. Xiao et al. (2019) introduce the Label-Specific Attention Network (LSAN) for multi-label document classification. They use label descriptions to compute attention scores for words, and follow the self-attention of Lin et al. (2017).…”
Section: Related Work
Mentioning confidence: 99%
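For reference, the structured self-attention of Lin et al. (2017) that LSAN follows computes A = softmax(W2 · tanh(W1 · Hᵀ)) and pools the word representations with it, yielding r attention rows per document. A minimal PyTorch sketch, with module and parameter names of our own choosing:

import torch
import torch.nn as nn

class StructuredSelfAttention(nn.Module):
    """Structured self-attention in the style of Lin et al. (2017)."""
    def __init__(self, hidden, d_a, r):
        super().__init__()
        self.W1 = nn.Linear(hidden, d_a, bias=False)
        self.W2 = nn.Linear(d_a, r, bias=False)

    def forward(self, H):                       # H: (batch, seq_len, hidden)
        # Attention weights over words, one distribution per attention row.
        A = torch.softmax(self.W2(torch.tanh(self.W1(H))), dim=1)
        return A.transpose(1, 2) @ H            # (batch, r, hidden)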
“…Recent work in multi-label text classification (Xiao et al., 2019) and sequence labeling (Cui and Zhang, 2019) shows the efficiency and interpretability of label-specific representations. We introduce… [Figure 1: Comparison of the attention head architectures of our proposed Label Attention Layer and a Self-Attention Layer (Vaswani et al., 2017).]”
Section: Introduction
Mentioning confidence: 99%
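The architectural contrast this statement draws can be made concrete: in a self-attention head the query is computed from the input, whereas in a label-attention head the query is a learned label vector. A hedged PyTorch sketch of both head types follows; the class names, scaling, and single-query simplification are illustrative assumptions, not the cited authors' implementation.

import torch
import torch.nn as nn

class SelfAttentionHead(nn.Module):
    """Standard head: queries are computed from the input itself."""
    def __init__(self, d):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)

    def forward(self, x):                                   # x: (batch, seq, d)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / x.size(-1) ** 0.5
        return torch.softmax(scores, dim=-1) @ self.v(x)    # (batch, seq, d)

class LabelAttentionHead(nn.Module):
    """Label-attention head: the query is a single learned label vector."""
    def __init__(self, d):
        super().__init__()
        self.query = nn.Parameter(torch.randn(d))           # learned label query
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)

    def forward(self, x):                                   # x: (batch, seq, d)
        scores = self.k(x) @ self.query / x.size(-1) ** 0.5  # (batch, seq)
        attn = torch.softmax(scores, dim=-1).unsqueeze(1)   # (batch, 1, seq)
        return attn @ self.v(x)                             # (batch, 1, d)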
“…(4) AC (Kim et al., 2018), which consists of a self-attention module and multiple CNNs, enabling it to imitate the human two-step procedure of analyzing emotions in sentences: first comprehend, then classify. (5) LSAN (Xiao et al., 2019), which takes advantage of label semantic information to determine the semantic connection between labels and documents for constructing label-specific document representations. This approach is considered the state of the art in multi-label text classification.…”
Section: Baselines
Mentioning confidence: 99%
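As a rough illustration of the two-step "comprehend and classify" design attributed to AC above, the sketch below first applies a self-attention module and then classifies with multiple parallel CNNs. The exact architecture of Kim et al. (2018) may differ; all hyperparameters, kernel sizes, and names here are assumptions.

import torch
import torch.nn as nn

class AttendThenConvolve(nn.Module):
    def __init__(self, d=256, num_labels=6, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.convs = nn.ModuleList(
            nn.Conv1d(d, 64, k, padding=k // 2) for k in kernel_sizes)
        self.out = nn.Linear(64 * len(kernel_sizes), num_labels)

    def forward(self, x):                       # x: (batch, seq, d)
        h, _ = self.attn(x, x, x)               # step 1: comprehend
        h = h.transpose(1, 2)                   # (batch, d, seq) for Conv1d
        feats = [c(h).amax(dim=-1) for c in self.convs]   # max-pool over time
        return self.out(torch.cat(feats, dim=-1))         # step 2: classify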
“…In this group, the baselines use different approaches to deal with the multi-modal issue without considering the label dependence issue. Specifically, in these approaches, a linear layer of L dimensions with sigmoid activation is used to predict the emotions. (7) GMFN (Zadeh et al., 2018b), which explicitly models the multi-modal interactions by capturing uni-modal, bi-modal and tri-modal interactions.…”

Approaches                      Acc    HL     F1
BR (Shen et al., 2004)          0.222  0.371  0.386
CC (Read et al., 2011)          0.225  0.377  0.386
RAkLA (Tsoumakas et al., 2011)  0.242  0.376  0.397
AC (Kim et al., 2018)           0.388  0.240  0.492
LSAN (Xiao et al., 2019)        0.393  0.209  0.501
DRS2S                           0.436  0.215  0.523
GMFN (Zadeh et al., 2018b)      0.396  0.195  0.517
RAVEN                           0.416  0.195  0.517
MulT (Tsai et al., 2019)        0…
Section: Baselines
Mentioning confidence: 99%
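The shared prediction head this statement describes is a standard multi-label setup: a linear layer with L outputs, a sigmoid per label, and a binary cross-entropy objective. A self-contained PyTorch sketch with assumed batch size, hidden size, and label count:

import torch
import torch.nn as nn

num_labels = 6                                   # L emotion labels (assumed)
doc_repr = torch.randn(32, 256)                  # (batch, hidden) document vectors
head = nn.Linear(256, num_labels)                # linear layer of L dimensions

logits = head(doc_repr)                          # (batch, L)
probs = torch.sigmoid(logits)                    # independent per-label scores
targets = torch.randint(0, 2, (32, num_labels)).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)   # multi-label objective
pred = (probs > 0.5).long()                      # threshold to get the label set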