Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1518
A Deep Reinforced Sequence-to-Set Model for Multi-Label Classification

Abstract: Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations, but traditional methods tend to ignore them. In order to capture the correlations between labels, the sequence-to-sequence (Seq2Seq) model views the MLTC task as a sequence generation problem and achieves excellent performance on this task. However, the Seq2Seq model is not, in essence, suited to the MLTC task. The reason is that …
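The sequence-generation framing that the abstract describes can be sketched concretely. Below is a minimal, hypothetical PyTorch illustration (class and parameter names are my own, not the paper's code): an LSTM decoder emits one label per step, conditioned on the labels already emitted, and stops at a special EOS symbol. This is how a Seq2Seq model captures label correlations, but it also imposes an order on what is really an unordered label set.

```python
import torch
import torch.nn as nn

class LabelSeqDecoder(nn.Module):
    """Greedy label-sequence decoder for MLTC (batch size 1, illustrative only)."""
    def __init__(self, num_labels, hidden=256):
        super().__init__()
        self.eos, self.bos = num_labels, num_labels + 1   # special symbols
        self.embed = nn.Embedding(num_labels + 2, hidden) # labels + EOS + BOS
        self.rnn = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, num_labels + 1)      # scores for labels + EOS

    @torch.no_grad()
    def greedy_decode(self, text_repr, max_labels=10):
        # text_repr: (1, hidden) encoding of the document from some encoder
        h, c = text_repr, torch.zeros_like(text_repr)
        prev, labels = torch.tensor([self.bos]), []
        for _ in range(max_labels):
            h, c = self.rnn(self.embed(prev), (h, c))     # condition on labels so far
            prev = self.out(h).argmax(dim=-1)             # most probable next label
            if prev.item() == self.eos:                   # label set is complete
                break
            labels.append(prev.item())
        return labels

# Usage: decode a label set for one (randomly encoded) document.
decoder = LabelSeqDecoder(num_labels=54)                  # AAPD has 54 labels
print(decoder.greedy_decode(torch.randn(1, 256)))
```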

Cited by 110 publications (196 citation statements)
References 33 publications (47 reference statements)
“…The AAPD dataset (P. Yang, Sun, et al., 2018) contains a combination of 55,840 pairs of abstracts and corresponding multilabel subject classifications. Tags cover 54 types of labels.…”
Section: Experimental Dataset (mentioning)
Confidence: 99%
“…However, the above methods ignore the association between labels. In recent years, there has been work on making use of the association relationship between labels, which applies the sequence generation method to multi-label classification [2], [19]–[22].…”
Section: B. Multi-label Text Classification (mentioning)
Confidence: 99%
“…SGM [22]: the Seq2Seq model with an attention mechanism is applied to multi-label text classification. In addition, the concept of “global embedding” is introduced, in which the labels other than the one with the maximum probability at the previous time step also contribute to the prediction of the label at the current time step.…”
Section: Baseline (mentioning)
Confidence: 99%
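For readers unfamiliar with SGM's global embedding, here is a simplified sketch under stated assumptions: `GlobalEmbedding` and its weight names are illustrative, and the sigmoid gate is an assumption rather than SGM's exact published formulation. The idea is that the decoder input at step t blends the top-1 previous-label embedding with the probability-weighted average of all label embeddings, so an incorrect top-1 prediction propagates less error to later steps.

```python
import torch
import torch.nn as nn

class GlobalEmbedding(nn.Module):
    """Blend the top-1 previous-label embedding with the probability-weighted
    average of all label embeddings (simplified; sigmoid gate is an assumption)."""
    def __init__(self, num_labels, dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_labels, dim)
        self.w1 = nn.Linear(dim, dim, bias=False)    # transform of top-1 embedding
        self.w2 = nn.Linear(dim, dim, bias=False)    # transform of averaged embedding

    def forward(self, prev_probs):
        # prev_probs: (batch, num_labels) label distribution from step t-1
        top1 = self.embed(prev_probs.argmax(dim=-1))         # (batch, dim)
        avg = prev_probs @ self.embed.weight                 # weighted average of ALL labels
        gate = torch.sigmoid(self.w1(top1) + self.w2(avg))   # per-dimension mixing gate
        return (1 - gate) * top1 + gate * avg                # decoder input at step t
```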
“…Nam et al. [14] used a Seq2Seq architecture with a GRU encoder and an attention-based GRU decoder, achieving an improvement over a standard GRU model [3] on several datasets and metrics. Yang et al. [29] continued this idea by introducing the Sequence Generation Model (SGM), consisting of a BiLSTM-based encoder and an LSTM decoder [6] coupled with an additive attention mechanism [2].…”
Section: Introduction (mentioning)
Confidence: 99%
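The additive attention mechanism [2] referenced in this excerpt is the Bahdanau-style scoring function; a minimal sketch follows (class and parameter names are illustrative, not taken from any cited codebase). The decoder state and each encoder output are projected into a shared space, combined through tanh, and scored by a learned vector; a softmax over the scores then weights the encoder outputs into a context vector.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention (illustrative sketch)."""
    def __init__(self, enc_dim, dec_dim, attn_dim=128):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, dec_dim); enc_outputs: (batch, src_len, enc_dim)
        scores = self.v(torch.tanh(
            self.w_enc(enc_outputs) + self.w_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                                   # (batch, src_len)
        weights = torch.softmax(scores, dim=-1)          # attention distribution
        context = (weights.unsqueeze(-1) * enc_outputs).sum(dim=1)  # (batch, enc_dim)
        return context, weights

# Usage: attend over 20 encoder states for a batch of 2 decoder steps.
attn = AdditiveAttention(enc_dim=512, dec_dim=256)
ctx, w = attn(torch.randn(2, 256), torch.randn(2, 20, 512))
```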