Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.104

Hierarchy-Aware Global Model for Hierarchical Text Classification

Abstract: Hierarchical text classification is an essential yet challenging subtask of multi-label text classification with a taxonomic hierarchy. Existing methods have difficulties in modeling the hierarchical label structure in a global view. Furthermore, they cannot make full use of the mutual interactions between the text feature space and the label space. In this paper, we formulate the hierarchy as a directed graph and introduce hierarchy-aware structure encoders for modeling label dependencies. Based on the hierar…
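The abstract formulates the label taxonomy as a directed graph over parent and child labels. A minimal sketch of that formulation, assuming a simple parent-to-child edge list (the function names `build_hierarchy` and `ancestors` are illustrative, not from the paper's code):

```python
# Hypothetical sketch: a label taxonomy as a directed graph, where each edge
# points from a parent label to a child label. In hierarchical multi-label
# classification, predicting a leaf usually implies all of its ancestors.
from collections import defaultdict


def build_hierarchy(edges):
    """Build a parent -> children adjacency map from (parent, child) pairs."""
    graph = defaultdict(list)
    for parent, child in edges:
        graph[parent].append(child)
    return graph


def ancestors(graph, label):
    """Collect every ancestor of `label` by walking parent edges upward."""
    parents = {c: p for p, children in graph.items() for c in children}
    out = []
    while label in parents:
        label = parents[label]
        out.append(label)
    return out


taxonomy = build_hierarchy([
    ("root", "news"), ("root", "sports"),
    ("news", "politics"), ("politics", "elections"),
])
print(ancestors(taxonomy, "elections"))  # ['politics', 'news', 'root']
```

Propagating a predicted label to its ancestors in this way is one common consistency step for taxonomic hierarchies; the paper's structure encoders instead learn representations over this graph.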

Cited by 97 publications (111 citation statements) | References 32 publications
“…The major part of HTCInfoMax is the "Information Maximization" part shown in the dashed box which has two new modules: text-label mutual information maximization and label prior matching, which will be introduced in the following sections. We keep the remaining part such as text encoder, structure encoder and the predictor be the same as in HiAGM-LA (Zhou et al, 2020).…”
Section: Our Approach
confidence: 99%
“…And the loss from the predictor is the traditional binary cross-entropy loss L c (Zhou et al, 2020).…”
Section: Final Loss of HTCInfoMax
confidence: 99%
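The citation statement above refers to the standard multi-label binary cross-entropy loss used by the predictor. A minimal sketch of that loss over per-label probabilities (pure-Python for illustration; real implementations would use a framework's numerically stable logit-based variant):

```python
# Sketch of multi-label binary cross-entropy: one independent binary term
# per label in the hierarchy, averaged over labels. `probs` are sigmoid
# outputs in (0, 1); `targets` are 0/1 label indicators.
import math


def binary_cross_entropy(probs, targets):
    """Average of per-label BCE terms: -[t*log(p) + (1-t)*log(1-p)]."""
    eps = 1e-12  # guard against log(0)
    terms = [
        -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
        for p, t in zip(probs, targets)
    ]
    return sum(terms) / len(terms)


# Confident, correct predictions give a small loss; uniform 0.5 gives ln 2.
loss = binary_cross_entropy([0.9, 0.2, 0.7], [1, 0, 1])
```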