Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.205

Efficient Strategies for Hierarchical Text Classification: External Knowledge and Auxiliary Tasks

Cited by 17 publications (15 citation statements)
References 23 publications
“…For example, in [1], the outputs of word-level tasks are fed to the char-level primary task. [99] feeds the output of more general classification models to more specific classification models during training, and the more general classification results are used to optimize beam search of more specific models at test time.…”
Section: Hierarchical
confidence: 99%
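The coarse-to-fine scheme quoted above (a general-level model's output fed into a more specific model) can be sketched as follows. All sizes, weights, and the concatenation scheme are illustrative assumptions, not the cited papers' actual architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 16-dim text features, 3 coarse classes, 7 fine classes.
D, C_COARSE, C_FINE = 16, 3, 7
W_coarse = rng.normal(size=(D, C_COARSE))
# The fine-level model sees the text features *and* the coarse distribution.
W_fine = rng.normal(size=(D + C_COARSE, C_FINE))

x = rng.normal(size=(D,))                # encoded document
p_coarse = softmax(x @ W_coarse)         # general-level prediction
fine_in = np.concatenate([x, p_coarse])  # feed coarse output downstream
p_fine = softmax(fine_in @ W_fine)       # specific-level prediction
```

At test time, the quoted statement notes that the coarse prediction can additionally be used to prune the beam search of the fine-level model, which the sketch above does not show.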
“…Similar to the data sampling in Section 3.2, we can assign a task sampling weight 𝑟_𝑡 for task 𝑡, which is also called the mixing ratio, to control the frequency of data batches from task 𝑡. The most common task scheduling technique is to shuffle between different tasks [5,20,30,33,38,44,51,71,73,79,80,89,93,99,102,108,109,114,118], either randomly or according to a pre-defined schedule. While random shuffling is widely adopted, introducing more heuristics into scheduling could help further improve the performance of MTL models.…”
Section: Task Scheduling
confidence: 99%
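Task scheduling with mixing ratios, as described in the quote above, amounts to sampling a task for each training step with probability proportional to 𝑟_𝑡. A minimal sketch, where the task names and ratio values are hypothetical:

```python
import random

def schedule_tasks(mixing_ratios, n_steps, seed=0):
    """Sample one task per training step, with probability
    proportional to each task's mixing ratio r_t."""
    rng = random.Random(seed)
    tasks = list(mixing_ratios)
    weights = [mixing_ratios[t] for t in tasks]
    return [rng.choices(tasks, weights=weights)[0] for _ in range(n_steps)]

# A task with ratio 3.0 is drawn roughly 3x as often as one with 1.0.
steps = schedule_tasks({"coarse": 1.0, "fine": 3.0}, n_steps=1000)
```

A pre-defined (non-random) schedule would instead return a fixed interleaving, e.g. cycling through the task list.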
“…Compared with one-hot representations, label embeddings have advantages in capturing domain-specific information and importing external knowledge. In the field of text classification (including the HTC task), researchers have proposed several forms of label embeddings to encode different kinds of information, such as 1) anchor points (Du et al., 2019), 2) compatibility between labels and words (Huang et al., 2019; Tang et al., 2015), 3) taxonomic hierarchy (Cao et al., 2020; Zhou et al., 2020) and 4) external knowledge (Rivas Rojas et al., 2020).…”
Section: Introduction
confidence: 99%
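The contrast drawn above between one-hot representations and dense label embeddings can be illustrated with a small sketch. The dimensions and random vectors are hypothetical; the point is that dense label embeddings support word-label compatibility scores, while one-hot vectors keep all labels mutually orthogonal and equidistant:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N_WORDS, N_LABELS = 8, 5, 4

word_vecs  = rng.normal(size=(N_WORDS, D))   # one row per token
label_vecs = rng.normal(size=(N_LABELS, D))  # dense label embeddings

# Compatibility between labels and words: one score per (word, label) pair.
compat = word_vecs @ label_vecs.T            # shape (N_WORDS, N_LABELS)

# One-hot label "embeddings", by contrast, encode no relation at all:
one_hot = np.eye(N_LABELS)
pairwise = one_hot @ one_hot.T               # identity: every label orthogonal
```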
“…In this work, an HMTC model with a label-based attention module is proposed for text classification. Different from Huang et al. (2019) and Rojas et al. (2020), where hierarchical feature extraction is realized by applying general attention over the whole text, LA-HCN is designed to extract key information based on different labels at different hierarchical levels. Compared with normal attention, label-based attention is more helpful for human understanding of the classification results, which makes the model more explainable and interpretable.…”
Section: Introduction
confidence: 99%
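Label-based attention, as contrasted with general attention in the quote above, computes a separate attention distribution over the text for each label, yielding one label-specific document vector per label at each hierarchy level. A minimal sketch under assumed shapes; this is an illustration of the general idea, not the LA-HCN architecture itself:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def label_attention(tokens, label_embs):
    """For each label, attend over the token sequence and return one
    label-specific pooled vector.
    tokens:     (T, D) token representations
    label_embs: (L, D) embeddings of the labels at one hierarchy level
    returns:    (L, D) one pooled document vector per label
    """
    scores = label_embs @ tokens.T   # (L, T) label-token affinities
    weights = softmax(scores)        # one attention distribution per label
    return weights @ tokens          # label-specific pooling

rng = np.random.default_rng(2)
tokens = rng.normal(size=(6, 8))
# Two hierarchy levels with different (hypothetical) label sets.
level1 = label_attention(tokens, rng.normal(size=(3, 8)))
level2 = label_attention(tokens, rng.normal(size=(9, 8)))
```

The per-label attention weights are what make the prediction inspectable: for any label, one can read off which tokens contributed most to its score.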