Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop 2019
DOI: 10.18653/v1/p19-2045
Hierarchical Multi-label Classification of Text with Capsule Networks

Abstract: Capsule networks have been shown to demonstrate good performance on structured data in the area of visual inference. In this paper we apply and compare simple shallow capsule networks for hierarchical multi-label text classification and show that they can perform better than other neural networks, such as CNNs and LSTMs, and non-neural architectures such as SVMs. For our experiments, we use the established Web of Science (WOS) dataset and introduce a new real-world scenario dataset, the BlurbGenreColle…

Cited by 74 publications (45 citation statements). References 19 publications.
“…Another group of algorithms called hierarchical multi-label classification methods has been proposed for leveraging the hierarchical relationships among labels in making predictions, which has been successfully exploited for text processing [36], visual recognition [37, 38] and genomic analysis [39]. One common approach is to train classifiers on conditional data with all parent-level labels being positive and then to finetune them with the whole dataset [12], which contains both the positive and negative samples.…”
Section: Problem Formulation
confidence: 99%
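The hierarchy constraint described above — a positive label implies that all of its parent-level labels are positive — can be sketched as a small label-closure step. This is an illustrative sketch only; the `parent` map and function name are assumptions, not from the cited papers.

```python
# Hypothetical sketch of the hierarchy constraint in hierarchical
# multi-label classification: every predicted label implies its
# ancestors. `parent` maps each label to its parent (None at a root).

def propagate_to_ancestors(labels, parent):
    """Close a predicted label set upward so it respects the hierarchy."""
    closed = set(labels)
    for lab in labels:
        node = parent.get(lab)
        while node is not None:
            closed.add(node)
            node = parent.get(node)
    return closed
```

Applying this closure to model predictions guarantees that no child label is emitted without its parents, matching the "parent-level labels being positive" condition.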
“…In our study, we employ three evaluation metrics to measure the performances of different approaches to multi-modal multi-label emotion detection, i.e., multi-label Accuracy (Acc), Hamming Loss (HL) and micro F1 measure (F1). These metrics have been popularly used in some multi-label classification problems (Li et al, 2015; Aly et al, 2019; Wu et al, 2019).…”
Section: Experimental Settings
confidence: 99%
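The three metrics named in the statement above can be computed from binary label-indicator vectors. A minimal sketch, assuming example-based (Jaccard) multi-label Accuracy and the standard definitions of Hamming Loss and micro-averaged F1; the function name is illustrative:

```python
# Sketch of the three multi-label metrics: multi-label Accuracy
# (per-sample Jaccard overlap), Hamming Loss (fraction of wrong
# label bits), and micro-averaged F1 (pooled over all label slots).

def multilabel_metrics(y_true, y_pred):
    """y_true, y_pred: lists of equal-length 0/1 label vectors."""
    n, num_labels = len(y_true), len(y_true[0])
    acc = ham = tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        inter = sum(a & b for a, b in zip(t, p))
        union = sum(a | b for a, b in zip(t, p))
        acc += inter / union if union else 1.0           # Jaccard per sample
        ham += sum(a != b for a, b in zip(t, p)) / num_labels
        tp += inter
        fp += sum(p) - inter
        fn += sum(t) - inter
    denom = 2 * tp + fp + fn
    micro_f1 = 2 * tp / denom if denom else 1.0
    return acc / n, ham / n, micro_f1
```

Lower Hamming Loss is better; higher Accuracy and micro-F1 are better. Micro averaging pools true/false positives across all labels, so frequent labels dominate the score.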
“…Intent detection task aims to classify the intent of queries and is always considered as a text classification task (Kim, 2014; Lai et al, 2015; Yang et al, 2016; Joulin et al, 2017; Xia et al, 2018). Considering the complexity of the label, some hierarchical text classification methods (Huang et al, 2019; Mao et al, 2019; Aly et al, 2019) have emerged to capture label hierarchies. Recently there are some joint models to jointly learn the intent detection and…”
Section: Intent Detection
confidence: 99%