Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.283
Hyperbolic Capsule Networks for Multi-Label Classification

Abstract: Although deep neural networks are effective at extracting high-level features, classification methods usually encode an input into a vector representation via simple feature-aggregation operations (e.g., pooling). Such operations limit performance: a multi-label document may contain several concepts, and a single vector cannot sufficiently capture its salient and discriminative content. Thus, we propose Hyperbolic Capsule Networks (HYPERCAPS) for Multi-Label Classification (MLC), which h…

Cited by 19 publications (4 citation statements)
References 15 publications
“…Moreover, Ganea et al [12] and Nickel & Kiela [30] introduce the basic operations of neural networks in the Poincaré ball and the Lorentz model respectively. After that, researchers further introduce various types of neural models in hyperbolic space including hyperbolic attention networks [14], hyperbolic graph neural networks [23,6], hyperbolic prototypical networks [26] and hyperbolic capsule networks [9]. Recently, with the rapid development of hyperbolic neural networks, people attempt to utilize them in various downstream tasks such as word embeddings [37], knowledge graph embeddings [7], entity typing [24], text classification [44], question answering [36] and machine translation [14,34], to handle their non-Euclidean properties, and have achieved significant and consistent improvement compared to the traditional neural models in Euclidean space.…”
Section: Related Work
confidence: 99%
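The excerpt above refers to the "basic operations" of neural networks in the Poincaré ball introduced by Ganea et al. As a hedged illustration (not the cited papers' own implementation), the two core operations are Möbius addition, which replaces vector addition inside the ball, and the exponential map at the origin, which lifts Euclidean vectors into the ball. A minimal NumPy sketch:

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    # Mobius addition in the Poincare ball of negative curvature -c:
    # the hyperbolic analogue of vector addition.
    xy = c * np.dot(x, y)
    x2 = c * np.dot(x, x)
    y2 = c * np.dot(y, y)
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    den = 1 + 2 * xy + x2 * y2
    return num / den

def expmap0(v, c=1.0):
    # Exponential map at the origin: maps a Euclidean (tangent) vector
    # into the open unit ball, so Euclidean features become hyperbolic points.
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

# Usage: lift two Euclidean vectors into the ball and combine them.
x = expmap0(np.array([0.3, 0.1]))
y = expmap0(np.array([-0.2, 0.4]))
z = mobius_add(x, y)  # stays strictly inside the unit ball
```

Adding the origin is the identity, and results remain inside the ball, which is what lets hyperbolic layers compose these operations safely.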
“…Capsule networks were proposed in [5] to learn the part-whole relationships of objects through iterative routing among different level capsules. The pipelines using capsule networks have achieved state-of-the-art results in some areas of Natural Language Processing (NLP) such as intent detection [13] and multi-label classification [14], as well as computer vision such as expression recognition [15] and low resolution image recognition [16]. Recently, an LSTM-CapsNet architecture was successfully proposed for EEG-based affective computing [6].…”
Section: Related Work
confidence: 99%
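The excerpt above describes iterative routing between capsule layers. For readers unfamiliar with the mechanism, here is a hedged sketch of the standard Euclidean routing-by-agreement procedure from Sabour et al. (the HYPERCAPS paper adapts routing to hyperbolic space; this is the plain version, with `u_hat`, `n_iter`, and shapes chosen for illustration):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity: preserves direction, maps the norm into [0, 1),
    # so a capsule's length can be read as an existence probability.
    n2 = np.sum(s * s, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def routing(u_hat, n_iter=3):
    # Routing-by-agreement over prediction vectors u_hat,
    # shape (n_in, n_out, dim): lower-level capsules vote for
    # higher-level capsules, and votes that agree are up-weighted.
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits
    for _ in range(n_iter):
        # Softmax over output capsules: each input distributes its vote.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum -> (n_out, dim)
        v = squash(s)                           # output capsule vectors
        b = b + np.einsum('iod,od->io', u_hat, v)  # agreement update
    return v
```

Because routing logits grow with the dot-product agreement between predictions and outputs, inputs belonging to the same "whole" converge on the same output capsule, which is what makes capsules attractive for multi-label settings where several concepts coexist in one document.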
“…These approaches may not be easily extended to those tasks without external knowledge. To this end, the multi-label text classification approaches can be quickly applied to emotion recognition (Chen et al. 2020; Fei et al. 2020). Recently, Yang et al. (2019) leverage a reinforced approach to find a better sequence than a baseline sequence, but it still relies on the pre-trained seq2seq model with a pre-defined order of ground-truth.…”
Section: Related Work
confidence: 99%