2021
DOI: 10.3390/a14120352

A Sequential Graph Neural Network for Short Text Classification

Abstract: Short text classification is an important problem of natural language processing (NLP), and graph neural networks (GNNs) have been successfully used to solve different NLP problems. However, few studies employ GNN for short text classification, and most of the existing graph-based models ignore sequential information (e.g., word orders) in each document. In this work, we propose an improved sequence-based feature propagation scheme, which fully uses word representation and document-level word interaction and o…
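The abstract is truncated above, so the paper's exact propagation scheme is not visible here. As a rough illustration only, the sketch below shows one generic way a sequence-based scheme could propagate features along word-order edges in a per-document word graph and pool a document vector for classification; the class names, the GRU-style update, and the two propagation steps are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SequentialGraphLayer(nn.Module):
    """One round of feature propagation along word-order edges.

    Hypothetical sketch: each word node averages the states of its
    sequential neighbours (previous and next word), then a GRU-style
    update mixes that message into the node state.
    """
    def __init__(self, dim):
        super().__init__()
        self.update = nn.GRUCell(dim, dim)

    def forward(self, h):                          # h: (num_words, dim), in word order
        zero = torch.zeros_like(h[:1])
        left = torch.cat([zero, h[:-1]], dim=0)    # state of the previous word
        right = torch.cat([h[1:], zero], dim=0)    # state of the next word
        msg = (left + right) / 2                   # message from sequential neighbours
        return self.update(msg, h)                 # node-wise GRU update


class SequentialGNNClassifier(nn.Module):
    """Embeddings -> a few propagation steps -> mean-pooled document vector -> logits."""

    def __init__(self, vocab_size, dim, num_classes, steps=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.layers = nn.ModuleList(
            [SequentialGraphLayer(dim) for _ in range(steps)])
        self.classify = nn.Linear(dim, num_classes)

    def forward(self, token_ids):                  # token_ids: (num_words,), one document
        h = self.embed(token_ids)
        for layer in self.layers:
            h = layer(h)                           # propagate along word order
        return self.classify(h.mean(dim=0))        # document-level readout
```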

Cited by 12 publications (3 citation statements)
References 38 publications
“…Zhao et al. [24] proposed a novel approach to short text classification using a sequential graph neural network (SGNN). The approach aims to address the challenge of capturing the sequential dependencies and semantic relationships among words in short texts, which are often ignored by traditional methods.…”
Section: Applications of Natural Language Processing
confidence: 99%
“…One of the RNN models is bidirectional long short-term memory (BiLSTM). BiLSTM is advantageous for modeling sequence data; however, it cannot extract data characteristics in parallel [23]. Deep learning methods such as CNN and BiLSTM can be combined, exploiting the strengths of each, to obtain a more accurate deep learning model.…”
Section: Introduction
confidence: 99%
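For context, a minimal sketch of the kind of CNN-BiLSTM hybrid this statement describes might look as follows; the layer ordering (convolution before the BiLSTM), kernel size, and dimensions are illustrative assumptions, not taken from the cited work.

```python
import torch
import torch.nn as nn

class CNNBiLSTMClassifier(nn.Module):
    """Hypothetical CNN + BiLSTM hybrid: the convolution extracts local
    n-gram features in parallel, and the BiLSTM then models their order
    in both directions, combining the strengths of the two methods."""

    def __init__(self, vocab_size, emb_dim=128, channels=64,
                 hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hidden,
                            batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.embed(token_ids)                        # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))     # (batch, channels, seq_len)
        out, _ = self.lstm(x.transpose(1, 2))            # (batch, seq_len, 2 * hidden)
        return self.classify(out.mean(dim=1))            # pool over time, then classify
```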
“…In conventional CNNs, the vector z is fed directly into the classifier once the document representation is obtained, e.g., fully-connected neural networks [83,84]. From an architectural perspective, the conventional sequence-based or convolutional neural networks often used for text classification are limited by their nature to prioritize sequentiality and locality [85,86]. While these deep learning models capture semantic and syntactic information in Euclidean space and in local sequences well, they do not account for global word co-occurrences in a corpus, which carry non-consecutive and long-distance semantics [39,87].…”
Section: Convolutional Neural Network
confidence: 99%
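As a point of reference, the conventional pipeline this statement describes, a convolutional encoder whose pooled document vector z goes straight into a fully-connected classifier, can be sketched as below; the kernel size and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvTextClassifier(nn.Module):
    """Conventional CNN text classifier: convolution plus max-over-time
    pooling produces the document vector z, which is fed directly into a
    fully-connected classifier. Only local n-gram windows are visible to
    the model; corpus-level word co-occurrence is not represented."""

    def __init__(self, vocab_size, emb_dim=128, channels=100, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        feat = torch.relu(self.conv(x))              # local trigram features
        z = feat.max(dim=2).values                   # max-over-time pooling -> z
        return self.fc(z)                            # z goes straight to the classifier
```

Because the receptive field is only a few tokens wide, a model like this never sees corpus-level co-occurrence statistics, which is exactly the gap graph-based approaches such as the reviewed paper target.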