2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892257

Contrastive Learning with Heterogeneous Graph Attention Networks on Short Text Classification

Abstract: Graph neural networks (GNNs) have attracted extensive interest in text classification tasks due to their expected superior performance in representation learning. However, most existing studies adopted the same semi-supervised learning setting as the vanilla Graph Convolution Network (GCN), which requires a large amount of labelled data during training and thus is less robust when dealing with large-scale graph data with fewer labels. Additionally, graph structure information is normally captured by direct inf…
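The abstract pairs contrastive learning with a heterogeneous graph attention network but is truncated above, so as a rough orientation, below is a minimal sketch of the InfoNCE-style contrastive objective such graph CL methods typically optimise between two views of the same nodes. The function name `info_nce` and the temperature `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    # z1, z2: (N, d) embeddings of the same N nodes under two graph views;
    # row i of z1 and row i of z2 form the positive pair, while all other
    # rows of z2 serve as negatives (NT-Xent / InfoNCE form).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# toy usage with random embeddings standing in for two GNN encoder outputs
loss = info_nce(torch.randn(8, 16), torch.randn(8, 16))
```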

Cited by 5 publications (3 citation statements) | References 35 publications (61 reference statements)
“…Label information is propagated through message passing on the built graph. Inspired by the success of CL in unsupervised representation learning, recent studies (Su et al. 2022) explore the potential of CL based on GNNs to leverage the self-supervised signals present in the unlabeled data, aiding in extracting useful features. However, their effectiveness relies heavily on the generated contrastive views, and the way the views are generated can easily lead to incorrect self-supervised signals that misguide model learning.…”
Section: Related Work
Confidence: 99%
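Since the statement above stresses that effectiveness hinges on how contrastive views are generated, here is a minimal sketch of one common view-generation recipe: random feature masking plus edge dropping. The function and the rates `p_feat`/`p_edge` are hypothetical defaults, not the cited method's actual augmentation.

```python
import torch

def corrupt_view(x, edge_index, p_feat=0.2, p_edge=0.2):
    # x: (N, d) node features; edge_index: (2, E) COO edge list.
    # Mask a random subset of feature dimensions across all nodes...
    feat_mask = (torch.rand(x.size(1)) > p_feat).float()
    x_view = x * feat_mask
    # ...and independently drop a random subset of edges.
    keep = torch.rand(edge_index.size(1)) > p_edge
    return x_view, edge_index[:, keep]

# two stochastic views of the same graph would feed the contrastive loss
x, ei = torch.randn(8, 16), torch.randint(0, 8, (2, 20))
view1, view2 = corrupt_view(x, ei), corrupt_view(x, ei)
```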
“…Therefore, short text classification (STC), as a highly challenging task that attracts tremendous attention from researchers, has a wide range of practical applications, such as news classification (Chen et al. 2019), sentiment analysis (Yao, Mao, and Luo 2019), and social media analysis (Liu et al. 2021, 2023a). Recently, some studies (Su et al. 2022) have attempted to integrate graph neural networks (GNNs) with contrastive learning (CL) for solving STC tasks, with promising results. In these approaches, a corpus-level graph is constructed, incorporating latent topics, words, or entities as nodes.…”
Section: Introduction
Confidence: 99%
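The statement above describes building a corpus-level graph whose nodes mix documents with latent topics, words, or entities. As a deliberately simplified illustration, the sketch below links each document node to the distinct word nodes it contains; real STC graphs also add topic/entity nodes and weight edges (e.g., by TF-IDF or PMI).

```python
from collections import defaultdict

def build_corpus_graph(docs):
    # Bipartite corpus graph: document nodes get ids 0..len(docs)-1,
    # word nodes are numbered after them as they are first seen.
    word_id = defaultdict(lambda: len(word_id))
    edges = []
    for d, text in enumerate(docs):
        for w in set(text.lower().split()):
            edges.append((d, len(docs) + word_id[w]))  # doc -> word edge
    return edges, dict(word_id)

edges, vocab = build_corpus_graph(
    ["short text needs context", "contrastive learning on graphs"]
)
```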
“…Additionally, as there is a limited amount of training data available in medical VQA, we can apply graph generative methods [32] to enhance the generalisation ability of models.
• Graph representation learning methods can be introduced to the question embeddings, such as heterogeneous graph neural networks for different words [33], [34], [35].
• External knowledge, such as knowledge graphs, can be considered [36], [37], so that the model can understand questions and implement inference by connecting questions to knowledge graphs.…”
Section: Future Work
Confidence: 99%
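As that future-work note (and the cited paper's title) points at heterogeneous GNNs that treat different node types differently, below is a toy per-type attention layer: each node type gets its own projection before additive attention pools the neighbour messages. The class, its shapes, and the type names are illustrative assumptions, not any cited architecture.

```python
import torch
import torch.nn as nn

class HeteroAttnLayer(nn.Module):
    # Toy heterogeneous attention: one linear projection per node type,
    # then additive attention over the projected neighbour messages.
    def __init__(self, type_dims, out_dim):
        super().__init__()
        self.proj = nn.ModuleDict(
            {t: nn.Linear(d, out_dim) for t, d in type_dims.items()}
        )
        self.attn = nn.Linear(2 * out_dim, 1)

    def forward(self, h_dst, neigh):
        # h_dst: (out_dim,) target node state; neigh: list of (type, (k, d_t)).
        msgs = torch.cat([self.proj[t](x) for t, x in neigh], dim=0)  # (K, out)
        q = h_dst.unsqueeze(0).expand(msgs.size(0), -1)
        scores = self.attn(torch.cat([q, msgs], dim=-1))              # (K, 1)
        return (torch.softmax(scores, dim=0) * msgs).sum(0)           # (out,)

layer = HeteroAttnLayer({"word": 16, "entity": 24}, out_dim=32)
out = layer(torch.randn(32), [("word", torch.randn(3, 16)),
                              ("entity", torch.randn(2, 24))])
```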