2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9533981

A Generative Bayesian Graph Attention Network for Semi-Supervised Classification on Scarce Data

Abstract: This research focuses on semi-supervised classification tasks, specifically for graph-structured data under data-scarce conditions. Conventional supervised graph convolutional models are known to perform poorly at classification when only a small fraction of the nodes are labeled. Additionally, most existing graph neural network models ignore the noise introduced during graph generation and treat all relations between objects as genuine ground truth. Hence, the missing edges may n…

Cited by 4 publications (4 citation statements). References 11 publications.
“…GATs can be trained using standard neural network techniques, such as backpropagation. GATs have been shown to outperform traditional graph neural network architectures on a variety of tasks, such as node classification [33] and link prediction [34], [35]. Additionally, GATs are computationally efficient, making them well-suited for large-scale graph data.…”
Section: A Robust GAT Based Model For Structure Recognition: Design A…
confidence: 99%
“…At the same time, we use the MLP-based model to learn more graph structure information, without an explicit message-passing function. To be more specific, the k-hop neighbours are considered more similar to the target node, where the k-th power of the neighbouring information is taken for k in {1, 2, …, 7}. If a neighbouring node is not within k hops of the target node, its information is considered zero.…”
Section: Model Training
confidence: 99%
“…On the one hand, most existing studies of GNNs on text classification tasks are trained in a semi-supervised manner, like the vanilla Graph Convolutional Network (GCN) [5], and thus require a large set of labelled data, which cannot be obtained in many real-life scenarios. The shortage of labelled data may therefore undermine the performance of graph neural network models on classification tasks, particularly with large-scale data [6], [7]. On the other hand, although a GCN can encode local topological properties, it may fail to fully capture global structural information [8].…”
Section: Introduction
confidence: 99%
“…For instance, a state-of-the-art model, Vision-Transformer [31], can be deployed for image feature extraction to improve the current results. Additionally, as there is a limited amount of training data available in medical VQA, we can apply graph generative methods [32] to enhance the generalisation ability of models. • Graph representation learning methods can be introduced to the question embeddings, such as heterogeneous graph neural networks for different words [33], [34], [35].…”
Section: Future Work
confidence: 99%