2020
DOI: 10.48550/arxiv.2009.03509
Preprint

Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification

Abstract: Graph convolutional networks (GCN) and label propagation algorithms (LPA) are both message passing algorithms, which have achieved superior performance in semi-supervised classification. GCN performs feature propagation through a neural network to make predictions, while LPA propagates labels across the graph adjacency matrix to obtain results. However, there is still no good way to combine these two kinds of algorithms. In this paper, we propose a new Unified Message Passing model (UniMP) that can incorporate feat…
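The graph transformer operator proposed in this paper is available in PyTorch Geometric as TransformerConv. The following is a minimal sketch, assuming PyTorch Geometric is installed, of how the masked-label-prediction idea from the abstract could be wired up: training labels are embedded, partially masked, added to the node features, and propagated with graph transformer layers. The UniMPSketch class, layer sizes, and masking interface are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import TransformerConv


class UniMPSketch(torch.nn.Module):
    """Hypothetical UniMP-style model: node features plus masked label embeddings."""

    def __init__(self, in_dim, hidden_dim, num_classes, heads=4):
        super().__init__()
        # One extra embedding row serves as the "unknown label" token.
        self.label_emb = torch.nn.Embedding(num_classes + 1, in_dim)
        self.conv1 = TransformerConv(in_dim, hidden_dim, heads=heads)
        self.conv2 = TransformerConv(hidden_dim * heads, num_classes, heads=1)

    def forward(self, x, edge_index, y, label_mask):
        # Hide labels wherever label_mask is False (unlabeled or masked nodes).
        unknown = torch.full_like(y, self.label_emb.num_embeddings - 1)
        y_input = torch.where(label_mask, y, unknown)
        x = x + self.label_emb(y_input)           # inject (partial) label information
        x = F.relu(self.conv1(x, edge_index))     # attention-based message passing
        return self.conv2(x, edge_index)          # class logits per node
```

During training, a random subset of labeled nodes would have their entries in label_mask set to False, so the model learns to recover those labels from neighbouring features and labels; that is the "masked label prediction" objective the title refers to.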

Cited by 81 publications (109 citation statements) | References 19 publications
“…However, the aforementioned models are required to load all the graph data into memory at the same time, making it impossible to train larger relation networks. Gilmer et al (2017) Shi et al (2020) further show that using a Transformer-like operator to aggregate node features and the neighbor nodes' features gives better performance than a simple average such as GraphSAGE or an attention mechanism such as Graph Attention Network (Veličković et al, 2017). To close the gap in the research, we propose GTN-VF, which adopts state-of-the-art Graph Neural Networks to model the relationships.…”
Section: Graph Neural Network (mentioning)
confidence: 99%
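To make the comparison in the quoted passage concrete, here is an illustrative (toy-sized, assumed dimensions) side-by-side of the three aggregation styles it mentions, using their PyTorch Geometric implementations: mean aggregation (GraphSAGE), additive attention (GAT), and the Transformer-like dot-product attention of Shi et al. (2020).

```python
import torch
from torch_geometric.nn import SAGEConv, GATConv, TransformerConv

x = torch.randn(5, 16)                                    # 5 nodes, 16 features each
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0]])              # toy ring graph

sage = SAGEConv(16, 32)                   # mean of neighbour features
gat = GATConv(16, 32, heads=2)            # additive attention per edge
trans = TransformerConv(16, 32, heads=2)  # query/key/value dot-product attention

print(sage(x, edge_index).shape)   # torch.Size([5, 32])
print(gat(x, edge_index).shape)    # torch.Size([5, 64]) -- heads concatenated
print(trans(x, edge_index).shape)  # torch.Size([5, 64])
```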
“…At the second stage we construct a fully-connected graph from all polyline representations d and apply graph transformer convolution [18]:…”
Section: Multimodal Model (mentioning)
confidence: 99%
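A minimal sketch of the step described in that passage, assuming each polyline has already been encoded into a vector d_i; the variable names and sizes below are illustrative, not taken from the cited model.

```python
import torch
from torch_geometric.nn import TransformerConv

num_polylines, dim = 8, 64
d = torch.randn(num_polylines, dim)  # polyline representations d_i

# Fully-connected graph: an edge between every ordered pair of distinct nodes.
idx = torch.arange(num_polylines)
row, col = torch.meshgrid(idx, idx, indexing="ij")
keep = row != col
edge_index = torch.stack([row[keep], col[keep]], dim=0)

conv = TransformerConv(dim, dim, heads=4, concat=False)  # graph transformer convolution
out = conv(d, edge_index)  # fused polyline representations, shape [8, 64]
```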
“…After encoding the geometric tensors into SO(3)-invariant scalars t_ij, we first embed them along with other pre-defined node/edge attributes (h_j, e_ij) into high-dimensional representations, and leverage an attention-based Graph Transformer architecture (Shi et al., 2020) to learn the SO(3)-invariant edgewise message embeddings m_ij by propagating and aggregating information on the graph G_X. The attention mechanism is introduced due to its powerful capacity in modeling graphs with unknown topology.…”
Section: Graph Transformer Block (mentioning)
confidence: 99%
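A hedged sketch of the idea in that passage: invariant edge scalars t_ij and pre-defined edge attributes e_ij are embedded and fed into an attention-based graph transformer layer through its edge_dim argument. The cited model learns edgewise message embeddings m_ij; the code below only shows how edge attributes can enter the attention-based aggregation, which is a simplification, and all dimensions and the joint MLP embedding are assumptions.

```python
import torch
from torch_geometric.nn import TransformerConv

num_nodes, num_edges = 6, 12
h = torch.randn(num_nodes, 32)                    # embedded node attributes h_j
t = torch.randn(num_edges, 8)                     # SO(3)-invariant scalars t_ij
e = torch.randn(num_edges, 8)                     # pre-defined edge attributes e_ij
edge_index = torch.randint(0, num_nodes, (2, num_edges))  # toy graph G_X

edge_mlp = torch.nn.Linear(16, 32)                # jointly embed [t_ij, e_ij]
edge_attr = edge_mlp(torch.cat([t, e], dim=-1))

conv = TransformerConv(32, 32, heads=4, concat=False, edge_dim=32)
out = conv(h, edge_index, edge_attr=edge_attr)    # attention-aggregated node states
```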