Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020
DOI: 10.1145/3340531.3412139

Label-Aware Graph Convolutional Networks

Cited by 110 publications (160 citation statements)
References 4 publications
“…One problem is that, as the number of layers in a graph convolutional network increases, repeated graph convolution operations drive the representations of all nodes toward the same value, reducing performance [29, 30, 31]. Another problem is that, when label information is limited, it cannot be propagated to the entire graph, resulting in low performance [32, 33, 34].…”
Section: Introduction
confidence: 99%
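The over-smoothing behaviour described in the excerpt above can be sketched numerically: repeatedly applying the symmetrically normalized propagation matrix (the core of a graph convolution, with weights and nonlinearities omitted) collapses the spread of node features across the graph. The toy 4-node graph and feature dimensions below are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

# Toy 4-node undirected graph (adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                        # add self-loops (renormalization trick)
D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)   # D̃^{-1/2}
P = D_inv_sqrt @ A_hat @ D_inv_sqrt          # normalized propagation matrix P̃

H = np.random.RandomState(0).randn(4, 2)     # random 2-dim node features
spread0 = H.std(axis=0).sum()                # per-feature spread across nodes

for _ in range(50):                          # 50 propagation steps, no learned weights
    H = P @ H
spread50 = H.std(axis=0).sum()

print(spread0, spread50)                     # spread shrinks sharply after propagation
```

After many propagation steps the features are dominated by the top eigenvector of P̃, so node representations become nearly indistinguishable — the convergence problem the citing papers attribute to deep GCNs.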
“…We apply the SAGNN on datasets Twitter15 and Twitter16 to evaluate its performance. A GCNII (Graph Convolutional Network via Initial residual and Identity mapping) network [21] serves as the baseline to assess the proposed algorithm. In this experiment, the SAGNN has two aggregation layers, unlike the one shown in Figure 4, in which only a single aggregation layer is shown for clarity.…”
Section: Network Setup
confidence: 99%
“…We upgrade the GCN layer of graph U-Net to the GCNII layer [31]; the forward-propagation formula is defined as:…”
Section: Improved Information Aggregation Layer BGNN
confidence: 99%
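For context, a sketch of the GCNII forward-propagation rule the excerpt refers to, reconstructed from the GCNII paper (cited as [21]/[31] above) rather than quoted from the citing work; here α_ℓ and β_ℓ are GCNII's initial-residual and identity-mapping hyperparameters and H^(0) is the initial node representation:

```latex
H^{(\ell+1)} = \sigma\!\left( \left( (1-\alpha_\ell)\,\tilde{P}\,H^{(\ell)} + \alpha_\ell\,H^{(0)} \right)
\left( (1-\beta_\ell)\,I_n + \beta_\ell\,W^{(\ell)} \right) \right),
\qquad \tilde{P} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}
```

The initial-residual term (α_ℓ H^(0)) and the identity mapping ((1−β_ℓ)I_n) are the two mechanisms GCNII uses to counter the over-smoothing problem discussed in the other excerpts.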
“…To ensure that the back-propagation and gradient-descent algorithm works, we not only change the original skip-connection scheme of graph U-Net from residual to dense, but also add a deep-supervision mechanism to the loss function using the four outputs of the different encoders. The graph U-Net+ extends the information-aggregation approach by combining GCNII [31] and the bilinear information aggregator [32] in all encoding and decoding blocks. In addition, we introduce a novel graph-normalization technique named NodeNorm [33] to eliminate the over-smoothing problem with Fig.…”
Section: Introduction
confidence: 99%