2020
DOI: 10.1007/978-981-15-7984-4_16
Over-Smoothing Algorithm and Its Application to GCN Semi-supervised Classification

Cited by 5 publications (1 citation statement)
References 11 publications
“…The best f1 score (0.62) is achieved with the Random Forest algorithm. The traditional models outperform GCN (f1 score of 0.46), and the main cause of this may lie in over-smoothing (Dai, Guo, and Feng 2020). This refers to a situation where, during the training process, the node representations become too similar or indistinguishable across different nodes in the graph.…”
Section: Models Performance
confidence: 99%
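
The following is a minimal sketch of the effect this statement describes (not code from the cited paper or the citing study; the toy graph, random features, and layer count are invented for illustration): repeatedly multiplying node features by the GCN propagation matrix D^{-1/2}(A+I)D^{-1/2} pulls every node's representation toward a shared direction, so representations become nearly indistinguishable.

```python
import numpy as np

# Toy undirected graph (invented for illustration): two triangles
# joined by a single bridge edge.
n = 6
A = np.zeros((n, n))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# GCN-style propagation matrix with self-loops:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

def mean_pairwise_cosine(X):
    """Average cosine similarity over all node pairs (1.0 = identical directions)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = Xn @ Xn.T
    iu = np.triu_indices(len(X), k=1)
    return sims[iu].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 4))  # random initial node features

print(f"layer  0: mean pairwise cosine = {mean_pairwise_cosine(X):.4f}")
for layer in range(1, 11):
    X = A_hat @ X  # one propagation step (weights/nonlinearity omitted)
    print(f"layer {layer:2d}: mean pairwise cosine = {mean_pairwise_cosine(X):.4f}")
# The similarity climbs toward 1.0: after enough propagation steps the
# node representations point in nearly the same direction, i.e. they
# become hard to distinguish, which is the over-smoothing described above.
```

Because the toy graph is connected and self-loops keep all other eigenvalues of the propagation matrix strictly inside the unit interval, the printed mean pairwise cosine similarity rises toward 1.0 as propagation steps accumulate, matching the intuition in the quoted statement.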