Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM 2020)
DOI: 10.1145/3340531.3411872

Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks

Abstract: Graph Convolutional Networks (GCNs) show promising results on semi-supervised learning tasks on graphs, making them a favorable choice over other approaches. Despite this remarkable success, GCNs are difficult to train with insufficient supervision: when labeled data are limited, their performance becomes unsatisfactory for low-degree nodes. While some prior work analyzes the successes and failures of GCNs at the whole-model level, profiling GCNs at the individual-node level is still underexplored. In t…
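The abstract's core observation — that a node classifier's accuracy degrades on low-degree nodes — can be checked by bucketing test nodes by degree and measuring accuracy per bucket. The sketch below is illustrative only (synthetic labels and predictions, assumed bucket boundaries), not the paper's method:

```python
# Hypothetical sketch: profile a node classifier's accuracy by node degree,
# in the spirit of the per-node analysis the abstract describes.
# All data below is synthetic; bucket boundaries are illustrative assumptions.
import numpy as np

def accuracy_by_degree(degrees, y_true, y_pred, buckets=(1, 3, 10)):
    """Return accuracy within each degree bucket [lo, hi)."""
    edges = list(buckets) + [np.inf]
    results = {}
    lo = 0
    for hi in edges:
        mask = (degrees >= lo) & (degrees < hi)
        if mask.any():
            results[(lo, hi)] = float((y_true[mask] == y_pred[mask]).mean())
        lo = hi
    return results

rng = np.random.default_rng(0)
degrees = rng.integers(1, 20, size=1000)       # synthetic node degrees
y_true = rng.integers(0, 2, size=1000)         # synthetic binary labels
# Simulate degree bias: predictions are noisier for low-degree nodes.
flip = rng.random(1000) < np.where(degrees < 3, 0.4, 0.1)
y_pred = np.where(flip, 1 - y_true, y_true)

stats = accuracy_by_degree(degrees, y_true, y_pred)
for (lo, hi), acc in sorted(stats.items()):
    print(f"degree [{lo}, {hi}): accuracy {acc:.3f}")
```

With the simulated bias above, the low-degree bucket shows visibly lower accuracy than the high-degree ones, which is the kind of gap the paper's per-node profiling is meant to surface.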


Cited by 87 publications (62 citation statements)
References 25 publications
“…We hope that new models will more regularly include datasets such as Hetionet during their development phase. More generally, and taking cues from the field of GNNs [25,26,44], new methods could be developed which consider how best to learn meaningful representations for low-degree entities.…”
Section: Discussion
confidence: 99%
“…The issue of non-uniform graph connectivity (typically in homogeneous graphs) has begun to be studied in parallel by the field of Graph Neural Networks (GNNs), where researchers have shown that models learn low-quality representations, and thus make more incorrect predictions, for low-degree vertices [26,25,44]. This has also been explored in the context of homogeneous graph representation learning [3] and for random walks [23,36].…”
Section: Previous Work
confidence: 99%
“…Graph representation learning. Graph representation learning (GRL) aims to learn representations suitable for graph-based tasks, mainly including node/graph classification [21,48,50,62], and link prediction [17]. Early methods focused on non-attributed graphs, leveraging insights from language modeling [39] to learn embeddings which preserve node co-occurrence statistics on random walks [40].…”
Section: Related Work
confidence: 99%
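The citation statement above mentions early graph representation learning methods that preserve node co-occurrence statistics on random walks (the DeepWalk family). A minimal sketch of that corpus-generation step, on an assumed toy graph with illustrative parameters rather than any cited method's actual implementation:

```python
# Minimal DeepWalk-style sketch: uniform random walks on an adjacency list,
# plus window co-occurrence counts that an embedding method would then
# preserve. The toy graph, walk length, and window size are assumptions.
import random
from collections import Counter

def random_walk(adj, start, length, rng):
    """Uniform random walk of up to `length` nodes starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

def cooccurrence(walks, window):
    """Count unordered node pairs appearing within `window` steps of each other."""
    counts = Counter()
    for walk in walks:
        for i, u in enumerate(walk):
            for v in walk[max(0, i - window): i]:
                counts[(min(u, v), max(u, v))] += 1
    return counts

rng = random.Random(0)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # toy undirected graph
walks = [random_walk(adj, s, length=8, rng=rng) for s in adj for _ in range(5)]
counts = cooccurrence(walks, window=2)
print(counts.most_common(3))
```

Note that node 3, the lowest-degree vertex here, can only be reached through node 2, so it appears in fewer co-occurrence pairs — a small-scale illustration of why random-walk methods also struggle with low-degree nodes, as the quoted passage notes.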
“…Like other machine learning models, GNNs can suffer from typical unfairness issues which may arise due to sensitive attributes, label parity issues, and more [55]. Moreover, GNNs can also suffer from degree-related biases [56]. Self-supervised learning (SSL) is often used to learn high-quality representations without supervision from labeled data sources, and is especially useful in low-resource settings or in pre-training/fine-tuning scenarios.…”
Section: G Broader Impact
confidence: 99%