2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892866
On Calibration of Graph Neural Networks for Node Classification

Cited by 3 publications (6 citation statements) · References 6 publications
“…The generally reported underconfidence of GCNs [27, 28] was in line with our results for the GCNConv operator. We observed overconfidence to a higher degree using the GCNConv operator than the GraphConv operator.…”
Section: Discussion (supporting)
confidence: 92%
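The under- and overconfidence discussed in this excerpt is usually quantified with the expected calibration error (ECE): the gap between mean confidence and accuracy, averaged over confidence bins. A minimal sketch, assuming softmax outputs from any node classifier; the function name and equal-width binning are illustrative choices, not taken from the cited works:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE over predicted-class confidences with equal-width bins.

    probs:  (n_nodes, n_classes) softmax probabilities.
    labels: (n_nodes,) integer ground-truth classes.
    """
    conf = probs.max(axis=1)            # confidence of the predicted class
    pred = probs.argmax(axis=1)         # predicted class per node
    acc = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |mean accuracy - mean confidence|, weighted by bin population;
            # accuracy above confidence in a bin indicates underconfidence.
            ece += mask.mean() * abs(acc[mask].mean() - conf[mask].mean())
    return ece
```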
“…Lalitha et al. [40], Rizk et al. [41], and Caldarola et al. [42] are among the trailblazing works that proposed to model clients as nodes in a graph, where collaborative training is analogous to neighborhood aggregation in graph data learning. FedGraphNN and Liu et al. [43, 44] are prominent benchmark surveys that have contributed to examining the applications of, and theoretical insights into, GNN-based FL across graphs in diverse data domains. However, graph FL encounters unique challenges stemming from graph-specific heterogeneities, such as inconsistent node- and edge-level semantics.…”
Section: Related Work (mentioning)
confidence: 99%
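The clients-as-nodes framing in this excerpt can be made concrete: one collaborative round then looks like mean neighborhood aggregation over the client graph. A minimal sketch under that assumption; the helper name and the plain averaging rule are hypothetical stand-ins, not the schemes of the cited papers:

```python
import numpy as np

def neighbor_aggregate(params, adjacency):
    """One round of graph-weighted parameter averaging.

    params:    (n_clients, dim) array, one flattened model per client.
    adjacency: (n_clients, n_clients) 0/1 matrix over the client graph.
    Each client averages its parameters with its neighbors', mirroring
    mean neighborhood aggregation in GNN message passing.
    """
    # Add self-loops so every client keeps a share of its own model.
    a = adjacency + np.eye(adjacency.shape[0])
    # Row-normalize: each client takes the mean over itself + neighbors.
    a = a / a.sum(axis=1, keepdims=True)
    return a @ params
```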
“…While DNNs lean towards overconfidence, GNNs tend to exhibit underconfidence. In GNN calibration, there are generally two types of loss functions that regularize the training phase: the graph calibration loss (GCL) (Wang, Yang, and Cheng 2022) and the confidence-reward loss (CRL) (Liu et al. 2022). The post-hoc calibration methods in GNNs, such as CaGCN (Wang et al. 2021), RBS (Liu et al. 2022), and GATS (Hsu et al. 2022), train additional networks on the validation set. They use the GNN output, such as logits, as the input for the calibration model and generate node-specific temperature coefficients.…”
Section: GNN Calibration (mentioning)
confidence: 99%
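The post-hoc recipe summarized here, a small calibration network trained on held-out validation nodes that maps GNN logits to node-specific temperatures, can be sketched as below. The module, hidden size, and training loop are illustrative, not the actual CaGCN/RBS/GATS architectures:

```python
import torch
import torch.nn as nn

class NodeTemperatureScaler(nn.Module):
    """Maps each node's logits to a positive, node-specific temperature."""
    def __init__(self, n_classes, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_classes, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, logits):
        # softplus keeps temperatures positive; the offset avoids dividing
        # by a near-zero temperature.
        t = nn.functional.softplus(self.net(logits)) + 1e-3
        return logits / t   # calibrated logits, one temperature per node

def fit_on_validation(scaler, logits_val, y_val, epochs=200, lr=1e-2):
    """Fit the scaler on validation nodes with NLL; the base GNN stays
    frozen, as is standard for post-hoc calibration."""
    opt = torch.optim.Adam(scaler.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(scaler(logits_val), y_val)
        loss.backward()
        opt.step()
    return scaler
```

Dividing logits by a positive per-node temperature never changes the argmax, so the predicted class is preserved while confidence is reshaped; this is what makes fitting after the base GNN is frozen safe.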