2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00801
Quantifying Explainers of Graph Neural Networks in Computational Pathology

Cited by 67 publications (55 citation statements)
References 32 publications
“…Second, in an off-line step, we employ a discriminative feature attribution technique to measure importance scores ∀v ∈ V towards the classification of each class. Specifically, we use GraphGrad-CAM [15,21], a version of Grad-CAM [24] that can operate with GNNs. Argmax across class-wise node attribution maps from GraphGrad-CAM determines the node labels.…”
Section: Methods
confidence: 99%
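The GraphGrad-CAM procedure quoted above (gradient-weighted pooling of node-embedding channels at a GNN layer, followed by an argmax over the class-wise node maps) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: `model`, `graph`, and the `last_node_emb` attribute exposing the final message-passing layer's node embeddings are hypothetical placeholders to adapt to a concrete architecture.

```python
import torch

def graph_gradcam(model, graph, num_classes):
    """Grad-CAM-style node attribution for a graph-level GNN classifier.

    Sketch only: assumes `model(graph)` returns [1, num_classes] logits
    and stores the last GNN layer's node embeddings (shape
    [num_nodes, hidden]) in `model.last_node_emb` during the forward pass.
    """
    logits = model(graph)
    emb = model.last_node_emb
    cams = []
    for c in range(num_classes):
        # Gradient of the class-c logit w.r.t. the node embeddings.
        grad, = torch.autograd.grad(logits[0, c], emb, retain_graph=True)
        # Grad-CAM channel weights: average the gradient over all nodes.
        alpha = grad.mean(dim=0)                        # [hidden]
        # Weighted sum over channels, rectified: one score per node.
        cams.append(torch.relu((emb * alpha).sum(dim=1)))
    node_scores = torch.stack(cams, dim=1)              # [num_nodes, num_classes]
    # Argmax across class-wise attribution maps yields node labels.
    return node_scores, node_scores.argmax(dim=1)
```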
“…ASR*, Accuracy [28]; Accuracy [29], Faithfulness [30]; ASR*, Data Leakage [31]; Group/Individual Fairness [32], [33]; Standard Evaluations [34]; Inference Time [35], Nodes-Per-Joule [36] are commonly used metrics for evaluating GNN explanations. In specific applications, metrics based on domain knowledge (e.g., correlated separability [59] in computational pathology) are also used to measure explanations. Research Differences.…”
Section: Trustworthy GNNs
confidence: 99%
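Among the metrics listed above, faithfulness-style scores are among the simplest to compute: ablate the nodes an explainer marks as important and measure how much the model's confidence drops. A minimal sketch under that definition, where `ablate_nodes` is a hypothetical helper (not from any library) that returns a copy of the graph with the given nodes' features zeroed:

```python
import torch

def fidelity(model, graph, important_nodes, target_class, ablate_nodes):
    """Confidence drop after removing explainer-selected nodes (sketch)."""
    with torch.no_grad():
        p_full = torch.softmax(model(graph), dim=-1)[0, target_class]
        ablated = ablate_nodes(graph, important_nodes)  # hypothetical helper
        p_ablated = torch.softmax(model(ablated), dim=-1)[0, target_class]
    # Higher is better: a faithful explanation marks nodes whose removal
    # strongly reduces the predicted class probability.
    return (p_full - p_ablated).item()
```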
“…Despite their great success, GNNs are generally treated as black boxes since their decisions are poorly understood [145,79], leading to increasing concerns about the explainability of GNNs. It is hard to fully trust GNN-based models without interpretations of their predictions, which restricts their application in high-stakes scenarios such as clinical diagnosis [56,147] and legal domains [142]. Hence, it is imperative to develop explanation techniques that improve the transparency of GNNs.…”
Section: Explainability
confidence: 99%
“…However, there are increasing concerns about the reliability of GNN-based models, as they are treated as black boxes. Although several GNN explanation methods have been applied to digital-pathology tasks [56,147,57], it remains challenging to develop explanation methods that align with the domain knowledge of clinical practitioners.…”
Section: Applications
confidence: 99%