2021
DOI: 10.48550/arxiv.2109.04173
Preprint
Relating Graph Neural Networks to Structural Causal Models

Cited by 8 publications (12 citation statements). References 0 publications.
“…in the political or medical domain. We thus encourage anyone who uses VACA (or any other ML method for causal inference) to i) fully understand the model assumptions and to verify (up to the possible extend) that they are fulfilled; as well as ii) to be aware of the identifiability problem in counterfactual queries [49,45].…”
Section: Conclusion, Limitations and Impact
confidence: 99%
“…We stress that the causal graph can often be inferred from expert knowledge [52] or via one of the approaches for causal discovery [12,42]. With this analysis we aim to complement the concurrent line of research that theoretically studies the use of Neural Networks (NN) [45], and more recently GNNs [49], for causal inference.…”
Section: Introduction
confidence: 99%
“…Recent work [6] explores how to select trustworthy neighbors for GNN in the inference stage, and demonstrates its effectiveness in node classification tasks. [37] studies the connection between GNNs and SCM from a theoretical perspective. Different from them, we introduce a causal intervention strategy to mitigate the confounding effect for GNNs in the training stage.…”
Section: Causal Inference
confidence: 99%
“…The regularization term introduced in Conj.1 thereby penalizes CEM that would not be able to account for certain explanations to the posed single-why questions. Furthermore, Conj.1 poses an interesting direction of future research concerning more general classes of causal models that could enable a tighter integration between current practices in deep learning and causality, as in (Xia et al, 2021) for general Neural Causal Models (NCM) and (Zečević et al, 2021b) for Graph Neural Networks based NCM.…”
Section: Mathematical Foundations of Structural Causal Interpretations
confidence: 99%
“…Since the CEM is a sub-structure of the SCM as it "summarizes" the given structural equations to their causal effects for each of the edges within the SCM's induced graph, we can make the important observation that methods that in fact model the actual SCM (e.g. NCMs as in (Xia et al, 2021) or (Zečević et al, 2021b)) are more powerful than methods that model the sub-structure (CEM).…”
Section: A.3 Proofs for Theorem 2 and Proposition
confidence: 99%