2021
DOI: 10.48550/arxiv.2101.03501
Preprint

Entropic Causal Inference: Identifiability and Finite Sample Results

Abstract: Entropic causal inference is a framework for inferring the causal direction between two categorical variables from observational data. The central assumption is that the amount of unobserved randomness in the system is not too large. This unobserved randomness is measured by the entropy of the exogenous variable in the underlying structural causal model, which governs the causal relation between the observed variables. [15] conjectured that the causal direction is identifiable when the entropy of the exogenous…
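As a rough illustration of the framework the abstract describes, here is a minimal Python sketch (hypothetical, not the paper's released code) of the entropic direction test: for each candidate direction it estimates the smallest exogenous entropy H(E) compatible with the observed joint distribution, using the greedy minimum-entropy-coupling heuristic of Kocaoglu et al. (2017), and prefers the direction with the smaller total H(cause) + H(E). The function names and the tolerance are illustrative assumptions.

```python
# Hypothetical sketch, not the authors' code: entropic causal inference
# prefers the direction that explains the data with the least exogenous
# randomness. The minimum exogenous entropy for X -> Y equals the
# minimum-entropy coupling of the conditionals {P(Y | X = x)}, which is
# NP-hard in general; the greedy heuristic of Kocaoglu et al. (2017)
# is used below as an approximation.
import heapq
import math

def shannon_entropy(p):
    """Entropy in bits of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def greedy_mec_entropy(conditionals, tol=1e-12):
    """Greedily approximate the entropy of the minimum-entropy coupling
    of the given probability vectors (each summing to 1)."""
    # One max-heap per conditional distribution (values negated).
    heaps = [[-q for q in dist if q > 0] for dist in conditionals]
    for h in heaps:
        heapq.heapify(h)
    entropy, remaining = 0.0, 1.0
    while remaining > tol:
        # The next coupling atom gets the smallest of the current maxima.
        r = min(-h[0] for h in heaps)
        for h in heaps:
            top = -heapq.heappop(h)
            if top - r > tol:
                heapq.heappush(h, -(top - r))
        entropy -= r * math.log2(r)
        remaining -= r
    return entropy

def infer_direction(joint):
    """joint[i][j] = P(X=i, Y=j). Compare H(X) + H(E) for X -> Y against
    H(Y) + H(E') for Y -> X and return the lower-entropy direction."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    cond_y_x = [[joint[i][j] / px[i] for j in range(len(py))]
                for i in range(len(px)) if px[i] > 0]
    cond_x_y = [[joint[i][j] / py[j] for i in range(len(px))]
                for j in range(len(py)) if py[j] > 0]
    score_xy = shannon_entropy(px) + greedy_mec_entropy(cond_y_x)
    score_yx = shannon_entropy(py) + greedy_mec_entropy(cond_x_y)
    return "X -> Y" if score_xy < score_yx else "Y -> X"

# Example: X uniform on {0,1,2}, Y = 1 iff X = 2 (a many-to-one function),
# so X -> Y needs no exogenous noise while Y -> X does.
joint = [[1/3, 0.0], [1/3, 0.0], [0.0, 1/3]]
print(infer_direction(joint))  # X -> Y
```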

Cited by 3 publications (4 citation statements) | References 13 publications
“…There exist many proposals for discovering causal networks from a single (typically observational) i.i.d. dataset (Spirtes et al. 2000; Chickering 2002; Huang et al. 2018; Compton et al. 2021), which discover partially directed causal networks. While Mian, Marx, and Vreeken (2021) propose an approach to discover fully directed networks, their method is restricted to a single dataset and cannot handle interventions.…”
Section: Related Work
Mentioning confidence: 99%
“…However, in general, even if both X → Y and Y → X are true, the two models usually have different complexities in terms of the potential random variables X_y and Y_x. See Kocaoglu et al. (2017), Compton et al. (2021) for examples. Hence, given a joint distribution, it is an interesting question to ask if one or both relationships X → Y and Y → X hold true under various assumptions on the model.…”
Section: A Model for Causation
Mentioning confidence: 99%
“…The effect of a cause has been quantified using information theory (Wieczorek and Roth 2019), though without considering learning in a Bayesian setting. Entropic causal inference (see Compton et al. 2021) specifies circumstances where the causal direction between categorical variables can be determined from observational data under assumptions of limited entropy. Information geometry has been proposed as a means to infer causal orientation by relying on distributional assumptions (Janzing et al. 2012); however, both settings are different from the Bayesian learning setting considered here.…”
Section: Related Work
Mentioning confidence: 99%
“…This assumption is not innocent, as it is violated in work which assumes the causal orientation can be identified from observational data (Compton et al. 2021). This could be implemented by specifying (P(x|y, h), P(y|h)) and (P(x|y, ¬h), P(y|¬h)) separately.…”
Section: Information Gain
Mentioning confidence: 99%