2020
DOI: 10.3389/fdata.2020.535976

Causal Learning From Predictive Modeling for Observational Data

Abstract: We consider the problem of learning structured causal models from observational data. In this work, we use causal Bayesian networks to represent causal relationships among model variables. To this effect, we explore the use of two types of independencies: context-specific independence (CSI) and mutual independence (MI). We use CSI to identify the candidate set of causal relationships and then use MI to quantify their strengths and construct a causal model. We validate the learned models on benchmark networks an…
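The abstract outlines a two-stage idea: CSI screens candidate causal relationships, and MI then quantifies their strength. As a rough illustration (not the paper's algorithm), the sketch below scores a supplied set of candidate parent-child pairs by empirical mutual information on discrete data; the candidate list and column names are hypothetical, and scikit-learn's mutual_info_score is assumed to be available.

```python
# Minimal sketch (not the paper's algorithm): score candidate parent -> child
# pairs of a discrete dataset by empirical mutual information.
import pandas as pd
from sklearn.metrics import mutual_info_score

def score_candidate_edges(df: pd.DataFrame, candidates):
    """Score (parent, child) column pairs by empirical mutual information.

    `candidates` stands in for the CSI-screened relationships described in
    the abstract; here it is simply an iterable of column-name pairs."""
    scores = {}
    for parent, child in candidates:
        # Mutual information between two discrete columns of the data.
        scores[(parent, child)] = mutual_info_score(df[parent], df[child])
    # Stronger (higher-MI) relationships first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage with hypothetical columns:
# df = pd.DataFrame({"A": [0, 1, 1, 0], "B": [0, 1, 1, 0], "C": [1, 0, 1, 0]})
# print(score_candidate_edges(df, [("A", "B"), ("A", "C")]))
```

In the paper's setting the candidate pairs would come from the CSI-based screening step rather than being listed by hand.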

Cited by 13 publications (8 citation statements)
References 42 publications
“…We compared the DARLS algorithm to the following DAG structure learning methods: the standard greedy hill climbing (HC) algorithm (Gámez et al 2011), the Peter-Clark (PC) algorithm (Spirtes and Glymour 1991), the max-min hill-climbing (MMHC) algorithm (Tsamardinos et al 2006), the fast greedy equivalence search (FGES) (Chickering 2002;Ramsey et al 2017;Ramanan and Natarajan 2020), and the NOTEARS algorithm (Zheng et al 2018). Among these methods, PC is a constraint-based method and MMHC is a hybrid method.…”
Section: Methods
confidence: 99%
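For readers unfamiliar with the baselines listed in this excerpt, hill climbing and PC are score-based and constraint-based structure learners, respectively. The sketch below is a hedged example of running both with the pgmpy library on a discrete pandas DataFrame; pgmpy is an assumption here, the cited works use their own implementations, and DARLS, MMHC, FGES, and NOTEARS are not shown.

```python
# Hedged sketch: two of the baseline structure learners named above,
# run with pgmpy (assumed installed) on a discrete pandas DataFrame.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore, PC

def run_baselines(df: pd.DataFrame):
    # Score-based search: greedy hill climbing guided by the BIC score.
    hc_dag = HillClimbSearch(df).estimate(scoring_method=BicScore(df))
    # Constraint-based search: the PC algorithm with conditional-independence tests.
    pc_dag = PC(df).estimate()
    return sorted(hc_dag.edges()), sorted(pc_dag.edges())
```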
“…These are often limited in terms of user-controllable parameters, with structures being sampled uniformly from the space of DAGs, or limited in terms of variation in topology. Other studies use standard benchmark datasets ( Scutari et al, 2019 ; Ramanan and Natarajan, 2020 ). A flexible synthetic generation system would allow the user to specify many parameters which influence the BN generation, in order to match a given real dataset as closely as possible.…”
Section: Methods
confidence: 99%
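The point about parameterized synthetic generation can be made concrete with a small sketch, which is not taken from the cited works: a generator that exposes the number of variables, edge density, and sample size, builds a random DAG over binary variables with random CPTs, and forward-samples data with numpy. All names and defaults below are illustrative.

```python
# Hedged sketch of a parameterized synthetic generator (illustrative only):
# random DAG over binary variables, random CPTs, forward sampling.
import numpy as np

def sample_synthetic_bn(n_vars=5, edge_prob=0.3, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Random DAG: only allow edges i -> j for i < j (a fixed topological order).
    adj = np.triu(rng.random((n_vars, n_vars)) < edge_prob, k=1)
    # One Bernoulli parameter per (node, parent configuration).
    cpts = [rng.uniform(0.1, 0.9, size=2 ** adj[:, j].sum()) for j in range(n_vars)]
    data = np.zeros((n_samples, n_vars), dtype=int)
    for s in range(n_samples):
        for j in range(n_vars):
            parents = np.flatnonzero(adj[:, j])
            # Index the CPT row by the parents' sampled values (binary encoding).
            idx = int("".join(map(str, data[s, parents])), 2) if parents.size else 0
            data[s, j] = rng.random() < cpts[j][idx]
    return adj, data

# adj, data = sample_synthetic_bn(n_vars=6, edge_prob=0.25, n_samples=500)
```

Matching a given real dataset more closely, as the excerpt suggests, would add further knobs such as variable cardinalities and in-degree limits.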
“…Because of its design and consequently causal properties, this dataset can be used to evaluate methods in terms of forecasting and terms of generated patterns. This dataset was adopted in Ramanan and Natarajan (2020) to study how context-specific independencies can be used to learn causal algorithms. A sample from this dataset is shown in Figure 4.…”
Section: Datasets
confidence: 99%
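Since this excerpt centers on context-specific independencies, a generic way to probe for one in discrete data is sketched below; this is an illustration, not the procedure of Ramanan and Natarajan (2020). It tests whether X and Y are dependent separately within each value of a context variable Z using a chi-square test from scipy; the column names x, y, and z are placeholders.

```python
# Hedged sketch: probe for context-specific independence of X and Y given Z
# by running a chi-square test of X vs. Y within each context Z = z.
import pandas as pd
from scipy.stats import chi2_contingency

def csi_probe(df: pd.DataFrame, x: str, y: str, z: str):
    pvals = {}
    for z_val, ctx in df.groupby(z):
        table = pd.crosstab(ctx[x], ctx[y])
        if table.shape[0] > 1 and table.shape[1] > 1:
            pvals[z_val] = chi2_contingency(table)[1]  # p-value of the test
    # A large p-value in some contexts but not in others suggests a CSI.
    return pvals
```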