2020
DOI: 10.48550/arXiv.2006.07040
Preprint

Learning Decomposed Representation for Counterfactual Inference

Abstract: One fundamental problem in learning treatment effects from observational data is confounder identification and balancing. Most previous methods realize confounder balancing by treating all observed variables as confounders, ignoring the identification of confounders and non-confounders. In general, not all observed variables are confounders, i.e., common causes of both the treatment and the outcome; some variables contribute only to the treatment and some only to the outcome. Balanc…
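To make the decomposition concrete, here is a minimal synthetic sketch of the three variable roles the abstract distinguishes: an instrument that affects only the treatment, a confounder that affects both, and an adjustment variable that affects only the outcome. This is not the paper's setup; all variable names and coefficients are illustrative assumptions.

```python
# Toy data-generating process illustrating instruments, confounders,
# and adjustment variables (illustrative assumptions, not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

I = rng.normal(size=n)  # instrument: influences treatment only
C = rng.normal(size=n)  # confounder: influences treatment and outcome
A = rng.normal(size=n)  # adjustment variable: influences outcome only

# Treatment depends on the instrument and the confounder.
p_t = 1 / (1 + np.exp(-(0.8 * I + 1.2 * C)))
T = rng.binomial(1, p_t)

# Outcome depends on treatment, the confounder, and the adjustment variable.
Y = 2.0 * T + 1.5 * C + 0.7 * A + rng.normal(scale=0.1, size=n)

# The naive difference in means is biased upward by C; the true effect is 2.0.
print(Y[T == 1].mean() - Y[T == 0].mean())
```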

Cited by 7 publications (22 citation statements). References 29 publications.
“…Subsequently, this was extended to enforce the preservation of local similarity [Yao et al, 2018], to learn overlapping representations [Zhang et al, 2020] and to incorporate importance weighting [Hassanpour and Greiner, 2019]. Instead of learning one representation of all inputs, Hassanpour and Greiner [2020] and Wu et al [2020] identify disentangled representations that separate input covariates by the effects they have on treatment assignment and outcome. However, none of these methods are fit for use in dynamic settings, as they model neither time-dependent confounding nor other dynamic relationships between variables.…”
Section: Related Work (mentioning)
confidence: 99%
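A minimal PyTorch-style sketch in the spirit of the disentangled approaches this statement cites (Hassanpour and Greiner [2020]; Wu et al [2020]) may help: three factor encoders whose outputs are routed so that treatment prediction sees only the instrumental and confounding factors, while outcome prediction sees only the confounding and adjustment factors. The layer sizes, names, and routing below are illustrative assumptions, not the authors' exact architectures.

```python
# Sketch of a decomposed/disentangled encoder (illustrative assumptions only).
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

class DecomposedEncoder(nn.Module):
    def __init__(self, d_x, d_rep=16):
        super().__init__()
        self.enc_I = mlp(d_x, d_rep)  # instrumental factor: drives treatment only
        self.enc_C = mlp(d_x, d_rep)  # confounding factor: drives both
        self.enc_A = mlp(d_x, d_rep)  # adjustment factor: drives outcome only
        self.t_head = mlp(2 * d_rep, 1)   # treatment head sees I and C
        self.y0_head = mlp(2 * d_rep, 1)  # outcome heads see C and A
        self.y1_head = mlp(2 * d_rep, 1)

    def forward(self, x, t):
        # x: (n, d_x) covariates; t: (n, 1) observed binary treatment.
        i, c, a = self.enc_I(x), self.enc_C(x), self.enc_A(x)
        t_logit = self.t_head(torch.cat([i, c], dim=-1))
        y_in = torch.cat([c, a], dim=-1)
        # Select the potential-outcome head matching the observed treatment.
        y_hat = torch.where(t > 0.5, self.y1_head(y_in), self.y0_head(y_in))
        return t_logit, y_hat, (i, c, a)
```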
“…and J, L ∈ {I, C, O}. Wu et al [2020] improve over Hassanpour and Greiner [2020]'s method by augmenting their loss function to enforce that the factors do not overlap, an approach that we adapt to the dynamic setting. Note that forcing such disentanglement is no more restrictive than assuming complete entanglement: should there be no underlying disentangled factors in the data-generating process, then DCRN will simply not learn these factors and will represent the entire covariate space through the confounding factor, and is hence able to handle any violation of the assumed existence of disentangled factors.…”
Section: Loss Function (mentioning)
confidence: 99%
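The non-overlap constraint mentioned in this statement can be imposed in several ways. One plausible regularizer (an assumption on our part; the cited papers' exact losses may differ) drives the cross-correlation between each pair of factor representations toward zero:

```python
# One possible non-overlap penalty for the factor representations
# (an assumption; the cited papers' exact regularizers may differ).
import torch

def overlap_penalty(i, c, a, eps=1e-6):
    """Sum of squared cross-correlations between factor pairs (n, d) tensors."""
    def cross_corr(u, v):
        u = (u - u.mean(0)) / (u.std(0) + eps)
        v = (v - v.mean(0)) / (v.std(0) + eps)
        return (u.T @ v / u.shape[0]).pow(2).mean()
    return cross_corr(i, c) + cross_corr(i, a) + cross_corr(c, a)
```

In practice such a term would be added, with a tunable weight, to the factual-outcome and treatment-prediction losses.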
“…Recently, causal representation learning [18], [19], [22], [33], [40], [42] has attracted a great deal of attention. Among these works, Yao et al [42] propose to reduce prediction bias by filtering out near-IVs.…”
Section: Causal Representation Learning (mentioning)
confidence: 99%
“…Inspired by recent works [18], [40], [42] on causal disentangled representation learning, we argue that although invalid IV candidates do not strictly satisfy the conditions of valid IVs, one might decompose and utilize part of their information to generate IV representations. Therefore, in this paper, we propose a novel Automatic Instrumental Variable decomposition (AutoIV) algorithm to automatically generate representations serving the role of IVs for counterfactual prediction, with fewer constraints on the IV candidates.…”
Section: Introduction (mentioning)
confidence: 97%