2021
DOI: 10.1007/s10618-021-00759-3

Adversarial balancing-based representation learning for causal effect inference with observational data

Abstract: Learning causal effects from observational data greatly benefits a variety of domains such as health care, education, and sociology. For instance, one could estimate the impact of a new drug on specific individuals to assist clinical planning and improve the survival rate. In this paper, we focus on studying the problem of estimating the Conditional Average Treatment Effect (CATE) from observational data. The challenges for this problem are two-fold: on the one hand, we have to derive a causal estimator to est…

Cited by 16 publications (22 citation statements)
References 36 publications (35 reference statements)
“…• Reconstruction Loss. Some authors have proposed that reconstruction losses should be applied to representation layers to improve confidence in the invertibility assumption (Du et al 2019; Zhang et al 2020). These losses simply minimize an L2 norm between inputs and outputs to force the representation function to be able to reconstruct its inputs, along with its other tasks: L(X, X') = ||X − X'||_2…”
Section: Box 5: Other Flavors of TARNet
confidence: 99%
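The reconstruction penalty quoted above can be sketched in a few lines of NumPy. This is a toy illustration, not the cited papers' implementation: the linear encoder/decoder and the helper name `reconstruction_loss` are hypothetical, chosen only to show how the L2 term L(X, X') = ||X − X'||_2 penalizes a representation that discards input information.

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    """L2 penalty ||x - x_hat||_2 between inputs and their
    reconstructions (hypothetical helper for illustration)."""
    return np.linalg.norm(x - x_hat)

# Toy linear encoder/decoder: projecting 3-D covariates down to a
# 1-D representation loses information, so reconstruction is lossy
# and the penalty is strictly positive.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(3, 1))   # encoder weights (hypothetical)
W_dec = rng.normal(size=(1, 3))   # decoder weights (hypothetical)
x = rng.normal(size=(5, 3))       # five 3-D covariate vectors
x_hat = x @ W_enc @ W_dec         # encode, then reconstruct
loss = reconstruction_loss(x, x_hat)
```

In a full model this term would be added, with a weighting coefficient, to the factual-outcome loss so the encoder is trained on both tasks jointly.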
“…• Adversarial Loss. Rather than learn to predict the propensity score, Du et al (2019) apply an adversarial gradient to force the representation layers to "unlearn" information about treatment assignment. This approach is also applied in Bica et al (2020a).…”
Section: Box 5: Other Flavors of TARNet
confidence: 99%
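The "unlearning" idea above can be sketched with a hand-derived gradient-reversal step in NumPy. This is a minimal sketch under stated assumptions, not the paper's architecture: all names (`w_enc`, `w_adv`, `lam`) are hypothetical, the encoder and adversary are linear, and the gradients are computed by hand for a logistic adversary. The key line is the encoder update, which negates the adversary's gradient so the representation becomes less predictive of treatment.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy setup: a linear encoder rep = x @ w_enc feeds a logistic
# adversary that tries to predict the treatment t from rep.
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 2))               # covariates
t = rng.integers(0, 2, size=8).astype(float)  # treatment labels
w_enc = rng.normal(size=(2,))             # encoder weights
w_adv = rng.normal()                      # adversary weight (scalar)

rep = x @ w_enc                           # representation
p = sigmoid(w_adv * rep)                  # adversary's prediction
# Gradient of the adversary's binary cross-entropy, propagated
# back through the adversary into the encoder weights:
grad_rep = (p - t) * w_adv / len(t)
grad_enc = x.T @ grad_rep

lam, lr = 1.0, 0.1
# Gradient reversal: the encoder *ascends* the adversary's loss
# (sign flipped, scaled by lam), "unlearning" treatment information,
# while a separate step would still descend it for w_adv.
w_enc = w_enc - lr * (-lam * grad_enc)
```

In practice this sign flip is implemented as a gradient-reversal layer inside an autodiff framework rather than by hand, but the update rule is the same.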
“…Next, we provide a summary of each paper and highlight the main contributions and relevance to the goals of the special issue. Du et al (2021) introduce a novel model, referred to as Adversarial Balancing-based representation learning for Causal Effect Inference. The paper focuses on observational data and highlights the selection bias problem by introducing a neural network encoder constrained by a mutual information estimator for minimizing the loss between representation and input covariates.…”
Section: The Special Issue
confidence: 99%
“…Whenever identifiability is obtained through the backdoor criterion/conditional ignorability [55,Sec. 3.3.1], deep learning techniques can be leveraged to estimate such effects with impressive practical performance [60,49,45,30,65,66,34,61,15,25]. For effects that are identifiable through causal functionals that are not necessarily of the backdoor-form (e.g., frontdoor, napkin), other optimization/statistical techniques can be employed that enjoy properties such as double robustness and debiasedness [31,32,33].…”
Section: Introduction
confidence: 99%