2022
DOI: 10.48550/arxiv.2205.11486
Preprint

Robust and Agnostic Learning of Conditional Distributional Treatment Effects

Abstract: The conditional average treatment effect (CATE) is the best point prediction of individual causal effects given individual baseline covariates and can help personalize treatments. However, as CATE only reflects the (conditional) average, it can wash out potential risks and tail events, which are crucially relevant to treatment choice. In aggregate analyses, this is usually addressed by measuring distributional treatment effect (DTE), such as differences in quantiles or tail expectations between treatment group…
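The abstract's core contrast — an average effect can mask tail effects — can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's method: it compares a difference in means (the ATE) with a difference in upper-tail quantiles (a quantile DTE) on simulated outcomes where the treatment raises both the mean and the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated outcomes for treated and control units (hypothetical data):
# treatment shifts the mean by 1 but also doubles the spread.
y_treated = rng.normal(loc=1.0, scale=2.0, size=5000)
y_control = rng.normal(loc=0.0, scale=1.0, size=5000)

# Average treatment effect: difference in means.
ate = y_treated.mean() - y_control.mean()

# Distributional treatment effect at the 90th percentile:
# difference in upper-tail quantiles, which the mean alone masks.
qte_90 = np.quantile(y_treated, 0.9) - np.quantile(y_control, 0.9)

print(f"ATE      ~ {ate:.2f}")
print(f"QTE(0.9) ~ {qte_90:.2f}")
```

Here the tail effect is roughly twice the mean effect, which is exactly the kind of risk-relevant information the paper argues a CATE-only analysis washes out.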

Cited by 1 publication (3 citation statements)
References 52 publications
“…The partial balance property is naturally satisfied in many cases, for example: (a) the working propensity scores are correct, i.e., they coincide with the true propensity scores p_{a|s}(X, Z_s); (b) there are no source-specific covariates, i.e., Z_s = ∅; (c) only the covariates X lead to the heterogeneity of causal effects, i.e., δ(X, Z_s) = δ(X). Case (c) holds especially when many variables predict potential outcomes but only a few have a strong modulating effect (Kallus and Oprescu, 2022).…”
Section: Direct Learning for Homogeneous Causal Data Fusion
confidence: 99%
“…The data splitting technique is commonly used when learning nuisance estimators (Kallus and Oprescu, 2022).…”
Section: Algorithm
confidence: 99%
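The data-splitting technique this citation refers to is commonly called cross-fitting: each nuisance function is fit on some folds and evaluated only on held-out folds, so nuisance predictions are out-of-sample. The sketch below is illustrative only — the data-generating process, the crude arm-mean nuisances, and the AIPW-style score are assumptions for the example, not the estimator of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, size=n)            # randomized binary treatment
y = a * (1.0 + x) + x + rng.normal(size=n)  # true average effect = 1

# Cross-fitting: fit each nuisance on K-1 folds and evaluate it on the
# held-out fold, so every nuisance prediction is out-of-sample.
K = 2
folds = np.array_split(rng.permutation(n), K)
e = 0.5                                     # known propensity (randomized)
psi = np.empty(n)
for k in range(K):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(K) if j != k])
    # Crude outcome nuisances: arm-specific mean outcomes on training folds.
    mu1 = y[train][a[train] == 1].mean()
    mu0 = y[train][a[train] == 0].mean()
    # Doubly-robust (AIPW) score evaluated on the held-out fold.
    psi[test] = (mu1 - mu0
                 + a[test] * (y[test] - mu1) / e
                 - (1 - a[test]) * (y[test] - mu0) / (1 - e))

ate_hat = psi.mean()
print(f"cross-fitted effect estimate ~ {ate_hat:.2f}")
```

The fold structure, rather than the particular score, is the point: reusing the same data to both fit and evaluate nuisances can bias downstream effect estimates, and cross-fitting avoids that.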