2020
DOI: 10.48550/arxiv.2003.03191
Preprint

Double Machine Learning based Program Evaluation under Unconfoundedness

Michael C. Knaus

Abstract: This paper consolidates recent methodological developments based on Double Machine Learning (DML) with a focus on program evaluation under unconfoundedness. DML-based methods leverage flexible prediction methods to control for confounding in the estimation of (i) standard average effects, (ii) different forms of heterogeneous effects, and (iii) optimal treatment assignment rules. We emphasize that these estimators all build on the same doubly robust score, which allows computational synergies to be exploited. An eva…
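The doubly robust (AIPW) score referred to in the abstract can be sketched in standard notation (the notation here is an assumption, not taken verbatim from the paper): for a binary treatment D, outcome Y, covariates X, outcome regressions \mu_d(x) = E[Y \mid D = d, X = x], and propensity score e(x) = P(D = 1 \mid X = x), the score for observation i is

\[
\Gamma_i = \hat{\mu}_1(X_i) - \hat{\mu}_0(X_i)
+ \frac{D_i\,\bigl(Y_i - \hat{\mu}_1(X_i)\bigr)}{\hat{e}(X_i)}
- \frac{(1 - D_i)\,\bigl(Y_i - \hat{\mu}_0(X_i)\bigr)}{1 - \hat{e}(X_i)}.
\]

Averaging the \Gamma_i estimates the average effect, regressing them on covariates yields heterogeneous-effect estimates, and the same scores can enter policy-learning objectives, which is the computational synergy the abstract points to.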

Cited by 10 publications (15 citation statements). References 57 publications.
“…The extreme values of the predicted CATEs are mainly caused by propensity scores that are close to the {0, 1} bounds. Similar issues of the DR-learner due to extreme propensity scores have also been documented in the simulation experiments of Knaus et al. (2021) as well as in the empirical application of Knaus (2020). The second observation is that for the DR-learner we clearly see how the theoretical arguments for sample splitting and cross-fitting translate into the finite-sample properties of the estimator.…”
Section: Results of Main Simulation: Unbalanced Treatment and Nonline... (supporting)
confidence: 62%
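As a rough illustration of the issue described in this citation statement: the inverse-propensity terms 1/ê(x) and 1/(1−ê(x)) in the doubly robust pseudo-outcome blow up as ê(x) approaches 0 or 1, which is what produces extreme predicted CATEs. Below is a minimal sketch of a cross-fitted DR-learner first stage with propensity clipping as one common mitigation, assuming scikit-learn and NumPy; the function name, the random-forest nuisance models, and the clipping threshold are illustrative choices, not taken from the cited papers.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dr_learner_pseudo_outcomes(X, D, Y, n_splits=5, clip=1e-2):
    """Cross-fitted doubly robust pseudo-outcomes (illustrative sketch)."""
    gamma = np.zeros(len(Y))
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        # Nuisance models are fitted on the training folds only (cross-fitting).
        e_hat = RandomForestClassifier().fit(X[train], D[train]).predict_proba(X[test])[:, 1]
        mu1 = RandomForestRegressor().fit(X[train][D[train] == 1], Y[train][D[train] == 1]).predict(X[test])
        mu0 = RandomForestRegressor().fit(X[train][D[train] == 0], Y[train][D[train] == 0]).predict(X[test])
        # Clipping keeps the inverse-propensity terms from exploding near the {0, 1} bounds.
        e_hat = np.clip(e_hat, clip, 1 - clip)
        gamma[test] = (mu1 - mu0
                       + D[test] * (Y[test] - mu1) / e_hat
                       - (1 - D[test]) * (Y[test] - mu0) / (1 - e_hat))
    return gamma  # regress these on X in a second stage to obtain CATE estimates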
“…In this paper, we focus on the above-listed meta-learners in order to provide a contrast between less and more complex algorithms for the estimation of causal effects. Moreover, these particular meta-learners have been extensively studied theoretically as well as applied in various empirical settings, including economics (Knaus, 2020; Jacob, 2021; Sallin, 2021; Valente, 2022), public policy (Kristjanpoller, Michell, & Minutolo, 2021; Shah, Kreif, & Jones, 2021), marketing (Gubela, Lessmann, & Jaroszewicz, 2020; Gubela & Lessmann, 2021), medicine (Lu, Sadiq, Feaster, & Ishwaran, 2018; Duan, Rajpurkar, Laird, Ng, & Basu, 2019), and sports (Goller, 2021). Some further examples of meta-learners proposed in the literature include the U-learner and Y-learner (Stadie, Kunzel, Vemuri, & Sekhon, 2018), and, more recently, the IF-learner (Curth, Alaa, & van der Schaar, 2020) and RA-learner (Curth & van der Schaar, 2021), which are, however, beyond the scope of this paper.…”
Section: Related Work (mentioning)
confidence: 99%