2019
DOI: 10.1353/obs.2019.0007

Selective Inference for Effect Modification: An Empirical Investigation

Abstract: We demonstrate a selective inferential approach for discovering and making confident conclusions about treatment effect heterogeneity. Our method consists of two stages. First, we use Robinson's transformation to eliminate confounding in the observational study. Next we select a simple model for effect modification using lasso-regularized regression and then use recently developed tools in selective inference to make valid statistical inference for the discovered effect modifiers. We analyze the Mindset Study …
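The two-stage pipeline described in the abstract can be sketched end to end. This is a minimal illustration under stated assumptions, not the authors' implementation: the simulated data, the random-forest nuisance estimators, and the interaction design for the effect-modification model are all choices made for the sketch (the selective-inference step for the lasso-selected modifiers is omitted here).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_predict

# Simulated observational data (assumption for the sketch):
# confounding through X1, effect modification through X2.
rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))
W = rng.binomial(1, propensity)
tau = 1.0 + 0.5 * X[:, 1]                      # true heterogeneous effect
Y = X[:, 0] + tau * W + rng.normal(size=n)

# Stage 1: Robinson's transformation with cross-fitted nuisance estimates
# m(x) = E[Y|X=x] and pi(x) = E[W|X=x], then residualize both Y and W.
m_hat = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=0), X, Y, cv=5)
pi_hat = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=0), X, W, cv=5)
Y_res, W_res = Y - m_hat, W - pi_hat

# Stage 2: lasso on residualized interactions; the first column (intercept
# times W_res) carries the constant effect, the rest are candidate modifiers.
Z = W_res[:, None] * np.column_stack([np.ones(n), X])
lasso = LassoCV(cv=5, fit_intercept=False).fit(Z, Y_res)
selected = np.flatnonzero(lasso.coef_[1:])      # indices of selected modifiers
print("selected effect-modifier columns:", selected)
```

In the full method, inference for the coefficients of the selected modifiers would then condition on this lasso selection event rather than naively refitting.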

Cited by 6 publications (4 citation statements)
References 12 publications
“…where plug-in estimates of the nuisance parameters m(x) and π(x) are obtained from some off-the-shelf machine learning methods, and the individualized treatment effects, Δ(x), are estimated using penalized empirical loss minimization. These ideas were first conceptualized in the proposal of selective inference for effect modifiers by Zhao et al. 41,42 and further generalized in the R-learning method of Nie and Wager. 43…”
Section: Overview of HTE Evaluation Methods
confidence: 99%
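The penalized empirical loss minimization this excerpt refers to, with a linear working model for Δ(x), reduces to a weighted lasso through the identity Σᵢ (ỹᵢ − w̃ᵢ xᵢ'β)² = Σᵢ w̃ᵢ² (ỹᵢ/w̃ᵢ − xᵢ'β)², where ỹᵢ and w̃ᵢ are the outcome and treatment residuals. The sketch below illustrates that identity with scikit-learn's Lasso; it is an assumed formulation for illustration, not the R-learner implementation of Nie and Wager.

```python
import numpy as np
from sklearn.linear_model import Lasso

def r_learner_lasso(X, y_res, w_res, alpha=0.05):
    """Fit a linear effect model tau(x) = intercept + x @ beta by minimizing
    sum_i (y_res_i - w_res_i * tau(x_i))**2 + lasso penalty.

    Rewriting the loss as sum_i w_res_i**2 * (y_res_i / w_res_i - tau(x_i))**2
    turns it into a weighted lasso on the pseudo-outcome y_res / w_res."""
    pseudo = y_res / w_res            # assumes no residual is exactly zero
    model = Lasso(alpha=alpha)
    model.fit(X, pseudo, sample_weight=w_res ** 2)
    return model
```

The weights w̃ᵢ² exactly cancel the division in the pseudo-outcome, so near-zero treatment residuals contribute little to the fit rather than destabilizing it.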
“…where plug-in estimates of the nuisance parameters m(x) and π(x) are obtained from some off-the-shelf machine learning methods, and the individualized treatment effects, Δ(x), are estimated using penalized empirical loss minimization. These ideas were first conceptualized in the proposal of selective inference for effect modifiers by Zhao et al. 41,42 and further generalized in the R-learning method of Nie and Wager. 43 An important role in removing biases due to overfitting is played by cross-validated predictions of the nuisance functions, that is, the plugged-in values m̂_{-i}(X_i) and π̂_{-i}(X_i) for the outcome and propensity functions, respectively.…”
Section: Global Outcome Modeling
confidence: 99%
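The cross-fitted predictions m̂_{-i}(X_i) and π̂_{-i}(X_i) highlighted in this excerpt can be computed with a short fold loop: each unit's nuisance value comes from a model trained without that unit. A minimal sketch, with the gradient-boosting learner and helper name chosen for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

def cross_fit(model_factory, X, target, n_splits=5, seed=0):
    """Return out-of-fold predictions: each observation is predicted by a
    model fit on the other folds, removing own-observation overfitting bias."""
    preds = np.empty(len(target))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = model_factory().fit(X[train], target[train])
        preds[test] = model.predict(X[test])
    return preds
```

Passing a factory (rather than a fitted model) ensures each fold gets a fresh estimator; the same helper serves for both the outcome regression and the propensity model.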
“…Like for MCM, current implementations consider a linear working model for the IATE (Nie & Wager, 2017; Zhao, Small, & Ertefaie, 2017) and solve β̂_RL = arg min…”
Section: R-learning
confidence: 99%
“…For models chosen based on the LASSO solution, Lee et al. (2016) developed the influential polyhedral method, reducing the selection event to a series of affine inequalities in the response variable. Many different model selection events for a Gaussian response variable have subsequently been handled using a similar set of constraints, yielding tractable conditional distributions that can be used for inference (Suzumura et al., 2017; Liu et al., 2018; Taylor and Tibshirani, 2018; Zhao and Panigrahi, 2019; Tanizaki et al., 2020, among others). Despite the convenience of the polyhedral method, the resulting confidence intervals can have infinite expected length under Gaussian regression, as formally established by Kivaranovic and Leeb (2018).…”
Section: Introduction
confidence: 99%
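The polyhedral reduction described in this excerpt can be made concrete in a few lines. Assuming the known-variance Gaussian model with Σ = σ²I (function and variable names are illustrative, not from any package), the selection event {Ay ≤ b} induces a truncation interval for the target η'y, and inference uses the Gaussian distribution truncated to that interval:

```python
import numpy as np
from scipy.stats import norm

def polyhedral_interval(A, b, y, eta):
    """Truncation bounds for eta'y given the selection event {Ay <= b},
    following the decomposition of Lee et al. (2016) with Sigma = sigma^2 I:
    write y = c * (eta'y) + z with c = eta / (eta'eta), z independent of eta'y."""
    c = eta / (eta @ eta)
    z = y - c * (eta @ y)
    Ac = A @ c
    with np.errstate(divide="ignore", invalid="ignore"):
        ratios = (b - A @ z) / Ac
    v_minus = np.max(ratios[Ac < 0], initial=-np.inf)   # lower truncation
    v_plus = np.min(ratios[Ac > 0], initial=np.inf)     # upper truncation
    return v_minus, v_plus

def truncated_gaussian_pvalue(v, lo, hi, sd):
    """Two-sided p-value for v under N(0, sd^2) truncated to [lo, hi]."""
    denom = norm.cdf(hi / sd) - norm.cdf(lo / sd)
    cdf = (norm.cdf(v / sd) - norm.cdf(lo / sd)) / denom
    return 2.0 * min(cdf, 1.0 - cdf)
```

Kivaranovic and Leeb's infinite-expected-length result arises because, when the observed η'y lands near a truncation boundary, inverting this truncated-Gaussian pivot can produce extremely wide intervals.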