2022
DOI: 10.1080/01621459.2022.2069572

Sharp Sensitivity Analysis for Inverse Propensity Weighting via Quantile Balancing

Cited by 15 publications (8 citation statements). References 45 publications.
“…The sharp lower bound is given by the solution of (4) with minimization, which takes the same form as (5) but with W−(X) and W+(X) swapped and α* = 1/(1 + Γ). Dorn and Guo [2022] proved the analytical solution (5) using the Neyman-Pearson lemma. To see this connection, rewrite the constraint in (4).…”
Section: Zhao
confidence: 97%
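The threshold structure described in this excerpt can be illustrated with a small numerical sketch. This is not the estimator from the paper itself: the constant values standing in for W−(X) and W+(X) are hypothetical, and the quantile level α* = Γ/(1 + Γ) for the upper bound is inferred from the excerpt's statement that the lower bound uses 1/(1 + Γ) with the weights swapped.

```python
import numpy as np

def sharp_bound(y, w_minus, w_plus, gamma, upper=True):
    """Threshold rule from the excerpt: weight outcomes above an
    outcome quantile by W+ and those below by W- (swapped for the
    lower bound)."""
    # alpha* = Gamma/(1+Gamma) for the upper bound; per the excerpt,
    # the lower bound uses alpha* = 1/(1+Gamma) with W-, W+ swapped.
    alpha = gamma / (1 + gamma) if upper else 1 / (1 + gamma)
    q = np.quantile(y, alpha)
    hi, lo = (w_plus, w_minus) if upper else (w_minus, w_plus)
    w = np.where(y > q, hi, lo)       # hypothetical W+/W- values
    return np.sum(w * y) / np.sum(w)  # Hajek-style weighted mean

y = np.array([0.0, 1.0, 2.0, 3.0])
ub = sharp_bound(y, w_minus=0.5, w_plus=2.0, gamma=2.0, upper=True)
lb = sharp_bound(y, w_minus=0.5, w_plus=2.0, gamma=2.0, upper=False)
```

Because the upper-bound rule puts the larger weight on the larger outcomes (and the lower-bound rule does the reverse), `ub` always sits above `lb` on this toy data.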
“…Soriano et al. [2021] extended this idea to a broader class of balancing weights estimators. Interestingly, Dorn and Guo [2022] showed that the bounds obtained by Zhao et al. [2019] are conservative even asymptotically. They sharpened the bounds by adding a moment constraint to the optimization step.…”
Section: Introduction
confidence: 99%
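The sharpening mechanism mentioned here can be sketched schematically as a linear program: adding an equality (moment) constraint shrinks the feasible set of weights, so the resulting upper bound can only tighten. This is a stylized illustration, not the quantile-balancing construction of Dorn and Guo [2022]; the box limits `lo`, `hi` and the balancing constraint below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import linprog

# Stylized optimization: maximize a normalized weighted mean
# sum_i w_i * y_i subject to sum_i w_i = 1 and box constraints
# lo*s <= w_i <= hi*s, with s a free scale (Charnes-Cooper style).
y = np.array([0.0, 1.0, 2.0, 3.0])
n = len(y)
lo, hi = 0.1, 1.0                    # hypothetical weight bounds

def solve(extra_eq=None):
    # variables: x = (w_1, ..., w_n, s); maximize y.w -> minimize -y.w
    c = np.append(-y, 0.0)
    A_ub = np.vstack([
        np.hstack([np.eye(n), -hi * np.ones((n, 1))]),   # w_i <= hi*s
        np.hstack([-np.eye(n), lo * np.ones((n, 1))]),   # w_i >= lo*s
    ])
    b_ub = np.zeros(2 * n)
    A_eq = [np.append(np.ones(n), 0.0)]                  # sum w_i = 1
    b_eq = [1.0]
    if extra_eq is not None:                             # moment constraint
        a, b = extra_eq
        A_eq.append(np.append(a, 0.0))
        b_eq.append(b)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=np.array(A_eq),
                  b_eq=np.array(b_eq), bounds=[(0, None)] * (n + 1))
    return -res.fun

loose = solve()
# A hypothetical balancing constraint (half the weight on the first
# two units) can only shrink the feasible set, hence tighten the bound.
sharp = solve(extra_eq=(np.array([1.0, 1.0, 0.0, 0.0]), 0.5))
```

On this toy problem the unconstrained bound is about 2.54 while the constrained one drops to about 1.91, mirroring how the added moment constraint removes the conservativeness of the looser program.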
“…For example, a video compression treatment might have systematic and stable treatment effects across subpopulations, while a recommendation algorithm may not because of preference heterogeneity that isn't adequately captured by covariates. Sensitivity analyses in the vein of [4,8,12] that assess the magnitude of violations of outcome model stability that overturn transportation conclusions are a fruitful avenue for future research.…”
Section: Selection
confidence: 99%
“…Similarly, we can benchmark Λ_α by computing how much the odds of treatment change when adding a reference covariate into a propensity model which already includes some baseline covariates (see e.g. Kallus and Zhou, 2021; Dorn and Guo, 2022). Here, we can compute the (1 − α)th…”
Section: Calibration for Binary Treatments
confidence: 99%
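The benchmarking idea in this excerpt can be sketched as follows: fit a propensity model with and without the reference covariate, take the per-unit shift in log-odds of treatment, and summarize it by a (1 − α) quantile. The data, the logistic specification, and the exact summary are illustrative assumptions, not the calibration procedure of the cited papers.

```python
import numpy as np
from scipy.optimize import minimize

def fit_log_odds(X, t):
    """Fit a simple logistic propensity model by maximum likelihood
    and return the fitted log-odds of treatment for each unit."""
    X1 = np.column_stack([np.ones(len(t)), X])  # add intercept
    def nll(beta):                              # negative log-likelihood
        z = X1 @ beta
        return np.sum(np.logaddexp(0.0, z) - t * z)
    beta = minimize(nll, np.zeros(X1.shape[1]), method="BFGS").x
    return X1 @ beta

rng = np.random.default_rng(0)
n = 500
x_base = rng.normal(size=n)   # baseline covariate (hypothetical)
x_ref = rng.normal(size=n)    # reference covariate to benchmark against
logit = 0.5 * x_base + 0.8 * x_ref
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

lo_small = fit_log_odds(x_base[:, None], t)
lo_full = fit_log_odds(np.column_stack([x_base, x_ref]), t)

# (1 - alpha)th quantile of the absolute log-odds shift that adding
# the reference covariate induces, exponentiated to the odds scale.
alpha = 0.05
shift = np.abs(lo_full - lo_small)
lam_alpha = np.exp(np.quantile(shift, 1 - alpha))
```

The resulting `lam_alpha` gives a data-driven reference point: a sensitivity parameter of this magnitude corresponds to omitting something as predictive of treatment as the reference covariate.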