2021 · Preprint
DOI: 10.48550/arxiv.2104.12909

Algorithm is Experiment: Machine Learning, Market Design, and Policy Eligibility Rules

Abstract: Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special cas…

Cited by 1 publication (2 citation statements) · References 38 publications
“…1 Method. Our OPE estimator is based on a modification of the Propensity Score (Rosenbaum and Rubin 1983), which we dub the "Approximate Propensity Score" (APS) (Narita and Yata 2022). APS of action (arm) a at context (covariate) value x is the average probability that the logging policy chooses action a over a shrinking neighborhood around x in the context space.…”
Section: Introduction
confidence: 99%
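The quoted definition of APS suggests a direct Monte Carlo approximation. The sketch below is illustrative only, not the papers' implementation: it assumes a hypothetical callable `policy(x_prime, action)` that returns the logging policy's probability of choosing `action` at context `x_prime` (for a deterministic eligibility rule this is a 0/1 indicator), and averages that probability over uniform draws from a small ball of radius `delta` around `x`.

```python
import numpy as np

def approximate_propensity_score(policy, x, action, delta=0.01,
                                 n_draws=10_000, seed=None):
    """Monte Carlo sketch of the Approximate Propensity Score (APS):
    the average probability that the logging policy chooses `action`
    over a small neighborhood of radius `delta` around context `x`."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Uniform draws from the L2 ball around x: uniform direction on the
    # sphere, radius scaled by U**(1/d) so the ball is covered uniformly.
    directions = rng.normal(size=(n_draws, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = delta * rng.random(n_draws) ** (1.0 / d)
    neighbors = x + radii[:, None] * directions
    # `policy` is a stand-in for the logging algorithm; for a
    # deterministic rule it returns 0.0 or 1.0 at each neighbor.
    probs = np.array([policy(x_prime, action) for x_prime in neighbors])
    return probs.mean()

# Example: deterministic rule "choose action 1 iff the first covariate >= 0".
policy = lambda x_prime, a: float((x_prime[0] >= 0.0) == (a == 1))
aps = approximate_propensity_score(policy, np.array([0.003]), action=1)
# Near the eligibility threshold APS lies strictly between 0 and 1,
# which is exactly where the rule generates quasi-experimental variation.
```

Here `delta` plays the role of the shrinking neighborhood in the definition; the formal results take the limit as it goes to zero.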
“…Our approach instead predicts the mean reward differences between actions by exploiting local subsamples near the decision boundaries without specifying the regression model. Narita and Yata (2022) originally develop and empirically apply this approach in the context of treatment effect estimation with a binary treatment. This paper extends their approach to OPE with multiple actions.…”
Section: Introduction
confidence: 99%
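To make the idea of "exploiting local subsamples near the decision boundaries" concrete, here is a deliberately simplified sketch. It is not the estimator in Narita and Yata (2022), which instruments the realized action with the algorithm's recommendation and conditions on APS; this version merely contrasts mean observed rewards across two actions within the subsample whose APS is nondegenerate (strictly between 0 and 1). All argument names are illustrative.

```python
import numpy as np

def boundary_mean_reward_difference(aps, actions, rewards, a1, a0, eps=1e-9):
    """Difference in mean observed rewards between actions `a1` and `a0`,
    using only units near a decision boundary, i.e., those whose APS is
    strictly between 0 and 1 (where assignment is quasi-random)."""
    aps, actions, rewards = map(np.asarray, (aps, actions, rewards))
    near_boundary = (aps > eps) & (aps < 1.0 - eps)
    mean_a1 = rewards[near_boundary & (actions == a1)].mean()
    mean_a0 = rewards[near_boundary & (actions == a0)].mean()
    return mean_a1 - mean_a0
```

Restricting attention to the nondegenerate-APS subsample is what lets the approach avoid specifying a regression model; the published estimator additionally adjusts for APS itself, since different units near the boundary can face different assignment probabilities.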