2020
DOI: 10.1287/mnsc.2019.3379
Efficiently Evaluating Targeting Policies: Improving on Champion vs. Challenger Experiments

Abstract: Champion versus challenger field experiments are widely used to compare the performance of different targeting policies. These experiments randomly assign customers to receive marketing actions recommended by either the existing (champion) policy or the new (challenger) policy, and then compare the aggregate outcomes. We recommend an alternative experimental design and propose an alternative estimation approach to improve the evaluation of targeting policies. The recommended experimental design randomly assign…
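The abstract is truncated above, so the paper's exact design and estimator are not spelled out here. Purely to illustrate the general idea of evaluating targeting policies from data in which actions (rather than policies) are randomized, the Python sketch below simulates an action-randomized experiment and scores two candidate policies with a standard inverse-propensity-weighting estimate. The data, the policies, and the estimator are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action-randomized experiment: each customer is assigned one of
# K marketing actions uniformly at random (illustrative data, not the paper's).
n, K = 10_000, 3
x = rng.normal(size=(n, 2))                 # customer features
propensity = np.full(K, 1.0 / K)            # known randomization probabilities
a = rng.integers(0, K, size=n)              # action actually assigned
# Simulated response: depends on how well the action matches the customer.
y = x[:, 0] * (a == 1) + x[:, 1] * (a == 2) + rng.normal(scale=0.5, size=n)

def policy_value_ipw(policy, x, a, y, propensity):
    """Inverse-propensity-weighted estimate of a policy's average outcome."""
    recommended = policy(x)                 # action the policy would choose
    weights = (recommended == a) / propensity[a]
    return float(np.mean(weights * y))

champion = lambda x: np.zeros(len(x), dtype=int)        # always send action 0
challenger = lambda x: np.where(x[:, 0] > 0, 1, 2)      # target on the first feature

# Both policies are scored on the same action-randomized sample, rather than
# splitting customers between two policy arms.
print("champion value:  ", policy_value_ipw(champion, x, a, y, propensity))
print("challenger value:", policy_value_ipw(challenger, x, a, y, propensity))
```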

Cited by 41 publications (16 citation statements). References 16 publications. Citing excerpts, ordered by relevance:
“…A series of recent papers at the intersection of machine learning and causal inference have been developing methods to address these challenges and obtain individual-level treatment effects, which can then be used to personalize treatment assignment (Athey and Imbens, 2016; Wager and Athey, 2018). Similarly, a series of papers in marketing have combined powerful predictive machine learning models with experimental (or quasi-experimental) data to develop personalized targeting policies (Rafieian and Yoganarasimhan, 2020; Rafieian, 2019a,b; Simester et al., 2019a). At a high level, all these papers share the common goal of personalizing marketing interventions in order to maximize some measure of reward that is important to the firm.…”
Section: Research Agenda and Challenges (mentioning)
confidence: 99%
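The excerpt above describes estimating individual-level treatment effects and then using them to personalize treatment assignment. A minimal sketch of that idea, assuming simulated data and a simple T-learner (two separate outcome models) rather than the specific methods in the cited papers, might look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Hypothetical experimental data: randomized treatment, customer features, and
# an outcome. Names and the outcome model are illustrative assumptions.
n = 5_000
X = rng.normal(size=(n, 3))
treated = rng.integers(0, 2, size=n)
# True effect is heterogeneous: treatment helps only when the first feature is positive.
y = 0.5 * X[:, 1] + treated * np.maximum(X[:, 0], 0) + rng.normal(scale=0.5, size=n)

# T-learner: fit separate outcome models on treated and control customers, then
# estimate each customer's treatment effect as the difference of the two predictions.
model_t = GradientBoostingRegressor().fit(X[treated == 1], y[treated == 1])
model_c = GradientBoostingRegressor().fit(X[treated == 0], y[treated == 0])
cate_hat = model_t.predict(X) - model_c.predict(X)

# Personalized policy: treat only customers with a positive estimated effect.
treat_decision = cate_hat > 0
print(f"Policy would treat {treat_decision.mean():.0%} of customers.")
```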
“…The previous work has focused on validating myopic targeting problems. Simester et al. (2019b) propose the use of a randomized-by-action (RBA) design to improve the efficiency of model comparisons. Hitsch and Misra (2018)…”
Section: Literature Review (mentioning)
confidence: 99%
“…Finally, we may encounter covariate shift. Covariate shift arises if the distribution of the data used for training a targeting model is different from the distribution of the data used for implementation (see, for example, Simester et al. 2019b). In our setting, the probability vector at the last period, which will guide the targeting policy in a sequential targeting problem, might not occur in any prior period.…”
Section: Calculating Predicted Probabilities (mentioning)
confidence: 99%
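To make the covariate-shift concern in this excerpt concrete, a simple diagnostic is to compare a feature's distribution in the training data with its distribution in the implementation data. The sketch below uses simulated data, a two-sample Kolmogorov-Smirnov test, and an arbitrary significance threshold; the variable names and cutoff are illustrative assumptions, not taken from the cited work.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical data: a feature observed when the targeting model was trained
# versus the same feature observed at implementation time.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training distribution
deploy_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)   # shifted at deployment

# A Kolmogorov-Smirnov test checks whether the two samples plausibly come from
# the same distribution; a tiny p-value suggests covariate shift.
stat, p_value = ks_2samp(train_feature, deploy_feature)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
if p_value < 0.01:
    print("Feature distribution differs between training and implementation data.")
```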
“…While the focus of that literature is primarily on developing a scalable architecture, our focus is on developing a set of treatment effects that incorporate experimentation, and on ways in which treatment effects learned in the experiment can be extrapolated to other situations. This paper is related to the copious literature on measuring digital advertising effects via randomized controlled trials (e.g., Goldfarb and Tucker, 2011; Lewis and Reiley, 2014; Blake et al., 2015; Sahni, 2015; Sahni and Nair, 2016; Gordon et al., 2019), and to the empirical literature on measuring advertising effects in competition (e.g., Shapiro, 2018; Simester et al., 2019), though these papers have not addressed the issue of parallel experimentation to our knowledge. In addition, to the extent that we leverage counterfactual policy logging to improve the precision of our estimates, our work is related to the recent literature on digital advertising that has suggested such strategies for improving statistical efficiency (e.g., Johnson et al., 2017; Simester et al., 2019).…”
Section: Relationship To the Literature (mentioning)
confidence: 99%
“…This ensures we implement statistical analysis on a set of users in the treatment group who have the highest opportunity to be factually served the focal ad, and on a set of equivalent users in the control group who have the highest opportunity to be counterfactually served the focal ad. By removing users with low propensity to be served the focal ad from the analysis, we improve precision (e.g., Johnson et al., 2017; Simester et al., 2019).…”
Section: Counterfactual Policy Logging (mentioning)
confidence: 99%
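A rough sketch of the trimming step described in this excerpt, with hypothetical variable names, simulated data, and a made-up propensity threshold (not the cited papers' code), filters out users with a low estimated probability of being served the focal ad before comparing treatment and control outcomes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical experiment data: estimated propensity of being served the focal ad,
# treatment/control assignment, and a binary outcome such as conversion.
n = 20_000
propensity = rng.beta(2, 8, size=n)             # most users unlikely to see the ad
treated = rng.integers(0, 2, size=n)            # 1 = treatment group, 0 = control
outcome = rng.binomial(1, 0.02 + 0.03 * treated * propensity)

# Keep only users with a meaningful chance of (counterfactually) seeing the focal ad.
threshold = 0.10
keep = propensity >= threshold

# Difference in mean outcomes on the trimmed sample; dropping low-propensity users
# removes observations that dilute the comparison and inflate the standard error.
lift = outcome[keep & (treated == 1)].mean() - outcome[keep & (treated == 0)].mean()
print(f"Kept {keep.mean():.0%} of users; estimated lift on trimmed sample: {lift:.4f}")
```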