2016
DOI: 10.1073/pnas.1510504113
Improving massive experiments with threshold blocking

Abstract: Inferences from randomized experiments can be improved by blocking: assigning treatment in fixed proportions within groups of similar units. However, the use of the method is limited by the difficulty in deriving these groups. Current blocking methods are restricted to special cases or run in exponential time; are not sensitive to clustering of data points; and are often heuristic, providing an unsatisfactory solution in many common instances. We present an algorithm that implements a widely applicable class o…
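The blocking objective named in the abstract, forming homogeneous groups that each contain at least a minimum number of units, can be illustrated with a simple greedy sketch. This is not the paper's approximately optimal graph-based algorithm; the function name, the greedy seeding strategy, and the Euclidean distance choice are all illustrative assumptions.

```python
import numpy as np

def greedy_threshold_blocking(X, k=2):
    """Group units into blocks of at least k similar units.

    A simplified greedy illustration of the threshold-blocking problem:
    repeatedly seed a block with an unassigned unit and fill it with the
    seed's k-1 nearest unassigned neighbours. Assumes n >= k.
    """
    n = len(X)
    unassigned = set(range(n))
    # pairwise Euclidean distances between units' covariate vectors
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    blocks = []
    while len(unassigned) >= k:
        seed = unassigned.pop()
        # the k-1 nearest still-unassigned neighbours of the seed
        rest = sorted(unassigned, key=lambda j: d[seed, j])[:k - 1]
        for j in rest:
            unassigned.discard(j)
        blocks.append([seed] + rest)
    # attach any leftover units to the block containing their nearest unit
    for i in unassigned:
        nearest = min(range(len(blocks)),
                      key=lambda b: min(d[i, j] for j in blocks[b]))
        blocks[nearest].append(i)
    return blocks
```

Every block ends up with at least k units and every unit lands in exactly one block; unlike the paper's algorithm, however, this greedy version carries no guarantee on within-block distances.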

Cited by 47 publications (33 citation statements)
References 35 publications
“…Consider, for example, blocking observations in an experiment—that is, grouping together observations before random assignment to improve the precision of estimated effects. Higgins and Sekhon (2014) leveraged insights from graph theory to provide a blocking algorithm with guarantees about the similarity of observations assigned to the same block. Moore and Moore (2013) used tools to provide a blocking algorithm for experiments that arrive sequentially.…”
Section: Combining Machine Learning and Causal Inference
confidence: 99%
“…From this full list, we constructed a politically balanced set of users to form the subject pool for our experiment; we also removed users with more than 15,000 followers or for whom the partisanship estimator was unable to return a score. To balance subjects across experimental conditions and to improve the precision of our causal inference, we performed randomized assignment by blocking (11). We created homogeneous blocks of users based on (i) users' partisanship, (ii) the log transform of the number of followers (we used the log transform since this data is highly skewed), (iii) the number of days with at least one tweet in the past 14 days (to measure recent activity on the platform), and (iv) the number of mutual friendships divided by the total number of followers (as a proxy for the tendency to reciprocate follows).…”
Section: Introduction
confidence: 99%
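The second step described in that excerpt, assigning conditions in fixed proportions within each pre-formed block, can be sketched as follows. The function name and the block/unit labels are hypothetical; only the within-block shuffling idea comes from the text.

```python
import random

def block_randomize(blocks, treatments=("treatment", "control"), seed=0):
    """Assign conditions in fixed proportions within each block.

    `blocks` maps a block label to a list of unit ids. Within every
    block the condition labels are repeated to cover the block and then
    shuffled, so each condition appears in (near-)equal numbers.
    """
    rng = random.Random(seed)
    assignment = {}
    for label, units in blocks.items():
        # fixed proportions: cycle through the conditions, then shuffle
        conds = [treatments[i % len(treatments)] for i in range(len(units))]
        rng.shuffle(conds)
        for unit, cond in zip(units, conds):
            assignment[unit] = cond
    return assignment
```

Because the shuffle happens inside each block, treatment and control groups are balanced on whatever covariates defined the blocks, which is exactly the precision gain blocking is meant to deliver.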
“…to conduct completely randomized experiments (CREs) within blocks of covariates. This remains a powerful tool in modern experiments (Miratrix et al, 2013;Higgins et al, 2016;Athey and Imbens, 2017). Blocking is a special case of rerandomization (Morgan and Rubin, 2012), which rejects 'bad' random allocations that violate certain covariate balance criteria.…”
Section: Introduction
confidence: 99%
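The rerandomization idea mentioned in the last excerpt, rejecting 'bad' random allocations that violate a covariate balance criterion, can be sketched with the Mahalanobis-distance balance statistic. The threshold value, iteration cap, and function name here are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def rerandomize(X, n_treat, threshold=1.0, max_draws=10000, seed=0):
    """Draw treatment allocations until covariate balance is acceptable.

    Balance is measured by the Mahalanobis distance between treatment
    and control covariate means; allocations with distance >= threshold
    are rejected and redrawn.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # scaled covariance of the mean difference under complete randomization
    cov_inv = np.linalg.inv(
        np.cov(X, rowvar=False) * (1 / n_treat + 1 / (n - n_treat)))
    for _ in range(max_draws):
        treat = np.zeros(n, dtype=bool)
        treat[rng.choice(n, size=n_treat, replace=False)] = True
        diff = X[treat].mean(axis=0) - X[~treat].mean(axis=0)
        m = diff @ cov_inv @ diff  # Mahalanobis balance statistic
        if m < threshold:
            return treat, m
    raise RuntimeError("no allocation met the balance criterion")
```

Blocking can be seen as the special case in which the rejection rule is built into the assignment mechanism itself: only allocations that respect the block structure are ever drawn.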