Dealing with Interference on Experimentation Platforms
Abstract
The theory of causal inference, as formalized by the potential outcomes framework, relies on the assumption that the experimental units are independent. When independence is not tenable, we say there is interference, and the core results of causal inference can no longer be guaranteed. Recent research efforts have focused on extending the theory to settings where interference is present. The many advantages of experimentation platforms over more traditional settings of causal inference (no issue of non-compliance, a large number of experimental units, ease of collecting outcomes over the course of an experiment) make them an ideal setting for studying causality with interference. With this setting in mind, we explore how multi-level designs, Experiment-of-Experiments, can allow us to detect and mitigate the effects of interference on experimentation platforms. In particular, we develop a design-based statistical test for the no-interference assumption. We further design an empirical procedure for comparing the effectiveness of cluster-based randomized designs. Finally, we show that randomized saturation designs can be optimized to improve the bias and variance of standard estimators, and we extend these results to a new category of randomized designs: optimized saturation designs.
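The cluster-based randomized designs mentioned in the abstract can be illustrated with a minimal sketch (the function names and data layout here are illustrative, not taken from the dissertation): treatment is assigned at the cluster level, so that a unit and the neighbors it interferes with share a treatment status, and a standard difference-in-means estimator is then computed over units.

```python
import random
import statistics

def cluster_randomize(clusters, p=0.5, seed=0):
    """Assign each cluster (rather than each unit) to treatment with probability p."""
    rng = random.Random(seed)
    return {c: int(rng.random() < p) for c in clusters}

def diff_in_means(outcomes, assignment, unit_cluster):
    """Difference-in-means estimator: units inherit their cluster's assignment."""
    treated = [y for u, y in outcomes.items() if assignment[unit_cluster[u]] == 1]
    control = [y for u, y in outcomes.items() if assignment[unit_cluster[u]] == 0]
    return statistics.mean(treated) - statistics.mean(control)

# Example: two clusters of two units each, cluster "A" treated, cluster "B" control.
unit_cluster = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}
assignment = {"A": 1, "B": 0}
outcomes = {"u1": 2.0, "u2": 2.0, "u3": 1.0, "u4": 1.0}
estimate = diff_in_means(outcomes, assignment, unit_cluster)
```

Assigning whole clusters at once is the standard device for mitigating interference that spreads along within-cluster connections; the dissertation's contribution concerns testing for interference and comparing and optimizing such designs, which this sketch does not attempt to reproduce.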
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:39947197
- FAS Theses and Dissertations