Dealing with Interference on Experimentation Platforms
Date
2018-09-16
The Harvard community has made this article openly available.
Abstract
The theory of causal inference, as formalized by the potential outcomes framework, relies on the assumption that experimental units are independent. When independence is not tenable, we say there is interference, and the core results of causal inference can no longer be guaranteed. Recent research efforts have focused on extending the theory to settings where interference is present. The many advantages of experimentation platforms over more traditional settings of causal inference (no issues of non-compliance, a large number of experimental units, and the ease of collecting outcomes over the course of an experiment) make them an ideal setting for studying causality with interference. With this setting in mind, we explore how multi-level designs, called Experiment-of-Experiments, allow us to detect and mitigate the effects of interference on experimentation platforms. In particular, we develop a design-based statistical test for the no-interference assumption. We further design an empirical procedure for comparing the effectiveness of cluster-based randomized designs. Finally, we show that randomized saturation designs can be optimized to reduce the bias and variance of standard estimators, and we extend these results to a new category of randomized designs: optimized saturation designs.
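The dissertation itself develops the formal machinery; the sketch below is only a rough illustration of the flavor of these ideas, not the procedures from the thesis. It shows (i) a simple permutation test in the spirit of a design-based test for no interference, comparing difference-in-means estimates between a unit-randomized arm and a cluster-randomized arm of an Experiment-of-Experiments, and (ii) a basic randomized saturation assignment. All function names, the permutation statistic, and the data layout are hypothetical assumptions made for illustration.

```python
import numpy as np

def diff_in_means(y, z):
    # Standard difference-in-means estimate of the average treatment effect.
    return y[z == 1].mean() - y[z == 0].mean()

def no_interference_permutation_test(y_unit, z_unit, y_clust, z_clust,
                                     n_perm=5000, seed=0):
    # Hypothetical illustration: under no interference, a unit-randomized arm
    # and a cluster-randomized arm estimate the same effect, so the observed
    # gap between their estimates should look like chance variation when
    # units are reshuffled across arms.
    rng = np.random.default_rng(seed)
    observed = abs(diff_in_means(y_unit, z_unit) - diff_in_means(y_clust, z_clust))

    y = np.concatenate([y_unit, y_clust])
    z = np.concatenate([z_unit, z_clust])
    n_a = len(y_unit)

    gaps = np.empty(n_perm)
    for i in range(n_perm):
        p = rng.permutation(len(y))
        a, b = p[:n_a], p[n_a:]
        # Assumes each permuted arm keeps both treated and control units,
        # which holds with high probability for large, balanced arms.
        gaps[i] = abs(diff_in_means(y[a], z[a]) - diff_in_means(y[b], z[b]))

    return (gaps >= observed).mean()  # permutation p-value

def randomized_saturation_assignment(cluster_ids, saturations, seed=0):
    # Hypothetical illustration of a randomized saturation design: each
    # cluster draws a saturation level, then each of its units is treated
    # independently with that probability.
    rng = np.random.default_rng(seed)
    clusters = np.unique(cluster_ids)
    pi = dict(zip(clusters, rng.choice(saturations, size=len(clusters))))
    return np.array([rng.random() < pi[c] for c in cluster_ids], dtype=int)
```

For instance, `randomized_saturation_assignment(cluster_ids, [0.0, 0.25, 0.75, 1.0])` yields one treatment vector over units, and a small p-value from the permutation test would be evidence against the no-interference assumption under this illustrative setup.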
Keywords
Computer Science, Statistics
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.