Publication: Statistical and Machine Learning Methods for Multi-Study Prediction and Causal Inference
Date
2022-06-27
Authors
Wang, Cathy
Published Version
The Harvard community has made this article openly available.
Citation
Wang, Cathy. 2022. Statistical and Machine Learning Methods for Multi-Study Prediction and Causal Inference. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Abstract
In many areas of biomedical research, exponential advances in technology and the facilitation of systematic data-sharing have increased access to multiple studies. This dissertation proposes and compares methods to address three challenges in multi-study learning. First, personalized cancer risk assessment is key to early prevention, but studies typically report aggregated risk information. We address this challenge by proposing a method that integrates and deconvolves aggregated risk, allowing for heterogeneity in study populations, design, and risk measures, to provide personalized risk estimates that comprehensively reflect the best available data. Second, prediction models are widely used to evaluate disease risk and inform decisions about treatment, but models trained on a single study generally perform worse on out-of-study samples. To address this challenge, we compare two strategies for training prediction models on multiple studies to improve generalizability: merging and ensembling; in practice, our theory can help guide decisions on choosing the ideal strategy. Third, heterogeneous treatment effect estimation is central to personalizing treatment and improving clinical practice, but existing approaches to synthesizing evidence across multiple studies do not account for between-study heterogeneity. We address this challenge by proposing a flexible method that estimates heterogeneous treatment effects from multiple studies, including evidence from randomized controlled trials and real-world data, while appropriately accounting for between-study differences in the propensity score and outcome models.
In Chapter 1, we propose a meta-analytic approach for deconvolving aggregated risks to provide age-, gene-, and sex-specific cancer risk. Carriers of pathogenic variants in mismatch repair (MMR) genes benefit from reliable information about their cancer risk to better inform targeted surveillance strategies for colorectal cancer (CRC), but published estimates vary. Variation in published estimates could arise from differences in study designs, selection criteria for molecular testing, and statistical adjustments for ascertainment. Previous meta-analyses of CRC risk are based on studies that report gene- and sex-specific risk; this may exclude studies that report cancer risk aggregated across sexes and genes and lead to bias. To address this challenge, our meta-analytic approach can deconvolve aggregated risks, allowing us to use all of the information available in the literature and provide more comprehensive penetrance estimates. This method can be applied in the future to other gene/cancer combinations without restriction on the mutation.
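To illustrate the core idea behind deconvolving aggregated risks, the sketch below assumes (hypothetically — this is a simplification, not the dissertation's meta-analytic model) that each study's reported aggregated risk is a prevalence-weighted mixture of subgroup-specific risks, R_agg = Σ_g π_g R_g, and recovers the subgroup risks by solving the resulting linear system across studies. The subgroup labels and numbers are invented for illustration.

```python
import numpy as np

# Hypothetical subgroups: (MLH1, female), (MLH1, male), (MSH2, female), (MSH2, male)
true_risk = np.array([0.35, 0.45, 0.30, 0.40])

# Each row: one study's subgroup mix (proportions of carriers per subgroup).
mix = np.array([
    [0.25, 0.25, 0.25, 0.25],   # study reporting risk aggregated over all subgroups
    [0.50, 0.50, 0.00, 0.00],   # study of MLH1 carriers only, sexes pooled
    [0.00, 0.00, 0.60, 0.40],   # study of MSH2 carriers, uneven sex mix
    [0.40, 0.10, 0.40, 0.10],   # study enriched for female carriers
])

# Observed aggregated risks (noise-free in this toy example).
agg_risk = mix @ true_risk

# "Deconvolve": solve the linear system for subgroup-specific risks.
est, *_ = np.linalg.lstsq(mix, agg_risk, rcond=None)
print(np.round(est, 3))
```

In practice the dissertation's approach additionally handles noisy risk estimates, heterogeneous study designs, and ascertainment adjustments, which this toy linear-algebra view omits.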
In Chapter 2, we compare methods for training gradient boosting models on multiple studies. When training and test studies come from different distributions, prediction models trained on a single study generally perform worse on out-of-study samples due to heterogeneity in study design, data collection methods, and sample characteristics. Training prediction models on multiple studies can address this challenge and improve cross-study replicability of predictions. We focus on two strategies for training cross-study replicable models: 1) merging all studies and training a single model, and 2) multi-study ensembling, which involves training a separate model on each study and combining the resulting predictions. We study boosting algorithms in a regression setting and compare cross-study replicability of merging vs. multi-study ensembling both empirically and theoretically. In particular, we characterize an analytical transition point beyond which ensembling exhibits lower prediction error than merging for boosting with linear learners. We verify the theoretical transition point empirically and illustrate how it may guide practitioners' choice regarding merging vs. ensembling in a breast cancer application.
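The two strategies compared in this chapter can be sketched in code. This is a minimal illustration under simulated data, not the dissertation's experimental setup: "merging" fits one boosting model on the pooled studies, while "multi-study ensembling" fits one model per study and averages the predictions. The simulation design (five covariates, heterogeneity entering through one slope) is an assumption made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def make_study(n, slope):
    """Simulate one study; between-study heterogeneity enters via the slope."""
    X = rng.normal(size=(n, 5))
    y = slope * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)
    return X, y

# Training studies with heterogeneous effects; the test study differs again.
studies = [make_study(200, s) for s in (0.8, 1.0, 1.2)]
X_test, y_test = make_study(500, 1.1)

# Strategy 1: merge all studies and train a single boosting model.
X_merged = np.vstack([X for X, _ in studies])
y_merged = np.concatenate([y for _, y in studies])
merged = GradientBoostingRegressor(random_state=0).fit(X_merged, y_merged)

# Strategy 2: ensemble -- one model per study, predictions averaged.
models = [GradientBoostingRegressor(random_state=0).fit(X, y) for X, y in studies]
pred_ens = np.mean([m.predict(X_test) for m in models], axis=0)

mse_merge = np.mean((merged.predict(X_test) - y_test) ** 2)
mse_ens = np.mean((pred_ens - y_test) ** 2)
print(f"merged MSE: {mse_merge:.3f}, ensemble MSE: {mse_ens:.3f}")
```

Which strategy wins depends on the degree of between-study heterogeneity; the chapter's analytical transition point characterizes when ensembling overtakes merging for boosting with linear learners.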
In Chapter 3, we propose an approach for estimating heterogeneous treatment effects in multiple studies. Heterogeneous treatment effect estimation is central to many modern statistical applications, such as precision medicine. Despite increased access to multiple studies, existing methods for heterogeneous treatment effect estimation are largely rooted in theory based on a single study. These methods generally rely on the assumption that the heterogeneous treatment effect is the same across studies. However, this assumption may be untenable under potential heterogeneity in study design, data collection methods, and sample characteristics across multiple studies. To address this challenge, we propose the multi-study R-learner for estimating heterogeneous treatment effects in the presence of between-study heterogeneity. This method allows information to be borrowed across multiple studies and allows flexible modeling of the nuisance components with machine learning methods. We show analytically that optimizing the multi-study R-loss is equivalent to optimizing the oracle loss up to an error that diminishes at a relatively fast rate with the sample size. Under the series estimation framework, we derive a pointwise normality result for the multi-study R-learner estimator. Empirically, we show via simulations and a breast cancer application that as between-study heterogeneity increases, the multi-study R-learner achieves lower estimation error than the R-learner.
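For context, the single-study R-learner (Nie and Wager) that this chapter generalizes estimates the treatment effect function τ by minimizing the empirical R-loss; the multi-study R-loss extends this to study-specific nuisance functions. The standard single-study form is:

```latex
\hat{L}_n(\tau) \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \Bigl[\bigl(Y_i - \hat{m}(X_i)\bigr)
        - \bigl(W_i - \hat{e}(X_i)\bigr)\,\tau(X_i)\Bigr]^{2},
```

where $Y_i$ is the outcome, $W_i$ the treatment indicator, $\hat{m}(x)$ an estimate of the outcome model $\mathbb{E}[Y \mid X = x]$, and $\hat{e}(x)$ an estimate of the propensity score $\mathbb{P}(W = 1 \mid X = x)$. The exact form of the multi-study R-loss is given in the chapter itself.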
Keywords
causal inference, machine learning, meta-analysis, multi-study, prediction modeling, Biostatistics
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service