Publication: A Gaussian-process Framework for Nonlinear Statistical Inference using Modern Machine Learning Models
Date
2023-05-12
Authors
Published Version
Journal Title
Journal ISSN
Volume Title
Publisher
The Harvard community has made this article openly available.
Citation
Deng, Wenying. 2023. A Gaussian-process Framework for Nonlinear Statistical Inference using Modern Machine Learning Models. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Research Data
Abstract
Gaussian Process Regression has become widely used in biomedical research in recent years, particularly for studying the intricate and nonlinear impacts of multivariate genetic or environmental exposures. This dissertation proposes methods for estimating and testing nonlinear effects, and for estimating variable importance with uncertainty quantification. Chapter 1 focuses on hypothesis testing, while Chapters 2 and 3 address general variable importance estimation problems.
In Chapter 1, we develop an R package \pkg{CVEK}, which introduces a suite of flexible machine learning models and robust hypothesis tests for learning the joint nonlinear effects of multiple covariates in limited samples. It implements the \textit{Cross-validated Ensemble of Kernels} (CVEK)\citep{liu_robust_2017}, an ensemble-based kernel machine learning method that adaptively learns the joint nonlinear effect of multiple covariates from data, and provides powerful hypothesis tests for both main effects of features and interactions among features. The R package \pkg{CVEK} provides a flexible, easy-to-use implementation of CVEK, and offers a wide range of choices for the kernel family (for instance, polynomial, radial basis function, Mat\'{e}rn, neural network, and others), model selection criteria, ensembling method (averaging, exponential weighting, cross-validated stacking), and the type of hypothesis test (asymptotic or parametric bootstrap). Through extensive simulations, we demonstrate the validity and robustness of this approach, and provide practical guidelines on how to design an estimation strategy for optimal performance in different data scenarios.
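The ensemble-of-kernels idea described above can be sketched in a few lines. This is an illustrative Python sketch using scikit-learn, not the \pkg{CVEK} R API: several kernel ridge models from different kernel families are fit, and their predictions are combined with the exponential-weighting rule $w_k \propto \exp(-\beta \,\widehat{\mathrm{err}}_k)$, where $\widehat{\mathrm{err}}_k$ is each model's cross-validated error (the choice of temperature $\beta$ here is a placeholder heuristic, not the package's rule).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

# Toy data: a smooth nonlinear signal in two covariates plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Candidate kernel families (a small subset of those CVEK supports).
models = [KernelRidge(kernel="rbf", gamma=1.0, alpha=0.1),
          KernelRidge(kernel="polynomial", degree=3, alpha=0.1),
          KernelRidge(kernel="laplacian", gamma=1.0, alpha=0.1)]

# Cross-validated MSE of each candidate model.
cv_err = np.array([-cross_val_score(m, X, y, cv=5,
                                    scoring="neg_mean_squared_error").mean()
                   for m in models])

# Exponential weighting: lower CV error -> larger ensemble weight.
beta = 1.0 / cv_err.min()          # temperature (illustrative heuristic)
w = np.exp(-beta * cv_err)
w /= w.sum()                        # weights sum to 1

# Fit on all data and form the weighted ensemble prediction.
for m in models:
    m.fit(X, y)
ensemble_pred = sum(wk * m.predict(X) for wk, m in zip(w, models))
```

Averaging and cross-validated stacking differ only in how `w` is chosen (uniform weights, or weights fit by regressing `y` on the held-out predictions).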
In Chapter 2, we propose a simple and unified framework for nonlinear variable importance estimation that incorporates uncertainty in the prediction function and is compatible with a wide range of machine learning models (e.g., tree ensembles, kernel methods, neural networks). In particular, for a learned nonlinear model $f(\mathbf{x})$, we consider quantifying the importance of an input variable $\mathbf{x}^j$ using the integrated partial derivative $\Psi_j = \Vert \frac{\partial}{\partial \mathbf{x}^j} f(\mathbf{x})\Vert^2_{P_\mathcal{X}}$. We then (1) provide a principled approach for quantifying uncertainty in variable importance by deriving its posterior distribution, and (2) show that the approach generalizes even to non-differentiable models such as tree ensembles. Rigorous Bayesian nonparametric theorems are derived to guarantee the posterior consistency and asymptotic uncertainty of the proposed approach. Extensive simulations and experiments on healthcare benchmark datasets confirm that the proposed algorithm outperforms existing classical and recent variable selection methods.
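To make the importance measure concrete: $\Psi_j$ is the squared partial derivative of $f$ with respect to $\mathbf{x}^j$, averaged over the covariate distribution $P_\mathcal{X}$. A minimal plug-in sketch, assuming the empirical distribution of the data stands in for $P_\mathcal{X}$ and using central finite differences (the dissertation instead derives a full posterior for $\Psi_j$ under a Gaussian-process model):

```python
import numpy as np

def psi_j(f, X, j, eps=1e-4):
    """Monte Carlo plug-in estimate of Psi_j = E_X[(d f / d x^j)^2],
    using central finite differences at each observed data point."""
    Xp, Xm = X.copy(), X.copy()
    Xp[:, j] += eps
    Xm[:, j] -= eps
    grad = (f(Xp) - f(Xm)) / (2 * eps)   # per-point partial derivative
    return np.mean(grad ** 2)            # average over empirical P_X

# Sanity check with a known function: f(x) = 3*x0 + x1^2,
# so df/dx0 = 3 (hence Psi_0 = 9) and df/dx1 = 2*x1
# (hence Psi_1 = E[4*x1^2] = 4 for standard normal x1).
rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 2))
f = lambda X: 3 * X[:, 0] + X[:, 1] ** 2
print(psi_j(f, X, 0))   # ≈ 9
print(psi_j(f, X, 1))   # ≈ 4
```

For a non-differentiable model such as a tree ensemble, this finite-difference plug-in is ill-defined at split points, which is exactly the gap the chapter's Gaussian-process treatment addresses.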
In Chapter 3, we develop a versatile framework that can be applied to continuous, count, and binary responses. The primary aim is to estimate variable importance scores using various machine learning models, including tree ensembles, kernel methods, neural networks, and others. Additionally, the proposed framework accounts for the impact of confounding variables and provides a systematic way to assess the uncertainty associated with variable importance scores by computing their posterior distribution. We derive Bayesian nonparametric theorems that ensure the posterior consistency and asymptotic uncertainty of the proposed approach. The efficacy of the proposed algorithm is validated through comprehensive simulations and experiments on socioeconomic benchmark datasets, indicating superior performance compared to existing traditional and contemporary variable selection techniques.
Description
Other Available Sources
Keywords
Gaussian Process Regression, Hypothesis Testing, Machine Learning, Statistical Inference, Variable Selection, Biostatistics, Statistics, Artificial intelligence
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service