Quantifying the Fraction of Missing Information for Hypothesis Testing in Statistical and Genetic Studies



Title: Quantifying the Fraction of Missing Information for Hypothesis Testing in Statistical and Genetic Studies
Author: Nicolae, Dan L.; Meng, Xiao-Li; Kong, Augustine

Note: Order does not necessarily reflect citation order of authors.

Citation: Nicolae, Dan L., Xiao-Li Meng, and Augustine Kong. 2008. Quantifying the fraction of missing information for hypothesis testing in statistical and genetic studies. Statistical Science 23(3): 287–312.
Abstract: Many practical studies rely on hypothesis testing procedures applied to data sets with missing information. An important part of the analysis is to determine the impact of the missing data on the performance of the test, and this can be done by properly quantifying the relative (to complete data) amount of available information. The problem is directly motivated by applications to studies, such as linkage analyses and haplotype-based association projects, designed to identify genetic contributions to complex diseases. In genetic studies the relative information measures are needed for experimental design, technology comparison, interpretation of the data, and understanding the behavior of some of the inference tools. The central difficulties in constructing such information measures arise from the multiple, and sometimes conflicting, aims in practice. For large samples, we show that a satisfactory, likelihood-based general solution exists by using appropriate forms of the relative Kullback–Leibler information, and that the proposed measures are computationally inexpensive given the maximized likelihoods with the observed data. Two measures are introduced, under the null and alternative hypotheses, respectively. We exemplify the measures on data from mapping studies of inflammatory bowel disease and diabetes. For small-sample problems, which appear rather frequently in practice and sometimes in disguised forms (e.g., measuring individual contributions to a large study), the robust Bayesian approach holds great promise, though the choice of a general-purpose "default prior" is a very challenging problem. We also report several intriguing connections encountered in our investigation, such as the connection with the fundamental identity for the EM algorithm, the connection with the second CR (Chapman–Robbins) lower information bound, the connection with entropy, and connections between likelihood ratios and Bayes factors. We hope that these seemingly unrelated connections, as well as our specific proposals, will stimulate a general discussion and research in this theoretically fascinating and practically needed area.
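The abstract speaks of quantifying the amount of information available relative to complete data. As a minimal illustration of that general idea (not the paper's specific Kullback–Leibler-based proposal), the classical fraction of missing information 1 − I_obs/I_com can be computed directly for the simple case of estimating a normal mean when observations are missing completely at random, since Fisher information then scales with sample size; the function name and model here are illustrative assumptions:

```python
def fraction_missing_information(n_observed, n_total, sigma2=1.0):
    """Fraction of missing information for estimating a normal mean.

    Assumes i.i.d. N(mu, sigma2) data with observations missing
    completely at random (MCAR). The Fisher information for mu from
    n observations is n / sigma2, so the observed- and complete-data
    informations are proportional to the respective sample sizes, and
    the fraction of missing information reduces to the fraction of
    missing observations.
    """
    i_obs = n_observed / sigma2  # observed-data Fisher information
    i_com = n_total / sigma2     # complete-data Fisher information
    return 1.0 - i_obs / i_com


# Example: 70 of 100 planned observations were collected, so 30% of
# the Fisher information about the mean is missing.
print(fraction_missing_information(70, 100))  # -> 0.3 (up to rounding)
```

In less trivial missing-data models the observed-data information no longer scales with sample size, which is why likelihood-based measures such as those proposed in the paper are needed; this toy case just fixes the interpretation of "relative amount of available information."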
Published Version: http://dx.doi.org/10.1214/07-STS244
Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:2766348

