Publication:
Statistics Can Lie but Can also Correct for Lies: Reducing Response Bias in NLAAS via Bayesian Imputation

Date

2013

Publisher

International Press of Boston, Inc.
The Harvard community has made this article openly available.

Citation

Liu, Jingchen, Xiao-Li Meng, Chih-Nan Chen, and Margarita Alegria. 2013. “Statistics can lie but can also correct for lies: Reducing response bias in NLAAS via Bayesian imputation.” Statistics and Its Interface 6 (3): 387-398. doi:10.4310/SII.2013.v6.n3.a9. http://dx.doi.org/10.4310/SII.2013.v6.n3.a9.

Abstract

The National Latino and Asian American Study (NLAAS) is a large-scale survey of psychiatric epidemiology, the most comprehensive survey of its kind. A unique feature of NLAAS is an embedded experiment for estimating the effect of alternative orderings of the interview questions. The findings from the experiment are not entirely unexpected, but they are nevertheless alarming. Compared with results under the widely used traditional ordering, self-reported psychiatric service-use rates often double or even triple under a more sensible ordering introduced by NLAAS. These findings explain certain perplexing empirical results in the literature, but they also pose grand challenges. For example, how can one assess racial disparities when different races were surveyed with different instruments that are now known to induce substantial differences? The project documented in this paper is part of an effort to address these questions. It builds models to impute the responses that would have been observed had respondents under the traditional survey not taken advantage of skip patterns to reduce interview time, a practice that increased the rate of incorrect negative responses over the course of the interview. The imputation modeling task is particularly challenging because of the complexity of the questionnaire, the small sample sizes for subgroups of interest, and the need to provide sensible imputations for whatever sub-population a future user might wish to study. As a case study, we report both our findings and our frustrations in our quest to deal with these common real-life complications.

Keywords

checking imputation quality, continuation ratio model, mental health, multiple imputation, probit model, question ordering

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
