Publication:
Discreteness Causes Bias in Percentage-Based Comparisons: A Case Study From Educational Testing

Date

2015

Publisher

Informa UK Limited

Citation

Yee, Darrick S., and Andrew D. Ho. 2015. Discreteness causes bias in percentage-based comparisons: A case study from educational testing. The American Statistician 69: 174–181.

Abstract

Discretizing continuous distributions can lead to bias in parameter estimates. We present a case study from educational testing that illustrates dramatic consequences of discreteness when discretizing partitions differ across distributions. The percentage of test-takers who score above a certain cutoff score (percent above cutoff, or “PAC”) often describes overall performance on a test. Year-over-year changes in PAC, or ΔPAC, have gained prominence under recent U.S. education policies, with public schools facing sanctions if they fail to meet PAC targets. In this paper, we describe how test score distributions act as continuous distributions that are discretized inconsistently over time. We show that this can propagate considerable bias to PAC trends, where positive ΔPACs appear negative, and vice versa, for a substantial number of actual tests. A simple model shows that this bias applies to any comparison of PAC statistics in which values for one distribution are discretized differently from values for the other.
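
As an illustrative sketch (not from the paper), the following Python simulation assumes hypothetical normal latent score distributions and rounding-based discretization; the means (0 and 0.05), cutoff (0.5), and grid step sizes (0.25 vs. 0.40) are chosen only to show how discretizing two cohorts onto different score scales can reverse the sign of ΔPAC.

```python
# Illustrative simulation only; all parameters are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
cutoff = 0.5  # fixed proficiency cutoff on the reporting scale

# Hypothetical continuous (latent) score distributions; year 2 is genuinely higher.
year1 = rng.normal(loc=0.00, scale=1.0, size=n)
year2 = rng.normal(loc=0.05, scale=1.0, size=n)

def pac(scores, cutoff, step=None):
    """Proportion at or above the cutoff; if `step` is given, round scores to that grid first."""
    if step is not None:
        scores = np.round(scores / step) * step  # discretize onto a coarser score scale
    return np.mean(scores >= cutoff)

# Continuous comparison: the true trend is positive.
delta_continuous = pac(year2, cutoff) - pac(year1, cutoff)

# Discretized comparison: the two years use different score scales (step sizes),
# so the fixed cutoff lands at a different effective point in each distribution.
delta_discrete = pac(year2, cutoff, step=0.40) - pac(year1, cutoff, step=0.25)

print(f"Continuous  ΔPAC: {delta_continuous:+.4f}")  # about +0.02 (true gain)
print(f"Discretized ΔPAC: {delta_discrete:+.4f}")    # about -0.06 (sign reversed)
```

With these assumed settings, rounding to a 0.25 grid counts any year-1 score above roughly 0.375 as exceeding the 0.5 cutoff, while rounding to a 0.40 grid requires roughly 0.6 in year 2, so a genuine improvement appears as a decline.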

Terms of Use

This article is made available under the terms and conditions applicable to Open Access Policy Articles (OAP), as set forth in the Terms of Service.
