Discreteness Causes Bias in Percentage-Based Comparisons: A Case Study From Educational Testing
Citation: Yee, Darrick S., and Andrew D. Ho. 2015. "Discreteness Causes Bias in Percentage-Based Comparisons: A Case Study From Educational Testing." The American Statistician 69: 174-181.
Abstract: Discretizing continuous distributions can lead to bias in parameter estimates. We present a case study from educational testing that illustrates dramatic consequences of discreteness when discretizing partitions differ across distributions. The percentage of test-takers who score above a certain cutoff score (percent above cutoff, or "PAC") often describes overall performance on a test. Year-over-year changes in PAC, or ΔPAC, have gained prominence under recent U.S. education policies, with public schools facing sanctions if they fail to meet PAC targets. In this paper, we describe how test score distributions act as continuous distributions that are discretized inconsistently over time. We show that this can propagate considerable bias to PAC trends, where positive ΔPACs appear negative, and vice versa, for a substantial number of actual tests. A simple model shows that this bias applies to any comparison of PAC statistics in which values for one distribution are discretized differently from values for the other.
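The mechanism the abstract describes can be illustrated with a minimal sketch (not the authors' model or data; the cohorts, cutoff, and grid offsets below are invented for illustration). Two cohorts of continuous scores are compared against a cutoff of 50. The second cohort genuinely improves, so the true ΔPAC is positive. But each year's scores are reported on a discrete integer scale whose bin edges sit on opposite sides of the cutoff, and the observed ΔPAC computed from the reported scores comes out negative:

```python
import math

def pac(scores, cutoff=50.0):
    """Percent above cutoff: fraction of scores at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

def discretize(s, grid_offset):
    """Map a continuous score to an integer scale score.

    Bin edges sit at integer + grid_offset, so the same continuous score
    can land in different discrete bins depending on the year's grid.
    grid_offset = -0.5 reproduces ordinary round-half-up.
    """
    return math.floor(s - grid_offset)

# Hypothetical cohorts: 40 students at 49.2 and 60 at 49.6 in year 1;
# every student's continuous score rises by 0.5 in year 2.
year1 = [49.2] * 40 + [49.6] * 60
year2 = [s + 0.5 for s in year1]          # 49.7 and 50.1

# True (continuous) ΔPAC: 0.6 - 0.0 = +0.6, a genuine improvement.
true_delta = pac(year2) - pac(year1)

# Year-specific discretization grids: year 1's edges fall below the
# cutoff (49.6 reports as 50), year 2's fall above it (50.1 reports as 49).
reported1 = [discretize(s, -0.5) for s in year1]
reported2 = [discretize(s, 0.3) for s in year2]

# Observed (discrete) ΔPAC: 0.0 - 0.6 = -0.6, the opposite sign.
observed_delta = pac(reported2) - pac(reported1)
```

The sign flip is driven entirely by where the cutoff falls within each year's discrete score intervals, which is the paper's point: the bias arises whenever the two distributions being compared are discretized with different partitions.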
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:27471534