Validation Methods for Aggregate-Level Test Scale Linking: A Rejoinder
Ho, Andrew; Reardon, Sean F.; Kalogrides, Demetra
Citation: Ho, Andrew, Sean F. Reardon, and Demetra Kalogrides. "Validation Methods for Aggregate-Level Test Scale Linking: A Rejoinder." Journal of Educational and Behavioral Statistics 46, no. 2 (2021): 209-218. DOI: 10.3102/1076998621994540
Abstract: In Reardon, Kalogrides, and Ho (2021), we developed precision-adjusted random effects models to estimate aggregate-level linking error, for populations and subpopulations, for averages and progress over time. We are grateful to past editor Dan McCaffrey for selecting our paper as the focal article for a set of commentaries from our colleagues, Daniel Bolt, Mark Davison, Alina von Davier, Tim Moses, and Neil Dorans. These commentaries reinforce important cautions and identify promising directions for future research. In this rejoinder, we clarify aspects of our originally proposed method. (1) Validation methods provide evidence of benefits and risks that different experts may weigh differently for different purposes. (2) Our proposed method differs from "standard mapping" procedures using the National Assessment of Educational Progress not only by using a linear (vs. equipercentile) link but also by targeting direct validity evidence about counterfactual aggregate scores. (3) Multilevel approaches that assume common score scales across states are indeed a promising next step for validation, and we hope that states enable researchers to use more of their common-core-era consortium test data for this purpose. Finally, we apply our linking method to an extended panel of data from 2009 to 2017 to show that linking recovery has remained stable.
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37374047
Collections: GSE Scholarly Articles