Disrupted Data: Using Longitudinal Assessment Systems to Monitor Test Score Quality
Authors
An, Lily Shiao
Ho, Andrew Dean
Davis, Laurie Laughlin
Published Version
https://doi.org/10.1111/emip.12491
Citation
An, Lily S., Andrew D. Ho, and Laurie Laughlin Davis. 2022. "Disrupted Data: Using Longitudinal Assessment Systems to Monitor Test Score Quality." Educational Measurement: Issues and Practice.
Abstract
Technical documentation for educational tests focuses primarily on properties of individual scores at single points in time. Reliability, standard errors of measurement, item parameter estimates, fit statistics, and linking constants are standard technical features that external stakeholders use to evaluate items and individual scale scores. However, these cross-sectional, "point-in-time" features can mask threats to the validity of score interpretations, including those for aggregate scores and trends over time. We use test score data collected before and during the COVID-19 pandemic to show that longitudinal analyses, not just point-in-time analyses, are necessary to detect threats to desired inferences. We propose that educational agencies require and vendors include longitudinal data features, including "match rates" and correlations, as standard exhibits in technical documentation.
Terms of Use
This article is made available under the terms and conditions applicable to Individual Open Access License Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#IOAL
Citable link to this page
https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37370969
Collections
- GSE Scholarly Articles