Disrupted Data: Using Longitudinal Assessment Systems to Monitor Test Score Quality
An, Lily Shiao
Ho, Andrew Dean
Davis, Laurie Laughlin
Citation: An, Lily S., Andrew D. Ho, and Laurie Laughlin Davis. 2022. Disrupted data: Using longitudinal assessment systems to monitor test score quality. Educational Measurement: Issues and Practice.
Abstract: Technical documentation for educational tests focuses primarily on properties of individual scores at single points in time. Reliability, standard errors of measurement, item parameter estimates, fit statistics, and linking constants are standard technical features that external stakeholders use to evaluate items and individual scale scores. However, these cross-sectional, "point-in-time" features can mask threats to the validity of score interpretations, including those for aggregate scores and trends over time. We use test score data collected before and during the COVID-19 pandemic to show that longitudinal analyses, not just point-in-time analyses, are necessary to detect threats to desired inferences. We propose that educational agencies require and vendors include longitudinal data features, including "match rates" and correlations, as standard exhibits in technical documentation.
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37370969
Collection: GSE Scholarly Articles