Person: Yee, Darrick Shen-Wei
Last Name: Yee
First Name: Darrick Shen-Wei
Search Results
Publication: A Three-Study Examination of Test-Based Accountability Metrics (2017-05-08)
Yee, Darrick Shen-Wei; Ho, Andrew; Koretz, Daniel; Miratrix, Luke
Recent state and federal policy initiatives have led to the development of a multitude of statistics intended to measure school performance. Of these, statistics constructed from student test scores number among both the most widely-used and most controversial. In many cases, researchers and policymakers alike are not fully aware of the ways in which these statistics may lead to unjustified inferences regarding school effectiveness. A substantial amount of recent research has attempted to remedy this, although much remains unknown. This thesis seeks to contribute to these research efforts via three papers, each examining how a commonly-employed accountability statistic may be influenced by factors unrelated to student proficiency or school effectiveness. The first paper demonstrates how the discrete nature of test scores leads to biased estimates of changes in the percentage of “proficient” students between any two given years and examines estimators that provide better recovery of this parameter. The second paper makes use of a state-wide natural experiment to show that a change in testing program, from paper-and-pencil to computer-adaptive, may cause apparent changes in achievement gaps even when relative student proficiencies have remained constant. The third paper examines “growth-based” accountability metrics based on vertically-scaled assessments, showing that certain types of metrics based on gain scores can be modeled via nonlinear transformations of the underlying vertical scale. It then makes use of this result to investigate the potential magnitude of impacts of such transformations on growth-based school accountability ratings.

Publication: Discreteness Causes Bias in Percentage-Based Comparisons: A Case Study From Educational Testing (Informa UK Limited, 2015)
Yee, Darrick Shen-Wei; Ho, Andrew
Discretizing continuous distributions can lead to bias in parameter estimates. We present a case study from educational testing that illustrates dramatic consequences of discreteness when discretizing partitions differ across distributions. The percentage of test-takers who score above a certain cutoff score (percent above cutoff, or “PAC”) often describes overall performance on a test. Year-over-year changes in PAC, or ΔPAC, have gained prominence under recent U.S. education policies, with public schools facing sanctions if they fail to meet PAC targets. In this paper, we describe how test score distributions act as continuous distributions that are discretized inconsistently over time. We show that this can propagate considerable bias to PAC trends, where positive ΔPACs appear negative, and vice versa, for a substantial number of actual tests. A simple model shows that this bias applies to any comparison of PAC statistics in which values for one distribution are discretized differently from values for the other.
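
The mechanism in the second abstract is concrete enough to simulate. Below is a minimal Python sketch, not the authors' code: the normal distributions, the 4-point score-conversion tables, and the 240 cutoff are all invented for illustration. It constructs two years in which the underlying continuous distribution genuinely improves, yet the reported ΔPAC is negative, because the attainable scale scores straddle the cutoff differently in each year.

    import numpy as np

    rng = np.random.default_rng(0)
    CUTOFF = 240.0

    def pac(latent, scale_points):
        # Snap each continuous latent score to the nearest attainable
        # scale score (the discretization step), then take the fraction
        # at or above the cutoff.
        nearest = np.abs(latent[:, None] - scale_points[None, :]).argmin(axis=1)
        return (scale_points[nearest] >= CUTOFF).mean()

    # Year 2's underlying distribution is genuinely better than year 1's.
    year1 = rng.normal(loc=238.0, scale=12.0, size=100_000)
    year2 = rng.normal(loc=239.0, scale=12.0, size=100_000)

    # Invented year-specific conversion tables: in year 1 the attainable
    # scores include 240 itself; in year 2 they jump from 238 to 242.
    points1 = np.arange(196.0, 285.0, 4.0)  # ..., 236, 240, 244, ...
    points2 = np.arange(198.0, 287.0, 4.0)  # ..., 238, 242, 246, ...

    true_change = (year2 >= CUTOFF).mean() - (year1 >= CUTOFF).mean()
    reported = pac(year2, points2) - pac(year1, points1)
    print(f"true change in fraction above 240: {true_change:+.3f}")  # about +0.03
    print(f"reported Delta-PAC:                {reported:+.3f}")     # about -0.03

The sign flip occurs because scores near 240 snap up to 240 (passing) in year 1 but down to 238 (failing) in year 2, so the effective cutoff moves even though the nominal one does not.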
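
The thesis abstract's third paper makes a related point about gain scores on vertically scaled tests. As a toy illustration, with all numbers invented and an exponential rescaling standing in for an arbitrary monotone nonlinear transformation, a rescaling that preserves every student's rank can still reverse which of two schools shows the larger gain:

    import math

    # Invented (fall, spring) vertical-scale scores for two schools.
    school_a = (300.0, 400.0)  # gain of 100 scale points
    school_b = (500.0, 560.0)  # gain of 60 scale points

    def gain(school, transform=lambda x: x):
        fall, spring = school
        return transform(spring) - transform(fall)

    # On the original vertical scale, School A shows the larger gain.
    print(gain(school_a), gain(school_b))        # 100.0  60.0

    # A monotone (order-preserving) but nonlinear rescaling of the same
    # scale, here an exponential stretch of its upper end:
    g = lambda x: math.exp(x / 200.0)

    # Under the rescaled metric the ranking flips: School B now shows
    # the larger gain, though no score ordering has changed.
    print(gain(school_a, g), gain(school_b, g))  # ~2.91  ~4.26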