School-Based Data Teams Ask the Darnedest Questions About Statistics: Three Essays in the Epistemology of Statistical Consulting and Teaching
Citation: Parker, Sean Stanley. 2014. School-Based Data Teams Ask the Darnedest Questions About Statistics: Three Essays in the Epistemology of Statistical Consulting and Teaching. Doctoral dissertation, Harvard Graduate School of Education.
Abstract: The essays in this thesis attempt to answer the most difficult questions that I have faced as a teacher and consultant for school-based data teams. When we report statistics to our fellow educators, what do we say and what do we leave unsaid? What do averages mean when no student is average? Why do we treat our population of students as infinite when we test for statistical significance? I treat these as important philosophical questions.

In the first essay, I use Paul Grice’s philosophical analysis of conversational logic to understand how data teams can accidentally mislead with true statistics, and I use Bernard Williams’s philosophical analysis of truthfulness to understand the value, for data teams, of not misleading with statistics. In short, statistical reports can be misleading when they violate the Gricean maxims of conversation (e.g., “be relevant,” “be orderly”). I argue that, for data teams, adhering to the Gricean maxims is an intrinsic value, alongside Williams’s intrinsic values of Sincerity and Accuracy. I conclude with some recommendations for school-based data teams.

In the second essay, I build on Nelson Goodman and Catherine Z. Elgin’s analyses of exemplification to argue that averages (i.e., medians and means) are attenuated, moderate, and sometimes fictive exemplars. As such, medians and means lend themselves to scientific objectivity.

In the third essay, I use Goodman’s theory of counterfactuals and Carl Hempel’s theory of explanation to articulate why data teams should make statistical inferences to infinite populations that include possible but not actual students. Data teams are generally concerned that their results are explainable by random chance. Random chance, as an explanation, implies lawlike generalizations, which in turn imply counterfactual claims about possible but not actual subjects. By statistically inferring to an infinite population of students, data teams can evaluate those counterfactual claims in order to assess the plausibility of random chance as an explanation for their findings.
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:13383545