How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do about It
Citation: King, Gary, and Margaret Earling Roberts. 2014. How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do about It. Political Analysis, 1-21.
Abstract: "Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. Even though this message is well known to methodologists and has appeared in the literature in several forms, it has failed to reach most applied researchers. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help applied researchers realize these gains via an alternative perspective that offers a productive way to use robust standard errors; a new general and easier-to-use information test statistic which is easier to apply appropriately; and practical illustrations via simulations and real examples from published research. Instead of jettisoning this extremely popular tool, as some suggest, we show how robust and classical standard error differences can provide effective clues about model misspecification, likely biases, and a guide to more reliable inferences.
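The core diagnostic idea — that a divergence between classical and robust (sandwich) standard errors signals model misspecification — can be illustrated with a minimal simulation. The sketch below is not the authors' proposed test statistic; it is a hypothetical example using ordinary least squares with deliberately heteroskedastic errors, so that the classical homoskedastic variance formula and the HC0 sandwich estimator disagree:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])

# Misspecification: error variance grows with x, violating the
# homoskedasticity assumption behind classical standard errors.
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + x, n)

# OLS fit
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Classical SEs: assume a single error variance sigma^2.
sigma2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# Robust (HC0 sandwich) SEs: use each observation's squared residual.
meat = X.T @ (X * resid[:, None] ** 2)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print("classical:", se_classical)
print("robust:   ", se_robust)
```

When the two sets of standard errors differ markedly, as they do here for the slope, the paper's point is that this gap should prompt a respecification of the model rather than a simple substitution of the robust numbers.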
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:13572089