Person: Kashin, Konstantin
Last Name: Kashin
First Name: Konstantin
Search Results (2 of 2)
Publication: Essays on Political Methodology and Data Science (2015-05-15)
Kashin, Konstantin; King, Gary; Spirling, Arthur; Ziblatt, Daniel; Glynn, Adam

This collection of six essays makes novel methodological contributions to causal inference, time-series cross-sectional forecasting, and supervised text analysis. The first three essays start from the premise that while randomized experiments are the gold standard for causal claims, randomization is not feasible or ethical for many questions in the social sciences. Researchers have thus devised methods that approximate experiments using nonexperimental control units to estimate counterfactuals. However, control units may be costly to obtain, incomparable to the treated units, or completely unavailable when all units are treated. We challenge the commonplace intuition that control units are necessary for causal inference and propose conditions under which one can use post-treatment variables to estimate causal effects. At its core, the argument shows when causal effects can be identified by comparing treated units to other treated units, without recourse to control units. The next two essays demonstrate that the U.S. Social Security Administration's (SSA) forecasting errors were approximately unbiased until about 2000 but then began to grow quickly, with increasingly overconfident uncertainty intervals. Moreover, the errors all turn out to be in the same potentially dangerous direction, each making the Social Security Trust Funds look healthier than they actually are. We also identify the cause of these findings, drawing on evidence from a large number of interviews we conducted with participants at every level of the forecasting and policy processes. Finally, the last essay develops a new dataset for studying the influence of business on public policy decisions across the American states. Compiling and digitizing nearly 1,000 leaked legislative proposals made by a leading business lobbying group in the states, along with digitized versions of all state legislation introduced or enacted between 1995 and 2013, we use a two-stage supervised classifier to categorize state bills as sharing either the same underlying concepts or the same specific language as business-drafted model bills. We find that these business-backed bills were more likely to be introduced and enacted by legislatures lacking policy resources, such as those without full-time members and with few staffers.

Publication: Systematic Bias and Nontransparency in US Social Security Administration Forecasts (American Economic Association, 2015)
Kashin, Konstantin; King, Gary; Soneji, Samir

We offer an evaluation of the Social Security Administration demographic and financial forecasts used to assess the long-term solvency of the Social Security Trust Funds. The same forecasting methodology is also used in evaluating policy proposals put forward by Congress to modify the Social Security program. Ours is the first evaluation to compare the SSA forecasts with observed truth; for example, we compare forecasts made in the 1980s, 1990s, and 2000s with outcomes that are now available. We find that Social Security Administration forecasting errors, as evaluated by how accurate the forecasts turned out to be, were approximately unbiased until 2000 and then became systematically biased afterward, and increasingly so over time. Also, most of the forecasting errors since 2000 are in the same direction, consistently misleading users of the forecasts to conclude that the Social Security Trust Funds are in better financial shape than turns out to be the case. Finally, the Social Security Administration's informal uncertainty intervals appear to have become increasingly inaccurate since 2000. At present, the Office of the Chief Actuary at the Social Security Administration does not reveal in full how its forecasts are made. Every future Trustees Report, without exception, should include a routine evaluation of all prior forecasts, along with a discussion of what forecasting mistakes were made, what was learned from them, and what actions might be taken to improve forecasts going forward. The Social Security Administration and its Office of the Chief Actuary should also follow best practices in academia and many other parts of government by making their forecasting procedures public and replicable, and by calculating and reporting calibrated uncertainty intervals for all forecasts.
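The evaluation described in the second abstract amounts to comparing archived forecasts against the outcomes eventually observed and checking whether the resulting errors stay centered on zero or drift in one direction after 2000. The Python sketch below illustrates that logic in a minimal way; the column names, era split, and numbers are hypothetical placeholders for illustration, not data or code from the paper.

# Minimal sketch (not the authors' code) of the forecast-error evaluation
# described in the abstract: compute signed errors for archived forecasts and
# compare their average before and after 2000. All values are hypothetical.

import pandas as pd

# Hypothetical records: each row is one archived forecast of a solvency-related
# quantity, the year the forecast was issued ("vintage"), and the value
# eventually observed for the target year.
records = pd.DataFrame({
    "vintage":  [1985, 1992, 1998, 2003, 2008, 2012],
    "target":   [1990, 1997, 2003, 2008, 2013, 2017],
    "forecast": [100.0, 104.0, 99.0, 112.0, 118.0, 125.0],
    "observed": [101.0, 103.0, 100.0, 105.0, 109.0, 114.0],
})

# Signed error: if the forecast quantity is, say, a trust fund balance, a
# positive error means the forecast made the funds look healthier than they
# turned out to be.
records["error"] = records["forecast"] - records["observed"]

# Split vintages into the two eras discussed in the paper and compare mean error.
records["era"] = records["vintage"].apply(lambda y: "pre-2000" if y < 2000 else "post-2000")
summary = records.groupby("era")["error"].agg(["mean", "std", "count"])

print(summary)
# An approximately unbiased era shows a mean error near zero; a systematically
# biased era shows a mean error well away from zero, with most individual
# errors sharing the same sign.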