Publication: When is hindsight 20/20? The politics of acknowledging and revising failed policies
Date
2021-07-12
Authors
Published Version
Journal Title
Journal ISSN
Volume Title
Publisher
The Harvard community has made this article openly available.
Citation
James, Sarah Elizabeth. 2021. When is hindsight 20/20? The politics of acknowledging and revising failed policies. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Research Data
Abstract
The policy analysis movement and its subsequent iterations have touted the promise of advances in data collection, storage, and analysis for facilitating effective and responsive policymaking. Existing literature on state politics offers several explanations for when and how state public officials might learn about and implement effective policies. Specifically, public officials are more likely to adopt successful policies they observe in states that are ideologically similar or geographically proximate. There is far less evidence for when and how state-level public officials might learn about policy failures. There is some evidence that policy abandonment diffuses among ideologically similar states, once again suggesting that ideology filters the interpretation of policy effects. Perhaps even more importantly, political science and public administration lack coherent theories for when and how state-level public officials learn from their own constituents’ experiences with a policy’s actual outcomes. This dissertation therefore asks: under what conditions are state-level public officials willing to acknowledge and address state-level policy failure? I define a policy as having failed if it meets two criteria: first, the original legislation must contain an explicitly advertised intent; and second, there must be consistent, reliable scholarly research demonstrating that the intent of the law is not being met or is being undermined by the policy’s consequences.
I use process tracing in a comparative case study design of two failed state-level policies—truancy consequences and business location tax incentives—across three states for each policy. A comparison of these six policy trajectories shows that traditional explanations for policy learning and change fail to account for the patterns of acknowledgement and policy responsiveness. The two anchor states in my study—traditionally conservative Texas and progressive Washington state—have similar failed juvenile justice and tax policies, but acknowledgement and revision of the policies varies both within and across state lines. I exploit this variation to move beyond traditional ideological explanations for the acknowledgement and revision of failed policies. The project merges a diverse set of data, including legislative and bureaucratic archives, media accounts, and over a dozen interviews with public officials, policy researchers, and advocates.
Part I defines and operationalizes policy failure, collection capacity, and analytical capacity more specifically than existing scholarship. I build on existing public administration scholarship on research capacity by showing that collection capacity and analytical capacity are two distinct features of a state’s institutional landscape that can vary independently of one another. I define collection capacity as the available resources and motivation to gather relevant and usable data on policy outcomes, and analytical capacity as the ability to draw scientifically valid and reliable inferences and conclusions from data. State-orchestrated, centralized collection plans with clear definitions and over-time data collection characterize high collection capacity, while the presence of state-sponsored research organizations, professionalized researchers, and established policy evaluation and reporting schedules characterizes high analytical capacity. I show that both collection and analytical capacity are necessary for the acknowledgement of policy failure; without robust collection capacity, analytical capacity is ineffective at drawing attention to policy failure. I also find that state-managed data collection is necessary for failure recognition and policy revision, while analytical capacity can, in some circumstances, be successfully supplemented by non-state actors.
In Part II, I compare the policy trajectories of each case to examine the politics of revising failed policies. I show that cases in which public officials widely acknowledge failure all have comprehensive, state-sponsored data collection plans. I also analyze the efforts to revise each of the policies and identify the ways in which irrefutable data analysis shifted the burden of proof from a failed policy’s supporters to its opponents. My central conclusion is that patterns of acknowledgement and revision vary as a function of data collection and analytical capacity, especially the state’s capacity to collect high-quality data early in the policy’s implementation. Strong data collection and analysis jointly (but not separately) have the potential to disrupt stakeholder dynamics in ways that empower a policy’s opponents over its beneficiaries. Revision, in particular, will also be responsive to public officials’ perceptions of the policy as a proactive or reactive measure. Because public officials may be more hesitant to revise reactive policies, incorporating effective and comprehensive data collection into such policies is essential for encouraging policy learning.
Over the last three decades, states have gained increasing control over the design and distribution of the American welfare state, giving state-level policymakers enormous power to shape racial, economic, health, and educational disparities. Supporters of this devolution strategy have argued that it allows states to act as policy laboratories, experimenting, innovating, and passing along effective strategies. This justification for devolution ignores an equally likely (or perhaps even more likely) outcome of experimentation: failure. Recognizing and learning from mistakes is a critical part of responsible policy experimentation, especially for policies that disproportionately impact low-income communities and communities of color. My findings illuminate policy and institutional design features that can mitigate the risk of new policies permanently exacerbating racial and economic inequality. Identifying the institutional designs that encourage policy learning and responsiveness to real policy outcomes offers an opportunity to proactively incorporate these features into future policies and institutions. As a whole, my dissertation speaks to literatures on federalism, state-level institutional change, bureaucratic capacity, policy feedback, the politicization of knowledge, and the role of data and research in policymaking.
Description
Other Available Sources
Keywords
bureaucracy, inequality, policy failure, policy learning, state politics, Political science, Public policy
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service