Publication:
Evaluating State-Driven Changes to the Medicaid Program: Unintended, Intended, and Methodological Implications

Date

2020-05-14

Published Version

Citation

Fry, Carrie E. 2020. Evaluating State-Driven Changes to the Medicaid Program: Unintended, Intended, and Methodological Implications. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.

Abstract

This dissertation consists of two empirical policy papers and one methods paper. All three papers examine the effects of changes to the Medicaid program. The first paper examines the impact of Medicaid expansion on jail-based recidivism. The second estimates the effect of retroactive eligibility waivers in Medicaid on enrollment. The third explores how the choice of study design affects the estimated effects of such changes by re-analyzing three published papers.

In Chapter 1, co-authors and I estimate the impact of Medicaid expansion on recidivism. Previous research on the relationship between financial access to care and re-offense is mixed, and much of the published work is subject to selection bias, lacks a comparison group, or lacks a defined intervention. We use the variation introduced by the 2012 Supreme Court ruling in NFIB v. Sebelius to derive causal estimates of this relationship using 48 months of booking and release data from six county jails. Three of the six counties are in Medicaid expansion states, and three are in non-expansion states. We conduct three case studies using a comparative interrupted time series (CITS) analysis to estimate the differential change in the probability of re-arrest and in the number of arrests between the expansion and non-expansion counties. We find mixed results across the three case studies: in two, we estimate declines in the probability of re-arrest of 5 and 13 percent; in the third, we estimate an increase of similar magnitude. We find a similar pattern of results for the number of arrests. To put these mixed results in context, we supplement our quantitative analysis with information from site visits and stakeholder interviews to identify mediators and moderators of the relationship between financial access to care and recidivism.

In Chapter 2, I estimate what happens to Medicaid enrollment after a state implements a retroactive eligibility waiver in its Medicaid program. Retroactive eligibility provides Medicaid coverage for the 90 days prior to a person's date of application, provided the beneficiary was eligible during that period. In the past five years, seven states have eliminated retroactive eligibility for some portion of the Medicaid population, yet we know of no study that examines what happens to enrollment, beneficiaries' financial status, or health outcomes after the removal of this provision. We use 24 months of Medicaid enrollment data in four of the seven retroactive eligibility waiver states to estimate the relationship between retroactive eligibility removal and changes in enrollment. Using a difference-in-differences (DID) design with geographically similar comparison states, we find no impact of retroactive eligibility removal on Medicaid enrollment in any of the four states, although the confidence intervals suggest that the state-level analyses may be underpowered. To address this concern, we combine the four retroactive eligibility states and their comparators in a pooled analysis. Here, we find a 10 percent decline in Medicaid enrollment at five and six months after waiver implementation, suggesting that removing retroactive eligibility may have a 'chilling' effect on Medicaid enrollment in the months after implementation.
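
As a concrete illustration of the pooled DID design described for Chapter 2, the following is a minimal sketch of one common specification; the notation is assumed for exposition and is not taken from the dissertation.

% Illustrative pooled DID specification (assumed notation, not from the dissertation)
% Y_{st}: Medicaid enrollment in state s in month t
% Treat_s = 1 if state s eliminated retroactive eligibility; Post_t = 1 after waiver implementation
\begin{equation}
  Y_{st} = \alpha_s + \lambda_t + \delta\,(\mathrm{Treat}_s \times \mathrm{Post}_t) + \varepsilon_{st}
\end{equation}
% \alpha_s: state fixed effects; \lambda_t: calendar-month fixed effects;
% \delta: the DID estimate of the enrollment change associated with the waiver.
% Month-specific effects (e.g., five and six months after implementation) can be
% obtained by interacting Treat_s with indicators for months since implementation.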
In Chapter 3, I explore the differences between two similar study designs: CITS and DID. Both designs use two time periods and a comparison group, and both use the change in the comparison group to estimate the counterfactual outcome for the treated group in the absence of treatment. However, the choice between these two designs largely follows disciplinary lines, with each discipline preferring one design over the other, due in part to the lack of a mathematical formalization of CITS. To understand the differences (if any), we first carefully write down the potential outcomes model for two versions of each design: a general version of CITS, a linear version of CITS, DID with time fixed effects, and DID with time fixed effects and group-specific trends. We then conduct a modeling exercise to estimate the counterfactuals for each and re-analyze three published studies to understand the situations in which one of these designs might be preferable to the others. We find that general CITS and DID with time fixed effects and group-specific trends produce the same counterfactual and estimate the same treatment effects; the only difference between these two designs is the language used to describe them. We also find that when researchers lean into each design's respective constraint (linearity for CITS and a zero change in the group difference for DID), counterfactual and treatment effect estimation differ. Empirical researchers should clearly state the counterfactual assumptions they are making and their model specification to allow a more transparent evaluation of the plausibility of those assumptions.
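
For readers comparing the designs analyzed in Chapter 3, the following is a minimal sketch of a linear CITS specification and a DID specification with time fixed effects and group-specific trends; the notation is illustrative and not drawn from the dissertation.

% Illustrative specifications (assumed notation, not from the dissertation)
% Y_{gt}: outcome for group g (treated or comparison) in period t; t_0: start of the post-period
\begin{align}
  % Linear CITS: each group has its own intercept and linear trend, plus a
  % post-period change in level and slope; the treatment effect is the
  % differential post-period change between groups.
  Y_{gt} &= \beta_{0g} + \beta_{1g}\, t
          + \mathrm{Post}_t \left[ \pi_{0g} + \pi_{1g}\,(t - t_0) \right] + \varepsilon_{gt} \\
  % DID with time fixed effects and a group-specific linear trend
  Y_{gt} &= \alpha_g + \lambda_t + \gamma_g\, t
          + \delta\,(\mathrm{Treat}_g \times \mathrm{Post}_t) + \varepsilon_{gt}
\end{align}
% The first model extrapolates each group's pre-period linear trend to form the
% counterfactual; the second assumes the group difference, net of the
% group-specific trends, would have remained constant after t_0.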

Keywords

Medicaid, health services research, health reform, health policy, behavioral health, mental health, substance use disorder

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
