Implementation Variation in Natural Experiments of State Health Policy Initiatives

Article
The American Journal of Accountable Care®, September 2019, Volume 7, Issue 3

This paper presents a method to characterize policy implementation across states to enable more nuanced impact assessments of federal healthcare delivery system and payment reforms.

ABSTRACT

Objectives: An increasing number of federal initiatives allow states flexibility in selecting the strategies used to achieve initiative-specific goals. Variation in the foci and intensity of implementation may explain why federal policy initiatives succeed in some states and fail in others. The CMS State Innovation Models (SIM) initiative is a complex policy intervention implemented with substantial variation across states and may have variable impacts. This paper presents a method to characterize and account for that variation in states’ implementation foci and intensity in natural policy experiments.

Study Design: A combination of quantitative and qualitative measures of SIM implementation was used to characterize the foci of payment and delivery system reforms across states.

Methods: A modified Delphi expert panel process was used to prioritize the features of SIM implementation that would differentiate grantee states with respect to improved health outcomes. Three researchers then reviewed summaries of published evaluations and reports to characterize and score states on each implementation feature. Expert panelists guided the researchers on developing the criteria and weights applied to the focus areas when calculating SIM implementation intensity scores for states.

Results: Over 3 years of an expert panel process, 4 dimensions of SIM implementation that would most affect health outcomes were prioritized: (1) extent and breadth of stakeholder engagement, (2) extent that SIM implementation was focused on improving behavioral health, (3) amount of SIM funding per capita, and (4) breadth and depth of value-based payment reforms. Scoring states based on the prioritized factors resulted in composite scores that differentiated states into 3 categories: high, moderate, and low implementation intensity.

Conclusions: We developed a stakeholder-driven method to measure and account for variation in implementation foci and intensity in a federal policy initiative that was implemented heterogeneously across grantee states. Our method for characterizing state implementation variation may be useful for natural policy experiments examining the variable impact of policy initiatives.

The American Journal of Accountable Care. 2019;7(3):12-17

Over the past decade, health policies and programs intended to spur innovation in delivery system design and payment reform have become commonplace across the United States.1 Studies examining the effect of state health policies rely on natural experiment study designs, but they do not account for differences in states’ foci and experiences of policy implementation. Characterizing states as exposed or not exposed (1 or 0), as is traditionally done in natural experiments of state health policy initiatives, is overly simplistic and does not consider the specific strategies used by states. Ideally, features of each state’s rollout, including reform foci and intensity of activities, could be modeled quantitatively. The small number of states involved in any given reform, however, precludes the use of quantitative methods to produce a taxonomy to characterize “types” of policy implementation using k-means cluster analysis or another data reduction method.2 As part of a natural experiment of a federal-state program, the CMS State Innovation Models (SIM) initiative, we describe a stakeholder-driven method to prioritize, assess, and account for state-level variation in natural policy experiments.

The SIM initiative awarded funding and technical assistance to states through a competitive process. State health departments proposed plans to implement innovative delivery and payment models to improve health system performance, improve the quality of patient care, and decrease healthcare costs for all residents of the state. Through SIM, the federal government provided states with more than $1 billion in funding and substantial technical assistance to plan, pilot test, and implement payment and delivery system reforms.3 Round 1 of SIM funding was awarded in April 2013 to 6 states (Arkansas, Maine, Massachusetts, Minnesota, Oregon, and Vermont). Round 2 was awarded in December 2014 to 11 additional states (Colorado, Connecticut, Delaware, Idaho, Iowa, Michigan, New York, Ohio, Rhode Island, Tennessee, and Washington). Some states that applied for, but did not receive, SIM funding were awarded modest planning grants ($3 million or less) to aid in advancing their innovations to the potential testing phase in the future. This staged roll-out of SIM allows for a natural experiment study design to evaluate the impact of this policy on population health outcomes.

Previous reviews have conceptualized the critical role of variation in implementation processes to understand differential impacts of policy change.4,5 Implementation science considers intensity and other aspects of the implementation process, including the adoption, reach, and fidelity of implementation to intended policy features.6 The application of implementation science in health services and policy research is growing, but it primarily focuses on the ways in which practitioners successfully incorporate new policies into routine practice as study outcomes.7 Studies have rarely examined how federal policies are differentially implemented at the state level and how these variations affect healthcare utilization and health outcomes.8

The political science subfield of policy implementation research analyzes sources of variation in the implementation of large-scale policies (ie, laws and regulations) and does consider policy goals such as health outcomes as dependent variables, but as with the other perspectives, it does not study how the variation itself influences these outcomes. A handful of policy implementation research studies have described variation in the focus of state-level policy implementation, including applications to welfare policies, medical marijuana policies, and youth sports traumatic brain injury policies.9-12 However, we could find no empirical studies that simultaneously characterized the foci and intensity of state-level policy implementation—considerations that are critically important for understanding the impacts of a complex, multifaceted policy intervention like SIM.

Our conceptualization of the connection between policy implementation and outcomes is most similar to that of Strehlenert and colleagues’ Conceptual Model for Evidence-Informed Policy Formulation and Implementation,5 which covers the entire policy process from agenda setting and policy formulation to implementation and outcomes evaluation; however, this framework was used only descriptively with case studies and not to make comparisons across multiple implementers. CMS allowed states considerable latitude in SIM plan foci and implementation strategies,13 and this variation in policy implementation could result in differential impacts of SIM on utilization and health outcomes across the grantee states. To advance the examination of heterogeneous effects in natural policy experiments, we developed a stakeholder-driven method to measure and account for variation in implementation foci and intensity in a federal policy initiative that was implemented heterogeneously across states.

METHODS

We used a combination of quantitative and qualitative measures to prioritize, classify, and analyze SIM implementation variation for each of the 17 grantee states. To do this, we convened an expert advisory panel composed of 8 SIM leaders from different states to provide us with qualitative and quantitative input about core SIM activities and, ultimately, to participate in a modified Delphi expert panel process to prioritize key differences in implementation foci and strategies across the SIM states. The panel members were recruited from 8 SIM grantee states: Arkansas, Colorado, Iowa, Oregon, Maine, Michigan, Minnesota, and Washington.

The research team facilitated web-based quarterly meetings from October 2015 to January 2019. Webinars were recorded and transcribed. Several important policy implementation differences were identified and discussed during the first 3 meetings. First, states varied in the delivery system and payment reforms that were tested (Table 1 summarizes examples). Some states, such as Minnesota and Colorado, emphasized delivery system reform, including using SIM initiative funding to implement patient-centered medical homes, integration of physical and behavioral healthcare, use of health information technology, and/or health information exchange; Washington, meanwhile, emphasized implementing value-based payment reforms, such as shared savings and total cost of care models.13,14 Another important factor was that states were allowed substantial latitude in distributing SIM funds within the state. For example, some states, such as Arkansas, retained all of the funds at the state level to support and augment pre-existing programs in physician practice transformation and Medicaid innovation. Other states, such as Minnesota, distributed most of the funding down to the local and regional levels through competitive grants. Maine used a competitive process to contract with several statewide organizations to pursue statewide health system transformation efforts.

Panelists emphasized that the role of SIM in each state additionally differed based on states’ prior investments in healthcare delivery and payment reform. In some states, the resources were used to establish new health system infrastructure. For example, Washington implemented regionally organized public/private Accountable Communities for Health and created a new “support hub” for practice transformation. In other states, SIM resources were used primarily to accelerate changes that were already under way and to improve interagency alignment and coordination. For example, Maine created a governance structure for the 6 strategic pillars that it selected, convened decision makers from across the state to take action on the proposed innovations, and used this governance structure to ensure that implementing the SIM initiative was a priority of Maine’s Department of Health and Human Services.15

Panelists also discussed an important difference between funding rounds: CMS changed its requirements for round 2 grantees. In round 1, states had considerable latitude in selecting their performance indicators and targets, as long as they made a strong case for why their foci of activities would improve these indicators. By round 2, however, CMS was more prescriptive about performance indicators, making tobacco use, obesity, and diabetes required indicators. In addition, to ensure coordination and linkages with overall state policies, round 2 applications were required to be routed through the state’s governor’s office for approval prior to submission. The absolute amount awarded to each state was greater, on average, in round 2 ($56.6 million) than in round 1 ($42.4 million).

Based on observations and data from the first 3 meetings, panelists finalized a list of the 10 factors that they agreed most differentiated states with respect to SIM implementation: (1) amount of SIM funding received by state, per capita; (2) whether the state was funded in SIM round 1 or round 2; (3) extent to which SIM implementation was focused on improving behavioral health; (4) extent to which SIM implementation was focused on diabetes; (5) breadth and depth of value-based payment reforms; (6) extent to which SIM funds were centralized versus distributed to local/regional entities; (7) co-occurring delivery system interventions, such as the Section 1115 Medicaid demonstration programs; (8) co-occurring Medicaid expansion; (9) co-occurring philanthropic contributions; and (10) state agency funding reallocation.

Then, 7 of the panel members engaged in a 3-round modified Delphi expert panel prioritization process,16 which involved ranking the 10 factors using Qualtrics survey software (Qualtrics; Provo, Utah) based on their relevance for differentiating impactful versus unsuccessful implementation of the SIM initiative among grantee states. The first round of survey results was discussed during a subsequent meeting and used as a basis for modifying, adding, or dropping factors for the second round of the ranking process, with the aim of achieving convergence. Panelists were allowed to propose additional criteria in the first and second rounds, resulting in the inclusion of stakeholder engagement as an important dimension of implementation after round 1 of the expert panel process. After 3 rounds of ranking, criteria were developed in consultation with panelists to characterize the implementation intensity and resources for the top 4 prioritized factors for each of the 17 states. Finally, we assessed each state’s efforts with regard to these factors using information from state agency reports of SIM implementation, RTI International national evaluation reports,17,18 CMS, and the US Census Bureau. This study was approved by the University of California, Berkeley, Committee for the Protection of Human Subjects.
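
The published description does not specify how panelists’ rankings were combined across rounds; as a purely hypothetical illustration of one common approach, the sketch below averages each factor’s rank across panelists and orders the factors by mean rank. The factor names and rankings are invented for the example.

```python
# Hypothetical sketch: aggregating one round of Delphi rankings by mean rank.
# A lower rank means a panelist judged the factor more important (1 = top).
from statistics import mean

# Invented rankings from 7 panelists for 4 of the candidate factors.
rankings = {
    "stakeholder_engagement": [1, 2, 1, 1, 3, 2, 1],
    "behavioral_health_focus": [2, 1, 3, 2, 1, 1, 2],
    "per_capita_funding": [3, 4, 2, 3, 2, 4, 3],
    "value_based_payment_reform": [4, 3, 4, 4, 4, 3, 4],
}

mean_ranks = {factor: mean(ranks) for factor, ranks in rankings.items()}
for factor, avg_rank in sorted(mean_ranks.items(), key=lambda item: item[1]):
    print(f"{factor}: mean rank {avg_rank:.2f}")
```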

RESULTS

The modified Delphi process was completed in 3 rounds (Table 2) and resulted in the prioritization of 4 implementation factors, in order of importance: (1) extent and breadth of stakeholder engagement; (2) extent to which SIM implementation was focused on improving behavioral health; (3) amount of SIM funding received by state, per capita; and (4) breadth and depth of value-based payment reforms.

Once the factors were prioritized, intensity levels for each implementation factor were determined in consultation with the panelists. In terms of stakeholder engagement, panelists indicated that interagency coordination and working well with community-based organizations were central to getting broad-based delivery system and payment reforms launched and implemented. After extensive discussion, panelists concluded that the document review and interview methods would be inadequate for assessing stakeholder engagement given the complex web of organizations and agencies involved in implementing SIM. Because stakeholder engagement is a contextual influence on policy implementation rather than a measure of implementation foci or resources, panelists recommended that it not be factored into the calculation of the SIM implementation intensity index.

Data on SIM funding per capita were obtained from CMS and the US Census Bureau, and each state’s level of per capita funding was assigned a numeric value according to whether it fell within the lowest (1), middle (2), or highest (3) third of the distribution.
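
As an illustration of that tercile scoring, the sketch below assigns each state a score of 1, 2, or 3 according to the third of the per capita funding distribution in which it falls. The state names and dollar amounts are hypothetical; only the scoring rule comes from the text above.

```python
# Hypothetical sketch of tercile scoring for SIM funding per capita:
# lowest third of the distribution scores 1, middle third 2, highest third 3.
funding_per_capita = {  # dollars per resident (invented values)
    "State A": 8.50, "State B": 12.10, "State C": 15.75,
    "State D": 6.20, "State E": 20.40, "State F": 10.90,
}

states_by_funding = sorted(funding_per_capita, key=funding_per_capita.get)
n_states = len(states_by_funding)
tercile_score = {}
for position, state in enumerate(states_by_funding):
    # position within the sorted order determines the tercile (1, 2, or 3)
    tercile_score[state] = 1 + (3 * position) // n_states

print(tercile_score)
```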

To measure the extent of focus on behavioral health and the breadth and depth of payment reforms, panelists confirmed the use of published evaluations as the best sources for characterizing these activities. Accordingly, we created summaries of each state’s efforts based on review of the comprehensive evaluation reports by RTI International,17,18 as well as states’ publications on their plans and progress. Using these summaries, 3 evaluators on the research team independently rated each state’s efforts in these domains according to predefined rubrics. The extent of behavioral health focus was rated 1 for little to no focus, 2 for some focus, or 3 for strong focus. The breadth and depth of payment reform was rated 1 if payment reform was not part of the state’s SIM plan, 2 if 1 payer participated, or 3 if more than 1 payer participated (including at least Medicare or Medicaid).

Panelists provided feedback about how much weight each factor should contribute to an overall index of SIM implementation intensity. Behavioral health focus and depth of payment reform were deemed by panelists to be more important for outcomes than per capita funding because the funding level per capita is quite low; the grant simply provided foundational resources, and states had to have the wherewithal to leverage those resources. As a result, behavioral health focus and payment reform were each assigned a weight of 40%, and per capita funding was assigned a lower weight of 20%. The final weighted summary scores exhibited low variation; the vast majority of states had scores between 1.8 and 2.2 (Table 3). However, negative outliers were identified as states scoring below 1.8 (Connecticut, Michigan, and Iowa) and positive outliers as those scoring above 2.2 (Delaware, Maine, and Colorado). Given the concentrated distribution, SIM states were grouped into 3 categories based on their behavioral health focus, breadth and depth of payment reform efforts, and per capita funding. The resulting scoring and categorization were shared with panelists for their review and feedback, resulting in requested changes to reclassify one state’s payment reform activities and to modify the weighting criteria. When applied, the 2 changes did not affect the categorization of SIM states (Table 3).
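
Putting these pieces together, the intensity index can be read as a weighted average of the three component scores (behavioral health focus and payment reform at 40% each, per capita funding at 20%), with composite scores below 1.8 treated as low intensity and above 2.2 as high intensity. The sketch below illustrates that calculation with invented component scores; only the weights and cut points are taken from the description above.

```python
# Hypothetical sketch of the weighted SIM implementation intensity index.
# Component scores range from 1 to 3; weights and cut points follow the text.
WEIGHTS = {
    "behavioral_health_focus": 0.4,
    "payment_reform": 0.4,
    "funding_per_capita": 0.2,
}

def intensity_category(component_scores):
    """Return the weighted composite score and its intensity category."""
    composite = sum(WEIGHTS[name] * component_scores[name] for name in WEIGHTS)
    if composite < 1.8:
        category = "low"
    elif composite > 2.2:
        category = "high"
    else:
        category = "moderate"
    return round(composite, 2), category

# Example with invented component scores for a single state.
print(intensity_category(
    {"behavioral_health_focus": 3, "payment_reform": 3, "funding_per_capita": 2}
))  # -> (2.8, 'high')
```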

DISCUSSION

There is a trend toward more state-level health policy development and implementation in the United States, with less emphasis on homogeneous federal reforms. Given the inherent challenges posed by the naturally occurring variation in such natural policy experiments, new methods are needed to take into account differences in policy implementation. We describe a method based on Delphi assessments and analysis of source documents relevant to policy implementation that may be useful for assessing differential impacts of SIM across states. Importantly, the SIM implementation factors prioritized by the expert panel process included the extent of behavioral health integration and the breadth and depth of value-based payments, which were recently found to be important differentiators of SIM implementation in national evaluations of round 1 SIM states.13,14 As our natural experiment research moves forward, we will directly examine the impact of implementation variation on healthcare utilization and outcomes. We hypothesize that SIM states with high implementation intensity for the prioritized areas and greater resources will achieve relatively greater reductions in preventable utilization and improved patient outcomes.

CONCLUSIONS

Federal policy initiatives that allow states flexibility in their foci and implementation strategies can have heterogeneous impacts on health system performance and patient outcomes. We developed a stakeholder-driven method to measure and account for variation in implementation foci and intensity in a federal policy initiative that was implemented heterogeneously across grantee states. Our method for characterizing state implementation variation may be useful for natural policy experiments examining the variable impact of policy initiatives across states and can be used alongside other important state-level factors, such as socioeconomic profiles and political contexts. We encourage a dialogue among policy makers, implementers, and evaluators of state health policy reforms to unpack the role of implementation variation in explaining outcomes of broad-based policy changes.

Acknowledgments

The authors are grateful for critical feedback from their project’s advisory panel of SIM directors and designees. They also thank Stephen M. Shortell, PhD, MPH, MBA, for critical input on the modified Delphi expert panel process and Amalya Penso for assistance with fielding and analyzing the survey.

Author Affiliations: Department of Family and Community Medicine, Philip R. Lee Institute for Health Policy Studies, University of California, San Francisco (DRR), San Francisco, CA; Center for Healthcare Organizational and Innovation Research, School of Public Health, University of California, Berkeley (AZP, SB, HPR), Berkeley, CA.

Source of Funding: This publication was made possible by the NEXT-D2 (Natural Experiments for Translation in Diabetes 2.0) Study, a cooperative agreement (5U18DP006123) jointly funded by the CDC and the National Institute of Diabetes and Digestive and Kidney Diseases.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (HPR); acquisition of data (DRR, HPR); analysis and interpretation of data (DRR, AZP, SB, HPR); drafting of the manuscript (DRR, AZP, HPR); critical revision of the manuscript for important intellectual content (DRR, AZP, SB, HPR); obtaining funding (HPR); administrative, technical, or logistic support (SB); and supervision (SB, HPR).

Send Correspondence to: Hector P. Rodriguez, PhD, MPH, Center for Healthcare Organizational and Innovation Research, School of Public Health, University of California, Berkeley, 2121 Berkeley Way, Room 5302, Berkeley, CA 94720-7360. Email: hrod@berkeley.edu.

REFERENCES

1. Shortell SM, Rittenhouse D. The most critical health care issues for the next president to address. Ann Intern Med. 2016;165(11):816-817. doi: 10.7326/M16-2471.

2. Wu FM, Shortell SM, Lewis VA, Colla CH, Fisher ES. Assessing differences between early and later adopters of accountable care organizations using taxonomic analysis. Health Serv Res. 2016;51(6):2318-2329. doi: 10.1111/1475-6773.12473.

3. Hughes LS, Peltz A, Conway PH. State Innovation Model initiative: a state-led approach to accelerating health care system transformation. JAMA. 2015;313(13):1317-1318. doi: 10.1001/jama.2015.2017.

4. Oliver K, Lorenc T, Innvær S. New directions in evidence-based policy research: a critical analysis of the literature. Health Res Policy Syst. 2014;12:34. doi: 10.1186/1478-4505-12-34.

5. Strehlenert H, Richter-Sundberg L, Nyström ME, Hasson H. Evidence-informed policy formulation and implementation: a comparative case study of two national policies for improving health and social care in Sweden. Implement Sci. 2015;10:169. doi: 10.1186/s13012-015-0359-1.

6. Slaughter SE, Hill JN, Snelgrove-Clarke E. What is the extent and quality of documentation and reporting of fidelity to implementation strategies: a scoping review. Implement Sci. 2015;10:129. doi: 10.1186/s13012-015-0320-3.

7. Fisher ES, Shortell SM, Savitz LA. Implementation science: a potential catalyst for delivery system reform. JAMA. 2016;315(4):339-340. doi: 10.1001/jama.2015.17949.

8. Nilsen P, Ståhl C, Roback K, Cairney P. Never the twain shall meet?—a comparison of implementation science and policy implementation research. Implement Sci. 2013;8:63. doi: 10.1186/1748-5908-8-63.

9. Chapman SA, Spetz J, Lin J, Chan K, Schmidt LA. Capturing heterogeneity in medical marijuana policies: a taxonomy of regulatory regimes across the United States. Subst Use Misuse. 2016;51(9):1174-1184. doi: 10.3109/10826084.2016.1160932.

10. De Jong GF, Graefe DR, Irving SK, St Pierre T. Measuring state TANF policy variations and change after reform. Soc Sci Q. 2006;87(4):755-781. doi: 10.1111/j.1540-6237.2006.00432.x.

11. Coxe K, Hamilton K, Harvey HH, Xiang J, Ramirez MR, Yang J. Consistency and variation in school-level youth sports traumatic brain injury policy content. J Adolesc Health. 2018;62(3):255-264. doi: 10.1016/j.jadohealth.2017.07.003.

12. McKernan SM, Bernstein J, Fender L. Taming the beast: categorizing state welfare policies: a typology of welfare policies affecting recipient job entry. J Policy Anal Manage. 2005;24(2):443-460. doi: 10.1002/pam.20102.

13. Kissam SM, Beil H, Cousart C, Greenwald LM, Lloyd JT. States encouraging value-based payment: lessons from CMS’s State Innovation Models initiative. Milbank Q. 2019;97(2):506-542. doi: 10.1111/1468-0009.12380.

14. Beil H, Feinberg RK, Patel SV, Romaire MA. Behavioral health integration with primary care: implementation experience and impacts from the State Innovation Model round 1 states. Milbank Q. 2019;97(2):543-582. doi: 10.1111/1468-0009.12379.

15. The Lewin Group, Inc. Maine State Innovation Model self evaluation: year three final report. Maine website. maine.gov/dhhs/sim/evaluation/documents/ME%20SIM%20Self%20Evaluation%202016%20Final%20Report%2012.21.16.pdf. Published December 21, 2016. Accessed August 2, 2019.

16. Kennedy HP. Enhancing Delphi research: methods and results. J Adv Nurs. 2004;45(5):504-511. doi: 10.1046/j.1365-2648.2003.02933.x.

17. RTI International. State Innovation Models (SIM) initiative evaluation: model test year three annual report. CMS website. downloads.cms.gov/files/cmmi/sim-rd1mt-thirdannrpt.pdf. Published September 2017. Accessed August 2, 2019.

18. RTI International. State Innovation Models (SIM) initiative evaluation: model test year five annual report. CMS website. downloads.cms.gov/files/cmmi/sim-rd1-mt-fifthannrpt.pdf. Published December 2018. Accessed August 2, 2019.
