Evaluating a Complex, Multi-Site, Community-Based Program to Improve Healthcare Quality: The Summative Research Design for the Aligning Forces for Quality Initiative

Dennis P. Scanlon, PhD; Laura J. Wolf, MSW; Jeffrey A. Alexander, PhD; Jon B. Christianson, PhD; Jessica Greene, PhD; Muriel Jean-Jacques, MD, MAPP; Megan McHugh, PhD; Yunfeng Shi, PhD; Brigitt Leitzell, MS
Objective: The Aligning Forces for Quality (AF4Q) initiative was the Robert Wood Johnson Foundation’s (RWJF’s) signature effort to increase the overall quality of healthcare in targeted communities throughout the country. In addition to sponsoring this 16-site complex program, RWJF funded an independent scientific evaluation to support objective research on the initiative’s effectiveness and contributions to basic knowledge in 5 core programmatic areas. The research design, data, and challenges faced during the summative evaluation phase of this near decade-long program are discussed.

Study Design: A descriptive overview of the summative research design and its development for a multi-site, community-based, healthcare quality improvement initiative is provided.

Methods: The summative research design employed by the evaluation team is discussed.

Results: The evaluation team’s summative research design involved a data-driven assessment of the effectiveness of the AF4Q program at large, assessments of the impact of AF4Q in the specific programmatic areas, and an assessment of how the AF4Q alliances were positioned for the future at the end of the program.

Conclusion: The AF4Q initiative was the largest privately funded community-based healthcare improvement initiative in the United States to date and was implemented at a time of rapid change in national healthcare policy. The implementation of large-scale, multi-site initiatives is becoming an increasingly common approach for addressing problems in healthcare. The summative evaluation research design for the AF4Q initiative, and the lessons learned from its approach, may be valuable to others tasked with evaluating similarly complex community-based initiatives.

Am J Manag Care. 2016;22:eS8-eS16
The Aligning Forces for Quality (AF4Q) initiative, funded by the Robert Wood Johnson Foundation (RWJF), was a multi-site, multifaceted program with the overarching goals of improving the quality of healthcare and reducing health disparities in its 16 participant communities and providing models for national healthcare reform.1 Launched in 2006 and concluded in 2015, the AF4Q initiative was built on a community-based multi-stakeholder approach and included multiple interventions and goals, which were developed and revised throughout the program’s near decade-long lifespan.

Besides being a complex and ambitious initiative in its own right, the AF4Q program was implemented at a time of rapid change in healthcare marked by a growing awareness of the multiple determinants of health and healthcare quality, and significant national change in healthcare policy, including the passage of the Affordable Care Act (ACA) in 2010.2,3 A comprehensive overview of the components and phases of the AF4Q program is available in the article by Scanlon et al in this supplement.4

In addition to sponsoring the initiative, RWJF dedicated funding to support an independent scientific evaluation of the AF4Q program. The evaluation design included both formative and summative components, which the evaluation team put in place at the beginning of the program and periodically updated in response to the evolution of the program. During the formative phase, the evaluation team focused on developing an ongoing understanding of how the overall program was unfolding by creating subteams to study, in depth, the AF4Q initiative’s 5 main programmatic areas (quality improvement, measurement and public reporting, consumer engagement, disparities reduction, and payment reform) and the approaches to governance and organization employed by each grantee community; developing interim findings at multiple points during the program years; and sharing lessons learned during implementation. These findings, along with alliance-specific reports from each of the evaluation team’s surveys, provided real-time feedback to internal audiences (ie, RWJF, the AF4Q National Program Office, technical assistance providers, and AF4Q alliances). In addition, formative observations were disseminated to external stakeholders through peer-reviewed publications, research summaries, and presentations. A description of the evaluation design and data sources for the formative phase is located in the article by Scanlon et al.5

Approximately 2 years before the AF4Q program ended, the evaluation team began to focus more intently on the summative component, revisiting its initial plan in light of the program’s evolution. This paper discusses the team’s approach to its summative data collection and analysis and key lessons learned throughout the final phase of this complex multi-site program.

Summative Evaluation of Complex Programs

The essential purpose of an evaluation is to document what happened in a program, whether changes occurred, and what links exist between a program and the observed impacts.6,7 As described above in relation to the AF4Q initiative, one way to view evaluations of long-term and complex programs is through 2 interrelated phases: a formative evaluation phase, which produces findings during program implementation, and a summative, or impact evaluation phase, which provides an empirically based appraisal of the final results of a program.8,9 There is no single format for conducting an evaluation of complex programs like the AF4Q initiative, and the specific approach taken in the design is based on factors such as the characteristics of the program, the requirements of the funding agency, the budget for the evaluation effort, the availability of relevant secondary data sources, and the evaluators’ training, research experience, and theoretical lens.10

There is, however, a shift occurring in the overall field of evaluation, from focusing on a pre-defined set of program effects for complex initiatives to an emergent approach that aligns more closely with the multifaceted ways in which social change typically occurs.11,12 Relatedly, a growing number of evaluations of large-scale, complex community-based programs designed to improve health or healthcare are using multimethod and adaptive approaches to study programs and their outcomes.13-15 Guidance available to researchers on how to approach the evaluation of complex programs in health and health services is also expanding, including a recent Journal of the American Medical Association article from CMS, which states that “CMS uses a mixed-methods approach that combines qualitative and quantitative analyses to provide insights into both what the outputs of models are and which contextual factors drive the observed results.”16

Another key factor to consider in summative evaluation is that funders’ interests often go beyond the impact of the particular interventions they sponsored in specific settings to a desire to identify generalizable lessons from the program.17 RWJF is no exception, describing itself as “passionate about the responsibility it has to share information and foster understanding of the impact of past grant-making—what works, what doesn’t, and why.”18 For the AF4Q initiative, RWJF summarized the program goals as, “an unprecedented effort to improve the quality of healthcare in targeted communities, reduce disparities, and provide models to propel reform.”19

Developmental Stages of the AF4Q Summative Design

The AF4Q evaluation team’s summative design entailed a 3-stage process: putting foundational elements of the summative design in place at the start of the program; closely following the development and changes in the program, and in the larger environment, to account for those factors in the final analysis; and establishing and implementing the final summative design. Each of these stages is described in detail below.

The Foundations of the Summative Design for the AF4Q Initiative

Overall, the AF4Q evaluation used a multiphase design, formally referred to as a “methodological triangulated design.”20 The design included subprojects with independent methodological integrity in each of the AF4Q initiative’s 5 main programmatic areas that were aggregated into an assessment of the whole program during the summative phase.

While the evaluation team agreed that many of the details of the summative work would require adjustments as the program progressed, foundational elements of the summative plan were in place at the beginning. One such element was the plan for a postprogram period to focus on final data collection, analysis, and reporting of the evaluation team’s summative findings. Another component involved the early articulation of the overall AF4Q logic model. As Pawson and Tilley wrote in their seminal work on realistic evaluation, “The goal [of evaluation research] has never been to construct theory per se; rather, it has been to develop the theories of practitioners, participants, and policy makers.”21 Accordingly, the AF4Q logic model, designed to capture the program’s inputs and expected outcomes, was based on RWJF’s plans for the program and the theory of change on which those plans were built. The evaluation team updated the logic model as the program evolved. (A detailed description of the logic model and its development is located in the article by Scanlon et al in this supplement.4)

Building from the overall logic model and the more detailed logic models developed for the AF4Q programmatic interventions, the evaluation team developed both formative and summative research questions that corresponded to each programmatic area, the alliance organizations, and the hypothesized intermediate and long-term outcomes of the program. Using those research questions as a guide, the evaluation team then laid out a plan for collecting the needed qualitative and quantitative data (see the online eAppendix for an overview of data). The qualitative data included key informant interviews; observations of principal meetings, discussions, and events; and the compilation of both projectwide and alliance-specific documentation. Quantitative data were obtained from 3 longitudinal surveys (using control groups where possible) and other secondary data sources, such as Medicare claims data from the Dartmouth Group, commercial claims data from MarketScan, and survey data from the Behavioral Risk Factor Surveillance System.
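The use of longitudinal surveys with control groups points toward a difference-in-differences logic for attributing outcome changes to the program: the change observed in AF4Q communities is compared against the change observed in comparison communities over the same period. The sketch below illustrates that core calculation with synthetic numbers; it is a minimal, hypothetical illustration, not the evaluation team's actual statistical model, and all community scores shown are invented for demonstration.

```python
def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: (change in treated group) minus
    (change in control group). Inputs are lists of community-level scores."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical mean quality scores for surveyed communities (synthetic data).
treat_pre  = [62.0, 58.5, 60.1]   # program communities, baseline survey wave
treat_post = [68.0, 64.0, 66.5]   # program communities, final survey wave
ctrl_pre   = [61.0, 59.0, 60.5]   # comparison communities, baseline wave
ctrl_post  = [63.5, 61.0, 63.0]   # comparison communities, final wave

effect = did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post)
print(round(effect, 2))  # estimated program effect net of the secular trend
```

In practice, an evaluation of this kind would use regression models with covariates and clustered standard errors rather than raw means, but the subtraction above is the identifying idea: secular trends affecting all communities (such as national policy change) are netted out by the control group's change.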

Systematic Monitoring of Program and Environmental Change

During the AF4Q initiative, RWJF and the National Program Office (NPO) enacted many substantial changes to the initiative, including modifications to the requirements in existing programmatic areas, the addition of new programmatic areas, and an expansion of the overall focus of the program, from ambulatory care for people with a specific set of chronic illnesses to all inpatient and outpatient care. (See the article by Scanlon et al in this supplement for a more detailed description of the phases of the AF4Q program.4) Monitoring these changes on a program level, and how the alliances adapted to them, became an important aspect of the formative evaluation effort; the team recognized that this work was essential to its ability to assess the program and its outcomes, including the level of alliance fidelity to the interventions and the degree to which observed changes in outcomes could be attributed to the program.

Additionally, although there was only a limited amount of AF4Q-like activity at the start of the program, soon after the AF4Q initiative launched, new programs that affected many AF4Q alliances were established (eg, the federal government’s Chartered Value Exchange program and the Office of the National Coordinator’s [ONC’s] health information technology projects). Once the ACA was passed in 2010, the pace of change escalated. Dozens of programs aimed at community (and often multi-stakeholder) approaches to improving healthcare quality and cost were launched during the second, third, and fourth funding cycles of the AF4Q initiative. These programs were funded by agencies such as the CMS’s Innovation Center, the Patient-Centered Outcomes Research Institute, the ONC, and others.

The tremendous amount of policy and contextual change meant that attributing observable change to the AF4Q initiative would be even more difficult than anticipated at the outset of the evaluation. To help inform its summative work, the evaluation team followed and documented policy changes at the national level, as well as at the state level for those states with an AF4Q presence. The team also tracked alliance involvement in the myriad health improvement programs that developed during the AF4Q program years. Because of the high volume of environmental change, the evaluation team additionally conducted “vantage” interviews with a sample of national leaders in health policy to hear perspectives about the AF4Q initiative and its role in the national conversation related to community-based healthcare improvement (see the online eAppendix for details about this effort).
