Evaluating a Community-Based Program to Improve Healthcare Quality: Research Design for the Aligning Forces for Quality Initiative

Dennis P. Scanlon, PhD; Jeffrey A. Alexander, PhD; Jeff Beich, PhD; Jon B. Christianson, PhD; Romana Hasnain-Wynia, PhD; Megan C. McHugh, PhD; Jessica N. Mittler, PhD; Yunfeng Shi, PhD; and Laura J. B
As with the consumer survey, the evaluation team will produce alliance-specific reports of physician survey results for each of 3 planned rounds.

Target population: The NSSMPP collects information about physician practices with 1 to 19 physicians. Because the focus of the NSSMPP is on 4 major chronic diseases, practices were selected only if they were primary care practices; single-specialty cardiology, endocrinology, or pulmonology practices; or multispecialty practices with a significant number of physicians across these specialties. The NSSMPP oversampled the AF4Q communities and, insofar as possible, sampled reasonable numbers of practices of each of the above specialty types and of practices in 4 size categories: 1 to 2, 3 to 8, 9 to 12, and 13 to 19 physicians.

Additional details: The physician survey was developed by the NSSMPP research team in collaboration with the AF4Q evaluation team, and it parallels the National Study of Physician Organizations (NSPO), a longitudinal study of practices with 20 or more physicians that began in 2001.1 The survey was conducted via telephone by a contracted survey firm that interviewed the lead physician or lead administrator of each practice; when this was not possible, the firm interviewed another knowledgeable physician in the practice. Interviews lasted 30 to 45 minutes, and respondents were compensated for their time. The NSSMPP survey was completed in all of the AF4Q communities, and a second round, renamed NSPO3, is currently under way; it combines the previously separate NSSMPP and NSPO surveys into a single data collection effort covering practices of all sizes. A third and final round of the physician survey is planned for 2014 to 2015.

The longitudinal nature of the survey allows for estimates of change over time, making it possible to identify practice characteristics and market factors that could explain baseline levels of, and longitudinal changes in, practice adoption of quality improvement processes; it also tracks awareness of, and reaction to, public reports of provider quality. A difference-in-differences approach is used to examine effects in AF4Q communities relative to non-AF4Q communities.
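As a simplified illustration (not necessarily the evaluation team's exact specification, and using hypothetical variable names), a difference-in-differences estimate of this kind can be written as a regression in which the coefficient on the interaction of an AF4Q-community indicator and a post-intervention indicator captures the program effect:

Y_{it} = \beta_0 + \beta_1\,\mathrm{AF4Q}_i + \beta_2\,\mathrm{Post}_t + \beta_3\,(\mathrm{AF4Q}_i \times \mathrm{Post}_t) + X_{it}'\gamma + \varepsilon_{it}

where Y_{it} is an outcome (eg, adoption of a quality improvement process) for practice i at time t, AF4Q_i indicates location in an AF4Q community, Post_t indicates the post-intervention period, X_{it} collects practice and market covariates, and \beta_3 is the difference-in-differences estimate of the AF4Q effect.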

Alliance Survey

Purpose and uses: The alliance survey covers the left-hand portion of the AF4Q logic model, relating alliance governance and management to implementation of the programmatic areas and to program outcomes. It is designed to provide information on the degree to which alliance stakeholders are coalescing around a common vision. The survey also allows assessment of the elements of alliance management, leadership, governance, and organizational structure thought to provide the foundation for successful, sustainable collaboration, and it documents how these elements change over time.

As with the consumer and physician surveys, customized reports are prepared for each AF4Q alliance, providing communities with specific feedback they can use to target areas for improvement or attention and to identify successes. These reports present baseline and longitudinal results, as well as comparisons with other AF4Q communities.

Target population: The alliance survey targets individuals associated with the alliance, defined by membership on alliance boards, leadership groups, work groups, and staff. Respondents who continue to participate in the alliance are surveyed multiple times, allowing comparisons of individual responses over time.

Additional details: The alliance survey is administered online at multiple points throughout the life of the AF4Q initiative; the administrations, each approximately 18 months apart, facilitate longitudinal comparisons. Three rounds of the alliance survey have been completed in the original AF4Q communities (2006-2012), and at least 1 round has been completed in the newer AF4Q communities. By the initiative's end, a total of 5 rounds will have been completed for the original alliances and, depending on their program entry dates, 3 or 4 rounds will have been completed in the newer AF4Q communities.

Qualitative Data

The evaluation team periodically conducts 3 types of semi-structured interviews with key stakeholders in the AF4Q alliances: in-person site visit interviews, follow-up telephone interviews, and targeted telephone interviews. Rather than focusing on any individual area of the logic model, the qualitative data play a role throughout the research design. A high-level description of the 3 types of interviews and the processes used to prepare for them appears below, followed by a description of the evaluation team's collection and synthesis of program-related documents.

In-Person Site Visit Interviews

To gain the perspective of a variety of stakeholders within each AF4Q community and develop a deep understanding of the alliances’ structure and work, the evaluation team periodically conducts site visits. During the 2-day site visits, evaluation team researchers have in-depth, 1-on-1 conversations with a mix of participants in the community. In addition to interviewing alliance staff and volunteer leaders, AF4Q leadership team members, key committee and work group leaders, and other participants in the local AF4Q effort, the evaluation team works to ensure that interviews are conducted with representatives from each of the initiative’s targeted community stakeholder groups (eg, consumers, physicians, hospital leaders, healthcare plans, employers, and nurse leaders). The team also identifies 1 or 2 leaders in each community who are not directly involved in the AF4Q initiative to gain an outsider’s perspective on the alliance’s work. Site visit interview questions are tailored to each interviewee, and a typical interview lasts approximately 1 hour. Collectively, the interviews cover a wide range of topics, including participants’ views of the alliance’s organizational structure and governance, vision, strategy, collaboration among members, and progress and barriers in each of the AF4Q programmatic areas. The first evaluation team site visit was held approximately 6 months after each community entered the AF4Q initiative, and the second site visit occurred approximately 36 months later in each of the original AF4Q communities. The first site visit was also completed in the newer communities. To date, the evaluation team has conducted a total of 635 in-person site visit interviews resulting in approximately 10,700 pages of double-spaced, typed site visit interview transcripts. Two additional rounds of site visits are planned for each of the AF4Q communities.

Biannual Phone Interviews With AF4Q Staff Leaders

The evaluation team also conducts telephone interviews every 6 months with staff leaders in each AF4Q community (eg, AF4Q project directors and/or alliance directors). These 90-minute interviews cover topics such as progress and barriers in each of the AF4Q programmatic areas; changes in alliance governance structure, leadership, and stakeholder participation; the effect of external factors on the alliance’s AF4Q efforts; and alliance strategies for alignment of AF4Q programmatic areas. To date, the evaluation team has conducted 8 rounds of interviews with staff leaders resulting in 107 interviews and approximately 2700 pages of double-spaced, typed interview transcripts. The evaluation team plans to continue conducting these interviews for the duration of the AF4Q initiative.

Targeted Phone Interviews

Targeted phone interviews complement the site visits and the staff leader interviews by providing an opportunity for in-depth discussions with the individuals who lead work in the AF4Q programmatic areas within each alliance or AF4Q community. These interviews have been used throughout the study (in the earlier years of the project they were conducted as part of the site visits) and will be conducted annually or semiannually for the remainder of the project. Questions in these interviews focus on the goals, processes, barriers, and successes in the intervention area of focus.

Documentation

The evaluation team gathers, organizes, and synthesizes AF4Q-related documents to understand and track what is happening in each of the AF4Q communities and with the AF4Q initiative on a national level. These data include community funding proposals, information available on alliance or community partner websites, strategic planning documents, meeting agendas and minutes, alliance reports to the AF4Q National Program Office and the Robert Wood Johnson Foundation, news articles and other media, and documents from a host of other sources. In addition, the evaluation team members observe key meetings, webinars, conference calls, and special events to gain additional information. These observations are entered into a projectwide tracking system and provide important context about the program and its implementation.

The documents described here collectively provide the evaluation team with an extensive data set that can be compared with the key stakeholder interview and survey data to challenge the conclusions the team is developing on any given research question. Additionally, the documentation in the evaluation team's projectwide tracking system represents the single most comprehensive view of the AF4Q initiative from its inception to its current state, and it is used regularly to develop descriptions of the initiative and its evolution.2

Existing Observational Data

Because primary data collection was not possible prior to the start of the AF4Q initiative, the evaluation team uses existing secondary data to explore and understand pre-program trends and ex ante differences. While these data do not always contain the exact information sought by the evaluation team, they provide valuable insight into these issues. Additional advantages of using these data sources include: (1) cost and collection time are minimized because the data already exist; (2) the data are national in scope and include information on areas outside the AF4Q alliances; and (3) the data are standardized, allowing comparable measures across AF4Q alliances and over time. The main secondary data sources used by the evaluation team are described below in greater detail. Other data sources (eg, the US Census, the American Community Survey, and HealthLeaders-InterStudy) that provide descriptive information about the AF4Q communities and their healthcare characteristics supplement these analyses.

Dartmouth Atlas

The Dartmouth Atlas data contain claims-based quality measures for the fee-for-service Medicare population, computed for AF4Q service areas and for regions not participating in the AF4Q initiative. Specific aspects of quality of care, such as chronic disease management, care coordination, and hospital readmissions, are measured and used to assess the AF4Q initiative's effect on the long-term quality outcomes identified in the logic model.

Hospital Quality Alliance Program Patient-Level Data

Over 4200 hospitals voluntarily report their adherence to recommended processes and treatments for patients admitted for acute myocardial infarction, heart failure, and pneumonia. The Hospital Quality Alliance program data contain hospitals' performance on these process measures, along with the associated risk-standardized 30-day readmission and mortality rates. This data source is also used to estimate the effect of the AF4Q initiative on hospital quality over time, and on reductions in disparities in care, relative to hospitals in non-AF4Q communities.
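For context, a simplified sketch of how such risk-standardized rates are generally constructed (the models are specified by the measure developers, not by this evaluation): each hospital's rate is typically derived from a hierarchical logistic regression that adjusts for patient case mix, and is reported as the ratio of the hospital's predicted events to its expected events, multiplied by the national unadjusted rate:

\mathrm{RSRR}_h = \frac{\hat{P}_h}{\hat{E}_h} \times \bar{r}

where \hat{P}_h is the predicted number of 30-day readmissions (or deaths) for hospital h's patients given the hospital-specific effect, \hat{E}_h is the expected number for those same patients at an average hospital, and \bar{r} is the national unadjusted rate.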

 