
Evaluating a Community-Based Program to Improve Healthcare Quality: Research Design for the Aligning Forces for Quality Initiative

Dennis P. Scanlon, PhD; Jeffrey A. Alexander, PhD; Jeff Beich, PhD; Jon B. Christianson, PhD; Romana Hasnain-Wynia, PhD; Megan C. McHugh, PhD; Jessica N. Mittler, PhD; Yunfeng Shi, PhD; and Laura J. B
The evaluation team follows a systematic process for all types of key informant interviews that it conducts and identifies multiple respondents to discuss each topic it explores. By gathering data from multiple points of observation, the evaluation team is able to compare and contrast interviewee viewpoints to gain a dynamic view of issues and processes related to the AF4Q initiative. Interviews are digitally recorded and transcribed in full. The resulting interview transcripts are tagged, or "coded," by evaluation team members using deductive high-level (global) categories corresponding to the AF4Q initiative's main programmatic areas and to major concepts that are relevant across all alliances (eg, alliance participation, resources, and structure). These global codes are then entered into a qualitative data analysis software package (Atlas.ti), which allows large amounts of text data to be stored, sorted, and systematically queried. In consultation with the evaluation team's qualitative data manager, the interview data are pulled from Atlas.ti and analyzed using inductive approaches by evaluation team members working to address specific research questions. All qualitative analyses employ investigator triangulation to minimize bias and to ensure that the data are analyzed systematically.

The particular portion of interview data and the analysis processes used in the development of evaluation team products vary depending on several factors: (1) the topic(s) addressed by the research question; (2) the goal of each particular analysis (eg, description, explanation); (3) whether the question is best addressed using data from 1 or multiple alliances and/or multiple time points; and (4) the use of other types of data (eg, survey data, documentation) in the analysis and the designated role of each type of data.

Mixed-Methods Approaches

Some of the questions that the evaluation team seeks to answer can be informed by a combination of the team's quantitative and qualitative data. While a mixed-methods approach is not appropriate for every sub-design within the evaluation, the team has identified questions for which a quantitative or qualitative approach alone is inadequate, such as:

  • Did AF4Q alliances affect development of new payment approaches in their communities? If so, how?
  • What features of alliance organization and governance are related to community quality outcomes or progress toward those outcomes?


In these situations, the quantitative data are used to assess the magnitude and/or frequency of a phenomenon, while the qualitative data are used to understand how the phenomenon varies across contexts, the key facilitators and barriers involved, and/or the meaning attributed to it by those interviewed. Taken together, the strategic combination of data types in these subprojects allows for a more comprehensive understanding than would be possible with only 1 data type. Additionally, the use of multiple data types and sources allows the evaluation team to compare (ie, triangulate) findings and to consider alternative causes or explanations for them. While the literature on combining methods and data types is varied, the evaluation team relies on the high-level guidance provided by the National Institutes of Health's recently issued document on mixed-methods research and, as such, recognizes the importance of making clear choices and designations regarding the role of each element brought together in an analysis.7

Conclusion

Because of the myriad challenges that come with evaluating a multi-site, multi-year initiative intended to solve complex real-world problems, the AF4Q research design is best viewed as a multiphase design for a complex multi-site program. The evaluation seeks to be comprehensive, to answer a set of both broad and narrow research questions, and to be both summative and formative. While the nonrandom, competitive selection of program participants poses threats to internal and external validity, the evaluation team strives to share lessons learned and to disseminate findings, with the appropriate caveats highlighted for consumers of the AF4Q evaluation research.

While there is no exact parallel to the AF4Q initiative, similar large-scale, multi-site initiatives are becoming an increasingly common approach to addressing problems in healthcare. The US Department of Health and Human Services' Chartered Value Exchanges; the Comprehensive Primary Care initiative funded by the Centers for Medicare & Medicaid Services; the Beacon Community Cooperative Agreement Program funded by the Office of the National Coordinator for Health Information Technology; the RWJF's Healthy Kids, Healthy Communities program; and the W.K. Kellogg Foundation's Community-Based Public Health initiative are all examples of contemporaneous programs with strategies similar to those of the AF4Q initiative. Programs such as these, which invest in communities as the locus of reform, share a common set of characteristics that pose similar opportunities and challenges for researchers studying their effects. We believe that our research design, and the lessons learned from our approach, may be valuable to others tasked with evaluating similar community-based initiatives.

While the findings of the AF4Q evaluation team are not directly generalizable to all communities or populations, the evaluation makes several important contributions. It provides formative feedback to the program sponsor, participants, and other interested audiences in real time; develops some of the initial research and methodological approaches for assessing innovative and understudied interventions, many of which are now being adopted through national healthcare reform and other large-scale initiatives; furthers the analysis and understanding of effective community-based collaborative work in healthcare; and helps to differentiate the various facilitators, barriers, and contextual dimensions that affect the implementation and outcomes of community-based health interventions.

Specific findings to date from the AF4Q initiative are available through peer-reviewed papers, such as those included in this supplement, and research summaries and special reports. Individuals interested in learning more about the evaluation research design can contact the authors, and those interested in following the findings of the evaluation team can view the full list of AF4Q evaluation products at http://www.hhdev.psu.edu/chcpr/alignforce. For more information about the AF4Q initiative and the participating communities, visit www.forces4quality.org.

Appendix. Purpose, Uses, and Descriptions of the Aligning Forces for Quality Evaluation Data

This appendix includes a description of each of the main data sources used in the Aligning Forces for Quality (AF4Q) evaluation, details about the purpose and use of each data source, the target population and sampling strategy (where relevant), and other important information. Additional details on data sources and methods can be obtained by contacting the authors.

Survey Data

The evaluation team administers 3 surveys to capture important information about the AF4Q initiative, the context in which it operates, and its effects.

Consumer Survey

Purpose and uses: The consumer survey is designed to capture the components of the AF4Q logic model related to consumer engagement and consumers' use of publicly available quality information. Survey questions focus on patient activation; consumer knowledge of publicly available performance reports that highlight quality differences among physicians, hospitals, and health plans; the ability to be an effective consumer in the context of a physician visit; patients' knowledge about their illnesses; skills and willingness to self-manage those illnesses; and other related topics.

In order to provide real-time feedback and information to those implementing the AF4Q initiative, and in the spirit of our formative approach, the evaluation team will produce alliance-specific reports of consumer survey results for each of the 3 planned rounds. These reports present the alliance’s baseline and longitudinal results and comparisons with other AF4Q communities.

Survey data analysis methods are used to examine the distributions of responses to key survey questions, to model variation in those responses, and to identify factors that explain that variation. The second round of consumer survey data will be used to estimate the effect of the AF4Q initiative on consumer-related outcomes using a difference-in-differences design, in which the control group consists of pre- and post-intervention samples of consumers with chronic illnesses drawn from the national comparison sample created from areas of the country that do not include AF4Q communities.
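As a minimal sketch, and not the evaluation team's exact specification, a difference-in-differences estimate of this kind can be expressed as a regression of a consumer outcome on indicators for AF4Q community residence, survey round, and their interaction; the outcome, covariates, and coefficient labels below are illustrative assumptions rather than variables named by the evaluation team:

\[
Y_{ict} = \beta_0 + \beta_1\,\mathrm{AF4Q}_c + \beta_2\,\mathrm{Post}_t + \beta_3\,(\mathrm{AF4Q}_c \times \mathrm{Post}_t) + X_{ict}'\gamma + \varepsilon_{ict}
\]

Here, \(Y_{ict}\) is the outcome for respondent i in community c at survey round t, \(\mathrm{AF4Q}_c\) indicates residence in an AF4Q community, \(\mathrm{Post}_t\) indicates the follow-up round, and \(X_{ict}\) is a vector of respondent characteristics (eg, demographics, chronic conditions). Under the usual parallel-trends assumption, \(\beta_3\) is the difference-in-differences estimate of the AF4Q effect on the outcome.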

Target population: The targeted study population of the consumer survey is adults (>18 years old) with at least 1 of 5 chronic conditions (diabetes, hypertension, heart disease, asthma, and depression). The consumer survey collects data from all of the AF4Q communities and a national comparison sample. The sampling design for the survey is a random digit dialing telephone sample, which was created to yield a representative sample of respondents. Additionally, an oversample based on respondent race and ethnicity was drawn in 12 of the AF4Q communities to examine differences in survey responses between minorities and non-minorities.

Additional details: The consumer survey population was chosen early in the project, when the AF4Q initiative was focused solely on ambulatory care for individuals with at least 1 of the aforementioned chronic illnesses. Despite the expansion of the AF4Q initiative to include all inpatient care and all members of the population regardless of health status, the consumer survey design has remained the same to provide consistency across rounds of data collection. Also, because the focus is on those with chronic illness, the sample consists of people who are most likely to use healthcare services, including many services that are highly relevant to the AF4Q communities' areas of focus.

Physician Survey

Purpose and uses: The physician survey (National Survey of Small and Medium-Sized Physician Practices [NSSMPP]) is designed to capture data related to ambulatory quality improvement (QI) and assists the evaluation team in learning about the ambulatory QI component of the AF4Q logic model. One of the primary objectives of the NSSMPP is to assess the extent to which physician practices have adopted key components of the chronic care model, the patient-centered medical home, and other care management processes. In addition to organizational information about the practice, the NSSMPP survey instrument includes 7 domains: (1) meaningful use of clinical information technology; (2) use of care management processes to improve the quality of care for 4 chronic diseases (asthma, congestive heart failure, depression, and diabetes); (3) provision of clinical preventive services and health promotion; (4) exposure to external performance incentives such as pay-for-performance and public reporting; (5) payer mix, forms of compensation from health plans, and forms of compensation paid by the practice to its physicians; (6) organizational culture; and (7) information about health plans' provision of care management and preventive services for patients in each practice in the survey.

 