Producing Public Reports of Physician Quality at the Community Level: The Aligning Forces for Quality Initiative Experience

Jon B. Christianson, PhD; Karen M. Volmar, JD, MPH; Bethany W. Shaw, MHA; and Dennis P. Scanlon, PhD
The AF4Q initiative’s initial focus was on reporting ambulatory quality measures for the treatment of chronic illnesses; these measures, especially those related to diabetes care, dominated early reports (Table). Alliances readily accepted the AF4Q initiative’s direction to use nationally endorsed measures; it was easier to muster physician support for them and, given the reporting target date, alliances did not have the time or resources to develop new measures. Consequently, most alliances relied on National Quality Forum–endorsed chronic care measures and/or those produced by the National Committee for Quality Assurance. Securing stakeholder agreement on patient experience measures was more difficult. While hospital patient experience measures have been in use for some time, ambulatory care patient experience measures were less familiar to stakeholders. As one alliance leader observed, “…patient experience is a huge, vast gray area for us,” while another noted that these measures were “…politically a very difficult sell for physicians” compared with clinical quality measures. A further complication was that national health plans had their own measures of patient experience and were often not willing to engage with, or provide support for, an alliance process that could result in the selection of different measures drawn from other survey instruments. Alliances that did find a way to report patient experience used nationally endorsed Clinician & Group Consumer Assessment of Healthcare Providers and Systems measures. Two alliances participated in a patient experience pilot program with Consumers’ Checkbook, and all alliances attended a meeting at which options for patient experience measurement were discussed. For many alliances, the cost of collecting patient experience data proved to be a significant barrier.

For alliances new to reporting physician performance, the measure selection process typically took longer than expected, in part because it was enmeshed with early alliance efforts to build credibility and support in their communities. One alliance leader reported that the alliance “…had to be very deliberate in our selection of what our methodology was going to be and it had to be data that the physician could not just believe in but it had to be a program that the physicians could drive and own,” which meant developing guiding principles for measure selection and “a methodology that is explicit and open to scrutiny.” In summary, the measure selection and specification process often was the first consequential act undertaken by alliances under the auspices of the AF4Q initiative; they approached it cautiously, expecting that it could establish or destroy their credibility with community stakeholders.

Measure Construction

The main decision regarding construction of physician performance measures was whether to use administrative (ie, claims) data or data from medical records (Table). Initially, despite physician distrust of the accuracy and completeness of claims data, most alliances chose to use these data to construct their measures, and AF4Q initiative funds typically were used to produce claims-based measures. Alliances believed this would be the quickest path to public reporting, as the data were already available and being used by commercial health plans to produce performance measures for their members. (Four alliances using claims data also were successful in incorporating data for patients covered by Medicaid.) To construct claims-based measures, alliances contracted with data aggregators. These firms obtained the claims data from participating health plans, corrected and standardized the data, attributed patients to individual physicians or physician practices across the merged data set, and constructed measures according to alliance specifications. Typically, the first time measures were constructed using this process, measure values were reviewed by physician practices and then revised based on physician feedback. In subsequent reports, physicians were given a period in which to review results (sometimes mandated by state law) before they were released to the public. While alliances anticipated that using claims data would accelerate the public reporting process, for most alliances this proved not to be the case. It took time to convince plans to participate, and not all obliged. Once plans agreed, drafting legal agreements for data sharing and confidentiality also proved time consuming. In addition, plans submitted data in various ways, and the data often did not meet measure production standards. Finally, after receiving plan data, some aggregators took longer than expected to construct the measures.
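To make the aggregation steps above concrete, the following is a minimal sketch, in Python, of how a claims-based measure might be computed once plan data have been merged. The plurality attribution rule (assigning each patient to the practice billing the most office visits) and the example measure (annual HbA1c testing for patients with diabetes) are illustrative assumptions for exposition, not the actual specifications used by the alliances or their data aggregators.

```python
from collections import Counter, defaultdict

# Illustrative merged claims records: (patient_id, practice_id, diagnosis, procedure).
# All codes are placeholders, not actual ICD/CPT specifications.
claims = [
    ("p1", "practiceA", "diabetes", "office_visit"),
    ("p1", "practiceA", "diabetes", "hba1c_test"),
    ("p1", "practiceB", "diabetes", "office_visit"),
    ("p2", "practiceB", "diabetes", "office_visit"),
    ("p3", "practiceA", "hypertension", "office_visit"),
]

# Step 1: attribute each patient to the practice with the most visits
# (a common plurality rule; ties here resolve arbitrarily by insertion order).
visits = defaultdict(Counter)
for patient, practice, _, procedure in claims:
    if procedure == "office_visit":
        visits[patient][practice] += 1
attribution = {patient: counts.most_common(1)[0][0] for patient, counts in visits.items()}

# Step 2: define the measure denominator (patients with a diabetes diagnosis)
# and numerator (those with an HbA1c test claim during the period).
denominator = {p for p, _, dx, _ in claims if dx == "diabetes"}
numerator = {p for p, _, _, proc in claims if proc == "hba1c_test"}

# Step 3: compute the practice-level measure rate over attributed patients.
rates = {}
for practice in set(attribution.values()):
    eligible = [p for p in denominator if attribution.get(p) == practice]
    if eligible:
        rates[practice] = sum(p in numerator for p in eligible) / len(eligible)

print(rates)  # e.g., {'practiceA': 1.0, 'practiceB': 0.0}
```

Even in this toy form, the sketch shows why the attribution rule mattered to physicians: a patient who sees clinicians at several practices counts toward only one practice's denominator, so the choice of rule shifts whose report card the patient appears on.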

Another measure construction option was to use data from paper or electronic medical records; 2 alliances used variants of this process prior to joining the AF4Q initiative. Physician practices provided clinical data from a random sample of medical charts or for a population of eligible patients drawn from a patient registry (often computer-based). Some alliances expressed concern that physicians would reject either approach as too burdensome. In practice, while it did impose costs on practices, physicians agreed to clinical data submission, believing the measures would better reflect the quality of care in their practices because they would be constructed using data from the entire patient population. Using data from patient records rather than claims minimized technical issues around attribution of patients to physicians; allowed reporting of biologic markers, such as low-density lipoprotein (LDL) cholesterol levels, which is not possible using claims data; and facilitated reporting at the physician practice level, as opposed to a larger medical group level, because more observations were available for measure construction. For many alliances, this seemed like the appropriate level of reporting, as it coincided with the level at which quality improvement efforts were likely to be implemented. However, it was not necessarily faster, initially, than producing reports using claims data. The alliances using this approach did not attempt to construct measures at the individual physician level, in part because physicians opposed doing so.
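The clinical-data path can be illustrated in the same spirit. The sketch below, again a hypothetical simplification rather than any alliance's actual specification, computes a biologic-marker measure (the share of a practice's patients with diabetes whose LDL cholesterol is below 100 mg/dL) from registry-style records; the field names and threshold are assumptions for illustration. Note that this is the kind of result claims data alone cannot support, since claims show only that a test was billed, not its value.

```python
# Illustrative registry extract: one row per eligible patient submitted by a practice.
# Field names and the 100 mg/dL threshold are assumptions for illustration only.
registry = [
    {"patient_id": "p1", "practice_id": "practiceA", "condition": "diabetes", "ldl_mg_dl": 92},
    {"patient_id": "p2", "practice_id": "practiceA", "condition": "diabetes", "ldl_mg_dl": 131},
    {"patient_id": "p4", "practice_id": "practiceB", "condition": "diabetes", "ldl_mg_dl": 88},
]

def ldl_control_rate(rows, practice, threshold=100):
    """Share of a practice's diabetic patients with LDL below the threshold.

    Unlike a claims-based measure, this uses the recorded lab value itself,
    so it can report whether the marker is controlled, not merely tested.
    """
    eligible = [r for r in rows if r["practice_id"] == practice and r["condition"] == "diabetes"]
    if not eligible:
        return None
    return sum(r["ldl_mg_dl"] < threshold for r in eligible) / len(eligible)

print(ldl_control_rate(registry, "practiceA"))  # 0.5
print(ldl_control_rate(registry, "practiceB"))  # 1.0
```

Because the registry already links each patient to the submitting practice, no attribution step is needed, which is the technical simplification the text credits to the medical-records approach.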

As with claims-based measure construction, building the infrastructure to support measure construction using clinical data was arduous. It required careful specification of procedures for sampling patients and identifying eligible patients based on measure guidelines. Alliances established portals to receive physician data and visited physician practices to audit submissions. Alliances that had never constructed clinical performance measures typically adopted the policies of other alliances. One experienced alliance even configured its portal to accept physician data submissions from practices of another alliance.

Irrespective of the approach, when several alliances realized that they would not produce their first physician performance report by the AF4Q initiative target date, they turned to the Centers for Medicare & Medicaid Services (CMS) GEM (Generating Medicare Physician Quality Performance Measurement Results) data to construct a small number of measures. CMS contracted with the Massachusetts quality improvement organization to generate physician practice–level performance measures using Medicare administrative claims data only, resulting in 12 summary measures for each practice.12,13 One respondent called using GEM data “checking the box” to meet the AF4Q initiative public reporting requirements and felt it damaged the alliances’ credibility with local physicians who were working toward constructing medical records–based measures.

Contribution to Publicly Available Physician Performance Information

One of the AF4Q initiative’s goals was for alliance public reports to increase the amount of credible physician performance information available to consumers in alliance communities. The average number of highly credible reports (defined as reports using data from multiple payer or provider sources, produced by a neutral community-based organization, and available to the general public) increased from 0.43 to 2.07 in the 14 original AF4Q communities, versus an increase from 0.43 to 0.57 in the 7 comparison communities. During this period, only 1 comparison community added a physician report sponsored by a community organization (Figure). The addition of these types of reports is noteworthy because the physician performance information they contain is available to all community residents, not just health plan enrollees.

Almost all reports, irrespective of sponsor, included preventive care measures and measures of adherence to treatment guidelines, or of biologic markers, for people with chronic illnesses. Preventive care measures were specific to gender or age groups, while chronic illness measures were relevant only to the residents with those conditions. Alliance reports in some communities expanded the amount of information about the treatment of chronic illnesses (eg, diabetes) by reporting measures based on medical records data that were not available in health plan reports. By combining data from multiple sources, alliances also were able to publish performance measures at the physician practice level, in contrast to measures constructed at the medical group level (Table). In addition, alliances were able to report actual measure values, whereas many health plan reports classified physicians into groups of high and low performers because there were too few observations per network physician or group practice to report reliable measure values. The addition of patient experience measures expanded the information in public reports beyond clinical measures and measures relevant only to certain subpopulations defined by disease or recommended preventive care guidelines. At baseline, measures of patient experience with physicians were available (in any report) in only 3 AF4Q communities and 1 comparison community. By 2011, consumers in 10 of 14 AF4Q communities had access to publicly reported patient experience measures (across reports from all sponsors), versus consumers in 2 of 7 comparison communities. This increase was due to the addition of patient experience measures in alliance reports.

Another perspective on the contribution of alliances to the availability of publicly reported physician quality information can be gained by comparing the reporting activity of alliances with that of chartered value exchanges (CVEs). Beginning in 2008, community organizations could apply to the federal government for CVE designation, which was awarded to multistakeholder coalitions. Among other requirements, the coalitions had to state their commitment to publishing provider quality information. The CVEs were given access to summary Medicare provider performance data and received technical assistance through a peer-learning network; in contrast to the AF4Q initiative, however, they did not receive direct funding or a target reporting date.14 At present, 11 of the initial 14 AF4Q alliances are among the 24 organizations that have received CVE designation. Among CVEs that are not AF4Q alliances, only 3 report physician quality measures: California, Colorado (both in the comparison group; see Figure), and Virginia. Thus, CVEs that are not AF4Q initiative participants are much less likely than CVEs that are also AF4Q alliances to provide physician performance information to community residents.

Conclusions

 