
The Mis-Measure of Physician Performance

Publication
Article
The American Journal of Managed Care, October 2013
Volume 19
Issue 10

We discuss our concerns about tying physician performance to CG-CAHPS scores and suggest an alternative approach to facilitate translation of service excellence into clinical practice.

The Affordable Care Act directs the Secretary of Health and Human Services to compare individual physicians using patient experience measures. This policy initiative will utilize the Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) survey program. It will impact over 700,000 eligible physicians and will be tied to reimbursement and the Centers for Medicare & Medicaid Services’ Physician Compare reporting feature starting in 2015. We believe that the relevance of this framework to today’s clinical environment is a critical issue to address before implementing this regulatory mandate. In this article we discuss our concerns about tying individual physician performance to CG-CAHPS scores, including: 1) intrinsic versus extrinsic approaches to assessing the patient experience, 2) measurement issues, and 3) unintended consequences. We also suggest an alternative pathway and opt-out mechanism to facilitate more rapid translation of service excellence into clinical practice.

Am J Manag Care. 2013;19(10):782-785

The Affordable Care Act directs the Secretary of Health and Human Services to compare individual physicians using patient experience measures.

  • This initiative could impact over 700,000 eligible physicians and will be tied to reimbursement and the Centers for Medicare & Medicaid Services’ Physician Compare reporting feature starting in 2015.

  • Our policy concerns center on 3 areas: 1) intrinsic versus extrinsic approaches to assessing the patient experience, 2) measurement issues, and 3) unintended consequences.

  • An opt-out pathway allowing organizations to assume accountability for patient experience measurement should be considered.

The United States is poised to embark on one of the largest measurement efforts in the history of medicine—the assessment of patient satisfaction with physician performance using the Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) surveys.1 The Affordable Care Act (ACA) directs the Secretary of Health and Human Services to compare individual physicians using patient experience measures.2 This initiative will impact over 700,000 eligible physicians and will be tied to reimbursement and the Centers for Medicare & Medicaid Services’ (CMS’) Physician Compare reporting feature starting in 2015 for provider groups and for all individual physicians by 2017.3

While acknowledging the role of CMS in catalyzing an increasing focus on the patient experience in provider organizations, we believe that the CG-CAHPS construct for evaluating individual physician performance is flawed. The final regulations implementing this section of the ACA for individual providers have not yet been written; hence the importance of raising our concerns with the physician community now. Our concerns about tying individual physician performance to CG-CAHPS data center on 3 areas: 1) intrinsic versus extrinsic approaches to assessing the patient experience, 2) measurement issues, and 3) unintended consequences.

Intrinsic Versus Extrinsic Approaches to Assessing the Patient Experience

Since the inception of CAHPS, we have seen profound improvement in our understanding of service operations in healthcare.4 In a learning healthcare system, an information feedback loop to clinicians on their activities and outcomes is critical to achieving the best possible care.5 Service transformation is a dynamic process that requires actionable data for physicians and managers and the flexibility to focus on operational issues unique to each clinical environment. However, CG-CAHPS was not designed for such a framework. In fact, the explicit goal of CAHPS is to “develop standardized patient surveys that can be used to compare results across sponsors and over time,” where the priority is public reporting rather than facilitating improvement at the practice level.1

By analogy, one of the key methodologies in the service operations field is the Toyota Production System.6 Here, information is developed and assessed in real time by all members of the production team. This approach cannot work when data points are 1) determined by an external group, 2) not designed for the specific environment at hand, and 3) not available in real time. Additionally, Toyota does not wait for the annual J.D. Power survey of consumers to decide how to improve its automobiles. Contrary to the Toyota model, CMS prevents physicians from taking ownership of measurement of the patient experience. For example, the surveys are generated by an external body in a standardized fashion that does not allow modification of the core measures. Further, in an effort to preserve the integrity of the external measurement process, CMS requires that its surveys be administered before any internally informed patient survey.7 This complexity makes it very difficult for a provider organization to collect its own patient experience data. Of note, the delay in CAHPS survey administration may be up to 6 weeks, and responses cannot be tracked to individual patients, thereby limiting the utility and actionability of the data. There is no reason to believe the guidance on this issue will be any different for CG-CAHPS than it is for the CAHPS Hospital Survey (HCAHPS).

This is not to dispute the potential benefits of extrinsic measurement of the patient experience, such as greater transparency and accountability. However, we believe that the preferred approach to measurement would be intrinsic, with a more limited role for external evaluation that does not inhibit internal innovation within provider organizations. In fact, CMS recognizes the importance of this intrinsic approach, stating that “HCAHPS survey items complement the data hospitals currently collect to support improvements in internal customer services and quality related activities.”8 However, current regulations do not allow sufficient flexibility around this framework. Later in this paper we discuss ways in which CMS could implement pay-for-performance concepts focused on enhancing internal performance efforts.

Measurement Challenges and Methodologic Issues

Specific constructs within CAHPS are indeed evidence-based, and in cross-sectional studies patient experience measures are associated with improved clinical performance and outcomes.9,10 However, there is a gap in effective strategies for translating patient experience improvements into clinical practice. Creating financial penalties and public reporting programs without a more robust understanding of the science of improving the patient experience in provider organizations may fail to achieve the policy goal of enhancing patient care or may have unintended consequences.11 Previous work suggests that substantial change within provider organizations around the patient experience will occur only with significant changes in organizational strategy, leadership, and culture.12 These organizational challenges require actionable data to help managers lead this transformation.

CG-CAHPS proposes to capture data on at least 300 patient encounters per practice and 45 surveys per physician. Currently, participation in the program is the sole criterion established by CMS for receipt of payment updates, but in the hospital realm these data have been used to “incentivize” performance as well. Without a specific construct for how these data will be used in physician payment, we envision several major challenges. In measuring the performance of 700,000 physicians, minute differences on a 10-point Likert scale will be statistically significant. If the data are used to rank physicians (assuming a score distribution similar to HCAHPS), a change of 100,000 places in ranking would result from a difference of only 0.1 on an overall score. Moreover, ceiling effects could exacerbate these measurement issues within and across practices, especially as they develop strategies to improve scores to try to achieve payment updates.
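To make these measurement concerns concrete, the brief simulation below is a hypothetical sketch: the assumed score mean (9.0), spread (0.5), response standard deviation (1.5), and group sizes are illustrative values of our own, not CG-CAHPS or HCAHPS parameters. It shows how, in a tightly clustered score distribution, a 0.1-point difference can span tens of thousands of rank positions among 700,000 physicians, while the sampling error of a single physician's estimate based on 45 surveys is considerably larger than 0.1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only (not CG-CAHPS or HCAHPS data):
# 700,000 physicians with true overall scores clustered tightly on a 0-10 scale.
n_physicians = 700_000
true_scores = np.clip(rng.normal(loc=9.0, scale=0.5, size=n_physicians), 0, 10)
true_scores.sort()

# 1) Rank sensitivity: count how many physicians fall within a 0.1-point band
#    around the median score, i.e., how many rank positions a 0.1-point
#    difference spans near the middle of the distribution.
median = np.median(true_scores)
span = (np.searchsorted(true_scores, median + 0.05)
        - np.searchsorted(true_scores, median - 0.05))
print(f"Rank positions spanned by a 0.1-point difference: {span:,}")

# 2) Statistical vs practical significance: with very large pooled samples,
#    a 0.1-point difference is easily "statistically significant," yet the
#    standard error of one physician's mean based on 45 surveys is several
#    times larger than 0.1.
response_sd = 1.5                                         # assumed SD of individual responses
se_pooled = np.sqrt(2) * response_sd / np.sqrt(10_000)    # two groups of 10,000 responses each
se_physician = response_sd / np.sqrt(45)                  # one physician, 45 surveys
print(f"SE of a difference between two groups of 10,000 responses: {se_pooled:.3f}")
print(f"SE of one physician's mean from 45 surveys:                {se_physician:.2f}")
```

The exact rank shift depends entirely on the assumed spread of scores; the qualitative point, that rank differences at this scale dwarf what 45 surveys per physician can reliably distinguish, does not.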

In many situations, physicians are now employees of larger systems without direct control of their practice environment. Will the CG-CAHPS measure be used to update professional service fees, technical/facility fees, or both? In fact, physicians in a larger system might correctly argue that they have minimal control over the overall patient experience and that their survey data reflect system rather than individual performance. For many physicians in academic medicine, patient care is melded into a life that also includes teaching and research. It may not be possible to attain the required number of encounters for those whose practice is limited to 1 or 2 specialty clinics per week.

Unintended Consequences

The reported patient experience is often a factor in determining compensation for senior managers, to the detriment of using patient experience assessment to improve service operations. Rather than a positive achievement, we fear this has institutionalized the administrative aspects of patient experience assessment rather than the goal of improving service operations. In fact, the constant “strive for five” throughout the healthcare system is evidence of how measurement of the patient experience has been co-opted into a performance metric for senior managers.13 “Strive for five” is a marketing campaign for higher scores, not a commitment to a better patient experience. Previous work has shown that hospitals that focus on administrative components of performance measures actually deliver worse clinical outcomes.14 The emphasis on public reporting may also cause providers to shy away from treating certain “high-risk” patients or make them more likely to acquiesce to patient requests, leading to unnecessary healthcare utilization (eg, prescription drugs, imaging) and higher costs.15

From a policy perspective, the original goal of CAHPS, developing comparative information for consumers in an era of solo practices, has been eclipsed over time. Indeed, the evidence suggests that health systems, not patients, are most affected by publicly reported performance measures.16 These organizations are now struggling with how to create robust measurement tools for individual practices within the overall system, again an effort very different from that proposed by CMS through the CAHPS program.

Proposed Next Steps

There is an urgent need to move the science of measurement of the patient experience forward.14,17 We believe the time is right to consider approaches that incentivize the transformation of healthcare consistent with the Toyota Production System model, in which service operations is nurtured as a management construct. To address this policy goal, we propose a CG-CAHPS opt-out alternative for organizations willing to take responsibility for developing and implementing their own patient experience measurement. CMS could require confirmation that such a measurement process is in place and actively being used by the management of organizations in the opt-out pathway. Further, there could be a requirement that each “self-managed” site develop an annual report of achievements (submitted to hospital or practice management, a public website, or CMS). This report would require oversight to ensure that organizations are using the opt-out mechanism to develop actionable data and that they are building management capacity to respond to the data they receive. From a financial perspective, sites that choose this path and complete the required reporting would then be considered in the top tier of any pay-for-performance criteria. From a policy perspective, CMS should aggressively consider this type of opt-out pathway for other clinical performance measurement approaches through the Center for Medicare & Medicaid Innovation in order to move the science of performance measurement forward. We believe this alternative pathway should apply to future CAHPS surveys (eg, emergency department CAHPS [ED CAHPS]) and other related quality measurement efforts.

CONCLUSION

We need to carefully reconsider the overall CG-CAHPS framework and its associated measurement issues before this policy experiment becomes a reality for individual physicians and their practices all over the country. In addition, we propose an opt-out pathway allowing organizations to assume accountability for assessment of the patient experience to facilitate more rapid translation of service excellence into clinical practice.

Author Affiliations: Department of Emergency Medicine, University of North Carolina School of Medicine, Chapel Hill, NC (SWG); Duke Clinical Research Institute and Department of Medicine, Duke University School of Medicine, Durham, NC (KAS).

Funding Source: None.

Author Disclosures: Drs Glickman and Schulman are co-founders, equity holders, and board members of Bivarus, Inc. They report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (SWG, KAS); acquisition of data (SWG, KAS); analysis and interpretation of data (SWG, KAS); drafting of the manuscript (SWG, KAS); critical revision of the manuscript for important intellectual content (SWG, KAS); statistical analysis (SWG, KAS); provision of study materials or patients (SWG, KAS); administrative, technical, or logistic support (SWG, KAS); and supervision (SWG, KAS).

Address correspondence to: Seth W. Glickman, MD, MBA, University of North Carolina, 170 Manning Dr, CB #7594, Chapel Hill, NC 27599. E-mail: seth_glickman@med.unc.edu.

1. Surveys and tools to advance patient care: about CAHPS. http://cahps.ahrq.gov/about.htm. Accessed February 18, 2013.

2. The Patient Protection and Affordable Care Act (Pub. L. 111—148). 2010. Section 10331(a).

3. Federal Register. Department of Health and Human Services, Centers for Medicare & Medicaid Services. 42 CFR Parts 410, 414, 415, et al. Medicare Program; Revisions to Payment Policies Under the Physician Fee Schedule, DME Face-to-Face Encounters, Elimination of the Requirement for Termination of Non-Random Prepayment Complex Medical Review and Other Revisions to Part B for CY 2013; Final Rule. http://www.gpo.gov/fdsys/pkg/FR-2012-11-16/pdf/2012-26900.pdf. Accessed June 13, 2013.

4. Kenagy JW, Berwick DM, Shore MF. Service quality in health care. JAMA. 1999;281:661-665.

5. Institute of Medicine of the National Academies. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: The National Academies Press; 2012.

6. Culig MH, Kunkle RF, Frndak DC, Grunden N, Maher TD Jr, Magovern GJ Jr. Improving patient care in cardiac surgery using Toyota production system based methodology. Ann Thorac Surg. 2011;91:394-399.

7. The CAHPS Hospital Survey Quality Assurance Guidelines, Version 7.0. http://www.hcahpsonline.org/files/HCAHPS%20Quality%20Assurance%20Guidelines%20V7.0%20March%202012.pdf. Published March 2012. Accessed April 18, 2013.

8. Hospital Care Quality Information from the Consumer Perspective. http://www.hcahpsonline.org. Accessed June 11, 2013.

9. Sequist TD, Schneider EC, Anastario M, et al. Quality monitoring of physicians: linking patients’ experiences of care to clinical quality and outcomes. J Gen Intern Med. 2008;23:1784-1790.

10. Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med. 2013;368:201-203.

11. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007; 357(6):608-613.

12. Davies E, Shaller D, Edgman-Levitan S, et al. Evaluating the use of a modified CAHPS survey to support improvements in patient-centered care: lessons from a quality improvement collaborative. Health Expect. 2008;11:160-176.

13. Lee F. If Disney Ran Your Hospital: 9-1/2 Things You Would Do Differently. Bozeman, MT: Second River Healthcare Press; 2004.

14. Glickman SW, Boulding W, Staelin R, Roos MT, Schulman KA. An alternative pay-for-performance scoring method: implications for quality improvement and patient outcomes. Med Care. 2009;47:1062-1068.

15. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172:405-411.

16. Ketelaar NA, Faber MJ, Flottorp S, et al. Public release of performance data in changing the behaviour of healthcare consumers, professionals or organisations. Cochrane Database Syst Rev. 2011;(11):CD004538.

17. Berenson RA, Pronovost PJ, Krumholz HM. Achieving the potential of health care performance measures. Robert Wood Johnson Foundation. http://www.rwjf.org/content/dam/farm/reports/reports/2013/rwjf406195. Published May 2013. Accessed June 12, 2013.
