The American Journal of Managed Care
The quality care opportunities model can lead to increased utility and validity of current measurement tools and more accurate assessment of physician performance.
Accurately measuring the quality of care that ambulatory care physicians provide is an important endeavor. Current measurement instruments, while offering useful information about care systems, remain suboptimal for the measurement of individual physician performance. We offer the quality care opportunities model of ambulatory care physician performance measurement, which may address issues with current instruments while also offering useful information about efficiency and productivity for individual physicians and delivery systems from a patient-centered perspective.
(Am J Manag Care. 2012;18(6):e212-e216)

This quality care opportunities model is important in the context of currently flawed systems of measurement.
The quality of US healthcare has been called into question.1 Reform efforts have prioritized quality improvement aimed at improving care delivery systems and outcomes.2 Accordingly, there is a push to quantify the care quality delivered by systems and individual physicians.
Quality measures aimed at assessing physician performance are being published at a rapid pace. Most of these measures, while useful for performance measurement of large physician groups, pose serious limitations in quantifying individual physician performance.3,4 Instead of using isolated diagnosis-specific outcome measures (OMs), process measures (PMs), and composite measures (CMs), we propose evaluating the sum of opportunities available to each physician for evidence-based patient care and then assessing the fraction addressed over time, which can increase the validity of physician performance measurement. This approach may also offer insights into systems quality.
CURRENT PERFORMANCE MEASURES
Studies of physician performance measurement to date have mainly focused on measuring discrete aspects of care applied to groups of diagnosis-related patients.3-12 Current measures include OMs, PMs, and CMs. Outcome measures evaluate physicians based on targeted evidence-based clinical outcomes within diagnosis-related patient groups (eg, percentage of diabetic patients with glycated hemoglobin [A1C] <7%). Process measures assess physicians on whether evidence-based care processes were used within diagnosis-related patient groups (eg, percentage of time a physician ordered A1C tests for diabetic patients at minimum intervals). Individual OMs have been criticized for failing to account for factors outside of physician control (eg, patient preferences, nonadherence).3,4 In addition, while PMs do capture actions directly under provider control, critics indicate that PMs are often too disconnected from clinical outcomes to be individually useful.3,4 Furthermore, most physicians do not have large enough diagnosis-related patient populations to generate a statistically significant result for individual OMs or PMs.3,4
Composite measures attempt to improve on the low statistical significance of individual OMs and PMs by combining multiple OMs and PMs into a single measurement (eg, diabetic CM = [A1C <7%] + [low-density lipoprotein cholesterol <100 mg/dL] + [aspirin use] + [nonsmoking status]). Traditionally, CMs, like individual OMs and PMs, are applied across diagnosis-related patient groups cared for by a given physician.4-9,13-15 For example, a diabetes care composite measure such as that proposed by Kaplan et al9 would be applied across all diabetic patients cared for by a single physician in an attempt to illustrate the quality of diabetes care provided by that particular physician. Because more events are measured within this diagnosis-specific CM than within individual diagnosis-specific OMs or PMs, more physicians are likely to have the minimum number of quality events required to generate a statistically significant result.3-15
A growing number of CMs have been proposed in the literature, each attempting to measure the quality of care provided by physicians for specific diagnosis-related patient subsets.4,8,13-15 For example, diagnosis-specific CMs used in the Quality Oncology Practice Initiative involve breast cancer care-related measures, lymphoma care-related measures, and so forth.13 Several studies also demonstrate the use of CMs including “preventive care” and “chronic care” CMs.14 These types of CMs are related to types of care rather than specific diagnoses and are typically applied to patients in the same manner as diagnosis-specific CMs. Despite the improved statistical validity of these CMs compared with more granular diagnosis-specific OMs and PMs, several issues limit the clinical utility of these diagnosis-specific (or care type-specific) CMs when used as sole indicators of physician performance.
At the outset, questions about the interpretability of these CMs have limited their use.5 In other words, it is difficult to understand the source of the relative deficit for physicians with lower scores. Further, physicians with otherwise similar composite scores (eg, diabetes care composite scores, preventive care composite scores) could theoretically differ substantially in overall patient care quality depending on the source of the deficits in their individual scores. Overall, composite scores are less likely to offer meaningful insights into areas of actionable improvement when used as sole indicators of physician performance.5
Other issues limiting the usefulness of traditional CMs are problems common to the use of diagnosis-specific OMs, PMs, and CMs when applied only to diagnosis-related patient groups cared for by the physician being measured. Measuring physician performance by applying diagnosis-specific measures generates perverse incentives for physicians to eliminate complex patients or to focus on measured care at the expense of unmeasured care.11 Further, the various methods of assigning physician responsibility for individual patients lead to widely varying performance scores.4,12 Additionally, these measures fail to address physician productivity and efficiency. Finally, adequately accounting for physician panel complexity has proved difficult when working with diagnosis-specific measures applied to diagnosis-related patient groups.6
REFINING PERFORMANCE MEASURES: QUALITY CARE OPPORTUNITIES
We propose a measurement tool, quality care opportunities (QCOs), that could enhance the utility of diagnosis-specific OMs, PMs, and CMs. It would also potentially address issues associated with those diagnosis-specific measures when applied only to diagnosis-related patient groups for individual physicians.
Over the course of a day, physicians encounter many opportunities to provide evidence-based care for their patients. These opportunities include chief complaint management, chronic condition management, and screening/prevention services. Each patient presents with a unique set of needs based on demographics, risk factors, social factors, chronic medical problems, and chief complaint. Each of these needs is reflected by specific OMs or PMs (eg, patient-specific testing, medications, vaccinations, outcome measures).
To refine performance measurement, the sum of relevant OMs/PMs would first be assigned to each physician’s scope of practice. Each opportunity to provide care (reflected through all relevant OMs/PMs) represents a QCO. Total QCOs for each physician are based on specialty-relevant OMs/PMs applicable to patients seen in the office over a defined time period. Physician performance would be assessed by measuring the fraction of all QCOs met for each patient seen, ie, quality efficiency (QE).
QCO = [Total OMs] + [Total PMs]
QE = ([OMs met] + [PMs met])/[QCO]
Hence, QCO and QE measures change with each patient and at each point of measurement. That is, each patient presents a unique set of applicable OMs/PMs. For example, a 55-year-old woman with uncontrolled diabetes who has not had healthcare in 5 years will carry a set of QCOs different from those of a 68-year-old male smoker with good care continuity. This QCO-QE system captures those unique intervention opportunities at each point of service.
The QCO concept applied across same-specialty physicians can provide substantive productivity and efficiency insights, related both to absolute numbers of care opportunities confronting physicians and to the proportion of QCOs being met by physicians over equal time intervals. For example, over 1 week, assume that physician A meets 500 of 750 QCOs for his patients, while physician B meets 350 of 400 QCOs for hers. In this example, physician A has an arguably more complex patient load presenting with 750 potential QCOs versus physician B’s 400 QCOs. Further, though physician B has been more efficient (physician B’s QE is 0.875 versus physician A’s QE of 0.667), physician A has been more productive (500 QCOs met over the same period of time that physician B has met 350 QCOs).
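The QCO and QE arithmetic above can be expressed directly in code. The following minimal Python sketch reproduces the figures for physicians A and B; the class and field names are hypothetical, and the OM/PM split within each physician's totals is assumed for illustration only.

```python
# Minimal sketch of the QCO-QE calculation described in the text. Class and
# field names are illustrative assumptions; the OM/PM split is arbitrary.
from dataclasses import dataclass


@dataclass
class PhysicianPeriod:
    """OM/PM tallies for one physician over a defined time period."""
    oms_available: int  # outcome measures applicable to the patients seen
    pms_available: int  # process measures applicable to the patients seen
    oms_met: int
    pms_met: int

    @property
    def qco(self) -> int:
        # QCO = [Total OMs] + [Total PMs]
        return self.oms_available + self.pms_available

    @property
    def qe(self) -> float:
        # QE = ([OMs met] + [PMs met]) / QCO
        return (self.oms_met + self.pms_met) / self.qco


# Worked example from the text: over 1 week, physician A meets 500 of 750
# QCOs and physician B meets 350 of 400.
physician_a = PhysicianPeriod(oms_available=250, pms_available=500, oms_met=150, pms_met=350)
physician_b = PhysicianPeriod(oms_available=150, pms_available=250, oms_met=125, pms_met=225)

print(f"A: QCO={physician_a.qco}, QE={physician_a.qe:.3f}")  # QCO=750, QE=0.667
print(f"B: QCO={physician_b.qco}, QE={physician_b.qe:.3f}")  # QCO=400, QE=0.875
```

Physician B's higher QE reflects greater efficiency, while physician A's larger absolute number of QCOs met reflects greater productivity, as described above.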
Using QCO in combination with QE can help define a benchmark balance between efficiency and productivity for same-specialty physicians. Traditionally, physician productivity has been a measure of patients seen per period of time, where same-specialty physicians are expected to see relatively equal numbers of patients over equal periods of time.16 However, assume, for example, that physicians A and B are both meeting 350 QCOs per week, but that physician A is confronted with 500 QCOs and physician B with 1000 QCOs because of differences in patient panel complexity. Although both physicians are equally productive in the quality context and by number of patients seen per hour, patients cared for by physician A are arguably receiving better care than patients cared for by physician B because a higher percentage of their needs are being met. Thus, reducing the volume of patients being seen by physician B might be justifiable.
Furthermore, while the QCO-QE model represents an improvement in physician performance measurement, traditional OMs/PMs can still be extracted to make relative statements about the strengths and weaknesses of individual physicians. This type of information, while subject to the aforementioned flaws, remains meaningful for internal quality control and for directing continuing medical education efforts. Likewise, extracting OMs/PMs for provider groups can offer the statistically significant information currently measured for health system care quality delivered to diagnosis-related patient populations (eg, diabetic patients).
Finally, QCO and QE measurement utility may be expanded by quantifying the percentage of QCOs met for each patient cared for by the health system and comparing this score with the physician QE. This allows for evaluation of relative nonphysician and physician contributions to system performance. For example, if a targeted fraction of patients cared for by the system have not had minimum QCO percentages met but high physician productivity/efficiency is evident, then greater nonphysician-based concerns may be contributing to poor system performance.
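As an illustration only, the comparison described above might be sketched as follows; the function name, thresholds, and messages are assumptions introduced here for clarity, not part of the QCO-QE model itself.

```python
# Hypothetical sketch comparing system-level per-patient QCO fulfillment with
# physician QE. All thresholds and names are illustrative assumptions.
def system_vs_physician_signal(
    patient_qco_fractions: list[float],  # fraction of QCOs met for each patient in the system
    physician_qe: float,                 # QE of the physicians serving those patients
    min_patient_fraction: float = 0.8,   # assumed minimum per-patient QCO percentage
    target_share: float = 0.9,           # assumed targeted fraction of patients meeting the minimum
    high_physician_qe: float = 0.8,      # assumed threshold for "high" physician efficiency
) -> str:
    flags = [f >= min_patient_fraction for f in patient_qco_fractions]
    share_meeting_target = sum(flags) / len(flags)
    if share_meeting_target < target_share and physician_qe >= high_physician_qe:
        return "System shortfall despite high physician QE: look to nonphysician factors."
    if share_meeting_target < target_share:
        return "System shortfall: review physician and nonphysician contributions."
    return "Per-patient QCO targets are being met."


print(system_vs_physician_signal([0.9, 0.6, 0.7, 0.5, 0.95], physician_qe=0.85))
```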
The QCO-QE approach is flexible, reflecting patient changes (eg, QCOs arising from a formerly controlled diabetic patient now presenting with A1C >7%), as well as changes in practice systems (eg, assigning care of diabetic patients to a new, integrated team). Hence, QCO-QE measurement can assess and compare physician action with patient-centered needs, shifts in patient needs, and systems issues related to ongoing opportunities to provide patient-centered quality care.
ADDRESSING CURRENT MEASURE CONCERNS
Beyond shifting the focus toward patient-centered care, the QCO-QE approach may address concerns with current performance measures as related to measuring individual physicians. First, QCO-QE measurement eliminates isolated dependence of performance evaluation on OMs, over which physicians feel they have little control. Given that several processes of care are required to accomplish a given clinical outcome, PMs will be more numerous than OMs in QCO-QE measurements. Consequently, by absolute numbers, actionable PMs will carry heavier weight than uncontrolled OMs. However, because improving clinical outcomes is an important goal in healthcare delivery, OMs will justifiably contribute to the final score.
Second, QE scoring mitigates potential incentives to concentrate only on discrete aspects of measured care. Unlike diagnosis-specific OMs, PMs, and CMs, the QCO-QE system aims to evaluate all aspects of care and, as such, makes it difficult to focus on a single aspect of care (eg, diabetes management) over another (eg, screening and prevention). Again, various OMs and PMs can be extracted from the QCO-QE measurements to identify relative physician problem areas and strengths for internal quality control efforts. Further, while statistically valid diagnosis-specific CMs can also be extracted to inform performance on diagnosis-related care, the QCO-QE measurement offers an added dimension of understanding physician performance across the spectrum of care provided.
Third, evaluation of care delivered at the point of service, rather than with groups of diagnosis-related patients who may or may not have been seen, can eliminate the problem of assigning physician responsibility to various patients. We argue that while a health system—whether solo practitioner, multiphysician group, or accountable care organization (ACO)—should be accountable for all patients assigned to the system, a physician should be individually accountable only for care provided at the point of service. If a physician never sees a patient, despite being the assigned provider, then the QCOs for that patient should not contribute to the individual physician QCO-QE score. However, the patient will be captured by systems scoring; and where physicians are team leaders/members in primary care medical homes or ACOs, these physicians will still be accountable for the quality of care delivered to patients not being actively seen in the office.
Fourth, by providing insights about the absolute number of QCOs met by each physician over a period of time, QCO measurement can provide an added dimension of physician performance measurement not afforded by diagnosis-specific OMs, PMs, and CMs: productivity. Further, this measurement of productivity is more patient centered and quality centered than traditional concepts of productivity that simply use an absolute number of patients seen over a period of time. Given that complex patients will present more QCOs at each visit, QCO is proportional to patient complexity for defined patient volumes and therefore can also obviate the need for a separate system of panel-complexity stratification.
Finally, care efficiency can be elucidated. QE measurement potentially highlights physician efficiency with the fraction of all acute, chronic, and preventive patient care needs met at the point of service. Current measures of care efficiency are largely based on cost-efficiency, which is cost centered instead of patient centered, and thus promotes a number of perverse incentives.17
QCO-QE IN THE CONTEXT OF HEALTHCARE REFORM
Healthcare reform has introduced urgency for effective physician assessment. The Centers for Medicare & Medicaid Services have created a public database aimed at publishing the quality scores for physicians serving Medicare beneficiaries next year.2 A quality measurement system capable of parsing out physician performance in a patient-centered and statistically significant manner may assist meaningful quality assessment while mitigating the flaws and perverse incentives inherent to current measurement methods.
Further, the QCO-QE model could be an important evaluation tool for ACO demonstration projects. ACO models are aimed at improving quality via coordinated care systems.18 QCO-QE measurement can assist evaluation of this complex system of coordinated physician groups, hospitals, and intermediate care providers by parsing out individual physician, group, and intermediate provider contributions.
LIMITATIONS AND FUTURE CONSIDERATIONS
The QCO-QE system, as conceived here, does not measure misuse of care (eg, prescribing dangerous medications to elderly or pregnant patients) or overuse of care (eg, prescribing unnecessary antibiotics or imaging modalities). However, previous studies have shown that physician performance is generally worse for quality measures characterized by underuse (those at which the QCO is directed) than for those characterized by misuse or overuse.19,20 Further, the QCO model could theoretically be adapted to measure misuse and overuse as a greater library of measures is produced and validated for measurement of this type of care.
Furthermore, electronic medical records are a necessary element for implementing the QCO-QE model because claims data do not provide enough clinical detail to measure all available OMs and PMs.12 Moreover, not all electronic medical records currently on the market are sophisticated enough to capture the level of clinical detail needed to measure QCO-QE. Thus, to fully implement a QCO-QE model, physicians and groups would first have to implement electronic medical records of sufficient capability, adapt current administrative systems, or participate in record data extraction.21 Also, because the QCO-QE model presupposes evaluating all patients who have received care from an individual physician, not just the patients who fall within a specific plan or set of plans, physician participation and cooperation are necessary features of this model. However, with healthcare reform mandates beginning the process of posting physician quality scores publicly, physician participation will likely become standard in the future.
Furthermore, as with performance measurement in general, standardization of and consensus on all OMs and PMs are needed. When choosing which OMs and PMs will constitute the QCO for a given set of physicians, it is important to choose measures that reflect care quality. The choice of measures included in the QCO-QE will likely inform and direct the types of care provided. The QCO-QE model would also require standardization and consensus on the scope of care attributed to different physician specialties. As is now the practice, clinical guidelines, the US Preventive Services Task Force, and established quality indicator libraries can serve as a starting point for defining the included measures, which may then be refined using evidence-based practice, local factors, and administrative databases.
Finally, once the entire set of OMs and PMs is assigned to the QCO for a given physician, the various OMs and PMs should be weighted according to ease of completion to avoid incentivizing physicians to increase their QCO-QE scores merely by focusing on easily completed measures. For example, while assessing the pain level of certain patients is important, it is much easier to accomplish than achieving an A1C level <7% in a diabetic patient. Thus, the weights should reflect the difference in ease or difficulty of completion in order to give an accurate assessment of the care delivered. An alternative would be weighting based on the quality-adjusted life-years associated with each intervention, though such data are currently lacking for many processes and outcomes. Of course, empirical assessment of the QCO-QE measurement tool would guide additional improvements in this approach, as has been done with other large databases for quality and safety purposes.21
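As a rough illustration of this weighting idea, the sketch below computes a weighted QE under assumed weights; the measure names and weight values are hypothetical placeholders rather than a validated scheme.

```python
# Hedged sketch of a difficulty-weighted QE. Measure names and weights are
# hypothetical placeholders, not a validated weighting scheme.
measure_weights = {
    "pain_level_assessed": 1.0,  # easy to complete
    "a1c_test_ordered": 2.0,
    "a1c_under_7": 5.0,          # difficult outcome to achieve
}


def weighted_qe(applicable: list[str], met: list[str]) -> float:
    """Weighted QE = (sum of weights of measures met) / (sum of weights of all applicable measures)."""
    total = sum(measure_weights[m] for m in applicable)
    achieved = sum(measure_weights[m] for m in met)
    return achieved / total if total else 0.0


applicable = ["pain_level_assessed", "a1c_test_ordered", "a1c_under_7"]

# Completing only the easy measure yields a far lower weighted QE than an
# unweighted count (1 of 3 = 0.333) would suggest.
print(weighted_qe(applicable, ["pain_level_assessed"]))  # 1.0 / 8.0 = 0.125
print(weighted_qe(applicable, applicable))               # 1.0
```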
CONCLUSIONS
Providing optimal care quality is the medical system’s mandate. To understand where we currently stand and guide where we wish to go, valid physician performance measurement is imperative. By using a QCO-QE system to refine current measures, more effective physician and systems approaches can potentially emerge to yield greater opportunities to provide patient-centered care.

Author Affiliations: From Southern California Permanente Medical Group-Family Medicine (KML), El Cajon, CA; California Western School of Law (BAL), Institute of Health Law Studies, San Diego, CA.
Funding Source: None.
Author Disclosures: The authors (KML, BAL) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (KML, BAL); analysis and interpretation of data (BAL); drafting of the manuscript (KML, BAL); and critical revision of the manuscript for important intellectual content (KML, BAL).
Address correspondence to: K. M. Lovett, MD, Attending Physician, Southern California Permanente Medical Group-Family Medicine, 1630 E Main St, El Cajon, CA 92012. E-mail: klovett@ucsd.edu.

REFERENCES

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
2. One Hundred Eleventh Congress of the United States of America. The Patient Protection and Affordable Care Act. Pub L 111-148, 124 Stat 119 (2010).
3. Eddy DM. Performance measurement: problems and solutions. Health Aff (Millwood). 1998;17(4):7-25.
4. Thorpe CT, Flood GE, Kraft SA, Everett CM, Smith MA. Effect of patient selection method on provider group performance estimates. Med Care. 2011;49(8):780-785.
5. Scholle SH, Roski J, Adams JL, et al. Benchmarking physician performance: reliability of individual and composite measures. Am J Manag Care. 2008;14(12):833-838.
6. Hong CS, Atlas SJ, Chang Y, et al. Relationship between patient panel characteristics and primary care physician clinical performance rankings. JAMA. 2010;304(10):1107-1113.
7. Holmboe ES, Weng W, Arnold GK, et al. The comprehensive care project: measuring physician performance in ambulatory practice. Health Serv Res. 2010;45(6, pt 2):1912-1933.
8. Weng W, Hess BJ, Lynn LA, Holmboe ES, Lipner RS. Measuring physicians’ performance in clinical practice: reliability, classification accuracy, and validity. Eval Health Prof. 2010;33(3):302-320.
9. Kaplan SH, Griffith JL, Price LL, Pawlson LG, Greenfield S. Improving the reliability of physician performance assessment: identifying the “physician effect” on quality and creating composite measures. Med Care. 2009;47(4):378-387.
10. Fung V, Schmittdiel JA, Fireman B, et al. Meaningful variation in performance: a systematic literature review. Med Care. 2010;48(2):140-148.
11. McDonald R, Roland M. Pay for performance in primary care in England and California: comparison of unintended consequences. Ann Fam Med. 2009;7(2):121-127.
12. Scholle SH, Roski J, Dunn DL, et al. Availability of data for measuring physician quality performance. Am J Manag Care. 2009;15(1):67-72.
13. Campion FX, Larson LR, Kadlubek PJ, Earle CC, Neuss MN. Advancing performance measurement in oncology: quality oncology practice initiative participation and quality outcomes. J Oncol Pract. 2011;7(3)(suppl):31s-35s.
14. Sequist TD, Schneider EC, Li A, Rogers WH, Safran DG. Reliability of medical group and physician performance measurement in the primary care setting. Med Care. 2011;49(2):126-131.
15. Martirosyan L, Haaijer-Ruskamp FM, Braspenning J, Denig P. Development of a minimal set of prescribing quality indicators for diabetes management on a general practice level [published online ahead of print October 14, 2011]. Pharmacoepidemiol Drug Saf.
16. Duck E, DeLia D, Cantor JC. Primary care productivity and the health care safety net in New York City. J Ambul Care Manag. 2001;24(1):1-14.
17. Greene RA, Beckman HB, Mahoney T. Beyond the efficiency index: finding a better way to reduce overuse and increase efficiency in physician care. Health Aff (Millwood). 2008;27(4):w250-w259.
18. Weixel N. Speakers: ACO regulations uncertain, but providers should still start integrating. Bureau of National Affairs Medicare Report. 2010;21:1169.
19. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.
20. Mangione-Smith R, DeCristofaro AH, Setodji CM, et al. The quality of ambulatory care delivered to children in the United States. N Engl J Med. 2007;357(15):1515-1523.
21. Escobar GJ, Folck BF, Gardner MN, et al. Looking for trouble in all the right places: scanning for “electronic signatures” associated with high risk clinical situations. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Implementation Issues. Washington, DC: Agency for Healthcare Research and Quality; 2005:51-68. Advances in Patient Safety: From Research to Implementation; vol 3.