Shifting from claims data to integrated electronic health records to calculate quality metrics may improve reported quality through changes in data capture rather than through true quality improvements.
Published Online: September 07, 2011
Jennifer Tjia, MD, MSCE; Terry S. Field, DSc; Shira H. Fischer, AB; Shawn J. Gagne, BA; Daniel J. Peterson, MA, MS; Lawrence D. Garber, MD; and Jerry H. Gurwitz, MD
Objectives: While the 2011 implementation of “meaningful use” legislation for certified electronic health records (EHRs) promises to change quality reporting by overcoming data capture issues affecting quality measurement, the magnitude of this effect is unclear. We compared the measured quality of laboratory monitoring of Healthcare Effectiveness Data and Information Set (HEDIS) medications based on specifications that (1) include and exclude patients hospitalized in the measurement year and (2) use physician test orders and patient test completion.
Study Design: Cross-sectional study.
Methods: Among patients 18 years and older in a large multispecialty group practice utilizing a fully implemented EHR between January 1, 2008, and July 31, 2008, we measured the prevalence of ordering and completion of laboratory tests monitoring HEDIS medications (cardiovascular drugs [angiotensin-converting enzyme inhibitors or angiotensin receptor blockers, digoxin, and diuretics] and anticonvulsants [carbamazepine, phenobarbital, phenytoin, and valproic acid]).
Results: Measures excluding hospitalized patients were not statistically significantly different from measures including hospitalized patients, except for digoxin, but this difference was not clinically significant. The prevalence of appropriate monitoring based on test orders typically captured in the EHR was statistically significantly higher than the prevalence based on claims-based test completions for cardiovascular drugs.
Conclusions: HEDIS quality metrics based on data typically collected from claims underestimated the quality of medication monitoring compared with EHR data. The HEDIS optional specification excluding hospitalized patients from the monitoring measure does not have a significant impact on reported quality. Integration of EHR data into quality measurement may significantly change some organizations’ reported quality of care.
(Am J Manag Care. 2011;17(9):633-637)
Integration of electronic health record (EHR) data into quality measurement may significantly change some organizations’ reported quality of care.
For example, measuring laboratory monitoring quality from claims data (test completion) underestimates care quality compared with measurements based on EHR data (physician test ordering).
If quality-of-care measurements improve concurrent with “meaningful use” implementation, it may be difficult to discern whether this change reflects true quality improvements or measurement changes.
One strategy to address this issue is to calculate both claims-only and claims-plus-EHR measures concurrently, so that future changes in reported quality can be interpreted.
The Institute of Medicine’s report Crossing the Quality Chasm1 prompted significant efforts toward the pursuit of high-quality healthcare. As a result, major investments to improve healthcare quality have focused on 2 areas: (1) the development and public reporting of quality-of-care measures and (2) the promotion and adoption of electronic health records (EHRs).2 The synergy of these 2 concurrent efforts was recently accelerated by the 2011 implementation of incentive payments for the meaningful use of certified EHR technology under the 2009 American Recovery and Reinvestment Act3; this synergy will have an important impact on healthcare in the United States.
The Centers for Medicare & Medicaid Services’ stage 1 rollout of “meaningful use” criteria for EHRs in January 2011 aimed to reduce disparities in EHR use across healthcare providers by offering monetary incentives for EHR adoption.4 While one of the goals in promoting the widespread adoption of EHRs is to improve quality of care,5 there is evidence to suggest that expanded EHR availability and meaningful data integration may change measured quality through changes in data capture and measurement, and not through actual improvements in healthcare quality.2,6-8
For example, prior studies have shown that quality measures calculated from administrative claims alone are different from measures that incorporate medical record data.2,6-8 As a result, “hybrid” administrative-medical record–based calculation methods were incorporated into some, but not all, Healthcare Effectiveness Data and Information Set (HEDIS) performance measures.7 The implication is that healthcare enterprises without EHRs are disadvantaged,9-11 while those equipped to readily capture medical record data for quality reporting have an advantage by being able to report higher numbers for performance measures than those using only administrative claims. While this phenomenon has been described for cancer screening and vaccination rates,2,6-8 it has not been examined for the quality measurement of the laboratory monitoring of medications.
Since drug-induced injury is common12,13 and failure to monitor high-risk medications is one of the leading factors contributing to adverse drug events,13 the National Committee for Quality Assurance included medication monitoring measures in HEDIS in 2006.14 These standards recommend laboratory monitoring of high-risk, narrow-therapeutic-window medications, including angiotensin-converting enzyme (ACE) inhibitors, angiotensin II receptor blockers (ARBs), digoxin, diuretics, and anticonvulsants (phenobarbital, phenytoin, valproic acid, and carbamazepine),15 with monitoring assessed from administrative data only. Aware of the difficulties with data capture across transitions of care from the hospital to the ambulatory setting, HEDIS optionally specifies that measurement may be affected by population selection (ie, excluding or including hospitalized patients whose hospital laboratory tests are not consistently reported to ambulatory medical records or claims), but it does not specify that measurement may be affected by the source of the data.
Because the magnitude of data source and population selection effects is unclear, we conducted this study to assess the ordering and completion of laboratory tests for high-risk HEDIS medications in a large multispecialty group practice to compare the reported quality of monitoring based on (1) 2 HEDIS specifications for the population (including and excluding patients hospitalized in the measurement year) and (2) 2 outcome measures (physician test orders vs actual completion of tests). With the federal investment promising to eliminate barriers to EHR adoption, our findings have implications for quality-of-care reporting and measurement, and will inform some expected developments resulting from the EHR meaningful use legislation.
Study Setting and Population
This study was conducted in a large multispecialty group practice that provides the majority of medical care to members of a closely associated New England–based health plan. The group practice employs 250 outpatient physicians at 30 ambulatory clinic sites. The practice uses the EpicCare Ambulatory EHR system and provides medical care to approximately 180,000 individuals. Patients had to be continuously enrolled during the observation period and not residing in a long-term care facility. Data about medication exposure were derived from the prescription drug claims of the health plan. Data about laboratory test ordering and completion were derived from the multispecialty group EHR. The age and sex characteristics of the study population were similar to those of the general population of the United States in 200016: 54% of the adults were female and 36% were over 65 years of age. The group practice has only recently begun to capture race and has incomplete data, but the health plan’s market research indicates a patient racial mix consistent with the plan’s catchment area, which includes whites (79%), Hispanics (12%), African Americans (5%), and other races (4%).
HEDIS Drugs and Recommended Monitoring Tests
We used drug dispensing claims from January 1, 2008, to July 31, 2008, to identify the first dispensing of 1 of the high-risk medications for a patient on or after January 1, 2008. We used drug dispensing claims from January 1, 2007, to December 31, 2007, as a look-back period in order to identify patients who were taking a drug for at least 180 days, as specified by HEDIS guidelines; we included only patients with evidence of another drug dispensing in the 180 days prior to January 1, 2008 (Table 1). For the study drugs, appropriate annual monitoring was defined as receipt of a serum potassium test and either a serum creatinine or blood urea nitrogen test for ACE inhibitors/ARBs, digoxin, and diuretics, and receipt of a test for anticonvulsant drug serum concentration for the anticonvulsants. Test ordering and test completion were defined as having occurred if there was at least 1 recommended test order and test completion for each specific drug-test pair within 180 days before or after the index dispensing in 2008. To test the effect of population specification on reported quality as specified by HEDIS,12 we created 2 estimates for each drug-test combination: one based on the entire study population and a second excluding patients with any hospitalization in the observation year. To test the effect of outcome specification on reported quality, we also created estimates based on test completion.
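The cohort and outcome definitions above can be expressed as simple date-window checks. The following is an illustrative sketch only; the record structures and function names are hypothetical and are not drawn from the study's actual code.

```python
from datetime import date, timedelta

# Hypothetical sketch of the HEDIS-style cohort and monitoring logic
# described in the Methods; dates mirror the study's measurement windows.

LOOKBACK_DAYS = 180   # prior dispensing required within 180 days of Jan 1, 2008
WINDOW_DAYS = 180     # a test counts if within +/-180 days of the index dispensing

def index_dispensing(dispensings, start=date(2008, 1, 1), end=date(2008, 7, 31)):
    """First dispensing of the drug on or after the measurement-year start."""
    in_window = sorted(d for d in dispensings if start <= d <= end)
    return in_window[0] if in_window else None

def continuously_treated(dispensings, start=date(2008, 1, 1)):
    """Evidence of another dispensing in the 180 days before January 1, 2008
    (the study's proxy for at least 180 days of drug use)."""
    return any(start - timedelta(days=LOOKBACK_DAYS) <= d < start
               for d in dispensings)

def monitored(index_date, test_dates):
    """True if at least one recommended test falls within 180 days
    before or after the index dispensing."""
    lo = index_date - timedelta(days=WINDOW_DAYS)
    hi = index_date + timedelta(days=WINDOW_DAYS)
    return any(lo <= t <= hi for t in test_dates)

# Example: a user with a qualifying prior dispensing and an in-window test
rx = [date(2007, 11, 15), date(2008, 2, 1)]
idx = index_dispensing(rx)
print(idx)                                   # 2008-02-01
print(continuously_treated(rx))              # True
print(monitored(idx, [date(2008, 3, 10)]))   # True
```

The same window logic applies per drug-test pair; a patient counts in the numerator of the test-ordering measure if any recommended order for that pair falls in the window.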
We used a χ2 test to compare differences in monitoring based on the 2 HEDIS specifications for the population (including and excluding patients hospitalized in the measurement year) and the 2 outcome measures (physician test orders vs actual test completion). All analyses were conducted with SAS version 9.2 (SAS Institute Inc, Cary, North Carolina). This study was approved by the institutional review boards of our research institution and the participating group practice.
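For a 2 × 2 comparison of this kind (eg, monitored vs not monitored, under two measure specifications), the Pearson chi-square statistic has a closed form. The sketch below uses made-up counts for illustration only; they are not the study's data.

```python
# Minimal Pearson chi-square for a 2x2 table [[a, b], [c, d]], no
# continuity correction. Counts here are hypothetical, not from the study.

def chi2_2x2(a, b, c, d):
    """Chi-square statistic: n * (ad - bc)^2 / row and column totals."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# e.g., 92.3% monitored by orders vs 80.2% by completions in samples of 1000
stat = chi2_2x2(923, 77, 802, 198)
print(stat > 3.841)   # True: exceeds the 1-df critical value for alpha = .05
```

With 1 degree of freedom, a statistic above 3.841 corresponds to P < .05, matching the form of the comparisons reported in the Results.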
Test Ordering by Clinicians Including and Excluding Hospitalized Patients
Approximately 10% of each population of medication users had a hospitalization in the observation year, except digoxin users, approximately 20% of whom were hospitalized (Table 2). There were no statistically significant differences in the prevalence of appropriate test monitoring when we compared estimates based on the sample including hospitalized patients with estimates based on the sample excluding hospitalized patients. For example, 93.9% of hospitalized patients prescribed digoxin had appropriate test monitoring, compared with 92.3% of patients who were not hospitalized (P = .18).
Test Ordering by Clinicians Compared With Overall Test Completion
The prevalence of test completion for all drugs was lower than that of physician test ordering because patient adherence to test orders ranged from 85.6% to 93.3% (data not shown). When we examined the sample that included hospitalized patients and compared physician test orders with overall test completion, there were statistically significant differences between these 2 measures for the cardiovascular drugs, but not the anticonvulsants. For example, for diuretics, 92.3% of physicians ordered the appropriate monitoring test, but only 80.2% of all indicated tests were completed (P <.001; Table 3). These differences were not statistically significant for the less commonly used drugs.
This study examines 2 aspects of HEDIS quality measurement for medication monitoring. We found that the choice of outcome measure can affect a physician's reported quality of care. In contrast to HEDIS recommendations, the decision of whether to include hospitalized patients in the measure estimates does not appear to have a significant impact. These results have implications for quality-of-care measurement.