Patterns of electronic health record adoption among high- and low-quality hospitals indicated that high-quality institutions had far greater use of most electronic health record functions.
Objective: To determine whether patterns of electronic health record (EHR) adoption and “meaningful use” vary between high-, intermediate-, and low-quality US hospitals.
Methods: We used data from the Hospital Quality Alliance program to designate hospitals as high quality (performance in the top decile nationally), low quality (bottom decile), and intermediate quality (all others). We examined EHR adoption and meaningful use using national survey data.
We used logistic regression models to determine the frequency with which hospitals in each group adopted individual EHR functions and met meaningful use criteria, and factor analyses to examine adoption patterns in high- and low-quality hospitals.
Results: High-quality hospitals were more likely to have all clinical decision support functions. High-quality hospitals were also more likely to have computerized physician order entry for medications compared with intermediate- and low-quality hospitals. Among hospitals that had not yet implemented components of clinical decision support, two-thirds of low-quality hospitals reported no concrete plans for adoption. Finally, high-quality hospitals were more likely to meet many of the meaningful use criteria such as reporting quality measures, implementing at least 1 clinical decision support rule, and exchanging key clinical data.
Conclusions: We found higher rates of adoption of key EHR functions among high-quality hospitals, suggesting that high quality and EHR adoption may be linked. Most low-quality hospitals without EHR functions reported no plans to implement them, pointing to challenges faced by policy makers in achieving widespread EHR adoption while simultaneously improving quality of care.
(Am J Manag Care. 2011;17(4):e121-e147)
For hospitals seeking to improve care, focusing on specific electronic health record (EHR) functions, particularly order entry with clinical decision support, is likely a key part of achieving high-quality performance.
The United States has embarked on an ambitious effort to promote the adoption and meaningful use of electronic health records (EHRs) and the key functionalities that underlie these systems.1,2 The motivation for this effort is simple: the current system of paper-based records exacerbates deficiencies in information and can lead to piecemeal, poor-quality care. Electronic health records, when properly designed and implemented, can provide more complete, timely, and sophisticated clinical information and support to clinicians, and therefore improve the quality of care delivered to patients.3-6 There has been broad, bipartisan interest in EHRs, initially with the Bush administration and now in the Obama administration. Most recently, the American Recovery and Reinvestment Act allocated nearly $30 billion in direct incentives designed to encourage physicians and hospitals to adopt and use these systems through “meaningful use.”7
Since the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act, several studies have called into question the relationship between EHR use and quality of care.8,9 These data have fueled criticisms of current efforts to promote EHR adoption; skeptics point to these studies to argue that there is inadequate evidence to support widespread EHR use. However, studies demonstrating only modest overall effects of EHRs on quality of care may miss important differences in EHR use between the best and worst hospitals. If the underlying goal is to improve quality, examining how high-quality hospitals in the United States use EHRs and determining whether this is substantively different from how poor-quality hospitals use EHRs could provide important insights for clinicians and policy makers seeking to move providers toward the provision of higher quality care. Further, understanding which specific EHR functionalities are in use among the high-quality hospitals could provide guidance in terms of how low- or intermediate-quality hospitals might focus their EHR efforts going forward.
We used national data on patterns of EHR adoption to address 5 key questions. First, were there differences in the adoption of specific EHR functionalities (eg, medication lists, computerized prescribing, clinical decision support) between high- and low-quality hospitals? Second, if these differences exist, which functionalities displayed the largest disparities in adoption when comparing high- and low-quality hospitals? Third, did the highest quality hospitals seem to have different patterns of adoption than the lowest quality hospitals (ie, did the cluster of functions adopted vary between the high- and low-quality institutions)? Fourth, among those hospitals that have not yet adopted individual functionalities, were there important differences between high- and low-quality hospitals in their current plans to implement them? And fifth, were there differences in adoption of the specific functions that comprise the newly established meaningful use criteria10 for EHR adoption?
Measures of Electronic Health Record Functions
We used 2 primary data sources for this analysis: the 2009 American Hospital Association (AHA) hospital information technology (IT) survey of US acute care hospitals and the 2006 Hospital Quality Alliance database. The AHA IT survey was distributed as a supplement to the AHA’s annual survey in 2009. This survey has served as a data source for many analyses, and the details of its development and distribution are described in prior publications.10 The survey was administered to all 4493 acute care hospitals in the AHA (an estimated 97% of all hospitals in the United States) from March through September 2009. A total of 3101 surveys were completed, for a 69% response rate. The survey assessed the level of adoption of specific EHR functionalities. Respondents were asked to report a score of 1 through 6 to assess the degree of adoption for each functionality, ranging from full adoption of the function across all units to a declaration that the functionality was not in place and that there were no plans or considerations to implement it. We focused on the 24 electronic functions that a federally sanctioned expert panel identified as part of a comprehensive EHR.10
Measures of Quality
We used data from the Hospital Quality Alliance, which contains information on process measures for patients cared for during calendar year 2006. We created summary scores for performance on care for acute myocardial infarction, congestive heart failure, pneumonia, and prevention of surgical complications.11 The specific indicators are summarized in . We took an average of each hospital’s summary score within each of the 4 clinical areas and ranked all the hospitals in order of performance. We excluded hospitals with fewer than 30 observations for any of the 4 clinical conditions of interest, as well as hospitals located outside of the 50 states and the District of Columbia.
We began by categorizing the hospitals in our sample into quality deciles based on their overall quality score and created 3 groups for our main analysis: hospitals in the top 10% of performance were designated as high quality, those in the bottom 10% were designated as low quality, and all other hospitals (those in deciles 2 through 9) were designated as intermediate quality. In sensitivity analyses, we examined other cut-points for designating hospitals as high versus low quality, including the top and bottom 20% as well as top and bottom 30%. We calculated the proportion of hospitals within each cohort (high quality, intermediate quality, and low quality) that had adopted each EHR functionality in at least 1 hospital unit. We used χ2 tests to compare the proportions of hospitals that had adopted each function across the 3 groups. To account for potential confounding, we built multivariate logistic regression models, adjusting for hospital size, region, ownership (for profit, nonprofit, or public), teaching status, membership in a hospital system, urban versus nonurban location, presence of a cardiac intensive care unit (an indicator of technological capacity), and the percentage of each hospital’s patients who were covered by Medicaid (an indicator of the socioeconomic status of patients treated in each hospital). For each specific functionality, hospitals with missing data were excluded from that calculation. We only included the presence of several key decision support tools related to medication alerts if the hospital also had computerized provider order entry (CPOE) for medications. This was done to reflect true decision support at the point of care by healthcare providers, which would require the presence of electronic order entry. We reran our analyses without the requirement for CPOE and our results were qualitatively similar. Thus, we only present the findings of those decision support tools in the presence of CPOE.
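The decile-based grouping described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' actual code, and the column names (quality_score, quality_group) are hypothetical.

```python
import pandas as pd

def assign_quality_groups(df: pd.DataFrame) -> pd.DataFrame:
    """Rank hospitals by overall quality score and label the top decile
    as high quality, the bottom decile as low quality, and the rest
    (deciles 2 through 9) as intermediate quality."""
    # pd.qcut splits the scores into 10 equal-sized bins (0 = lowest, 9 = highest)
    deciles = pd.qcut(df["quality_score"], q=10, labels=False)
    df = df.copy()
    df["quality_group"] = "intermediate"
    df.loc[deciles == 9, "quality_group"] = "high"
    df.loc[deciles == 0, "quality_group"] = "low"
    return df
```

The sensitivity analyses would simply widen the cut-points, for example labeling deciles 7 through 9 as high and 0 through 2 as low for the 30% cutoff.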
Next, we used factor analysis to determine the covariance of adoption of functionalities within each of the quality cohorts. We simply describe the patterns of clustering of functions across the 3 quality cohorts.
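The factor-extraction step can be sketched as an eigendecomposition of the correlation matrix of binary adoption indicators (rows = hospitals, columns = EHR functions). A full factor analysis would also estimate loadings and apply rotation; this minimal sketch only recovers the leading eigenvalues used to judge how many factors are meaningful, and the variable names are illustrative.

```python
import numpy as np

def leading_eigenvalues(adoption: np.ndarray, k: int = 2) -> np.ndarray:
    """Return the k largest eigenvalues of the correlation matrix of an
    adoption-indicator matrix (rows = hospitals, columns = EHR functions).
    Large leading eigenvalues indicate functions whose adoption clusters."""
    corr = np.corrcoef(adoption, rowvar=False)  # function-by-function correlations
    eigvals = np.linalg.eigvalsh(corr)          # returned in ascending order
    return eigvals[::-1][:k]                    # largest first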
Using the same groups but limiting our analysis this time to those hospitals that had yet to implement each EHR functionality, we calculated the proportion of hospitals that reported no concrete plans for implementation (defined as the proportion reporting either that they had considered but had no resources identified for EHR implementation or that they had no plans to implement EHRs). We compared the frequency of these responses across the 3 groups initially using χ2 tests and subsequently using multivariate logistic regression analyses as described above to adjust for potential confounders.
Finally, we examined the proportion of hospitals within each quality cohort that had adopted the specific functions required to meet meaningful use criteria. These included 12 objectives that had clear analogues to the AHA health IT survey (9 of the 14 Core Objectives and 3 of the 10 Menu Objectives; see ). For these analyses, we used χ2 tests to determine whether the proportion of adopters varied across these 3 groups and did not exclude missing data from calculations.
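The χ2 comparisons above test whether the proportion of adopters differs across the 3 quality groups. The statistic for a contingency table of counts can be computed as below; the counts in the usage example are made up for demonstration and are not the study's data.

```python
import numpy as np

def chi2_statistic(table) -> float:
    """Pearson chi-square statistic for an r x c contingency table of counts.
    Rows might be adopters/nonadopters; columns the 3 quality groups."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()          # counts expected under independence
    return float(((table - expected) ** 2 / expected).sum())
```

For example, `chi2_statistic([[70, 60, 40], [30, 40, 60]])` compares hypothetical adopter counts across high-, intermediate-, and low-quality groups of 100 hospitals each; the statistic would then be compared against a χ2 distribution with (rows − 1) × (columns − 1) = 2 degrees of freedom.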
There were slight differences between hospitals that did and did not respond to the health IT survey.10 In the analyses reported, all results were weighted to account for the differences due to nonresponse using a previously described method.10 All analyses were performed using Stata/SE, version 10.1 (StataCorp, College Station, TX). A 2-sided P value less than .05 was considered to be statistically significant.
Of the 1637 hospitals in our sample, 166 were designated as high quality, 1318 as intermediate quality, and 153 as low quality (). There were substantial differences in the characteristics of these hospitals: high-quality hospitals were more often large compared with low-quality hospitals (26% vs 8%, P <.001), and more often nonprofit in ownership (84% vs 49%, P <.001). High-quality hospitals were significantly more likely than low-quality hospitals to be teaching hospitals (44% vs 23%, P <.001), to belong to a hospital system (71% vs 55%, P <.05), to be located in urban areas (86% vs 59%, P <.001), and to have a dedicated coronary intensive care unit (62% vs 28%, P <.001). Finally, the percentage of patients with Medicaid was substantially lower in the high-quality than the low-quality hospital cohort (9% vs 15%, P <.001).
We found substantial differences in the adoption of EHR functions among the 3 groups of hospitals (). High-quality hospitals more often had electronic nursing notes (81% vs 73% and 68%, P = .04) and medication lists (89% vs 79% and 73%, P <.01) than intermediate-quality and low-quality hospitals, respectively. All decision support tools had significantly higher adoption levels in the high-quality cohort. The differences between the high- and low-quality cohorts in adoption of all of these clinical decision support functions ranged from 17 to 20 percentage points, and all were significant (Table 2).
After multivariable adjustment, we found that adoption of 22 of the 24 functions was still higher in high-quality hospitals, although most of the differences were no longer statistically significant (). Functions for which the differences across the 3 quality cohorts were statistically significant included problem lists, medication lists, diagnostic test images, and many of the clinical decision support tools.
In sensitivity analyses, when we examined groupings based on alternative cut-points, we found that most of the results were qualitatively similar. However, expanding the high- and low-quality groups to the 30% cutoff decreased the differences between groups, some of which became nonsignificant (see ).
We performed separate factor analyses in each of the 3 cohorts of hospitals (high quality, intermediate quality, and low quality) and found relatively similar results across all 3 groups. Within each cohort, there were 2 factors with relatively high eigenvalues (greater than 3). For example, among the high-quality cohort, the hospitals differed most in terms of whether they had adopted CPOE and decision support. The second factor clustered together adoption of patient demographics with viewing lab and radiology reports. The patterns were very similar in the intermediate- and low-quality cohorts (see ).
Among those hospitals that had yet to implement specific EHR functions, we found high rates of hospitals reporting that they had no concrete plans to implement many key functionalities (). For clinical documentation, results viewing, and computerized order entry functionalities, low-quality hospitals were generally more likely to report no concrete plans to adopt the functions, although none of the differences were statistically significant. This may have been due, in part, to the fact that the underlying rates of adoption of specific functions were high and the number of nonadopters was relatively low.
The patterns for decision support functions were, however, different. We found that nearly two-thirds of all nonadopters in the low-quality cohort reported no concrete plans to implement these functions, rates that were significantly higher than those reported by high-quality hospitals. For example, low-quality hospitals without clinical guidelines were more likely to report having no concrete plans to implement them than intermediate- or high-quality hospitals (67% vs 55% and 47%, P = .02). After multivariable adjustment, the lowest quality hospitals were still significantly more likely to report no concrete plans to implement 2 of the key decision support tools ().
Finally, when we examined hospitals’ ability to meet the meaningful use criteria, we found that a very small percentage of hospitals across all quality categories adopted the entire set of functions, with modest differences between them: 2.1% of high-quality hospitals could meet all 9 core measures compared with 1.1% of low-quality hospitals, a difference that was not statistically significant. In sensitivity analyses, we found that the results were qualitatively similar for the alternative cut-points (see Appendix 4 and ).
When we examined individual meaningful use criteria, the majority were present significantly more frequently in the high-quality group. Among these functions were the ability to report Hospital Quality Alliance measures to the Centers for Medicare & Medicaid Services (41% vs 30% and 34%, P = .02), implementation of drug-drug and drug allergy checks (25% vs 17% and 13%, P = .02), data exchange capabilities with other facilities (60% vs 54% and 42%, P <.01), and implementation of at least 1 clinical decision support tool (84% vs 72% and 63%, P <.001; ).
We found that high-quality hospitals had higher levels of adoption of nearly all EHR functions, and that the largest differences were in the presence of clinical decision support tools available at the point of care. These high-performing hospitals also had greater availability of clinical documentation tools like patient problem and medication lists. Among nonadopters, a large majority of low-quality hospitals reported no concrete plans to adopt clinical decision support tools. Finally, we found that high-quality hospitals were more likely to be able to meet many of the meaningful use criteria than low-quality hospitals.
While there is a broad base of studies that have shown that EHRs can be effective in improving quality, much of the data come from a small number of pioneering facilities using home-grown EHRs.5,12,13 The failure of other studies to show a relationship between the average EHR user and quality of care benefits has led some critics to call the push for EHRs premature. Our findings suggest otherwise. We found a distinct pattern of high-quality hospitals consistently using EHRs at much higher rates than low-quality hospitals. These findings indicate that EHRs are likely a key, necessary component of high-quality healthcare, although they alone may not transform the way care is delivered.
Our factor analysis offers 2 important insights. First, clinical decision support tools cluster together, and they do so in conjunction with CPOE; this is clinically intuitive and driven partly by the requirement that CPOE be present for clinical decision support to be optimally effective. Second, among the highest quality hospitals, functionalities tied to viewing of clinical results more often appear together with clinical documentation functions, a pattern that was not evident in other hospitals. Whether this clustering of functions is directly related to better quality performance, or is just a marker for more advanced EHR systems, is unclear and needs further investigation.
Our findings also point to the challenges ahead. Among institutions that had not yet implemented the individual EHR functions, more than half of the poor-quality hospitals reported having no concrete plans for implementing CPOE for medications or several of the key clinical decision support tools. If the goal of federal policy makers is to drive improvements in care, especially among the poor performers, getting these hospitals to engage in the quality improvement process and seriously consider EHR adoption and use will be critically important. Our findings also suggest that many of the functions emphasized by the new meaningful use rules are already being used by high-quality institutions, providing further validation for the meaningful use efforts as a potential way to improve quality. However, we found that only a very small percentage of all hospitals have been able to adopt all functions. Whether the billions of dollars in incentives from HITECH will be enough to achieve widespread adoption is unclear, but ensuring that all hospitals, particularly the low-quality ones, focus on implementing robust decision support is critically important. Our finding that high-quality hospitals are more likely to be able to meet many of the meaningful use criteria has financial implications: if HITECH does not spur poor-quality hospitals to adopt EHR systems, they may fall further behind, widening the quality gulf between the best and worst hospitals.
Others also have investigated the relationship between EHR functions and quality, though none have looked for specific differences in adoption patterns between high-quality hospitals and low-quality hospitals. Using similar (albeit older) data, DesRoches et al found that neither “basic” nor “comprehensive” adoption of EHR systems produced substantial gains in quality.8 However, this study examined the average scores among those with and without EHRs and did not examine whether EHR adoption patterns differed between the high- and low-quality hospitals. Himmelstein and colleagues used a data set from the Healthcare Information and Management Systems Society Analytics program and also found modest improvements in quality for those hospitals which had adopted more comprehensive computing systems compared with those with less comprehensive systems.9
There are important limitations to this study. First, although the health IT supplement to the AHA survey achieved a 69% response rate, nonresponders were likely different from responders. Although we attempted to statistically correct for potential nonresponse bias, these techniques are imperfect. Next, while we examined the adoption of specific functionalities, we had no information as to how these functionalities were used within responding institutions. That could have obscured potentially important relationships between certain functionalities and quality, and we suspect that the gaps we observed between the best and worst hospitals would be even more sizable had we been able to measure effective use of these functions. Furthermore, hospitals were not asked directly about meaningful use. However, our responses were mapped to analogous survey questions and our approach was generally conservative. Finally, the most important limitation of our study is the cross-sectional nature of our analysis, reducing our ability to claim a causal relationship between hospital quality and adoption of specific EHR functionalities. We did attempt to adjust for baseline differences between the quality cohorts, but as always, there could be differences in other relevant characteristics that were not measured.
In conclusion, we examined patterns of adoption of key EHR functions among the highest and lowest quality hospitals in the United States and found that high-quality institutions had far greater use of most EHR functions, especially clinical decision support. These high performers were also more likely to meet many criteria for meaningful use. Although we could not establish that this relationship was causal, our findings suggest that for hospitals seeking to emulate the care provided by high-performing institutions, focusing on CPOE with clinical decision support is likely a key part of achieving high performance on standard quality measures. Widespread resistance to adoption, especially among low-quality hospitals, points to the challenges ahead for federal policy makers as they seek to ensure that all Americans receive high-quality hospital care, irrespective of where they are treated.
Author Affiliations: From Harvard Medical School (SME), Boston, MA; Department of Health Policy and Management, Harvard School of Public Health (KEJ, SJB, AKJ), Boston, MA; Division of Cardiovascular Medicine (KEJ), Brigham and Women’s Hospital, Boston, MA; General Internal Medicine (AKJ), Brigham and Women’s Hospital, Boston, MA; VA Boston Healthcare System (AKJ), Boston, MA.
Funding Source: Dr Joynt was supported by NIH Training Grant T32HL007604-24, Brigham and Women’s Hospital, Division of Cardiovascular Medicine.
Author Disclosures: Dr Jha reports serving as a consultant for Humedica. The other authors (SME, KEJ, SJB) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (SME, AKJ); acquisition of data (SME, SJB, AKJ); analysis and interpretation of data (SME, KEJ, AKJ); drafting of the manuscript (SME, KEJ, SJB, AKJ); critical revision of the manuscript for important intellectual content (SME, KEJ, SJB, AKJ); statistical analysis (SME); obtaining funding (AKJ); administrative, technical, or logistic support (SJB, AKJ); and supervision (AKJ).
Address correspondence to: Ashish K. Jha, MD, MPH, Harvard School of Public Health, 677 Huntington Ave, Boston, MA 02115. E-mail: email@example.com.
1. Department of Health and Human Services. Medicare and Medicaid Programs; Electronic Health Record Incentive Program. Federal Register. July 28, 2010. http://www.federalregister.gov/articles/2010/07/28/2010-17207/medicare-and-medicaid-programs-electronic-health-recordincentive-program.
2. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med. 2010;363(6):501-504.
3. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-752.
4. Goldzweig CL, Towfigh A, Maglione M, Shekelle PG. Costs and benefits of health information technology: new trends from the literature. Health Aff (Millwood). 2009;28(2):w282-w293.
5. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003;163(12):1409-1416.
6. Walker J, Pan E, Johnston D, Adler-Milstein J, Bates DW, Middleton B. The value of health care information exchange and interoperability. Health Aff (Millwood). 2005;Suppl Web Exclusives:W5-10-W5-18.
7. 111th Congress. American Recovery and Reinvestment Act. PL 111-5. 123 Stat 115 (2009).
8. DesRoches CM, Campbell EG, Vogeli C, et al. Electronic health records’ limited successes suggest more targeted uses. Health Aff (Millwood). 2010;29(4):639-646.
9. Himmelstein DU, Wright A, Woolhandler S. Hospital computing and the costs and quality of care: a national study. Am J Med. 2010;123(1): 40-46.
10. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16): 1628-1638.
11. Jha AK, Orav EJ, Dobson A, Book RA, Epstein AM. Measuring efficiency: the association of hospital costs and quality of care. Health Aff (Millwood). 2009;28(3):897-906.
12. Bates DW, Kuperman GJ, Rittenberg E, et al. A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med. 1999;106(2):144-150.
13. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280(15):1311-1316.