A national study of electronic health record (EHR) adoption and hospital quality finds that existing measures may be inappropriate for assessing the effect of EHR adoption on quality.
Objective: To estimate the relationship between quality improvement and electronic health record (EHR) adoption in US hospitals.
Study Design: National cohort study based on primary survey data about hospital EHR capability collected in 2003 and 2006 and on publicly reported hospital quality data for 2004 and 2007.
Methods: Difference-in-differences regression analysis to assess the relationship between EHR adoption and quality improvement for acute myocardial infarction, heart failure, and pneumonia care.
Results: Availability of a basic EHR was associated with significantly greater quality improvement for heart failure (additional improvement, 2.6%; 95% confidence interval [CI], 1.0%-4.1%). However, adoption of advanced EHR capabilities was associated with significantly smaller quality improvement for acute myocardial infarction and heart failure. Hospitals that newly adopted an advanced EHR showed 0.9% (95% CI, -1.7% to -0.1%) less improvement in acute myocardial infarction quality scores and 3.0% (95% CI, -5.2% to -0.8%) less improvement in heart failure quality scores; hospitals that upgraded their basic EHR showed 1.2% (95% CI, -2.0% to -0.3%) less improvement in acute myocardial infarction quality scores and 2.8% (95% CI, -5.4% to -0.3%) less improvement in heart failure quality scores.
Conclusions: Mixed results suggest that current practices for implementation and use of EHRs have had a limited effect on quality improvement in US hospitals. However, potential "ceiling effects" limit the ability of existing measures to assess the effect that EHRs have had on hospital quality. In addition to the development of standard criteria for EHR functionality and use, standard measures of the effect of EHRs on quality are needed.
(Am J Manag Care. 2010;16(12 Spec No.):SP64-SP71)
Consistent availability of an electronic health record (EHR) over the study period was associated with a significant increase in quality improvement for heart failure; however, adoption of advanced EHR capabilities was associated with significant decreases in the improvement of acute myocardial infarction and heart failure quality scores.
There is general consensus that widespread adoption of health information technology (IT), in particular electronic health records (EHRs), will result in increased efficiency and improved patient care.1-6 This belief is reflected in the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009, which includes funds to stimulate adoption of EHRs.7 The Congressional Budget Office estimates net payments of approximately $30 billion for Medicare and Medicaid incentives over the life of the program.8 A sizable portion of these payments will be made to hospitals that are able to demonstrate “meaningful use” of a “certified” EHR. Using EHRs to improve quality is an example of meaningful use.9 Smaller (eg, 75 bed) hospitals can receive up to $3.5 million in incentive payments, whereas larger (eg, 500 bed) hospitals could receive up to $6.1 million over the life of the program.10,11 The Congressional Budget Office projects that these incentives will induce an additional 25% of US hospitals to adopt an EHR that would not otherwise have done so.8
An expected benefit of EHR adoption is improved quality of care.7 However, much of the current knowledge about the relationship between health IT and hospital quality comes from a few hospitals that may not be representative of the broader set of hospitals being targeted by the HITECH incentives.3,12 The handful of studies13-21 that have examined the relationship between hospital quality and EHR use in larger samples of hospitals have to a varying degree reported positive associations between EHR use and hospital quality. With adoption of hospital IT, some studies (eg, those by Amarasingham et al13 and by Menachemi et al14) report substantial decreases in hospital mortality rates and complications, while more recent studies (eg, those by DesRoches et al15 and by McCullough et al16) report only modest improvements in hospital quality associated with the availability of an EHR. Aside from analyses by McCullough et al16 and by Parente and McCullough,17 studies have been limited to cross-sectional data. In addition to their limited capacity to control for confounding factors, cross-sectional studies may not fully address the important question facing policy makers: Will installing a new EHR (or increasing the functionality of an existing EHR) lead to increased improvements in quality over time? To examine this question, we evaluated longitudinal data on EHR adoption and hospital quality from a large sample of US hospitals.
We used primary survey data from the Health Information and Management Systems Society (HIMSS) to measure hospital EHR adoption. The HIMSS Analytics Database includes approximately 90% of hospitals in the United States.22 The database contains information on the implementation status of a wide range of clinical IT applications and has been frequently used for research purposes.16-19,22,23 To resolve previously observed inconsistencies among the HIMSS data,24 we dropped hospitals that failed to disclose their software vendor or reported a vendor that was inconsistent with the clinical IT application reported. Hospitals with self-developed systems were included in the analysis. An eAppendix (available at www.ajmc.com) lists the software vendor and clinical application combinations included in the analysis.
Data on hospital characteristics were obtained from the American Hospital Association Annual Survey Database.25 This database includes 800 variables on more than 3900 US general acute care hospitals.
Data on hospital quality were obtained from the Hospital Compare26 database for 2004 and 2007. The Hospital Compare database includes process-of-care measures that indicate how often hospitals provide elements of clinical care that are well-established interventions for 3 common clinical conditions (acute myocardial infarction [AMI], heart failure, and pneumonia) for more than 4200 hospitals. These measures are calculated based on all hospital patients, not just Medicare beneficiaries.
We selected all nonfederal general acute care hospitals located in the United States from the American Hospital Association Annual Survey Database (3971 hospitals). We linked these eligible hospitals to the HIMSS Analytics Database using Medicare provider numbers, restricting the analytic file to 2086 hospitals that reported their EHR capability in 2003 and 2006. We linked the combined data set to the Hospital Compare database using Medicare provider number and year. Because the HIMSS database captures the most up-to-date information on a given hospital’s EHR implementation status at year end, we linked quality measures from the ensuing year (eg, 2006 data from the HIMSS database were linked to 2007 data from the Hospital Compare database). This linkage process resulted in a final database that contained 2021 hospitals with observations in both years.
Measures of Hospital Quality
We selected 17 measures of hospital process quality across 3 clinical conditions that were common in both years of the Hospital Compare database. Eight of these measures were for processes related to the treatment of AMI, 4 for processes related to the treatment of heart failure, and 5 for processes related to the treatment of pneumonia. These process-of-care measures were chosen because they apply to conditions that are common causes of hospitalization,27 because they are generally regarded as being valid indicators of quality,28 and because EHRs are more likely to facilitate adherence to recommended processes of care than to affect patient outcomes (eg, in-hospital mortality).3,19 The dependent variables for this analysis were 3 composite measures of hospital process quality for AMI, heart failure, and pneumonia. The composite measures were constructed using the approach prescribed by the Joint Commission29 of grouping the 17 individual process measures by condition (AMI, heart failure, or pneumonia), summing the numerators of the individual measures (ie, recommended care delivered), and then dividing by the sum of the denominators (ie, total eligible population). This produced an overall quality performance rate for each clinical condition. Only hospital-year composite measure combinations with at least 30 denominator observations were included in the analysis.
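The composite construction described above reduces to pooling numerators and denominators within each condition. The following sketch illustrates the calculation; the variable names are ours, and the study's actual analysis was performed in R, so this Python fragment is illustrative only:

```python
def composite_rate(measures, min_denominator=30):
    """Compute a condition-level composite quality rate from individual
    process measures, following the sum-of-numerators over
    sum-of-denominators approach. Each measure is a (numerator,
    denominator) pair: recommended care delivered over the total
    eligible population. Returns None when the pooled denominator
    falls below the minimum (the study excluded hospital-year
    composites with fewer than 30 denominator observations)."""
    num = sum(n for n, _ in measures)
    den = sum(d for _, d in measures)
    if den < min_denominator:
        return None
    return num / den

# Hypothetical heart failure measures for one hospital-year:
hf_measures = [(45, 50), (30, 40), (18, 20), (9, 10)]
rate = composite_rate(hf_measures)  # 102/120 = 0.85
```

Note that pooling denominators before dividing weights each individual measure by its eligible population, rather than averaging the per-measure rates equally.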
Measures of EHR Capability
There is no standard measurement of EHR capability.30 However, the current state of the art is to delineate between EHR systems that offer more advanced functionalities and those that do not.31 Jha et al5 and DesRoches et al15,31 have advocated a 3-tiered framework (no EHR, basic EHR, and comprehensive EHR) for classifying EHR capability. Their framework is based on the presence or absence of 24 EHR functionalities. McCullough et al16 observed that, while the classification framework by Jha and DesRoches and their colleagues establishes an important standard for leading EHR adopters, the restrictive inclusion criteria for full adoption do not facilitate the analysis of the typical EHR adopter.
We sought to adapt the 3-tiered framework proposed by Jha et al5 and by DesRoches et al15,31 and to create a less restrictive classification that would allow for the analysis of typical EHR adopters. To determine the level of EHR capability for the hospitals in our study, we evaluated the self-reported implementation status of the following 4 clinical IT applications: clinical data repository, electronic patient record, clinical decision support systems, and computerized provider order entry (see eAppendix B for definitions of clinical IT applications). Hospitals that did not report having the full complement of technology necessary to constitute a basic EHR were included in the first tier (no EHR). Hospitals that reported having an operational electronic patient record, clinical data repository, and clinical decision support systems were included in the second tier (basic EHR). Hospitals that reported having EHRs with all the functionality of the second tier plus an operation- al computerized provider order entry system were classified in the third tier (advanced EHR). Computerized provider order entry was chosen as the marker of an advanced EHR because its adoption is low compared with other clinical IT applications, indicating that it is often implemented after other elements of the clinical information system are already in place. Furthermore, well-documented functional enhancements accompany its implementation in conjunction with other clinical IT applications. In particular, computerized provider order entry is regarded as a more effective means of delivering clinical decision support because it facilitates decision support at the point of care.32-38
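The tier assignment above is a simple rule over the four reported applications. A minimal sketch of that rule (the application labels CDR, EPR, CDSS, and CPOE are our shorthand for the four systems named in the text, not identifiers from the HIMSS database):

```python
def ehr_tier(operational_apps):
    """Classify a hospital's EHR capability into the study's 3 tiers
    from the set of clinical IT applications it reports as operational:
    clinical data repository (CDR), electronic patient record (EPR),
    clinical decision support systems (CDSS), and computerized
    provider order entry (CPOE)."""
    basic = {"CDR", "EPR", "CDSS"}
    if not basic <= operational_apps:
        return "no EHR"        # missing part of the basic complement
    if "CPOE" in operational_apps:
        return "advanced EHR"  # basic functionality plus CPOE
    return "basic EHR"

tier = ehr_tier({"CDR", "EPR", "CDSS"})  # "basic EHR"
```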
The goal of the analysis was to assess whether acquisition or upgrade of an EHR was associated with increased improvement in hospital quality over time, controlling for baseline characteristics that might influence changes in quality. We first assessed whether baseline EHR capability varied by hospital characteristics using the Fisher exact test. To adjust for baseline differences between hospitals with different EHR capabilities, we estimated a propensity score for baseline EHR capability using an ordinal logistic regression analysis, regressing baseline EHR capability on the hospital covariates listed in Table 2. Hospitals were assigned an indicator variable (range, 1-5) based on the quintiles of the propensity score distribution. Covariate balance between the 5 levels of the propensity score indicator was assessed via ordinal logistic regression analysis.39
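The quintile subclassification step amounts to cutting the propensity score distribution at its 20th, 40th, 60th, and 80th percentiles and assigning each hospital an indicator from 1 to 5. A sketch of that step (the fitting of the propensity score itself, via ordinal logistic regression, is omitted; this is an illustration, not the study's R code):

```python
def quintile_indicator(scores):
    """Assign each observation an indicator 1-5 by quintile of the
    estimated propensity score distribution (the subclassification
    step used to balance baseline hospital characteristics)."""
    ranked = sorted(scores)
    n = len(ranked)
    # Cut points at the 20th, 40th, 60th, and 80th percentiles.
    cuts = [ranked[max(int(n * q) - 1, 0)] for q in (0.2, 0.4, 0.6, 0.8)]
    # Indicator = 1 + number of cut points the score exceeds.
    return [1 + sum(s > c for c in cuts) for s in scores]

bins = quintile_indicator([0.1, 0.3, 0.5, 0.7, 0.9])  # [1, 2, 3, 4, 5]
```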
We used a difference-in-differences analytic approach to estimate the relationship between EHR transitions and improvement in each of 3 composite measures of hospital quality. To evaluate the association between quality improvement over time and the availability of an EHR, we compared hospitals that maintained a basic or an advanced EHR with hospitals that reported having no EHR in 2003 and 2006. To evaluate the association between quality improvement over time and new adoption (and upgrades) of an EHR, we stratified hospitals by their baseline EHR capability in 2003 and then compared hospitals that newly adopted or upgraded an EHR with hospitals that did not change their EHR capability. This approach allowed us to specifically examine EHR transitions that the HITECH legislation is designed to induce.
Unadjusted difference-in-differences estimates were calculated using standard methods.40 Adjusted differences in differences were calculated via linear mixed-effects regression analysis. We estimated separate regression models for each composite measure. Each regression model included a random-effects term to adjust for clustering within individual hospitals and fixed effects for the hospital’s EHR transition (no transition, new adoption, or EHR upgrade), baseline propensity score (range, 1-5), period (baseline vs follow-up), and characteristics listed in Table 2. Each regression model also included an interaction term between time and the EHR transition variable, which estimated the association between the composite measure and a given EHR transition relative to no EHR transition. The unadjusted and adjusted estimates can be interpreted as the additional quality gains associated with adoption (or upgrade) of an EHR compared with a referent set of hospitals that did not adopt or upgrade their EHR. For the most part, interpretation of the adjusted and unadjusted results was consistent. However, adjusted estimates were generally more precise; therefore, we report only the adjusted estimates. All statistical analyses were performed using R version 2.10.0 (R Foundation for Statistical Computing, Vienna, Austria).41 The study qualified for exemption by the local institutional review board.
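The unadjusted difference-in-differences estimand is simple arithmetic: the pre-post change among hospitals that made an EHR transition minus the pre-post change among the referent hospitals that did not. A sketch, with hypothetical scores (not the study's data):

```python
def unadjusted_did(treat_pre, treat_post, control_pre, control_post):
    """Unadjusted difference-in-differences: the change in a composite
    quality score among transition hospitals minus the change among
    referent hospitals. Positive values indicate additional improvement
    associated with the EHR transition."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical composite quality scores (%):
# adopters improved 82 -> 86; non-adopters improved 80 -> 85.
extra = unadjusted_did(82.0, 86.0, 80.0, 85.0)  # 4.0 - 5.0 = -1.0
```

A negative estimate, as in this example, corresponds to the "less improvement" results reported for advanced EHR adoption; the adjusted versions add the propensity score, hospital covariates, and random effects around this same contrast.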
As listed in column 1 of Table 2, the hospitals in our sample varied in size; the largest hospital group (31.0%) had 100 to 199 beds. Most hospitals in our sample were nonteaching (71.1%), not for profit (68.4%), or affiliated with a health system (62.1%). More than half (53.8%) of hospitals were located in metropolitan areas, and 41.9% had a dedicated coronary care unit. Consistent with other evidence,5 we found that baseline levels of EHR capability were associated with hospital size, teaching status, healthcare system affiliation, urbanization, and the presence of a dedicated coronary care unit. Compared with the full American Hospital Association Annual Survey Database sample of general acute care hospitals, the hospitals in our sample were generally larger and nongovernment owned; they were also more likely to be affiliated with a healthcare system, be located in an urban area, have a dedicated coronary care unit, and be a teaching facility.
The percentage of hospitals in our sample with a basic or an advanced EHR increased from 24.0% in 2003 to 37.7% in 2006. We also found a sizable increase in the percentage of hospitals with an advanced EHR, from 2.0% in 2003 to 12.2% in 2006. Among hospitals that did not experience a transition in EHR capability during the study period, hospitals that maintained a basic EHR realized significantly greater improvement in their heart failure quality scores relative to hospitals with no EHR (increased improvement, 2.6%). Hospitals that maintained a basic EHR experienced increases in AMI and pneumonia quality scores similar to those of hospitals that did not, and quality scores in hospitals with an advanced EHR did not improve significantly more or less than quality scores in hospitals without an EHR.
Among hospitals that changed their EHR capability, quality scores did not improve significantly more or less in hospitals that adopted a basic EHR than in hospitals that did not adopt an EHR. However, in hospitals that newly adopted an advanced EHR, AMI and heart failure quality scores improved significantly less than in hospitals that did not adopt an EHR (−0.9% for AMI and −3.0% for heart failure). Acute myocardial infarction and heart failure quality scores also improved significantly less in hospitals that upgraded their basic EHR than in hospitals that maintained their basic EHR (−1.2% for AMI and −2.8% for heart failure). We found no significant relationship between new EHR adoption or upgrade and quality improvement for pneumonia. Full regression tables for the propensity score model and for the mixed-effects regression models are available in the eAppendices (www.ajmc.com).
During the study period, the quality of care for AMI, heart failure, and pneumonia was broadly improving. Heart failure quality scores improved significantly more among hospitals that maintained a basic EHR than among hospitals with no EHR. We did not observe a similar effect on AMI or pneumonia quality scores, nor did we find that adopting or upgrading an EHR accelerated quality improvement. Instead, our results indicate that new adoption or upgrade to an advanced EHR was associated with smaller gains in AMI and heart failure quality scores.
Our findings overall were mixed; on the one hand, the increased improvement in heart failure quality scores over time associated with maintenance of a basic EHR is encouraging. On the other hand, the smaller quality gains associated with new adoption or upgrade to an advanced EHR are somewhat counterintuitive. Although unexpected, our results are consistent with findings by Greenhalgh et al,42 who report that less complex EHRs may have greater positive effects than more sophisticated ones. This phenomenon may be attributable to the complex nature of healthcare work environments. Hospitals have been described as “the most complex human organization[s] ever devised,”43 and the introduction of increasingly complex technology into already complex work environments may trigger various unintended interactions that undermine or outweigh the potential benefits of the new technology.24,44
Our study has some limitations. First, it is possible that EHR adoption might have different effects on quality improvement for conditions other than the 3 we studied. Second, our approach to measuring EHR capability did not account for the extent or adequacy of EHR implementation within a given hospital or the frequency and manner in which the EHR was used, nor were we able to account for the substantial variation in functionality that exists between different EHRs. Moving forward, metrics of meaningful use of an EHR should make it possible to better assess and identify which elements of EHR use have the greatest effect on clinical quality. Third, although it was our intent to analyze the effect of EHR adoption at typical hospitals, the subset of hospitals for which we had data may not be entirely representative of all US hospitals. It is also possible that uncaptured baseline differences between hospitals that already had or subsequently adopted an EHR could bias our results.
Fourth, the potential for “ceiling effects” may limit the interpretation of our results. The composite measures for AMI and pneumonia may be particularly affected by this phenomenon. Our results suggest that improving quality scores beyond 91.0% to 92.0% for pneumonia and 93.0% to 94.0% for AMI may be considerably more challenging than improving quality below those levels. These ceiling effects may explain why we observed a significant decrease in the rate of quality improvement for AMI and no significant change in the rate of quality improvement for pneumonia among hospitals that adopted new EHR capabilities. The heart failure composite quality scores seem less likely to be subject to ceiling effects, as the mean 2007 scores are generally lower (83.0% to 87.0%) than the scores for the other clinical conditions. In fact, Table 5 illustrates that hospitals newly adopting an advanced EHR had lower heart failure quality scores than hospitals not adopting an EHR in 2007, despite having slightly higher baseline quality scores in 2004. This result suggests that new adoption of an advanced EHR indeed slowed quality improvement in these hospitals.
Fifth, it is possible that, in hospitals that were adopting or upgrading an EHR, resources that would have otherwise been devoted to quality improvement efforts were diverted toward EHR implementation efforts. The literature suggests that both tasks (quality improvement and EHR implementation) are resource-intensive, and it is feasible that both processes might suffer if conducted simultaneously and forced to compete for resources.45,46
Sixth, our study may not have been long enough to fully estimate the relationship between EHR adoption and quality improvement. Institutions with “homegrown” EHRs that have been developed and refined over decades typically report that their EHRs have significantly improved clinicians’ adherence to recommended practices.3,12,47-49 In fact, our analyses are somewhat illustrative of this phenomenon, as we observed that hospitals that had a basic EHR in place at the outset of the study realized significantly higher gains in heart failure quality scores.
Combined with recent findings by DesRoches et al15 and by McCullough et al,16 our results should temper expectations for the pace and magnitude of the effects of the HITECH legislation. The challenges and unintended consequences of EHR adoption are well documented.24,44 HITECH provides the Office of the National Coordinator for Health Information Technology substantial resources to address some of these challenges.7 The office has initiated several programs designed to increase the likelihood that the transformative vision of the EHR will finally be realized. Key policies and programs include EHR certification, development of meaningful use criteria, a regional extension program, state health information exchanges, funding of university and community college programs to bolster the health IT workforce, and research support to improve the safety, security, and usefulness of the next generation of EHRs.50
We believe that these programs are well conceived and anticipate that they will lead to more effective use of EHRs, which will in turn lead to improved quality in US hospitals. However, we are concerned that the standard methods for measuring hospital quality will not be appropriate for measuring the clinical effects of EHR adoption. The generally high levels of performance on the Hospital Compare database measures are to be celebrated, but going forward, these high levels of performance will make it difficult to detect the effect of EHR adoption on hospital quality. Although the Office of the National Coordinator for Health Information Technology has made progress in defining standard criteria for the functionality and use of an EHR, it has yet to propose a set of standard measures by which the effects of EHR use can be measured. Kern and colleagues51 developed a set of 32 metrics designed to evaluate the effect of the EHR on ambulatory clinical quality. On their face, the metrics by Kern et al seem valid; however, there has been no attempt to further validate these metrics or to develop similar metrics for hospitals, to our knowledge. The initial focus of the Office of the National Coordinator for Health Information Technology on developing standard criteria for the functionality and use of EHRs is justifiable, but valid measures of the effects of EHR adoption on hospital quality will be necessary to evaluate the return on the federal government’s investment in EHRs.
We acknowledge helpful comments on an early version of the manuscript by Jeffrey Wasserman, PhD; Paul S. Heaton, PhD; Paul G. Shekelle, MD, PhD; and James R. Broyles, PhD. We also acknowledge the assistance of Paul S. Heaton, PhD, in obtaining the data for this analysis.
Author Affiliations: From RAND Corporation (SSJ, JLA, JSR, EAM), Santa Monica, CA; RAND Corporation (ECS), Boston, MA; Division of General Medicine and Primary Care (ECS), Brigham and Women's Hospital, Boston, MA; and Department of Health Policy and Management (ECS), Harvard Medical School, Boston, MA.
Funding Source: This study was funded by philanthropic contributions from members of the RAND Health Board of Advisors to support analyses of health reform policy options.
Author Disclosures: The authors (SSJ, JLA, ECS, JSR, EAM) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (SSJ, JLA, JSR, EAM); acquisition of data (SSJ); analysis and interpretation of data (SSJ, JLA, ECS, JSR, EAM); drafting of the manuscript (SSJ, ECS, EAM); critical revision of the manuscript for important intellectual content (SSJ, ECS, EAM); statistical analysis (SSJ, JLA); obtaining funding (EAM); and supervision (JSR, EAM).
Address correspondence to: Spencer S. Jones, PhD, RAND Corporation, 1776 Main St, PO Box 2138, Santa Monica, CA 90407. E-mail: firstname.lastname@example.org.
1. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Kohn LT, Corrigan J, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
3. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-752.
4. Blumenthal D, Glaser JP. Information technology comes to medicine. N Engl J Med. 2007;356(24):2527-2534.
5. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628-1638.
6. Hillestad R, Bigelow J, Bower A, et al. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff (Millwood). 2005;24(5):1103-1117.
7. American Recovery and Reinvestment Act, HR 1, 111th Congress, 1st Sess (2009). http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h1enr.txt.pdf. Accessed October 15, 2010.
8. Letter to the Honorable Charles B. Rangel, Chairman, Committee on Ways and Means, U.S. House of Representatives. January 21, 2009. http://www.cbo.gov/ftpdocs/99xx/doc9966/HITECHRangelLtr.pdf. Accessed December 15, 2009.
9. Office of the National Coordinator for Health Information Technology, US Department of Health and Human Services. Health IT Policy Council recommendations to National Coordinator for defining meaningful use: final. August 2009. http://healthit.hhs.gov/portal/server.pt/gateway/PTARGS_0_10741_888532_0_0_18/FINAL%20MU%20RECOMMENDATIONS%20TABLE.pdf. Accessed February 1, 2010.
10. American Recovery and Reinvestment Act of 2009. HIMSS legislative overview, policy implications, and healthcare ramifications. http://www.himss.org/content/files/HIMSS_ARRA_Appendices.pdf. Accessed December 3, 2010.
11. PricewaterhouseCoopers LLP. Rock and a hard place: an analysis of the $36 billion impact from health IT stimulus funding. 2009. http://www.pwc.com/us/en/healthcare/publications/rock-and-a-hard-place.jhtml. Accessed October 14, 2009.
12. Goldzweig CL, Towfigh A, Maglione M, Shekelle PG. Costs and benefits of health information technology: new trends from the literature. Health Aff (Millwood). 2009;28(2):w282-w293.
13. Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, Powe NR. Clinical information technologies and inpatient outcomes: a multiple hospital study. Arch Intern Med. 2009;169(2):108-114.
14. Menachemi N, Chukmaitov A, Saunders C, Brooks RG. Hospital quality of care: does information technology matter? The relationship between information technology adoption and quality of care. Health Care Manage Rev. 2008;33(1):51-59.
15. DesRoches CM, Campbell EG, Vogeli C, et al. Electronic health records: limited successes suggest more targeted uses. Health Aff (Millwood). 2010;29(4):639-646.
16. McCullough JS, Casey M, Moscovice I, Prasad S. The effect of health information technology on quality in US hospitals. Health Aff (Millwood). 2010;29(4):647-654.
17. Parente ST, McCullough JS. Health information technology and patient safety: evidence from panel data. Health Aff (Millwood). 2009;28(2):357-360.
18. Himmelstein DU, Wright A, Woolhandler S. Hospital computing and the costs and quality of care: a national study. Am J Med. 2010; 123(1):40-46.
19. Kazley AS, Ozcan YA. Do hospitals with electronic medical records (EMRs) provide higher quality care? An examination of three clinical conditions. Med Care Res Rev. 2008;65(4):496-513.
20. Yu F, Houston TK. Do most "wired" hospitals deliver better care? Jt Comm J Qual Patient Saf. 2007;33(3):136-144.
21. Jha AK, Orav EJ, Ridgway AB, Zheng J, Epstein AM. Does the Leapfrog program help identify high-quality hospitals? Jt Comm J Qual Patient Saf. 2008;34(6):318-325.
22. Fonkych K, Taylor R; RAND Corporation. The State and Pattern of Health Information Technology Adoption. Santa Monica, CA: RAND Corp; 2005.
23. HIMSS Foundation. Usage agreement and application for the Dorenfest Institute for H.I.T. Research and Education Database. http://www.himss.org/DorenfestInstitute/agreement.aspx. Accessed November 2, 2009.
24. Ash JS, Sittig DF, Dykstra R, Campbell E, Guappone K. The unintended consequences of computerized provider order entry: findings from a mixed methods exploration. Int J Med Inform. 2009;78(suppl 1):S69-S76.
25. American Hospital Association Annual Survey Database [CDROM]. Chicago, IL: American Hospital Association; 2007.
26. Centers for Medicare and Medicaid Services. Hospital Compare: Hospital Quality Alliance. 2005-2007. http://www.cms.hhs.gov/HospitalQualityInits/11_HospitalCompare.asp. Accessed November 2, 2009.
27. Healthcare Cost and Utilization Project. Statistical brief #59. September 2008. http://www.hcup-us.ahrq.gov/reports/statbriefs/sb59.jsp. Accessed December 17, 2009.
28. Jha AK, Orav EJ, Li Z, Epstein AM. The inverse relationship between mortality rates and performance in the Hospital Quality Alliance measures. Health Aff (Millwood). 2007;26(4):1104-1110.
29. Jha AK, Orav EJ, Dobson A, Book RA, Epstein AM. Measuring efficiency: the association of hospital costs and quality of care. Health Aff (Millwood). 2009;28(3):897-906.
30. Dullabh P, Moiduddin A, Babalola E. White paper: measurement of the utilization of an installed EHR. June 2009. http://aspe.hhs.gov/sp/reports/2009/ehrutil/report.pdf. Accessed January 21, 2010.
31. DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care: a national survey of physicians. N Engl J Med. 2008;359(1):50-60.
32. Doolan DF, Bates DW. Computerized physician order entry systems in hospitals: mandates and incentives. Health Aff (Millwood). 2002;21(4):180-188.
33. Aarts J, Ash J, Berg M. Extending the understanding of computerized physician order entry: implications for professional collaboration, workflow and quality of care. Int J Med Inform. 2007;76(suppl 1): S4-S13.
34. Butler J, Speroff T, Arbogast PG, et al. Improved compliance with quality measures at hospital discharge with a computerized physician order entry system. Am Heart J. 2006;151(3):643-653.
35. Kuperman GJ, Teich JM, Gandhi TK, Bates DW. Patient safety and computerized medication ordering at Brigham and Women's Hospital. Jt Comm J Qual Improv. 2001;27(10):509-521.
36. Ball MJ, Douglas JV. IT, patient safety, and quality care. J Healthc Inf Manag. 2002;16(1):28-33.
37. Ball MJ, Garets DE, Handler TJ. Leveraging information technology towards enhancing patient care and a culture of safety in the US. Methods Inf Med. 2003;42(5):503-508.
38. Bates DW, Kuperman G, Teich JM. Computerized physician order entry and quality of care. Qual Manag Health Care. 1994;2(4):18-27.
39. Zanutto E, Lu B, Hornik R. Using propensity score subclassification for multiple treatment doses to evaluate a national antidrug media campaign. J Educ Behav Stat. 2005;30(1):59-73.
40. Stock JH, Watson MW. Introduction to Econometrics. Boston, MA: Addison Wesley; 2003.
41. R 2.10.0 [computer program]. Vienna, Austria: R Foundation for Statistical Computing; 2009.
42. Greenhalgh T, Potts HW, Wong G, Bark P, Swinglehurst D. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q. 2009; 87(4):729-788.
43. Drucker PF. The Essential Drucker: Selections From the Management Works of Peter F. Drucker. New York, NY: HarperBusiness; 2001:118.
44. Harrison MI, Koppel R, Bar-Lev S. Unintended consequences of information technologies in health care: an interactive sociotechnical analysis. J Am Med Inform Assoc. 2007;14(5):542-549.
45. Meyer JA, Silow-Carroll S, Kutyla T, Stepnick L, Rybowski L. Hospital quality: ingredients for success: overview and lessons learned. July 27, 2004. http://www.cmwf.org/Publications/Publications_show.htm?doc_id=233868. Accessed October 15, 2010.
46. Ash JS, Bates DW. Factors and forces affecting EHR system adoption: report of a 2004 ACMI discussion. J Am Med Inform Assoc. 2005; 12(1):8-12.
47. Gardner RM, Pryor TA, Warner HR. The HELP hospital information system: update 1998. Int J Med Inform. 1999;54(3):169-182.
48. Pryor TA, Gardner RM, Clayton PD, Warner HR. The HELP system. J Med Syst. 1983;7(2):87-102.
49. Dobrev A, Jones T, Stroetmann K, Vatter Y, Peng K. The socioeconomic impact of interoperable electronic health record (EHR) and ePrescribing systems in Europe and beyond: final study report. October 2009. http://www.ehr-impact.eu/downloads/documents/EHRI_final_report_2009.pdf. Accessed April 1, 2010.
50. Office of the National Coordinator for Health Information Technology, US Department of Health and Human Services. HITECH programs. http://healthit.hhs.gov/portal/server.pt?open=512&objID=1487&parentname=CommunityPage&parentid=3&mode=2&in_hi_userid=10741&cached=true. Accessed April 7, 2010.
51. Kern LM, Dhopeshwarkar R, Barran Y, Wilcox A, Pincus H, Kaushal R. Measuring the effects of health information technology on quality of care: a novel set of proposed metrics for electronic quality reporting. Jt Comm J Qual Patient Saf. 2009;35(7):359-369.