Quality in Healthcare and National Quality Awards

December 14, 2016
Beth M. Beaudin-Seiler, PhD

Kieran Fogarty, PhD

The American Journal of Accountable Care, December 2016, Volume 4, Issue 4

Hospitals receiving national quality awards showed better performance in selected outcomes variables compared with hospitals that had not received these awards.


Objectives: To compare the quality outcomes of national quality award–winning hospitals with those of non–national quality award–winning hospitals on 11 selected key variables.

Study Design: Data from the Medicare Hospital Compare database were collected for over 3000 hospitals across the United States. National quality award–winning hospitals are defined as those that received either the Malcolm Baldrige National Quality Award for Health Care or the Healthgrades Distinguished Hospital for Clinical Excellence Award in 2012, 2013, or 2014. Hospitals that did not receive either of these 2 awards were categorized as non–national quality award–winning. Variables from the Medicare Hospital Compare database were refined to include only key variables in clinical quality.

Methods: This quantitative study used the Mann-Whitney U nonparametric test to examine the differences between hospitals that have and have not received national quality awards and their clinical performance outcomes.

Results: Hospitals that had received a national quality award showed statistically significantly better performance on 6 of the 11 identified quality variables compared with those that had not. One variable (emergency department wait time) showed statistically significantly better performance in non–national quality award–winning hospitals. The remaining 4 variables showed no statistically significant difference in performance between hospitals that had and had not won a national quality award.

Conclusions: The findings suggest that national quality award–winning hospitals have better performance outcomes on 6 of 11 specific quality variables compared with hospitals that did not win 1 of the 2 defined national quality awards.

The American Journal of Accountable Care. 2016;4(4):26-32

The Institute of Medicine’s (IOM) 1999 report, To Err Is Human, as well as subsequent follow-up reports in 2005 and 2009, indicates that patient safety and quality continue to be problems within the US healthcare industry.1-3 One in 3 patients experiences adverse events (AEs) during the course of a hospital stay,4 and the most recent research suggests that upwards of 440,000 individuals die each year due to medical errors, which can cost the US economy as much as $980 billion.5,6 Empirical evidence of medical errors within the hospital setting is not always transparent, a fact that may deter the identification and tackling of AEs. Hospitals may fear reprisals if they publish data on medical errors; however, concealing such information may hold no benefit, as argued by patient safety experts such as Marty Makary, a specialist at Johns Hopkins Hospital and professor at the Johns Hopkins Bloomberg School of Public Health, and others.7

Although logic suggests that the hospitals receiving national quality awards have lower levels of medical errors and have better performance on quality indicators, thus making the cost in resources to obtain or publicize these awards worthwhile, the literature does not indicate whether this is truly the case.8 Performance measurements have shown an increase in patient safety, but arguably not as much as is wanted or needed. In the 2014 National Healthcare Quality & Disparities report from the Agency for Healthcare Research and Quality (AHRQ), the median change in quality in 2012 was only 3.6% per year among measures of patient safety, and only 1.7% per year among measures of effective treatment.9

This study examined over 3000 hospitals across the United States, looking at 11 specific variables from the timely and effective care category of the Medicare Hospital Compare database8 to better understand whether performance differs between hospitals that have won national quality awards and those that have not. The question of whether it is worthwhile for hospitals to invest resources in obtaining and marketing national quality awards is also discussed. In addition, a connection to high-reliability organizations (HROs) is a theme drawn through the conclusions. Our findings support the research of Weick and Sutcliffe that HROs make continuous efforts to analyze their systems and look for errors; they recognize near misses and the dangers of complacency, and they continually attend to the sensitivity of their operations, developing situational awareness for their organization.10

The Joint Commission is the United States’ oldest and largest accrediting and standard-setting organization in healthcare, offering a number of quality initiatives and programs to healthcare organizations.11 Nevertheless, the quality metrics of The Joint Commission were purposefully not used in this study; instead, quality awards from sources independent of the accrediting body were chosen. A number of independent quality awards are given; the Malcolm Baldrige National Quality Award (Malcolm Baldrige Award) and the Healthgrades Distinguished Hospital for Clinical Excellence Award (Healthgrades Award) were selected as nationally recognized marks of quality.

Study Design

A quantitative study using secondary data from the Medicare Hospital Compare database was conducted to determine performance outcomes on selected variables. Hospitals that had received either a Malcolm Baldrige Award or a Healthgrades Award were categorized as national quality award–winners; all other hospitals were categorized as non–national quality award–winners. Data housed in the Medicare Hospital Compare database were collected through a number of different sources, including voluntary reporting from the CMS Abstraction and Reporting Tool, Medicare enrollment and claims data, Veterans Affairs administrative data, the CDC’s National Healthcare Safety Network, and AHRQ Patient Safety Indicators, and were listed as 86 separate variables.12 The variables were categorized into multiple areas, including: patients’ experience; timely and effective care; readmissions, complications, and deaths; use of medical imaging; payment and value of care; and number of Medicare patients.12

This study focused on variables categorized in timely and effective care. These variables indicated the following: percentage of patients who received best-practice treatments for their condition, timeliness of treatment for patients with certain medical emergencies, and recommendations for preventive treatments. The measures reported reflect the accepted standard of care based on current scientific evidence; they are regularly reviewed and revised to ensure they are current. The measures do not have a risk adjustment calculation, but are reported as percentages.12 There are 50 variables listed in the timely and effective care category.13 A refinement process of cross-referencing quality indicators from various national organizations and empirical research14-22 reduced the number to 11 key quality variables (Table 1).

There are 4861 healthcare organizations that report to the Medicare Hospital Compare database. Not all, however, offer the same services; therefore, any reporting organization with missing data for 6 or more of the selected key quality variables was excluded from the analysis. This exclusion criterion left 3118 healthcare organizations eligible for analysis in this study.8
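The exclusion step can be sketched as a simple filter. This is a hypothetical illustration only; the record layout and variable names are assumptions, not the actual structure of the database extract:

```python
# Keep only organizations reporting at least 6 of the 11 key quality
# variables (ie, exclude any with missing data for 6 or more of them).
KEY_VARIABLES = 11
MAX_MISSING = 5  # missing 6 or more variables triggers exclusion

def eligible(record):
    """record: dict mapping variable name -> value, with None for missing."""
    missing = sum(1 for v in record.values() if v is None)
    return missing <= MAX_MISSING

# Two toy records: one complete, one with 6 of 11 variables missing.
hospitals = [
    {"var%d" % i: 95.0 for i in range(KEY_VARIABLES)},
    {"var%d" % i: (None if i < 6 else 95.0) for i in range(KEY_VARIABLES)},
]
analyzable = [h for h in hospitals if eligible(h)]
```

Applied to the full extract, this kind of filter would reduce the 4861 reporting organizations to the 3118 analyzed here.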

The healthcare organizations were then separated into 2 groups: national quality award–winning (those hospitals that had received the Malcolm Baldrige Award and/or the Healthgrades Award) and non–national quality award–winning (all others, which had not won at least 1 of the aforementioned awards). When a Malcolm Baldrige Award was won by a system (meaning that more than 1 hospital was included in the application), all hospitals within that system were designated as national quality award–winners. A total of 493 national quality award–winning organizations and 2625 non–national quality award–winning organizations were identified. Of the 493 national quality award–winning organizations, 11 received both the Malcolm Baldrige and Healthgrades awards.


Each variable was evaluated separately using a Mann-Whitney U test at the .05 level of significance. The Mann-Whitney U test was selected because the variable data violated the assumptions of the independent t test. The data for all variables in this project, however, satisfied the assumptions of the Mann-Whitney U test (ie, the dependent variable is continuous, there is 1 independent variable with 2 categories, cases cannot be in both categories, and the distributions of the 2 groups have the same shape). Distributions of the scores for national quality award–winning hospitals and nonwinning hospitals were similar, as assessed by visual inspection.
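A minimal sketch of one such per-variable comparison, using SciPy's implementation of the Mann-Whitney U test; the sample values below are illustrative, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical percentages of patients receiving a best-practice
# treatment, one value per hospital, split by award status.
award_winners = [97.8, 98.1, 97.5, 98.4, 97.9, 98.0, 97.6, 98.2]
non_winners = [95.9, 96.3, 95.5, 96.8, 96.1, 95.7, 96.0, 96.5]

# Two-sided test, evaluated at the .05 level as in the study design.
stat, p_value = mannwhitneyu(award_winners, non_winners,
                             alternative="two-sided")
significant = p_value < .05
```

In the study, this comparison was repeated independently for each of the 11 key quality variables.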

In order to establish homogeneity among hospital subjects, several delimitations were selected. The entire population of hospitals reporting to the US Medicare Hospital Compare database was initially selected; however, hospitals with missing data for half or more of the refined variables were excluded. The Medicare Hospital Compare database collects information on 86 variables, but only those in the timely and effective care category were selected because they were unaltered in any way. Next, the variables were refined from 86 to 11 through a search of quality indicators from the literature and from professional societies and associations; selection of different variables may yield different results. Finally, it is also noted that hospitals that self-selected to apply for a national quality award, such as the Malcolm Baldrige Award, may have better funding, more staff, and more resources (not necessarily directly related to quality) that were not measured in this study.8


Findings indicated that, overall, hospitals that had received national quality awards showed statistically significantly better performance on more variables than those that had not. The variables specified in this study reflect the percentage of correct procedures and the total minutes to execution of an event. For the majority of the variables, the higher the percentage, the more often treatment occurred at the correct time and in the correct manner; the exception is the obstetrics variable, for which better performance is reflected in a lower percentage. For the variables tracked in minutes, a lower number of minutes to execution of an event was considered better performance.
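Because the variables differ in direction, any comparison must account for whether higher or lower values indicate better care. A hypothetical helper illustrating that bookkeeping (variable names and structure are assumptions for illustration only):

```python
# Most percentage variables are higher-is-better; the obstetrics
# (early delivery) percentage and the minutes-based variables are
# lower-is-better.
LOWER_IS_BETTER = {"early_delivery_pct", "ed_wait_minutes"}

def better_performer(variable, award_mean, non_award_mean):
    """Return which group performed better on a variable, or 'tie'."""
    if award_mean == non_award_mean:
        return "tie"
    award_wins = (award_mean < non_award_mean
                  if variable in LOWER_IS_BETTER
                  else award_mean > non_award_mean)
    return "award-winning" if award_wins else "non-award-winning"
```

For example, on early deliveries (3.4% vs 5.0%) the award-winning group performs better, whereas on ED wait time (299.3 vs 277.3 minutes) the non-award-winning group does.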

In a comparison of hospitals that had received a national quality award with those that had not, for hospitals that had received an award, on average, a higher percentage of patients received appropriate medications for heart failure (97.8% vs 96.5%; P = .036) and pneumonia (97.3% vs 95.8%; P < .001), had their urinary catheters removed after surgery in a timely manner (97.9% vs 96.7%; P = .003), received appropriate medication to prevent blood clots after surgery (98.9% vs 98.0%; P < .001), and received antibiotics prior to surgery in a timely manner (97.8% vs 96.7%; P = .018).

In addition, the percentage of babies delivered too early was statistically significantly lower in hospitals that had won national quality awards (3.4% vs 5.0%; P = .001). However, the number of minutes spent in the emergency department (ED) was statistically significantly higher in hospitals that had received a national quality award than in hospitals that had not (299.3 vs 277.3 minutes; P < .001).

Table 2 provides an overview of the statistical analysis for the identified key quality variables in this study.


Overall, the findings of the study were expected, as it is logical that hospitals that received national quality awards would perform better than those that did not. The only unexpected finding was that the number of minutes spent in the EDs of national quality award–winning hospitals was statistically significantly higher than in the EDs of nonwinning hospitals. We speculate there may be a couple of reasons for this finding: 1) national quality award–winning hospitals have a reputation for quality and, therefore, are the hospital of choice for many people, causing longer waits; and/or 2) national quality award–winning hospitals may also be functioning more as HROs, refusing to simplify and, in doing so, increasing time in the ED.

Griffith (2015) found that hospitals that had won the Malcolm Baldrige Award were not quite yet HROs, but were closer to reflecting an HRO than other hospitals.23 However, when the data were analyzed a step further and the type of national award received (ie, either the Malcolm Baldrige or the Healthgrades Award) was examined, findings indicated more variables with statistically significantly better performance among Healthgrades Award recipients than among Malcolm Baldrige Award recipients. This may be because the Malcolm Baldrige Award is a holistic process, focusing on the entire organization, whereas the Healthgrades Award focuses on outcomes only.

The answer to the question, “Is it worth it for hospitals to invest resources into obtaining and marketing these awards?” is a resounding “It depends.” Looking specifically at an obstetrics and gynecology unit, the average damages awarded in a successful lawsuit involving a neurologically impaired infant are $1,150,687.24 Is it worth the hundreds of hours of manpower, other resources, and upwards of $50,000 in application and site visit fees25 to go through the Malcolm Baldrige Award process in order to reduce liability exposure in obstetrics? Possibly, but if the organization already performs at a high level in obstetrics, maybe not. These 2 quality awards have opposite evaluation points. The Malcolm Baldrige Award has up-front costs, and a large hospital could easily spend over $100,000 in personnel resources and fees, including site visit fees, to participate in the assessment. The Healthgrades Award, by contrast, starts at the outcomes level: there are no up-front costs, and the hospital invests no personnel or fees to be evaluated by Healthgrades. All hospitals are scored and ranked without any request by the hospital. However, should a hospital perform at a level high enough to receive an award, a licensing fee, which can be upwards of $145,000, is required in order to market or publicize the honor.26

Overall, the findings indicate there are some variables, such as the percentage of patients receiving an intervention within 90 minutes, for which the mean scores of hospitals that have not won a national quality award are considerably lower than those of hospitals that have. This may indicate that some nonwinning hospitals are performing at a very low level and dragging the mean down. For those hospitals, documentation of a continuous process improvement plan, assessment of that plan, and initiatives for performance outcomes improvement, as required by the Malcolm Baldrige criteria, may prove exceedingly beneficial in their journey toward quality.

The answer to the question, “Is it worth it to pursue national quality awards?” depends on what the hospital needs: does it need a diagnostic assessment equivalent to the Malcolm Baldrige process, or does it need to publicize the fact that it can handle high-risk obstetrics with low percentages of early deliveries?


Findings from this research can provide healthcare professionals and administrators with important information. We would propose that healthcare professionals and administrators not ask whether pursuing the Malcolm Baldrige or Healthgrades Award is worth the cost, but instead ask what the hospital can do to move toward becoming an HRO.

Griffith’s research23 shows that hospitals that have received national quality awards are aligning more closely, and adhering more consistently, with the principles of HROs, and that should be the goal. It is not the pursuit of the award that should be the focus for hospitals, but rather the pursuit of the principles of an HRO. With that journey will come the quality performance outcomes worthy of national awards.

However, this work reveals that even hospitals that have received national quality awards are not better on every variable than hospitals that have not received one. There is room for improvement for everyone, and if your organization has not received a national quality award, perhaps that need not be your focus.

Author Affiliations: Altarum Institute, Ann Arbor, MI (BMB-S); Western Michigan University (KF), Kalamazoo, MI.

Source of Funding: None.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (BMB-S, KF); acquisition of data (BMB-S); analysis and interpretation of data (BMB-S); drafting of the manuscript (BMB-S); critical revision of the manuscript for important intellectual content (KF); statistical analysis (BMB-S); administrative, technical, or logistic support (KF); and supervision (KF).

Send Correspondence to: Beth M. Beaudin-Seiler, PhD, Altarum Institute, 3520 Green Ct #300, Ann Arbor, MI 48105. E-mail: beth.beaudin-seiler@altarum.org


1. Kohn LT, Corrigan JM, Donaldson MS (eds.). To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.

2. Leape LL, Berwick DM. Five years after To Err Is Human: what have we learned? JAMA. 2005;293(19):2384-2390.

3. Clancy CM. Ten years after To Err Is Human. Am J Med Qual. 2009;24(6):525-528. doi: 10.1177/1062860609349728.

4. Classen DC, Resar R, Griffin F, et al. ‘Global trigger tool’ shows that adverse events in hospital may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581-589. doi: 10.1377/hlthaff.2011.0190.

5. James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Patient Saf. 2013;9(3):122-128. doi: 10.1097/PTS.0b013e3182948a69.

6. Andel C, Davidow SL, Hollander M, Moreno DA. The economics of health care quality and medical errors. J Health Care Finance. 2012;39(1):39-50.

7. Makary M. Unaccountable: What Hospitals Won’t Tell You and How Transparency Can Revolutionize Health Care. New York, NY: Bloomsbury Press; 2012.

8. Beaudin-Seiler BM. National Quality Awards in Healthcare and Actual Quality in US Hospitals [dissertation]. Kalamazoo, MI: Western Michigan University; 2015.

9. 2014 national healthcare quality & disparities report. Agency for Healthcare Research and Quality website. http://www.ahrq.gov/research/findings/nhqrdr/nhqdr14/index.html. Published May 2015. Accessed October 14, 2016.

10. Weick KE, Sutcliffe KM. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco, CA: Jossey-Bass; 2001.

11. Facts about The Joint Commission. The Joint Commission website. http://www.jointcommission.org/facts_about_the_joint_commission/. Published July 2016. Accessed October 14, 2016.

12. Data sources. Medicare Hospital Compare website. http://www.medicare.gov/hospitalcompare/data/data-sources.html. Accessed December 12, 2014.

13. Hospital profile: Bronson Methodist Hospital [2014]. Medicare Hospital Compare website. http://www.medicare.gov/hospitalcompare/profile.html#profTab=2&ID=230017&cmprID=230017&dist=50&loc=49008&lat=42.2609411&lng=-85.612648&cmprDist=3.0&Distn=3.0. Accessed December 12, 2014.

14. Quality indicators [2014]. National Quality Forum website. http://bit.ly/2dvjtXt. Accessed October 24, 2016.

15. PCPI approved quality measures. American Medical Association website. http://www.ama-assn.org/apps/listserv/x-check/qmeasure.cgi?submit=PCPI. Accessed December 20, 2014.

16. The state of healthcare quality report [2014]. National Committee for Quality Assurance website. http://store.ncqa.org/index.php/2014-state-of-health-care-quality-report.html. Accessed October 14, 2016.

17. Guidelines and clinical documents in progress. American College of Cardiology website. http://www.acc.org/guidelines/guidelines-and-clinical-documents-in-progress. Accessed October 14, 2016.

18. Measures matrix [2014]. Agency for Healthcare Research and Quality website. https://www.qualitymeasures.ahrq.gov/hhs/matrix.aspx. Accessed October 14, 2016.

19. The Ambulatory Care Quality Alliance recommended starter set. Agency for Healthcare Research and Quality website. https://archive.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/ambulatory-care/starter-set.html. Updated May 2005. Accessed October 14, 2016.

20. Carretta HJ, Chukmaitov A, Tang A, Shin J. Examination of hospital characteristics and patient quality outcomes using four inpatient quality indicators and 30-day all-cause mortality. Am J Med Qual. 2013;28(1):46-55. doi: 10.1177/1062860612444459.

21. Nietert PJ, Wessell AM, Jenkins RG, Feifer C, Nemeth LS, Ornstein SM. Using a summary measure for multiple quality indicators in primary care: the Summary QUality InDex (SQUID). Implement Sci. 2007;2:11.

22. Moore L, Lavoie A, Sirois MJ, Amini R, Belcaid A, Sampalis JS. Evaluating trauma center process performance in an integrated trauma system with registry data. J Emerg Trauma Shock. 2013;6(2):95-105. doi: 10.4103/0974-2700.110754.

23. Griffith JR. Understanding high-reliability organizations: are Baldrige recipients models? J Healthc Manag. 2015;60(1):44-61.

24. Shwayder JM. Liability in high-risk obstetrics. Obstet Gynecol Clin North Am. 2007;34(3):617-625.

25. Baldrige Performance Excellence Program: about. National Institute of Standards and Technology website. http://www.nist.gov/baldrige/about/history.cfm. Accessed December 13, 2014.

26. Rau J. Hospital ratings are in the eye of the beholder. Kaiser Health News website. http://kaiserhealthnews.org/news/expanding-number-of-groups-offer-hospital-ratings/. Published March 18, 2013. Accessed February 20, 2015.