
The Impact of Health Information Technologies on Patient Satisfaction

Christine J. Manta, BA; Richard Caplan, PhD; Jennifer Goldsack, MChem, MA, MBA; Shawn Smith, MBA; and Edmondo Robinson, MD, MBA
Health information technologies can be implemented without any impact on patient satisfaction. The absence of a synergistic relationship should concern stakeholders seeking to optimize costs and quality.
ABSTRACT

Objectives: To determine if health information technologies (IT) impact patient responses on the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) satisfaction survey.

Study Design: A retrospective, pre-post observational study to compare the percentage of top-box responses by HCAHPS composite domain before and after implementation of electronic medication administration record (eMAR), computerized provider order entry (CPOE), and electronic progress notes (PN).

Methods: We defined 3 pre-post comparison periods for introduction of eMAR, CPOE, and PN, and 2 control periods. The pre-implementation periods comprised the 4 months prior to tool addition. Postimplementation periods comprised the 4 months from the second to the fifth month, inclusive, following the unit going live with a tool. Changes in the percentage of top-box scores were tested using logistic regression. The combined changes during the health IT implementation periods and the combined changes during control periods were tested and compared using contrasts in the logistic regression model.

Results: Only PN had a significant negative impact on 2 questions in unadjusted analysis, both of which became nonsignificant in adjusted analyses. eMAR had a significant negative impact on 1 question in adjusted analyses only. The combined impact of health IT had mixed results, none of which were significant. During control periods, there was improvement in all domains, with statistically significant improvements in Discharge Information and Communication About Medicines. 

Conclusions: Health IT investments appear to have no impact on HCAHPS scores. Stakeholders should investigate synergistic opportunities between these resource-intensive initiatives to optimize costs, quality, and patient experience. 

The American Journal of Accountable Care. 2016;4(4):9-15
The introduction of CMS’ Meaningful Use incentive payments in 2011 has encouraged hospitals to increasingly utilize health information technologies (IT).1 When implementing new tools, an institution should consider how health IT can influence the quality, safety, and outcomes measures that determine reimbursements from CMS’ value-based purchasing (VBP) program.2 In 2013, CMS added the patient experience of care domain to VBP through the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS), which linked patient-reported satisfaction scores to value-based payments.3 Consequently, within a 2-year span, healthcare systems have had to simultaneously invest millions of dollars and large amounts of clinician and administrator time into new initiatives to improve patient experience and meet Meaningful Use policies.4-8 Mechanisms by which health IT investments can impact HCAHPS scores should be an important consideration for administrators, policy makers, and clinicians responsible for optimizing delivery of affordable and high-quality care.9

Currently, there are limited studies with mixed results investigating direct relationships between health IT utilization and HCAHPS scores. A study of all hospitals eligible to participate in the VBP program for 2013 found no significant differences in total patient experience of care scores by level of advanced electronic health record usage.9 Another national study found positive associations between the average responses to the HCAHPS global domain questions and medium and high health IT usage.10 Only 1 study evaluated effects of health IT on composite domains of the HCAHPS survey.11 Using 2013 VBP data, this study showed that health IT utilization does improve patient satisfaction with discharge information, but does not influence satisfaction with provider communication or the patient’s willingness to recommend the hospital.11

The purpose of our study was to evaluate whether investments in health IT accelerated performance in patient-reported satisfaction scores, which have steadily risen since our institution began using the HCAHPS survey in 2008. Our administration uses a combination of HCAHPS and other valuable patient satisfaction indicators, but our study focused on HCAHPS scores as they are a generalizable measure upon which significant reimbursement is contingent. Since 2008, our institution has implemented 3 point-of-care tools—the electronic medication administration record (eMAR), computerized provider order entry (CPOE), and electronic progress notes (PN)—that have been shown to have positive associations with process efficiency, staff satisfaction, and quality measures.12-15 To investigate whether these tools improve care from the patient perspective, we compared HCAHPS scores by composite domain before and after tool implementation. By determining how health IT utilization influences patient perceptions of their care, we aimed to inform stakeholders about opportunities to capitalize on existing synergistic effects to provide high-quality care while maximizing value-based reimbursements.

METHODS

Study Design

A retrospective, pre-post observational study was conducted to compare HCAHPS scores before and after implementation of eMAR, CPOE, and PN.

Setting and Sample

We analyzed surveys from 31 inpatient units—13 medical, 7 surgical, 5 stepdown/intensive care unit (ICU), and 6 women’s and children’s services—collected from 2008 to 2015 at 2 hospitals in our independent academic health system based in Wilmington, Delaware. Christiana Hospital is a 913-bed suburban teaching hospital and Wilmington Hospital is a 241-bed urban teaching hospital.

We defined 5 comparison periods: 3 pre-post periods (A, B, and C) for the introduction of eMAR, CPOE, and PN, respectively; and 2 control periods (D and E) in which health IT was unchanged. The Figure shows the periods of interest for surgical units at Christiana. The control periods, D and E, were included to determine how HCAHPS scores changed during the years when health IT was constant. By comparing changes during control periods to changes during implementation periods, the general improvement in HCAHPS scores at our institution over the 7-year timeframe of our study could be specifically linked to periods of control or implementation.

The pre-implementation periods comprised the 4 months prior to the addition of a tool onto the unit. The postimplementation periods comprised the 4 months from the second to the fifth month, inclusive, following the unit going live with a tool. Based on methods of previous studies, we chose 4-month periods, excluding the first month after implementation, to allow sufficient time for provider adjustment while capturing a small enough period to link survey scores to the addition of a new tool.16-20
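To make the windowing concrete, the assignment of a survey to a pre- or post-implementation window can be sketched as follows (a minimal illustration; the month arithmetic and the example go-live date are our own simplification, not taken from the study):

```python
from datetime import date

def classify_period(discharge, go_live):
    """Assign a survey to a pre- or post-implementation window.

    Pre  = the 4 calendar months immediately before go-live.
    Post = months 2 through 5 after go-live, inclusive (the first
           month is excluded to allow providers time to adjust).
    """
    months_from_go_live = (discharge.year - go_live.year) * 12 + (
        discharge.month - go_live.month
    )
    if -4 <= months_from_go_live <= -1:
        return "pre"
    if 2 <= months_from_go_live <= 5:
        return "post"
    return None  # outside both windows, incl. the excluded first month

# Illustrative go-live date, not from the study:
go_live = date(2010, 3, 1)
```

Surveys falling in the excluded adjustment month (or outside both windows) simply receive no period label and drop out of the comparison.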

HCAHPS surveys were combined into periods based on patient discharge date. Each tool was implemented on different units at different times.

Outcomes

We compared percentages of top-box responses for 14 HCAHPS questions focused on patient interactions. Top-box responses are the measures used to calculate the patient experience of care score that determines CMS reimbursements.21 “Top box” is defined as the highest positive category for all questions, except for hospital rating, for which top box includes the highest 2 categories. The patient interaction questions we selected fall into 6 domains: Global, Nurse Communication, Doctor Communication, Discharge Information, Communication About Medicines, and Pain Management.
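The top-box coding can be sketched as follows (a minimal illustration; the question keys and response labels are hypothetical, though the scoring rules follow the definitions above):

```python
def top_box(question, response):
    """Return 1 if the response is 'top box', else 0.

    For frequency-scale items the top box is the single highest
    category ("Always"); for the 0-10 hospital-rating item the top
    box comprises the 2 highest categories (9 and 10).
    """
    if question == "hospital_rating":
        return 1 if int(response) >= 9 else 0
    return 1 if response == "Always" else 0

def pct_top_box(question, responses):
    """Percentage of top-box responses for one question."""
    flags = [top_box(question, r) for r in responses]
    return 100.0 * sum(flags) / len(flags)
```

Under this coding, for example, hospital ratings of [10, 9, 8, 7] yield a 50% top-box score.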

Data Analysis

Each question was transformed into a binary variable: top box versus not top box. Changes for each of the 5 periods (A, B, C, D, E) were tested using logistic regression. The combined changes during the 3 health IT implementation periods (A, B, C) and the combined changes during the control periods (D, E) were tested using contrasts in the logistic regression model. Statistical significance is noted when P <.005.
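For a single binary pre/post indicator, the logistic-regression maximum-likelihood fit has a closed form: the intercept is the log odds of a top-box response in the pre period, and the slope is the log odds ratio between periods. A minimal sketch under that assumption (the counts below are invented for illustration, not the study's data):

```python
import math

def logit_pre_post(pre_top, pre_n, post_top, post_n):
    """Fit logit(P(top box)) = b0 + b1 * post by closed form.

    With one binary predictor the MLE is exact:
    b0 = log odds in the pre period, b1 = log odds ratio.
    """
    odds_pre = pre_top / (pre_n - pre_top)
    odds_post = post_top / (post_n - post_top)
    b0 = math.log(odds_pre)
    b1 = math.log(odds_post) - math.log(odds_pre)  # log odds ratio
    return b0, b1

# Invented counts for illustration only:
b0, b1 = logit_pre_post(pre_top=900, pre_n=1638, post_top=700, post_n=1309)
odds_ratio = math.exp(b1)  # < 1 means fewer top-box responses post
```

The Wald test on b1 then gives the significance of the pre-post change for that question.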

Adjustment for variables that have been suggested to impact HCAHPS responses was done using logistic regression; these variables included age, gender, race, overall self-reported health (excellent, very good, good, fair, poor), and unit category (medical, stepdown/ICU, surgical, women’s and children’s services).22
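Covariate adjustment extends the same model with additional predictor columns. A minimal pure-Python sketch of fitting a multivariable logistic regression by gradient descent (synthetic data only; the study would have used standard statistical software):

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Multivariable logistic regression via full-batch gradient descent.

    X: feature rows (e.g., pre/post indicator plus covariate dummies
    for age group, unit category, etc.); y: 0/1 top-box outcomes.
    Returns (intercept, weights).
    """
    n, p = len(X), len(X[0])
    b0, w = 0.0, [0.0] * p
    for _ in range(epochs):
        g0, g = 0.0, [0.0] * p
        for xi, yi in zip(X, y):
            z = b0 + sum(wj * xj for wj, xj in zip(w, xi))
            pi = 1.0 / (1.0 + math.exp(-z))  # predicted P(top box)
            err = pi - yi
            g0 += err
            for j in range(p):
                g[j] += err * xi[j]
        b0 -= lr * g0 / n
        for j in range(p):
            w[j] -= lr * g[j] / n
    return b0, w
```

With the pre/post indicator as one column and covariates as the rest, the indicator's fitted coefficient is the adjusted log odds ratio.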

RESULTS

Our health system’s Data Information and Analytics Office of Quality and Safety produced 47,057 HCAHPS surveys from 2008 to 2015. After excluding incomplete surveys, those completed by minors, and those from patients discharged from units that did not meet inclusion criteria, we had 11,728 surveys for analysis. Table 1 shows demographic information for patients whose surveys were included and the distribution of unit categories.

Period A comprised 2947 surveys. The pre-eMAR period had 1638 surveys from May 2008 to October 2008 for medical, surgical, and stepdown/ICU and from October 2013 to February 2014 for women’s and children’s services. The post-eMAR period had 1309 surveys from September 2008 to February 2009 for medical and surgical units, December 2008 to March 2009 for stepdown/ICU, and March 2014 to April 2014 for women’s and children’s services. All surveys in Period A are from Christiana Hospital because eMAR was implemented at Wilmington Hospital prior to our institution using the HCAHPS survey.

Period B comprised 4325 surveys. The pre-eMAR + CPOE period had 2060 surveys from November 2009 to February 2010 for medical, surgical, and stepdown/ICU; from March 2014 to April 2014 for women’s and children’s services; and from September 2009 to December 2009 for all units at Wilmington. The post-eMAR + CPOE period had 2265 surveys from April 2010 to July 2010 for medical, surgical, and stepdown/ICU at Christiana, and from February 2010 to May 2010 for all units at Wilmington. Women’s and children’s services could not be included in this period because the time between eMAR and CPOE addition was too short. 

Period C comprised 4446 surveys. The pre-eMAR + CPOE + PN period had 2102 surveys from January 2014 to May 2014 for all units at both Christiana and Wilmington, and from April 2014 to May 2014 for women’s and children’s services. The post-eMAR + CPOE + PN period had 2344 surveys from June 2014 to October 2014 for all units at both hospitals.

Period D (control) comprised 3369 surveys from February 2009 to October 2009 for medical, surgical, and stepdown/ICU at Christiana; and from August 2008 to August 2009 at Wilmington. Period E (control) comprised 4367 surveys from August 2010 to December 2013 at Christiana, and from June 2010 to December 2013 at Wilmington.

The percentage of top-box scores increased, but not significantly, for 13 of the 14 questions as units evolved from paper only, pre-eMAR documentation systems in May 2008 to full health IT implementation in October 2014 (Table 1).

Neither eMAR nor CPOE implementation significantly impacted the percentage of top-box responses as seen in Period A and Period B pre-post unadjusted comparisons (Table 2). However, decreases in top-box responses for the Discharge Information question, “Did hospital staff talk with you about whether or not you would have the help you needed when you left the hospital?” and for the Global question of hospital rating, were statistically significant following PN implementation, as seen in Period C pre-post comparisons.

The unadjusted combined impact of health IT, determined by the sum of changes in proportions of top-box responses in Periods A, B, and C, was mixed across composite domains, with no changes reaching significance (Table 3). During control Periods D and E, there were positive changes in top-box responses in all composite domains. Improvements in Discharge Information and Communication About Medicines were statistically significant.

Analyses adjusted for age, gender, race, overall self-reported health, and unit category changed the statistical significance of only 5 comparisons. In Period C, the decrease in hospital rating became nonsignificant when adjusted for age, and the decrease in the Discharge Information question became nonsignificant when adjusting for either age or unit category. In Period A, following eMAR introduction, the decrease in top-box responses for “How often did the hospital staff do everything they could to help with your pain?” became statistically significant when adjusted for age or unit category.

DISCUSSION

Our results indicated that improvements in HCAHPS scores at our institution were not associated with health IT implementation. By comparing the sum of the proportional changes in percentages of top-box scores following the addition of eMAR, CPOE, and PN with the sum of proportional changes during the control periods, we determined that improvements in HCAHPS scores were associated with the control periods. Periods of health IT implementation showed a mix of positive and negative impacts, none of which were significant. Individually, only PN had a significant negative impact on 2 questions, both of which became nonsignificant in adjusted analyses; only 1 question became significant in adjusted analyses. Because the tools, individually and combined, had little to no impact in both unadjusted and adjusted analyses, our study suggests an absence of a relationship between these electronic point-of-care tools and patient satisfaction.

 