CAH Staff Perceptions of a Clinical Information System Implementation

Published Online: May 21, 2012
Marcia M. Ward, PhD; Smruti Vartak, PhD; Jean L. Loes, RN, BSN, MS; John O’Brien, MBA; Troy R. Mills, MS; Jonathon R. B. Halbesleben, PhD; and Douglas S. Wakefield, PhD
Objectives: This study examines staff perceptions of patient care quality and processes before and after implementation of a comprehensive clinical information system (CIS) in critical access hospitals (CAHs).


Study Design: A prospective, nonexperimental evaluation study.


Methods: A modified version of the Information Systems Expectations and Experiences (I-SEE) survey instrument was administered to staff in 7 CAHs annually over 3 years to capture baseline, readiness, and postimplementation perceptions.


Results: Descriptive analyses examined 840 survey responses across 3 survey administrations and job categories (registered nurses [RNs], providers, and other clinical staff). Analysis of variance compared responses for main effects (ie, administration, staff position, hospital, and cohort) and interactions between groups over time. Correlations examined the relationships between variables. In general, the responses indicate a high level of positive perceptions regarding the processes and quality of care in these hospitals. For most of the items, responses were quite consistent across the 3 survey administrations. Significant changes occurred for 5 items: 4 reflected information flow and increased communication, and 1 reflected a decrease in perceptions of improved patient care. Overall, providers had lower mean responses than nurses and other clinical staff. Significant interactions between administrations and job categories were found for 4 items.


Conclusions: Even though staff had overwhelmingly positive perceptions of patient care quality and processes, significant differences between providers, RNs, and other clinical staff were observed. Variability was also found across CAHs. Research on CIS implementation in small hospitals is rare and needed to guide the identification of factors and strategies related to success.


(Am J Manag Care. 2012;18(5):244-252)
The study results indicate that even though overall responses show a high level of positive perceptions of the processes and quality of care in these hospitals, significant variability was observed across job positions and critical access hospitals.

  • Hospital employees’ perceptions of how new information technology will affect their work flow and the actual quality of care they deliver are very important.

  • Our study stands apart because there is so little research focused on information systems in small hospitals; with the move to meaningful use requirements in all hospitals, research is sorely needed to guide implementation efforts.
A recent survey by the American Hospital Association found that small, rural, and critical access hospitals (CAHs) had consistently lower rates of adoption of electronic health records (EHRs) than their large or urban counterparts.1 The major models of information technology (IT) use indicate that perceptions of the impact on work and outcomes are significant determinants of technology use and adoption.2 Research has shown that users’ attitudes regarding risks to service quality and disruptions in work flow play a large role in the use of health information technology (HIT).3-5 Research on the evaluation of IT systems in healthcare organizations is quite limited, with the available studies differing in organizational settings, IT approach, and evaluation techniques.6 In addition, there is a shortage of studies on measurement tools for IT evaluation.7-9

Research on the effects of IT implementation in small, rural, and CAHs is limited, yet it is needed to explore healthcare users’ roles and viewpoints related to the lagging EHR adoption in these hospitals. For small hospitals, IT implementation represents a significant investment, and evaluation is therefore critical to ensuring its success. This study examines staff perceptions of patient care quality and processes in 7 CAHs before and after implementation of a comprehensive clinical information system (CIS). It follows Kaplan and Shaw’s recommendations for IT evaluation and examines how well a system works with affected users in a particular setting.10

METHODS

Study Hospitals

Mercy Health Network—North Iowa consists of Mercy Medical Center—North Iowa (MMC-NI), 9 CAHs, and a primary physician network. MMC-NI is a rural referral hospital owned by Trinity Health in Novi, Michigan; it in turn owns 1 of the CAHs and manages the others. Seven of the 9 network CAHs collaborated in a comprehensive EHR and computerized provider order entry (CPOE) system implementation (termed the EHR10 project) as part of Trinity Health’s extensive CIS initiative.11 As shown in Table 1, the 7 study CAHs have 25 or fewer acute care beds (1 includes a 10-bed psychiatric unit), and 2 have attached nursing homes.12 Full-time inpatient staffing ranges from 75 to 180 employees; all 7 hospitals provide surgical services, and all but 2 offer obstetrics services.

CIS Implementation and Survey Timing

The EHR10 implementation process extended over several years of planning and execution.11 A well-formulated readiness process documented progress through project milestones. The CAHs, along with MMC-NI, worked together to define the structure for communication and decision making that would enable effective change management across the 7 CAHs. To meet the implementation goals, the CAHs worked collaboratively to create standardized processes and system designs. Major activities involved setting the stage for network collaboration, which included identifying both local and network-wide structures for communication and decision making. CAH staff identified to fill key project roles were freed from their regular duties and trained in the use of the readiness plan and project tools. Communication was ongoing, combining electronic exchanges with monthly in-person meetings of each task-defined affinity group and the overall leadership team.

The study survey was administered 3 times at annual intervals. The first survey (administration 1) was timed to precede major changes related to the EHR implementation and captured the steady state baseline (March 2007).

The second survey (administration 2) occurred a year later (March 2008), after phase 1 of readiness had occurred. At this point, CAH personnel had become accustomed to the read-only electronic capacity (eg, online laboratory reports), work flow processes had been redesigned, hardware had been acquired and tested, and “super users”—staff slated for earlier and more extensive training and practice—were being trained. The administration 2 survey was distributed a few months before the “Go-Live” date of phase 2—the specific date when the EHR/CPOE system was activated. At this point, most CAH personnel had not yet undergone training for full EHR/CPOE implementation but were generally aware of the planned changes in work flow, communication, and care processes.

The CAHs followed the Trinity Health readiness process, which pays particular attention to end-user training.11 Training was conducted for all employees who would use the system over a 3-month period immediately preceding Go-Live. Each CAH identified its own trainers and super users.13 The CAHs varied somewhat in how they managed work schedules during training; most used weekly 4-hour formal training sessions supplemented by ongoing practice sessions. For implementation purposes, the 7 CAHs were divided into 2 cohorts; the first activated Go-Live in July 2008, and the second in September 2008.

The third survey (administration 3) occurred 1 year later (March 2009), 6 to 8 months after Go-Live and just after automated medication dispensing cabinets and bar-code medication administration were implemented.

Survey Design and Administration

A previously validated instrument, the Information Systems Expectations and Experiences (I-SEE) survey, was designed to assess expectations and experiences regarding the impact that CISs have on work processes and outcomes.13,14 In the previous version, the instructions asked respondents to indicate how each item would change (or had changed) as a result of the new CIS, with response options ranging from “much worse” to “no change” to “much improved.”

When HIT is implemented within specific clinical and administrative work processes, the resulting information flow and work flow become closely bundled and integrated, and the research focus shifts to how communication and work processes change. For the current study, therefore, the I-SEE survey item content was retained, but the instructions were modified to remove reference to the CIS, creating a “current perception” survey focused on patient care quality and processes. This shift made it possible to assess perceptions of the underlying care processes regardless of the presence or stage of an HIT implementation, and it facilitated direct comparison of perceptions across the 3 survey administrations conducted 1 year apart. Specifically, respondents were asked to indicate the degree to which they agreed with statements related to work flow, information flow, and selected care processes in their hospital at a given point in time. The survey items were measured on a 6-point Likert scale (ie, strongly disagree, moderately disagree, mildly disagree, mildly agree, moderately agree, or strongly agree) with an option to indicate “don’t know” or “not applicable.”
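To make the response coding concrete, the sketch below shows one plausible way such Likert responses could be mapped to numeric scores for analysis, with “don’t know” and “not applicable” treated as missing. The labels, data format, and Python tooling are illustrative assumptions, not details reported in the study.

import numpy as np
import pandas as pd

# Hypothetical coding of the 6-point agreement scale described above;
# the exact labels and storage format used in the study are assumptions.
LIKERT = {
    "strongly disagree": 1, "moderately disagree": 2, "mildly disagree": 3,
    "mildly agree": 4, "moderately agree": 5, "strongly agree": 6,
}

def code_response(raw: str) -> float:
    # "don't know" and "not applicable" fall through to missing (NaN)
    return LIKERT.get(raw.strip().lower(), np.nan)

responses = pd.Series(["strongly agree", "mildly disagree", "not applicable"])
print(responses.map(code_response))  # 6.0, 3.0, NaN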

Approximately 700 surveys were mailed each year. The Institutional Review Board–approved survey packets were mailed to the Human Resources director at each CAH, who distributed them to all hospital personnel except facility support and service support employees who had no interaction with the CIS. No identifying information was collected except for hospital name, years of healthcare experience, and work position category. For each administration, a follow-up survey was distributed to increase the response rate.

Data Processing and Statistical Analysis

Surveys were entered independently by 2 individuals into a Microsoft Access template; the 2 data sets were compared, discrepancies were corrected, and a final data set was created. Of the 1201 surveys returned, 37 lacked hospital identification, 16 lacked an identifiable work position, and 14 came from facility/service support employees. These surveys were deleted, leaving 1132 usable surveys.
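A minimal sketch of the double-entry verification described above, assuming the 2 entries can be keyed by a survey identifier; the IDs, item names, and values are invented for illustration, and the study itself used Microsoft Access rather than Python.

import pandas as pd

# Two independent entries of the same surveys (values invented).
entry1 = pd.DataFrame({"survey_id": [1, 2], "q1": [6, 3], "q2": [5, 4]}).set_index("survey_id")
entry2 = pd.DataFrame({"survey_id": [1, 2], "q1": [6, 3], "q2": [5, 2]}).set_index("survey_id")

# compare() reports only the cells where the 2 entries disagree, so each
# discrepancy can be resolved against the paper survey before analysis.
print(entry1.compare(entry2))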

To facilitate comparison of responses according to staff position, the 15 position options on the survey were combined into 4 groups: providers (physicians and mid-levels), registered nurses (RNs), other clinical, and nonclinical. “Not applicable” responses were relatively common among the nonclinical group (more than 11% overall and up to 55% for some items), reflecting employees who did not have clinical duties. This group was excluded, leaving 840 relevant surveys for subsequent analyses. Of the 840 surveys, 48 (5.7%) were completed by providers, 341 (40.6%) by RNs, and 451 (53.7%) by other clinical personnel. We do not have specific counts of the personnel who actually received the surveys, but of the 221 RNs at these 7 CAHs,12 135 (62% response rate) completed the first survey, 106 (48% response rate) completed the second survey, and 96 (43% response rate) completed the third survey. Of the 46 physicians and 25 mid-level providers affiliated with these hospitals, 48 surveys were completed across the 3 administrations, for an estimated response rate of 26% among providers.
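The job-category percentages above follow directly from the 840-survey denominator; a quick worked check:

# Worked check of the reported job-category percentages (n = 840).
total = 840
counts = {"providers": 48, "RNs": 341, "other clinical": 451}
for group, n in counts.items():
    print(f"{group}: {n / total:.1%}")  # 5.7%, 40.6%, 53.7%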

Analyses were conducted using SAS version 9.1 (SAS Institute Inc, Cary, North Carolina). Analysis of variance compared responses for main effects (ie, administration, staff position, hospital, and cohort) and interactions between groups over time, with post hoc t tests used to examine significant findings. Correlations examined relationships between variables.
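The analyses were run in SAS, but for readers who want to see the model structure, the following is an illustrative Python analogue fit to simulated data: main effects plus an administration-by-position interaction, mirroring the design described above. The variable names and simulated scores are assumptions, and hospital and cohort are omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated stand-in for the 840 usable surveys; all values are invented.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "administration": rng.choice(["1", "2", "3"], size=840),
    "position": rng.choice(["provider", "RN", "other_clinical"], size=840),
    "score": rng.integers(1, 7, size=840).astype(float),  # 6-point Likert item
})

# Main effects plus the administration-by-position interaction; post hoc
# pairwise comparisons would follow for significant terms.
model = smf.ols("score ~ C(administration) * C(position)", data=df).fit()
print(anova_lm(model, typ=2))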

RESULTS
