The American Journal of Managed Care, December 2015

E-Consult Implementation: Lessons Learned Using Consolidated Framework for Implementation Research

Leah M. Haverhals, MA; George Sayre, PsyD; Christian D. Helfrich, PhD, MPH; Catherine Battaglia, PhD, RN; David Aron, MD, MS; Lauren D. Stevenson, PhD; Susan Kirsh, MD, MPH; P. Michael Ho, MD, MPH; and Julie Lowery, PhD
This paper identified 4 factors associated with implementation success of e-consults in 8 VA medical centers, with implications for implementing similar health IT initiatives elsewhere.
ABSTRACT

Objectives: In 2011, the Veterans Health Administration (VHA) implemented electronic consults (e-consults) as an alternative to in-person specialty visits to improve access and reduce travel for veterans. We conducted an evaluation to understand variation in the use of the new e-consult mechanism and the causes of variable implementation, guided by the Consolidated Framework for Implementation Research (CFIR).

Study Design: Qualitative case studies of 3 high- and 5 low-implementation e-consult pilot sites. Participants included e-consult site leaders, primary care providers, specialists, and support staff identified using a modified snowball sample.

Methods: We used a 3-step approach. First, a structured survey of e-consult site leaders, based on the CFIR, identified key constructs. We then conducted open-ended interviews, focused on those constructs, with all participants. Finally, we produced structured, site-level ratings of CFIR constructs and compared them between high- and low-implementation sites.

Results: Site leaders identified 14 initial constructs. We conducted 37 interviews, from which 4 CFIR constructs emerged that distinguished high-implementation e-consult sites: compatibility, networks and communications, training, and access to knowledge and information. For example, illustrating compatibility, a specialist at a high-implementation site reported that the site changed the order of consult options so that all specialties listed e-consults first, to maintain consistency. High-implementation sites also exhibited greater agreement on constructs.

Conclusions: By using the CFIR to analyze results, we facilitate future synthesis with other findings and better identify patterns of implementation determinants common across settings.

Am J Manag Care. 2015;21(12):e640-e647
Take-Away Points
 
Our research identified implementation factors that distinguished between medical centers that were less versus more successful at implementing a health information technology initiative: electronic consults (e-consults). These factors and their implications for implementing new health information technology programs include: 
  • Compatibility: design initiative to fit in with existing work processes. 
  • Networks and communications: assess degree of communication among participants; attend to indications of poor communication. 
  • Training resources: expend effort on training. 
  • Access to knowledge and information: establish key contacts easily accessible to program participants.
In 2010, the Secretary of the Department of Veterans Affairs (VA) identified improving access to care as a top priority.1 The Veterans Health Administration (VHA) had been collecting and analyzing data on wait times for more than a decade, and observational studies found associations between wait times and poorer short- and long-term quality indicators.2 Research also highlighted challenges faced by veterans in rural communities and by female veterans, with travel demands and transportation difficulties sometimes exacerbated by veterans’ functional status, resulting in delayed or forgone care.3,4

Technology was seen as part of the solution by offering alternate ways to access care.5 Research suggested telehealth interventions could improve access, including speeding time to treatment while achieving results similar to in-person visits in terms of patient satisfaction and experience of care.6 Simultaneously, there were concerns about implementation of new technologies introducing problems such as privacy and confidentiality vulnerabilities and disruption to clinic work flow.7

In 2011, the VHA implemented specialty care electronic consults (e-consults) at 15 pilot sites. E-consults offer primary care providers (PCPs) the option to obtain specialty care expertise by submitting patient consults via the VHA’s electronic health record (EHR)8,9; e-consults have been implemented in other healthcare systems as well.10-13 Specialists then respond with advice and/or recommendations on whether veterans should be seen in person. If implemented effectively, e-consults should improve specialty care access and reduce travel for veterans.

The VHA’s Office of Specialty Care Transformation (OSCT), which was responsible for overseeing the dissemination of e-consults, requested assistance in identifying the challenges associated with implementation to facilitate further dissemination. Thus, the Specialty Care Evaluation Center was created to evaluate e-consult implementation. We used the Consolidated Framework for Implementation Research (CFIR) to identify those factors that facilitated or hindered e-consult implementation among pilot sites. The CFIR consolidates and standardizes definitions of implementation factors, thereby providing a pragmatic structure for identifying potential influences on implementation and comparing findings across sites and studies.14,15 The CFIR is composed of 5 domains: intervention characteristics, outer setting, inner setting, characteristics of individuals involved in implementation, and the process of implementation.14 Thirty-seven constructs characterize these domains. The objective of this study is to use the CFIR for identification and comparison of implementation factors across sites in an effort to learn from their experiences.

METHODS

A post-implementation interpretive evaluation16 was conducted using semi-structured, key informant interviews with structured ratings of CFIR constructs. The unit of analysis was the site and included 8 of 15 pilot sites (geographic-site/specialty combinations), selected for variation in overall e-consult implementation rates, measured as the ratio of e-consults to all consults for specialties of interest. Three e-consult sites were randomly selected from the 7 sites in the top half of e-consult implementation rates, and 5 were selected from the 8 sites in the bottom half (53% of sites interviewed). E-consult volume data were assessed from the beginning of the pilot period to initial site selection, May 2011 to February 2012.
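The site-selection logic described above can be sketched in code as follows. The site identifiers, consult counts, and random seed are hypothetical illustrations, not the study's actual data; only the sampling design (3 of the top 7 sites, 5 of the bottom 8, ranked by implementation rate) comes from the text.

```python
# Illustrative sketch of the stratified site selection described above.
# Site identifiers and consult counts are hypothetical, not the study's data.
import random

def implementation_rate(e_consults, all_consults):
    """Rate = e-consults as a share of all consults for the specialty of interest."""
    return e_consults / all_consults

# Hypothetical pilot sites: (site_id, e-consult count, total consult count)
sites = [(f"site_{i:02d}", ec, tot) for i, (ec, tot) in enumerate(
    [(120, 400), (30, 500), (90, 300), (15, 450), (60, 200),
     (10, 400), (80, 250), (25, 600), (110, 350), (5, 300),
     (70, 280), (20, 550), (95, 310), (40, 900), (65, 260)])]

# Rank all 15 pilot sites by implementation rate, then split into halves.
ranked = sorted(sites, key=lambda s: implementation_rate(s[1], s[2]), reverse=True)
top_half, bottom_half = ranked[:7], ranked[7:]

# Randomly select 3 high- and 5 low-implementation sites (8 of 15 = 53%).
rng = random.Random(0)
selected = rng.sample(top_half, 3) + rng.sample(bottom_half, 5)
print(len(selected))  # 8 sites selected for qualitative case studies
```

This kind of purposeful, stratified selection trades statistical representativeness for contrast between high and low implementers, which is what the in-depth case-study design requires.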

A modified snowball sample was used to recruit participants, beginning with local site leaders and directors from both primary care and specialty care; e-consult programs straddle multiple clinical divisions, so some sites had multiple leaders. Interview participants were asked to identify specialists, PCPs, and support staff (nurse practitioners, pharmacists, and medical support assistants) engaged in initiatives. The rationale for conducting interviews at a small, but purposefully selected, sample of sites was to focus on obtaining an in-depth understanding of the differences in context in which implementation occurred, and how these differences might be related to implementation success.17-19

Data and Analysis

To identify a subset of high-probability CFIR constructs, we first conducted a Web-based survey (available in eAppendix A [eAppendices available at www.ajmc.com]) of e-consult pilot site leaders, asking them to rate the relevance of CFIR constructs to e-consult implementation. The initial CFIR survey was returned by all 21 e-consult site leaders. Of the 37 CFIR constructs, 14 were rated as important or very important by at least 90% of participants (Table 1). An interview guide (eAppendix B) was developed around those constructs and updated iteratively, a standard accepted practice in qualitative evaluations.20,21 Prior to conducting the interviews, analysts participated in 2 in-person, 2-day CFIR qualitative analysis training meetings, which included conducting CFIR ratings, group debriefings, and discussions of ratings. Interviews were conducted by telephone by an interviewer and a note-taker and were digitally recorded. Interview pairs reviewed and clarified interview notes post interview, referring to recordings as needed. The pairs then independently coded interview notes from each participant according to CFIR constructs, ensuring that notes were consistent with the definitions of the CFIR constructs.
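The construct-screening step (retaining the constructs rated important or very important by at least 90% of the 21 site leaders) amounts to a simple threshold filter, sketched below. The construct names are real CFIR constructs, but the survey fractions assigned to them here are hypothetical examples.

```python
# Illustrative screen of CFIR constructs by site-leader survey ratings.
# The fractions below are hypothetical, not the study's survey results.

# For each construct, the fraction of the 21 site leaders who rated it
# "important" or "very important" on the Web-based survey.
ratings = {
    "compatibility": 20 / 21,
    "networks_and_communications": 21 / 21,
    "available_resources": 19 / 21,
    "access_to_knowledge_and_information": 20 / 21,
    "relative_advantage": 15 / 21,   # below threshold -> dropped
    "cosmopolitanism": 10 / 21,      # below threshold -> dropped
}

THRESHOLD = 0.90  # retained if >= 90% rated important/very important
key_constructs = [c for c, frac in ratings.items() if frac >= THRESHOLD]
print(sorted(key_constructs))
```

In the study itself, this screen reduced the full set of 37 CFIR constructs to the 14 around which the interview guide was built.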

Following coding of the interview responses, the pairs rated the influence of each construct in the organization (positive or negative) and the magnitude or strength of its influence22 (–2, –1, 0, +1, +2) using established criteria (Table 2). Pairs distinguished constructs that were not specifically mentioned (missing) from those with ambiguous or neutral effects (rated 0). Following independent coding, pairs convened via phone or in person to resolve discrepancies and reach consensus, based on consensual qualitative research methods.20,21 Using ratings across participants and participants’ roles (some participants’ responses were weighted more heavily than others), pairs derived an overall rating for each construct for each site, and noted whether there was significant variability for constructs (a difference of at least 2 points across 2 or more participants). Assigning ratings to the qualitative interview data in this way allows for a systematic, rapid comparison of findings across sites.23 A matrix of ratings for all constructs across sites was developed and used to examine the extent to which constructs were more likely to be rated as negative or zero/mixed among sites with a low volume of e-consults and more likely to be rated as positive at sites with a high volume.14
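The matrix comparison described above can be sketched as follows, assuming site-level consensus ratings on the –2 to +2 scale. The site labels, constructs shown, and rating values are hypothetical, and the decision rule (all high-volume sites positive, no low-volume site positive) is one simple reading of the comparison, not the study's exact criterion.

```python
# Illustrative sketch: compare site-level CFIR construct ratings to flag
# constructs that distinguish high- from low-volume sites.
# All ratings below are hypothetical, not the study's data.

# Per-site consensus ratings on the -2..+2 scale (absent key = not mentioned).
site_ratings = {
    "high_A": {"compatibility": +2, "training": +1, "leadership_engagement": -1},
    "high_B": {"compatibility": +1, "training": +2, "leadership_engagement": -1},
    "low_C":  {"compatibility": -1, "training": 0,  "leadership_engagement": -2},
    "low_D":  {"compatibility": -2, "training": -1, "leadership_engagement": -1},
}

def distinguishing(constructs, ratings):
    """A construct 'distinguishes' if every high-volume site rates it positively
    and no low-volume site does (one reading of the matrix comparison)."""
    out = []
    for c in constructs:
        highs = [r[c] for s, r in ratings.items() if s.startswith("high") and c in r]
        lows = [r[c] for s, r in ratings.items() if s.startswith("low") and c in r]
        if highs and all(v > 0 for v in highs) and all(v <= 0 for v in lows):
            out.append(c)
    return out

constructs = ["compatibility", "training", "leadership_engagement"]
print(distinguishing(constructs, site_ratings))  # compatibility and training qualify
```

Note how leadership_engagement is rated negatively everywhere in this toy matrix, so it does not distinguish the groups; this mirrors the study's finding that some constructs were problematic at both high- and low-volume sites.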

RESULTS

Thirty-seven interviews were completed with participants across 8 sites (Table 3). At all sites, a minimum of 3 people were interviewed, including e-consult site leader(s). In site-level CFIR ratings, 3 CFIR constructs had negative ratings in both low- and high-volume sites: design quality and packaging (perceptions of how the intervention is bundled and presented), leadership engagement, and goals and feedback, suggesting that these might be areas of concern for VHA. Nevertheless, the high-volume sites were able to overcome these challenges. Specifically, 4 CFIR constructs had more positive ratings at high-volume sites and more negative, neutral, or mixed ratings at low-volume sites, suggesting they might be critical implementation determinants: 1) compatibility, 2) networks and communications, 3) available resources (specifically training), and 4) access to knowledge and information. Differences between the low- and high-volume sites for each of these constructs are described below, and more examples are provided in Table 4.

Compatibility. Compatibility refers to the degree of tangible fit between meaning and values attached to the intervention, as well as those of the individuals’ own norms, values, perceived risks, and needs in the context of how the intervention fits with existing work flows and systems.14 Participants’ opinions on compatibility varied at low-volume sites. Some PCPs perceived e-consults as adding to their workload and were not happy with the transfer of responsibility for certain tasks: “I feel like they’ve tried to transfer a lot of the work and basically [are] making the PCP a clerical person to collect and collate and put all this data together.” Others at low-volume sites saw the potential for e-consults to make a difference, but were frustrated with the need to account for numbers of e-consults: “Here’s what drives me nuts. We have always done e-consults here. We just didn’t call them e-consults…Then suddenly someone gave them a name—e-consults—and someone decided we could measure them. Had to change the process to deliver advice so they could get counted…[the] number of e-consults isn’t the be-all, end-all. [We] do [e-consults] to decrease visits. Appropriate to measure prompt access, not number [of e-consults].”

For high-volume sites, e-consults were more consistently described as good for work flow by streamlining existing consult processes. One e-consult site leader from a high-volume site thought the process was very efficient: “I love it; I think it’s fantastic. There are many times things come up and I would like opinions on and get notes in [the] chart but I don’t think the provider needs to see the patient. I can do it when I have time to organize my time and thoughts…Most of the time this is faster than [a] face-to-face appointment.”

Further examination of the differences in context between the low- and high-volume sites suggests why perceptions of the compatibility of e-consults with existing processes differed: the high-volume sites incorporated e-consults in ways that improved the efficiency of operations, whereas the low-volume sites did not. Specifically, high-volume sites spent considerable time and effort tailoring the EHR templates so they could be completed easily and quickly. One site hired a pharmacist to handle the additional workload needed to generate and follow up on e-consults. In contrast, low-volume sites did not take extra steps to facilitate implementation.

 