Consult Coordination Affects Patient Experience

Given that accountable care organizations (ACOs) will be rated on patient experience and wait times for specialist consults are associated with patient satisfaction, ACOs should monitor this process.
Published Online: March 10, 2017
Steven D. Pizer, PhD; Michael L. Davies, MD; and Julia C. Prentice, PhD
ABSTRACT

Objectives: The Medicare accountable care organization (ACO) program financially rewards ACOs for providing high-quality healthcare, and also factors in the patient experience of care. This study examined whether administrative measures of wait times for specialist consults are associated with self-reported patient satisfaction. 

Study Design: Analyses used administrative and survey data from a clinically integrated healthcare system similar to an ACO. 

Methods: Veterans Health Administration (VHA) data from 2012 was obtained. Administrative access metrics included the number of days between the creation of the consult request and: 1) first action taken on the consult, 2) scheduling of the consult, and 3) completion of the consult. The Survey of Healthcare Experiences of Patients—which is modeled after the Consumer Assessment of Healthcare Providers and Systems family of survey instruments used by ACOs to measure patient experience—provided the outcome measures. Outcomes included general VHA satisfaction measures and satisfaction with timeliness of care, including wait times for specialists and treatments. Logistic regression models predicted the likelihood of patients reporting being satisfied on each outcome. Models were risk adjusted for demographics, self-reported health, and healthcare use. 

Methods: Veterans Health Administration (VHA) data from 2012 were obtained. Administrative access metrics included the number of days between the creation of the consult request and: 1) first action taken on the consult, 2) scheduling of the consult, and 3) completion of the consult. The Survey of Healthcare Experiences of Patients, which is modeled after the Consumer Assessment of Healthcare Providers and Systems family of survey instruments used by ACOs to measure patient experience, provided the outcome measures. Outcomes included general VHA satisfaction measures and satisfaction with timeliness of care, including wait times for specialists and treatments. Logistic regression models predicted the likelihood of patients reporting being satisfied on each outcome. Models were risk adjusted for demographics, self-reported health, and healthcare use. 

Results: Longer waits for scheduling of consults and for completed consults were significantly associated with decreased patient satisfaction.

Conclusions: Because patients often report high levels of powerlessness and uncertainty while waiting for consultation, these wait times are an important patient-centered access metric for ACOs to consider. ACOs should have systems and tools in place to streamline the specialist consult referral process and increase care coordination.
 
The American Journal of Accountable Care. 2017;5(1):23-28
The Medicare accountable care organization (ACO) program financially rewards ACOs for providing high-quality healthcare. Participating providers share in savings if healthcare spending is kept below targets set by Medicare, which can be achieved by preventing medical errors and duplication of services.1,2 The success of ACOs assumes that a structure of clinical integration and appropriately targeted incentives will improve coordination of care and quality.3 A key measure of quality of care is patient experience of care, including perceived coordination. Using the Consumer Assessment of Healthcare Providers and Systems (CAHPS) family of survey instruments, ACOs are required to collect information on patient experiences, such as the ability to obtain timely care and access to specialists.4 

Care coordination between primary and specialty care has received little attention, even though specialty care accounts for more healthcare resource use than primary care and specialists outnumber primary care physicians in the United States.5 Primary care and specialist providers report inadequate communication with each other about referrals, which compromises their ability to provide high-quality care6 and may also negatively affect self-reported patient satisfaction. 

Patients report high levels of uncertainty and powerlessness during the period of time between a requested referral and subsequent action, as they wait for clarity on disease outcomes.7-9 Consequently, if patients experience inadequate coordination between primary and specialty care, their experiences may suffer. Previous research has found that shorter wait times for appointments are not always the most important priority for patients. Patients are willing to wait longer to get an appointment at a convenient time or to see a preferred provider, especially for low-worry, long-standing conditions. Yet, when there is a new health concern, faster access becomes a higher priority.10-12

The Veterans Health Administration (VHA), the largest clinically integrated system similar to an ACO in the United States that coordinates primary and specialty care, is a source of data that can provide important insights into the effect of the consult process on patient experience. In 2014, the VHA had over 9 million enrollees and provided or coordinated over 92 million outpatient encounters.13 Since 2009, the VHA has consistently measured patient experiences and satisfaction with the Survey of Healthcare Experiences of Patients (SHEP), which is modeled after the CAHPS survey that ACOs use to measure patient experiences.4,14 This paper investigates the effect of several different measures of VHA consult wait times on self-reported patient satisfaction. This relationship offers a specific point of intervention for ACOs to consider when looking for ways to measurably improve patient experiences. 

METHODS

As discussed in detail below, analyses used administrative data on wait times for consults in the VHA to predict self-reported patient satisfaction with care. Study approval for human subjects was obtained from the VA Boston Healthcare System Institutional Review Board. 

Administrative Consult Wait Time Measures

The VHA’s electronic consult system was implemented in 1999, and its use is mandated for all consultation requests (according to internal VHA data). Data for this study were extracted from fiscal year (FY) 2012. The consult system automatically records time stamps when consult-related administrative events occur; these time stamps can be used to compute the number of elapsed days between consult creation and first action and/or scheduling. An additional measure captured total consult resolution time (Figure and Table 1): a consult is considered resolved when the appointment (if performed) has been completed and the report has been written and signed. Time stamps are also recorded when a consult is updated, discontinued, or returned to the sending service for clarification. When a consult is returned for clarification of the request, the wait time clock is not reset; the elapsed time runs from consult creation through appointment scheduling to consult completion. In contrast, discontinued consults stop the clock.
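
As an illustration of how these elapsed-day measures can be computed from event time stamps, consider the following sketch; the column names and values are hypothetical, not the actual VHA consult schema.

```python
import pandas as pd

# Hypothetical consult records; column names and values are illustrative,
# not the actual VHA consult schema.
consults = pd.DataFrame({
    "consult_id": [1, 2, 3],
    "created": pd.to_datetime(["2012-01-03", "2012-01-05", "2012-01-09"]),
    "first_action": pd.to_datetime(["2012-01-04", "2012-01-05", "2012-01-12"]),
    "scheduled": pd.to_datetime(["2012-01-10", "2012-01-06", None]),
    "resolved": pd.to_datetime(["2012-01-20", "2012-01-15", None]),  # NaT = discontinued
})

# Elapsed days between consult creation and each downstream time stamp.
for event in ["first_action", "scheduled", "resolved"]:
    consults[f"days_to_{event}"] = (consults[event] - consults["created"]).dt.days

# Discontinued consults (no scheduling/resolution time stamp) stop the clock
# and yield missing values rather than a completed-consult wait.
print(consults[["consult_id", "days_to_first_action",
                "days_to_scheduled", "days_to_resolved"]])
```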

Standardization of the consult system enables distinctions between documents used for traditional clinical consultation and those used for other administrative purposes, such as requesting transportation.15 Accordingly, we narrowed the 2012 data to consults for clinical services by excluding administrative consults and non-VHA care consults. 

Following our previous work, wait times were weighted by national proportions based on FY2011 data. Weights were developed from the frequency of different consult appointment types. If a station did not have a consult request for every type of appointment in a month, the remaining appointment weights were rescaled so they summed to 100.16,17 Wait time measures were used in 2 ways in statistical models: 1) as a continuous variable; and 2) categorized roughly into quartiles, with the lowest quartile used as the reference group.
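
A minimal sketch of the weighting and quartile steps, under assumed column names and illustrative weights:

```python
import pandas as pd

# Hypothetical station-month waits by appointment type; "weight" stands in for
# an FY2011-style national proportion, and all values are illustrative.
df = pd.DataFrame({
    "station": ["A", "A", "B", "C", "D"],
    "month": [1, 1, 1, 1, 1],
    "appt_type": ["cardiology", "dermatology"] + ["cardiology"] * 3,
    "weight": [60.0, 25.0, 60.0, 60.0, 60.0],
    "avg_wait_days": [18.0, 9.0, 30.0, 12.0, 21.0],
})

# If a station lacks some appointment types in a month, rescale the remaining
# weights so they sum to 100 within that station-month.
df["weight_adj"] = df.groupby(["station", "month"])["weight"].transform(
    lambda w: 100 * w / w.sum()
)

# Weighted average wait per station-month.
waits = (
    df.assign(wtd=df["weight_adj"] * df["avg_wait_days"] / 100)
      .groupby(["station", "month"], as_index=False)["wtd"].sum()
      .rename(columns={"wtd": "weighted_wait"})
)

# Categorize roughly into quartiles; the lowest quartile is the reference group.
waits["wait_quartile"] = pd.qcut(
    waits["weighted_wait"], 4, labels=["q1", "q2", "q3", "q4"]
)
print(waits)
```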

Sample Selection 

Consult wait times were linked to self-reported satisfaction using the 2012 SHEP. Managed by the VHA Office of Analytics and Business Intelligence, SHEP is an ongoing nationwide survey that collects patient feedback on recent episodes of VHA inpatient or outpatient care. For outpatient care, a simple random sample of patients with completed appointments at VHA facilities is selected each month (according to internal VHA Support Service Center data). The overall response rate was 53%, and respondents came from all VHA medical centers (n = 130). 

Different sample selection rules were applied to each consult wait measure. First, we flagged all individuals whose SHEP visit date matched the date of a completed/updated consult. In addition, we required the station (medical center code) in SHEP to match the station of the completed consult. This was the sample for the completed consult wait measure (n = 28,328). The wait assigned to each respondent was the facility average for all consults resolved in that month. Not surprisingly, because a clinic visit makes a patient eligible to be contacted for SHEP, 90% of the individuals in this sample had a consult with a completed/updated status rather than a discontinued or canceled status. 

For the next 2 measures, days to first action and days to scheduled consult, we flagged all individuals whose SHEP visit date matched the date a consult was initiated (n = 44,387). These requests were linked to the receiving (not the sending) station because receiving stations actually do the scheduling. We computed a forward-looking facility average wait time for all consults requested at that receiving station in the month. 
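
The linkage logic for the completed-consult measure might look like the following sketch; the frames, column names, and toy values are hypothetical rather than the actual SHEP or consult schemas.

```python
import pandas as pd

# Hypothetical toy data; column names are illustrative, not the actual schemas.
shep = pd.DataFrame({
    "respondent_id": [101, 102],
    "visit_date": pd.to_datetime(["2012-03-05", "2012-03-08"]),
    "station": ["A", "B"],
})
consults = pd.DataFrame({
    "station": ["A", "A", "B"],
    "completed_date": pd.to_datetime(["2012-03-05", "2012-03-20", "2012-03-08"]),
    "days_to_resolved": [12.0, 20.0, 7.0],
})

# Flag respondents whose SHEP visit date and station match a completed consult.
flagged = shep.merge(
    consults[["station", "completed_date"]].drop_duplicates(),
    left_on=["station", "visit_date"],
    right_on=["station", "completed_date"],
    how="inner",
)

# Assign each respondent the facility average wait over all consults
# resolved at that station in that month.
consults["month"] = consults["completed_date"].dt.to_period("M")
facility_avg = (
    consults.groupby(["station", "month"], as_index=False)["days_to_resolved"]
            .mean()
            .rename(columns={"days_to_resolved": "avg_wait"})
)

flagged["month"] = flagged["visit_date"].dt.to_period("M")
analytic = flagged.merge(facility_avg, on=["station", "month"], how="left")
print(analytic[["respondent_id", "station", "avg_wait"]])
```

The days-to-first-action and days-to-scheduled measures would follow the same pattern, matching instead on the consult initiation date and the receiving station.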

Patient Satisfaction Dependent Variables

Satisfaction measures were selected and operationalized following previous work.18 Satisfaction with timeliness of care was measured by asking respondents how often they were able to get VHA appointments as soon as they thought they needed care, excluding times they needed urgent care. Access to VHA tests, treatments, and appointments with VHA specialists was measured by asking how easy it was to get this care in the last 12 months. Response options for these 3 measures were “always,” “usually,” “sometimes,” and “never”; we estimated the likelihood of answering always/usually compared with sometimes/never. We also examined more general satisfaction measures that consult wait times may influence. General satisfaction was measured by asking respondents to rate VHA healthcare in the last 12 months on a scale of 0 to 10 and to rate satisfaction with their most recent VHA visit on a Likert scale ranging from 1 to 7 (higher numbers indicate greater satisfaction). We estimated the likelihood of a 9 or 10 rating compared with less than 9 on the first measure, and the likelihood of a 6 or 7 compared with less than 6 on the second measure. 
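
A minimal sketch of this dichotomization, assuming hypothetical variable names and response codings:

```python
import pandas as pd

# Hypothetical SHEP responses; variable names and codings are illustrative.
resp = pd.DataFrame({
    "timely_appt": ["always", "usually", "sometimes", "never"],
    "vha_rating_0_10": [10, 9, 7, 4],
    "recent_visit_1_7": [7, 6, 5, 2],
})

# "Always"/"usually" vs "sometimes"/"never" for the timeliness/access items.
resp["satisfied_timely"] = resp["timely_appt"].isin(["always", "usually"]).astype(int)

# 9-10 vs <9 on the 0-10 overall rating; 6-7 vs <6 on the most-recent-visit item.
resp["satisfied_overall"] = (resp["vha_rating_0_10"] >= 9).astype(int)
resp["satisfied_visit"] = (resp["recent_visit_1_7"] >= 6).astype(int)
print(resp)
```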

Risk Adjustors

Risk adjustors included age, gender, race/ethnicity, education level, number of visits to a doctor’s office in the last 12 months, and self-reported health status—all obtained from SHEP FY2012. Models also included month fixed effects to control for secular changes in wait times and a VHA medical center random effect to control for facility quality and case-mix differences. 

Analyses

Stata version 10.0 (StataCorp, College Station, Texas) was used to estimate logistic regression models predicting the dichotomized patient satisfaction variables. 
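
The models were estimated in Stata; purely as an illustration of the model structure, the sketch below fits an analogous logit in Python on simulated data (all variable names and the simulation itself are hypothetical). It includes risk adjustors and month fixed effects; the paper's VHA medical center random effect would require a mixed-effects estimator and is omitted here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, entirely hypothetical data standing in for the linked SHEP sample.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "avg_wait": rng.gamma(4.0, 5.0, n),   # facility-average consult wait, days
    "month": rng.integers(1, 13, n),      # survey month (fixed effects)
    "age": rng.integers(25, 90, n),
    "female": rng.integers(0, 2, n),
    "visits_12mo": rng.poisson(6, n),
    "poor_health": rng.integers(0, 2, n),
})
# Simulate a satisfaction outcome that declines with the facility-average wait.
logit_p = 1.5 - 0.03 * df["avg_wait"] - 0.3 * df["poor_health"]
df["satisfied"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with risk adjustors and month fixed effects. A facility
# random effect would need a mixed model (e.g., statsmodels'
# BinomialBayesMixedGLM) and is left out for brevity.
model = smf.logit(
    "satisfied ~ avg_wait + age + female + visits_12mo + poor_health + C(month)",
    data=df,
).fit()
print(model.summary())
```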

RESULTS

The SHEP respondents selected for this sample reflect the larger VHA patient population: respondents were predominantly male, in poor health, and frequent healthcare users. Satisfaction with VHA care was high, with nearly 80% of respondents reporting they usually or always received appointments, treatment, or specialist care in a timely fashion. Additionally, 81% expressed the top 2 satisfaction levels for their most recent visit, and 58% expressed the highest satisfaction levels with VHA care in the last 12 months (Table 2).
