The American Journal of Managed Care, July 2014

Success of Automated Algorithmic Scheduling in an Outpatient Setting

Patrick R. Cronin, MA; and Alexa Boer Kimball, MD, MPH
Algorithmically generated booking recommendations based on customizable physician assumptions and predictive modeling modestly increased productivity without overburdening physicians in a randomized controlled trial.
Objectives
To determine whether algorithmically generated double-booking recommendations could increase patient volume per clinical session without increasing the burden on physicians.

Study Design
A randomized controlled trial was conducted with 519 clinical sessions for 13 dermatologists from December 1, 2011, through March 31, 2012.

Methods
Sessions were randomly assigned to “Smart-Booking,” an algorithm that generates double-booking recommendations using a missed-appointment (no-shows plus same-day cancellations) predictive model (c-statistic 0.71), or to a control arm in which usual booking rules were applied. The primary outcomes were the average number and variance of arrived patients per session, after controlling for physician. In addition, physicians received a survey after each session to quantify how busy they felt during that session.

Results
A total of 257 sessions were randomized to Smart-Booking and 262 to control booking. Using a generalized multivariate linear model, the average number of arrived patients per session was higher in the Smart-Booking arm than in the control arm (15.7 vs 15.2; difference between groups 0.42; 95% CI, 0.08-0.75; P = .014). The variance was also higher in the intervention arm than in the control arm (3.72 vs 3.33; P = .38). The survey response rate was 92%, and physicians reported being similarly busy in each study arm.

Conclusions
Algorithmically generated double-booking recommendations for dermatology clinical sessions using individual physician assumptions and predictive modeling can increase the number of arrived patients without overburdening physicians, and the approach is likely scalable to other settings.

Am J Manag Care. 2014;20(7):570-576
Take-Away Points
  • Predictive modeling can identify patients likely to be no-shows.

  • Advance scheduling systems can be successfully implemented.

  • The role of clinical scheduling decision support should be further studied.
Rising healthcare costs and federal budget deficits continue to put pressure on physicians to deliver care more efficiently.1-3 Patients who miss outpatient appointments without adequate prior notification, colloquially called “no-shows,” are a frequent source of complaint because they decrease efficiency, have a negative financial impact, and waste appointment slots that could be used by others. One study at a family practice residency clinic concluded that no-shows resulted in a 3% to 14% revenue loss.4 In an era in which access to primary care and several other specialties is constrained, optimal utilization of physician time is paramount.5,6

No-shows have a variety of causes. Logistical issues such as an inability to miss work, find child care, or find transportation are the reason for many patients.7-9 Simply forgetting is another obvious problem8-11 and often the most frequent cause of missed appointments (48%9 and 39%8 in 2 studies, for instance). Interestingly, patients’ perceptions or concerns about their visit also affect adherence: hesitation to hear bad news, to endure an uncomfortable procedure, or to encounter perceived disrespect from the medical establishment have all been reported.7,10 Self-resolving symptoms have also been cited as a cause of no-shows.7,8,11 And, not surprisingly, the greater the number of “wait days”—the days between the scheduling of the appointment and the appointment date—the greater the risk of a no-show.7,11,12

There are numerous no-show interventions, but most rely on appointment reminders to decrease no-shows or on double booking to compensate for them. Reminder interventions discussed in the literature include staff phone reminders, automated phone reminders, text messages, mailed reminders, other electronic reminders, and financial penalties, all of which have been shown to work to some degree in some settings.13-15 A systematic review demonstrated a 39% reduction in the non-arrival rate after manual reminders and a 29% reduction for automated reminders; the reminders were shown to be cost-effective.13 Several randomized controlled trials (RCTs) have also concluded that reminders meaningfully reduce no-show rates.8,9,16-21 For example, a 3-arm outpatient RCT resulted in no-show rates of 23.1% with no reminders, 17.3% for automated phone reminders, and 13.6% with staff phone reminders.16 However, in some settings automated reminders have been ineffective.8,22,23

Overbooking is a common strategy, but most clinics apply it nonscientifically, along the lines of, “If we double book early, we will catch up later.” Two groups have published the results of nontraditional scheduling system implementations that used overbooking. A pediatric ophthalmology clinic implemented a system that algorithmically opened appointment slots based on predicted patient demand, physician supply, and scheduling rules; however, their implementation was preliminary at the time of publication.24 In another study, Israeli dermatologists reduced the baseline nonattendance rate from 32.9% to 27.9% using managed overbooking and service centralization.25

Open access, predictive modeling, and advanced scheduling models have been proposed to improve clinic efficiency. A systematic review of 24 open access studies concluded that no-show rates were lower in practices with a prior baseline >15%; however, other outcome measures were mixed.26 No-show prediction models have typically been designed using association rules,27 logistic regression,28,29 and a combination approach.30 Two studies reported C statistics of 0.82 and 0.84; however, neither was externally validated.28,29 Others have designed and validated advanced scheduling systems that maximize clinic utility through computer simulation and calculation.24,27,28,31-39

Given the relative lack of published outcomes of advanced scheduling system implementations, we developed and validated a model to predict no-shows and same-day cancellations and conducted a randomized controlled trial to determine the following: Can an automated algorithmic approach to double-booking dermatology appointments increase the number of arrived patients per session without overburdening the physicians?


Study Setting

A randomized controlled trial was conducted from December 1, 2011, through March 31, 2012, at the Department of Medical Dermatology at Massachusetts General Hospital (MGH) under the approval of the Partners Institutional Review Board. The department hosts approximately 80 medical dermatology clinical sessions per week, for a total volume of approximately 50,100 patient visits per year. At the time of the study, 8 full-time physicians averaged 6 sessions weekly, and 22 physicians worked 1 to 5 sessions weekly; the practice also employs residents. Clinical sessions run from 8:00 am to 12:00 pm or 1:00 pm to 5:00 pm, and each appointment slot is 15 minutes long except for 30-minute procedures. Physician compensation is based on Relative Value Unit productivity.

Prior to the study, the missed appointment rate was 16.5%, and given this rate, some clinicians routinely double booked their sessions. Appointments were booked in the IDX electronic booking system by 10 schedulers employed by the practice at the front desk or in a small call center adjacent to the practice. Prior to the intervention, schedulers would double book by attempting to spread additional patients evenly across the schedule, and there was an average of 55 days between the scheduling and arrival of each new patient.


Sixteen of 30 physicians were excluded from the study for the following reasons: 7 worked 1 session a week and lacked booking flexibility, 4 were in another scheduling study, 3 worked primarily at other practice sites, 1 was on maternity leave, and 1 new physician’s schedule was not yet standardized. The other 14 physicians were asked to join the study, and all consented to participate. Double booking in designated urgent access, procedural, evening, and weekend clinics was excluded from this pilot.

Develop Missed Appointment Predictive Model

A missed-appointment (no-shows plus same-day cancellations) predictive model was developed. Potentially predictive variables were identified through literature review, discussion with the practice leadership, and evaluation of existing administrative data sources. The variables identified were appointment type, day of week, wait days, language, ethnicity, age, historical diagnoses, insurance, and appointment arrival history. One year of data on approximately 54,000 dermatology appointments was collected to develop and validate the model (see eAppendix A for details of the methodology). In addition, missed-appointment models were developed using the same methodology for 5 additional departments to quantify the predictive model’s generalizability.
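The study’s actual model specification and weights are given in eAppendix A and are not reproduced here; as a minimal sketch of the general approach, a logistic model with entirely hypothetical coefficients over a few of the variables named above might look like the following.

```python
import math

# Hypothetical coefficients for illustration only -- the study's actual
# model and weights (eAppendix A) are not reproduced here.
INTERCEPT = -2.0
COEF_WAIT_DAYS = 0.01        # risk rises with days between booking and visit
COEF_PRIOR_MISS_RATE = 2.5   # patient's historical missed-appointment rate
COEF_NEW_PATIENT = 0.4       # new patients assumed riskier than established

def missed_appt_probability(wait_days, prior_miss_rate, is_new_patient):
    """Logistic model of P(no-show or same-day cancellation) for one appointment."""
    z = (INTERCEPT
         + COEF_WAIT_DAYS * wait_days
         + COEF_PRIOR_MISS_RATE * prior_miss_rate
         + COEF_NEW_PATIENT * (1 if is_new_patient else 0))
    return 1.0 / (1.0 + math.exp(-z))
```

Under these illustrative weights, the predicted miss probability rises with wait days and with the patient’s historical miss rate, consistent with the risk factors cited in the literature review above.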

Develop Smart-Booking Algorithm

The Smart-Booking approach was developed to algorithmically generate double-booking recommendations using individual physician assumptions and missed-appointment probabilities. The resulting system was a stochastic algorithm that produced reports identifying the slots in which appointments should be double-booked, the appointment types to book in open slots, and the open slots that should be blocked.
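The paper does not publish the algorithm itself. A deliberately simplified, deterministic sketch of the core idea is shown below: given per-slot miss probabilities and a physician’s assumed target of arrived patients per session, greedily recommend double-booking the slots most likely to go unfilled. The target and cap parameters are assumptions for illustration, not the study’s actual physician assumptions.

```python
def smart_booking_recommendations(slot_miss_probs, target_arrivals, max_double_books):
    """Recommend slot indices to double-book, greedily picking the slots most
    likely to have a missed appointment until the expected number of arrived
    patients reaches the physician's target (all parameters illustrative)."""
    expected_arrivals = sum(1 - p for p in slot_miss_probs)
    # Rank slots by miss probability, highest first.
    ranked = sorted(range(len(slot_miss_probs)),
                    key=lambda i: slot_miss_probs[i], reverse=True)
    recommendations = []
    for i in ranked:
        if expected_arrivals >= target_arrivals or len(recommendations) >= max_double_books:
            break
        recommendations.append(i)
        # A double-booked patient may also miss; assume the same miss
        # probability as the slot's original patient.
        expected_arrivals += 1 - slot_miss_probs[i]
    return sorted(recommendations)
```

For example, with slot miss probabilities [0.1, 0.5, 0.2, 0.4], a target of 4 arrived patients, and a cap of 2 double-bookings, the sketch recommends the 2 riskiest slots (indices 1 and 3).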


Clinic sessions were randomized 1:1 to the intervention and control, stratified by physician, using a computer-generated list of random numbers. In the control, appointments were booked based on prior methodology; in the intervention, schedulers booked using the Smart-Booking recommendations. Physician scheduling assumptions were collected from all participating physicians and then adjusted based on input from practice leadership.
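The study used a computer-generated list of random numbers for its 1:1, physician-stratified allocation; as a hedged sketch (not the study’s actual list), stratified randomization can be produced by shuffling each physician’s sessions and splitting the shuffled list in half.

```python
import random

def stratified_randomization(sessions_by_physician, seed=2011):
    """Assign each physician's sessions 1:1 to intervention or control by
    shuffling within the physician stratum. The seed is arbitrary and is
    fixed only so the allocation list is reproducible."""
    rng = random.Random(seed)
    assignment = {}
    for physician, sessions in sessions_by_physician.items():
        shuffled = sessions[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for session in shuffled[:half]:
            assignment[session] = "intervention"
        for session in shuffled[half:]:
            assignment[session] = "control"
    return assignment
```

Because the split happens within each physician’s list, every physician contributes equal (or near-equal, for odd counts) numbers of sessions to each arm.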

The Smart-Booking system was used to generate 2 reports. The Follow-up Double-Booking Report (eAppendix B) identified the control and intervention sessions, and in the intervention sessions, time slots were identified to double book follow-up appointments for the next 2 to 60 days. The Next-Day Report (eAppendix C) identified intervention time slots to double book new and follow-up appointments, to convert empty time slots into new or follow-up appointments, and to block empty slots because the schedule was overbooked.

Booking recommendations were electronically delivered at 5:30 am each weekday to the department. Every day the schedulers printed the Follow-up Double-Booking report and booked accordingly. In addition, the Next-Day report was used to modify the next day’s schedule. Schedulers did double-book in the control arm based on each physician’s historical booking maximums. Additionally, patients were double-booked in both arms beyond recommended limits on occasion by physician request or for urgent issues.

Study Outcomes

Data collected from the IDX scheduling system were used to calculate the number of normalized arrived patients per 4-hour session (referred to as arrived patients) (eAppendix D). The primary outcomes were the average and variance of the arrived patients in each study arm. As a secondary outcome, the average arrived patients were calculated for each individual physician. In addition, the following survey was sent to each physician to quantify how busy they felt after each session.

Your clinic session was:

-3 Too slow
0 Neither too slow nor too busy
+3 Too busy

Sample Size

Based on historical data, the average number of arrived patients per session was 14.0, with a standard deviation of 3.0. The study was powered to detect a difference of 1 additional arrived patient per physician session with alpha = 0.05 and power = 0.80, giving an initial sample size of 141 sessions per study arm. After 141 sessions had accrued in each arm without physician or staff complaints, we continued the study for an additional 10 weeks to gain further physician-specific data, resulting in 654 randomized clinical sessions.
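The sample size above is consistent with the standard two-sample normal-approximation formula, n = 2((z_{1-α/2} + z_{power})·σ/δ)², which with δ = 1, σ = 3, α = 0.05, and power = 0.80 gives roughly 141 sessions per arm. A stdlib-only sketch of the calculation:

```python
from statistics import NormalDist

def sessions_per_arm(delta=1.0, sd=3.0, alpha=0.05, power=0.80):
    """Two-sample normal-approximation sample size per arm:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta) ** 2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

n = sessions_per_arm()  # about 141.3, consistent with the 141 reported
```

Halving the standard deviation or doubling the detectable difference would quarter the required number of sessions, which is why the per-physician secondary analyses needed the extended accrual period.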

Data Analysis

We fit a generalized multivariate linear model to determine the impact of the intervention, with arrived patients as the dependent variable, study arm as the independent variable, and physician as a random effect. An F test was used to assess the significance of the difference in variance between the arms. The physician survey results were plotted for all sessions and summarized descriptively.

One physician was removed from the primary analysis because Smart-Booking was incorrectly calculated and implemented. The threshold for total number of booked patients in the intervention arm was set lower than intended; therefore, when the physician wanted to add extra patients into a schedule, they were systematically added to the control sessions. Sensitivity analyses were performed that included this physician’s data.

