Preventing Patient Absenteeism: Validation of a Predictive Overbooking Model

December 10, 2015

Electronic health record data can be used to predict patient absenteeism accurately. Predictive overbooking of missed appointments can significantly increase service utilization.

ABSTRACT

Objectives: To develop a model that identifies patients at high risk for missing scheduled appointments (“no-shows” and cancellations) and to project the impact of predictive overbooking in a gastrointestinal endoscopy clinic—an exemplar resource-intensive environment with a high no-show rate.

Study Design: We retrospectively developed an algorithm that uses electronic health record (EHR) data to identify patients who do not show up to their appointments. Next, we prospectively validated the algorithm at a Veterans Affairs healthcare network clinic.

Methods: We constructed a multivariable logistic regression model that assigned a no-show risk score optimized by receiver operating characteristic curve analysis. Based on these scores, we created a calendar of projected open slots to offer to patients and compared the daily performance of predictive overbooking with fixed overbooking and typical “1 patient, 1 slot” scheduling.

Results: Data from 1392 patients identified several predictors of no-show, including previous absenteeism, comorbid disease burden, and current diagnoses of mood and substance use disorders. The model correctly classified most patients during the development (area under the curve [AUC] = 0.80) and validation phases (AUC = 0.75). Prospective testing in 1197 patients found that predictive overbooking averaged 0.51 unused appointments per day versus 6.18 for typical booking (difference = –5.67; 95% CI, –6.48 to –4.87; P <.0001). Predictive overbooking could have increased service utilization from 62% to 97% of capacity, with only rare clinic overflows.

Conclusions: Information from EHRs can accurately predict whether patients will no-show. This method can be used to overbook appointments, thereby maximizing service utilization while staying within clinic capacity.

Am J Manag Care. 2015;21(12):902-910

Take-Away Points

Absenteeism for scheduled clinical procedures and visits is common and costly. We developed a predictive overbooking system that uses patient- and clinic-level electronic health record data to project future no-shows and cancellations with a high degree of accuracy. We made projected open appointments available to patients willing to be seen promptly.

  • Previous absenteeism, comorbid disease burden, and current mental illness accurately predict no-shows and cancellations.
  • Predictive overbooking could improve service utilization rates from 62% to 97%, allowing dozens of additional patients to be seen weekly.
  • Clinic capacity could be maximized on most days, with minimal and manageable clinic overflow.

Absenteeism for scheduled outpatient visits and procedures—also called “no-show”—occurs frequently in healthcare systems worldwide, resulting in treatment delays, poor use of clinic resources, and significant financial loss.1-13 No-show rates at outpatient clinics range from 12% to 80%, resulting in revenue losses exceeding 20%.13 Patient no-shows diminish clinical productivity, increase appointment lead times for others in the queue, lower patient satisfaction, and reduce quality of care.1-12,14,15

There are many approaches to preventing no-shows, including telephone reminders,16,17 home mailings,16,18,19 text messages,20 and patient navigator programs.16,21,22 However, these interventions yield modest and inconsistent improvements in attendance.16-18,22 Levying fines as a deterrent for no-shows achieves better success,7 but financial sanctions are suboptimal because they disproportionately impact patients with fewer resources.

Another approach is to schedule more patients than there are available appointments (ie, “overbook”).4,23-25 This method is used to maximize “perishable-asset” utilization in the travel and lodging industries by overbooking at a fixed, average historical no-show rate. However, fixed overbooking in healthcare settings can still overburden staff, increase patient wait times, lower patient satisfaction, and potentially increase no-show rates thereafter.8,22 Moreover, it is not acceptable to deny a scheduled service that directly impacts health or survival.

Because fixed overbooking is unlikely to meet the dynamic needs of healthcare settings, an optimal solution should account for each patient’s individualized risk of absenteeism—not just the average clinic no-show rate. Studies that have examined individual predictors of clinic no-shows reveal that patients who have missed appointments previously tend to be uninsured, unmarried, and younger; have active mental health comorbidities such as depression or substance abuse; have poor access to transportation; or have other socioeconomic problems.9,10,12,26-33 Appointment lead-time, urgency of the appointment, timing of the appointment, and clinic proximity also are associated with patient absenteeism.

These patient-level characteristics have been used in some studies to evaluate the impact of predictive (rather than fixed) overbooking in healthcare.14,24,25 Probabilistic computer simulations reveal that predictive overbooking may not only improve patient throughput and reduce staff idle time, but may also increase wait times and staff overtime on days when overbooking exceeds capacity.14,25 Notably, these models have not been tested in a real clinical setting. In this study, we developed a predictive model for no-shows at a Veterans Affairs (VA) healthcare network clinic and validated the model prospectively. This work is topical given recent concerns about scheduling in the VA healthcare system.34 As opposed to wait-listing patients, we sought to develop a means of seeing more patients more quickly. We also desired a model that would utilize electronic health records (EHRs), minimizing impact on patients and allowing for easy implementation across a variety of clinical settings.

We hypothesized that EHRs—a high-volume, highly varied, rapidly delivered “big data” resource—would allow us to employ diverse data points to project open appointments accurately in real time and to offer those spots to additional patients. We tested this approach in a gastrointestinal (GI) endoscopy clinic because it is a model high-throughput, resource-intensive environment commonly affected by patient no-shows.5,6,19,33,35-39 We surmised that if the predictive overbooking approach could work in a GI endoscopy clinic, then it might work in other clinical environments also affected by poor service utilization rates, with minimal clinic overflow.


METHODS

Study Overview

For phase 1 of this study, we used patient- and clinic-level data obtained retrospectively over an 8-month period to develop a predictive model for patient absenteeism. During phase 2, we validated the model using patient data obtained in real time over a 4-month period, testing how well the model projected openings in the clinic schedule and evaluating how using the model to direct scheduling would affect service utilization and patient throughput.


All patients in this study were US military veterans scheduled for all types of outpatient endoscopies (primarily esophagogastroduodenoscopy and colonoscopy) in the VA Greater Los Angeles Healthcare System, a geographically and demographically diverse network of 15 clinics serving 1.4 million veterans. In phase 1, we collected data on 1392 patients scheduled for GI procedures between November 2012 and June 2013; in phase 2, we collected data on an additional 1197 patients scheduled between July and October 2013 (see Table 1 for demographic details). All data were collected in a VA-approved database and obtained through automated searches of the VA EHR. Study design and procedures were formally reviewed and approved by the VA Institutional Review Board (VA IRB # CC 2013-040489). Because this study involved passive data collection, participants were not compensated and did not provide informed consent.

Predictor Variables

We selected predictor variables based on a review of the no-show literature.9,10,12,26-32 Patient data were obtained from the VA Computerized Patient Record System, including demographics, clinical diagnoses, and patient attendance histories (see Table 2 for a complete list). Demographic variables included age, race/ethnicity, level of VA cost coverage, and socioeconomic status (SES).

Clinical and treatment variables. In predicting no-shows for GI endoscopy procedures, we considered both GI-specific (eg, previous endoscopy) and generic variables (eg, recent history of depression). Raw text from patients’ active problem lists and procedure histories was automatically processed to flag International Classification of Diseases, Ninth Revision, Clinical Modification codes associated with particular diagnoses or treatments, and dichotomous variables for each relevant condition were generated. Clinical history reviews were limited to the most recent 3 years of data available for a given patient. We also calculated the Charlson comorbidity index score to capture overall disease burden for each patient.40
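The automated flagging step described above can be sketched as follows. The regular expression and the condition-to-code mapping here are illustrative assumptions for demonstration, not the study's actual extraction rules:

```python
import re

# Hypothetical ICD-9-CM code prefixes per condition; the paper does not
# publish its actual mapping, so these are assumptions for illustration.
CONDITION_CODES = {
    "mood_disorder": ("296", "300.4", "311"),
    "substance_use_disorder": ("303", "304", "305"),
}

def flag_conditions(problem_list_text):
    """Extract ICD-9-CM-style codes from raw problem-list text and return
    a dichotomous indicator (0/1) for each condition of interest."""
    codes = re.findall(r"\b\d{3}(?:\.\d{1,2})?\b", problem_list_text)
    return {
        condition: int(any(code.startswith(prefixes) for code in codes))
        for condition, prefixes in CONDITION_CODES.items()
    }
```

In this sketch, each patient's raw problem-list text yields one row of dichotomous predictors, matching the paper's description of generated per-condition variables.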

Appointment attendance history. Appointment attendance variables were obtained from the endoscopy clinic scheduling software endoPRO iQ (Pentax Medical, Montvale, New Jersey). We recorded appointment outcome (no-show vs attended) and presence or absence of any cancellations or no-shows for any GI appointment during the previous 2 years. We also computed a ratio for each patient of all cancelled or missed outpatient appointments (not just GI) across the entire VA healthcare system to the total number of outpatient appointments booked (see “cancellation proportion” in Table 2).
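The "cancellation proportion" described above reduces to a simple ratio; a minimal sketch follows (the zero-history fallback is our assumption, since the paper does not say how patients with no booking history were handled):

```python
def cancellation_proportion(missed_or_cancelled, total_booked):
    """Share of all VA outpatient appointments (any specialty) that a
    patient cancelled or missed, out of all appointments booked."""
    if total_booked == 0:
        return 0.0  # assumption: no booking history contributes no risk signal
    return missed_or_cancelled / total_booked
```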

Outcome Variable

We utilized a functional definition of “no-show” that accounted for both outright no-shows (ie, patients who did not attend the scheduled appointment or cancel) and all late cancellations that could not be rebooked. Our clinic does not have a mechanism to fill short-term cancellations rapidly (ie, no waitlist), and the clinic cannot ask patients to stop contraindicated medications and consume a bowel preparation on short notice.

Phase 1: Model Development

We allowed multiple perspectives to inform the selection of model variables. First, we generated a list of variables reported by other studies of absenteeism. Next, we examined the face validity of variables, selecting those that we deemed relevant to appointment attendance. We also conducted preliminary analyses on a small sample of patients before the current study and included significant predictors. Finally, we conducted logistic regressions for each variable using data from our development cohort. Candidate variables that met at least 2 of these selection criteria were included in multivariable tests (see Table 2).

We conducted a series of multivariable logistic regressions using backward stepwise elimination to test the predictive power of these variables. To avoid estimation bias in our selection of predictors, we utilized bootstrapping with 2000 replications for each stepwise model, and eliminated all predictors with bootstrapped P values exceeding .05 in the final model. We created a final logit model from these predictors and used the model to estimate a no-show probability for each patient. We identified a score threshold that maximized the area under the receiver operating characteristic (ROC) curve (AUC) in classifying patients as shows versus no-shows. We calculated the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the classification threshold. All analyses were conducted using Stata version 13.1 (StataCorp, College Station, Texas).
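The paper does not publish its exact ROC-optimization criterion. One common choice, sketched below under that assumption, is to pick the cutoff that maximizes Youden's J (sensitivity + specificity − 1) over the fitted no-show probabilities:

```python
import numpy as np

def youden_threshold(y_true, p_hat):
    """Choose the probability cutoff maximizing Youden's J, one common way
    to 'optimize by ROC analysis'; the study's actual criterion is not
    stated, so this is an assumption. y_true: 1 = no-show, 0 = show."""
    best_t, best_j = 0.5, -1.0
    for t in np.unique(p_hat):  # candidate cutoffs, ascending
        pred = p_hat >= t
        tp = np.sum(pred & (y_true == 1))
        fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        fp = np.sum(pred & (y_true == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t
```

With perfectly separated toy data, the routine recovers the lowest cutoff that classifies every patient correctly.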

Phase 2: Prospective Model Validation

Over a 4-month period, we collected data on all patients with upcoming appointments weekly, and used the logit regression model from phase 1 to calculate a no-show probability score for each patient. Because we conducted this validation prospectively, we included patients who rescheduled appointments after not showing during the validation; thus, we made projections for 1197 patients scheduled for 1426 appointments. For each day, we summed the number of appointment slots projected to be available for patients, and recorded this value. Next, we evaluated the performance of this no-show predictive overbooking versus “1 patient, 1 slot” traditional scheduling versus 3 fixed overbooking percentages (19%, 29%, and 38%—covering the range of quarterly no-show rates at the West Los Angeles VA GI endoscopy clinic during the development and validation phases). Although we did not recruit patients to fill projected openings in this exploratory analysis, we projected what would have occurred had patients been overbooked based on predicted openings. We calculated projected clinic utilization by week, and indexed the number of days predictive overbooking exceeded or underfilled clinic capacity.
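A minimal sketch of the three booking policies being compared is below. The 0.45 cutoff comes from the phase 1 model, while the exact booking rule (one extra patient per projected open slot) and the function shape are our assumptions:

```python
def planned_overbooks(p_no_show, capacity, strategy, fixed_rate=0.38):
    """Extra patients to book for one clinic day under each policy.

    predictive: one extra patient per projected open slot, where a slot is
      projected open when its patient's no-show score meets the 0.45 cutoff.
    fixed: a constant fraction of capacity, regardless of who is scheduled.
    anything else: '1 patient, 1 slot' traditional scheduling (no overbooking).
    A sketch of the comparison, not the study's actual scheduling code.
    """
    if strategy == "predictive":
        return sum(p >= 0.45 for p in p_no_show)
    if strategy == "fixed":
        return round(fixed_rate * capacity)
    return 0
```

For a 16-slot day, the fixed 38% policy always adds 6 bookings, whereas the predictive policy adds only as many as the patient-level scores justify.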


RESULTS

Phase 1: Model Development

Table 2 presents data regarding predictors of no-show. The strongest predictor was a patient’s cancellation proportion—the proportion of all outpatient appointments missed, regardless of specialty—where a larger proportion was associated with a greater no-show probability. Patients with histories of no-shows for GI appointments were also unlikely to attend subsequent GI appointments. Patients with histories of mood or substance use disorders were less likely to show for appointments, as were patients who carried a greater overall disease burden. Patients who were booked for multiple procedures on the same day and patients who had previously undergone GI procedures were more likely to show for their appointments.

In single-variable tests—but not the final multivariable regression model—individuals were less likely to show if they had cancelled previous GI appointments; had reported socioeconomic problems; or had conduct, personality, or anxiety disorders. Married patients, individuals with a history of diverticular disease, and those patients who had attended a colonoscopy education class were more likely to show for GI procedures. Also, patients whose medical care is funded in part by the VA because of military service–related injuries (ie, service connectedness) were more likely to show for their appointments. Whether entered as a continuous variable or reduced to an ordinal variable, younger age was not a significant predictor of no-show in this sample. Urgency of appointment, race, ethnicity, and day of the week of the appointment also were not significant predictors of no-show.

The final multivariable model was created using bootstrapped logistic regression with backward deletion. Using a ROC-optimized cutoff score of 0.45, the model was able to correctly classify 1084 of 1120 individuals who showed and 99 of 272 individuals who did not show, corresponding to an AUC of 0.80 (see Figure 1 [1A]). The Hosmer-Lemeshow test revealed adequate model fit [χ²(degrees of freedom = 8) = 6.70; P = .57], as the difference between observed and expected frequencies was not significant. Also, the model accounted for 23% of the variability in these data. The predictive model had a sensitivity, specificity, PPV, and NPV of 36%, 98%, 73%, and 86%, respectively. Although the model was not highly sensitive, this was acceptable because we principally sought to avoid excessive overbooking that would burden staff and patients (ie, we preferred a specific model that reduced the number of “false” no-shows, at the expense of a sensitive model with more “true” no-show spots available).
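The reported operating characteristics follow directly from the classification counts above, with no-show treated as the positive class; small rounding differences from the published percentages may reflect the underlying unrounded data:

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix
    counts, with 'no-show' treated as the positive class."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Development phase: 99 of 272 no-shows and 1084 of 1120 shows
# were classified correctly at the 0.45 cutoff.
dev = classification_metrics(tp=99, fn=173, tn=1084, fp=36)
```

For example, `dev["sensitivity"]` is about 0.36 and `dev["ppv"]` about 0.73, matching the published figures.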

Phase 2: Prospective Model Validation

Figure 3 shows the results of applying competing scheduling methods during the 18-week prospective validation period. The regression model was able to correctly classify 711 of 888 attended appointments, and 317 of 538 appointments that were missed, corresponding to an AUC of 0.75 (see Figure 1 [1B]). Using a cutoff value of 0.45, the model had a sensitivity, specificity, PPV, and NPV of 59%, 80%, 64%, and 76%, respectively. Table 3 shows the predictive power of the validation model at varying cutoff values. Although the predictive characteristics of the model changed from development to validation (more sensitive, but less specific), application of the model would nonetheless have significantly improved rates of service utilization if used to fill projected openings (Figure 2). Under “1 patient, 1 slot” scheduling, an average of 6 appointments were left unused per day (out of an average of 16 appointments available per day during the validation phase); if the predictive overbooking system were utilized, one-half of 1 spot on average would have been unused (–6.18 vs –0.51; difference = –5.67; 95% CI, –6.48 to –4.87; P <.0001). If fully utilized, predictive overbooking would have increased service utilization from 62% to 97%, with manageable clinic overflows on most days (see Figure 3 [3A]).

Fixed overbooking produced less consistent results, and its performance depended on the chosen rate. If a fixed overbooking rate of 38% (the no-show rate during the validation phase) had been used, a few more appointment slots would have been filled, on average, compared with the predictive model (–0.51 vs 0.14; difference = –0.18; 95% CI, –0.57 to 0.20; P = .10), but more than 10% of days would have exceeded capacity by 4 or more appointments. A fixed overbooking rate of 19% (the lowest quarterly no-show rate during the development phase) performed better than no overbooking, but filled significantly fewer appointments than predictive overbooking (–0.51 vs –3.01; difference = –2.51; 95% CI, –2.19 to –1.33; P <.0001). Based on average performance during the validation period, the most promising fixed overbooking rate (34%) produced results comparable to those of predictive overbooking (–0.51 vs –0.63; P = .75); however, greater variability led to a number of days where capacity was substantially under- or overfilled (see Figure 3 [3B]) compared with predictive overbooking.


DISCUSSION

We sought to design and test a predictive model of patient absenteeism using information passively collected from EHRs. We found that EHR data can accurately predict no-shows in a high-volume clinical environment commonly affected by absenteeism. Moreover, we found that collecting EHR data to predict no-shows can be automated, is feasible to perform using a healthcare organization data enterprise system, and would increase service utilization while minimizing clinic overflow. This predictive overbooking approach may benefit patients while increasing clinic revenue by avoiding idle downtime.

We believe that predictive overbooking presents an important benefit over fixed overbooking. Predictive overbooking is based on upcoming patient data, allowing schedulers to book additional appointments nimbly, only on days when the clinic is predicted to be under capacity. Conversely, a fixed overbooking system can only be based on historical averages and may regularly under- or overbook appointments, depending on the chosen rate. The average no-show rate in our GI clinic was volatile during the study (19%-38% per quarter), so identifying an appropriate fixed rate was difficult. Choosing an appropriate rate was only possible through significant retrospective data collection, and applying that rate produced more inconsistent results than predictive overbooking (see Figure 3). At its best, fixed overbooking on some days resembled an underutilized clinic employing “1 patient, 1 slot” scheduling; if the chosen rate was too high, many days significantly exceeded capacity. Because patients arrive prepared for appointments (eg, having altered medication regimens or consumed bowel preparation regimens for a colonoscopy), cancelling them is unacceptable. Thus, we cannot recommend using fixed overbooking, despite its simplicity.

Given the variety, velocity, and volume of “big data” now available for healthcare analytics, we examined a wide range of variables to predict no-show behavior and chose ones demonstrating the greatest face validity and predictive power. The most successful predictors were behavioral measures of absenteeism (ie, patients’ cancellation rate, previous no-shows). We found few specific physical health problems that were associated with no-show. In contrast, patients who were struggling with mental health problems (mood and substance-use disorders) consistently had more difficulty attending appointments. This is potentially actionable because mental health could be addressed in a hypothetical behavioral intervention for patients who habitually no-show.

Despite attempts to control estimation bias, the performance of the model changed between the development and validation phases: specificity declined, although sensitivity improved, for several reasons. First, the Charlson comorbidity index score, a measure of general disease burden, was not a significant predictor of no-show in the validation phase. The predictive model was also tested prospectively in real time, meaning that the same patient may no-show on a given week and then show for an appointment on a subsequent week. This inconsistency may explain degraded performance overall and specifically in the latter weeks of the validation phase. As shown in Table 3, adjusting the cutoff value of the model could have improved performance slightly, but this is not feasible during prospective testing or real-world application.

The utility of the predictive model is limited by the nature of the clinical service being scheduled. In the current study, many patients were scheduled for colonoscopy, a procedure negatively stigmatized for its invasive nature and substantial preparation steps. Although GI clinics are commonly impacted by high no-show rates,5,6,19,33,35-39 other clinics offering less stigmatizing services may not see such dramatic changes in patient attendance if they were to employ a predictive overbooking model. Also, procedure-specific predictors (such as having multiple GI procedures at a single visit or having a previous GI procedure) are not applicable to other specialties. Subsequent replication studies may identify comparable specialized variables, but these studies may not be necessary if a given specialty is not affected by patient absenteeism. Nonetheless, GI endoscopy clinics offer a proof-of-principle setting to test the predictive overbooking model in a challenging environment.


This study has limitations. First, the patient population examined was scheduled for appointments at a single GI clinic at a VA hospital in the western United States. The population was predominantly male and regularly received medical care at the VA. Thus, we could not examine the effects of gender or other insurance coverage. Also, our data collection period was limited to a few months of the year, so we could not examine seasonal trends in patient absenteeism, which appears more prominent during autumn months (Figure 2, weeks 13-18). Further, the composition of the VA sample—lower SES, more prevalent mental health problems—may not reflect the population at large. Other patient populations may miss appointments due to schedule conflicts with work or leisure activities instead of transportation problems and substance abuse. Other means of addressing absenteeism, such as fees or reminders, may work better for these groups.


A lack of available patients to fill all projected openings on short notice prevented us from assessing whether this system truly reduces financial loss or improves resource utilization. We are continuing to test this model using an active recruitment strategy to address these questions. Nevertheless, we maintain that predictive overbooking is ideal for other resource-intense clinics with high no-show rates.


The authors would like to thank Hartley Cohen, MD, and Joseph Pisegna, MD, for their support of this research.

Author Affiliations: Department of Gastroenterology, VA Greater Los Angeles Healthcare System (MWR, SC, AK, AP, DW, BMRS), Los Angeles, CA; Cedars-Sinai Center for Outcomes Research and Education (MWR, DW, BM, BMRS), Los Angeles, CA; Kaiser Permanente Northern California (HW), Oakland, CA; David Geffen School of Medicine at University of California Los Angeles (VT, BMRS), Los Angeles, CA; Department of Health Policy and Management, University of California Los Angeles Fielding School of Public Health (BMRS), Los Angeles, CA.

Source of Funding: This study was funded by a VA Health Services Research and Development Merit Award (IIR 12-055).

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (MWR, HW, BMRS); acquisition of data (MWR, SC, HW, AK, AP, VT, DW, BM); analysis and interpretation of data (MWR, SC, HW, AP, VT, DW, BM, BMRS); drafting of the manuscript (MWR, AK, AP, BMRS); critical revision of the manuscript for important intellectual content (MWR, SC, AK, BMRS); statistical analysis (MR, VT, DW); provision of patients or study materials (MWR, AP, DW, BM, BMRS); obtaining funding (BMRS); administrative, technical, or logistic support (MWR, VT, DW, BM); and supervision (MWR, DW, BMRS).

Address correspondence to: Brennan M.R. Spiegel, MD, MSHS, Professor of Medicine and Public Health, Division of Gastroenterology, West Los Angeles VA Medical Center, 11301 Wilshire Blvd, Bldg 115, Rm 215, Los Angeles, CA 90073. E-mail:

1. Moore CG, Wilson-Witherspoon P, Probst JC. Time and money: effects of no-shows at a family practice residency clinic. Fam Med. 2001;33(7):522-527.

2. Hixon AL, Chapman RW, Nuovo J. Failure to keep clinic appointments: implications for residency education and productivity. Fam Med. 1999;31(9):627-630.

3. Johnson BJ, Mold JW, Pontious JM. Reduction and management of no-shows by family medicine residency practice exemplars. Ann Fam Med. 2007;5(6):534-539.

4. Barron WM. Failed appointments: who misses them, why they are missed, and what can be done. Prim Care. 1980;7(4):563-574.

5. Gurudu SR, Fry LC, Fleischer DE, Jones BH, Trunkenbolz MR, Leighton JA. Factors contributing to patient nonattendance at open-access endoscopy. Dig Dis Sci. 2006;51(11):1942-1945.

6. Sola-vera J, Sáez J, Laveda R, et al. Factors associated with non-attendance at outpatient endoscopy. Scand J Gastroenterol. 2008;43(2):202-206.

7. Bech M. The economics of non-attendance and the expected effect of charging a fine on non-attendees. Health Policy. 2005;74(2):181-191.

8. Sharp DJ, Hamilton W. Non-attendance at general practices and outpatient clinics. BMJ. 2001;323(7321):1081-1082.

9. Neal RD, Hussain-Gambles M, Allgar VL, Lawlor DA, Dempsey O. Reasons for and consequences of missed appointments in general practice in the UK: questionnaire survey and prospective review of medical records. BMC Fam Pract. 2005;6:47.

10. Weingarten N, Meyer DL, Schneid JA. Failed appointments in residency practices: who misses them and what providers are most affected? J Am Board Fam Pract. 1997;10(6):407-411.

11. Lehmann TN, Aebi A, Lehmann D, Balandraux Olivet M, Stalder H. Missed appointments at a Swiss university outpatient clinic. Public Health. 2007;121(10):790-799.

12. Bickler C. Defaulted appointments in general practice. J R Coll Gen Pract. 1985;35(270):19-22.

13. Berg BP, Murr M, Chermak D, et al. Estimating the cost of no-shows and evaluating the effects of mitigation strategies. Med Decis Making. 2013;33(8):976-985.

14. Daggy J, Lawley M, Willis D, et al. Using no-show modeling to improve clinic performance. Health Informatics J. 2010;16(4):246-259.

15. Murray M, Berwick DM. Advanced access: reducing waiting and delays in primary care. JAMA. 2003;289(8):1035-1040.

16. Cha JM, Lee JI, Joo KR, Shin HP, Park JJ. Telephone reminder call in addition to mailing notification improved the acceptance rate of colonoscopy in patients with a positive fecal immunochemical test. Dig Dis Sci. 2011;56(11):3137-3142.

17. Hardy KJ, O’Brien SV, Furlong NJ. Information given to patients before appointments and its effect on non-attendance rate. BMJ. 2001;323(7324):1298-1300.

18. Moser SE. Effectiveness of post card appointment reminders. Fam Pract Res J. 1994;14(3):281-288.

19. Denberg TD, Coombes JM, Byers TE, et al. Effect of a mailed brochure on appointment-keeping for screening colonoscopy: a randomized trial. Ann Intern Med. 2006;145(12):895-900.

20. Liew SM, Tong SF, Lee VK, Ng CJ, Leong KC, Teng CL. Text messaging reminders to reduce non-attendance in chronic disease follow-up: a clinical trial. Br J Gen Pract. 2009;59(569):916-920.

21. Chen LA, Santos S, Jandorf L, et al. A program to enhance completion of screening colonoscopy among urban minorities. Clin Gastroenterol Hepatol. 2008;6(4):443-450.

22. Percac-Lima S, Grant RW, Green AR, et al. A culturally tailored navigator program for colorectal cancer screening in a community health center: a randomized, controlled trial. J Gen Intern Med. 2009;24(2):211-217.

23. LaGanga LR, Lawrence SR. Clinic overbooking to improve patient access and increase provider productivity. Decis Sci. 2007;38(2):251-276.

24. Bibi Y, Cohen AD, Goldfarb D, Rubinshtein E, Vardy DA. Intervention program to reduce waiting time of a dermatological visit: managed overbooking and service centralization as effective management tools. Int J Dermatol. 2007;46(8):830-834.

25. Alaeddini A, Yang K, Reddy C, Yu S. A probabilistic model for predicting the probability of no-show in hospital appointments. Health Care Manag Sci. 2011;14(2):146-157.

26. Smith CM, Yawn BP. Factors associated with appointment keeping in a family practice residency clinic. J Fam Pract. 1994;38(1):25-29.

27. Gruzd DC, Shear CL, Rodney WM. Determinants of no-show appointment behavior: the utility of multivariate analysis. Fam Med. 1986;18(4):217-220.

28. Cosgrove M. Defaulters in general practice: reasons for default and patterns of attendance. Br J Gen Pract. 1990;40(331):50-52.

29. Cashman S, Savageau J, Lemay C, Ferguson W. Patient health status and appointment keeping in an urban community health center. J Health Care Poor Underserved. 2004;15(3):474-488.

30. Goldman L, Freidin R, Cook EF, Eigner J, Grich P. A multivariate approach to the prediction of no-show behavior in a primary care center. Arch Intern Med. 1982;142(3):563-567.

31. Bean AG, Talaga J. Predicting appointment breaking. J Health Care Mark. 1995;15(1):29-34.

32. Hamilton W, Round A, Sharp D. Patient, hospital, and general practitioner characteristics associated with non-attendance: a cohort study. Br J Gen Pract. 2002;52(477):317-319.

33. Murdock A, Rodgers C, Lindsay H, Tham TC. Why do patients not keep their appointments? Prospective study in a gastroenterology outpatient clinic. J R Soc Med. 2002;95(6):284-286.

34. Kizer KW, Jha AK. Restoring trust in VA health care. N Engl J Med. 2014;371(4):295-297.

35. Adams LA, Pawlik J, Forbes GM. Nonattendance at outpatient endoscopy. Endoscopy. 2004;36(5):402-404.

36. Wong VK, Zhang HB, Enns R. Factors associated with patient absenteeism for scheduled endoscopy. World J Gastroenterol. 2009;15(23):2882-2886.

37. Guse CE, Richardson L, Carle M, Schmidt K. The effect of exit-interview patient education on no-show rates at a family practice residency clinic. J Am Board Fam Pract. 2003;16(5):399-404.

38. Denberg TD, Melhado TV, Coombes JM, et al. Predictors of nonadherence to screening colonoscopy. J Gen Intern Med. 2005;20(11):989-995.

39. Corfield L, Schizas A, Noorani A, Williams A. Non-attendance at the colorectal clinic: a prospective audit. Ann R Coll Surg Engl. 2008;90(5):377-380.

40. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613-619.