The American Journal of Managed Care, November 2013

Using Health Outcomes to Validate Access Quality Measures

Julia C. Prentice, PhD; Michael L. Davies, MD; and Steven D. Pizer, PhD
Medicare payment reforms require valid measures of high-quality healthcare. Different types of administrative wait time measures predicted glycated hemoglobin levels for new and returning patients.
Models included dummy variables (fixed effects) for each facility to remove between-facility variation in wait times and outcomes.6 In effect, we compared the A1C level of an individual in 1 observation period with the A1C level of the same individual in other observation periods. This design eliminated concerns about permanent case-mix differences between facilities. Facility fixed effects also controlled for all aspects of facility quality that remain constant over time (eg, managerial inefficiencies).

We also included a dummy variable for January through June observations compared with July through December observations to control for any systematic variation in A1C between half-years, as well as yearly dummies to control for any overall increase or decrease in A1C levels over time. This statistical design, featuring a predetermined cohort of patients as well as time and facility fixed effects, means that any estimated relationship between waiting time and A1C level was identified exclusively by within-facility variations over time that were independent of national trends.
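The identification strategy above can be sketched in a regression framework. The following is a minimal illustration on simulated data, not the authors' Stata code; all variable names (`facility`, `wait_time`, `second_half`, `a1c`) are hypothetical. Facility dummies absorb permanent between-facility differences, while year and half-year dummies absorb secular and seasonal trends, leaving the wait time coefficient identified by within-facility variation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: 6-month observation periods across facilities.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "facility": rng.integers(0, 5, n),       # facility ID
    "year": rng.integers(2005, 2008, n),     # observation year
    "second_half": rng.integers(0, 2, n),    # Jul-Dec indicator
    "wait_time": rng.normal(20, 5, n),       # prior-period mean wait (days)
})
# Outcome with a permanent facility-level shift plus a small wait-time effect.
df["a1c"] = (7.0 + 0.2 * df["facility"]
             + 0.01 * df["wait_time"] + rng.normal(0, 0.3, n))

# C(facility) adds a dummy (fixed effect) per facility; C(year) and
# second_half control for overall trends and seasonality, as in the text.
model = smf.ols("a1c ~ wait_time + C(facility) + C(year) + second_half",
                data=df).fit()
print(model.params["wait_time"])
```

Because the facility effects are constants, any between-facility difference in average wait or average A1C is absorbed by the dummies rather than attributed to wait time.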

Outcome and Analyses

Data were analyzed using Stata version 10.0 (StataCorp, College Station, Texas). We modeled the average 6-month A1C level and uncontrolled A1C (6-month A1C average >9%) during each observation period. The average wait time for the previous 6 months predicted A1C level in the current 6-month period. Separate models were run for each of the 5 new and returning patient wait time measures. We standardized wait times to allow direct comparisons across measures. Standard errors were clustered on individuals to account for the lack of independence between observations from the same individual.
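The standardization and clustering steps described above can be sketched as follows. This is a simplified illustration on simulated data with hypothetical names (the study itself was run in Stata); standardizing the predictor makes coefficients comparable across wait time measures, and clustering on the patient accounts for correlated repeated observations:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_periods = 200, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_periods),
    "wait_time": rng.normal(20, 5, n_patients * n_periods),
})
# Patient-level intercepts induce correlation across an individual's
# repeated 6-month observations.
patient_effect = rng.normal(0, 0.4, n_patients)
df["a1c"] = (7.0 + 0.01 * df["wait_time"]
             + patient_effect[df["patient"]]
             + rng.normal(0, 0.3, len(df)))

# Standardize wait time so effect sizes are directly comparable
# across the different wait time measures.
df["wait_z"] = (df["wait_time"] - df["wait_time"].mean()) / df["wait_time"].std()

# Cluster standard errors on the individual to account for the lack of
# independence between observations from the same patient.
fit = smf.ols("a1c ~ wait_z", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["patient"]})
print(fit.params["wait_z"], fit.bse["wait_z"])
```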

Patients in the hospital or nursing home during the wait time measurement period should not be affected by outpatient wait times, so we censored observation periods if the veteran was institutionalized for all 6 months of an observation period.

Despite using 6-month observation periods to maximize the availability of A1C level data, 32% of the values during the outcome period were missing. Missing values may have been due to a veteran not having his/her A1C tested in a VHA facility or a veteran being hospitalized during the 6-month observation period. Following Prentice and colleagues,6 we treated these observations as censored using a 2-stage Heckman selection model.31

The first stage of the Heckman model used a probit to explain whether or not an A1C level was observed. The second stage used linear regression to predict the average A1C value or a logistic regression to predict uncontrolled A1C. The 2 stages were jointly estimated so that the missing observations were accounted for in the second stage. This simultaneous-equations approach explicitly modeled the correlation between unobservable factors in the first and second stages. A significant Wald statistic rejected the hypothesis that this correlation was zero, confirming the need for the Heckman model and indicating that common unobservable factors affected both censoring and the outcome (Appendix B).


Similar to other samples of elderly VHA users, our sample was predominantly male and had a high burden of physical and mental health conditions. During the baseline period, about one fifth of the sample had an average A1C level greater than or equal to 8%, a quarter of the sample had an obesity diagnosis code, 87% had a hypertension diagnosis, and 15% had a depression diagnosis (Table 2).

There was significant variation in measured wait times across the different methods of measurement for new and returning patients (Table 3). Wait time measures that relied on the CD for appointments had means of 20 to 34 days for new patients and 41 to 97 days for returning patients. The DD measures were shorter, with means of 7 to 18 days for new patients and 4 to 23 days for returning patients. The mean wait time for the FNA appointment capacity measure was similar to that of the retrospective CD measure for new patients and was 8.1 days for returning patients.

The Heckman model is a 2-equation model that benefits from a variable that distinguishes the first equation from the second equation, and we used the number of VHA primary care visits during baseline for this purpose. More frequent visits during baseline significantly increased the likelihood of observing an A1C value in all the models (data not shown). As an example, Appendix B provides complete results for the first-stage equation of the model that predicted the linear A1C 6-month average using the retrospective CD wait time measure. The coefficient on VHA primary care visits was 0.032 (P <.001).

Wait time had small but statistically significant effects on A1C (Table 4). For new patients, the FNA, retrospective CD, and prospective CD measures each had a significant (P <.001) positive relationship with average A1C levels, with the FNA measure having the strongest relationship (β = 0.009 vs β = 0.007 and β = 0.006; Table 4). Among the new patient measures, retrospective CD was the strongest predictor of uncontrolled A1C (marginal effect = 0.0010; P = .001), but longer FNAs also significantly increased the likelihood of having uncontrolled A1C (marginal effect = 0.0007; P = .05). Neither of the new patient DD wait time measures significantly predicted A1C levels.

When considering returning patient wait measures, the prospective CD measure was the strongest predictor of A1C for both outcomes (β = 0.009 for linear A1C, P = .002; marginal effect = 0.019 for uncontrolled A1C, P = .001; Table 4). There was also a significant positive relationship between the DD wait time measures and both A1C outcomes (P <.05 for linear A1C and P <.10 for uncontrolled A1C). The returning FNA wait time measure had a significant (P = .036) and negative relationship with linear A1C but no significant relationship with uncontrolled A1C. Neither outcome was significantly predicted by the returning patient retrospective CD measure.

The effect sizes were small and not clinically significant. For example, the largest observed effect was for the returning patient prospective CD when predicting uncontrolled A1C. An increase of 1 standard deviation in this measure would increase the likelihood of a typical patient having uncontrolled A1C by 0.19 percentage points (Table 4).


Findings in this study suggest that longer wait times, measured in a variety of different ways, had small but statistically significant effects on A1C levels and the likelihood of having uncontrolled A1C. Specifically, the new patient capacity wait time measure (FNA) and the retrospective and prospective new patient wait time measures using CD exhibited expected relationships with A1C. Among the returning patient measures, the prospective CD measure and the retrospective and prospective DD measures did so as well. These results are consistent with the previous research finding that the new patient FNA measure significantly predicts A1C.6

The ongoing implementation of ACOs requires quality measures that are linked to patient health outcomes.4 The relationship between process quality measures and improved health outcomes is often modest.5,6 Although the effects are not clinically significant, the administrative wait time measures reliably predict both A1C and patient satisfaction. This is important because patients are more interested in improved health outcomes than in the process of care.5 Another advantage of the wait time measures is their low cost. Access to care in ACOs is currently evaluated through the expensive and time-consuming process of surveying patients about their ability to get healthcare as soon as they wanted.15,18 Wait times based on administrative scheduling data are a less costly alternative.

The most appropriate wait time measures differ for new and returning patients. The capacity and CD versions of the new patient wait time measures predicted A1C while the DD measures did not, supporting previous research that found these same associations when predicting patient satisfaction.18 New patients typically want to be seen as soon as possible, often due to a change in health status that is causing concern.22 Consequently, it is not surprising that capacity or time stamp wait time measures that use the date an appointment request was made as the start date (see Table 1) are successful predictors of a variety of different outcomes. When considering ACO reimbursement, an advantage of these measures is that they can be easily calculated from most scheduling systems. The date that an individual requests an appointment is commonly cited as the start date for measuring access outside of the VHA. For example, the Advanced Access literature uses this date when calculating the number of days until the third next available appointment.16,17

Developing consistent administrative wait time measures for returning patients is more complicated because these patients may not wish to obtain the next available appointment for follow-up care.18 Surveys of patients have found that scheduling future appointments at convenient times or maintaining continuity of provider may outweigh concerns about long waits for follow-up appointments.22,32,33 Recognizing these complexities, VHA policy makers shifted in 2010 to a DD approach in which schedulers ask patients what day they desire their appointment.25 Our results generally support the focus on DD for returning patients, with both the prospective and retrospective DD measures significantly predicting A1C. A disadvantage of implementing DD measures outside of the VHA is that schedulers in the private sector do not routinely collect DDs when patients request appointments.

The main limitation of this study is that we did not have random variation in wait times, so we had to construct facility averages to minimize potentially confounding effects of individual health on individual waits. Consequently, we cannot completely rule out alternative explanations for our findings, including reverse causation and omitted variable bias. For example, an unobserved flu epidemic at a VHA facility could increase wait times facilitywide and cause higher A1C levels that are not attributable to longer wait times. Our analyses included facility-level fixed effects, yearly dummies, and a seasonal effect to minimize this possibility. On the other hand, there is now a significant literature using these methods that consistently finds that longer wait times measured with capacity measures lead to poorer health outcomes.6,8-10,26 The growing evidence base utilizing different populations, time periods, and outcomes strengthens the likelihood that the relationship is causal.
