The Association of Mental Health Program Characteristics and Patient Satisfaction

Across many measures of Veterans Health Administration mental health care program characteristics, treatment continuity is most strongly and positively associated with patient satisfaction.  


Objectives: Satisfaction with care is an important patient-centered domain of health system quality. However, satisfaction measures are costly to collect and not directly modifiable. Therefore, we assessed the relationships between veterans’ satisfaction and measures of modifiable aspects of Veterans Health Administration (VHA) mental health care programs.

Study Design: For a sample of 6990 patients who received mental health care from the VHA in 2013, we used survey and administrative data to investigate the association of a suite of access and encounter satisfaction measures with a large collection of measures of program characteristics.

Methods: We estimated risk-adjusted correlations between 6 satisfaction measures (across 2 domains: access and encounter satisfaction) and 28 mental health care program characteristics (across 4 domains: program reach, psychosocial service access, program intensity, and treatment continuity).

Results: We found that satisfaction with access to care was higher than satisfaction with care encounters, but that broad measures of mental health care program reach and intensity were positively associated with both kinds of satisfaction. No measures of psychosocial service access were positively associated with access or encounter satisfaction. Most measures of treatment continuity were consistently and positively associated with both kinds of satisfaction.

Conclusions: As the VHA strives to increase access to, and provision of, mental health care, policy makers and program managers should be aware that satisfaction with care, as it is currently measured, may not rise as more patients initiate treatment, unless continuity of care is maintained or enhanced.

Am J Manag Care. 2017;23(5):e129-e137

Takeaway Points

  • Broad measures of mental health care program reach and intensity and most measures of treatment continuity were consistently and positively associated with patient satisfaction.
  • Because psychosocial services are clinically valuable, policy makers and managers should not interpret a lack of association with satisfaction to justify reducing their availability.
  • Policy makers and managers should be aware that satisfaction with care, as currently measured, may not rise as more patients initiate treatment, unless continuity of care is maintained or enhanced.
  • Policy makers and managers should continue to track patient satisfaction and to specifically target satisfaction with mental health care.

Satisfaction with care is an important, patient-centered measure of health system performance because it can identify gaps in quality that could be missed by other measures,1 help detect cross-population disparities,2 and serve as a catalyst for quality improvement.3,4 Yet, the relationship between patient satisfaction and quality of care, although generally positively correlated,5 is not fully understood.6 Studies have found patient satisfaction to be associated with hospital process quality7; lower rates of readmissions, heart attack mortality,8,9 and surgical quality10; and better long-term outcomes.11 Fenton et al12 found it also positively correlated with higher healthcare utilization, costs, and all-cause mortality.

Assessment of what objective health system measures drive patient satisfaction is important for 2 reasons. First, because it relies on patient surveys, satisfaction remains expensive and challenging to measure at a high frequency.13 However, many other measures of health system performance are easily obtained at a high frequency from administrative data (eg, process quality measures or readmissions). If satisfaction is highly correlated with these other administrative measures, they offer supplements to satisfaction surveys—ways to monitor and improve aspects of care related to satisfaction during longer intervals between measurement. Second, satisfaction is not directly modifiable; improvements must come from changes in the processes of care or investments in services that patients value.

For these reasons, we studied the relationships between a set of patient satisfaction measures and a large collection of mental health program characteristics for patients with a recent mental health encounter in the Veterans Health Administration (VHA), the largest provider of mental health care in the United States.14 Prior work has documented variation in satisfaction across VHA patients with psychiatric diagnoses. Rosenheck et al15 found that VHA patients who were discharged from the hospital with a primary psychiatric or substance use diagnosis were more likely to be satisfied with their care if they were older, in better health, or had a long length of stay.

Burnett-Zeigler et al16 reported that VHA patients with psychiatric diagnoses who were younger, nonwhite, or lower-income; had a service-connected disability; or had received a posttraumatic stress disorder (PTSD) or a substance use disorder diagnosis were less likely to be satisfied with their care. Hepner et al17 examined perceptions of behavioral health care among VHA patients who received mental health care. Seventy-four percent said they were helped by treatment, but only 32% reported an improvement in symptoms. Holcomb et al18 found that the satisfaction of midwestern VHA patients with psychiatric diagnoses positively correlated with better self-reported outcomes. Patients with co-occurring substance use and psychotic disorders who were treated in VHA residential substance use disorder treatment programs that had more positive perceptions and satisfaction exhibited greater engagement in care and experienced better outcomes.19 Finally, Hoff et al20 reported lower levels of satisfaction among VHA patients with psychiatric diagnoses than those with medical diagnoses.


Since 2002, the VHA Office of Quality and Performance has fielded the Survey of Healthcare Experiences of Patients (SHEP), an ongoing monthly mail survey of patients’ experiences during their most recent VHA encounter. Modeled on the Consumer Assessment of Healthcare Providers and Systems survey and based on a stratified design that selects from the specialty care domains as well as new and established primary care patients within each facility,21 SHEP samples about 30,000 ambulatory care patients each month who visit the VHA and who were not surveyed in the prior year. The 2013 version of SHEP is our source of satisfaction measures, with an overall response rate of about 44% and slightly higher response rates for males and substantially higher response rates for older patients (eAppendix Table A1 [eAppendices available at]).

In 2010, the Department of Veterans Affairs (VA)’s Office of Mental Health Operations (OMHO) implemented the Mental Health Information System (MHIS) Dashboard,22 which includes facility-level quality metrics consistent with the goals of the VA’s Uniform Mental Health Services Handbook.23,24 In addition, the mental health domain of the VHA Strategic Analytics for Improvement and Learning (SAIL) includes 25 administrative data-based performance measures related to access, continuity of care, patient safety, and quality of care at a facility level.25 We used the 2013 MHIS Dashboard and precursors to SAIL mental health domain report metrics (MHIS and SAIL are refined on an ongoing basis), shared with us by OMHO, to predict patient-level satisfaction responses to the 2013 SHEP.

SHEP surveys patients with a recent VHA encounter (the “index” encounter). To merge facility-level MHIS/SAIL-based mental health program characteristics, we associated each SHEP respondent with the VHA facility where they had the index encounter. For risk adjustment, we also merged, at the patient level, demographic and Elixhauser26 comorbidity data from VHA administrative files.

Our interest was in the relationships between mental health care program characteristics and patient satisfaction, so we used data from a subset of SHEP respondents—those with a recent mental health encounter. To accomplish this, we restricted the SHEP sample to respondents with index encounters in the same quarter and year as encounters for mental health. Because most SHEP respondents complete and return surveys 2 or more months after the index visit, this approach guaranteed that the majority would have had a recent mental health encounter prior to providing satisfaction feedback. Therefore, although some of the survey questions ask patients to report satisfaction based on the prior 12 months of care, it is likely that patients’ impressions were more heavily influenced by their most recent mental health encounter. Nevertheless, unlike prior analyses of satisfaction among VHA mental health patients,1,17 we were not directly assessing satisfaction with mental health care services. Our final sample included 6990 patients across 165 VHA facilities, although not all patients responded to all survey items due to SHEP question skip patterns (eAppendix Table A2). All analyses were conducted at the patient level.

Patient Satisfaction Variables

Satisfaction with timeliness of care, which we termed “access satisfaction,” is measured by SHEP asking respondents how often they were able to obtain needed care right away and were able to get VHA appointments as soon as they thought they needed care, excluding the times they needed urgent care. Access to VHA tests or treatments is measured by SHEP asking how easy it was to access that care in the last 12 months. Response options for the above 3 measures included “always,” “usually,” “sometimes,” or “never.” Because there is no cardinal meaning to these categorical responses, we dichotomized them to eliminate fine gradations in the ordinal scale. Specifically, following Prentice et al,27 we dichotomized these to 1 for responses of “always” or “usually” and 0 otherwise (Table 1).

Encounter satisfaction, which measures satisfaction with the care received or provider seen, is measured by SHEP asking respondents to rate VHA healthcare in the last 12 months on a scale of 0 to 10, where 0 indicates the “worst healthcare possible” and 10 the “best healthcare possible.” Satisfaction with the respondents’ personal doctor/nurse is also assessed on a 0-to-10 scale. For the same reasons given above, we dichotomized these to 1 for responses of 9 or 10 and 0 otherwise.27 Satisfaction with the most recent VHA visit is assessed on SHEP with a scale ranging from 1 to 7, where 1 indicates “completely dissatisfied” and 7 “completely satisfied.” We dichotomized this to 1 for responses of 6 or 7 and 0 otherwise.27
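The dichotomization rules above can be sketched as simple top-box codings. This is an illustrative reconstruction of the rules as described, not code from the study; the function names are hypothetical.

```python
# Hypothetical sketch of the dichotomization rules described above;
# function names are illustrative, not taken from the study's code or data.

def dichotomize_frequency(response):
    """Access items: 'always'/'usually' -> 1; 'sometimes'/'never' -> 0."""
    return 1 if response in ("always", "usually") else 0

def dichotomize_0_to_10(rating):
    """0-10 rating scales: top-box responses of 9 or 10 -> 1, else 0."""
    return 1 if rating >= 9 else 0

def dichotomize_1_to_7(rating):
    """1-7 most-recent-visit scale: responses of 6 or 7 -> 1, else 0."""
    return 1 if rating >= 6 else 0
```

Collapsing each ordinal scale to a single high-satisfaction indicator trades granularity for interpretability: each dependent variable becomes the probability of a top-box response.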

Program Characteristics Variables

The mental health program measures we considered are listed and defined in Table 2 and are organized into 4 areas of focus: 1) program reach (eg, the proportion of patients receiving mental health care), 2) psychosocial service access (eg, the proportion of patients initiating psychosocial treatment or psychotherapy), 3) program intensity (eg, the number of encounters per year), and 4) treatment continuity (eg, the proportion of discharged patients with follow-up within 7 days). Within each area, we examined 5 or more performance metrics. Transitional work visits and supportive employment visits mentioned in Table 2 are occupational therapy treatment modalities.28,29


Descriptive Statistics

Tables 1 and 2 report means of demographic control variables, dependent variables (patient-level satisfaction), and key independent variables (facility-level program characteristics). eAppendix Table A3 reports the means for diagnostic risk-adjustment variables. Table 1 shows that the average age of patients in our sample was 62 years, 55% were married, 8% were female, and 12% were black. In addition, 17% of our sample had an alcohol use disorder; 9%, a drug use disorder; 37%, psychosis; and 48%, a depression diagnosis (all as defined by Elixhauser26 and listed in eAppendix Table A3). These figures were higher than in the general population because we deliberately selected a sample of patients with a VHA mental health visit. Table 1 also shows that, across our sample, most patients reported high levels of satisfaction for all but 1 measure. Table 2 shows facility-level program characteristics organized by the 4 domains.

The facility-level program characteristics in Table 2 were each computed by OMHO on the full sample of patients implied by each characteristic, not just the patients in our study sample. For example, the program reach characteristic of “PTSD” is defined as “% of patients with PTSD who receive specialty outpatient care for PTSD.” This means that this measure captures, for each facility and year, the percentage of patients with PTSD seen by the facility in that year who received specialty outpatient care for PTSD.

Multivariate Analysis

Separately, for each satisfaction measure (the dependent variable) and each program characteristic (the key independent variable), we estimated an ordinary least squares (OLS) model, controlling for age (in years), marital status (1 = married, 0 = not married), sex (1 = female, 0 = male), race (1 = black, 0 = nonblack), and comorbidities.26 We accounted for heteroscedasticity with robust standard errors. In sensitivity analyses, we also ran models with clustering, facility random effects, and logistic regression. These produced similar results, which are not shown.

Based on the OLS models, for program reach, psychosocial service access, program intensity, and treatment continuity measures, Tables 3, 4, and 5 indicate a positive, negative, or not statistically significant association of program characteristics with satisfaction measures. (Coefficient estimates are provided in eAppendix Tables A4-A7.)

Program Reach

Table 3 shows that the broad measure of program reach, “MH patients”—the percentage of veterans service-connected for a mental health condition who received mental health care—and the percentage of VHA patients diagnosed with psychotic disorders, bipolar disorder, major depression, or PTSD who had transitional work visits (“trans’l work”) were positively associated with at least 2 of 3 measures of access satisfaction over the prior 12 months. They were also each positively associated with 2 of 3 measures of encounter satisfaction.

None of the other 4 (narrower) program reach measures was positively associated with access or encounter satisfaction. All but the percentage of patients with serious mental illness (SMI) who received mental health intensive case management for psychosis (“case mgmnt”) had no statistically significant association with satisfaction. Consistent with prior work,30 condition-specific measures may have been negatively associated with satisfaction if patients with those conditions generally rated satisfaction lower (because of their condition, not their care) and their representation in our sample was higher at facilities that treat more of them.16 Whatever the reason, the use of measures that were negatively associated with satisfaction, or not associated with it at all, should be justified and validated on other grounds (eg, they measure some aspect of clinically appropriate care). For “case mgmnt,” for example, availability of this treatment has been shown to improve the clinical outcomes of patients with chronic SMI.31

Psychosocial Service Access

Of all the categories of program characteristics, those pertaining to psychosocial service access were least associated with satisfaction measures (Table 4). This could be because initiating psychosocial services is a challenging time for patients, so facilities with greater access to these services also have more patients who express less satisfaction.

Program Intensity

With 1 exception (psychosocial rehab and recovery center), all program intensity measures were either positively associated with satisfaction or were not associated with any satisfaction measure (Table 4). Although the number of mental health encounters per unique patient seen at a facility (“overall”) was not associated with the satisfaction of its patients who received mental health care, the number of encounters per patient with any encounters (“MH patients”) was positively associated with all 6 satisfaction measures. Because the denominator of the overall measure was all VHA patients who were seen at a facility, it reflects both the proportion of patients in the healthcare system who received mental health treatment and the intensity of services felt by those who received care. A more focused decomposition of the overall measure into metrics that assessed 1) the reach of mental health services among patients requiring them (eg, the “MH patients” reach measure), and 2) the intensity of services among mental health care utilizers (eg, the “MH patients” intensity measure) was easier to interpret.

Measures of intensity of therapeutic and supportive employment program services (“sup empl” and “trans’l work”) were either not correlated with satisfaction or only associated with 1 access measure. It is possible that patients thought of these principally as employment programs and did not consider their experience with them when responding to healthcare satisfaction surveys.

As with other measures of services for patients with serious mental illness, the “psychosocial rehab and recovery centers” measure was negatively associated with satisfaction. Again, this may reflect enrichment of the patient population with patients who tended to rate healthcare services poorly.

Treatment Continuity

With few exceptions, continuity, variously measured, was positively associated with half or more of the access and/or encounter satisfaction measures (Table 5). In the case of the percentage of outpatients who received mental health care without a second visit in 6 months (“gap”), the association was negative with all 6 satisfaction measures, which is still consistent with the idea that less continuity of care is less satisfying to patients.


Our analysis has a few limitations. First, it is observational, so we cannot infer causality. Also, our sample is of patients with a recent VHA mental health encounter. As such, it is not necessarily representative of all VHA enrollees or even all VHA enrollees with mental health diagnoses, many of whom may not have had a recent mental health visit. Third, the SHEP survey response rate is relatively low for patients younger than 50 years, which could threaten the generalizability of findings for that group. Finally, the survey instrument was not specifically designed to elicit impressions of mental health care only. It is possible they were also influenced by other aspects of VHA care.


In a sample of patients who visited the VHA for mental health conditions, we assessed the relationship between satisfaction and program characteristics, spanning multiple domains. Our results provide some important lessons for policy makers and healthcare managers. We found that satisfaction with VHA access among patients with mental health conditions was higher than satisfaction with care encounters. Broad measures of the program’s reach of mental health care treatment (ie, the proportion of patients served) and intensity (ie, the number of visits received) tended to be positively associated with both access and encounter satisfaction. No measures of access to psychosocial services (ie, the proportion of patients who received psychosocial services regardless of setting of care) were positively associated with both kinds of satisfaction, whereas most measures of treatment continuity (ie, measures of outpatient follow-up after inpatient care) were. Also, narrower performance measures—those that focused on specific diagnostic populations (eg, those with PTSD and SMI)—were less likely to be positively associated with satisfaction. This is consistent with prior work that suggests certain types of patients who receive mental health care are less likely to be satisfied with care, perhaps more of a characteristic of the patients than the treatment programs that serve them.16

Policy makers and program managers should be aware that as they attempt to increase psychosocial service access, they may not see a positive relationship to satisfaction. Efforts to ensure initial access to psychosocial services to all patients who need them may negatively impact the availability of ongoing or more intensive services for those who initiate, as a larger pool of patients initiating services would compete for available treatment slots. Because measures of access to psychosocial services had the weakest relationship to satisfaction while treatment continuity had the most consistent relationship, further investigation is needed. If one took satisfaction as the only assessment of health system performance, one might conclude that psychosocial treatment access is not valuable. That is not the right interpretation. Psychosocial treatment access is valuable for other reasons. For example, the subpopulation that does not respond adequately to medications may rely on this modality of care for improvement.

On the other hand, continuity was most consistently associated with greater satisfaction. This is in line with a growing body of work showing positive outcomes associated with continuity, such as better quality of life, community functioning, symptom reduction,32 increasing Global Assessment of Functioning scores,33 and lower mortality risk.34 Boden and Moos19 also showed that greater engagement with care is associated with higher satisfaction. It is likely that continuity of care was associated with greater satisfaction because those who were not satisfied with care tended to be lost to follow-up. Continuity of care may directly cause greater satisfaction (because patients want it), but the reverse may also be true: patients who are satisfied (for other reasons) may be more likely to return for subsequent appointments, increasing measured continuity of care.

One measure for which we found no relationship to satisfaction is hard to justify on grounds other than satisfaction: the program intensity “overall” measure of the number of mental health encounters per unique VHA patient. Because this measure could rise through a reduction in VHA patients, independent of their mental health care needs, and because other valid measures of intensity are more specific to mental health patients, it did not appear to be a measure of high value.

Several measures in each domain may capture the same or similar aspects of care. For instance, several reach measures were positively correlated and exhibited similar patterns of relationships to satisfaction. Such redundancy is useful for managers, particularly in the context of incentivized or prioritized measure performance. As has been observed in other work,13,27 incentives can lead to loss of fidelity in the data underlying metrics,34 as behavior may be modified in direct response to the measure. If a metric tied to incentives starts to deviate considerably from another that measures the same thing in a different way but is not tied to incentives, that is a signal that the integrity of the underlying data may have been affected by the incentives.


With a few exceptions, this research demonstrates that the set of mental health program characteristics used by the VHA exhibits the expected associations with patient satisfaction and should be useful in monitoring patient-centered aspects of care quality.


This work was funded by a Department of Veterans Affairs, Health Services Research and Development grant (CRE 12-023), by the Department of Veterans Affairs Program Evaluation and Resource Center, Office of Mental Health Operations, and by a Department of Veterans Affairs, Quality Enhancement Research Initiative grant (PEC 16-001). It was approved by the VA Boston Healthcare System Institutional Review Board. The views expressed are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs, Boston University, Northeastern University, or Harvard University.

Author Affiliations: Healthcare Financing & Economics, VA Boston Healthcare System (ABF, SDP), Boston, MA; School of Medicine, Boston University (ABF), Boston, MA; Harvard T.H. Chan School of Public Health, Harvard University (ABF), Boston, MA; Program Evaluation and Resource Center, VA Office of Mental Health Operations (JT), Palo Alto, CA; Center for Innovation to Implementation, VA Palo Alto Healthcare System (JT), Palo Alto, CA; School of Pharmacy and Department of Economics, Northeastern University (SDP), Boston, MA.

Source of Funding: This work was funded by a Department of Veterans Affairs, Health Services Research and Development grant (CRE 12-023), by the Department of Veterans Affairs Program Evaluation and Resource Center, Office of Mental Health Operations, and by a Department of Veterans Affairs, Quality Enhancement Research Initiative grant (PEC 16-001).

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (ABF, SDP, JT); acquisition of data (ABF, SDP, JT); analysis and interpretation of data (ABF, SDP, JT); drafting of the manuscript (ABF, JT); critical revision of the manuscript for important intellectual content (ABF, SDP, JT); statistical analysis (ABF, SDP); obtaining funding (SDP); and supervision (SDP).

Address Correspondence to: Austin B. Frakt, PhD, VA Boston Healthcare System, 150 S Huntington Ave, Boston, MA 02130. E-mail:


1. Blonigen DM, Bui L, Harris AH, Hepner KA, Kivlahan DR. Perceptions of behavioral health care among veterans with substance use disorders: results from a national evaluation of mental health services in the Veterans Health Administration. J Subst Abuse Treat. 2014;47(2):122-129. doi: 10.1016/j.jsat.2014.03.005

2. Zickmund SL, Burkitt KH, Gao S, et al. Racial differences in satisfaction with VA health care: a mixed methods pilot study. J Racial Ethn Health Disparities. 2015;2(3):317-329. doi: 10.1007/s40615-014-0075-6.

3. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients’ perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921-1931. doi: 10.1056/NEJMsa0804116.

4. Vital signs: core metrics for health and health care progress (2015). The National Academies Press website. Accessed April 4, 2017.

5. Chatterjee P, Tsai TC, Jha AK. Delivering value by focusing on patient experience. Am J Manag Care. 2015;21(10):735-737.

6. Davies E, Shaller D, Edgman-Levitan S, et al. Evaluating the use of a modified CAHPS survey to support improvements in patient-centred care: lessons from a quality improvement collaborative. Health Expect. 2008;11(2):160-176. doi: 10.1111/j.1369-7625.2007.00483.x.

7. Tajeu GS, Kazley AS, Menachemi N. Do hospitals that do the right thing have more satisfied patients? Health Care Manage Rev. 2015;40(4):348-355. doi: 10.1097/HMR.0000000000000034

8. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41-48.

9. Glickman SW, Boulding W, Manary M, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2010;3(2):188-195. doi: 10.1161/CIRCOUTCOMES.109.900597.

10. Sacks GD, Lawson EH, Dawes AJ, Jacobson NB, Ayanian JZ. Relationship between hospital performance on a patient satisfaction survey and surgical quality. JAMA Surg. 2015;150(9):858-864. doi: 10.1001/jamasurg.2015.1108.

11. Fremont AM, Cleary PD, Hargraves JL, Rowe RM. Patient-centered processes of care and long-term outcomes of myocardial infarction. J Gen Intern Med. 2001;16(12):800-808.

12. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405-411. doi: 10.1001/archinternmed.2011.1662.

13. Prentice JC, Frakt AB, Pizer SD. Metrics that matter. J Gen Intern Med. 2016;31(suppl 1):70-73. doi: 10.1007/s11606-015-3559-0.

14. Chen S, Smith MW, Wagner TH, Barnett PG. Spending for specialized mental health treatment in the VA: 1995-2001. Health Aff (Millwood). 2003;22(6):256-263.

15. Rosenheck R, Wilson NJ, Meterko M. Influence of patient and hospital factors on consumer satisfaction with inpatient mental health treatment. Psychiatr Serv. 1997;48(12):1553-1561.

16. Burnett-Zeigler I, Zivin K, Ilgen MA, Bohnert AS. Perceptions of quality of health care among veterans with psychiatric disorders. Psychiatr Serv. 2011;62(9):1054-1059. doi: 10.1176/

17. Hepner KA, Paddock SM, Watkins KE, Solomon J, Blonigen DM, Pincus HA. Veterans’ perceptions of behavioral health care in the Veterans Health Administration: a national survey. Psychiatr Serv. 2014;65(8):988-996. doi: 10.1176/

18. Holcomb WR, Parker JC, Leong GB, Thiele J, Higdon J. Customer satisfaction and self-reported treatment outcomes among psychiatric inpatients. Psychiatr Serv. 1998;49(7):929-934.

19. Boden MT, Moos R. Predictors of substance use disorder treatment outcomes among patients with psychotic disorders. Schizophr Res. 2013;146(1-3):28-33. doi: 10.1016/j.schres.2013.02.003.

20. Hoff RA, Rosenheck RA, Meterko M, Wilson NJ. Mental illness as a predictor of satisfaction with inpatient care at Veterans Affairs hospitals. Psychiatr Serv. 1999;50(5):680-685.

21. Burgess DJ, Gravely AA, Nelson DB, et al. A national study of racial differences in pain screening rates in the VA health care system. Clin J Pain. 2013;29(2):118-123. doi: 10.1097/AJP.0b013e31826a86ae.

22. Trafton JA, Greenberg G, Harris AH, et al. VHA mental health information system: applying health information technology to monitor and facilitate implementation of VHA Uniform Mental Health Services Handbook requirements. Med Care. 2013;51(3, suppl 1):S29-S36. doi: 10.1097/MLR.0b013e31827da836.

23. Uniform mental health services in VA medical centers and clinics. VHA handbook 1160.01. Department of Veterans Affairs website. Published September 11, 2008. Updated November 16, 2015. Accessed April 4, 2017.

24. Frakt AB, Trafton J, Pizer SD. Maintenance of access as demand for substance use disorder treatment grows. J Subst Abuse Treat. 2015;55:58-63. doi: 10.1016/j.jsat.2015.02.009.

25. Strategic Analytics for Improvement and Learning (SAIL) fact sheet. Veterans Affairs website. Published November 2014. Accessed April 4, 2017.

26. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.

27. Prentice JC, Davies ML, Pizer SD. Which outpatient wait-time measures are related to patient satisfaction? Am J Med Qual. 2014;29(3):227-235. doi: 10.1177/1062860613494750.

28. Penk W, Drebing CE, Rosenheck RA, Krebs C, Van Ormer A, Mueller L. Veterans Health Administration Transitional work experience vs. job placement in veterans with co-morbid substance use and non-psychotic psychiatric disorders. Psychiatr Rehabil J. 2010;33(4):297-307. doi: 10.2975/33.4.2010.297.307.

29. Kinoshita Y, Furukawa TA, Kinoshita K, et al. Supported employment for adults with severe mental illness. Cochrane Database Syst Rev. 2013;(9):CD008297. doi: 10.1002/14651858.CD008297.pub2.

30. Fiorentini G, Ragazzi G, Robone S. Are bad health and pain making us grumpy? an empirical evaluation of reporting heterogeneity in rating health system responsiveness. Soc Sci Med. 2015;144:48-58. doi: 10.1016/j.socscimed.2015.09.009.

31. Dieterich M, Irving CB, Park B, Marshall M. Intensive case management for severe mental illness. Cochrane Database Syst Rev. 2010;(10):CD007906. doi: 10.1002/14651858.CD007906.pub2.

32. Adair CE, McDougall GM, Mitton CR, et al. Continuity of care and health outcomes among persons with severe mental illness. Psychiatr Serv. 2005;56(9):1061-1069.

33. Greenberg GA, Rosenheck RA. Special section on the GAF: continuity of care and clinical outcomes in a national health system. Psychiatr Serv. 2005;56(4):427-433. doi: 10.1176/

34. Harris AH, Gupta S, Bowe T, et al. Predictive validity of two process-of-care quality measures for residential substance use disorder treatment. Addict Sci Clin Pract. 2015;10:22. doi: 10.1186/s13722-015-0042-5.