The Relationship of System-Level Quality Improvement With Quality of Depression Care

The American Journal of Managed Care, November 2004 - Part 2, Volume 10, Issue 11 Pt 2

Objective: To explore the relationship of systemwide continuous quality improvement (CQI) with depression care quality in the Veterans Health Administration (VHA).

Study Design: Observational study using data from 2 VHA studies.

Patients and Methods: The Depression Care Quality Study (DCQS) was a retrospective cohort study of depression care quality in the northeastern United States involving 12 678 patients cared for at 14 VHA facilities; it used guideline-based process measures (ie, dosage and duration adequacy). The VHA CQI survey was a cross-sectional survey of systemwide CQI among a representative sample of VHA hospitals; it assessed CQI and organizational culture (OC) at 116 VHA hospitals nationwide and provided data on the 14 study facilities. We used analysis of variance to identify differences in the adequacy of depression care among these facilities. Pearson's correlation was used to identify the relationship of CQI and OC with facility-level depression care adequacy.










Results: Mean depression care adequacy differed among the 14 DCQS facilities (P < .0001). Overall dosage adequacy was 90% (range: 87%-92%). Overall duration adequacy was 45% (range: 39%-64%). There was no correlation between CQI and either dosage adequacy (r = .004, P = .98) or duration adequacy (r = −.17, P = .55). Similarly, there was no correlation between OC and either dosage adequacy (r = −.35, P = .22) or duration adequacy (r = −.12, P = .68).

Conclusion: Although CQI may help bridge the healthcare quality gap, it may not be associated with higher disease-specific quality of care.

(Am J Manag Care. 2004;10(part 2):846-851)

Continuous quality improvement (CQI) has been defined as a "philosophy of continual improvement of the processes associated with providing a good or service that meets or exceeds customer expectations."1 Given the positive CQI experience of industries other than healthcare, CQI has been embraced as a key mechanism of enhancing quality of care while preserving cost-effectiveness. In a recent review, Shortell et al1 described how CQI has been implemented in varied healthcare settings and its diverse effects on healthcare processes and outcomes. These authors concluded that CQI is a potentially promising mechanism for bridging the existing gap in healthcare quality.

Key components necessary for successful implementation of CQI have been described for the private sector2 and the Veterans Health Administration (VHA).3 These include a dynamic, innovative quality leadership with representatives from top management, boards of trustees, and clinical arenas. An organizational culture (OC) that emphasizes teamwork and consensual decision making (ie, group culture), and innovation and risk taking (ie, developmental culture), has been linked to greater CQI implementation3 and to improved clinical outcomes.4 Furthermore, ongoing participation in quality improvement activities is a necessary element of a strong CQI leader and program. However, quality improvement activities alone are not sufficient to sustain overall organizational CQI.2 Thus, quality improvement and quality assurance (QI/QA) principles have remained conceptually distinct from CQI.1

There is considerable uncertainty in the literature regarding the link between CQI and specific healthcare quality domains. Understanding the relationship of systemwide CQI with disease-specific quality of care is important for targeting local CQI and QI/QA activities. Depression serves as a good disease role model for study because it is prevalent, disabling, and costly; lacks quality care; and has well-studied, guideline-based benchmarks that can serve as criteria for measuring the quality of care.

Depression is a prevalent medical condition with far-reaching and potentially serious consequences, especially if undertreated, yet gaps in the quality of depression care continue to be identified.5 To our knowledge, only 1 small study of the effect of CQI on depression care quality has been reported, and those results were equivocal.6 Nonetheless, the positive effect of QI/QA multidisciplinary teams who provided guideline-based depression care7,8 was recently described. These interventions have demonstrated short- and long-term remission of depression and improved depressive symptoms, patient satisfaction, work-related performance, health-related quality of life, and cost-effectiveness.9-19 Although the design and implementation of these QI/QA depression interventions have been somewhat diverse, a commonality among all has been collaborative care delivered via multidisciplinary teams in a chronic-care model approach.20 In light of this evidence, the US Preventive Services Task Force recently recommended routine screening for depressive disorders in primary care settings with established systems for diagnosis, treatment, and follow-up.21,22

Central VHA leadership has strongly advocated for the development of local VHA CQI activities since the early 1990s. After the creation of 22 VHA healthcare administrative regions in 1995, local CQI activities remained diffuse.3 A large, nationally representative survey of 116 VHA hospitals was undertaken in 1998 to describe VHA CQI and to define predictors of successful CQI in the VHA. Details of this work have been described elsewhere (referred to herein as the VHA CQI survey).3

In this study, we examine facility-level differences in depression care quality and explore the relationship of CQI to depression care quality by using existing data from the VHA CQI survey and our past work evaluating depression care quality in the VHA.23,24 Although the optimal design for studying the effects of CQI is still under debate,25 these data presented a unique opportunity to conduct an observational study examining this crucial question. Because the VHA is the nation's largest managed healthcare system, its experience can serve as a leading example for other healthcare systems.


Study Design and Sample Definition

In previous work, we conducted a retrospective cohort study of 14 VHA hospitals in New England and upstate New York. For that study, we used VHA centralized data to define a depressed cohort, to assess depression care quality using guideline-based process measures, to identify predictors of depression care adequacy,23 and to determine the predictive validity of these guideline-based depression process measures by examining their relationship with subsequent overall and psychiatric hospitalizations.24 We now seek to assess the link between depression care quality as measured by those guideline-based process measures and CQI as measured by the VHA CQI survey.3 The methodologies of these studies are described in brief below.

Depression Care Quality Study

We used VHA administrative and centralized pharmacy records to define a depressed cohort of 12 678 patients who received antidepressant treatment during a 3-month period in 1999. Subject eligibility criteria were as follows: (1) at least 1 International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code 296.2x or 296.3x (major depression single or recurrent episode, respectively) noted in a psychiatry, primary care, emergency, or social work clinical setting during October 1, 1997, through September 30, 1999 (fiscal year [FY] 1998 or FY 1999) at 1 of the 14 northeastern VHA hospitals, or at least 1 diagnosis code 311.xx (depression, not otherwise specified) noted in a primary care clinical setting during FY 1998 or FY 1999, exclusive of other depression diagnosis codes; (2) no comorbid schizophrenia and/or bipolar disorder; and (3) receipt of at least 1 antidepressant from a VHA pharmacy during the time that depression care was profiled (June 1, 1999, through August 31, 1999). These criteria produced the final sample for which the process of depression care can be linked to system-level CQI. The characteristics of the sample are reported elsewhere.23,24
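The three eligibility criteria above can be expressed as a simple record filter. The sketch below is illustrative only: the field names, record layout, and the ICD-9-CM ranges used for the schizophrenia/bipolar exclusion are our assumptions, not the actual VHA extract.

```python
# Clinics in which an MDD code (296.2x/296.3x) qualifies a patient
ELIGIBLE_CLINICS_MDD = {"psychiatry", "primary care", "emergency", "social work"}

def eligible(patient):
    """Apply the 3 cohort criteria to one patient record.

    `patient` is a dict with hypothetical fields:
      "diagnoses": list of (icd9_code, clinic) tuples from FY 1998-1999
      "antidepressant_fills": VHA pharmacy fills during the profiling period
    """
    dx = patient["diagnoses"]
    # Criterion 1a: major depression (296.2x or 296.3x) in an eligible clinic
    has_mdd = any(code.startswith(("296.2", "296.3")) and clinic in ELIGIBLE_CLINICS_MDD
                  for code, clinic in dx)
    # Criterion 1b: depression NOS (311.xx) in primary care, exclusive of
    # other depression diagnosis codes
    has_dep_nos = (any(code.startswith("311") and clinic == "primary care"
                       for code, clinic in dx)
                   and not any(code.startswith(("296.2", "296.3")) for code, _ in dx))
    # Criterion 2: no comorbid schizophrenia/bipolar disorder (simplified,
    # assumed code ranges: 295.x schizophrenia; 296.0x-296.1x, 296.4x-296.8x bipolar)
    excluded = any(code.startswith(("295", "296.0", "296.1", "296.4",
                                    "296.5", "296.6", "296.7", "296.8"))
                   for code, _ in dx)
    # Criterion 3: at least 1 antidepressant fill during the profiling window
    return (has_mdd or has_dep_nos) and not excluded and patient["antidepressant_fills"] >= 1
```

In practice each criterion would be a join against administrative and pharmacy tables, but the boolean logic is the same.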

Guideline-based Depression Process Measures

We compared 2 dimensions of antidepressant therapy with clinical guideline benchmarks using the 1997 VHA Depression Guidelines,26 a compilation of recommendations from the Agency for Healthcare Research and Quality and the American Psychiatric Association depression guidelines.7,8

We exclusively used centralized data sources, predominantly automated pharmacy records, to profile depression care quality in this study. Centralized pharmacy records have been examined in various disease models and validated as acceptable sources for assessing medication regimens.27-29 The accuracy of centralized pharmacy records in predicting suboptimal dosing and premature discontinuation of medications was recently confirmed in a depressed cohort by comparing centralized pharmacy records with patient self-report of medication administration.30


We describe the guideline-based depression process measures in full in another paper.23 In short, dosage adequacy was achieved when the average daily dosage of antidepressant during the 3-month profiling period met the guideline-recommended minimum daily dosage. This resulted in a dichotomous outcome variable (ie, guideline-recommended minimum daily dosage, yes/no). Duration adequacy for each patient concerned the overall length of therapy with any eligible antidepressant during the profiling period. Antidepressant eligibility criteria are described elsewhere.23 Duration adequacy was defined as a dichotomy, with inadequate duration being >21% of the profiling period without antidepressants. This boundary (.21) translates into 3 weeks of the 3-month period (or 1 week per month), and is consistent with other definitions of continuous dosing in the literature.31
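A minimal sketch of the two dichotomies might look like the following; the drug names and minimum-dose thresholds are invented placeholders, not the actual 1997 VHA guideline minimums.

```python
# Hypothetical guideline minimum daily dosages (mg); illustrative values only,
# not the actual 1997 VHA Depression Guidelines thresholds.
GUIDELINE_MIN_DAILY_MG = {"fluoxetine": 20, "sertraline": 50}

PROFILE_DAYS = 92  # June 1 through August 31, 1999

def dosage_adequate(drug, avg_daily_mg):
    """Dichotomous dosage adequacy: average daily dose over the profiling
    period meets the guideline-recommended minimum daily dosage."""
    return avg_daily_mg >= GUIDELINE_MIN_DAILY_MG[drug]

def duration_adequate(days_covered):
    """Dichotomous duration adequacy: inadequate if more than 21% of the
    profiling period has no antidepressant supply on hand."""
    uncovered = (PROFILE_DAYS - days_covered) / PROFILE_DAYS
    return uncovered <= 0.21

print(dosage_adequate("fluoxetine", 20))  # -> True (meets the 20-mg floor)
print(duration_adequate(70))  # -> False (22/92, about 24% uncovered)
```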

The VHA CQI Survey

From April through September 1998, a large, nationally representative group of hospital employees (9993/14 892; 67% response3), managers (2406/3400; 71% response), and hospital directors (130/155; 84% response) was sampled from 162 VHA hospitals for the VHA CQI survey. There were 116 hospitals with available data for final quantitative analysis. The 14 facilities examined in this paper had lower response rates for employees (532/1237; 43% response) and managers (152/258; 59% response) than overall. Developed in the VHA Office of Quality Management with assistance from a private-sector consultant, the VHA CQI survey is conceptually based on criteria from the Malcolm Baldrige Award, granted by the US Department of Commerce to companies excelling in quality assurance.3 Although each of these items has significant face validity, to our knowledge, neither their construct nor criterion validity has been formally tested.

The VHA CQI survey consisted of 42 items divided into 5 categories of CQI: role of managers, information and analysis, strategic quality planning, human resources development and management, and management of process quality. An example from the information and analysis category is the following item, to which respondents were asked to indicate their level of agreement on a 5-point Likert scale: "We try to use data about quality to prevent problems, not to just fix them after they have occurred." An overall CQI measure derived from these items was scored from 1 through 5, with higher scores indicating greater CQI implementation. In previous research at VA medical centers, the 5 subscales have demonstrated good internal consistency (Cronbach's alphas ranged from .89 to .92).32
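The internal-consistency statistic cited above (Cronbach's alpha) can be computed for any subscale with the standard formula; the Likert responses below are invented toy data, not survey results.

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(items):
    """Cronbach's alpha for rows = respondents, columns = Likert items:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items[0])
    item_vars = sum(variance(col) for col in zip(*items))
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 5 respondents answering 4 items on a 1-5 agreement scale
responses = [[4, 4, 5, 4],
             [2, 3, 2, 2],
             [5, 4, 5, 5],
             [3, 3, 3, 4],
             [1, 2, 1, 2]]
print(round(cronbach_alpha(responses), 2))  # -> 0.96
```

Values in the .89-.92 range reported for the 5 subscales indicate that items within each category move together closely.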

The survey had 20 items divided into 4 dimensions of OC: group, developmental, hierarchical, and rational. These were assessed exclusively by hospital employees, who were asked to distribute 100 points among various statements best characterizing their organization. An example of a group-culture item is the following: "The hospital is a very personal place. It is a lot like an extended family; people seem to share a lot of themselves." The overall OC measure was derived by combining the average percentage of points respondents allocated to the group culture and developmental culture. Higher scores indicated greater innovation and teamwork (ie, a stronger group and developmental culture vs a hierarchical and rational culture).

Assessing the Link Between System-level CQI and Depression Care Quality

We calculated the mean dosage and duration adequacy for each facility using patient-level adequacy scores. We observed crude facility-level differences in depression care adequacy (Table) and were concerned that patient-level differences might explain this variation. Therefore, we recalculated the facility-level mean dosage and duration adequacy, adjusting for patient-level demographic and clinical predictors of depression care adequacy identified in our previous work.23 Specifically, we calculated the expected facility-level mean dosage and duration adequacy using probabilities generated by logistic regression from our prior models. In these, patient age, race, marital status, comorbid illness, and type of clinical care were identified as significant predictors of depression care adequacy.23

Using patient-level adequacy scores, we calculated the mean risk-adjusted depression care adequacy for each facility by using the observed minus expected (ie, O − E) rate of depression care adequacy. We established these rates separately for dosage and duration adequacy. In our previous work, we used generalized estimating equations to adjust for clustering at the hospital level. Because we did not identify a clustering effect, we reported the simpler models examining individual-level predictors of depression care adequacy.23 Nevertheless, we observed crude facility-level differences in depression care adequacy in the present study. Therefore, we computed and used facility-specific risk-adjusted depression care adequacy rates in all analyses.
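As an illustration of the O − E calculation, the sketch below scores one hypothetical facility. The logistic coefficients and covariate layout are invented placeholders, not the fitted values from reference 23.

```python
import math

def logit_prob(x, coefs, intercept):
    """Predicted probability of adequate care from a fitted logistic model."""
    z = intercept + sum(b * v for b, v in zip(coefs, x))
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficients for illustration only (age decade, married,
# comorbidity count, specialty mental health care).
COEFS = [0.05, 0.30, -0.10, 0.40]
INTERCEPT = -0.60

def facility_o_minus_e(patients):
    """patients: list of (covariate_vector, observed_adequacy 0/1).
    Returns the facility's observed rate minus its expected rate."""
    observed = sum(y for _, y in patients) / len(patients)
    expected = sum(logit_prob(x, COEFS, INTERCEPT) for x, _ in patients) / len(patients)
    return observed - expected

# Three hypothetical patients at one facility
cohort = [([6, 1, 2, 1], 1), ([7, 0, 3, 0], 0), ([5, 1, 1, 1], 1)]
print(round(facility_o_minus_e(cohort), 3))  # -> 0.174
```

A positive O − E means the facility delivered more adequate care than its patient mix would predict; a negative value means less.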

We used analysis of variance to establish the statistical significance of differences in the adequacy of depression care among the facilities. We ranked facilities in order of depression care quality by comparing crude and risk-adjusted duration adequacy scores (Table). We assessed facility rankings by using scores for duration adequacy because these demonstrated more facility-level variation than dosage adequacy.

We identified facility-level CQI implementation and OC using data from the VHA CQI survey described above. We used Pearson's correlation to determine the relationship between CQI implementation and depression care adequacy (identified separately for dosage and duration adequacy). Because OC is strongly linked to CQI implementation,3 we separately examined the association of OC with depression care adequacy using Pearson's correlation. We performed all analyses with SAS v. 8.1 (SAS Institute Inc, Cary, NC).
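The facility-level correlation step reduces to Pearson's product-moment formula over the facility pairs. This stdlib-only sketch uses invented data for 5 hypothetical facilities rather than the study's 14.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two facility-level series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Toy facility-level data: CQI score vs risk-adjusted duration adequacy (O - E)
cqi = [3.10, 3.25, 3.42, 3.50, 3.64]
adequacy = [0.02, -0.05, 0.10, -0.08, 0.01]
print(round(pearson_r(cqi, adequacy), 2))  # -> -0.04
```

A near-zero r, as in the toy data and in the study's own results, indicates no linear association between the two facility-level series.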






Results

We presented the results of our evaluation of depression care adequacy in full in a prior paper.23 To summarize briefly, we identified 90% dosage adequacy and 45% duration adequacy in the overall sample (12 678 patients). Dosage adequacy ranged from 85% to 92% among the 14 facilities (P < .0001 for differences). Duration adequacy ranged from 39% to 64% among the 14 facilities (P < .0001 for differences). Risk-adjusted dosage adequacy (O% − E%) ranged from −4.8% to 2.9% among the 14 facilities (P < .0001 for differences among means). Risk-adjusted duration adequacy (O% − E%) ranged from −8.6% to 17.8% among the 14 facilities (P < .0001 for differences among means). Facility rankings for depression care quality did not change substantially after risk adjustment. The largest shift in ranking was demonstrated by facility 10, which shifted to rank 7 after risk adjustment. In the Table, we present crude and risk-adjusted rates of depression care adequacy and facility rankings.

The median CQI score for the 14 facilities was 3.42 (range: 3.10-3.64, scored from 1-5; higher scores indicate greater CQI implementation). This compares with the overall CQI mean ± SD of 3.69 ± .06 from the top quartile scores, and the overall CQI mean ± SD of 3.30 ± .10 from the bottom quartile scores, of the 116 facilities surveyed in the VHA CQI survey. The median OC score for the 14 facilities was 31.78 (range: 26.44-49.91, scored from 0-100; higher scores indicate greater group/developmental culture). This compares with the overall OC mean ± SD of 39.34 ± 3.26 from the top quartile scores, and the overall OC mean ± SD of 32.41 ± 3.51 from the bottom quartile scores, of the 116 facilities surveyed in the VHA CQI survey.









We did not identify a correlation between CQI and either dosage adequacy (r = .004, P = .98) or duration adequacy (r = −.17, P = .55). Similarly, we did not find a correlation between OC and either dosage adequacy (r = −.35, P = .22) or duration adequacy (r = −.12, P = .68). We repeated these analyses using crude dosage and duration adequacy scores. This did not change our results (data not shown).


Discussion

In this study, we examined facility-level differences in depression care quality among 14 VHA facilities in the northeastern United States and explored the relationship of depression care quality with CQI and OC. To our knowledge, this is the first study to examine how systemwide CQI implementation relates to depression care quality in VHA healthcare settings. This subject is important, given that veteran patients have a disproportionately higher prevalence of mental illness.33 Additionally, VHA healthcare settings provide an excellent laboratory for such analyses, as annual completion of depression screening has been a systemwide quality measure for several years. Furthermore, the VHA has worked to narrow the depression care quality gap by dissemination of VHA-specific depression guidelines,26 and provider education and performance feedback.

In this study, we have identified depression care that falls short of guideline recommendations, especially regarding duration of therapy. We also found variations in depression care among the 14 facilities concerning adequacy of both dosage and duration of antidepressant treatment, although the clinical significance of these differences is less clear. More clinically detailed information would allow us to assess the comprehensiveness of our process measures in capturing the quality of depression care. Nevertheless, our exclusive use of administrative and centralized pharmacy records mirrors the practical reality of quality profiling in a naturalistic clinical setting.


We did not identify a significant association of systemwide CQI or OC with depression care quality among these 14 VHA facilities. Our small sample size rendered the analyses underpowered for these purposes. In this study, we had 25% power to detect a true population relationship of strength r = .35, commonly accepted as a moderate effect size.34 Future studies need to examine these relationships among at least 50 facilities to have 80% power for detecting a moderate effect size. Nevertheless, despite our underpowered examination of these important relationships, our study has wider implications for the VHA and all managed care settings.
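The power figure can be approximated with the Fisher z transformation. The sketch below is our own approximation, not the authors' calculation; it yields roughly 23% power for r = .35 with 14 facilities, in line with the reported 25%.

```python
from math import atanh, erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def correlation_power(r, n, z_crit=1.959964):
    """Approximate two-sided power (alpha = .05) to detect a population
    correlation r with n pairs, via the Fisher z transformation:
    atanh(r) is roughly normal with SE = 1/sqrt(n - 3)."""
    ncp = atanh(r) * sqrt(n - 3)
    return (1 - norm_cdf(z_crit - ncp)) + norm_cdf(-z_crit - ncp)

print(round(correlation_power(0.35, 14), 2))  # -> 0.23
```

By the same approximation, power rises with the number of facilities, consistent with the authors' call for studies of 50 or more sites.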

Herein, we highlight the significance of furthering an understanding of the link between CQI, OC, and disease-specific healthcare quality. In the VHA and other managed care settings, precious resources are directed towards CQI efforts. Understanding how these efforts impact patient-level care is critical. The VHA is well poised to study these important relationships, given its vertical integration and large number of healthcare sites. Other large managed care organizations with established CQI efforts and more than 50 healthcare sites (to render at least 80% power to detect a moderate effect size) should consider conducting a similar study to broaden our understanding of the relationship between CQI, OC, and disease-specific healthcare quality.


Despite our underpowered analyses, our results are similar to others that assessed the relationship between CQI and disease-specific healthcare quality. Berlowitz et al32 recently found no association between CQI and risk-adjusted pressure ulcer development in 35 VHA nursing homes (beta coefficient ± SE: 1.73 ± 1.30 [P = .19] from a regression model examining risk-adjusted pressure ulcer development as a function of CQI). Our average facility-level CQI implementation and OC were similar to, though lower than, those of the 35 VHA nursing homes. Our median CQI was 3.42 (range: 3.10-3.64), compared with theirs of 3.55 (range: 2.98-4.08). Our median OC was 31.78 (range: 26.44-49.91), compared with theirs of 44.90 (range: 29.3-64.9).

To our knowledge, the clinical and policy significance of quantitative differences in OC and CQI scores has not been elucidated to date. (In other words, how many points on the CQI scale represent a meaningful, real-life change in CQI?) Nevertheless, qualitative interviews among 5 "top-performing" and 5 "low-performing" facilities from the VHA CQI survey highlighted important differences between these 2 groups.3 For example, several notable characteristics of OC among those with CQI scores in the highest quartile were as follows: high morale, low staff turnover, open communications, long tenure of leading quality assurance champions, collegiality of top management, and team-oriented staff.3 On the other hand, several characteristics of OC among those with CQI scores in the lowest quartile were high turnover at top management levels, lack of ownership, poor relationship between management and staff, and resistance to change.3

This study has limitations. Restricting depression care profiling to the northeastern United States limits generalizability to the national veteran population. These sites were chosen for ease of data collection. The national VHA pharmacy database was organized just before the study, and access to these data was unclear initially. Nevertheless, these hospitals represent diverse geographic settings and reflect the experience of the national veteran patient population.35 One should maintain caution, however, in applying these findings to non-VHA healthcare settings. Our predominantly male sample (92%) might not generalize to the national population undergoing depression care, the majority of whom are women.36 Furthermore, our small sample size (ie, 12 678 subjects from 14 facilities) rendered our analyses underpowered in testing our study hypotheses. Moreover, restricted resources limited our access to other important organizational characteristics that might affect depression care quality, such as teaching status, facility size, and on-site mental health care. Finally, we are unsure why the response rates for the VHA CQI survey were lower for our 14 facilities than for the 116 VHA hospitals nationwide. Additionally, we were unable to isolate the hospital director responses for our 14 facilities and thus cannot compare these to the hospital director responses overall. The survey methodologists who conducted the 1998 VHA CQI survey were unavailable for comment. Despite these limitations, this study adds to a growing literature assessing the relationship of system-level CQI and OC with disease-specific healthcare quality. To our knowledge, it is the first study to detail the association of system-level CQI and OC with mental healthcare quality.

Based on our experience, we would make a number of recommendations regarding future research in this area. First, given the difficulty of observational studies examining the link between CQI implementation and disease-specific healthcare quality, we suggest measuring CQI implementation among those most directly concerned with the disease-specific healthcare delivery. Our study examined CQI implementation on a broad, systemwide scale. This approach contrasts with the studies described above,4,32 which measured CQI implementation among those more closely involved with the disease-specific quality domain under study (ie, nursing home staff32).

Future work should specifically explore the relationship between broad-based CQI efforts and mental healthcare quality. Although parity for mental healthcare access, coverage, and quality is now a legislative mandate, it is not yet a reality compared with the care rendered for general medical conditions.37 Finally, future research might focus on the process by which CQI impacts disease-specific healthcare quality. This approach will require longitudinal study and will allow richer hypothesis testing than the dose-response evaluation that our cross-sectional analyses provided.

From the Division of General and Geriatric Medicine, University of Kansas Medical Center, Kansas City, Kan (AC, JW); the Center for Health Quality, Outcomes, and Economic Research, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, Mass (VP, AKR, BK, DRB); the Management Decision and Research Center, Boston VAMC, Boston, Mass (MM); the Center for Mental Healthcare and Outcomes Research, Central Arkansas Veterans Healthcare System, Little Rock, Ark (RRO); and the Section of General Internal Medicine, Boston Medical Center, Boston, Mass (ASA).

This work was supported by grants CPI 99-134 and MNH 98-001 (Mental Health QUERI) from the Health Services Research and Development Service, Department of Veterans Affairs.

Address correspondence to: Andrea Charbonneau, MD, MSc, Division of General and Geriatric Medicine, University of Kansas Medical Center, 3901 Rainbow Blvd, Wescoe 5026, Kansas City, KS 66160. E-mail:

1. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624.

2. Weiner BJ, Shortell SM, Alexander J. Promoting clinical involvement in hospital quality improvement efforts: the effects of top management, board, and physician leadership. Health Serv Res. 1997;32:491-510.

3. Parker VA, Wubbenhorst WH, Young GJ, Desai KR, Charns MP. Implementing quality improvement in hospitals: the role of leadership and culture. Am J Med Qual. 1999;14:64-69.

4. Shortell SM, Jones RH, Rademaker AW, et al. Assessing the impact of total quality management and organizational culture on multiple outcomes of care for coronary artery bypass graft surgery patients. Med Care. 2000;38:207-217.

5. Young AS, Klap R, Sherbourne CD, Wells KB. The quality of care for depressive and anxiety disorders in the United States. Arch Gen Psychiatry. 2001;58:55-61.

6. Goldberg HI, Wagner EH, Fihn SD, et al. A randomized controlled trial of CQI teams and academic detailing: can they alter compliance with guidelines? Jt Comm J Qual Improv. 1998;24:130-142.

7. Depression Guidelines Panel. Clinical Practice Guideline Number 5: Depression in Primary Care, 2: Treatment of Major Depression. Rockville, Md: Agency for Health Care Policy and Research; 1993. Publication 93-0551.

8. American Psychiatric Association. Practice guideline for major depressive disorder in adults. Am J Psychiatry. 1993;150(suppl):S1-S26.


9. Wells KB, Sherbourne C, Schoenbaum M, et al. Impact of disseminating quality improvement programs for depression in managed primary care: a randomized controlled trial. JAMA. 2000;283:212-220.

10. Katon W, Russo J, Von Korff M, et al. Long-term effects of a collaborative care intervention in persistently depressed primary care patients. J Gen Intern Med. 2002;17:741-748.

11. Rost K, Nutting P, Smith J, Werner J, Duan N. Improving depression outcomes in community primary care practice: a randomized trial of the quEST intervention. Quality Enhancement by Strategic Teaming. J Gen Intern Med. 2001;16:143-149.

12. Meredith LS, Orlando M, Humphrey N, Camp P, Sherbourne CD. Are better ratings of the patient-provider relationship associated with higher quality care for depression? Med Care. 2001;39:349-360.

13. Schoenbaum M, Unutzer J, McCaffrey D, Duan N, Sherbourne C, Wells KB. The effects of primary care depression treatment on patients' clinical status and employment. Health Serv Res. 2002;37:1145-1158.


14. Schoenbaum M, Unutzer J, Sherbourne C, et al. Cost-effectiveness of practice-initiated quality improvement for depression: results of a randomized controlled trial. JAMA. 2001;286:1325-1330.


15. Katon W, Von Korff M, Lin E, et al. Collaborative management to achieve treatment guidelines. Impact on depression in primary care. JAMA. 1995;273:1026-1031.

16. Hedrick SC, Chaney EF, Felker B, et al. Effectiveness of collaborative care depression treatment in Veterans' Affairs primary care. J Gen Intern Med. 2003;18:9-16.

17. Unutzer J, Katon W, Callahan CM, et al. Collaborative care management of late-life depression in the primary care setting: a randomized controlled trial. JAMA. 2002;288:2836-2845.

18. Unutzer J, Rubenstein L, Katon WJ, et al. Two-year effects of quality improvement programs on medication management for depression. Arch Gen Psychiatry. 2001;58:935-942.

19. Katzelnick DJ, Simon GE, Pearson SD, et al. Randomized trial of a depression management program in high utilizers of medical care. Arch Fam Med. 2000;9:345-351.

20. Wagner EH. Chronic disease management: what will it take to improve care for chronic illness? Eff Clin Pract. 1998;1:2-4.

21. US Preventive Services Task Force. Screening for depression: recommendations and rationale. Ann Intern Med. 2002;136:760-776.

22. Pignone MP, Gaynes BN, Rushton JL, et al. Screening for depression in adults: a summary of the evidence for the US Preventive Services Task Force. Ann Intern Med. 2002;136:765-776.

23. Charbonneau A, Rosen AK, Ash AS, et al. Measuring the quality of depression care in a large integrated health system. Med Care. 2003;41:669-680.

24. Charbonneau A, Rosen AK, Owen RR, et al. Monitoring depression care: in search of an accurate quality indicator. Med Care. 2004;42:522-531.

25. Samsa G, Matchar D. Can continuous quality improvement be assessed using randomized trials? Health Serv Res. 2000;35:687-700.

26. Department of Veterans Affairs. Clinical Practice Guidelines for Major Depressive Disorder. Washington, DC: Department of Veterans Affairs; 1997.


27. Steiner JF, Koepsell TD, Fihn SD, Inui TS. A general method of compliance assessment using centralized pharmacy records. Description and validation. Med Care. 1988;26:814-823.

28. Steiner JF, Prochazka AV. The assessment of refill compliance using pharmacy records: methods, validity, and applications. J Clin Epidemiol. 1997;50:105-116.

29. Choo PW, Rand CS, Inui TS, et al. Validation of patient reports, automated pharmacy records, and pill counts with electronic monitoring of adherence to antihypertensive therapy. Med Care. 1999;37:846-857.

30. Saunders K, Simon G, Bush T, Grothaus L. Assessing the feasibility of using computerized pharmacy refill data to monitor antidepressant treatment on a population basis: a comparison of automated and self-report data. J Clin Epidemiol. 1998;51:883-890.

31. Kerr EA, McGlynn EA, Van Vorst KA, Wickstrom SL. Measuring antidepressant prescribing practice in a health care system using administrative data: implications for quality measurement and improvement. Jt Comm J Qual Improv. 2000;26:203-216.

32. Berlowitz DR, Young GJ, Hickey EC, et al. Quality improvement implementation in the nursing home. Health Serv Res. 2003;38(1 pt 1):65-83.

33. Hankin CS, Spiro A 3rd, Miller DR, Kazis L. Mental disorders and mental health treatment among US Department of Veterans Affairs outpatients: the Veterans Health Study. Am J Psychiatry. 1999;156:1924-1930.

34. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Mahwah, NJ: Lawrence Erlbaum Associates, Inc; 1988.

35. Perlin J, Kazis L, Skinner K, et al. Health Status and Outcomes of Veterans: 1999 Health Survey of Veteran Enrollees. Executive Report. Washington, DC: Department of Veterans Affairs; 2000.

36. Goldman LS, Nielsen NH, Champion HC. Awareness, diagnosis, and treatment of depression. J Gen Intern Med. 1999;14:569-580.

37. Frank RG, Goldman HH, McGuire TG. Will parity in coverage result in better mental health care? N Engl J Med. 2001;345:1701-1704.