Identifying Highly Effective Psychotherapists in a Managed Care Environment

August 1, 2005
G. S. (Jeb) Brown, PhD; Michael J. Lambert, PhD; Edward R. Jones, PhD; Takuya Minami, PhD

Volume 11, Issue 8

Objective: To investigate the variability and stability of psychotherapists' effectiveness and the implications of this differential effectiveness for quality improvement in a managed care environment.

Study Design: Subset archival outcome data for patients receiving behavioral health treatment were divided into 2 time periods to cross-validate the treating therapists' effectiveness. After categorizing the therapists as "highly effective" and "others" during the baseline period, the stability of their individual effectiveness was cross-validated in the remaining time period.

Methods: Outcomes for 10 812 patients (76.0% adults, 24.0% children and adolescents) treated by 281 therapists were included. Patients initiated treatment between January 1999 and June 2004. Mean residual change scores obtained by multiple regression were used to adjust for differences in case mix among therapists. Raw change scores as well as mean residualized change scores were compared between the 71 psychotherapists identified as highly effective (25%) and those identified as other (remaining 75%).

Results: During the cross-validation period, mean differences in residualized change score between highly effective therapists and others were statistically significant (difference = 2.8; P < .001), which corresponded to an average of 53.3% more change in raw change scores with the highly effective therapists. Results could not be explained by case mix differences in diagnosis, age, sex, intake scores, prior outpatient treatment history, length of treatment, or therapist training/experience.

Conclusion: Behavioral health outcomes for a large system of care could be significantly improved by measuring clinical outcomes and referring patients to therapists with superior outcomes.

(Am J Manag Care. 2005;11:513-520)

Managed behavioral health care organizations (MBHOs) are responsible to their customers and enrollees to ensure that psychotherapy services meet accepted standards of quality and effectiveness. To this end, MBHOs engage in a variety of credentialing and quality-improvement activities. With respect to credentials, typical MBHOs (1) verify that participating therapists have valid licenses to provide behavioral health services, (2) specify a minimum number of years of experience, and (3) require that there be no evidence of malpractice. To ensure quality, most MBHOs encourage the use of empirically supported treatments established in psychotherapy clinical trials.1,2 Under these standards, MBHO care managers review treatment plans submitted by qualified therapists and approve only those deemed appropriate. Thus, quality assurance in most MBHOs amounts to determining whether a licensed therapist can propose to a care manager a treatment plan that involves an empirically supported treatment for a specified disorder.

However, advances in research methodology have enabled researchers to more critically examine the accumulating clinical evidence upon which these quality assurance standards are based. One advancement is that of meta-analysis, a statistical method for combining results from a large number of studies, thereby permitting the investigator to draw conclusions that could not be drawn from individual studies.3 A second advancement is that of hierarchical linear modeling, an extension of multiple regression that allows for analysis of data that are hierarchical.4

Recent reviews of clinical trials based on these advancements have indicated that the commonly utilized quality assurance standards may no longer be adequate. The past 2 decades of meta-analytic studies have revealed that differences in the relative efficacy of various psychotherapies are minimal at most.5-9 Therefore, treatment plans based on empirically supported treatments for specified disorders do not ensure that the proposed treatment will be more effective than other treatments.

One could certainly argue that although treatment plans based on empirically supported treatments do not ensure the most effective treatment, this approach at least ensures that the treatment being delivered meets an acceptable standard. Despite its logical appeal, this assertion holds true only if (1) therapists are sufficiently trained to provide an empirically supported treatment for the specific disorder, (2) therapists actually deliver the treatment that they proposed, and (3) all therapists deliver these treatments with equal competence. Supposedly, the therapists' credentials speak to the first issue, whereas the therapists' performance with regard to the second issue is almost impossible for an MBHO to determine.

However, data from clinical trials do speak to the third issue: the assumption of equal competence. Reanalysis of past clinical trial data, as well as a number of more recent studies, reveals that there is a considerable amount of variability in outcomes across individual therapists, even in well-controlled clinical trials.8,10-18 This variability is far from trivial.18 Furthermore, even the most well-controlled and well-executed clinical trial to date, the National Institute of Mental Health Treatment of Depression Collaborative Research Program, found that the variance due to therapist far exceeded that of the treatment.15 Therefore, variability among therapists has become one of the most critical areas of psychotherapy research.

It is beyond the scope of this article to review the broad area of research on what may contribute to this variability in therapist outcomes. However, a recent comprehensive review of the literature in this area arrived at the conclusion that readily measurable therapist variables such as age, sex, race, years of training, and type of degree explain little of the variance in outcomes.19 Although research on inferred therapist traits, such as interpersonal style and the role of the therapeutic relationship, showed more promise in explaining treatment outcomes, the evidence is too scant to warrant any conclusion.19,20 If quality includes a domain of clinical outcomes, then quality assurance initiatives that ignore this variability in outcomes at the clinician level are unlikely to improve quality as promised.

PacifiCare Behavioral Health, Inc (PBH) differs from other national MBHOs with regard to how patient self-report outcome questionnaires are used as a critical component of its comprehensive quality-improvement program. PBH encourages its panel of psychotherapy providers to administer the outcome questionnaires at regular intervals in treatment to as many patients as possible. The PBH outcome data provide a unique opportunity to investigate this variability among therapist outcomes and its practical importance for behavioral health care management.21

PBH developed its ALERT clinical information system to help care managers and clinicians monitor and manage clinical outcomes. The first 3 authors (GSB, MJL, ERJ) participated in the development of the system, which was first implemented by PBH in 1999. The current study analyzed the outcome data contained in the ALERT system to investigate the stability of this variability in therapist outcomes. In addition, the practical implications of this variability are discussed in light of quality improvement in an MBHO environment.

METHODS

Sample Description

The ALERT database contains outcomes data for 69 503 unique patients (79 748 episodes of care) who initiated treatment during the period from January 1999 through June 2004. Of these, 46 052 were treated by 1 of 5834 psychotherapists in private practice. Patients treated at a multidisciplinary group practice are excluded from this count and from subsequent analyses because the available dataset does not permit us to identify the treating clinician. The 46 052 patients treated by an individually identifiable clinician will be referred to as the total sample.

A subset of 281 therapists was selected for inclusion in this study based on their having a sample of at least 15 cases with change scores between January 1999 and December 2002 and at least 5 cases in the subsequent cross-validation period between January 2003 and June 2004. These clinicians treated a total of 10 812 patients (study sample) during the study period. This number constitutes 23.5% of the total sample.

Overall, the study sample was highly comparable to the total sample with respect to diagnoses and test scores. Table 1 provides the breakdown by diagnostic groups. It should be noted that diagnostic data, which were obtained from the Provider Assessment Report, were only available for 33% of this sample.

Likewise, the study sample was comparable to the total patient sample with regard to sex and age. As is typical of outpatient treatment samples, approximately two thirds were female (63.5% in study sample; 64.5% in total sample), and juveniles under the age of 18 years comprised a quarter of the sample. Test scores at intake and change scores during treatment also were comparable between the study sample and total sample. Space limitations do not permit the use of tables to present these data, but the tables are available upon request.

The therapists in the study sample were comparable to the total sample of therapists with regard to age and years of experience. The mean age of the therapists in the study sample was 55 years (SD = 7 years), compared with 54 years (SD = 8 years) for the total sample. The therapists in the study sample had a mean of 22 years (SD = 7 years) of postlicensure experience, compared with 22 years (SD = 8 years) for the total sample. Female therapists comprised 70% of the study sample, compared with 63% of the total sample. With regard to licensure type, marriage and family therapists were disproportionately present in the study sample, comprising 48% of the study sample compared with 28% of the total sample of therapists. The percentage of psychologists, social workers, and other licensed mental health professionals was lower in the study sample. The reason for this disproportionate representation is unclear, unless marriage and family therapists as a whole are more inclined than other professions to use outcome measures.

Outcome Measures

The ALERT system uses 2 outcome measures: the Life Status Questionnaire (LSQ) for adults and the Youth Life Status Questionnaire (YLSQ) for children and adolescents.22-25 The YLSQ can be completed either by a parent or a guardian for younger children, or by adolescents on their own. In the remainder of the article, the abbreviation Y/LSQ will be used when referring to both measures simultaneously.

The majority of items on these measures inquire about psychiatric symptoms (primarily symptoms of anxiety and depression), while a subset of items also inquires about interpersonal relationships and functioning in daily activities. The items ask patients to indicate how often each statement is true for them over the past week, responding on a 5-point Likert scale with anchors ranging from "never" to "almost always" (values scored as 0 to 4). Higher scores indicate greater severity of symptoms, subjective distress, and/or impaired functioning.

The outcome measures demonstrate excellent psychometric properties, with a Cronbach's alpha of .93 and higher. The measures also have consistently high correlations with other well-established self-report questionnaires widely used in psychotherapy research.22,23 Both the LSQ and YLSQ have been administered to large samples of patients in treatment (clinical sample) and individuals not seeking clinical services (community sample). These samples provide normative information on the means and standard deviations of the clinical and community samples, which were used to calculate clinical cutoff scores using the method recommended by Jacobson and Truax.26 Scores at or above the clinical cutoff are considered more characteristic of individuals seeking behavioral health services.
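The Jacobson-Truax cutoff is a weighted midpoint between the clinical and community means, weighted by the two standard deviations. A minimal sketch in Python; the numeric values below are illustrative only and are not the published Y/LSQ norms:

```python
def jacobson_truax_cutoff(mean_clin, sd_clin, mean_comm, sd_comm):
    """Cutoff point c: scores at or above c are statistically more
    characteristic of the clinical than the community distribution."""
    return (sd_clin * mean_comm + sd_comm * mean_clin) / (sd_clin + sd_comm)

# Hypothetical normative values for illustration only
cutoff = jacobson_truax_cutoff(mean_clin=50.0, sd_clin=10.0,
                               mean_comm=20.0, sd_comm=8.0)
```

When the two standard deviations are equal, the formula reduces to the simple midpoint of the two means.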

Data Collection and Clinical Feedback

Therapists are asked to administer the questionnaires at sessions 1, 3, and 5 and every fifth session thereafter. Completed Y/LSQs are faxed to a central toll-free number, where optical mark recognition software is used to read the data from the completed form. These files are then uploaded to the ALERT system, which scores the questionnaires, calculates the rate of each patient's change compared with normative expectations, and checks for values on critical items (eg, self-harm, substance abuse).27-30 The system also evaluates data obtained from the clinician, such as the patient's diagnosis.

The ALERT system notifies therapists via regular mail regarding cases with high-risk indicators, drawing the therapist's attention to the test scores and critical items, and offers to authorize more sessions as needed.30

Therapists also are notified of cases with good outcomes, as evidenced by test scores within the normal range. On a quarterly basis, therapists are provided summary data on all of their cases within the past 36 months. Therapists are given no financial incentives to use the outcome questionnaires. However, in late 2002, the system was enhanced so that submission of completed outcome questionnaires resulted in an automatic authorization of additional sessions for the particular case. Authorization is granted regardless of the test scores or the responses on the critical items (eg, suicidal ideation), and this provides some incentive for clinicians to submit data.

Study Design

The design of this study utilizes a cross-validation strategy. Specifically, therapists' outcomes for patients initiating treatment between January 1999 and December 2002 were used as the baseline. This baseline period corresponds to the period prior to implementing the automated authorization process. The therapist outcomes in the following period, between January 2003 and June 2004, were used for cross-validation.

For the purpose of this study, a treatment episode was defined as a period of consecutive administrations of the Y/LSQs with no interval between administrations of more than 90 days. Therefore, if more than 90 days elapsed between 2 Y/LSQ scores, the former administration is considered to be the posttreatment score of an episode, and the latter administration is considered to be the intake score of a new episode.

The choice of a maximum lapse of 90 days between measurements to define an episode is, of course, to some extent arbitrary, although we chose this interval because it reasonably fit our collective experiences as clinicians. Different time lapses were tested, including a 180-day period. This resulted in a small decrease in the number of episodes, but otherwise no meaningful difference in the assessment of change.
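The episode rule above can be stated procedurally: sort a patient's administration dates and open a new episode whenever the gap to the previous administration exceeds 90 days. A sketch under that rule (the function name and data are illustrative, not part of the ALERT system):

```python
from datetime import date

def split_into_episodes(admin_dates, max_gap_days=90):
    """Group Y/LSQ administration dates into treatment episodes,
    starting a new episode when consecutive administrations are
    more than max_gap_days apart."""
    dates = sorted(admin_dates)
    episodes, current = [], [dates[0]]
    for d in dates[1:]:
        if (d - current[-1]).days > max_gap_days:
            episodes.append(current)  # prior date closes the old episode
            current = [d]             # this date opens a new one
        else:
            current.append(d)
    episodes.append(current)
    return episodes

# A 181-day gap between February and August splits these dates into 2 episodes
eps = split_into_episodes([date(2003, 1, 1), date(2003, 2, 1), date(2003, 8, 1)])
```

Raising `max_gap_days` to 180 would merge the two episodes, mirroring the sensitivity check described above.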

To ensure independence of observations, a case was defined as the patient's first treatment episode with at least 2 Y/LSQ scores under a single therapist. This means that for patients treated multiple times, only their first episode was included in the analysis. Cases with only 1 Y/LSQ score for an episode also were excluded because calculation of the change score requires at least 2 measurements. Finally, cases with outcome data submitted by more than 1 clinician with overlapping dates of service were excluded because of the difficulty of assigning the outcome to more than 1 clinician.

This method resulted in the inclusion of 281 therapists treating 10 812 patients. The average number of cases per therapist during the baseline period was 26.5 (SD = 12.4), with a median of 22 and a range of 15 to 78. During the cross-validation period, the average number of cases per therapist was 12.0 (SD = 8.2), with a median of 10 and a range of 5 to 73.

Therapist outcome was determined by the therapist's average residualized change score on the Y/LSQ rather than the average raw score difference between the intake and posttreatment Y/LSQ. This was done so that differences in the types of patients seen among different therapists (ie, case mix) did not confound the therapists' average outcomes.

Case mix was controlled using a multiple regression model. The residualized change score for each patient was calculated as the difference between the predicted final score (based on the case mix model) and the actual final score. Thus, if a patient's residualized score was greater than 0, that indicated that the patient improved more than what would be expected based on the particular case mix. Specifically, the following case mix variables were controlled for: intake score, age group (child, adolescent, adult), sex, diagnostic group (8 groupings based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition [DSM-IV] diagnostic code), and session number of the first assessment in the treatment episode. Use of the session number when the Y/LSQ was first administered as a predictor controls for failure to collect a baseline score at the first session.

The intake score proved to be the strongest predictor of the test score at the end of treatment, accounting for approximately 49% of the variance in the final scores. It is important to control for this variable because patients with high intake scores average more change than patients with low scores. This is in part due to regression to the mean, but the change observed with the Y/LSQ data exceeds what is expected from this statistical artifact. The other case mix variables also were predictive of outcomes, even after controlling for intake score, and thus were included as well. However, the other case mix variables (diagnosis, age, sex) only explained an additional 2% of the variance in combination.
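The residualized change score described above can be sketched as an ordinary least-squares fit: predict each patient's final score from the case mix variables, then take predicted minus actual, so that positive values mean more improvement than expected (higher Y/LSQ scores indicate greater severity). A minimal numeric sketch with NumPy; in the actual model the predictor matrix would also carry dummy-coded age group, sex, diagnostic group, and first-assessment session number, which are omitted here for brevity:

```python
import numpy as np

def residualized_change(case_mix, final_scores):
    """OLS fit of final score on case mix predictors; returns
    predicted - actual, so values > 0 indicate a better-than-expected
    outcome for that patient's case mix."""
    X = np.column_stack([np.ones(len(case_mix)), case_mix])  # add intercept
    coef, *_ = np.linalg.lstsq(X, final_scores, rcond=None)
    return X @ coef - final_scores

# Toy data: intake score as the sole case mix predictor
intake = np.array([[10.0], [20.0], [30.0], [40.0]])
final = np.array([9.0, 15.0, 20.0, 30.0])
resid = residualized_change(intake, final)
```

Because the fit includes an intercept, the residuals average to zero across the sample; a therapist's mean residualized change score therefore measures departure from the case-mix-adjusted expectation.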

Outcomes were assessed based on intent to treat rather than using predetermined criteria for treatment completion. In other words, all cases with intake and posttreatment scores were included in the evaluation of effectiveness, even if the patient left treatment after as few as 3 sessions. This was done to provide a conservative estimate of the therapists' effectiveness, rather than overestimating outcomes by assessing effectiveness based only on "successfully completed" cases.

The therapists included in the study were rank-ordered based on their residualized change scores during the baseline period, and they were classified as either highly effective or other. Based on the rankings, 71 therapists (25%) were classified as highly effective by averaging a residualized change score of 2.8 or greater. Therapist outcomes then were cross-validated using their separate sample of cases between January 2003 and June 2004.
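The classification step reduces to ranking therapists by their mean residualized change score over baseline cases and flagging the top quartile. A sketch with hypothetical therapist labels (in the study, the 2.8-point threshold simply fell out of the 25% cut):

```python
def classify_therapists(scores_by_therapist, top_fraction=0.25):
    """Rank therapists by mean residualized change score and label the
    top fraction 'highly effective' and the remainder 'other'."""
    means = {t: sum(s) / len(s) for t, s in scores_by_therapist.items()}
    ranked = sorted(means, key=means.get, reverse=True)
    n_top = max(1, round(len(ranked) * top_fraction))
    top = set(ranked[:n_top])
    return {t: ("highly effective" if t in top else "other") for t in means}

# Hypothetical baseline residualized change scores for 4 therapists
labels = classify_therapists({
    "A": [5.0, 6.0], "B": [1.0, 2.0], "C": [0.0, 1.0], "D": [-1.0, 0.0],
})
```

Cross-validation then amounts to recomputing the per-therapist means on a later, disjoint set of cases and checking whether the baseline labels still separate the groups.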

RESULTS

Table 2 presents the intake and posttreatment scores during the baseline period, broken out by therapist group (ie, highly effective and others). During the baseline period, there were statistically significant differences in the residualized change scores, as expected. The highly effective therapists averaged over 3-fold as much change per case based on raw scores (P < .001) and averaged a difference of 5.9 points in mean residualized change scores (P < .001) compared with the other therapists. The number of days between first and last test administration was similar, though the highly effective group averaged 5 days longer.

Table 3 provides the results obtained during the cross-validation period. As expected, the difference between the highly effective clinicians and the rest of the sample decreased at cross-validation due to regression toward the mean, but the remaining difference was substantial. The highly effective group continued to average greater change, with a difference of 2.8 points in the mean residualized change score (P < .001) compared with the other therapists.

To examine therapist outcomes for patients who would be deemed similar to those observed in clinical trials, analysis of the differences between the 2 groups of therapists was restricted to patients with intake scores above the clinical cutoff. Table 4 provides the results for this restricted sample.

Consistent with the previous analysis, the highly effective therapists averaged significantly better outcomes with patients above the clinical cutoff than the other therapists did, resulting in a difference of 2.7 points in the mean residualized change score (P < .001). This demonstrated that the impact of therapist effectiveness remained robust even among cases with greater acuity.

DISCUSSION AND CONCLUSIONS

The purpose of this study was to assess the variability and stability of therapist outcomes. The results provide evidence that therapists in an MBHO environment vary substantially in their patient outcomes, and that these differences are robust.

Limitations

As with any study, limitations need to be taken into account when interpreting these results. Although well-known case mix variables were controlled for, it is always possible that other unknown variables could substantially influence therapist outcomes. Regardless of the complexity or completeness of the case mix model, without random assignment of clients to therapists, it is a logical impossibility to rule out the potential impact of other unmeasured case mix variables. However, just as it seems unlikely that the case mix adjustment model is fully adequate to account for patient differences, it is equally unlikely that all of the therapist differences reported in this study are due to undetected differences in case mix, especially in light of the substantial body of published research pointing to the existence of clinician effects in controlled trials.

It is possible that the patient outcomes of a single therapist may vary significantly across different diagnoses, age groups, or some other patient characteristic. The present study did not explore this question in greater detail because of problems of cell size. Disaggregating the treatment sample into the various diagnosis groups would have meant that the sample sizes within the multiple cells for each therapist would be only a few cases. It is logical to assume that certain diagnoses may pose particular challenges and are best treated by a specialist, and it does appear likely that clinician effectiveness would vary at least across age groups. Clinician effectiveness with adults cannot be expected to necessarily translate into effectiveness with adolescents or young children.

These and related questions of therapist variability across patient types and/or treatment methods will be investigated in future studies. The problem of cell sizes will decrease going forward because of the continued rapid accumulation of data.

A potential bias in the data collection process relates to the automatic authorization process. The rate of clinician participation and Y/LSQ submissions had increased steadily between 1999 and 2002, but the implementation of the automatic authorization of additional sessions in late 2002 resulted in a dramatic increase in Y/LSQ submissions. Therefore, there is a probability that the change in the system may have biased the results in unknown ways. In addition, as would be expected in such a large system of care, there was significant variability in the rate of compliance with the data collection protocol, both at the therapist and patient levels. Not surprisingly, consistency of collection at the patient level covaried with the consistency of the therapist, confirming the commonsense view that therapists have a large impact on the likelihood that the patient will complete the questionnaire.

Thus, this variability in compliance also may have influenced the results in unknown ways. Failure to complete an assessment within the first 2 sessions resulted in capturing slightly less change over the treatment episode, but this artifact was adjusted for in the case mix model used. Beyond that artifact, there was no evidence that outcomes varied systematically with compliance. It was true that the highly effective clinicians had larger sample sizes, but analysis of claims data confirmed that this was because these clinicians treated a disproportionate number of the patients in the sample. Still, the possibility that some clinicians were selectively submitting forms for patients with good outcomes cannot be ruled out. Furthermore, the effects of providing therapists with feedback could not be assessed with these data.31,32

Undeniably, patient self-report measures provide only a single perspective, which is that of the patient. It is therefore likely that use of other perspectives, such as outcomes measured by the therapists, might have yielded different results. Therapist-rated measures are known to show more change than patient self-report measures.33 However, use of therapists' assessments of their own effectiveness poses obvious problems with regard to potential for bias.

Some may argue that use of more objective but time- and labor-intensive measures (eg, school or work attendance/performance) or batteries of measures looking at outcomes from multiple perspectives would yield more reliable assessments of outcome. On the opposite end of the spectrum, others may argue for implementing low-cost customer satisfaction surveys as outcome measures.34 However, it is unlikely that either of these approaches to assessment is appropriate for assessing real-time patient progress in psychotherapy, compared with outcome questionnaires that were specifically designed for this purpose.35-39 Although it is ideal to obtain multiple measures of outcome so as to decrease sources of measurement bias, the time pressures of real-world practice make this problematic. Thus, brief patient self-report measures that are easy to administer and score were chosen. Self-report questionnaires also have the benefit of not burdening the therapists with extra paperwork to complete, while potentially providing the therapists with clinical information otherwise not obtained.

Lastly, the current study provided no insight into what treatments were delivered by the therapists in both groups or how the treatments were delivered. Such information was beyond the scope of the available data. To this point, the importance of conducting clinical trials to empirically support treatments should not be dismissed, as potential causal effects of treatment can be determined only through experimental designs.40,41 Therefore, to claim that this study provides support for discontinuing or devaluing empirically supported treatments would be a gross misinterpretation.

In summary, there are a number of limitations to this study arising from the naturalistic nature of the data and the inherent measurement error in any attempt to measure a construct as broad as "treatment outcome." Clearly, measurement efforts across such a large system of care involving thousands of clinicians pose many challenges for both collection and interpretation of the data that are avoided entirely in a well-designed clinical trial with random assignment. Therefore, the information culled from the data must be used cautiously, with all consideration for unknown sources of measurement error, while simultaneously bearing in mind that there also is a risk to the patients if the data are not used for quality-improvement purposes.

Implications

Despite the limitations of this study, the magnitude of differences in outcome among the therapists is sufficiently large to lend credence to the proposition that outcomes could be improved by focusing on these differences. With therapists differing significantly in their effectiveness, patients are best served if the MBHO can identify and refer to effective therapists.42,43

The role of the MBHO in the outcomes-informed environment is still evolving. What responsibility, if any, do MBHOs have with regard to offering consultation to their provider networks on how to improve outcomes? Some providers may welcome such an offer, while others may reject it as an unwarranted intrusion into the patient-therapist relationship. At the very least, MBHOs have a responsibility to the therapists to provide feedback on their outcomes, while pursuing a policy of publishing the outcomes for subscriber access and encouraging further analysis of the data by independent investigators.

The outcome data are important to effective providers because these data permit them to make a strong case for the value of their services. MBHOs in the future may be valued more for their ability to steer patients to therapists with demonstrated records of effectiveness than for their current strategy of cost containment and utilization management.

These results also demonstrate the practical utility and benefits of utilizing patient self-report outcome data as part of a quality-improvement program. Direct measurement of patients' outcomes has a greater probability of leading to improved outcomes than more commonly used quality-improvement methods that focus on the method of treatment or other process variables. The data presented in this article speak to the feasibility of implementing a system of quality improvement based on use of patient self-report outcome questionnaires to identify highly effective therapists.

From the Center for Clinical Informatics, Salt Lake City, Utah (GSB); Brigham Young University, Provo, Utah (MJL); PacifiCare Behavioral Health, Santa Ana, Calif (ERJ); and the University of Utah, Salt Lake City, Utah (TM).

PacifiCare Behavioral Health, Inc (PBH) provided funding for this research as part of an ongoing program of outcomes management and quality-improvement research activities. Dr Jones is employed by PBH as vice president and chief clinical officer. Dr Brown and Dr Lambert are independent researchers who have consulted for PBH over a number of years and received compensation for this work.

Address correspondence to: G. S. (Jeb) Brown, PhD, 1821 Meadowmoor Rd, Salt Lake City, UT 84117. E-mail: jebbrown@clinical-informatics.com.

J Consult

Clin Psychol.

1. Chambless, DL, Hollon SD. Defining empirically supported therapies. 1998;66:7-18.

Clin Psychol.

2. Task Force on Promotion and Dissemination of Psychological Procedures.Training in and dissemination of empirically-validated psychological procedures:report and recommendations. 1995;48(1):3-23.

Statistical Methods for Meta-Analysis.

3. Hedges LV, Olkin I. San Diego, Calif:Academic Press; 1985.

Hierarchical Linear Models: Applications and Data

Analysis Methods.

4. Raudenbush SW, Bryk AS. 2nd ed. Thousand Oaks, Calif: Sage; 2002.

J Counseling

Psychol.

5. Ahn H, Wampold BE. Where oh where are the specific ingredients? A metaanalysisof component studies in counseling and psychotherapy. 2001;48:251-257.

Clin Psychol Sci Pract.

6. Luborsky L, Rosenthal R, Diguer L, et al. The dodo bird verdict is alive andwell—mostly. 2002;9(1):2-12.

J Consult Clin Psychol.

7. Shapiro DA, Shapiro D. Meta-analysis of comparative therapy outcome studies:a replication and refinement. 1982;92:581-604.

Great Psychotherapy Debate: Models Methods and Finding.

8. Wampold BE. Mahwah, NJ: Erlbaum; 2001.

Psychol Bull.

9. Wampold BE, Mondin GW, Moody M, et al. A meta-analysis of outcome studiescomparing bona fide psychotherapies: empirically, "all must have prizes."1997;122:203-215.

J Consult Clin Psychol.

10. Blatt SJ, Sanislow CA, Zuroff DC, Pilkonis PA. Characteristics of effectivetherapists: further analyses of data from the National Institute of Mental HealthTreatment of Depression Collaborative Research Program. 1996;64:1276-1284.

Psychother Res.

11. Crits-Christoph P, Baranackie K, Kurcias JS, et al. Meta-analysis of therapisteffects in psychotherapy outcome studies. 1991;1:81-91.

J Consult Clin Psychol.

12. Crits-Christoph P, Mintz J. Implications of therapist effects for the design andanalysis of comparative studies of psychotherapies. 1991;59:20-26.

Clin Psychol Sci Pract.

13. Elkin I. A major dilemma in psychotherapy outcome research: disentanglingtherapists from therapies. 1999;6:10-32.

J Consult Clin Psychol.

14. Huppert JD, Bufka LF, Barlow DH, Gorman JM, Shear MK, Woods SW.Therapists, therapist variables, and cognitive-behavioral therapy outcomes in amulticenter trial for panic disorder. 2001;69:747-755.

Psychother Res.

15. Kim DM, Wampold BE, Bolt DM. Therapist effects in psychotherapy: A randomeffects modeling of the NIMH TDCRP data. In press.

16. Lambert MJ, Ogles BJ. The efficacy and effectiveness of psychotherapy. In: Lambert MJ, ed. Bergin and Garfield's Handbook of Psychotherapy and Behavior Change. New York: John Wiley & Sons; 2004:139-193.

17. Luborsky L, Crits-Christoph P, McLellan T, et al. Do therapists vary much in their success? Findings from four outcome studies. Am J Orthopsychiatry. 1986;56:501-512.

18. Okiishi J, Lambert MJ, Nielsen SL, Ogles BM. Waiting for supershrink: an empirical analysis of therapist effects. Clin Psychol Psychother. 2003;10:361-373.

19. Beutler LE, Malik M, Alimohamed S, et al. Therapist variables. In: Lambert MJ, ed. Bergin and Garfield's Handbook of Psychotherapy and Behavior Change. New York: John Wiley & Sons; 2004:227-306.

20. Norcross JC. Empirically supported therapy relationships. In: Norcross JC, ed. Psychotherapy Relationships That Work. New York: Oxford University Press; 2002:3-32.

21. Brown GS, Burlingame GM, Lambert MJ, et al. Pushing the quality envelope: a new outcomes management system. Psychiatr Serv. 2001;52:925-934.

22. Lambert MJ, Hatfield DR, Vermeersch DA, et al. Administration and Scoring Manual for the LSQ (Life Status Questionnaire). Salt Lake City, UT: American Professional Credentialing Services; 2001.

23. Burlingame GM, Jasper BW, Peterson G, et al. Administration and Scoring Manual for the YLSQ. Salt Lake City, UT: American Professional Credentialing Services; 2001.

24. Vermeersch DA, Lambert MJ, Burlingame GM. Outcome Questionnaire: item sensitivity to change. J Pers Assess. 2002;74:242-261.

25. Wells MG, Burlingame GM, Lambert MJ, Hoag M. Conceptualization and measurement of patient change during psychotherapy: development of the Outcome Questionnaire and Youth Outcome Questionnaire. Psychotherapy. 1996;33:275-283.

26. Jacobson NS, Truax P. Clinical significance: a statistical approach to defining meaningful change in psychotherapy research. J Consult Clin Psychol. 1991;59:12-19.

27. Matsumoto K, Jones E, Brown GS. Using clinical informatics to improve outcomes: a new approach to managing behavioral healthcare. J Information Technol Health Care. 2003;1(2):135-150.

28. Brown GS, Jones ER, Betts W, Wu J. Improving suicide risk assessment in a managed-care environment. Crisis. 2003;24(2):49-55.

29. Brown GS, Herman R, Jones ER, Wu J. Improving substance abuse assessments in a managed care environment. Jt Comm J Qual Safety. 2004;30:448-454.

30. Brown GS, Jones ER. Implementation of a feedback system in a managed care environment: what are patients teaching us? J Clin Psychol. 2005;61(2):187-198.

31. Lambert MJ, Whipple JL, Smart DW, et al. The effects of providing therapists with feedback on patient progress during psychotherapy: are outcomes enhanced? Psychother Res. 2001;11:49-68.

32. Lambert MJ, Whipple JL, Hawkins EJ, et al. Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clin Psychol Sci Pract. 2003;10:288-301.

33. Lambert MJ, Hatch DR, Kingston MD, Edwards BC. Zung, Beck, and Hamilton Rating Scales as measures of treatment outcome: a meta-analytic comparison. J Consult Clin Psychol. 1986;54:54-59.

34. Seligman MEP. The effectiveness of psychotherapy: the Consumer Reports study. Am Psychol. 1995;50:965-974.

35. Sechrest L, McKnight P, McKnight K. Calibration of measures for psychotherapy outcome studies. Am Psychol. 1996;51:1065-1071.

36. Brock TC, Green MC, Reich DA, Evans LM. The Consumer Reports study of psychotherapy: invalid is invalid [letter]. Am Psychol. 1996;51:1083.

37. Hunt E. Errors in Seligman's "The effectiveness of psychotherapy: the Consumer Reports study" [letter]. Am Psychol. 1996;51:1082.

38. Kotkin M, Daviet C, Gurin J. The Consumer Reports mental health survey. Am Psychol. 1996;51:1080-1082.

39. Mintz J, Drake RE, Crits-Christoph P. Efficacy and effectiveness of psychotherapy: two paradigms, one science. Am Psychol. 1996;51:1084-1085.

40. Shadish WR, Matt GE, Navarro AM, et al. Evidence that therapy works in clinically representative conditions. J Consult Clin Psychol. 1997;65:355-365.

41. Shadish WR, Matt GE, Navarro AM, Phillips G. The effect of psychological therapies under clinically representative conditions: a meta-analysis. Psychol Bull. 2000;126:512-529.

42. Brown GS, Dreis S, Nace D. What really makes a difference in psychotherapy outcomes? And why does managed care want to know? In: Hubble MA, Duncan BL, Miller SD, eds. Heart and Soul of Change. Washington, DC: American Psychological Association Press; 1999:389-406.

43. Wampold BE, Brown GS. Estimating therapist variability: a naturalistic study of outcomes in private practice. J Consult Clin Psychol. In press.