
Assessing Outcomes in Child Psychiatry

The American Journal of Managed Care, April 2009, Volume 15, Issue 4

Two standardized rating scales appeared to be valid and reliable for use at admission and possibly follow-up in a child psychiatry system of care.

Objective: To report on the first year of a program using standardized rating scales within a large, multisite mental health system of care for children and to assess the validity, reliability, and feasibility of these scales.

Study Design: Naturalistic follow-up of clinicians’ ratings.

Methods: Clinicians filled out the Brief Psychiatric Rating Scale for Children (BPRS-C) and the Children’s Global Assessment Scale (CGAS) at intake and discharge/90-day follow-up for all new patients.

Results: Data were collected on 2396 patients from all 20 sites serving children in the Partners HealthCare network. Mean scores on both the BPRS-C and CGAS showed the worst functioning at inpatient sites, followed by acute residential treatment, partial hospital, and outpatient sites. All patients re-rated at discharge or 90-day follow-up showed significant improvement in scores. Inter-item reliability on the BPRS-C was acceptable, with Cronbach alphas of .78 and .81. Feasibility at intake was demonstrated: 66% of all patients had a completed form at intake. Reassessment at discharge also appeared to be feasible in more restrictive levels of care, but less so in outpatient sites, where fewer than 25% of all patients had a follow-up form.

Conclusions: This evaluation suggested that the 2 standardized measures appeared to be valid and reliable as part of routine intake and discharge/follow-up in a large child psychiatry system of care. Whether these measures are truly clinically useful remains to be demonstrated because there is at present no gold standard for assessing the quality of treatment or change caused by it.

(Am J Manag Care. 2009;15(4):210-216)

Two standardized psychiatric rating scales used within a large, diverse, multisite mental health system of care for children were assessed after their first year of use.

  • The scales were determined to be valid, reliable, and feasible for assessing admissions as required by a managed care insurer.
  • Collecting data at follow-up/discharge was possible in about two-thirds of inpatient, partial hospital, and acute residential treatment cases, but in fewer than 25% of cases in outpatient settings.
  • Patient scores on both measures showed statistically significant improvements from admission/intake to discharge/follow-up.

Increasingly, individual insurers as well as governmental and regulatory agencies are requiring mental health clinicians to use standardized instruments to assess patients as a part of clinical care.1 A review by Busch and Sederer1 supports the measurement of outcomes in psychiatry but stresses the importance of using measures that are feasible in clinical settings as well as valid and reliable. We describe the experiences of a large and diverse child mental health system of care, Partners Psychiatry and Mental Health (PPMH), in the first year of implementing a uniform set of outcome measures across all sites at all levels of care. We focused on (1) the extent to which the outcome measures were implemented successfully within the PPMH system and (2) the validity and reliability of the measures themselves as they pertain to this setting.

To comply with contractual terms from the Commonwealth of Massachusetts, the Massachusetts Behavioral Health Partnership (a Medicaid managed care carve-out company) required providers to use standardized, validated instruments to assess and track outcomes for all new patients. Beginning July 1, 2005, these patients had to be rated at intake/admission and at regular 90-day follow-up intervals and/or discharge. The mandate also required clinicians to use the information gathered from these forms in patient feedback and treatment planning.

In order to gather systemwide data, all PPMH sites agreed to utilize the same instruments and procedures. In choosing measures, a steering committee of staff from the sites considered ease of use, low cost (public domain preferred), and broad applicability across a wide range of ages, diagnoses, and levels of care. After assessing the available options, the committee chose 2 measures: the Brief Psychiatric Rating Scale for Children (BPRS-C)2,3 as a measure of symptom severity and the Children’s Global Assessment Scale (CGAS)4 as a measure of overall functioning.

The BPRS-C has been used in a number of studies of research and clinical populations, including patients with bipolar disorder, schizophrenia, and psychotic disorders, as well as other studies requiring a multidimensional assessment of symptoms.5-9 The measure consists of 21 discrete symptom areas, each rated for severity10,11 along a 7-point Likert scale. A total score ranging from 0 to 126 is computed by summing all 21 items, with higher scores indicating greater symptom severity. Changes in this total score are used when evaluating pretreatment to posttreatment change.11 The reliability and consistency of the BPRS-C have been tested over time, and it has become an integral measure in diverse psychiatric venues.10-14 The CGAS is a practical complement to the BPRS-C because it is used for the assessment of “social and cognitive competence rather than symptoms.”15 PPMH added the CGAS as a second measure in order to obtain an easily understood rating of overall functioning. The CGAS has been used in research for more than 2 decades and is very similar to Axis V of the Diagnostic and Statistical Manual of Mental Disorders taxonomy.3,16-18 Several studies have used the CGAS to track patients over time.15,19,20
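For readers who want to operationalize the scoring described above, the following is a minimal sketch (in Python) of the BPRS-C total-score arithmetic: 21 items, each rated 0 to 6 on the 7-point scale, summed to a 0-126 total, with change assessed as the difference between administrations. The function name and example ratings are illustrative and are not part of the study materials.

```python
# Illustrative only: the function name and example ratings below are not from
# the article; they simply restate the published scoring rule.

def bprsc_total(item_ratings: list[int]) -> int:
    """Sum 21 BPRS-C item ratings (each 0-6) into a 0-126 total severity score."""
    if len(item_ratings) != 21:
        raise ValueError("BPRS-C requires exactly 21 item ratings")
    if any(not 0 <= r <= 6 for r in item_ratings):
        raise ValueError("Each item must be rated 0-6 on the 7-point scale")
    return sum(item_ratings)

# Pre/post change is the difference between totals; a negative change
# (lower total at follow-up) indicates reduced symptom severity.
intake_total = bprsc_total([3, 2, 4, 1, 0, 2, 3, 5, 1, 0, 2, 3, 1, 0, 4, 2, 1, 3, 0, 2, 1])
followup_total = bprsc_total([1, 1, 2, 0, 0, 1, 2, 3, 0, 0, 1, 2, 0, 0, 2, 1, 0, 1, 0, 1, 0])
change_score = followup_total - intake_total
```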

METHODS

Measures

We embedded the BPRS-C and CGAS in an Outcomes Rating Form that also included several other fields that we needed for this project. The Outcomes Rating Form is a single 2-sided measure that contains the BPRS-C, CGAS, and 10 other variables about the treatment facility, patient, and clinician. Individual clinicians are responsible for filling out all fields; they select their answers by filling in a blank circle on the scannable form. Other information collected on each form includes medical record number, date of administration, patient’s age and sex, patient’s insurance provider, hospital location, and an indication of whether the form is for an initial visit/admission, a quarterly follow-up, or a discharge visit. Clinical fields include CGAS score, primary and secondary Axes I and II diagnoses, and the 21 BPRS-C items. To ensure reliability, clinicians were provided with the descriptive anchors for the BPRS-C developed by Hughes and colleagues2 and level descriptions for the CGAS as developed by Shaffer and colleagues4 on separate pieces of paper, and received on-site training in how to apply them.
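As a rough illustration of the data captured on each Outcomes Rating Form, the sketch below models the fields listed above as a simple data structure. The field names and types are assumptions made for illustration; the actual instrument is a 2-sided scannable paper form, not software.

```python
# A hypothetical data model mirroring the fields described in the text;
# names and types are illustrative assumptions, not the form's actual schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class OutcomesRatingForm:
    medical_record_number: str
    administration_date: date
    visit_type: str                    # "intake", "quarterly_follow_up", or "discharge"
    patient_age: int
    patient_sex: str
    insurance_provider: Optional[str]
    hospital_location: str
    cgas_score: int                    # single global-functioning rating (CGAS)
    primary_axis_i: Optional[str]      # primary Axis I diagnosis
    secondary_axis_i: Optional[str]
    primary_axis_ii: Optional[str]
    secondary_axis_ii: Optional[str]
    bprsc_items: list[int] = field(default_factory=list)  # 21 severity ratings, 0-6
```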

Data Collection

During spring 2005, before the commencement of the outcomes initiative, members of the Child Outcomes committee traveled to all of the PPMH sites to provide clinicians with background on the initiative and instruction in the procedures and the use of the 2 measures. An off-site professional vendor was selected to scan and process the standardized forms. Beginning on July 1, 2005, completed forms were mailed on a monthly basis from each site to the vendor, who then provided data files to the Institute for Health Policy, a healthcare research organization within 1 of the PPMH hospitals. The Institute for Health Policy maintains the database and is primarily responsible for data analyses and reporting on issues of compliance and data accuracy. Administrative personnel generally collect forms, and at many sites those clinicians with outstanding forms are sent reminders.

Setting

Pediatric patients are seen at 20 PPMH sites. These sites are part of academic medical centers (Brigham and Women’s, McLean, and Massachusetts General hospitals), community hospitals (North Shore Medical Center), and community health centers affiliated with these institutions (Massachusetts General Hospital Charlestown, Chelsea, Revere, and North End; Brigham and Women’s South Jamaica Plain and Brookside; North Shore Boston Street and Lafayette Street clinics). The level-of-care array includes inpatient (McLean Franciscan Hospital and North Shore Medical Center Hunt units); combined partial hospital and acute residential treatment (McLean units in Belmont, Brockton, and at the Franciscan Hospital, and the Klarman Eating Disorders Center); freestanding partial hospital (North Shore); and outpatient clinics at 11 of these locations. All sites are in eastern Massachusetts and most are affiliated in some way with the Harvard Medical School. The sites range in size from 2 to 20 full-time-equivalent clinicians.

All patients who were admitted to inpatient, partial hospital, or acute residential treatment, or who had initial visits to outpatient sites between July 1, 2005, and June 30, 2006, were included in the study sample. Patient discharges and follow-up visits after June 30, 2006, that could be matched to admission/initial visits occurring during our study period also were included. The study was reviewed by the Partners Institutional Review Board and classified as exempt.

Because we wanted to assess the degree of compliance with the requirement of obtaining an Outcomes Rating Form for each admission/discharge, we had to calculate the denominator of all new cases for the entire system. We combined data from several PPMH billing databases to determine the total number of admissions and intakes for all sites between July 2005 and the end of June 2006.

Data Analysis

The characteristics of patients participating in our initiative were assessed using information from intake/admission forms. Intake and follow-up completion rates across levels of care were used to assess feasibility. As noted above, completion rates for intakes/admissions were calculated by comparing the number of completed forms with the expected number based on billing data. Follow-up completion rates were calculated by comparing the number of patients with both intake/admission forms and follow-up/discharge forms with the number of patients with intake/admission forms.
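The two feasibility denominators described above reduce to simple ratios. The sketch below restates them (the helper names are hypothetical), using the systemwide intake figures reported in the Results as a check.

```python
# Hypothetical helpers restating the completion-rate definitions in the text.

def intake_completion_rate(intake_forms_received: int, expected_admissions: int) -> float:
    """Completed intake/admission forms over billing-based expected admissions."""
    return intake_forms_received / expected_admissions

def follow_up_completion_rate(patients_with_both_forms: int,
                              patients_with_intake_form: int) -> float:
    """Patients with both forms over patients who had an intake/admission form."""
    return patients_with_both_forms / patients_with_intake_form

# Systemwide check against the reported figures: 2396 rated of 3636 seen ~= 0.66.
print(round(intake_completion_rate(2396, 3636), 2))  # 0.66
```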

We hypothesized that overall validity of the 2 measures would be demonstrated if there were (1) an improvement in score over the course of treatment within each level of care; (2) a significant difference in impairment score on each measure according to the level of care (inpatient, partial hospital, acute residential treatment, outpatient); and (3) cross-measure agreement between the 2 instruments. For the BPRS-C, reliability was explored via the Cronbach alpha statistic. Because the CGAS is a single-item rating, inter-item reliability could not be calculated. Changes in scores over the course of treatment were assessed at the patient level by determining the difference between follow-up/discharge ratings and intake/admission ratings for both the BPRS-C and CGAS, with significance testing using paired t tests. Differences in intake scores across levels of care were assessed using analysis of variance and contrast tests. We assessed cross-measure agreement between the 2 tools at both the intake/admission visit and the 90-day follow-up/discharge using Pearson product moment correlations.
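For illustration, the sketch below runs the same family of analyses on synthetic data using NumPy/SciPy. The article does not name its statistical software, none of the numbers below are study data, and the post hoc contrast tests used in the study are not reproduced here.

```python
# Synthetic-data sketch of the analyses described above (Cronbach alpha,
# paired t tests, one-way ANOVA across levels of care, Pearson correlations).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cronbach_alpha(items: np.ndarray) -> float:
    """Inter-item reliability for an (n_subjects x n_items) rating matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Inter-item reliability of the 21 BPRS-C items at intake (synthetic ratings 0-6).
bprsc_items_t1 = rng.integers(0, 7, size=(200, 21))
alpha = cronbach_alpha(bprsc_items_t1)

# Change over treatment: paired t test on intake vs. follow-up/discharge totals.
bprsc_total_t1 = bprsc_items_t1.sum(axis=1)
bprsc_total_t2 = np.clip(bprsc_total_t1 - rng.integers(0, 15, size=200), 0, 126)
t_paired, p_paired = stats.ttest_rel(bprsc_total_t1, bprsc_total_t2)

# Differences in intake CGAS scores across the 4 levels of care (one-way ANOVA).
cgas_by_level = [rng.normal(loc=m, scale=10, size=50) for m in (35, 45, 48, 55)]
f_stat, p_anova = stats.f_oneway(*cgas_by_level)

# Cross-measure agreement: Pearson correlation of CGAS and BPRS-C at one time point.
cgas_t1 = np.concatenate(cgas_by_level)
r, p_corr = stats.pearsonr(cgas_t1, bprsc_total_t1)
```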

RESULTS

During the first year of this project, across all 20 sites of the Partners Child Psychiatry and Mental Health system, 3636 children and adolescents were seen for intake/admission. Of these children, 2396 were rated using the standardized Outcomes Rating Form for the CGAS/BPRS-C. As shown in Table 1, 508 of these forms (21%) were from inpatient sites, 311 (13%) were from partial hospital sites, 555 (23%) were from acute residential treatment sites, and 1022 (43%) were from outpatient sites. Duplicate record numbers were flagged and removed from the dataset if they appeared by date to represent duplicate ratings of the same patient for the same period of time.

Across all levels of care the mean age of patients was 13.4 years. Systemwide there were 1308 (55%) females and 1075 (45%) males. Table 1 shows that about one-third of all patients (33.6%) were covered by some form of commercial Blue Cross Blue Shield product, another 22% by other commercial insurers, and nearly one-third (29%) were covered by Medicaid. In about 16% of all cases, the insurance could not be categorized or was missing.

Table 2 shows the number of admission/intake forms expected according to Management Information System billing data and the number and percentage of forms actually received for each level of care. As shown, intake form return rates ranged from 63% to 70% across the 4 levels of care, with overall intake/admission returns at 66% systemwide.

Table 2 also shows the number of follow-up/discharge forms received at each level of care for patients who had an Outcomes Rating Form completed at admission/intake. For inpatient, partial hospital, and acute residential treatment settings, the discharge completion rate ranged from 65% to 79%. At outpatient sites the follow-up completion rate was much lower (22%). Systemwide, the overall rate of follow-up/discharge Outcomes Rating Forms was 49%. It was not possible to calculate the treatment interval between intake and follow-up because the field on the form asked for the date of form completion rather than the date of follow-up/discharge. In the outpatient sites the forms were to be given at approximately 90 days into treatment, whereas at inpatient, partial hospital, and acute residential treatment sites the forms were to be completed on the day of discharge, yielding pre/post intervals that could vary considerably.

Given the lower rates of completion for follow-up ratings, within each level of care the full sample at intake was compared with the smaller sample of patients who completed forms at follow-up as well as intake for each of the 3 major demographic variables. Although statistically significant differences were observed for more than half (7/12) of the comparisons, all the differences were small (±10%) and did not appear to show a clear pattern when compared across levels of care. For inpatients there were no significant demographic differences for cases seen for admission and discharge versus those seen for admission alone; for outpatients, only insurance type revealed a significant difference (patients with Medicaid were significantly less likely to have completed follow-up forms). In acute residential treatment and partial hospital cases, however, patients with Medicaid were more likely to complete discharge forms. For acute residential treatment patients, females were less likely to have discharge forms; for partial hospital patients, females were more likely to have discharge forms. For both of these levels of care, patients over age 18 years were less likely to have discharge forms.

Table 3 shows the mean scores, the changes in mean scores, and the significance levels of the paired t tests comparing pretest and posttest data for subjects with valid BPRS-C and CGAS data at both intake/admission and follow-up/discharge. Time 1 means are for only those 1185 subjects who also have time 2 scores (paired data within subjects), not the full sample of 2396 who completed forms at time 1. The number of subjects at time 2 is much smaller, due primarily to the large number of outpatients who did not return for follow-up after intake and/or to missing data. The means are given for each level of care separately.

As the table shows, mean scores for both the BPRS-C and CGAS showed significant improvements over time (P <.001 for all) within all 4 levels of care. Paired t tests demonstrate that patients in inpatient, acute residential treatment, partial hospital, and outpatient treatment showed statistically significant improvement in functioning (as measured by mean increases in time 2 CGAS scores), as well as a decrease in symptoms (as measured by mean decreases in time 2 BPRS-C scores), from intake/admission to 90-day follow-up/discharge.

Table 3 also can be viewed from another perspective, illustrating the way in which BPRS-C and CGAS mean scores differed across levels of care, with poorer functioning at more restrictive levels of care. Contrast analyses using generalized linear models show that mean initial CGAS scores were significantly different among all levels of care and that mean initial BPRS-C scores were significantly different between any pair of settings except between the acute residential treatment and partial hospital settings. Specific contrast analysis significance levels are listed in footnote b of Table 3.

Table 4 shows the correlations between the BPRS-C and CGAS scores within each level of care and in the overall sample at intake/admission (time 1) and at follow-up/discharge (time 2). As in Table 3, data shown are only for subjects who had scores at both time 1 and time 2. At intake/admission these 2 measures showed significant correlations with each other ranging from r = -.34 to r = -.47 for the 4 different levels of care. At follow-up/discharge the correlations between CGAS time 2 and BPRS-C time 2 also were significant and in fact were somewhat higher, ranging from r = -.59 to r = -.69 at the 4 levels of care.

Reliability

For the BPRS-C, the measure clinicians used to rate the symptoms of their patients, inter-item reliability was assessed using the Cronbach alpha statistic. Values of .78 for initial assessments and .81 for follow-up assessments demonstrate the internal consistency of clinician ratings, suggesting that the BPRS-C was a reliable instrument in this population.

DISCUSSION

Data from the first 12 months of a systemwide outcomes measurement initiative confirm that it is possible to implement procedures using standardized instruments to rate symptom severity and level of functioning at intake/admission and to collect follow-up/discharge data across the full range of facilities for mental health care for children from inpatient to outpatient. All 20 sites in the current sample were able to incorporate standardized rating forms into their intake procedures, with 66% of all eligible intake cases having assessment forms completed during the first year of the initiative.

Follow-up assessment also appeared to be feasible, at least in the more restrictive levels of care. For inpatient, acute residential treatment, and partial hospital patients, the rates of discharge form completion ranged from about two-thirds to about three-quarters of all admissions, whereas in the outpatient settings follow-up forms were completed for fewer than one-quarter of all intakes. Determining the true feasibility of follow-up form completion in outpatient settings proved to be quite difficult. Calculating a valid follow-up form completion rate for outpatient cases was impossible with the data we had because the denominator of all patients who actually entered treatment was not known. Because data on the number of visits for each case were not collected, it was not possible to tell which patients dropped out after just 1 or 2 intake visits.

An unpublished report titled “Measuring outcomes in outpatient child psychiatry: the contribution of electronic technologies and parent report” by J. M. Murphy and colleagues (Massachusetts General Hospital, 2008) provides additional information on completion of follow-up forms. In 1 large clinic where the actual number of visits could be tracked for each patient, an examination of cases that did not have follow-up forms showed that about half of all children seen for intake did not return for more than 1 or 2 additional visits. Because clinicians were not required to fill out follow-up forms for cases unless there had been at least 3 visits, the other cases would have to be excluded from a valid denominator. Using this figure to estimate the true denominator for outpatient follow-up forms in this 1 outpatient clinic would have led to a completion rate of about 40%. It also is important to note that when the above outpatient setting switched to an electronic data collection system, it was possible to obtain follow-up forms for more than 80% of cases. Both of these findings suggest that it is possible to make follow-up assessments more feasible in outpatient settings.
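The denominator adjustment can be illustrated with hypothetical counts chosen to mirror the proportions described above; the figures below are not the clinic's actual data.

```python
# Illustrative arithmetic only: hypothetical counts approximating the text.
intakes = 1000                      # hypothetical outpatient intakes
raw_follow_up_forms = 220           # ~22% raw follow-up completion, as in Table 2
eligible = intakes * 0.5            # ~half of intakes reached the 3-visit threshold
adjusted_rate = raw_follow_up_forms / eligible   # 0.44, near the ~40% cited above
```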

Validity of the measures was supported by several findings. First, scores obtained here were comparable to those reported for patients receiving similar levels of care in other studies. Second, in the current study, the 2 measures used were significantly correlated with each other at both intake and follow-up. Third, scores on both measures improved significantly from intake to follow-up in all 4 levels of care. Fourth, scores on both measures showed significantly more impairment in more restrictive levels of care compared with less restrictive levels of care. Finally, a reasonably high level of inter-item reliability was shown for the BPRS-C at both administrations.

Strengths and Limitations of the Study and the Program

Strengths of this study include a relatively large and representative sample drawn from a relatively large and diverse child psychiatry system of care, high rates of form completion at intake under real-world conditions, and the validity and reliability of the measures used. Limitations include lower levels of form completion at time 2, especially in outpatient settings, and lack of information on compliance with the insurers’ other mandates to incorporate score data into treatment planning and patient feedback. The fact that both measures relied only on clinician report to assess improvement leaves open the question as to the degree to which such reports could be biased.

Strengths of the program were that it was able to incorporate outcome measures as a standard of care in a large and diverse system and that it has been sustained for more than 3 years. Limitations included the time burden the initiative placed on clinicians and administrative staff. Moreover, many clinicians reported that because they themselves provided all of the information on the rating forms, the scores did not give them any new information that would make a significant contribution to treatment planning. As noted by Busch and Sederer,1 the lack of benefit to individual clinicians and patients was probably a negative incentive for completing the forms. Nevertheless, the Child Outcomes committee concluded that the clinician-completed BPRS-C and CGAS should still be used as a systemwide threshold measure, with the possibility that parent-completed instruments and/or diagnosis-specific measures might need to be added in the future. Such additions would have to be weighed carefully, considering the additional burden for clinicians and administrative staff.

In the project’s first year, some unexpected benefits as well as burdens were identified. Data from the project gave the system a yardstick to assess functioning and symptom severity at different levels of care (inpatient, acute residential treatment, partial hospital, and outpatient), which has enabled the psychiatry department to provide quality assurance data for a number of administrative reports.

Although it is not possible from the data in this study to determine the extent to which these measures may have been used to provide data for treatment planning or patient feedback (the insurer’s other 2 mandates), it was clear that the original procedure of using paper and pencil forms mailed out for off-site scoring did not lend itself to providing the kind of rapid feedback that would be most useful for these purposes. However, use of many of the newer types of electronic technology (eg, input of the measures through kiosks, fax backs, tablets, or digital pens) could speed up the process considerably. Several of these methods already have been pilot-tested at 1 of the PPMH sites and have demonstrated an ability to substantially increase outpatient form completion rates at both intake and follow-up (Murphy et al, unpublished report, Massachusetts General Hospital, 2008). With or without these methods, because other major insurers now are mandating their own outcomes measures, it seems likely that the use of such measures is here to stay.

Author Affiliations: From the McLean Hospital (JG, RJB, MP, RLB), Belmont, MA; the Department of Psychiatry (JG, RJB, MP, CV, MJ, JMM), Harvard Medical School, Boston, MA; the Department of Psychiatry (RAC), North Shore Medical Center, Salem, MA; the MGH Institute for Health Policy (CV, ZL), Boston, MA; the Department of Psychiatry (NTK, RLB, MJ, JMM), Massachusetts General Hospital, Boston, MA; Partners Psychiatry and Mental Health (SJD, KJS), Boston, MA; and the Department of Psychiatry (MJ), Newton Wellesley Hospital, Newton, MA.

Funding Source: None reported.

Author Disclosure: The authors (JG, RJB, RAC, MP, CV, NTK, RLB, ZL, MJ, SJD, KJS, JMM) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (JG, RJB, RAC, MP, NTK, ZL, MJ, SJD, KJS, JMM); acquisition of data (RJB, MP, NTK, RLB, JMM); analysis and interpretation of data (JG, RJB, RAC, MP, CV, NTK, RLB, ZL, SJD, KJS, JMM); drafting of the manuscript (RAC, MP, NTK, RLB, MJ, JMM); critical revision of the manuscript for important intellectual content (JG, RJB, MP, CV, RLB, ZL, MJ, SJD, KJS, JMM); statistical analysis (CV, ZL, JMM); provision of study materials or patients (RJB, RLB, JMM); administrative, technical, or logistic support (RJB, RLB, MJ, JMM); and supervision (CV, MJ).

Address correspondence to: J. Michael Murphy, Department of Psychiatry, Massachusetts General Hospital, 55 Fruit St, YAW 6A, Boston, MA 02114. E-mail: mmurphy6@partners.org.

1. Busch AB, Sederer LI. Assessing outcomes in psychiatric practice: guidelines, challenges, and solutions. Harv Rev Psychiatry. 2000;8(6):323-327.

2. Hughes CW, Rintelmann J, Emslie GJ, Lopez M, MacCabe N. A revised anchored version of the BPRS-C for childhood psychiatric disorders. J Child Adolesc Psychopharmacol. 2001;11(1):77-93.

3. Massachusetts Behavioral Health Partnership (MBHP). Clinical outcomes management protocol: performance specifications and phase-in timelines. http://www.masspartnership.com/provider/outcomesmanagement/Outcomesfiles/Quality%20Alert%2010_%20Final.pdf. Accessed March 24, 2009.

4. Shaffer D, Gould MS, Brasic J, et al. A Children’s Global Assessment Scale (CGAS). Arch Gen Psychiatry. 1983;40(11):1228-1231.

5. DelBello MP, Findling RL, Kushner S, et al. A pilot controlled trial of topiramate for mania in children and adolescents with bipolar disorder. J Am Acad Child Adolesc Psychiatry. 2005;44(6):539-547.

6. Lachar D, Randle SL, Harper R, et al. The Brief Psychiatric Rating Scale for Children (BPRS-C): validity and reliability of an anchored version. J Am Acad Child Adolesc Psychiatry. 2001;40(3):333-340.

7. Ross RG, Novins D, Farley GK, Adler LE. A 1-year open-label trial of olanzapine in school-age children with schizophrenia. J Child Adolesc Psychopharmacol. 2003;13(3):301-309.

8. Stayer C, Sporn A, Nitin G, et al. Multidimensionally impaired: the good news. J Child Adolesc Psychopharmacol. 2005;15(3):510-519.

9. Zalsman G, Carmon E, Martin A, Bensason D, Weizman A, Tyano S. Effectiveness, safety, and tolerability of risperidone in adolescents with schizophrenia: an open-label study. J Child Adolesc Psychopharmacol. 2003;13(3):319-327.

10. Overall JE, Gorham DR. The Brief Psychiatric Rating Scale. Psychol Rep. 1962;10:799-812.

11. Overall JE, Pfefferbaum B. The Brief Psychiatric Rating Scale for Children [Brief Reports and Reviews]. Psychopharmacol Bull. 1982;18(2):10-16.

12. Gabbard GO, Coyne L, Kennedy LL, et al. Interrater reliability in the use of the Brief Psychiatric Rating Scale. Bull Menninger Clin. 1987;51(6):519-531.

13. Rufino AC, Uchida RR, Vilela JA, Marques JM, Zuardi AW, Del-Ben CM. Stability of the diagnosis of first-episode psychosis made in an emergency setting. Gen Hosp Psychiatry. 2005;27(3):189-193.

14. Gale J, Pfefferbaum B, Suhr MA, Overall JE. The Brief Psychiatric Rating Scale for Children: a reliability study. J Clin Child Psychol. 1986;15:341-345.

15. Schorre BE, Vandvik IH. Global assessment of psychosocial functioning in child and adolescent psychiatry. A review of three unidimensional scales (CGAS, GAF, GAPD). Eur Child Adolesc Psychiatry. 2004;13(5):273-286.

16. Bird HR, Canino G, Rubio-Stipec M, Ribera JC. Further measures of the psychometric properties of the Children’s Global Assessment Scale. Arch Gen Psychiatry. 1987;44(9):821-824.

17. Winters NC, Collett BR, Myers KM. Ten-year review of rating scales, VII: scales assessing functional impairment. J Am Acad Child Adolesc Psychiatry. 2005;44(4):309-338.

18. Dyrborg J, Larsen F, Nielsen S, Byman J, Buhl Nielsen B, Gautre-Delay F. The Children’s Global Assessment Scale (CGAS) and Global Assessment of Psychosocial Disability (GAPD) in clinical practice: substance and reliability as judged by intraclass correlations. Eur Child Adolesc Psychiatry. 2000;9(3):195-201.

19. Marriage K, Petrie J, Worling D. Consumer satisfaction with an adolescent inpatient psychiatric unit. Can J Psychiatry. 2001;46(10):969-975.

20. Sourander A, Helenius H, Piha J. Child psychiatric short-term inpatient treatment: CGAS as follow-up measure. Child Psychiatry Hum Dev. 1996;27(2):93-104.
