A methodologically sound, empirically based approach to creating peer groupings can and should be adapted to fit the setting of nursing homes.
Publicly reported performance data for hospitals and nursing homes are becoming ubiquitous. For such comparisons to be fair, facilities must be compared with their peers.
Objectives: To adapt a previously published methodology for developing hospital peer groupings so that it is applicable to nursing homes and to explore the characteristics of “nearest-neighbor” peer groupings.
Study Design: Analysis of Department of Veterans Affairs administrative databases and nursing home facility characteristics.
Methods: The nearest-neighbor methodology for developing peer groupings involves calculating the Euclidean distance between facilities based on facility characteristics. We describe our steps in selecting facility characteristics, characterize the nearest-neighbor peer groups, and compare them with peer groups derived through classical cluster analysis.
Results: The facility characteristics most pertinent to nursing home groupings were found to be different from those that were most relevant for hospitals. Unlike classical cluster groups, nearest-neighbor groups are not mutually exclusive, and the nearest-neighbor methodology resulted in nursing home peer groupings that were substantially less diffuse than nursing home peer groups created using traditional cluster analysis.
Conclusions: It is essential that healthcare policy makers and administrators have a means of fairly grouping facilities for the purposes of quality, cost, or efficiency comparisons. In this research, we show that a previously published methodology can be successfully applied to a nursing home setting. The same approach could be applied in other clinical settings such as primary care.
Am J Manag Care. 2013;19(11):933-939

Comparison of healthcare facilities can be done for assessment of efficiency, as a step in quality improvement, or for resource allocation purposes, to name a few reasons. Our methodology, which results in peer groups of facilities with similar characteristics, is a feasible alternative to classical cluster analysis.
Given the competitive nature of the healthcare industry, as well as continually rising healthcare costs, administrators of healthcare systems are increasingly interested in evaluating the quality of care provided in their system, as well as the efficiency of resource use. To do these evaluations, administrators often compare quality of care or efficiency of performance across healthcare systems or across units within a single healthcare system. Comparisons across units within a system can be useful in determining where resources should be allocated, where incentives or bonuses might be warranted, or where quality improvement is needed. However, to ensure that comparisons across units are equitable, it is important that the units or facilities that are being compared are similar to one another (peer facilities or peers).
In the research presented here, we consider the identification of peers among nursing home facilities within the Department of Veterans Affairs (VA) healthcare system. We used a methodology that we have previously developed for identification of peers among VA hospitals, modified to accommodate the special characteristics of VA nursing homes.1 Department of Veterans Affairs nursing homes (VANHs) are quite divergent in their characteristics; thus, it is important that comparable peers be identified for equitable comparisons to be made across facilities. This can be done using a peer grouping methodology. By determining appropriate VANH peers, we can facilitate the equitable benchmarking of nursing homes in areas of quality or financial evaluation and comparisons.
Here we describe our “nearest-neighbor” methodology for developing VANH peer groupings. We then explore the characteristics of the VANH peer groups developed by using the nearest-neighbor methodology and compare them with peer groups developed by using traditional cluster analysis. We emphasize that this methodology is agnostic to healthcare setting (ie, VA vs non-VA) and can be applied to a wide variety of settings.
Development of Peer Facilities Using Nearest-Neighbor and Traditional Cluster Methodologies
We included all 130 nursing facilities in the VA in our analyses. To address the widely disparate scales of the measures that we use, all variables were standardized by converting each measure to a z score for the population. With the nearest-neighbor methodology, the software program Clustan (Clustan Software, Edinburgh, UK) was used to calculate the general squared Euclidean distance (GSED) coefficient, which is a multidimensional distance between all pairs of facilities based on the selected variables. Clustan allows for a combination of both continuous and binary measures. The GSED coefficient for the distance between any 2 facilities i and j is calculated as:
d_ij^2 = Σ_k [ w_ijk (x_ik − x_jk)^2 ] / Σ_k w_ijk
where x_ik is the value of variable k for facility i, and w_ijk is a weight of 1 or 0 depending on whether or not the comparison is valid for the kth variable. If differential variable weights are specified, w_ijk is the weight of the kth variable, or 0 if the comparison is not valid. We report the Euclidean distance, which is the square root of the GSED.
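As an illustrative sketch (not the Clustan implementation itself), the GSED can be computed as follows. The facility measures and weights below are hypothetical; we assume measures have already been converted to z scores and that a missing value marks an invalid comparison (w_ijk = 0):

```python
import numpy as np

def gsed(x_i, x_j, w):
    """General squared Euclidean distance between facilities i and j.

    x_i, x_j: 1-D arrays of z-scored facility measures; np.nan marks a
    measure for which the comparison is not valid (weight forced to 0).
    w: per-variable weights (1 unless a variable is weighted higher).
    """
    valid = ~np.isnan(x_i) & ~np.isnan(x_j)
    wk = np.where(valid, w, 0.0)          # w_ijk = 0 for invalid comparisons
    diff2 = np.zeros_like(wk)
    diff2[valid] = (x_i[valid] - x_j[valid]) ** 2
    return (wk * diff2).sum() / wk.sum()

# Hypothetical facilities with 3 standardized measures; the third measure
# is missing for facility b, so that comparison is dropped.
x_a = np.array([0.5, -1.2, 0.3])
x_b = np.array([0.1, -0.2, np.nan])
weights = np.array([2.0, 1.0, 1.0])
euclidean = np.sqrt(gsed(x_a, x_b, weights))  # the distance actually reported
```

The square root is taken at the end because, as noted above, the reported distance is the Euclidean distance rather than its square.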
We identified the facilities most closely surrounding each reference facility in the multidimensional space, providing each facility with a customized group of similar facilities. In this way, there were the same number of peer groups as there were facilities.
For comparison with this methodology, we generated peer groups using a standard 2-stage cluster analysis technique.2 We used the same variables and data as in the nearest-neighbor approach. Ward’s method of hierarchical clustering was used to generate cluster seeds.3 We then used these seeds as input to the standard k-means iterative algorithm for cluster analysis. Based on the R2 statistics, which represent the proportion of variation in distance that is explained by the clustering, we elected to create 8 peer groups from the set of VANHs.
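The 2-stage technique can be sketched as follows. This is a simplified illustration on synthetic data, not the study's actual analysis: it uses SciPy's Ward linkage for stage 1 and a hand-rolled k-means loop for stage 2, and the data and cluster count are hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def two_stage_clusters(X, k, iters=25):
    """Two-stage clustering: Ward's hierarchical method generates the
    seeds, which then initialize a standard k-means iteration."""
    # Stage 1: Ward's method, cut into k clusters to obtain seed centroids
    labels = fcluster(linkage(X, method="ward"), t=k, criterion="maxclust")
    seeds = np.array([X[labels == c].mean(axis=0) for c in range(1, k + 1)])
    # Stage 2: k-means refinement starting from the Ward seeds
    for _ in range(iters):
        assign = ((X[:, None, :] - seeds[None, :, :]) ** 2).sum(-1).argmin(1)
        seeds = np.array([X[assign == c].mean(axis=0) if (assign == c).any()
                          else seeds[c] for c in range(k)])
    return assign

# Two well-separated synthetic "facility" groups should be recovered
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(10, 0.1, (5, 2))])
labels = two_stage_clusters(X, 2)
```

In practice the number of clusters (8 in this study) would be chosen by examining the R2 statistic across candidate values of k.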
There are several differences between peers developed with classical cluster analysis and nearest-neighbor analysis. In classical cluster analysis, membership of a facility in peer groups is mutually exclusive, the size of the peer groups is determined by the results of the clustering, and a facility may be on the “edge” of the sole group that it is in with respect to a certain measure or measures. For example, a facility may have the largest number of beds of any facility in its peer grouping by a substantial margin. Our modification of cluster analysis addressed these concerns. With the nearest-neighbor method, the number of facilities designated as peers to the reference facility was selected depending on the needs of the analysis or comparison. Facilities could and did appear in more than 1 peer grouping, and each facility was by design the hub of its own peer grouping. Finally, the peer relationship was not commutative. If facility B was in facility A’s peer group, facility A might or might not have been in facility B’s peer group.
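The hub-and-peers construction described above can be sketched in a few lines. The 4-facility distance matrix below is a hypothetical toy example chosen to exhibit the non-commutative property:

```python
import numpy as np

def nearest_neighbor_peers(dist, n_peers):
    """Each facility becomes the hub of its own peer group: the n_peers
    facilities closest to it in the multidimensional space."""
    return {i: [j for j in np.argsort(dist[i]) if j != i][:n_peers]
            for i in range(dist.shape[0])}

# Toy symmetric distance matrix for 4 facilities
d = np.array([[0.0, 0.3, 0.9, 1.2],
              [0.3, 0.0, 0.1, 0.2],
              [0.9, 0.1, 0.0, 0.5],
              [1.2, 0.2, 0.5, 0.0]])
groups = nearest_neighbor_peers(d, 2)
# Facility 1 is in facility 0's peer group, but facility 0 is not in
# facility 1's group: the peer relationship is not commutative.
```

Because every facility is a reference, the number of peer groups equals the number of facilities, and a facility can appear in many groups.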
Identifying Measures to Use for Peer Groupings
A number of steps, outlined below, were taken to determine which variables to use in developing peer groups for this research. At each step in the process, we took the latest results to a panel of VA nursing home experts who provided feedback on whether our findings and conclusions mapped to their experience and observations.
Step 1. We examined literature on risk adjustment, casemix reimbursement, determinants of quality of care, and evaluations of quality improvement initiatives.4-15 We also queried our expert panel on variables that they viewed as important in categorizing nursing homes into peer groups. We included variables that were related to how many resources would be needed to run the facility, maintain a certain level of quality, or treat patients; variables that characterized the facility and gave information about how costly it would be to treat patients there or about the facility’s environment; and, importantly, variables related to facility characteristics that could not easily be changed by administrators and that affected the cost of patient care. For example, a care center in an urban area with an academic affiliation would be expected to have higher costs than a small rural facility.
We identified an extensive list of potential measures that might be important in developing peer groupings. These measures fell into the following domains, which we determined were critical for assessing similarities and differences among facilities: size, academic mission, workload/case mix, patient population, and community environment. The expert panel approved the complete list.
Step 2. We narrowed the variables to a working list to use in the peer group algorithm. For this task, we used a combination of technical and practical considerations. We gave precedence to variables that were continuous rather than binary or categorical, as continuous variables provide more refined information to use in the algorithm (technical consideration).
Step 3. We considered the availability of each variable, giving preference to more easily obtainable and more frequently updated data (practical consideration).
Step 4. For parsimony in our peer grouping model, we wanted to eliminate variables that did not add new or unique information on the nursing homes. Therefore, we estimated correlations among all of the candidate variables in each domain. Where correlations within a domain were high, we eliminated 1 of the correlated variables (mean correlation [standard deviation] of variables within same domain was 0.283 [0.218]; range, 0.005-0.910). The expert panel approved the reduced list.
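One simple way to operationalize this pruning step is a greedy pass over the candidate variables, dropping any variable that is highly correlated with one already kept. The code below is an illustrative sketch with invented variable names and an arbitrary 0.8 cutoff; in the study, the decision of which correlated variable to drop was informed by the expert panel rather than automated:

```python
import numpy as np

def prune_correlated(X, names, threshold=0.8):
    """Greedily keep variables in order, dropping any whose absolute
    correlation with an already-kept variable meets the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

# Hypothetical data: "beds_rescaled" is nearly a linear copy of "beds",
# so only one of the pair survives.
rng = np.random.default_rng(1)
beds = rng.normal(size=50)
adl = rng.normal(size=50)
X = np.column_stack([beds, adl, beds * 2 + 0.01 * rng.normal(size=50)])
kept = prune_correlated(X, ["beds", "mean_adl", "beds_rescaled"])
```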
Development of Measure Weights for Use in Peer Groupings
To calibrate the relative influence of the various measures, we weighted some of the model variables higher than others before inclusion in the algorithms, with a weight of 1 as the basis point. Selection of the variables to weight higher than 1 was done according to a number of considerations: salience in peer group development or nursing home literature, and information on the importance of the variable provided by our expert panel. The weights greater than 1 that were chosen are shown in parentheses. These weights ranged from 2, for total operating beds, to 6 for mean activities of daily living (ADLs). The expert panel concurred with the choice of weights.
Exploration of Peer Group Characteristics and Comparison With Traditional Cluster Analysis
We explored the attributes of the peer groups created with the nearest-neighbor methodology and compared them with peer groups formed using traditional cluster analysis. These analyses mirror closely those presented for hospitals as a whole in previous research.1
Nondiscrete Nature of Nearest-Neighbor Peer Groups. We present nearest-neighbor peer groups for 2 reference nursing facilities (here using 20 VANHs per group) to demonstrate the non-mutually exclusive nature of groups developed with this methodology.
Effect of Number of VANHs in a Peer Group on Distance to Farthest Peer. To explore how the number of VANHs in a peer group affected the Euclidean distance from the reference VANH to the farthest peer, we developed 8 sets of peer groups using increments of 5 (ie, 5, 10, 15, … 40) as the number of VANHs per peer group. We then calculated, for each peer group in each of the sets, the Euclidean distance from the reference VANH to the farthest peer. We present graphically the distribution of these Euclidean distances for each of these sets of peer groups.
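The distance-to-farthest-peer calculation is straightforward given a full pairwise distance matrix; a sketch on a hypothetical 4-facility matrix (not the study data) follows:

```python
import numpy as np

def farthest_peer_distance(dist, n_peers):
    """Euclidean distance from each reference facility to the farthest
    member of its n_peers-nearest-neighbor peer group."""
    out = []
    for i in range(dist.shape[0]):
        others = np.sort(np.delete(dist[i], i))  # distances to all other facilities
        out.append(others[n_peers - 1])          # farthest of the n_peers closest
    return np.array(out)

# Toy symmetric distance matrix for 4 facilities
d = np.array([[0.0, 0.3, 0.9, 1.2],
              [0.3, 0.0, 0.1, 0.2],
              [0.9, 0.1, 0.0, 0.5],
              [1.2, 0.2, 0.5, 0.0]])
radii = farthest_peer_distance(d, 2)  # one value per reference facility
```

Repeating this for each peer group size (5, 10, …, 40) yields one distribution of distances per size, which is what the Figure summarizes.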
Comparison With Traditional Cluster Analysis. We compared the 2 peer grouping methodologies on how diffuse the resulting peer groups were by computing the square root of the sum of squares (RSS) of the Euclidean distances between all pairs of members of the same group, for all peer groups. The RSS of distances is a metric that represents the diffusion of the peer group, with higher values indicating a more diffuse group. We present minimum and maximum RSS of distances across peer groups for the nearest-neighbor groups, as well as the RSS of distance measures for each of the 8 peer groups created using traditional cluster analysis.
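The RSS diffusion metric reduces to summing squared pairwise distances within a group and taking the square root; the matrix below is a hypothetical toy example:

```python
import numpy as np
from itertools import combinations

def group_rss(dist, members):
    """Root sum of squares of the pairwise Euclidean distances within one
    peer group; larger values indicate a more diffuse group."""
    return float(np.sqrt(sum(dist[i, j] ** 2
                             for i, j in combinations(members, 2))))

# Toy distance matrix: pairwise distances 3, 4, and 0 among 3 facilities
d = np.array([[0.0, 3.0, 4.0],
              [3.0, 0.0, 0.0],
              [4.0, 0.0, 0.0]])
rss = group_rss(d, [0, 1, 2])  # sqrt(3^2 + 4^2 + 0^2) = 5.0
```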
RESULTS
Measures Used for Peer Groupings
The final measures that were included in our model are shown in Table 1. Definitions of each of these variables, and the data sources used to construct them, are shown in the online appendix (available at www.ajmc.com).
Characteristics of Nearest-Neighbor Peer Groups
Nondiscrete Nature of Nearest-Neighbor Peer Groups. An extreme example illustrates the non-mutually exclusive nature of the nearest-neighbor peer groups: Minneapolis was the closest neighbor peer in the grouping where Chicago was the reference VANH, but Chicago was not in Minneapolis’s peer group at all. That was because the peer group that had Minneapolis as a reference was much less dispersed than the group for Chicago.
Effect of Number of VANHs in a Peer Group on Distance to Farthest Peer. The Figure shows the distribution of Euclidean distances from the reference VANH to the farthest VANH for peer groupings of different sizes (n = 5-40) for the 130 VANH peer groupings. Using a peer group size of 5, for example, the Figure shows a large concentration of Euclidean distances from reference VANH to farthest peer at around 0.50, with a long right-skewed tail. The concentration of Euclidean distances from reference to farthest peer moves to the right as the number of peers increases; however, it does not move to the right very quickly. The distribution of Euclidean distances for peer groupings with 35 members, for example, peaks at a distance only about twice that of the groups with 5 members.
Comparison With Traditional Cluster Analysis. Table 3 shows a comparison of the nearest-neighbor peer groupings with peer groupings created by traditional cluster analysis. Because these methodologies are fundamentally different, some of the comparisons were constrained by the methodology. The nearest-neighbor methodology produced 1 peer group for each unique VANH, whereas in traditional cluster analysis the number of groups is determined by the analysis parameters. Also, in nearest-neighbor groupings, the investigator chooses the number of facilities in a group, whereas the cluster analysis itself determines the number of facilities in each grouping, although this can be manipulated by changing analysis parameters.
Table 3 also shows how the RSS of distances between members compared between traditional cluster and nearest-neighbor analysis. The nearest-neighbor peer groups showed substantial variation in diffusion at each of the different peer group sizes. For example, when we included 10 facilities in each peer grouping, the RSS ranged from 1.19 to 290.24. At each comparable peer group size, the least diffuse nearest-neighbor group was substantially less diffuse than the traditional group of that size, whereas the most diffuse nearest-neighbor group was dramatically more diffuse. For example, when comparing the 20-member groupings, the cluster analysis RSS was 8.52, and the nearest-neighbor groupings ranged from 2.28 to 449.75. When comparing the median RSS between the 2 methodologies, the 3 nearest-neighbor groupings with the most members had greater diffusion than the cluster analysis groupings, whereas the opposite was true for the 5 nearest-neighbor groups with fewer members.
DISCUSSION
In the study by Byrne and colleagues,1 we described the application of a 2-step process to identify peer groups for medical facilities and applied it to VA medical centers. In the present study, we show how the same technique can be applied to nursing homes. We believe that our method has strong face validity and is practical, because it is grounded in the clinical insights of healthcare providers, uses readily available databases, and applies methodologically sound clustering techniques.

We believe that the 2-step methodology described here is generalizable to a wide range of clinical and healthcare settings. No aspect of this process is unique to the VA or to nursing homes. However, it is important that the analyst applying this methodology take the time to determine which attributes of the healthcare setting are most important for grouping the facilities. In addition, the healthcare system that wants to implement this methodology must have the data needed to adequately describe the facilities of interest. When these 2 steps are successfully taken, this technique (1) can be applied to any large collection of medical facilities (eg, outpatient clinics) in which there is sufficient diversity that they cannot accurately be compared as a homogeneous group and (2) can be used, for example, to compare the facilities on the basis of efficiency, cost, or other outcome measures. It can also be used to look for problem areas in quality of care (ie, to determine whether patients are at greater risk in facilities with certain characteristics).
In this study, we also explored the characteristics of the nearest-neighbor groups that we developed and compared them with traditional cluster analysis. We found that the nearest-neighbor peer groups were less diffuse than those developed with traditional cluster analysis. In addition, the Euclidean radius distances of the peer groups showed a right-skewed distribution for a range of peer group sizes, and the high point of the distribution did not move substantially to the right as the number of facilities in a peer group increased. If tight-knit clusters are a desirable trait for peer groups, then researchers and policy makers should consider a nearest-neighbor approach when developing peer facility groupings.
Increasingly, healthcare providers are being compared based on quality, efficiency, and finances. These decisions are being made across a wide range of healthcare service providers. For example, the Hospital Compare website provides information that allows consumers to directly compare performance measures across multiple hospitals.16 For such comparisons to be fair and to secure buy-in by facility leaders and staff, providers must be compared with their peers, and careful consideration of who are truly “peers” is essential.

Author Affiliations: From Department of Epidemiology and Public Health (MMB), University of Miami, Miami, FL; Department of Health and Mental Hygiene (CD), State of Maryland, Baltimore, MD; Health Policy and Quality Program (KP, BR, LAP), Houston VA Health Services Research and Development Center of Excellence, Michael E. DeBakey VA Medical Center and Section for Health Services Research, Baylor College of Medicine, Houston, TX.
Funding Source: This research was conducted with support from a VA contract (project XVA 33-102) at the request of Veterans Integrated Service Networks 1, 12, and 23. This work was supported in part by the Houston VA HSR&D Center of Excellence (HFP90-020). Dr Byrne was a National Cancer Institute career development awardee (K07 CA101812) at the time this research was conducted. The views expressed are solely those of the authors and do not necessarily represent those of the VA.
Author Disclosures: The authors (MMB, CD, KP, BR, LAP) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (MMB, CD, LAP); acquisition of data (CD, BR, LAP); analysis and interpretation of data (MMB, CD, KP, BR, LAP); drafting of the manuscript (MMB, BR, LAP); critical revision of the manuscript for important intellectual content (MMB, KP, BR, LAP); statistical analysis (KP, BR); obtaining funding (LAP); and supervision (MMB).
Address correspondence to: Margaret M. Byrne, PhD, Department of Epidemiology and Public Health, University of Miami, 1120 NW 14th St, Miami, FL 33136. E-mail: email@example.com.

REFERENCES
1. Byrne M, Daw CN, Nelson HA, Urech TH, Pietz K, Petersen LA. Method to develop health care peer groups for quality and financial comparisons across hospitals. Health Serv Res. 2009;44(2, pt 1):577-592.
2. Everitt BS, Landau S, Leese M, Stahl D. Cluster Analysis. 5th ed. Hoboken, NJ: John Wiley & Sons, Ltd; 2011.
3. Punj G, Stewart DW. Cluster analysis in marketing research: review and suggestions for application. J Market Res. 1983;20(2):134-148.
4. Grabowski DC, Angelelli JJ, Mor V. Medicaid payment and risk-adjusted nursing home quality measures. Health Aff (Millwood). 2004;23(5): 243-252.
5. Rosen A, Wu J, Chang BH, et al. Risk adjustment for measuring health outcomes: an application in VA long term care. Am J Med Qual. 2001;16(4):118-127.
6. Zinn J, Spector W, Hsieh L, Mukamel DB. Do trends in the reporting of quality measures on the nursing home compare web site differ by nursing home characteristics? Gerontologist. 2005;45(6):720-730.
7. Rask K, Parmelee PA, Taylor JA, et al. Implementation and evaluation of a nursing home fall management program. J Am Geriatr Soc. 2007; 55(3):342-349.
8. Hallenbeck J, Hickey E, Czarnowski E, Lehner L, Periyakoil VS. Quality of care in a Veterans Affairs’ nursing home-based hospice unit. J Palliative Med. 2007;10(1):127-135.
9. Bjorkgren MA, Fries BE. Applying RUG-III for reimbursement of nursing facility care. Int J Healthcare Technol Manage. 2006;7(1/2):82-99.
10. Baier RR, Gifford DR, Patry G, et al. Ameliorating pain in nursing homes: a collaborative quality-improvement project. J Am Geriatr Soc. 2004;52(12):1988-1995.
11. Parmelee PA. Quality improvement in nursing homes: the elephants in the room. J Am Geriatr Soc. 2005;52(12):2138-2140.
12. McCarthy JF, Blow FC, Kales HC. Disruptive behaviors in Veterans Affairs nursing home residents: how different are residents with serious mental illness? J Am Geriatr Soc. 2004; 52(12):2031-2038.
13. Schonfeld L, King-Kallimanis B, Brown LM, et al. Wanderers with cognitive impairment in Department of Veterans Affairs nursing home care units. J Am Geriatr Soc. 2007;55(5):692-699.
14. Grabowski DC, Stewart KA, Broderick SM, Coots LA. Predictors of nursing home hospitalization: a review of the literature. Med Care Res Rev. 2008;65(1):3-39.
15. Mukamel DB, Spector WD. Nursing home costs and risk-adjusted outcome measures of quality. Med Care. 2000;38(1):78-89.
16. Centers for Medicare & Medicaid Services. Hospital compare. http://www.medicare.gov/hospitalcompare/. Database last updated April 18, 2013. Accessed December 11, 2012.