Objectives: Studies have identified potential unintended effects of not adjusting clinical performance measures in value-based purchasing programs for socioeconomic status (SES) factors. We examine the impact of SES and disability adjustments on Medicare Advantage (MA) plans’ and prescription drug plans’ (PDPs’) contract star ratings. These analyses informed the development of the Categorical Adjustment Index (CAI), which CMS implemented with the 2017 star ratings.
Study Design: Retrospective analyses of MA and PDP performance using 2012 Medicare beneficiary-level characteristics and performance data from the Star Rating Program.
Methods: We modeled within-contract associations of beneficiary SES (Medicaid and Medicare dual eligibility [DE] or receipt of a low-income subsidy [LIS]) and disability with performance on 16 clinical measures. We estimated variability in contract-level DE/LIS and disability disparities using mixed-effects regression models. We simulated the impact of applying the CAI to adjust star ratings for DE/LIS and disability to construct the 2017 star ratings.
Results: DE/LIS was negatively associated with performance for 12 of 16 measures and positively associated for 2 of 16 measures. Disability was negatively associated with performance for 11 of 15 measures and positively associated for 3 of 15 measures. Adjusting star ratings using the CAI resulted in half-star rating increases for 8.5% of MA and 33.3% of PDP contracts with 50% or more DE/LIS beneficiaries.
Conclusions: Increases in star ratings following adjustment of clinical performance for SES and disability using the CAI focused on contracts with higher percentages of DE/LIS beneficiaries. Adjustment for enrollee characteristics may improve the accuracy of quality measurement and remove incentives for providers to avoid caring for more challenging patient populations.
Am J Manag Care. 2018;24(9):e285-e291
Takeaway Points
CMS implemented the Categorical Adjustment Index (CAI) as part of the Medicare Advantage and Part D Star Rating Program in 2017. These analyses informed its development.
Policy makers use quality measurement, public accountability, and financial incentives to induce health plans and providers to improve performance. Frequently, the process-of-care and intermediate outcome measures on which plans and providers are evaluated are not adjusted for differences in patient socioeconomic status (SES) across plans and providers. Pay-for-performance programs whose quality measurement does not account for differences in patient populations across providers risk reducing funding to providers that treat medically complex, disabled, and socioeconomically disadvantaged patients, potentially reinforcing or exacerbating existing SES disparities, as more resources may be required to achieve high quality for such patients.1-3 Moreover, providers caring for disadvantaged patients tend to have fewer resources available to invest in quality improvement because their reimbursement rates are lower than those of providers with predominantly commercially insured patients.1,4-7
Low-SES individuals receive recommended care less often and experience worse health outcomes than those with higher SES,2,8,9 possibly because they have greater health burdens and barriers to care, including limited transportation and lower health literacy,10-13 which may discourage providers from treating disadvantaged patients. A recent examination of associations between social risk factors (dual eligibility [DE] for Medicare and Medicaid, black race, Hispanic ethnicity, disability, and rural residence) and performance on quality measures included in 9 Medicare value-based purchasing programs found that beneficiaries with social risk factors received recommended care less often.14 Providers disproportionately serving high-risk beneficiaries performed worse on average, even after controlling for beneficiary differences.
Policy makers have identified closing these quality gaps as a key policy priority.15 One approach to accomplish this is to adjust quality measures for socioeconomic factors. The National Quality Forum (NQF)16 and HHS14 have called on sponsors of value-based measurement and payment programs to determine whether quality measures should be adjusted for differences in providers’ patient mix. The National Academies of Sciences, Engineering, and Medicine developed criteria for determining which social risk factors to address.8,9
Although adjustment of process-of-care and intermediate outcome measures for socioeconomic factors is rare, the use of SES-adjusted patient experience measures in the Medicare Advantage (MA), prescription drug plan (PDP), and Hospital Consumer Assessment of Healthcare Providers and Systems surveys is an example of nationwide implementation of such adjustment.17 Clinical measures in the Medicare Star Ratings Program are not currently adjusted for SES.
Annually, Medicare computes star ratings based on MA and PDP contract performance on clinical, patient experience, customer service, and complaint measures. The star ratings are reported on Medicare Plan Finder,18 determine MA quality-based bonus payments, and affect MA rebates and enrollment (see eAppendix [available at ajmc.com] for star ratings description). To address concerns that the star ratings disadvantage contracts serving low-SES and disabled beneficiaries, CMS implemented a Categorical Adjustment Index (CAI)19 beginning with the 2017 star ratings as an interim policy until measure developers evaluate which clinical measures should be adjusted. The CAI approximates the effect of case mix on star ratings, adjusting the underlying clinical measures for SES characteristics available in CMS administrative data (DE/low-income subsidy [LIS]) and disabled status.
We present analyses that informed the development of the CAI. Our analyses addressed 3 questions: (1) Do within-contract SES and disability performance disparities exist for clinical measures used in the Medicare Star Ratings Program?, (2) How consistent are within-contract disparities across contracts?, and (3) How does adjusting for SES differences affect the overall star rating of MA and PDP contracts, particularly for contracts serving a large portion of beneficiaries who are dually eligible for Medicare and Medicaid, receive a Part D LIS, or are disabled?
STUDY DATA AND METHODS
We used patient-level data from the 2014 star ratings (measurement year 2012) to assess the relationship of contract performance with SES (ie, DE/LIS) and disability and to develop the CAI. The 2014 star ratings used 48 Part D (prescription drug) and Part C (health plan) measures to rate MA prescription drug contracts, 36 Part C measures to rate MA-only contracts, and 15 Part D measures to rate PDPs. Analyses included all MA and PDP contracts eligible for star ratings. We excluded Puerto Rico contracts from analyses due to program differences.
Performance measures. We examined the effect of SES and disability adjustment for 16 (13 Part C and 3 Part D) clinical measures (Table 1; see eAppendix for description). We excluded measures that were already adjusted for SES (n = 10), were being retired or revised (n = 6), were used only for Special Needs Plans (SNPs; n = 3), addressed plan-level customer service (n = 12), or were under direct provider or plan control (n = 1; high-risk medication). Measures were coded to indicate whether the beneficiary received the recommended care or achieved the measured outcome (0 = no; 1 = yes).
Low SES. Beneficiaries were classified as low SES if they were partially or fully dually eligible for Medicare and Medicaid as of December 2012 or if they applied and were approved for an LIS.
Disability. Beneficiaries were classified as disabled based on their original reason for Medicare eligibility.
NQF recommends considering adjustment for within-provider disparities (the extent to which low-SES patients receive lower-quality care than high-SES patients within the same provider) while preserving between-provider differences in performance (the extent to which all patients of a given provider receive lower-quality care than patients of other providers). Consistent with this recommendation, we assessed average within-contract DE/LIS disparities for each of the 16 measures by fitting logistic regressions predicting performance from the DE/LIS indicator, with fixed effects for MA and PDP contracts to control for between-contract performance differences (see eAppendix for additional detail). A sensitivity analysis examined whether the DE/LIS and disability effects changed after adjustment for Census-based SES characteristics (block group-level education and income/poverty; see eAppendix for additional detail). We performed similar analyses for the disability indicator.
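The within-contract comparison described above can be illustrated with a simple stand-in. The study's actual model was logistic regression with contract fixed effects; the Mantel-Haenszel estimator below pools contract-specific 2x2 tables to the same basic end (a within-contract odds ratio that ignores between-contract level differences). The data are invented for illustration.

```python
# Hypothetical sketch: a pooled within-contract (stratified) odds ratio.
# Each stratum is one contract's 2x2 table:
# (a, b, c, d) = (DE/LIS pass, DE/LIS fail, non-DE/LIS pass, non-DE/LIS fail)
def mantel_haenszel_or(strata):
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n  # DE/LIS pass x non-DE/LIS fail, weighted by stratum size
        den += b * c / n  # DE/LIS fail x non-DE/LIS pass
    return num / den

# Toy data for three contracts: DE/LIS beneficiaries pass the measure less
# often within every contract, even though overall pass rates differ.
contracts = [
    (40, 60, 70, 30),  # contract A
    (55, 45, 80, 20),  # contract B
    (20, 80, 45, 55),  # contract C
]
or_mh = mantel_haenszel_or(contracts)
print(round(or_mh, 2))  # pooled within-contract OR; values below 1 indicate a disparity
```

An OR below 1 here plays the same role as the fixed-effects ORs reported in the Results: DE/LIS beneficiaries have lower odds of receiving recommended care than non-DE/LIS beneficiaries in the same contract.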
Contract-level variation in the disparity in performance between DE/LIS and non-DE/LIS beneficiaries was estimated in percentage points using a linear mixed-effects model that included DE/LIS as a predictor, mean-centered at the contract level, and random effects for contract and the contract-by-DE/LIS interaction, using empirical best linear unbiased predictions (BLUPs) to account for sampling error in contract-level disparity estimates. As expected given the sample sizes,20 results were insensitive to normality assumptions (not shown). We performed similar analyses for the disability indicator.
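The role of the BLUPs is shrinkage: a contract's raw disparity estimate is pulled toward the overall mean disparity, more strongly when its sampling variance is large relative to the true between-contract variance. The sketch below is a hedged illustration of that mechanism only; the variance components are assumed known here, whereas the study estimated them from the mixed model, and all numbers are invented.

```python
# Hypothetical sketch of empirical-BLUP-style shrinkage of a contract-level
# disparity estimate (in percentage points).
def blup(raw_disparity, sampling_var, overall_mean, between_var):
    # Reliability weight: share of observed variation that is "true" signal.
    reliability = between_var / (between_var + sampling_var)
    return overall_mean + reliability * (raw_disparity - overall_mean)

# Toy example: overall mean disparity of -5 points, between-contract
# variance of 4. A small contract (noisy estimate) shrinks heavily toward
# the mean; a large contract (precise estimate) barely moves.
print(blup(-15.0, sampling_var=16.0, overall_mean=-5.0, between_var=4.0))  # -7.0
print(blup(-15.0, sampling_var=1.0, overall_mean=-5.0, between_var=4.0))   # -13.0
```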
Categorical Adjustment Index
The CAI adjustment factor is applied to groups of contracts. Each contract is assigned to an adjustment group based on the percentage of its beneficiaries who are DE or LIS or disabled. The measure subset selected by CMS for the 2017 star ratings CAI is limited to measures with large or consistent within-contract disparities: those for which the BLUP-based within-contract DE/LIS disparities in the 2012 measurement year data were large (median absolute difference in performance of 5 or more percentage points between DE/LIS and non-DE/LIS enrollees) or consistent (DE/LIS enrollees performed worse or better than non-DE/LIS enrollees in all contracts). Adjusted scores for the CAI measure subset were derived from logistic regression with contract fixed effects, DE/LIS, and disabled status as right-hand-side variables. An adjusted overall star rating for each contract was simulated based on these adjusted measure scores plus all other star rating measure scores. The value of the CAI adjustment factor was computed as the average difference between contract-level adjusted and unadjusted overall star ratings within each CAI adjustment group. To simulate the effect of DE/LIS and disability adjustment on star ratings, we applied the CAI to the star ratings separately for MA and PDP contracts using the 2015 measurement year data (2017 star ratings). The results are summarized for contracts overall and stratified by contract percentage of DE/LIS beneficiaries (<50% DE/LIS and ≥50% DE/LIS).
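The group-level averaging step described above can be sketched in a few lines. This is a hypothetical simplification: the cutpoints, grouping scheme, and star values below are invented (the actual CAI groups contracts on both DE/LIS and disability percentages), and the adjusted ratings would come from the fixed-effects regression simulation, not from made-up numbers.

```python
# Hedged sketch of the CAI mechanics: group contracts by DE/LIS share,
# then set each group's adjustment factor to the mean difference between
# adjusted and unadjusted overall star ratings in that group.
from statistics import mean

def cai_factors(contracts, cutpoints=(0.2, 0.5)):
    """contracts: list of (delis_share, unadjusted_stars, adjusted_stars)."""
    groups = {}
    for share, unadj, adj in contracts:
        g = sum(share >= c for c in cutpoints)  # group index 0, 1, or 2
        groups.setdefault(g, []).append(adj - unadj)
    return {g: mean(diffs) for g, diffs in groups.items()}

# Invented contracts: adjustment barely changes low-DE/LIS contracts but
# raises the simulated ratings of high-DE/LIS contracts.
contracts = [
    (0.10, 4.0, 4.0),
    (0.15, 3.5, 3.5),
    (0.60, 3.0, 3.4),
    (0.80, 3.5, 3.9),
]
print(cai_factors(contracts))  # adjustment factor by group
```

Every contract in a group then receives that group's factor, which approximates the case-mix effect without re-estimating each contract's measures individually.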
The study was approved by RAND Corporation’s Human Subjects Protection Committee.
RESULTS
The analyses using the 2012 data included 620 MA and 76 PDP contracts. The number of MA contracts that met the denominator criteria for individual measures varied from 341 to 613 (Table 1). All PDP contracts met the denominator criteria for each of the included prescription drug event (PDE) measures. The average contract-level percentage of DE/LIS beneficiaries was 40.5% (SD = 38.7%), ranging from 0.4% to 100%, for MA contracts. PDP contracts averaged 22.1% DE/LIS (SD = 27.7%), ranging from 0.0% to 86.2%. Contracts with at least 1 SNP, which focus on specific subpopulations of Medicare beneficiaries, including those who are dual-eligible, have chronic conditions, or reside in institutions, had more DE/LIS beneficiaries than contracts without an SNP (59.0% vs 27.8%; P <.0001). Roughly one-third (34.4%) of MA contracts and one-fifth (21.1%) of PDP contracts had 50% or more DE/LIS beneficiaries. MA contracts averaged 19.8% (SD = 16.2%) disabled beneficiaries (ranging from 0.0% to 97.1%), whereas PDPs averaged 17.9% (SD = 15.2%) disabled beneficiaries (ranging from 0.0% to 54.7%). Contracts with SNPs enrolled more disabled beneficiaries than contracts without an SNP (29.4% vs 14.6%; P <.0001). Approximately one-fourth (23.2%) of MA contracts and one-third (31.6%) of PDP contracts had at least 25% disabled beneficiaries.
Within-Contract SES and Disability Performance Disparities
Controlling for between-contract differences, DE/LIS beneficiaries received significantly worse care for 12 of 16 MA measures (Figure 1), with odds ratios (ORs) ranging from 0.68 (95% CI, 0.66-0.70) to 0.94 (95% CI, 0.93-0.95). DE/LIS beneficiaries were more likely to have an adult body mass index assessment (OR, 1.10; 95% CI, 1.06-1.14) and have better performance on the measure reducing risk of falling (OR, 1.67; 95% CI, 1.60-1.74) than non-DE/LIS beneficiaries; there were not significant overall differences between DE/LIS and other beneficiaries for 2 measures (controlling high blood pressure and monitoring physical activity). Within PDPs, DE/LIS beneficiaries received significantly lower-quality care than other beneficiaries on all 3 PDE measures, with ORs ranging from 0.67 (95% CI, 0.66-0.67) to 0.81 (95% CI, 0.81-0.81). These results were not sensitive to further adjustment for Census-based SES characteristics (block group-level education and income/poverty) (eAppendix).
Controlling for contract, disabled beneficiaries received significantly worse care for 11 of 15 MA measures (Figure 1), with ORs ranging from 0.56 (95% CI, 0.51-0.62) to 0.93 (95% CI, 0.91-0.96). Disabled beneficiaries were more likely to receive rheumatoid arthritis management (OR, 1.13; 95% CI, 1.10-1.17) and have better performance on the measures reducing risk of falling (OR, 1.32; 95% CI, 1.22-1.42) and monitoring physical activity (OR, 1.33; 95% CI, 1.26-1.40) than other beneficiaries; there was not a significant overall difference between disabled and other beneficiaries for 1 measure (controlling high blood pressure). Within PDPs, disabled beneficiaries received significantly lower-quality care than other beneficiaries on all 3 PDE measures, with ORs ranging from 0.61 (95% CI, 0.61-0.61) to 0.74 (95% CI, 0.74-0.75).
Consistency of Within-Contract Disparities Across Contracts
Figure 2 illustrates the heterogeneity of the within-contract difference in care received by DE/LIS beneficiaries relative to non-DE/LIS beneficiaries for each measure; Figure 3 provides analogous information for disability. On average, DE/LIS beneficiaries received lower-quality care than non-DE/LIS beneficiaries within the same contract.
For 3 measures, DE/LIS beneficiaries received lower-quality care than non-DE/LIS beneficiaries in all MA contracts. DE/LIS beneficiaries received lower-quality care in at least 90% of contracts for an additional 3 measures, but higher-quality care in all contracts for 1 measure. For PDPs, DE/LIS beneficiaries received lower-quality care in all contracts for 1 PDE measure and lower-quality care in 90% or more of PDPs for the remaining 2 Part D measures.
Disabled beneficiaries received lower-quality care than nondisabled beneficiaries in all MA contracts for 6 measures and received lower-quality care in at least 90% of contracts for 3 additional measures. They received higher-quality care in at least 90% of MA contracts for 2 measures. For PDPs, disabled beneficiaries received lower-quality care in all contracts for the 3 PDE measures.
Contract Star Ratings Following Adjustment for SES Differences Using CAI
There were 7 measures (6 MA and 1 PDP) for which the contract-level median absolute DE/LIS disparity was at least 5 percentage points or for which no contract had DE/LIS scores equal to or higher than its non-DE/LIS scores. Table 2 shows the simulated overall star ratings when the CAI was applied based on these 7 measures with large or consistent DE/LIS disparities across contracts. Adjustment with the CAI changed the overall star ratings for 8.5% of contracts with 50% or more DE/LIS beneficiaries (Table 2). Gains in overall star ratings were concentrated in the high-DE/LIS group; 9 of 10 contracts that had higher overall star ratings following CAI adjustment had 50% or more DE/LIS beneficiaries. One contract with less than 50% DE/LIS lost one-half star, while no contract with 50% or more DE/LIS lost stars. No contract gained or lost more than one-half star.
Adjustment with the CAI changed the Part D ratings for 20.3% of PDPs (16.3% of contracts with <50% DE/LIS and 33.3% with ≥50% DE/LIS; Table 2). No contract gained or lost more than one-half star. Gains only occurred among contracts with 50% or more DE/LIS beneficiaries (n = 5; 33.3%), while losses only occurred among contracts with less than 50% DE/LIS (n = 8; 16.3%).
DISCUSSION
To address gaps in care, public and private payers have undertaken a variety of actions, including performance measurement, public reporting, and performance-based payments. Concerns have been raised that some program designs may create incentives for providers and plans to avoid more challenging patient populations.1 Adjusting performance for differences in the patient populations that plans and providers serve, to level the playing field, is one approach that has been suggested to address potential mismeasurement problems.6,16 Providers caring for low-SES patients may face communication challenges associated with lower education, English proficiency, and health literacy, as well as reduced patient access to care and adherence to medical regimens associated with limited transportation, residential instability, and other barriers.16 Accounting for these differences may reduce the likelihood that providers will avoid lower-SES patients in response to pay-for-performance programs.
We found within-contract disparities in performance on the clinical measures used to assess MA contract and PDP performance, predominantly reflecting lower odds of receiving recommended care for low-SES patients; the magnitude of within-contract disparities varied across measures and contracts. These findings are consistent with those of prior studies demonstrating associations between patient sociodemographic characteristics, including SES, and performance on selected Healthcare Effectiveness Data and Information Set measures in commercially insured populations,21,22 on outcome measures among the general population with cardiovascular disease or diabetes,23 and on medication adherence measures in the MA population.24
Based on these analyses, CMS implemented the CAI with the 2017 star ratings. Adjustment of star ratings through CAI resulted in increased star ratings for some contracts with higher percentages of DE/LIS beneficiaries; 8.5% of MA contracts with 50% or more DE/LIS received half-star increases and none decreased, and 33.3% of PDPs with 50% or more DE/LIS received half-star increases and none decreased. Of contracts with less than 50% DE/LIS, less than 0.1% of MA contracts and 0% of PDPs had higher star ratings and less than 0.1% of MA and 16.3% of PDP contracts had lower star ratings.
Our study is the first to estimate the effects of adjusting the full set of clinical measures used in the Medicare Star Rating Program for SES factors and to simulate the effect of CAI adjustments for DE/LIS and disability on the star ratings used for quality bonus payments and public reporting. These results should inform future decisions about adjustment for SES.
Strengths and Limitations
Our study has several strengths. First, our analyses used patient-level measures of SES, in contrast to other studies that have used area-level estimates of SES from Census data as proxies for patient-level measures; such estimates conflate the effects of the neighborhood in which a person resides with person-level SES and are therefore a less accurate measure of person-level SES. Second, we measured the effect of SES adjustment for the universe of MA and PDP contracts. Third, we measured the effect of adjustment on all clinical measures contained in the Star Rating Program, rather than only a small subset of measures as has previously been reported.21,22,24 Fourth, inclusion of the contract fixed effects in our models allowed for adjustment for within-contract differences in quality for DE/LIS and disability, preserving quality differences between contracts and their affiliated providers that should be the target of improvement efforts.16 Fifth, we translated into policy-relevant terms the effect of risk adjustment at the measure level by examining its effect on the overall star rating used for quality bonus payment determination in MA contracts.
Our study also has several limitations. This study examined the effects of DE and/or receipt of LIS; although Medicaid eligibility varies by state, DE/LIS is an important and widely available measure of low income and assets and has been recognized as the best proxy for income linkable to the Medicare beneficiary level.9 Furthermore, other measures of SES, such as housing stability, may be important markers of disparity; however, CMS and other payers would face challenges in collecting this measure of disadvantage. We believe that DE/LIS is a partial proxy for housing instability, as it measures the resources available to a beneficiary. This study was not designed to determine what factors allow some contracts to have small or zero disparities in care while others have sizeable disparities. Our findings are limited to beneficiaries in MA and PDP contracts, although other studies have found disparities in care in fee-for-service Medicare.25,26
Policy makers, plans, and providers need to understand the effects of case mix on performance scores and to consider whether it is appropriate to adjust for differences. The overall impact of adjustment and the feasibility of adjustment are important considerations.22 In addition, even when risk adjustment does not lead to changes in performance scores for most providers, it provides face validity to the overall measurement effort in signaling to providers that their treatment of more challenging patients will be accounted for in performance assessment. It is important to design performance measures to influence plan and provider behavior in desired ways, and case-mix adjustment could guard against undesired behaviors, improve the accuracy of quality measurement, and increase the incentive for high-performing contracts to enroll low-income and disabled beneficiaries, which, in turn, might help reduce disparities in quality of care. Decisions about whether to adjust and the effects of adjustment will be a function of the existence of within-contract or within-provider disparity, the magnitude of disparity, and the structure of the scoring algorithm used to rate providers.
In addition to adjustment for SES, which primarily addresses issues of quality measurement, policy makers may consider other options to reduce disparities in health and healthcare, including enhancing data collection to better support reporting quality, specifically for patients with social risk factors; developing and including in value-based purchasing programs measures of health equity paired with incentives to improve performance on these measures; changing the payment structure of incentive programs to reward high performance and improvement among beneficiaries with social risk factors; providing support and technical assistance to providers that serve beneficiaries with social risk factors; developing demonstrations that focus on care innovations intended to achieve better outcomes for beneficiaries with social risk factors; and requiring the coordination of benefits between Medicare and Medicaid by contracts that serve dually enrolled beneficiaries.6
Author Affiliations: RAND Corporation, Pittsburgh, PA (MES, AH), and Santa Monica, CA (SMP, CLD, MK, AT, MM, MNE).
Source of Funding: This work was performed under contract for CMS.
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (MES, SMP, MNE); acquisition of data (CLD); analysis and interpretation of data (MES, SMP, CLD, AH, MK, AT, MM, MNE); drafting of the manuscript (MES, SMP, CLD, AH, MK, AT, MM); critical revision of the manuscript for important intellectual content (MNE); statistical analysis (SMP, AH, MK, AT, MM); obtaining funding (SMP); administrative, technical, or logistic support (SMP); and supervision (MES, SMP, CLD).
Address Correspondence to: Melony E. Sorbero, PhD, MS, MPH, RAND Corporation, 4570 Fifth Ave, Ste 600, Pittsburgh, PA 15213. Email: email@example.com.
REFERENCES
1. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343. doi: 10.1001/jama.2012.94856.
2. National Academies of Sciences, Engineering, and Medicine. Systems Practices for the Care of Socially At-Risk Populations. Washington, DC: The National Academies Press; 2016.
3. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008;299(18):2180-2187. doi: 10.1001/jama.299.18.2180.
4. Casalino LP, Elster A, Eisenberg A, Lewis E, Montgomery J, Ramos D. Will pay-for-performance and quality reporting affect health care disparities? [erratum in Health Aff (Millwood). 2007;26(6):1794. doi: 10.1377/hlthaff.26.6.1794]. Health Aff (Millwood). 2007;26(3):w405-w414. doi: 10.1377/hlthaff.26.3.w405.
5. Chien AT, Chin MH, Davis AM, Casalino LP. Pay for performance, public reporting, and racial disparities in health care: how are programs being designed? Med Care Res Rev. 2007;64(suppl 5):283S-304S. doi: 10.1177/1077558707305426.
6. Damberg CL, Elliott MN, Ewing BA. Pay-for-performance schemes that use patient and provider categories would reduce payment disparities. Health Aff (Millwood). 2015;34(1):134-142. doi: 10.1377/hlthaff.2014.0386.
7. Ryan AM. Will value-based purchasing increase disparities in care? N Engl J Med. 2013;369(26):2472-2474. doi: 10.1056/NEJMp1312654.
8. National Academies of Sciences, Engineering, and Medicine. Accounting for Social Risk Factors in Medicare Payment: Criteria, Factors, and Methods. Washington, DC: The National Academies Press; 2016.
9. National Academies of Sciences, Engineering, and Medicine. Accounting for Social Risk Factors in Medicare Payment: Data. Washington, DC: The National Academies Press; 2016.
10. Fung V, Reed M, Price M, et al. Responses to Medicare drug costs among near-poor versus subsidized beneficiaries. Health Serv Res. 2013;48(5):1653-1668. doi: 10.1111/1475-6773.12062.
11. Hsu J, Fung V, Price M, et al. Medicare beneficiaries’ knowledge of Part D prescription drug program benefits and responses to drug costs. JAMA. 2008;299(16):1929-1936. doi: 10.1001/jama.299.16.1929.
12. Ngo-Metzger Q, Sorkin DH, Billimek J, Greenfield S, Kaplan SH. The effects of financial pressures on adherence and glucose control among racial/ethnically diverse patients with diabetes. J Gen Intern Med. 2012;27(4):432-437. doi: 10.1007/s11606-011-1910-7.
13. Phelan JC, Link BG, Tehranifar P. Social conditions as fundamental causes of health inequalities: theory, evidence, and policy implications. J Health Soc Behav. 2010;51(suppl):S28-S40. doi: 10.1177/0022146510383498.
14. HHS. Report to Congress: Social Risk Factors and Performance Under Medicare’s Value-Based Purchasing Programs. Washington, DC: HHS; 2016. aspe.hhs.gov/pdf-report/report-congress-social-risk-factors-and-performance-under-medicares-value-based-purchasing-programs. Accessed July 20, 2017.
15. Epstein AM. Health care in America—still too separate, not yet equal. N Engl J Med. 2004;351(6):603-605. doi: 10.1056/NEJMe048181.
16. National Quality Forum. Risk Adjustment for Socioeconomic Status or Other Sociodemographic Factors. Washington, DC: National Quality Forum; 2014.
17. Elliott MN, Zaslavsky AM, Goldstein E, et al. Effects of survey mode, patient mix, and nonresponse on CAHPS hospital survey scores. Health Serv Res. 2009;44(2 pt 1):501-518. doi: 10.1111/j.1475-6773.2008.00914.x.
18. Medicare Plan Finder. CMS website. medicare.gov/find-a-plan/questions/home.aspx. Accessed August 19, 2016.
19. Announcement of calendar year (CY) 2017 Medicare Advantage capitation rates and Medicare Advantage and Part D payment policies and final call letter. CMS website. cms.gov/Medicare/Health-Plans/MedicareAdvtgSpecRateStats/Downloads/Announcement2017.pdf. Published April 4, 2016. Accessed November 16, 2016.
20. Jiang J. Asymptotic properties of the empirical BLUP and BLUE in mixed linear models. Stat Sin. 1998;8:861-885. www3.stat.sinica.edu.tw/statistica/oldpdf/A8n314.pdf. Accessed July 1, 2016.
21. Zaslavsky AM, Epstein AM. How patients’ sociodemographic characteristics affect comparisons of competing health plans in California on HEDIS quality measures. Int J Qual Health Care. 2005;17(1):67-74. doi: 10.1093/intqhc/mzi005.
22. Zaslavsky AM, Hochheimer JN, Schneider EC, et al. Impact of sociodemographic case mix on the HEDIS measures of health plan quality. Med Care. 2000;38(10):981-992.
23. McWilliams JM, Meara E, Zaslavsky AM, Ayanian JZ. Differences in control of cardiovascular disease and diabetes by race, ethnicity, and education: U.S. trends from 1999 to 2006 and effects of Medicare coverage. Ann Intern Med. 2009;150(8):505-515. doi: 10.7326/0003-4819-150-8-200904210-00005.
24. Young GJ, Rickles NM, Chou CH, Raver E. Socioeconomic characteristics of enrollees appear to influence performance scores for Medicare Part D contractors. Health Aff (Millwood). 2014;33(1):140-146. doi: 10.1377/hlthaff.2013.0261.
25. DeLaet DE, Shea S, Carrasquillo O. Receipt of preventive services among privately insured minorities in managed care versus fee-for-service insurance plans. J Gen Intern Med. 2002;17(6):451-457.
26. Schneider EC, Cleary PD, Zaslavsky AM, Epstein AM. Racial disparity in influenza vaccination: does managed care narrow the gap between African Americans and whites? JAMA. 2001;286(12):1455-1460. doi: 10.1001/jama.286.12.1455.