Medicare beneficiaries attributed to small practices in accountable care organizations (ACOs) achieve greater savings than beneficiaries attributed to large practices in ACOs.
Objectives: Alternative payment models (APMs) encouraging provider collaboration may help small practices overcome the participation challenges that they face in APMs. We aimed to determine whether small practices in accountable care organizations (ACOs) reduced their beneficiaries’ spending more than large practices in ACOs.
Study Design: Retrospective cohort study of Medicare patients attributed to ACOs and non-ACOs.
Methods: We conducted a modified difference-in-differences analysis that allowed us to compare large vs small practices before and after the Medicare Shared Savings Program (MSSP) ACO started, between 2010 and 2016. Our sample included Medicare fee-for-service beneficiaries with 12 months of Medicare Part A and Part B (unless death) who were attributed to small (≤ 15 providers) and large (> 15 providers) practices participating in ACOs and non-ACOs. The outcome was patient annual spending based on CMS’ total per capita costs.
Results: Patients attributed to small practices in ACOs had annual Medicare spending decreases of $269 (95% CI, $213-$325; P < .001) more than patients attributed to large practices in ACOs. Small ACO practices reduced spending more than large practices by $165 for physician services (95% CI, $140-$190; P < .001), $113 for hospital/acute care (95% CI, $65-$162; P < .001), and $52 for other services (95% CI, $27-$77; P < .001). Small practices in ACOs spent $253 more on average at baseline than small practices in non-ACOs. ACOs with a higher proportion of small practices were more likely to receive shared savings payments.
Conclusions: Small practices in ACOs controlled costs more than large practices did. Small practice participation may generate higher savings for ACOs.
Am J Manag Care. 2022;28(3):117-123. https://doi.org/10.37765/ajmc.2022.88839
Are small practices in accountable care organizations (ACOs) more likely to achieve Medicare savings than large practices in ACOs?
The Patient Protection and Affordable Care Act placed a greater emphasis on models of reimbursing physicians and hospitals that hold them accountable for both quality and cost of care, an approach commonly referred to as value-based purchasing (VBP).1 The accountable care model is a particular VBP approach that has expanded rapidly. Within a Medicare Shared Savings Program (MSSP) accountable care organization (ACO), physician group practices, hospitals, and health systems can receive payments known as “shared savings” for meeting care quality and spending benchmarks.2 Study findings have shown that ACOs improve care quality but only modestly reduce annual beneficiary spending.3-9 However, evidence suggests that spending reductions are heterogeneous across organizational characteristics of the ACO.10-16 Further exploration of ACO features associated with spending reductions may help identify pathways for ACOs to control costs.17,18
One group that has faced challenges in VBP programs is the small physician practice.19-21 For small practices, participating in an ACO offers a potential opportunity to gain the benefits of provider networks that can overcome the burdensome requirements for VBP participation. Small practices face obstacles when joining and participating in VBP due to their lack of infrastructure (eg, health information technology), administrative and business expertise, and capital.19 Small practices may help ACOs obtain shared savings payments by limiting high-cost inpatient and specialty care and forming strong bonds with patients, thus increasing treatment adherence.22 Patients in small practices are also less likely to have avoidable hospital stays23 and readmissions.24 Further, small practices are nimbler when adjusting care to meet ACO quality and spending benchmarks.25 Therefore, we hypothesized that there would be greater reductions in spending for patients attributed to small practices in ACOs relative to large practices.
We evaluated the relative performance of small practices (≤ 15 providers) and large practices (> 15 providers) participating in an MSSP ACO in reducing annual Medicare expenditures of attributed beneficiaries between 2010 and 2016 using a modified difference-in-differences analysis. Beneficiary spending in small and large practices in ACOs was compared with spending in small and large non-ACO practices before and after ACO entry. Then, the difference between the small and large practice differences was calculated, which causally estimated how small practices in ACOs affected spending relative to large practices in ACOs.
The primary outcome, annual spending, was measured by aggregating Medicare spending for each beneficiary each year (Medicare Provider Analysis and Review [MedPAR], Outpatient, and Carrier files; 2010-2016). The independent variables included physician practice and patient characteristics from the Medicare Data on Provider Practice and Specialty (MD-PPAS; 2010-2016), the Medicare Beneficiary Summary File (MBSF; 2010-2016), and Medicare claims files. The ACO attribution and ACO entry dates were derived using the ACO Provider Research Identifiable File (RIF) for 2016. Shared savings payment information for 2016 was obtained from the Shared Savings Program ACO Public Use File and linked to our ACO practice composition data.
The study population included Medicare beneficiaries with Part A and Part B and no Medicare Advantage plan coverage for each year between 2010 and 2016 or until death. We selected 2016 as the final study year due to data availability; the other restrictions prevented potentially unobserved spending and diagnoses from biasing model estimates. Beneficiaries were then assigned to either an ACO or a non-ACO based on the CMS Shared Savings and Losses and Assignment Methodology Version 4 (see eAppendix A [eAppendices available at ajmc.com]).26 Characteristics of ACO-attributed and non–ACO-attributed beneficiaries were obtained from the MBSF, and beneficiary medical histories were determined using diagnosis codes included in the Medicare Carrier, Outpatient, and MedPAR files.
We used propensity score matching as a technique for quasirandomization.27 Demographics in the matching model included race and ethnicity (ie, Black, Asian, Native American, White, Hispanic, and other/unknown race), Medicaid dual-eligible status, gender, and age. We also incorporated CMS Hierarchical Condition Category (HCC) scores as a comorbidity measure, along with patient disability and end-stage renal disease status. Finally, we restricted patient matching to within hospital referral region (HRR) and year to address temporal and geographic variation in both attribution and health care resources. After matching, our analytic sample consisted of 2,788,240 unique ACO-attributed beneficiaries and 3,943,174 unique non–ACO-attributed beneficiaries between 2010 and 2016 (see eAppendix B Table).
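The matching step can be illustrated in miniature. The sketch below is not the authors' code; the caliper value, data layout, and function name are illustrative assumptions. It performs greedy 1:1 nearest-neighbor matching on a precomputed propensity score, restricted to control candidates in the same HRR-year stratum:

```python
from collections import defaultdict

def match_within_strata(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor propensity-score matching.

    treated/controls: iterables of (id, stratum, score) tuples, where
    stratum stands in for the (HRR, year) pair and score for the
    estimated propensity of ACO attribution. Returns matched
    (treated_id, control_id) pairs; each control is used at most once.
    """
    pool = defaultdict(list)
    for cid, stratum, score in controls:
        pool[stratum].append([cid, score, False])  # False = not yet used
    pairs = []
    for tid, stratum, score in treated:
        best, best_gap = None, caliper
        for rec in pool[stratum]:  # only same-stratum candidates
            gap = abs(rec[1] - score)
            if not rec[2] and gap <= best_gap:
                best, best_gap = rec, gap
        if best is not None:
            best[2] = True  # mark the control as consumed
            pairs.append((tid, best[0]))
    return pairs
```

A treated beneficiary with no unused control inside the caliper in its own HRR-year stratum simply goes unmatched, which is one common way such designs handle sparse strata.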
Identifying Practices and Characteristics
We identified physician practices using federal Taxpayer Identification Numbers (TINs). Whether a practice was participating as an ACO provider was then determined using the ACO provider files. Special payment models in underserved areas (eg, federally qualified health centers) do not use TINs for billing in the Outpatient file and account for a small percentage of beneficiaries, so we excluded them due to the difficulty in accurately measuring their sizes.26
The primary independent variable was practice size. We defined a small practice as any practice with 15 or fewer practicing physicians, the definition used by CMS’ Merit-based Incentive Payment System program.28 To determine practice size, we counted the number of National Provider Identifiers (NPIs) participating in each TIN each year using MD-PPAS files.
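A minimal sketch of this size calculation, assuming a roster of (year, TIN, NPI) billing records in the spirit of the MD-PPAS files (the data layout, function name, and cutoff parameter are hypothetical):

```python
from collections import defaultdict

def practice_sizes(roster, cutoff=15):
    """Count distinct NPIs billing under each (year, TIN) pair and
    flag small practices under a MIPS-style cutoff (<= 15 providers).

    roster: iterable of (year, tin, npi) records; duplicates are fine
    because a set keeps only distinct providers per practice-year.
    Returns {(year, tin): (n_providers, is_small)}.
    """
    npis = defaultdict(set)
    for year, tin, npi in roster:
        npis[(year, tin)].add(npi)
    return {key: (len(s), len(s) <= cutoff) for key, s in npis.items()}
```

Computing sizes per year, rather than once, matters here because a practice can cross the 15-provider threshold between 2010 and 2016, and the model's small-practice indicator varies by year.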
We also included a variable that indicated whether physician practices were part of a vertically integrated system. Organizational incentives differ for vertically integrated practices, and these incentives may affect a practice’s ability to constrain costs.29 Prior work by McWilliams et al showed that physician vs hospital ownership of the ACO, a measure of vertical integration, was associated with ACO spending.11 We identified vertical integration using Neprash and colleagues’ methods,30 which use the proportion of spending accrued by NPIs in hospital outpatient departments to assign TINs to vertically integrated systems (see eAppendix C).
Outcome Variable: Annual Patient Expenditures
The primary outcome was annual patient spending based on CMS’ total per capita costs (TPCC).26 Our annual spending measure included TPCC components of MedPAR, Carrier, Hospice, Home Health, and Outpatient claims. Durable medical equipment, a small proportion of total spending, was excluded due to data availability. Spending above the top 1% threshold in each year was censored at that threshold to smooth highly irregular payments (see eAppendix D for additional details).
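The censoring step amounts to winsorizing at the top-1% threshold within each year. A sketch under the simplifying assumption of a single year's spending vector (the function name and the percentile-index convention are illustrative, not the authors' exact procedure):

```python
def censor_top(values, pct=0.01):
    """Cap values above the top-pct threshold at that threshold.

    In the study design this would be applied separately to each
    calendar year's spending so the cap tracks annual spending growth.
    """
    s = sorted(values)
    # Threshold at the (1 - pct) empirical percentile (simple
    # order-statistic convention; libraries differ on interpolation).
    cut = s[int((1.0 - pct) * (len(s) - 1))]
    return [min(v, cut) for v in values]
```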
To determine the difference in spending between small and large practices in ACOs, we separately compared their spending with small and large practices in non-ACOs from before and after ACO entry and took the difference of these differences. Applying this methodology, referred to as difference-in-difference-in-differences,31 provided an estimate of the relative reduction in spending between small and large ACO practices. Specifically, our model had the following specification:
TPCC_ikt = β0 + β1Small_kt + β2Post_kt + β3Post_kt × Small_kt + β4Year_t + δX_it + α1ACO_ikt + α2ACO_ikt × Post_kt + α3ACO_ikt × Small_kt + α4ACO_ikt × Small_kt × Post_kt + ε_ikt
The outcome, TPCC_ikt, is the TPCC of patient i assigned to TIN k in year t; ACO_ikt is an indicator for whether the patient is attributed to an ACO; X_it is a set of patient-level covariates (ie, gender, race/ethnicity, age [< 65, 65-74, 75-80, and > 80 years], and the 72 HCC indicators used in CMS’ risk adjustment model), with δ defined as a vector of corresponding coefficients; Year_t is a normalized time trend (ie, 0, 1, 2, 3, 4, 5, 6); Post_kt is an indicator for whether TIN k is in its post period (ie, after the ACO launch date if the patient was ACO attributed and after the matched pair’s ACO launch date if the patient was attributed to a non-ACO); β0 is the intercept; and ε_ikt is the independent and identically distributed error term. The variable of interest, Small_kt, is an indicator of whether TIN k was defined as a small practice in a given year. The “treatment effect” in our model is the coefficient on the triple interaction term (α4), which represents the mean difference in spending between patients attributed to small vs large practices in ACOs. Our full specification, which separately specifies ACO attribution by ACO entry cohort (with April 2012 being the first cohort), is provided in eAppendix E. We identify 6 ACO cohorts based on their unique start dates in the ACO provider RIF; we therefore model 6 ACO cohort effects, which we weight by the total beneficiaries in each cohort to obtain a single estimate of the ACO effect on spending.
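With a fully saturated specification and no covariates, the coefficient on the triple interaction equals the difference of the two difference-in-differences computed from cell means. The simulation below uses entirely synthetic data (all dollar figures and the noise scale are hypothetical): it plants a −269 effect for small ACO practices in the post period and recovers it from the eight cell means:

```python
import random
from statistics import mean

random.seed(0)

def spend(aco, small, post):
    """Simulated per-capita spending with main effects plus a planted
    -269 triple-interaction effect (small ACO practice, post period)."""
    base = 10_000 + 150 * aco + 80 * small - 120 * post
    if aco and small and post:
        base -= 269
    return base + random.gauss(0, 500)

# Mean spending in each of the 8 (ACO, small, post) cells.
cells = {}
for aco in (0, 1):
    for small in (0, 1):
        for post in (0, 1):
            cells[aco, small, post] = mean(
                spend(aco, small, post) for _ in range(4_000))

def diff_in_diff(aco, small):
    """Post-minus-pre spending change for one practice type."""
    return cells[aco, small, 1] - cells[aco, small, 0]

# Triple difference: the small-vs-large change among ACO practices,
# net of the same contrast among matched non-ACO practices.
ddd = (diff_in_diff(1, 1) - diff_in_diff(1, 0)) - \
      (diff_in_diff(0, 1) - diff_in_diff(0, 0))
```

The non-ACO contrast nets out any secular small-vs-large spending trend, which is what lets the design attribute the remaining gap to ACO participation by small practices.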
To achieve savings, ACOs had to meet both quality and savings benchmarks. We illustrate ACO achievement of shared savings and the dollars per attributed beneficiary by the percentage of attributed beneficiaries assigned to a small practice (Figure).
Practice and Beneficiary Characteristics of ACOs vs Non-ACOs Before and After Matching
Before the 1:1 propensity score matching was applied, ACO practices differed from non-ACO practices on both practice and patient characteristics in 2016 (Table 1). ACO practices were more likely to be primary care physician (PCP) only (56.8% vs 39.9%; P < .001) and mixed PCPs and specialists (23.9% vs 16.4%; P < .001) as opposed to specialist only (19.2% vs 43.7%; P < .001). Practices participating in ACOs were more likely to be located in the Northeast (24.8% vs 20.1%; P < .001) and the Midwest (17.3% vs 15.9%; P < .001). Although they represented only 15% of total practices, ACO practices were responsible for the care of roughly a quarter of all patients. Patients in ACOs were younger and less likely to be disabled or dually eligible.
After matching, patient characteristics were well balanced between ACOs and non-ACOs. Practice characteristic differences were also reduced after patient matching, likely because patients were matched within HRRs, where practice characteristics tend to be homogeneous.
Characteristics of Small and Large Practices in ACOs vs Non-ACOs After Patient Matching
Table 2 shows the comparison of small and large practices in ACOs and non-ACOs in our postmatching sample in 2016. This comparison demonstrates the key features that distinguish small and large practices and their attributed patients apart from ACO participation. Small practices in ACOs were more likely to have dual-eligible patients (19.4% vs 14.5%; P < .001), less likely to be vertically integrated (3.0% vs 7.2%; P = .049), and more likely to be PCP only (61.2% vs 0.2%; P < .001). Further, there were nearly 10 times as many small practices as large practices in ACOs, yet small practices cared for 40% fewer patients.
Observed differences between small and large practices were similar across ACO status. However, there were differences between ACO and non-ACO practices. ACO practices were more likely to be vertically integrated, be composed of specialty physicians, and have more patients attributed to large practices. Further, large practices made up a larger proportion of practices in ACOs than they did in non-ACOs, with a ratio of small to large practices of 10:1 in ACOs compared with 12:1 in non-ACOs. Propensity score matching created a balanced covariate distribution between beneficiaries attributed to ACOs vs non-ACOs. This balance was unperturbed after stratifying by practice size (Tables 1 and 2).
We also examined whether vertical integration was collinear with practice size. Large practices were more frequently vertically integrated and hospital owned. However, enough small practices fell into each category to include the integration variable (eAppendix F).
Differences in Marginal Predicted Payments by Payment Category
Table 3 shows the model-predicted payments stratified by practice size and ACO attribution status for 4 spending categories: total, hospital/acute care, physician, and all other spending. There was a modest reduction in spending for beneficiaries attributed to small ACO practices in all payment categories. Being attributed to a small ACO practice led to an additional spending reduction of $269 relative to large ACO practices, representing a 3% decline in baseline spending. Spending for beneficiaries attributed to small ACO practices declined $113 for hospital/acute care (~3.4%), $165 for physician services (~5.0%), and $52 for all other service types (~2.2%). The parallel trends assumption was confirmed visually (see eAppendix G Figure).
Differences in baseline spending may drive the larger spending reductions in small practices. Small practices in ACOs spent $253 more on average in the pre-ACO period than did small practices in non-ACOs; this higher pre-entry spending decreased to non-ACO levels after entry. In contrast, large practices in ACOs had lower spending than did large practices in non-ACOs at baseline ($88 less on average in the pre-ACO period) and did not reduce spending substantially after ACO entry.
Association of Practice Size Within ACO With Receiving MSSP Rewards
The Figure shows the fraction of ACOs receiving shared savings stratified by their practice size composition. The likelihood of achieving shared savings and the per-beneficiary payment amounts were higher for ACOs that had a greater proportion of their beneficiaries attributed to small practices.
Within the MSSP payment model, small ACO practices decreased annual Medicare spending by $269 per patient more than large ACO practices. Multiplying this by the 408,837 patients in all small practices in ACOs yields approximate savings to Medicare of roughly $110 million. More generally, ACOs with a higher proportion of small practices were more likely to meet their quality and spending benchmarks and to receive shared savings payments. Small practices in ACOs had higher costs before participating, suggesting lower baseline efficiency but more room for improvement.
Small practice success in the ACO model may be surprising given the cited burdens that small practices face when participating in pay-for-performance programs.21 Small practice attitudes toward pay-for-performance programs have been described in the literature as “unmotivated” due to the additional labor burden and the program-related implementation costs.19 These attitudes may explain why, in some cases, small practice performance in national programs is no different from or worse than large practice performance.32-35 However, a study by Wang and colleagues evaluating small practice performance found that the largest improvements were for small practices in the Patient-Centered Medical Home (PCMH) program.36 Although the PCMH lacks a formal pay-for-performance component, it emphasizes care coordination and sharing resources across participating practices, much as the ACO model does. More generally, small practices have a greater incentive to reduce spending when more care is delivered outside the organization; for example, they are more likely than hospital-led organizations to reduce costly hospitalizations.
In the context of other studies of ACO performance, the reduction in total spending among ACO-attributed patients (when averaged over our large and small practice savings of $26 and $295, respectively) is similar to overall ACO findings presented by McWilliams and colleagues,10 who observed a beneficiary spending reduction of $144 for patients attributed to the 2012 ACO cohort. Our analysis, however, includes several of the more recent ACO cohorts and additional years of data for the early entrants. We therefore find that ACOs have remained consistent in their ability to control costs over time and across cohorts. We also build on the findings of prior ACO studies suggesting that heterogeneity in ACO organizational features can determine ACO success.
Despite statistically significant findings suggesting that the participation of small practices in ACOs reduced patient spending, several limitations should be considered. The first is potential residual confounding due to beneficiary selection into ACOs.37-39 To mitigate this issue, we propensity score matched ACO-attributed and non–ACO-attributed beneficiaries on their ACO attribution probabilities, and balance was maintained when we categorized attributed practices by size. Because our comparison is between large and small practices in ACOs, any residual unobserved patient selection should be mitigated by the difference-in-differences framework. In addition to patient selection, there may be selection bias in which practices enter and leave ACOs. Specifically, there is some evidence that ACOs selectively move practices in and out of the ACO based on their performance and patient case mix, and this form of selection may be easier with small than with large practices. However, in examining ACO dropout among small and large practices, we did not find differences large enough to support any considerable bias.
Another limitation is that our definition of a “small” practice is somewhat arbitrary even though it is based on that of another VBP program. As a sensitivity analysis, we reestimated the model using different small practice cut points, including solo practitioners, 5 providers, 10 providers, 25 providers, and 50 providers. Results from the sensitivity analysis were consistent with our findings; increasing the small practice cut point above 15 reduced the difference in spending (ie, smaller savings for small practices than the main model) whereas decreasing the cut point below 15 increased the spending difference (ie, greater savings for small practices than the main model). Another concern regarding practice size is that MD-PPAS files, which we used to determine practice size, may underestimate practice size. Some large practices bill under more than 1 TIN, which would bias toward an underestimate of actual spending reduction.
Finally, we do not consider nuances surrounding the expansion of the ACO program during our study period, such as the introduction of the ACO Investment Model in 2016 and the subsequent Pathways to Success overhaul. Although we do not believe these issues affect our findings, given that our study period ended in 2016 and most ACOs selected 1-sided risk, changes to the ACO model in 2016 and beyond are essential to consider when interpreting our results. In particular, small practices may face greater challenges taking on 2-sided risk contracts, especially in rural areas, where there may be less experience with these contracts and a greater density of small practices. Future research on this topic should carefully consider how the evolution of the ACO model may affect small practice performance.
Annual Medicare spending for beneficiaries attributed to small ACO practices was $269 less on average than for large ACO practices. Spending reductions were the largest in physician services ($165), followed by hospital/acute care ($113). These findings appear to be driven by high baseline spending among beneficiaries attributed to small practices in ACOs, which was reduced to the level of small practices in non-ACOs following ACO entry. This finding may be explained by efficiency gains accruing to small practices when participating in ACOs. The substantial savings in small practices may also warrant policies that strengthen incentives for small practice participation in ACOs and VBP.
Dr Lena Chen led this research team until her unexpected death due to an aneurysm in July 2019. She obtained the funding and designed and supervised the initial analysis and presentation. After her death, Dr Julie P.W. Bynum assumed the leadership role and completed the work. The authors are grateful to Dr Chen’s family for supporting their desire to include her as an author posthumously.
Author Affiliations: Department of Health Policy & Management, Johns Hopkins University Bloomberg School of Public Health (JBG), Baltimore, MD; Institute for Healthcare Policy & Innovation, University of Michigan (CHC, MB, JM, ECN, LC, JPWB), Ann Arbor, MI; Department of Internal Medicine (CHC, JM, LC, JPWB) and Department of Pediatrics and Communicable Diseases (JM), University of Michigan Medical School, Ann Arbor, MI; Department of Biostatistics (MB, JM) and Department of Health Management & Policy (ECN), University of Michigan School of Public Health, Ann Arbor, MI; Medicine Service, Veterans Affairs Ann Arbor Healthcare System (JM), Ann Arbor, MI; Department of Economics, University of Michigan (ECN), Ann Arbor, MI.
Source of Funding: This study was partly supported by Agency for Healthcare Research and Quality grant R01 HS024698.
Author Disclosures: Dr Meddings has submitted a patent for a device to monitor patient movement and has received a patent involving a device to improve safety of urinary catheter insertion in women, both unrelated to this article. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (JBG, CHC, ECN, LC, JM, JPWB); acquisition of data (JBG, LC, JPWB); analysis and interpretation of data (JBG, CHC, MB, JM, ECN, LC, JPWB); drafting of the manuscript (JBG, MB, JPWB); critical revision of the manuscript for important intellectual content (JBG, CHC, MB, JM, ECN, LC, JPWB); statistical analysis (JBG, CHC, MB, ECN, JPWB); provision of patients or study materials (JBG, CHC, LC, JPWB); obtaining funding (LC, JPWB); administrative, technical, or logistic support (JBG, JPWB); and supervision (LC, JPWB).
Address Correspondence to: Julie P.W. Bynum, MD, MPH, University of Michigan, 2800 Plymouth Rd, NCRC B16, Ann Arbor, MI 48109. Email: firstname.lastname@example.org.
1. Meddings J, Gupta A, Houchens N. Quality and safety in the literature: January 2020. BMJ Qual Saf. 2020;29(1):86-90. doi:10.1136/bmjqs-2019-010547
2. Berwick DM. Making good on ACOs’ promise—the final rule for the Medicare Shared Savings Program. N Engl J Med. 2011;365(19):1753-1756. doi:10.1056/NEJMp1111671
3. Colla CH, Lewis VA, Stachowski C, Usadi B, Gottlieb DJ, Bynum JPW. Changes in use of postacute care associated with accountable care organizations in hip fracture, stroke, and pneumonia hospitalized cohorts. Med Care. 2019;57(6):444-452. doi:10.1097/MLR.0000000000001121
4. Gilstrap LG, Huskamp HA, Stevenson DG, Chernew ME, Grabowski DC, McWilliams JM. Changes in end-of-life care in the Medicare Shared Savings Program. Health Aff (Millwood). 2018;37(10):1693-1700. doi:10.1377/hlthaff.2018.0491
5. Nathan H, Thumma JR, Ryan AM, Dimick JB. Early impact of Medicare accountable care organizations on inpatient surgical spending. Ann Surg. 2019;269(2):191-196. doi:10.1097/SLA.0000000000002819
6. Agarwal D, Werner RM. Effect of hospital and post-acute care provider participation in accountable care organizations on patient outcomes and Medicare spending. Health Serv Res. 2018;53(6):5035-5056. doi:10.1111/1475-6773.13023
7. Colla CH, Lewis VA, Kao LS, O’Malley AJ, Chang CH, Fisher ES. Association between Medicare accountable care organization implementation and spending among clinically vulnerable beneficiaries. JAMA Intern Med. 2016;176(8):1167-1175. doi:10.1001/jamainternmed.2016.2827
8. Lam MB, Zheng J, Orav EJ, Jha AK. Early accountable care organization results in end-of-life spending among cancer patients. J Natl Cancer Inst. 2019;111(12):1307-1313. doi:10.1093/jnci/djz033
9. McWilliams JM, Landon BE, Chernew ME. Changes in health care spending and quality for Medicare beneficiaries associated with a commercial ACO contract. JAMA. 2013;310(8):829-836. doi:10.1001/jama.2013.276302
10. McWilliams MJ, Hatfield LA, Chernew ME, Landon BE, Schwartz AL. Early performance of accountable care organizations in Medicare. N Engl J Med. 2016;374(24):2357-2366. doi:10.1056/NEJMsa1600142
11. McWilliams JM, Hatfield LA, Landon BE, Pasha H, Chernew ME. Medicare spending after 3 years of the Medicare Shared Savings Program. N Engl J Med. 2018;379(12):1139-1149. doi:10.1056/NEJMsa1803388
12. Sukul D, Ryan AM, Yan P, et al. Cardiologist participation in accountable care organizations and changes in spending and quality for Medicare patients with cardiovascular disease. Circ Cardiovasc Qual Outcomes. 2019;12(9):e005438. doi:10.1161/CIRCOUTCOMES.118.005438
13. McWilliams MJ, Chernew ME, Landon BE, Schwartz AL. Performance differences in year 1 of Pioneer accountable care organizations. N Engl J Med. 2015;372(20):1927-1936. doi:10.1056/NEJMsa1414929
14. Comfort LN, Shortell SM, Rodriguez HP, Colla CH. Medicare accountable care organizations of diverse structures achieve comparable quality and cost performance. Health Serv Res. 2018;53(4):2303-2323. doi:10.1111/1475-6773.12829
15. Nyweide DJ, Lee W, Cuerdon TT, et al. Association of Pioneer accountable care organizations vs traditional Medicare fee for service with spending, utilization, and patient experience. JAMA. 2015;313(21):2152-2161. doi:10.1001/jama.2015.4930
16. Schulz J, DeCamp M, Berkowitz ASA. Spending patterns among Medicare ACOs that have reduced costs. J Healthc Manag. 2018;63(6):374-381. doi:10.1097/JHM-D-17-00178
17. McWilliams MJ, Landon BE, Rathi VK, Chernew ME. Getting more savings from ACOs — can the pace be pushed? N Engl J Med. 2019;380(23):2190-2192. doi:10.1056/NEJMp1900537
18. Lewis VA, Fisher ES, Colla CH. Explaining sluggish savings under accountable care. N Engl J Med. 2017;377(19):1809-1811. doi:10.1056/NEJMp1709197
19. Hearld LR, Alexander JA, Shi Y, Casalino LP. Pay-for-performance and public reporting program participation and administrative challenges among small- and medium-sized physician practices. Med Care Res Rev. 2014;71(3):299-312. doi:10.1177/1077558713509018
20. Schneider ME. Pay-for-performance demo price tag may be too high for small practices. MDedge. April 1, 2008. Accessed February 14, 2022. https://www.mdedge.com/dermatology/article/8627/health-policy/pay-performance-demo-price-tag-may-be-too-high-small
21. Ault A. Policy & practice: pilot P4P for small practices. Internal Medicine News. November 15, 2006. Accessed April 15, 2020. https://www.mdedge.com/internalmedicine/article/14224/health-policy/policy-practice
22. Robinson JC, Miller KM. Total expenditures per patient in hospital-owned and physician-owned physician organizations in California. JAMA. 2014;312(16):1663-1669. doi:10.1001/jama.2014.14072
23. Casalino LP, Pesko MF, Ryan AM, et al. Small primary care physician practices have low rates of preventable hospital admissions. Health Aff (Millwood). 2014;33(9):1680-1688. doi:10.1377/hlthaff.2014.0434
24. McWilliams JM, Chernew ME, Zaslavsky AM, Hamed P, Landon BE. Delivery system integration and health care spending and quality for Medicare beneficiaries. JAMA Intern Med. 2013;173(15):1447-1456. doi:10.1001/jamainternmed.2013.6886
25. Lemaire N, Singer SJ. Do independent physician led ACOs have a future? NEJM Catalyst. February 22, 2018. Accessed November 17, 2020. https://catalyst.nejm.org/doi/full/10.1056/CAT.18.0251
26. Medicare Shared Savings Program: shared savings and losses and assignment methodology specifications. December 2015. Accessed November 17, 2020. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Downloads/Shared-Savings-Losses-Assignment-Spec-V4.pdf
27. D’Agostino RB Jr. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Stat Med. 1998;17(19):2265-2281. doi:10/fmjktp
28. Special statuses. Quality Payment Program. Accessed April 15, 2020. https://qpp.cms.gov/mips/special-statuses
29. Burns LR, Goldsmith JC, Sen A. Horizontal and vertical integration of physicians: a tale of two tails. Adv Health Care Manag. 2013;15:39-117. doi:10.1108/s1474-8231(2013)0000015009
30. Neprash HT, Chernew ME, Hicks AL, Gibson T, McWilliams JM. Association of financial integration between physicians and hospitals with commercial health care prices. JAMA Intern Med. 2015;175(12):1932-1939. doi:10.1001/jamainternmed.2015.4610
31. Kim H, Meath THA, Dobbertin K, Quiñones AR, Ibrahim SA, McConnell KJ. Association of the mandatory Medicare bundled payment with joint replacement outcomes in hospitals with disadvantaged patients. JAMA Netw Open. 2019;2(11):e1914696. doi:10.1001/jamanetworkopen.2019.14696
32. Vamos EP, Pape UJ, Bottle A, et al. Association of practice size and pay-for-performance incentives with the quality of diabetes management in primary care. CMAJ. 2011;183(12):E809-E816. doi:10.1503/cmaj.101187
33. Bardach NS, Wang JJ, De Leon SF, et al. Effect of pay-for-performance incentives on quality of care in small practices with electronic health records: a randomized trial. JAMA. 2013;310(10):1051-1059. doi:10.1001/jama.2013.277353
34. Doran T, Fullwood C, Gravelle H, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355(4):375-384. doi:10.1056/NEJMsa055505
35. Doran T, Campbell S, Fullwood C, Kontopantelis E, Roland M. Performance of small general practices under the UK’s Quality and Outcomes Framework. Br J Gen Pract. 2010;60(578):e335-e344. doi:10.3399/bjgp10X515340
36. Wang JJ, Cha J, Sebek KM, et al. Factors related to clinical quality improvement for small practices using an EHR. Health Serv Res. 2014;49(6):1729-1746. doi:10.1111/1475-6773.12243
37. Markovitz AA, Hollingsworth JM, Ayanian JZ, et al. Risk adjustment in Medicare ACO program deters coding increases but may lead ACOs to drop high-risk beneficiaries. Health Aff (Millwood). 2019;38(2):253-261. doi:10.1377/hlthaff.2018.05407
38. DeCamp M, Lehmann LS. Guiding choice—ethically influencing referrals in ACOs. N Engl J Med. 2015;372(3):205-207. doi:10.1056/NEJMp1412083
39. Douven R, McGuire TG, McWilliams JM. Avoiding unintended incentives in ACO payment models. Health Aff (Millwood). 2015;34(1):143-149. doi:10.1377/hlthaff.2014.0444