The different approaches to setting benchmarks for population-based payment models (empirical, bidding based, and administratively set) have unique advantages and challenges.
Am J Manag Care. 2022;28(7):In Press
Understanding different approaches to setting benchmarks for population-based payment models can help policy makers design accountable care organization models that reduce spending, improve quality, and promote equity. Administratively set benchmarks, which tie benchmark trajectories to external indices (as opposed to historical spending), may address problems, such as ratchet effects, that are present when using other types of benchmarks.
Although Medicare has historically relied on fee-for-service (FFS) payment, population-based payment (PBP), which provides financial incentives to plans or providers to manage total cost of care, has grown in prominence over the past decade. PBP takes many forms. For example, PBP is the foundation of the Medicare Advantage (MA) program, in which the payment (based on the MA benchmark) is paid to health plans (which may in turn pay FFS to providers). Recently, MA has grown to enroll almost 50% of Medicare beneficiaries. The remaining beneficiaries are covered in traditional Medicare (TM), which uses both FFS payment and alternative payment models (APMs), including PBP models known as accountable care organizations (ACOs).
ACOs typically operate on budget-based versions of PBP, where FFS payment is used to pay all claims, but bonuses (or penalties) are paid to ACOs at the end of the year based on accrued FFS spending relative to a benchmark. This FFS chassis facilitates cash flows and eliminates the need for ACOs to contract with non-ACO providers, but the core incentives (if the model is well designed) reward organizations that deliver less care without sacrificing quality. More than 30% of TM beneficiaries are now covered by ACOs, bringing the total share of beneficiaries enrolled in both Medicare Part A and Part B who are in population-based models (eg, MA or ACOs) to almost 70%. The success of any PBP model depends on the details of the model.
Perhaps the most important parameter of any PBP model is how the payments, known as benchmarks, are set. Medicare can save money if the benchmark is set below what would otherwise have been spent, if Medicare keeps a large enough share of any savings, or if any efficiencies in care delivery spill over to populations outside the PBP model. Higher benchmarks induce plan or provider participation but increase program expenditures. Lower benchmarks may reduce available benefits (in MA) or reduce plan participation in MA or provider participation in voluntary ACO models. Moreover, reductions in benchmarks in response to an organization’s past success in controlling health care spending (often referred to as “rebasing”) lower program spending in the short run but can greatly diminish incentives to save or participate.1
Here we review several approaches to setting benchmarks and highlight the advantages and challenges with each. Specifically, we focus on 3 broad approaches to setting population-based benchmarks: (1) empirical benchmarks, (2) bidding-based benchmarks, and (3) administratively set benchmarks.
Empirical benchmarks refer to a process of setting benchmarks based on observed spending. This is the dominant method of benchmarking for PBP in Medicare, although the details differ between the MA program and the ACO programs. The MA benchmark is based on an “external” comparison. Specifically, for each county in the plan’s service area, the benchmark is a multiple (eg, 115%) of the average spending in the TM sector. The plan benchmark aggregates the county-specific benchmarks. The MA benchmark in any county is not plan specific and rises at the same rate as TM spending in the county. Importantly, TM spending includes FFS and APM spending but does not include MA spending, so any feedback between lower MA spending and MA benchmarks is indirect (ie, because of spillovers from MA to TM in a county).
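The county-level arithmetic described above can be sketched as follows. This is a purely illustrative simplification with hypothetical dollar figures and multipliers; the actual MA program applies county quartile multipliers, quality bonus adjustments, and risk adjustment that are not modeled here.

```python
# Illustrative sketch of MA-style "external" benchmarking.
# All numbers are hypothetical; real MA benchmarks involve quartile
# multipliers, quality bonuses, and risk adjustment not shown here.

def county_benchmark(tm_spending_per_capita: float, multiplier: float) -> float:
    """Benchmark for one county: a multiple of average TM spending there."""
    return tm_spending_per_capita * multiplier

def plan_benchmark(counties: list) -> float:
    """Enrollment-weighted aggregate of the county-specific benchmarks."""
    total_enrollment = sum(c["enrollment"] for c in counties)
    weighted = sum(
        county_benchmark(c["tm_spending"], c["multiplier"]) * c["enrollment"]
        for c in counties
    )
    return weighted / total_enrollment

# Hypothetical two-county service area.
service_area = [
    {"tm_spending": 10_000, "multiplier": 1.15, "enrollment": 3_000},
    {"tm_spending": 12_000, "multiplier": 1.00, "enrollment": 1_000},
]
print(round(plan_benchmark(service_area), 2))  # 11625.0
```

Note that the plan's own spending appears nowhere in the calculation, which is what makes this benchmark "external": feedback from MA spending to the benchmark can occur only through spillovers into TM.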
In contrast, ACO benchmarks are “circular.” Specifically, they are initially set as a blend of an ACO’s historical spending at baseline and TM spending in the ACO’s service area. This creates an ACO-specific benchmark. During the contract period (3-5 years), the ACO-specific part of an ACO’s benchmark is updated based on either projected or, in most cases, actual TM spending growth (details depend on the model). When an ACO transitions to a new contract period, the ACO-specific component of the benchmark is rebased such that the spending in the performance period of the first contract period contributes to the baseline of the next contract period. The regional component of the benchmark rises with regional TM spending and receives increasing weight over time, up to 50%. This approach to updating is circular in that an ACO’s spending is included directly in the benchmark calculation (both in the regional component and in the update/rebasing of the ACO-specific component).
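A stylized sketch of the blend-and-rebase mechanics follows. The weights and dollar figures are hypothetical, and actual Medicare Shared Savings Program rules add trend factors, caps, and risk adjustment; the point is only to show how rebasing feeds an ACO's own past savings back into its future benchmark.

```python
# Stylized sketch of circular ACO benchmarking (hypothetical numbers).

def blended_benchmark(aco_historical: float, regional: float,
                      regional_weight: float) -> float:
    """Blend ACO-specific historical spending with regional TM spending."""
    return (1 - regional_weight) * aco_historical + regional_weight * regional

# Contract period 1: benchmark built from baseline historical spending.
bench_1 = blended_benchmark(aco_historical=11_000, regional=12_000,
                            regional_weight=0.35)

# The ACO saves: performance-period spending comes in below its benchmark.
actual_spending = 10_500

# Contract period 2: the ACO-specific component is rebased to the
# performance-period spending, so past success lowers the future benchmark
# (the "ratchet" effect), even with the regional component held fixed.
bench_2 = blended_benchmark(aco_historical=actual_spending, regional=12_000,
                            regional_weight=0.50)  # regional weight rises over time

print(bench_1, bench_2)  # bench_2 falls below bench_1
```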
Empirical benchmarking typically reflects the belief that benchmarks should approximate what spending would have been in TM (perhaps with a slight discount built in). When MA was established, the benchmarks were set at 95% of spending in the then-dominant TM system. Over time, legislation that was intended to bring more plans (in more parts of the country) into the market changed the benchmark system, but the core paradigm of basing MA benchmarks on TM has remained. Similarly, when ACOs were introduced, the paradigm was that payment amounts should follow spending in the TM sector, which in this case includes the ACOs.
If benchmarks were set to equal the spending that would have happened in TM, the most direct way that the Medicare program saves is by keeping a share of the savings that arise when ACO spending or MA bids are below the benchmark (or by charging ACOs penalties if spending is above the benchmark). The Medicare program will also save indirectly if the practice pattern changes associated with the MA or ACOs spill over to other patients. Considerable evidence suggests that such spillovers exist, but the magnitude is likely dependent on the context.2-4 In the case of the ACO program, there are more direct spillovers because current ACO spending is included in the future benchmark (as spending falls, future benchmarks fall) and because MA benchmarks are based on TM spending, which includes spending for beneficiaries in ACOs.
One appeal of empirical benchmarks is that the benchmarks automatically adjust for forces that influence spending but are beyond a plan or provider’s control. These forces include new technologies, changes to accepted standards of care, and changes in care-seeking behavior due to economic fluctuations. This flexibility may be an advantage because it shields providers from risks associated with unforeseen inflationary pressures, but it also does not shield them from lower revenue if unforeseen forces, such as the COVID-19 pandemic, reduce TM spending, thereby defeating one purpose of global budget models (to stabilize revenue).5 Similarly, because of the regional component of empirical benchmarks, ACOs face continuous competitive pressures to improve as other ACOs in the market improve, which could accelerate savings over time (assuming ACOs stay in the program).
The problem with external empirical benchmarks (as in the MA program) is that as the population used to set the benchmark (eg, TM) shrinks, the benchmarks may fluctuate excessively. More importantly, the remaining population may not be representative of the population in the PBP. For example, if beneficiaries who remain in TM are more costly in ways that are difficult for risk adjustment to capture, MA benchmarks will be too high, which will increase program expenditures. Alternatively, if beneficiaries who remain in TM are less costly, MA benchmarks will be too low, which will lower benefits, increase premiums charged to beneficiaries, and possibly drive plans from the market. Thus, the benchmarks may be unstable, if not distortionary, as participation in the program (eg, MA) grows. Although MA enrollment is now a bit less than 50% nationally, in some markets MA enrollment approaches 60% or even 70%, which is problematic given that empirical comparison benchmarking is local.6 In the extreme, if participation in MA were 100%, there would be no basis for setting the MA benchmark. Similar issues would arise in the ACO program if ACO benchmarks were based on the non-ACO TM population. In either case, this issue with external benchmarks could be remedied by use of broader markets, but local practice pattern variation diminishes the attractiveness of this approach.
The problem with circular empirical benchmarks (as in ACOs) is somewhat different. Specifically, the circularity creates 2 types of “ratchet” effects in which success begets reductions in benchmarks that in turn discourage participation and savings. The first relates to the ACO-specific part of the benchmark. As an ACO transitions from one contract period to the next (or from one model to the next), its baseline benchmark is rebased to reflect spending in the past performance period. As a result, when an ACO succeeds in lowering spending, the ACO-specific portion of the benchmark falls in future contract periods, making future success more difficult. This link between an ACO’s present behavior and its future benchmark diminishes incentives to save because efforts to save now are penalized with lower benchmarks later and losses now are rewarded with higher benchmarks later. The current blending of ACO-specific benchmarks with regional spending reduces, but does not eliminate, this problem, and it will grow over time if ACOs become increasingly efficient.
The second problem with circular benchmarks in ACOs relates to the regional component. Specifically, if an ACO has a large share of the market, its spending will affect the regional portion of the benchmark, creating a ratchet effect similar to that which occurs for the ACO-specific part of the benchmark. Even if each ACO has a small share of the market, as ACO enrollment rises, the regional component of the benchmark approaches average spending across all ACOs. If ACOs collectively lower growth, benchmarks will rise more slowly than spending would have without ACOs. Thus, if ACOs collectively succeed, the regional component of the benchmark will fall (relative to if they do not succeed). This will deter participation in a voluntary model. Moreover, this could create a race to the bottom and instability for providers facing inadequate risk adjustment or other barriers to matching competitors’ savings. Although, as mentioned above, the pressure to perform better than competitors may be viewed as an advantage, both of these ratchet effects (ACO specific and regional), by construction, are likely to result in many ACOs facing penalties, which makes the ACO program less appealing over time and thus reduces the incentive to participate.
Bidding-based benchmarks refer to a system in which the benchmark depends on bids from plans or providers. Bidding is used in many programs, including MA, but the distinguishing feature of bidding-based benchmarks is that the bids affect the benchmark (and thus the bids of any given organization affect what other organizations get paid as opposed to simply affecting the organization’s payment). Two prominent examples of bidding-based benchmarks include the Affordable Care Act marketplaces, in which the government subsidy is based on the second-lowest bid for a silver plan, and the Part D program, in which the benchmark reflects the average Part D bid nationally.
The case for bidding-based benchmarks stems from the appeal of competition. When competition works, bidding puts much more pressure on everyone’s spending. In theory, competition drives prices (benchmarks) to the efficient level. Evidence from the durable medical equipment (DME) competitive bidding demonstration shows that competitive bidding reduced Medicare spending for the covered items by more than 40%.7 Similarly, analysis of competitive bidding in Medicare Part D found that the addition of a national plan sponsor was associated with a reduction in Medicare spending.8 However, the commodity nature of DME and prescription drugs, as well as the role of reinsurance in Part D, are important caveats that make generalization from these positive experiences challenging.
The merits of bidding-based benchmarks hinge on how well competition functions. A common concern in health care is that industry consolidation (of plans and/or providers) hinders competition. Many markets are already consolidated, and evidence suggests that plans and providers in consolidated markets can charge higher prices.9 In fact, evidence from MA, where bidding is used to determine benefit generosity but not benchmarks, suggests that competition is far from perfect.10
Although consolidation is surely part of the reason that competition is less powerful than advocates would like, another concern is inertia in plan or provider choice. Even when choices do exist, patients often do not shop effectively. This is a particular concern for Medicare beneficiaries, who may find shopping difficult. For example, considerable literature documents choice inertia and situations in which beneficiaries chose plans inferior to other options.11-14 The DME and Part D examples involve products that are relatively homogeneous or whose features can be understood by consumers. As a result, patients are relatively indifferent to who is providing the product. However, health plans and health care providers are more differentiated. Switching health plans or providers may be much more consequential. As a result, patients may be reluctant to make such switches.
Competition among ACOs may be even less effective than among MA plans because Medicare beneficiaries are likely unaware of the ACO programs (as they are passively enrolled) and even if they were informed, switching ACOs would require switching doctors. Moreover, most ACO programs have yet to incorporate a mechanism to explicitly pass savings from any efficiencies on to beneficiaries, so we may not expect patients to be attracted to efficient ACOs. Using bidding to set benchmarks in the ACO program would thus raise many challenges and likely require significant other changes to the ACO programs, some of which, like requiring beneficiaries to choose a physician, may conflict with the original Medicare statute.
Paradoxically, another concern about bidding-based benchmarks is that competition will be too vigorous. One virtue of competition is that it continually presses for greater savings. Yet plans and providers will worry that this could lead to a race to the bottom because quality is hard to measure and risk adjustment systems are imperfect. Similarly, as in many areas of the economy, a competitive model could lead to a 2-tiered system in which individuals who can afford more generous coverage or access to higher-quality providers pay more while others have their access to high-quality providers or care restricted. Such a system could exacerbate health disparities. Without guardrails, the virtuous competitive outcomes may be offset by deleterious consequences of competition.
Administrative benchmarks refer to a system in which benchmark trajectories are set either as a fixed growth rate or as a growth rate relative to an external (not spending) index (eg, the Consumer Price Index [CPI]).15 Most Medicare fee schedules rely on administrative pricing. For example, although there are sometimes adjustments for specific services, overall fee trajectories for hospital services are set as the growth in the hospital market basket minus a productivity adjustment specified in the Affordable Care Act. Population-based benchmarks are a bit different because they are set to encompass all the care that an individual uses over the course of a year, which can vary widely compared with variation in relatively homogeneous services. In the United States, the hospital global budget system in Maryland comes closest to an administrative benchmark system, as all-payer per capita hospital global budget growth is capped at 3.58% per year (though this may be allocated across specific organizations in ways that allow budgets in some organizations to grow faster and force those in other organizations to grow more slowly). That system is hospital focused, capturing only a portion of spending, whereas an administrative benchmark system for ACOs or MA plans would be population based, reflecting total cost of care. Other developed countries, such as the United Kingdom, have payment models that rely more heavily on administrative prices for broad bundles of services.
Benchmarks in an administrative benchmark system for providers or plans could be regional (as in MA) or provider specific (as in ACOs). Either way, this is similar to premium support systems outlined by others for Medicare that do not rely on empirical benchmarking but instead make fixed payments (ie, benchmarks) to plans that rise at a predefined trajectory. The fixed payment functions as the benchmark, and in a premium support system, beneficiaries pay any premium above the fixed payment.16 The big difference is that in an administrative benchmark system, the responsibility to keep spending below the benchmark (and to assume the risk if spending rises above benchmarks) rests with providers or plans, not with beneficiaries.17
In an administratively set benchmark system, the benchmark trajectory can be tied to a broad economic indicator such as the gross domestic product (GDP) or CPI (eg, GDP + 1) or to Medicare fee increases. For example, current CMS forecasts suggest that Part A and Part B spending per Medicare beneficiary will rise about 2.5% per year above inflation. This is driven by an assumed annual increase of about 3.5% in utilization (volume and intensity) per beneficiary, offset by Medicare fee increases that rise about 1 percentage point less than inflation (because of the ACA productivity adjustment and subinflation physician fee updates called for under the Medicare Access and CHIP Reauthorization Act/Merit-based Incentive Payment System). Under an administrative benchmark system, benchmarks could be set, for example, at the projected increase in Medicare fees (blending Part A and Part B services) plus 3.0%. This would be above CPI and above average annual real GDP growth (projected to be about 1.5% per year between 2023 and 2031) but below current spending forecasts.18
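The arithmetic above can be checked with a back-of-the-envelope calculation. All rates are annual, and the inflation figure is an assumed placeholder used only to make the relationships concrete.

```python
# Back-of-the-envelope check of the administrative-trajectory arithmetic.
# The inflation rate is a hypothetical placeholder; the other figures
# follow the rounded forecasts discussed in the text.

inflation = 0.02                     # assumed CPI growth
fee_update = inflation - 0.01        # Medicare fees rise ~1 pt below inflation
utilization_growth = 0.035           # assumed volume/intensity growth

# Current-law spending forecast: fee growth plus utilization growth,
# which works out to roughly inflation + 2.5 points.
spending_forecast = fee_update + utilization_growth

# Administrative benchmark: projected fee increase plus 3.0 points,
# which works out to roughly inflation + 2.0 points.
benchmark_update = fee_update + 0.030

assert benchmark_update > inflation          # above CPI
assert benchmark_update < spending_forecast  # below current spending forecast
```

Under these assumptions the benchmark trajectory sits between inflation and the current-law spending forecast, which is exactly the design goal: savings relative to the forecast, with room for real per-beneficiary growth.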
In an administrative benchmark model, a wedge can develop between benchmarks and observed FFS spending. If benchmarks rise at a predetermined, administratively set rate above Medicare fee increases and if ACOs are successful, actual spending will rise at a rate below what was originally forecast. This allows all ACOs that constrain spending growth below the originally forecasted volume increases to receive bonuses. Because the benchmark trajectory (relative to Medicare fees) is determined at launch, the benchmarks will not fall if ACOs can constrain utilization below what was forecast. This is in contrast to empirical benchmark setting approaches in which success begets failure. It is also in contrast to bidding approaches in which benchmarks are based on the average or second-lowest bid. In those systems, by construction, many ACOs will pay penalties. For this reason, administrative benchmarks can keep spending below the forecast at launch but encourage participation because everyone can earn a bonus if they hold utilization growth below what was forecast. This produces less savings for the program in the short run but may prove more stable and more successful at producing savings in the long run.
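A toy multiyear simulation illustrates how this wedge compounds when the benchmark trajectory is fixed at launch. The growth rates are hypothetical; the mechanism shown is that realized savings do not feed back into future benchmarks.

```python
# Toy illustration of the "wedge" between an administratively set benchmark
# and realized spending (hypothetical growth rates). Because the benchmark
# trajectory is fixed at launch, constrained utilization growth is not
# ratcheted away in later years.

benchmark = spending = 10_000.0   # equal per-capita amounts at launch
benchmark_growth = 0.040          # administratively set trajectory
actual_growth = 0.030             # ACOs hold utilization below forecast

for year in range(1, 6):
    benchmark *= 1 + benchmark_growth
    spending *= 1 + actual_growth
    # The shared-savings pool (spending below benchmark) widens each year
    # because past success does not lower future benchmarks.
    print(year, round(benchmark - spending, 2))
```

In this sketch every ACO that holds growth below the launch forecast accrues a growing surplus, in contrast to empirical or bidding-based systems in which, by construction, some participants must fall on the losing side of the benchmark.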
If participation were universal and risk symmetric, such a model, with benchmark trajectories set below current law forecasts, would be scored at launch as saving money when compared with current law. Provider revenues would rise less than forecasted, but providers would be able to capture any savings from more efficient care delivery. Assuming that there are meaningful efficiencies to be gained or that volume and intensity growth can be controlled (which they must be if the system is to be fiscally sustainable), such a system would be better for providers than the status quo in which fees are scheduled to fall relative to inflation.
An administratively set benchmark system has other advantages. Specifically, it avoids the ratchet effect that arises from empirical benchmarking and bidding-based benchmarks. This should increase program participation. Moreover, it is operationally stable as new payment models diffuse and would allow harmonization of benchmarks across MA and TM. If the benchmark formulas allowed some (slow) convergence across areas, it could help diminish unwarranted geographic variation. Benchmarks could also be adjusted to support the safety net or promote health equity.
Finally, an administrative benchmark system provides more predictability in program spending without sacrificing the entitlement nature of Medicare. In particular, there is no change in the coverage to which beneficiaries are entitled, and in fact, the rising benchmark trajectory allows inflation-adjusted spending (and thus quality) to rise over time.
Many of the disadvantages of administrative prices are common to all PBP systems, including issues such as risk adjustment and patient attribution. However, the core concern is that the benchmark trajectory will be too high (leading to provider windfalls despite spending below current forecasts) or too low (leading to insufficient access or quality). Administrative benchmarks could be designed to mitigate some of this through policies such as setting slower growth in higher-spending areas or for higher-spending ACOs. Moreover, just like existing systems of administrative fees, the administrative benchmarks could be adjusted by policy makers over time. The benchmark rules just set a default trajectory.
A related concern is that spending growth may vary by region, creating region-specific windfalls or losses. To some extent, this should even out over time and ACOs are only partially affected by regional trends. Nevertheless, some strategies to mitigate this concern, such as blending administrative benchmarks with regional spending, may be needed until a wedge develops.
Selection effects are also a significant concern if participation is voluntary (which will likely be the case in many models or model tracks). Specifically, if spending is persistent and predictable, organizations anticipating spending below the benchmark may participate and those anticipating higher spending may opt not to do so. Even if ACOs cannot predict spending growth and thereby benefit from selective participation, the variation in spending growth across regions may lead ACOs in some areas to face penalties. The risk of this may deter participation. However, over time the wedge between benchmarks and actual FFS spending will encourage participation and shield ACOs against losses.
It is also true that administrative benchmarks may slow spending on new technologies. This may put downward pressure on the prices for new services and technologies. Access to new technologies (beyond what is affordable by default updates) will need to be financed by waste that is eliminated, other care efficiencies, or explicit benchmark adjustments by policy makers.
The more explicit budget limits imposed by administrative benchmarks will likely be resisted or weakened by the political process. At some point, however, spending growth must be reduced. At the minimum, administrative benchmarks do not force providers or plans to chase their own success.
Some may argue that the failed experience with the sustainable growth rate system, which also set spending targets administratively but was not sustainable, suggests that such a model cannot work. Yet in the sustainable growth rate system, which imposed a national budget target, providers that used more services got a larger share of that budget and those that used less did not meaningfully share in the savings. In contrast, the administrative benchmark system discussed here imposes accountability at the plan, provider system, or practice levels, which dramatically alters the incentives because organizations keep a substantial share of the savings that they generate.
When money gets tight, and it will get tight, PBP models can support efficient delivery of care and allow providers greater control over the money. However, the success of PBP models will hinge on the details of the model. The benchmark is among the most important model parameters.
Currently, benchmarks are typically set using empirical benchmarking, in which the benchmark, or benchmark update, is based on spending. This system may work well when the participation in population payment is a small share of the total. But as the program increases in market share, the comparison group becomes less representative or conflated with program participants, making operation difficult. For example, as MA enrollment grows, setting MA benchmarks as a function of ever-decreasing FFS populations becomes problematic. Similarly, as ACOs grow, savings will be harder and harder to achieve and imperfections in risk adjustment will be more consequential. An alternative is to use bids to set benchmarks, but this system can only succeed if competition works well, and that is problematic in many health care markets.
Administrative benchmarks can also address many concerns about benchmark setting. Specifically, they can avoid the ratchet effect common in current benchmarking systems and avoid having provider success in the past dampen the prospects for success in the future.
A foundational question is how, and how much, pressure should be placed on providers. Too little pressure diminishes program savings and too much pressure discourages participation and may be disruptive to many providers. Empirical and bidding-based benchmarks rely on decentralized or market forces to determine the pressure on providers to save. In contrast, administrative benchmarks give policy makers more control over spending growth and can be set to save money relative to current forecasts while also allowing providers to share in the efficiencies that they generate.
Administrative prices are common in Medicare and used widely abroad. However, setting the specific parameters of administrative benchmarks appropriately will take more thought. Moreover, details of a transition to administrative benchmarks are important and may entail a period of time in which benchmarks are a blend of administrative and empirical benchmarks. In any case, a well-designed benchmark system is a crucial component for any PBP model, so deeper discussions of the alternatives are essential, and incorporation of an administrative component may be valuable.
Author Affiliations: Department of Health Care Policy, Harvard Medical School (MEC, JMM), Boston, MA; Dartmouth Institute for Health Policy and Clinical Practice (JH), Lebanon, NH.
Source of Funding: This manuscript was supported by Arnold Ventures.
Author Disclosures: Dr Chernew is the board chair of the Medicare Payment Advisory Commission (MedPAC); has received funding from Signify Health; has a small equity interest in Archway Health and Station Health; has received personal fees from the National Institute for Health Care Management; has done unpaid speaking for America’s Health Insurance Plans; and is co-editor-in-chief of The American Journal of Managed Care®. Dr McWilliams is an unpaid member of the board of directors of the Institute for Accountable Care (I4AC) and serves as a senior advisor to the Center for Medicare and Medicaid Innovation (CMMI). Mr Heath reports no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article. The content of this manuscript is based solely on the authors’ analysis and conclusions and does not necessarily reflect the official views of Arnold Ventures, MedPAC, CMMI, or I4AC or their directors, officers, or staff.
Authorship Information: Concept and design (MEC, JH, JMM); drafting of the manuscript (MEC, JH, JMM); critical revision of the manuscript for important intellectual content (MEC, JH, JMM); administrative, technical, or logistic support (JH); and supervision (MEC).
Address Correspondence to: Michael E. Chernew, MD, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Ste 207, Boston, MA 02115. Email: firstname.lastname@example.org.
1. Douven R, McGuire TG, McWilliams JM. Avoiding unintended incentives in ACO payment models. Health Aff (Millwood). 2015;34(1):143-149. doi:10.1377/hlthaff.2014.0444
2. McWilliams JM, Landon BE, Chernew ME. Changes in health care spending and quality for Medicare beneficiaries associated with a commercial ACO contract. JAMA. 2013;310(8):829-836. doi:10.1001/jama.2013.276302
3. Baicker K, Chernew ME, Robbins JA. The spillover effects of Medicare managed care: Medicare Advantage and hospital utilization. J Health Econ. 2013;32(6):1289-1300. doi:10.1016/j.jhealeco.2013.09.005
4. Einav L, Finkelstein A, Ji Y, Mahoney N. Randomized trial shows healthcare payment reform has equal-sized spillover effects on patients not targeted by reform. Proc Natl Acad Sci U S A. 2020;117(32):18939-18947. doi:10.1073/pnas.2004759117
5. Fried JE, Liebers DT, Roberts ET. Sustaining rural hospitals after COVID-19: the case for global budgets. JAMA. 2020;324(2):137-138. doi:10.1001/jama.2020.9744
6. Freed M, Biniek JF, Damico A, Neuman T. Medicare Advantage in 2021: enrollment update and key trends. Kaiser Family Foundation. June 21, 2021. Accessed July 19, 2021. https://www.kff.org/medicare/issue-brief/medicare-advantage-in-2021-enrollment-update-and-key-trends/
7. Ding H, Duggan M, Starc A. Getting the price right? the impact of competitive bidding in the Medicare program. National Bureau of Economic Research working paper No. 28457. February 2021. Accessed July 19, 2021. https://www.nber.org/papers/w28457
8. Stocking A, Buntin M, Baumgardner J, Cook A. Examining the number of competitors and the cost of Medicare Part D. Congressional Budget Office. July 2014. Accessed July 19, 2021. https://cbo.gov/sites/default/files/cbofiles/attachments/45553-PartD.pdf
9. Schwartz K, Lopez E, Rae M, Neuman T. What we know about provider consolidation. Kaiser Family Foundation. September 2, 2020. Accessed July 19, 2021. https://www.kff.org/health-costs/issue-brief/what-we-know-about-provider-consolidation/
10. Song Z, Landrum MB, Chernew ME. Competitive bidding in Medicare Advantage: effect of benchmark changes on plan bids. J Health Econ. 2013;32(6):1301-1312. doi:10.1016/j.jhealeco.2013.09.004
11. Sinaiko AD, Hirth RA. Consumers, health insurance and dominated choices. J Health Econ. 2011;30(2):450-457. doi:10.1016/j.jhealeco.2010.12.008
12. Abaluck J, Gruber J. Choice inconsistencies among the elderly: evidence from plan choice in the Medicare Part D program: reply. Am Econ Rev. 2016;106(12):3962-3987. doi:10.1257/aer.20151318
13. McWilliams JM, Afendulis CC, McGuire TG, Landon BE. Complex Medicare Advantage choices may overwhelm seniors—especially those with impaired decision making. Health Aff (Millwood). 2011;30(9):1786-1794. doi:10.1377/hlthaff.2011.0132
14. Kuye IO, Frank RG, McWilliams JM. Cognition and take-up of subsidized drug benefits by Medicare beneficiaries. JAMA Intern Med. 2013;173(12):1100-1107. doi:10.1001/jamainternmed.2013.845
15. Chernew ME, Heath J. How different payment models support (or undermine) a sustainable health care system: rating the underlying incentives and building a better model. NEJM Catalyst. 2020;1(1). doi:10.1056/cat.19.1084
16. KHN Staff. Understanding Rep. Ryan’s plan for Medicare. Kaiser Health News. April 4, 2011. Accessed July 19, 2021. https://khn.org/news/ryan-plan-for-medicare-vouchers-vs-premium-support/
17. Chernew ME, Frank RG, Parente ST. Slowing Medicare spending growth: reaching for common ground. Am J Manag Care. 2012;18(8):465-468.
18. An update to the budget and economic outlook: 2021 to 2031. Congressional Budget Office. July 1, 2021. Accessed July 19, 2021. https://www.cbo.gov/publication/57218