The American Journal of Managed Care
May 2023
Volume 29
Issue 5

Time to Publication of Cost-effectiveness Analyses for Medical Devices

This study examines the availability of cost-effectiveness analyses for medical devices, both in terms of the number of studies and when studies are published.


Objectives: Academic researchers and physicians have called for greater use of cost-effectiveness analyses in informing treatment and reimbursement decisions. This study examines the availability of cost-effectiveness analyses for medical devices, in terms of both the number of studies and when studies are published.

Study Design: Analysis of the number of years between FDA approval/clearance and publication for cost-effectiveness analyses of medical devices in the United States published between 2002 and 2020 (n = 86).

Methods: Cost-effectiveness analyses of medical devices were identified using the Tufts University Cost-Effectiveness Analysis Registry. Studies in which the model and manufacturer of the medical device used in the intervention were identifiable were linked to FDA databases. Years between FDA approval/clearance and publication of cost-effectiveness analyses were calculated.

Results: A total of 218 cost-effectiveness analyses of medical devices in the United States published between 2002 and 2020 were identified. Of these studies, 86 (39.4%) were linked to FDA databases. Studies examining devices approved via premarket approval were published a mean of 6.0 years after the device received FDA approval (median, 4 years), whereas studies examining devices that were cleared via the 510(k) process were published a mean of 6.5 years after the device received FDA clearance (median, 5 years).

Conclusions: There are few studies describing the cost-effectiveness of medical devices. Most of these studies’ findings are not published until several years after the studied devices received FDA approval/clearance, meaning that decision makers will likely not have evidence of cost-effectiveness when making initial decisions related to newly available medical devices.

Am J Manag Care. 2023;29(5):265-268.


Takeaway Points

This study describes when cost-effectiveness analyses of medical devices are typically available to decision makers.

  • A total of 218 cost-effectiveness analyses of medical devices in the United States were published between 2002 and 2020 and available in a centralized database of cost-effectiveness analyses.
  • Among the 86 cost-effectiveness analyses in which the make and model of the studied medical device were identifiable, analyses were on average published 4 to 6 years after FDA approval or clearance of the studied device.
  • Cost-effectiveness analyses are usually unavailable at the time when decision makers will need to make initial decisions related to newly available medical devices.


Academic researchers and physicians have called for greater use of cost-effectiveness analysis (CEA) to inform treatment and reimbursement decisions in the United States, especially in the context of informing the development of alternative payment models and clinical practice guidelines issued by specialty societies.1-5 However, for physicians or payers to consider CEAs when making decisions, rigorous CEAs must be available. Prior work has documented a dearth of CEAs examining medical devices specifically,6 despite the fact that medical devices comprise approximately 5% of total health care spending in the United States.7 In this descriptive cross-sectional analysis, I expand on prior work by documenting the number of published medical device CEAs in the United States by regulatory review pathway and the average time between FDA approval of medical devices and publication of CEAs.


For US payers, CEAs play a limited role in coverage and payment decisions for medical devices. Most private insurers consider cost when making reimbursement decisions,8,9 but few rely on formal CEAs. Findings from a recent study show that considering cost can push private insurers to make coverage decisions at least roughly consistent with formal cost-effectiveness evidence. In a sample of employer-based health plans, drugs with small (ie, favorable) incremental cost-effectiveness ratios (ICERs) exhibited larger declines in cost sharing between 2010 and 2013 compared with drugs with larger ICERs. However, there was no relationship between cost sharing and cost-effectiveness for drugs with an ICER greater than $10,000 per quality-adjusted life-year (QALY). This suggests that when formularies do not explicitly consider CEAs, cost-sharing decisions made by payers are only weakly related to the incremental value of covered therapies.10

In contrast to private payers, CMS is explicitly prohibited from considering CEAs when making coverage decisions (although Medicare does sometimes cite CEAs in national coverage determinations).11 As such, it is not surprising that prior work finds no obvious relationship between the cost-effectiveness of medical devices and outcomes of national coverage determinations by CMS.11 Given the disparate considerations of cost between CMS and private payers, CMS national coverage determinations match with private payer decisions for medical device coverage only approximately half the time.12

For medical specialty societies, CEAs play a small but growing role in informing clinical practice guidelines. A 2015 analysis found that in a sample of 100 highly cited guidelines, 43% of guidelines incorporated at least 1 cost analysis,13 compared with 26% of guidelines in a similar analysis performed in 2002.14 Cost analyses that are published earlier are more likely to be included in guidelines, suggesting that CEA availability may play an important role in determining whether medical specialty societies consider such analyses.13

The lack of use of CEAs to inform clinical practice guidelines and payer decisions for medical devices may be wasteful from both payer and societal perspectives. Patients may receive medical devices that are not cost-effective relative to comparator devices or receive medical device–based interventions when surgical or pharmaceutical interventions may be more cost-effective.6 Furthermore, the potential for waste caused by not considering CEAs may be greater for medical devices compared with other interventions. Unlike pharmaceutical products, for example, most medical devices receive regulatory approval or clearance without demonstrating efficacy using human clinical data. Instead, the FDA grants market access to most devices through the 510(k) notification process based on their “substantial equivalence” to comparator “predicate” devices. Substantial equivalence is often demonstrated through bioequivalence studies or bench testing rather than clinical data. Manufacturers may add new features (that may or may not improve clinical outcomes) to an applicant device as long as the new features do not “raise different questions of safety and effectiveness” and the applicant device is “as safe and effective” as the predicate.15,16

Even among devices evaluated through the premarket approval (PMA) process, which requires human clinical data prior to regulatory approval, the quality of clinical evidence supporting device approvals tends to be lower than that for pharmaceutical products (eg, devices are more likely to be approved based on a single nonrandomized trial).17,18 The opacity of incremental benefits for certain devices, combined with payers’ inconsistent consideration of costs, means that medical device–based interventions that would not meet conventional thresholds of cost-effectiveness may be more likely to receive coverage compared with other interventions. As such, both payers and society may benefit from more use of CEAs in clinical practice guidelines and reimbursement decisions, particularly for decisions related to medical devices.

One barrier to greater adoption of CEA in clinical practice guidelines and payer decisions for medical devices is the lack of CEAs that study devices. The CEA literature disproportionately focuses on pharmaceutical interventions, whereas surgical and medical device–based interventions are underrepresented.6 Another potential barrier may be the timing of publication for medical device CEAs. Unlike pharmaceutical products, medical devices often come to market without any direct clinical evidence,17,18 meaning that conducting CEAs for devices may take longer because researchers cannot consistently rely on already collected data when assessing incremental benefit. Furthermore, payers and physicians may be less likely to incorporate delayed medical device CEAs into their decision-making, given the rapid and highly iterative product development lifecycle and subsequent physician “learning curve” in the device industry.19

This study aims to describe these barriers and develop a better understanding of the availability of medical device CEAs by documenting the number of CEAs published and the timing of publications relative to FDA decisions. Unlike prior studies, this study stratifies analyses of CEA availability by the regulatory pathway through which the device was approved or cleared for market access by the FDA.


Data Sources

This study relies on 2 main data sources: the Tufts University Center for the Evaluation of Value and Risk in Health Cost-Effectiveness Analysis Registry (Tufts CEAR) and the FDA medical device approval and clearance databases. The Tufts CEAR is a repository of all peer-reviewed English-language CEAs identifiable in MEDLINE with incremental health benefits measured in QALYs. For each CEA, the Tufts CEAR catalogs bibliographic information and categorizes the studied intervention (eg, pharmaceutical, medical device, surgical). The CEAR catalogs all CEAs published after 1976, but this study focuses on CEAs published between 2002 and 2020.20

The FDA medical device approval and clearance databases maintain records for all devices approved by the FDA through the PMA process for high-risk devices or cleared through the 510(k) notification process for moderate-risk devices.21,22 These records include unique identification numbers for individual medical devices and dates when medical devices were granted market access by the FDA.

To construct my analytic sample, I identified all fully reviewed CEAs in the Tufts CEAR between 2002 and 2020 in which the studied intervention was a medical device used in a US-based patient population. Within this sample, I read through each CEA to identify the specific make and model of the intervention device and manually matched the CEA to a corresponding medical device in the FDA databases. Specific medical devices were not always identifiable. For example, a CEA might assess the average cost-effectiveness of all implantable cardiac defibrillators on the market, rather than specifically identifying the cost-effectiveness of a defibrillator from company A vs a defibrillator from company B; this hypothetical CEA could not be linked to a single medical device record or assigned an approval/clearance date.

Data Analysis

For instances in which the CEA could be linked to FDA databases, I calculated the time to CEA publication as the difference between the year of CEA publication and the year of FDA approval/clearance for the studied medical device. Time to CEA publication could not be calculated for CEAs that could not be linked to the FDA databases. I calculated mean and median time to CEA publication, stratifying by regulatory pathway (ie, PMA devices vs 510[k] devices), and constructed corresponding histograms. I assessed differences in mean time to CEA publication by regulatory pathway using t tests under the assumption of unequal variances between groups.
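The lag calculation and unequal-variances comparison described above can be sketched in a few lines. The device records below are hypothetical, and Welch's t statistic (the unequal-variances t test named in the text) is implemented directly rather than via a statistics library:

```python
from statistics import mean, median, variance

def time_to_publication(pub_year, fda_year):
    """Years between FDA approval/clearance and CEA publication."""
    return pub_year - fda_year

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances,
    as used to compare mean time to publication across pathways."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical records: (CEA publication year, FDA decision year)
pma = [time_to_publication(p, f) for p, f in [(2010, 2004), (2008, 2007), (2015, 2003)]]
k510 = [time_to_publication(p, f) for p, f in [(2012, 2005), (2011, 2006), (2014, 2009)]]

print(mean(pma), median(pma))    # mean and median lag, PMA devices
print(mean(k510), median(k510))  # mean and median lag, 510(k) devices
print(welch_t(pma, k510))        # unequal-variances t statistic
```

In practice the P value would come from the t distribution with Welch-Satterthwaite degrees of freedom (or a library routine such as a two-sample t test with unequal variances); the sketch above only shows how the publication-lag statistics themselves are derived.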

Results

The FDA approved 582 devices through the PMA pathway and cleared 59,142 devices through the 510(k) pathway between 2002 and 2020. A total of 218 CEAs of US medical devices published during the same time period were identifiable in the Tufts CEAR. Among these CEAs, 86 (39.4%) identified the specific make and model of the studied intervention and could be linked to the FDA database.

CEAs of PMA devices were on average published more quickly following FDA approval compared with 510(k) devices, although differences in mean publication times were not statistically significant (P = .703). CEAs of PMA devices were published a mean (SD) of 6.0 (6.5) years and a median (IQR) of 4 (9) years after receiving FDA approval (Figure 1), whereas CEAs of 510(k) devices were published a mean (SD) of 6.5 (5.2) years, with a median (IQR) of 5 (5) years, after receiving FDA clearance (Figure 2). There were some instances in which CEAs of PMA devices were published prior to their FDA approval (Figure 1). These were frequently cases in which CEAs were conducted using inputs from pilot trials, rather than the pivotal trials that led to FDA approval.

Discussion

There were fewer than 250 CEAs of medical devices in the United States published between 2002 and 2020 and available in a centralized cost-effectiveness database, despite more than 500 PMA devices and 59,000 510(k) devices coming to market during the same time period. Of the limited studies available, most CEAs were not published until 4 to 5 years after FDA approval or clearance. CEAs of PMA devices exhibited more variation in time to publication compared with CEAs of 510(k) devices.

The small number of US-based medical device CEAs has been previously documented6 and is not particularly surprising, given that Medicare is prohibited from considering cost when making coverage decisions11 and that many private payers do not consistently factor formal CEAs into coverage decisions.8,9 However, the finding that most medical device CEAs are not available until several years after FDA approval (a pattern also observed in CEAs of pharmaceutical products)23 should inform how payers and medical specialty societies might reasonably expect to increase their use of CEAs when making decisions.

For medical specialty societies, CEA publication delays suggest that updates to existing guidelines will need to consider the ongoing evolution of both clinical evidence and cost evidence. Payers aspiring to incorporate CEAs into coverage and payment decisions should consider conditional coverage strategies, in which they initially cover new technologies but require manufacturers to develop evidence of cost-effectiveness to continue coverage or receive preferential cost sharing beyond a set number of years. Medicare has employed this strategy (referred to as “coverage with evidence development”) in recent years by conditioning continued coverage of new products on further development of evidence demonstrating product effectiveness (without respect to cost).24,25 Other payers could apply this approach to requiring evidence of cost-effectiveness specifically.

Limitations

There are limitations to this study. This study examined only medical device CEAs based in the United States. This was done to increase the likelihood that CEAs could be accurately matched to FDA databases. However, this means that the findings of this study may not be generalizable to medical device CEAs for non-US populations. This is nontrivial, as US payers and specialty societies looking for evidence of cost-effectiveness may consider CEAs from other countries when making decisions. Additionally, this study is limited in that the linkage between the Tufts CEAR and the FDA databases was performed manually. Errors in the linkage to the FDA databases could bias both the counts of devices with linked CEAs and average lag times between FDA decisions and CEA publication.

Conclusions

There are few published peer-reviewed studies examining the cost-effectiveness of US medical devices. The limited number of available CEAs are usually published several years after FDA approval, meaning that medical specialty societies and payers may not have cost-effectiveness evidence available when making either clinical guidelines or coverage and payment decisions related to recently approved medical devices. Payers may consider conditional coverage strategies for medical devices if they wish to incorporate CEAs into coverage decisions as CEAs are published.

Author Affiliation: Harvard-MIT Center for Regulatory Science, Harvard Medical School, Harvard University, Boston, MA.

Source of Funding: This work was supported by the Agency for Healthcare Research and Quality through grant R36 HS27522-01A1.

Author Disclosures: Dr Everhart reports being previously employed by Medtronic plc, receiving grants from the National Institute on Aging and the National Bureau of Economic Research unrelated to this study, and receiving honoraria from the Digital Medicine Society and the University of Southern California.

Authorship Information: Concept and design; acquisition of data; analysis and interpretation of data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; statistical analysis; and obtaining funding.

Address Correspondence to: Alexander O. Everhart, PhD, Harvard-MIT Center for Regulatory Science, Harvard Medical School, Harvard University, 200 Longwood Ave, Armenise Bldg, Room 109, Boston, MA 02115. Email:

References

1. Schwartz JAT, Pearson SD. Cost consideration in the clinical guidance documents of physician specialty societies in the United States. JAMA Intern Med. 2013;173(12):1091-1097. doi:10.1001/JAMAINTERNMED.2013.817

2. Garrison LP Jr. Cost-effectiveness and clinical practice guidelines: have we reached a tipping point?—an overview. Value Health. 2016;19(5):512-515. doi:10.1016/J.JVAL.2016.04.018

3. Chen KK, Harty JH, Bosco JA. It is a brave new world: alternative payment models and value creation in total joint arthroplasty: creating value for TJR, quality and cost-effectiveness programs. J Arthroplasty. 2017;32(6):1717-1719. doi:10.1016/J.ARTH.2017.02.013

4. Pandya A, Soeteman DI, Gupta A, Kamel H, Mushlin AI, Rosenthal MB. Can pay-for performance incentive levels be determined using a cost-effectiveness framework? Circ Cardiovasc Qual Outcomes. 2020;13(7):e006492. doi:10.1161/CIRCOUTCOMES.120.006492

5. Kim DD, Basu A. How does cost-effectiveness analysis inform health care decisions? AMA J Ethics. 2021;23(8):E639-E647. doi:10.1001/AMAJETHICS.2021.639

6. Baumgardner J, Brauer M, Skornicki M, Neumann P. Expanding cost-effectiveness analysis to all of health care: comparisons between CEAs on pharmaceuticals and medical/surgical procedures. Innovation and Value Initiative. February 21, 2018. Accessed August 5, 2019.

7. Donahoe GF. Estimates of medical device spending in the United States. June 2021. Accessed April 3, 2023.

8. Garber AM. Cost-effectiveness and evidence evaluation as criteria for coverage policy. Health Aff (Millwood). 2004;23(suppl 1):W4-284-W4-296. doi:10.1377/hlthaff.W4.284

9. Solow B, Pezalla EJ. ISPOR’s initiative on US value assessment frameworks: the use of cost-effectiveness research in decision making among US insurers. Value Health. 2018;21(2):166-168. doi:10.1016/j.jval.2017.12.004

10. Brouwer ED, Basu A, Yeung K. Adoption of cost effectiveness-driven value-based formularies in private health insurance from 2010 to 2013. Pharmacoeconomics. 2019;37(10):1287-1300. doi:10.1007/s40273-019-00821-5

11. Chambers JD, Neumann PJ, Buxton MJ. Does Medicare have an implicit cost-effectiveness threshold? Med Decis Making. 2010;30(4):E14-E27. doi:10.1177/0272989X10371134

12. Chambers JD, Chenoweth M, Thorat T, Neumann PJ. Private payers disagree with Medicare over medical device coverage about half the time. Health Aff (Millwood). 2015;34(8):1376-1382. doi:10.1377/hlthaff.2015.0133

13. Zervou FN, Zacharioudakis IM, Pliakos EE, Grigoras CA, Ziakas PD, Mylonakis E. Adaptation of cost analysis studies in practice guidelines. Medicine (Baltimore). 2015;94(52):e2365. doi:10.1097/MD.0000000000002365

14. Wallace JF, Weingarten SR, Chiou CF, et al. The limited incorporation of economic analyses in clinical practice guidelines. J Gen Intern Med. 2002;17(3):210-220. doi:10.1046/J.1525-1497.2002.10522.X

15. Van Norman GA. Drugs, devices, and the FDA: part 2: an overview of approval processes: FDA approval of medical devices. JACC Basic Transl Sci. 2016;1(4):277-287. doi:10.1016/j.jacbts.2016.03.009

16. Content of a 510(k). FDA. April 26, 2019. Accessed May 15, 2022.

17. Dhruva SS, Bero LA, Redberg RF. Strength of study evidence examined by the FDA in premarket approval of cardiovascular devices. JAMA. 2009;302(24):2679-2685. doi:10.1001/jama.2009.1899

18. Jones LC, Dhruva SS, Redberg RF. Assessment of clinical trial evidence for high-risk cardiovascular devices approved under the Food and Drug Administration Priority Review Program. JAMA Intern Med. 2018;178(10):1418-1420. doi:10.1001/jamainternmed.2018.3649

19. Drummond M, Griffin A, Tarricone R. Economic evaluation for devices and drugs—same or different? Value Health. 2009;12(4):402-404. doi:10.1111/J.1524-4733.2008.00476_1.X

20. Cost-Effectiveness Analysis Registry. Center for the Evaluation of Value and Risk in Health. 2018. Accessed May 19, 2022.

21. PMA approvals. FDA. December 16, 2021. Accessed August 11, 2022.

22. Downloadable 510(k) files. FDA. November 23, 2021. Accessed August 11, 2022.

23. Chambers JD, Thorat T, Pyo J, Neumann PJ. The lag from FDA approval to published cost-utility evidence. Expert Rev Pharmacoecon Outcomes Res. 2015;15(3):399-402. doi:10.1586/14737167.2015.1001371

24. Neumann PJ, Chambers JD. Medicare’s reset on ‘coverage with evidence development.’ Health Affairs. April 1, 2013. Accessed May 31, 2021.

25. Wherry K, Stromberg K, Hinnenthal JA, Wallenfelsz LA, El-Chami MF, Bockstedt L. Using Medicare claims to identify acute clinical events following implantation of leadless pacemakers. Pragmatic Obs Res. 2020;11:19-26. doi:10.2147/por.s240913
