Real-World Impact of Comparative Effectiveness Research Findings on Clinical Practice

Teresa B. Gibson, PhD; Emily D. Ehrlich, MPH; Jennifer Graff, PharmD; Robert Dubois, MD; Amanda M. Farr, MPH; Michael Chernew, PhD; and A. Mark Fendrick, MD
The authors found no consistent pattern in the concordance between CER evidence and subsequent utilization patterns.

Objectives

Unprecedented funding is under way for comparative effectiveness research (CER), which aims to provide better evidence for decision making as a way to lower costs and improve quality. Yet how research findings are adopted and applied will determine the nation’s return on this investment. We examined the relationship between the publication of findings from 4 seminal CER trials, the release of subsequent clinical practice guidelines (CPGs), and utilization trends for the associated surgical interventions, diagnostic interventions, and medications.

Study Design

Retrospective, observational study.


Methods

Using a large national administrative claims database, we examined time series utilization trends before and after publication of findings from 4 CER trials published within the last decade.


Results

We found no clear pattern of utilization in the first 4 quarters after publication. However, for 2 of the studies, utilization trends moved in concert with the release of CPGs and the publication of study results. The trend in intensive statin therapy rose rapidly starting at the end of 2007, while the trend in standard therapy remained relatively constant (PROVE-IT). Nine months after trial publication, breast magnetic resonance imaging (MRI) utilization rates rose 43.2%, from 0.033 to 0.048 per 100 enrollees (Mammography With MRI).


Conclusions

Our analysis of 4 case studies supports the call others have made to translate and disseminate CER findings so that research is better applied in clinical practice, and it underscores the need for continued development and dissemination of CPGs that synthesize research findings and guide practitioners in clinical decision making. Further research is needed to determine whether these findings apply to other medical topics.

Am J Manag Care. 2014;20(6):e208-e220
Take-Away Points

Time series utilization trends before and after publication of findings from 4 CER trials published within the last decade revealed no clear pattern of utilization in the first 4 quarters after publication. Results for 2 of the studies (PROVE-IT, Mammography With MRI) were in concert with the release of clinical practice guidelines (CPGs) and publication of study results.
  • Our findings support a continued effort to translate and disseminate CER results to improve application of research findings to clinical practice.

  • Continued development and dissemination of CPGs to synthesize research findings and guide practitioners in clinical decision making is necessary.
With the Affordable Care Act, public funding for comparative effectiveness research (CER) expands beyond the Agency for Healthcare Research and Quality (AHRQ) and the National Institutes of Health to include the Patient-Centered Outcomes Research Institute (PCORI).1 According to the Institute of Medicine of the National Academies, “The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve healthcare at both the individual and population levels.”2 The return on this investment in CER will be measured by whether, and how quickly, results from the research are used in decision making and translated into clinical practice.

One of the most notable examples of the relationship between findings and practice is the Estrogen Plus Progestin trial, a component of the Women’s Health Initiative.3 The trial was stopped in May 2002 after investigators found that the associated health risks of combination hormone therapy among postmenopausal women outweighed the benefits. Almost immediately following publication, hormone therapy use among postmenopausal women was shown to drop precipitously, from 90 million prescriptions to 57 million, a 36.7% decrease.4
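The reported decline checks out with simple arithmetic; a minimal sketch (the function name is ours, not from the cited study):

```python
def percent_decrease(before, after):
    """Percent decrease from `before` to `after`."""
    return (before - after) / before * 100

# Prescription counts (millions) before and after the May 2002
# trial stoppage, as reported in the cited study.
print(f"{percent_decrease(90, 57):.1f}%")  # → 36.7%
```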

The Estrogen Plus Progestin trial is an example of a study where the evidence resulted in near-term changes in clinical practice. However, beyond a few notable studies,4-6 researchers have found that it can take several years for study findings to translate to practice.7,8 Additionally, fewer than 1 study in 1000 is reported by the mainstream media.9 Historically, one way that research findings have impacted medical practice has been through the development, dissemination, and use of clinical practice guidelines (CPGs).10

In this study, we used a large national administrative database to examine real-world utilization trends before and after publication of CER findings (and release of relevant CPGs) from 4 high-profile CER studies published within the last decade. We analyzed changes in utilization rates of procedures and treatments associated with widely communicated CER evidence and CPGs for up to 6 years following publication of the evidence. Our approach was not normative or prescriptive; rather, it was intended to examine historical trends and resulting changes in practice patterns within a large sample of enrollees with employer-sponsored health insurance and to discuss the implications of our findings for those seeking to increase the translation of research findings into practice.


Case Study Selection

To identify case studies and conditions for our areas of focus, we performed an environmental scan via a literature review. We queried MEDLINE/PubMed,11 Congressional Budget Office publications,12 and AHRQ CER13 studies to find CER evidence reported between January 1, 2000, and December 31, 2009.

Studies (including randomized controlled trials, meta-analyses, and observational studies) that were highly cited in high-impact medical journals (Table) and that compared 2 or more treatment options or reviewed the safety of a drug were selected. Based on these criteria, 23 studies were selected; characteristics of each study were abstracted (eg, type of comparators, primary aim of safety vs efficacy, study design), and each was then reviewed by the study team to determine which should be included in the quantitative analysis (eAppendix A). The main criteria for the selected case studies were (1) a high-profile study of a medication, surgical, or diagnostic intervention (published in a high-impact journal as ranked by ISI Web of Knowledge)14; and (2) widely cited results (ISI Web of Knowledge14 and Google Scholar citations). We also applied quality criteria, as reviewed by the study team: (1) study design (adequate sample size with an experimental design); (2) clear findings (lack of ambiguity in the implications); and (3) findings that were not reversed or contradicted by subsequent evidence. Cost-effectiveness studies were excluded.

Of these, the study team selected 4 case studies that represented varying types of trials: 1 study compared medical therapies (PROVE-IT TIMI 22), 2 compared surgical versus nonsurgical treatments (COURAGE and SPORT), and 1 compared diagnostic screening procedures (called Mammography With MRI here for simplicity; see reference 20 for the actual title).

We also examined the first release of related CPGs following the publication of trial results. To determine when guidelines might have started to influence uptake or discontinuation of a specific practice related to a clinical trial, we conducted an environmental scan via a literature review. The criteria for the scan limited results to articles that focused on clinical guidelines and that specifically cited 1 of the 4 seminal studies in relation to guideline development. We queried Google Scholar for publications from the release of each trial through the end of our study period in 2010 and then eliminated results that were not guidelines. We also conducted a keyword search of guidelines published by the AHRQ National Guidelines Clearinghouse for the same time period. From the combined list, we retrieved the first clinical guideline corresponding to each trial (eAppendix B).

Data Source and Analysis

This analysis was based on data contained in the Truven Health MarketScan Commercial Claims and Encounters Database and the MarketScan Medicare Supplemental and Coordination of Benefits Database for the period of January 1, 2003, to June 30, 2010. The MarketScan Database includes the enrollment, inpatient, outpatient, and outpatient pharmacy claims experience of tens of millions of individuals across the nation with employer-sponsored insurance or employer-sponsored Medicare Supplemental insurance. Health insurance was offered through a variety of capitated and fee-for-service health plan types, and prescription drug coverage was offered in conjunction with the medical benefit. Its sample size is large enough to allow creation of a nationally representative data sample of Americans with employer-provided health insurance. The enrollment characteristics in MarketScan are largely similar to nationally representative data for employer-sponsored insurance in the Medical Expenditure Panel Survey (MEPS), although a higher percentage of MarketScan enrollees reside in the South census region. The data cover all 50 states and the District of Columbia. Used primarily for research, these databases are fully Health Insurance Portability and Accountability Act compliant.15

To study trends in the utilization rates for each case study, we calculated the quarterly utilization rates of each procedure (by calendar quarter). First, we selected the continuous cohort of firms contributing data to MarketScan each year from 2003 through 2010 Q1, representing approximately 4 million enrollees annually (eAppendix C), so trends would not be impacted by firms entering and leaving the sample. Within these firms, each enrollee selected for inclusion in the study had medical and pharmacy claims data available, was 18 years or older, was continuously enrolled in the 4 calendar quarters prior to the quarter of interest, and was not pregnant (no diagnosis of pregnancy).

Second, within this cohort of enrollees, we replicated (as far as possible using claims data) the inclusion and exclusion criteria used in each clinical trial (based upon the publication or online trial-specific materials) for patient selection (see the Table for summarized criteria and eAppendix D for detailed criteria and codes). To determine these sets of criteria, a nosologist (certified clinical coding expert) provided clinical codes (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] diagnosis and procedure codes, and Current Procedural Terminology, 4th Edition [CPT-4] procedure codes) for each of the inclusion and exclusion criteria and the procedures listed in the clinical trial. As part of an iterative process, these codes were reviewed by a clinician expert in clinical coding, and the resulting list was reviewed a second time by the nosologist. Through a consensus-driven process, any differences between the clinician and the nosologist were resolved in consultation with the project manager and 2 other clinicians. A similar process was employed for creating a list of pharmaceutical codes, incorporating a review of National Drug Codes by a licensed pharmacist.
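Once the code lists are finalized, applying them to claims reduces to set membership tests. A minimal sketch, in which the code values and field names are placeholders, not the study's actual lists from eAppendix D:

```python
# Placeholder code lists standing in for the nosologist-reviewed lists
# in eAppendix D (these values are illustrative, not the study's codes).
ICD9_DX_CODES = {"410.00", "410.01"}
CPT4_CODES = {"77058", "77059"}

def claim_flagged(claim):
    """Flag a claim whose diagnosis or procedure code is on a list."""
    return (claim.get("dx") in ICD9_DX_CODES
            or claim.get("proc") in CPT4_CODES)

print(claim_flagged({"dx": "410.00"}))   # True
print(claim_flagged({"proc": "99213"}))  # False
```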

The study clinicians relaxed some of the criteria designated in the trials (see eAppendix D for a description of how these criteria were relaxed) for 2 reasons. First, clinical coding in administrative data sets can lack precision.16 Second, we hypothesized that trial results would have spillover effects on clinical practice beyond patients meeting the restrictive inclusion criteria used in the respective clinical trial. Therefore, we focused on patients highly similar to those included in each trial. For example, for the Mammography With MRI study, we did not require women to have a diagnosis on a claim indicating that they carried the BRCA1 or BRCA2 mutation, because test results are infrequently available in claims analyses.17 For the SPORT case study, we did not exclude enrollees who had a previous lumbar surgery. The relaxed criteria were then applied to the data set to identify the enrollees in each calendar quarter meeting the patient selection criteria (the denominator). For the pre-period of each study, we required patients to be continuously enrolled for at least 1 year prior to the quarter so that a number of inclusion and exclusion criteria could be checked; claims incurred during this year were used to evaluate those criteria (eAppendix D). Once meeting study requirements, patients were followed until the end of their continuous enrollment, until evidence appeared that they met the exclusion criteria, or until completion of 2 years of follow-up (except in the Mammography With MRI study, where patients were followed through the end of their continuous enrollment).
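The follow-up window just described ends at the earliest of three events; a minimal sketch (the function and argument names are ours, not the study's):

```python
from datetime import date, timedelta

def follow_up_end(index_date, disenrollment, exclusion_event=None,
                  max_years=2):
    """End of follow-up: end of continuous enrollment, first evidence of
    an exclusion criterion, or `max_years` after the index date,
    whichever comes first. Pass max_years=None (as for the Mammography
    With MRI study) to follow through the end of enrollment."""
    candidates = [disenrollment]
    if exclusion_event is not None:
        candidates.append(exclusion_event)
    if max_years is not None:
        candidates.append(index_date + timedelta(days=365 * max_years))
    return min(candidates)

print(follow_up_end(date(2006, 1, 1), date(2009, 6, 30)))  # 2008-01-01
```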

Finally, a list of the treatments or procedures of interest (eg, breast magnetic resonance imaging [MRI]) was created for the numerator of each measure. Among the enrollees meeting the selection criteria, those receiving each treatment or procedure were flagged in the database, and the number of occurrences of the treatment or procedure of interest was recorded (eAppendix D contains the clinical criteria used in each case study). Using this information, we calculated aggregate quarterly trends in utilization rates for each treatment or procedure.
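Combining the pieces, the quarterly utilization rate is the number of procedure occurrences divided by the number of eligible enrollees, reported per 100. A minimal sketch (the data shapes are hypothetical, not the MarketScan layout):

```python
from collections import defaultdict

def quarterly_rates(eligible_counts, procedure_events):
    """Utilization per 100 eligible enrollees, by calendar quarter.

    eligible_counts:  {quarter: enrollees meeting selection criteria}
    procedure_events: iterable of (quarter, enrollee_id) occurrences
    """
    numerator = defaultdict(int)
    for quarter, _enrollee_id in procedure_events:
        numerator[quarter] += 1
    return {q: 100 * numerator[q] / n for q, n in eligible_counts.items()}

# Toy example: ~4 million eligible enrollees per quarter, 3 procedures.
denom = {1: 4_000_000, 2: 4_100_000}
events = [(1, "a"), (1, "b"), (2, "c")]
print(quarterly_rates(denom, events))
```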
