Rigorous scientific standards are needed to address the challenge of providing information on the real-world effects of treatments and procedures.
With healthcare reform legislation now enacted, attention has quickly turned to its implementation and the preparations necessary for the changes to come. Many provisions of the new law will not take effect for several years, but others have more aggressive timelines and no doubt are already receiving considerable attention within the federal agencies that will need to prepare for those changes. One of these changes is the creation of a nonprofit comparative effectiveness research (CER) entity known as the Patient-Centered Outcomes Research Institute (PCORI). Although it will not be an agency within the federal government, PCORI will benefit from the significant CER effort currently under way as a result of the 2009 American Recovery and Reinvestment Act.1 The American Recovery and Reinvestment Act not only expanded the federal commitment to CER, but also required specific initiatives to define CER and to develop recommendations on priority areas for new research. These important activities will further inform the process and functions of the new PCORI.
Need for Credible Real-World CER Evidence
The American Recovery and Reinvestment Act called for a study by the Institute of Medicine to define priorities for CER2 and also established the Federal Coordinating Council for Comparative Effectiveness Research, which was charged with coordination of all federal CER activities. Although this council was terminated under the new healthcare reform law, it did establish a definition for CER that is widely accepted and expected to be embodied in the work of PCORI. That definition includes the concept that CER should provide evidence on real-world effects of treatments, processes, and technologies—that is, the effects of medical care in the usual clinical setting.3
Although prescription drugs are subject to stringent requirements for regulatory approval and can be marketed only after demonstrating safety and efficacy in prospective, well-controlled trials, regulatory approval does not require demonstration of the effects in usual care settings. In addition, for some medical procedures, ethical and practical considerations may limit the opportunity for assessment of comparative effectiveness in formal clinical trials. Therefore, the challenge of providing information on the real-world effects of treatments and procedures is significant and should be addressed comprehensively with attention to rigorous scientific standards.
Potential of Observational Research
Many consider the prospective randomized controlled trial (RCT) to be the gold standard for clinical evidence. However, the RCT has limitations as the principal method for CER, including the cost and time required to conduct such studies and the limited generalizability of their results beyond the trial setting. These limitations make RCTs a problematic standard for the range and types of real-world evidence that are needed in a new era of CER. Increasingly, attention is turning to nonexperimental study designs using observational data sets that in some cases are derived from the records compiled from the daily interactions of patients and providers in the typical clinical office setting. In addition to data generated from electronic health records, which will increase in importance as a result of new government incentives for adoption of such systems, these data sets more commonly include data from claims processing or disease registries. How best to use these data to inform the CER enterprise is a question of critical importance that is, as yet, unanswered.
Research using observational data is already common and has a long history. Using such data for research is clearly consistent with the Institute of Medicine’s evolving concept of a “learning healthcare system” in which healthcare delivery continuously benefits as real-world evidence accumulates.4 Observational data are plentiful and readily available, and a growing number of payer organizations are recognizing the valuable resource available to them for understanding how treatments work in practice. But the concept of “good,” “sound,” or even “credible” observational research is not well understood or agreed upon, and this lack of agreement can result in uneven and unpredictable consideration of such research when making coverage and payment decisions. Lack of predictability wastes resources both for those reviewing and interpreting the results of observational research and for the suppliers of that research.
Reaching Stakeholder Consensus on Standards for Good Research
Payers and researchers alike should strive for a consistent view of what constitutes good observational research, especially when it assesses the comparative benefits of medical treatment options. Given the ever-increasing demand for answers to such questions, researchers who generate comparative evidence and payers who interpret it must be able to work efficiently and in a timely manner. Efforts in recent years have focused on standardizing the format in which evidence is presented for consideration by payers,5 and these efforts have enhanced the efficiency of reviewing and considering evidence. But broadly accepted criteria and standards for the quality of research evidence can improve both the efficiency of the process and the quality of patient care.
The new healthcare reform legislation will establish a methodology committee that is likely to address many of these issues, but this committee will be created only after PCORI is established. The methodology committee then has 18 months to develop its recommendations for methodologic standards. The work of this committee can be aided if stakeholders come together to review the experience with observational research to date, share successes and failures with this approach, identify the methodologic gaps that limit the utility of observational comparative research, and agree on at least the outline of appropriate standards for CER using observational data. A cooperative effort toward this objective can contribute meaningfully to the work of the new methodology committee and in the near term can enhance the predictability of the quality of the research that is offered in support of medical treatments and therapies. Such an effort also can improve the predictability of the interpretation of this research by those who must make decisions about coverage and payment.
An initial effort intended to improve the quality and usefulness of this type of research is described in this issue of The American Journal of Managed Care.6 My organization, the National Pharmaceutical Council, provided the initial funding for the independent development of the Good ReseArch for Comparative Effectiveness (GRACE) principles, which address many of the quality issues facing CER when observational data are used. These principles are not prescriptive and do not address or suggest solutions to unsolved methodologic problems, but they do focus on the issues that should be addressed, or at least discussed, when observational CER is reported or presented. They leave to the reviewer the task of judging the quality and thoroughness of the response to these issues in any given piece of research. By identifying the key issues to be addressed, the principles suggest a framework for assessing the quality of observational CER and may serve as the starting point for a broad-based, multistakeholder focus on this type of research.
Demand continues for more and better evidence about the practical and real-world effects of medical treatments. Data sources that capture information about the healthcare experience continue to proliferate. As a result, there is an urgent need for consistency and harmonization of standards for evidence generation based on these data, as well as for interpretation and application of the findings that result from the research. Patients will be the ultimate beneficiaries of a successful effort to harmonize the generation, interpretation, and application of real-world evidence.
Gary Persinger and Les Paul, MD, MS, of the National Pharmaceutical Council contributed to this commentary.
Author Affiliation: From the National Pharmaceutical Council, Washington, DC.
Funding Source: The National Pharmaceutical Council provided funding to Outcome, Inc in support of the GRACE principles (see the article by Dreyer et al, in this issue [p 467]).
Author Disclosures: Mr Leonard reports serving as a board member for the Health Industry Forum, the National Health Council, and the Pharmacy & Therapeutics Society.
Authorship Information: Concept and design; drafting of the manuscript; critical revision of the manuscript for important intellectual content; administrative, technical, or logistic support; and supervision.
Address correspondence to: Daniel T. Leonard, MA, President, National Pharmaceutical Council, 1501 M St, NW, Washington, DC 20005. E-mail: firstname.lastname@example.org.
1. 111th Congress of the United States. American Recovery and Reinvestment Act of 2009. February 2009. http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=111_cong_bills&docid=f:h1enr.pdf. Accessed April 5, 2010.
2. Committee on Comparative Effectiveness Research Prioritization, Institute of Medicine. Initial National Priorities for Comparative Effectiveness Research. June 2009. http://www.iom.edu/Reports/2009/ComparativeEffectivenessResearchPriorities.aspx. Accessed April 5, 2010.
3. Federal Coordinating Council on Comparative Effectiveness Research. Report to the President and the Congress on comparative effectiveness research. June 2009. http://www.hhs.gov/recovery/programs/cer/execsummary.html. Accessed April 5, 2010.
4. IOM Roundtable on Evidence-Based Medicine, Institute of Medicine. The Learning Healthcare System: Workshop Summary. March 2007. http://www.iom.edu/Reports/2007/The-Learning-Healthcare-System-Workshop-Summary.aspx. Accessed April 5, 2010.
5. Academy of Managed Care Pharmacy. The AMCP Format for Formulary Submissions, version 3.0. October 2009. http://www.amcp.org/format/pub.pdf. Accessed April 5, 2010.
6. Dreyer NA, Schneeweiss S, McNeil BJ, et al, for the GRACE Initiative. GRACE principles: recognizing high-quality observational studies of comparative effectiveness. Am J Manag Care. 2010;16(6):467-471.