This article was published as part of a special joint issue and also appears in the Journal of Oncology Practice.
Although much effort has focused on identifying national comparative effectiveness research (CER) priorities, little is known about the CER priorities of community-based practitioners treating patients with advanced cancer. The CER priorities of managed care-based clinicians may be valuable as reflections of both payer and provider research interests.
We conducted mixed methods interviews with 10 clinicians (5 oncologists and 5 pharmacists) at 5 health plans within the Health Maintenance Organization Cancer Research Network. We asked, “What evidence do you most wish you had when treating patients with advanced cancer?” and questioned participants on their impressions and knowledge of CER and pragmatic clinical trials (PCTs). We conducted qualitative analyses to identify themes across interviews.
Ninety percent of participants had heard of CER, 20% had heard of PCTs, and all rated CER/PCTs as highly relevant to patient and health plan decision making. Each participant offered between 3 and 10 research priorities. Half (49%) involved head-to-head treatment comparisons; another 20% involved comparing different schedules or dosing regimens of the same treatment. The majority included alternative outcomes to survival (eg, toxicity, quality of life, noninferiority). Participants cited several limitations to existing evidence, including lack of generalizability, funding biases, and rapid development of new treatments.
Head-to-head treatment comparisons remain a major evidence need among community-based oncology clinicians, and CER/PCTs are highly valued methods to address the limitations of traditional randomized trials, answer questions of cost-effectiveness or noninferiority, and inform data-driven dialogue and decision making by all stakeholders.
(Am J Manag Care. 2012;18(5 Spec No. 2):SP77-SP83)

Evidence of cancer treatment effectiveness comes primarily from randomized controlled clinical trials. Although these studies are used extensively to evaluate the efficacy of new treatments, meet US Food and Drug Administration requirements, and determine clinical guidelines, their findings may not be generalizable to community practice.1,2 This may result from implementation in academic settings and use of complicated protocols with stringent inclusion and exclusion criteria, which may not be relevant to community practices.1-4 These factors lead to greater uncertainty in decision making by doctors, patients, and policy makers, as well as in evaluations of cost-effectiveness, in these settings.
Pragmatic, or practical, clinical trials (PCTs), a form of comparative effectiveness research (CER), are an alternative to traditional trials. PCTs compare 2 or more clinically relevant interventions, recruit a population that is more representative of the target population, and assess a broad range of clinically relevant health outcomes to aid decision making for a variety of stakeholders.1,3,5 This form of research has been recognized as a potential solution to concerns regarding generalizability,1 and significant effort has gone into identifying the national CER agenda.6 Unfortunately, little research has investigated the oncology-specific CER or PCT priorities of community-based clinicians treating patients with advanced cancer, who would be the recipients and users of these CER results. Rising healthcare costs, increasing intensity of care, an aging population, and the fact that cancer is the second-leading cause of death in the United States7-9 make it all the more important to achieve effective and efficient care in advanced cancer.
We interviewed 1 oncologist and 1 pharmacist from each of 5 Health Maintenance Organization (HMO) Cancer Research Network (CRN) health plans to understand their knowledge and perceptions of CER and PCTs and what evidence they wish they had when treating patients with advanced cancer. The perspectives of this clinician population are of interest because, as salaried employees of their respective health plans, their preferences for evidence and/or research priorities may reflect both provider and payer perspectives.
A single trained interviewer (EJAB), a nonclinician researcher, conducted structured telephone interviews (between December 2010 and March 2011) with 2 clinicians (1 oncologist and 1 pharmacist) from each of 5 health plans within the CRN (Group Health in Washington State; the Northwest, Northern California, Colorado, and Georgia regions of Kaiser Permanente). Delivery system pharmacists were included because they may be involved in developing formularies, approving off-label chemotherapy use, and addressing appeals, representing the payer perspective. Participants were recruited from the 5 locations participating in an NCI-funded grant entitled, “Building CER capacity: Aligning CRN, CMS, and state resources to map cancer care.” All 10 clinicians who were invited to participate agreed to do so. Participants were selected for the study on the basis of their roles as clinical leaders at each HMO Research Network site; most were the heads of their respective practices or lead pharmacists, and all had a major role in decision making in their departments. After being given a brief description of the study via e-mail, participants were contacted to schedule an interview. The study was approved by the Kaiser Permanente Colorado institutional review board.
We used a mixed methods approach, collecting both quantitative and qualitative data, to gain a more thorough understanding of participants’ responses.10,11 After querying participants on their familiarity with CER and PCTs, we e-mailed published definitions of CER and PCT to them in real time; these definitions (Figure) were then used as the standard for the rest of the interview. We asked participants about the relevance of CER to patient and health plan decision making as well as their self-reported likelihood of changing their practice or advocating for health plan policy change on the basis of evidence from various types of studies on a scale from 1 to 10 (1 = “extremely unlikely” and 10 = “extremely likely”). We asked open-ended questions about what evidence participants wished they had when treating patients with advanced cancer and about what CER studies or PCTs they felt were the most important to conduct at this time for patients with advanced lung, colorectal, breast, or prostate cancer. These cancers were identified a priori as having the greatest potential for impact because they are the 4 most commonly diagnosed cancers in the United States, and they have many treatment-related challenges. We also collected data on participants’ demographic characteristics.
We calculated mean values and ranges for all quantitative data. We used ethnographic software (Atlas.ti; GmbH, Berlin, Germany) to classify and analyze qualitative data from interview transcripts. Qualitative data included all answers to open-ended questions, including what evidence participants wished they had when treating patients with advanced cancer as well as their thoughts on CER, PCTs, and how they compared with traditional methods of research. Two investigators (EJAB and SJL) independently reviewed the same 2 randomly chosen transcripts to identify codes and then discussed and refined this list. Both investigators then independently reviewed the remaining 8 transcripts, identifying codes, and then discussed them and summarized major themes. All participants’ names and institutions were kept anonymous.
Participants ranged in age from 30 to 62 years (mean, 47 years); 8 of 10 were male. Participants had been in practice for an average of 17.5 years; 6 had previously practiced in a comprehensive or university-based cancer center, and all oncologists had completed a fellowship. Four participants reported that 5% to 15% of their patients were currently enrolled onto clinical trials; the rest reported less than 5%. All but 1 participant had heard of CER, whereas only 2 had heard of PCTs. After reviewing the definitions of CER and PCTs (Figure), half considered CER more relevant than traditional methods of research (including traditional trials) to health system decision making, and half considered CER at least as relevant to patient decision making, including 2 who considered it more relevant. No participants considered CER less relevant. We observed similar results for the perceived relevance of PCTs. Participants were slightly more likely to change their practice or advocate for change in health system policies on the basis of consistent results from multiple pragmatic trials than from traditional randomized trials and less likely on the basis of results from a single trial. They were less likely to change on the basis of results from observational studies, case studies, or opinions of their colleagues or other clinicians.
When asked what evidence they wanted most when treating patients with advanced cancer, participants each provided between 3 and 10 ideas. About half of the proposed ideas involved comparisons of 2 (or more) different treatments (49%)—for example, comparing the effects of pemetrexed versus multiple older drugs on overall survival in patients with non-small-cell lung cancer. Another 20% of proposed ideas involved comparing different aspects of a single drug or treatment, such as different doses, durations, or sequences—for example, comparing the adverse effects of 2 mg versus 4 mg of zoledronic acid (Zometa; Novartis Oncology, East Hanover, New Jersey) per month. Approximately 10% involved comparing the effects of maintenance therapy versus nothing, conducting palliative care interventions, or testing hypotheses around genetic testing. Of trial ideas that specified at least 1 health outcome, a third specified survival time as the outcome of interest, and two-thirds specified toxicity and/or quality of life as the outcome.
Although participants were not directly questioned about what they perceived to be the weaknesses of existing evidence, this arose as a common theme across all interviews. Four specific weaknesses were raised by at least half of participants: “lack of generalizability of existing data,” “biases in funding,” “rapid development of new treatment options,” and “increasing individualization of treatment.”
Lack of Generalizability of Existing Data
Many participants raised concerns about the generalizability of results from traditional randomized trials. According to participants, this issue greatly impacts day-to-day practice, with providers regularly deciding whether or not to apply the results of traditional trials to situations in which they may not be applicable (eg, to older patients with more comorbidities).
In one participant’s words, “A usual patient in a randomized adjuvant colon cancer study is 62; the average [age] with colon cancer in the US is 71. And of course, we always choose the healthiest quartile for our clinical trials—youngest and healthiest.”
Biases in Funding
Several participants felt that advanced cancer research is driven by the pharmaceutical industry and focuses too much on the newest drugs, neglecting questions about or comparisons with older drugs. They felt that this leads to biases in funding and potentially biased research findings.
One participant said, “I think where oncology fails is that a lot of . . . our research is drug-driven, and so there is probably a bias in that direction and a bias toward new drugs rather than old drugs in terms of getting studies funded and maybe not enough research on . . . systems of delivery, . . . older drugs, or . . . scheduling of drugs, or . . . looking at cancer care that goes beyond . . . what the . . . new medicines are.”
Another participant said, “A lot of the clinical trials that are designed have biases based on who funds them, and if they’re funded by a drug company, then you’ve got to . . . be a little skeptical as to whether they’re going to extrapolate to real life.”
Rapid Growth in Treatment Options
Half of participants mentioned the rapidly growing set of treatment options in the field of advanced cancer care. For example, participants discussed the increasing use of molecular markers, specific gene targets, sipuleucel-T immunotherapy, and monoclonal antibodies. Participants cited this as a challenge to their ability to make informed, evidence-based decisions about patient care as a result of inadequate data on the effectiveness and optimal use of new treatments compared with older treatments. One participant noted that it is very difficult to study new treatments thoroughly before they become irrelevant as a result of the relatively long timeline of traditional trials.
In one participant’s words, “The problem is that . . . cancer treatment is always a moving target in terms of . . . new treatments . . . and strategies . . . coming out every year . . .. Is there enough time to take . . . two treatments that . . . you think are equivalent and to run a trial from . . . concept to implementation and not have the field change drastically in that period of time so that . . . it no longer has meaningful results by the time you actually collect the data?”
Increasing Individualization of Treatment
Participants mentioned increasing individualization of treatment as an area of growing emphasis in advanced cancer treatment; treatment is being tailored to the individual patient as our ability to distinguish and treat specific cancer subtypes grows. Knowledge of genetic variation in tumors is also increasing, along with our ability to identify biomarkers and target specific genes, and more information on how to address individual treatment differences could potentially improve outcomes in terms of both survival and adverse effects.
One participant said, “I hate to say this: some physicians are still just giving [monoclonal antibodies] without getting any biomarkers . . .. We really need to see if the biomarkers are appropriate to give these medications, instead of just, ‘here—take this.’”
Another participant said, “For example, with Velcade [bortezomib; Millennium Pharmaceuticals, Cambridge, Massachusetts] you get these neuropathies, numbness in the fingertips, and some patients can go through life, no problem; others, it’s like they can’t stand it, it drives them nuts.”

Overall, knowledge and impressions of CER and PCTs, types of evidence needed, and weaknesses of existing evidence were similar for oncologists compared with pharmacists. One possible exception was the perception of biases in funding—4 of 5 pharmacists cited this limitation to existing evidence compared with just 2 of 5 oncologists (Table 2).
Our community-based oncology physicians and pharmacists were less familiar with PCTs than with CER. However, once informed, participants felt CER/PCT evidence was at least as relevant as evidence from traditional randomized controlled trials to patient and health plan decision making. Participants cited many types of needed evidence, including comparisons of the effectiveness of different treatments and treatment regimens with respect to multiple outcomes, including survival, toxicity, and quality of life. We observed that both oncologists and pharmacists in community-based settings struggle to make treatment-related decisions because much of the existing evidence on advanced cancer treatment effectiveness lacks generalizability, a perception that is supported in the literature.12
Participants, particularly pharmacists, felt that biases in funding were another weakness of existing evidence. The emphasis by industry on studying new drugs maximizes profit but increases costs for payers and, from the provider perspective, represents a barrier to definitively answering questions on the comparative effectiveness of older drugs.
In the words of one participant, “It seems like industry funding is the primary means by which we do a lot of our research . . .. We’re going to have to get some independent. . . groups that are going to fund [research] to really find the answers about what is going to be most cost-effective or . . . comparatively effective . . .. It really is important, given the massive amount of stress that we’re putting on our economic situation here in the United States . . . with health care, so I think we really need to be aggressive about . . . looking at those sorts of studies to determine ways that we can alleviate some of these burdens.”
The fact that this seemed to be a bigger concern for pharmacists than for oncologists may reflect the greater emphasis among pharmacists on representing the health plan, or payer, perspective and maintaining the pharmacy budget, whereas the primary focus of the oncologists seemed to be optimal patient care. These perceptions are not new, given that studies have shown that industry-funded research tends to yield industry-friendly findings. This may occur in several ways: intentional underpowering of studies, use of inappropriate comparison doses, seeding trials (conducting them for undisclosed marketing purposes rather than for science), ghostwriting (unacknowledged authorship by a contributor from industry), or guest authorship (adding an author solely to lend the appearance of external objectivity).13,14 Some participants called for more independent funding sources, such as federal funding, to address the industry funding bias and to improve accountability. Such measures may include improved disclosure of conflicts of interest, independent analysis of clinical trial data, and comprehensive and publicly available trial registration.14
Participants’ focus on the rapid development of new treatments seemed to reflect a concern that research was unable to keep pace with drug development, and new agents were, in some cases, being used prematurely. One participant mentioned the cost implications of rushing to use new agents, providing insight into tensions felt by some practitioners: “Now there’s one facet of this which is maybe not spoken of very often, which is can you use the old drugs and have similar outcomes for much less money? And I think that’s a valid reason to do trials . . .. But that can be in conflict with what you’re trying to achieve in terms of what’s improving care as opposed to what’s cheaper for care. So the cost issue is a big one and one that we need to address and maybe one that clinical trials and comparative effectiveness research can bring light to bear on.”
CER and PCTs are not without limitations. By selecting broader, more inclusive study populations, they may sacrifice internal validity for the sake of generalizability. However, each has its place; once traditional trials have established the efficacy of a treatment within a highly controlled setting, PCTs can then be used to assess effectiveness in a real-world environment.15 Concerns have also been raised that CER (including PCTs, observational studies, and meta-analyses) could overgeneralize, ignoring important differences in disease subtypes,16 individual responses to treatment,17,18 selection bias, or genetic differences.19 This highlights the need for care in planning and implementation of CER, as it should be designed to aid in decision making tailored to individual patient needs (Figure). This can be complex, time-consuming, and costly. In the case of observational CER studies, which often use large existing databases or registries to answer questions not feasible for a PCT approach, data limitations are a concern. There may be no information on important variables, such as genetic markers19; selection bias may also be present. CER may also be difficult to fund. It has been suggested that CER is less likely to be funded by pharmaceutical companies, which prioritize research that leads to new products or broadens indications for existing products.20 This further underscores the need identified by multiple participants for independent funding of cancer treatment research.
The small sample size of this study is a limitation; a larger sample might elicit additional ideas for CER and PCTs. In addition, our findings may not be generalizable to clinicians practicing in fee-for-service settings. However, our study captures the perspectives of both oncologists, who synthesize evidence as they prioritize patient care, and pharmacists, whose job in the managed-care setting includes evaluating evidence and managing pharmacy budgets.
A strength of this study is the inclusion of both qualitative and quantitative data, which provides a richer picture of what evidence is needed and the perceived weaknesses of existing evidence. Our results suggest that CER, and specifically PCTs, although not commonly used to improve oncology care, would be considered a valuable way to address evidence gaps in community practice as well as to direct future research by community-based oncology clinicians. In this way, CER provides hope for generating high-quality, and most importantly, relevant evidence for effective decision making by providers, health plans/payers, and patients with advanced cancer.

Acknowledgments
This study was supported by National Institutes of Health/National Cancer Institute Grant No. RC2 CA148185 to REACT (Building CER capacity: Aligning CRN, CMS, and state resources to map cancer care; primary investigators, J.C. Weeks, D.P. Ritzwoller) and Grant No. U19 CA079689 to the Cancer Research Network (primary investigator, E.H. Wagner), a consortium of 14 research organizations associated with nonprofit, integrated healthcare delivery organizations. We thank Diana S. M. Buist, PhD, MPH, for her comments on an earlier draft.
Author Affiliations: From Group Health Research Institute (SJL, ETL, EJAB, EHW), Seattle, WA.
Authors’ Disclosures of Potential Conflicts of Interest
The authors indicated no potential conflicts of interest.
Conception and design: Elizabeth T. Loggers, Erin J. A. Bowles, Edward H. Wagner. Provision of study materials or patients: Elizabeth T. Loggers. Collection and assembly of data: Sarah J. Lowry, Elizabeth T. Loggers, Erin J. A. Bowles. Data analysis and interpretation: All authors. Manuscript writing: All authors. Final approval of manuscript: All authors.

Address correspondence to: Sarah J. Lowry, MPH, Group Health Research Institute, 1730 Minor Avenue No. 1600, Seattle, WA 98101; e-mail: firstname.lastname@example.org.

1. Tunis SR, Stryer DB, Clancy CM: Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. JAMA 290:1624-1632, 2003
2. Luce BR, Kramer JM, Goodman SN, et al: Rethinking randomized clinical trials for comparative effectiveness research: The need for transformational change. Ann Intern Med 151:206-209, 2009
3. Maclure M: Explaining pragmatic trials to pragmatic policy-makers. CMAJ 180:1001-1003, 2009
4. Poonacha TK, Go RS: Level of scientific evidence underlying recommendations arising from the National Comprehensive Cancer Network clinical practice guidelines. J Clin Oncol 29:186-191, 2011
5. Thorpe KE, Zwarenstein M, Oxman AD, et al: A pragmatic-explanatory continuum indicator summary (PRECIS): A tool to help trial designers. J Clin Epidemiol 62:464-475, 2009
6. Institute of Medicine: Initial national priorities for comparative effectiveness research. http://www.iom.edu/Reports/2009/ComparativeEffectivenessResearchPriorities.aspx
7. National Cancer Institute: Cancer trends progress report: 2009/2010 update. http://progressreport.cancer.gov
8. Earle CC, Neville BA, Landrum MB, et al: Trends in the aggressiveness of cancer care near the end of life. J Clin Oncol 22:315-321, 2004
9. Centers for Disease Control and Prevention: Leading causes of death. http://www.cdc.gov/nchs/fastats/lcod.htm
10. Curry LA, Nembhard IM, Bradley EH: Qualitative and mixed methods provide unique contributions to outcomes research. Circulation 119:1442-1452, 2009
11. Johnstone PL: Mixed methods, mixed methodology health services research in practice. Qual Health Res 14:259-271, 2004
12. Dans AL, Dans LF, Guyatt GH, et al: Users’ guides to the medical literature: XIV--How to decide on the applicability of clinical trial results to your patient: Evidence-Based Medicine Working Group. JAMA 279:545-549, 1998
13. Gartlehner G, Fleg A: Comparative effectiveness reviews and the impact of funding bias. J Clin Epidemiol 63:589-590, 2010
14. Ross JS, Gross CP, Krumholz HM: Promoting transparency in pharmaceutical industry-sponsored research. Am J Public Health 102:72-80, 2012
15. Koepsell TD, Weiss NS: Epidemiologic Methods: Studying the Occurrence of Illness. New York, NY, Oxford University Press, 2003, pp 312-313.
16. Tuma RS: Stimulus funds force hard look at comparative effectiveness research. J Natl Cancer Inst 101:1036-1039, 2009
17. Hirsch BR, Giffin RB, Esmail LC, et al: Informatics in action: Lessons learned in comparative effectiveness research. Cancer J 17:235-238, 2011
18. Lauer MS, Collins FS: Using science to improve the nation’s health system: NIH’s commitment to comparative effectiveness research. JAMA 303:2182-2183, 2010
19. Djulbegovic M, Djulbegovic B: Implications of the principle of question propagation for comparative-effectiveness and “data mining” research. JAMA 305:298-299, 2011
20. Hochman M, McCormick D: Characteristics of published comparative effectiveness studies of medications. JAMA 303:951-958, 2010