CancerLinQ - ASCO's Rapid Learning System to Improve Quality and Personalize Insights

Evidence-Based Oncology, June 2016, Volume 22, Issue SP8

CancerLinQ is a big data platform developed by the American Society of Clinical Oncology (ASCO) that aggregates clinical data from electronic health records for quality benchmarking and hypothesis generation. Such observational data can complement traditional evidence but must be used with caution.

The promotion of the highest quality cancer care has long been foundational to the mission of the American Society of Clinical Oncology (ASCO). To make this vision a reality, ASCO has been building a big data, health information technology (HIT) platform called CancerLinQ (Cancer Learning Intelligence Network for Quality), based on principles articulated by the Institute of Medicine (IOM) and others. (The division of the National Academies of Sciences, Engineering, and Medicine that focuses on health and medicine was renamed the Health and Medicine Division, from the Institute of Medicine, in 2016).

CancerLinQ extracts data from electronic health records (EHRs) and other sources and, using transformational data analytics, generates knowledge that can be accessed by oncologists, researchers, and patients. CancerLinQ’s primary objectives are to provide real-time quality feedback that enables oncologists to measure the care they render against clinical guidelines and the performance of their peers; to deliver personalized insights at the point of care; and to accelerate the generation of new research hypotheses by uncovering patterns in patient and tumor characteristics, therapies, and outcomes, patterns whose detection requires massive data sets and real-world evidence.

CancerLinQ Background and History

The 2013 IOM report, Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis, identified contemporary challenges to the delivery of high-quality cancer care and specified 10 core recommendations for remediation.1 A technology-based “learning healthcare system” was identified as essential to make cancer care more evidence-based and patient-centric. The authors described a learning health system as one that uses HIT to “…continuously and automatically collect and compile from clinical practice, disease registries, clinical trials, and other sources of information, the evidence needed to deliver the best, most up-to-date care that is personalized for each patient.”2

In a learning health system, research and practice inform each other in a virtuous cycle, one in which the findings from everyday care experiences continuously improve both practice and the discovery of new knowledge. In March 2011, the ASCO Board of Directors adopted strategic initiatives to begin the development of a rapid learning system (RLS) in oncology. Over the next 4 years, ASCO undertook a number of steps to make this RLS a reality. ASCO formed a separate Quality Department to house all of the Society’s quality programs, including CancerLinQ, and dedicated staff, volunteer committees, and external advisors were brought together to define goals and strategy and to create a business plan. A pilot of an RLS that focused on breast cancer began in 2012, using mostly open-source software, and the prototype was publicly demonstrated, albeit with limited functionality, at ASCO’s Quality Care Symposium in 2012.

After the Board approved full build-out of the system in 2013, ASCO commenced a rigorous process to gather requirements, leading to the release of a Request for Proposals for vendors in 2014. Also in 2014, the ASCO Board created a separate limited liability company, CancerLinQ LLC,3 as a non-profit, wholly-owned subsidiary of ASCO, with its own board and advisory committees and numerous new staff. In January 2015, ASCO announced that SAP, a German multinational corporation that makes enterprise software, had been chosen to develop and deploy CancerLinQ in a co-innovation partnership with ASCO. ASCO provides the oncology subject matter expertise and controls the data, services, and products that stem from CancerLinQ, while SAP provides access to its global healthcare technical platform and engineering support to create customized tools unique to CancerLinQ’s needs.

ASCO had previously engaged a number of early adopters from the oncology community in the United States to serve as “vanguard practices,” and between 2015 and 2016, the pace of practice engagement increased sharply. As of May 1, 2016, 37 vanguard practices had signed Business Associate Agreements with CancerLinQ, representing over 700 physicians and nearly 600,000 patient records. Data ingestion is ongoing as new practices are added and data connections to the EHR are established. The vanguard practice group comprises a mixture of small, community-based, single-specialty, hematology-oncology practices; larger, multisite cancer centers and integrated delivery networks; and academic medical centers. With the first release of the platform, anticipated around the time of the ASCO Annual Meeting in Chicago, June 3-7, 2016, participating practices will have access to a full suite of features to be delivered as part of the initial version, including quality performance measures, analytic reports, and a data exploration tool known as CancerLinQ Insights (CLQI).

CancerLinQ Data Architecture

CancerLinQ is a cloud-based solution built on SAP’s HANA, an in-memory data management and application platform providing analytics and data visualization, and hosted in a secure data center within the United States. CancerLinQ consists of a series of logical databases in the HANA Enterprise Cloud (HEC) environment, through which data flow and undergo progressive normalization and deidentification. There are several models by which data are extracted from the EHR and flow into CancerLinQ. In the default option, the CancerLinQ engineering team deploys a third-party software agent called CancerLinQ Connect, which is installed behind the practice firewall and pulls the required data elements, both structured and unstructured, from the EHR database. A customized data file known as an HIX file is created and securely transferred to the HEC. Several “push” options are also supported, among them data warehouse extracts. Regardless of the initial data extraction methodology, incremental updates, uploaded nightly, will occur to ensure the timeliness of the clinical data in the system.

Once the data are ingested into the HEC, they flow into a data staging area, or “data lake,” which contains fully identifiable protected health information (PHI) and personally identifiable information (PII). So that the clinical quality measures can fire and the clinical concepts can be queried, the data are standardized via a rules engine based on a terminologies database known as the National Cancer Institute Metathesaurus.4 The unstructured data run through a natural language processing engine for conversion to structured elements. The data then land in the clinical database containing PHI and PII that can be queried by the end-user based on access privileges. Physician-users will only be able to view patient data with PHI for their own patients or, in some cases, the patients of other physicians within their practice. From there, the data undergo deidentification via a third-party software tool and land in an analytical database, where users can access the aggregated, deidentified data set. The CancerLinQ informatics team will use this database to create customized analytic reports for participating practices and other parties, pursuant to CancerLinQ data governance principles.
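
The staging-to-analytics flow described above can be sketched schematically. This is a minimal illustration, not CancerLinQ's actual implementation: the function names, field names, and concept codes are all assumptions, and the placeholder codes stand in for real NCI Metathesaurus concept identifiers.

```python
import hashlib

# Illustrative terminology map standing in for the rules engine backed by
# the NCI Metathesaurus; "CUI-0001"-style codes are placeholders, not real CUIs.
TERMINOLOGY_MAP = {"her2 positive": "CUI-0001", "er positive": "CUI-0002"}

def standardize(record):
    """Map free-text clinical concepts to standard terminology codes."""
    record["concept_codes"] = [
        TERMINOLOGY_MAP[t] for t in record.get("terms", []) if t in TERMINOLOGY_MAP
    ]
    return record

def deidentify(record):
    """Strip direct identifiers and replace the record key with a hash,
    as a third-party deidentification tool might."""
    deid = {k: v for k, v in record.items() if k not in ("name", "mrn", "dob")}
    deid["patient_key"] = hashlib.sha256(record["mrn"].encode()).hexdigest()[:16]
    return deid

def ingest(raw_records):
    """Data lake (identified) -> clinical DB (standardized, PHI) ->
    analytical DB (aggregated, deidentified)."""
    clinical_db = [standardize(dict(r)) for r in raw_records]  # queryable with PHI
    analytical_db = [deidentify(r) for r in clinical_db]       # deidentified copy
    return clinical_db, analytical_db
```

The key design point mirrored here is that deidentification happens downstream of standardization, so the analytical database retains the coded clinical content while shedding PHI and PII.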

The end-user accesses CancerLinQ through a web browser. Product features include a set of clinical quality performance indicators based on ASCO-developed electronically specified clinical quality measures (eCQMs), discussed in the next section; the CLQI tool for customized cohort and data exploration; a patient timeline tool to visually represent oncologic milestones in the patient history; and a suite of parameterized analytic reports.

Additional data sources contemplated for CancerLinQ ingestion in future versions include practice management systems (financial and administrative data), tumor registry data, claims data, genomic and other molecularly derived datasets, and data warehouse datasets.

Quality Measurement and Benchmarking


The ASCO Board of Directors, from the time of its earliest strategic decisions to create CancerLinQ as a learning health system for oncology, envisioned that the primary function of the platform would be as an extension of ASCO’s quality portfolio, most notably the Quality Oncology Practice Initiative (QOPI).5 QOPI is ASCO’s signature quality assessment program for outpatient hematology-oncology practices, designed to create a culture of self-examination and improvement through periodic measurement of practice performance on established clinical quality measures, many of which are derived from ASCO’s own clinical practice guidelines. However, QOPI is a retrospective analysis, describing clinical events at least 6 months in the past, by which time the opportunity to influence the care of any individual patient has generally long passed. By comparison, CancerLinQ provides real-time assessment of practice performance on a subset of embedded eCQMs, both disease- and domain-specific, derived from the QOPI program. The eCQM results are displayed graphically in the user portal of the CancerLinQ web interface using tools in SAP HANA for measure definitions and visualization. The initial measure set, which includes eCQMs adapted from measures endorsed by the National Quality Forum, is shown in the Table below.

TABLE. CancerLinQ Clinical Quality Measures (a)

Staging documented within 1 month of first office visit.

Pain addressed by second office visit.

Pain intensity quantified by second office visit.

Test for HER2/neu overexpression or gene amplification.

Tamoxifen or aromatase inhibitor received within 1 year of diagnosis by patients with AJCC stage IA(T1c) and IB-III estrogen or progesterone receptor-positive breast cancer.

CEA within 4 months of curative resection for colorectal cancer.

Adjuvant chemotherapy received within 4 months of diagnosis by patients with AJCC stage III colon cancer.

Smoking status/tobacco use documented in past year.

Rituximab administered when CD-antigen expression is negative or undocumented (lower score = better).

Hepatitis B surface antigen and hepatitis B core antibody test within 3 months prior to initiation of rituximab for patients with NHL.

(a) Initial release, 2016.

AJCC indicates American Joint Committee on Cancer; CD, cluster of differentiation; CEA, carcinoembryonic antigen; NHL, non-Hodgkin lymphoma.

Oncologists participating in CancerLinQ can see quality performance results for their own patient population and drill down to individual patient-level detail. They can compare their results with aggregated data from all participating practices, but they cannot view the performance metrics of other participants except in the aggregate, and no PHI is exposed other than that of their own patient population. Designated “clinical supervisors” in each practice, typically the lead physician or practice manager, can be given access to view the quality performance results of all physicians in the practice, based on the local security model, but not those of any other practice at the detailed level.
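
The tiered visibility rules just described can be modeled as a simple access check. This is a hypothetical sketch for illustration only; the role names and record fields below are assumptions, not CancerLinQ's actual security model.

```python
def can_view_phi(viewer, patient):
    """Return True if the viewer may see this patient's record with PHI.

    Physicians see their own patients; designated clinical supervisors see
    all patients in their own practice; no one sees another practice's
    patient-level detail.
    """
    if viewer["practice_id"] != patient["practice_id"]:
        return False  # cross-practice patient-level detail is never exposed
    if viewer["role"] == "clinical_supervisor":
        return True   # supervisor: whole-practice visibility
    # ordinary physician: own patients only
    return viewer["physician_id"] == patient["attending_physician_id"]
```

Cross-practice comparison would bypass this check entirely by querying only the aggregated, deidentified analytical database, consistent with the separation of clinical and analytical databases described earlier.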

The real-time nature of quality measurement in CancerLinQ allows for the concept of "actionability." Embedded in the software is functionality that enables the individual user to view a measure-by-measure depiction of his or her performance in a dashboard characterizing patients as measure concordant or non-concordant. The non-concordant patients can be sorted based on whether they are in an actionable time frame, meaning the patient does not yet satisfy the requirements of the measure, but is still within a timeframe permitting a diagnostic or therapeutic intervention under the physician’s control (eg, administration of adjuvant chemotherapy for a patient with stage III colon carcinoma within 4 months of diagnosis). This “early warning system” can improve care by surfacing such patients when there is still an opportunity to deliver a guideline-specified intervention. In the circumstance where there may be a valid clinical contraindication (eg, due to a comorbidity), relevant patient-level detail can be accessed directly from the quality dashboard.
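
The actionability logic for a time-windowed measure, such as adjuvant chemotherapy within 4 months of diagnosis for stage III colon cancer, might be sketched as follows. The field names and the fixed-day approximation of the 4-month window are illustrative assumptions, not the platform's actual measure specification.

```python
from datetime import date, timedelta

# Illustrative approximation of the measure's 4-month window.
MEASURE_WINDOW = timedelta(days=122)

def classify(patient, today):
    """Classify a patient against the adjuvant-chemotherapy measure:
    'concordant' if chemotherapy was given, 'actionable' if not yet given
    but the window is still open, else 'non-concordant'."""
    if patient.get("adjuvant_chemo_date"):
        return "concordant"
    deadline = patient["diagnosis_date"] + MEASURE_WINDOW
    return "actionable" if today <= deadline else "non-concordant"

def actionable_worklist(patients, today):
    """Surface non-concordant patients still within the intervention window,
    i.e., the 'early warning system' view of the quality dashboard."""
    return [p["id"] for p in patients if classify(p, today) == "actionable"]
```

The "actionable" category is what distinguishes this real-time view from a purely retrospective report: it identifies patients for whom a guideline-specified intervention can still change the measure outcome.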

By virtue of the sheer number of cases contained in the database—at least 1 million by mid-2016, and expected to grow sharply from there—CancerLinQ may very well represent the most accurate depiction of practice quality in the real world, since it generates metrics based on the actual care rendered and outcomes achieved. Furthermore, all patients eligible for a quality measure, subject to the denominator exclusions incorporated into the measure itself, will be included automatically, without an ability to “cherry pick” patient charts, as is the case for most other quality assessment programs where chart selection is controlled by the practice. However, it is important to reemphasize that each practice’s quality reports are only viewable by that practice, and the aggregated reports from all participants have safeguards built in that make it impossible to identify individual practices by name, location, or characteristic.

Learning From Observational Data—Hazard or Opportunity?

As described, CancerLinQ enables oncologists to use data collected from EHRs to assess quality and inform clinical decisions. Presumably, physicians can learn from the quality performance data of peers to improve their own care. But is this a reasonable assumption? Moreover, how should observational data such as that derived from CancerLinQ be considered in the context of clinical decision making for individual patients? In this era of big data, there are no straightforward answers yet. Even if one acknowledges that patient populations encountered in practice have little resemblance to those studied in the majority of clinical trials—where the patients tend to be healthier, younger, and less diverse in race and socioeconomic status—using the data from observational studies remains fraught with hazards, since risk adjustment and controlling for confounding variables remain challenging. For example, a report looking at the breast cancer radiotherapy endpoints of overall mortality and cause-specific mortality, comparing findings from a public-use dataset from the Surveillance, Epidemiology, and End Results Program (observational data) to the randomized trials that were part of recent meta-analyses by the Early Breast Cancer Trialists’ Collaborative Group (clinical trial data), showed substantially divergent results, even when all potential confounders were controlled using full stratification.6

On the topic of using observational data in comparative effectiveness research, Curtis and Krumholz opined in an editorial in Annals of Internal Medicine: “Much work remains before comparative effectiveness studies using observational data become meaningful for influencing clinical practice, including improving the quality of data, strengthening analytic methods with attention to assessing comparative effects and modifying factors, and reaching consensus on validation approaches. Meanwhile, these studies remain interesting, yet fall short in altering our assessments of the comparative performance of each strategy.”7

However, how is an individual clinician to proceed when faced with a patient in the exam room with a rare tumor for which evidence-based clinical practice guidelines do not exist, and the patient is not a candidate for a trial? Or a patient with a common malignancy like breast cancer coexisting with a myelodysplastic syndrome with del[5q]? Or the much more common scenario of a patient with compromised renal function faced with the decision as to the advisability of potentially nephrotoxic, but curative, adjuvant chemotherapy? The availability of a powerful tool like CancerLinQ, which can provide insights into the real-world outcomes of similar patients, when combined with existing trial-generated evidence and full patient consent, may be transformative to the practice of the art of medicine in these difficult situations.

EBO

Author information

Robert S. Miller, MD, FACP, FASCO, is vice president, Quality and Guidelines, and medical director, CancerLinQ, American Society of Clinical Oncology.

Address for correspondence

Robert S. Miller, MD, FACP, FASCO

2318 Mill Road, Suite 800

Alexandria, VA 22314

E-mail: robert.miller@asco.org

Funding: No external funding.

References

  1. Delivering high-quality cancer care: charting a new course for a system in crisis. Institute of Medicine website. http://www.nationalacademies.org/hmd/Reports/2013/Delivering-High-Quality-Cancer-Care-Charting-a-New-Course-for-a-System-in-Crisis.aspx. Published September 10, 2013. Accessed May 1, 2016.
  2. A learning health care information technology system for cancer. In: Delivering high-quality cancer care: charting a new course for a system in crisis. Institute of Medicine website. http://www.nap.edu/read/18359/chapter/8#238. Published September 10, 2013. Accessed May 1, 2016.
  3. CancerLinQ website. http://www.cancerlinq.org. Accessed May 1, 2016.
  4. NCI Metathesaurus. National Cancer Institute website. https://ncimeta.nci.nih.gov/ncimbrowser/. Accessed May 1, 2016.
  5. Blayney DW, McNiff K, Eisenberg PD, et al. Development and future of the American Society of Clinical Oncology’s Quality Oncology Practice Initiative. J Clin Oncol. 2014;32(35):3907-3913. doi:10.1200/JCO.2014.56.8899.
  6. Henson KE, Jagsi R, Cutter D, McGale P, Taylor C, Darby SC. Inferring the effects of cancer treatment: divergent results from Early Breast Cancer Trialists’ Collaborative Group meta-analyses of randomized trials and observational data from SEER registries. J Clin Oncol. 2016;34(8):803-809. doi:10.1200/JCO.2015.62.0294.
  7. Curtis JP, Krumholz HM. The predicament of comparative effectiveness research using observational data. Ann Intern Med. 2015;163(10):799-800. doi:10.7326/M15-2490.