Linking Claims, Clinical Data Is Essential for a Learning Health System

Big data is a term used as commonly as the term “value” in cancer care. However, similar to “value,” the interpretation of “big data” can vary, according to Robert Green, MD, vice president of clinical strategy and senior medical director at Flatiron Health. Is the rubber meeting the road with big data in cancer care? “No… rather, not yet,” Green said at the 2017 Community Oncology Conference, held April 26-27 at the Gaylord National Hotel and Convention Center in National Harbor, Maryland.

Green explained that real-world quality data play an important role in making data from electronic health records useful. However, it is also important to link clinical and claims data.

“That’s where the future is,” he said.

Quoting quantitative scientist Gary King, PhD, from Harvard University, who said, “Big data is not about the data,” Green explained that it’s about using the data to generate meaningful insights. “At Flatiron, we define big data based on its complexity, rather than the volume.”

The focus should be on leveraging the data for high-value care, improving outcomes, and accelerating clinical work, Green explained.

“We are being asked to develop interventions that will affect care and the financial viability of our practices,” he said. “To achieve this, we need to feed all this information back into our system to improve workflow … the concept of a learning system.”

He believes that processing structured data, often described as “data scrubbing,” is key to making these data usable. But a lot of information is not structured, such as pathology reports or physician notes, and there needs to be a method to extract it.

“Unstructured data is typically hard to get at, and it’s not possible to get this data into a structured form, accurately, and use it to feed back and improve care,” he said.

Green told the audience, most of whom were oncology care providers, that although most providers think they are good at what they do for patients, “I don’t believe the metrics that I am reporting on are really bringing value to the patient, because I checked the required box, such as measuring pain medication.” So, he asked, how do providers find out whether they are taking good care of their patients?

“You don’t know how well your patients are doing unless you try to measure their performance,” Green said.

Green outlined what is needed to generate real-world quality data:

  • Fill in the gaps. He stressed that filling in data gaps is essential for mining high-quality data, and this means combining unstructured data with raw structured data.
  • Identify cohorts. Identify the appropriate patient cohort on which to conduct analyses. Defining the cohort is important when measuring quality to report on metrics.
  • Develop an analysis plan. Develop, document, and apply a rigorous plan. It is easy to miss the right answer if the data are not thoroughly evaluated, he said.
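Those three steps can be sketched in code. Below is a minimal Python illustration using hypothetical patient records; all field names and values are assumptions for illustration and do not reflect Flatiron's actual schema. The cohort is defined first, then a simple testing-rate metric is computed per clinic.

```python
from collections import defaultdict

# Hypothetical records: structured fields after gaps have been filled.
records = [
    {"patient_id": 1, "clinic": "A", "diagnosis": "NSCLC", "egfr_tested": True},
    {"patient_id": 2, "clinic": "A", "diagnosis": "NSCLC", "egfr_tested": False},
    {"patient_id": 3, "clinic": "B", "diagnosis": "NSCLC", "egfr_tested": True},
    {"patient_id": 4, "clinic": "B", "diagnosis": "CRC", "egfr_tested": False},
]

# Step 2: identify the cohort -- here, all NSCLC patients.
cohort = [r for r in records if r["diagnosis"] == "NSCLC"]

# Step 3: apply the analysis plan -- testing rate per clinic.
tested, total = defaultdict(int), defaultdict(int)
for r in cohort:
    total[r["clinic"]] += 1
    tested[r["clinic"]] += r["egfr_tested"]

rates = {clinic: tested[clinic] / total[clinic] for clinic in total}
print(rates)  # {'A': 0.5, 'B': 1.0}
```

Computing the rate per clinic, rather than only across the whole network, is what surfaces the clinic-level variance Green describes in the case studies below.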

Case study 1. He then presented a case study on assessing clinic adherence to EGFR and ALK testing in non-small cell lung cancer (NSCLC). Analysis of Flatiron’s database found that only 21% of patients were tested across the network of practices conducting this genetic test.

“But when we drilled down even more, the median testing rate was 16%—some clinics were testing 100% while others were only testing infrequently,” he said.

So there was significant variance across clinics, which was apparent only when Flatiron analyzed the data at the individual clinic level.

Case study 2. Green showed that in Flatiron’s data set, KRAS testing rates for colorectal cancer fell from 71% in 2012 to 57% in 2014. Rates varied from 35% to 90% across 21 clinics, and testing rates rose with later lines of therapy: 62% in the first line and 90% by the third line and beyond.

“Such detailed information can influence how we collect, analyze, and report on quality metrics and how it ultimately affects reimbursement in that practice,” Green said.

To highlight the importance of linking clinical and claims data, Green compared the value that claims data bring to quality analysis and noted specific challenges. While claims data provide insight into total cost by disease type, help identify cost drivers and drug compliance rates, and offer information on hospitalizations and emergency room visits, that information is not sufficient, Green said. Claims data, he added, lack attribution and the clinical depth that has real influence on cost.

“There’s also data latency … claims data are not as recent as a clinic would like,” he added.
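A sketch of what such a linkage might look like: claims are joined to clinical records on a shared patient identifier, so that cost and utilization figures gain the clinical context they lack on their own. All identifiers, fields, and dollar amounts below are hypothetical.

```python
# Hypothetical clinical records, keyed by patient identifier.
clinical = {
    101: {"diagnosis": "NSCLC", "egfr_tested": True},
    102: {"diagnosis": "NSCLC", "egfr_tested": False},
}

# Hypothetical claims: one row per claim, with cost and utilization flags.
claims = [
    {"patient_id": 101, "cost": 1200.0, "er_visit": False},
    {"patient_id": 101, "cost": 800.0, "er_visit": True},
    {"patient_id": 102, "cost": 500.0, "er_visit": False},
]

# Aggregate claims per patient, attaching the clinical context
# (diagnosis, biomarker status) that claims alone cannot provide.
linked = {}
for claim in claims:
    pid = claim["patient_id"]
    entry = linked.setdefault(
        pid, {"total_cost": 0.0, "er_visits": 0, **clinical[pid]}
    )
    entry["total_cost"] += claim["cost"]
    entry["er_visits"] += claim["er_visit"]

print(linked[101]["total_cost"])  # 2000.0
```

In a linked record like this, a cost driver (total spend, ER visits) can be read alongside the clinical facts (diagnosis, testing status) that explain it, which is what pure claims analysis cannot do.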

Circling back to how he kicked off his presentation, Green said, “It’s not about the data, it’s what you do with it.” He predicted that measurement and reporting of physician and clinical performance will soon become routine, that personalized risk assessment will be essential for process improvement and for maximizing returns, and that improved outcomes will become the expectation.