Gathering and Using Real-World Data to Improve Patient Outcomes
The transition in healthcare to value-based care and risk-based contracts means that systems need to be able to measure how well they are treating patients. In a session at Asembia's 15th annual Specialty Pharmacy Summit, held April 29 to May 2 in Las Vegas, speakers highlighted a pilot program to gather and use real-world evidence to compare outcomes at 7 large academic medical centers for patients with rheumatoid arthritis (RA), multiple sclerosis (MS), and malignant melanoma.
Most health systems, if they aren't already doing so, will take on more risk-based contracts, which means there needs to be a better way to measure performance and provide a report card, said Tom Renshaw, RPh, senior director of business solutions, Acentrus Specialty.
“If you’re doing something poorly, that doesn’t mean you can’t do it better,” he said. “But you do have to be able to identify that you did it poorly. And historically, we haven’t had those measures to do that.”
Rich Glicklich, MD, chief executive officer, OM1, explained some of the work his company does to measure outcomes. He highlighted that meaningfully evaluating outcomes requires standardizing measures that are patient-centric and relevant to all stakeholders, and that health systems need to leverage unstructured data.
The data also need to be risk-adjusted to account for how sick patients are when they enter care and to interpret the results of individual centers. This also allows for fair benchmarking, Glicklich explained.
An effort funded by HHS to understand outcomes as they relate to patients, providers, and treatments has arrived at 5 outcome domains that matter: survival, clinical response, events of interest, patient-reported outcomes, and resource utilization.
He used MS as an example to show the measures within each outcome domain and highlighted that measures such as disability markers, lesion burden, the Expanded Disability Status Scale (EDSS) performed as standard care, symptomatology, and reason(s) for treatment disruption are unstructured data that have to be extracted from the clinical record.
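The domains and MS measures above can be sketched as a simple schema. The assignment of each measure to a domain below is an illustrative assumption, not the project's actual mapping:

```python
# A minimal sketch of the 5 outcome domains as a schema, with the MS
# measures named in the session slotted in. The domain assignments are
# illustrative assumptions, not the project's actual mapping.
MS_OUTCOMES = {
    "survival": [],
    "clinical_response": ["disability_markers", "lesion_burden", "EDSS"],
    "events_of_interest": ["treatment_disruption_reasons"],
    "patient_reported": ["symptomatology"],
    "resource_utilization": [],
}

# Measures that live in free text rather than coded fields, and so must
# be abstracted from the clinical record before populating the schema.
UNSTRUCTURED = {"disability_markers", "lesion_burden", "EDSS",
                "symptomatology", "treatment_disruption_reasons"}
```

A schema like this is what makes cross-center comparison possible: every center reports the same measures under the same domains.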
Once this information is extracted from the electronic health record (EHR) and processed, natural language processing or machine learning can be used to enrich the data and derive information from them. The next step is to use artificial intelligence to construct patient journeys from the narrative text in the EHRs.
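As a toy illustration of the extraction step, a pattern match over a hypothetical note can pull a structured EDSS value out of free text. This is not OM1's actual pipeline; production systems use trained NLP models rather than a single regular expression:

```python
import re

# Hypothetical note text for illustration only.
note = "Patient seen for follow-up. EDSS 4.5 today, up from 3.0 last visit."

def extract_edss(text):
    """Return the first EDSS score (0.0-10.0) mentioned in free text, or None."""
    matches = re.findall(r"EDSS\s*(?:score\s*)?:?\s*(\d+(?:\.\d)?)", text)
    scores = [float(m) for m in matches if 0.0 <= float(m) <= 10.0]
    return scores[0] if scores else None

print(extract_edss(note))  # 4.5
```

The same idea, scaled up with models instead of patterns, is how unstructured measures like symptomatology or reasons for treatment disruption can be turned into analyzable fields.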
Renshaw’s group worked with OM1 on a pilot program that compared outcomes when a health system manages a patient group itself with outcomes when the patients have to be sent to an outside agency because the system doesn’t have access to the limited-distribution drug.
The medical centers that participated were the University of Arkansas for Medical Sciences Medical Center, UC San Diego Medical Center, Medical University of South Carolina, UT Southwestern Medical Center, UNC Medical Center, University of Kansas Medical Center, and University of Utah Health.
The researchers used everything in the electronic health records (EHRs) with International Classification of Diseases, Tenth Revision (ICD-10), codes for RA, MS, and malignant melanoma. The patients with RA served as the control group, because most RA products are commercially available without a payer block, Renshaw explained. In comparison, MS and malignant melanoma both have drugs with limited distribution and payer access issues.
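Cohort assignment by ICD-10 code might look like the following sketch. The diagnosis-code prefixes (M05/M06 for RA, G35 for MS, C43 for malignant melanoma) are the standard ICD-10 chapters, but the record format and helper function are hypothetical:

```python
# Hypothetical cohort selection by ICD-10 code prefix. The prefixes are
# standard ICD-10 chapters; the record structure is invented for illustration.
COHORTS = {
    "RA": ("M05", "M06"),       # rheumatoid arthritis
    "MS": ("G35",),             # multiple sclerosis
    "melanoma": ("C43",),       # malignant melanoma
}

def assign_cohort(icd10_codes):
    """Return the first cohort whose ICD-10 prefixes match any patient code."""
    for cohort, prefixes in COHORTS.items():
        if any(code.startswith(prefixes) for code in icd10_codes):
            return cohort
    return None

patients = [
    {"id": 1, "codes": ["G35", "R53.83"]},
    {"id": 2, "codes": ["M06.9"]},
    {"id": 3, "codes": ["I10"]},   # no qualifying diagnosis
]
by_cohort = {p["id"]: assign_cohort(p["codes"]) for p in patients}
print(by_cohort)  # {1: 'MS', 2: 'RA', 3: None}
```

Prefix matching is what lets a single rule capture the many specific subcodes (for example, M06.9) under each diagnosis family.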
He estimated that 80% of the important information lives in the EHR as unstructured data, such as provider notes and laboratory reports. The partnership with OM1 helped here, Renshaw explained, because OM1 could take those data and analyze them in a meaningful way.
The program is ongoing, but data for 6 of the 7 sites are in for RA and MS, with initial results, and the data for the seventh center are coming soon.
Glicklich showed the baseline patient characteristics—age, gender, a medical burden index for MS, disability, EDSS, and relapses in the prior 6 months—for the 6 centers. At baseline, age and gender were similar, but there was a 1.5-fold difference in disability between the centers with the highest and lowest scores and a 2-fold difference between the centers with the most and fewest relapses. The medical burden index also indicated that one center faced 50% higher future costs than another center.
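One common way to make such comparisons fair is an observed-to-expected (O/E) ratio, where each center's expected event count reflects its baseline case mix. The sketch below uses invented numbers and a deliberately simple pooled-rate model, not the study's actual risk-adjustment method:

```python
# Toy observed-to-expected (O/E) benchmarking: expected relapse counts come
# from a pooled rate per unit of baseline disability, so a center treating
# sicker patients isn't penalized. All numbers are invented for illustration.
centers = {
    "A": {"baseline_edss": [2.0, 3.0, 4.0], "relapses": [0, 1, 1]},
    "B": {"baseline_edss": [4.0, 5.0, 6.0], "relapses": [1, 2, 2]},
}

total_relapses = sum(sum(c["relapses"]) for c in centers.values())
total_edss = sum(sum(c["baseline_edss"]) for c in centers.values())
pooled_rate = total_relapses / total_edss  # relapses per EDSS point overall

oe = {}
for name, c in centers.items():
    observed = sum(c["relapses"])
    expected = pooled_rate * sum(c["baseline_edss"])  # predicted from case mix
    oe[name] = round(observed / expected, 2)

print(oe)  # {'A': 0.76, 'B': 1.14} — O/E < 1 means fewer events than case mix predicts
```

Center B has more relapses in absolute terms, but because its patients start sicker, the adjusted gap between the centers is far smaller than the raw counts suggest, which is the point of risk-adjusted benchmarking.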
“How do we actually adjust for all of these things to give a fair comparison to compete at the benchmark?” he asked.
Comparing the centers for MS, the risk adjustment showed that some centers are doing worse than others on condition score, clinical response, and events of interest, Glicklich said.
This may have been the first foray into a multisite outcomes study of this kind, but more research is coming that will look at new disease states and develop ways to make extracting EHR data an easier process.
“I think this is really just the beginning of how we’re going to use real-world evidence to improve care,” Renshaw said.