
Quality Data: Simplify Data Governance, Advance Interoperability, and Improve Analytics


Unsustainable healthcare costs are driving seismic changes to the way payers do business, requiring them to extract deeper insights from their data assets.

Unsustainable healthcare costs are driving seismic changes to the way payers do business. All healthcare stakeholders face significant pressure within the framework of value-based care to cut costs while simultaneously improving clinical outcomes and population health. Consequently, forward-looking operational and clinical leaders recognize that they must invest in innovative approaches and technologies such as automation, analytics, and artificial intelligence (AI) to stay relevant and competitive in a dynamic marketplace.

The ability to extract deeper insights from data assets sits at the heart of many of these strategies. Yet the volume of data generated in healthcare today is overwhelming: current estimates suggest that it doubles every 3 years, and it was projected to double every 73 days by 2020.

Getting in front of this exponential data expansion is a significant challenge for most organizations as they try to optimize the quality of the information used to drive decision-making. The good news is that accurate, reliable data is achievable with the right infrastructure and strategy in place.

The Power of Data Quality

Today’s payers are striving to transform and enrich their data for mission-critical activities such as claims processing, quality reporting, and member management and support. This process includes assembling all available data sources, extracting data from unstructured text, normalizing or codifying disparate information to common standards, and categorizing data into clinical concepts.
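To make these stages concrete, the sketch below (in Python, using invented stub data and toy matching logic rather than any particular vendor’s pipeline) walks a single member record through assembly, extraction, normalization, and categorization:

def assemble(member_id: str) -> dict:
    """Step 1: gather every available source for one member."""
    # Stub data standing in for claims feeds, EHR extracts, and clinical notes.
    return {
        "member_id": member_id,
        "claims": [{"system": "CPT", "code": "83036"}],  # HbA1c lab claim line
        "notes": ["Pt with T2DM, A1c 7.2, on metformin."],
    }

def extract(bundle: dict) -> dict:
    """Step 2: lift candidate facts out of unstructured text (toy logic)."""
    bundle["extracted"] = [{"term": "hemoglobin a1c", "value": 7.2}
                           for note in bundle["notes"] if "A1c" in note]
    return bundle

def normalize(bundle: dict) -> dict:
    """Step 3: codify disparate representations to a common standard."""
    # The CPT claim line and the extracted note term describe the same test,
    # so both normalize to one canonical LOINC code (4548-4, Hemoglobin A1c).
    sources = bundle["claims"] + bundle["extracted"]
    bundle["normalized"] = [{"system": "LOINC", "code": "4548-4"} for _ in sources]
    return bundle

def categorize(bundle: dict) -> dict:
    """Step 4: bucket normalized codes into clinical concepts for analytics."""
    bundle["concepts"] = {"diabetes-monitoring"} if bundle["normalized"] else set()
    return bundle

print(categorize(normalize(extract(assemble("M001"))))["concepts"])
# {'diabetes-monitoring'}

In practice, each step is backed by curated terminology maps and far richer logic, but the shape of the flow is the same.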

For most health plans, this is a strenuous exercise due to the proliferation of disparate data that remains locked in silos. For example, a payer involved in a population health initiative will need to collect data from a variety of sources, including claims, electronic health records, and emerging areas such as telehealth, genomics, and social determinants of health. Without a way of fully capturing data from each of these areas, payers risk negative downstream consequences that impact quality measure reporting, billing, and member engagement.

To illustrate the challenge, consider a population health initiative aimed at improving diabetes outcomes that requires analyzing laboratory glycated hemoglobin (HbA1c) data, a common glucose test that may be represented in more than 100 different ways. As payers attempt to pull laboratory data along with other clinical information from the variety of sources mentioned, the data will present in a number of formats: structured claims information such as Current Procedural Terminology (CPT); International Classification of Diseases, Tenth Revision (ICD-10); and Healthcare Common Procedure Coding System codes; semistructured clinical data such as labs; and unstructured text (eg, the patient’s medical history often contained in the medical record). All of this data must be normalized to industry standards before an accurate, reliable framework for analytics can be achieved.
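As a concrete illustration of the mapping problem, the sketch below (Python; the local lab names are invented, though CPT 83036 and LOINC 4548-4 are real codes associated with this test) collapses several source representations of the same HbA1c result into one canonical code:

# Invented examples of how one HbA1c result can arrive from different feeds;
# real payers may encounter 100-plus variants of names, codes, and units.
SOURCE_ROWS = [
    {"feed": "claims", "system": "CPT", "code": "83036"},             # claim line
    {"feed": "lab", "local_name": "HGB A1C"},                         # local lab A
    {"feed": "lab", "local_name": "Glycated Hemoglobin (whole bld)"}, # local lab B
    {"feed": "nlp", "term": "hemoglobin a1c"},                        # from a note
]

# Canonical target: LOINC 4548-4 (Hemoglobin A1c/Hemoglobin.total in Blood).
A1C_ALIASES = {"83036", "hgb a1c", "glycated hemoglobin (whole bld)",
               "hemoglobin a1c"}

def to_canonical(row: dict):
    """Map any known representation of the test to a single LOINC code."""
    key = (row.get("code") or row.get("local_name") or row.get("term", "")).lower()
    return ("LOINC", "4548-4") if key in A1C_ALIASES else None  # None -> curation

print([to_canonical(r) for r in SOURCE_ROWS])
# [('LOINC', '4548-4'), ('LOINC', '4548-4'), ('LOINC', '4548-4'), ('LOINC', '4548-4')]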

Building a Framework for Data Quality

A multifaceted strategy that engages technology, expertise, and the right processes is essential to achieving data quality. Comprehensive strategies must address terminology management and data governance from 3 vantage points.

1. Establish a Single Source of Truth Through Reference Data Management

Effective management of reference data is foundational to any data quality strategy. Composed of industry health information technology standards, such as ICD-10, RxNorm, Logical Observation Identifiers Names and Codes (LOINC), and Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), along with other proprietary content, reference data provides the building blocks for analytics efforts by establishing a framework of interoperability to support the free flow of information between systems. An optimal reference data management (RDM) strategy encompasses oversight and ongoing maintenance of these enterprise assets to ensure that all stakeholders are drawing from the most up-to-date terminology standards and a single source of truth for accurate analytics and reporting.

Advanced solutions exist that automate these functions and help healthcare organizations optimize 4 key tenets of an RDM strategy: (1) acquisition of all code sets used across an enterprise; (2) management of lists that represent clinical concepts spanning different terminologies; (3) integration and distribution of data into existing enterprise infrastructures; and (4) data governance that aligns people, processes, and technology.
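A minimal sketch of how these 4 tenets might hang together (Python; the registry class, method names, and sample value set are hypothetical and do not reflect any specific RDM product’s API):

from dataclasses import dataclass, field

@dataclass
class CodeSystem:
    """One acquired terminology release (tenet 1: data acquisition)."""
    name: str
    version: str
    codes: dict  # code -> display name

@dataclass
class ReferenceDataRegistry:
    """A toy single source of truth for enterprise reference data."""
    systems: dict = field(default_factory=dict)     # name -> CodeSystem
    value_sets: dict = field(default_factory=dict)  # tenet 2: list management

    def acquire(self, system: CodeSystem) -> None:
        # A governance workflow (tenet 4) would review before publishing.
        self.systems[system.name] = system

    def define_value_set(self, name: str, members: set) -> None:
        # A list groups codes drawn from different terminologies into one concept.
        self.value_sets[name] = members

    def publish(self) -> dict:
        # Tenet 3: distribute a versioned snapshot to downstream systems.
        return {"systems": {n: s.version for n, s in self.systems.items()},
                "value_sets": self.value_sets}

rdm = ReferenceDataRegistry()
rdm.acquire(CodeSystem("ICD-10-CM", "2024", {"E11.9": "Type 2 diabetes mellitus"}))
rdm.define_value_set("diabetes", {("ICD-10-CM", "E11.9"), ("SNOMED CT", "44054006")})
print(rdm.publish()["systems"])  # {'ICD-10-CM': '2024'}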

2. Normalize Clinical Data to Standards

Data normalization solutions can automatically map nonstandard clinical data, such as local lab or drug codes, to the standard terminologies maintained as part of an RDM strategy. This process bridges the gap between disparate systems by establishing semantic interoperability of data across the healthcare enterprise. Given the volume of disparate data that exists, many executives find that the business case for leveraging automation is an easy one to make: it eliminates burdensome, error-prone manual processes and ensures nothing is missed.
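The sketch below shows, in simplified form, what such automated mapping could look like (Python; the curated map and matching rules are illustrative assumptions, and production engines use far richer matching), including the escalation path that keeps anything unmapped from being silently dropped:

# Illustrative local-term map, maintained under the RDM strategy above.
CURATED_MAP = {
    "na serum": ("LOINC", "2951-2"),  # Sodium [Moles/volume] in Serum or Plasma
    "k serum": ("LOINC", "2823-3"),   # Potassium [Moles/volume] in Serum or Plasma
}

review_queue = []  # terms no rule could map, routed to a human curator

def normalize_term(local_term: str):
    """Try an exact match on a case/whitespace-cleaned key, else escalate."""
    key = " ".join(local_term.lower().split())
    if key in CURATED_MAP:
        return CURATED_MAP[key]
    review_queue.append(local_term)  # nothing is silently dropped
    return None

print(normalize_term("NA  Serum"))      # ('LOINC', '2951-2')
print(normalize_term("sodium, urine"))  # None
print(review_queue)                     # ['sodium, urine']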

3. Unlock Unstructured Data with Clinical Natural Language Processing

To fully ensure data quality, healthcare organizations must also address the challenges of data captured in free-text fields. Unstructured data currently accounts for as much as 80% of clinical documentation, essentially locking up clinically relevant patient data and making it unusable for downstream initiatives. Clinical natural language processing (NLP) solutions build on a foundation of comprehensive clinical content and provider-specific synonyms and acronyms to extract valuable data, such as problems, diagnoses, labs, medications, and immunizations, from unstructured text fields so that it can be used to improve health outcomes and increase quality of care.
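As a toy stand-in for a clinical NLP engine, the following sketch (Python; the lexicon and note text are invented, and real systems rely on trained models and comprehensive vocabularies) pulls typed facts out of a free-text note:

import re

# Tiny synonym/acronym lexicon standing in for a comprehensive clinical one.
LEXICON = {
    "t2dm": ("problem", "Type 2 diabetes mellitus"),
    "type 2 diabetes": ("problem", "Type 2 diabetes mellitus"),
    "metformin": ("medication", "metformin"),
    "a1c": ("lab", "Hemoglobin A1c"),
}

def extract_entities(note: str) -> list:
    """Scan free text for lexicon terms and emit typed, codifiable facts."""
    found = []
    for term, (kind, canonical) in LEXICON.items():
        if re.search(rf"\b{re.escape(term)}\b", note, re.IGNORECASE):
            found.append({"type": kind, "concept": canonical})
    return found

note = "58F with T2DM, A1c 7.2 last month; continues metformin 500 mg BID."
for fact in extract_entities(note):
    print(fact)
# {'type': 'problem', 'concept': 'Type 2 diabetes mellitus'}
# {'type': 'medication', 'concept': 'metformin'}
# {'type': 'lab', 'concept': 'Hemoglobin A1c'}

Once extracted, these facts feed the same normalization and categorization steps described above.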

When payers implement practices to achieve high data quality, they can more effectively extract value from their own data assets using emerging technologies such as machine learning and AI. Clinical and financial leaders are wise to start with the high-value areas that stand to benefit most from improved data quality and then progressively design more complex initiatives.

With the right investments targeted at improving data quality by establishing a single source of truth, normalizing structured data to standards, and unlocking unstructured data, the potential to draw deeper insights is sizeable and can support a health plan’s goal of succeeding in a competitive marketplace.
