Quality Data: Simplify Data Governance, Advance Interoperability, and Improve Analytics
September 25, 2019

Brian Diaz is the senior director of strategy with Wolters Kluwer Clinical Surveillance, Compliance and Data Solutions and is responsible for directing and managing the Health Language suite of content and service offerings. He has over 15 years of experience in healthcare information technology, including leadership roles in product management, marketing, and software development, and holds a Bachelor of Science in Computer Engineering from the University of Minnesota.
Unsustainable healthcare costs are driving seismic changes to the way payers do business. All healthcare stakeholders face significant pressure within the framework of value-based care to cut costs while simultaneously improving clinical outcomes and population health. Consequently, forward-looking operational and clinical leaders recognize that they must invest in innovative approaches and technologies such as automation, analytics, and artificial intelligence (AI) to stay relevant and competitive in a dynamic marketplace.

The ability to extract deeper insights from data assets sits at the heart of many of these strategies. Yet the volume of data generated in healthcare today is overwhelming: current estimates suggest that it doubles every 3 years, and by 2020 it is projected to double every 73 days.

Getting in front of this exponential data expansion is a significant challenge for most organizations as they try to optimize the quality of the information used to drive decision-making. The good news is that accurate, reliable data is achievable with the right infrastructure and strategy in place.

The Power of Data Quality
Today’s payers are striving to transform and enrich their data for mission-critical activities such as claims processing, quality reporting, and member management and support. This process includes assembling all available data sources, extracting data from unstructured text, normalizing or codifying disparate information to common standards, and categorizing data into clinical concepts.
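As a rough illustration of these steps, the sketch below strings them together in Python. Every function, field, and code name here is a hypothetical placeholder for the purposes of the example, not a specific vendor’s API or data model.

```python
# Minimal sketch of the enrichment pipeline described above: assemble,
# extract, normalize, categorize. All names are illustrative assumptions.

def assemble_sources(feeds):
    """Pool raw records from claims, EHR extracts, lab feeds, and so on."""
    return [record for feed in feeds for record in feed]

def extract_from_text(record):
    """Pull clinical facts out of unstructured notes (in practice an NLP
    step; shown here as a pass-through placeholder)."""
    return record

def normalize(record, crosswalk):
    """Codify a local source code to a common standard (eg, LOINC, ICD-10)."""
    record["standard_code"] = crosswalk.get(record["local_code"])
    return record

def categorize(record, value_sets):
    """Group standardized codes into named clinical concepts for analytics."""
    record["concepts"] = [name for name, codes in value_sets.items()
                          if record["standard_code"] in codes]
    return record

# Hypothetical usage: one lab record flows through all four stages.
feeds = [[{"local_code": "HGBA1C"}]]
crosswalk = {"HGBA1C": "4548-4"}
value_sets = {"diabetes monitoring": {"4548-4"}}

for record in assemble_sources(feeds):
    record = categorize(normalize(extract_from_text(record), crosswalk),
                        value_sets)
    print(record)
```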

For most health plans, this is a strenuous exercise because of the proliferation of disparate data that remains locked in silos. For example, a payer involved in a population health initiative will need to collect data from a variety of sources, including claims, electronic health records, and emerging areas such as telehealth, genomics, and social determinants of health. Without a way to fully capture data from each of these areas, payers risk negative downstream consequences that affect quality measures reporting, billing, and member engagement.

To illustrate the challenge, consider a population health initiative aimed at improving diabetes outcomes that requires analyzing laboratory glycated hemoglobin (HbA1c) data, a common glucose test that may be represented more than 100 different ways. As payers pull laboratory data along with other clinical information from the variety of sources mentioned, it arrives in a number of formats: structured claims information such as Current Procedural Terminology; International Classification of Diseases, Tenth Revision (ICD-10); and Healthcare Common Procedure Coding System codes; semistructured clinical data such as lab results; and unstructured text (eg, the patient’s medical history often contained in the medical record). All of this data must be normalized to industry standards before an accurate, reliable framework for analytics can be achieved.
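To make the HbA1c example concrete, the sketch below collapses a handful of local spellings of the test to a single standard code. The variant strings are hypothetical samples (real feeds may contain far more, per the 100-plus representations cited above), and LOINC 4548-4 (Hemoglobin A1c/Hemoglobin.total in Blood) is shown as one commonly used normalization target.

```python
# Hedged example: mapping disparate local representations of the HbA1c test
# to one standard LOINC code so results can be analyzed together. The
# variant list is an illustrative assumption, not an exhaustive crosswalk.

HBA1C_VARIANTS = {
    "HGB A1C", "HBA1C", "GLYCOHEMOGLOBIN",
    "HEMOGLOBIN A1C %", "A1C", "GLYCATED HGB",
}

HBA1C_LOINC = "4548-4"  # Hemoglobin A1c/Hemoglobin.total in Blood

def normalize_lab_name(local_name: str) -> str | None:
    """Return the standard code if the local name is a known HbA1c variant."""
    cleaned = " ".join(local_name.upper().split())  # case/whitespace cleanup
    if cleaned in HBA1C_VARIANTS:
        return HBA1C_LOINC
    return None  # unrecognized names would go to a manual mapping queue

# Three differently spelled results all resolve to the same code.
for name in ["HbA1c", "hgb  a1c", "Glycated HGB"]:
    print(name, "->", normalize_lab_name(name))
```

Once every result carries the same standard code, the diabetes initiative can count, trend, and report HbA1c values from all sources as a single measure rather than reconciling each spelling by hand.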
