Nicole Neumarker, executive vice president of development and innovation at Cotiviti, discusses the key areas to remember when developing new data models in health care.
What are some of the struggles with developing large data models for the health care industry?
Neumarker: So, in health care, data interoperability has been a focus and a massive investment over time, but I would say the yield of that investment is still a bit in question at this point. When we look at what we're doing to lay down critical factors for successful data models, we think about 3 things.

The first is how we correlate and master our member, or patient, data. When we receive a massive amount of data about a patient, how do we make sure we're correlating the right information to the right patient and not double counting patients? That critical stage is mastering member data.

The second is that, when you think about the longitudinal data of a member, it's really the temporality of the data that you have to figure out how to structure appropriately, so that you understand what is happening in time and the sequence of events in health care. In your own life, you can recognize that a lot of things happen over time, that there are gaps in that timeline, and that some gaps have significant impact while others don't. So, understanding the temporality of data is a second very important variable.

The third is the standard taxonomies and standard medical data that you want to bring into those data models. For example, having a very consistent set of provider data is critical, as is the ability to bring in standard medical taxonomies, like HCC [hierarchical condition category] codes and other taxonomies we use to understand the data behind the data, so to speak.

You have to take those 3 things into account--mastering your member data, the temporality of member data, and then, finally, standard ontologies, like provider data or HCC codes. Those are the 3 key areas we focus on for building out our data models.
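The 3 steps Neumarker describes can be illustrated with a minimal sketch. This is not Cotiviti's implementation; the record fields, the matching rule (name plus date of birth), the 180-day gap threshold, and the code-to-category mapping are all simplified assumptions standing in for real master-data-management logic and real HCC groupers.

```python
from datetime import date

# Hypothetical toy claim records; field names are assumptions for illustration.
raw_claims = [
    {"member_id": "A1", "name": "Jane Doe", "dob": date(1980, 5, 1),
     "code": "E11.9", "service_date": date(2023, 1, 10)},
    {"member_id": "B7", "name": "Jane Doe", "dob": date(1980, 5, 1),
     "code": "I10", "service_date": date(2023, 6, 2)},
    {"member_id": "A1", "name": "Jane Doe", "dob": date(1980, 5, 1),
     "code": "E11.9", "service_date": date(2024, 2, 20)},
]

# 1. Mastering member data: collapse records that share identifying traits
#    under one master key so the same patient is not double counted,
#    even when source systems assign different member IDs.
def master_key(rec):
    return (rec["name"].lower(), rec["dob"])

members = {}
for rec in raw_claims:
    members.setdefault(master_key(rec), []).append(rec)

# 2. Temporality: order each member's events in time and flag large gaps
#    in the longitudinal timeline (180 days is an arbitrary threshold).
GAP_DAYS = 180
gaps = []
for key, events in members.items():
    events.sort(key=lambda r: r["service_date"])
    for prev, cur in zip(events, events[1:]):
        gap = (cur["service_date"] - prev["service_date"]).days
        if gap > GAP_DAYS:
            gaps.append((key, gap, cur["service_date"]))

# 3. Standard taxonomies: map raw diagnosis codes to a standard grouping
#    (a made-up mapping here, standing in for HCC-style categories).
DX_TO_CATEGORY = {"E11.9": "Diabetes", "I10": "Hypertension"}
member_categories = {
    key: sorted({DX_TO_CATEGORY.get(e["code"], "Unmapped") for e in events})
    for key, events in members.items()
}

print(f"distinct members: {len(members)}")      # the 2 member IDs master to 1 patient
print(f"timeline gaps over {GAP_DAYS} days: {gaps}")
print(f"categories per member: {member_categories}")
```

Running this, the two different member IDs resolve to a single mastered patient, the sorted timeline surfaces one gap longer than the threshold, and the raw codes roll up to standard condition categories — a toy version of the 3 concerns described above.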