Automating Care Quality Measurement With Health Information Technology

The authors discuss the design and evaluation of a health information technology platform that enables comprehensive, automated assessment of care quality in electronic medical records.
Published Online: June 15, 2012
Brian Hazlehurst, PhD; Mary Ann McBurnie, PhD; Richard A. Mularski, MD, MSHS, MCR; Jon E. Puro, MPA-HA; and Susan L. Chauvie, RN, MPA-HA
Objectives: To assess the performance of a health information technology platform that enables automated measurement of asthma care quality using comprehensive electronic medical record (EMR) data, including providers’ free-text notes.


Study Design: Retrospective data study of outpatient asthma care in Kaiser Permanente Northwest (KPNW), a midsized health maintenance organization (HMO), and OCHIN, Inc, a group of Federally Qualified Health Centers.


Methods: We created 22 automated quality measures addressing guideline-recommended outpatient asthma care. We included EMRs of asthma patients aged ≥12 years during a 3-year observation window and narrowed this group to those with persistent asthma (13,918 KPNW; 1825 OCHIN). We validated our automated quality measures using chart review for 818 randomly selected patients, stratified by age and sex for each health system. In both health systems, we compared the performance of these measures against chart review.


Results: Most measures performed well in the KPNW system, where accuracy averaged 88% (95% confidence interval [CI] 82%-93%). Mean sensitivity was 77% (95% CI 62%-92%) and mean specificity was 84% (95% CI 75%-93%). The automated analysis was less accurate at OCHIN, where mean accuracy was 80% (95% CI 72%-89%), with mean sensitivity and specificity of 52% (95% CI 35%-69%) and 82% (95% CI 69%-95%), respectively.


Conclusions: To achieve comprehensive quality measurement in many clinical domains, the capacity to analyze text clinical notes is required. The automated measures performed well in the HMO, where practice is more standardized. The measures need to be refined for health systems with more diversity in clinical practice, patient populations, and setting.


(Am J Manag Care. 2012;18(6):313-319)

Quality improvement requires comprehensive quality measurement, which in turn requires robust automation to be broadly applicable and reliable.

  • We designed a health information technology platform and quality assessment method that overcome informatics challenges created by text-based guidelines, nonstandard electronic clinical data elements, and text clinical documentation.

  • We implemented our method in 2 healthcare systems to assess outpatient asthma care.

  • The automated measures generally performed well in the health maintenance organization setting, where clinical practice is more standardized. Refinements are needed for health systems with more diversity in clinical practice, patient population, and setting.

To guide quality improvement, we must comprehensively measure the quality of healthcare. Currently, this process is hindered by expensive, time-consuming, and sometimes inconsistent manual chart review.1 Electronic medical records (EMRs), which have become more prevalent as a result of the American Recovery and Reinvestment Act of 2009, promise to make routine and comprehensive quality measurement a reality.2 However, informatics challenges have hindered progress: care guidelines may not be specified precisely enough to allow automated measurement; the needed data are not standardized and are subject to variation across EMRs and clinical practice; and much of the data required are in the free-text notes that care providers use to document clinical encounters, which may not be accessible.

Our research team, funded by the Agency for Healthcare Research and Quality (AHRQ), designed and implemented an automated method to comprehensively assess outpatient asthma care. In doing so, we aimed to develop a platform that could automate care measurement for any condition.3

SETTING

We conducted a retrospective data study of outpatient asthma care in 2 distinct healthcare systems: Kaiser Permanente Northwest (KPNW) and the Federally Qualified Health Centers (FQHCs) associated with OCHIN, Inc. We obtained institutional review board approval and executed Data Use Agreements between research organizations for this study.

OCHIN

OCHIN serves the data management needs of FQHCs and other community health centers that care for indigent, uninsured, and underinsured populations. OCHIN has licensed an integrated Practice Management and EMR data system from Epic Systems. We approached 8 FQHCs associated with OCHIN (caring for 173,640 patients at 44 locations through 2010), and all agreed to participate in this study.

Kaiser Permanente Northwest

KPNW is a nonprofit, group-model health maintenance organization (HMO) that provides comprehensive, prepaid healthcare to members. KPNW serves about 475,000 members in Oregon and Washington. All patient contacts are recorded in a single, comprehensive EMR, the HealthConnect system.

Study Population

We included the electronic records of patients 12 years or older at the start of 2001 (KPNW) or 2006 (OCHIN) who had at least 1 diagnosis code for asthma (35,775 in KPNW and 6880 in OCHIN). We then narrowed this population to those defined as having persistent asthma (Table 1) to reach the target population of interest (13,918 KPNW; 1825 OCHIN).

METHODS

Developing the Measure Set

To automate the assessment of asthma care quality, recommended care steps from credentialed guidelines needed to be converted into quantifiable measures. The development of each measure began with a concise proposition about the recommended care for specific patients, such as “patients seen for asthma exacerbation should have a chest exam.” We used an 8-stage iterative process to ensure that our quality measures would be comprehensive and current. This process included 4 vetting steps with local and national experts. We identified 25 measures from comprehensive, rigorous quality measure sets, primarily derived from RAND’s Quality Assessment system.1,4,5 We then revised the measures to reflect updated guidelines6 and restricted our attention to asthma care in the outpatient setting, resulting in a set of 22 measures that we labeled the Asthma Care Quality (ACQ) measure set (Table 2).

Operationalizing the Measure Set

Each measure was specified as a ratio based on its applicability to individual patients (denominator) and evidence that the recommended care had been delivered (numerator). Performance on each measure can then be reported as the percentage of patients who received recommended care from among those for whom that care was indicated. For example, the national RAND study of McGlynn and colleagues demonstrated that across 30 clinical conditions, Americans received about 55% of recommended care.1 Ratios for each measure can be produced at the patient, provider, clinic, and health-system levels.
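As a simplified sketch of this ratio logic (in Python; the boolean flags and field names are illustrative rather than the study's actual data model), measure performance can be computed as the share of denominator-eligible patients whose records contain evidence of the recommended care:

    from typing import Dict, Iterable

    def measure_rate(patients: Iterable[Dict]) -> float:
        """Percentage of eligible patients who received the recommended care.

        Each patient record is assumed to carry two boolean flags:
          'eligible' - patient meets the measure's denominator criteria
          'received' - recommended care was documented in the measure interval
        """
        denominator = [p for p in patients if p["eligible"]]
        numerator = [p for p in denominator if p["received"]]
        if not denominator:
            return float("nan")  # measure not applicable to this population
        return 100.0 * len(numerator) / len(denominator)

    # Example: 3 of 4 eligible patients received a chest exam -> 75%
    example = [
        {"eligible": True,  "received": True},
        {"eligible": True,  "received": False},
        {"eligible": True,  "received": True},
        {"eligible": True,  "received": True},
        {"eligible": False, "received": False},  # not in the denominator
    ]
    print(measure_rate(example))  # 75.0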

We investigated providers’ clinical practices related to each measure in the ACQ measure set and how care was documented and stored in the EMR. Each measure’s numerator requires a “measure interval,” which is the time window during which the care events must take place. The measure interval is oriented around an index date that is a property of denominator inclusion. For example, for the measure “patients seen for asthma exacerbation should have a chest exam,” the index date is the exacerbation encounter and the measure interval includes only that encounter. On the other hand, for the measure “patients with persistent asthma should have a flu vaccination annually,” the index date is the event that qualifies the patient as having persistent asthma and the measure interval is operationalized to include encounters 6 months before through 12 months after the index date.
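The measure-interval logic can be illustrated with a brief Python sketch; the 30-day month approximation and the window parameters passed in are simplifications, and the actual windows are specified per measure:

    from datetime import date, timedelta

    def in_measure_interval(event_date: date, index_date: date,
                            months_before: int, months_after: int) -> bool:
        """True if a care event falls inside the measure interval around the index date.

        Months are approximated as 30-day periods for this sketch.
        """
        start = index_date - timedelta(days=30 * months_before)
        end = index_date + timedelta(days=30 * months_after)
        return start <= event_date <= end

    # Flu vaccination measure: the index date qualifies the patient as having
    # persistent asthma; the interval spans 6 months before to 12 months after.
    index_date = date(2007, 3, 1)
    print(in_measure_interval(date(2006, 11, 15), index_date, 6, 12))  # True
    print(in_measure_interval(date(2006, 1, 10), index_date, 6, 12))   # False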

Applying the Measure Set

For each of the 22 quality measures, we first defined an observation period (in our case, 3 years of clinical events) and divided it into a period for denominator qualification (the selection period) followed by an evaluation period, during which, in most cases, the prescribed care events were identified. For this study, we used a 2-year selection period based on a modified version of the Healthcare Effectiveness Data and Information Set asthma criteria to identify patients with persistent asthma (used in all of our measures) or those presenting with an asthma exacerbation (used in 36% of the measures in our set). We identified patients as having persistent asthma if they met minimum criteria for asthma-related utilization or if this diagnosis could be specifically determined from the provider’s clinical notes (Table 1). Asthma exacerbation criteria were based on hospital discharge diagnosis or an outpatient visit associated with a glucocorticoid order/dispensing and a text note about exacerbation.
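A minimal sketch of how the observation window might be split and a patient qualified as having persistent asthma follows; the visit threshold and note flag below are hypothetical stand-ins for the modified HEDIS criteria summarized in Table 1:

    from datetime import date
    from typing import Dict, List

    def split_observation_period(start: date, end: date, selection_years: int = 2):
        """Divide the observation period into a selection period and an evaluation period."""
        selection_end = date(start.year + selection_years, start.month, start.day)
        return (start, selection_end), (selection_end, end)

    def has_persistent_asthma(encounters: List[Dict], selection_start: date,
                              selection_end: date, min_asthma_visits: int = 2) -> bool:
        """Hypothetical qualification rule: enough asthma-coded visits in the selection
        period, or a clinical note explicitly documenting persistent asthma."""
        in_window = [e for e in encounters if selection_start <= e["date"] < selection_end]
        coded_visits = sum(1 for e in in_window if "asthma_dx" in e["codes"])
        noted = any(e.get("note_persistent_asthma", False) for e in in_window)
        return coded_visits >= min_asthma_visits or noted

    # 3-year observation window: 2-year selection period, then a 1-year evaluation period.
    selection, evaluation = split_observation_period(date(2006, 1, 1), date(2009, 1, 1))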

Automated System Design

We designed a quality measurement system as a “pipeline” of transformation and markup steps applied to encounter-level EMR data. The goal was to capture all of the clinical events required to assess care quality (Figure).

Data Extraction

Data begin traveling through the pipeline when they are extracted from each EMR system’s data warehouse. These data extracts—produced by a component called the EMR Adapter—contain data aggregated into records at the encounter (visit) level for all patients. In our study, these records included the coded diagnoses, problems, and medical history updates; medications ordered, dispensed, and noted as current or discontinued; immunizations, allergies, and health maintenance topics addressed; and procedures ordered, progress notes, and patient instructions.
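One illustrative way to represent such an encounter-level extract record in code is shown below; the field names are assumptions made for the sketch, not the EMR Adapter's actual output schema:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class EncounterRecord:
        """One encounter-level record produced by the EMR extract (illustrative schema)."""
        patient_id: str
        provider_id: str
        encounter_id: str
        encounter_date: date
        diagnoses: List[str] = field(default_factory=list)     # coded diagnoses, problems, history
        medications: List[str] = field(default_factory=list)   # ordered, dispensed, current, discontinued
        immunizations: List[str] = field(default_factory=list)
        allergies: List[str] = field(default_factory=list)
        procedures: List[str] = field(default_factory=list)
        progress_note: str = ""                                 # free-text clinical note
        patient_instructions: str = ""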

The data are then exported from the EMR data warehouse (typically, a relational database) into file-based eXtensible Markup Language (XML) documents according to a local specification. The first transformation step involves converting locally defined XML formats into a common, standard XML format conforming to the HL7 Clinical Document Architecture (CDA) specification.7
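The sketch below illustrates the flavor of this first transformation step using Python's standard XML library; the “local” element names are invented, and the output is a heavily simplified CDA-like skeleton rather than a conformant HL7 CDA document:

    import xml.etree.ElementTree as ET

    def local_to_cda_like(local_xml: str) -> str:
        """Convert a hypothetical local encounter export into a simplified CDA-like document."""
        local = ET.fromstring(local_xml)
        doc = ET.Element("ClinicalDocument")  # simplified; real CDA requires far more structure
        record_target = ET.SubElement(doc, "recordTarget")
        ET.SubElement(record_target, "patientId").text = local.findtext("patient/id", "")
        body = ET.SubElement(doc, "component")
        for dx in local.findall("diagnoses/dx"):
            obs = ET.SubElement(body, "observation")
            ET.SubElement(obs, "code").text = dx.get("code", "")
        note = ET.SubElement(body, "section")
        ET.SubElement(note, "text").text = local.findtext("progressNote", "")
        return ET.tostring(doc, encoding="unicode")

    local_example = """
    <encounter>
      <patient><id>12345</id></patient>
      <diagnoses><dx code="493.00"/></diagnoses>
      <progressNote>Lungs clear to auscultation. Asthma stable.</progressNote>
    </encounter>
    """
    print(local_to_cda_like(local_example))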

Concept Markup

The CDA provides a canonical representation of encounter-level data that is used as an input to our medical record classification system, MediClass.8 MediClass uses natural language processing and rules defining logical combinations of marked-up and originally coded data to generate concepts that are then inserted into the CDA document. This system has been successfully used to assess guideline adherence for smoking cessation,9 to identify adverse events due to vaccines,10 and for other applications that require extracting clinical data from EMR text notes.
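The general idea of rule-based concept markup over note text can be illustrated with the following sketch; it is not the MediClass system itself, and the trigger phrases and concept names are invented for illustration:

    import re

    # Illustrative phrase rules only; not MediClass's actual lexicon or rule language.
    CONCEPT_RULES = {
        "ChestExamPerformed": [r"\blungs? (are )?clear\b", r"\bchest exam\b", r"\bauscultation\b"],
        "AsthmaExacerbation": [r"\basthma (flare|exacerbation)\b", r"\bstatus asthmaticus\b"],
        "FluVaccineGiven":    [r"\b(influenza|flu) (vaccine|shot) (given|administered)\b"],
    }

    def mark_up_concepts(note_text: str) -> set:
        """Return the set of concept names whose trigger phrases appear in the note."""
        found = set()
        lowered = note_text.lower()
        for concept, patterns in CONCEPT_RULES.items():
            if any(re.search(p, lowered) for p in patterns):
                found.add(concept)
        return found

    print(mark_up_concepts("Pt seen for asthma flare. Lungs clear after neb treatment."))
    # e.g. {'AsthmaExacerbation', 'ChestExamPerformed'} (set order may vary)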

Up to this point, data processing is performed locally, within the secure data environments of each study site. The next step filters these data to identify only clinical events (including specific concepts identified in the text notes) that are part of the quality measures of the study. The result is a single file of measure set–specific clinical event data, in comma-delimited format, called the Events Data set. Each line in this file describes a patient, provider, and encounter, along with a single event (and attributes specific to that event) that is part of 1 or more measures in the set.
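A sketch of how measure-relevant events might be filtered and written to a comma-delimited Events Data set appears below; the event vocabulary, input structure, and column layout are illustrative, not the study's file specification:

    import csv

    # Events relevant to the measure set (illustrative subset).
    MEASURE_EVENTS = {"asthma_dx", "chest_exam", "flu_vaccine", "controller_rx", "exacerbation"}

    def write_events_dataset(marked_up_encounters, path):
        """Filter encounter data to measure-relevant events; one event per output row."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["patient_id", "provider_id", "encounter_id",
                             "encounter_date", "event", "event_attributes"])
            for enc in marked_up_encounters:
                # enc["events"] is assumed to be a list of (event, attributes) pairs.
                for event, attributes in enc["events"]:
                    if event in MEASURE_EVENTS:
                        writer.writerow([enc["patient_id"], enc["provider_id"],
                                         enc["encounter_id"], enc["date"],
                                         event, attributes])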

Quality Measurement

The distinct data pipelines from each health system converge into a single analysis environment at the data coordinating center, where quality measures are computed. The Events Data set files are transferred to a central location (KPNW) for final analysis and processing. Here, information contained in the aggregate events data set is processed to provide the clinical and time-window criteria for identifying patients who meet numerator and denominator criteria for each measure. Finally, the proportion of patients receiving recommended services is computed.
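As a simplified sketch of this final analysis step, per-measure proportions can be computed from the aggregated Events Data set; the eligibility and care-receipt predicates below are hypothetical placeholders for each measure's clinical and time-window criteria:

    import csv
    from collections import defaultdict

    def compute_measure_rates(events_path, measures):
        """Compute the proportion of eligible patients who received each recommended service.

        `measures` maps a measure name to a pair of predicates over a patient's event list:
        (is_eligible, received_care).
        """
        events_by_patient = defaultdict(list)
        with open(events_path, newline="") as f:
            for row in csv.DictReader(f):
                events_by_patient[row["patient_id"]].append(row)

        rates = {}
        for name, (is_eligible, received_care) in measures.items():
            eligible = [p for p, evts in events_by_patient.items() if is_eligible(evts)]
            met = [p for p in eligible if received_care(events_by_patient[p])]
            rates[name] = 100.0 * len(met) / len(eligible) if eligible else None
        return rates

    # Example (simplified) measure definitions keyed to events in the Events Data set:
    measures = {
        "annual_flu_vaccination": (
            lambda evts: any(e["event"] == "asthma_dx" for e in evts),    # denominator criteria
            lambda evts: any(e["event"] == "flu_vaccine" for e in evts),  # numerator criteria
        ),
    }
    # rates = compute_measure_rates("events.csv", measures)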

RESULTS
