Mechanisms to better codify clinical outcomes and intermediate outcome milestones are necessary to make the fullest use of EHR data for comparative effectiveness research.
This commentary is meant to set the stage for further discussion about how the objective of a learning healthcare system can be advanced through better specifying requirements to support secondary data use. Recent federal initiatives seek to foster widespread health information technology adoption in the hopes of improving the efficiency and efficacy of our nation’s health system. Development of a framework for codifying clinical outcomes would support those objectives primarily through making it easier to uncover associative patterns in patient care data. Put simply, the explicit classification of patient outcomes at the point of care seems to be a prerequisite to foster the most rapid exploration of achievable outcomes and their determinants. Considerations in such an endeavor include attributional validity, accounting for treatment appropriateness, incorporating patient perspectives, and evaluating the impacts of linkages to pay-for-performance programs.
(Am J Manag Care. 2010;16(12 Spec No.):e327-e329)
Recognizing the critical role of health information and the use of certain electronic health record (EHR) functions in the improvement of our nation’s health,1,2 the federal government is in the process of accelerating EHR adoption. Although the strategy being implemented across the country is multifaceted, the cornerstone of the approach is the “meaningful use” incentive program. Beginning in 2011, this program will provide incentive payments to eligible professionals and hospitals as they adopt, implement, and demonstrate the meaningful use of certified EHR products. The program is compelling because it recognizes that technology adoption can influence the quality and efficiency of care only to the extent that the technology is effectively used. The first phase of the meaningful use program provides some well-specified functions and related use thresholds. Later phases aim to raise the bar—fostering technology use that is yet more meaningful to the delivery of high-quality and thus more cost-effective care.
The initial criteria are dominated by process measures of technology use and care quality. This is quite understandable given our health system’s lagging technological maturity. However, moving beyond structure and process, we ultimately strive to understand and improve outcomes. Toward that end we propose the development of a framework to support the codification of observed outcomes at the point of care (POC).
Widespread adoption of health information technology (HIT) is being promoted to realize gains in the efficiency of transactions between healthcare entities; to expand the information available about a particular patient at the POC; to allow more informed, coordinated, and efficient decision making; and in the hopes of expanding the evidence base by leveraging the data captured in the course of care delivery for secondary uses. Progress toward these objectives requires (1) infrastructure investments to accelerate access to HIT tools and exchange platforms; (2) provision of technical assistance to elements of our health system requiring it; (3) consensus around standards and protocols; (4) specificity around desired functionality; (5) availability of tools that meet some minimum threshold of usability and safety assurance; (6) alignment of incentives to address the mismatch between providers who are required to invest in technology and where value from those investments is realized (providers, payers, patients, and society); and (7) forethought regarding potential secondary data uses and requirements needed to support them. Diligent efforts are under way on all these fronts. We seek to add to the discussion on the last point—secondary data use requirements.
A Learning Health System to Increase the Value of Care
The Institute of Medicine recently devoted much effort to developing the concept of a learning healthcare system “designed to generate and apply the best evidence for the collaborative health care choices of each patient and provider; to drive the process of discovery as a natural outgrowth of patient care; and to ensure innovation, quality, safety, and value in health care.”3 There are many interrelated objectives of such a system, such as promoting personalized medicine, exploring real-world effectiveness, understanding treatment efficacy in understudied subpopulations, uncovering factors that contribute to disease susceptibilities, and fostering inclusion of patient preferences in medical decisions. However, a central tenet is strengthening the evidence base around the practice of medicine and how that practice influences the trajectory of disease and clinical outcomes. Outcomes—changes in particular symptoms or physical functions—are influenced by many different factors, including disease severity, comorbidity, genetic profile, treatment choice, sex, health behaviors, and patient compliance. The degree to which changes in outcomes can be attributed to the care provided (ie, the attributional validity of outcome metrics) is typically evaluated post hoc as part of a risk adjustment process.4 Capturing better information at the POC would dramatically improve our understanding of outcomes and their determinants.
We propose the development and use of an outcome-coding framework to improve the analytic utility of EHR-collected data, in effect expanding the horizons of meaningful use to society generally. Improving the descriptive data on observed outcomes and intermediate outcomes could go well beyond what is available in restricted clinical trial settings and could provide important insight into the value of various treatments among understudied subpopulations and complicated/costly patients not otherwise effectively segmented in randomized clinical trials. Clinical trials cannot feasibly examine the impact of new medications and treatments in all subsets of the population who ultimately use them. Long-term impacts from the use of certain drugs can be assessed more effectively through analysis of real-world data. Problems that develop after extended use of a drug, such as in the case of Vioxx, may be caught earlier in a health system better able to uncover associative patterns in patient care data. Although we readily admit the limitations and challenges of secondary data use and observational studies, others have also highlighted the potential of augmenting evidence from randomized clinical trials with information gleaned from EHRs.5
Much thought also has been given to the use of real-world observational data for developing reimbursement policy among private insurers.6 Another federal government priority, comparative effectiveness research, is being aggressively pursued in the hopes of advancing the quality and value of health services delivery, in part through refining care guidelines and insurance coverage decisions. For example, some current care guidelines would benefit from mechanisms to better tailor treatment approaches to the specific risk profiles of individual patients,7 and some coverage decisions would benefit from greater flexibility in regard to off-label uses as the evidence base behind a particular pharmaceutical matures with its use in patient care.8 Clearer real-world evidence, garnered from outcome-coded EHR data, would allow payers greater confidence that they are paying for the right things and support the development of value-based insurance plans.9
Coding Outcomes to Strengthen the Evidence Base
To advance the utility of EHR data, outcome codes along the same lines as International Classification of Diseases (ICD) codes should be developed. Just as providers render some professional judgment in the selection of an ICD-9 or ICD-10 code, they also should be able to effectively render a coded observation on a particular patient’s outcome that can later be associated with that patient’s treatment profile and other data captured in the EHR. To put outcome codes in context, records also would require some synthesis of how the treatment has affected disease progression, control of pain, improvement of lab values, and so forth (ie, provider opinion would be solicited regarding outcome attribution).
Effectively codifying outcomes requires a framework that balances many characteristics of such a system. Interrater reliability, definitional validity, generalizability, the level of disease specification, and administrative burden all require careful consideration. Such a framework should be sensitive enough to capture clinically relevant changes, yet not so complicated as to be difficult to interpret broadly. Developing a framework that is consistent with standardized terminology and that incorporates known clinical indicators when available (eg, lab threshold values, blood pressure, preventable utilization) will aid long-term research efforts that follow patients through multiple episodes of care across provider settings.10 We may even find clinical value at the POC in the ability to track individual patient progress using a more standardized outcome terminology.
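To make the shape of such a record concrete, the following is a purely illustrative sketch of what a point-of-care outcome observation might contain. The field names, the outcome code itself, and the attribution categories are hypothetical assumptions introduced here for illustration; they are not part of any existing standard or of the framework proposed in this commentary.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class Attribution(Enum):
    """Hypothetical categories for the provider's judgment of outcome attribution."""
    UNRELATED = "unrelated"
    POSSIBLY_RELATED = "possibly_related"
    LIKELY_RELATED = "likely_related"


@dataclass
class OutcomeObservation:
    """One point-of-care outcome code, recorded alongside the diagnosis code."""
    patient_id: str
    observed_on: date
    diagnosis_code: str          # ICD-9/ICD-10 code for the condition being tracked
    outcome_code: str            # standardized outcome code (illustrative placeholder)
    attribution: Attribution     # provider opinion on outcome attribution
    clinical_indicators: dict = field(default_factory=dict)   # eg, lab values, BP
    patient_reported_score: Optional[float] = None            # eg, a PROMIS score


# Example record: improved blood pressure control judged likely related to treatment
obs = OutcomeObservation(
    patient_id="P-001",
    observed_on=date(2010, 6, 1),
    diagnosis_code="401.9",           # essential hypertension (ICD-9-CM)
    outcome_code="OUT-BP-IMPROVED",   # hypothetical code; no such standard exists
    attribution=Attribution.LIKELY_RELATED,
    clinical_indicators={"systolic_bp": 128, "diastolic_bp": 82},
)
```

The sketch pairs the coded outcome with known clinical indicators and an explicit attribution judgment, reflecting the balance described above between clinical relevance and broad interpretability.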
Other researchers have discussed the need for broader inclusion of “problem codes” to evaluate treatment appropriateness (treatment often seeks to address specific symptoms rather than diagnoses).11 This issue raises an interesting dilemma in the pursuit of codifying outcomes. Given an objective of measuring treatment effectiveness, we need to be careful that treatment appropriateness is not assumed away in some cases (eg, self-limiting disorders). Another broad consideration is the strong desire to pay for performance. Although paying directly for outcomes sounds attractive, outcome codes should perhaps not be used as a direct basis for payment or quality reporting, in order to preserve the validity of outcome-coded data. The conflict of interest seems insurmountable if payments are associated with specified levels of outcomes and providers are assigned the task of selecting the level achieved by their patients. The indirect use of outcome-coded data will help to ensure that payers are paying for higher value care and not merely upcoding motivated by the prospect of higher reimbursement.12,13
Of course, outcomes also can be somewhat subjective and patient-specific. A good outcome is defined, in part, by the patient’s feelings about the outcome. Important work is already under way on standardized instruments for capturing patient-reported outcomes. The National Institutes of Health maintains a set of tools called the Patient Reported Outcomes Measurement Information System, which includes standardized outcome measures and instruments (www.NIHpromis.org).14 Coupling patient perspectives with how clinicians characterize outcomes (eg, the best possible outcome for this specific patient given his or her comorbidities, compliance issues, etc) presents a novel opportunity to better understand complex trade-offs that often are implicit in healthcare delivery. For example, pursuing particular lab values or symptom control while accepting the side effects associated with increasing medication dosages exhibits such a trade-off, one in which both the patient’s and the provider’s perspectives are informative. A health system that better understands outcomes, their determinants, and related patient perspectives is truly a learning system capable of consistently achieving meaningful outcomes.
Author Affiliations: From the Altarum Institute (DCA), Ann Arbor, MI; Department of Health Management and Policy (EJL, DGS), University of Michigan, Ann Arbor, MI.
Funding Source: The authors report no external funding for this work.
Author Disclosures: The authors (DCA, EJL, DGS) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (DCA, EJL, DGS); drafting of the manuscript (DCA, EJL, DGS); critical revision of the manuscript for important intellectual content (DCA, EJL, DGS); and administrative, technical, or logistic support (EJL).
Address correspondence to: Daniel C. Armijo, MHSA, The Altarum Institute, 3520 Green Ct, Ste 300, Ann Arbor, MI 48105-1566. E-mail: email@example.com.
1. Simon SR, Soran CS, Kaushal R, et al. Physicians' use of key functions in electronic health records from 2005 to 2007: a statewide survey. J Am Med Inform Assoc. 2009;16(4):465-470.
2. Etheredge LM. A rapid-learning health system. Health Aff (Millwood). 2007;26(2):w107-w118.
3. Roundtable on Value & Science-Driven Health Care, Institute of Medicine. The Learning Healthcare System in 2010 and beyond: understanding, engaging, communicating the possibilities. April 2010. http://www.iom.edu/Activities/Quality/VSRT/2010-APR-01.aspx. Accessed August 15, 2010.
4. Iezzoni LI, ed. Risk Adjustment for Measuring Health Care Outcomes. Chicago, IL: Health Administration Press; 2003.
5. Stewart WF, Shah NR, Selna MJ, Paulus RA, Walker JM. Bridging the inferential gap: the electronic health record and clinical evidence. Health Aff (Millwood). 2007;26(2):w181-w191.
6. Garrison LP Jr, Neumann PJ, Erickson P, Marshall D, Mullins CD. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force report. Value Health. 2007;10(5):326-335.
7. Hayward RA, Krumholz HM, Zulman DM, Timble JW, Vijan S. Optimizing statin treatment for primary prevention of coronary artery disease. Ann Intern Med. 2010;152(2):69-77.
8. Sox HC. Evaluating off-label uses of anticancer drugs: time for a change. Ann Intern Med. 2009;150(5):353-354.
9. Smith DG. Getting the right services covered by health insurance. Am J Manag Care. 2010;16(4):278-279.
10. Chute CG, Cohn SP, Campbell JR, et al. A framework for comprehensive health terminology systems in the United States: development guidelines, criteria for selection, and public policy implications. ANSI Healthcare Informatics Standards Board Vocabulary Working Group and the Computer-Based Patient Records Institute Working Group on Codes and Structures. J Am Med Inform Assoc. 1998;5(6):503-510.
11. First MB, Pincus HA, Schoenbaum M. Issues for DSM-V: adding problem codes to facilitate assessment of quality of care. Am J Psychiatry. 2009;166(1):11-13.
12. McCarthy EP, Iezzoni LI, Davis RB, et al. Does clinical evidence support ICD-9-CM diagnosis coding of complications? Med Care. 2000;38(8):868-876.
13. Jollis JG, Ancukiewicz M, DeLong ER, Pryor DB, Muhlbaier LH, Mark DB. Discordance of databases designed for claims payment versus clinical information systems: implications for outcomes research. Ann Intern Med. 1993;119(8):844-850.
14. Cella D, Yount S, Rothrock N, et al; PROMIS Cooperative Group. The Patient-Reported Outcomes Measurement Information System (PROMIS): progress of an NIH roadmap cooperative group during its first two years. Med Care. 2007;45(5 suppl 1):S3-S11.