
Validating Electronic Cancer Quality Measures at Veterans Health Administration

Publication: Article
The American Journal of Managed Care, December 2014, Volume 20, Issue 12

Even in a fully integrated healthcare system, only 28% of cancer quality measures could be validated by using electronically available data.

Objectives

To assess the feasibility and validity of developing electronic clinical quality measures (eCQMs) of cancer care quality from existing metrics, using electronic health record, administrative, and cancer registry data.

Study Design

Retrospective comparison of quality indicators using chart-abstracted versus electronically available data from multiple sources.

Methods

We compared the sensitivity and specificity of eCQMs, created from structured data in electronic health records (EHRs) linked to administrative and cancer registry data, against data abstracted from patients’ EHRs. Twenty-nine measures of care were assessed in 15,394 patients with incident lung cancer (diagnosed in 2007) or prostate cancer (diagnosed in 2008) who were treated in the Veterans Health Administration (VHA).

Results

It was feasible to develop eCQMs for 11 of 18 (61%) lung cancer measures, 4 of which (22%) were considered valid measures of the care constructs. Among prostate cancer measures, 6 of 11 (55%) were feasible, and 4 (36%) were both feasible and valid. Overall, data were available to create eCQMs for 17 of the 29 cancer care metrics (59%), and 8 (28%) were considered valid.

Conclusions

In a large integrated healthcare system with nationally standardized electronic health record, administrative, and cancer registry data, 28% of cancer quality measures developed for chart abstraction could be translated into valid eCQMs. These results raise concern about the development of electronic clinical quality measures for cancer care, particularly in healthcare environments where data are disparate in both form and location.

Am J Manag Care. 2014;20(12):1041-1047

  • Only 28% of cancer quality measures could be translated into electronic clinical quality metrics, even in the largest integrated healthcare system in the United States with nationally standardized electronic data repositories.
  • These results raise concern about the use of electronic clinical quality measures for cancer care, particularly outside integrated healthcare environments, and suggest that it may be premature to think that meaningful measures of cancer care delivery can be developed for widespread use across electronic health record platforms.
  • Current health data systems are inadequate to support meaningful electronic quality measurement in cancer care.

As the healthcare system moves toward a paradigm that rewards “value” in healthcare, that is, maximizing quality while minimizing costs, accurate measurement of the quality of healthcare delivery has taken center stage in reform efforts.1-5 Currently, most quality measures rely on chart abstraction for data acquisition, which is considered the gold standard. While chart abstraction allows access to all types of medical data, clinical notes in particular, it is too time-consuming and expensive to permit broad and timely measurement of care quality.6 It has long been hoped that electronic measurement of care delivery would open the door to timely, meaningful, and actionable information for many stakeholders.7-10 Electronic health records (EHRs), now adopted by more than 70% of physicians, and health information exchanges are 2 key components of the vision of data interoperability.7,11,12 These are coupled with technical developments, like natural language processing (NLP) and structured clinical templates, which are expected to facilitate computerized access to the free text that underlies much of primary clinical documentation,13,14 and with payment reforms, like accountable care organizations, which encourage linkage of healthcare data silos.15-17 While human chart abstractors are able to bridge data silos and interpret the full range of available data sources, it remains unclear whether these reforms will make electronic clinical quality measures (eCQMs) and electronic measurement feasible. Many studies have illustrated wide gaps in agreement among electronically available data sources, and between those sources and chart abstraction.18-27 Furthermore, eCQMs based solely on administrative data have been criticized for failing to measure what matters.28 Cancer care is especially challenging to measure, in part because it relies on the multifactorial process of staging and because care is typically delivered across multiple specialties and care settings.

Large integrated healthcare systems like the Veterans Health Administration (VHA), which has already achieved national data integration across all aspects of care delivery with a common EHR and national, standardized, electronically available data warehouses, offer an opportunity to examine the feasibility and validity of using available electronic data to measure cancer care quality.29-32 Recently, the VHA performed quality assessments of care for lung and prostate cancer patients using manual chart abstraction.33 We hypothesized that the data needed to measure quality performance were available in the VHA’s electronic clinical, administrative, and cancer registry repositories and could be used to create eCQMs from existing quality measures.

METHODS

Study Design

This was a retrospective, national VHA cohort study sponsored by the VHA Office of Analytics and Business Intelligence. The VHA Greater Los Angeles Institutional Review Board approved this study.

Quality Measures

VHA expert panels reviewed the relevant literature and, using the RAND/UCLA modified Delphi technique, proposed quality measures for national assessments of the quality of care for lung cancer and prostate cancer diagnosed in 2007 and 2008, respectively; the assessments were to be performed using data abstracted from patients’ EHRs.34 There were 29 quality metrics (18 for lung cancer and 11 for prostate cancer) spanning the continuum of care (see the eAppendix for more detailed descriptions). Many measures replicated, or were similar to, metrics endorsed by non-VHA organizations (eAppendix). Twenty measures addressed diagnosis, treatment, and management; 5 addressed supportive care; and 4 addressed end-of-life care. Twenty-six measures were patient level, 2 were visit level, and 1 was medical center level.

Data Sources

Data necessary for the 29 measures of the quality of lung and prostate cancer care were abstracted from patients’ VHA Computerized Patient Record System (CPRS) medical records by External Peer Review Program (EPRP) nurse abstractors conducting the national VHA assessments of the quality of cancer care (the chart abstraction cohort). Using an abstraction tool specifically developed for the quality assessments, EPRP abstractors systematically abstracted the data necessary to score the quality indicators. Case-level quality indicator and timeliness results were provided to VA medical centers (VAMCs) for review and correction, and the final results were calculated based on these field-corrected data.

Data for the 29 eCQMs were obtained from extracts from the VHA’s EHR through the Corporate Data Warehouse (CDW) and Decision Support System (DSS), both of which are national, standardized, and near real-time clinical and administrative data repositories, and from the VHA Central Cancer Registry (VACCR). The eCQM data set was created by linking data from the VACCR, CDW, and DSS. This included inpatient and outpatient encounter and procedure coding, as well as pharmacy, laboratory, radiology, allergy, problem list, and vital sign data.
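To make the linkage step concrete, here is a minimal sketch of how registry, clinical, and administrative extracts might be joined on a common patient identifier. It is illustrative only: the study’s analyses were performed in SAS, and every table and column name below (vaccr, cdw_encounters, dss_pharmacy, patient_id, and so on) is a hypothetical stand-in, not the VHA’s actual schema.

```python
import pandas as pd

# Hypothetical miniature extracts; real VACCR/CDW/DSS schemas differ.
vaccr = pd.DataFrame({           # cancer registry: incident cases
    "patient_id": [1, 2, 3],
    "cancer_site": ["lung", "prostate", "lung"],
    "diagnosis_date": pd.to_datetime(["2007-03-01", "2008-06-15", "2007-11-20"]),
})
cdw_encounters = pd.DataFrame({  # clinical data warehouse: coded encounters
    "patient_id": [1, 1, 2, 3],
    "encounter_date": pd.to_datetime(
        ["2007-03-10", "2007-04-02", "2008-07-01", "2007-12-01"]),
    "icd9_code": ["162.9", "162.9", "185", "162.9"],
})
dss_pharmacy = pd.DataFrame({    # decision support system: drug fills
    "patient_id": [1, 2],
    "drug": ["cisplatin", "leuprolide"],
    "fill_date": pd.to_datetime(["2007-04-05", "2008-07-10"]),
})

# Registry cases anchor the eCQM data set; clinical and administrative
# records attach to each case by patient identifier (one-to-many joins).
ecqm_data = (vaccr
             .merge(cdw_encounters, on="patient_id", how="left")
             .merge(dss_pharmacy, on="patient_id", how="left"))
print(ecqm_data.head())
```

In the study, an analogous linkage produced the single data set from which each eCQM’s numerator and denominator elements were specified.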

Study Population

In both the chart abstraction and eCQM cohorts, case identification criteria were identical: all newly diagnosed, pathologically confirmed cases of lung cancer during 2007 (n = 8125) and prostate cancer during 2008 (n = 12,572) that were reported to the VACCR. Patients were excluded if the pathologic diagnosis was not identified in the EHR at the index facility that reported the case to the registry (lung, 1297; prostate, 489), a situation that could occur if patients were treated at more than 1 facility.

Patients were excluded from the chart abstraction cohorts if they were diagnosed at autopsy; death occurred 30 days or less after cancer diagnosis; hospice enrollment occurred 30 days or less after cancer diagnosis (lung, 947; prostate, 16); they had a pre-existing or concurrent diagnosis of metastatic cancer other than lung or prostate cancer (lung, 540; prostate, 627); they were enrolled in a cancer clinical trial (lung, 57; prostate, 147); there was documentation of comfort measures only in a hospital discharge summary or nursing home note 30 days or less after cancer diagnosis (lung, 91; prostate, 14); or there was documentation of a life expectancy of 6 months or less in their Problem List at the time of diagnosis (lung, 39; prostate, 4). The resulting lung cancer and prostate cancer abstraction cohorts included 4865 and 11,211 patients, respectively.

Patients were excluded from the lung and prostate cancer eCQM cohorts if they died within 60 days of diagnosis (lung, 1482; prostate, 674); if there was no pathologic confirmation of disease according to the VACCR (lung, 419; prostate, 49); or if there were fewer than 2 encounters with an International Classification of Diseases, Ninth Revision (ICD-9) diagnosis code for prostate or lung cancer (lung, 229; prostate, 8). The eCQM cohort included 5995 patients with lung cancer and 11,700 patients with prostate cancer.
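As a sketch of how these three eCQM exclusions might be applied programmatically, assuming a linked per-patient table with the (hypothetical) fields below, the filters can be expressed directly; the study itself implemented its cohort logic in SAS.

```python
import pandas as pd

# Hypothetical per-patient summary of the linked data set.
cases = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "diagnosis_date": pd.to_datetime(["2007-03-01"] * 4),
    "death_date": pd.to_datetime(["2007-04-15", None, None, None]),
    "path_confirmed": [True, True, False, True],   # per the registry
    "n_cancer_dx_encounters": [3, 5, 2, 1],        # encounters coded for the cancer
})

days_to_death = (cases["death_date"] - cases["diagnosis_date"]).dt.days

ecqm_cohort = cases[
    ~(days_to_death <= 60)                    # exclude death within 60 days of diagnosis
    & cases["path_confirmed"]                 # exclude cases lacking pathologic confirmation
    & (cases["n_cancer_dx_encounters"] >= 2)  # exclude <2 coded cancer encounters
]
print(ecqm_cohort["patient_id"].tolist())     # only patient 2 survives all filters
```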

The final validation cohorts included the patients common to both the abstraction and eCQM cohorts: n = 4865 for lung cancer and n = 10,529 for prostate cancer.

Adaptation of quality measures for eCQMs. Using the linked eCQM data set, we identified the elements specified by each quality indicator and, where more than 1 option existed, determined the optimal specification iteratively. Two measures, “No adjuvant chemotherapy for stage IA NSCLC” and “No radiation therapy for resected stage I, II NSCLC,” were excluded from analysis prior to eCQM creation because adherence, by chart abstraction, was greater than 99% with no variation seen across facilities.

Data Analysis

eCQMs were considered feasible if electronically available data could specify each aspect of the measure. Validity was evaluated by comparing the sensitivity and specificity of the denominator and numerator for each measure, using chart abstraction as the gold standard, and by comparing overall pass rates between the electronic and the chart-abstracted versions of the measures. Although there is no accepted cut point for numerator and denominator sensitivity and specificity in determining measure validity, nor an accepted level of agreement between pass rates that marks validity, we considered an eCQM valid if both its denominator and its numerator had at least 80% specificity compared with chart abstraction data. However, quality measures with less than 80% specificity or sensitivity were assessed individually and considered likely valid as eCQMs if internal validity could be supported, even if results from measure use might not be comparable across data collection methodologies. Analyses were performed using SAS statistical software, version 9.1 (SAS Institute, Cary, NC).
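As a concrete illustration of this validity check, the sketch below computes the sensitivity and specificity of per-patient denominator and numerator flags against chart-abstracted gold-standard flags and applies the 80% specificity threshold described above. The toy data and function names are invented for illustration; the actual analyses were run in SAS.

```python
def sens_spec(electronic, gold):
    """Sensitivity and specificity of boolean electronic flags
    against boolean gold-standard (chart-abstracted) flags."""
    tp = sum(e and g for e, g in zip(electronic, gold))
    tn = sum(not e and not g for e, g in zip(electronic, gold))
    fp = sum(e and not g for e, g in zip(electronic, gold))
    fn = sum(not e and g for e, g in zip(electronic, gold))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Toy per-patient flags for one measure: is the patient in the
# measure's denominator (eligible) and numerator (care delivered)?
denom_e = [True, True, False, True, False]   # electronic
denom_g = [True, True, False, False, False]  # gold standard
numer_e = [True, False, False, True, False]
numer_g = [True, False, False, True, False]

_, den_spec = sens_spec(denom_e, denom_g)
_, num_spec = sens_spec(numer_e, numer_g)

# Primary criterion from the study: at least 80% specificity for both
# the denominator and the numerator versus chart abstraction.
valid = den_spec >= 0.80 and num_spec >= 0.80
print(f"denominator spec={den_spec:.2f}, numerator spec={num_spec:.2f}, valid={valid}")
```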

RESULTS

Patient Characteristics

Patient demographic characteristics were similar to national trends in lung and prostate cancer (Table 1). The distribution of patients by AJCC stage (lung) or D’Amico risk group classification (prostate) was similar in the chart abstraction cohort and the VACCR, with 25.5%, 7.8%, 28%, and 34.2% of lung cancer patients having stage I, II, III, and IV disease, respectively.35,36 Among men with prostate cancer in the chart abstraction cohort, 24.5%, 34.8%, and 26.3% had low-, intermediate-, and high-risk disease, respectively. The overall percent agreement in staging classification between the VACCR and chart abstraction was 65% (95% CI, 63-67) among lung cancer cases and 75% (95% CI, 74-77) for D’Amico risk classification in prostate cancer cases.


Feasibility of eCQM creation. Using the VHA EHR, administrative, and VACCR data, it was possible to specify the necessary data elements for 17 of the 29 eCQMs (59%) (Table 2, eAppendix). This included 6 of the 11 prostate cancer quality measures (55%) and 11 of the 18 lung cancer quality measures (61%). In some cases, changes to the quality measure definitions were required to accommodate available electronic data while attempting to preserve the intent and face validity of the measure. For example, the numerator of “surgical node sampling” in lung cancer treatment was modified to include cases based on the number of lymph nodes sampled instead of the number of lymph node stations sampled. While the number of nodes has value in determining staging, and may correlate with the number of lymph node stations, it is not an exact match, which limits the ability to compare quality scores across the 2 approaches.37 Measures for which it was not feasible to create eCQMs are listed in Table 3.

Validation of quality measures. Eight of the 17 measures (47%) that were feasible as eCQMs, representing 28% (8/29) of the initial set of quality measures, were found to be valid (Table 2, eAppendix A). All of the valid measures examined processes of care at the patient level. The specificity of denominators for the valid eCQMs ranged from 84% to 98% (Table 2); in 7 of the valid eCQMs, the specificity of the denominator was 90% or greater, and in 1 case it was 80% or greater. The sensitivity of the numerators for the valid eCQMs ranged from 84% to 100%; in 4 of the valid eCQMs, the sensitivity was 90% or greater, and in 4 it was 80% or greater. Pass rates among valid eCQMs ranged from 61% to 99% for quality measures based on chart abstraction, and from 65% to 100% for the same eCQMs. The difference in pass rate between chart abstraction and eCQMs ranged from 1% to 20% for valid measures.

Four measures (surgical node sampling; outpatient screening for pain, both in advanced disease and prior to death; and PSA monitoring after treatment) that failed our minimum sensitivity and specificity criteria were nevertheless considered “likely valid” on individual review. Data definitions varied slightly between the chart abstraction and eCQM cohorts; this, together with the expectation that these measures would be used in a context where local data validation would take place, led us to believe they would likely be valid and to recommend their further use. Had we counted these 4 measures as valid in this analysis, the total number of valid measures would have been 12 of the 29 considered (41%).

For example, 2 measures of outpatient screening for pain were visit-level measures for which results assessed using the 2 different methods of data capture were not necessarily comparable. However, since all patients with visits were eligible for these quality measures, the measures were thought to be valid with either approach. Another example is the measure of PSA monitoring after treatment, which likely performed less well as an eCQM, resulting in the inclusion of ineligible patients. This was due, in part, to the definition of active surveillance, which in chart abstraction required affirmative documentation but in the electronically available data was defined as the absence of other treatments.

Five feasible measures were not valid (Table 2). For 3 of these, this was due to low sensitivity and specificity and poor pass-rate agreement (adjuvant chemotherapy for resected stage II or III NSCLC, platinum-based doublet chemotherapy for stage IV NSCLC, and repeat biopsy for men on active surveillance). An additional measure of chemotherapy administration (platinum-based doublet chemotherapy for SCLC) was not considered valid, despite favorable sensitivity and specificity, because of a relatively large difference in pass rate, generalized concern about the validity of determining receipt of chemotherapy (raised by the other 3 chemotherapy-related measures), and limited utility, as demonstrated by a nearly perfect pass rate in the chart abstraction data.

Of the 12 measures that were feasible and valid or likely valid as eCQMs, none would have been feasible without data from the VACCR. The reverse was also true: none of the measures could have been specified with VACCR data alone.

DISCUSSION

Critics have pointed to a lack of value in eCQMs that were designed around available data rather than clinically important data, or that rely too heavily on burdensome structured data entry by providers.18,28 In this study, we validated eCQMs for lung and prostate cancer that were adapted from existing quality measures. Our finding that 28% of lung and prostate cancer quality measures, which were designed without regard to electronic abstraction, could be translated into eCQMs using only available data underscores the opportunity in electronic quality measurement and the effectiveness of the VHA’s data management efforts. However, the fact that 72% of the measures could not be validated as eCQMs, in one of the most integrated and data-rich environments in healthcare, is a reminder of the limits of current healthcare data systems to facilitate the measurement of quality of care. As multiple stakeholders look toward EHR-derived data for the next generation of quality measurement, our results highlight the difficulty of using newly available electronic data and existing metrics to assess quality, particularly in nonintegrated healthcare systems.5,38,39

At present, there is little consensus or guidance on appropriate validation criteria for eCQMs, although some effort is being made in this area.40,41 In this study, we used 4 criteria when considering the likely validity of eCQMs: (1) the sensitivity and specificity of the denominator, placing particular emphasis on a high specificity to favor inclusion of only true eligible cases in the denominator; (2) the sensitivity and specificity of the numerator, placing particular emphasis on a high sensitivity to reduce the risk of excluding appropriately delivered care; (3) the agreement between chart abstracted and eCQM pass rates as another global comparator of measure performance; and (4) the clinical context of measure use as determined by the type of use (eg, quality improvement vs accountability), the utility of the measure (as determined, in part, by the overall pass rate and the resultant opportunity for improvement), and other clinical estimations of the measure’s usefulness.
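The first 3 criteria are quantitative and could in principle be screened automatically, leaving the fourth to clinical judgment. Below is one hypothetical way such a screen might be encoded; the thresholds (80% floors and a 10-percentage-point pass-rate tolerance) are illustrative assumptions only, since, as noted above, no accepted cut points exist.

```python
from dataclasses import dataclass

@dataclass
class MeasureStats:
    den_sens: float            # criterion 1: denominator accuracy
    den_spec: float
    num_sens: float            # criterion 2: numerator accuracy
    num_spec: float
    pass_rate_chart: float     # criterion 3: global pass-rate agreement
    pass_rate_ecqm: float

def quantitative_screen(m: MeasureStats,
                        den_spec_floor: float = 0.80,  # favor true eligibility
                        num_sens_floor: float = 0.80,  # avoid missing delivered care
                        max_pass_gap: float = 0.10) -> bool:
    """Screens criteria 1-3 only; criterion 4 (clinical context of
    measure use) still requires individual expert review."""
    return (m.den_spec >= den_spec_floor
            and m.num_sens >= num_sens_floor
            and abs(m.pass_rate_chart - m.pass_rate_ecqm) <= max_pass_gap)

# Example: a measure with strong agreement passes the quantitative screen.
print(quantitative_screen(MeasureStats(0.92, 0.95, 0.90, 0.88, 0.81, 0.85)))  # True
```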

All of the eCQMs we evaluated relied on data from the VACCR, and therefore on trained cancer registry chart abstractors, underscoring the importance of disease staging for quality measurement in oncology. Natural language processing and synoptic reporting templates, like those currently offered by the College of American Pathologists, may reduce or eliminate the need to abstract information contained in free-text form, like disease stage and operative reports; however, both still rely on accurate primary clinical documentation.13,14 The percent agreement between trained quality abstractors and trained cancer registrars in this study reinforces the need to validate data from any source. An alternate way forward is wider use of electronically available and exchangeable cancer registry data. Registry-EHR interfaces and the incorporation of cancer registries into quality measurement programs are goals of the American College of Surgeons’ Rapid Quality Reporting System.42,43 Data aggregation programs, like the American Society of Clinical Oncology’s CancerLinQ™, may provide another avenue for access to the data held in registries and in other disparate locations and forms.42

Measurement can drive rapid changes in healthcare, but there is no guarantee of improvement, particularly if concrete plans for re-evaluation of measures, in addition to validation, are not built into their use.44,45 In this study, it took 2 years to develop and validate the eCQMs, and they have yet to be implemented in an ongoing quality improvement program. Furthermore, nearly 10% of the measures evaluated in this study (“3D-CRT,” “No adjuvant chemotherapy for stage IA NSCLC,” and “No radiation therapy for resected stage I, II NSCLC”) were “topped out,” with no opportunity for improvement. Given the time and other resources involved in developing measures, there is a real risk that measures will fall behind the corpus of medical knowledge and have the unintended consequence of serving not as stimuli for quality, but as barriers that physicians must overcome, perhaps at risk to their finances and reputations, in order to deliver the best care possible.

There are several limitations to this study. Our findings may not be generalizable to all cancer types or healthcare settings. As described above, some quality measures were altered so that they could be specified as eCQMs, and this limited our ability to validate them. VHA EHR data on clinical encounters are largely based on ICD-9 and Current Procedural Terminology coding, and coding accuracy at the VHA is likely lower than in the fee-for-service setting. Also, some of the patients in this study received parts of their cancer care outside of the VHA; in these cases, documentation is sometimes not visible to electronic abstraction, likely disproportionately decreasing the pass rates for eCQMs versus chart abstraction.

CONCLUSION

Even with quality measures that were designed without consideration of electronic data availability, electronic data in the VHA were comprehensive and accurate enough to support electronic quality measurement of some aspects of lung and prostate cancer care. However, more than half of the metrics could not be translated into electronic form, and abstraction relied in part on VACCR data. Automated data extraction and quality measurement are likely to increase in the coming years, although this report suggests significant barriers to using existing clinical data, even in a large integrated healthcare system. Furthermore, without validation studies, quality metrics based on EHR-derived clinical and administrative data run the risk of distorting, rather than enhancing, the assessment of quality in oncologic care.

Author Affiliations: From the VA Greater Los Angeles Healthcare System (AA), Urology, Informatics, HSR&D (JBS), and Oncology (JM), Los Angeles, CA; University of Michigan, VA Ann Arbor Healthcare System, Urology, HSR&D, Ann Arbor, MI (TAS); VA (former), Washington, DC (DO); Kaiser Permanente, Los Angeles, CA (JR); and University of California, Los Angeles, CA (CSS).

Source of Funding: This study was funded by the Veterans Affairs Office of Analytics and Business Intelligence. This study was supported in part by the UCLA Career Development Program in Cancer Prevention and Control Research R25 Grant CA 087949-13 (JBS).

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest to the topic of this paper.

Authorship Information: Concept and design (JBS, DO, JM, AA, JR, CSS); acquisition of data (JBS, JM, JR, CSS); analysis and interpretation of data (JBS, TAS, JM, AA, JR, CSS); drafting of the manuscript (JBS, TAS, JM); critical revision of the manuscript for important intellectual content (JBS, TAS, JM); statistical analysis (JBS, JM, AA, JR); obtaining funding (JBS, DO, JM); administrative, technical, or logistic support (JBS, DO); and supervision (JBS, JM).

Address correspondence to: Jeremy Shelton, Veterans Affairs - Surgery, 13301 Wilshire Blvd, Bldg 500, Los Angeles, CA 90073. E-mail: jeremy.shelton@va.gov.

REFERENCES

1. Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477-2481.

2. Gabow P, Halvorson G, Kaplan G. Marshaling leadership for high-value health care: an Institute of Medicine discussion paper. JAMA. 2012;308(3):239-240.

3. Berwick DM, Hackbarth AD. Eliminating waste in US health care. JAMA. 2012;307(14):1513-1516.

4. Fineberg HV. A successful and sustainable health system—how to get there from here. N Engl J Med. 2012;366(11):1020-1027.

5. Pronovost PJ, Lilford R. A road map for improving the performance of performance measures. Health Aff (Millwood). 2011;30(4):569-573.

6. MacLean CH, Louie R, Shekelle PG, et al. Comparison of administrative data and medical records to measure the quality of medical care provided to vulnerable older patients. Med Care. 2006;44(2):141-148.

7. Lohr S. Big data is opening doors, but maybe too many. New York Times. May 23, 2013:3.

8. Kizer K. A call for a national summit on information management and healthcare quality. Medscape. March 13, 2001.

9. Hurtado MP, Swift EK, Corrigan JM, eds; Committee on the National Quality Report on Health Care Delivery, Board on Health Care Services. Envisioning the National Health Care Quality Report. Washington, DC: The National Academies Press; 2001.

10. Dick RS, Steen EB, Detmer DE, eds. The Computer-Based Patient Record: An Essential Technology for Health Care. Rev ed. Washington, DC: The National Academies Press; 1997.

11. Hsiao C, Hing E. Use and characteristics of electronic health record systems among office-based physician practices: United States, 2001-2012. NCHS Data Brief. Hyattsville, MD: National Center for Health Statistics; 2012.

12. Centers for Medicare & Medicaid Services. Meaningful use. CMS website. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Meaningful_Use.html. Updated October 6, 2014. Accessed November 16, 2012.

13. Adsay NV, Basturk O, Saka B. Pathologic staging of tumors: pitfalls and opportunities for improvements. Semin Diagn Pathol. 2012;29(3):103-108.

14. D’Avolio LW, Litwin MS, Rogers SO Jr, Bui AA. Automatic identification and classification of surgical margin status from pathology reports following prostate cancer surgery. AMIA Annu Symp Proc. 2007:160-164.

15. McGowan JJ, Kuperman GJ, Olinger L, Russell C. Strengthening health information exchange: final report HIE Unintended Consequences Work Group. Rockville, MD: Westat; 2012.

16. Gawande AA, Bates DW. The use of information technology in improving medical performance: part II: physician-support tools. MedGenMed. 2000;2(1):E13.

17. Walker J, Pan E, Johnston D, Adler-Milstein J, Bates DW, Middleton B. The value of health care information exchange and interoperability. Health Aff (Millwood). 2005;Suppl Web Exclusives:W5-10-W5-18.

18. Parsons A, McCullough C, Wang J, Shih S. Validity of electronic health record-derived quality measurement for performance monitoring. J Am Med Inform Assoc. 2012;19(4):604-609.

19. Parker JP, Li Z, Damberg CL, Danielsen B, Carlisle DM. Administrative versus clinical data for coronary artery bypass graft surgery report cards: the view from California. Med Care. 2006;44(7):687-695.

20. Shahian DM, Silverstein T, Lovett AF, Wolf RE, Normand SL. Comparison of clinical and administrative data sources for hospital coronary artery bypass graft surgery report cards. Circulation. 2007;115(12):1518-1527.

21. Delate T, Bowles EJ, Pardee R, et al. Validity of eight integrated healthcare delivery organizations’ administrative clinical data to capture breast cancer chemotherapy exposure. Cancer Epidemiol Biomarkers Prev. 2012;21(4):673-680.

22. Quan H, Li B, Saunders LD, et al. Assessing validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions in a unique dually coded database. Health Serv Res. 2008;43(4):1424-1441.

23. Quach S, Blais C, Quan H. Administrative data have high variation in validity for recording heart failure. Can J Cardiol. 2010;26(8):306-312.

24. Chan KS, Fowles JB, Weiner JP. Review: electronic health records and the reliability and validity of quality measures: a review of the literature. Med Care Res Rev. 2010;67(5):503-527.

25. Goulet JL, Erdos J, Kancir S, et al. Measuring performance directly using the Veterans Health Administration electronic medical record: a comparison with external peer review. Med Care. 2007;45(1):73-79.

26. McGinnis KA, Skanderson M, Levin FL, Brandt C, Erdos J, Justice AC. Comparison of two VA laboratory data repositories indicates that missing data vary despite originating from the same source. Med Care. 2009;47(1):121-124.

27. Greiver M, Barnsley J, Glazier RH, Harvey BJ, Moineddin R. Measuring data reliability for preventive services in electronic medical records. BMC Health Serv Res. 2012;12:116.

28. Spinks TE, Walters R, Feeley TW, et al. Improving cancer care through public reporting of meaningful quality measures. Health Aff (Millwood). 2011;30(4):664-672.

29. Brown SH, Lincoln MJ, Groen PJ, Kolodner RM. VistA—US Department of Veterans Affairs national-scale HIS. Int J Med Inform. 2003;69(2-3):135-156.

30. Byrne CM, Mercincavage LM, Pan EC, Vincent AG, Johnston DS, Middleton B. The value from investments in health information technology at the US Department of Veterans Affairs. Health Aff (Millwood). 2010;29(4):629-638.

31. Kupersmith J, Francis J, Kerr E, et al. Advancing evidence-based care for diabetes: lessons from the Veterans Health Administration. Health Aff (Millwood). 2007;26(2):156-168.

32. Ryoo JJ, Malin JL. Reconsidering the Veterans Health Administration: a model and a moment for publicly funded health care delivery. Ann Intern Med. 2011;154(11):772-773.

33. Ryoo JJ, Ordin DL, Antonio ALM, et al. Patient preference and contraindications in measuring quality of care: what do administrative data miss? J Clin Oncol. 2013;31(21):2716-2723.

34. Brook RH. The RAND/UCLA Appropriateness method. In: McCormick KA, Moore SR, Seigel RA. Clinical Practice Guidelines Development: Methodology Perspectives. Rockville, MD: Agency for Healthcare Research and Quality; 1994.

35. D’Amico AV, Whittington R, Malkowicz SB, et al. Biochemical outcome after radical prostatectomy, external beam radiation therapy, or interstitial radiation therapy for clinically localized prostate cancer. JAMA. 1998;280(11):969-974.

36. Greene F, et al, eds. AJCC Cancer Staging Manual. Chicago: American Joint Committee on Cancer; 2002.

37. Gajra A, Newman N, Gamble GP, Kohman LJ, Graziano SL. Effect of number of lymph nodes sampled on outcome in patients with stage I non-small-cell lung cancer. J Clin Oncol. 2003;21(6):1029-1034.

38. Clancy CM, Anderson KM, White PJ. Investing in health information infrastructure: can it help achieve health reform? Health Aff (Millwood). 2009;28(2):478-482.

39. Roski J, McClellan M. Measuring health care performance now, not tomorrow: essential steps to support effective health reform. Health Aff (Millwood). 2011;30(4):682-689.

40. Draft requirements for eMeasure review and testing. National Quality Forum website. http://www.qualityforum.org/Measuring_Performance/Submitting_Standards.aspx. Published 2012. Accessed January 29, 2013.

41. eMeasure feasibility testing. National Quality Forum website. http://www.qualityforum.org/Projects/e-g/eMeasure_Feasibility_Testing/eMeasure_Feasibility_Testing.aspx#t=2&p=&e=1&s=. Published 2012.

42. ASCO CancerLinQ. American Society of Clinical Oncology website. http://www.asco.org/ASCOv2/Practice+%26+Guidelines/Quality+Care/CancerLinQ+-+Building+a+Transformation+in+Cancer+Care. Published 2013.

43. NCDB-rapid quality reporting system. American College of Surgeons website. http://www.facs.org/cancer/ncdb/rqrs.html. Accessed February 15, 2013.

44. Hannan EL, Kilburn H Jr, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761-766.

45. Miller DC, Murtagh DS, Suh RS, et al. Regional collaboration to improve radiographic staging practices among men with early stage prostate cancer. J Urol. 2011;186(3):844-849.
