The American Journal of Managed Care, January 2017

Alignment of Breast Cancer Screening Guidelines, Accountability Metrics, and Practice Patterns

Tracy Onega, PhD; Jennifer S. Haas, MD; Asaf Bitton, MD; Charles Brackett, MD; Julie Weiss, MS; Martha Goodrich, MS; Kimberly Harris, MPH; Steve Pyle, BS; and Anna N. A. Tosteson, ScD
This study measured breast cancer screening practice patterns in relation to evidence-based guidelines and accountability metrics, and found that closer alignment is needed to provide patient-centered care.
ABSTRACT

Objectives: Breast cancer screening guidelines and metrics are inconsistent with each other and may differ from breast screening practice patterns in primary care. This study measured breast cancer screening practice patterns in relation to common evidence-based guidelines and accountability metrics.

Study Design: Cohort study using primary data collected from a regional breast cancer screening research network between 2011 and 2014.

Methods: Using information on women aged 30 to 89 years within 21 primary care practices of 2 large integrated health systems in New England, we measured the proportion of women screened overall and by age using 2 screening definition categories: any mammogram and screening mammogram.

Results: Of the 81,352 women in our cohort, 54,903 (67.5%) had at least 1 mammogram during the study period, and 48,314 (59.4%) had a screening mammogram. Women aged 50 to 69 years had the highest proportion screened (82.4% any mammogram; 75% with a screening indication); 72.6% of women at age 40 had a screening mammogram, with a median of 70% (range = 54.3%-84.8%) among the practices. Of women aged at least 75 years, 63.3% had a screening mammogram, with a median of 63.9% (range = 37.2%-78.3%) among the practices. Of women who had 2 or more mammograms, 79.5% were screened annually.

Conclusions: Primary care practice patterns for breast cancer screening are not well aligned with some evidence-based guidelines and accountability metrics. Metrics and incentives should be designed with more uniformity and should also include shared decision making when the evidence does not clearly support one single conclusion.

Am J Manag Care. 2017;23(1):35-40
Take-Away Points

This study explores the inherent challenges of shifting primary care breast cancer screening practice in response to evidence-based guidelines while also attending to accountability and performance measures. 
  • Studies examining practice-level patterns for breast cancer screening are limited. 
  • Breast cancer screening guidelines and accountability metrics are inconsistent with each other. 
  • Primary care practice patterns are not shifting with screening guidelines.
  • The heterogeneity of accountability measurement may be a barrier for the uptake of new guidelines in response to recent evidence. 
  • Given that practices seek to provide guideline-based care and achieve quality and accountability metrics, alignment of these measures is important for patient care.
Breast cancer screening practices are often debated in clinical practice, public health, and national dialogues. A multitude of medical professional organizations endorse specific sets of breast cancer screening guidelines, such as those of the United States Preventive Services Task Force (USPSTF),1 the American Cancer Society (ACS),2 and the American College of Radiology (ACR),3 among others. These guidelines vary to some extent and represent different interpretations of largely the same evidence base. At the same time, in an effort to improve quality of care delivery and hold organizations accountable for the dollars they spend, a number of quality measures are in use by healthcare organizations and practices, such as Healthcare Effectiveness Data and Information Set (HEDIS) measures from the National Quality Forum (NQF) and similar measures used by CMS. These measures are often tied to fiscal and other organizational incentives through contractual payment mechanisms.

Currently, tested models of care delivery, such as accountable care organizations (ACOs)6 and patient-centered medical homes, are employing measures for “best practices” and for qualification/recognition, as well as to establish whether bonuses or savings will be paid out. Specifically, in many of these risk-based contracts, if the provider entity shows worse quality on a number of established quality measures—often including cancer screening—provider organizations may not be able to participate in any shared savings generated by new payment models.7 In other pay-for-performance models, set thresholds for improvement in certain quality measures result in higher payment bonuses.8 Further, practices often have contracts with private insurers that specify guidelines of care for breast cancer screening, such as annual screening in women aged 40 to 49 years. Breast cancer screening is one of the measures that healthcare systems and practices are typically required to report. Given that physicians and practices seek to provide guideline-based care and achieve quality/accountability metrics, understanding the alignment of these measures is important for patient care.

Table 1 presents a summary of common evidence-based guidelines and quality measures for breast cancer screening.1-3,9,10 Major evidence-based guidelines that are endorsed or disseminated by professional organizations include those of the USPSTF, which, since 2009, has recommended biennial screening from age 50 through age 74 for women of average breast cancer risk—although for women aged 40 to 49, a risk- and preference-based decision should be made.1,9,10 The most recently released USPSTF guidelines (2016) continue to support the previous 2009 screening guidelines, but include a statement that there is little evidence at this time on the effectiveness of digital tomosynthesis (3D mammography) or additional screening for women with dense breasts.10 In contrast, the ACS and ACR recommend that average-risk women begin screening at age 40, and continue annually, without a pre-specified ending age.2,3

Measures for quality, payment, or other forms of accountability are usually derived from the same evidence as the professional organizations, but are not always aligned with guidelines. For example, prior to 2014, the HEDIS and ACO breast cancer screening metric was based on women initiating screening at age 40 and continuing until age 69 every 2 years, and also included any mammogram, not necessarily with a screening indication.6 These practice quality measures were instituted at many practices, and may have provided an impetus or financial incentive to achieve the best measures possible by adhering to their screening parameters. At the same time, women receive recommendations on breast cancer screening not only from primary care, but also from radiology and obstetrics and gynecology practices, which typically recommend an annual interval.

From 2009 to 2013, providers wanting to adopt USPSTF breast cancer screening guidelines would not be able to fully do so and perform well on HEDIS, ACO, and NCQA measures. This is because women aged 40 to 49 years who chose not to be screened, per USPSTF recommendations, would appear “not current,” which could reduce the rates on which practice performance was measured. On the other hand, providers using ACS or ACR breast cancer screening guidelines would be simultaneously concordant with the accountability metrics of HEDIS and others. In 2014, HEDIS and ACO measures for breast cancer screening were changed to be concordant with the USPSTF recommendation of a starting age of 50, at 2-year screening intervals, until age 74.10 This environment of heterogeneous guidelines, differential uptake of guidelines over time, and the concurrent goals of providing patient-centered care and meeting practice-based metrics can create unavoidable discordance between breast cancer screening practices and the range of recommendations.

The objective of this study was to examine breast cancer screening practice patterns to assess concordance with evidence-based guidelines and accountability metrics for primary care within a sample of practices from 2 large regional healthcare systems.

METHODS

Study Population and Setting

This study was conducted within one of the consortium member networks of the National Cancer Institute (NCI)–funded Population-based Research Optimizing Screening for Personalized Regimens (PROSPR),11 which is focused on breast cancer screening. The PROSPR Research Center (PRC) includes data on breast cancer screening within the primary care populations of the Dartmouth-Hitchcock regional network in New Hampshire and the Brigham and Women’s Hospital system in greater Boston and surrounding areas in Massachusetts. Our PRC comprises 37 primary care facilities and 10 radiology facilities in the bi-state region, and includes data from January 2011 through September 2014 on a primary care population (the PRC cohort) of women between the ages of 30 and 89 years, who had at least 1 primary care visit in the past 24 months within our respective healthcare systems.

To assess a biennial screening interval, the analysis was restricted to those women who became part of our PRC cohort anytime between January 1, 2011, and June 30, 2012, which allowed at least 24 months (plus an additional 3 months to account for the scheduling and completion of a mammography exam) from the time a woman became part of the PRC cohort until September 2014 (n = 83,725). We excluded 16 primary care facilities with fewer than 100 women visiting during the study period. This resulted in a final cohort of 81,352 women among the 21 primary care facilities. The study was approved prior to data collection by the Institutional Review Boards of Dartmouth College and Brigham and Women’s Hospital.

Data Sources and Collection

Actual screening patterns were measured using data from our PRC database. Data are routinely collected for our PRC cohort on breast imaging (including breast screening exams within 2 years prior to becoming part of the PRC cohort), follow-up, breast pathology, breast cancer diagnosis, and vital status. Specifically, for this study, we used the following data elements: entry into the PRC cohort, the primary care facility, age at PRC cohort entry, date and exam indication for 2D and 3D breast images (mammograms), age at mammography, and vital status. Data sources used included the electronic health record (EHR), radiology information system databases, and institutional cancer registries. All data from our PRC were systematically mapped into a single database with common data elements.

Analysis

Using the PRC database, we categorized 2 measures for determining the receipt of breast cancer screening based on a woman’s breast images that occurred during the study period. The first screening measure category included receipt of any mammogram (screening or diagnostic), which corresponds to HEDIS and ACO metrics. The second category was receipt of a mammogram that specifically indicated that the purpose of the exam was screening, which corresponds to USPSTF guidelines. Additionally, the woman’s receipt of a screening mammogram within 2 years of her PRC cohort entry date was included in the screening measure definitions. The age at PRC cohort entry was categorized as follows (in years): under 40, 40 to 44, 45 to 49, 50 to 69, 70 to 74, 75 to 79, 80 to 84, and 85 or older. We reported the overall population age distribution by age groups and provided the number and percent of each population age group’s screening measures.

For analyzing the proportion of women who initiated and continued screening, including the time interval between screens (Figure and Table 2), we used the USPSTF criteria category (receipt of a mammogram indicated as screening). We assessed the percentage of women initiating screening at age 40 and at age 50, and continuing screening at age 75 or older, for each primary care facility. We defined the age-40 cutoff with a 27-month window to account for time to schedule and complete an exam. For example, if a woman received a screening mammogram up to 27 months following her 40th birthday, she would be considered as initiating screening at age 40. We summarized the frequency and proportion of women within 5-year age categories for each of the screening intervals—annual, biennial, and more than 2 years—in addition to the median and interquartile range (IQR) of the screening interval in days. The screening intervals were defined as 9 to 18 months for annual, 18 to 27 months for biennial, and over 27 months for more than 2 years.
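The interval definitions above amount to a simple binning rule on the number of days between consecutive screens. The sketch below illustrates that rule only; the function name, month conversion, and the handling of gaps under 9 months are assumptions for illustration, not the study's actual analysis code.

```python
# Illustrative sketch of the screening-interval categories described above.
# Cutoffs follow the stated definitions (9-18 months = annual, 18-27 months
# = biennial, over 27 months = more than 2 years); this is not the study's code.

def classify_interval(days_between_screens: int) -> str:
    """Categorize the gap between two consecutive screening mammograms."""
    months = days_between_screens / 30.44  # average days per month (assumption)
    if 9 <= months < 18:
        return "annual"
    elif 18 <= months < 27:
        return "biennial"
    elif months >= 27:
        return "more than 2 years"
    # Gaps under 9 months fall outside the defined bins; how the study
    # handled them is not stated, so they are labeled separately here.
    return "less than 9 months"

print(classify_interval(365))   # ~12-month gap
print(classify_interval(730))   # ~24-month gap
print(classify_interval(900))   # ~30-month gap
```

A woman rescreened roughly a year after her prior exam falls in the annual bin, one rescreened around two years later in the biennial bin, and anything beyond 27 months counts as more than 2 years.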

RESULTS

Our study cohort included 81,352 total women, with 38,897 (47.8%) aged 50 to 74 years (Table 3). Overall, 54,903 of 81,352 (67.5%) had a mammography exam of any type (screening or diagnostic), and 48,314 of 81,352 (59.4%) had a screening mammogram during our study period, with the highest proportions screened in the 50-to-69 age group (82.4% and 75%, respectively) (Table 3). Seventy percent of women aged 40 to 44 had a screening mammogram, and 77.3% had a mammogram of any type (screening or diagnostic). Among the older age groups, the proportion with a mammogram steadily decreased; however, over a quarter (27.5%) of women 85 years or older had received a screening mammogram (Table 3).

Examining the proportions by age category, 72.6% of women overall had a screening mammogram at age 40; across the primary care facilities, the median was 70%, with a range of 54.3% to 84.8% (Figure). For women initiating with a screening mammogram at age 50, the overall proportion was 75.5%, with a median of 77.2% and a range of 51.4% to 91.8% across the primary care facilities. Sixty-three percent of women 75 years or older continued screening; among the primary care facilities, the median was 63.9%, with a range of 37.2% to 78.3% (Figure).

 
Copyright AJMC 2006-2017 Clinical Care Targeted Communications Group, LLC. All Rights Reserved.