
Health IT-Assisted Population-Based Preventive Cancer Screening: A Cost Analysis

The American Journal of Managed Care, December 2015, Volume 21, Issue 12

An automated cancer screening outreach tool implemented in a mature health information technology environment can achieve cost savings through reduced clinician time devoted to screening efforts.

ABSTRACT

Objectives: Novel health information technology (IT)-based strategies harnessing patient registry data seek to improve care at a population level. We analyzed costs from a randomized trial of 2 health IT strategies to improve cancer screening compared with usual care from the perspective of a primary care network.

Study Design: Monte Carlo simulations were used to compare costs across management strategies.

Methods: We assessed the cost of the software, materials, and personnel for baseline usual care (BUC) compared with augmented usual care (AUC [ie, automated patient outreach]) and augmented usual care with physician input (AUCPI [ie, outreach mediated by physicians’ knowledge of their patient panels]) over 1 year.

Results: AUC and AUCPI each reduced the time physicians spent on cancer screening by 6.5 minutes per half-day clinical session compared with BUC without changing cancer screening rates. Assuming the value of this time accrues to the network, total costs of cancer screening efforts over the study year were $3.83 million for AUC, $3.88 million for AUCPI, and $4.10 million for BUC. AUC was cost-saving relative to BUC in 87.1% of simulations. AUCPI was cost-saving relative to BUC in 82.5% of simulations. Ongoing per patient costs were lower for both AUC ($35.63) and AUCPI ($35.58) relative to BUC ($39.51).

Conclusions: Over the course of the study year, the value of reduced physician time devoted to preventive cancer screening outweighed the costs of the interventions. Primary care networks considering similar interventions will need to capture adequate physician time savings to offset the costs of expanding IT infrastructure.

Am J Manag Care. 2015;21(12):885-891

Take-Away Points

We assessed whether 2 approaches to technology-assisted, population-based cancer screening outreach—each neutral with respect to improving screening rates—changed the cost of screening promotion compared with usual care. One approach employed an algorithm to escalate outreach processes automatically. The other used physician input to target outreach more efficiently.

  • The cost of building and implementing these systems within a mature health information technology environment was offset by the value of reductions in physician time devoted to cancer screening relative to usual care.
  • The physician-mediated approach reduced outreach costs relative to the algorithm-only approach, but it cost more overall because its software design was more expensive.

In 2011 and 2012, the Massachusetts General Primary Care Practice-Based Research Network conducted the TopCare clinical trial comparing 2 novel information technology (IT)-based population health management strategies, each harnessing patient registry data to improve preventive cancer screening rates. The trial was motivated by the low cancer screening rates observed nationally relative to US Preventive Services Task Force (USPSTF) recommendations.1 At the same time, the federal government had committed to expanding the country’s use of health IT through the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009, which was designed to improve, among other things, the quality and efficiency of healthcare.2

At the conclusion of the TopCare trial, there were high rates of cancer screening in both intervention strategies, but there were no differences of statistical or practical significance in overall screening rates between the study arms.3 However, the TopCare intervention was specifically designed to reduce clinician burden by managing cancer screening outside the context of the face-to-face office visit. Thus, whereas the TopCare population health management tools were unsuccessful at improving cancer screening rates, there remained the possibility that they would improve the efficiency of care. In the current study, we present a cost analysis conducted in parallel with the TopCare clinical trial assessing the relative efficiency of the 2 novel intervention strategies compared with pre-intervention usual care.

METHODS

Overview of the TopCare Clinical Trial

The results of the TopCare clinical trial are reported elsewhere.3 Here we describe the outcomes of the trial, as well as the cost-relevant details surrounding the development of the intervention and the changes in work flow it required. In summary, the trial took place between June 2011 and June 2012 at 18 primary care practices associated with Massachusetts General Hospital (MGH). The study was designed to compare 2 IT-based population health interventions: 1) an automated patient outreach program encouraging patients to schedule breast, cervical, and/or colorectal cancer screenings; and 2) a strategy that leveraged physicians’ knowledge of their patients to streamline cancer screening outreach. The present analysis compared the 1-year costs of these 2 strategies with each other, as well as with usual care as it existed before the introduction of the health IT system. The TopCare trial and its embedded cost analysis were approved by the Partners HealthCare institutional review board.

Prior to the clinical trial, screening reminders came when a physician, accessing a patient’s electronic health record (usually as part of a clinical visit), would see an alert indicating that the patient was overdue for screening. We term this practice “baseline usual care” (BUC) (Figure).

The IT system tested in the trial included a patient registry that continuously identified patients overdue for a breast, cervical, or colorectal cancer screening according to USPSTF guidelines and tracked both scheduled and completed tests. In the clinical trial, the study arm called “augmented usual care” (AUC) used an automated outreach process in which overdue patients were first sent letters asking them to directly schedule an overdue test. They could also contact a call center if they had already received screening (eg, outside the care network), if they did not wish to be screened, or if they were no longer eligible for screening. Patients who did not make appointments or contact the call center were transferred by the IT tool to the list of a delegate in the provider’s office who could make outgoing reminder calls to the patient. If still overdue after 4 months, patients at high risk for nonadherence were transferred to patient navigators who would work closely with the patients to complete screening. Otherwise, patients still overdue were inactivated for a period of 8 months before being re-contacted.

For the other study arm, “augmented usual care with provider input” (AUCPI), the IT system regularly provided physicians with a list of patients believed to be overdue for screening. Physicians could use personal knowledge of their patients to update screening status or triage them for personalized letter, phone, or navigator outreach. The provider could also defer screening if, for example, they knew the patient had previously declined. If a provider did not act within 8 weeks, the patient defaulted to the automated outreach used in the AUC arm. The AUCPI intervention was designed to be faster and more effective than the AUC intervention.
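
For readers who find a procedural view helpful, the following sketch restates the escalation logic of the 2 study arms in simplified Python. The class, function, and field names are hypothetical and are meant only to mirror the workflow described above, not the actual TopCare implementation.

```python
from dataclasses import dataclass

# Hypothetical, simplified restatement of the outreach escalation described in
# the text. Names and timing checks are illustrative only; they do not
# reproduce the actual TopCare software.

@dataclass
class Patient:
    overdue: bool
    months_overdue: int
    high_risk_nonadherence: bool
    scheduled_or_opted_out: bool = False

def auc_outreach_step(p: Patient) -> str:
    """Automated (AUC) arm: letter, then delegate call, then navigator or inactivation."""
    if not p.overdue or p.scheduled_or_opted_out:
        return "no_action"
    if p.months_overdue == 0:
        return "send_letter"             # patient may self-schedule or contact the call center
    if p.months_overdue < 4:
        return "delegate_reminder_call"  # delegate in the provider's office places outgoing calls
    if p.high_risk_nonadherence:
        return "refer_to_patient_navigator"
    return "inactivate_for_8_months"     # re-contact after the inactivation period

def aucpi_outreach_step(p: Patient, physician_acted: bool, weeks_since_listed: int) -> str:
    """Physician-mediated (AUCPI) arm: physician triage first, AUC escalation as the default."""
    if physician_acted:
        return "physician_triage"        # update status, defer, or target letter/phone/navigator outreach
    if weeks_since_listed >= 8:
        return auc_outreach_step(p)      # no physician action within 8 weeks: default to AUC
    return "awaiting_physician_review"
```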

For the TopCare trial, 9 practices were randomized to each study arm. These practices included 88 physician full-time equivalents (FTEs) serving 103,870 patients eligible for screening according to USPSTF guidelines during the 1-year trial. The trial found no significant difference in overall screening completion across the 2 arms of the study (81.6% vs 81.4%).3 Furthermore, the 2 novel programs did not significantly improve screening rates compared with BUC. However, both interventions did alter work flows, shifting cancer screening activities away from physician visits, raising the possibility of cost savings compared with BUC.

Cost Analysis

To compare the costs of the AUC and AUCPI strategies with BUC, we assessed the time and resources devoted to breast, cervical, and colorectal cancer screenings under each strategy. We examined costs from the perspective of the MGH primary care network and projected what it would cost to implement each strategy in the full patient/provider population over a single year. We used micro-costing techniques4 to estimate the costs of the health IT tool and health IT training (both one-time costs), and mailing materials and personnel time (both ongoing costs). Because the software5 used in the TopCare trial was developed in-house, we estimated what it would have cost another provider organization to put in place a similar custom system built into an existing health IT infrastructure. The cost of the software was estimated by a consulting firm with health IT expertise (Massachusetts Technology Consultants, Boston). The AUCPI software development was more involved than AUC alone due to the creation of a refined user interface. In assessing the cost of the health IT tool, we omitted the hardware and software costs of the existing systems on which the tool was built.
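
As a schematic of this micro-costing structure, a 1-year strategy cost can be assembled from one-time and ongoing components, as in the sketch below. The categories and dollar values are hypothetical placeholders chosen only to illustrate the structure, not the study's actual cost inputs.

```python
# Schematic micro-costing aggregation: one-time costs are incurred once in the
# implementation year; ongoing costs scale with the eligible population.
# All values are hypothetical placeholders, not study estimates.

ELIGIBLE_PATIENTS = 103_870

one_time_costs = {             # incurred only in the first year
    "software_development": 130_000.0,
    "staff_training": 80_000.0,
}

ongoing_costs_per_patient = {  # recur every year, per eligible patient
    "physician_time": 34.00,
    "call_center_and_navigators": 0.60,
    "mailings": 0.40,
}

def first_year_cost(one_time, ongoing_per_patient, n_patients):
    """Total 1-year cost of a strategy under this simplified structure."""
    return sum(one_time.values()) + sum(ongoing_per_patient.values()) * n_patients

total = first_year_cost(one_time_costs, ongoing_costs_per_patient, ELIGIBLE_PATIENTS)
print(f"Illustrative first-year total: ${total:,.0f}")
```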

A series of group training sessions was conducted to instruct the clinical staff on how to use the health IT tool. Physicians and office staff were trained separately. Physicians were trained by a fellow physician and the project manager. Other staff were trained by the project manager alone. Refresher training and helpline support were also available and tracked. The estimated cost of training over the course of the study year was based on the time spent in training and the hourly wage (including fringe benefits) of both trainers and trainees. Software and training constituted fixed one-time costs of the interventions.

To track how the health IT tool affected physicians’ time spent in clinical care, surveys were administered asking the following question: “Thinking of the effort generated by a recent, typical half-day clinical session, about how much time did you spend on the following types of cancer screening, including time spent before, during, or after the session?” Respondents answered with separate time estimates for breast, cervical, and colorectal cancer screenings. The surveys were administered prior to the launch of the health IT tool and again approximately 1 year later. The responses were compared, and the change in total time spent on screening was calculated for each study arm using random effects models accounting for multiple observations for each respondent and the clustering of respondents within practices. Costs were estimated by extrapolating changes in provider time use over the course of a year and applying that to providers’ wages plus fringe benefits. Delegates were given a similar survey, but because a negligible number of delegates actually engaged with the novel AUC/AUCPI software and because their role was minimal during BUC, we do not include their time use in our analyses.

Patient navigators’ time was tracked as part of their usual responsibilities. Call center staff time was estimated based on the number of tasks performed and the average time it took to complete each task. Navigator and call center costs were then calculated using total time and staff wages plus fringe benefits. The number and cost of letters sent were tracked in the TopCare administrative database. Mailing, staff time, and clinician time were ongoing costs expected to fluctuate based on the number of eligible patients in the population.

Monte Carlo methods were used to aggregate costs, incorporating a reasonable range of parameter values as might exist at other institutions and producing a distribution of cost estimates across that range. We conducted 100,000 simulations using the common random number method to increase simulation efficiency.6 The mean for each cost component is reported for the BUC, AUC, and AUCPI screening strategies. Means and 95% CIs are also estimated for total costs and for the cost differences between screening strategies calculated in each simulation. All simulations were conducted using TreeAge software version 2013 (TreeAge Software, Inc, Williamstown, MA).
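
As an illustration of this simulation approach (a sketch only; the study's analysis was performed in TreeAge), the code below shows how common random numbers allow paired comparisons of strategy costs across draws. All parameter ranges and one-time cost values are hypothetical placeholders, not study estimates.

```python
import numpy as np

# Sketch of a Monte Carlo cost comparison using common random numbers (CRN):
# each simulation draw feeds the SAME random number to every strategy, so the
# estimated cost difference is not inflated by independent sampling noise.
# All parameter values below are hypothetical placeholders.

rng = np.random.default_rng(seed=42)
N_SIM = 100_000
N_PATIENTS = 103_870  # screening-eligible patients in the study population

def strategy_cost(u, one_time_cost, per_patient_low, per_patient_high):
    """One-year cost: fixed one-time costs plus uncertain ongoing per-patient costs.
    The shared uniform draw u maps to a per-patient cost within the stated range."""
    per_patient = per_patient_low + u * (per_patient_high - per_patient_low)
    return one_time_cost + per_patient * N_PATIENTS

u = rng.random(N_SIM)                                  # one common draw per simulation
cost_buc = strategy_cost(u, 0.0, 37.0, 42.0)           # hypothetical ongoing-cost range
cost_auc = strategy_cost(u, 200_000.0, 33.0, 37.0)     # hypothetical one-time plus ongoing costs
diff = cost_auc - cost_buc

print(f"Mean 1-year difference (AUC - BUC): ${diff.mean():,.0f}")
print(f"95% interval: ${np.percentile(diff, 2.5):,.0f} to ${np.percentile(diff, 97.5):,.0f}")
print(f"AUC cost-saving in {np.mean(diff < 0):.1%} of simulations")
```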

RESULTS

Physician Time Use

Just as screening rates were similar in the AUC and AUCPI groups, there were no statistical or practical differences in total physician time spent on cancer screening across the 2 intervention arms. Thus, to increase statistical precision, we assessed time use for the BUC strategy compared with the intervention strategies, regardless of trial arm (Table 1). Physicians reported reductions in time spent on each type of cancer screening, with the largest changes evident for colorectal cancer screening. Overall, for a typical half-day clinical session, the average physician-estimated time spent on tasks related to cancer screening was 6.5 minutes lower (95% CI, 1.2-11.8; P = .02) at the end of 1 year than just prior to the start of the trial. With just under 88 FTE physicians working 44 weeks per year, we estimated a reduction of 4189 physician-hours (2.38 FTEs) of cancer screening effort per year.
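
The annualized figure can be reproduced with simple arithmetic. The sketch below assumes roughly 10 half-day clinical sessions per physician FTE per week and a 40-hour FTE week; these are illustrative assumptions, not values reported in the study.

```python
# Back-of-the-envelope check of the annualized physician time savings.
# The sessions-per-week and hours-per-FTE-week values are assumptions made
# for illustration; the paper reports "just under 88" physician FTEs.

minutes_saved_per_session = 6.5
sessions_per_fte_week = 10     # assumed: two half-day sessions per weekday
weeks_per_year = 44
physician_ftes = 87.9          # "just under 88"

hours_saved = (minutes_saved_per_session * sessions_per_fte_week
               * weeks_per_year * physician_ftes) / 60
fte_equivalent = hours_saved / (40 * weeks_per_year)   # assumed 40-hour FTE week

print(f"{hours_saved:,.0f} physician-hours per year")  # approximately 4,190
print(f"{fte_equivalent:.2f} physician FTEs")          # approximately 2.38
```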

Program Costs

Table 2 presents our point estimates of cost components under the BUC, AUC, and AUCPI strategies over the course of a 1-year period. One-time costs included training and software development; software development accounted for 3.4% of total costs for AUC and 4.3% of total costs for AUCPI. The difference was mainly due to the additional programming effort required to develop the user interface for the AUCPI strategy. Personnel training costs were also greater for the more complex AUCPI strategy than under AUC due to the increased demands the user interface placed on physicians. In combination, the one-time implementation effort cost $51,022 more for AUCPI than for AUC. There were no one-time costs for BUC. Among ongoing costs, the value of physician time dominated, accounting for over 97%. Call center and mailing costs were lower under AUCPI than AUC because outreach was tailored based on physician knowledge of their patients’ care. Ongoing per-patient costs were lower for both AUC ($34.86) and AUCPI ($34.80) relative to BUC ($39.51). Estimated total costs for the practices’ cancer screening activities in the first year of a strategy’s implementation were approximately $4.10 million for BUC compared with $3.83 million for AUC and $3.88 million for AUCPI.

Across simulation draws, 1-year costs under the AUC strategy averaged $273,626 less than BUC (95% CI, -$770,904 to $192,419) (Table 3). In this comparison, AUC was cost-saving relative to BUC in 87.1% of simulations. The AUCPI strategy cost an average of $227,660 less than BUC (95% CI, -$726,156 to $239,056) and was cost-saving relative to BUC in 82.5% of simulations. When comparing AUC with AUCPI across simulations, AUC cost on average $45,967 less than AUCPI and was less expensive than AUCPI in 98.4% of simulations.

DISCUSSION

Stage 2 standards for “meaningful use” under the HITECH Act require that clinicians be able to “use clinically relevant information to identify patients who should receive reminders for preventive/follow-up care and send these patients the reminders….”7 Prior research has shown that visit-based screening reminders are effective at improving screening rates.8,9 However, the effectiveness of these reminders is limited to patients who visit their physician on a sufficiently regular basis. Population-based reminders are also effective, typically more so than visit-based reminders,10-14 as they have the added advantage of potentially reaching an entire patient population of interest, not just those who are regular consumers of care. Previously, most population-based reminder efforts relied on one-time scans of clinical records to identify patients eligible for screening. The TopCare trial is one of a new generation of studies14 to use a largely automated system for identifying and contacting patients due for screening on an ongoing basis. We tracked the program costs in order to assess the efficiency of population-based systems that are independent of visit-based care.

Although the TopCare trial found no significant differences in the effectiveness of the AUCPI and AUC strategies for increasing preventive cancer screening, our estimates suggest such programs may result in efficiency gains relative to BUC by achieving similar outcomes at a lower cost—a finding driven by decreased physician time devoted to cancer screening during office visits. Absent the savings from more efficient use of provider time, the AUC and AUCPI programs represented net cost increases relative to BUC. AUCPI resulted in lower outreach costs than AUC, but those were offset by increased software development costs.

If the value of decreased physician time fell entirely to the provider organization, under AUC we estimated a potential 1-year cost reduction compared with BUC of $273,626 spread over 18 practices and 103,870 screening-eligible patients, leading to a reduction in provider group resource use of $2.64 per patient screened. In future years, the difference would be $3.88 per patient. The proportion of simulations yielding favorable results suggests these estimates are moderately robust across the range of cost structures that might be encountered at other institutions.
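
These per-patient figures follow directly from the totals above, as the short check below illustrates. The sketch uses the rounded values reported in the text (the future-year calculation uses the ongoing per-patient costs from the abstract), so small differences from the published figures reflect rounding of the inputs.

```python
# Worked check of the per-patient savings cited above. Inputs are the rounded
# values reported in the text, so results may differ from the published
# figures by a cent or so.

first_year_saving_auc_vs_buc = 273_626   # includes one-time software/training costs
eligible_patients = 103_870

per_patient_first_year = first_year_saving_auc_vs_buc / eligible_patients
print(f"First-year saving per eligible patient: ${per_patient_first_year:.2f}")    # ~$2.63 (reported: $2.64)

# In later years the one-time costs no longer apply, so the gap equals the
# difference in ongoing per-patient costs (BUC vs AUC, abstract figures).
per_patient_future_years = 39.51 - 35.63
print(f"Future-year saving per eligible patient: ${per_patient_future_years:.2f}")  # $3.88
```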

The ability of provider organizations to capture the physician time savings identified in our cost analysis by implementing AUC or AUCPI is uncertain. The potential for provider groups to achieve real, monetary savings represents a best-case assessment under the assumption that physician time freed up by the intervention is entirely repurposed to activities that are productive for the practice organization. This could happen in several ways. Most directly, if the practice organization is owned by the physicians, they would realize the benefit of their own improved time use. Revenue-generating activities made possible by more efficient cancer screening might include increasing panel sizes under capitated payment systems or adding revenue-generating visits under fee-for-service payment. Under payment mechanisms with pay-for-performance components, revenue could be generated if the time saved on cancer screening were devoted to improving the organization’s scoring on quality metrics associated with financial withholds. Alternatively, provider organizations could reduce their salary obligations by reducing the size of their physician workforce.

It is also possible that reduced time spent on cancer screening may not benefit provider organizations at all. For example, if physicians simply reduce their work hours while maintaining their existing level of compensation, the AUC and AUCPI interventions would function mainly as transfers from the provider organization to the physicians and would only increase costs to the provider organization. Unfortunately, data from the TopCare trial are not sufficiently detailed to determine precisely who gained from the reduction in physician time devoted to cancer screening at our institution.

There have been numerous other economic analyses of programs aimed at increasing cancer screenings. Typically, they examine both program costs and screening costs. For example, Green et al assessed an outreach system developed to increase colorectal cancer screening at Group Health Cooperative in Washington state. They estimated incremental savings of $89 per patient while increasing screening rates by 25 percentage points over 2 years compared with usual care.14 Savings in that study were due to a shift away from colonoscopy to fecal occult blood test screening, a cheaper method. However, their study recruited volunteer patient participants, whereas ours focused on the entire screening-eligible primary care patient population in our system. Other studies found increased screening rates compared with usual care, but at increased cost.13,15-21 However, none of the existing studies considered the value of reducing physician time devoted to promoting cancer screening—a key financial outcome which could be an important driver of savings.

Practice network size is important to consider when assessing the generalizability of our findings. In the TopCare trial, fixed costs, such as the development and implementation of the software platform, were spread over a large primary care system. All else being equal, in smaller health systems, any potential savings generated by reduced physician effort would be smaller relative to the fixed costs. The inverse is also true: the relative importance of fixed startup costs diminishes when there are more practices seeing more patients eligible for screening over time. Since the TopCare trial was completed, TopCare software has become available commercially, which will reduce many of the one-time costs associated with implementation.5

Limitations

Our study must be considered in the context of certain limitations. First, it took place in a practice system affiliated with a large academic medical center. Screening rates for breast and colorectal cancers in our patient population (82.7% and 76.2%, respectively)3 were substantially higher than the national averages (72.4% and 58.6%, respectively); cervical cancer screening rates were comparable.22 If the AUC model were implemented in a population with lower baseline screening rates, it might yield different results depending on physicians’ baseline attention to visit-based screening reminders and the patient population’s willingness to be screened. Second, despite the intervention’s design, practice delegates had limited involvement in patient outreach. Better delegate engagement or, alternatively, consolidating the delegates’ role in patient outreach into a centralized population health coordinator might shift the costs and improve the effectiveness of the interventions relative to BUC.3,23 Third, physician time use was assessed by self-report rather than by objective measurement. One concern is that responses might be biased in favor of reduced time devoted to cancer screening because respondents were not blinded to the interventions. However, the surveys were administered a year apart, and time use was assessed in absolute terms (minutes) rather than relative terms (“compared with last year”), helping to protect against such bias.

CONCLUSIONS

We found that although a program to improve population-based cancer screening using an advanced health IT platform did not increase patient screening rates for breast, cervical, or colorectal cancer, it has the potential to yield substantial cost savings if the reduced time that physicians must devote to cancer screening outreach can be monetized by the provider organization. This will be most likely in networks that are able to quickly align physician and system goals as new payment models evolve. Future research is needed to understand the costs and benefits of implementing this system in other primary care settings and for other population health initiatives.

Author Affiliations: Mongan Institute for Health Policy (DEL), Division of General Internal Medicine, Medical Services (JMA, SJA), and Laboratory of Computer Science (AHZ), Massachusetts General Hospital, Harvard Medical School, Boston, MA; Harvard University (VNM), Boston, MA; Division of Research, Kaiser Permanente Northern California (RWG), Oakland, CA.

Source of Funding: This study was supported by grants from the Agency for Healthcare Research and Quality (AHRQ R03-HS020308 [DEL], R18-HS018161 [SJA]), and the Controlled Risk Insurance Company/Risk Management Foundation, and by institutional funding through the Massachusetts General Hospital Primary Care Operations Improvement Program and the Massachusetts General Physicians Organization. No funding source had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript. Dr Levy had full access to all of the data in the study and takes responsibility for the integrity and the accuracy of the data/analysis.

Author Disclosures: Massachusetts General Hospital entered into a royalty arrangement on June 27, 2013, to commercialize the population management system with SRG Technology, a for-profit company. Drs Zai and Atlas are beneficiaries of this royalty arrangement, but have not received any payments to date. Drs Levy, Zai, and Atlas and Mr Ashburner are employees of Massachusetts General Hospital. Drs Levy and Atlas received grant funding from AHRQ for this study. Dr Zai is an employee of SRG Technology, which sells the HIT system discussed; he has also consulted for them, has been paid by them to present at conferences, and owns SRG Technology stock. Dr Grant and Mr Munshi report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (DEL, JMA, RWG, SJA); acquisition of data (DEL, JMA, AHZ, SJA); analysis and interpretation of data (DEL, VNM); drafting of the manuscript (DEL, VNM, JMA); critical revision of the manuscript for important intellectual content (DEL, VNM, JMA, AHZ, RWG, SJA); statistical analysis (DEL, VNM); provision of patients or study materials (AHZ); obtaining funding (DEL, SJA); and supervision (DEL, SJA).

Address correspondence to: Douglas E. Levy, PhD, Massachusetts General Hospital, 50 Staniford St, 9th fl, Boston, MA 02114. E-mail: dlevy3@mgh.harvard.edu.

REFERENCES

1. Johnson NB, Hayes LD, Brown K, Hoo EC, Ethier KA; CDC. CDC national health report: leading causes of morbidity and mortality and associated behavioral risk and protective factors—United States, 2005-2013. MMWR Surveill Summ. 2014;63(suppl 4):3-27.

2. Blumenthal D. Stimulating the adoption of health information technology. N Engl J Med. 2009;360(15):1477-1479.

3. Atlas S, Zai AH, Ashburner JM, et al. Non-visit-based cancer screening using a novel population management system. J Am Board Fam Med. 2014;27(4):474-485.

4. Gold MR, Siegel JE, Russell LB, Weinstein MC, eds. Cost-effectiveness in Health and Medicine. New York, NY: Oxford University Press; 1996.

5. TopCare version 2.0 [software]. Ft. Lauderdale, FL: SRG Technology; 2011.

6. Murphy DR, Klein RW, Smolen LJ, Klein TM, Roberts SD. Using common random numbers in health care cost-effectiveness simulation modeling. Health Serv Res. 2013;48(4):1508-1525.

7. CMS. Stage 2 eligible professional (EP) meaningful use core and menu measures table of contents. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/Stage2_MeaningfulUseSpecSheet_TableContents_EPs.pdf. Published October 2012. Accessed November 13, 2015.

8. Brawarsky P, Brooks DR, Mucci LA, Wood PA. Effect of physician recommendation and patient adherence on rates of colorectal cancer testing. Cancer Detect Prev. 2004;28(4):260-268.

9. Lerman C, Rimer B, Trock B, Balshem A, Engstrom PF. Factors associated with repeat adherence to breast cancer screening. Prev Med. 1990;19(3):279-290.

10. Atlas SJ, Ashburner JM, Chang Y, Lester WT, Barry MJ, Grant RW. Population-based breast cancer screening in a primary care network. Am J Manag Care. 2012;18(12):821-829.

11. Chaudhry R, Scheitel SM, McMurtry EK, et al. Web-based proactive system to improve breast cancer screening: a randomized controlled trial. Arch Intern Med. 2007;167(6):606-611.

12. Sequist TD, Zaslavsky AM, Marshall R, Fletcher RH, Ayanian JZ. Patient and physician reminders to promote colorectal cancer screening: a randomized controlled trial. Arch Intern Med. 2009;169(4):364-371.

13. Wagner TH. The effectiveness of mailed patient reminders on mammography screening: a meta-analysis. Am J Prev Med. 1998;14(1):64-70.

14. Green BB, Wang CY, Anderson ML, et al. An automated intervention with stepped increases in support to increase uptake of colorectal cancer screening: a randomized trial. Ann Intern Med. 2013;158(5, pt 1):301-311.

15. Lairson DR, DiCarlo M, Myers RE, et al. Cost-effectiveness of targeted and tailored interventions on colorectal cancer screening use. Cancer. 2008;112(4):779-788.

16. Misra S, Lairson DR, Chan W, et al. Cost effectiveness of interventions to promote screening for colorectal cancer: a randomized trial. J Prev Med Public Health. 2011;44(3):101-110.

17. Sequist TD, Franz C, Ayanian JZ. Cost-effectiveness of patient mailings to promote colorectal cancer screening. Med Care. 2010;48(6):553-557.

18. Shankaran V, Luu TH, Nonzee N, et al. Costs and cost effectiveness of a health care provider-directed intervention to promote colorectal cancer screening. J Clin Oncol. 2009;27(32):5370-5375.

19. Wolf MS, Fitzner KA, Powell EF, et al. Costs and cost effectiveness of a health care provider-directed intervention to promote colorectal cancer screening among veterans. J Clin Oncol. 2005;23(34):8877-8883.

20. Fishman P, Taplin S, Meyer D, Barlow W. Cost-effectiveness of strategies to enhance mammography use. Eff Clin Pract. 2000;3(5):213-220.

21. Saywell RM Jr, Champion VL, Skinner CS, Menon U, Daggy J. A cost-effectiveness comparison of three tailored interventions to increase mammography screening. J Womens Health (Larchmt). 2004;13(8):909-918.

22. Klabunde CN, Brown M, Ballard-Barbash R, et al. Cancer screening—United States, 2010. MMWR Morb Mortal Wkly Rep. 2012;61(3):41-45.

23. Zai AH, Kim S, Kamis A, et al. Applying operations research to optimize a novel population management system for cancer screening. J Am Med Inform Assoc. 2014;21(e1):e129-e135.
