
Small Practices' Experience With EHR, Quality Measurement, and Incentives

Publication
Article
The American Journal of Managed Care, Special Issue: Health Information Technology - Guest Editor: Farzad Mostashari, MD, ScM
Volume 19
Issue SP 10

A study assessing clinician attitudes and experiences after participation in a New York City cardiovascular disease-focused quality recognition and financial incentive program using health information technology.

Objectives:

To assess clinician attitudes and experiences in Health eHearts, a quality recognition and financial incentive program using health information technology.

Study Design:

Survey of physicians.

Methods:

A survey was administered to the lead clinician at each of the 140 participating practices. Survey domains included clinicians’ experiences and attitudes toward the selected clinical quality measures focused on cardiovascular care, use of electronic health records (EHRs), technical assistance visits, quality measurement reports, and incentive payments. Responses from practices receiving financial incentives were compared with those from the control group (no financial rewards).

Results:

The survey response rate was 74%. The majority of respondents reported receiving and reviewing the quality reports (89%), agreed with the prioritization of measures (89%), and understood the information given in the quality reports (95%). Over half of the respondents had a quality improvement visit (56%), with incentive clinicians more likely to have had a visit than control clinicians (68% vs 43%, P = .01). Incentive group respondents also reported using clinical decision support system alerts more often than control group respondents, although this difference was not statistically significant (92% vs 82%, P = .11).

Conclusions:

Clinicians in both incentive and control groups reported positive experiences with the program. No differences were detected between groups regarding agreement with selected clinical measures or their relevance to the patient population. However, clinicians in the incentive group were more likely to review quarterly performance reports and access quality improvement visits. Incentives may be used to further engage clinicians operating in small independently owned practices to participate in quality improvement activities.

Am J Manag Care. 2013;19(11 Spec No. 10):eSP12-eSP18

  • With adequate technical support, small practices can be engaged in recognition and financial rewards programs.

  • Clinician buy-in to the design of the program was high. A majority of the clinicians reported receiving, reviewing, and understanding the quality reports; agreed with the focus on cardiovascular quality measures; and thought the measures were clinically meaningful.

  • Financially incentivized clinicians were somewhat more engaged: they were more likely to participate in quality improvement visits and in trainings on topics such as using clinical decision support systems and other electronic health record functionalities.

Using incentives and pay-for-performance (P4P) to realign payment to address low quality of care or gaps in preventive services has had limited success in improving healthcare quality.1-6 For the most part, studies of P4P have focused on large group practices.7-10 Small practices, where the majority of patients nationally still receive care,11 historically face greater obstacles to improving care because they have lacked the scale and organizational structure to conduct quality improvement activities or to participate in P4P.12,13

It is important to assess clinician attitudes toward key program features, such as the selection of target quality measures, the trustworthiness of performance reports, and the relevance of quality targets. Understanding clinicians’ motivations and opinions toward a quality improvement program may help predict the extent to which they change their clinical behavior.14 Specific program features, such as the frequency and type of performance feedback and the assistance available for meeting program goals, could affect clinician awareness and understanding of particular programs. Clinician skepticism about the accuracy of reports, or distrust of or lack of transparency in the data used for reporting or payment, may lead to less engagement in incentive programs or quality improvement efforts.15-17

With widespread implementation of electronic health records (EHRs),18 EHR-enabled solo and small group practices have been shown to be capable of responding to quality improvement (QI) initiatives, as well as to programs that incentivize quality measurement.19 It is unknown how clinicians feel about quality measurement and pay-for-performance based on EHR-derived quality measures. To address this gap in the literature, we surveyed clinicians participating in Health eHearts, a cluster-randomized trial comparing the effect of a financial incentive and QI assistance program on measures of cardiovascular care with the effect of providing quality reports and QI assistance alone. The Primary Care Information Project (PCIP), a bureau of the New York City Department of Health and Mental Hygiene, piloted Health eHearts in practices that had recently adopted an EHR and were receiving ongoing QI visits to improve practice work flows using health information technology. Survey domains included overall experience with the program, as well as experience with the tools supporting QI efforts. In addition, we assessed whether experiences or attitudes differed between practices that did and did not receive incentives.

METHODS

Practice Selection and Assignment

PCIP recruited 140 small practices to participate in Health eHearts. The program ran from April 2009 to September 2011. Practices were eligible if they had been “live” on the EHR for at least 3 months, had a minimum of 200 patients with cardiovascular diagnoses related to the quality measurement targets, and were transmitting quality measures through the EHR to PCIP. Practices agreed to be randomized into “recognition” or “rewards” groups. Rewards consisted of financial incentives for each numerator met in 4 areas of cardiovascular care: aspirin therapy, blood pressure control, cholesterol control, and smoking cessation intervention (ABCS). Incentive amounts ranged from $20 to $150 per patient with the goal achieved, with higher payments for harder-to-treat patients (eg, those with comorbid diseases or lower socioeconomic status). The recognition group served as a control. Both groups (control and incentive) received quarterly quality performance reports, telephone and onsite coaching on work flow redesign, and training on documentation, and were invited to a recognition program at the end of the year. The quality reports summarized practices’ progress on the ABCS, compared their performance with that of other practices in Health eHearts, and showed trends over the previous 6 months.
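
As a rough illustration of this payment structure, the sketch below computes a per-patient reward for one ABCS goal. The $20 to $150 range comes from the program description above; the exact tiering rules (which comorbidities or socioeconomic factors triggered higher payments, and any intermediate amounts) are not specified in the article, so the logic here is hypothetical.

```python
# Hypothetical sketch of the tiered Health eHearts incentive described above.
# The $20-$150 per-patient range is from the article; the tiering logic and
# the intermediate amount are assumptions for illustration only.
def incentive_payment(goal_met: bool, comorbid: bool, low_ses: bool) -> int:
    """Per-patient payment for one ABCS goal (illustrative tiers)."""
    if not goal_met:
        return 0
    if comorbid and low_ses:
        return 150  # hardest-to-treat patients earn the top payment
    if comorbid or low_ses:
        return 85   # assumed intermediate tier
    return 20       # base payment for a goal met

# A practice's total reward would then be the sum of these payments
# across its patients and the 4 ABCS goals.
```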

Survey Administration and Instrument

Health eHearts was a 2-year program, with cohort 1 enrolled at the beginning and continuing for 2 years and cohort 2 enrolled at the beginning of year 2. Practices were surveyed before and after each program year. This study focuses on the survey administered to all participating practices at the end of Health eHearts. A 33-item survey (29 items in the control group version) was administered in October 2011. A lead clinician from each practice was invited to respond to the survey first by mail, followed by at least 3 reminder phone calls to nonresponding clinicians. Survey administration continued through February 2012.

The instrument was developed collaboratively by PCIP and researchers from the University of California, San Francisco (UCSF), who were contracted to conduct the overall evaluation of the program. The instrument focused on several aspects of the Health eHearts program: clinicians’ experiences and attitudes toward the selected quality measures (ABCS), training on use of the EHR or achievement of the ABCS, QI visits, tracking patients for preventive services using the EHR, quality reports, incentive payments (incentive group only), recognition programs in general, and demographics. The survey was pretested with program staff and a clinician in PCIP. Items used in this survey were based on an earlier instrument co-developed with UCSF to assess barriers and facilitators for small practices participating in P4P. Topics identified as barriers included: accuracy and regularity of reports relevant to the practice’s patient population, measurement targets that were meaningful to the practice population, availability of training or assistance to conduct QI activities, and use of practice tools, such as the EHR, to identify patients and document for quality measurement reports.

The survey was considered part of program evaluation activities conducted by PCIP and was deemed exempt by the Institutional Review Board at New York City Department of Health and Mental Hygiene. Clinicians in the control group were offered a $100 honorarium for participating in the survey.

Analysis

Frequencies and averages were calculated for practice characteristics, stratified by whether the practice was in the incentive or control group. All items in the survey were recoded into dichotomous variables and then stratified by incentive and control groups. Statistically significant differences between the incentive and control groups were determined using χ2 tests. Data were analyzed using SAS software, version 9.2 (SAS Institute, Cary, North Carolina).
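
For readers who want to see the group comparison concretely, the following is a minimal sketch of a χ2 test on a 2 × 2 table. The counts are approximations reconstructed from the reported QI-visit percentages (68% of 54 incentive vs roughly 43% of 50 control clinicians); the authors’ analysis was done in SAS 9.2, so this Python/SciPy version is illustrative only.

```python
# Chi-square test of incentive vs control participation, mirroring the
# analysis described above. Counts are approximate reconstructions from
# the reported percentages (37/54 ~ 68% incentive, 22/50 ~ 44% control
# clinicians with a QI visit); the paper's analysis used SAS 9.2.
from scipy.stats import chi2_contingency

observed = [[37, 17],   # incentive: had QI visit, did not
            [22, 28]]   # control:   had QI visit, did not

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.3f}")  # P close to the reported .01
```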

Items were recoded in the following manner: answer choices of “all of the time with all of my patients,” “all of the time with a portion of my patients,” or “some of the time with a portion of my patients” were considered use of the functionalities, and a “never” response was considered nonuse. Clinician responses on questions about their experience or use of the quality reports were recoded as agreement with the statement (“agree/strongly agree”) or disagreement (“neutral” or “disagree/strongly disagree”). QI visits and training were recoded as helpful (“helpful/very helpful”) or not helpful (“not at all helpful/slightly helpful”). Responses to items regarding clinicians’ intentions to perform future quality improvement activities were grouped into a positive response (“likely” or “very likely”) or a negative response (“not likely”). Responses of “don’t know” and “not applicable” and missing values were excluded.
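
A minimal sketch of this recoding scheme is shown below; the response labels are quoted from the text, but the function and variable names are ours.

```python
# Dichotomization of survey responses as described above. Response labels
# are taken from the text; the helper names are illustrative.
EHR_USE = {
    "all of the time with all of my patients",
    "all of the time with a portion of my patients",
    "some of the time with a portion of my patients",
}
EXCLUDED = {"don't know", "not applicable", None}  # dropped from analysis

def recode_ehr_use(response: str | None) -> int | None:
    """1 = any reported use, 0 = 'never', None = excluded."""
    if response in EXCLUDED:
        return None
    return 1 if response in EHR_USE else 0

def recode_agreement(response: str | None) -> int | None:
    """1 = 'agree'/'strongly agree'; 'neutral' and disagreement = 0."""
    if response in EXCLUDED:
        return None
    return 1 if response in {"agree", "strongly agree"} else 0
```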

RESULTS

Clinician and Practice Characteristics

Of the 140 eligible clinicians (70 per group), 104 completed the survey (response rate of 74%; 54 incentive and 50 control clinicians, P = .18). The majority of respondents specialized in family or internal medicine (98.1%), and the average respondent had been in practice for over 18 years (Table 1). Mean length of time “live” on the EHR was 37 months, with an average of 7000 encounters per year. No statistically significant differences were observed between the incentive and control groups in clinician- or practice-level characteristics. No statistically significant differences were observed between survey respondents and nonrespondents except for the proportion of patients who were self-pay (3.9% for respondents vs 7.0% for nonrespondents; data not shown).

Clinician Experience With Health eHearts

Overall, clinicians reported positive experiences. Respondents reported receiving and reviewing the quality reports (89%), agreed with the prioritization of the ABCS (89%), thought the ABCS were clinically meaningful for their population (87%), and understood the information given in the quality reports (95%) (Figure). Clinicians in the program were using the EHR tools at least some of the time (Figure).

Quality Reports

Nearly all clinicians (95%) responded that they understood the information summarized in the reports (Figure). A majority agreed that the data in the reports accurately reflected the practice’s performance (69%) and that enough information was provided to track progress toward meeting targets (77%). There were few differences between the groups, although clinicians receiving incentives were more likely to report that they received and reviewed the reports than control clinicians (P = .02).

Quality Improvement Visits and Training

There were significant differences between the incentive and control groups in program participation (Table 2). Over half of the respondents had a QI visit (56%); however, more clinicians in the incentive group reported having visits than in the control group (68% vs 43%, P = .01). Both groups reported that the visit was helpful (85% vs 80%, P = .57), and the incentive group was more likely to report that PCIP staff were accessible (69% vs 43%, P = .02). More clinicians in the incentive group had positive responses to training delivered through webinars (group online workshops) and WebEx sessions (virtual “visits” over the Internet, in which PCIP staff can access the participant’s computer terminal and “talk through” use of the EHR) than clinicians in the control group. Overall, respondents expressed interest in more QI visits (81%).

Tracking Patients for Preventive Services Using EHR Tools

All respondents reported some use of the EHR functionalities (Figure). Clinical decision support system (CDSS) alerts (automated alerts and reminders for preventive services) and smart forms (automated question flows that assist clinicians in taking patient histories) were the most used. Incentive clinicians were more likely to report using each EHR tool, although the differences were not statistically significant, with the exception of using order sets to identify patients in need of preventive services (83% incentive vs 59% control, P = .01).

Intention to Continue Activities After Health eHearts

Most respondents indicated that they intended to generate quality reports after the program ended (80%) and to allocate staff time to focus on QI activities (70%) (Table 2). Incentive clinicians were more likely than control clinicians to report that they would generate quality reports (87% vs 72%, P = .07), track their practices’ progress toward meeting quality measurement goals (91% vs 78%, P = .09), and hold regular meetings or check-ins (71% vs 57%, P = .14), although these differences did not reach statistical significance.

DISCUSSION

Small practice clinicians had positive experiences with this recognition and financial rewards program designed to improve the delivery of clinical preventive services. Clinicians in the incentive group were more likely than those in the control group to report participating in quality improvement activities offered by the program, such as reviewing the quality reports, using order sets, and participating in program training sessions. The high level of buy-in to the program is demonstrated by respondents’ ratings of the quality reports as usable and accurate and by their reported agreement with the ABCS prioritization of preventive cardiovascular care.

Past studies document instances of clinician skepticism about the validity of clinical quality measurements or the accuracy of reports, leading to less engagement of clinicians in quality improvement efforts.15,16 In addition, because of a lack of transparency in the data used for reporting or payment, some P4P programs have been seen as a threat to clinicians’ autonomy and sense of control.17 The Health eHearts program addressed these issues by generating reports directly from the practices’ EHRs, offering transparency into the data used for quality measurement, and providing QI assistance and help with troubleshooting problem areas, with the intent of improving clinicians’ sense of control over measured performance.

Alignment of the program goals with the practice’s organizational structure and culture has been associated with successful P4P implementation.20 The majority of clinicians agreed with the prioritization of the ABCS and found them to be meaningful to their practice. Positive clinician attitude has been associated with successful implementation of EHRs21 and is potentially an important contributor to continued EHR use, especially in small independently owned practices that do not have dedicated staff for quality measurement or EHR-based reporting.

Robust EHRs can systematize and streamline work flow by allowing clinicians to use key features, such as CDSS.22 However, small practices are less likely to utilize these features.23,24 These survey results suggest that providing QI assistance along with incentives can be effective in engaging clinicians both during a program and potentially for sustaining continued QI activities.

Limitations

Our study has several limitations. As a self-reported survey, it is subject to social desirability bias, whereby clinicians may be inclined to respond positively rather than critically. This bias likely affected the incentive and control groups equally, so the observed between-group differences in reported engagement with quality improvement activities should be unaffected; however, the overall experience ratings may be more positive than they would be otherwise.

It is also possible that the overall ratings overstate the experience of all program participants, since some participants did not respond. However, the response rate was high (74%), and there were few significant differences in practice characteristics between respondents and nonrespondents.

Further Research

Further research should examine whether QI efforts are sustained in the absence of incentives. A recent study using independent data comparing PCIP practices with non-PCIP comparison practices in New York State also found that technical assistance visits were instrumental in improving quality.25 It remains unclear whether practices will sustain these activities once routine quality measurement is established or QI technical assistance ends. Most respondents indicated intentions of continuing QI work, but fewer anticipated investing ongoing resources (meetings, staff time). Further study is warranted regarding the sustainability of the intervention and the power of good intentions in the absence of resources.

Implications

Incentives may not be necessary to motivate clinicians to participate in a program focused on increasing the delivery of clinical preventive services. However, practices that received incentives were more likely to report engaging in quality improvement-related activities. An incentive system implemented in the context of robust information systems may drive use of specific EHR tools and follow-through on quality improvement activities.

As part of the Patient Protection and Affordable Care Act,26 new models of care delivery and reimbursement are being implemented and tested. Ways to facilitate clinician engagement, especially for small independently owned practices, are needed. Our study supports the hypothesis that clinician buy-in and engagement are possible if the program ensures that the quality measures it uses are clinically meaningful and that its quality reports are relevant and accurate.

Author Affiliations: From Primary Care Information Project (RB, MSR, CHW, JJW, AHP, SCS), New York City Department of Health and Mental Hygiene, Long Island City, NY; Department of Pediatrics (NSB), Department of Internal Medicine (RAD), Philip R. Lee Institute for Health Policy Studies (RAD), University of California San Francisco, San Francisco, CA.

Funding Source: This study was partially funded by the Agency for Healthcare Research and Quality (R18 HS018275, R18 HS019164), the New York City Tax Levy, and the Robin Hood Foundation.

Author Disclosures: The authors (RB, MSR, CHW, JJW, NSB, AHP, SCS, RAD) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (RB, MSR, JJW, NSB, SCS, RAD); acquisition of data (RB, MSR, CHW, JJW, SCS); analysis and interpretation of data (RB, MSR, CHW, JJW, NSB, SCS); drafting of the manuscript (RB, MSR, CHW, JJW, AHP, SCS); critical revision of the manuscript for important intellectual content (RB, MSR, CHW, JJW, NSB, AHP, SCS, RAD); statistical analysis (RB, MSR, JJW); obtaining funding (AHP); administrative, technical, or logistic support (RB, MSR, CHW, JJW, SCS); and supervision (MSR, JJW, SCS, RAD).

Address correspondence to: Sarah C. Shih, MPH, New York City Department of Health and Mental Hygiene, Primary Care Information Project, 42-09 28th St, 12th Fl, Queens, NY 11101. E-mail: sshih@health.nyc.gov.

1. Grossbart SR. What’s the return? assessing the effect of “pay-for-performance” initiatives on the quality of care delivery. Med Care Res Rev. 2006;63(1 suppl):29S-48S.

2. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486-496.

3. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of Premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615.

4. Ryan AM. Effects of the Premier Hospital Quality Incentive Demonstration on Medicare patient mortality and cost. Health Serv Res. 2009;44(3):821-842.

5. Ryan AM, Blustein J, Casalino LP. Medicare’s flagship test of pay-for-performance did not spur more rapid quality improvement among low-performing hospitals. Health Aff (Millwood). 2012;31(4):797-805.

6. Werner RM, Dudley RA. Medicare’s new hospital value-based purchasing program is likely to have only a small impact on hospital payments. Health Aff (Millwood). 2012;31(9):1932-1940.

7. Van Herck P, De Smedt D, Annemans L, et al. Systematic review: effects, design choices, and context of pay-for-performance in health care. BMC Health Serv Res. 2010;10:247.

8. Scott A, Sivey P, Ait Ouakrim D, et al. The effect of financial incentives on the quality of health care provided by primary care physicians. Cochrane Database Syst Rev. 2011;(9):CD008451.

9. Chung S, Palaniappan LP, Trujillo LM, Rubin HR, Luft HS. Effect of physician-specific pay-for-performance incentives in a large group practice. Am J Manag Care. 2010;16(2):e35-e42.

10. Chung S, Palaniappan L, Wong E, Rubin H, Luft H. Does the frequency of pay-for-performance payment matter? experience from a randomized trial. Health Serv Res. 2010;45(2):553-564.

11. Rao SR, DesRoches CM, Donelan K, Campbell EG, Miralles PD, Jha AK. Electronic health records in small physician practices: availability, use, and perceived benefits. J Am Med Inform Assoc. 2011;18(3):271-275.

12. Tollen LA. Physician organization in relation to quality and efficiency of care: a synthesis of recent literature. The Commonwealth Fund. 2008;(89).

13. Crosson FJ. The delivery system matters. Health Aff (Millwood). 2005;24(6):1543-1548.

14. Young GJ, Meterko M, White B, et al. Physician attitudes toward pay-for-quality programs: perspectives from the front line. Med Care Res Rev. 2007;64:331-343.

15. Casalino LP, Alexander GC, Jin L, Konetzka RT. General internists’ views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff (Millwood). 2007;26(2):492-499.

16. Pham HH, Bernabeo EC, Chesluk BJ, Holmboe ES. The roles of practice systems and individual effort in quality performance. BMJ Qual Saf. 2011;20(8):704-710.

17. Epstein AM, Lee TH, Hamel MB. Paying physicians for high-quality care. N Engl J Med. 2004;350(4):406-410.

18. Medicare and Medicaid Programs; Electronic Health Record Incentive Program; final rule. 42 CFR Parts 412, 413, 422 et al. Fed Regist. 2010;75:44314-44588.

19. Bardach NS, Wang JJ, De Leon SF, et al. Effect of pay-for-performance incentives on quality of care in small practices with electronic health records: a randomized trial. JAMA. 2013;310(10):1051-1059.

20. Young GJ, Beckman H, Baker E. Financial incentives, professional values and performance: a study of pay-for-performance in a professional organization. J Organ Behav. 2012;33:964-983.

21. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223-1238.

22. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-752.

23. DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care: a national survey of physicians. N Engl J Med. 2008;359(1):50-60.

24. Simon SR, Kaushal R, Cleary PD, et al. Physicians and electronic health records: a statewide survey. Arch Intern Med. 2007;167(5):507-512.

25. Ryan AM, Bishop TF, Shih S, Casalino LP. Small physician practices in New York needed sustained help to realize gains in quality from use of electronic health records. Health Aff (Millwood). 2013;32(1):53-62.

26. The Patient Protection and Affordable Care Act. http://www.gpo.gov/fdsys/pkg/BILLS-111hr3590enr/pdf/BILLS-111hr3590enr.pdf.
