https://www.ajmc.com/journals/issue/2013/2013-11-vol19-sp/small-practices-experience-with-ehr-quality-measurement-and-incentives
Small Practices' Experience With EHR, Quality Measurement, and Incentives

Rohima Begum, MPH; Mandy Smith Ryan, PhD; Chloe H. Winther, BA; Jason J. Wang, PhD; Naomi S. Bardach, MD; Amanda H. Parsons, MD; Sarah C. Shih, MPH; and R. Adams Dudley, MD, MBA

Use of incentives and pay-for-performance (P4P) to realign payment to address problems of low quality of care or gaps in preventive services has had limited success in improving the quality of healthcare.1-6 For the most part, studies on P4P have focused on large group practices.7-10 Small practices, where the majority of patients nationally still receive care,11 have historically faced greater obstacles to improving care because they lack the scale and organizational structure to conduct quality improvement activities or to participate in P4P.12,13

It is important to assess clinician attitudes toward key program features, such as the selection of target quality measures, trust in performance reports, and relevance of quality targets. Understanding clinician motivations and opinions toward a quality improvement program may help predict the extent to which they change their clinical behavior.14 Specific program features, such as the frequency and type of performance feedback and available assistance for meeting program goals, could potentially affect clinician awareness and understanding of particular programs. Clinician skepticism about the accuracy of reports, or distrust of or lack of transparency in data used for reporting or payment, may lead to less engagement of clinicians in incentive programs or quality improvement efforts.15-17

With widespread implementation of electronic health records (EHRs),18 EHR-enabled solo and small group practices have been shown to be capable of responding to quality improvement (QI) initiatives, as well as to programs that incentivize using quality measurement.19 It is unknown how clinicians will feel about quality measurement and pay-for-performance using EHR-derived quality measures. To address this gap in the literature, we surveyed clinicians participating in Health eHearts, a cluster-randomized trial comparing the effect of a financial incentive plus QI assistance program on measures of cardiovascular care with the effect of providing quality reports and QI assistance alone. The Primary Care Information Project (PCIP), a bureau of the New York City Department of Health and Mental Hygiene, piloted Health eHearts in practices that had recently adopted an EHR and were receiving ongoing QI visits to improve practice work flows using health information technology. Survey domains included overall experience with the program, as well as experience with the tools supporting QI efforts. In addition, we assessed whether experiences and attitudes differed between practices that received incentives and those that did not.

METHODS

Practice Selection and Assignment


PCIP recruited 140 small practices to participate in Health eHearts. The program ran from April 2009 to September 2011. Practices were eligible if they had been “live” on the EHR for at least 3 months, had a minimum of 200 patients with cardiovascular diagnoses related to the quality measurement targets, and were transmitting quality measures through the EHR to PCIP. Practices agreed to be randomized into “recognition” or “rewards” groups. Rewards consisted of financial incentives for each numerator met in 4 areas of cardiovascular care: aspirin therapy, blood pressure control, cholesterol control, and smoking cessation intervention (ABCS). Incentive amounts ranged from $20 to $150 per patient for whom a goal was achieved, with higher payments for harder-to-treat patients (eg, those with comorbid diseases or lower socioeconomic status). The recognition group served as a control. Both groups (control and incentive) received quarterly quality performance reports, telephone and onsite coaching on work flow redesign, and training on documentation, and were invited to a recognition program at the end of the year. The quality reports summarized practices’ progress on the ABCS, compared their performance with that of other practices in Health eHearts, and showed trends over the previous 6 months.

Survey Administration and Instrument

Health eHearts was a 2-year program, with cohort 1 enrolled at the beginning and continuing for 2 years and cohort 2 enrolled at the beginning of year 2. Practices were surveyed before and after each program year. This study focuses on the survey administered to all participating practices at the end of Health eHearts. A 33-item survey (29 items in the control group version) was administered in October 2011. A lead clinician from each practice was invited to respond to the survey first by mail, followed by at least 3 reminder phone calls to nonresponding clinicians. Survey administration continued through February 2012.

The instrument was developed in collaboration between PCIP and researchers from the University of California, San Francisco (UCSF), who were contracted to conduct the overall evaluation of the program. The instrument focused on several aspects of the Health eHearts program: clinicians’ experiences and attitudes toward the selected quality measures (ABCS), training on use of the EHR or achievement of the ABCS, QI visits, tracking patients for preventive services using the EHR, quality reports, incentive payments (incentive group only), recognition programs in general, and demographics. The survey was pretested with program staff and a clinician in PCIP. Items used in this survey were based on an earlier instrument co-developed with UCSF to assess barriers and facilitators for small practices to participate in P4P. Topics identified as barriers included: accuracy and regularity of reports relevant to the practice’s patient population, measurement targets that were meaningful to the practice population, availability of training or assistance to conduct QI activities, and use of practice tools, such as the EHR, to identify patients and document for quality measurement reports.

The survey was considered part of program evaluation activities conducted by PCIP and was deemed exempt by the Institutional Review Board at New York City Department of Health and Mental Hygiene. Clinicians in the control group were offered a $100 honorarium for participating in the survey. 

Analysis

Frequencies and averages were calculated for practice characteristics stratified by whether the practice was in the incentive or control group. All items in the survey were recoded into dichotomous variables and then stratified by incentive and control groups. Statistically significant differences between the incentive and control groups were determined using χ2 tests. Data were analyzed using SAS software, version 9.2 (SAS Institute, Cary, North Carolina).
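As a rough illustration of the group comparison described above, each dichotomized item can be cross-tabulated by study arm and tested with a χ2 test. The sketch below uses Python with SciPy rather than SAS, and the counts shown are invented for demonstration only, not study data.

```python
# Illustrative sketch of the chi-square comparison described above; the study
# used SAS 9.2, and the counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: incentive vs control group; columns: agree vs neutral/disagree
table = [[45, 9],    # incentive group (hypothetical counts)
         [36, 14]]   # control group (hypothetical counts)

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p_value:.3f}")
```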

Items were recoded in the following manner: Answer choices of “all of the time with all of my patients,” “all of the time with a portion of my patients,” or “some of the time with a portion of my patients” were considered use of the functionalities, and a “never” response was considered nonuse. Clinician responses on questions about their experience with or use of the quality reports were recoded as agreement with the statement (“agree/strongly agree”) or disagreement (“neutral,” “disagree/strongly disagree”). QI visits and training were recoded as helpful (“helpful/very helpful”) or not helpful (“not at all helpful/slightly helpful”). Responses to items about clinicians’ intentions to continue quality improvement activities were coded as positive if the respondent selected “likely” or “very likely” and negative if the respondent selected “not likely.” Responses of “don’t know,” “not applicable,” and missing values were excluded.
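A minimal sketch of this dichotomization, assuming the raw survey answers are held in a pandas data frame (the column names and example rows below are hypothetical; the original recoding was done in SAS):

```python
# Hypothetical sketch of the recoding rules described above, in Python/pandas.
import numpy as np
import pandas as pd

responses = pd.DataFrame({
    "group": ["incentive", "incentive", "control", "control"],
    "ehr_alerts": ["all of the time with all of my patients",
                   "never",
                   "some of the time with a portion of my patients",
                   "don't know"],
})

USE = {
    "all of the time with all of my patients",
    "all of the time with a portion of my patients",
    "some of the time with a portion of my patients",
}
EXCLUDED = {"don't know", "not applicable"}

def recode_use(answer):
    """1 = any reported use, 0 = 'never', NaN = excluded or missing."""
    if pd.isna(answer) or answer in EXCLUDED:
        return np.nan
    return 1 if answer in USE else 0

responses["uses_alerts"] = responses["ehr_alerts"].map(recode_use)
print(responses[["group", "uses_alerts"]])
```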

RESULTS

Clinician and Practice Characteristics


Of the 140 eligible clinicians (70 per group), 104 completed the survey (response rate of 74%; 54 incentive and 50 control clinicians, P = .18). The majority of respondents specialized in family or internal medicine (98.1%), and the average respondent had been in practice over 18 years (Table 1). Mean length of time “live” on the EHR was 37 months, with an average of 7000 encounters per year. No statistically significant differences were observed between the incentive group and the control group for either clinician- or practice-level characteristics. No statistically significant differences were observed between survey respondents and nonrespondents except for the proportion of patients who were self-pay (3.9% for respondents and 7.0% for nonrespondents; data not shown).

Clinician Experience With Health eHearts

Overall, clinicians reported positive experiences. Respondents reported receiving and reviewing the quality reports (89%), agreed with the prioritization of ABCS (89%), thought the ABCS were clinically meaningful for their population (87%), and understood the information given in the quality reports (95%) (Figure). Clinicians in the program were using the EHR tools at least some of the time (Figure).

Quality Reports

Nearly all clinicians (95%) responded that they understood the information summarized in the reports (Figure). A majority agreed that the data in the reports accurately reflected the practice’s performance (69%) and that enough information was provided to track progress toward meeting targets (77%). There were few differences between the groups, although clinicians receiving incentives were more likely to report that they received and reviewed the reports compared with control clinicians (P = .02).

Quality Improvement Visits and Training

There were significant differences between the incentive and control groups in program participation (Table 2). Over half of the respondents had a QI visit (56%); however, more clinicians in the incentive group reported having visits compared with the control group (68% vs 43%, P = .01). Both groups reported that the visit was helpful (85% vs 80%, P = .57), and the incentive group was more likely to report that the PCIP staff was accessible (69% vs 43%, P = .02). More clinicians in the incentive group had positive responses to training delivered via webinars (group online workshops) and WebEx sessions (virtual “visits” over the Internet in which PCIP staff can access the participant’s computer terminal and “talk through” use of the EHR) compared with clinicians in the control group. Overall, respondents expressed interest in more QI visits (81%).

Tracking Patients for Preventive Services Using EHR Tools

All respondents reported some use of the EHR functionalities (Figure). Clinical Decision Support System (CDSS) alerts (automated alerts and reminders for preventive services) and smart forms (automated question flows that assist clinicians in taking patient histories) were the most used. Incentive clinicians were more likely to report using the EHR tools, although the differences were not statistically significant, with the exception of the use of order sets to identify patients in need of preventive services (83% incentive vs 59% control, P = .01).

Intention to Continue Activities After Health eHearts

Most respondents (80%) indicated the intent to generate quality reports after the program ended and to allocate staff time to focus on QI activities (70%) (Table 2). Incentive clinicians were more likely than control clinicians to report that they would generate quality reports (87% incentive vs 72% control, P = .07), track practices’ progress toward meeting quality measurement goals (91% vs 78%, P = .09), and hold regular meetings or check-ins (71% vs 57%, P = .14).

DISCUSSION

Small practice clinicians had positive experiences with the recognition and financial rewards program designed to improve the delivery of clinical preventive services. Clinicians in the incentive group were more likely than those in the control group to report participating in quality improvement activities offered by the program, such as reviewing the quality reports, using order sets, and participating in program training sessions. The high level of buy-in to the program is demonstrated by the reported usability and accuracy of the quality reports and by reported agreement with the ABCS prioritization of preventive cardiovascular care.

Past studies document instances of clinician skepticism about the validity of clinical quality measurements or accuracy of reports, leading to less engagement of clinicians in quality improvement efforts.15,16 In addition, because of the lack of transparency in data used for reporting or payment, some P4P programs have been seen as a threat to clinicians’ autonomy and sense of control.17 The Health eHearts program addressed issues seen in earlier studies by generating reports directly from the practices’ EHRs, offering transparency into the data used for quality measurement, and also by providing QI assistance and help with troubleshooting problem areas with the intent of improving clinician sense of control over measured performance.

Alignment of the program goals with the practice’s organizational structure and culture has been associated with successful P4P implementation.20 The majority of clinicians agreed with the prioritization of the ABCS and found them to be meaningful to their practice. Positive clinician attitude has been associated with successful implementation of EHRs21 and is potentially an important contributor to continued EHR use, especially in small independently owned practices that do not have dedicated staff for quality measurement or EHR-based reporting. 

Robust EHRs can systematize and streamline work flow by allowing clinicians to use key features, such as CDSS.22 However, small practices are less likely to utilize these features.23,24 These survey results suggest that providing QI assistance along with incentives can be effective in engaging clinicians both during a program and potentially for sustaining continued QI activities.

Limitations

Our study has several limitations. As a self-reported survey, it is subject to social desirability bias, whereby clinicians may be inclined to respond positively rather than critically. Because the incentive and control groups were likely affected equally by this bias, the observed between-group differences in reported engagement with quality improvement activities should be largely unaffected, though the overall experience ratings may be higher than they would be in the absence of this bias.

It is also possible that the overall ratings of the experience in the program are more positive than the experience of all participants, since some participants did not respond. However, the response rate was high (74%) and there were few significant differences in practice characteristics between respondents and nonrespondents.

Further Research

Further research should examine the effect of sustaining QI efforts in the absence of incentives. A recent study using independent data comparing PCIP practices with non-PCIP comparison practices in New York State also found that technical assistance visits were instrumental in improving quality.25 It remains unclear whether practices will sustain these activities after routine quality measurement has been established or QI technical assistance has ended. Most respondents indicated intentions of continuing QI work, but fewer anticipated investing ongoing resources (meetings, staff time). Further study is warranted regarding the sustainability of the intervention and the power of good intentions in the absence of resources.

Implications

Incentives may not be necessary to motivate clinicians to participate in a program focused on increasing the delivery of clinical preventive services. However, practices that received incentives were more likely to report engaging in quality improvement-related activities. An incentive system implemented in the context of robust information systems may drive use of specific EHR tools or follow-through on quality improvement activities.

As part of the Patient Protection and Affordable Care Act,26 new models of care delivery and reimbursement are being implemented and tested. Ways to facilitate clinician engagement, especially for small independently owned practices, are needed. Our study supports the hypothesis that clinician buy-in and engagement are possible if the program ensures that the quality measures used are clinically meaningful and that the quality reports are relevant and accurate.