Objectives: Linking variation in process with cost and quality provides the opportunity to identify low-cost, high-quality processes. This study aimed to demonstrate how the analysis of clinical process, cost, and outcomes can identify healthcare improvements that reduce cost without sacrificing quality, using the example of the initial visit associated with oral contraceptive pill use.

Study Design: Cross-sectional study using data collected by HealthMETRICS between 1996 and 2009.

Methods: Using data collected from 106 sites in 24 states, the unintended pregnancy (UIP) rate, effectiveness of patient education, and unit visit cost were calculated. The staff type providing education and the placement of the education within the visit were recorded. Two-way analysis of variance models were created and tested for significance to identify differences between groups.

Results: Sites using nonclinical staff to provide education outside the exam were associated with lower cost, higher education scores, and a UIP rate no different from that of sites using clinical staff. Sites also providing patient education during the physical examination were associated with higher cost, lower education scores, and a UIP rate no lower than that of sites providing education outside of the exam.

Conclusions: Through analyzing process, cost, and quality, lower-cost processes that did not reduce clinical quality were identified. This methodology is applicable to other clinical services for identifying low-cost processes that do not result in lower clinical quality. By using nonclinical staff educators to provide education outside of the physical examination, sites could save an average of 32% of the total cost of the visit.
(Am J Manag Care. 2013;19(1):e14-e21)

This study compares the cost and clinical quality associated with 2 specific processes used to provide the initial visit for oral contraceptive pills.
Because reducing healthcare costs and improving access to high-quality care are pressing issues, there is a constant need to improve the processes used to deliver care to reduce cost without negatively impacting clinical quality. Unfortunately, few studies explore the relationship between clinical process and outcome. Clinicians and administrative staff are left to make cost-saving process changes without access to process-specific data showing how those changes will impact clinical quality. There are tremendous variations in both clinical process and unit cost for the delivery of a given clinical service across the country.1 While it has also been shown that variations in clinical decision making exist,2 clinical process variations (eg, the sequencing of the steps during a visit, the staff mix providing those steps, the scheduling structure) have a significant impact on cost.
This study used the initial visit for obtaining oral contraceptive pills (OCPs) as a model to demonstrate how clinical processes and outcomes can be linked to help reduce costs while maintaining clinical quality. The initial visit for obtaining OCPs was chosen as the clinical model for study because it is an easily defined outpatient visit with a high volume and a clearly defined patient population. With an estimated 66 million US women of reproductive age3 and 10.7 million US women currently using OCPs,4 slight reductions in visit cost could result in substantial savings to the healthcare system. This study focuses on the process used to deliver patient education regarding OCP use during the initial OCP visit. Specifically, variation in the mix of staff utilized and the placement of the patient education step within the initial OCP visit process were considered. The goal was to determine whether the staff used to provide the education or the placement of the education in the initial OCP visit had any impact on cost or clinical quality.
This cross-sectional study utilizes data that were collected by HealthMETRICS staff from 106 participating women’s healthcare sites in 24 states between 1996 and 2009. Settings included county health departments, private healthcare providers, community health centers, and others. These data were collected under contractual arrangement with HealthMETRICS and represent a broad spectrum of processes associated with the initial OCP visit. HealthMETRICS is a consulting company providing the information healthcare managers need to ensure that they are providing care as effectively and efficiently as possible. All of the participating sites voluntarily asked to participate in the HealthMETRICS Family Planning Optimal Performance Project; HealthMETRICS did not choose sites to participate. From our experience analyzing the process for delivering family planning services over the last 15 years, the analysis includes most, if not all, of the different processes for providing this service.
Two or more HealthMETRICS staff members traveled to each site to interview clinic staff, document the initial OCP process, and distribute data collection tools. Subjects were all women at least 16 years of age without any significant medical complaints at the time of their visit. We focused on the data related to the patient education portion of the initial OCP visit, including the staff utilized, the placement of the education in the initial OCP process, the effectiveness of the patient education, the unintended pregnancy (UIP) rate, and unit visit costs.
Effectiveness of patient education and observed UIP rate were used as markers of clinical quality. Effectiveness of patient education was assessed through 4 survey questions that asked subjects to rate how clearly they understood 4 key components of the counseling portion of the visit: when to start the pill, what to do if a dose is missed, what to do if there is spotting or bleeding while on the pill, and possible side effects. For each site, the percentage of patients who gave the highest rating for all 4 questions was calculated. Sites collecting fewer than 30 surveys (n = 57) were excluded.
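As a minimal sketch of that site-level calculation (the survey rows and the 5-point scale with 5 as the highest rating are hypothetical illustrations; the study specifies only that the highest rating on all 4 questions was required):

```python
# Hypothetical survey data: each row is one patient's ratings on the
# 4 counseling questions. A 1-5 scale with 5 as the top rating is an
# assumption made for illustration.
surveys = [
    [5, 5, 5, 5],
    [5, 4, 5, 5],
    [5, 5, 5, 5],
    [3, 5, 5, 4],
]

# Site education score: share of patients giving the top rating on all 4.
top_raters = sum(all(r == 5 for r in row) for row in surveys)
education_score = top_raters / len(surveys) * 100   # percent
```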
Chart reviews were conducted by clinic staff at each site. The number of patients who had a UIP within 6 months of their initial OCP visit, as determined at the follow-up visit, was tallied for each site and reported as the number of UIPs per 50 patients. Sites that reviewed fewer than 40 charts (n = 1) were excluded.
HealthMETRICS staff visited sites and identified the level of the staff members used to provide patient education. While clinical staff performed the physical exam for all patients at all sites, the educational component of the initial OCP visit was conducted in multiple ways by a variety of clinical and nonclinical staff. Clinical staff was considered to be registered nurses (RNs) and nurse practitioners. Nonclinical staff included all other staff types that participated in the visit.
Sites were categorized based on the level of staff used to provide the education. Sites exclusively utilizing clinical staff were categorized as having a higher-cost process. Sites utilizing nonclinical staff were categorized as having a lower-cost process. These designations were based on the higher hourly cost associated with RNs and nurse practitioners.
The staff level used to provide patient education varied among sites, with 13 distinct models being used (Table 1). The most common staff level used was the RN (52% of sites). A higher-cost model was used by 57% of sites; 43% used a lower-cost model.
Position of Education Step Within OCP Initial Visit Process
The initial OCP visit process largely consists of 3 steps: intake, exam, and checkout. The point in time at which education was provided varied. Some sites provided at least some education during the physical exam step, while others provided education only outside of the exam. At least some education was provided during the physical exam by 79% of sites and only outside of the exam by 21% of sites.
Unit Visit Cost
Unit cost was measured through a combination of time logs, financial data provided by the sites, and appointment schedules. The annual visit volume was recorded. Direct costs were adjusted for inflation using data from the Consumer Price Index. Indirect costs such as administrative overhead were not considered. Salaries were adjusted for regional differences and inflation using data from the Employment Cost Index. These data were used to calculate an average hourly cost for each staff level. Time and staffing data gathered by the time logs were used to allocate the labor cost to each step in the visit. The annual cost of staff directly involved in the visit but not interacting directly with the patient was allocated over the annual visit volume. Unit staff cost was defined as the sum of all direct labor unit costs. Unit total cost was defined as the sum of the unit staff cost and all other nonlabor direct costs.
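The cost roll-up described above can be sketched as follows. All rates, minutes, volumes, and nonlabor figures are invented for illustration; only the structure (per-step labor allocation, support-staff cost spread over volume, nonlabor costs added last) follows the text:

```python
# Hypothetical hourly rates by staff level (illustrative, not study data).
hourly_rate = {"rn": 36.0, "health_educator": 22.0, "clerk": 16.0}

# Minutes each staff type spends on each visit step, as a time log might show.
step_minutes = {
    "intake":    {"clerk": 10},
    "exam":      {"rn": 20},
    "education": {"health_educator": 15},
    "checkout":  {"clerk": 5},
}

def step_labor_cost(staffing):
    """Labor cost of one step: minutes x hourly rate / 60, summed over staff."""
    return sum(minutes / 60 * hourly_rate[role]
               for role, minutes in staffing.items())

# Unit staff cost: direct labor across all steps...
unit_staff_cost = sum(step_labor_cost(s) for s in step_minutes.values())

# ...plus staff supporting the visit but not patient-facing,
# allocated over annual visit volume.
annual_support_cost = 24_000.0
annual_visits = 3_000
unit_staff_cost += annual_support_cost / annual_visits

# Unit total cost adds nonlabor direct costs (supplies, lab fees, etc.).
nonlabor_unit_cost = 9.50
unit_total_cost = unit_staff_cost + nonlabor_unit_cost
```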
Our objective was to determine whether the factors representing the different staff mixes and the placement of the education step had any impact on either the education score, the UIP rate, or unit staff cost. We hypothesized the 2-way analysis of variance (ANOVA) model
y(i,j,k) = μ + S(i) + P(j) + S(i)P(j) + ε(i,j,k)
for y(i,j,k) the education score/UIP/unit staff cost for site k utilizing staff mix i and education step placement j; S(i)P(j) the interaction between factors; and ε the associated random error. The null hypothesis is that there was no effect on clinical quality or cost due to the main effects of who educates the patient and when education occurs, or due to the interaction of these factors.
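The model above can be fit by hand on a balanced design. The following sketch uses synthetic data (the cell size, score distribution, and seed are invented; the study's actual design was unbalanced, which requires a general linear model as described in the Results):

```python
import numpy as np
from scipy import stats

# Synthetic balanced design: y[i, j, k] = education score of site k
# using staff mix i and education placement j (2 x 2 cells, r sites each).
rng = np.random.default_rng(1)
r = 10
y = rng.normal(88, 6, size=(2, 2, r))

grand = y.mean()
a_means = y.mean(axis=(1, 2))          # staff-mix (S) level means
b_means = y.mean(axis=(0, 2))          # placement (P) level means
cell_means = y.mean(axis=2)

# Sums of squares for main effects, interaction, and within-cell error.
ss_a = 2 * r * np.sum((a_means - grand) ** 2)
ss_b = 2 * r * np.sum((b_means - grand) ** 2)
ss_ab = r * np.sum((cell_means - grand) ** 2) - ss_a - ss_b
ss_err = np.sum((y - cell_means[:, :, None]) ** 2)

df_err = 2 * 2 * (r - 1)               # each effect has 1 df in a 2 x 2 design

def p_value(ss_effect):
    """F test of a 1-df effect against the within-cell error."""
    f = ss_effect / (ss_err / df_err)
    return stats.f.sf(f, 1, df_err)

p_staff, p_place, p_inter = map(p_value, (ss_a, ss_b, ss_ab))
```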
RESULTS

Process Versus Clinical Quality
The mean education score was 88% with a coefficient of variation of 7%. The UIP rate varied more, with a mean of 0.60 and a coefficient of variation of 161%. A general linear model was used to test the 2-way ANOVA for education score and UIP using SPSS version 18 (SPSS Inc, Chicago, Illinois).
The results for the education score indicate that there was a significant effect of staff mix (P = .026) and placement of education step (P = .037) on education score. The interaction between these 2 variables was significant (P = .030). Sites that utilized nonclinical staff to provide education outside of the exam had the highest education score. When education was provided during the physical exam, clinical and nonclinical staff education scores were not statistically different. A total of 57 sites were excluded from this analysis because they collected fewer than 30 patient education surveys, resulting in 6 observations of nonclinical staff providing education outside the exam, 26 of clinical staff educating patients during the exam, and 8 in each of the remaining 2 groups. The 57 cases that were excluded were fairly evenly distributed (44% higher cost, 56% lower cost).
The results for UIP rate indicated that there was no staff mix effect (P = .548), no placement of education effect (P = .643), and no interaction effect (P = .195). The validity of the 2-way ANOVA analyses was predicated on demonstrating that the data were normally distributed and that the variances of the data were the same across all combinations. Levene’s test was used to confirm the equality-of-variance assumption for education score (P = .320) and UIP rate (P = .338). Normality was tested using the Shapiro-Wilk test and was violated for 1 subgroup in education score data. The violation was identified with 1 outlier. After removing the outlier, the conclusions did not change. All subgroups of the UIP data failed the Shapiro-Wilk test for normality. A bootstrap analysis was performed to analyze the 2-way ANOVA model for UIP rate. The bootstrap analysis does not require the normality assumption necessitated by our ANOVA analysis. The bootstrap results confirmed the results of the ANOVA model.
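A bootstrap check of this kind can be sketched as follows. The groups, rate distributions, and resample count are invented, and this compares a simple difference in means rather than reproducing the study's full bootstrap of the ANOVA model; it illustrates only why resampling sidesteps the normality assumption:

```python
import numpy as np

# Hypothetical skewed site-level UIP rates for two process groups
# (exponential draws stand in for the non-normal observed data).
rng = np.random.default_rng(2)
group_a = rng.exponential(0.6, 40)
group_b = rng.exponential(0.6, 45)

obs_diff = group_a.mean() - group_b.mean()

# Resample under the null: pool both groups, redraw with replacement,
# and recompute the statistic many times.
pooled = np.concatenate([group_a, group_b])
n_boot = 5000
diffs = np.empty(n_boot)
for i in range(n_boot):
    resample = rng.choice(pooled, size=pooled.size, replace=True)
    diffs[i] = resample[:group_a.size].mean() - resample[group_a.size:].mean()

# Two-sided bootstrap p value: how often the null resamples are as extreme.
p = np.mean(np.abs(diffs) >= abs(obs_diff))
```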
Process Versus Unit Cost
There was wide variation in unit staff cost and unit total cost. Unit total cost ranged from $40.22 to $247.76 with a mean of $98.90 (median of $89.77) and a coefficient of variation of 38.5%. Unit staff cost ranged from $12.26 to $205.02 with a mean of $70.46 (median of $61.86) and a coefficient of variation of 45% (Table 2). The mean unit staff cost for sites utilizing a higher-cost process was $81.05 and for those with a lower-cost process it was $56.64. Mean unit staff cost for sites providing patient education during the physical exam was $73.52, and for those providing education only outside of the physical exam it was $58.75 (Table 2, Figure 2).
A 2-way ANOVA was estimated for unit staff cost. The results indicated that there was a significant effect due to staff mix (P <.001), no effect due to placement of education step (P = .186), and no interaction effect (P = .329). Equality of variance was confirmed; however, all but 1 subgroup violated normality. A bootstrap analysis was performed and confirmed the results from the 2-way ANOVA. Two-way ANOVA was performed to examine the effect of staff mix and placement of the education step on total unit cost and indicated that only staff mix had a significant effect on total unit cost. Normality was violated and a bootstrap analysis was performed, which revealed that both staff mix and placement of the education step had a significant impact on total cost. The fact that placement of the education step was found to significantly impact total unit cost but not unit staff cost would suggest that there may be something specific to the process of performing education during the physical exam that impacts nonstaff cost.
We estimated the potential cost savings that might be obtained if all the sites in our sample were to adopt the process corresponding to the highest education scores. For sites not already utilizing that process, the savings per visit were calculated on a site-by-site basis and expressed as a percentage of the total cost of the visit. For all other sites, the cost savings was zero. The average total visit cost savings across all sites was 32%. Note that this analysis assumed that adoption of both lower-cost processes was practical at all sites.
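The site-by-site savings estimate reduces to simple arithmetic. The costs below are invented placeholders; only the rule (zero savings for sites already at the target process, percentage-of-total-cost otherwise, averaged across all sites) follows the text:

```python
# Hypothetical per-site unit total costs: current process vs. the cost
# the site would incur under the lowest-cost process.
site_costs = [120.0, 95.0, 60.0, 140.0]     # current unit total cost
target_costs = [80.0, 70.0, 60.0, 85.0]     # cost under the target process

# Savings as a percentage of each site's total visit cost;
# sites already at (or below) the target contribute zero.
savings_pct = [
    0.0 if cur <= tgt else (cur - tgt) / cur * 100
    for cur, tgt in zip(site_costs, target_costs)
]
avg_savings_pct = sum(savings_pct) / len(savings_pct)
```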
The cost of providing the initial OCP visit varied tremendously across all sites. Cost was driven higher by utilizing clinical staff educators to perform the patient education step. This raises the question of whether any benefit is gained from the added cost. This study found no such gain in clinical quality as measured by patient education score and UIP rate. These results indicate that sites can operate with a lower-cost process model while achieving at least equal clinical outcomes. Although an a priori power analysis was not done and, as expected, the observed power of the nonsignificant results was low, the P values for those comparisons were far from the significance threshold. Moreover, the relatively modest changes required to adopt the lowest-cost process at those sites utilizing a higher-cost process represent an opportunity for significant cost savings.
The majority of sites with a higher-cost process exclusively utilized RNs to provide patient education (92%). Clinic leadership at many sites believed that staff below the level of RN were not as effective at educating patients. However, this study found that sites utilizing nonclinical staff for patient education were able to achieve clinical quality comparable to that of sites using a process with a higher-cost staff mix. Interestingly, the utilization of nonclinical staff outside the exam was associated with significantly higher education scores. While this may seem counterintuitive, in many cases nonclinical staff were drawn from the local population; therefore, the interaction between the patient educator and the patient may have allowed for more effective communication. In addition, patients may be more receptive to the education once they are removed from the added stress and potential complications presented by the exam itself. The exact explanation for this association remains unclear and might warrant additional study. While certain medical topics in this and other areas of medicine clearly require the expertise of an RN or physician, there are certainly just as many topics that can be equivalently or even more effectively discussed by a staff member with less clinical training. Although there are no formal studies comparing the effectiveness of education provided by clinicians with that provided by nonclinicians, some literature shows both staff types to be equally effective.5 This study adds to that body of literature by connecting variation in quality with cost.
At the majority of sites, the clinician provided some form of patient education during the physical exam. In most cases, sites using this process also provided additional education by another staff member at a different point in the visit. Because the education provided during the physical exam was given by a clinical staff member, the associated cost was higher. Moreover, because education was often given at 2 points in the visit by 2 different staff members, there were instances of duplication of some information presented to patients, resulting in higher cost. The additional education step may also have inadvertently led to increased levels of patient confusion. This study found that neither the duplication of education nor the timing of the education step had an impact on the UIP rate. Moreover, providing the education step during the physical exam step was associated with significantly lower education scores. We conclude that sites providing patient education outside of the physical exam operate at a lower cost, with the same or better clinical quality, compared with sites providing education during the physical exam.
This study found that an estimated 32% of total costs might have been saved on initial OCP visits for the women’s health centers included in our study. Based on an average cost per visit of $98.90, this would represent a potential average savings of $31.64 per initial visit. With 10.7 million women in the United States on OCPs, the potential savings is quite high. While our study was not designed to be generalizable to all women’s health centers, the size of the savings opportunity for the clinics in our sample would suggest that significant savings could be available to the healthcare system if OCP providers who used a less efficient and effective process for the initial OCP visit incorporated our findings. This study demonstrates how a small change in a high-volume clinical service can result in large aggregate savings.
The methodology outlined in this study provides a system by which variation in cost can be linked to variation in the way a clinical service is delivered. Distinct processes identified as low cost that are already in use can be compared with their higher-cost counterparts and tested for differences in clinical outcomes. This methodology for linking process, cost, and quality could be applied to other areas of medicine. We believe that by comparing processes across sites providing the same clinical service, and by linking process variation to cost and outcomes, low-cost processes can be identified that are associated with high-quality care.
Only a small set of clinical quality measures were selected. Developing quantitative clinical outcome measures that accurately reflect the provided quality of care is difficult. This analysis was done at the clinical-site level. Patient-level data were used to calculate site averages. Time logs, patient education surveys, and chart reviews were not specific to the same individual patients and were not necessarily completed on the same day. Although charts were randomly selected for review of clinical outcomes, this task was left to the clinic staff, and selection bias could have been introduced, inflating clinical outcome scores.
These data were collected by HealthMETRICS Partners under for-profit contracts and represent a broad variety of possible processes for conducting the initial OCP visit. Attempts were made to make the costs comparable across sites. While the sites may not represent a true random sample, the number of sites and their broad geographic dispersion would suggest that the number and variety of initial OCP processes observed should be representative of the range of processes currently in use.
Future Study Topics
Future studies should explore the relationship between specific process components and cost and clinical quality, as well as expand this framework to additional fields of medicine. Additional studies exploring these relationships will be forthcoming.

Acknowledgment
Thanks to Rebecca Selgrade for her help in preparing this manuscript.
Author Affiliations: From Signature Healthcare Brockton Hospital/Tufts Medical Center (MJM), Boston, MA; Department of Mathematical Sciences (SWW), Bentley University, Waltham, MA; HealthMETRICS Partners (CLM, BMB), Lexington, MA.
Funding Source: HealthMETRICS was remunerated to collect and analyze these data through contracts with the participating sites. Funding primarily came from Title X funds through grantees, largely state departments of health, supplemented by additional funds from the participating sites themselves.
Author Disclosures: Mr Moore and Dr Berger report board membership with HealthMETRICS, as well as stock ownership. The other authors (MJM, SWW) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (MJM, SWW, CLM); acquisition of data (MJM, CLM); analysis and interpretation of data (MJM, SWW, CLM); drafting of the manuscript (MJM, SWW, BMB); critical revision of the manuscript for important intellectual content (MJM, SWW, CLM, BMB); statistical analysis (MJM, SWW); obtaining funding (CLM); and administrative, technical, or logistic support (MJM, CLM).
Address correspondence to: Michael J. McMullen, MD, University of Virginia, Department of Ophthalmology, PO Box 800715, Charlottesville, VA 22908-0715. E-mail: firstname.lastname@example.org.

1. Moore CL, McMullen MJ, Woolford SW, Berger BM. Clinical process variation: effect on quality and cost of care. Am J Manag Care. 2010;16(5):385-392.
2. Wennberg JE, Fisher ES, Baker L, Sharp SM, Bronner KK. Evaluating the efficiency of California providers in caring for patients with chronic illnesses. Health Aff (Millwood). 2005;Suppl Web Exclusives:W5-526-543.
3. Frost JJ, Henshaw SK, Sonfield A. Contraceptive Needs and Services: National and State Data, 2008 Update. New York: Guttmacher Institute; 2010.
4. Mosher WD, Jones J. Use of contraception in the United States: 1982-2008. Vital Health Stat 23. 2010;(29):1-44.
5. Smith L, Nguyen T, Seeto C, Saini B, Brown L. The role of nonclinicians in a goal setting model for the management of allergic rhinitis in community pharmacy settings. Patient Educ Couns. 2011;85(2):e26-32.