Wide variation in the way clinical services are provided to similar patients for similar visit types results in highly variable costs but similar clinical outcomes.
Objective: To identify optimal transferable practice processes for provision of a clinical service by studying the relationships among unit cost, clinical outcome, patient satisfaction, and staff satisfaction observed for a discrete service performed at multiple sites in a well-defined patient population.
Study Design: Cross-sectional study using data collected by HealthMETRICS from 1996 to 2007.
Methods: Data from 165 US clinics in 29 states, totaling 8835 patients and 1583 clinic staff, were reviewed. Four parameters of the initial visit for oral contraceptives (OCs) were measured: unit visit cost, patient satisfaction, staff satisfaction, and clinical indicators, including patient education effectiveness and occurrence of unintended pregnancies within 6 months of the initial OC visit. Patient population and visit type were narrowly defined to ensure intersite comparability. Data collection tools included surveys, time logs, financial work sheets, and on-site visits to document process.
Results: Clinical process variation was widespread. The unit cost of an initial OC visit varied from $42 to $206 (mean: $90, coefficient of variation: 38%). Staff satisfaction varied more than patient satisfaction. Clinical indicators, including unintended pregnancies, varied little. The sites achieving lower unit costs demonstrated no apparent decrease in clinical quality.
Conclusions: Clinical processes used to provide initial visits for OC services varied demonstrably across study sites, generating variation in cost with little impact on clinical quality or patient satisfaction. By adopting appropriate components of the optimal practice process, clinical sites could lower the care cost by more than 20% while maintaining or increasing care quality.
(Am J Manag Care. 2010;16(5):385-392)
This study measures variation in unit costs, clinical processes, staff and patient satisfaction,
and clinical outcomes in the initial visit for oral contraception at 165 clinics in 29 states.
Wide variation in the delivery of clinical services, resulting in differences in the cost and quality of care, can be observed repeatedly among similar providers with similar patients across the United States. Although variation in clinical practice by geographic region and other demographic characteristics has been documented,1 healthcare managers and providers rarely have process-specific benchmarking data for daily outpatient clinical processes that control for the patient population being treated and narrow the clinical service sufficiently to allow for head-to-head comparisons. Some clinical process benchmarking data currently exist, such as those provided by the Joint Commission (formerly the Joint Commission on Accreditation of Healthcare Organizations). However, the Joint Commission does not include data regarding the cost or efficiency of hospitals’ clinical processes. Local efforts, such as the Massachusetts Health Quality Partners,2 compare Massachusetts physician networks, medical groups, and practice sites with respect to patient satisfaction and the percentage of patients receiving the correct treatment. Massachusetts Health Quality Partners provides a snapshot of the results of the clinical process without specifically examining what causes variation or whether there exists a correlation between process, provider satisfaction, and cost.
This study establishes a baseline framework for examining the contribution of variability among clinical processes along 4 critical interdependent parameters: unit visit cost, patient satisfaction, staff satisfaction, and clinical indicators including effectiveness of patient education. We demonstrate that variability in the process of care delivery from 1 healthcare provider to another for similar patients for the same clinical service results in differences in cost of care, and to a lesser extent, differences in clinical outcomes and the satisfaction of both patients and care providers at all levels. We show that although no single process provides the optimal clinical outcome with the lowest unit cost and highest patient and staff satisfaction in all settings, the range of processes that can provide optimal results is much narrower than that observed in daily clinical practice. Finally, we provide evidence that identifying and transferring the processes that optimize clinical care along these dimensions could lower costs with the same or better clinical quality.
We focused on the provision of oral contraceptives (OCs), one of the most common and easily defined outpatient visit types in women’s healthcare, to demonstrate this clinical process variability. We examined the relationships among cost, clinical outcome, patient satisfaction, and staff satisfaction for this single clinical service performed at multiple sites in a well-defined patient population.
The goal of this study was to document variations in the care delivery process, identify how clinical outcome and cost are impacted by process variation, and quantify the opportunity to improve clinical outcomes at lower costs by optimizing the clinical process.
This cross-sectional study quantifies variation in the provision of the initial OC visit across 4 measures: unit cost, patient satisfaction, staff satisfaction, and clinical outcomes. Data collection tools were developed to quantify each of these 4 parameters. The initial OC visit process was chosen because it tends to be the highest volume service of the participating sites, and a uniform patient population can be readily identified. Participating sites included 165 US women’s healthcare facilities in 29 states. Settings included county health departments, private healthcare providers, community health centers, and others. Data collection occurred between 1996 and 2007. HealthMETRICS staff collected scheduling data and floor plans, and 2 or more HealthMETRICS staff members traveled to each site to interview clinic staff, document the initial OC visit process, and distribute data collection tools.
Subjects were women at least 16 years of age, without any significant medical complaints, who presented for an initial OC visit.
Outcomes were assessed by retrospective review of 50 charts, selected at random by site staff from patients who had an initial OC visit at least 6 months prior to the site’s study start date. The number of patients who had an unintended pregnancy (UIP) within 6 months following their initial OC visit was tallied for each site. Sites that reviewed fewer than 40 patient charts (n = 44) were excluded.
Unit costs were measured using several data collection tools. Only direct costs were considered; indirect costs such as administrative overhead were not included. Labor costs were calculated using a combination of direct financial data, actual clinical time logs, and a review of actual appointment schedules. Staff salaries were adjusted to control for the effect of regional economic differences. Using the data described above, hourly costs were calculated for all staff positions that involved direct interaction with the patient. Clinic time logs were used to allocate this labor cost to each step in the visit. The annual cost of support staff not directly interacting with the patient was allocated over the annual visit volume to obtain the unit visit cost. Sites were asked to collect 20 time logs, which tracked the labor involved in all stages of the initial OC visit. Sites collecting fewer than 10 time logs were included only if this number was enough to establish a clear pattern of how the initial OC visit was provided (24 sites excluded). Material and space costs as well as staff costs were adjusted for inflation to ensure that unit visit cost was comparable across all clinics. We used an activity-based costing software package that was developed and validated in-house. Similar activity-based software packages that can use time as a basis for calculating costs should provide similar results. Costing programs that use resource-based relative value scale units instead of time and salary data do not provide the granularity required for this approach.
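As a rough illustration, the time-and-salary costing approach described above can be sketched as follows: step-level labor cost is allocated from time logs at region-adjusted hourly rates, and support staff cost is spread over annual visit volume. Every role, rate, time, and volume below is a hypothetical placeholder; this is a minimal sketch, not the HealthMETRICS software or its data.

```python
# Illustrative sketch of time-based activity costing for one visit type.
# All roles, hourly rates, minutes, and volumes are hypothetical examples.

# Region-adjusted hourly labor cost for each staff position that
# interacts directly with the patient.
hourly_rate = {"clinician": 90.0, "rn": 45.0, "medical_assistant": 24.0}

# Average minutes each position spends on each visit step,
# as would be derived from clinic time logs.
time_log_minutes = [
    ("medical_assistant", "history", 10),
    ("rn", "education", 15),
    ("clinician", "exam", 12),
    ("medical_assistant", "checkout", 5),
]

def direct_labor_cost(steps, rates):
    """Allocate direct labor cost to the visit, step by step."""
    return sum(rates[role] * minutes / 60.0 for role, _, minutes in steps)

def unit_visit_cost(steps, rates, support_staff_annual_cost, annual_visits):
    """Direct labor plus support staff cost spread over annual visit volume."""
    return direct_labor_cost(steps, rates) + support_staff_annual_cost / annual_visits

cost = unit_visit_cost(time_log_minutes, hourly_rate,
                       support_staff_annual_cost=60_000, annual_visits=4_000)
print(round(cost, 2))
```

The same structure extends to material, space, and inflation adjustments by adding further per-visit or per-volume terms.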
Patient satisfaction was assessed by an anonymous survey distributed to, and completed by, patients at the end of their visit. Patients were asked to rate how satisfied they were with the visit (, available at www.ajmc.com). Sites collecting fewer than 30 surveys (n = 101) were excluded. Surveys were returned by patients to clinic staff in sealed envelopes and mailed to HealthMETRICS, where they were opened for analysis.
Staff satisfaction surveys were completed by all clinic staff involved in the initial OC visit process. Staff members were asked to rate how satisfied they were with the initial OC visit process at their clinic (eAppendix A, available at www.ajmc.com). Surveys were anonymous and returned to HealthMETRICS in sealed envelopes, where they were opened for analysis. The staff satisfaction surveys from 2 sites were unavailable at the time of this study and were excluded.
A fifth element, the effectiveness of education regarding use of OC that was provided to the patients during their initial OC visit, was assessed by an anonymous survey distributed to, and completed by, patients at the end of their visits. This element was included as part of the clinical quality measure. Patients rated 4 questions regarding how clearly the use of OCs was explained to them (eAppendix A, available at www.ajmc.com). Surveys were returned by patients to clinic staff in sealed envelopes and mailed to HealthMETRICS, where they were opened for analysis. Sites collecting fewer than 30 surveys (n = 101) were excluded.
Variation was observed in cost, clinical quality, and patient and staff satisfaction outcomes as a result of differences in the process for providing family planning services. Results are summarized in the Table. Box plots of each parameter are shown in .
The number of UIPs was low, with an average of 0.7 UIPs per 50 patients per site. The OC failure rate was 1.4% in the first 6 months following OC prescription, with a coefficient of variation of 144%. Of 122 sites included in this analysis, 5 sites had at least 3 UIPs per 50 patients per site. The rate at the remaining 117 clinics was less than 2.2 UIPs per 50 patients per site. By comparison, the OC failure rate for typical OC use in the United States is 8.7%, although that rate is measured over a 12-month period.3
Far less variation was seen among patient education scores, with an average of 89% of patients rating the explanation of the use of OC pills as “very clear” (). The coefficient of variation was 7%.
The average total adjusted direct unit cost for a visit was $90, with a coefficient of variation of 38%. Unit costs ranged from $42 to $206 per visit. Adjusted direct staffing costs, excluding space, utilities, and other nonsalary expenses, varied from $22 to $164 () and mirrored the distribution of total unit costs. Two sites operating with high clinical and patient satisfaction scores had lower costs: 38% and 34% below the average.
Patient satisfaction across all sites was high, with an average of 6.5 on a scale of 1 to 7 and a coefficient of variation of 4% (, available at www.ajmc.com).
Staff satisfaction varied more than patient satisfaction, with a mean score of 5.3 on a scale of 1 to 7 and a coefficient of variation of 12% (eAppendix B, available at www.ajmc.com).
Given the amount of variation in cost for this visit across all sites, a correlation between unit cost and the other 4 quality indicators was explored using bivariate analysis. No statistically significant correlations were found between cost and quality (, available at www.ajmc.com). This indicates that sites that adopt a low-cost process would not necessarily experience a decrease in clinical quality. Sites were observed in this study with low initial OC visit costs and high quality score outcomes.
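For continuous measures, a bivariate screen of this kind reduces to computing a correlation coefficient between unit cost and each quality score across sites. The sketch below computes a Pearson r from made-up per-site values; the numbers are illustrative placeholders, not the study data or the analysis software used here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-site values: unit visit cost vs. the fraction of
# patients rating the OC explanation "very clear".
unit_cost = [42, 60, 90, 120, 206]
education_score = [0.91, 0.87, 0.90, 0.88, 0.89]

r = pearson_r(unit_cost, education_score)
print(round(r, 2))
```

An r near zero, as in this toy example, is the pattern consistent with the study's finding that low-cost sites showed no corresponding drop in quality; in practice one would also test each coefficient for statistical significance.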
The process used for the initial family planning visit varied widely across the sites (Figure 4). The standard process for the initial family planning visit can be divided into steps as follows:
2. Medical history
3. Lab tests
4. Patient counseling/education
5. Physical examination
6. Exit counseling/education
7. Check out/pick up OC pills.
The variations observed in the above process can be grouped into 6 categories:
• Staff mix
• Steps in visit process
• Patient education
• Lab tests
• Type of staff that hands prescribed OC pills to patients
• Number of visits.
Staff Mix
The type of staff used to complete each step varied across all clinics. For example, a physician, nurse practitioner, certified nurse midwife, or extended-role registered nurse (RN) conducted the physical examination. The type of clinical support staff also varied. RNs, licensed practical nurses, and medical assistants all were used at different clinics to provide support to the prescribing clinician, whereas some clinics provided no support.
Steps in Visit Process
The standard OC visit process consisted of the 7 steps listed above. However, across clinics the process varied both by the number of steps (from 3 to 7), as well as by the order in which they were done.
Patient Education
Staff providing patient education included physician assistants, RNs, medical assistants, and health educators. The amount of time staff spent educating the patient varied from 5 to 30 minutes. Additionally, the number of times a patient received education in the same visit, as well as when they received it, varied. Depending on the clinic, staff educated the patient after the physical examination, before and during the examination, or before, during, and after the examination.
Lab Tests
The policy of ordering lab tests at the initial OC visit varied across clinics. Some clinics ordered hematocrit/hemoglobin, urinalysis, and/or syphilis lab tests for all patients, and other clinics only ordered these tests as indicated. The point in the process when lab work was completed also varied.
Type of Staff That Hands Prescribed OC Pills to Patients
Some clinics required licensed personnel to hand prescribed OC pills to patients, whereas other clinics allowed unlicensed personnel, under the orders of a clinician, to complete this step.
Number of Visits
Some sites split the initial family planning visit into 2 separate office visits. The patient history and education were completed at the first visit, and the patient was examined during a separate visit. Other sites completed both of these steps during 1 visit.
The variation in process described above played a part in generating the variation in cost seen across all 165 sites. Lower-cost processes included using trained medical assistants or counselors rather than the prescribing clinician (physician, nurse practitioner, physician assistant, certified nurse midwife) to provide education associated with the contraceptive method. This delegation maximized the efficiency of the clinician’s time by shifting duties that did not require a clinician’s expertise, achieving the clinical quality outcome goal while reducing costs.
The transfer of low-cost and efficient processes was observed. Three sites, upon seeing the results of this study, implemented several of the identified low-cost processes. After the sites adopted these processes, outcomes and cost were systematically remeasured. The findings demonstrated that the unit visit cost had decreased, and the clinical quality had either improved or remained at previous levels (unpublished data, HealthMETRICS Partners).
In the harried world of daily practice, clinical process is rarely systematically reviewed in a manner that isolates and critically examines unit cost, clinical outcome, patient satisfaction, and staff satisfaction. In addition, significant innovations that could improve these outcomes (generated by process variability) may not be captured, evaluated, and implemented more widely.
Four interdependent measures were selected to assess the efficiency and effectiveness of a clinical process: clinical outcomes (successful patient education and avoidance of UIP), unit cost, staff satisfaction, and patient satisfaction. Clinical outcomes are a primary determinant of process quality, as failure to attain the clinical goal reflects a failure of the process regardless of the success in managing the other indicators. Unit cost is the major challenge in delivering services in our increasingly cost-constrained healthcare system. Achieving the best clinical outcome at the lowest cost is a national priority. The satisfaction of the staff delivering the service is key to ensuring that the process is fundamentally stable and not “person” dependent, creating a harmonious, efficient work environment for the staff and patients. Finally, patient satisfaction with the process reflects and is driven by the other 3 parameters. Satisfied patients are more likely to have understood instructions and to adhere to recommendations4,5 and subsequently require fewer downstream interactions (“rework”) for a given service. In healthcare, like any other service business, satisfied customers reflect good practices.
An extraordinary amount of variation was found in the initial visit process for contraceptive services. Staff interviews at 165 family planning clinics revealed that processes were established for various reasons and often without knowing what impact those processes would have on clinical outcomes and cost. For example, a process at a given clinic may have been established based on a manager’s training, a best guess as to what process would be most effective, or past experience of providing services at a different clinic.
Measuring performance across 4 dimensions and documenting the processes at a cohort of clinical sites can link variation in process to variation in both clinical outcomes and cost. This linkage allows each site to understand its current process as well as its relative performance, and to determine the size and nature of the opportunity for improvement. Comparing a site with those that have more optimal performance provides healthcare managers with the insight and information to establish or transfer effective and efficient processes for delivering care. Implementation of a so-called best practice without first quantifying one’s own performance and understanding one’s own process could lead to making changes that do not improve the quality or cost of care.
This comparative evaluation is an important exercise for sites that practice in similar healthcare settings such as community health centers or county health departments. Our observations (HealthMETRICS data) at the numerous sites in the study provide evidence that, despite the similarity of certain subgroup settings, similar settings do not use similar processes. Comparisons of the ways in which family planning services were delivered across community health centers identified wide variation in these processes. A similar observation was made for county health departments and other settings with a common purpose and structure. What is particularly remarkable is that the variation is significant even across sites within a single organization operating under the same clinical guidelines.
Comparisons across sites within a common subgrouping of healthcare settings produced wide variation, although less than the variation that occurred when a site was compared with the entire database, where both the total variability and the subgroup variability were captured. Some additional variability can be imposed by state regulatory agencies, because regulations governing the provision of clinical services shape variation within a state as similar sites react differently to achieve the same regulatory requirement. For example, Florida requires that OCs be handed to the patient by an RN, whereas Maine does not; RNs are therefore used more heavily for this service in Florida than in Maine, which could have a ripple effect on variation in other subprocesses. Even within a state, individual sites within a subgroup (eg, county health departments) may adopt different processes to meet the same regulatory requirements.
Implementation of optimized processes observed at sites with low cost and high quality has the potential to reduce the cost of the initial OC visit by more than 20%. The potential savings would significantly impact public health dollars, as 17 million women per year utilize publicly funded family planning services.6 This study did not find any evidence that clinical quality would be negatively impacted. In fact, numerous instances where quality would be improved with a more efficient process were documented. For example, several sites that had extensive patient education consumed more time and had higher costs, yet patient education scores were lower, as education overload resulted in patients not retaining the items that were most important. The current study’s findings are congruent with the previously described principle that high-cost medical services do not necessarily result in more desirable clinical outcomes.7 Communication challenges are common, as previous reports have shown that 50% of patients leaving a primary care visit do not understand what they were told by the physician.8
We are living in a time of rising healthcare costs with millions of Americans lacking health insurance. Delegating tasks to others at an appropriate level of skill frees up time for both physicians and midlevel practitioners that they can spend building relationships with patients, which augments communication and enhances the provision of effective and efficient care.9 Healthcare providers must confront cost and quality issues on a daily basis with limited resources.10 Managing the delivery of clinical services based on an understanding of the way in which process influences outcomes, as well as the relative performance of one’s operations, will make a major contribution to addressing this crisis.
Limitations
The data for this study were collected over an 11-year period, and processes at early sites may not represent how the initial OC visit is provided today. Only a small set of clinical quality measures was selected. Developing quantitative clinical outcome measures that accurately reflect the quality of care provided is difficult. Although charts were randomly selected for review of clinical outcomes, this task was left to the clinic staff, and selection bias could have been introduced, inflating clinical outcome scores. These data were originally collected by HealthMETRICS under for-profit contracts.
Future Topics of Study
Many different processes exist in the provision of the initial OC visit, and it is important to precisely determine how each influences satisfaction and clinical outcomes. These issues will be explored in future papers. The methodology for identifying variations in satisfaction, cost, and clinical outcomes and linking these to variations in process can be applied to a wide range of clinical services. Future studies should apply this methodology to additional fields of medicine.
Acknowledgments
Thanks to Rebecca Selgrade, BA, Andres Garron, BA, and Chelsea Canan, BA, for their help in editing this manuscript.
Author Affiliations: From HealthMETRICS Partners (CLM, BMB), Lexington, MA; School of Medicine (MJM), Tufts University, Arlington, MA; and the Department of Mathematical Sciences (SWW), Bentley University, Waltham, MA.
Funding Source: HealthMETRICS was remunerated to collect and analyze this data through contracts with the participating sites. Funding primarily came from Title X funds through grantees, largely state departments of health, supplemented by additional funds from the participating sites themselves.
Author Disclosures: Mr Moore is an employee of HealthMETRICS Partners and reports owning stock in the company. Dr Berger is a nonremunerated member of the HealthMETRICS Board of Directors and reports owning stock in the company. The other authors (MJM, SWW) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (CLM, MJM, SWW, BMB); acquisition of data (CLM, MJM); analysis and interpretation of data (CLM, MJM, SWW, BMB); drafting of the manuscript (CLM, MJM, SWW, BMB); critical revision of the manuscript for important intellectual content (CLM, MJM, SWW, BMB); statistical analysis (MJM, SWW); provision of study materials or patients (CLM); obtaining funding (CLM); administrative, technical, or logistic support (CLM, MJM, SWW); and supervision (CLM, MJM).
Address correspondence to: Charles L. Moore, MBA, President and CEO, HealthMETRICS Partners, 329 Massachusetts Ave, Lexington, MA 02421. E-mail: email@example.com.
1. Wennberg JE, Fisher ES, Baker L, Sharp SM, Bronner K. Evaluating the efficiency of California providers in caring for patients with chronic illnesses. Health Aff (Millwood). 2005;Suppl Web Exclusives:W5-526-543.
2. Massachusetts Health Quality Partners Web site. http://www.mhqp.org/quality/whatisquality.asp?nav=030000. Accessed November 20, 2009.
3. Kost K, Singh S, Vaughn B, Trussell J, Bankole A. Estimates of contraceptive failure from the 2002 National Survey of Family Growth. Contraception. 2008;77(1):10-21.
4. Renzi C, Picardi A, Abeni D, et al. Association of dissatisfaction with care and psychiatric morbidity with poor treatment compliance. Arch Dermatol. 2002;138(3):337-342.
5. Harris LE, Luft FC, Rudy DW, Tierney EN. Correlates of health care satisfaction in inner-city patients with hypertension and chronic renal insufficiency. Soc Sci Med. 1995;41(12):1639-1645.
6. Espey E, Ogburn T, Fotieo D. Contraception: what every internist should know. Med Clin North Am. 2008;92(5):1037-1058.
7. Fisher ES, Wennberg JE, Stukel TA, Gottlieb DJ. Variations in the longitudinal efficiency of academic medical centers. Health Aff (Millwood). 2004;Suppl Web Exclusives:VAR19-32.
8. Bodenheimer T. The future of primary care: transforming practice. N Engl J Med. 2008;359(20):2086, 2089.
9. Treadway K. The future of primary care: sustaining relationships. N Engl J Med. 2008;359(20):2086, 2088.
10. Lee TH, Bodenheimer T, Goroll AJ, Starfield B, Treadway K. Perspective roundtable: redesigning primary care. N Engl J Med. 2008;359(20):e24.