
Does Comparing Cesarean Delivery Rates Influence Women’s Choice of Obstetric Hospital?

Publication
Article
The American Journal of Managed Care
February 2019
Volume 25
Issue 2

This randomized controlled trial finds that a hospital cesarean delivery rate comparison tool affects women’s perceptions but not where they choose to deliver.

ABSTRACT

Objectives: Despite public reporting of wide variation in hospital cesarean delivery rates, few women access this information when deciding where to deliver. We hypothesized that making cesarean delivery rate data more easily accessible and understandable would increase the likelihood of women selecting a hospital with a low cesarean delivery rate.

Study Design: We conducted a randomized controlled trial of 18,293 users of the Ovia Health mobile apps in 2016-2017. All enrollees were given an explanation of cesarean delivery rate data, and those randomized to the intervention group were also given an interactive tool that presented those data for the 10 closest hospitals with obstetric services. Our outcome measures were enrollees’ self-reported delivery hospital and views on cesarean delivery rates.

Methods: Intent-to-treat analysis using 2-sided Pearson’s χ2 tests.

Results: There was no significant difference across the experimental groups in the proportion of women who selected hospitals with low cesarean delivery rates (7.0% control vs 6.8% intervention; P = .54). Women in the intervention group were more likely to believe that hospitals in their community had differing cesarean delivery rates (66.9% vs 55.9%; P <.001) and to report that they looked at cesarean delivery rates when choosing their hospital (44.5% vs 33.9%; P <.001).

Conclusions: Providing women with an interactive tool to compare cesarean delivery rates across hospitals in their community improved women’s familiarity with variation in cesarean delivery rates but did not increase their likelihood of selecting hospitals with lower rates.

Am J Manag Care. 2019;25(2):e33-e38

Takeaway Points

Many states and consumer-focused organizations publicly report hospital cesarean delivery rates, but few women know where to access these data. We conducted a randomized controlled trial of 18,293 Ovia Health mobile app users to evaluate whether directly giving women access to cesarean delivery rates of local hospitals affected their choice of hospital. We found that providing women with this information improved their familiarity with these data but did not change their choice of hospital.

  • Cesarean deliveries are believed to be overused in the United States, and rates vary widely across hospitals.
  • Many organizations have begun publicly reporting cesarean delivery rates to help guide women’s choice of hospital.
  • Giving women an interactive tool on a mobile app to compare the hospitals in their community did not increase use of hospitals with low cesarean delivery rates.
  • This study illustrates the potential and limitations of using a modern method of communicating with the public to engage women in considering variation in quality of care among hospitals.

Across the United States, hospital cesarean delivery rates vary dramatically, independent of women’s health status, demographic characteristics, or personal preferences.1,2 Although cesarean deliveries are often clinically necessary, as many as 45% may be unindicated.3 More than three-fourths of women would prefer not to have an unindicated cesarean delivery.4 Compared with vaginal deliveries, cesarean deliveries are associated with 3-fold higher rates of maternal complications and 50% higher costs.5-8

In recent years, consumer advocates such as The Leapfrog Group and Consumer Reports, as well as more than 20 state departments of public health, have begun to publicly report hospital-level cesarean delivery rates. However, research has found that few women know where to access these data. Furthermore, women prioritize the selection of their obstetrician or midwife over selection of their hospital and believe that a hospital’s cesarean delivery rate will not affect the care that they receive.4,9,10

We hypothesized that making cesarean delivery rate data easily accessible to women who either are trying to conceive or are early in their pregnancy, and then pairing these data with an explanation of how their choice of hospital may affect their odds of having a cesarean delivery, would increase the likelihood of women selecting a hospital with a low cesarean delivery rate. A hospital with a low cesarean delivery rate was defined as one that meets the Healthy People 2020 target of a 23.9% (or lower) cesarean delivery rate for nulliparous term singleton vertex (NTSV) deliveries.11

STUDY DATA AND METHODS

Trial Platform and Recruitment

We conducted this trial using 2 mobile apps, Ovia Fertility and Ovia Pregnancy, from the Ovia Health mobile app suite. Trial recruitment and retention are outlined in the CONSORT diagram (eAppendix A [eAppendices available at ajmc.com]). We presented advertisements in the in-app information feeds of Ovia Fertility users who indicated they were trying to conceive and Ovia Pregnancy users in their first trimester (eAppendix B, Figure 1). The advertisements linked to a short article that explained the potential risks of unnecessary cesarean deliveries and the variation in hospital-level cesarean delivery rates (eAppendix B, Figure 2). Women who clicked on a hyperlink at the end of the article offering the opportunity to learn more were randomized 1:1 through the app to the control or intervention group.

Women randomized to the control group were shown a short article that encouraged considering cesarean delivery rates when selecting an obstetric hospital and explained where to find publicly reported data (text available in eAppendix B, Figure 3). Women in the intervention group were shown the same article plus an interactive tool presenting NTSV cesarean delivery rate data for the 10 hospitals closest to their location. Women could also enter another zip code in the tool to see a different set of hospitals. The cesarean delivery rates were self-reported by hospitals to The Leapfrog Group. On the interactive tool, hospitals were color-coded green if they met the Healthy People 2020 target NTSV cesarean delivery rate (23.9%), red if they did not meet the target, and yellow if they did not report their cesarean delivery rate to The Leapfrog Group.11 A screenshot of the tool is in eAppendix B, Figure 4.
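
The tool's color assignment follows directly from the Healthy People 2020 target and each hospital's reporting status. Below is a minimal Python sketch of that threshold logic; the function and variable names are illustrative assumptions, not the Ovia Health implementation.

```python
from typing import Optional

# Healthy People 2020 target for the NTSV cesarean delivery rate (reference 11).
NTSV_TARGET_PERCENT = 23.9

def color_code(ntsv_rate: Optional[float]) -> str:
    """Classify a hospital for the comparison tool.

    ntsv_rate: NTSV cesarean delivery rate (percent) self-reported to
    The Leapfrog Group, or None if the hospital did not report.
    """
    if ntsv_rate is None:
        return "yellow"  # did not report a rate to The Leapfrog Group
    if ntsv_rate <= NTSV_TARGET_PERCENT:
        return "green"   # meets the Healthy People 2020 target
    return "red"         # does not meet the target

# Illustrative use with hypothetical hospitals
print(color_code(21.5))  # green
print(color_code(30.2))  # red
print(color_code(None))  # yellow
```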

Primary and Secondary Outcomes

Our primary outcome was the proportion of women who selected a hospital that met the Healthy People 2020 cesarean delivery rate target. Women enrolled in the trial were shown an advertisement in their feed incentivizing them to report their chosen hospital in the app for the opportunity to enter a lottery for a gift card (eAppendix B, Figure 5). Because any user of the app could report their hospital in their app settings, a small fraction of women in the trial entered their hospital choice before they enrolled in the trial. Our secondary outcomes were responses to 3 survey questions about cesarean delivery rates. The survey was delivered to women enrolled in the trial as another advertisement in their feed (eAppendix B, Figure 6).

Demographic and Other Data

Limited demographic data were available through Ovia, including enrollees’ age, zip code of residence, and pregnancy risk status, which was calculated by Ovia Health based on their age, body mass index, number of gestations, and a structured, self-reported medical history (eAppendix C). Using each woman’s zip code, we linked our data set with data from the US Census on median annual household income, the proportion of residents with a bachelor’s degree, and the urban or rural status of the county. We characterized a woman as having “hospital choice” if there was at least 1 hospital that met the target (green) and at least 1 hospital that did not meet the target or did not report (red or yellow) among hospitals that she saw or would have seen (if she was randomized to the control group) based on her zip code.
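
In terms of the color coding described above, the "hospital choice" indicator reduces to a check on the mix of colors in a woman's 10-hospital choice set. A minimal sketch, with an illustrative function name:

```python
def has_hospital_choice(colors: list[str]) -> bool:
    """True if the choice set contains at least 1 hospital that meets the
    target (green) and at least 1 that does not meet it or did not report
    (red or yellow)."""
    return "green" in colors and any(c in ("red", "yellow") for c in colors)

# Illustrative choice sets
print(has_hospital_choice(["green", "red", "yellow", "green"]))  # True
print(has_hospital_choice(["red", "yellow", "red"]))             # False: no green hospital
print(has_hospital_choice(["green", "green"]))                   # False: no red or yellow hospital
```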

Analysis

We conducted a 2-sample, 2-sided test of proportions to compare the proportions of women selecting a hospital that met the target in the control group and in the intervention group. We used the intent-to-treat framework and assumed that women who did not report their hospital did not choose a hospital that met the cesarean delivery rate target. We also performed 3 post hoc sensitivity analyses to test whether the results were consistent after excluding women who reported their hospital before enrolling in the trial, did not report their hospital, or did not have a choice of at least 1 hospital that met the target and 1 that did not in their local area.
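
The analyses were run in Stata; below is a rough Python sketch of an equivalent 2-sample, 2-sided test of proportions, not the authors' code. The group sizes are the reported enrollment counts, but the numerators are back-calculated from the rounded percentages in the Results, so the output only approximates the reported P = .54.

```python
from statsmodels.stats.proportion import proportions_ztest

# Enrollment by arm as reported in the trial (control, intervention).
nobs = [9168, 9125]

# Women selecting a hospital that met the Healthy People 2020 target.
# Counts are back-calculated from the rounded 7.0% and 6.8%, so they are
# approximations of the actual trial counts.
count = [round(0.070 * 9168), round(0.068 * 9125)]

z_stat, p_value = proportions_ztest(count, nobs, alternative="two-sided")
print(f"z = {z_stat:.2f}, P = {p_value:.2f}")  # close to, but not exactly, the reported P = .54
```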

We compared answers to survey questions between the control and intervention groups using a 2-sided Pearson’s χ2 test across all survey respondents. Due to a coding limitation in the Ovia apps, survey responses could be linked to enrollee demographic information only during the second half of the trial period (48% of all survey respondents). For this subset of respondents, we compared the proportion answering “yes” to each survey question in the intervention and control groups within each demographic subcategory. All analyses were performed using Stata version 15.0 (StataCorp; College Station, Texas).
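
Each survey comparison amounts to a Pearson χ2 test on a group-by-response contingency table. A minimal Python sketch for the three-category question (yes / no / don’t know) is shown below; it assumes a roughly even split of the 3,687 survey respondents across arms (the article does not report the split), and the cell counts are back-calculated from rounded percentages, so the statistic only approximates the reported P = .023.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 3 table of responses to whether the chosen hospital affects the
# chance of a cesarean delivery (columns: yes, no, don't know; rows:
# intervention, control). Cell counts are back-calculated from the rounded
# percentages in the Results assuming ~1,844 respondents per arm, so they
# are approximations, not the actual trial counts.
table = np.array([
    [1025, 420, 400],  # intervention: 55.6%, 22.8%, 21.7%
    [1019, 477, 347],  # control: 55.3%, 25.9%, 18.8%
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p_value:.3f}")  # approximates the reported P = .023
```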

The trial was approved by the Institutional Review Board of the Harvard T.H. Chan School of Public Health and registered at ClinicalTrials.gov (NCT02987803). All participants consented through their agreement to the terms of use and privacy policy for the Ovia apps. A Data Safety Monitoring Board (DSMB) comprising experts in the study content and statistical methods reviewed interim results in June 2017.

RESULTS

During an 11-month period (September 2016 to July 2017), we enrolled 18,293 women in our trial (9125 intervention and 9168 control). In our sample, most women were aged 26 to 34 years (61.4%) and lived in an urban county (79.5%) (Table 1).12 Half (49.9%) of the women resided in a zip code with a median household income of less than $52,000, and half (52.9%) resided in a zip code in which 30% or fewer of the residents held a bachelor’s degree. More than two-thirds of women (67.8%) lived in an area with a choice of local hospitals, defined as including at least 1 hospital that met the cesarean delivery target and at least 1 other hospital among the choice set of the 10 hospitals closest to their zip code. Approximately half (54.8%) of women were classified as having a high-risk pregnancy based on self-reported medical history. Sample characteristics were balanced across the control and intervention groups. Compared with the national population of women who give birth annually, our study sample was slightly younger, more likely to reside in lower-income and less-educated areas, more likely to reside in rural areas, and more likely to have a high-risk pregnancy (eAppendix D).

One-fourth of enrolled women were recruited via the Ovia Fertility app before they were pregnant. The other women were recruited via the Ovia Pregnancy app and, on average, were 7 weeks pregnant when they enrolled. Almost one-third of the women enrolled in the trial (31.8%) reported their hospital selection in their profile, and they had an average gestational age of 15 weeks at the time they provided this information. Because women could enter their hospital information at any time, 3.2% of the participants reported their hospital before enrolling in the trial. Among women who were pregnant when entering the study and reported their hospital after enrolling in the trial, the average length of time between trial enrollment and hospital entry was 9 weeks.

Primary Outcome: Hospital Selection

There was no significant difference between the control and intervention groups in the proportion of women who selected hospitals that reported cesarean delivery rates that met the Healthy People 2020 target (7.0% in control vs 6.8% in intervention; P = .540) (Table 2). The mean (SD) cesarean delivery rate of hospitals selected was the same among women in the control and intervention groups (26.3% [9.1%] in control; 26.2% [8.9%] in intervention). These results were consistent in 3 post hoc sensitivity analyses, the first of which excluded women who reported their hospital before enrolling in the trial (eAppendix E), the second of which included only women who reported their hospital information (eAppendix F), and the third of which included only women who had a choice of at least 1 “green” hospital and at least 1 hospital of another color on the interactive tool (eAppendix G).

Secondary Outcome: Survey Questions

Of the 18,293 women enrolled in the trial, 3687 (20.2%) answered the 3 survey questions. Women in the intervention group were more likely to believe that hospitals in their community had differing cesarean delivery rates (66.9% vs 55.9%; P <.001) (Table 3). They were also more likely to say that they looked at cesarean delivery rates when choosing their hospital (44.5% vs 33.9%; P <.001). Although women in each group were equally likely to believe that the hospital they chose would affect their chances of having a cesarean delivery (55.6% vs 55.3% for intervention and control, respectively), women in the intervention group were less likely to answer no to this question (22.8% vs 25.9%) and more likely to answer that they did not know (21.7% vs 18.8%) (P = .023).

We conducted exploratory analyses on the differential impact of the intervention on women’s responses across demographic lines for the 48% of respondents (n = 1778) for whom we could link demographic data to survey responses (Table 4).12 Although for most demographic subgroups there was no consistent pattern of differential impact of the intervention, the intervention did have a stronger impact on beliefs about cesarean deliveries among women who resided in higher-income areas. Among women residing in areas with median household incomes above $70,000, there was an 18.8-percentage-point difference (P <.001) between the proportions of the intervention and control enrollees who answered that the hospitals in their community have differing cesarean delivery rates, compared with a 3.5-percentage-point difference (P = .069) among women residing in areas with median household incomes of $38,800 or less.

DISCUSSION

This randomized controlled trial was motivated by the hypothesis that making cesarean delivery data more easily accessible would encourage and enable women to use these data in the selection of a hospital to deliver their baby. Women in the intervention arm of this trial were significantly more likely to believe that hospitals in their community had different cesarean delivery rates and to look up those rates before selecting a hospital. Nonetheless, they were not more likely to believe that the hospital they chose would affect their personal risk of cesarean delivery, and they ultimately were not more likely to select a hospital with a lower cesarean delivery rate. Notably, the cesarean delivery rate data appeared to have the greatest effect on the responses of women in higher-income areas, who may be better positioned to understand, interpret, and/or utilize the data.

State departments of health, payers, purchasers, and consumer advocates are publicly reporting hospital-level cesarean delivery rates to women who are pregnant or considering pregnancy in the hope that women will use these data to choose a hospital. Many of these existing reports are challenging to access, navigate, and interpret, but our study results show that simply closing these accessibility gaps may be insufficient to influence hospital selection. The lack of impact of the intervention on women’s selection of hospitals may be explained, in part, by a pervasive belief among many pregnant women that hospital-level outcomes have little bearing on how they will deliver their baby. Most women believe that their obstetrician or midwife and their own preferences will drive whether they have a cesarean delivery.4 Although we provided an explanation of why hospital-level data were important, it was clearly not sufficient, as nearly half of women in the intervention group still believed that the hospital they chose would not influence their chances of having a cesarean delivery.

Cesarean delivery rates may be challenging to interpret without a significant degree of health literacy or numeracy.13 Although our intervention was designed to address these potential barriers, the complexity and personal nature of healthcare decisions pose a significant challenge to engaging the public with hospital-level quality indicators. In addition, other important determinants of hospital choice, such as insurance provider networks, existing relationships with physicians or midwives, and recommendations from friends and family members, may have outweighed the value that women placed on these quality metrics.

Limitations

Our results should be interpreted in the context of the limitations of our study. First, due to design constraints, women were able to view the cesarean delivery rates of local hospitals only once and could not save the tool or find it again in the app. Given this limited exposure, women would need to remember the data or actively seek them from another source later, when deciding where to deliver. The 1-time access also limited women’s ability to share the data with others whose advice may affect their decision making, such as partners, family, friends, or their primary care provider.9

Second, our ability to encourage women to select hospitals with lower cesarean delivery rates depended on women having this option. One-third of women in our trial potentially did not have the option of choosing from a set of hospitals that included at least 1 hospital that met the Healthy People 2020 target and at least 1 hospital that either did not meet the target or did not report. Moreover, health insurance plans increasingly stipulate preferred provider networks, and although we could not observe these data in our trial, this too may have limited women’s options. A lack of options due to insurance networks limits the impact of efforts by consumer groups and governmental organizations to publicly disseminate provider quality information. The results of our trial suggest that the data and information presented were not sufficient to persuade women to select hospitals with lower cesarean delivery rates; however, we cannot definitively conclude whether this was because the women in our trial did not have access to these hospitals or because of the other barriers to using this information, as discussed above.

Third, there were limitations related to our documentation of primary and secondary outcomes. We relied on women to self-report their hospital and observed this primary outcome for 32% of the sample. In addition, only 36% of hospitals nationwide report their cesarean delivery rates to Leapfrog. For our secondary outcomes, 20% of the women enrolled in the trial responded to the survey. To guard against bias from differential nonreporting, we used an intent-to-treat analysis. Ultimately, we did not recruit the full sample necessary to achieve 80% power to detect a 5-percentage-point effect size for our primary outcome; however, our observed effect size was very small, and on the basis of those findings, our DSMB recommended that we discontinue the trial and report these results.

CONCLUSIONS

Despite these limitations, our study results illustrate the utility of using a modern method of communicating with the public to engage women in considering variation in quality of care among hospitals. The Ovia Health mobile app platform allowed us to reach a large and diverse sample of women during a critical time frame before or early in their pregnancy. Providing women with 1-time access to an interactive tool showing cesarean delivery rates of hospitals in their area enhanced their understanding of hospital-level differences within their community and increased their likelihood of looking at cesarean delivery rates when selecting a provider. Clinicians should be aware of how their patients are engaging with these media to inform important decisions about their care. Future research should investigate how these types of platforms can be used more effectively to help women navigate their choice of hospital, as well as other decisions that may affect their pregnancy outcomes.

Acknowledgments

The authors acknowledge Ovia Health and the members of the Data Safety Monitoring Board for their helpful contributions to the study.

Author Affiliations: Department of Health Care Policy (RAG, AM) and Department of Obstetrics, Gynecology and Reproductive Biology (NTS), Harvard Medical School, Boston, MA; Division of General Internal Medicine and Primary Care, Beth Israel Deaconess Medical Center (AM), Boston, MA; Ariadne Labs at Brigham and Women’s Hospital and the Harvard T.H. Chan School of Public Health (GG, ACP, NTS), Boston, MA.

Source of Funding: The study was funded by Square Roots. Square Roots is a philanthropic organization focused on improving pregnancy health and wellness that funds programs in academia, public policy agencies, and private companies. Square Roots was not involved in the study design; the collection, analysis, and interpretation of data; the writing of the report; or the decision to submit the article for publication. Ms Gourevitch’s time was partially supported by T32HS000055 from the Agency for Healthcare Research and Quality.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (RAG, AM, GG, ACP, NTS); acquisition of data (AM, ACP, NTS); analysis and interpretation of data (RAG, ACP, NTS); drafting of the manuscript (RAG, NTS); critical revision of the manuscript for important intellectual content (RAG, AM, GG, ACP, NTS); statistical analysis (RAG, AM); obtaining funding (GG, NTS); administrative, technical, or logistic support (GG); and supervision (AM).

Address Correspondence to: Rebecca A. Gourevitch, MS, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115. Email: gourevitch@g.harvard.edu.

REFERENCES

1. Cáceres IA, Arcaya M, Declercq E, et al. Hospital differences in cesarean deliveries in Massachusetts (US) 2004-2006: the case against case-mix artifact. PLoS One. 2013;8(3):e57817. doi: 10.1371/journal.pone.0057817.

2. Kozhimannil KB, Arcaya MC, Subramanian SV. Maternal clinical diagnoses and hospital variation in the risk of cesarean delivery: analyses of a National US Hospital Discharge Database. PLoS Med. 2014;11(10):e1001745. doi: 10.1371/journal.pmed.1001745.

3. Witt WP, Wisk LE, Cheng ER, et al. Determinants of cesarean delivery in the US: a lifecourse approach. Matern Child Health J. 2015;19(1):84-93. doi: 10.1007/s10995-014-1498-8.

4. Gourevitch RA, Mehrotra A, Galvin G, Karp M, Plough A, Shah NT. How do pregnant women use quality measures when choosing their obstetric provider? Birth. 2017;44(2):120-127. doi: 10.1111/birt.12273.

5. Caughey AB, Cahill AG, Guise JM, Rouse DJ; American College of Obstetricians and Gynecologists (College); Society for Maternal-Fetal Medicine. Safe prevention of the primary cesarean delivery. Am J Obstet Gynecol. 2014;210(3):179-193. doi: 10.1016/j.ajog.2014.01.026.

6. Deneux-Tharaux C, Carmona E, Bouvier-Colle MH, Bréart G. Postpartum maternal mortality and cesarean delivery. Obstet Gynecol. 2006;108(3, pt 1):541-548. doi: 10.1097/01.AOG.0000233154.62729.24.

7. Liu S, Liston RM, Joseph KS, Heaman M, Sauve R, Kramer MS; Maternal Health Study Group of the Canadian Perinatal Surveillance System. Maternal mortality and severe morbidity associated with low-risk planned cesarean delivery versus planned vaginal delivery at term. CMAJ. 2007;176(4):455-460. doi: 10.1503/cmaj.060870.

8. Truven Health Analytics. The cost of having a baby in the United States. Catalyst for Payment Reform website. catalyze.org/wp-content/uploads/2017/04/2013-The-Cost-of-Having-a-Baby-in-the-United-States.pdf. Published January 2013. Accessed April 6, 2018.

9. Declercq ER, Sakala C, Corry MP, Applebaum S, Herrlich A. Listening to Mothers III: Pregnancy and Birth. New York, NY: Childbirth Connection; 2013. transform.childbirthconnection.org/wp-content/uploads/2013/06/LTM-III_Pregnancy-and-Birth.pdf.

10. Maurer M, Firminger K, Dardess P, Ikeler K, Sofaer S, Carman KL. Understanding consumer perceptions and awareness of hospital-based maternity care quality measures. Health Serv Res. 2016;51(suppl 2):1188-1211. doi: 10.1111/1475-6773.12472.

11. Maternal, infant, and child health. Healthy People 2020 website. healthypeople.gov/2020/topics-objectives/topic/maternal-infant-and-child-health/objectives. Accessed April 6, 2018.

12. US Census American Community Survey Summary File data. US Census Bureau website. census.gov/programs-surveys/acs/data/summary-file.2015.html. Accessed November 15, 2017.

13. Goff SL, Pekow PS, White KO, Lagu T, Mazor KM, Lindenauer PK. IDEAS for a healthy baby—reducing disparities in use of publicly reported quality data: study protocol for a randomized controlled trial. Trials. 2013;14:244. doi: 10.1186/1745-6215-14-244.
