
The American Journal of Managed Care | January 2026 | Volume 32, Issue 1
Subjective and Objective Impacts of Ambulatory AI Scribes
Although the vast majority of physicians using an artificial intelligence (AI) scribe perceived a reduction in documentation time, those with the most actual time savings had higher relative baseline levels of documentation time.
ABSTRACT
Objectives: To evaluate the association between perceived and actual changes in physician documentation time (DocTime) following implementation of an artificial intelligence (AI) scribe and to determine whether physicians with higher baseline DocTime experience greater reductions in DocTime from AI scribe use.
Study Design: Retrospective assessment of AI scribe use among 310 ambulatory physicians across specialties who chose to adopt a commercial tool at a large academic medical center. We utilized data from a postimplementation user feedback survey and electronic health record audit log measures of scribe use and DocTime.
Methods: We used an ordered logit model to assess adjusted associations between perceived and actual changes in DocTime in the 12 weeks after AI scribe adoption for the 252 physicians (81.3%) with survey data. Multivariate regression models assessed whether baseline DocTime modified the relationship between level of AI scribe use (percentage of weekly encounters) and DocTime.
Results: Although the majority of physicians perceived reductions in DocTime (86.5%) following AI scribe adoption, there was no overall association between perceived reductions and actual changes in DocTime (OR, 0.975; P = .144). In multivariate models, higher levels of AI scribe use were associated with lower DocTime. For each additional 10% of encounters with AI scribe use, DocTime decreased by just over 30 seconds per scheduled hour (P < .001). This effect was modified by baseline DocTime, with less-efficient physicians realizing the majority of time savings.
Conclusions: Although most physicians perceived DocTime reductions from AI scribe use, those realizing the majority of actual time savings were those with higher relative baseline DocTime.
Am J Manag Care. 2026;32(1):In Press
Takeaway Points
Our study assessed the relationship between ambulatory artificial intelligence (AI) scribe use and physician documentation time (DocTime) and can inform clinical and policy decisions around investment in this technology and workforce sustainability.
- Most physicians using an AI scribe perceived DocTime reductions, but actual reductions were modest. There was not a strong association between perceived and objective measures of AI scribe impact on DocTime.
- Those with higher levels of AI scribe use and those with lower baseline DocTime efficiency benefited the most from AI scribe adoption.
- These findings suggest that targeting use of the tool to those with higher DocTime may produce a positive financial return on investment.
Artificial intelligence (AI) scribes—tools that record clinician-patient conversations during an encounter and generate draft clinical notes and visit summaries for the patient’s electronic health record (EHR)—are being widely adopted, particularly in ambulatory settings, with the intent of reducing clinician documentation burden and improving efficiency. Prior to the introduction of AI scribes, clinicians manually generated their notes, either during the encounter or afterward, by relying on their memory or shorthand notes from the encounter. Although some clinicians benefited from assistance from human scribe services (either virtual or in person),1-3 these services have historically been too costly to scale. Thus, AI scribes hold potential to generate multiple types of benefits for clinicians. For example, they may save clinicians documentation time (DocTime; ie, time spent on clinical notes) by generating the first draft of the notes if editing is minimal.4,5 AI scribes may also reduce clinicians’ cognitive burden by eliminating the need to remember what occurred during the encounter, which could feel like time savings even in the absence of DocTime reductions.6,7
Emerging data from studies in individual institutions that examined these benefits for clinicians point to overall clinician satisfaction, particularly around perceived reduction in cognitive burden,8-10 alongside small reductions in DocTime.11-13 Thematic analysis of clinician feedback following implementation of AI scribes reveals feelings of reduced documentation-related cognitive burden, increased efficiency and time savings, and improved work-life balance.14 However, use of this tool and subsequent benefits are not evenly distributed. One study reported that usage volume is primarily driven by a subset of high users, identifying a dose-response relationship whereby physicians with higher usage levels of the AI scribe exhibited the largest reductions in DocTime.15
Although the evidence to date14 offers a fairly consistent picture of clinician benefits, there are nuanced questions related to these benefits that need to be addressed for health system leaders to understand return on investment (ROI) from AI scribes. First, are perceived changes in DocTime from AI scribe use associated with objective changes in DocTime? Generating a positive financial ROI requires increasing billable activities with the DocTime savings from AI scribes. If the same clinicians experience both perceived and objective reductions, ramping up billing expectations may be feasible and acceptable to clinicians. If not, then asking providers to see more patients could offset any well-being gains. This raises a second question of actionable drivers of DocTime reductions from AI scribes. One prior study points to increasing the level of AI scribe use (ie, the percentage of encounters for which the tool is used) as one such driver.15 However, not all clinicians may have the opportunity for DocTime savings from increased AI scribe use, particularly those who have efficient manual documentation. In addition, if DocTime savings from AI scribes are realized during after-hours work, this also limits the potential for increasing billable activities.
In this study, we sought to address these open questions related to clinician benefits from AI scribes to inform health system leaders’ understanding of ROI. We used recent data from the implementation of a commercially available AI scribe tool at our academic medical center, where it is available to any attending ambulatory physician. We focused on the ambulatory setting because AI scribe technology is more mature and more widely deployed there, driven by high patient volumes, short visit times, and substantial documentation requirements that have resulted in documentation burden and burnout in this setting.
METHODS
Study Setting
The University of California, San Francisco is an academic medical center in Northern California that delivers more than 2 million patient visits annually to individuals in San Francisco and Northern California. Prior to AI scribe adoption, select high-volume physicians (~15% of all ambulatory physicians) used a human scribe, and the remaining physicians wrote their notes using a mix of elements, including prebuilt note templates or autofill functionalities, structured fields (eg, drop-downs or checkboxes), copy and paste from prior related notes, and free text. A commercial AI scribe technology (Ambience AutoScribe; Ambience Healthcare) was initially selected by the faculty practice, in partnership with leaders from the School of Medicine and the UCSF Health Clinical Informatics team, for broad implementation (including EHR integration) across ambulatory specialty practices.
Faculty who chose to adopt the tool were provided with onboarding that included a 1-time, synchronous, virtual training with the vendor; tip sheets from our institution; and optional virtual, real-time support. After onboarding, physicians could choose to use the AI scribe tool for any proportion of encounters. Importantly, use of the tool was not tied to expectations of increased billing or productivity. Participating physicians utilized the tool on a smartphone using the following workflow: (1) Ask the patient and anyone else in the room to consent to use of the tool at the start of the encounter (for any encounter type, including telemedicine); (2) for patients who consent, launch the tool; (3) access the AI-generated drafts for the note sections and patient instructions within the EHR patient encounter record soon after the end of recording; and (4) review, edit, and sign off on the clinical note and patient instructions prior to finalizing them.
Study Sample, Data, and Measures
Our study sample included all 310 physicians who were onboarded after the AI scribe was integrated into the EHR and offered enterprise-wide (December 6, 2024) but before February 9, 2025, to ensure sufficient data for analyses post onboarding. This represents 18.7% of the estimated 1658 eligible physicians. Many of the remaining eligible physicians were onboarded after our study cutoff date (February 9, 2025). For our first research question (objective vs perceived impact of AI scribe on DocTime), the sample was limited to the subset of physicians (n = 252) with a completed postadoption survey response. The specialty distributions across the 2 samples were similar (eAppendix Table [available at ajmc.com]).
Survey. Physicians were asked to complete an online survey 90 days after onboarding; the distribution of the timing of survey completion relative to the onboarding date is shown in the eAppendix Figure. The survey asked physicians about varied dimensions of their experience with the tool. For this study, we focused on the question most directly relevant to compare with changes in DocTime measured using EHR data: “How does the AI scribe tool impact the time you spend on documentation?” Answers were on a 5-point Likert scale: significantly increases time, somewhat increases time, no impact, somewhat decreases time, and significantly decreases time.
EHR-based measures. All EHR-based measures, including the level of AI scribe use, were calculated at the physician-week level and adjusted for scheduled clinical hours. We chose week-level rather than encounter-level aggregation to reduce variability introduced by differences in encounter complexity and documentation and to enhance the robustness and interpretability of the data by smoothing out random fluctuations across individual encounters.
Our primary independent variable was the frequency of AI scribe tool use, defined by the percentage of weekly ambulatory visits for which the physician was provider of record and the AI scribe tool was used. Two weekly metrics of EHR time were measured in minutes per scheduled hour: (1) total active time spent engaged in ambulatory note activities within the EHR (DocTime; ie, reading, editing, and writing clinical notes, including patient instructions), and (2) total active time spent on ambulatory EHR activities outside working hours (WowTime; ie, work outside of work, including DocTime and time on orders, chart review, and inbox). These times were measured using Epic’s active use log data, known as User Action Log Lite. Measures were limited to ambulatory work (to exclude inpatient-related EHR work, given that our ambulatory physicians can work in both settings during the same week) by including only those usage records that captured time spent in ambulatory-context EHR activities. Working hours were defined as 30 minutes prior to the first appointment of a physician’s day through 30 minutes after the end of the last appointment, with WowTime defined as all remaining blocks of time in the day spent working in the EHR, including any time spent in an ambulatory context within the EHR on days when a physician had no scheduled appointments. EHR time measures were scaled relative to each physician’s hours of scheduled outpatient time and reported as minutes per scheduled clinic hour to make them comparable across physicians with heterogeneous clinical workloads.16
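To make the measurement construction concrete, the following is a minimal sketch of the physician-week aggregation described above, written in Python with pandas. The column names (physician_id, week, active_minutes, is_ambulatory, is_note_activity, in_working_hours, scheduled_hours) and the two-table layout are illustrative assumptions, not the actual schema of Epic’s User Action Log Lite export.

```python
import pandas as pd

def weekly_ehr_time(events: pd.DataFrame, schedule: pd.DataFrame) -> pd.DataFrame:
    """Aggregate ambulatory audit-log activity to physician-week DocTime and
    WowTime, scaled to minutes per scheduled clinic hour (hypothetical schema)."""
    amb = events[events["is_ambulatory"]]  # keep ambulatory-context EHR activity only

    # DocTime: active minutes in note-related activities (reading, editing, writing)
    doc = (amb[amb["is_note_activity"]]
           .groupby(["physician_id", "week"])["active_minutes"].sum()
           .rename("doc_minutes"))

    # WowTime: active minutes outside working hours (all ambulatory EHR activity)
    wow = (amb[~amb["in_working_hours"]]
           .groupby(["physician_id", "week"])["active_minutes"].sum()
           .rename("wow_minutes"))

    out = (schedule.set_index(["physician_id", "week"])  # scheduled_hours per physician-week
           .join([doc, wow])
           .fillna(0.0)
           .reset_index())
    out["doctime_per_hr"] = out["doc_minutes"] / out["scheduled_hours"]
    out["wowtime_per_hr"] = out["wow_minutes"] / out["scheduled_hours"]
    return out
```

Scaling by scheduled hours in the final two lines is what makes the measures comparable across physicians with heterogeneous clinical workloads.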
To enable assessment of variability in the relationship between use of the AI scribe tool and our EHR-based time measures by baseline DocTime, we measured baseline DocTime efficiency using the individual physician’s random intercept from a linear mixed-effects regression model predicting their DocTime in the baseline period (12 weeks prior to AI scribe onboarding) and including the covariates described in the following paragraph as well as physician gender. The measure captures the individual’s deviation (how many extra minutes per hour of DocTime a physician is expected to spend) from the global intercept (how many minutes per hour the average physician spends on documentation). The sign on this measure was reversed, such that those with higher values were considered more efficient with documentation and those with lower values were considered less efficient. Given that approximately 15% of physicians had a human scribe prior to AI scribe adoption (which could have limited the opportunity for DocTime reductions from AI scribe use), we assessed—and found a low correlation between—baseline DocTime and use of a human scribe during the baseline period (Pearson correlation coefficient = 0.130; P = .022).
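As a rough illustration of the baseline-efficiency measure, the sketch below fits a random-intercept mixed model on the 12 pre-adoption weeks and returns each physician’s sign-reversed deviation from the global intercept. Variable and column names are hypothetical, and the exact covariate coding may differ from the specification used in the study.

```python
import statsmodels.formula.api as smf

def baseline_efficiency(baseline_weeks):
    """Sign-reversed physician random intercepts from a mixed model of
    baseline DocTime (higher value = more efficient documenter)."""
    model = smf.mixedlm(
        "doctime_per_hr ~ visit_volume + pct_interpreter + pct_in_person"
        " + pct_new_patient + pct_attending_only + C(specialty) + C(gender)",
        data=baseline_weeks,
        groups=baseline_weeks["physician_id"],
    )
    fit = model.fit()
    # random_effects maps each physician to their estimated deviation (in
    # minutes per scheduled hour) from the global intercept
    return {pid: -float(re.iloc[0]) for pid, re in fit.random_effects.items()}
```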
We calculated several additional variables to use in our models that are proxies for physician workload (based on guidance from frontline clinicians) and other features that may impact DocTime: (1) total number of weekly ambulatory visits for which the physician was provider of record; (2) percentage of weekly visits for which the patient was indicated as requiring an interpreter (adds complexity to documentation from third-party involvement and translation needs); (3) percentage of weekly visits that were conducted in person (vs via telemedicine), given prior evidence that in-person visits are for more complex patients with more EHR-related work17; (4) percentage of weekly visits that were billed as new (vs established) patients; (5) percentage of weekly “attending-only” visits for which the physician did not have any others contributing to the note (eg, an advanced practice provider or resident, which minimizes DocTime for attendings); and (6) a set of dichotomous indicators of physician specialty. We aggregated the full set of clinical specialties represented in the sample into 9 groups: dermatology, medical specialties, neurology and psychiatry, obstetrics and gynecology, ophthalmology, physical medicine and rehabilitation, primary care, surgical specialties, and other specialties. The eAppendix Table reports sample distribution across specialty groupings.
Analytic Approach
To assess the association between the actual and perceived impact of AI scribe use on DocTime, for each physician in the sample we took the mean DocTime over the 12 weeks before the onboarding date (pre–scribe adoption) and subtracted it from the mean over the 12-week post period (which followed a 4-week washout period that started with the week of onboarding and during which the physician was assumed to be adjusting to use of the tool). Negative values therefore reflect reductions. Because only a small number of respondents reported that the AI scribe tool significantly (n = 1) or somewhat (n = 9) increases DocTime, our analysis was limited to the 242 physicians (96.0%) who reported that the AI scribe had no impact, somewhat decreases DocTime, or significantly decreases DocTime. After confirming that the data did not violate the proportional odds assumption, we ran an ordinal logistic regression model with the 3-category response (no impact, somewhat decreases, significantly decreases) as the outcome and the change in DocTime as the independent variable, as well as the covariates for physician specialty (because these were measured at the physician level). We also estimated and plotted marginal effects to visualize the relationships between each response category and the change in DocTime.
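A hedged sketch of this analysis is shown below using statsmodels’ OrderedModel: the 3-level perceived-impact response is regressed on the pre-to-post change in DocTime plus specialty indicators, and predicted probabilities over a grid of DocTime changes approximate the marginal-effects plot. Column names and the grid range are illustrative, not the authors’ exact implementation.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def perceived_vs_actual(df: pd.DataFrame):
    """Ordered logit of perceived impact on the actual change in DocTime
    (hypothetical columns: perceived_impact, delta_doctime, specialty)."""
    y = df["perceived_impact"].astype(
        pd.CategoricalDtype(
            ["no impact", "somewhat decreases", "significantly decreases"],
            ordered=True))
    X = pd.get_dummies(df[["delta_doctime", "specialty"]],
                       columns=["specialty"], drop_first=True).astype(float)

    fit = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

    # Predicted probability of each response category across a range of
    # DocTime changes, holding specialty indicators at their sample means.
    grid = pd.DataFrame([X.mean()] * 50)
    grid["delta_doctime"] = np.linspace(-20, 5, 50)
    probs = fit.predict(grid)
    return fit, probs
```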
Our second research question assessed whether the frequency of AI scribe use was associated with DocTime. This analysis relied on physician-week data only from the 12-week post period for each user in the full sample. We used linear mixed-effects models to handle these repeated measures (clustering of weeks within physicians), including a random intercept for the physician. The dependent variable was DocTime, and the independent variable was the percentage of weekly encounters with AI scribe use, adjusting for weekly visit volume, visits requiring an interpreter, in-person visits, new patient visits, attending-only encounters, and physician specialty. We repeated the analysis in the subset of 252 physicians with survey responses to ensure that results were consistent.
To answer our third research question, in our full sample and with the same 6 covariates, we replicated the model predicting DocTime and included an interaction term between the measure of baseline DocTime and the frequency of AI scribe use. We then ran this model with WowTime as the dependent variable. All models included only weeks during which physicians had more than 1 completed ambulatory encounter.
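The following sketch, again with statsmodels and hypothetical variable names, illustrates how the post-period mixed-effects models for the second and third research questions might be specified: a physician-level random intercept, the weekly percentage of encounters with AI scribe use as the exposure, the 6 covariates, and (for the moderation models) an interaction with baseline efficiency, with DocTime or WowTime per scheduled hour as the outcome. It is a sketch under these assumptions, not the authors’ exact code.

```python
import statsmodels.formula.api as smf

COVARIATES = ("visit_volume + pct_interpreter + pct_in_person"
              " + pct_new_patient + pct_attending_only + C(specialty)")

def fit_scribe_use_models(post_weeks):
    """Post-period physician-week models (hypothetical column names)."""
    groups = post_weeks["physician_id"]

    # Model 1: main effect of AI scribe use on DocTime
    main = smf.mixedlm(
        f"doctime_per_hr ~ pct_scribe_use + {COVARIATES}",
        data=post_weeks, groups=groups).fit()

    # Model 3: moderation of the use effect by baseline DocTime efficiency
    moderated = smf.mixedlm(
        f"doctime_per_hr ~ pct_scribe_use * baseline_efficiency + {COVARIATES}",
        data=post_weeks, groups=groups).fit()

    # Model 4: same specification with WowTime as the outcome
    wow = smf.mixedlm(
        f"wowtime_per_hr ~ pct_scribe_use * baseline_efficiency + {COVARIATES}",
        data=post_weeks, groups=groups).fit()

    return main, moderated, wow
```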
RESULTS
Sample Characteristics
As shown in Table 1, during the 12 weeks following AI scribe adoption, a mean of 58.7% of weekly encounters involved use of the tool, although this varied considerably across weeks (SD = 32.9%). DocTime decreased from a mean of 22.2 minutes per scheduled hour to 17.3 minutes per scheduled hour after AI scribe adoption. WowTime decreased from 20.0 minutes per scheduled hour to 17.4 minutes per scheduled hour. Approximately one-third (32.5%) of physicians completing the postadoption survey said that the AI scribe significantly decreases DocTime, with another 54.0% reporting that it somewhat decreases DocTime and 9.5% reporting no impact.
Relationships Between Perceived and Actual Changes in DocTime
We did not find that the actual change in DocTime was associated with perceived level of reduction (OR, 0.975; 95% CI, 0.94-1.01) (Table 2). However, in the marginal effects plot (Figure), there is a visible relationship between pre-to-post changes in DocTime and the likelihood of reporting a significant decrease in DocTime from AI scribe adoption (line sloping downward as pre-to-post changes in DocTime increase). Specifically, physicians with no change in DocTime from pre- to post adoption had a 31.1% predicted likelihood (95% CI, 24.1%-38.1%) of reporting that the AI scribe significantly reduced DocTime, whereas those with a 20-minute reduction in DocTime post scribe adoption had a 42.5% predicted likelihood (95% CI, 29.3%-55.7%) of reporting a significant decrease. In contrast, the likelihood of reporting somewhat decreases or no impact increased (lines sloping upward) as pre-to-post changes in DocTime increased (Figure).
Relationship Between Frequency of AI Scribe Use and DocTime
As shown in Table 3, in both the full model (model 1) and the subsample model (model 2), weeks with higher levels of AI scribe use were significantly associated with lower levels of DocTime. Specifically, for each additional 10% of encounters with scribe use, DocTime decreased by 0.5 minutes per scheduled hour (P < .001; model 1). This implies that a 1-SD increase in AI scribe use (32.9%) (Table 1) was associated with a reduction in DocTime of 1.645 minutes per scheduled clinical hour (32.9 × 0.05 = 1.645 minutes), corresponding to a 9.5% relative reduction from the post period mean DocTime of 17.3 minutes per scheduled hour (Table 1).
We also observed a significant interaction effect between frequency of AI scribe use and baseline DocTime efficiency. In model 3 (baseline EHR efficiency interaction on DocTime) (Table 3), the coefficient on the interaction was small but statistically significant (0.0028; P < .001), which suggests that physicians with lower baseline DocTime efficiency experienced greater reductions in DocTime as their use of the AI scribe increased.
Relationship Between Frequency of AI Scribe Use and WowTime
Although higher levels of AI scribe use were significantly associated with lower levels of WowTime (–0.03; P = .018; model 4 [baseline EHR efficiency interaction on WowTime]) (Table 3), we did not observe a significant interaction effect between increased AI scribe use and baseline DocTime efficiency (0.0016; P = .107) (Table 3, right-most column).
DISCUSSION
In this study, we examined the impact of AI scribe use in a diverse group of ambulatory specialties at a large academic medical center. Although many studies have examined the impact of AI scribes on objective and perceived measures of DocTime and documentation burden, our study is novel in several key ways. We assessed the relationship between perceived impact on DocTime and an EHR-derived measure of DocTime using individual-level pre– vs post–scribe adoption data. Although we were surprised to find no relationship, differences between perceived and actual time spent have been observed in other contexts, such as resident work hours.18,19 Further, those with larger decreases in the EHR-based measure were more likely to report a significant decrease, suggesting that perceived and actual time may better align when the effect is large in magnitude. Our study also examined a novel set of questions related to the extent of AI scribe use, finding that those with higher levels of use experienced greater reductions in DocTime and WowTime. Additionally, we found that baseline DocTime efficiency moderates this relationship; however, this moderating effect did not extend to WowTime.
The magnitude of DocTime savings was small, which may be why we did not observe a strong association between perceived and objective changes. Overall, we found significant perceived reductions in DocTime, which is consistent with prior work.9,12 This supports the benefits of AI scribes, even if physician perceptions did not consistently correlate with objective reductions in DocTime. These findings point to the possibility that distinct groups of physicians are benefiting in different ways, with some experiencing perceived reductions in DocTime, potentially driven by reduced cognitive burden as noted in prior studies,8-10,14 and others realizing objective, measurable decreases in DocTime. The magnitude of our DocTime savings from AI scribes is similar to that seen in prior studies. These studies predominantly focused on different, large AI scribe vendors (eg, Nuance Dragon Ambient eXperience9,11-13); consistency of findings across these studies suggests that the observed time savings are generalizable across different AI scribe platforms, including those evaluated in our study.
Given that we observed small-magnitude relationships overall, more work is needed to assess how to optimize use of the tool. Our results point to a few areas in which to begin. First, the AI scribe was used, on average, for just over half of encounters. Understanding why it is less valuable in certain types of encounters, and whether this contributes to the modest time savings, is important. Second, as physicians increasingly trust the tool (the timing of which should be assessed in future work) and as the tool itself improves (ie, fewer hallucinations or omissions), editing time should decrease and translate directly into DocTime reductions. Finally, our results suggest that those who were efficient documenters at baseline realized limited time savings. Thus, use of the tool may be best targeted to those with higher DocTime because this group has greater opportunity for benefits. Notably, the time savings observed for this group did not translate into reductions in WowTime (nonsignificant interaction between AI scribe use and baseline DocTime efficiency; model 4), suggesting that although documentation during clinical hours may have become more efficient, after-hours workload remained largely unchanged.
Taken together, in the context of assessing physician benefits from AI scribes, our results suggest strong well-being benefits (as indicated by the survey) and measurable time savings at work within the group of physicians with high baseline DocTime.
Limitations
Our study should be interpreted with key limitations in mind. Our evaluation of the AI scribe tool was conducted shortly after its implementation, and long-term impacts may differ as physicians adapt and the technology continues to evolve. There was also variability in time between the post survey and onboarding date (eAppendix Figure), but given the relatively tight range, it is unlikely that this influenced our results. Physicians self-selected as early adopters and selected the encounters for which to use the AI scribe. Each of these factors could have increased or decreased the expected impacts. For example, those selecting to be an early adopter could be those with higher baseline EHR burden or those more comfortable with technology and therefore more efficient at documentation. Because physicians selected the encounters for which to use the tool, we did not have a valid way to assess the impact of the AI scribe at the encounter level; therefore, we aggregated data to the per-scheduled-clinic-hour-per-week level (which resulted in some loss of precision). We did not have a way to specifically measure DocTime outside of work, so our WowTime measure may not fully capture the impact of the AI scribe on after-hours documentation burden.
CONCLUSIONS
In this study of more than 300 physicians across multiple specialties, we found reductions in both perceived and objective DocTime, contributing to the growing body of evidence on the impact of AI scribes on documentation burden. However, perceived and objective reductions were not significantly associated with each other overall, indicating that different groups of physicians may derive different types of benefits from AI scribe use. We also found that baseline DocTime appears to modify the DocTime reductions from AI scribe use. Looking ahead, it will be important to assess long-term benefits and to more deeply understand why individual physicians experience differential benefits and how to maximize benefits across all physicians.
Author Affiliations: Division of Clinical Informatics and Digital Transformation, Department of Medicine, University of California, San Francisco (UCSF) (JA-M, MEB, AO, RT, JY, SM), San Francisco, CA; Health AI Team, UCSF Health (OD, HS, SGM), San Francisco, CA; Faculty Practice Organization, Clinical Optimization and Innovation, UCSF (SB), San Francisco, CA.
Source of Funding: The work was supported in part by a gift from Ken and Kathy Hao to establish the Impact Monitoring Platform for AI in Clinical Care at UCSF.
Author Disclosures: Dr Adler-Milstein is a member of the Augmedix Scientific Advisory Board, owns stock in Augmedix, and has attended AcademyHealth and American Medical Informatics Association meetings. Dr DeMasi, Dr Soleimani, Ms Beck, Dr Byron, Dr Oates, and Dr Murray are employed by UCSF Health, an academic institution that is a customer of the vendor described in this article. Dr Soleimani also reports paid consultancies with third-party companies related to artificial intelligence (AI) scribes. Dr Byron has attended conferences at which AI scribes were discussed. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (JA-M, OD, HS, MEB, AO, SGM); acquisition of data (OD, SB); analysis and interpretation of data (JA-M, OD, AO, RT, JY, SGM); drafting of the manuscript (JA-M, OD, JY, SGM); critical revision of the manuscript for important intellectual content (JA-M, OD, HS, SB, MEB, AO, RT, JY, SGM); statistical analysis (JA-M, OD, HS); obtaining funding (JA-M); administrative, technical, or logistic support (SB, MEB, RT); and supervision (HS, SGM).
Address Correspondence to: Julia Adler-Milstein, PhD, University of California, San Francisco, 10 Koret Way, San Francisco, CA 94143. Email: julia.adler-milstein@ucsf.edu.
REFERENCES
1. Prasad K, Frits M, Iannaccone C, et al. Clinician perceptions of virtual scribe use: a survey study. J Gen Intern Med. Published online August 5, 2025. doi:10.1007/s11606-025-09771-5
2. Earls ST, Savageau JA, Begley S, Saver BG, Sullivan K, Chuman A. Can scribes boost FPs’ efficiency and job satisfaction? J Fam Pract. 2017;66(4):206-214.
3. Hribar MR, Dusek HL, Goldstein IH, Rule A, Chiang MF. Methods for large-scale quantitative analysis of scribe impacts on clinical documentation. AMIA Annu Symp Proc. 2021;2020:573-582.
4. Pearlman K, Wan W, Shah S, Laiteerapong N. Use of an AI scribe and electronic health record efficiency. JAMA Netw Open. 2025;8(10):e2537000. doi:10.1001/jamanetworkopen.2025.37000
5. Rotenstein L, Melnick ER, Iannaccone C, et al. Virtual scribes and physician time spent on electronic health records. JAMA Netw Open. 2024;7(5):e2413140. doi:10.1001/jamanetworkopen.2024.13140
6. Olson KD, Meeker D, Troup M, et al. Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Netw Open. 2025;8(10):e2534976. doi:10.1001/jamanetworkopen.2025.34976
7. Shah SJ, Crowell T, Jeong Y, et al. Physician perspectives on ambient AI scribes. JAMA Netw Open. 2025;8(3):e251904. doi:10.1001/jamanetworkopen.2025.1904
8. Tierney AA, Gayre G, Hoberman B, et al. Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catal Innov Care Deliv. 2024;5(3). doi:10.1056/CAT.23.0404
9. Duggan MJ, Gervase J, Schoenbaum A, et al. Clinician experiences with ambient scribe technology to assist with documentation burden and efficiency. JAMA Netw Open. 2025;8(2):e2460637. doi:10.1001/jamanetworkopen.2024.60637
10. Stults CD, Deng S, Martinez MC, et al. Evaluation of an ambient artificial intelligence documentation platform for clinicians. JAMA Netw Open. 2025;8(5):e258614. doi:10.1001/jamanetworkopen.2025.8614
11. Ma SP, Liang AS, Shah SJ, et al. Ambient artificial intelligence scribes: utilization and impact on documentation time. J Am Med Inform Assoc. 2025;32(2):381-385. doi:10.1093/jamia/ocae304
12. Shah SJ, Devon-Sand A, Ma SP, et al. Ambient artificial intelligence scribes: physician burnout and perspectives on usability and documentation burden. J Am Med Inform Assoc. 2025;32(2):375-380. doi:10.1093/jamia/ocae295
13. Haberle T, Cleveland C, Snow GL, et al. The impact of Nuance DAX ambient listening AI documentation: a cohort study. J Am Med Inform Assoc. 2024;31(4):975-979. doi:10.1093/jamia/ocae022
14. Pelletier JH, Watson K, Michel J, McGregor R, Rush SZ. Effect of a generative artificial intelligence digital scribe on pediatric provider documentation time, cognitive burden, and burnout. JAMIA Open. 2025;8(4):ooaf068. doi:10.1093/jamiaopen/ooaf068
15. Tierney AA, Gayre G, Hoberman B, et al. Ambient artificial intelligence scribes: learnings after 1 year and over 2.5 million uses. NEJM Catal Innov Care Deliv. 2025;6(5). doi:10.1056/CAT.25.0040
16. Sinsky CA, Rotenstein L, Holmgren AJ, Apathy NC. The number of patient scheduled hours resulting in a 40-hour work week by physician specialty and setting: a cross-sectional study using electronic health record event log data. J Am Med Inform Assoc. 2025;32(1):235-240. doi:10.1093/jamia/ocae266
17. Reed M, Huang J, Somers M, et al. Telemedicine versus in-person primary care: treatment and follow-up visits. Ann Intern Med. 2023;176(10):1349-1357. doi:10.7326/M23-1335
18. Gonzalo JD, Yang JJ, Ngo L, Clark A, Reynolds EE, Herzig SJ. Accuracy of residents’ retrospective perceptions of 16-hour call admitting shift compliance and characteristics. J Grad Med Educ. 2013;5(4):630-633. doi:10.4300/JGME-D-12-00311.1
19. Dziorny AC, Orenstein EW, Lindell RB, Hames NA, Washington N, Desai B. Pediatric trainees systematically under-report duty hour violations compared to electronic health record defined shifts. PLoS One. 2019;14(12):e0226493. doi:10.1371/journal.pone.0226493