Artificial intelligence in clinical practice: a cross-sectional survey of paediatric surgery residents’ perspectives

Bibliographic Details
Published in: BMJ Health & Care Informatics, Vol. 32, No. 1, p. e101456
Main Authors: Gigola, Francesca; Amato, Tommaso; Del Riccio, Marco; Raffaele, Alessandro; Morabito, Antonino; Coletta, Riccardo
Format: Journal Article
Language: English
Published: England: BMJ Publishing Group Ltd, 21.05.2025
ISSN: 2632-1009
DOI: 10.1136/bmjhci-2025-101456

Summary:
Objectives: The aim of this study was to compare the performance of residents and ChatGPT in answering validated questions and to assess paediatric surgery residents' acceptance, perceptions and readiness to integrate artificial intelligence (AI) into clinical practice.
Methods: We conducted a cross-sectional study using randomly selected questions and clinical cases on paediatric surgery topics. We examined residents' acceptance of AI before and after comparing their results with ChatGPT's results, using the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model. Data analysis was performed using Jamovi V.2.4.12.0.
Results: 30 residents participated. ChatGPT-4.0's median score was 13.75, while ChatGPT-3.5's was 8.75. The median score among residents was 8.13. These differences were statistically significant. ChatGPT outperformed residents specifically in definition questions (ChatGPT-4.0 vs residents, p<0.0001; ChatGPT-3.5 vs residents, p=0.03). In the UTAUT2 questionnaire, respondents expressed a more positive evaluation of ChatGPT, with higher mean values for each construct and lower fear of technology after learning about the test scores.
Discussion: ChatGPT performed better than residents on knowledge-based questions and simple clinical cases. The accuracy of ChatGPT declined when confronted with more complex questions. The UTAUT2 questionnaire results showed that learning about the potential of ChatGPT could lead to a shift in perception, resulting in a more positive attitude towards AI.
Conclusion: Our study reveals residents' positive receptivity towards AI, especially after being confronted with its efficacy. These results highlight the importance of integrating AI-related topics into medical curricula and residency training to help future physicians and surgeons better understand the advantages and limitations of AI.
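Note: the summary reports median scores and p values but does not state which statistical test produced them; the analysis in the study was performed in Jamovi. The sketch below is only a minimal, hypothetical illustration of one way such a score comparison could be run (a two-sided Mann-Whitney U test with SciPy in Python). All variable names and score values are invented placeholders, not data from the study.

```python
# Hypothetical sketch: comparing two sets of scores with a Mann-Whitney U test.
# The values below are invented placeholders, not data from the study.
from scipy.stats import mannwhitneyu

resident_scores = [8.13, 7.50, 9.00, 6.25, 8.75, 7.00]        # placeholder resident scores
chatgpt4_scores = [13.75, 12.50, 14.00, 13.00, 13.75, 14.50]  # placeholder ChatGPT-4.0 scores

# Two-sided nonparametric comparison of the two score distributions
stat, p_value = mannwhitneyu(resident_scores, chatgpt4_scores, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```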
Bibliography:
FG and TA contributed equally.
None declared.