Large language models in radiology reporting - A systematic review of performance, limitations, and clinical implications
Published in: Intelligence-Based Medicine, Vol. 12, p. 100287
Format: Journal Article
Language: English
Publisher: Elsevier B.V., 2025
ISSN: 2666-5212
DOI: 10.1016/j.ibmed.2025.100287
Summary: Large language models (LLMs) and vision-language models (VLMs) have emerged as potential tools for automated radiology reporting. However, concerns regarding their fidelity, reliability, and clinical applicability remain. This systematic review examines the current literature on LLM-generated radiology reports, assessing their fidelity, clinical reliability, and effectiveness. The review aims to identify the benefits, limitations, and key factors influencing the quality of AI-generated reports.
We conducted a systematic search of MEDLINE, Google Scholar, Scopus, and Web of Science to identify studies published between January 2015 and July 2025. Studies evaluating radiology reports generated by Transformer-based generative LLMs or VLMs were included. The review follows PRISMA guidelines. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool.
Fifteen studies met the inclusion criteria. Four assessed VLMs that generate full radiology reports directly from images, whereas eleven examined LLMs that summarize textual findings into radiology impressions. Six studies evaluated out-of-the-box (base) models, and nine analyzed fine-tuned models. Twelve investigations paired automated natural-language metrics with radiologist review, while three relied on automated metrics alone. Fine-tuned models aligned more closely with expert evaluations and achieved higher scores on natural language processing metrics than base models. All models exhibited hallucinations, misdiagnoses, and inconsistencies.
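To illustrate what the automated natural-language metrics mentioned above actually measure, the minimal Python sketch below implements a simplified ROUGE-L-style overlap score, representative of the metric family many of the reviewed studies used. The example impressions and function names are invented for illustration and are not taken from any included study. The sketch shows how a lexical-overlap metric can assign a high score to a candidate impression that drops a key negation and therefore reverses the clinical meaning.

# Simplified ROUGE-L-style overlap score (longest common subsequence F1).
# Illustrative only; example impressions below are hypothetical.

def lcs_length(a: list, b: list) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if tok_a == tok_b else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference: str, candidate: str) -> float:
    """F1 of LCS-based recall and precision between reference and candidate."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    recall, precision = lcs / len(ref), lcs / len(cand)
    return 2 * recall * precision / (recall + precision)

reference = "no acute cardiopulmonary abnormality"
paraphrase = "no acute cardiopulmonary abnormality is seen"  # clinically equivalent
negation_lost = "acute cardiopulmonary abnormality"          # opposite meaning

print(rouge_l_f1(reference, paraphrase))     # ~0.80: high score, appropriate
print(rouge_l_f1(reference, negation_lost))  # ~0.86: high score despite reversed finding

The second comparison scores as high as the first even though the candidate states the opposite clinical conclusion, which is why surface-overlap metrics alone cannot substitute for radiologist review.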
LLMs show promise in radiology reporting. However, limitations in diagnostic accuracy and hallucinations necessitate human oversight. Future research should focus on improving evaluation frameworks, incorporating diverse datasets, and prospectively validating AI-generated reports in clinical workflows.
• Fine-tuned LLMs outperformed base models in NLP metrics and expert alignment.
• AI-generated reports exhibited hallucinations, misdiagnoses, and missing clinical details.
• Automated metrics overemphasized stylistic similarity over clinical accuracy.
• Human expert evaluation remains essential for validating AI-generated radiology reports.
• Future research should improve evaluation frameworks and real-world validation.