Meta-analysis of AI-based pulmonary embolism detection: How reliable are deep learning models?

Bibliographic Details
Published in Computers in biology and medicine Vol. 193; p. 110402
Main Authors: Lanza, Ezio; Ammirabile, Angela; Francone, Marco
Format Journal Article
Language English
Published United States: Elsevier Ltd, 01.07.2025
ISSN 0010-4825
1879-0534
DOI 10.1016/j.compbiomed.2025.110402


More Information
Summary: Deep learning (DL)–based methods show promise in detecting pulmonary embolism (PE) on CT pulmonary angiography (CTPA), potentially improving diagnostic accuracy and workflow efficiency. This meta-analysis aimed to (1) determine pooled performance estimates of DL algorithms for PE detection; and (2) compare the diagnostic efficacy of convolutional neural network (CNN)– versus U-Net–based architectures. Following PRISMA guidelines, we searched PubMed and EMBASE through April 15, 2025 for English-language studies (2010–2025) reporting DL models for PE detection with extractable 2 × 2 data or performance metrics. True/false positives and negatives were reconstructed when necessary under an assumed 50% PE prevalence (with 0.5 continuity correction). We approximated AUROC as the mean of sensitivity and specificity if not directly reported. Sensitivity, specificity, accuracy, PPV and NPV were pooled using a DerSimonian–Laird random-effects model with Freeman-Tukey transformation; AUROC values were combined via a fixed-effect inverse-variance approach. Heterogeneity was assessed by Cochran's Q and I². Subgroup analyses contrasted CNN versus U-Net models. Twenty-four studies (n = 22,984 patients) met inclusion criteria. Pooled estimates were: AUROC 0.895 (95% CI: 0.874–0.917), sensitivity 0.894 (0.856–0.923), specificity 0.871 (0.831–0.903), accuracy 0.857 (0.833–0.882), PPV 0.832 (0.794–0.869) and NPV 0.902 (0.874–0.929). Between-study heterogeneity was high (I² ≈ 97% for sensitivity/specificity). U-Net models exhibited higher sensitivity (0.899 vs 0.893) and CNN models higher specificity (0.926 vs 0.900); subgroup Q-tests confirmed significant differences for both sensitivity (p = 0.0002) and specificity (p < 0.001). DL algorithms demonstrate high diagnostic accuracy for PE detection on CTPA, with complementary strengths: U-Net architectures excel in true-positive identification, whereas CNNs yield fewer false positives. However, marked heterogeneity underscores the need for standardized, prospective validation before routine clinical implementation.

Highlights:
• AI shows strong sensitivity (0.894) and specificity (0.871) for PE, enhancing diagnostic confidence.
• CNNs and U-Nets offer complementary strengths; segmentation aids detection, classification cuts false positives.
• Clinical use needs prospective validation, standardized protocols, and clearer model reporting.
• AI is best as a second reader; integration into workflows can boost speed, safety, and radiologist support.
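
To make the pooling procedure described in the summary concrete, the following is a minimal Python sketch (not the authors' code) of a Freeman-Tukey double-arcsine transform followed by DerSimonian–Laird random-effects pooling with Cochran's Q and I². The per-study counts are hypothetical placeholders rather than data from the included studies, and the function names are illustrative only.

    import numpy as np

    def freeman_tukey(events, n):
        # Freeman-Tukey double-arcsine transform of a proportion and its approximate variance.
        t = np.arcsin(np.sqrt(events / (n + 1.0))) + np.arcsin(np.sqrt((events + 1.0) / (n + 1.0)))
        return t, 1.0 / (n + 0.5)

    def dersimonian_laird(effects, variances):
        # DerSimonian-Laird random-effects pooling with Cochran's Q and I^2.
        w = 1.0 / variances
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)          # Cochran's Q
        df = len(effects) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                   # between-study variance
        w_re = 1.0 / (variances + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
        return pooled, q, i2

    def back_transform(t, n_harmonic):
        # Miller's inversion of the double-arcsine transform, using the harmonic mean sample size.
        s = np.sin(t)
        inner = 1.0 - (s + (s - 1.0 / s) / n_harmonic) ** 2
        return 0.5 * (1.0 - np.sign(np.cos(t)) * np.sqrt(max(0.0, inner)))

    # Hypothetical per-study true positives out of PE-positive scans (not data from the review).
    tp = np.array([45.0, 88.0, 120.0, 30.0])
    n_pos = np.array([50.0, 100.0, 130.0, 35.0])

    t, var = freeman_tukey(tp, n_pos)
    pooled_t, q, i2 = dersimonian_laird(t, var)
    n_harm = len(n_pos) / np.sum(1.0 / n_pos)
    print(f"Pooled sensitivity ~ {back_transform(pooled_t, n_harm):.3f}, Q = {q:.2f}, I^2 = {i2:.1f}%")

The same machinery pools specificity, accuracy, PPV and NPV; only the per-study numerators and denominators change.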