Exploring the risks of over-reliance on AI in diagnostic pathology. What lessons can be learned to support the training of young pathologists?

Bibliographic Details
Published in PLOS ONE, Vol. 20, No. 8, p. e0323270
Main Authors Bellahsen-Harrar, Yaëlle, Lubrano, Mélanie, Lépine, Charles, Beaufrère, Aurélie, Bocciarelli, Claire, Brunet, Anaïs, Decroix, Elise, El-Sissy, Franck Neil, Fabiani, Bettina, Morini, Aurélien, Tilmant, Cyprien, Walter, Thomas, Badoual, Cécile
Format Journal Article
Language English
Published United States: Public Library of Science (PLoS), 28.08.2025
ISSN 1932-6203
DOI 10.1371/journal.pone.0323270

Summary: The integration of Artificial Intelligence (AI) algorithms into pathology practice presents both opportunities and challenges. Although AI can improve accuracy and inter-rater reliability, it is not infallible and can produce erroneous diagnoses, so pathologists must always verify its predictions. This critical judgment is particularly important when algorithm errors could lead to high-impact negative clinical outcomes, such as missing an invasive carcinoma. However, the influence of AI tools on pathologists' decision-making is not well explored. This study evaluates the impact of a previously developed AI tool on diagnostic accuracy and inter-rater reliability among pathologists, while assessing whether pathologists maintain independent judgment of AI predictions. Eight pathologists from different hospitals, with varying levels of experience, participated in the study. Each reviewed 115 slides of laryngeal biopsies, including benign epithelium, low-grade and high-grade dysplasia, and invasive squamous carcinoma. Diagnostic outcomes were compared with and without AI assistance; reference labels were established by double-blind expert review. Results show that assisted pathologists achieved higher accuracy for high-grade dysplasia and invasive carcinoma, as well as improved inter-rater reliability. However, cases of over-reliance on AI were observed: with assistance, pathologists missed invasive carcinomas that they had correctly diagnosed during the unassisted examination. The false predictions on these carcinoma slides carried low confidence scores, which the less experienced pathologists did not take into account, illustrating the risk that they follow AI predictions without sufficient critical judgment or expertise. Our study emphasizes pathologists' potential over-reliance on AI models and its potentially harmful consequences, even as algorithms become more powerful.
Integrating confidence scores and educating pathologists in the use of such tools could help optimize the safe integration of AI into pathology practice.
PMCID: PMC12393786
These authors contributed equally to this work.
Competing Interests: The authors have declared that no competing interests exist.