Leveraging Deep Learning to Address Diagnostic Challenges with Insufficient Image Data

Bibliographic Details
Published in: ACS Sensors, Vol. 10, No. 9, pp. 6734–6745
Main Authors: Lu, Jian-Ming; Chiu, Ping-Yeh; Chen, Chien-Fu
Format: Journal Article
Language: English
Published: United States: American Chemical Society, 26.09.2025
ISSN: 2379-3694
DOI: 10.1021/acssensors.5c01439

More Information
Summary: In AI-driven disease diagnosis, model performance has depended mainly on extensive data sets and advanced algorithms. However, building conventional data sets for rare or emerging diseases presents significant challenges. To address this issue, this study introduces a direct-self-attention Wasserstein generative adversarial network (DSAWGAN) designed to improve diagnostic capability for infectious diseases with limited data availability. DSAWGAN improves convergence speed, stability, and image quality by integrating attention modules and optimizing the Wasserstein distance. We compared DSAWGAN-generated images with traditional data augmentation and other image generation techniques, evaluating their effectiveness by the diagnostic accuracy of classification neural networks. The model was then integrated into a mobile app, enabling rapid, portable, and cost-effective diagnostic testing across various concentration ranges. Using only half of the raw data (n = 1500), DSAWGAN increases accuracy from 98.00% to 99.33%. Even with just 10% of the original data (n = 300), a neural network trained on the augmented data set maintains an accuracy of 92.67%, demonstrating the approach's effectiveness in resource-limited settings.
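The abstract names two ingredients of DSAWGAN, attention modules and Wasserstein distance optimization, without implementation detail. The sketch below is a minimal PyTorch illustration of those two generic building blocks: a SAGAN-style self-attention layer and a WGAN-GP critic loss. All class names, layer sizes, and the gradient-penalty weight lambda_gp are illustrative assumptions, not the authors' DSAWGAN architecture.

```python
# Minimal sketch under stated assumptions: a SAGAN-style self-attention
# block and a Wasserstein critic loss with gradient penalty (WGAN-GP).
# Sizes and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Self-attention over spatial feature maps (SAGAN-style)."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

def wgan_gp_critic_loss(critic, real, fake, lambda_gp=10.0):
    """Wasserstein critic loss with gradient penalty (Gulrajani et al.)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(mixed).sum(), mixed,
                               create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return critic(fake).mean() - critic(real).mean() + lambda_gp * penalty
```

In this generic setup, attention lets the generator and critic relate distant image regions rather than only local convolutional neighborhoods, while the gradient-penalty term keeps the critic approximately 1-Lipschitz so its output approximates a Wasserstein distance, which is commonly credited with the stability and convergence-speed benefits the abstract describes.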