Artificial intelligence based classification and prediction of medical imaging using a novel framework of inverted and self-attention deep neural network architecture

Bibliographic Details
Published in: Scientific Reports, Vol. 15, No. 1, pp. 8724-26
Main Authors: Aftab, Junaid; Khan, Muhammad Attique; Arshad, Sobia; Rehman, Shams ur; AlHammadi, Dina Abdulaziz; Nam, Yunyoung
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 13.03.2025
ISSN: 2045-2322
DOI: 10.1038/s41598-025-93718-7


More Information
Summary: Classifying medical images is essential in computer-aided diagnosis (CAD). Although the recent success of deep learning in classification tasks has proven advantageous over traditional feature extraction techniques, the problem remains challenging because of the inter- and intra-class similarity caused by the diversity of imaging modalities (e.g., dermoscopy, mammography, wireless capsule endoscopy, and CT). In this work, we propose a novel deep-learning framework for classifying several medical imaging modalities. In the training phase, data augmentation is first performed on all selected datasets. Two novel custom deep learning architectures are then introduced: the Inverted Residual Convolutional Neural Network (IRCNN) and the Self-Attention CNN (SACNN). Both models are trained on the augmented datasets with manually selected hyperparameters. During the testing stage, features are extracted from each dataset's testing images and fused using a modified serial fusion with a strong correlation approach. An optimization algorithm, Slap Swarm controlled Standard Error Mean (SScSEM), is then employed to select the best features, which are passed to a shallow wide neural network (SWNN) classifier for the final classification. Grad-CAM, an explainable artificial intelligence (XAI) approach, is used to analyze the custom models. The proposed architecture was tested on five publicly available datasets of different imaging modalities and obtained improved accuracies of 98.6% (INBreast), 95.3% (KVASIR), 94.3% (ISIC2018), 95.0% (Lung Cancer), and 98.8% (Oral Cancer). A detailed comparison based on precision and accuracy shows that the proposed architecture performs better than existing methods. The implemented models are available on GitHub ( https://github.com/ComputerVisionLabPMU/ScientificImagingPaper.git ).
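
The abstract names two building blocks, inverted residual convolutions and self-attention over CNN feature maps. The sketch below is only a minimal illustration of those two generic concepts in PyTorch; the expansion ratio, channel reduction factor, and layer sizes are assumptions, not the authors' settings, and the actual IRCNN/SACNN implementations are in the linked GitHub repository.

```python
# Minimal, illustrative sketch of an inverted residual block and a spatial
# self-attention block for CNN feature maps. Hyperparameters are placeholders.
import torch
import torch.nn as nn


class InvertedResidualBlock(nn.Module):
    """MobileNetV2-style inverted residual: 1x1 expand -> 3x3 depthwise -> 1x1 project."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),  # depthwise
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),  # linear projection back to narrow width
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)  # skip connection over the narrow representation


class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a convolutional feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c')
        k = self.key(x).flatten(2)                     # (b, c', h*w)
        attn = torch.softmax(q @ k, dim=-1)            # (b, h*w, h*w) attention map
        v = self.value(x).flatten(2)                   # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection


if __name__ == "__main__":
    x = torch.randn(2, 32, 56, 56)                     # dummy batch of feature maps
    print(InvertedResidualBlock(32)(x).shape)          # torch.Size([2, 32, 56, 56])
    print(SelfAttention2d(32)(x).shape)                # torch.Size([2, 32, 56, 56])
```

Both blocks preserve the input shape, so either can be stacked inside a larger backbone; how the paper composes, fuses, and selects the resulting features (serial fusion, SScSEM, SWNN) is described in the abstract and implemented in the repository above.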