A Lightweight Multimodal Xception Network for Glioma Grading Using MRI Images
| Published in | International journal of imaging systems and technology Vol. 34; no. 6 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Hoboken, USA: John Wiley & Sons, Inc, 01.11.2024 (Wiley Subscription Services, Inc) |
| Subjects | |
| ISSN | 0899-9457 1098-1098 |
| DOI | 10.1002/ima.70001 |
| Summary: | ABSTRACT Gliomas are the most common type of primary brain tumors, classified into low‐grade gliomas (LGGs) and high‐grade gliomas (HGGs). There is a significant difference in survival rates between patients with different grades of gliomas, making imaging‐based grading a research hotspot. Current deep learning–based glioma grading algorithms face challenges such as network complexity, low accuracy, and difficulty in large‐scale application. This paper proposes a multimodal, lightweight Xception grading network to address these issues. The network introduces convolutional block attention modules and employs dilated convolutions for spatial feature aggregation, reducing parameter count while maintaining the same receptive field. By integrating spatial and channel squeeze‐and‐excitation modules, it achieves more accurate feature learning, alongside improvements to the residual connection modules for critical information retention. Compared to existing methods, the proposed approach improves classification accuracy while maintaining a reduced parameter count. The network was trained and validated on 344 glioma cases (261 HGGs and 83 LGGs) and tested on 38 glioma cases (29 HGGs and 9 LGGs). Experimental results demonstrate that the network achieves an accuracy of 92.67% and an AUC of 0.9413 using a fully connected layer as the classifier. The features extracted using the improved Xception grading network achieved an accuracy of 93.42% when classified with KNN and RF classifiers. This study aims to provide diagnostic suggestions for clinical use through a simple, effective, and noninvasive multimodal medical imaging method for LGG/HGG grading, thereby accelerating treatment decision‐making. |
|---|---|
| Bibliography: | Funding: This work was supported in part by the Heilongjiang Provincial Natural Science Foundation of China under Grant LH2022E087 and in part by the Heilongjiang Province Key Research and Development Program of China under Grant 2023ZX01A08. |
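
The abstract above describes augmenting a lightweight Xception backbone with convolutional block attention, dilated convolutions, and concurrent spatial and channel squeeze‐and‐excitation (scSE) modules. As a rough illustration of those building blocks only (this is not the authors' implementation; the channel counts, reduction ratio, and dilation rate below are assumptions), a minimal PyTorch sketch might look like:

```python
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE).

    A minimal sketch of the attention mechanism named in the abstract;
    the reduction ratio of 16 is an assumption, not the paper's setting.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel squeeze-and-excitation: global pooling + bottleneck MLP.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial squeeze-and-excitation: 1x1 conv producing a spatial mask.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        # Recalibrate along channels and along spatial positions, then fuse.
        return x * self.cse(x) + x * self.sse(x)


class DilatedSeparableConv(nn.Module):
    """Depthwise-separable convolution with dilation: keeps a larger receptive
    field at a low parameter count, as the abstract motivates. Dilation rate 2
    is illustrative."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    # Toy multimodal input: 4 MRI modalities stacked as channels, projected to 64.
    x = torch.randn(1, 64, 56, 56)
    y = SCSEBlock(64)(DilatedSeparableConv(64, 64)(x))
    print(y.shape)  # torch.Size([1, 64, 56, 56])
```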
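
The abstract also reports that features extracted by the improved Xception network were classified with KNN and RF classifiers. A minimal scikit-learn sketch of that final step, assuming per-case feature vectors have already been exported (the array shapes, neighbor count, and tree count are placeholders, and the random arrays stand in for real features and HGG/LGG labels), could be:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one deep-feature vector per case,
# label 1 = HGG, 0 = LGG. In practice these would come from the
# penultimate layer of the trained grading network.
features = np.random.rand(344, 2048)
labels = np.random.randint(0, 2, size=344)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, features, labels, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.4f}")
```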