An Automated Deep Learning Model for the Cerebellum Segmentation from Fetal Brain Images

Bibliographic Details
Published in: BioMed Research International Vol. 2022; no. 1; p. 8342767
Main Authors: Sreelakshmy, R., Titus, Anita, Sasirekha, N., Logashanmugam, E., Begam, R. Benazir, Ramkumar, G., Raju, Raja
Format: Journal Article
Language: English
Published: United States: Hindawi; John Wiley & Sons, Inc, 2022
ISSN: 2314-6133, 2314-6141
DOI: 10.1155/2022/8342767

Summary: Cerebellum measurements taken from routinely obtained ultrasound (US) images are frequently employed to determine gestational age and to identify anatomical abnormalities of the developing central nervous system. Standardized cerebellar assessments from large-scale clinical datasets are required to investigate correlations between the growing cerebellum and postnatal neurodevelopmental outcomes. Such studies could uncover structural abnormalities that serve as indicators for forecasting neurodevelopmental and growth outcomes. To achieve this, higher-throughput, precise, and unbiased measures must replace the existing manual, semiautomatic, and advanced algorithms, which tend to be time-consuming and inaccurate. In this article, we present an innovative deep learning (DL) technique for automatic fetal cerebellum segmentation from 2-dimensional (2D) US brain images: ReU-Net, a semantic segmentation network tailored to the anatomy of the fetal cerebellum. We use U-Net as the foundation model, incorporating residual blocks and a Wiener filter over the last two layers to segregate the cerebellum from the noisy US data. We used 590 images for training and 150 for testing, and employed 5-fold cross-validation. ReU-Net scored 91%, 92%, 25.42, 98%, 92%, and 94% for Dice Score Coefficient (DSC), F1-score, Hausdorff Distance (HD), accuracy, recall, and precision, respectively. The suggested method outperforms other U-Net-based techniques by a statistically significant margin (p < 0.001). The presented approach can enable high-bandwidth imaging techniques in fetal US research studies as well as biometric evaluation of fetal US images on a broader scale.
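The abstract describes ReU-Net as a U-Net backbone augmented with residual blocks and a Wiener filter over the last two layers, evaluated with metrics such as the Dice coefficient. As a minimal illustrative sketch only (not the authors' published code), the following Python/PyTorch snippet shows a generic residual convolutional block of the kind such a network would use, together with a Dice coefficient computation; the names ResidualBlock and dice_coefficient, and all channel sizes, are assumptions introduced here for illustration.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two 3x3 convolutions with a skip connection, as commonly added to U-Net stages.
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # A 1x1 convolution matches channel counts on the skip path when they differ.
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.skip(x))

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # DSC = 2|P ∩ T| / (|P| + |T|) over binary segmentation masks.
    pred, target = pred.float().flatten(), target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    block = ResidualBlock(1, 16)
    x = torch.randn(1, 1, 128, 128)               # dummy single-channel US patch
    print(block(x).shape)                          # torch.Size([1, 16, 128, 128])
    mask = torch.randint(0, 2, (1, 1, 128, 128))
    print(dice_coefficient(mask, mask))            # approximately 1.0 for identical masks

A residual skip of this kind mainly eases gradient flow through the deeper network stages; the Wiener filtering mentioned in the abstract would be a separate denoising step (for example, scipy.signal.wiener) applied to suppress speckle noise in the US data around the final layers.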
Academic Editor: Yuvaraja Teekaraman