Dental bitewing radiographs segmentation using deep learning-based convolutional neural network algorithms

Bibliographic Details
Published in: Oral Radiology, Vol. 40, No. 2, pp. 165-177
Main Authors: Bonny, Talal; Al-Ali, Abdelaziz; Al-Ali, Mohammed; Alsaadi, Rashid; Al Nassan, Wafaa; Obaideen, Khaled; AlMallahi, Maryam
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore, 01.04.2024
ISSN: 0911-6028, 1613-9674
DOI: 10.1007/s11282-023-00717-3

Summary: Objectives Dental radiographs, particularly bitewing radiographs, are widely used in dental diagnosis and treatment. Dental image segmentation is difficult for various reasons, such as intricate structures, low contrast, noise, roughness, and unclear borders, which result in poor image quality. Recent developments in deep learning models have improved performance in analyzing dental images. In this research, our primary objective is to determine the most effective segmentation technique for bitewing radiographs based on different metrics: accuracy, training time, and the number of training parameters as a reflection of architectural cost. Methods We employ several deep learning models, namely Resnet-18, Resnet-50, Xception, Inception Resnet v2, and Mobilenetv2, to segment bitewing radiographs. The process begins by importing the radiographs into MATLAB® (MathWorks Inc.), where the images are first enhanced and then segmented using a region-based graph cut method to produce a binary mask that distinguishes the foreground from the background of the original X-ray. Results The deep learning models were trained on 298 radiographs, validated on 99, and evaluated on a further 99 images from the testing set. We also compare the segmentation models using several criteria, including accuracy, speed, and size, to determine which network is superior. Furthermore, we compare our findings with prior research to provide a comprehensive understanding of the advancements made in dental image segmentation. Segmentation accuracies of 93.67% and 94.42% were achieved by the Resnet-18 and Resnet-50 models, respectively. Conclusion This research advances dental image analysis and facilitates more accurate diagnoses and treatment planning by determining the best segmentation technique. The outcomes of this study can guide researchers and practitioners in selecting appropriate segmentation methods for practical dental image analysis.
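The enhancement and region-based graph cut step described in the summary can be sketched as follows. This is a minimal illustrative example, not the authors' MATLAB workflow: it assumes a hypothetical input file name (bitewing.png) and uses OpenCV's GrabCut (a graph-cut-based segmenter) with CLAHE contrast enhancement as stand-ins for the image improvement and graph cut operations the paper performs in MATLAB.

# Illustrative sketch only: the paper performs this step in MATLAB; here an
# OpenCV GrabCut pipeline stands in for the region-based graph cut.
import cv2
import numpy as np

# Hypothetical input file name (assumption, not from the paper)
image = cv2.imread("bitewing.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Contrast enhancement ("image improvement") via CLAHE -- an assumed choice
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.cvtColor(clahe.apply(gray), cv2.COLOR_GRAY2BGR)

# Graph-cut segmentation (GrabCut) initialised from a rectangle covering most
# of the radiograph; the 10-pixel margin is an arbitrary assumption.
mask = np.zeros(enhanced.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
h, w = enhanced.shape[:2]
rect = (10, 10, w - 20, h - 20)
cv2.grabCut(enhanced, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Binary mask separating foreground from background, as described in the summary
binary_mask = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0
).astype(np.uint8)
cv2.imwrite("bitewing_mask.png", binary_mask)

GrabCut here merely approximates the role of MATLAB's region-based graph cut tooling; any interactive seeding or region refinement used in the original workflow is not reproduced.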