A cross-temporal multimodal fusion system based on deep learning for orthodontic monitoring

Bibliographic Details
Published in: Computers in Biology and Medicine, Vol. 180, p. 109025
Main Authors: Chen, Haiwen, Qu, Zhiyuan, Tian, Yuan, Jiang, Ning, Qin, Yuan, Gao, Jie, Zhang, Ruoyan, Ma, Yanning, Jin, Zuolin, Zhai, Guangtao
Format: Journal Article
Language: English
Published: United States, Elsevier Ltd, 01.09.2024
ISSN: 0010-4825, 1879-0534
DOI: 10.1016/j.compbiomed.2024.109025

Summary: In the treatment of malocclusion, continuous monitoring of the three-dimensional relationship between dental roots and the surrounding alveolar bone is essential for preventing complications from orthodontic procedures. Cone-beam computed tomography (CBCT) provides detailed root and bone data, but its high radiation dose limits its frequent use, necessitating an alternative for ongoing monitoring. We aimed to develop a deep learning-based cross-temporal multimodal image fusion system for acquiring root and jawbone information without additional radiation, enhancing the ability of orthodontists to monitor risk. Using CBCT and intraoral scans (IOSs) as cross-temporal modalities, we integrated deep learning with multimodal fusion technologies to develop a system that includes a CBCT segmentation model for teeth and jawbones, incorporating a dynamic kernel prior model and resolution restoration, together with an IOS segmentation network optimized for dense point clouds. Additionally, a coarse-to-fine registration module was developed. This system integrates IOS and CBCT images across varying spatial and temporal dimensions, enabling comprehensive reconstruction of root and jawbone information throughout the orthodontic treatment process. The experimental results demonstrate that our system not only maintains the original high resolution but also delivers strong segmentation performance on external testing datasets: Dice coefficients of 94.1% for teeth and 94.4% for jawbones on CBCT, and 91.7% on IOSs. Additionally, in real-world registration, the system achieved an average distance error (ADE) of 0.43 mm for teeth and 0.52 mm for jawbones while significantly reducing processing time. We developed the first deep learning-based cross-temporal multimodal fusion system, addressing the critical challenge of continuous risk monitoring in orthodontic treatment without additional radiation exposure. We hope that this study will catalyze transformative advancements in risk management strategies and treatment modalities, fundamentally reshaping the landscape of future orthodontic practice.
Highlights:
• The first deep learning-based orthodontic system for continuous risk monitoring.
• A cross-temporal fusion framework for multimodal medical image registration.
• A novel segmentation-based registration method that accommodates internal structural changes.
• A resolution attention algorithm that improves high-resolution segmentation.
• Efficient and precise registration and fusion validated on a robust dataset.
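For context on the two evaluation metrics quoted in the summary, the following is a minimal sketch of how a Dice coefficient (segmentation overlap) and an average distance error (ADE, registration accuracy) are commonly computed. The function names, the nearest-neighbour formulation of the ADE, and the toy inputs are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (e.g. tooth or jawbone voxels)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def average_distance_error(registered_pts: np.ndarray, reference_pts: np.ndarray) -> float:
    """Mean nearest-neighbour distance (in mm) from registered points to a reference point set."""
    tree = cKDTree(reference_pts)          # KD-tree over the reference point cloud
    dists, _ = tree.query(registered_pts)  # closest reference point for each registered point
    return float(dists.mean())

# Toy usage with random data; real inputs would be CBCT voxel masks and IOS point clouds.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((64, 64, 64)) > 0.5
    gt = rng.random((64, 64, 64)) > 0.5
    print("Dice:", dice_coefficient(pred, gt))
    moved = rng.random((500, 3))
    target = rng.random((800, 3))
    print("ADE (mm):", average_distance_error(moved, target))
```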