Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study

Bibliographic Details
Published in: European Journal of Nuclear Medicine and Molecular Imaging, Vol. 52, No. 8, pp. 2959-2967
Main Authors: Kim, Daesung; Choo, Kyobin; Lee, Sangwon; Kang, Seongjin; Yun, Mijin; Yang, Jaewon
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.07.2025
ISSN: 1619-7070, 1619-7089
DOI: 10.1007/s00259-025-07132-2


More Information
Summary:

Purpose: Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, paired PET/CT and MR images are not always available, and aligning them is challenging when their acquisition times differ considerably. To address these problems, this study proposes a deep learning framework for translating the CT of PET/CT to synthetic MR images (MR_SYN) and performing automated quantitative regional analysis using MR_SYN-derived segmentation.

Methods: In this retrospective study, 139 subjects who underwent brain [18F]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images to MR_SYN; subsequently, a separate model was trained to segment MR_SYN into 95 regions. Regional and composite standardised uptake value ratios (SUVr) were calculated in the [18F]FBB PET images using the acquired ROIs. MR_SYN was evaluated with quantitative measures including the structural similarity index measure (SSIM), while MR_SYN-based segmentation was evaluated with the Dice similarity coefficient (DSC). The Wilcoxon signed-rank test was performed on SUVrs computed from MR_SYN and from ground-truth MR (MR_GT).

Results: Compared to MR_GT, the mean SSIM of MR_SYN was 0.974 ± 0.005. MR_SYN-based segmentation achieved a mean DSC of 0.733 across the 95 regions. No statistically significant difference (P > 0.05) was found between SUVrs from MR_SYN-derived ROIs and those from MR_GT-derived ROIs, except for the precuneus.

Conclusion: We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MR_SYN. The proposed framework can benefit patients who have difficulty undergoing an MRI scan.
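The two quantitative measures at the core of the evaluation above, regional SUVr (mean ROI uptake normalised by a reference region) and the Dice similarity coefficient for segmentation overlap, can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function names are invented here, and the abstract does not specify the authors' reference region or implementation details.

```python
import numpy as np

def regional_suvr(pet: np.ndarray, roi_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """SUVr for one ROI: mean PET uptake inside the target ROI divided by
    mean uptake inside a reference region (the reference-region choice is
    an assumption; the abstract does not state which region was used)."""
    return float(pet[roi_mask].mean() / pet[ref_mask].mean())

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks overlap perfectly by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

In a full pipeline these would be applied per region: each of the 95 MR_SYN-derived labels yields one ROI mask for `regional_suvr` and one mask pair (synthetic vs. ground truth) for `dice_coefficient`, producing the regional SUVr and DSC values the study compares.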