Residual encoder-decoder based architecture for medical image denoising

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 84, No. 19, pp. 21625-21642
Main Authors: Ferdi, Abdesselam; Benierbah, Said; Nakib, Amir
Format: Journal Article
Language: English
Published: New York: Springer US, 01.06.2025 (Springer Nature B.V.)
ISSN: 1380-7501
EISSN: 1573-7721
DOI: 10.1007/s11042-024-20175-1

Summary: High-resolution computed tomography (CT) scans require high doses of X-rays, posing potential health risks to patients, including genetic damage and cancer. Conversely, low doses of X-rays result in noise and artifacts in the reconstructed CT scans. Consequently, the problem of denoising low-dose CT (LDCT) images has become a critical yet challenging issue in CT imaging. However, existing deep learning-based LDCT image denoising methods frequently lose high-frequency features, such as edges and textures, because they rely on the mean squared error loss. To address this issue, we propose a method based on high-frequency feature learning that enhances the denoising performance of existing models. Our method simultaneously learns the primary task of LDCT image denoising and the auxiliary task of LDCT edge detection, thereby improving denoising performance without increasing the number of model parameters or the inference time. Our method significantly improves the denoising performance of the RED-CNN model, achieving competitive results compared to state-of-the-art denoising models on the AAPM and Qin-LUNG-CT datasets.
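
The summary describes a multi-task setup in which the denoising network also learns LDCT edge detection as an auxiliary task, so that high-frequency detail is preserved without extra parameters or inference cost. Below is a minimal sketch of what such a joint training loss could look like in PyTorch; the Sobel-based edge targets, the L1 edge term, and the `edge_weight` factor are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (assumption: PyTorch, Sobel-derived edge maps, and the
# edge_weight value are illustrative, not the paper's exact formulation).
import torch
import torch.nn.functional as F

# Fixed Sobel kernels for horizontal/vertical image gradients.
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Gradient-magnitude edge map for a (N, 1, H, W) image batch."""
    gx = F.conv2d(img, _SOBEL_X.to(img), padding=1)
    gy = F.conv2d(img, _SOBEL_Y.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def joint_denoise_edge_loss(pred: torch.Tensor,
                            ndct: torch.Tensor,
                            edge_weight: float = 0.1) -> torch.Tensor:
    """Primary MSE denoising loss plus an auxiliary edge-consistency term.

    `pred` is the network output for a low-dose CT batch and `ndct` the
    matching normal-dose target; both are shaped (N, 1, H, W).
    """
    denoise_loss = F.mse_loss(pred, ndct)
    edge_loss = F.l1_loss(sobel_edges(pred), sobel_edges(ndct))
    return denoise_loss + edge_weight * edge_loss

if __name__ == "__main__":
    # Toy example: random tensors stand in for predictions and NDCT targets.
    pred = torch.rand(2, 1, 64, 64, requires_grad=True)
    target = torch.rand(2, 1, 64, 64)
    loss = joint_denoise_edge_loss(pred, target)
    loss.backward()
    print(float(loss))
```

Because the auxiliary edge term only changes the training objective, the trained denoiser keeps the same architecture at inference time, which is consistent with the summary's claim of unchanged parameter count and inference cost.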