A multimodal fusion method for Alzheimer’s disease based on DCT convolutional sparse representation

Bibliographic Details
Published in Frontiers in Neuroscience Vol. 16; p. 1100812
Main Authors Zhang, Guo; Nie, Xixi; Liu, Bangtao; Yuan, Hong; Li, Jin; Sun, Weiwei; Huang, Shixin
Format Journal Article
Language English
Published Switzerland: Frontiers Media S.A., 06.01.2023
ISSN 1662-4548, 1662-453X
DOI 10.3389/fnins.2022.1100812

Summary: The medical information contained in magnetic resonance imaging (MRI) and positron emission tomography (PET) has driven the development of intelligent diagnosis of Alzheimer's disease (AD) and of multimodal medical imaging. To address the severe energy loss, low contrast of fused images, and spatial inconsistency of traditional multimodal medical image fusion methods based on sparse representation, a multimodal fusion algorithm for Alzheimer's disease based on discrete cosine transform (DCT) convolutional sparse representation is proposed. The algorithm first performs a multi-scale DCT decomposition of the source medical images and uses the sub-images at different scales as training images. Different sparse coefficients are then obtained by optimally solving the sub-dictionaries at each scale with the alternating direction method of multipliers (ADMM). Next, the high-frequency and low-frequency coefficients are fused using an improved L1-norm rule combined with an improved spatial frequency, the novel sum-modified spatial frequency (NMSF), and the final fused images are obtained by inverse DCT. Extensive experimental results show that the proposed method performs well in contrast enhancement and in the retention of texture and contour information.
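The summary outlines a pipeline of multi-scale DCT decomposition, per-scale sparse coding solved with ADMM, fusion of low- and high-frequency coefficients under an L1/NMSF rule, and reconstruction by inverse DCT. The Python sketch below illustrates only the DCT band-split-and-fuse idea on two co-registered grayscale slices; it replaces the convolutional sparse coding and NMSF steps with a plain activity rule (classic spatial frequency), and all function names and the frequency cutoff are illustrative assumptions, not the authors' implementation.

    # Simplified sketch of DCT-domain fusion (not the paper's CSR/ADMM method).
    import numpy as np
    from scipy.fft import dctn, idctn

    def spatial_frequency(img: np.ndarray) -> float:
        """Classic spatial frequency: RMS of row and column differences."""
        rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
        cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
        return float(np.sqrt(rf ** 2 + cf ** 2))

    def dct_band_split(img: np.ndarray, cutoff: int = 8):
        """Split an image into low- and high-frequency parts via a 2-D DCT.

        The lowest cutoff x cutoff coefficients form the low-frequency band;
        the remaining coefficients form the high-frequency band. The two
        reconstructions sum back to the original image (the DCT is linear).
        """
        coeffs = dctn(img, norm="ortho")
        low_mask = np.zeros_like(coeffs, dtype=bool)
        low_mask[:cutoff, :cutoff] = True
        low = idctn(np.where(low_mask, coeffs, 0.0), norm="ortho")
        high = idctn(np.where(low_mask, 0.0, coeffs), norm="ortho")
        return low, high

    def fuse(mri: np.ndarray, pet: np.ndarray, cutoff: int = 8) -> np.ndarray:
        """Average the low-frequency bands; keep the high-frequency band with
        the larger spatial frequency (a crude stand-in for the NMSF rule)."""
        low_a, high_a = dct_band_split(mri.astype(float), cutoff)
        low_b, high_b = dct_band_split(pet.astype(float), cutoff)
        fused_low = 0.5 * (low_a + low_b)
        fused_high = high_a if spatial_frequency(high_a) >= spatial_frequency(high_b) else high_b
        return fused_low + fused_high

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        mri = rng.random((64, 64))   # placeholder for a registered MRI slice
        pet = rng.random((64, 64))   # placeholder for a registered PET slice
        print(fuse(mri, pet).shape)  # (64, 64)

In the paper's method, the band-wise fusion above would instead operate on sparse coefficient maps learned per scale with ADMM, which is what provides the reported gains in energy preservation and contrast.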
This article was submitted to Neural Technology, a section of the journal Frontiers in Neuroscience
Edited by: Xiaomin Yang, Sichuan University, China
These authors share first authorship
Reviewed by: Kaining Han, University of Electronic Science and Technology of China, China; Teng Li, Anhui University, China