Robust Deep Convolutional Dictionary Model With Alignment Assistance for Multi-Contrast MRI Super-Resolution

Bibliographic Details
Published in: IEEE Transactions on Medical Imaging, Vol. 44, No. 8, pp. 3383-3396
Main Authors: Lei, Pengcheng; Zhang, Miaomiao; Fang, Faming; Zhang, Guixu
Format: Journal Article
Language: English
Published: United States: IEEE, 01.08.2025
ISSN: 0278-0062, 1558-254X
DOI: 10.1109/TMI.2025.3563523


Summary: Multi-contrast magnetic resonance imaging (MCMRI) super-resolution (SR) methods aim to leverage the complementary information present in multi-contrast images. However, existing methods encounter several limitations. First, most current networks fail to appropriately model the correlations between multi-contrast images and lack interpretability. Second, they often overlook the negative impact of spatial misalignment between modalities in clinical practice. Third, existing methods do not effectively constrain the complementary information learned between multi-contrast images, resulting in information redundancy that limits model performance. In this paper, we propose a robust alignment-assisted multi-contrast convolutional dictionary (A2-CDic) model to address these challenges. Specifically, we develop an observation model based on convolutional sparse coding to explicitly represent multi-contrast images as common (e.g., consistent textures) and unique (e.g., inconsistent structures and contrasts) components. Since real-world multi-contrast images exhibit spatial misalignments, we incorporate a spatial alignment module to compensate for the misaligned structures. This enables the proposed model to fully exploit the valuable information in the reference image while mitigating interference from inconsistent information. We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a multi-scale convolutional dictionary network. Furthermore, we use mutual information losses to constrain the extracted common and unique components. This constraint reduces the redundancy between the decomposed components, allowing each sub-module to learn more representative features. We evaluate our model on four publicly available datasets comprising internal, external, spatially aligned, and misaligned MCMRI images.
The experimental results demonstrate that our model surpasses existing state-of-the-art MCMRI SR methods in both generalization ability and overall performance. Code is available at https://github.com/lpcccc-cv/A2-CDic.
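The abstract's optimization step — solving a sparse coding problem with the proximal gradient algorithm before unrolling it into a network — can be illustrated with a minimal ISTA-style sketch. This is not the authors' code: it uses plain (non-convolutional) dictionaries and NumPy for clarity, and the function names `soft_threshold` and `ista_sparse_code` are illustrative, not from the A2-CDic repository.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrink each coefficient toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    # Solve min_z 0.5*||y - D z||^2 + lam*||z||_1 by proximal gradient (ISTA).
    # Unrolling methods turn each of these iterations into a network layer.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)           # gradient of the data-fidelity term
        z = soft_threshold(z - grad / L, lam / L)  # gradient step + prox
    return z
```

In the paper's setting the matrix product would be a convolution with learned dictionary filters, and the sparse codes would be split into common and unique components shared across contrasts; the gradient-step/soft-threshold structure of each iteration is what gets unrolled into network layers.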