MMFGAN: A novel multimodal brain medical image fusion based on the improvement of generative adversarial network

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 81, No. 4, pp. 5889-5927
Main Authors: Guo, Kai; Hu, Xiaohan; Li, Xiongfei
Format: Journal Article
Language: English
Published: New York: Springer US, 01.02.2022 (Springer Nature B.V.)
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-021-11822-y


More Information
Summary: In recent years, multimodal medical imaging-assisted diagnosis and treatment technology has developed rapidly. In brain disease diagnosis, CT-SPECT, MRI-PET and MRI-SPECT fusion images are favored by clinicians because they contain both soft-tissue structure information and organ metabolism information. Most previous medical image fusion algorithms are migrations of other types of image fusion methods, and such operations often lose the features of the medical image itself. This paper proposes a multimodal medical image fusion model based on a residual attention mechanism within a generative adversarial network. In the generator, we construct a residual attention mechanism block and a concat detail texture block. The source images are concatenated into a matrix, which is fed into both blocks simultaneously to extract information such as size, shape, spatial location and texture details. The extracted features are passed to a merge block to reconstruct the image. The reconstructed image and the source images are then fed into two discriminators for correction to obtain the final fused image. The model was evaluated on images from three databases and achieved good fusion results. Qualitative and quantitative evaluations show that the model outperforms the comparison algorithms in fusion quality and detail-information retention.
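The data flow the abstract describes (two source images concatenated into one matrix, processed by two parallel branches, then merged into a single fused image) can be sketched at a purely illustrative level with NumPy. The specific branch operations below — a sigmoid attention map applied residually, and a Laplacian-style high-pass for texture detail — are placeholder assumptions for this sketch, not the paper's actual network layers, and no adversarial training is shown:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention_branch(x):
    # Residual attention sketch: derive an attention map from the
    # channel mean and apply it residually (f + f * att).
    att = sigmoid(x.mean(axis=0, keepdims=True))      # (1, H, W)
    return x + x * att

def detail_texture_branch(x):
    # Detail-texture sketch: a crude Laplacian high-pass added back
    # to the input to emphasize fine texture.
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    lap = (4 * x
           - pad[:, :-2, 1:-1] - pad[:, 2:, 1:-1]     # up / down neighbors
           - pad[:, 1:-1, :-2] - pad[:, 1:-1, 2:])    # left / right neighbors
    return x + lap

def fuse(src_a, src_b):
    # Concatenate the two source images into one multi-channel matrix,
    # feed it to both branches in parallel, then merge by averaging.
    x = np.stack([src_a, src_b], axis=0)              # (2, H, W)
    a = residual_attention_branch(x)
    d = detail_texture_branch(x)
    merged = np.concatenate([a, d], axis=0)           # (4, H, W)
    return merged.mean(axis=0)                        # (H, W) fused image

# Stand-in source images (e.g. MRI and PET slices of the same brain region)
mri = np.random.rand(64, 64)
pet = np.random.rand(64, 64)
fused = fuse(mri, pet)
print(fused.shape)  # (64, 64)
```

In the paper itself, the merge step is a learned reconstruction block and the output is refined by two discriminators (one per source modality); the averaging above merely stands in for that learned merge.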
Bibliography: ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2