Hybrid pixel-feature fusion system for multimodal medical images
| Published in | Journal of ambient intelligence and humanized computing Vol. 12; no. 6; pp. 6001 - 6018 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.06.2021; Springer Nature B.V |
| ISSN | 1868-5137 1868-5145 |
| DOI | 10.1007/s12652-020-02154-0 |
| Summary: | Multimodal medical image fusion aims to reduce insignificant information and improve clinical diagnosis accuracy. The purpose of image fusion is to retain the salient features and detail information of multiple source images to yield a more informative fused image. A hybrid algorithm based on both pixel- and feature-level fusion of multimodal medical images is presented in this paper. For the pixel-level fusion, the source images are decomposed into low- and high-frequency components using the Discrete Wavelet Transform (DWT), and the low-frequency coefficients are fused using the maximum fusion rule. Thereafter, the curvelet transform is applied to the high-frequency coefficients, and the obtained high-frequency subbands (fine scale) are fused using a Principal Component Analysis (PCA) fusion rule. On the other hand, the feature-level fusion is accomplished by extracting various features from the coarse and detail subbands and using them for the fusion process. These features include the mean, variance, entropy, visibility, and standard deviation. Thereafter, the inverse curvelet transform is applied to the fused high-frequency coefficients, and finally the resultant fused image is obtained by applying the inverse DWT to the fused low- and high-frequency components. The proposed method is implemented and evaluated on different pairs of medical image modalities. The results demonstrate that the proposed method improves the quality of the final fused image in terms of Mutual Information (MI), Correlation Coefficient (CC), entropy, Structural Similarity index (SSIM), Edge Strength Similarity for Image quality (ESSIM), Peak Signal-to-Noise Ratio (PSNR), and the edge-based similarity measure (Q^AB/F). |
|---|---|
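
As a rough illustration of the pixel-level stage described in the summary, the sketch below uses PyWavelets for a single-level 2-D DWT, a maximum rule for the low-frequency (approximation) band, and a PCA rule for the high-frequency (detail) bands; the curvelet stage of the paper is omitted, so this is a simplified approximation rather than the authors' exact pipeline. The `subband_features` helper illustrates one common way to compute the mean, variance, standard deviation, entropy, and visibility features mentioned for the feature-level fusion; the precise formulas used in the paper are not given in the abstract.

```python
# Simplified sketch (not the authors' exact method): pixel-level fusion with
# a single-level DWT, a maximum rule for the approximation band, and a PCA
# rule for the detail bands; the curvelet stage of the paper is omitted.
import numpy as np
import pywt

def pca_fuse(a, b):
    """Weight two subbands by the dominant eigenvector of their joint
    covariance matrix (a common PCA fusion rule)."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = w / w.sum()
    return w[0] * a + w[1] * b

def fuse_pixel_level(img1, img2, wavelet="db1"):
    """Fuse two registered, same-size grayscale source images."""
    # Decompose each source into a low-frequency (LL) band and
    # high-frequency (LH, HL, HH) detail bands.
    LL1, (LH1, HL1, HH1) = pywt.dwt2(img1.astype(float), wavelet)
    LL2, (LH2, HL2, HH2) = pywt.dwt2(img2.astype(float), wavelet)

    LL = np.maximum(LL1, LL2)  # maximum fusion rule for low frequencies
    details = tuple(pca_fuse(x, y)  # PCA fusion rule for high frequencies
                    for x, y in [(LH1, LH2), (HL1, HL2), (HH1, HH2)])

    # Reconstruct the fused image with the inverse DWT.
    return pywt.idwt2((LL, details), wavelet)

def subband_features(band, bins=256, eps=1e-12):
    """Features named in the abstract, using common (assumed) definitions."""
    mean = band.mean()
    hist, _ = np.histogram(band, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return {
        "mean": mean,
        "variance": band.var(),
        "std": band.std(),
        "entropy": float(-np.sum(p * np.log2(p))),
        # Visibility as mean absolute deviation relative to the mean
        # (one common definition; eps guards near-zero means).
        "visibility": float(np.mean(np.abs(band - mean) / (abs(mean) + eps))),
    }
```

On a registered pair of source images (for example a CT and an MRI slice), a call such as `fused = fuse_pixel_level(ct, mri)` would yield the simplified fused image under these assumptions.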
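
Similarly, a minimal numpy-only sketch of a few of the quality measures listed above (mutual information, correlation coefficient, and PSNR) is given below, using common definitions; entropy can be computed as in the `subband_features` helper above. The exact formulations used in the paper may differ, and SSIM, ESSIM, and Q^AB/F are usually computed with dedicated implementations.

```python
# Hedged sketch of a few quality measures named in the abstract; the formulas
# follow common definitions and may differ from those used in the paper.
import numpy as np

def mutual_information(a, b, bins=256):
    """MI between two images, estimated from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def correlation_coefficient(a, b):
    """Pearson correlation coefficient between two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def psnr(reference, fused, max_val=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))
```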