A Pansharpening Algorithm Based on Multilevel Feature Fusion Visual State Space Network


Bibliographic Details
Published in: 2024 5th International Conference on Computer, Big Data and Artificial Intelligence (ICCBD+AI), pp. 229 - 223
Main Authors: Xu, Guoxia; He, Yue; Deng, Lizhen; Zhu, Hu
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2024
DOI: 10.1109/ICCBD-AI65562.2024.00045


More Information
Summary: Panchromatic sharpening technology is extensively applied to the integration of multi-band remote sensing images. Existing end-to-end deep networks built from modules such as convolutional layers, normalization, and attention mechanisms cannot simultaneously capture both the local details of remote sensing images and their broader contextual information. To address this, the paper proposes a panchromatic sharpening algorithm based on a multilevel feature fusion visual state-space network, which combines deep and shallow strategies in the feature extraction stage. First, an adaptive convolutional layer dynamically adjusts its convolution parameters and receptive field to the input image, improving the model's representational capacity while keeping the network's original depth. Second, long-range features are modeled by a visual state-space block connected through multi-level residuals, so that the shallow feature maps gain a global receptive field with dynamic weights and yield more accurate features. Meanwhile, a feature refinement reconstruction module focuses on the detailed textures within the global information of the aggregated features, significantly improving the accuracy and reliability of the reconstruction from the extracted features. Extensive experiments demonstrate that the proposed method is strongly competitive.
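The record does not include the paper's code, so the "adaptive convolutional layer" can only be illustrated in spirit. The following is a minimal NumPy sketch of one common form of input-adaptive convolution (an input-dependent mixture over a small bank of base kernels, in the style of dynamic/conditional convolution); the function names, the routing statistic, and the kernel bank are all hypothetical and need not match the paper's actual layer.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_conv2d(img, kernel_bank, route_fn):
    """Input-adaptive convolution: the effective kernel is an
    input-dependent mixture of base kernels, so the filtering
    behavior changes with the image content."""
    alpha = softmax(route_fn(img))                # (K,) mixing weights
    kernel = np.tensordot(alpha, kernel_bank, 1)  # (kh, kw) effective kernel
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical routing function: mixing weights depend on image statistics.
def route(img):
    return np.array([img.mean(), 1.0 - img.mean(), img.std()])

identity = np.zeros((3, 3)); identity[1, 1] = 1.0
bank = np.stack([
    np.ones((3, 3)) / 9.0,                                   # smoothing
    np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float),  # sharpening
    identity,                                                # pass-through
])

img = np.random.rand(8, 8)
out = adaptive_conv2d(img, bank, route)
print(out.shape)  # (8, 8)
```

In a trained network the routing function would itself be learned (e.g. a small pooling-plus-linear head), which is what lets the layer adapt its receptive behavior per input without deepening the network.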