Spectral Super-Resolution based on Dictionary Optimization Learning via Spectral Library

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, Vol. 61, p. 1
Main Authors: Yan, Hao-Fang; Zhao, Yong-Qiang; Chan, Jonathan Cheung-Wai; Kong, Seong G.
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2023
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2022.3229439

More Information
Summary: Extensive work has been reported on fusing hyperspectral images (HSIs) with multispectral images (MSIs) to raise the spatial resolution of HSIs. However, the limited availability of HSIs has been an obstacle to such approaches. Spectral super-resolution (SSR) of MSIs is a challenging and less investigated topic that can also produce high-resolution (HR) synthetic HSIs. To deal with this highly ill-posed problem, we perform super-resolution enhancement of MSIs in the spectral domain by incorporating a spectral library as a prior. First, an aligned spectral library is constructed, which maps an open-source spectral library to a specific spectral library created for the reconstructed HR HSI. An intermediate latent HSI is obtained by fusing the spatial information from the MSI with the hyperspectral information from the specific spectral library. Then, we use low-rank attribute embedding to transfer the latent HSI into a robust subspace. Finally, a low-rank HSI dictionary representing the hyperspectral information is learned from the latent HSI, and an adaptive sparse coefficient of the MSI is obtained under a non-negativity constraint. By fusing these two terms, we obtain the final HR HSI. The proposed SSR model does not require any pre-training stage. We confirm the validity and superiority of the proposed SSR algorithm by comparing it with several state-of-the-art benchmark approaches on different datasets.
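The reconstruction step described in the summary — coding each MSI pixel against a learned HSI dictionary under a non-negativity constraint, then recovering the full spectrum — can be sketched as follows. This is a minimal illustration with NumPy/SciPy, not the authors' implementation: the dictionary `D`, spectral response matrix `R`, and all sizes are hypothetical stand-ins, and the low-rank attribute embedding and spectral-library alignment stages are omitted.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares solver

# Hypothetical sizes: 31 HSI bands, 3 MSI bands, 40 dictionary atoms.
rng = np.random.default_rng(0)
n_hsi, n_msi, n_atoms = 31, 3, 40

# Stand-in for the low-rank HSI dictionary learned from the latent HSI.
D = np.abs(rng.normal(size=(n_hsi, n_atoms)))
# Stand-in for the MSI sensor's spectral response, rows normalized.
R = np.abs(rng.normal(size=(n_msi, n_hsi)))
R /= R.sum(axis=1, keepdims=True)

def ssr_pixel(m, D, R):
    """Reconstruct an HSI spectrum from an MSI pixel m by
    non-negative coding: min ||m - (R @ D) a||_2 s.t. a >= 0."""
    a, _ = nnls(R @ D, m)   # adaptive non-negative coefficient
    return D @ a            # synthesized HR HSI spectrum

# Simulate an MSI pixel from a known non-negative mixture, then reconstruct.
true_spectrum = D @ np.abs(rng.normal(size=n_atoms))
m = R @ true_spectrum
est = ssr_pixel(m, D, R)

assert est.shape == (n_hsi,)
# The estimate is consistent with the MSI observation it was coded from.
assert np.allclose(R @ est, m, atol=1e-6)
```

With only 3 MSI measurements per pixel the system is heavily underdetermined, which is why the dictionary prior (and, in the paper, the low-rank subspace) is needed to make the inversion well behaved.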