Multi-Frame Compressed Video Quality Enhancement by Spatio-Temporal Information Balance


Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 30, pp. 105-109
Main Authors: Wang, Zeyang; Ye, Mao; Li, Shuai; Li, Xue
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2023.3244711


Summary: In recent years, multi-frame quality enhancement algorithms for compressed videos have greatly outperformed single-frame algorithms. However, existing methods focus mainly on mining the temporal information of multiple frames. The large number of reference frames reduces the exploration of spatial information, even though existing single-frame algorithms for enhancement, denoising, and super-resolution demonstrate the significance of spatial information. To address this problem, we propose a plug-and-play module called Spatio-Temporal Information Balance (STIB) that adaptively balances spatial and temporal information. In our method, a feature extractor exploits richer spatial information, and a refinement module refines the aligned temporal information, which is more conducive to the fusion of spatio-temporal information. Finally, a deformable-convolution-based re-alignment module performs alignment and fusion in feature space to balance the spatio-temporal information. Experiments show that our module significantly improves the performance of existing multi-frame enhancement algorithms.