Vector approximate message passing with sparse Bayesian learning for Gaussian mixture prior


Bibliographic Details
Published in: China Communications, Vol. 20, no. 5, pp. 57-69
Main Authors: Ruan, Chengyao; Zhang, Zaichen; Jiang, Hao; Dang, Jian; Wu, Liang; Zhang, Hongming
Format: Journal Article
Language: English
Published: China Institute of Communications, 01.05.2023
Author affiliations:
National Mobile Communications Research Laboratory, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China
Purple Mountain Laboratories, No.9 Mozhou East Road, Nanjing 211111, China
School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
ISSN: 1673-5447
DOI: 10.23919/JCC.2023.00.005

Summary: Compressed sensing (CS) seeks algorithms that recover a sparse vector from noisy linear observations. Various Bayesian algorithms have been proposed, including sparse Bayesian learning (SBL) and approximate message passing (AMP) based methods. SBL achieves accurate and robust performance, but its computational complexity is high due to matrix inversion. AMP is computationally cheap, but its performance is guaranteed only under severe restrictions on the measurement matrix, which limits its applicability to CS problems. To overcome these drawbacks, this paper presents a low-complexity algorithm for the single linear model that incorporates vector AMP (VAMP) into the SBL structure via expectation maximization (EM). Specifically, variance auto-tuning is applied within VAMP to implement the E step of SBL, which reduces the number of iterations required for convergence compared with the VAMP-EM algorithm when a Gaussian mixture (GM) prior is used. Simulation results show that the proposed algorithm performs better, with high robustness, across various cases of difficult measurement matrices.
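For context on the structure the summary refers to, the following is a minimal NumPy sketch of the *classical* EM-SBL iteration that the paper builds on, not the paper's own algorithm: the E step here computes the Gaussian posterior via an explicit matrix inversion, which is exactly the cost the paper's VAMP-based E step is designed to avoid. All problem dimensions, the noise level, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CS problem y = A x + noise with a k-sparse x (assumed dimensions).
n, m, k = 100, 256, 10                      # measurements, signal length, nonzeros
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
sigma2 = 1e-4                               # noise variance (assumed known here)
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(n)

# Classical EM-SBL: gamma_i are per-coefficient prior variances.
gamma = np.ones(m)
for _ in range(50):
    # E step: Gaussian posterior p(x | y, gamma) -- needs an m x m inversion,
    # the bottleneck that a VAMP-based E step replaces.
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    # M step: update each prior variance from the posterior moments
    # (floored to keep 1/gamma finite as coefficients are pruned).
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)

x_hat = mu
nmse = np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)
print(f"NMSE: {nmse:.2e}")
```

As the M step drives most `gamma_i` toward zero, the corresponding coefficients are effectively pruned, which is what gives SBL its sparsity-promoting behavior; the paper's contribution is to perform the E step's posterior-moment computation with VAMP and variance auto-tuning instead of the O(m^3) inversion above.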