UAMPnet: Unrolled approximate message passing network for nonconvex regularization

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 213, p. 119220
Main Authors: Zhang, Hui; Li, Shoujiang; Liang, Yong; Zhang, Hai; Du, Mengmeng
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.03.2023
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2022.119220

Summary: Deep neural networks and model-based methods are both popular because of their broad success in many inference problems. In this paper, we use deep learning to develop efficient algorithms for two popular nonconvex regularization methods, the smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP). The approximate message passing (AMP) algorithm is effective for optimizing nonconvex regularization models. First, we unroll the AMP-based algorithm into a feed-forward neural network by leveraging novel neuron activation functions, dubbed Unrolled-AMP. Then, for the case where the measurement matrix deviates from the i.i.d. Gaussian distribution, we propose two improved iterative algorithms based on the Vector AMP (VAMP) algorithm to solve the nonconvex regularization methods, and we unroll them into a feed-forward neural network, dubbed Unrolled-VAMP. Both network architectures learn all of their parameters from the training data via back-propagation. Finally, the convergence of the Unrolled-AMP algorithm is analyzed, and the efficiency of the proposed networks is demonstrated through experiments on sparse signal reconstruction and 5G wireless communication.
Highlights:
• We develop efficient algorithms for solving nonconvex regularization methods.
• Two types of iterative algorithms are unrolled as deep learning architectures.
• We give a convergence analysis of the proposed networks.
• The proposed networks require few layers to achieve a specific level of accuracy.
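
The summary describes unrolling AMP iterations, whose per-layer nonlinearities are the SCAD and MCP thresholding (proximal) operators, into the layers of a feed-forward network. As a rough illustration only (not the authors' code), the NumPy sketch below implements those two proximal operators and runs a plain AMP loop that uses one of them as the layer "activation"; the problem sizes, threshold lam, and concavity gamma are assumed demonstration values, and the learned per-layer parameters of Unrolled-AMP/Unrolled-VAMP are not reproduced.

import numpy as np


def mcp_thresh(u, lam, gamma=3.0):
    """Proximal operator of the MCP penalty (firm thresholding), gamma > 1."""
    a = np.abs(u)
    out = np.zeros_like(u)
    mid = (a > lam) & (a <= gamma * lam)            # shrunk region
    out[mid] = np.sign(u[mid]) * (a[mid] - lam) / (1.0 - 1.0 / gamma)
    out[a > gamma * lam] = u[a > gamma * lam]       # identity region
    return out


def scad_thresh(u, lam, gamma=3.7):
    """Proximal operator of the SCAD penalty, gamma > 2."""
    a = np.abs(u)
    out = np.sign(u) * np.maximum(a - lam, 0.0)     # soft threshold for |u| <= 2*lam
    mid = (a > 2.0 * lam) & (a <= gamma * lam)
    out[mid] = ((gamma - 1.0) * u[mid] - np.sign(u[mid]) * gamma * lam) / (gamma - 2.0)
    out[a > gamma * lam] = u[a > gamma * lam]
    return out


def amp_iteration(A, y, x, z, thresh, lam):
    """One AMP step: thresholding 'activation' plus the Onsager correction."""
    m, n = A.shape
    r = x + A.T @ z                                 # pseudo-data (pre-activation)
    x_new = thresh(r, lam)
    # Onsager term: average derivative of the threshold, estimated numerically.
    eps = 1e-6
    deriv = (thresh(r + eps, lam) - thresh(r - eps, lam)) / (2.0 * eps)
    z_new = y - A @ x_new + (n / m) * z * deriv.mean()
    return x_new, z_new


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, k = 128, 256, 10                          # assumed problem sizes
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # i.i.d. Gaussian measurements
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    x, z = np.zeros(n), y.copy()
    for _ in range(15):                             # 15 "layers" of an unrolled net
        x, z = amp_iteration(A, y, x, z, mcp_thresh, lam=0.1)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

In the unrolled architecture described in the abstract, each loop iteration above would become a network layer, with quantities such as the threshold (and the penalty parameters) treated as trainable weights learned by back-propagation rather than fixed constants.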