L1/2 regularization learning for smoothing interval neural networks: Algorithms and convergence analysis

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 272, pp. 122–129
Main Authors: Yang, Dakun; Liu, Yan
Format: Journal Article
Language: English
Published: Elsevier B.V., 10.01.2018
ISSN: 0925-2312; 1872-8286
DOI: 10.1016/j.neucom.2017.06.061

Summary: Interval neural networks can readily handle uncertain information, since they inherently accommodate various kinds of uncertainty represented by intervals. Lq (0 < q < 1) regularization was proposed after L1 regularization to solve sparsity problems more effectively; among these penalties, L1/2 is of particular importance and can be taken as a representative. However, weight oscillation may occur during the learning process because the derivative of the L1/2 regularizer is discontinuous. In this paper, a novel batch gradient algorithm with smoothing L1/2 regularization is proposed to prevent weight oscillation in a smoothing interval neural network (SINN), a modified interval neural network. Here, by smoothing we mean that, in a neighborhood of the origin, the absolute values of the weights are replaced by a smooth function so that the derivative is continuous. Compared with the conventional gradient learning algorithm with L1/2 regularization, this approach yields sparser weights and a simpler network structure, and improves learning efficiency. We then present a sufficient condition for the convergence of SINN. Finally, simulation results illustrate the main convergence results.
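
The abstract does not state the exact smoothing function the authors use. The sketch below is a minimal illustration, assuming a piecewise-polynomial C^1 smoothing of |w| of the kind common in the smoothing-regularization literature, together with the resulting smoothed L1/2 penalty, its gradient, and one batch gradient step. All names (smooth_abs, l12_penalty, batch_step, the threshold a) are hypothetical, not the paper's notation.

    import numpy as np

    # Hypothetical C^1 smoothing of |w| (an assumption, not the paper's exact
    # function): a quartic polynomial on [-a, a], |w| outside. At |w| = a the
    # values and derivatives match, so the derivative is continuous.
    def smooth_abs(w, a=0.1):
        inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
        return np.where(np.abs(w) >= a, np.abs(w), inner)

    def smooth_abs_grad(w, a=0.1):
        inner = -w**3 / (2 * a**3) + 3 * w / (2 * a)
        return np.where(np.abs(w) >= a, np.sign(w), inner)

    # Smoothed L1/2 penalty and its gradient. smooth_abs(w) >= 3a/8 > 0, so
    # the square root is differentiable everywhere; the gradient stays bounded
    # near w = 0 instead of blowing up as with the raw |w|^(1/2) penalty.
    def l12_penalty(w, a=0.1):
        return np.sum(np.sqrt(smooth_abs(w, a)))

    def l12_penalty_grad(w, a=0.1):
        return smooth_abs_grad(w, a) / (2.0 * np.sqrt(smooth_abs(w, a)))

    # One batch gradient step: error gradient plus regularization gradient.
    def batch_step(w, error_grad, eta=0.05, lam=1e-3, a=0.1):
        return w - eta * (error_grad + lam * l12_penalty_grad(w, a))

This also suggests why smoothing prevents the oscillation the abstract mentions: the gradient of the raw L1/2 penalty diverges as a weight approaches zero, so small weights can repeatedly overshoot and flip sign, whereas the smoothed gradient is bounded and continuous through the origin.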