Convergence analysis for sparse Pi-sigma neural network model with entropy error function

Bibliographic Details
Published in: International Journal of Machine Learning and Cybernetics, Vol. 14, No. 12, pp. 4405-4416
Main Authors: Fan, Qinwei; Zheng, Fengjiao; Huang, Xiaodi; Xu, Dongpo
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.12.2023
ISSN: 1868-8071, 1868-808X
DOI: 10.1007/s13042-023-01901-x

Summary: As a high-order neural network, the Pi-sigma neural network has demonstrated its capacity for fast learning and strong nonlinear processing. In this paper, a new algorithm is proposed for Pi-sigma neural networks with an entropy error function based on L0 regularization. A key feature of the proposed algorithm is the use of an entropy error function instead of the more common square error function found in most of the existing literature. The algorithm also employs L0 regularization as a means of ensuring the efficiency of the network. Based on the gradient method, the monotonicity and the strong and weak convergence of the network are rigorously established through theoretical analysis and experimental verification. Experiments applying the proposed algorithm to both classification and regression problems demonstrate its improved performance.
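The abstract names three ingredients: a Pi-sigma forward pass (summing units whose outputs are multiplied), an entropy (cross-entropy) error function in place of the square error, and an L0 regularization term. The paper's exact formulation is not reproduced here; the sketch below is only an illustration of how those pieces typically fit together, where the network shape, the smoothed L0 surrogate, and all constants (`beta`, the penalty weight `0.01`) are assumptions, not the authors' definitions:

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    # Summing layer: each unit takes a linear combination of the inputs.
    s = W @ x + b
    # Pi layer: the product of the summing units, passed through a sigmoid.
    return 1.0 / (1.0 + np.exp(-np.prod(s)))

def entropy_error(y, d, eps=1e-12):
    # Cross-entropy error for a single output in (0, 1) with target d.
    y = np.clip(y, eps, 1.0 - eps)
    return -(d * np.log(y) + (1.0 - d) * np.log(1.0 - y))

def l0_surrogate(W, beta=5.0):
    # The true L0 "norm" (count of nonzero weights) is not differentiable,
    # so a smooth surrogate is common: 1 - exp(-beta * w^2) tends to the
    # nonzero indicator as beta grows.
    return np.sum(1.0 - np.exp(-beta * W**2))

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(3, 4))   # 3 summing units, 4 inputs
b = np.zeros(3)
x = rng.normal(size=4)

y = pi_sigma_forward(x, W, b)
loss = entropy_error(y, 1.0) + 0.01 * l0_surrogate(W)
```

In a gradient-based training loop of the kind the abstract describes, the gradient of this regularized loss with respect to `W` would drive both the data fit and the sparsity of the weights.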