A Hybrid Improved Neural Networks Algorithm Based on L2 and Dropout Regularization


Bibliographic Details
Published in: Mathematical Problems in Engineering, Vol. 2022, pp. 1-19
Main Authors: Xie, Xiaoyun; Xie, Ming; Moshayedi, Ata Jahangir; Noori Skandari, Mohammad Hadi
Format: Journal Article
Language: English
Published: New York: Hindawi; John Wiley & Sons, Inc., 03.11.2022
ISSN: 1024-123X, 1026-7077, 1563-5147
DOI: 10.1155/2022/8220453


Summary: Small samples are prone to overfitting during neural network training. This paper proposes an optimization approach based on L2 and dropout regularization, called a hybrid improved neural network algorithm, to overcome this issue. The proposed model was evaluated on the Modified National Institute of Standards and Technology (MNIST; grayscale, 28 × 28 × 1) and Canadian Institute for Advanced Research 10 (CIFAR10; RGB, 32 × 32 × 3) training data sets, applied to the LeNet-5 and Autoencoder neural network architectures. The evaluation is conducted with cross-validation, and the model's prediction result serves as the final measure of model quality. The results show that the proposed hybrid algorithm performs more effectively: it avoids overfitting, improves the prediction accuracy of the network model in classification tasks, and reduces the reconstruction error in the unsupervised setting. In addition, without increasing time complexity, the proposed algorithm reduces the effect of noisy data and bias and improves the training time of neural network models. Quantitative and qualitative experimental results show that on the MNIST test set the proposed algorithm improves accuracy by 2.3% and 0.9% over L2 regularization and dropout regularization, respectively, and on the CIFAR10 data set by 0.92% over L2 regularization and 1.31% over dropout regularization. On the MNIST data set the proposed algorithm reduces the reconstruction error by 0.00174 and 0.00398 compared to L2 regularization and dropout regularization, respectively, and on the CIFAR10 data set by 0.00078 compared with L2 regularization and 0.00174 compared with dropout regularization.
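The hybrid approach described in the summary pairs an L2 weight penalty on the loss with dropout on the activations in the same training objective. The following NumPy sketch illustrates one training-mode forward pass under both regularizers; the layer sizes and the `lam` and `keep_prob` values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single dense layer (4 inputs, 3 units); values are illustrative.
W = rng.normal(0.0, 0.1, size=(4, 3))
lam = 1e-3        # L2 penalty strength (assumed, not from the paper)
keep_prob = 0.8   # dropout keep probability (assumed, not from the paper)

def forward_train(x, W, keep_prob, rng):
    """Training-mode forward pass: ReLU followed by inverted dropout."""
    h = np.maximum(0.0, x @ W)  # ReLU activation
    # Inverted dropout: zero out units with probability (1 - keep_prob),
    # and rescale survivors by 1/keep_prob so expected activations match test time.
    mask = (rng.random(h.shape) < keep_prob) / keep_prob
    return h * mask

def l2_penalty(W, lam):
    """L2 regularization term added to the task loss during training."""
    return 0.5 * lam * np.sum(W ** 2)

x = rng.normal(size=(2, 4))          # a small batch of 2 examples
h = forward_train(x, W, keep_prob, rng)
loss_reg = l2_penalty(W, lam)        # combined with the data loss before backprop
```

At test time the dropout mask is simply omitted; the inverted scaling by `keep_prob` during training keeps the expected activations consistent between training and inference, while the L2 term shrinks the weights throughout.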