VFF-Net: Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights

Bibliographic Details
Published in: Neural Networks, Vol. 190, p. 107697
Main Authors: Lee, Gilha; Shin, Jin; Kim, Hyun
Format: Journal Article
Language: English
Published: United States: Elsevier Ltd, 01.10.2025
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2025.107697

Summary: In recent years, significant efforts have been made to overcome the limitations inherent in the traditional back-propagation (BP) algorithm. These limitations include overfitting, vanishing/exploding gradients, slow convergence, and a black-box nature. To address them, alternatives to BP have been explored, the most well-known of which is the forward–forward network (FFN). We propose a visual forward–forward network (VFF-Net) that significantly improves FFNs for deeper networks, focusing on enhancing performance in convolutional neural network (CNN) training. VFF-Net uses a label-wise noise labeling method and a cosine-similarity-based contrastive loss that operates directly on intermediate features, solving both the input-information loss problem and the performance drop caused by the goodness function when applied to CNNs. Furthermore, VFF-Net employs layer grouping, which groups layers with the same output channel count for application in well-known existing CNN-based models; this reduces the number of minima that need to be optimized and facilitates transfer to CNN-based models by exploiting the effects of ensemble training. VFF-Net improves the test error by up to 8.31% and 3.80% on a model consisting of four convolutional layers, compared with an FFN targeting a conventional CNN, on CIFAR-10 and CIFAR-100, respectively. Furthermore, a fully connected layer-based VFF-Net achieved a test error of 1.70% on the MNIST dataset, better than that of the existing BP baseline. In conclusion, the proposed VFF-Net significantly reduces the performance gap with BP by improving the FFN and is flexible enough to be ported to existing CNN-based models.
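The goodness function and the cosine-similarity-based contrastive loss mentioned in the summary can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the threshold `theta`, the temperature `tau`, and the exact loss forms are assumptions, based on Hinton's original forward–forward formulation and a standard supervised-contrastive-style loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # Hinton-style "goodness" of a layer's activations: mean squared
    # activation per sample (high for positive data, low for negative).
    return (h ** 2).mean(axis=1)

def ff_layer_objective(h_pos, h_neg, theta=2.0):
    # Layer-local forward-forward objective: a logistic loss that pushes
    # the goodness of positive samples above a threshold theta and the
    # goodness of negative samples below it. theta is a hypothetical
    # hyperparameter, not a value from the paper.
    g_pos, g_neg = goodness(h_pos), goodness(h_neg)
    return (np.log1p(np.exp(theta - g_pos)).mean()
            + np.log1p(np.exp(g_neg - theta)).mean())

def cosine_contrastive_loss(feats, labels, tau=0.5):
    # Sketch of a cosine-similarity-based contrastive loss on intermediate
    # features: same-label pairs are pulled together, different-label pairs
    # pushed apart. The exact loss used by VFF-Net may differ.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = (f @ f.T) / tau                      # pairwise scaled cosine sims
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)               # exclude self-pairs
    exp_sim = np.exp(sim)
    np.fill_diagonal(exp_sim, 0.0)             # diagonal excluded from softmax
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    return -log_prob[pos].mean()               # NLL of positive pairs
```

In a forward–forward setup, each layer (or, in VFF-Net, each layer group) would minimize such a local objective independently, so no gradients flow between layers as in BP.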