FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks

Bibliographic Details
Published in: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 1223-1228
Main Authors: Qiu, Suo; Xu, Xiangmin; Cai, Bolun
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2018
DOI: 10.1109/ICPR.2018.8546022


Summary: Rectified linear unit (ReLU) is a widely used activation function for deep convolutional neural networks. However, because of its hard rectification at zero, ReLU networks lose the benefits of negative values. In this paper, we propose a novel activation function called flexible rectified linear unit (FReLU) to further explore the effects of negative values. By redesigning the rectified point of ReLU as a learnable parameter, FReLU expands the states of the activation output. When a network is successfully trained, FReLU tends to converge to a negative value, which improves expressiveness and thus performance. Furthermore, FReLU is designed to be simple and effective, avoiding exponential functions to keep computation low-cost. Because FReLU adapts itself rather than relying on strict assumptions, it can be easily used in various network architectures. We evaluate FReLU on three standard image classification datasets: CIFAR-10, CIFAR-100, and ImageNet. Experimental results show that FReLU achieves fast convergence and competitive performance on both plain and residual networks.
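
The summary describes FReLU as ReLU whose rectified point is a learnable parameter that typically converges to a negative value. As a rough illustration only, not the paper's exact formulation, a minimal PyTorch-style sketch assuming the form frelu(x) = relu(x) + b with a learnable per-channel shift b might look like this:

import torch
import torch.nn as nn

class FReLU(nn.Module):
    # Sketch of a flexible ReLU: ReLU followed by a learnable per-channel
    # shift. The shift is initialized to zero; during training it is free to
    # become negative, which lets the activation output negative values while
    # keeping ReLU's piecewise-linear, exponential-free computation.
    def __init__(self, num_channels: int):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the per-channel shift over (N, C, H, W) feature maps.
        return torch.relu(x) + self.bias.view(1, -1, 1, 1)

# Example use in place of nn.ReLU after a 64-channel convolution:
# act = FReLU(64); y = act(torch.randn(8, 64, 32, 32))
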