Invertible Residual Blocks in Deep Learning Networks

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, no. 7, pp. 10167-10173
Main Authors: Wang, Ruhua; An, Senjian; Liu, Wanquan; Li, Ling
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2024
ISSN: 2162-237X (print); 2162-2388 (electronic)
DOI: 10.1109/TNNLS.2023.3238397

Summary: Residual blocks have been widely used in deep learning networks. However, information may be lost in residual blocks due to the relinquishment of information in rectified linear units (ReLUs). To address this issue, invertible residual networks have been proposed recently, but they generally operate under strict restrictions that limit their applications. In this brief, we investigate the conditions under which a residual block is invertible. A necessary and sufficient condition is presented for the invertibility of residual blocks with one layer of ReLU inside the block. In particular, for widely used residual blocks with convolutions, we show that such residual blocks are invertible under weak conditions if the convolution is implemented with certain zero-padding methods. Inverse algorithms are also proposed, and experiments are conducted to show the effectiveness of the proposed inverse algorithms and verify the correctness of the theoretical results.
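
The brief's own inverse algorithms and its necessary-and-sufficient invertibility condition are given in the full text. As a point of reference, the sketch below illustrates the stricter, classical setting the abstract alludes to: a residual block y = x + F(x) with a single ReLU inside the residual branch, inverted by fixed-point iteration when F is contractive. All dimensions, weight scalings, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_residual_block(d, scale=0.4, seed=0):
    """Build F(x) = W2 @ relu(W1 @ x + b), rescaled so that F is contractive.
    Sizes and the scaling constant are hypothetical, chosen only for illustration."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((d, d))
    W2 = rng.standard_normal((d, d))
    # Rescale so that ||W2||_2 * ||W1||_2 < 1, making x -> W2 @ relu(W1 @ x + b)
    # a contraction; in that restrictive setting the block is invertible by
    # fixed-point iteration.
    W1 *= scale / np.linalg.norm(W1, 2)
    W2 *= scale / np.linalg.norm(W2, 2)
    b = rng.standard_normal(d) * 0.1

    def F(x):
        return W2 @ np.maximum(W1 @ x + b, 0.0)  # one ReLU inside the block
    return F

def forward(F, x):
    """Residual block: y = x + F(x)."""
    return x + F(x)

def invert(F, y, n_iters=100):
    """Recover x from y = x + F(x) via the fixed-point iteration
    x_{k+1} = y - F(x_k), which converges when F is contractive."""
    x = y.copy()
    for _ in range(n_iters):
        x = y - F(x)
    return x

if __name__ == "__main__":
    d = 8
    F = make_residual_block(d)
    x = np.random.default_rng(1).standard_normal(d)
    y = forward(F, x)
    x_rec = invert(F, y)
    print("reconstruction error:", np.linalg.norm(x - x_rec))  # near machine precision
```

The contractivity requirement used above is exactly the kind of strict restriction the abstract says existing invertible residual networks rely on; the brief instead derives weaker conditions (e.g., for convolutional blocks with certain zero-padding) and dedicated inverse algorithms.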