Improving Neural Network Robustness Through Neighborhood Preserving Layers

Bibliographic Details
Published in: Pattern Recognition. ICPR International Workshops and Challenges, Vol. 12666, pp. 179-195
Main Authors: Liu, Bingyuan; Malon, Christopher; Xue, Lingzhou; Kruus, Erik
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2021
Series: Lecture Notes in Computer Science
ISBN: 9783030687793; 3030687791
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-68780-9_17

Summary: One major source of vulnerability of neural nets in classification tasks arises from the overparameterized fully connected layers near the end of the network. In this paper, we propose a new neighborhood preserving layer which can replace these fully connected layers to improve network robustness. Networks including these neighborhood preserving layers can be trained efficiently. We theoretically prove that our proposed layers are more robust against distortion because they effectively control the magnitude of gradients. Finally, we empirically show that networks with our proposed layers are more robust against state-of-the-art gradient-descent-based attacks, such as the PGD attack, on the benchmark image classification datasets MNIST and CIFAR10.
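For context on the evaluation method named in the summary: PGD (projected gradient descent) perturbs an input by repeated signed gradient steps, projecting back into an L-infinity ball around the original point after each step. The following is a minimal NumPy sketch of that idea against a toy logistic-regression scorer; it is an illustration of the standard attack, not the authors' implementation, and the model, step size, and budget are chosen only for demonstration.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """Untargeted L-infinity PGD against sigmoid(w.x + b).

    Each iteration takes an ascent step on the cross-entropy loss in the
    direction of sign(grad), then clips back into the eps-ball around x.
    """
    x_orig = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # predicted P(y=1)
        grad = (p - y) * w                          # d(loss)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)       # signed ascent step
        x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)  # projection
    return x_adv

# Demonstration: a correctly classified point is flipped within the budget.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1
adv = pgd_attack(x, y, w, b)
```

Here the clean point scores positive (class 1), while the perturbed point, still within L-infinity distance 0.3 of the original, scores negative; the same loop structure is what PGD applies to deep networks, with the gradient supplied by backpropagation.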