Learning in compressed space

Bibliographic Details
Published in: Neural Networks, Vol. 42, pp. 83-93
Main Authors: Fabisch, Alexander; Kassahun, Yohannes; Wöhrle, Hendrik; Kirchner, Frank
Format: Journal Article
Language: English
Published: Kidlington: Elsevier Ltd, 01.06.2013
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2013.01.020

Summary: We examine two methods used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we use orthogonal functions, and especially random projections, for compression, and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly.
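
The equivalence stated in the summary admits a quick numerical check: if a layer's weight vector is generated from a smaller parameter vector through a fixed projection, w = Phi alpha, then w . x = alpha . (Phi^T x), so optimizing the compressed parameters alpha against the projected input is the same computation, and the chain rule carries gradients back through Phi in the same way. The following NumPy sketch illustrates this; all names and dimensions are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs, n_params = 256, 16           # full vs. compressed parameter count

    # Random projection: the full weight vector is induced as Phi @ alpha.
    Phi = rng.standard_normal((n_inputs, n_params)) / np.sqrt(n_params)

    alpha = rng.standard_normal(n_params)  # compressed parameters (what we train)
    w = Phi @ alpha                        # induced full weight vector
    x = rng.standard_normal(n_inputs)      # one input to the layer

    # Compressing the weights is equivalent to compressing the input:
    # w . x = (Phi alpha) . x = alpha . (Phi^T x)
    assert np.isclose(w @ x, alpha @ (Phi.T @ x))

    # Backpropagation in compressed space: for any loss L, dL/dalpha = Phi^T (dL/dw).
    # Example loss L = (w . x)^2, so dL/dw = 2 (w . x) x.
    grad_w = 2.0 * (w @ x) * x
    grad_alpha = Phi.T @ grad_w
    assert np.allclose(grad_alpha, 2.0 * (alpha @ (Phi.T @ x)) * (Phi.T @ x))

Since the forward pass and the gradient both reduce to operations on the n_params-dimensional vector alpha and the pre-projected input Phi^T x, learning runs in the smaller compressed space, which is consistent with the training-time reductions the summary reports.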