Pruning Convolution Neural Network (SqueezeNet) Using Taylor Expansion-Based Criterion

Bibliographic Details
Published in: 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 1 - 5
Main Authors: Gaikwad, Akash Sunil; El-Sharkawy, Mohamed
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2018
DOI: 10.1109/ISSPIT.2018.8705095

Summary: Recent research in the field of deep learning focuses on reducing the model size of the Convolution Neural Network (CNN) through various compression techniques such as pruning, quantization, and encoding (e.g., Huffman encoding). This paper proposes a way to prune the CNN based on a Taylor expansion of the change in the model's cost function, ΔC. The proposed algorithm uses greedy, criterion-based pruning with fine-tuning by backpropagation on the SqueezeNet architecture. Transfer learning is used to train SqueezeNet on the CIFAR-10 dataset. The proposed algorithm achieves a 70% model-size reduction on the SqueezeNet architecture with only a 1% drop in accuracy.
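The Taylor-expansion criterion described in the abstract estimates the change in cost |ΔC| caused by removing a feature map as the product of that map's activation and the cost gradient with respect to it. A minimal NumPy sketch of that ranking step is shown below; the function names and the per-layer L2 normalization of scores are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def taylor_criterion(activation, gradient):
    """First-order Taylor estimate of |ΔC| for one feature map:
    |ΔC| ≈ |(dC/da) * a|, averaged over batch and spatial positions.

    activation, gradient: arrays of shape (batch, H, W) for one channel.
    """
    return np.abs(activation * gradient).mean()

def rank_channels(activations, gradients):
    """Rank channels of one layer by estimated pruning impact (ascending).

    activations, gradients: arrays of shape (batch, channels, H, W),
    e.g. saved during a backpropagation pass over a mini-batch.
    Returns channel indices; the lowest-scoring channels are the
    cheapest to prune under the greedy criterion.
    """
    scores = np.array([
        taylor_criterion(activations[:, c], gradients[:, c])
        for c in range(activations.shape[1])
    ])
    # L2-normalize scores within the layer so layers with different
    # magnitudes can be compared in a global greedy ranking
    # (an assumption common in Taylor-pruning implementations).
    scores = scores / (np.linalg.norm(scores) + 1e-8)
    return np.argsort(scores)
```

In a full pruning loop, the lowest-ranked channel (or a batch of them) would be removed, the network fine-tuned by backpropagation, and the scores recomputed, repeating until the target model-size reduction is reached.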