

Talks and Poster Presentations (with Proceedings-Entry):

A. Marchisio, M. Hanif, M. Martina, M. Shafique:
"PruNet: Class-Blind Pruning Method for Deep Neural Networks";
Poster: IEEE International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil; 2018-07-08 - 2018-07-13; in: "2018 International Joint Conference on Neural Networks (IJCNN)", (2018), ISBN: 978-1-5090-6014-6; 1 - 8.



English abstract:
DNNs are highly memory- and compute-intensive, which makes them infeasible to deploy in real-time or mobile applications, where power and memory resources are scarce. Introducing sparsity into the network is one way to reduce those requirements. However, systematically employing pruning under given accuracy requirements is a challenging problem. We propose a novel methodology that iteratively applies magnitude-based Class-Blind pruning to compress a DNN into a sparse model. The methodology is generic and can be applied to different types of DNNs. We demonstrate that retraining after pruning is essential to restore the accuracy of the network. Experimental results show that our methodology reduces the model size by around two orders of magnitude without noticeably affecting the accuracy. It requires several iterations of pruning and retraining, but can achieve up to a 190x Memory Saving Ratio (for LeNet on the MNIST dataset) compared to the baseline model. Similar results are obtained for more complex networks, e.g., 91x for VGG-16 on the CIFAR-100 dataset. Combining this work with an efficient encoding for sparse networks, such as Compressed Sparse Column (CSC) or Compressed Sparse Row (CSR), yields a further reduced memory footprint. Our methodology can also be complemented by other compression techniques, such as weight sharing, quantization, or fixed-point conversion, which allow further reductions in memory and computation.
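To illustrate the core idea, the following is a minimal sketch (not the authors' released code) of iterative, magnitude-based class-blind pruning with retraining, written in PyTorch. "Class-blind" means a single magnitude threshold is computed over all layers' weights jointly, rather than per layer. The helpers retrain_fn and eval_fn, the accuracy floor min_acc, and the pruning schedule step are illustrative assumptions, not part of the paper.

    import torch

    def class_blind_prune(model, fraction):
        # Collect magnitudes of all weight tensors (dim > 1 skips biases),
        # then find one global threshold over the whole network.
        all_weights = torch.cat([p.detach().abs().flatten()
                                 for p in model.parameters() if p.dim() > 1])
        k = max(1, int(fraction * all_weights.numel()))
        threshold = all_weights.kthvalue(k).values
        masks = {}
        for name, p in model.named_parameters():
            if p.dim() > 1:
                # Zero the smallest-magnitude weights, blind to which layer
                # (or output class) they belong to.
                masks[name] = (p.detach().abs() > threshold).float()
                p.data.mul_(masks[name])
        return masks

    def prune_and_retrain(model, retrain_fn, eval_fn, min_acc, step=0.1):
        # Hypothetical driver loop: increase sparsity stepwise, retraining
        # after each pruning pass until accuracy would fall below min_acc.
        fraction, best_masks = step, None
        while fraction < 1.0:
            masks = class_blind_prune(model, fraction)
            retrain_fn(model, masks)  # retraining must keep masked weights at zero
            if eval_fn(model) < min_acc:
                break
            best_masks = {n: m.clone() for n, m in masks.items()}
            fraction += step
        return model, best_masks

Once pruned, each sparse weight matrix could be stored in CSR/CSC form, e.g. with scipy.sparse.csr_matrix(w.detach().cpu().numpy()), so that only the non-zero values plus their index arrays occupy memory, which is how the reported Memory Saving Ratio would be realized in practice.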


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1109/IJCNN.2018.8489764


Created from the Publication Database of the Vienna University of Technology.