Pruning Ratio Optimization with Layer-Wise Pruning Method for Accelerating Convolutional Neural Networks
- 1 January 2022
- Journal research article
- Published by Institute of Electronics, Information and Communications Engineers (IEICE) in IEICE Transactions on Information and Systems
- Vol. E105.D (1), 161-169
- https://doi.org/10.1587/transinf.2021edp7096
Abstract
Pruning is an effective technique for reducing the computational complexity of Convolutional Neural Networks (CNNs) by removing redundant neurons (or weights). There are two types of pruning methods: holistic pruning and layer-wise pruning. The former selects the least important neurons from the entire model and prunes them; the latter prunes layer by layer. Recently, some layer-wise methods have proven effective at reducing the computational complexity of pruned models while preserving their accuracy. The difficulty of layer-wise pruning lies in adjusting the pruning ratio (the fraction of neurons to be pruned) in each layer: because CNNs typically have many layers, each composed of many neurons, tuning pruning ratios by hand is inefficient. In this paper, we present Pruning Ratio Optimizer (PRO), a method that can be combined with layer-wise pruning methods to optimize pruning ratios. The idea of PRO is to adjust each layer's pruning ratio based on how much pruning that layer affects the outputs of the final layer. Experiments verify the effectiveness of PRO.
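To make the idea concrete, here is a minimal sketch of sensitivity-driven pruning-ratio allocation, in the spirit the abstract describes. This is not the paper's PRO algorithm: the toy network, the trial-pruning probe, and the inverse-sensitivity allocation rule are all illustrative assumptions. Each layer is briefly pruned on its own, the change in the final-layer output is measured, and a global pruning budget is then distributed so that less sensitive layers receive higher pruning ratios.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer fully connected network (a stand-in for a CNN's layers).
weights = [rng.standard_normal((16, 16)) for _ in range(3)]
x = rng.standard_normal(16)

def forward(ws, x):
    h = x
    for w in ws:
        h = np.maximum(w @ h, 0.0)  # ReLU
    return h

def prune_smallest(w, ratio):
    """Zero out the `ratio` fraction of rows (neurons) with the smallest L2 norm."""
    w = w.copy()
    k = int(round(ratio * w.shape[0]))
    if k > 0:
        idx = np.argsort(np.linalg.norm(w, axis=1))[:k]
        w[idx, :] = 0.0
    return w

baseline = forward(weights, x)

# Sensitivity probe: prune a small trial fraction in one layer at a time
# and measure how much the final-layer output changes.
trial = 0.25
sens = []
for i in range(len(weights)):
    ws = list(weights)
    ws[i] = prune_smallest(ws[i], trial)
    sens.append(np.linalg.norm(forward(ws, x) - baseline))
sens = np.array(sens) + 1e-12  # avoid division by zero

# Allocate a global pruning budget inversely to sensitivity:
# layers whose pruning barely moves the output get higher ratios.
budget = 0.5                      # target average pruning ratio
inv = 1.0 / sens
ratios = budget * len(weights) * inv / inv.sum()
ratios = np.clip(ratios, 0.0, 0.9)
print([f"{r:.2f}" for r in ratios])
```

A single forward pass on one input is used here only for brevity; a realistic probe would average the output change over a validation batch, and would repeat the probe-and-allocate loop as layers are actually pruned and fine-tuned.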