Pruning effectively reduces the size of neural networks, which facilitates their deployment in production environments, especially in embedded systems with limited computing resources. In this paper, we propose a convolutional neural network pruning method based on an attention mechanism. We add an attention module to the model to generate scaling factors for channels. These scaling factors are treated as channel importance scores, and the filters and convolution kernels corresponding to channels with lower importance scores are removed. Our method learns the importance of channels during training, instead of considering only the direct impact of parameters as existing methods do. Moreover, it does not depend on any dedicated libraries, so it can be combined with other compression methods for better performance. In experiments, we prune about 90% of the parameters in VGGNet with a 0.67% accuracy drop, and about 50% of the parameters in ResNet-56 with a 1.02% accuracy drop.
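To make the channel-scoring idea concrete, the following is a minimal PyTorch sketch of an attention module that emits per-channel scaling factors, together with a helper that selects the lowest-scoring channels for removal. The abstract does not specify the module's architecture or the pruning criterion, so the squeeze-and-excitation layout, the `reduction=16` bottleneck, and the global `prune_ratio` threshold are all illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style attention block producing per-channel scaling factors.

    Assumption: the paper's attention module design is not given in the
    abstract; a squeeze-and-excitation layout is used here as a stand-in.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        # Squeeze: global average pool over spatial dims -> (N, C)
        s = x.mean(dim=(2, 3))
        # Excite: scaling factors in (0, 1), one per channel
        scale = self.fc(s)
        # Rescale feature maps; `scale` doubles as an importance score
        return x * scale.view(x.size(0), -1, 1, 1), scale


def channels_to_prune(scores: torch.Tensor, prune_ratio: float = 0.5):
    """Return indices of the lowest-scoring channels.

    `scores` is a (C,) tensor of importance scores, e.g. scaling
    factors averaged over the training set. The filters and kernels
    for these channels would then be physically removed.
    """
    k = int(scores.numel() * prune_ratio)
    return torch.argsort(scores)[:k].tolist()
```

In use, one would attach a `ChannelAttention` block after each convolution during training, accumulate the mean `scale` per channel over the training data, and pass the result to `channels_to_prune` to decide which filters to drop before fine-tuning the slimmed network.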