Deep convolutional neural networks (CNNs) currently achieve state-of-the-art results in many fields of artificial intelligence. However, as CNNs grow deeper and wider, their numbers of parameters and floating-point operations (FLOPs) increase dramatically. To deploy deep CNNs on mobile terminals and portable devices, many researchers have recently worked on compressing and accelerating them. To this end, we propose a novel uniform channel pruning (UCP) method in which modified squeeze-and-excitation blocks (MSEB) measure the importance of the channels in the convolutional layers. Unimportant channels, together with the convolutional kernels related to them, are pruned directly, which greatly reduces the storage cost and the amount of computation. ResNet contains two types of residual blocks. For ResNets with bottleneck blocks, we prune the 3×3 convolutional layer in the middle of each block with the same method used for traditional CNNs. For ResNets with basic blocks, we propose an approach that prunes all residual blocks in the same stage consistently, so that the compact network remains dimensionally correct. Because pruning removes considerable information from the network, and the larger the pruning amplitude, the more information is lost, we retrain the pruned network from scratch rather than fine-tuning it to restore its accuracy. Finally, we evaluate our method on CIFAR-10, CIFAR-100 and ILSVRC-2012 for image classification. The results show that when the pruning rate is small, the compact network retrained from scratch outperforms the original network; even when the pruning amplitude is large, the accuracy is maintained or drops only slightly. On CIFAR-100, with the parameters and FLOPs of VGG-19 reduced by up to 82% and 62%, respectively, its accuracy even improved by 0.54% after retraining.
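
The abstract does not spell out the internals of the MSEB or the pruning procedure. As a rough, non-authoritative sketch, the PyTorch snippet below uses a standard squeeze-and-excitation gate whose excitation weights, averaged over a batch, serve as channel-importance scores, and then drops the lowest-scoring output channels of a convolutional layer; the names SEGate and prune_conv and the prune_rate parameter are illustrative, not from the paper. The surviving kernels are copied here only to define the compact layer; per the method described above, the compact network would then be retrained from scratch.

```python
# Minimal sketch of SE-style channel-importance pruning (assumptions, not the
# paper's exact MSEB): a squeeze-and-excitation gate scores channels, and the
# lowest-scoring output channels of a conv layer are removed.
import torch
import torch.nn as nn


class SEGate(nn.Module):
    """Squeeze-and-excitation gate producing per-channel weights in (0, 1)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=(2, 3))  # squeeze: global average pooling -> (N, C)
        return self.fc(s)       # excite: per-channel weights -> (N, C)


@torch.no_grad()
def prune_conv(conv: nn.Conv2d, gate: SEGate, x: torch.Tensor, prune_rate: float):
    """Drop the lowest-scoring output channels of `conv`; return a smaller layer."""
    # Average the excitation weights over a batch as the importance score.
    scores = gate(conv(x)).mean(dim=0)                       # (C_out,)
    n_keep = max(int(conv.out_channels * (1.0 - prune_rate)), 1)
    keep = scores.argsort(descending=True)[:n_keep].sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.copy_(conv.weight[keep])  # keep kernels of surviving channels
    if conv.bias is not None:
        pruned.bias.copy_(conv.bias[keep])
    return pruned, keep  # `keep` reindexes the inputs of the following layer


# Usage: prune half the channels of a 64-channel conv on dummy CIFAR-size input.
conv = nn.Conv2d(3, 64, 3, padding=1)
gate = SEGate(64)
x = torch.randn(8, 3, 32, 32)
smaller, kept = prune_conv(conv, gate, x, prune_rate=0.5)
print(smaller)  # Conv2d(3, 32, kernel_size=(3, 3), ...)
```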