Computer vision systems must cope with objects that appear at widely varying scales in natural scenes, so the multi-scale representation of features is an important research topic. Res2Net implements hierarchical multi-scale convolution within residual blocks, but its random channel-grouping method harms the robustness and intuitive interpretability of the network. We propose a new multi-scale convolution model based on multiple attention mechanisms. It introduces attention into the structure of a Res2-block to better guide feature expression. First, we adopt channel attention to score the channels and sort them in descending order of feature importance (Channels-Sort). The sorted channels of the residual block are then grouped and hierarchically convolved within the block to form a single-attention multi-scale block (AMS-block). Second, we apply channel attention to the residual sub-blocks to constitute a dual-attention multi-scale block (DAMS-block). Finally, we introduce spatial attention before channel sorting to form a multi-attention multi-scale block (MAMS-block). A MAMS convolutional neural network (CNN) is a series of MAMS-blocks. It allows significant information to be expressed at more levels, and it can easily be grafted onto different convolutional structures. Limited by hardware conditions, we only validate the proposed ideas with convolutional networks of the same magnitude. The experimental results show that a convolution model combining an attention mechanism with multi-scale features is superior in image classification.

Appl. Sci. 2020, 10, 101

... bottom-up models [16-18] and task-driven top-down models [19-21]. The combination of deep learning and attention mechanisms has received considerable research interest [22-24]. The goal of such work is to select the information from the input that is most relevant to the current task. In deep neural networks, researchers often use masks to implement attention.
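The Channels-Sort and grouping steps described above can be sketched as follows. This is a minimal illustration under simplifying assumptions: channel importance is scored by global average pooling weighted by a per-channel attention vector `w` (SE-style), and the learned convolution layers of the actual AMS-block are omitted. The function names are ours, not the paper's.

```python
import numpy as np

def channels_sort(x, w):
    """Score each channel by global average pooling times a per-channel
    attention weight, then reorder channels in descending importance.
    x: feature map of shape (C, H, W); w: attention weights of shape (C,).
    A hypothetical simplification of the paper's Channels-Sort step."""
    scores = x.mean(axis=(1, 2)) * w      # one importance score per channel
    order = np.argsort(scores)[::-1]      # indices in descending score order
    return x[order], order

def split_groups(x_sorted, s):
    """Split the sorted channels into s equal groups, as in Res2Net-style
    hierarchical (intra-block) convolution."""
    return np.split(x_sorted, s, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))        # toy feature map: 8 channels, 4x4
w = rng.random(8)                         # stand-in for learned attention
sorted_x, order = channels_sort(x, w)
groups = split_groups(sorted_x, 4)
print(len(groups), groups[0].shape)       # → 4 (2, 4, 4)
```

Because the groups are ordered by importance rather than formed randomly, the hierarchical convolution can process the most informative channels first, which is the intuition behind replacing Res2Net's random grouping.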
They demarcate key features in the data by training additional layers of weights. This approach naturally embeds the attention mechanism into the deep network structure and lets it participate in end-to-end training. Attention models are well suited to computer vision tasks such as image classification, saliency analysis, and object detection.

The same object can take different apparent forms in different natural scenes. When a computer vision system perceives an unfamiliar scene, it cannot predict the scale of the objects in the image in advance, so image information must be observed at different scales. The multi-scale representation of images can be divided into two types: multi-scale space and the multi-resolution pyramid. The difference between them is that a multi-scale space keeps the same resolution across scales. In visual tasks, multi-scale approaches based on the multi-resolution pyramid fall into two categories: image pyramids and feature pyramids. The image pyramid achieves the best results, but its time and space complexity is high, so the feature pyramid is widely us...
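As a small illustration of the multi-resolution pyramid just described, the sketch below builds a pyramid by repeated 2×2 average pooling. This is generic background, not the paper's method; the function names are ours.

```python
import numpy as np

def downsample2x(img):
    """Halve each spatial dimension by 2x2 average pooling (crops odd edges)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def image_pyramid(img, levels):
    """Return a multi-resolution pyramid: [full, 1/2, 1/4, ...] resolution.
    Each level halves the resolution, unlike a multi-scale space, which
    keeps resolution fixed while varying the smoothing scale."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample2x(pyr[-1]))
    return pyr

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "image"
pyr = image_pyramid(img, 3)
print([p.shape for p in pyr])                   # → [(8, 8), (4, 4), (2, 2)]
```

Processing an input at every level of such a pyramid is what makes the image-pyramid approach accurate but costly in time and memory, which motivates the feature-pyramid alternative mentioned above.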