2019
DOI: 10.3390/app10010101

A New Multi-Scale Convolutional Model Based on Multiple Attention for Image Classification

Abstract: Computer vision systems are insensitive to the scale of objects in natural scenes, so it is important to study the multi-scale representation of features. Res2Net implements hierarchical multi-scale convolution in residual blocks, but its random grouping method affects the robustness and intuitive interpretability of the network. We propose a new multi-scale convolution model based on multiple attention. It introduces the attention mechanism into the structure of a Res2-block to better guide feature expression…
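The hierarchical multi-scale convolution that the abstract attributes to Res2Net can be sketched in a few lines: the channels of a feature map are split into groups, and each group after the first receives the previous group's output before being processed. The following is a minimal NumPy toy, not the paper's implementation; the `0.5 *` scaling stands in for the real per-group 3×3 convolutions, and the function name is illustrative.

```python
import numpy as np

def res2_block_sketch(x, scales=4):
    """Toy Res2Net-style hierarchical multi-scale mixing.

    x: feature map of shape (C, H, W), with C divisible by `scales`.
    The first channel group passes through untouched; every later
    group is summed with the previous group's output before its own
    stand-in "convolution" (a simple 0.5x scaling here).
    """
    groups = np.split(x, scales, axis=0)
    outs = [groups[0]]                      # first split: identity
    for i in range(1, scales):
        y = groups[i] + outs[-1]            # hierarchical connection
        outs.append(0.5 * y)                # stand-in for a 3x3 conv
    return np.concatenate(outs, axis=0)     # same shape as the input
```

Because each group sees the output of all earlier groups, the effective receptive field grows with the group index, which is the multi-scale effect the abstract refers to.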

Cited by 15 publications (10 citation statements)
References 46 publications
“…ADCM [35] integrates dropout into the attention mechanism following a lightweight design principle, improving on CBAM [33]. In addition, many works use improved attention mechanisms to enhance the performance of CNNs [36,37].…”
Section: Attention Mechanism
confidence: 99%
“…Res2Net's random grouping of channels within residual blocks affects the robustness and intuitive interpretability of the network. AMS-CNN [41] introduces channel sorting and grouped convolution, so that the more important feature channels undergo more convolution operations to achieve multi-scale expression. Next, we use the improved channel expansion algorithm to construct the object-level attention multi-scale convolution model in AMS-CNN, and implement the object-level attention based on the channel expansion strategy in the intra-layer multi-scale convolution.…”
Section: B Multi-Scale Convolutional Model
confidence: 99%
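The channel sorting and grouping step described in the excerpt above can be sketched as follows. This is a hypothetical NumPy illustration, not AMS-CNN's actual code: channels are reordered by an importance score, then split into groups so that downstream logic can give the most important group extra convolution passes. The function name and group count are assumptions.

```python
import numpy as np

def sort_and_group(x, importance, n_groups=2):
    """Reorder channels of x (shape C, H, W) by descending importance,
    then split into n_groups. The first returned group holds the most
    important channels, which would receive more convolutions."""
    order = np.argsort(importance)[::-1]      # most important first
    x_sorted = x[order]
    return np.array_split(x_sorted, n_groups, axis=0)
```

The sorting makes the grouping deterministic and interpretable, which is exactly the property the excerpt says Res2Net's random grouping lacks.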
“…As shown in Figure 1, the "channel sorting module" is introduced first; the "channel expansion module" will be described in detail later. Here, the "channel sorting module [41]" is used to obtain the channel importance vector U, a one-dimensional vector that records the importance of each feature channel. The channel-sorting strategy can be implemented in two ways: first, global average pooling is applied directly, so that each channel's mean activation represents its importance; second, channel-by-channel convolution is adopted, i.e., a global depth-wise convolution in which each convolution kernel has the same spatial size as its feature channel.…”
Section: A Channel Expansion Network
confidence: 99%
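The two ways of computing the importance vector U mentioned in the excerpt above are easy to sketch. This is a minimal NumPy illustration under one assumption stated in the excerpt: in the global depth-wise variant, each kernel has the same spatial size as the feature map, so the "convolution" reduces to an elementwise product summed over height and width. Function names are illustrative.

```python
import numpy as np

def channel_importance_gap(x):
    """U via global average pooling: the mean activation of each
    channel of x (shape C, H, W) serves as its importance score."""
    return x.mean(axis=(1, 2))

def channel_importance_depthwise(x, kernels):
    """U via global depth-wise convolution: `kernels` has the same
    shape (C, H, W) as x, one learned kernel per channel, so each
    score is the sum of an elementwise product."""
    return (x * kernels).sum(axis=(1, 2))
```

The GAP variant is parameter-free, while the depth-wise variant lets the network learn which spatial positions matter per channel; both yield a length-C vector U.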
“…ISN makes it possible to segment adhesive characters, and we use the same idea in this work. Attention mechanism: Attention mechanisms are widely used in computer vision [28,29] and natural language processing [30]. X. Wang et al. [31] proposed a non-local network for video classification, which is based on a space-time dependency attention mechanism.…”
Section: Related Work
confidence: 99%