2018
DOI: 10.1186/s41074-018-0047-6
A multi-scale learning network with depthwise separable convolutions

Abstract: We present a simple multi-scale learning network for image classification that is inspired by MobileNet. The proposed method has two advantages: (1) it uses a multi-scale block with depthwise separable convolutions, which forms multiple sub-networks by increasing the width of the network while keeping the computational resources constant; (2) it combines the multi-scale block with residual connections, which accelerates the training of the networks significantly. The experimental results show that the propo…
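As a rough illustration of the abstract's first claim (widening a block with parallel scales while holding the parameter budget down), the sketch below compares parameter counts. The branch configuration (3×3, 5×5, and 7×7 depthwise separable branches) is a hypothetical example for intuition, not the paper's exact block design.

```python
def standard_conv_params(k, c_in, c_out):
    # k x k standard convolution: every output channel mixes all input channels
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    # Depthwise separable convolution:
    # depthwise step (one k x k filter per input channel) + pointwise 1 x 1 step
    return k * k * c_in + c_in * c_out

# One standard 3x3 layer, 64 -> 128 channels
single = standard_conv_params(3, 64, 128)   # 73,728 parameters

# A hypothetical multi-scale block: three parallel DSC branches at
# different kernel sizes, each mapping 64 -> 128 channels
multi = sum(dsc_params(k, 64, 128) for k in (3, 5, 7))   # 29,888 parameters

print(single, multi)  # the three-branch block is still under half the size
```

Even with three scales in parallel, the separable block stays well below the single dense layer, which is the sense in which width can grow "while keeping the computational resources constant."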

Cited by 13 publications (14 citation statements)
References 16 publications
“…Compared to (2) and (4), DSC has fewer trainable parameters than a conventional CNN, which is formulated as in [20], [24]. Based on (5), it is proven that DSC reduces the trainable parameters of the convolution layer.…”
Section: Depthwise Separable Convolutional Neural Network (mentioning)
Confidence: 92%
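The parameter reduction this statement refers to can be verified with simple arithmetic: a standard k×k convolution needs k·k·C_in·C_out weights, while a depthwise separable convolution needs k·k·C_in (depthwise) plus C_in·C_out (pointwise). A minimal sketch, with illustrative channel sizes:

```python
def conv_params(k, c_in, c_out):
    # Standard k x k convolution layer
    return k * k * c_in * c_out

def dsc_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel
    # Pointwise step: 1 x 1 convolution combining the channels
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = conv_params(k, c_in, c_out)        # 73,728
dsc = dsc_conv_params(k, c_in, c_out)    # 8,768
# The reduction factor matches the well-known 1/c_out + 1/k^2 ratio
print(dsc / std)  # ~0.119, i.e. roughly 8.4x fewer trainable parameters
```

The ratio 1/C_out + 1/k² follows directly by dividing the two counts, which is the usual way the savings of depthwise separable convolutions are quoted.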
“…The spatial location (h_l, w_l) is used from the filter bank f, and d_l is the receptive field in x_l. Therefore, the total trainable parameters of the feature extraction are represented as the kernel, formulated as in [20]:…”
Section: Depthwise Separable Convolutional Neural Network (mentioning)
Confidence: 99%
“…To decrease the computational cost caused by the cascading networks, depthwise separable convolutions, linear bottlenecks, and inverted residuals are adopted to reduce the number of model parameters while maintaining accuracy [43]. By implementing all the above-introduced improvements and techniques, the general flowchart of our proposed detection framework is shown in Figure 15, and the corresponding pseudocode is presented in Algorithms 2 and 3.…”
Section: Input (mentioning)
Confidence: 99%
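For intuition on why the linear-bottleneck/inverted-residual design cited here is cheap, the sketch below counts the parameters of a MobileNetV2-style inverted residual block (1×1 expansion, 3×3 depthwise, 1×1 linear projection) against a standard 3×3 convolution operating at the same expanded width. The channel sizes and expansion factor t = 6 are illustrative assumptions, not values taken from [43].

```python
def inverted_residual_params(c_in, c_out, k=3, t=6):
    c_mid = t * c_in            # expanded width
    expand = c_in * c_mid       # 1x1 expansion convolution
    depthwise = k * k * c_mid   # k x k depthwise convolution
    project = c_mid * c_out     # 1x1 linear bottleneck (projection)
    return expand + depthwise + project

c_in = c_out = 64
block = inverted_residual_params(c_in, c_out)   # 52,608
# A standard 3x3 convolution at the same expanded width (384 channels):
dense = 3 * 3 * (6 * c_in) * (6 * c_in)         # 1,327,104
print(block, dense)  # the inverted residual needs ~25x fewer parameters
```

The depthwise step is what keeps the expanded representation affordable: only the two 1×1 convolutions scale with the channel product, and the projection is kept linear so the bottleneck does not destroy information.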
“…Depthwise separable convolutions have found success in a variety of applications [19,20,21,22]. The work of Nguyen and Ray proposed an adaptive convolution block that learns the upsampling algorithm [23].…”
Section: Depthwise Separable Convolutions (mentioning)
Confidence: 99%