2020
DOI: 10.1155/2020/8817849

A New Image Classification Approach via Improved MobileNet Models with Local Receptive Field Expansion in Shallow Layers

Abstract: Because deep neural networks (DNNs) are both memory-intensive and computation-intensive, they are difficult to apply to embedded systems with limited hardware resources. Therefore, DNN models need to be compressed and accelerated. By applying depthwise separable convolutions, MobileNet can decrease the number of parameters and computational complexity with less loss of classification precision. Based on MobileNet, 3 improved MobileNet models with local receptive field expansion in shallow layers, also called D…
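The paper's improved variants are not reproduced here, but the depthwise separable convolution that the abstract refers to can be sketched as follows. This is a minimal, illustrative PyTorch block (class name and channel sizes are my own, not taken from the paper): a depthwise 3x3 convolution followed by a 1x1 pointwise convolution, which reduces the parameter count from K·K·Cin·Cout to K·K·Cin + Cin·Cout.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative sketch of the depthwise separable convolution used by MobileNet.

    A standard KxK convolution is factored into a depthwise KxK convolution
    (one filter per input channel, groups=in_channels) followed by a 1x1
    pointwise convolution that mixes channels.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   stride=stride, padding=kernel_size // 2,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))   # per-channel spatial filtering
        x = self.relu(self.bn2(self.pointwise(x)))   # cross-channel mixing
        return x


# Example: a 3x3 depthwise separable block mapping 32 -> 64 channels.
block = DepthwiseSeparableConv(32, 64, kernel_size=3)
out = block(torch.randn(1, 32, 56, 56))
print(out.shape)  # torch.Size([1, 64, 56, 56])
```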

Cited by 49 publications (45 citation statements)
References 24 publications

“…As opposed to MobileNet V2 [63], MobileNet [4] is a CNN-based model that is widely used for image classification. The main advantage of the MobileNet architecture is that it requires considerably less computation than a conventional CNN, which makes it suitable for mobile devices and other computers with limited computational capability [64, 65, 66]. MobileNet is a simplified structure whose convolution layers extract discriminative detail, and it relies on two adjustable hyperparameters (a width multiplier and a resolution multiplier) that trade accuracy against latency.…”
Section: Methods
confidence: 99%
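The two tunable knobs mentioned in this excerpt correspond to MobileNet's width multiplier and input resolution. A minimal sketch, assuming the Keras applications API (the values below are illustrative and not taken from either paper), of how they are typically set:

```python
import tensorflow as tf

# The width multiplier alpha thins every layer's channel count; a smaller input
# resolution acts as the resolution multiplier. Both trade accuracy for latency.
model = tf.keras.applications.MobileNet(
    input_shape=(160, 160, 3),  # reduced resolution (default is 224x224)
    alpha=0.5,                  # width multiplier: half the channels per layer
    weights=None,               # assumption: training from scratch, no pretrained weights
    classes=10,                 # illustrative number of target classes
)
model.summary()
```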
“…To show the efficiency of the proposed method, comparison experiments are conducted in which other state-of-the-art deep learning-based methods are tested on the same data set used in Section III-B. These methods include MobileNet (with 14 Conv, 13 DW, 1 Pool, 1 FC, and 1 Softmax layers) [45], ResNet101 (with 101 layers) [46], AlexNet (with 8 layers) [47], EfficientNet (with 16 MBConv, 2 Conv, 1 Pool, and 1 FC layers) [48], and Inception V1 (with 22 layers; see Fig. 7 for details) [49].…”
Section: Comparison Experiments
confidence: 99%
“…The pooling layer, also called the subsampling layer, follows the convolutional layer. It performs a downsampling operation, producing a single output value for each subregion [22]. At the same time, it makes the network more robust to image translation and rotation.…”
Section: MobileNet-Based Spatial Feature Extraction
confidence: 99%
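A minimal sketch of the downsampling that a pooling layer performs; the PyTorch max-pooling call and the 2x2 window below are illustrative choices, not parameters from the cited work:

```python
import torch
import torch.nn as nn

# A 2x2 max-pooling layer keeps one value (the maximum) per 2x2 subregion,
# halving the spatial resolution and adding some tolerance to small shifts.
pool = nn.MaxPool2d(kernel_size=2, stride=2)

feature_map = torch.randn(1, 32, 56, 56)   # (batch, channels, height, width)
pooled = pool(feature_map)
print(pooled.shape)                         # torch.Size([1, 32, 28, 28])
```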
“…Inspired by the hunting behaviour of grey wolves, the top three optimal students are chosen, as given by Eq. (22).…”
Section: Teacher Phase I
confidence: 99%