2022
DOI: 10.3390/electronics11040575
A Light-Weight CNN for Object Detection with Sparse Model and Knowledge Distillation

Abstract: This study details the development of a lightweight, high-performance model targeting real-time object detection. Several design features were integrated into the proposed framework to achieve light weight, rapid execution, and strong object-detection performance. Foremost, a sparse, lightweight structure was chosen as the network's backbone, and feature fusion was performed using modified feature pyramid networks. Recent learning strategies in data augmentation, mixed precision training, and…
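The knowledge-distillation component named in the title is typically implemented by training the student to match the teacher's temperature-softened output distribution. Below is a minimal NumPy sketch of that idea; the logits, temperature, and function names are illustrative assumptions, not details taken from the paper.

```python
# Knowledge distillation sketch: KL divergence between the teacher's and
# student's temperature-softened class distributions (Hinton-style soft targets).
# All values here are hypothetical examples.
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives a softer distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) over softened distributions, scaled by T^2
    so gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)))

teacher = [6.0, 2.0, -1.0]   # hypothetical teacher logits
student = [4.0, 3.0, 0.0]    # hypothetical student logits
print(distillation_loss(student, teacher))
```

In practice this soft-target term is combined with the usual hard-label loss via a weighting coefficient; the loss is zero when the student exactly reproduces the teacher's distribution.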

Cited by 10 publications (3 citation statements)
References 31 publications
“…Lightweight CNNs have received increasing attention because they use fewer weight parameters and have lower computational requirements. The key to making a CNN lightweight is to reduce the model's parameter count and computational complexity; common methods include replacing regular convolution with depthwise separable convolution [44], model pruning [45], and knowledge distillation [46,47]. Model pruning usually requires fine-tuning the model, which is relatively complex and sometimes even requires re-training the entire model.…”
Section: Lightweight CNN
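The parameter savings from depthwise separable convolution mentioned in the excerpt above can be seen with a quick count: a depthwise k×k convolution followed by a 1×1 pointwise convolution replaces one dense k×k convolution. The layer sizes below are hypothetical, chosen only to illustrate the ratio.

```python
# Parameter-count comparison: regular vs. depthwise separable convolution.
# Biases are omitted; channel counts and kernel size are illustrative.

def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv mixing channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 256, 256, 3
regular = conv_params(c_in, c_out, k)                    # 589824
separable = depthwise_separable_params(c_in, c_out, k)   # 67840
print(f"regular: {regular}, separable: {separable}, "
      f"ratio: {separable / regular:.3f}")               # ratio: 0.115
```

For a 3×3 kernel the separable form needs roughly 1/9 to 1/8 of the parameters, which is the main source of the "fewer weight parameters" advantage the excerpt describes.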
“…EfficientNet-Lite was proposed by Google in March 2022; its structure is found by multi-objective neural architecture search over the network's depth, width, and resolution under compound scaling. Compared to EfficientNet, EfficientNet-Lite replaces the previous version by eliminating the squeeze-and-excitation structure and swapping the original Swish activation function for ReLU6, to avoid the loss of feature information in the non-linear layers [26]. It consists of a 3 × 3 regular convolutional layer, seven MBConv blocks, a 1 × 1 regular convolutional layer, an average pooling layer, and a fully connected layer.…”
Section: Replace the YOLOv5 Backbone with EfficientNet-Lite
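The Swish-to-ReLU6 swap described in the excerpt above trades a smooth gated activation for one clamped to [0, 6], which is friendlier to quantized and low-precision inference. A minimal NumPy sketch of both functions, for illustration only:

```python
# ReLU6 vs. Swish, the two activations contrasted in the excerpt.
import numpy as np

def relu6(x):
    """Clamp activations to the range [0, 6]."""
    return np.minimum(np.maximum(x, 0.0), 6.0)

def swish(x):
    """Swish: x * sigmoid(x); smooth and non-monotonic near zero."""
    return x / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0, 8.0])
print(relu6(x))   # [0. 0. 3. 6.]
print(swish(x))
```

The hard upper bound of ReLU6 keeps activation ranges predictable, which simplifies fixed-point quantization compared with the unbounded Swish output.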
“…In recent years, owing to their great representational power and ability to process image data, deep convolutional neural networks (DCNNs) have been used in various computer vision tasks such as image classification [1,2], object detection [3,4], and semantic segmentation [5,6]. Most modern, powerful DCNNs require a large number of learnable parameters and considerable computation, which imposes high demands on the hardware that runs them.…”
Section: Introduction