2019
DOI: 10.48550/arxiv.1901.01928
Preprint

DSConv: Efficient Convolution Operator

Abstract: Quantization is a popular way of increasing the speed and lowering the memory usage of Convolutional Neural Networks (CNNs). When labelled training data is available, network weights and activations have successfully been quantized down to 1-bit. The same cannot be said about the scenario when labelled training data is not available, e.g. when quantizing a pre-trained model, where current approaches show, at best, no loss of accuracy at 8-bit quantizations. We introduce DSConv, a flexible quantized convolution op…
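The abstract's premise — quantizing a pre-trained model to 8-bit without labelled training data — can be illustrated with a minimal sketch. This shows generic uniform symmetric post-training quantization, not the DSConv operator itself; the function names and tensor shape are illustrative assumptions.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric 8-bit quantization of a weight tensor.

    Maps the largest weight magnitude to 127 and returns the
    integer codes plus the float scale needed to dequantize.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integer codes."""
    return q.astype(np.float32) * scale

# Quantize a hypothetical pre-trained 3x3 conv kernel (64 filters, 3 channels).
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# Rounding error of uniform quantization is bounded by half a step.
err = np.abs(w - w_hat).max()
```

The key point the abstract makes is that this kind of data-free rounding tends to preserve accuracy at 8 bits but degrades at lower bit-widths, which is the gap DSConv targets.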

Cited by 2 publications (2 citation statements)
References 23 publications
“…From Table 2, it can be seen that OD-YOLO is optimal compared with other models in recall, mAP 50 and GFLOPs. Compared to the baseline model, OD-YOLO improves precision by 4.8%, recall by 2.4%, and mAP 50 by 2.2%, and the computational complexity is reduced by 3.7; compared with other improved convolutional models, OD-YOLO is only slightly lower than model 4 [19] in precision, by 1.4%, and slightly lower than model 2 and model 4 in mAP 50:95, by 1.2% and 1.3%, respectively. Meanwhile, in order to verify whether the embedding position of ODConv convolution is optimal, ODConv is used to replace the conventional convolution of the neck network (model 2).…”
Section: OD-YOLO Comparison Experiment
confidence: 88%
“…Using different types of convolutions will give the model different features. When processing the features, we used three convolutions, DSConv [52] and GNConv [53], for validation. The comparison results are shown in Table 4:…”
Section: Ablation Experiments
confidence: 99%