2022
DOI: 10.3390/app12178424
Accelerated Inference of Face Detection under Edge-Cloud Collaboration

Abstract: Model compression makes it possible to deploy face detection models on devices with limited computing resources. Edge–cloud collaborative inference, as a new paradigm of neural network inference, can significantly reduce inference latency. Inspired by these two techniques, this paper adopts a two-step acceleration strategy for the CenterNet model. First, model pruning is applied to the convolutional and deconvolutional layers to obtain a preliminary acceleration effect. …
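The abstract names pruning of convolutional and deconvolutional layers as the first acceleration step but does not state the ranking criterion. A minimal sketch of one common choice, L1-norm (magnitude) filter pruning, is below; the function names, the `keep_ratio` parameter, and the L1 criterion itself are assumptions for illustration, not the paper's confirmed method.

```python
# Hedged sketch: magnitude-based structured pruning of a conv layer.
# Each filter is represented as a flat list of its weights; filters
# with the smallest L1 norm are assumed to contribute least and are dropped.

def l1_norms(filters):
    """Per-filter L1 norm; `filters` is a list of flat weight lists."""
    return [sum(abs(w) for w in f) for f in filters]

def prune_filters(filters, keep_ratio):
    """Keep the top `keep_ratio` fraction of filters, ranked by L1 norm.

    Returns (kept_indices, kept_filters); indices are sorted so the
    surviving channels keep their original relative order.
    """
    norms = l1_norms(filters)
    n_keep = max(1, int(len(filters) * keep_ratio))
    ranked = sorted(range(len(filters)), key=lambda i: norms[i], reverse=True)
    keep = sorted(ranked[:n_keep])
    return keep, [filters[i] for i in keep]

# Example: keep ~2/3 of three filters; the low-magnitude middle filter is pruned.
filters = [[1.0, -1.0], [0.1, 0.05], [2.0, 3.0]]
keep, kept = prune_filters(filters, 0.67)
```

In a real pipeline the kept indices would also be used to slice the matching input channels of the *next* layer, followed by fine-tuning to recover accuracy.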
Cited by 0 publications · References 33 publications (44 reference statements)
