2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00165

GhostNet: More Features From Cheap Operations

Cited by 2,036 publications (1,000 citation statements). References 28 publications.
“…The simple linear cheap operation is also used to generate more feature information. For the same human visual perception, the convolution kernel of the ghost network in our work is 1 × 1, whereas other works use 3 × 3 and 5 × 5 kernels [46]. This favors extracting local image features while requiring fewer parameters.…”
Section: Experimental Results and Analysis
confidence: 99%
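
The Ghost module that these statements describe is straightforward to sketch. Below is a minimal PyTorch sketch, not the authors' released implementation: a 1 × 1 primary convolution produces a few intrinsic feature maps, and a cheap depthwise convolution derives the remaining "ghost" maps from them. The ratio and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Minimal sketch of a Ghost module (assumed structure, not the
    reference implementation): a 1x1 primary convolution yields a small
    set of intrinsic maps; a cheap depthwise convolution generates the
    remaining "ghost" maps, and the two sets are concatenated."""

    def __init__(self, in_channels, out_channels, ratio=2, dw_kernel=3):
        super().__init__()
        init_channels = out_channels // ratio          # intrinsic maps
        ghost_channels = out_channels - init_channels  # cheap "ghost" maps
        # With ratio=2, ghost_channels == init_channels, so the grouped
        # (depthwise) convolution below is valid.
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_channels, ghost_channels, kernel_size=dw_kernel,
                      padding=dw_kernel // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(ghost_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)          # linear cheap operation
        return torch.cat([intrinsic, ghost], dim=1)
```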
“…The ghost network is introduced into DGRAMN in this article. Compared with an ordinary CNN, the ghost network reduces the total number of parameters and the computational complexity without changing the size of the output feature map [46]. Using GhostNet alleviates the heavy computation and storage demands of spectral reconstruction.…”
Section: Our Methods
confidence: 99%
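
The claimed reduction is easy to check numerically. The snippet below (reusing the GhostModule sketch above; the channel and input sizes are arbitrary choices) compares parameter counts against a plain 3 × 3 convolution that produces an output of identical shape:

```python
import torch
import torch.nn as nn

ordinary = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
ghost = GhostModule(64, 128)  # sketch defined above

def n_params(m):
    return sum(p.numel() for p in m.parameters())

x = torch.randn(1, 64, 32, 32)
assert ordinary(x).shape == ghost(x).shape   # same output feature-map size
print(n_params(ordinary))  # 73728
print(n_params(ghost))     # 4928 (1x1 conv + depthwise conv + batch norms)
```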
“…The GhostNet [27] idea comes from Huawei's Noah's Ark Laboratory. It observes that a well-trained deep neural network usually contains rich, even redundant, feature maps, and that one feature map can be transformed from another through certain operations.…”
Section: Related Work
confidence: 99%
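
The redundancy observation can be made concrete with a toy experiment, sketched below on synthetic data (the "feature maps" here are fabricated for illustration, not taken from a trained network): if one map is approximately a linear "ghost" of another, a single small filter fitted by gradient descent reconstructs it almost exactly.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
source = torch.randn(1, 1, 32, 32)                    # stand-in feature map
# Fabricate a "redundant" partner: the source blurred, plus small noise.
target = F.avg_pool2d(source, 3, stride=1, padding=1) \
         + 0.01 * torch.randn(1, 1, 32, 32)

# Fit one 3x3 linear filter that maps source onto target.
kernel = torch.zeros(1, 1, 3, 3, requires_grad=True)
opt = torch.optim.SGD([kernel], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(F.conv2d(source, kernel, padding=1), target)
    loss.backward()
    opt.step()
print(f"reconstruction MSE: {loss.item():.5f}")  # near the noise floor
```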
“…In the refined compression design, the depthwise separable convolution from MobileNet was first used to reduce the network's parameters; however, it did not reach the ideal goal of this article. GhostNet [27] is also a kind of compact model design. It proposes that a feature map can serve as a "Ghost" of another and that the "Ghost" can be generated by a cheaper operation.…”
Section: Introduction
confidence: 99%
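
For contrast with the Ghost module, the depthwise separable convolution that MobileNet builds on can be sketched as follows (a generic sketch; MobileNet's actual block also interleaves batch normalization and nonlinearities):

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Generic depthwise separable convolution: a per-channel 3x3
    depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels,
                                   kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# For 64 -> 128 channels: 64*9 + 64*128 = 8768 parameters,
# versus 64*128*9 = 73728 for a plain 3x3 convolution.
```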