2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.01191
What’s Hidden in a Randomly Weighted Neural Network?


Cited by 143 publications (271 citation statements)
References 8 publications
“…It was reported that selective tunings, such as number-selective responses, can emerge from the multiplication of random matrices ( 23 ) and that the structure of a randomly initialized convolutional neural network can provide a priori information about the low-level statistics in natural images, enabling the reconstruction of the corrupted images without any training for feature extraction ( 24 ). Furthermore, a recent study showed that subnetworks from randomly initialized neural networks can perform image classification ( 25 ), implying the ability of a randomly initialized network to engage in visual feature extraction.…”
Section: Introductionmentioning
confidence: 99%
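The subnetwork claim quoted above (reference 25, i.e. this paper) can be illustrated with a minimal sketch. This is not the paper's actual edge-popup algorithm: the layer shape, the random per-weight scores (which edge-popup learns rather than samples), and the keep fraction below are all illustrative assumptions. The idea is to freeze random weights and select a top-scoring subset of them as the subnetwork.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_mask(scores, keep_frac):
    """Binary mask keeping the top `keep_frac` fraction of entries by score."""
    k = max(1, int(keep_frac * scores.size))
    thresh = np.sort(scores.ravel())[-k]
    return (scores >= thresh).astype(scores.dtype)

# Randomly initialized, never-trained weights for one layer.
W = rng.standard_normal((64, 32))

# Per-weight scores; in edge-popup these are learned by SGD,
# here they are random purely for illustration.
scores = rng.random(W.shape)

mask = topk_mask(scores, keep_frac=0.5)
W_sub = W * mask                      # weights of the selected subnetwork
x = rng.standard_normal(32)
h = np.maximum(W_sub @ x, 0.0)        # ReLU forward pass through the masked layer
```

Only the mask changes during training in the edge-popup setting; the underlying weights `W` stay at their random initialization throughout.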
“…Empirical evidence suggests the likely existence of N. [5] and [6] showed that an untrained random model simultaneously contains sub-networks that perform well. At specific sparsity levels, these sub-networks perform as well as individually trained dense models.…”
Section: Model Formulationmentioning
confidence: 99%
“…To deal with this issue, model pruning methods have been developed that remove unimportant connections from the weight matrices of neural network models [29,30,1]. The resulting pruned models retain only sparse structures, allowing them to run efficiently [31,32,5,6].…”
Section: Related Workmentioning
confidence: 99%
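The pruning approach described in the quote above can be sketched with simple magnitude pruning, the most common criterion in the cited line of work. This is a hedged illustration, not any specific cited method; the matrix size and sparsity level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights in W."""
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    # Threshold at the k-th smallest absolute value; keep strictly larger weights.
    thresh = np.sort(np.abs(W).ravel())[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)

W = rng.standard_normal((128, 128))
W_pruned = magnitude_prune(W, sparsity=0.9)

# Fraction of weights that survive pruning (roughly 1 - sparsity).
density = np.count_nonzero(W_pruned) / W.size
```

The pruned matrix keeps the same shape but is mostly zeros, which is what enables sparse storage formats and faster inference in the systems cited above.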