2009 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2009.5179045
A Neural Network pruning approach based on Compressive Sampling

Abstract: The trade-off between computational complexity and architecture size constrains the development of Neural Networks (NNs): an architecture that is too large or too small strongly affects performance in terms of both generalization and computational cost. In the past, saliency analysis has been employed to determine the most suitable structure; however, it is time-consuming and its performance is not robust. In this paper, a family of new algorithms for pruning elements (weights and hidden neurons) i…

Cited by 10 publications (6 citation statements); references 17 publications (35 reference statements).
“…Before we formulate the problem of network pruning as a compressive sampling problem, we introduce some definitions [11,10]. We assume that the training input patterns are stored in a matrix I, and the desired output patterns are stored in a matrix O; the mathematical model for training the neural network can then be extracted in the form of the following expansion in w. So we can write the problem as below: …”
Section: Problem Formulation and Methodology
confidence: 99%
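The excerpt above casts pruning as recovering a sparse weight vector that still reproduces the desired outputs. A minimal sketch of that idea, assuming a hidden-layer output matrix `H` and target vector `o` (illustrative names, not the paper's notation), uses Orthogonal Matching Pursuit as a stand-in for the compressive sampling recovery step; the paper itself may use a different solver:

```python
import numpy as np

def omp(Phi, y, k):
    """Greedy Orthogonal Matching Pursuit: find a k-sparse w with Phi @ w ≈ y."""
    support = []
    residual = y.copy()
    w = np.zeros(Phi.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit restricted to the selected support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        w[:] = 0.0
        w[support] = coef
        residual = y - Phi @ w
    return w

# toy pruning example: 10 hidden units, only 3 actually contribute
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 10))          # hidden-layer outputs (assumed setup)
w_true = np.zeros(10)
w_true[[1, 4, 7]] = [2.0, -1.5, 0.8]       # the "useful" units
o = H @ w_true                             # desired outputs
w_hat = omp(H, o, k=3)                     # sparse weights -> pruned network
print(np.nonzero(w_hat)[0])                # indices of units kept after pruning
```

Units whose recovered weight is zero can be removed from the network outright, which is the pruning interpretation of the sparse solution.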
“…This section briefly surveys the sparse representation and multiple measurement vector (MMV) model [17], [18]. Given an original signal and a dictionary consisting of basis vectors, sparse representation seeks a compact approximation of the given signal using a few of those basis vectors.…”
Section: Sparse Representation
confidence: 99%
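In the MMV model mentioned above, several measurement vectors share one sparse support, so the coefficient matrix is row-sparse. A hedged sketch of this, using a simultaneous-OMP-style greedy solver with illustrative names `D` (dictionary) and `Y` (stacked measurement vectors), not the cited papers' exact algorithm:

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP for the MMV model: Y ≈ D @ X with X row-sparse."""
    support = []
    R = Y.copy()
    for _ in range(k):
        # pick the atom with the largest total correlation across all vectors
        j = int(np.argmax(np.linalg.norm(D.T @ R, axis=1)))
        if j not in support:
            support.append(j)
        # joint least-squares refit on the shared support
        X_s, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ X_s
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support] = X_s
    return X

# toy MMV instance: 4 measurement vectors sharing the support {2, 5, 9}
rng = np.random.default_rng(1)
D = rng.standard_normal((40, 12))          # dictionary of 12 basis vectors
X_true = np.zeros((12, 4))
X_true[[2, 5, 9]] = rng.standard_normal((3, 4))
Y = D @ X_true
X_hat = somp(D, Y, k=3)
```

The shared support is what distinguishes MMV from solving each column independently: pooling correlations across columns makes the support estimate more robust.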
“…In addition, Yang et al [24] presented a similar framework for network pruning, which is described as follows:…”
Section: Neural Network Pruning
confidence: 99%