2021
DOI: 10.1103/physreve.104.054312

Training of sparse and dense deep neural networks: Fewer parameters, same performance

Abstract: Working with high-dimensional data is common practice in the field of machine learning. Identifying relevant input features is thus crucial in order to obtain a compact dataset more amenable to effective numerical handling. Further, by isolating the pivotal elements that form the basis of decision making, one can contribute, ex post, to models' interpretability, which has so far remained rather elusive. Here, we propose a novel method to estimate the relative importance of the input components for a Deep Neural Network. Th…
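The method sketched in the abstract assigns each input component a relative importance score. As a minimal Python sketch, assuming (this is our reading, not stated in the excerpt) that the score of input i is the magnitude of an eigenvalue learned during spectral training:

```python
import numpy as np

def rank_input_features(eigenvalues: np.ndarray, top_k: int) -> np.ndarray:
    """Indices of the top_k most relevant input components."""
    scores = np.abs(eigenvalues)             # importance = |lambda_i| (assumed)
    return np.argsort(scores)[::-1][:top_k]  # sort descending, keep top_k

# Example: keep the 50 most relevant of 784 inputs (e.g. MNIST pixels).
rng = np.random.default_rng(0)
lambdas = rng.normal(size=784)  # stand-in for trained eigenvalues
print(rank_input_features(lambdas, top_k=50)[:10])
```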

Cited by 7 publications (10 citation statements). References 30 publications.

“…This Section is devoted to reviewing the spectral approach to the training of deep neural networks. The discussion mainly follows [6], where an extension of the method originally introduced in [5] is presented. For the sake of completeness, let us emphasize a substantial difference between those works [5, 6] and the one proposed in this manuscript: in Giambagli et al. [5] and Chicchi et al. [6] the focus is on designing a training algorithm in the spectral domain, while in this work we propose a novel idea to effectively prune fully connected layers by exploiting the spectral approach to neural network training.…”
Section: Spectral Approach to Learning
Citation type: mentioning
Confidence: 99%
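To make the reviewed construction concrete, here is a minimal numpy sketch of the spectral parametrization described in [5, 6]: a dense layer mapping N1 inputs to N2 outputs is encoded in a square transfer matrix A = Φ Λ Φ⁻¹ of size N1 + N2, with Λ diagonal (the eigenvalues) and Φ the identity plus a free lower-left block. Variable names and shapes are our choices, not the papers' notation.

```python
import numpy as np

def spectral_weights(lam: np.ndarray, phi_block: np.ndarray) -> np.ndarray:
    """Effective N2 x N1 weight matrix from spectral parameters.

    lam       : eigenvalues of A, shape (N1 + N2,)
    phi_block : free eigenvector entries of Phi, shape (N2, N1)
    """
    n2, n1 = phi_block.shape
    lam_in, lam_out = lam[:n1], lam[n1:]
    # The off-diagonal block of Phi is nilpotent, so A = Phi @ Lambda @ inv(Phi)
    # reduces in closed form to w_ji = (lam_i - lam_{n1+j}) * phi_ji on the
    # lower-left block, which is the part that acts as the layer's weights.
    return (lam_in[None, :] - lam_out[:, None]) * phi_block

# Example: a 4 -> 3 layer.
rng = np.random.default_rng(1)
W = spectral_weights(rng.normal(size=7), rng.normal(size=(3, 4)))
x = rng.normal(size=4)
print(W @ x)  # pre-activations of the next layer
```

Training in the spectral domain then means optimizing the eigenvalues (and possibly the eigenvector block) instead of the weights directly; the pruning idea of the citing work acts on these spectral parameters.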
“…Here, Λ denotes the diagonal matrix of the eigenvalues of A. Following [6], we set […] for […] and […]. The remaining elements are initially assigned to random entries, as e.g.…”
Section: Spectral Approach to Learning
Citation type: mentioning
Confidence: 99%
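The precise prescription in the quoted passage is lost to extraction (the inline symbols were dropped), so the following is only a plausible Python sketch: we assume the diagonal of Φ is fixed to one, while the free eigenvector block and the eigenvalues start from small random draws.

```python
import numpy as np

def init_spectral_layer(n1: int, n2: int, scale: float = 0.1, seed: int = 0):
    """Hypothetical initialization; the exact prescription is in ref. [6]."""
    rng = np.random.default_rng(seed)
    lam = rng.normal(scale=scale, size=n1 + n2)         # trainable eigenvalues
    phi_block = rng.normal(scale=scale, size=(n2, n1))  # trainable eigenvectors
    # The diagonal of Phi is implicitly fixed to 1 and never trained.
    return lam, phi_block

lam, phi = init_spectral_layer(784, 128)
print(lam.shape, phi.shape)  # (912,), (128, 784)
```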
“…The targets of the optimization are the weights of the links that connect pairs of nodes belonging to adjacent stacks of the multilayered arrangement, in a fully coupled setting. An alternative training scheme has recently been proposed which anchors the learning to the reciprocal domain [8, 9]: the eigenvalues and the eigenvectors of the transfer operators get adjusted by the optimization. Spectral learning, so far engineered to deal with a feedforward organization, identifies key collective variables, the eigenvalues, which are more fundamental than any other (randomly selected) set of identical cardinality allocated in direct space.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
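The claim that the eigenvalues act as a small set of key collective variables can be made tangible with a quick count: per layer, there are N1 + N2 eigenvalues against N1 * N2 direct-space weights. A toy comparison in Python, for a hypothetical architecture not taken from the paper:

```python
def params_direct(sizes):
    """Weights of a fully connected network in direct space."""
    return sum(a * b for a, b in zip(sizes, sizes[1:]))

def params_eigenvalues_only(sizes):
    """Trainable numbers when only eigenvalues are optimized."""
    return sum(a + b for a, b in zip(sizes, sizes[1:]))

sizes = [784, 512, 512, 10]              # hypothetical MLP layer widths
print(params_direct(sizes))              # 668672
print(params_eigenvalues_only(sizes))    # 2842
```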