Adaptable VLSI Neural Network of Tens of Thousand Connections
Artificial Neural Networks, 1992
DOI: 10.1016/b978-0-444-89488-5.50127-5

Cited by 3 publications (2 citation statements)
References 2 publications
“…Pruning can be done either during training or after the model training. Other than pruning, eliminating the network redundancy without retraining [16], low rank approximation [17]–[21], fast Fourier transform (FFT) based convolutions [22], [23], quantization [24], binarization [25], [26], pruning [27], [28], sparsity regularization [29], [30], pruning low magnitude weights [31]–[33] are the common approaches. Knowledge distillation [34] refers to the process of training a smaller model to replicate the behavior of a larger, pre-trained model.…”
Section: Related Work (mentioning)
confidence: 99%
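
As a hedged illustration of the knowledge-distillation idea quoted above (a smaller student model trained to replicate the behavior of a larger, pre-trained teacher), the NumPy sketch below computes a temperature-softened distillation loss. The temperature, the weighting factor alpha, and the toy logits are illustrative assumptions, not details taken from the cited works.

```python
# Minimal sketch of a knowledge-distillation loss: the student is trained to
# match the teacher's softened output distribution plus the hard labels.
# Temperature, alpha, and the toy logits below are illustrative assumptions.
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature gives softer targets."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Weighted sum of (a) KL divergence between softened teacher and student
    distributions and (b) ordinary cross-entropy on the hard labels."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    # The T^2 factor keeps the soft-target term's scale comparable across temperatures.
    return np.mean(alpha * (temperature ** 2) * kl + (1 - alpha) * hard)

# Toy usage: 3 classes, batch of 2.
teacher = np.array([[4.0, 1.0, -1.0], [0.5, 3.0, 0.0]])
student = np.array([[2.5, 0.5, -0.5], [0.2, 2.0, 0.1]])
labels = np.array([0, 1])
print(distillation_loss(student, teacher, labels))
```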
“…Compared to the related works which often use a single threshold to prune parameters for the entire network [Han et al., 2015b, Zhao et al., 2019], AAP's layer-specific thresholds allow it to generate better pruned models, and these thresholds are also fully automatically tuned.…”
Section: Layer-aware Threshold Adjustment (mentioning)
confidence: 99%
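
To make the contrast in the statement above concrete, the sketch below compares magnitude pruning with a single network-wide threshold against pruning with a per-layer threshold derived from each layer's own weight statistics. The std-based per-layer rule, the layer names, and the sparsity target are illustrative assumptions, not the exact procedures of Han et al. (2015b), Zhao et al. (2019), or AAP.

```python
# Sketch: one global magnitude threshold vs. layer-specific thresholds.
# The per-layer rule (scale * std of that layer's weights) is an assumption
# loosely modeled on magnitude pruning; layer names and sizes are toy values.
import numpy as np

def prune_global(layers, sparsity=0.8):
    """Zero out the smallest-magnitude weights network-wide,
    using one threshold shared by every layer."""
    all_w = np.concatenate([w.ravel() for w in layers.values()])
    thresh = np.quantile(np.abs(all_w), sparsity)
    return {name: np.where(np.abs(w) < thresh, 0.0, w)
            for name, w in layers.items()}

def prune_layerwise(layers, scale=1.0):
    """Zero out weights below a per-layer threshold set from that
    layer's own weight distribution (scale * std)."""
    pruned = {}
    for name, w in layers.items():
        thresh = scale * np.std(w)
        pruned[name] = np.where(np.abs(w) < thresh, 0.0, w)
    return pruned

# Toy two-layer "network" whose layers have very different weight scales,
# the case where a single global threshold over- or under-prunes a layer.
rng = np.random.default_rng(0)
layers = {"fc1": rng.normal(0.0, 1.0, (64, 64)),
          "fc2": rng.normal(0.0, 0.1, (64, 10))}
for name, w in prune_global(layers).items():
    print("global   ", name, "sparsity:", np.mean(w == 0).round(3))
for name, w in prune_layerwise(layers).items():
    print("per-layer", name, "sparsity:", np.mean(w == 0).round(3))
```

Running the toy example shows the global threshold removing nearly all of the small-scale layer while the per-layer rule prunes each layer at a comparable rate, which is the behavior the quoted statement attributes to layer-specific thresholds.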