2017 22nd International Conference on Digital Signal Processing (DSP)
DOI: 10.1109/icdsp.2017.8096056

Finite precision implementation of random vector functional-link networks

Cited by 8 publications (9 citation statements)
References 27 publications
“…In the previous experiments, the weights in W_out being compressed were real-valued, since the proposed compression approach is real-valued as well. It is, however, known that neural networks can perform well even when the weights are limited to a few quantization levels [15,35]. Therefore, we did an experiment to demonstrate what to potentially expect for the quantization of W_out.…”
Section: Quantization of the Classifier
confidence: 99%
“…In the previous experiments, the weights in W_out being compressed were real-valued, since the proposed compression approach is real-valued as well. It is, however, known that neural networks can perform well even when the weights are limited to a few quantization levels [42,43]. Therefore, we did an experiment to demonstrate what to potentially expect for the quantization of W_out.…”
Section: Quantization of the Classifier
confidence: 99%
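Both statements refer to reducing the trained readout matrix W_out to a few quantization levels. As a point of reference only, here is a minimal NumPy sketch of uniform few-level weight quantization; the quantize_weights helper and the symmetric codebook are illustrative assumptions, not the scheme used in the cited works.

```python
import numpy as np

def quantize_weights(W, n_levels=3):
    """Uniformly quantize a weight matrix to n_levels values in [-max|W|, max|W|].

    A generic illustration of few-level weight quantization; the cited papers
    use their own schemes, which may differ from this sketch.
    """
    w_max = np.max(np.abs(W))
    # n_levels evenly spaced codewords, symmetric around zero
    levels = np.linspace(-w_max, w_max, n_levels)
    # map each weight to its nearest codeword
    idx = np.argmin(np.abs(W[..., None] - levels), axis=-1)
    return levels[idx]

# Example: ternary quantization of a random readout matrix
W_out = np.random.randn(10, 50)
W_q = quantize_weights(W_out, n_levels=3)
print(np.unique(W_q))  # three distinct values
```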
“…Here, both W_in and b are generated from a uniform distribution. Following [8], the range for W_in is [−1, 1], while the range for b is [−0.1, 0.1]. Since W_in and b are fixed, the process of training the RVFL is focused on learning the values of the readout matrix W_out.…”
Section: A Random Vector Functional Link
confidence: 99%
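The quoted passage specifies how the random parameters are drawn but leaves the readout solver open. A minimal sketch, assuming a sigmoid nonlinearity, direct input-output links, and a ridge-regression readout (all common RVFL choices that are not stated in the excerpt):

```python
import numpy as np

def train_rvfl(X, Y, n_hidden=100, reg=1e-3, seed=0):
    """Train an RVFL network as described in the quoted passage.

    W_in ~ U[-1, 1] and b ~ U[-0.1, 0.1] are fixed after sampling; only the
    readout matrix W_out is learned. The sigmoid activation, direct links,
    and regularized least-squares readout are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W_in = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-0.1, 0.1, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))  # random hidden features
    D = np.hstack([X, H])                      # direct input-output links
    # Regularized least squares: W_out = (D^T D + reg*I)^{-1} D^T Y
    W_out = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W_in, b, W_out
```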
“…The second scenario compares the results for the real-valued readout matrix against the considered strategies for obtaining an integer-valued readout matrix. The final scenario compares FPGA implementations of the proposed approach and the finite-precision RVFL [8] in the case of a limited energy budget. All reported results are based on 121 real-world classification datasets obtained from the UCI Machine Learning Repository.…”
Section: Performance Evaluation
confidence: 99%
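The excerpt does not detail the strategies used to obtain an integer-valued readout matrix. A simple scale-and-round baseline, given purely as an illustrative assumption rather than the method compared in the citing paper:

```python
import numpy as np

def integer_readout(W_out, n_bits=8):
    """Map a real-valued readout matrix to signed n_bits integers by scaling
    to the representable range and rounding. An illustrative baseline only.
    """
    q_max = 2 ** (n_bits - 1) - 1
    scale = q_max / np.max(np.abs(W_out))
    W_int = np.round(W_out * scale).astype(np.int32)
    return W_int, scale  # predictions use (D @ W_int) / scale
```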