2020
DOI: 10.3390/s20154191

FPGA-Based Implementation of Stochastic Configuration Networks for Regression Prediction

Abstract: The implementation of neural-network regression prediction in digital circuits is one of the challenging problems in machine learning and cognitive recognition, and it is also an effective way to relieve network load in the era of intelligence. As a nonlinear network, the stochastic configuration network (SCN) is considered an effective method for regression prediction due to its good learning and generalization performance. Therefore, in this paper, we adapt the SCN…
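The abstract refers to the stochastic configuration network algorithm of Wang and Li, which grows the hidden layer one node at a time under a supervisory inequality. As a reading aid, here is a minimal single-output NumPy sketch of that construction, not the paper's FPGA pipeline; the function name, parameter ranges, and candidate count are illustrative assumptions, and the inequality is used in its common simplified form.

```python
import numpy as np

def train_scn(X, y, L_max=50, tol=1e-3, lambdas=(1, 5, 10, 50), r=0.99, n_cand=100):
    """Sketch of SCN training: add sigmoid hidden nodes one at a time,
    keeping only random candidates that satisfy the supervisory inequality,
    and re-fit the output weights by global least squares after each addition."""
    n, d = X.shape
    H = np.empty((n, 0))            # hidden-layer output matrix
    e = y.copy()                    # current residual
    W, b, beta = [], [], np.zeros(0)

    for L in range(L_max):
        mu = (1.0 - r) / (L + 1)
        best = None
        for lam in lambdas:         # widen the sampling range if needed
            for _ in range(n_cand):
                w_c = np.random.uniform(-lam, lam, d)
                b_c = np.random.uniform(-lam, lam)
                h = 1.0 / (1.0 + np.exp(-(X @ w_c + b_c)))
                # simplified supervisory inequality: the candidate node must
                # explain enough of the current residual to be admissible
                xi = (e @ h) ** 2 / (h @ h) - (1.0 - r - mu) * (e @ e)
                if xi > 0 and (best is None or xi > best[0]):
                    best = (xi, w_c, b_c, h)
            if best is not None:
                break
        if best is None:
            break                   # no admissible candidate: stop growing
        _, w_c, b_c, h = best
        W.append(w_c); b.append(b_c)
        H = np.column_stack([H, h])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
        e = y - H @ beta
        if np.sqrt(np.mean(e ** 2)) < tol:
            break
    return np.array(W), np.array(b), beta
```

Prediction then follows as `sigmoid(X @ W.T + b) @ beta`; this fixed forward pass is what a hardware implementation of the trained network needs to realize.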

Cited by 11 publications (6 citation statements) · References 29 publications
“…Ngah et al. (2016) [33] designed a differential look-up-table-based sigmoid function with limited accuracy. Gao et al. (2020) [34] designed an improved sigmoid function on an FPGA for regression problems, but it operates at a slow speed. Xie et al. (2020) [35] implemented a twofold LUT-based tanh function, but at a high hardware cost.…”
Section: Related Work on Activation Functions
confidence: 99%
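The LUT-based sigmoid designs discussed in the excerpt above share one idea: tabulate the activation on a fixed grid so that evaluation becomes a memory read. The sketch below is an illustrative Python model of such a table (the input range, table size, and use of the symmetry sigmoid(-x) = 1 - sigmoid(x) are my assumptions, not details of the cited designs):

```python
import numpy as np

X_MAX = 8.0                      # inputs beyond +/- X_MAX saturate to 1 or 0
TABLE_BITS = 8                   # 256-entry table, e.g. one block RAM
STEP = X_MAX / (1 << TABLE_BITS)
LUT = 1.0 / (1.0 + np.exp(-np.arange(1 << TABLE_BITS) * STEP))

def lut_sigmoid(x):
    """Approximate sigmoid(x) with a half-range look-up table."""
    idx = min(int(abs(x) / STEP), (1 << TABLE_BITS) - 1)   # quantize and clamp
    y = LUT[idx]
    return y if x >= 0 else 1.0 - y   # recover the negative half by symmetry

# quick accuracy check against the exact sigmoid
xs = np.linspace(-10, 10, 2001)
err = max(abs(lut_sigmoid(v) - 1.0 / (1.0 + np.exp(-v))) for v in xs)
print(f"max abs error with a {1 << TABLE_BITS}-entry table: {err:.4f}")
```

A finer step or interpolation between entries trades block-RAM usage for accuracy, which is the cost/precision/speed tension the three cited designs resolve differently.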
“…The MNIST dataset is selected to test the accuracy of each model. Table 1 shows the comparison between this paper, the CPU, the GPU, and the works in [3] and [10] in terms of model prediction time and accuracy. Among them, in…”
Section: Model Accuracy Test
confidence: 99%
“…As a result, the training procedure had to be executed for every single chip. This also held for mixed-signal realizations, which featured a dispersion of parameters, thus providing high performance [41] and supporting ultra-low-power implementations of the RBN inference step [31,42,43]. Purely digital implementations for resource-constrained devices were presented in [7,8,21,44]. Those works explored design strategies that removed the need for multipliers in digital architectures.…”
Section: Implementation of Randomization-Based Networks on Resource-Co...
confidence: 99%
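The multiplier-free strategies mentioned in the excerpt above typically constrain weights so that each multiplication collapses into a shift and an add. A hedged Python illustration of one such scheme, rounding weights to signed powers of two, is sketched below; the bit widths and helper names are my own assumptions, not taken from the cited works:

```python
import numpy as np

def quantize_pow2(w, max_shift=7):
    """Round each weight to a signed power of two: w ~ sign * 2**(-k)."""
    sign = np.sign(w)
    k = np.clip(np.round(-np.log2(np.maximum(np.abs(w), 2.0 ** -max_shift))),
                0, max_shift).astype(int)
    return sign, k

def shift_add_dot(x_int, sign, k):
    """Dot product using only shifts and adds on integer activations."""
    acc = 0
    for xi, si, ki in zip(x_int, sign, k):
        acc += int(si) * (int(xi) >> int(ki))   # multiply replaced by a shift
    return acc

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, 8)
x = rng.integers(0, 256, 8)                  # 8-bit activations
s, k = quantize_pow2(w)
print(shift_add_dot(x, s, k), float(x @ w))  # shift-add result vs. exact dot product
```

The weight-quantization error is the price paid for removing the hardware multipliers.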
“…Gao et al. [44] presented an implementation of RBNs based on a three-stage architecture. The network was deployed on a Xilinx XC7 with a 50 MHz clock frequency.…”
Section: Comparison with Literature
confidence: 99%