2019 · DOI: 10.1007/s11063-019-10165-y
A Design Strategy for the Efficient Implementation of Random Basis Neural Networks on Resource-Constrained Devices

Cited by 16 publications (22 citation statements)
References 25 publications
“…In [17] and [18], the authors propose an efficient decomposition method to accelerate the computation of the pseudo-inverse of the hidden-layer output matrix. In [19], the properties of random networks and hard-limiter activation functions are exploited to implement ELM on an FPGA. For SoC design, [20] efficiently implements online sequential ELM for real-time applications.…”
Section: Related Work for ELM Hardware Implementation
confidence: 99%
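The pseudo-inverse solve mentioned in this statement is the core of ELM training: hidden weights stay random and fixed, and only the output weights are computed in closed form. A minimal sketch follows, using NumPy's general-purpose `pinv` rather than the accelerated decomposition of [17], [18]; all sizes and names are illustrative, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=40):
    """Basic ELM fit: random fixed hidden layer, pseudo-inverse output solve."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights, never tuned
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression target: y = x0 + x1 on [-1, 1]^2.
X = rng.uniform(-1, 1, size=(200, 2))
T = (X[:, 0] + X[:, 1]).reshape(-1, 1)
W, b, beta = elm_train(X, T)
mse = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

The single matrix `pinv` is exactly the step that hardware-oriented work decomposes or approximates, since a full SVD-based pseudo-inverse is expensive on embedded targets.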
“…These paradigms feature fast learning and an efficient forward phase, since the hidden parameters are not tuned during the learning process. The literature shows that SLFNs [41, 46] offer a viable solution for low-power, resource-constrained digital implementations of the inference function [47, 48], and even make on-device support of the training process possible on dedicated systems on chip [49, 50, 51, 52]. In addition, SLFNs can be straightforwardly extended to online sequential learning [53].…”
Section: Related Work
confidence: 99%
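The online sequential extension mentioned at the end of this statement updates the output weights chunk by chunk with recursive least squares instead of re-solving the full pseudo-inverse. A hedged sketch of the standard OS-ELM update is below; the ridge term and all sizes are illustrative assumptions, not details from [53].

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random hidden layer (illustrative sizes).
n_hidden = 30
W = rng.standard_normal((2, n_hidden))
b = rng.standard_normal(n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

def os_elm_init(X0, T0):
    """Batch initialization on the first data chunk."""
    H0 = hidden(X0)
    # Small ridge term keeps the inverse well-conditioned (an assumption here).
    P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
    beta = P @ H0.T @ T0
    return P, beta

def os_elm_update(P, beta, X, T):
    """Recursive least-squares update for one new chunk."""
    H = hidden(X)
    S = np.linalg.inv(np.eye(len(X)) + H @ P @ H.T)
    P = P - P @ H.T @ S @ H @ P
    beta = beta + P @ H.T @ (T - H @ beta)
    return P, beta

# Stream a toy target y = x0 - x1 in chunks.
f = lambda X: (X[:, 0] - X[:, 1]).reshape(-1, 1)
X0 = rng.uniform(-1, 1, (100, 2))
P, beta = os_elm_init(X0, f(X0))
for _ in range(5):
    Xc = rng.uniform(-1, 1, (20, 2))
    P, beta = os_elm_update(P, beta, Xc, f(Xc))
```

Only the small `n_hidden × n_hidden` matrix `P` and `beta` persist between chunks, which is what makes this formulation attractive for on-device training.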
“…Within the paradigm of edge computing, the local embedded system implements only the inference function, receiving its parameters from the cloud. The literature provides examples of hardware-friendly implementations of inference functions [40][41][42] that could be integrated into our low-resource system. These works exploit the paradigm of random basis neural networks to implement an inference function supported by a single-hidden-layer feedforward neural network.…”
Section: IoT-Based Predictive System
confidence: 99%
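A hardware-friendly random-basis inference function of the kind these statements describe typically combines fixed random integer weights with a hard-limiter activation, so the hidden layer reduces to integer multiply-accumulates and comparators. The sketch below illustrates that split between cloud-side training and device-side inference; the sizes, toy task, and weight ranges are assumptions for illustration, not the designs of [40][41][42].

```python
import numpy as np

rng = np.random.default_rng(2)

def hard_limiter(z):
    # Threshold activation: +1 / -1, a single comparator in hardware.
    return np.where(z >= 0, 1.0, -1.0)

# Fixed random hidden layer with small integer weights (fixed-point friendly).
n_in, n_hidden = 2, 64
W = rng.integers(-8, 8, size=(n_in, n_hidden)).astype(float)
b = rng.integers(-8, 8, size=n_hidden).astype(float)

def infer(X, beta):
    """Device-side forward pass: integer MACs, comparators, one output product."""
    return hard_limiter(X @ W + b) @ beta

# Cloud/host side: fit output weights once on a toy classification task
# (label +1 when x0 > x1, else -1), then ship beta to the device.
X = rng.uniform(-4, 4, (300, 2))
T = ((X[:, 0] > X[:, 1]).astype(float).reshape(-1, 1) * 2) - 1
H = hard_limiter(X @ W + b)
beta = np.linalg.pinv(H) @ T
acc = float(np.mean(np.sign(infer(X, beta)) == T))
```

The point of the split is that the embedded side never touches the pseudo-inverse: it stores `W`, `b`, and the downloaded `beta`, and evaluates `infer` with cheap integer arithmetic.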