2014
DOI: 10.1007/s10489-013-0501-1
Hardware implementation methods in Random Vector Functional-Link Networks

Cited by 22 publications (6 citation statements)
References 36 publications
“…With respect to hyperdimensional computing, the best starting point is the tutorial-like article [5] by Kanerva. Since even conventional RVFLs are considered one of the simplest approaches to machine learning, this explains why efforts to push their resource-efficiency to the extreme have been limited. The most relevant works in this direction are [8], [18]. Similar to the present study, both works use FPGAs for hardware experiments.…”
Section: Related Work
confidence: 84%
“…The computational complexity of the RLS is dominated by the N × N matrix inversion in (2), though the other matrix operations are also important factors. These matrix calculations typically place a high computational burden on hardware and may cause numerical instabilities [16]. Methods for computing (HᵀH + λI)⁻¹ (with the commonly used QR decomposition, for example) can be parallelized using graphical processing unit (GPU) clusters [5].…”
Section: Methods
confidence: 99%
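The passage above contrasts explicit inversion of the regularized Gram matrix (HᵀH + λI) with better-conditioned alternatives. A minimal numpy sketch of the ridge-regularized readout, assuming illustrative matrix sizes and names (H for hidden activations, T for targets, lam for λ, none taken from the cited works):

```python
import numpy as np

# Hedged sketch of the RLS readout solve discussed in the quote.
# All sizes and names are illustrative, not from [16] or (2).
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 20))   # hidden-layer activations: 100 samples, 20 units
T = rng.standard_normal((100, 3))    # targets: 3 outputs
lam = 1e-2                           # ridge regularization constant

# Naive route: explicitly invert the regularized Gram matrix (H^T H + lam*I).
W_inv = np.linalg.inv(H.T @ H + lam * np.eye(20)) @ H.T @ T

# Better-conditioned route: solve the linear system without forming the inverse.
W_solve = np.linalg.solve(H.T @ H + lam * np.eye(20), H.T @ T)

assert np.allclose(W_inv, W_solve, atol=1e-8)
```

On well-conditioned data the two routes agree; the quoted concern is that explicit inversion amplifies rounding error when the Gram matrix is near-singular, which is why factorization-based solvers are preferred in hardware.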
“…However, directly computing the matrix inverse (HᵀH + λI)⁻¹ is numerically unstable and ill-advised. Instead, we consider a different approach that uses QR decomposition to factor (HᵀH + λI)⁻¹ [16]; this step allows us to solve for the individual rows w^(j) of W_out without computing matrix inverses. Please consult [16] for more details on this approach.…”
Section: Computational Cost of RLS and GLVQ Classifiers
confidence: 99%
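One standard way to realize the QR-based approach the quote describes is to factor an augmented system whose normal equations coincide with the ridge problem, so no inverse is ever formed. A hedged sketch, assuming illustrative names and sizes (this is the common augmented-QR trick, not necessarily the exact construction of [16]):

```python
import numpy as np

# Ridge solution via QR on the augmented system [H; sqrt(lam)*I] w = [T; 0],
# whose normal equations are (H^T H + lam*I) w = H^T T. Names are illustrative.
rng = np.random.default_rng(1)
N, n, c = 200, 30, 4
H = rng.standard_normal((N, n))
T = rng.standard_normal((N, c))
lam = 1e-1

A = np.vstack([H, np.sqrt(lam) * np.eye(n)])   # augmented design matrix
B = np.vstack([T, np.zeros((n, c))])           # zero-padded targets
Q, R = np.linalg.qr(A)                         # thin QR: R is n x n upper triangular
W_qr = np.linalg.solve(R, Q.T @ B)             # back-substitution, no explicit inverse

# Cross-check against the normal-equation solve.
W_ne = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ T)
assert np.allclose(W_qr, W_ne, atol=1e-8)
```

Because R is triangular, each output column (each row w^(j) of W_out, in the quote's notation) is recovered by cheap back-substitution, which maps well onto the hardware the paper targets.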
“…To this end, they also investigated how RVFL networks can be customized to offer a practical tool for density estimation. RVFL networks have gained increasing popularity in the last few years, and numerous additional applications have been explored, which are briefly summarized in Section 1 of Martínez‑Villena et al. An extensive evaluation of different RVFL variants is given in Zhang and Suganthan, particularly showing significant improvements from the inclusion of the input/output links. RVFL networks have also been explored in semisupervised scenarios, wherein only part of the training data is labeled, and more recently in a multilayered configuration using a variation of the autoencoder network.…”
Section: Feedforward Network with Random Weights
confidence: 99%
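The quote above credits the direct input/output links with significant accuracy improvements. A minimal sketch of an RVFL forward pass with such direct links, assuming illustrative dimensions and randomly drawn weights (in practice W_out would be fit by the RLS solve discussed earlier, not drawn at random):

```python
import numpy as np

# Hedged sketch of an RVFL forward pass with direct input-output links.
# All sizes and weights are illustrative placeholders.
rng = np.random.default_rng(2)
d, n, c, N = 8, 32, 3, 16            # inputs, hidden units, outputs, samples
X = rng.standard_normal((N, d))

W_in = rng.standard_normal((d, n))   # random, fixed input weights (never trained)
b = rng.standard_normal(n)           # random biases
H = np.tanh(X @ W_in + b)            # random nonlinear hidden features

D = np.hstack([H, X])                # direct links: concatenate raw inputs to features
W_out = rng.standard_normal((n + d, c))  # placeholder; learned by RLS in practice
Y = D @ W_out

assert Y.shape == (N, c)
```

Concatenating the raw inputs lets the linear readout model any linear trend directly, leaving the random hidden layer to capture only the nonlinear residual, which is the usual explanation for the improvement the quote reports.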