FPGA Implementations of Feed Forward Neural Network by using Floating Point Hardware Accelerators
2014
DOI: 10.15598/aeee.v12i1.831

Cited by 15 publications (10 citation statements)
References 30 publications
“…Two alternatives were used: the full-precision activation function, computed using the math.h C library, and a polynomial interpolation of the activation function, pre-computed in Matlab and composed of a fit of five 2nd-order polynomials [18]. The second solution is obviously a tradeoff… (Table 2: Code configurations for best performance and best precision.)”
Section: Results (mentioning)
confidence: 99%
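A minimal C sketch of such a piecewise 2nd-order interpolation of the activation function is given below. It follows the five-segment idea in the quote, but the coefficients are derived at start-up by three-point Lagrange interpolation of tanh() over [-4, 4] rather than taken from the Matlab fit of [18]; the range, segment count, and all names here are illustrative assumptions, not the cited design.

/*
 * Sketch: piecewise 2nd-order polynomial approximation of tanh built
 * from five segments over [-4, 4].  The cited work pre-computes the
 * fit in Matlab; here each segment is interpolated through three
 * points of tanhf() so the example is self-contained.
 */
#include <math.h>
#include <stdio.h>

#define N_SEG   5
#define X_MIN  -4.0f
#define X_MAX   4.0f

static float coef[N_SEG][3];   /* coef[i] = {a, b, c} for a*x*x + b*x + c */

/* Quadratic through (x0,y0), (x1,y1), (x2,y2) by Lagrange interpolation. */
static void fit_segment(float x0, float x1, float x2,
                        float *a, float *b, float *c)
{
    float y0 = tanhf(x0), y1 = tanhf(x1), y2 = tanhf(x2);
    float d0 = (x0 - x1) * (x0 - x2);
    float d1 = (x1 - x0) * (x1 - x2);
    float d2 = (x2 - x0) * (x2 - x1);
    *a = y0 / d0 + y1 / d1 + y2 / d2;
    *b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2);
    *c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2;
}

static void init_poly_tanh(void)
{
    float w = (X_MAX - X_MIN) / N_SEG;
    for (int i = 0; i < N_SEG; i++) {
        float x0 = X_MIN + i * w;
        fit_segment(x0, x0 + 0.5f * w, x0 + w,
                    &coef[i][0], &coef[i][1], &coef[i][2]);
    }
}

/* Fast activation: quadratic segment inside [-4, 4], saturation outside. */
static float poly_tanh(float x)
{
    if (x <= X_MIN) return -1.0f;
    if (x >= X_MAX) return  1.0f;
    int i = (int)((x - X_MIN) * (N_SEG / (X_MAX - X_MIN)));
    if (i >= N_SEG) i = N_SEG - 1;
    return (coef[i][0] * x + coef[i][1]) * x + coef[i][2];   /* Horner form */
}

int main(void)
{
    init_poly_tanh();
    for (float x = -4.0f; x <= 4.0f; x += 1.0f)
        printf("x=% .1f  tanh=% .5f  poly=% .5f\n", x, tanhf(x), poly_tanh(x));
    return 0;
}

Evaluating one segment costs two multiply-adds in Horner form plus an index computation, which is the kind of speed-versus-precision tradeoff the quoted passage refers to.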
“…Networks were trained and validated in the Matlab® environment according to the methodology described in [16,35]. To enhance the embedded performance of the NN, a speedup of the activation function of the hidden neurons was obtained, following [18,19], by computing it through a 2nd-degree polynomial interpolating function.…”
Section: Neural Estimator (mentioning)
confidence: 99%
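As a rough illustration of where such a fast activation function sits inside a neural estimator, the sketch below evaluates one dense feed-forward layer and applies the activation to each hidden neuron. The layer interface, the row-major weight layout, and the fast_af() placeholder are assumptions for illustration only; fast_af() could be the polynomial routine sketched earlier or the full-precision tanhf() from math.h.

/*
 * Sketch of one dense hidden layer of a feed-forward estimator:
 * out = AF(W * in + b), with W stored row-major (n_out x n_in).
 * Weight values and layer sizes are hypothetical.
 */
#include <math.h>
#include <stddef.h>

float fast_af(float x) { return tanhf(x); }  /* placeholder for a fast AF */

void dense_layer(const float *W, const float *b,
                 const float *in, float *out,
                 size_t n_in, size_t n_out)
{
    for (size_t i = 0; i < n_out; i++) {
        float acc = b[i];                     /* bias of neuron i */
        for (size_t j = 0; j < n_in; j++)
            acc += W[i * n_in + j] * in[j];   /* weighted sum of inputs */
        out[i] = fast_af(acc);                /* activation of neuron i */
    }
}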
“…A variation of the simple LUT is the RA-LUT (Range-Addressable LUT), where each sample corresponds not to a single point in the domain but to a neighborhood of that point. In [39] the authors propose a comparison between two FPGA architectures that use floating-point accelerators based on RA-LUTs to compute fast AFs. The first solution, refined from the one proposed in [40, 41], implements the NN on a soft processor and computes the AFs through a smartly spaced RA-LUT.…”
Section: Activation Functions for Fast Computation (mentioning)
confidence: 99%
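The range-addressable idea described above can be sketched in a few lines of C: each stored sample answers for a whole neighborhood of inputs, so a lookup reduces to one index computation and one memory read, with no interpolation. The table size, input range, uniform spacing, and sigmoid choice below are illustrative assumptions, not the parameters of the cited designs.

/*
 * Sketch of a Range-Addressable LUT (RA-LUT) for a sigmoid AF:
 * one stored sample covers a whole cell of the input range.
 */
#include <math.h>

#define RALUT_SIZE  256
#define RALUT_MIN  -8.0f
#define RALUT_MAX   8.0f

static float ralut[RALUT_SIZE];

void ralut_init(void)
{
    float step = (RALUT_MAX - RALUT_MIN) / RALUT_SIZE;
    for (int i = 0; i < RALUT_SIZE; i++) {
        /* sample the full-precision sigmoid at the centre of each cell */
        float x = RALUT_MIN + (i + 0.5f) * step;
        ralut[i] = 1.0f / (1.0f + expf(-x));
    }
}

float ralut_sigmoid(float x)
{
    if (x <= RALUT_MIN) return 0.0f;   /* saturate below the table range */
    if (x >= RALUT_MAX) return 1.0f;   /* saturate above the table range */
    int i = (int)((x - RALUT_MIN) * (RALUT_SIZE / (RALUT_MAX - RALUT_MIN)));
    if (i >= RALUT_SIZE) i = RALUT_SIZE - 1;
    return ralut[i];                   /* one sample serves the whole cell */
}

In this software model the cell index comes from a simple scale-and-truncate of the input; the point of the range-addressable scheme is that no interpolation between neighboring samples is needed, at the cost of a step-shaped approximation error.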
“…A combination of LUT and PWL approximation is also used in [56, 57], where different authors investigate fixed-point approximations for the synthesis of exponential and sigmoid AFs. In [39, 58] the authors propose a piecewise 2nd-degree polynomial approximation of the activation function for both the sigmoid and the hyperbolic tangent AFs. Performance, required resources, and precision degradation are compared to full-precision and RA-LUT solutions.…”
Section: Activation Functions for Fast Computation (mentioning)
confidence: 99%
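A simple way to reproduce the kind of precision comparison mentioned above is to sweep the input range and record the worst-case deviation of an approximated AF from its full-precision math.h reference, as in the small harness below. The function-pointer interface and names are illustrative assumptions, not the evaluation procedure of the cited papers.

/*
 * Sketch of a precision check: report the maximum absolute error of an
 * approximated activation function against a reference over [lo, hi].
 */
#include <math.h>

typedef float (*af_fn)(float);

float max_abs_error(af_fn approx, af_fn reference,
                    float lo, float hi, float step)
{
    float worst = 0.0f;
    for (float x = lo; x <= hi; x += step) {
        float err = fabsf(approx(x) - reference(x));
        if (err > worst)
            worst = err;
    }
    return worst;
}

/* Example: max_abs_error(ralut_sigmoid_or_poly_tanh, tanhf, -4.0f, 4.0f, 1e-3f); */

The same harness can grade a piecewise-polynomial AF or an RA-LUT AF against tanhf() or the exact sigmoid, which mirrors the full-precision versus RA-LUT comparison reported in the quote.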