2006
DOI: 10.1007/0-387-28487-7

FPGA Implementations of Neural Networks

Abstract: In this article, I discuss the implementation and use of neural networks on an FPGA [1] based system, the topology of neural network deployment on an FPGA, and the advantages of FPGA-based neural networks over CPU-based ones. I also discuss the requirements for neural networks deployed in embedded systems, concluding with the advantages and disadvantages of the neural networks produced by FPGA manufacturers, as well as the advantages of using HLS [2] (Hig…
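As a concrete illustration of the kind of computation such systems map onto FPGA fabric, below is a minimal sketch of a single neuron evaluated in 16-bit fixed-point (Q8.8) arithmetic, the style an HLS tool could synthesize to DSP blocks. The function name, the Q8.8 format, and the input width are illustrative assumptions, not taken from the book.

```cpp
#include <cstdint>

// Illustrative sketch (not from the book): one neuron in Q8.8 fixed point,
// i.e. 8 integer bits and 8 fractional bits, so 1.0 is represented as 256.
constexpr int N_INPUTS = 4;
constexpr int FRAC_BITS = 8;

// Multiply-accumulate over the inputs, then a hard-limit (step) activation.
// A Q8.8 * Q8.8 product is Q16.16, so we accumulate in a wider int32_t.
int16_t neuron_q8_8(const int16_t x[N_INPUTS], const int16_t w[N_INPUTS],
                    int16_t bias) {
    int32_t acc = static_cast<int32_t>(bias) << FRAC_BITS; // bias widened to Q16.16
    for (int i = 0; i < N_INPUTS; ++i) {
        acc += static_cast<int32_t>(x[i]) * w[i]; // Q8.8 * Q8.8 -> Q16.16
    }
    int32_t net = acc >> FRAC_BITS; // back to Q8.8
    // Step activation: output 1.0 (256 in Q8.8) if net > 0, else 0.0.
    return net > 0 ? int16_t(1 << FRAC_BITS) : int16_t(0);
}
```

On an FPGA the loop would typically be unrolled or pipelined so that each multiply-accumulate maps to a dedicated DSP slice, which is the source of the speedup over a sequential CPU implementation.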

Cited by 201 publications (3 citation statements)
References 57 publications
“…They are data-driven non-linear models that can identify the relationship between the input variables and output variables of interest (Anctil, Filion, & Tournebize, 2009). In general, NNs consist of (a) a set of inputs, (b) a set of synaptic connections whose weights represent its applicability and (c) an activation function relating the input to the output of individual neurons (Omondi, Rajapakse, & Bajger, 2006). As NNs extract a set of rules from training datasets and derive outputs by processing a large number of neurons (Omondi et al, 2006), they work well even in the presence of noise or measurement errors (Govindaraju, 2000).…”
Section: Introduction (mentioning, confidence: 99%)
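The three ingredients named in the quotation above (a set of inputs, weighted synaptic connections, and an activation function) can be sketched in a few lines. The logistic activation and the function name here are illustrative assumptions, not details from the cited works.

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch: (a) inputs, (b) weighted synaptic connections,
// (c) an activation function mapping the net input to the neuron's output.
double neuron_output(const std::vector<double>& inputs,
                     const std::vector<double>& weights, double bias) {
    double net = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        net += weights[i] * inputs[i];    // (b) weighted sum of (a) the inputs
    return 1.0 / (1.0 + std::exp(-net));  // (c) logistic activation, output in (0, 1)
}
```

Because the output depends on the weighted sum rather than any single input, small perturbations of individual inputs are averaged out, which is one intuition for the noise tolerance the quotation mentions.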
“…In general, NNs consist of (a) a set of inputs, (b) a set of synaptic connections whose weights represent its applicability and (c) an activation function relating the input to the output of individual neurons (Omondi, Rajapakse, & Bajger, 2006). As NNs extract a set of rules from training datasets and derive outputs by processing a large number of neurons (Omondi et al, 2006), they work well even in the presence of noise or measurement errors (Govindaraju, 2000). However, it is important to point out that NNs are black-box models often unable to provide a physical basis for a system, especially since they are typically over-parameterized (Anctil et al, 2009).…”
Section: Introduction (mentioning, confidence: 99%)
“…MAPLE [3], which is an FPGA-based training accelerator, has been proposed to speed up both training and inference by accelerating the matrix product operations on an arithmetic-unit basis. However, there are few studies specializing in architectures for the training process [4]-[6]. Although a system using FPGA clusters was recently proposed [7], [8] to accelerate training, according to research [6], the bottleneck in designing a fast training accelerator is the memory size.…”
Section: Multilayer Perceptron (MLP) Training (mentioning, confidence: 99%)
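The memory-size bottleneck mentioned above can be made concrete with a back-of-envelope count: training an MLP must hold the weight matrices plus the per-batch activations that are saved for the backward pass. The layer sizes, batch size, and names below are illustrative assumptions, not figures from the cited works (biases are omitted for brevity).

```cpp
#include <cstdint>

// Illustrative sketch: element counts an MLP training accelerator must
// keep resident. Weights are needed for both passes; activations of every
// layer are retained so the backward pass can form its matrix products.
struct MlpMemory {
    uint64_t weights;      // total weight-matrix elements
    uint64_t activations;  // total activation elements for one mini-batch
};

MlpMemory mlp_training_footprint(const int* layer, int n_layers, int batch) {
    MlpMemory m{0, 0};
    for (int l = 0; l + 1 < n_layers; ++l)
        m.weights += uint64_t(layer[l]) * layer[l + 1]; // W_l is layer[l] x layer[l+1]
    for (int l = 0; l < n_layers; ++l)
        m.activations += uint64_t(batch) * layer[l];    // saved for backprop
    return m;
}
```

Even for a small 784-256-10 network at batch size 32, the weights alone exceed 200k elements, which at 32 bits each already strains the on-chip block RAM of many FPGAs; this is the arithmetic behind treating memory, not compute, as the design bottleneck.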