2020
DOI: 10.3390/electronics9122193

An Approach of Feed-Forward Neural Network Throughput-Optimized Implementation in FPGA

Abstract: Artificial Neural Networks (ANNs) have become an accepted approach for a wide range of challenges. Meanwhile, the advancement of chip manufacturing processes is approaching saturation, which calls for new computing solutions. This work presents a novel approach to FPGA-based accelerator development for fully connected feed-forward neural networks (FFNNs). A specialized tool was developed to facilitate different implementations, which splits the FFNN into elementary layers, allocates computational resources and g…

Cited by 28 publications (22 citation statements). References 27 publications.

“…For further investigations on accuracy loss, a fault model of the FPGA under voltage scaling can be extracted to study the behavior of larger models at reduced voltages. The RTL of this model was generated using the tools provided in [27].…”
Section: Experiments and Results
confidence: 99%
“…Likewise, at the output, an accumulator together with a comparator verifies integrity against the ABFT checksums, and the results are sent to the control processor. The tool chain [27] was modified to automatically generate the RTL code needed to integrate ABFT into the neural model computations. In addition to the network output, the results of the checksum inspection blocks are routed back to the processor to report possible detections during the inference phase, as a single-value output through AXI interfaces [25].…”
Section: B. Error Detection Through ABFT
confidence: 99%
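
The accumulator-and-comparator check quoted above can be illustrated with a short behavioral model. The C sketch below is only an assumption-laden software analogue of that circuit logic: for a layer whose weight matrix carries an extra column-sum checksum row, the extra output element should equal the sum of the regular outputs, so an accumulator sums the outputs and a comparator flags any mismatch. The names and sizes (abft_check, N_OUT, TOL) are illustrative, not taken from the cited design.

#include <stdio.h>

/* Behavioral sketch of the ABFT output check: the weight matrix is assumed
 * to carry one extra checksum row holding its column sums, so the extra
 * output element should equal the sum of the regular outputs.
 * N_OUT and TOL are illustrative assumptions. */
#define N_OUT 4          /* regular outputs of the layer                */
#define TOL   1e-3f      /* tolerance for rounding in the datapath      */

/* Accumulator + comparator: returns 1 if a fault is detected. */
int abft_check(const float y[N_OUT], float y_checksum)
{
    float acc = 0.0f;                       /* accumulator over outputs  */
    for (int i = 0; i < N_OUT; i++)
        acc += y[i];

    float diff = acc - y_checksum;          /* comparator against checksum */
    if (diff < 0.0f) diff = -diff;
    return diff > TOL;                      /* 1 -> mismatch reported     */
}

int main(void)
{
    /* Outputs as produced by the layer, plus the checksum output element. */
    float y[N_OUT]   = { 0.5f, -1.25f, 2.0f, 0.75f };
    float y_checksum = 2.0f;                /* expected sum of y          */

    printf("fault detected: %d\n", abft_check(y, y_checksum));
    return 0;
}

The tolerance accounts for rounding in fixed-point or floating-point arithmetic; an exact equality check would suffice for a purely integer datapath.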
“…Firmware was written for the ARM processor in the SoC to control the FPGA acting as a neural accelerator. The processor feeds the network with input vectors and fetches the output via AXI ports [27]. For further investigations on accuracy loss, a fault model of the FPGA under voltage scaling can be extracted to study the behavior of larger models at reduced voltages.…”
Section: Experiments and Results
confidence: 99%
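
The firmware flow described in this quote (feed input vectors, start the accelerator, fetch the outputs over AXI) is sketched below as minimal bare-metal C. The base address, register offsets, and vector sizes are hypothetical placeholders, not the memory map of the cited accelerator.

#include <stdint.h>

/* Minimal bare-metal sketch of the firmware flow: the ARM core writes an
 * input vector to the accelerator over AXI, starts inference, polls a
 * status flag, and reads the outputs back.  All addresses, offsets, and
 * sizes below are hypothetical placeholders. */
#define ACC_BASE      0x43C00000u               /* assumed AXI base address */
#define REG_CTRL      (*(volatile uint32_t *)(ACC_BASE + 0x00))
#define REG_STATUS    (*(volatile uint32_t *)(ACC_BASE + 0x04))
#define REG_INPUT(i)  (*(volatile uint32_t *)(ACC_BASE + 0x100 + 4u * (i)))
#define REG_OUTPUT(i) (*(volatile uint32_t *)(ACC_BASE + 0x200 + 4u * (i)))

#define CTRL_START    0x1u
#define STATUS_DONE   0x1u
#define N_IN          16
#define N_OUT_REGS    10

void run_inference(const uint32_t in[N_IN], uint32_t out[N_OUT_REGS])
{
    for (int i = 0; i < N_IN; i++)           /* feed the input vector      */
        REG_INPUT(i) = in[i];

    REG_CTRL = CTRL_START;                   /* kick off the accelerator   */

    while ((REG_STATUS & STATUS_DONE) == 0)  /* wait for inference to end  */
        ;

    for (int i = 0; i < N_OUT_REGS; i++)     /* fetch the network output   */
        out[i] = REG_OUTPUT(i);
}

In a real design the same flow would typically sit on top of the vendor's AXI driver layer rather than raw pointer accesses, but the write/start/poll/read pattern is the same.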
“…3. Furthermore, a Fully Connected Neural Network (FC-NN) [17] with ABFT in the matrix multiplications of the largest layers was synthesized on the FPGA. The checksum augmentation of the inputs and the inspection of the results were both done by circuit logic on-the-fly.…”
Section: Experimentation
confidence: 99%
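
The on-the-fly checksum augmentation mentioned here follows the standard ABFT construction for matrix products. A minimal software sketch is given below, assuming a matrix-vector product y = W*x with illustrative sizes: the extra column-sum row makes the checksum output equal the sum of the regular outputs, which is what the inspection stage later checks. In the cited work this augmentation and inspection run in circuit logic rather than software.

#include <stdio.h>

/* Sketch of the checksum augmentation side of ABFT for y = W*x, done here
 * in software for illustration only.  Sizes and names (N_ROWS, N_COLS,
 * build_checksum_row) are illustrative assumptions. */
#define N_ROWS 3
#define N_COLS 4

/* Build the extra checksum row: column sums of the weight matrix.  With
 * this row appended, the (N_ROWS+1)-th output of the multiply equals the
 * sum of the first N_ROWS outputs, which the output-side accumulator and
 * comparator then verify. */
void build_checksum_row(const float W[N_ROWS][N_COLS], float chk[N_COLS])
{
    for (int j = 0; j < N_COLS; j++) {
        chk[j] = 0.0f;
        for (int i = 0; i < N_ROWS; i++)
            chk[j] += W[i][j];
    }
}

int main(void)
{
    float W[N_ROWS][N_COLS] = {
        { 1.0f,  2.0f, 0.0f, -1.0f },
        { 0.5f, -0.5f, 1.0f,  2.0f },
        { 2.0f,  1.0f, 1.0f,  0.0f },
    };
    float chk[N_COLS];

    build_checksum_row(W, chk);
    for (int j = 0; j < N_COLS; j++)
        printf("chk[%d] = %.2f\n", j, chk[j]);
    return 0;
}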