2022
DOI: 10.3390/electronics11091421
FPGA-Based BNN Architecture in Time Domain with Low Storage and Power Consumption

Abstract: With the increasing demand for convolutional neural networks (CNNs) in many edge computing scenarios and resource-limited settings, researchers have made efforts to apply lightweight neural networks on hardware platforms. While binarized neural networks (BNNs) perform excellently in such tasks, many implementations still face challenges such as an imbalance between accuracy and computational complexity, as well as the requirement for low power and storage consumption. This paper first proposes a novel binary c…

Cited by 8 publications (5 citation statements)
References 33 publications
“…neuromorphic) computing systems have achieved significant improvements in parallel computing and efficient data processing 1–5 . In particular, binarized neural networks (BNNs) have recently demonstrated their capabilities in image recognition applications 6–10 . The accelerators in the BNNs perform a matrix "multiply accumulate" (MAC) operation (i.e.…”
Section: Introduction (mentioning, confidence: 99%)
“…vector–matrix multiplication (VMM)) between the binarized weights and analog inputs 6,7 . The bitwise MAC operation enables extensive applicability to resource-constrained platforms, such as edge devices and mobile processors, promising a considerable reduction in memory (approximately 32×) and computation (approximately 2×) requirements compared with other neural networks 6–10 . Furthermore, it is still difficult for emerging synaptic devices to fully implement analog neural networks (analog input and analog weight) with nonlinear conductance changes and device variations 1–4,11–13 .…”
Section: Introduction (mentioning, confidence: 99%)
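The bitwise MAC that these citation statements describe can be sketched as follows: when weights and activations are constrained to {-1, +1} and packed one bit each (0 encoding -1, 1 encoding +1), a dot product reduces to an XNOR followed by a popcount, which is what makes BNN accelerators so cheap in hardware. This is a minimal illustrative sketch, not the paper's implementation; the names `pack_bits` and `binary_mac` are hypothetical.

```python
def pack_bits(values):
    """Pack a list of +/-1 values into an integer bit mask (1 bit each)."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def binary_mac(w_bits, x_bits, n):
    """Dot product of two {-1,+1} vectors of length n packed as integers."""
    # XNOR marks the positions where weight and input agree.
    agree = ~(w_bits ^ x_bits) & ((1 << n) - 1)
    matches = bin(agree).count("1")  # popcount
    # Each agreement contributes +1 to the sum, each disagreement -1.
    return 2 * matches - n

w = [1, -1, 1, 1]
x = [1, 1, -1, 1]
# Conventional dot product: 1 - 1 - 1 + 1 = 0
print(binary_mac(pack_bits(w), pack_bits(x), 4))  # -> 0
```

Packing 32 binary weights into one 32-bit word is the source of the roughly 32× memory reduction quoted above, since each weight occupies one bit instead of a 32-bit float.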
“…They significantly reduce the hardware requirements with a minimal impact on the accuracy [8,9]. An interesting, practical use of FPGA systems to implement CNNs is presented in the works [10–13]. Several algorithmic optimization strategies used in FPGA hardware for CNNs are discussed, and a few FPGA-based neural network accelerator architectures are presented.…”
Section: Introduction (mentioning, confidence: 99%)
“…Among these options, FPGAs have gained popularity for implementing CNNs in embedded systems primarily due to their ability to perform convolution operations in parallel with high energy efficiency. Compared to GPUs, FPGAs offer higher energy efficiency, making them an attractive choice for resource-constrained embedded systems [4][5][6][7].…”
Section: Introduction (mentioning, confidence: 99%)