2021
DOI: 10.1109/access.2021.3099075
A Resource Efficient Integer-Arithmetic-Only FPGA-Based CNN Accelerator for Real-Time Facial Emotion Recognition

Abstract: Recently, much research has been conducted on the recognition of facial emotion using convolutional neural networks (CNNs), which show excellent performance in computer vision. To obtain a high classification accuracy, a CNN architecture with many parameters and high computational complexity is required. However, this is not suitable for embedded systems where hardware resources are limited. In this paper, we present a lightweight CNN architecture optimized for embedded systems. The proposed CNN architecture h…
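The "integer-arithmetic-only" design named in the title refers to inference in which weights and activations are quantized to integers and all arithmetic stays in fixed point. The paper's specific quantization scheme is not reproduced here; the sketch below is a generic, hypothetical affine-quantization example (all helper names are ours, not the authors'):

```python
import numpy as np

def quantize(x, scale, zero_point, bits=8):
    """Affine-quantize a float tensor to signed integers (hypothetical helper)."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.round(x / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int32)

def int_conv1x1(x_q, w_q, x_zp, w_zp):
    """Integer-only 1x1 convolution: int32 accumulation of quantized operands."""
    return (x_q - x_zp).astype(np.int32) @ (w_q - w_zp).astype(np.int32).T

def requantize(acc, multiplier, shift):
    """Map the int32 accumulator back to 8-bit range with a fixed-point
    multiply and right shift, so no floating point is needed at inference."""
    return np.clip((acc * multiplier) >> shift, -128, 127).astype(np.int32)

# Example: a 1x1 "convolution" on one pixel with two input channels.
x_q = quantize(np.array([[1.0, 0.5]]), scale=0.01, zero_point=0)
w_q = quantize(np.array([[0.25, 0.5]]), scale=0.005, zero_point=0)
acc = int_conv1x1(x_q, w_q, 0, 0)
# Dequantized result acc * s_x * s_w matches the float product 0.5.
```

The fixed-point multiplier/shift pair in `requantize` is the standard trick for folding the combined scale factor into integer hardware; the exact values used by the paper's accelerator are unknown to us.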

Cited by 32 publications (16 citation statements) · References 36 publications
“…We extended the LLTQ [14] method and applied it to the four RBs. RBs, including quantizers, are shown in Fig.…”
Section: Fully Integer-Based Residual Block (mentioning)
Confidence: 99%
“…• We extended a novel hardware-friendly quantization method [14] and applied it to residual blocks (RBs).…”
Section: Introduction (mentioning)
Confidence: 99%
“…However, the lightweight CNN model contains a variety of kernel sizes, which challenges the design of FPGA-based CNN accelerators. Most existing designs [12][13][14][15][16][17][18][19][20][21] can effectively handle convolutions with certain specified kernel sizes. However, when the kernel size changes, the utilization of PE units in the computation array is significantly reduced.…”
Section: Introduction (mentioning)
Confidence: 99%
“…However, when the kernel size changes, the utilization of PE units in the computation array is significantly reduced. The designs proposed in [16,17,21] can deal with convolutions of several common kernel sizes, but they are still not applicable to convolutions of arbitrary kernel sizes. The authors in [22][23][24][25][26] adopt multiple computing engines to handle convolutions with different kernel sizes and improve performance.…”
Section: Introduction (mentioning)
Confidence: 99%
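The utilization drop that the citing papers describe is easy to see with a little arithmetic. Assume, hypothetically, a PE array whose lanes are provisioned for a 3×3 kernel footprint (9 MAC lanes per PE); kernels whose MAC count does not fill a whole number of passes leave lanes idle. This toy model is ours, not any of the cited accelerators:

```python
import math

LANES_PER_PE = 9  # hypothetical: array sized for 3x3 kernels

def pe_utilization(kernel_size):
    """Fraction of MAC lanes doing useful work for a k x k convolution."""
    macs = kernel_size * kernel_size         # useful MACs per output pixel
    passes = math.ceil(macs / LANES_PER_PE)  # PE passes needed to cover them
    return macs / (passes * LANES_PER_PE)    # occupied / provisioned lane-slots

for k in (1, 3, 5, 7):
    print(f"{k}x{k}: {pe_utilization(k):.0%}")
```

In this model a 3×3 kernel fills the array exactly, while a 1×1 kernel uses only one lane in nine, illustrating why fixed-footprint designs lose efficiency on mixed-kernel lightweight CNNs.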