Proceedings of the 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
DOI: 10.1145/3373087.3375311
Light-OPU

Cited by 73 publications (11 citation statements) | References 29 publications
“…These toolchains can only modify hardware parameters; when timing closure is not met, the operating frequency has to be lowered, which degrades performance. Yu et al. [25] therefore propose Light-OPU, whose toolchain eliminates the re-implementation process. However, it does not consider the TinyML case, in which all model parameters are stored in on-chip memory.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…BNNs [26], [27] use 1-bit weights and activations, which dramatically reduces hardware consumption but also reduces accuracy. 8-bit quantization was adopted by [16], [21], [25], each using its own scheme, which may degrade accuracy for different CNNs. Currently, TensorFlow (TF) Lite adopts per-channel quantization of weights and per-layer quantization of activations to 8 bits, which keeps the accuracy loss within 2% of floating-point networks for a wide variety of CNNs [28].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
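To make the contrast in that excerpt concrete, here is a minimal NumPy sketch of the two schemes it describes: symmetric per-channel int8 quantization of weights versus asymmetric per-layer (per-tensor) uint8 quantization of activations. The function names, the output-channels-first weight layout, and the pre-calibrated activation range are illustrative assumptions, not TF Lite's actual API.

import numpy as np

def quantize_weights_per_channel(w):
    # Symmetric per-channel int8 quantization (illustrative sketch, not TF Lite's API).
    # Assumes output channels on axis 0, e.g. an OIHW conv weight tensor.
    flat = w.reshape(w.shape[0], -1)
    scales = np.abs(flat).max(axis=1) / 127.0        # one scale per output channel
    scales = np.where(scales == 0.0, 1.0, scales)    # guard all-zero channels
    q = np.clip(np.round(flat / scales[:, None]), -127, 127).astype(np.int8)
    return q.reshape(w.shape), scales                # w ~= q * per-channel scale

def quantize_activations_per_layer(x, x_min, x_max):
    # Asymmetric per-layer (per-tensor) uint8 quantization: a single scale and
    # zero point cover the whole activation tensor. x_min/x_max are assumed to
    # come from a prior calibration pass.
    scale = (x_max - x_min) / 255.0
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

# Example: per-channel scales track each filter's own range, so a
# low-magnitude filter is not crushed by a single layer-wide scale.
w = np.random.randn(8, 3, 3, 3).astype(np.float32)
w[0] *= 0.01                                         # one low-magnitude filter
qw, s = quantize_weights_per_channel(w)
print(np.abs(w - qw.astype(np.float32) * s[:, None, None, None]).max())

Per-channel scales matter because filters within one conv layer can differ in magnitude by orders of magnitude; a single per-layer scale for the weights would spend most of the int8 range on the largest filter, which is consistent with the accuracy gap the cited comparison reports.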