2021 31st International Conference on Field-Programmable Logic and Applications (FPL)
DOI: 10.1109/fpl53798.2021.00009
Eciton: Very Low-Power LSTM Neural Network Accelerator for Predictive Maintenance at the Edge

Cited by 16 publications (8 citation statements) · References 42 publications
“…However, the proposed model has far fewer parameters, its storage footprint is much smaller than the LSTM’s, and it is also faster to train. A trade-off between efficiency and effectiveness was thus achieved, which is of paramount importance in industrial contexts, where the ratio of performance obtained to resources allocated must be optimized (Chen et al., 2021; Markiewicz et al., 2019). In addition, its overall accuracy is comparable with the best techniques in the literature.…”
Section: Discussion (mentioning)
confidence: 99%
“…(2022) introduced another LSTM-based attention mechanism to improve the model’s ability to analyze a sequence of signals in survival analysis. In turn, Chen et al. (2021) proposed a PdM system based on an LSTM network implemented on an FPGA, aiming to jointly reduce power consumption and management cost.…”
Section: Related Work (mentioning)
confidence: 99%
“…Noticing this problem, Chen et al. [4] implemented a similar accelerator on the much smaller iCE40 UP5K FPGA in 2021. Thanks to this FPGA’s ultra-low static power (on the µW scale), the overall power consumption during inference is approximately equal to the FPGA’s dynamic power, which is 17 mW.…”
Section: Related Work (mentioning)
confidence: 99%
“…In both works [4,6], researchers applied fixed-point logic to simplify the design and reduce the loss of precision compared to aggressive quantisation. The activation functions tanh() and sigmoid() were replaced with hard tanh() and hard sigmoid() respectively, which simplifies the computations but leads to a large reduction in precision [10].…”
Section: Related Work (mentioning)
confidence: 99%
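To make the hard-activation substitution concrete, below is a minimal sketch in Python of hard tanh() and hard sigmoid() over fixed-point values. The Q8.8 format, the clamp bounds, and the hard-sigmoid slope of 1/4 (so the multiply becomes an arithmetic shift) are illustrative assumptions, not the exact parameters used in [4] or [6].

```python
# Sketch of the "hard" activations mentioned above, in Q8.8 fixed point.
# Assumptions for illustration: Q8.8 format, hard_sigmoid slope of 1/4.

FRAC_BITS = 8            # Q8.8: 8 integer bits, 8 fractional bits
ONE = 1 << FRAC_BITS     # fixed-point encoding of 1.0
HALF = ONE >> 1          # fixed-point encoding of 0.5

def hard_tanh(x: int) -> int:
    """hard_tanh(x) = clamp(x, -1, 1): tanh replaced by a saturating clamp."""
    return max(-ONE, min(ONE, x))

def hard_sigmoid(x: int) -> int:
    """hard_sigmoid(x) = clamp(x/4 + 0.5, 0, 1): a piecewise-linear sigmoid.
    In hardware the divide by 4 is a cheap arithmetic right shift."""
    return max(0, min(ONE, (x >> 2) + HALF))

def to_fixed(v: float) -> int:
    return int(round(v * ONE))

def to_float(q: int) -> float:
    return q / ONE

if __name__ == "__main__":
    for v in (-3.0, -0.5, 0.0, 0.5, 3.0):
        q = to_fixed(v)
        print(f"x={v:+.1f}  hard_tanh={to_float(hard_tanh(q)):+.3f}  "
              f"hard_sigmoid={to_float(hard_sigmoid(q)):+.3f}")
```

The appeal in an FPGA context is that both functions reduce to comparators, an adder, and a shift, with no lookup tables or multipliers; the cost, as the quoted passage notes, is reduced precision near the saturation regions.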
“…Due to the computation-intensive nature of neural networks, the use of power-efficient hardware accelerators is one of the most prominent ways of bringing the benefits of deep neural networks to resource-constrained embedded systems. Embedded neural networks can reduce decision-making latency by removing the need to query a remote server [22,23], as well as reducing power-hungry wireless transmission requirements [24]. Acceleration approaches include mobile GPUs [1], custom-designed application-specific integrated circuits (ASICs) [25][26][27][28], and FPGAs [2].…”
Section: Embedded Neural Network Acceleration (mentioning)
confidence: 99%