2021
DOI: 10.3390/s21092984

Quantization and Deployment of Deep Neural Networks on Microcontrollers

Abstract: Embedding Artificial Intelligence onto low-power devices is a challenging task that has been partly overcome with recent advances in machine learning and hardware design. Presently, deep neural networks can be deployed on embedded targets to perform different tasks such as speech recognition, object detection or Human Activity Recognition. However, there is still room for optimization of deep neural networks on embedded devices. These optimizations mainly address power consumption, memory and real-time constraints…
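As a concrete illustration (not taken from the paper itself), the sketch below shows one widely used flow for the kind of optimization the abstract describes: post-training full-integer (int8) quantization with the TensorFlow Lite converter, producing a flatbuffer that a microcontroller inference runtime such as TensorFlow Lite for Microcontrollers can execute. The toy Keras model, the random calibration data and the output file name are illustrative assumptions, not the networks or toolchain evaluated in the paper.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for a trained network; the paper's actual models
# (e.g., for speech or Human Activity Recognition) are not reproduced here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Random calibration samples stand in for a representative dataset;
# they drive the calibration of activation ranges during quantization.
calib_data = np.random.rand(100, 32).astype(np.float32)

def representative_dataset():
    for sample in calib_data:
        yield [sample[np.newaxis, :]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 kernels so the resulting model can run
# on integer-only microcontroller targets.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"int8 model size: {len(tflite_model)} bytes")
```

The representative dataset is what lets the converter calibrate activation ranges; restricting the supported ops to TFLITE_BUILTINS_INT8 ensures the flatbuffer contains only integer kernels, which integer-only MCU deployments typically require.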

Cited by 93 publications (57 citation statements)
References 44 publications
“…Neural networks (NNs) have achieved remarkable success in a wide range of real-world applications. However, their application might be limited or impeded in edge devices with constrained resources, such as IoT and mobile devices [1, 2, 3, 4, 5, 6]. Although decreased storage and/or computational costs for NNs are indispensable on such resource-constrained devices, the accuracy of an NN can be severely degraded if the pathway toward this decrease is not chosen prudently [2, 4, 6].…”
Section: Introduction
confidence: 99%
“…The rapid proliferation of IoT devices, as predicted by [7], additionally highlights the increasing importance of the edge infrastructure and the needed migration of NNs to the very end devices. Training NNs on the edge devices embodies the very idea of edge computing, which is, for edge devices, competitive with cloud computing in terms of latency, memory footprint and power consumption [1, 3, 5, 6]. With artificial intelligence (AI) algorithms running in the cloud, data have to be sent over the Internet to the cloud, causing latency that prevents AI-based real-time applications, as well as security problems.…”
Section: Introduction
confidence: 99%
“…For example, the authors in [13] propose an edge computing framework for collaboration among nodes with the aim of improving resource management and achieving optimal offloading directed towards healthcare systems. Also, energy consumption on the edge and the use of ML to improve its performance is addressed in [14, 15, 16], since energy consumption is essential during ML forecasts due to the limited power supplies available for light-weight IoT devices. Furthermore, the authors in [17] apply ML algorithms to an indoor classification application which uses features collected from radio frequency measurements.…”
Section: Literature Review
confidence: 99%
“…Furthermore, recent advancements in machine learning algorithms and portable device hardware could pave the way for the simplification of wearables, allowing the implementation of deep learning algorithms directly on embedded devices based on microcontrollers (MCUs) with limited computational power and very low energy consumption, without the need to transfer data to a more powerful computer for processing [36, 37].…”
Section: Introduction
confidence: 99%