2023
DOI: 10.3390/s23031185
Lightweight and Energy-Efficient Deep Learning Accelerator for Real-Time Object Detection on Edge Devices

Abstract: Tiny machine learning (TinyML) has become an emerging field owing to the rapid growth of the internet of things (IoT). However, most deep learning algorithms are too complex, require large amounts of memory to store data, and consume enormous energy for computation and data movement; the algorithms are therefore unsuitable for IoT devices such as sensors and imaging systems. Furthermore, typical hardware accelerators cannot be embedded in these resource-constrained edge devices, and…

Cited by 8 publications (2 citation statements)
References 31 publications (39 reference statements)
“…However, it is not feasible to have such embedded systems that do this in real time. Hence, it is compelling to develop ultra-low-power AI chips that can run inferences of AI algorithms at a faster rate and consume low energy [17]. Approaches to training AI models vary between supervised learning, unsupervised learning, transfer learning, and reinforcement learning.…”
Section: Review
Confidence: 99%
“…However, these network structures have even more network parameters, resulting in greater computational complexity and longer training times. This is evident from observing the resource requirements for the FPGA implementation of CNNs for image classification [7, 8], and more lightweight CNNs are developed to fit into edge devices [9, 10]. In general, network structures with fewer parameters require shorter training times and are more suitable for systems with resource limitations on edge devices; however, they must still be capable of delivering the required performance.…”
Section: Introduction
Confidence: 99%
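The citing passage above contrasts heavyweight CNNs with lightweight variants that have fewer parameters. A minimal sketch of why the lightweight designs shrink so much — not the accelerator described in this paper, just the standard parameter-count arithmetic — compares an ordinary convolution layer with a depthwise-separable one (the MobileNet-style factorization commonly used in edge-oriented CNNs). The layer sizes (128 in, 256 out, 3x3 kernel) are illustrative assumptions, not values from the paper:

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    # Standard convolution: one k x k filter per (input, output) channel pair,
    # plus one bias term per output channel.
    return c_in * c_out * k * k + c_out

def dw_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise stage: one k x k filter (and bias) per input channel.
    depthwise = c_in * k * k + c_in
    # Pointwise stage: 1x1 convolution mixing channels, plus biases.
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

# Illustrative layer: 128 input channels, 256 output channels, 3x3 kernel.
std = conv_params(128, 256, 3)        # 295168 parameters
sep = dw_separable_params(128, 256, 3)  # 34304 parameters
print(std, sep, round(std / sep, 1))  # roughly an 8.6x reduction
```

For a 3x3 kernel the factorization approaches a ~9x parameter reduction as channel counts grow, which is why such layers fit the memory budgets of the resource-constrained edge devices the passage describes.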