2019
DOI: 10.3390/electronics8111289
Machine Learning in Resource-Scarce Embedded Systems, FPGAs, and End-Devices: A Survey

Abstract: The number of devices connected to the Internet is increasing, exchanging large amounts of data, and turning the Internet into the 21st-century silk road for data. This road has taken machine learning to new areas of application. However, machine learning models are no longer seen as complex systems that must run on powerful computers (i.e., the Cloud). As technology, techniques, and algorithms advance, these models are being implemented in more computationally constrained devices. The following paper presents a study ab…

Cited by 60 publications (39 citation statements)
References 70 publications (70 reference statements)
“…As can be observed in this table, microcontrollers have very limited hardware resources. This scarcity of resources makes them unsuitable for high-end machine learning applications unless the machine learning models are heavily optimized to fit within this space [24].…”
Section: Machine Learning in Resource-Constrained Environments
confidence: 99%
“…However, graphics processing units (GPUs), due to their high floating-point performance and thread-level parallelism, are more suitable for training deep learning models [13]. Extensive research is actively being carried out to develop suitable hardware acceleration units using FPGAs [20,21,22,23,24,25,26], GPUs, ASICs, and TPUs, creating heterogeneous and sometimes distributed systems that meet the high computational demand of deep learning models. At both the algorithm and hardware levels, optimization techniques for classical machine learning and deep learning algorithms, such as pruning, quantization, reduced precision, and hardware acceleration, are being investigated.…”
Section: Introduction
confidence: 99%
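The quantization technique named in the statement above can be illustrated with a minimal sketch. This is an assumption-laden example, not the method of the surveyed paper or of any specific library: it shows symmetric per-tensor 8-bit linear quantization, one common way to shrink a model's weights so it fits a resource-scarce device. All function and variable names are illustrative.

```python
# Minimal sketch of symmetric 8-bit linear quantization (illustrative only).
# Floats are mapped to int8 codes in [-127, 127] plus one per-tensor scale,
# cutting storage from 32-bit floats to 8-bit integers.

def quantize_int8(weights):
    """Map float weights to int8 codes and a per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-127, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.03, 1.0]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# Each reconstructed weight lies within scale/2 of the original.
```

In practice the accuracy cost of such a scheme depends on the weight distribution; frameworks typically quantize per channel and calibrate activations as well, which this sketch omits.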
“…Using (11) gives R_Q = 3 and the set Z = {z_1, z_2, z_3}. Using (15) gives R_I = 1 and the set V = {v_1}.…”
Section: Example of Synthesis
confidence: 99%
“…In this article we consider methods of implementing FSM circuits in the context of field-programmable gate arrays [11,12,13]. These chips are very popular devices used for the implementation of digital systems [2,14,15,16,17,18]. This fact explains our choice of FPGA-based Mealy FSMs as a research object.…”
Section: Introduction
confidence: 99%
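The defining property of the Mealy FSMs mentioned in the statement above is that the output is a function of both the current state and the current input (unlike a Moore machine, whose output depends on state alone). A minimal software sketch of that behavior, using a rising-edge detector as an illustrative machine (the cited work targets FPGA hardware, not Python, and this example is not taken from it):

```python
# Software sketch of a Mealy FSM: the output depends on the current
# state AND the current input. Example machine: a rising-edge detector
# (illustrative only; state is the previous input bit).

def make_rising_edge_detector():
    state = {"prev": 0}  # single state bit: previous input value
    def step(x):
        out = int(x == 1 and state["prev"] == 0)  # Mealy output function
        state["prev"] = x                          # state transition
        return out
    return step

fsm = make_rising_edge_detector()
outputs = [fsm(x) for x in [0, 1, 1, 0, 1]]
# outputs == [0, 1, 0, 0, 1]: a 1 is emitted only on a 0 -> 1 transition
```

In an FPGA implementation the same machine would be a register for the state bit plus combinational logic for the output and next-state functions; the synthesis question the cited article studies is how to map those functions onto the chip's logic resources efficiently.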