2021
DOI: 10.1145/3469029

Machine Learning at the Network Edge: A Survey

Abstract: Resource-constrained IoT devices, such as sensors and actuators, have become ubiquitous in recent years. This has led to the generation of large quantities of data in real-time, which is an appealing target for AI systems. However, deploying machine learning models on such end-devices is nearly impossible. A typical solution involves offloading data to external computing systems (such as cloud servers) for further processing but this worsens latency, leads to increased communication costs, and adds to privacy …


Cited by 205 publications (111 citation statements)
References 81 publications
“…GANs have shown impressive performance in various tasks [75,41,35]. They not only provide a novel learning algorithm but also offer a vital alternative to generative models.…”
Section: GAN-based Methods
confidence: 99%
“…Edge computing is a concept where processing is done close to the device that produced the data, which generally means on devices with much less memory than regular computing servers. There are many surveys about classification for edge computing [15], [16], [17], but most of the work focuses on deep learning, which is not applicable in our case because it requires a lot of data and time to train the model. They discuss inherent problems related to learning with edge devices, in particular about lighter architecture and distributed training.…”
Section: Related Work
confidence: 99%
“…Finally, these studies identify future-work opportunities such as data augmentation, distributed training, and explainable AI. Aside from the deep learning approaches, the survey in [15] discusses two machine learning techniques with a small memory footprint: the Bonsai and ProtoNN methods. Bonsai [18] is a tree-based algorithm designed to fit in an edge device's memory.…”
Section: Related Work
confidence: 99%
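The Bonsai method mentioned in the statement above pairs a cheap low-dimensional projection of the input with a single shallow tree in which every node on the root-to-leaf path contributes to the prediction, which is what lets it fit in a few kilobytes. A minimal sketch of that inference path, using toy random parameters in place of a trained model (all sizes and names here are illustrative assumptions, not the published implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_PROJ, DEPTH = 16, 4, 2           # projected dim << input dim
N_NODES = 2 ** (DEPTH + 1) - 1           # nodes in a complete binary tree

Z = rng.standard_normal((D_PROJ, D_IN)) * 0.1   # shared projection matrix
W = rng.standard_normal((N_NODES, D_PROJ))      # per-node predictors
Theta = rng.standard_normal((N_NODES, D_PROJ))  # per-node branching planes

def bonsai_score(x):
    """Score one input: sum predictions along the root-to-leaf path."""
    z = Z @ x                        # low-dimensional projection
    node, score = 0, 0.0
    while node < N_NODES:
        score += W[node] @ z         # every visited node contributes
        # branch on the sign of a learned hyperplane in projected space
        go_right = (Theta[node] @ z) > 0
        node = 2 * node + (2 if go_right else 1)
    return score

print(bonsai_score(rng.standard_normal(D_IN)))
```

Because only `Z`, `W`, and `Theta` (and one shallow tree traversal) are needed at inference time, the memory footprint scales with the projected dimension rather than the raw input dimension.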
“…Extreme-edge inference is achievable in practical cases since it can be performed with low precision integer operations that help increase the energy efficiency, reduce the memory footprint and the area overhead, with reduced accuracy loss [1], [2]. Eyeriss [3] is a hardware accelerator designed for Convolutional Neural Networks (CNNs) inference with INT16 arithmetic and implemented in 65 nm technology.…”
Section: Introduction and Related Work
confidence: 99%
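The low-precision integer inference described in the statement above rests on linear quantization: casting float32 weights and activations to int16 halves the memory footprint while the matrix products stay close to the full-precision result. A hedged sketch of symmetric per-tensor quantization with toy sizes (not the Eyeriss datapath itself):

```python
import numpy as np

def quantize_int16(x):
    """Symmetric linear quantization to int16; returns (q, scale)."""
    scale = np.abs(x).max() / 32767.0
    return np.round(x / scale).astype(np.int16), scale

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 8)).astype(np.float32)  # toy weight tile
a = rng.standard_normal((8, 8)).astype(np.float32)  # toy activation tile

qw, sw = quantize_int16(w)
qa, sa = quantize_int16(a)

# Integer MACs accumulate in a wide register; one rescale at the end
# recovers real-valued units.
y_int = (qw.astype(np.int64) @ qa.astype(np.int64)) * (sw * sa)
y_ref = w @ a

print(np.abs(y_int - y_ref).max())   # small quantization error
print(qw.nbytes, w.nbytes)           # int16 tile is half the size
```

The same rescaling trick underlies most integer-arithmetic accelerators: the expensive inner loop runs entirely on narrow integers, and precision is traded off only at the final per-tensor scale.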