Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation 2019
DOI: 10.1145/3314221.3314597
Compiling KB-sized machine learning models to tiny IoT devices

Cited by 62 publications (49 citation statements) · References 41 publications

Citation statements:
“…Their work also concerns recurrent neural networks [22]. A major achievement concerns the translation of floating-point ML models into fixed-point code [23], which is, however, not the case in state-of-the-art mainstream microcontrollers.…”
Section: Related Work
Mentioning confidence: 99%
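The float-to-fixed translation highlighted in this statement can be made concrete with a minimal sketch. It assumes a Q15 representation, a common choice on FPU-less microcontrollers; the format and helper names below are illustrative, not the cited compiler's actual output:

```c
#include <stdint.h>

/* Illustrative Q15 fixed-point format: 1 sign bit, 15 fractional bits.
   Values in [-1, 1) are stored as int16_t scaled by 2^15. */
typedef int16_t q15_t;

#define Q15_SHIFT 15

static inline q15_t q15_from_float(float x) {
    /* Saturating conversion; assumes x in [-1, 1). */
    int32_t v = (int32_t)(x * (float)(1 << Q15_SHIFT));
    if (v >  32767) v =  32767;
    if (v < -32768) v = -32768;
    return (q15_t)v;
}

/* Dot product with a 32-bit accumulator: each product is Q30,
   and the sum is rescaled back to Q15 at the end. May still
   saturate for very long vectors. */
q15_t q15_dot(const q15_t *a, const q15_t *b, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];   /* Q30 partial sums */
    return (q15_t)(acc >> Q15_SHIFT);           /* back to Q15 */
}
```

Because every operation stays in integer arithmetic, code like this runs on microcontrollers that lack a floating-point unit, which is the constraint the quoted passage points to.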
“…Furthermore, by allowing some data analysis and interpretation to be performed locally and in real time at the collection point, services as these can translate into huge cost savings and better privacy protection [7]. Most of the work performed in the field of TinyML has been focused on the reduction and optimization of existing models, such as Artificial Neural Networks (ANNs), to fit into these tiny devices and commodity microcontrollers, despite their computational restrictions [7,32,33]. Additionally, for IoT scenarios, it can be argued that the algorithms should preferably work without prior knowledge of the data, i.e., unsupervised.…”
Section: Introduction
Mentioning confidence: 99%
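To make the "fit into these tiny devices" point concrete, here is a minimal sketch of a statically allocated integer fully-connected layer; the dimensions and weights are placeholders, not taken from any cited work:

```c
#include <stdint.h>

/* Illustrative sketch of why KB-sized models fit commodity MCUs:
   a 16x8 fully-connected layer with int8 weights occupies only
   16*8 = 128 bytes of flash, and all buffers are statically
   allocated (no heap), as is typical on bare-metal targets. */
#define IN_DIM  16
#define OUT_DIM 8

static const int8_t  W[OUT_DIM][IN_DIM] = {{0}};  /* placeholder weights */
static const int32_t B[OUT_DIM]         = {0};    /* placeholder biases  */

void fc_relu(const int8_t in[IN_DIM], int32_t out[OUT_DIM]) {
    for (int o = 0; o < OUT_DIM; o++) {
        int32_t acc = B[o];
        for (int i = 0; i < IN_DIM; i++)
            acc += (int32_t)W[o][i] * in[i];
        out[o] = acc > 0 ? acc : 0;   /* ReLU activation */
    }
}
```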
“…Owing to the variety of application domains and deployment constraints, DNNs come in many different sizes. For instance, large image-recognition and natural-language processing models are trained and deployed using cloud resources [33,12], medium-size models could be trained in the cloud but deployed on hardware with limited resources [31], and finally small models could be trained and deployed directly on edge devices [47,9,22,34,35]. There has also been a recent push to compress trained models to reduce their size [24].…”
Section: Introduction
Mentioning confidence: 99%
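The compression of trained models mentioned at the end of this statement [24] often takes the form of post-training quantization. A minimal sketch, assuming a symmetric per-tensor int8 scheme (the function name and scheme are illustrative assumptions, not the method of any specific paper cited above):

```c
#include <stdint.h>
#include <math.h>

/* Illustrative symmetric per-tensor int8 quantization:
   wq[i] = round(w[i] / scale), with scale = max|w| / 127,
   so w[i] can be recovered approximately as wq[i] * scale. */
float quantize_int8(const float *w, int8_t *wq, int n) {
    float max_abs = 0.0f;
    for (int i = 0; i < n; i++) {
        float a = fabsf(w[i]);
        if (a > max_abs) max_abs = a;
    }
    float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    for (int i = 0; i < n; i++) {
        long q = lroundf(w[i] / scale);
        if (q >  127) q =  127;   /* clamp to int8 range */
        if (q < -127) q = -127;
        wq[i] = (int8_t)q;
    }
    return scale;  /* dequantize with w[i] ≈ wq[i] * scale */
}
```

Storing int8 weights plus a single float scale cuts weight storage roughly 4x relative to float32, at some cost in accuracy.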