Harvard Data Science Review 2022
DOI: 10.1162/99608f92.762d171a

Widening Access to Applied Machine Learning with TinyML

Abstract: Broadening access to both computational and educational resources is critical to diffusing machine learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this article, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, applied ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML…

Cited by 22 publications (13 citation statements)
References 41 publications
“…Neural networks are usually trained on dedicated servers, while inference can be computed on the edge, even on very resource-constrained devices [46]. Recent developments are introducing continuous learning techniques [47], often employing custom-designed architectures [48]. TinyML systems are gaining more and more traction, and to support this expansion benchmarking [49][50] tools have been developed to assess ML performances at the edge.…”
Section: Related Work
confidence: 99%
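The statement above summarizes the typical TinyML workflow: train on a well-resourced server, then deploy a compact model for inference at the edge. A minimal sketch of that split follows, assuming a toy Keras model, stand-in random data, and TensorFlow's TFLiteConverter with full-integer (int8) quantization; the architecture, shapes, and file name are illustrative assumptions, not details taken from the cited works.

```python
# Illustrative sketch (assumptions: a tiny Keras model and random stand-in data).
# Step 1: train on a server. Step 2: export an int8-quantized model for the edge.
import numpy as np
import tensorflow as tf

# 1) Train a small model on the server (toy data stands in for a real dataset).
x_train = np.random.rand(256, 32).astype(np.float32)
y_train = np.random.randint(0, 2, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=3, verbose=0)

# 2) Convert to a fully int8-quantized TensorFlow Lite model for edge inference.
def representative_dataset():
    # A small calibration set lets the converter pick quantization ranges.
    for sample in x_train[:100]:
        yield [sample.reshape(1, 32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```

Post-training int8 quantization of this kind is what shrinks the exported model toward the flash and RAM budgets of typical microcontrollers.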
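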
“…Moving a step forward, frameworks for characterizing and assessing ML deployment on the edge [20] [6] help us systematically tackle potential issues. Many platforms and resources, such as open online courses [22], X-CUBE-AI [29] from STM, Apache's TVM [4], and TensorFlow Lite for Microcontrollers [7] from Google, are made available to accelerate TinyML. Besides, remarkable applications of TinyML show up across all fields, see [23] [12] [33].…”
Section: Related Work
confidence: 99%
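Because the statement above names TensorFlow Lite for Microcontrollers among the tools that accelerate TinyML, a short host-side sketch follows. It loads the quantized model produced in the previous sketch with the standard tf.lite.Interpreter as a desktop proxy; on the device itself the same model would run through the C++ TensorFlow Lite for Microcontrollers runtime. The file name and input shape are assumptions carried over from the earlier sketch.

```python
# Host-side sanity check (assumption: "model_int8.tflite" from the sketch above).
# This mirrors the load -> allocate -> set input -> invoke -> read output flow
# that the C++ TensorFlow Lite for Microcontrollers runtime performs on-device.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Quantize a float input into the int8 domain the converted model expects.
scale, zero_point = input_details["quantization"]
x = np.random.rand(1, 32).astype(np.float32)
x_int8 = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)

interpreter.set_tensor(input_details["index"], x_int8)
interpreter.invoke()
y_int8 = interpreter.get_tensor(output_details["index"])
print("Raw int8 output:", y_int8)
```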
“…for ML applications that use miniaturized models for extremely energy-efficient inference [28,61]. Further, the lack of a resource-rich Operating Systems (OS) (E.g.…”
Section: Definition
confidence: 99%