2021 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications (SWC)
DOI: 10.1109/swc50871.2021.00023

Train++: An Incremental ML Model Training Algorithm to Create Self-Learning IoT Devices

Cited by 19 publications (17 citation statements)
References 19 publications

“…Their approach allows for improving the running model at run-time without relying on cloud entities. Instead, in another of their papers [35], the authors present Train++, an algorithm to locally and incrementally train a TinyML model directly on an embedded device without relying on a cloud or any other external service. The target ML problem has to be reduced to a set of binary classifiers.…”
Section: Related Work
confidence: 99%
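
To make the last point concrete: reducing a multi-class problem to a set of binary classifiers, each updated one sample at a time on the device, can be sketched as a one-vs-rest scheme. The C sketch below uses a perceptron-style mistake-driven update as an illustrative stand-in; the update rule, constants, and all identifiers are assumptions for illustration, not taken from the Train++ paper.

#include <stdio.h>

#define N_FEATURES 4
#define N_CLASSES  3   /* multi-class problem reduced to N_CLASSES binary models */

/* One binary classifier per class (one-vs-rest). */
typedef struct {
    float w[N_FEATURES];
    float b;
} BinaryModel;

static float score(const BinaryModel *m, const float *x) {
    float s = m->b;
    for (int i = 0; i < N_FEATURES; i++) s += m->w[i] * x[i];
    return s;
}

/* Perceptron-style single-sample update: nudge weights only on a mistake.
   Illustrative stand-in for Train++'s actual incremental rule. */
static void update(BinaryModel *m, const float *x, int target /* +1 or -1 */) {
    const float lr = 0.1f;
    int pred = score(m, x) >= 0.0f ? 1 : -1;
    if (pred != target) {
        for (int i = 0; i < N_FEATURES; i++) m->w[i] += lr * (float)target * x[i];
        m->b += lr * (float)target;
    }
}

/* Feed one labelled sample to all binary models (one-vs-rest targets). */
static void train_one_sample(BinaryModel models[N_CLASSES], const float *x, int label) {
    for (int c = 0; c < N_CLASSES; c++)
        update(&models[c], x, c == label ? 1 : -1);
}

static int predict(const BinaryModel models[N_CLASSES], const float *x) {
    int best = 0;
    float best_s = score(&models[0], x);
    for (int c = 1; c < N_CLASSES; c++) {
        float s = score(&models[c], x);
        if (s > best_s) { best_s = s; best = c; }
    }
    return best;
}

int main(void) {
    BinaryModel models[N_CLASSES] = {0};
    /* Toy stream: each class concentrated on one feature. */
    float xs[6][N_FEATURES] = {
        {1,0,0,0},{0,1,0,0},{0,0,1,0},{1,0,0,0},{0,1,0,0},{0,0,1,0}};
    int ys[6] = {0,1,2,0,1,2};
    for (int epoch = 0; epoch < 10; epoch++)
        for (int t = 0; t < 6; t++)
            train_one_sample(models, xs[t], ys[t]);
    float probe[N_FEATURES] = {0,1,0,0};
    printf("predicted class: %d\n", predict(models, probe));  /* expect 1 */
    return 0;
}

Each incoming labelled sample touches all N_CLASSES binary models exactly once, so per-sample compute and memory stay constant, which is what makes this style of training feasible on an MCU.
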
“…First, the training framework must be able to detect when a significant shift has happened in the input dataset (when to learn). This can be done by calculating the per-output covariate distribution divergence on principal feature components [183], the running mean and variance of the streaming input [184], or the confidence score of predictions [186]. Second, the on-device training framework must perform model adaptation within device constraints and with limited training samples (how to learn).…”
Section: A. On-Device Training
confidence: 99%
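
The "running mean and variance of streaming input" signal mentioned above can be tracked in O(1) memory with Welford's online algorithm, which suits an MCU that sees one sample at a time. Below is a minimal C sketch; the warm-up length and z-score threshold are illustrative assumptions, not values from [184].

#include <math.h>
#include <stdio.h>

/* Welford's online algorithm: constant-memory running mean/variance. */
typedef struct {
    long   n;
    double mean;
    double m2;   /* sum of squared deviations from the current mean */
} RunningStats;

static void stats_update(RunningStats *s, double x) {
    s->n++;
    double delta = x - s->mean;
    s->mean += delta / (double)s->n;
    s->m2 += delta * (x - s->mean);
}

static double stats_variance(const RunningStats *s) {
    return s->n > 1 ? s->m2 / (double)(s->n - 1) : 0.0;
}

/* Flag a shift when a new sample deviates from the tracked distribution
   by more than z_thresh standard deviations (threshold is illustrative). */
static int shift_detected(const RunningStats *s, double x, double z_thresh) {
    double sd = sqrt(stats_variance(s));
    if (s->n < 30 || sd == 0.0) return 0;   /* need a warm-up window first */
    return fabs(x - s->mean) / sd > z_thresh;
}

int main(void) {
    RunningStats s = {0, 0.0, 0.0};
    /* In-distribution stream around 1.0, then an abrupt covariate shift. */
    for (int t = 0; t < 100; t++)
        stats_update(&s, 1.0 + 0.01 * (double)(t % 7));
    if (shift_detected(&s, 5.0, 4.0))
        printf("shift detected -> trigger on-device (re)training\n");
    return 0;
}
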
“…This allows learning from non-IID data. Incremental training uses constrained optimization to update the weights one sample at a time [186]. Both approaches suffer from a limited application space due to the limited set of supported network types.…”
Section: A. On-Device Training
confidence: 99%
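
A single-sample constrained-optimization update of this kind can be illustrated with a passive-aggressive step: for each sample, solve min ||w' - w||^2 subject to zero hinge loss, which has a closed-form solution w' = w + tau * y * x with tau = loss / ||x||^2. The C sketch below is a generic stand-in for the rule cited as [186], not a reproduction of Train++'s actual optimization; the names and toy data are illustrative.

#include <stdio.h>

#define N_FEATURES 3

/* Passive-aggressive update for a binary linear classifier.
   Per sample it solves the constrained problem
       w' = argmin ||w' - w||^2  subject to  hinge_loss(w'; x, y) = 0,
   whose closed form is w' = w + tau * y * x with tau = loss / ||x||^2. */
static void pa_update(float w[N_FEATURES], const float x[N_FEATURES], int y /* +1/-1 */) {
    float margin = 0.0f, sq_norm = 0.0f;
    for (int i = 0; i < N_FEATURES; i++) {
        margin  += w[i] * x[i];
        sq_norm += x[i] * x[i];
    }
    float loss = 1.0f - (float)y * margin;        /* hinge loss */
    if (loss <= 0.0f || sq_norm == 0.0f) return;  /* passive: constraint already met */
    float tau = loss / sq_norm;                   /* aggressive: smallest sufficient step */
    for (int i = 0; i < N_FEATURES; i++)
        w[i] += tau * (float)y * x[i];
}

int main(void) {
    float w[N_FEATURES] = {0};
    /* One pass over a tiny linearly separable stream, one sample at a time. */
    float xs[4][N_FEATURES] = {{1,1,0},{1,0,1},{-1,-1,0},{-1,0,-1}};
    int   ys[4] = {1, 1, -1, -1};
    for (int t = 0; t < 4; t++)
        pa_update(w, xs[t], ys[t]);
    printf("w = [%.3f %.3f %.3f]\n", w[0], w[1], w[2]);
    return 0;
}
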
“…CL has already been explored for classic server and parallel architectures, but has only recently been applied to resource-constrained platforms. As of now, there already exist some well-performing strategies and frameworks, such as TinyTL [13], Progress & Compress [14], TinyOL [15], and Train++ [16]. CL has already been successfully applied in a domain of interest for our study: image classification.…”
Section: Related Work
confidence: 99%