2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892356

Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors

Abstract: Tiny machine learning (TinyML) in IoT systems exploits MCUs as edge devices for data processing. However, traditional TinyML methods can only perform inference and are limited to static environments or fixed sets of classes. Real-world scenarios usually involve dynamic environments, where context drift makes the original neural model no longer suitable. As a result, pre-trained models lose accuracy and reliability over their lifetime because the recorded data slowly becomes obsolete or new patterns appear. Continual …

Cited by 3 publications (2 citation statements) · References 21 publications
“…Another effective approach is to use dynamic architectures that can adapt their structure to accommodate new tasks. For instance, in [149], a regularization-based approach for an IoT scenario is presented, in which MCUs are exploited as edge devices for data processing on two tasks: gesture recognition based on accelerometer data and image classification.…”
Section: Continual Learning (mentioning; confidence: 99%)
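The statement above refers to a regularization-based continual-learning scheme. As a minimal sketch of the general idea, not the cited paper's exact algorithm, the update below adds an EWC-style penalty that pulls each weight toward its value after the previous task, scaled by a per-parameter importance estimate; all names here (`regularized_sgd_step`, `fisher`, `w_anchor`, `lambda`) are illustrative assumptions.

```c
#include <stddef.h>

/* EWC-style regularized SGD step (illustrative sketch, not the cited
 * paper's exact method). w_anchor holds the weights learned on the
 * previous task; fisher[i] estimates how important parameter i was
 * for that task, so important weights are harder to move. */
void regularized_sgd_step(float *w, const float *grad,
                          const float *w_anchor, const float *fisher,
                          size_t n, float lr, float lambda)
{
    for (size_t i = 0; i < n; ++i) {
        /* task-loss gradient plus a quadratic penalty on drifting
         * away from the previous task's solution */
        float g = grad[i] + lambda * fisher[i] * (w[i] - w_anchor[i]);
        w[i] -= lr * g;
    }
}
```

With `lambda = 0` this reduces to plain SGD; larger values trade plasticity on the new task for retention of the old one.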
“…Up to now, universal frameworks enabling full-network fine-tuning/training on MCUs [76]–[78] are still scarce, and those that exist are mostly dedicated to vision tasks based on 2D-Convolutional Neural Network (CNN) architectures. Compared with fine-tuning the whole network, on-device transfer learning (ODTL) [26], [80], [81], which selectively updates only specific layers, e.g., the last dense layer, offers a preferable solution that better balances resource-friendliness and performance gains. For example, [49] reports an average performance loss of only 1.5% when shifting from full-network training to partial transfer learning (dense layers only) for the personalization task on the WISDM data set, while saving 70.4% of latency and 36.2% of the memory footprint during the learning process.…”
Section: B. Gradient Descent-based On-device Continuous Learning (mentioning; confidence: 99%)
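To make the statement above concrete, here is a minimal, hypothetical sketch of the last-dense-layer update that ODTL performs on-device: the frozen backbone emits a feature vector, and only the classifier's weights and biases are trained with SGD using the softmax cross-entropy gradient (dL/dz_c = p_c − y_c). Function and parameter names are assumptions, not an API from the cited works.

```c
#include <stddef.h>

/* On-device transfer learning sketch: only the final dense layer is
 * trained; the backbone producing `feat` stays frozen. Uses the
 * softmax cross-entropy gradient dL/dz_c = prob[c] - 1{c == label}. */
void dense_layer_sgd_step(float *W, float *b,       /* W: [n_classes * n_feat] */
                          const float *feat,        /* frozen backbone features */
                          const float *prob,        /* softmax outputs for feat */
                          size_t label,             /* ground-truth class index */
                          size_t n_classes, size_t n_feat, float lr)
{
    for (size_t c = 0; c < n_classes; ++c) {
        float dz = prob[c] - (c == label ? 1.0f : 0.0f);
        b[c] -= lr * dz;                            /* bias update */
        for (size_t j = 0; j < n_feat; ++j)
            W[c * n_feat + j] -= lr * dz * feat[j]; /* weight update */
    }
}
```

Updating only this layer keeps each step at O(n_classes × n_feat) multiply-accumulates and avoids storing backbone activations for backpropagation, which is plausibly where the latency and memory savings quoted above come from.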