2021
DOI: 10.1145/3448112
SelfHAR: Improving Human Activity Recognition through Self-training with Unlabeled Data

Abstract: Machine learning and deep learning have shown great promise in mobile sensing applications, including Human Activity Recognition. However, the performance of such models in real-world settings largely depends on the availability of large datasets that capture diverse behaviors. Recently, studies in computer vision and natural language processing have shown that leveraging massive amounts of unlabeled data enables performance on par with state-of-the-art supervised models. In this work, we present SelfHAR…

Cited by 64 publications (22 citation statements)
References 53 publications (67 reference statements)
“…This entails transfer learning, a concept that improves a classifier in one domain (pretext task) using more readily obtainable data and then applies this acquired knowledge to another domain (downstream task) [24]. For instance, Tang et al [25] utilized a teacher-student self-training model to capture information from a large-scale unlabeled dataset of wearable and mobile sensing data, which was subsequently transferred to seven different datasets with varying sensor types, populations, and protocols. Similarly, Spathis et al [26] used activity accelerometer sensor data as input to forecast heart rate and transferred the learned representations to capture physiologically meaningful and personalized information using linear classifiers.…”
Section: Introduction
confidence: 99%
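The teacher-student self-training scheme attributed to Tang et al [25] reduces to a few steps: train a teacher on the small labeled set, pseudo-label a large unlabeled pool, keep only confident predictions, and retrain a student on the combined data. The sketch below illustrates that idea only; the synthetic data, 32-dimensional features, 0.8 confidence cutoff, and choice of scikit-learn classifier are all assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch of teacher-student self-training for HAR.
# All data, shapes, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Assumed toy data: 200 labeled windows and 2,000 unlabeled windows,
# each summarized by 32 features extracted from accelerometer signals.
X_labeled = rng.normal(size=(200, 32))
y_labeled = rng.integers(0, 6, size=200)        # e.g. 6 activity classes
X_unlabeled = rng.normal(size=(2000, 32))

# 1. Train a teacher model on the small labeled dataset.
teacher = RandomForestClassifier(n_estimators=100, random_state=0)
teacher.fit(X_labeled, y_labeled)

# 2. Pseudo-label the unlabeled pool; keep only confident predictions
#    (the 0.8 cutoff is an assumed hyperparameter, not from the paper).
pseudo = teacher.predict(X_unlabeled)
confidence = teacher.predict_proba(X_unlabeled).max(axis=1)
keep = confidence >= 0.8
X_pseudo, y_pseudo = X_unlabeled[keep], pseudo[keep]

# 3. Train a student on labeled + pseudo-labeled data; its learned
#    representation could then be transferred to other HAR datasets.
student = RandomForestClassifier(n_estimators=100, random_state=0)
student.fit(np.vstack([X_labeled, X_pseudo]),
            np.concatenate([y_labeled, y_pseudo]))
```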
“…In machine learning terms, the HAR problem is treated as a classification task (Brishtel et al, 2023), in which the method must assign each input data point to one of a predefined set of categories (e.g. human actions) (Tang et al, 2021). Classification is a supervised task: the methods learn during the training phase by linking the input dataset to its respective ground-truth label, which denotes the activity taking place when the data was recorded.…”
Section: Introduction
confidence: 99%
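As a concrete illustration of this supervised framing, a minimal sketch might segment a raw tri-axial accelerometer stream into fixed-length windows, derive simple statistical features, assign each window the majority ground-truth label, and fit an off-the-shelf classifier. The synthetic data, window length, features, and scikit-learn model below are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch: HAR as supervised window classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Assumed toy stream: tri-axial accelerometer samples with a
# per-sample activity ID as ground truth (4 hypothetical classes).
stream = rng.normal(size=(10_000, 3))
labels = rng.integers(0, 4, size=10_000)

WIN = 100                                   # assumed window length
n_win = len(stream) // WIN
windows = stream[: n_win * WIN].reshape(n_win, WIN, 3)

# Simple hand-crafted features per window: mean and std of each axis.
feats = np.hstack([windows.mean(axis=1), windows.std(axis=1)])

# Majority vote within each window provides its ground-truth label.
win_labels = np.array([
    np.bincount(labels[i * WIN:(i + 1) * WIN]).argmax()
    for i in range(n_win)
])

clf = LogisticRegression(max_iter=1000).fit(feats, win_labels)
print(clf.score(feats, win_labels))         # training accuracy
```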
“…Many surrogate tasks [17, 18, 50–53] have been developed to learn spatial and temporal relations from images, videos, and text, proving invaluable in extracting high-level representations that aid downstream transfer and semi-supervised learning models. However, applying these paradigms to mobile sensing poses challenges due to the unique characteristics of sensor data and the limited computational resources of mobile devices. Recent self-supervised learning initiatives [1, 54–59] have demonstrated success in extracting useful features from unlabeled sensor data, thereby enhancing downstream task-specific model performance. For example, the encoder models from TPN [1] and SelfHAR [55] represent early efforts to contextualize features from unlabeled sensor data, enabling task-specific models to achieve improved performance even with limited labeled data. Nevertheless, these models often still require some labeled data for training downstream classifiers and may risk overfitting to specific domains, limiting their generalizability for target users or environments that lack any labeled data.…”
Section: Deep Learning for Mobile Sensing
confidence: 99%
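One common family of pretext tasks behind TPN and SelfHAR is transformation discrimination: apply a known signal transformation (e.g. jittering, scaling, flipping) to each unlabeled window and train a network to recognize which transformation was applied, so that its hidden layers learn transferable features without any activity labels. The sketch below illustrates the idea on synthetic data; the specific transformations, network size, and use of scikit-learn's MLPClassifier are simplifying assumptions rather than either paper's actual setup.

```python
# Minimal sketch of a transformation-discrimination pretext task.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
windows = rng.normal(size=(1000, 100, 3))   # unlabeled sensor windows

# Assumed transformations; each is labeled by its index below.
def jitter(w):
    return w + rng.normal(scale=0.1, size=w.shape)

def scale(w):
    return w * rng.uniform(0.7, 1.3)

def flip(w):
    return -w

transforms = [jitter, scale, flip]

# Build the pretext dataset: the "label" of each transformed window
# is simply which transformation was applied to it.
X, y = [], []
for w in windows:
    t = int(rng.integers(len(transforms)))
    X.append(transforms[t](w).ravel())
    y.append(t)

# The hidden layers act as an encoder; after pretext training their
# representation could be reused for the labeled downstream HAR task.
encoder = MLPClassifier(hidden_layer_sizes=(128, 64),
                        max_iter=50, random_state=0)
encoder.fit(np.array(X), np.array(y))
```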