2020
DOI: 10.48550/arxiv.2012.07963
Preprint

Invariant Feature Learning for Sensor-based Human Activity Recognition

Abstract: Wearable sensor-based human activity recognition (HAR) has been a research focus in the field of ubiquitous and mobile computing for years. In recent years, many deep models have been applied to HAR problems. However, deep learning methods typically require a large amount of data for models to generalize well. Significant variances caused by different participants or diverse sensor devices limit the direct application of a pre-trained model to a subject or device that has not been seen before. To address these…

Cited by 1 publication (1 citation statement, published 2022)
References: 38 publications
“…In the work by He et al. [40], triplets were sampled based on a hierarchical strategy for fine-grained image classification, where a convolutional neural network was trained to extract low-level features. Inter-class subject variability may also be approached as a domain adaptation problem, as in the work by Hao et al. [41], where a domain-invariant deep feature extractor is combined with task-specific networks for the domains of subjects and devices.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
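The arrangement the citation statement attributes to Hao et al. [41] — a feature extractor shared across domains with task-specific networks on top — is commonly realized with adversarial training through a gradient reversal layer (GRL). The PyTorch sketch below is illustrative only: the layer sizes, the choice of subject identity as the domain label, and the GRL mechanism itself are assumptions for the sake of a concrete example, not details taken from [41].

```python
# Minimal sketch of a domain-invariant HAR model: shared 1-D CNN features,
# an activity head (main task), and a domain head trained through a gradient
# reversal layer. All names and sizes are hypothetical, not the architecture of [41].
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on the
    backward pass, so the shared extractor learns to confuse the domain head."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class InvariantHAR(nn.Module):
    def __init__(self, in_channels=6, n_activities=6, n_subjects=10, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Shared feature extractor over the sensor time axis.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Task-specific head: activity recognition.
        self.activity_head = nn.Linear(64, n_activities)
        # Domain head (here: subject identity), fed through the GRL.
        self.domain_head = nn.Linear(64, n_subjects)

    def forward(self, x):  # x: (batch, channels, time)
        z = self.features(x)
        act_logits = self.activity_head(z)
        dom_logits = self.domain_head(GradReverse.apply(z, self.lambd))
        return act_logits, dom_logits

# Usage: minimize activity loss plus domain loss; the reversed gradient pushes
# the shared features to drop subject- or device-specific information.
model = InvariantHAR()
x = torch.randn(8, 6, 128)  # e.g. 8 windows, 6 sensor axes, 128 samples each
act_logits, dom_logits = model(x)
```

Under this setup, the domain head is trained to predict the domain while the reversed gradient trains the extractor to make that prediction fail, which is one standard way to obtain features that transfer to unseen subjects or devices.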