Proceedings of the 2016 ACM International Symposium on Wearable Computers
DOI: 10.1145/2971763.2971764
Deep convolutional feature transfer across mobile activity recognition domains, sensor modalities and locations

Cited by 106 publications (68 citation statements)
References 10 publications
“…activity recognition (HAR), 1D convolutional and recurrent neural networks trained on raw labeled signals significantly improve the detection rate over traditional methods [20,44,68,72,73]. Despite the recent advances in the field of HAR, learning representations from a massive amount of unlabeled data still presents a significant challenge.…”
Section: A. Saeed et al.
confidence: 99%
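As an illustration of the kind of model the statement above refers to, here is a minimal sketch (not the architecture from the cited paper) of a 1D convolutional classifier over raw tri-axial accelerometer windows; the window length (128 samples), channel count (3) and number of activity classes (6) are assumptions for the example.

```python
import torch
import torch.nn as nn

class Simple1DConvHAR(nn.Module):
    """Toy 1D CNN over raw sensor windows; sizes are illustrative only."""
    def __init__(self, in_channels=3, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global pooling over the time axis
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):              # x: (batch, channels, window_length)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

# Example: a batch of 8 raw accelerometer windows -> logits of shape (8, 6)
logits = Simple1DConvHAR()(torch.randn(8, 3, 128))
```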
“…We also demonstrate that the learned representations from a different (but related) unlabeled data source can be successfully transferred to improve the performance of diverse tasks even in the case of semi-supervised learning. In terms of transfer learning, our approach also differs significantly from some earlier attempts [44,69] that were concerned with features transferability from a fully-supervised model learned from inertial measurement units data, as our approach utilizes widely available smartphones and does not require labeled data. Finally, the proposed technique is also different from previously studied unsupervised pre-training methods such as autoencoders [37], restricted Boltzmann machines [55] and sparse coding [9] as we employ an end-to-end (self) supervised learning paradigm on multiple surrogate tasks to extract features.…”
Section: Determining Representational Similarity
confidence: 99%
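The feature-transfer setting discussed in the excerpt above can be sketched as follows: a convolutional feature extractor trained on a source domain is frozen and only a new classifier head is fit on the target task. This sketch reuses the hypothetical Simple1DConvHAR class defined above; the checkpoint path and the 4-class target task are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Assumes Simple1DConvHAR from the sketch above, pretrained on source-domain data.
source_model = Simple1DConvHAR(num_classes=6)
# source_model.load_state_dict(torch.load("source_weights.pt"))  # hypothetical checkpoint

# Freeze the transferred convolutional features; only the new head will be trained.
for p in source_model.features.parameters():
    p.requires_grad = False

# Replace the classifier for a (hypothetical) 4-class target activity set.
source_model.classifier = nn.Linear(64, 4)
optimizer = torch.optim.Adam(source_model.classifier.parameters(), lr=1e-3)
```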
“…Therefore, the features are likely to be more domain-invariant and tend to perform better for cross-domain tasks. A recent work evaluated deep models for cross-domain HAR [28], which provides some experience for future research on this area. There are still many open problems for deep learning based CDAR.…”
Section: Related Work 2.1 Activity Recognition
confidence: 99%
“…Therefore, the features are likely to be more domain-invariant and tend to perform better for cross-domain tasks. A recent work evaluated deep models for cross-domain HAR [23], which provides some experience for future research on this area. There are still many open problems for deep learning based CDAR.…”
Section: A. Activity Recognition
confidence: 99%
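Cross-domain HAR evaluations of the kind the last two excerpts mention are typically run leave-one-domain-out: train on all but one domain (sensor location, modality or user) and test on the held-out one. A minimal, library-agnostic sketch, with the domain labels and the train/eval callables left as placeholders:

```python
def leave_one_domain_out(domains, train_fn, eval_fn):
    """Train on all domains except one, then evaluate on the held-out domain."""
    scores = {}
    for held_out in domains:
        source = [d for d in domains if d != held_out]
        model = train_fn(source)                      # fit on the source domains
        scores[held_out] = eval_fn(model, held_out)   # score on the target domain
    return scores

# Example usage with hypothetical domain labels (e.g. on-body sensor positions):
# results = leave_one_domain_out(["wrist", "hip", "ankle"], train_fn, eval_fn)
```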