2017
DOI: 10.18494/sam.2017.1546
Activity Recognition Using Transfer Learning

Abstract: The technology for human activity recognition has become an active research topic in recent years, as it has many potential applications, such as surveillance systems, healthcare systems, and human-computer interaction. In research on activity recognition, supervised machine learning approaches have been widely used. However, the cost of collecting labeled sensor data in new environments is high. Furthermore, these methods do not work well in a cross-domain environment using conventi…

Cited by 6 publications (3 citation statements)
References 10 publications
“…In Table A.5 (“Appendix 1”), the sensor datasets are described with attributes such as data source, #factors, sensor location, and activity type. It includes wearable sensor-based datasets (Alsheikh et al 2016; Asteriadis and Daras 2017; Zhang et al 2012; Chavarriaga et al 2013; Munoz-Organero 2019; Roggen et al 2010; Qin et al 2019), as well as smart-device sensor-based datasets (Ravi et al 2016; Cui and Xu 2013; Weiss et al 2019; Miu et al 2015; Reiss and Stricker 2012a, b; Lv et al 2020; Gani et al 2019; Stisen et al 2015; Röcker et al 2017; Micucci et al 2017). Apart from the datasets mentioned in Table A.5, there are a few more datasets worth mentioning, such as the Kasteren dataset (Kasteren et al 2011; Chen et al 2017), which is also very popular. (2) Vision-based HAR: Devices for collecting 3D data are CCTV cameras (Koppula and Saxena 2016; Devanne et al 2015; Zhang and Parker 2016; Li et al 2010; Duan et al 2020; Kalfaoglu et al 2020; Gorelick et al 2007; Mahadevan et al 2010), depth cameras (Cippitelli et al 2016; Gaglio et al 2015; Neili Boualia and Essoukri Ben Amara 2021; Ding et al 2016; Cornell Activity Datasets: CAD-60 & CAD-120 2021), and videos from public domains such as YouTube and Hollywood movie scenes (Gu et al 2018; Soomro et al 2012; Kuehne et al 2011; Sigurdsson et al 2016; Kay et al 2017; Carreira et al 2018; Goyal et al 2017).…”
Section: Critical Discussion
confidence: 99%
“…(25) Chen et al, however, proposed a transfer learning framework based on principal component analysis (PCA) transformation, Gale-Shapley similarity measurement, and Jensen-Shannon divergence (JSD) feature mapping. (26) Semi-supervised learning makes use of only a small amount of labeled training data and a substantial amount of unlabeled training data. (27) For example, self-training, co-training, (28) and En-Co-Training are typical semi-supervised techniques, whereas a special case of semi-supervised learning, (29) i.e., active learning, mainly focuses on labeling the most profitable instances; however, some human intervention is still necessary to obtain the small amount of labeled data.…”
Section: Related Work
confidence: 99%
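The Jensen-Shannon divergence mentioned in the statement above can be illustrated for two discrete feature distributions. This is a minimal sketch of the JSD itself, not of Chen et al's feature-mapping framework; the distributions `p` and `q` are hypothetical examples.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence D(p || q); terms with p_i == 0 contribute 0.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    # Jensen-Shannon divergence: the symmetrized KL divergence of p and q
    # against their mixture m; with log base 2 it is bounded in [0, 1].
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions give 0; disjoint distributions give the maximum, 1.
print(jsd([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))
print(jsd([1.0, 0.0], [0.0, 1.0]))
```

Because the JSD is symmetric and bounded, it is a common choice for comparing feature distributions across source and target domains in transfer learning.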
“…Precision, recall, and F-measure are usually used in a binary classification setting and are computed from three counts: true positives (TP), false positives (FP), and false negatives (FN). (26) In our work, nonzero labels (i.e., the 17 target activities) are treated as positive, and zero labels (i.e., transition activities and RP) are treated as negative. In this manner, the multilabel problem in this work is reduced to a binary labeling problem.…”
Section: OLA
confidence: 99%
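The nonzero-as-positive reduction described above can be sketched as follows. The label sequences are made-up examples, and the tie to the paper's actual 17-activity label set is only assumed; class identity is deliberately ignored once labels are mapped to positive/negative.

```python
def binary_metrics(y_true, y_pred):
    """Score a multilabel sequence as binary: nonzero = positive, zero = negative.

    Under this reduction, any nonzero prediction at a nonzero label counts as
    a true positive, regardless of which activity class was predicted.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t != 0 and p != 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p != 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t != 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical sequence: 0 marks transitions/rest, nonzero marks target activities.
print(binary_metrics([0, 3, 5, 0, 2], [0, 3, 0, 4, 2]))
```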