Proceedings of the 2019 International Conference on Computer, Network, Communication and Information Systems (CNCI 2019)
DOI: 10.2991/cnci-19.2019.95
Human Action Recognition on Cellphone Using Compositional Bidir-LSTM-CNN Networks

Abstract: Recently, multimodal and high-dimensional sensor data have become prone to problems such as artificial error and time-consuming acquisition processes, especially in supervised human activity recognition. Therefore, this study proposes an activity recognition framework called compositional Bidir-LSTM-CNN Networks, which automatically extracts features from raw data using an optimized Convolutional Neural Network and further captures dynamic temporal features through a Bidirectional Long Short-Term Memory network. …
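The abstract names the two stages but not their configuration. The sketch below is one plausible wiring of such a CNN-then-Bidir-LSTM pipeline for windowed tri-axial accelerometer data; the window length, channel count, layer sizes, and number of classes are assumptions for illustration, not the authors' reported architecture.

```python
# Minimal sketch (assumed shapes and hyperparameters, not the paper's exact setup):
# a 1D CNN extracts local features from raw accelerometer windows, then a
# bidirectional LSTM models temporal dependencies before classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6       # assumed number of activity classes
WINDOW_LEN = 128      # assumed samples per sliding window
NUM_CHANNELS = 3      # tri-axial acceleration (x, y, z)

model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, NUM_CHANNELS)),
    # CNN stage: automatic feature extraction from the raw signal
    layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    # Bidirectional LSTM stage: dynamic temporal feature modelling
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```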

Cited by 8 publications (3 citation statements)
References 13 publications
“…In [14], [322], [323], [324], the tri-axial acceleration data was fed to different CNN architectures for HAR. Wang et al [325] proposed a framework consisting of CNN and Bi-LSTM networks to extract spatial and temporal features from the raw acceleration data. Unlike the above-mentioned works, Lu et al [326] utilized a modified Recurrence Plot (RP) [327] to transform the raw triaxial acceleration data into color images, which were then fed to a ResNet for HAR.…”
Section: Acceleration Modality
confidence: 99%
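For intuition about the recurrence-plot transform mentioned in the statement above, the sketch below computes a standard (unmodified) recurrence plot from a tri-axial acceleration window. It does not reproduce Lu et al.'s modified RP or their parameters, and the threshold value is an arbitrary placeholder; stacking one per-axis plot per color channel before a ResNet is likewise only an assumed way to obtain a color image, not a detail taken from the cited work.

```python
# Minimal sketch of a standard recurrence plot (RP) over an acceleration window;
# an illustration of the general idea, not Lu et al.'s modified variant.
import numpy as np

def recurrence_plot(signal: np.ndarray, eps: float = 0.5) -> np.ndarray:
    """signal: (T, C) window of samples; returns a (T, T) binary recurrence plot."""
    # Pairwise Euclidean distances between all pairs of time steps
    diffs = signal[:, None, :] - signal[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Point (i, j) is a recurrence when the states at times i and j are within eps
    return (dists <= eps).astype(np.uint8)

# Example: 128-sample window of synthetic tri-axial data
window = np.random.randn(128, 3).astype(np.float32)
rp_image = recurrence_plot(window)          # (128, 128) image-like array
print(rp_image.shape, rp_image.mean())
```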
“…Moreover, IMU sensor data is a good choice for HAR, due to its robustness against viewpoint, occlusion and background variations. Many works in the literature, e.g., Wang et al (2019a), have proposed wearable sensor-based solutions for HAR. However, the accuracy of IMU sensor-based methods is sensitive to the placement on the human body Mukhopadhyay (2014).…”
Section: Related Work
confidence: 99%
“…Obviously, there is a huge structural divergence between IMU sensor data and vision-sensor data. Since IMU sensor data are one-dimensional time-series signals, most of the previous literature utilizes 1D-CNN or LSTM networks to extract spatial and temporal features from raw IMU sensor data Steven Eyobu & Han (2018); Panwar et al (2017); Wang et al (2019a). Vision-sensor activity data, however, are usually images or videos with two or more dimensions, which are suited to 2D-CNN or 3D-CNN visual feature extraction Simonyan & Zisserman (2014); Karpathy et al (2014); Sun et al (2017).…”
Section: Introduction
confidence: 99%
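As a concrete view of the dimensionality gap described in the statement above, the short sketch below contrasts the tensor shape a 1D convolution consumes for an IMU window with the shape a 3D convolution consumes for a video clip; all shapes and layer sizes are assumed purely for illustration.

```python
# Minimal sketch of the structural divergence between IMU and vision inputs
# (assumed shapes only): IMU windows are 1-D time series handled by Conv1D/LSTM,
# while video clips are spatio-temporal volumes handled by Conv3D.
import tensorflow as tf
from tensorflow.keras import layers

imu_window = tf.zeros((1, 128, 6))            # (batch, time, IMU channels: acc + gyro)
video_clip = tf.zeros((1, 16, 112, 112, 3))   # (batch, frames, height, width, RGB)

imu_features = layers.Conv1D(64, kernel_size=5, activation="relu")(imu_window)
video_features = layers.Conv3D(64, kernel_size=(3, 3, 3), activation="relu")(video_clip)

print(imu_features.shape)    # (1, 124, 64)
print(video_features.shape)  # (1, 14, 110, 110, 64)
```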