2018 3rd International Conference on Mechanical, Control and Computer Engineering (ICMCCE)
DOI: 10.1109/icmcce.2018.00052
Human Activity Recognition with Smartphone Inertial Sensors Using Bidir-LSTM Networks

Cited by 62 publications (33 citation statements)
References 10 publications
“…After 10 repetitions, the average accuracy of the open database was 95.08%, and the mean accuracy of the database we recorded was 87.88%.
[20]              93.70%
LSTM-CNN [28]     95.78%
Bidir-LSTM [31]   93.79%
EHARS [32]        93.92%
CNN-LSTM [33]     92.13%
CNN-LSTM [34]     93.40%
Ours              95.99%…”
Section: G. K-fold Cross-validation in Both Open Dataset and Data Thi…
confidence: 99%
“…This study used only CNN and could obtain similar or even superior accuracy. Reference [31] used bidirectional LSTM, and it took 50,000 iterations to converge. By contrast, the present study only required 600 iterations to obtain superior accuracy.…”
Section: H. The Comparisons of Several Models of the Open Dataset
confidence: 99%
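The bidirectional LSTM discussed in this statement processes each fixed-length window of inertial data in both temporal directions before classification. The following is a minimal sketch of that style of model, assuming UCI-HAR-style windows (128 timesteps, 9 inertial channels, 6 activity classes); the layer sizes and training settings are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a bidirectional-LSTM classifier for smartphone HAR.
# Window shape (128 timesteps x 9 channels) and 6 classes follow the common
# UCI HAR setup; units, dropout and epochs are illustrative assumptions.
import tensorflow as tf

def build_bidir_lstm(timesteps=128, channels=9, n_classes=6, units=64):
    inputs = tf.keras.Input(shape=(timesteps, channels))
    # The Bidirectional wrapper runs one LSTM forward and one backward in time
    # and concatenates their final outputs.
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units))(inputs)
    x = tf.keras.layers.Dropout(0.5)(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (hypothetical arrays of shape (N, 128, 9) and integer labels (N,)):
# model = build_bidir_lstm()
# model.fit(train_windows, train_labels, epochs=30, batch_size=64)
```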
“…Generally, a stacked autoencoder provides a compact feature representation from continuous unlabelled sensor streams, enabling a robust and seamless implementation of a human activity recognition system [34]. In addition to the methods mentioned above, deep learning methods also include Recurrent Neural Networks (RNN) [50, 51], Long Short-Term Memory (LSTM) [52], Deep Belief Networks (DBN) [53], and so on. In Table 1, some references that utilized deep learning methods are listed.…”
Section: Related Work
confidence: 99%
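As a rough illustration of the stacked-autoencoder idea in the quote above, the sketch below pre-trains a two-layer encoder on flattened, unlabelled sensor windows and reuses the encoder as a compact feature extractor; the input dimension (128 x 9 flattened) and layer widths are assumptions for illustration, not taken from the cited work.

```python
# Hedged sketch of a stacked (two-layer) autoencoder for unlabelled sensor
# windows; sizes are illustrative assumptions.
import tensorflow as tf

def build_stacked_autoencoder(input_dim=128 * 9, hidden=256, code_dim=64):
    inputs = tf.keras.Input(shape=(input_dim,))
    # Encoder: two dense layers ending in a low-dimensional code.
    h = tf.keras.layers.Dense(hidden, activation="relu")(inputs)
    code = tf.keras.layers.Dense(code_dim, activation="relu")(h)
    # Decoder mirrors the encoder and reconstructs the input window.
    h_dec = tf.keras.layers.Dense(hidden, activation="relu")(code)
    outputs = tf.keras.layers.Dense(input_dim, activation="linear")(h_dec)

    autoencoder = tf.keras.Model(inputs, outputs)
    encoder = tf.keras.Model(inputs, code)  # reused as a feature extractor
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

# Usage (hypothetical array of flattened windows, shape (N, 1152)):
# autoencoder, encoder = build_stacked_autoencoder()
# autoencoder.fit(unlabelled_windows, unlabelled_windows, epochs=20)
# features = encoder.predict(labelled_windows)  # compact features for a classifier
```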
“…
Method                                              Accuracy (%)
Dynamic time warping [23]                           89.00
Handcrafted features + SVM [24]                     89.00
Convolutional neural network [25]                   90.89
Hidden Markov models [26]                           91.76
PCA + SVM [27]                                      91.82
Stacked autoencoders + SVM [28]                     92.16
Hierarchical continuous HMM [28]                    93.18
Bidir-LSTM network [29]                             93.79
A multi-layer parallel LSTM network [30]            94.34
Convolutional neural network [31]                   94.79
Convolutional neural network [16]                   95.31
Fully convolutional network [32]                    96.32
Genetic algorithm to optimize feature vector [33]   96.38
Bidirectional LSTM network [34]                     92.67
CNN [35]                                            94.00
Hierarchical deep learning model [36]               97.95
Our method (LDA + SVM)                              …”
Section: Recognition Accuracy (%)
confidence: 99%
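The "Our method (LDA + SVM)" entry at the bottom of the table reads as a two-stage pipeline: supervised dimensionality reduction with Linear Discriminant Analysis followed by an SVM classifier over precomputed window features. The scikit-learn sketch below is a hedged approximation of that kind of pipeline; the kernel and regularisation settings are illustrative assumptions, not the cited paper's configuration.

```python
# Sketch of an LDA + SVM pipeline over precomputed HAR window features.
# Hyperparameters are illustrative assumptions.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

clf = make_pipeline(
    StandardScaler(),                          # normalise each feature
    LinearDiscriminantAnalysis(),              # project to at most (n_classes - 1) dims
    SVC(kernel="rbf", C=10.0, gamma="scale"),  # RBF-kernel SVM classifier
)

# Usage (hypothetical feature matrices X_* and integer labels y_*):
# clf.fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```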