2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
DOI: 10.1109/globalsip.2018.8646367

Human Activity Classification Incorporating Egocentric Video and Inertial Measurement Unit Data

Cited by 7 publications (8 citation statements)
References 18 publications
“…Our recognition result achieved 49.89% for action recognition using only IMU data from the left and right hands. This result is better than [25] by around 5%, considering that we recognize 9 actions from only 2 sensors, while the method in [25] was used to recognize only 6 actions with 4 IMU sensors; they use an LSTM for feature extraction, whereas we use a statistical feature extraction method. On the other hand, [34] achieved a 62% F-measure recognition score using only accelerometer data captured from IMU sensors attached to non-human objects.…”
Section: Results (mentioning)
confidence: 91%
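The statement above contrasts hand-crafted statistical features with learned (LSTM) features for IMU-based action recognition. As a rough illustration of the hand-crafted route, the sketch below computes simple per-channel statistics over a sliding window of IMU samples; the choice of statistics, the 128-sample window, and the 12-channel layout are assumptions for the example, not details from either cited paper.

```python
import numpy as np

def statistical_features(window):
    """Compute per-channel summary statistics for one IMU window.

    window: (T, C) array of T samples over C accelerometer/gyroscope channels.
    Returns a 1-D feature vector (mean, std, min, max, RMS per channel).
    """
    feats = [
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
        np.sqrt((window ** 2).mean(axis=0)),  # root mean square per channel
    ]
    return np.concatenate(feats)

# Hypothetical example: two wrist IMUs, each with a 3-axis accelerometer
# and a 3-axis gyroscope, giving 12 channels per 128-sample window.
rng = np.random.default_rng(0)
window = rng.standard_normal((128, 12))
features = statistical_features(window)  # 12 channels x 5 statistics = 60 values
print(features.shape)                     # (60,)
```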
“…However, for real-life scenarios, a realistic and manageable number of modalities should be used. Lu and Velipasalar [25] used an LSTM to classify actions using four IMU sensors, corresponding to 36 components, together with egocentric video from the CMU Multimodal Activity (CMU-MMAC) database [41]. Visual and audio sensors were used by [5] for activity recognition.…”
Section: Related Work (mentioning)
confidence: 99%
“…In real-world circumstances, however, it is preferable to have a reasonable and balanced number of modalities. Lu et al. [31] used an LSTM to categorize activities utilizing four IMU sensors and egocentric video from the CMU Multimodal Activity (CMU-MMAC) database [32]. For activity recognition, [33] used visual and audio sensors.…”
Section: Related Work and Contributions (mentioning)
confidence: 99%
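The two preceding statements describe, at a high level, the LSTM-based approach of Lu and Velipasalar [25], which classifies actions from four IMU sensors (36 components) alongside egocentric video. Purely as an illustrative sketch, the snippet below shows an LSTM classifier over windowed multi-IMU streams; the 36-component input, hidden size, class count, and window length are assumptions for the example, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class IMULSTMClassifier(nn.Module):
    """Sketch: LSTM over a window of concatenated IMU channels, then a linear classifier."""

    def __init__(self, num_channels=36, hidden_size=128, num_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_channels, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                  # x: (batch, time, num_channels)
        _, (h, _) = self.lstm(x)           # h: (num_layers, batch, hidden_size)
        return self.head(h[-1])            # class logits from the last hidden state

model = IMULSTMClassifier()
window = torch.randn(8, 100, 36)           # batch of 8 windows, 100 time steps, 36 IMU components
logits = model(window)                     # shape: (8, 6)
```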
“…Twenty US states, including California, Texas, and New York, have recently passed legislation to enable testing and deployment of autonomous vehicles. Since the downstream control stages depend heavily on the outputs of the object detector and neural networks [1][2][3], object detection performance is critical to the security of autonomous driving. However, recent research has shown that neural-network-based object detectors are vulnerable to adversarial examples, inputs that are carefully crafted to fool the model [4][5][6][7].…”
Section: Introduction (mentioning)
confidence: 99%