2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
DOI: 10.1109/percomw.2019.8730690

Vision and Acceleration Modalities: Partners for Recognizing Complex Activities

Abstract: Wearable devices have been used widely for human activity recognition in the field of pervasive computing. One big area in this research is the recognition of activities of daily living, where especially inertial and interaction sensors like RFID tags and scanners have been used. An issue that may arise when using interaction sensors is a lack of certainty. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occ…

Cited by 5 publications (6 citation statements)
References 26 publications

“…In [119]-[121], an action recognition approach was introduced that improves motion-based action recognition with egocentric vision. In [120], [121], inertial data were collected from a smart-watch and video data were collected from a pair of smart-glasses to recognize actions. The inertial data were used to characterize the forearm movement pattern, whereas the egocentric video data were used to characterize objects.…”
Section: B. Video and Inertial Fusion
confidence: 99%
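The statement above describes using smart-watch inertial data to capture forearm movement and egocentric video to capture object information. As a rough illustration of the inertial side, here is a minimal Python sketch of window-level accelerometer features of the kind commonly used to characterize forearm movement; the window length, sampling rate, and feature set are assumptions for illustration, not taken from the cited papers.

```python
# Hedged sketch: simple statistical features from a smart-watch accelerometer
# window. Window length, sampling rate, and feature choice are assumptions.
import numpy as np

def accel_window_features(window):
    """window: (n_samples, 3) array of x/y/z acceleration."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    energy = (window ** 2).mean(axis=0)
    magnitude = np.linalg.norm(window, axis=1)  # per-sample acceleration magnitude
    return np.concatenate([mean, std, energy, [magnitude.mean(), magnitude.std()]])

# Example: a 2 s window at an assumed 50 Hz sampling rate (100 samples).
window = np.random.default_rng(1).standard_normal((100, 3))
print(accel_window_features(window).shape)  # (11,)
```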
“…In [53], [120], [121], both feature-level fusion and decision-level fusion were examined, with decision-level fusion achieving higher accuracy. For the feature-level fusion, after combining the features from the two modalities, a softmax classifier was used to make the final decision.…”
Section: B. Video and Inertial Fusion
confidence: 99%
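To make the contrast in the statement above concrete, here is a minimal Python sketch of feature-level versus decision-level fusion of two modality feature vectors, each ending in a softmax. The feature dimensions, random linear classifiers, and the averaging rule for decision-level fusion are illustrative assumptions, not the setup used in [53], [120], [121].

```python
# Hedged sketch: feature-level vs. decision-level fusion of inertial and
# egocentric-video features. All dimensions and weights are placeholders.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_classes = 10

# Hypothetical per-window feature vectors from the two modalities.
inertial_feat = rng.standard_normal(64)   # e.g., forearm-motion descriptor
video_feat = rng.standard_normal(128)     # e.g., object/appearance descriptor

# Feature-level fusion: concatenate features, then one softmax classifier.
W_fused = rng.standard_normal((64 + 128, n_classes)) * 0.01
fused_probs = softmax(np.concatenate([inertial_feat, video_feat]) @ W_fused)

# Decision-level fusion: one classifier per modality, then combine outputs.
W_inertial = rng.standard_normal((64, n_classes)) * 0.01
W_video = rng.standard_normal((128, n_classes)) * 0.01
p_inertial = softmax(inertial_feat @ W_inertial)
p_video = softmax(video_feat @ W_video)
decision_probs = (p_inertial + p_video) / 2  # simple average; weights could be learned

print("feature-level prediction:", fused_probs.argmax())
print("decision-level prediction:", decision_probs.argmax())
```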
“…An IMU measures motion with an accelerometer and gyroscope and is widely available on popular wearable devices. Prior work leverages the IMU as an extra modality for human action recognition [13,68,69] (e.g., jumping, walking, standing) or as geometric cues for visual-inertial odometry [7,20,71].…”
Section: Introduction
confidence: 99%
“…This work is an extension of a previous publication [19]. In this extension, we included experiments with more data, a deeper analysis, and a comparison of our work against another system.…”
Section: Introduction
confidence: 99%