2021
DOI: 10.3390/s21186291
The Role of Surface Electromyography in Data Fusion with Inertial Sensors to Enhance Locomotion Recognition and Prediction

Abstract: Locomotion recognition and prediction is essential for real-time human–machine interactive control. The integration of electromyography (EMG) with mechanical sensors could improve the performance of locomotion recognition. However, the potential of EMG in motion prediction is rarely discussed. This paper investigates, for the first time, the effect of surface EMG on locomotion prediction when integrated with inertial data. We collected EMG signals of lower limb muscle groups and linear acceleration data of lower li…
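The abstract describes fusing surface EMG with inertial (acceleration) data to recognize and predict locomotion modes. Below is a minimal, illustrative sketch of such feature-level (early) fusion; the window lengths, the feature set (MAV/RMS/waveform length for EMG, simple statistics for acceleration), and the LDA classifier are assumptions for illustration, not the paper's exact pipeline.

```python
# Illustrative early (feature-level) fusion of sEMG and accelerometer windows.
# Window length, feature choices, and the classifier are assumptions for this
# sketch; the paper's exact pipeline may differ.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(win):
    """Common time-domain sEMG features per channel: MAV, RMS, waveform length."""
    mav = np.mean(np.abs(win), axis=0)
    rms = np.sqrt(np.mean(win ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(win, axis=0)), axis=0)
    return np.concatenate([mav, rms, wl])

def acc_features(win):
    """Simple accelerometer features per axis: mean, standard deviation, range."""
    return np.concatenate([win.mean(axis=0), win.std(axis=0),
                           win.max(axis=0) - win.min(axis=0)])

def fuse(emg_win, acc_win):
    """Early fusion: concatenate the two feature vectors into one."""
    return np.concatenate([emg_features(emg_win), acc_features(acc_win)])

# Toy data: 200 windows, 4 EMG channels (200 samples) and 3 acceleration axes (50 samples).
rng = np.random.default_rng(0)
X = np.array([fuse(rng.standard_normal((200, 4)), rng.standard_normal((50, 3)))
              for _ in range(200)])
y = rng.integers(0, 5, size=200)          # 5 hypothetical locomotion modes
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(X[:3]))
```

For prediction rather than recognition, the same fused feature window would simply be labeled with the upcoming locomotion mode instead of the current one.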

Cited by 18 publications (24 citation statements). References 35 publications.
“…[98], [100], [104], [116], [125], [127], [187], [206], [241], [326], [365], [395], [411], Image & Numerical [62], [75], [119], [126], [167], [313], [331], [353], [405], [410], Audio & Text & Sensor [384], Audio & Text [180], [282], [377], [391], [392], Text & Signal [109], Text & Numerical [304], [349], Sensor & Signal [240], [242], [258], [389], Sensor & Numerical [183], Signal & Numerical [205], [257], [260], [318]. Figure 10 displays the extracted information related to each modality and data type with the links between them.…”
Section: B Task
confidence: 99%
“…A total of 212 articles related to fusion learning were encountered. Of 155 articles, 99 were model-agnostic, where 62 pertained to early [55], [56], [58], [59], [62], [63], [75], [76], [79], [98], [102]-[105], [111], [115], [119], [120], [133], [141], [142], [166], [173], [207], [213], [240], [242], [250], [252], [254], [258], [259], [270], [271], [280], [282], [299], [303], [305]-[307], [313], [320], [324], [326], [330], [334], [337], [347], [349], [357], [359], [364], [367], [381],…”
Section: F Fusion
confidence: 99%
“…b) Motion prediction based on EMG and IMU and experimental process. [185] Copyright 2021, MDPI. …mal motion modes, namely trunk rotation, trunk forward tilt, and scapular elevation, with recognition rates of 91.56%, 91.90%, and 82.62%, respectively.…”
Section: Recovery Period
confidence: 99%
“…The results of these experiments show high accuracy (more than 80% for most predictions), but a problem arises when defining an initial certainty value for the possible next terrain: the manually selected values can increase detection errors (the Stairs Down locomotion mode reached only a 67% detection rate). To overcome the limitation of predicting from only the previous and current states, machine learning techniques for human locomotion, and specifically deep reinforcement learning, have been used to improve detection through repetitive learning (Huang et al., 2011; Peng et al., 2020b; Meng et al., 2021); by using a combination of sensors to view the environment (Zhang et al., 2019), it is possible to adapt to complex environments and motions and to perform on uneven terrains (Song et al., 2021).…”
Section: Future Direction
confidence: 99%
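The passage above points out that manually chosen initial certainty values for the next terrain can hurt detection (e.g., the Stairs Down mode). A minimal sketch of that prior-based prediction idea, assuming a hand-set Markov transition matrix over locomotion modes (all mode names and probabilities here are hypothetical):

```python
# Sketch of prior-based next-mode prediction: a hand-set transition matrix
# (the "initial certainty values" criticized above) is combined with the
# classifier's belief over the current mode. All numbers are hypothetical.
import numpy as np

MODES = ["level_walk", "stairs_up", "stairs_down", "ramp_up", "ramp_down"]

# Manually selected transition prior P(next | current); each row sums to 1.
TRANSITION = np.array([
    [0.60, 0.10, 0.10, 0.10, 0.10],   # from level_walk
    [0.50, 0.40, 0.02, 0.05, 0.03],   # from stairs_up
    [0.50, 0.02, 0.40, 0.03, 0.05],   # from stairs_down
    [0.55, 0.05, 0.03, 0.35, 0.02],   # from ramp_up
    [0.55, 0.03, 0.05, 0.02, 0.35],   # from ramp_down
])

def predict_next(current_belief: np.ndarray) -> str:
    """Propagate the classifier's belief over the current mode through the prior."""
    next_belief = current_belief @ TRANSITION
    return MODES[int(np.argmax(next_belief))]

# Example: the classifier is fairly sure the user is currently walking on level ground.
belief = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
print(predict_next(belief))   # -> "level_walk" under this hand-set prior
```

A learned transition model, or the reinforcement-learning approaches cited above, would replace the fixed matrix when the hand-set prior does not match the user's actual terrain sequence.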