2019
DOI: 10.1109/tnsre.2019.2935765

Sequential Decision Fusion for Environmental Classification in Assistive Walking

Abstract: Powered prostheses are effective in helping amputees walk on level ground, but these devices are inconvenient to use in complex environments. To assist walking in such environments, a prosthesis needs to understand the amputee's motion intent. Recently, researchers have found that vision sensors can be used to classify environments and predict the motion intent of amputees. Previous work has classified environments accurately in offline analysis, but neglects to decrease the corresponding time …
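The abstract is truncated, so the paper's exact fusion rule is not reproduced here. As a rough illustration only, the sketch below shows one common form of sequential decision fusion: per-frame classifier probabilities are accumulated as log-evidence, and a decision is issued as soon as the fused posterior clears a confidence threshold, which is how such schemes trade accuracy against decision time. The function name sequential_decision_fusion and the threshold and max_frames parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sequential_decision_fusion(frame_probs, threshold=0.95, max_frames=10):
    """Fuse per-frame softmax outputs until one environment class is confident.

    frame_probs -- iterable of 1-D probability vectors, one per video frame.
    Returns (predicted_class_index, number_of_frames_used).
    Hypothetical sketch; not the method from the cited paper.
    """
    log_evidence = None
    frames_used = 0
    for probs in frame_probs:
        frames_used += 1
        # Treat frames as independent observations: sum their log-probabilities.
        log_p = np.log(np.clip(probs, 1e-12, 1.0))
        log_evidence = log_p if log_evidence is None else log_evidence + log_p
        # Renormalise the accumulated evidence into a posterior over classes.
        posterior = np.exp(log_evidence - log_evidence.max())
        posterior /= posterior.sum()
        # Stop early once the fused decision is confident enough.
        if posterior.max() >= threshold or frames_used >= max_frames:
            break
    return int(posterior.argmax()), frames_used

# Example: three noisy frame predictions over 4 environment classes.
frames = [np.array([0.4, 0.3, 0.2, 0.1]),
          np.array([0.5, 0.2, 0.2, 0.1]),
          np.array([0.6, 0.2, 0.1, 0.1])]
print(sequential_decision_fusion(frames, threshold=0.9))  # -> (0, 3)
```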

Cited by 30 publications (34 citation statements)
References 35 publications
“…In comparison, the previous largest dataset contained approximately 402,000 images (Massalin et al, 2018). While most environment recognition systems have included fewer than 6 classes (Khademi and Simon, 2019; Krausz and Hargrove, 2015; Krausz et al, 2015; 2019; Laschowski et al, 2019b; Massalin et al, 2018; Novo-Torres et al, 2019; Varol and Massalin, 2016; Zhang et al, 2019b; 2019c; 2019d; 2020), the ExoNet database features a 12-class hierarchical labelling architecture. These differences have practical implications given that learning-based algorithms like deep convolutional neural networks require significant and diverse training images (LeCun et al, 2015).…”
Section: Discussion
confidence: 99%
“…One subject was instrumented with a lightweight wearable smartphone camera system (iPhone XS Max); photograph shown in Figure 1A. Unlike limb-mounted systems (Da Silva et al, 2020; Diaz et al, 2018; Hu et al, 2018; Kleiner et al, 2018; Massalin et al, 2018; Rai and Rombokas, 2018; Varol and Massalin, 2016; Zhang et al, 2011; 2019b; 2019c), chest-mounting can provide more stable video recording and allow users to wear pants and long dresses without obstructing the sampled field-of-view. The chest-mount height was approximately 1.3 m from the ground when the participant stood upright.…”
Section: Methods
confidence: 99%