2021
DOI: 10.1007/s00371-021-02138-x
Real-time limb tracking in single depth images based on circle matching and line fitting

Abstract: Modern lower limb prostheses neither measure nor incorporate healthy residual leg information for intent recognition or device control. In order to increase robustness and reduce misclassification of devices like these, we propose a vision-based solution for real-time 3D human contralateral limb tracking (CoLiTrack). An inertial measurement unit and a depth camera are placed on the side of the prosthesis. The system is capable of estimating the shank axis of the healthy leg. Initially, the 3D input is transfor…

Cited by 8 publications (5 citation statements)
References 57 publications
“…Compared to radar and laser rangefinders, cameras can provide more detailed information about the field-of-view and detect physical obstacles and terrain changes in peripheral locations (Figure 3). Most environment recognition systems have used RGB cameras (Diaz et al., 2018; Khademi and Simon, 2019; Laschowski et al., 2019b, 2020b, 2021b; Novo-Torres et al., 2019; Da Silva et al., 2020; Zhong et al., 2020) or 3D depth cameras (Varol and Massalin, 2016; Hu et al., 2018; Massalin et al., 2018; Zhang et al., 2019b,c,d, 2020; Krausz and Hargrove, 2021; Tschiedel et al., 2021) mounted on the chest (Laschowski et al., 2019b, 2020b, 2021b), waist (Khademi and Simon, 2019; Zhang et al., 2019d; Krausz and Hargrove, 2021), or lower limbs (Varol and Massalin, 2016; Diaz et al., 2018; Massalin et al., 2018; Zhang et al., 2019b,c, 2020; Da Silva et al., 2020; Zhong et al., 2020) (Table 1). Few studies have adopted head-mounted cameras for biomimicry (Novo-Torres et al., 2019; Zhong et al., 2020).…”
Section: Literature Review
confidence: 99%
“…The Euler angle is calculated through Bosch's sensor fusion algorithm [31] and is represented as the rotation angle (°) of the GMS about the heading (yaw), roll, and pitch axes. The Euler angle is used to detect gait events [32], [33] and to estimate the gravity direction for sensor calibration. In this study, the analysis focuses on the acceleration signal, and the Euler angle is used to assist the acceleration signal processing.…”
Section: B Data Collection
confidence: 99%
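As a rough illustration of how Euler angles can supply the gravity direction for accelerometer calibration, the sketch below expresses gravity in the sensor frame under an assumed ZYX (yaw-pitch-roll) convention and subtracts it from a raw sample. Function names and the sign convention are assumptions for illustration, not details from the cited work; for a vertical gravity vector the yaw angle drops out.

```python
import math

G = 9.81  # standard gravity, m/s^2

def gravity_in_sensor_frame(pitch_deg, roll_deg):
    """Gravity expressed in the sensor frame for a ZYX (yaw-pitch-roll)
    Euler convention; yaw has no effect because gravity is vertical.
    Angles in degrees, as typically reported by IMU fusion output."""
    p = math.radians(pitch_deg)
    r = math.radians(roll_deg)
    return (-G * math.sin(p),
            G * math.cos(p) * math.sin(r),
            G * math.cos(p) * math.cos(r))

def linear_acceleration(accel, pitch_deg, roll_deg):
    """Remove the gravity component from a raw accelerometer sample
    (a 3-tuple in the sensor frame) to leave motion acceleration."""
    g = gravity_in_sensor_frame(pitch_deg, roll_deg)
    return tuple(a - gi for a, gi in zip(accel, g))
```

For a sensor lying flat and at rest (pitch = roll = 0, accelerometer reading (0, 0, 9.81)), the residual linear acceleration is zero, as expected.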
“…When performing gait cycle segmentation, it is necessary to determine the heel strike and the gait cycle length. The heel strike is detected based on the Euler angle [32]; the event is reflected in the acceleration signal and is set as the start point (sp) of a gait cycle. A previously defined cycle segmentation method [13] is applied to estimate the cycle length.…”
Section: Cycle Segmentation and Outlier Cycle Removal
confidence: 99%
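A minimal sketch of the cycle-segmentation idea described above: treat heel strikes as prominent peaks in the acceleration magnitude, then slice the signal between consecutive events. The threshold and refractory-period values are placeholders for illustration, not parameters from the cited method [13].

```python
def detect_heel_strikes(accel_mag, threshold=12.0, refractory=50):
    """Illustrative heel-strike detector: local maxima of acceleration
    magnitude above a threshold, with a minimum sample spacing
    (refractory period) between successive events."""
    strikes = []
    last = -refractory
    for i in range(1, len(accel_mag) - 1):
        if (accel_mag[i] > threshold
                and accel_mag[i] >= accel_mag[i - 1]
                and accel_mag[i] >= accel_mag[i + 1]
                and i - last >= refractory):
            strikes.append(i)
            last = i
    return strikes

def segment_cycles(signal, strikes):
    """Slice the signal into gait cycles between consecutive heel strikes;
    each detected strike is the start point (sp) of one cycle."""
    return [signal[s:e] for s, e in zip(strikes, strikes[1:])]
```

In practice a fusion-based detector (e.g. using the Euler angle as in [32]) would replace this simple peak picker, but the segmentation step is the same: one cycle per inter-strike interval.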
“…While the above methods use common inertial sensors on powered prostheses, suprasensory environmental perception could potentially detect obstacles to avoid collisions without intact joint compensations. For example, environmental features collected by a laser distance meter [29], RGB camera [30]- [32], depth camera such as LiDAR [33], or the fusion of camera and inertial sensors [34], [35] have been utilized to recognize the approaching terrain with high accuracy for prosthesis control. The depth camera in [34] calculated the vertical distance between the prosthetic foot and the top of the obstacle so that extra knee flexion can be applied to provide enough clearance to cross over obstacles up to 0.3 m in height.…”
Section: Introduction
confidence: 99%
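The clearance logic described for the depth-camera system in [34] can be sketched as follows. The 0.3 m crossing limit is the figure reported above; the safety margin, function names, and coordinate convention are assumptions for illustration only.

```python
MAX_OBSTACLE_HEIGHT_M = 0.3  # crossable height reported for the system in [34]

def can_cross(obstacle_height_m: float) -> bool:
    """Whether the obstacle is within the reported crossable height."""
    return obstacle_height_m <= MAX_OBSTACLE_HEIGHT_M

def extra_foot_clearance(foot_z_m: float, obstacle_top_z_m: float,
                         margin_m: float = 0.05) -> float:
    """Vertical distance the prosthetic foot must still gain to pass
    above the obstacle top, plus an assumed safety margin. Both heights
    are measured in the same vertical reference frame; the controller
    would translate this into extra knee flexion."""
    return max(0.0, (obstacle_top_z_m - foot_z_m) + margin_m)
```

For example, with the foot at 0.05 m and the obstacle top at 0.20 m, the foot must gain 0.20 m of height (0.15 m gap plus the assumed 0.05 m margin) before swing-through.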