2022
DOI: 10.1186/s12984-022-01022-6
Egocentric vision-based detection of surfaces: towards context-aware free-living digital biomarkers for gait and fall risk assessment

Abstract: Background: Falls in older adults are a critical public health problem. As a means to assess fall risk, free-living digital biomarkers (FLDBs), including spatiotemporal gait measures, drawn from wearable inertial measurement unit (IMU) data have been investigated to identify those at high risk. Although gait-related FLDBs can be impacted by intrinsic (e.g., gait impairment) and/or environmental (e.g., walking surfaces) factors, their respective impacts have not been differentiated by the majori…

Cited by 8 publications (4 citation statements) · References 51 publications (39 reference statements)
“…Typically, AI computer-vision algorithms are needed to classify everyday environments, i.e., whether a participant is indoors or outdoors and on what terrain they are walking. Approaches typically involve convolutional neural networks (CNNs) developed with a Python-based deep learning library (e.g., PyTorch or TensorFlow) and either use mobile waist-mounted cameras aimed directly at a participant’s feet [38] or apply further pre-processing steps to images from head-mounted video glasses to extract and classify floor location [39]. For more nuanced classification of external factors affecting gait, object detection algorithms can be implemented.…”
Section: Results
confidence: 99%
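The pipeline described above (camera frame in, surface class out) can be sketched as a single convolution → ReLU → global-average-pool → softmax pass. This is a minimal NumPy illustration, not the cited authors' implementation: the weights are random stand-ins for a trained network, and the class names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["indoor_flat", "outdoor_pavement", "grass", "stairs"]  # hypothetical labels

def conv2d(img, kernels):
    """Valid-mode 2-D convolution of an (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def classify_frame(gray_frame, kernels, head_weights):
    feats = np.maximum(conv2d(gray_frame, kernels), 0.0)  # ReLU activation
    pooled = feats.mean(axis=(1, 2))                      # global average pool
    logits = head_weights @ pooled                        # linear classification head
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                                # softmax over surface classes

frame = rng.random((16, 16))                  # stand-in for a grayscale camera frame
kernels = rng.standard_normal((4, 3, 3))      # 4 untrained 3x3 filters
head = rng.standard_normal((len(CLASSES), 4))
probs = classify_frame(frame, kernels, head)
print(CLASSES[int(np.argmax(probs))])
```

In practice the cited approaches train deeper CNNs in PyTorch or TensorFlow on labelled egocentric video; this sketch only shows the shape of the inference step, where each frame yields a probability distribution over surface classes.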
“…Video analysis also needs to consider environmental factors that influence fall risk. For example, uneven terrain [26], lighting, and obstacles can affect an individual’s gait patterns and increase the likelihood of falls [23]. Previous studies have investigated this by using GPS sensors to infer terrain type; however, unlike video captured directly from the participant, absolute context cannot be gained from GPS sensors alone [27].…”
Section: Discussion
confidence: 99%
“…These algorithms have been applied to assess the motor function of older adults’ extremities by virtue of their strong learning ability, wide coverage, good adaptability, high data-driven upper limit, and good portability. Motion-capture systems and IMUs [51] provide objective contextual information in an automated manner by combining decision trees [50] and deep learning [52]. The trained network has been validated for monitoring older adults at increased risk of falls or with severe gait impairment, with an accuracy of 89.13%.…”
Section: Multimodal Data Analysis
confidence: 99%
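The decision-tree branch of the fusion described above can be illustrated with a hand-written rule that combines IMU-derived gait features with a vision-derived surface label. This is a hedged sketch only: the feature names, thresholds, and risk labels are hypothetical and are not taken from the cited studies, which train such trees on real data.

```python
# Hypothetical decision rule standing in for a trained decision tree that
# fuses intrinsic gait features (from an IMU) with environmental context
# (a surface label from egocentric vision).
def fall_risk_context(gait_speed_mps, step_time_cv, surface):
    """Return a coarse fall-risk label from gait features plus surface context.

    gait_speed_mps: mean walking speed in m/s (IMU-derived)
    step_time_cv:   coefficient of variation of step time (IMU-derived)
    surface:        vision-derived surface class, e.g. "stairs"
    """
    if surface in ("stairs", "uneven_terrain"):       # environmental branch
        return "high" if gait_speed_mps < 0.8 else "moderate"
    if step_time_cv > 0.05:                           # intrinsic-variability branch
        return "moderate"
    return "low"

print(fall_risk_context(0.6, 0.03, "stairs"))   # slow gait on stairs -> "high"
```

A learned tree would pick these splits from data; the point of the sketch is that context labels from vision let the same IMU features be interpreted differently per environment, which is what allows the combined systems to differentiate intrinsic from environmental contributions.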