Background: Walking patterns can provide important indications of a person's health status and support the early diagnosis of individuals with a potential walking disorder. For appropriate gait analysis, it is critical that natural, functional walking characteristics are captured rather than those exhibited in artificial or observed settings. To better understand the extent to which setting influences gait patterns, and in particular whether observation affects subjects of different ages differently, the current study investigates to what extent people walk differently in laboratory versus real-world environments and whether age dependencies exist.

Methods: The walking patterns of 20 young and 20 elderly healthy subjects were recorded with five wearable inertial measurement units (ZurichMOVE sensors) attached to both ankles, both wrists, and the chest. An automated detection process based on dynamic time warping was developed to efficiently identify the relevant walking sequences. From the ZurichMOVE recordings, 15 spatio-temporal gait parameters were extracted, analyzed, and compared between motion patterns captured in a controlled lab environment (10 m walking test) and a non-controlled, ecologically valid real-world environment (72 h recording) in both groups.

Results: Several parameters (Cluster A) showed significant differences between the two environments for both groups, including increased outward foot rotation, step width, number of steps per 180° turn, stance-to-swing ratio, and cycle time deviation in the real world. A number of parameters (Cluster B) showed significant differences between the two environments only for elderly subjects, including decreased gait velocity (p = 0.0072), decreased cadence (p = 0.0051), and increased cycle time (p = 0.0051) in real-world settings. Importantly, the real-world environment increased the differences in several parameters between the young and elderly groups.

Conclusion: Elderly test subjects walked differently in controlled lab settings compared to their real-world environments, which indicates the need to better understand natural walking behavior in real-world environments.
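The abstract names dynamic time warping (DTW) as the basis of the automated walking-sequence detection but does not publish the pipeline. The sketch below is a minimal illustration of the DTW idea, not the authors' implementation; the signal names, window stride, and thresholding strategy are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Illustrative sketch only -- the study's actual detection process
    is not described beyond naming DTW.
    """
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost to align a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical usage: slide a gait-cycle template over a long recording
# and flag windows whose DTW distance falls below a chosen threshold.
template = np.sin(np.linspace(0, 4 * np.pi, 100))     # stand-in gait cycle
recording = np.sin(np.linspace(0, 40 * np.pi, 1000))  # stand-in sensor trace
window = len(template)
scores = [dtw_distance(template, recording[s:s + window])
          for s in range(0, len(recording) - window, window // 2)]
```

DTW is a natural fit here because it tolerates the timing variability of real-world walking: two gait cycles of different duration can still align with low cost.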
Cardiothoracic open-heart surgery has revolutionized the treatment of cardiovascular disease, the leading cause of death worldwide. After surgery, hemodynamic and volume management can be complicated, for example in the case of vasoplegia after endocarditis, and timely treatment is crucial for outcomes. Currently, treatment decisions are based on heart volume, which must be measured manually by the clinician each time using ultrasound. Implantable sensors, in contrast, offer a real-time window into the dynamic function of the body. Here it is shown that a soft, flexible sensor made with biocompatible materials and implanted on the surface of the heart can provide continuous information on heart volume after surgery. The sensor worked robustly over a two-day test on a tensile machine. The accuracy of heart volume measurement is improved in vivo compared to the clinical gold standard, with an error of 7.1 mL for the strain sensor versus impedance and 14.0 mL versus ultrasound. Implanting such a sensor would provide essential, continuous information on heart volume in the critical period following surgery, allowing early identification of complications, facilitating treatment, and hence potentially improving patient outcomes.
For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze assignment step is inevitable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications involving the manipulation of tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto the respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm's performance is validated against a manual fixation-by-fixation mapping, which is considered the ground truth, in terms of true positive rate (TPR), true negative rate (TNR) and efficiency. Using only 72 training images with 264 labelled object representations, cGOM reaches a TPR of approx. 80% and a TNR of 85% compared to the manual mapping. The break-even point is reached at 2 hours of eye tracking recording for the total procedure, or 1 hour when considering human working time only. Together with the real-time capability of the mapping process after completed training, even hours of eye tracking recording can be evaluated efficiently. (Code and video examples are available at: https://gitlab.ethz.ch/pdz/cgom.git)
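The core step the abstract describes, assigning a gaze point to the AOI whose instance segmentation mask contains it, can be illustrated with a short sketch. This is an assumption-laden outline, not the published cGOM code (see the repository linked above): the mask format, function name, and dummy data are hypothetical.

```python
import numpy as np

def map_gaze_to_aoi(gaze_xy, masks, labels):
    """Assign a gaze point to the object whose segmentation mask contains it.

    Hypothetical sketch of the gaze-object mapping idea, not cGOM itself.
    masks  : list of boolean (H x W) arrays, e.g. Mask R-CNN instance masks
    labels : class label per mask
    Returns the matching label, or None if the gaze hits no object
    (a negative sample in the abstract's TNR terminology).
    """
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    for mask, label in zip(masks, labels):
        h, w = mask.shape
        if 0 <= y < h and 0 <= x < w and mask[y, x]:
            return label
    return None

# Hypothetical usage with one dummy mask on a 480 x 640 video frame:
cup_mask = np.zeros((480, 640), dtype=bool)
cup_mask[100:200, 150:250] = True          # a "cup" occupying this region
print(map_gaze_to_aoi((180, 150), [cup_mask], ["cup"]))  # -> "cup"
```

Because the lookup is a constant-time mask test per detected object, per-frame mapping stays cheap once segmentation is done, which is consistent with the real-time capability the abstract claims.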