Background: Recently, much attention has been given to the use of inertial sensors for remote monitoring of individuals with limited mobility. However, the focus has been mostly on detecting symptoms rather than specific activities. The objective of the present study was to develop an automated recognition and segmentation algorithm, based on inertial sensor data, to identify common gross motor patterns during activities of daily living.
Method: A modified Timed-Up-and-Go (TUG) task was used because it comprises four common daily living activities: standing, walking, turning, and sitting, all performed in a continuous fashion, resulting in six different segments during the task. Sixteen healthy older adults performed two trials each of a 5 meter and a 10 meter TUG task. They were outfitted with 17 inertial motion sensors covering each body segment. Data from the 10 meter TUG were used to identify pertinent sensors on the trunk, head, hip, knee, and thigh that provided suitable data for detecting and segmenting the activities associated with the TUG. Raw sensor data were detrended to remove sensor drift, normalized, and band-pass filtered with optimal frequencies to reveal kinematic peaks corresponding to the different activities. Segmentation was accomplished by identifying the time stamps of the first minimum or maximum to the right and to the left of these peaks. Segmentation time stamps were compared with results from two examiners who visually segmented the activities of the TUG.
Results: We were able to detect these activities with 100% sensitivity and specificity (n = 192) during the 10 meter TUG. The rate of success was subsequently confirmed in the 5 meter TUG (n = 192) without altering the parameters of the algorithm.
When applying the segmentation algorithm to the 10 meter TUG, we were able to parse 100% of the transition points (n = 224) between segments, with results as reliable as, and less variable than, visual segmentation performed by two independent examiners.
Conclusions: The present study lays the foundation for the development of a comprehensive algorithm to detect and segment naturalistic activities using inertial sensors, with the aim of automatically evaluating motor performance within the detected tasks.
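The abstract's processing pipeline (detrend, normalize, band-pass filter, then segment at the first local extrema flanking each kinematic peak) can be sketched as follows. This is a minimal illustration, not the study's implementation: the pass band, filter order, and prominence threshold here are placeholder values, since the paper only states that "optimal frequencies" were used.

```python
import numpy as np
from scipy.signal import detrend, butter, filtfilt, find_peaks

def segment_around_peaks(signal, fs, band=(0.5, 3.0)):
    """Detrend, normalize, band-pass filter an inertial signal, then mark
    segment boundaries as the first local minima to the left and right of
    each kinematic peak. `band` is an illustrative pass band, not the
    study's tuned values."""
    # Remove sensor drift and normalize to zero mean, unit variance
    x = detrend(signal)
    x = (x - x.mean()) / x.std()
    # Zero-phase 4th-order Butterworth band-pass filter
    nyq = fs / 2
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    x = filtfilt(b, a, x)
    # Kinematic peaks corresponding to candidate activities
    peaks, _ = find_peaks(x, prominence=1.0)
    # Local minima are peaks of the negated signal
    minima, _ = find_peaks(-x)
    boundaries = []
    for p in peaks:
        left = minima[minima < p]
        right = minima[minima > p]
        if left.size and right.size:
            boundaries.append((left[-1], p, right[0]))
    return x, boundaries
```

In practice, the pass band and peak-prominence criterion would need to be tuned per sensor location and activity, as the study did when selecting optimal filter frequencies.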
A recent trend in human motion capture is the use of inertial measurement units (IMUs) for monitoring and performance evaluation of mobility in the natural living environment. Although the use of such systems has grown significantly, the development of methods and algorithms to process IMU data for clinical purposes remains limited. The aim of this work was to develop algorithms, based on the wavelet transform and discrete-time detection of events, for the automatic segmentation of tasks related to activities of daily living (ADL) from body-worn IMUs. Seven healthy older adults (73 ± 4 years old) performed 10 ADL tasks in a simulated apartment during trials of different durations (3, 4, and 5 min). They wore a suit (Synertial UK Ltd IGS-180) comprising 17 IMUs positioned strategically on body segments to capture full-body motion. The proposed method automatically detected the number of template waveforms (representing each movement separately) using the discrete wavelet transform (DWT) and discrete-time detection of events based on angular velocity, linear acceleration, and 3D orientation data from pertinent IMUs. The sensitivity (Se.) and specificity (Sp.) of detection for the proposed method were established using time stamps of the 10 tasks obtained from visual segmentation of each trial, using the video records and the avatar provided by the system's software. First, we identified six pertinent sensors strongly associated with the different activities (at most two sensors per task) that allowed detection of tasks with high accuracy. The proposed algorithm exhibited high overall accuracy (N events = 1999, Se. = 97.5%, Sp. = 94%), despite variation in the occurrences of the performed tasks (free living). Se. varied from 94% to 100% across the detected ADL tasks, and Sp. ranged from 90% to 100%, except for the Release_mid (reaching for an object held just beyond reach at chest height) and Turning_Left tasks, for which Sp. was 85% and 87%, respectively.
This study demonstrated that the DWT, in conjunction with a nonlinear transform and an auto-adaptive thresholding process for decision rules, is highly efficient in detecting and segmenting tasks performed during free-living activities. The study also helped determine the optimal number of sensors and their locations for detecting such activities. This work lays the foundation for the automatic assessment of mobility performance within the segmented signals, and may help differentiate populations based on their mobility patterns and symptomatology.
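The core idea of combining a wavelet decomposition with auto-adaptive thresholding can be illustrated with a deliberately minimal sketch. The study's actual method (template waveforms, nonlinear transform, multi-sensor decision rules) is far richer; this example only shows a single-level Haar DWT with a robust median-plus-MAD threshold on the detail coefficients, both of which are assumptions chosen here for illustration.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: returns approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]                       # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def detect_events(signal, k=3.0):
    """Flag samples whose detail-coefficient magnitude exceeds an
    auto-adaptive threshold (median + k * MAD), a common robust rule.
    Returns event locations mapped back to original sample indices."""
    _, detail = haar_dwt(signal)
    mag = np.abs(detail)
    med = np.median(mag)
    thr = med + k * np.median(np.abs(mag - med))
    return np.where(mag > thr)[0] * 2    # each coefficient spans 2 samples
```

The detail coefficients respond to abrupt changes (movement onsets), while the median/MAD threshold adapts to each trial's noise level without hand tuning, which is the spirit of the auto-adaptive thresholding the abstract describes.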
Currently, concussions are detected by observing physical and cognitive symptoms such as dizziness, disorientation, and loss of consciousness that are often associated with mild traumatic brain injury (mTBI). Evaluation methods such as neurocognitive tests and neuroimaging are often performed post-concussion. However, these methods can be expensive and cumbersome to use. In this study, we developed a new testing protocol using a markerless motion capture system to quickly monitor the cognitive and motor function of football players over the course of the season. The protocol utilized a dual-task paradigm to identify kinematic measures that could detect subtle changes in the motor and cognitive function of players due to mTBI. Four high school football players (2 healthy and 2 with a history of concussion) volunteered to participate in the study. Participants were asked to navigate a staged obstacle course with and without an N-back (N = 2) cognitive task. Positional data from 23 limb segment nodes were recorded using the markerless motion tracking system. Data collection lasted less than 5 minutes, with minimal preparation time. The results showed that walking speed, the median frequency of the sacrum in the vertical direction, and step width variability during straight walking were strongly associated with the presence of mTBI.
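One of the reported measures, the median frequency of the sacrum's vertical signal, can be estimated from a power spectral density. The sketch below uses Welch's method; the window length is an illustrative choice, not a parameter from the study.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(x, fs):
    """Frequency below which half of the total spectral power lies,
    estimated from a Welch power spectral density."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2)]
```

Applied to the vertical position or acceleration of a sacrum node, a shift in this median frequency between baseline and follow-up sessions is the kind of subtle kinematic change a dual-task protocol aims to expose.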