An important part of developing a performant assessment algorithm for post-stroke rehabilitation is achieving high-precision activity recognition. Convolutional Neural Networks (CNNs) are known to give very accurate results; however, they require the data to be in a specific structure that differs from the sequential time-series format typically collected from wearable sensors. In this paper, we describe models that improve activity recognition with CNN classifiers. First, we modify the Gramian angular field algorithm to encode all sensor channels from a single time window into a single 2D image, capturing the maximum activity characteristics. Feeding the resulting images to a simple 1D CNN classifier improves test accuracy from 94% for the traditional segmentation approach to 97.06%. Subsequently, we convert the 2D images into RGB format and use a 2D CNN classifier, increasing test accuracy to 97.52%. Finally, we apply transfer learning with the popular VGG16 model to the RGB images, which improves the accuracy further to 98.53%.
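The core idea of a Gramian angular field (GAF) is to rescale each channel to [-1, 1], map samples to polar angles, and form a pairwise-cosine matrix; multichannel windows can then be combined into one image. A minimal NumPy sketch of this encoding follows; the side-by-side channel layout in `encode_window` is an assumption for illustration, not necessarily the paper's exact arrangement.

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1D time series as a Gramian Angular Summation Field.

    Steps: min-max rescale to [-1, 1], map to polar angles
    phi = arccos(x), then build G[i, j] = cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    # Clip guards against floating-point drift outside arccos's domain
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # Outer sum gives cos(phi_i + phi_j) for all sample pairs
    return np.cos(phi[:, None] + phi[None, :])

def encode_window(window):
    """Stack per-channel GAF images into one 2D image (hypothetical layout).

    `window` has shape (n_samples, n_channels); each channel yields an
    (n_samples, n_samples) GAF, concatenated horizontally.
    """
    return np.hstack([gramian_angular_field(window[:, c])
                      for c in range(window.shape[1])])

# Example: a 128-sample window from 6 sensor channels
window = np.random.default_rng(0).standard_normal((128, 6))
image = encode_window(window)
print(image.shape)  # → (128, 768)
```

The resulting single-channel images can be fed directly to a 1D or 2D CNN, or replicated/colored into three channels for RGB-pretrained backbones such as VGG16.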
Knee joint moments are commonly calculated to provide an indirect measure of knee joint loads. A shortcoming of inverse dynamics approaches is that collecting and processing human motion data can be time-consuming. This study aimed to benchmark five deep learning methods that use walking segment kinematics to predict internal knee abduction impulse during walking. Three-dimensional kinematic and kinetic data for the present analyses came from a publicly available walking dataset (n = 33 participants). The prediction outcome was the internal knee abduction impulse over the stance phase. Three-dimensional (3D) angular and linear displacement, velocity, and acceleration of the seven lower-body segments' centers of mass (COM), relative to a fixed global coordinate system, were derived and formed the predictor space (126 time-series predictors). The dataset contained 6,737 observations in total and was split into training (75%, n = 5,052) and testing (25%, n = 1,685) sets. Five deep learning models were benchmarked against inverse dynamics in quantifying knee abduction impulse. A baseline 2D convolutional network achieved a mean absolute percentage error (MAPE) of 10.80%. Transfer learning with InceptionTime was the best-performing model, achieving a MAPE of 8.28%. Encoding the time-series as images and then using a 2D convolutional model performed worse than the baseline, with a MAPE of 16.17%. Time-series-based deep learning models were superior to an image-based method for predicting knee abduction moment impulse during walking. Future studies looking to develop wearable technologies will benefit from knowing the optimal network architecture and the benefit of transfer learning for predicting joint moments.
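The benchmarking metric above, mean absolute percentage error, is straightforward to compute. A short sketch follows; the impulse values are hypothetical placeholders, not data from the study.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (in %), the study's benchmark metric."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical knee abduction impulse values (units arbitrary)
y_true = np.array([0.20, 0.25, 0.30, 0.18])
y_pred = np.array([0.22, 0.24, 0.27, 0.19])
print(round(mape(y_true, y_pred), 2))  # → 7.39
```

A lower MAPE means predictions are closer to the inverse-dynamics reference values, so InceptionTime's 8.28% indicates roughly a 2.5-point improvement over the 10.80% baseline.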