We seek to use dimensionality reduction to simplify the difficult task of controlling a lower limb prosthesis. Though many techniques for dimensionality reduction have been described, it is not clear which is most appropriate for human gait data. In this study, we first compare how Principal Component Analysis (PCA) and an autoencoder on poses (Pose-AE) transform human kinematics data during flat ground and stair walking. Second, we compare the performance of PCA, Pose-AE, and a new autoencoder trained on full human movement trajectories (Move-AE), in order to capture the time-varying properties of gait. We compare these methods on two tasks, classifying the movement and identifying the individual, which are key capabilities for identifying useful data representations for prosthetic control. We first find that Pose-AE outperforms PCA at dimensionality reduction, achieving a higher Variance Accounted For (VAF) across flat ground walking data, stairs data, and undirected natural movements. We then find in our second task that Move-AE significantly outperforms both PCA and Pose-AE on movement classification and individual identification. This suggests the autoencoder is more suitable than PCA for dimensionality reduction of human gait, and can be used to encode useful representations of entire movements to facilitate prosthetic control tasks.
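The VAF comparison above can be illustrated with a small sketch. This is not the paper's pipeline; it is a minimal, self-contained example showing how VAF is typically computed for a k-component PCA reconstruction, using hypothetical gait-like data (synthetic joint angles generated from a few latent coordination patterns). The data sizes and noise level are assumptions for illustration only.

```python
import numpy as np

def pca_vaf(X, k):
    """Variance Accounted For by a k-component PCA reconstruction of X."""
    Xc = X - X.mean(axis=0)
    # SVD yields the principal directions without forming the covariance matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_hat = (Xc @ Vt[:k].T) @ Vt[:k]  # project onto k components, reconstruct
    return 1.0 - np.sum((Xc - X_hat) ** 2) / np.sum(Xc ** 2)

# Hypothetical data: 200 "poses" of 12 joint angles, driven by
# 3 latent coordination patterns plus a small amount of sensor noise.
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 3))
mixing = rng.standard_normal((3, 12))
X = latent @ mixing + 0.05 * rng.standard_normal((200, 12))

print(round(pca_vaf(X, 3), 3))  # close to 1.0: 3 components capture the data
print(round(pca_vaf(X, 1), 3))  # lower: 1 component is insufficient
```

An autoencoder's VAF is computed the same way, with `X_hat` replaced by the decoder's reconstruction; the abstract's claim is that this nonlinear reconstruction accounts for more variance than the linear PCA one at the same latent dimensionality.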
Intuitive control of powered prosthetic lower limbs is still an open research goal. Current controllers employ discrete locomotion modes for well-defined and frequently encountered scenarios such as stair ascent, stair descent, or ramps. Non-standard movements such as side-shuffling into cars and avoiding obstacles are challenging for powered limb users. Human locomotion is a continuous motion comprising rhythmic and non-rhythmic movements, fluidly adapting to the environment. It exhibits strong inter-joint coordination, and the movement of a single joint can be largely predicted from the movement of the rest of the body. We explore a continuous and unified kinematics estimation strategy for a wide variety of movements without the need for labeled examples. Our data-driven approach uses natural body motion from the intact limbs and trunk to generate a kinematic reference trajectory for prosthetic joints. Wearable sensors were worn by 63 subjects without disabilities to record full-body kinematics during typical scenarios (flat ground and stairs) and non-rhythmic, atypical movements (side shuffles, weaving through cones, backward walking). A Recurrent Neural Network (RNN) was trained to predict right ankle and knee kinematics from the kinematics of the other joints as inputs. Results were assessed on 3 test subjects previously unseen by the network. All predictions had an RMSE of less than 7.5 degrees and a high correlation across activities. These offline predictions were robust to subject-specific variations such as walking speed and step length. Additionally, to test the feasibility of using a data-driven reference for real-time prosthetic control, a systems test was designed with a single participant. The controller acquired live kinematics, generated predictions using a pre-trained neural network, and demonstrated the capability to actuate the knee joint of a powered prosthesis during treadmill walking.
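The prediction setup described above can be sketched minimally. The abstract does not specify the network architecture or feature sizes, so everything below is a hypothetical illustration: a vanilla RNN forward pass mapping a sequence of intact-joint kinematics (an assumed 14 input angles per time step) to the two target outputs, the right knee and ankle angles, plus the RMSE metric used to evaluate such predictions.

```python
import numpy as np

N_IN, N_HID, N_OUT = 14, 32, 2  # assumed: 14 input joint angles -> knee + ankle

rng = np.random.default_rng(1)
W_xh = rng.standard_normal((N_IN, N_HID)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((N_HID, N_HID)) * 0.1  # recurrent weights
W_hy = rng.standard_normal((N_HID, N_OUT)) * 0.1  # hidden-to-output weights

def rnn_predict(inputs):
    """Run a vanilla RNN over a (T, N_IN) sequence of joint angles,
    returning a (T, N_OUT) sequence of predicted [knee, ankle] angles."""
    h = np.zeros(N_HID)
    outputs = []
    for x in inputs:
        h = np.tanh(x @ W_xh + h @ W_hh)  # recurrent state update
        outputs.append(h @ W_hy)          # readout at each time step
    return np.array(outputs)

def rmse(pred, target):
    """Root-mean-square error in the same units as the angles (degrees)."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

gait_window = rng.standard_normal((100, N_IN))  # 100 time steps of kinematics
pred = rnn_predict(gait_window)
print(pred.shape)  # (100, 2): knee and ankle prediction at every step
```

In a real-time controller of the kind described, `rnn_predict` would be fed a sliding window of live sensor kinematics, and its output would serve as the reference trajectory tracked by the powered knee joint.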