Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and higher-level (e.g., cortical) control arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops in which the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot that uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (>3.0 leg lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
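The nested-loop principle described above can be illustrated with a minimal sketch: a fast low-level loop maps sensory feedback to motor output every cycle, while a slower high-level controller intervenes only when the gait deviates beyond a tolerance ("pointwise" control). All function names, thresholds, and update rules here are illustrative assumptions, not the robot's actual controller.

```python
# Hedged sketch of nested sensorimotor loops (assumed dynamics).

def low_level(sensor):
    # Reflex-like mapping: switch between stance and swing output
    # directly from ground-contact feedback.
    return 1.0 if sensor["ground_contact"] else -1.0

def high_level(gait_error, gain):
    # Higher-level control engages only when the walking pattern
    # deviates beyond a tolerance (here 0.2, an assumed value).
    if abs(gait_error) > 0.2:
        gain -= 0.1 * gait_error  # slow corrective adaptation
    return gain

gain = 1.0
motor_cmds = []
for t in range(5):
    sensor = {"ground_contact": t % 2 == 0}   # alternating contact phases
    gait_error = 0.3 if t == 2 else 0.0       # one induced disturbance
    gain = high_level(gait_error, gain)       # slow outer loop
    motor_cmds.append(gain * low_level(sensor))  # fast inner loop
```

In this toy setup the inner loop runs every cycle, while the outer loop changes the gain only once, at the step where the disturbance exceeds the tolerance.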
Highlights
- A wide variety of tracking and detection tools for computer vision-based GMA exists.
- A "method of choice" for automated GMA does not yet exist.
- Large, valid, expert-annotated datasets are urgently needed.
- The prerequisites of classic GMA are indispensable for developing automated solutions.
- A future augmented GMA shall combine human expertise with computerised tools.
The adaptive mechanisms of homo- and heterosynaptic plasticity play an important role in learning and memory. To maintain plasticity-induced changes over longer time scales (up to several days), they have to be consolidated by transferring them from a short-lasting early-phase to a long-lasting late-phase state. The underlying processes of this synaptic consolidation are already well known for homosynaptic plasticity; however, it is not clear whether the same processes also enable the induction and consolidation of heterosynaptic plasticity. In this study, by extending a generic calcium-based plasticity model with the processes of synaptic consolidation, we show in simulations that heterosynaptic plasticity can indeed be induced and consolidated by the same underlying processes as homosynaptic plasticity. Furthermore, we show that local diffusion processes can restrict the heterosynaptic effect to a few synapses neighboring the homosynaptically changed ones. Taken together, this generic model reproduces many experimental results of synaptic tagging and consolidation, provides several predictions for heterosynaptic induction and consolidation, and yields insights into the complex interactions between homo- and heterosynaptic plasticity over a broad range of temporal (minutes to days) and spatial (several micrometers) scales.
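The early-/late-phase idea can be sketched in a few lines: suprathreshold calcium drives a fast early-phase weight, which a slow consolidation process transfers into a stable late-phase weight. This is a minimal sketch of the general tagging-and-capture scheme, not the paper's calcium model; all thresholds, rates, and the stimulation protocol are assumed values.

```python
# Minimal early-/late-phase plasticity sketch (assumed dynamics).

theta_ltp, theta_ltd = 1.0, 0.5   # calcium thresholds for LTP/LTD (assumed)
gamma_p, gamma_d = 0.1, 0.05      # potentiation / depression rates (assumed)
tau_consolidate = 100.0           # slow consolidation time constant (assumed)

def step(h, z, calcium, dt=1.0):
    # Early-phase weight h changes quickly when calcium crosses a threshold.
    if calcium > theta_ltp:
        h += dt * gamma_p * (1.0 - h)      # early-phase potentiation
    elif calcium > theta_ltd:
        h -= dt * gamma_d * h              # early-phase depression
    # Late-phase weight z slowly tracks h, modeling consolidation.
    z += dt * (h - z) / tau_consolidate
    return h, z

h, z = 0.0, 0.0
for _ in range(200):                        # sustained strong stimulation
    h, z = step(h, z, calcium=1.5)
```

After sustained stimulation the early-phase weight saturates quickly, while the late-phase weight lags behind it, capturing the separation of time scales between induction and consolidation.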
When learning a complex task, our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures form while preserving the rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling (i) achieves the formation of cell assemblies and (ii) enhances the diversity of neural dynamics, facilitating the learning of complex calculations. Due to synaptic scaling, the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows a robot to execute a difficult, six-degrees-of-freedom manipulation task in which assemblies need to learn to compute complex non-linear transforms and, for execution, must cooperate with each other without interference. This mechanism thus permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control.
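The interplay of the two mechanisms can be sketched for a single synapse: fast Hebbian plasticity strengthens a co-active connection, while slower multiplicative synaptic scaling pulls the postsynaptic activity toward a target rate, so growth saturates instead of running away. The equations, rates, and target rate below are illustrative assumptions, not the network model of the study.

```python
# Toy sketch: Hebbian plasticity stabilized by slower synaptic scaling
# (assumed update rules and parameter values).

eta_hebb = 0.01      # fast Hebbian learning rate (assumed)
eta_scale = 0.1      # slower homeostatic scaling rate (assumed)
target_rate = 1.0    # desired postsynaptic activity (assumed)

def update(w, pre, post, dt=1.0):
    # Hebbian growth plus multiplicative scaling toward the target rate.
    dw = eta_hebb * pre * post + eta_scale * (target_rate - post) * w
    return max(w + dt * dw, 0.0)

w = 0.5
for _ in range(100):
    post = w * 1.0               # postsynaptic rate driven by this synapse
    w = update(w, pre=1.0, post=post)
```

Without the scaling term the weight would grow without bound; with it, the weight converges to a fixed point where Hebbian growth and homeostatic scaling balance.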
The past decade has witnessed a boom of computer-based approaches to aid movement assessment in early infancy. Increasing interest has been dedicated to developing AI-driven approaches to complement the classic Prechtl general movements assessment (GMA). This study proposes a novel machine learning algorithm to detect an age-specific movement pattern, the fidgety movements (FMs), in a prospectively collected sample of typically developing infants. Participants were recorded using a passive, single-camera RGB video stream. The dataset of 2800 five-second snippets was annotated by two well-trained and experienced GMA assessors, with excellent inter- and intra-rater reliabilities. Using OpenPose, the infant's full pose was recovered from the video stream in the form of a 25-point skeleton. This skeleton was used as the input vector for a shallow multilayer neural network (SMNN). An ablation study was performed to justify the network's architecture and hyperparameters. We show for the first time that the SMNN is sufficient to discriminate fidgety from non-fidgety movements in a sample of age-specific typical movements with a classification accuracy of 88%. Computer-based solutions will complement the original GMA, enabling consistently accurate and efficient screening and diagnosis that may become universally accessible in daily clinical practice in the future.
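The pipeline structure (pose skeleton in, binary fidgety/non-fidgety decision out) can be sketched as a one-hidden-layer network over a flattened 25-keypoint skeleton. This is not the authors' implementation: the hidden width, weight initialization, and the random stand-in input are all assumptions for demonstration only.

```python
import numpy as np

# Sketch of a shallow multilayer neural network (SMNN) classifying a
# flattened 25-keypoint skeleton (x, y per point) as fidgety vs. non-fidgety.

rng = np.random.default_rng(0)

n_in, n_hidden = 25 * 2, 32               # 25 keypoints * (x, y); width assumed
W1 = rng.normal(0, 0.1, (n_in, n_hidden))  # untrained weights, illustration only
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

def predict(x):
    h = np.maximum(0.0, x @ W1 + b1)           # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output: P(fidgety)
    return p

skeleton = rng.normal(0, 1, n_in)   # stand-in for one five-second snippet
p_fidgety = float(predict(skeleton))
```

In practice the input would be the OpenPose keypoint coordinates for a snippet, and the weights would be trained on the annotated dataset; here they are random, so only the shape of the computation is meaningful.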
In this paper, we present a generic locomotion control framework for legged robots and a strategy for control policy optimization. The framework is based on neural control and black-box optimization. The neural control combines a central pattern generator (CPG) and a radial basis function (RBF) network to create a CPG-RBF network. The control network acts as a neural basis to produce arbitrary rhythmic trajectories for the joints of robots. The main features of the CPG-RBF network are: 1) it is generic, since it can be applied to legged robots with different morphologies; 2) it has few control parameters, resulting in fast learning; 3) it is scalable, both in terms of policy/trajectory complexity and the number of legs that can be controlled using similar trajectories; 4) it does not rely heavily on sensory feedback to generate locomotion and is thus less prone to sensory faults; and 5) once trained, it is simple, minimal, and intuitive to use and analyze. These features lead to an easy-to-use framework with fast convergence and the ability to encode complex locomotion control policies. In this work, we show that the framework can successfully be applied to three different simulated legged robots with varying morphologies, and even broken joints, to learn locomotion control policies. We also show that, after learning, the control policies can be successfully transferred to a real-world robot without any modifications. We furthermore show the scalability of the framework by implementing it as a central controller for all legs of a robot and as a decentralized controller for individual legs and leg pairs. By investigating the correlation between robot morphology and encoding type, we are able to present a strategy for control policy optimization. Finally, we show how sensory feedback can be integrated into the CPG-RBF network to enable online adaptation.
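The CPG-RBF idea can be sketched as follows: an oscillator supplies a rhythmic phase, and an RBF layer with kernels spread over the oscillation period maps that phase to joint targets, so the learnable parameters reduce to the kernel weights. The class name, kernel count, frequency, and widths below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of a CPG-RBF controller (assumed structure and parameters).

class CPGRBFController:
    def __init__(self, n_joints, n_kernels=20, omega=0.1):
        self.phase = 0.0
        self.omega = omega  # oscillator frequency in rad/step (assumed)
        # RBF kernel centers spread evenly over one oscillation period
        self.centers = np.linspace(0, 2 * np.pi, n_kernels, endpoint=False)
        self.inv_width = (n_kernels / (2 * np.pi)) ** 2
        # Learnable parameters: one set of kernel weights per joint.
        # These would normally be found by black-box optimization;
        # here they start at zero.
        self.weights = np.zeros((n_kernels, n_joints))

    def step(self):
        # Advance the CPG and wrap the phase to [0, 2*pi).
        self.phase = (self.phase + self.omega) % (2 * np.pi)
        # Circular distance from the current phase to each kernel center.
        d = np.angle(np.exp(1j * (self.phase - self.centers)))
        activations = np.exp(-self.inv_width * d ** 2)
        # Joint targets are a weighted sum of kernel activations.
        return activations @ self.weights

ctrl = CPGRBFController(n_joints=3)
targets = ctrl.step()
```

Because the phase is periodic, any smooth rhythmic joint trajectory can be approximated by adjusting only the kernel weights, which is what keeps the number of optimized parameters small.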