Compared with a single-robot system, a multi-robot system offers higher efficiency and fault tolerance, and has great potential in application scenarios such as search, rescue, and escort tasks. Deep reinforcement learning provides a promising framework for multi-robot formation and collaborative navigation. This paper studies the collaborative formation and navigation of multiple robots using deep reinforcement learning. The proposed method improves the classical Deep Deterministic Policy Gradient (DDPG) algorithm to address the single-robot mapless navigation task. We then extend the single-robot DDPG algorithm to the multi-robot system, obtaining the Parallel Deep Deterministic Policy Gradient (PDDPG). Using a 2D lidar sensor, the group of robots can accomplish both the formation construction task and the collaborative formation navigation task. Experimental results on the Gazebo simulation platform illustrate that our method is capable of guiding mobile robots to construct a formation and maintain it during group navigation, directly from raw lidar inputs.
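As background for readers unfamiliar with DDPG, the sketch below shows the two core updates the algorithm relies on: the Bellman target for the critic and the Polyak soft update of the target networks. This is a minimal, hypothetical illustration of standard DDPG, not the paper's implementation; the constants and function names are assumptions.

```python
# Hypothetical sketch of the standard DDPG update rules, not the
# paper's PDDPG implementation; GAMMA and TAU are assumed values.
GAMMA = 0.99  # discount factor
TAU = 0.005   # soft-update rate

def critic_target(reward, done, q_next, gamma=GAMMA):
    """Bellman target y = r + gamma * Q'(s', mu'(s')), zeroed at terminal steps."""
    return reward + gamma * (1.0 - done) * q_next

def soft_update(target_params, online_params, tau=TAU):
    """Polyak averaging: theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [(1.0 - tau) * t + tau * o for t, o in zip(target_params, online_params)]
```

In a parallel (multi-robot) variant, each robot would typically contribute transitions to a shared replay buffer while these same updates are applied to shared or per-robot networks.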
Previous studies have indicated that corticocortical neural mechanisms differ across grasping behaviors. However, the literature rarely considers corticocortical contributions to imagined grasping behaviors. To address this question, we examined these mechanisms with transcranial magnetic stimulation (TMS) triggered upon detecting event-related desynchronization during imagined right-hand grasping through a brain-computer interface (BCI) system. Based on the BCI system, we designed two experiments. In Experiment 1, we explored differences in motor evoked potentials (MEPs) between power grip and resting conditions. In Experiment 2, we used three TMS coil orientations (lateral-medial (LM), posterior-anterior (PA), and anterior-posterior (AP)) over the primary motor cortex to elicit MEPs during imagined index finger abduction, precision grip, and power grip. We found larger MEP amplitudes and shorter latencies during imagined power grip than at rest. We also detected lower MEP amplitudes during imagined power grip, while MEP amplitudes remained similar across imagined precision grip and index finger abduction in each TMS coil orientation. AP-LM latency differences were longer when subjects imagined a power grip compared with precision grip and index finger abduction. Our results suggest that higher cortical excitability may be achieved when humans imagine precision grip and index finger abduction. We also propose that preferential recruitment of late synaptic inputs to corticospinal neurons may occur when humans imagine a power grip.
Objective.
The gait phase and joint angle are two essential and complementary components of kinematics during normal walking, and their accurate prediction is critical for lower-limb rehabilitation, such as controlling exoskeleton robots. Multi-modal signals have been used to improve prediction of the gait phase or joint angle separately, but few reports have examined how these signals can be used to predict both simultaneously.
Approach.
To address this problem, we propose a new method named transferable multi-modal fusion (TMMF) to perform continuous prediction of knee angles and the corresponding gait phases by fusing multi-modal signals. Specifically, TMMF consists of a multi-modal signal fusion block, a time series feature extractor, a regressor, and a classifier. The multi-modal signal fusion block leverages the Maximum Mean Discrepancy (MMD) to reduce the distribution discrepancy across different modalities in the latent space, achieving transferable multi-modal fusion. Subsequently, using a long short-term memory-based network, we obtain feature representations from the time series data to predict knee angles and gait phases simultaneously. To validate our proposal, we designed an experimental paradigm with random walking and resting to collect multi-modal biomedical signals from electromyography, gyroscopes, and virtual reality.
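For readers unfamiliar with the Maximum Mean Discrepancy used in the fusion block, the sketch below shows a standard biased estimator of the squared MMD with a Gaussian kernel between two sample sets. It is a generic illustration under assumed kernel and bandwidth choices, not the authors' fusion block.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 * sigma^2))."""
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(d ** 2, axis=-1) / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between samples x and y."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())
```

In a fusion setting like TMMF's, such a term would be minimized as a loss over latent features from different modalities to pull their distributions together.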
Main results.
Comprehensive experiments on our constructed dataset demonstrate the effectiveness of the proposed method. TMMF achieves a root mean square error of 0.090±0.022 s in knee angle prediction and a precision of 83.7±7.7% in gait phase prediction.
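The two reported metrics are standard; a minimal sketch of how they are typically computed is shown below. This is a generic illustration (with precision simplified to the binary case), not the paper's evaluation code.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error over paired sequences."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def precision(y_true, y_pred, positive=1):
    """Fraction of predicted positives that are truly positive (binary case)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if tp + fp else 0.0
```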
Significance.
We demonstrate the feasibility and validity of using TMMF to predict lower-limb kinematics continuously from multi-modal biomedical signals. The proposed method has application potential for predicting the motor intent of patients with different pathologies.
Sensory integration contributes to the temporal coordination of movement with external rhythms. How the flow of sensory inputs is regulated with increasing tapping rates, and what function this regulation serves, remains unknown. Here, somatosensory evoked potentials to ulnar nerve stimulation were recorded during auditory-cued repetitive right index-finger tapping at 0.5, 1, 2, 3, and 4 Hz in 13 healthy subjects. We found that sensory inputs were suppressed at the subcortical level (represented by P14) and in the primary somatosensory cortex (S1, represented by N20/P25) during repetitive tapping. This suppression was reduced in S1, but not at the subcortical level, during fast repetitive tapping (2, 3, and 4 Hz) compared with slow repetitive tapping (0.5 and 1 Hz). Furthermore, we assessed the ability to analyze temporal information in S1 by measuring the somatosensory temporal discrimination threshold (STDT). The STDT increased during fast repetitive tapping compared with slow repetitive tapping, and was negatively correlated with the task performance of phase shift and positively correlated with the peak-to-peak amplitude (% of resting) in S1 but not at the subcortical level. These novel findings indicate that increased sensory input (lower sensory gating) in S1 may lead to greater temporal uncertainty for sensorimotor integration, decreasing the performance of repetitive movement at increasing tapping rates.