This paper proposes a new multimodal architecture for gaze-independent brain-computer interface (BCI)-driven control of a robotic upper-limb exoskeleton for stroke rehabilitation, providing active assistance in the execution of reaching tasks in a real-world setting. At the level of action planning, the patient's intention is decoded by an active vision system that combines a Kinect-based vision system, which robustly identifies and tracks 3-D objects online, with an eye-tracking system for object selection. At the level of action generation, a BCI is used to decode the patient's intention to move his/her own arm, on the basis of brain activity analyzed during motor imagery. The main kinematic parameters of the robot-assisted reaching movement (i.e., speed, acceleration, and jerk) are modulated by the output of the BCI classifier, so that the movement is performed under continuous control of the patient's brain activity. The system was experimentally evaluated in a group of three healthy volunteers and four chronic stroke patients. Experimental results show that all subjects were able to operate the exoskeleton by BCI with a classification accuracy of 89.4±5.0% in the robot-assisted condition, with no difference in performance between stroke patients and healthy subjects. This indicates the high potential of the proposed gaze-BCI-driven robotic assistance for neurorehabilitation of patients with motor impairments after stroke, starting from the earliest phase of recovery.
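The BCI-driven modulation described above can be sketched in a minimal form: a hypothetical mapping in which the motor-imagery classifier's output probability scales the speed of a minimum-jerk reaching profile. The function names, the `p_rest` threshold, and the linear mapping are illustrative assumptions, not the paper's actual control law.

```python
import numpy as np

def min_jerk_profile(t, T):
    """Normalized minimum-jerk position profile s(t) in [0, 1] over duration T."""
    tau = np.clip(t / T, 0.0, 1.0)
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5

def bci_modulated_speed(p_mi, v_nominal, p_rest=0.5, gain=2.0):
    """Scale the nominal end-effector speed by the motor-imagery classifier
    probability p_mi (hypothetical linear mapping): at or below p_rest the
    robot halts; above it, speed rises toward gain * v_nominal."""
    scale = np.clip(gain * (p_mi - p_rest) / (1.0 - p_rest), 0.0, gain)
    return scale * v_nominal
```

A continuous classifier output thus yields graded, rather than on/off, robotic assistance, consistent with the "continuous control" idea in the abstract.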
This paper presents a novel electromyography (EMG)-driven hand exoskeleton for bilateral rehabilitation of grasping in stroke. The developed hand exoskeleton was designed with two distinctive features: (a) kinematics with intrinsic adaptability to the patient's hand size, and (b) a free-palm and free-fingertip design, preserving the residual sensory-perceptual capability of touch during assisted grasping of real objects. In the envisaged bilateral training strategy, the patient's non-paretic hand acted as guidance for the paretic hand in grasping tasks. The grasping force exerted by the non-paretic hand was estimated in real time from EMG signals, and then replicated as robotic assistance for the paretic hand by means of the hand exoskeleton. Estimating the grasping force through EMG made it possible to perform rehabilitation exercises with any graspable, non-sensorized object. This paper presents the system design, development, and experimental evaluation. Experiments were performed with a group of six healthy subjects and two chronic stroke patients executing robot-assisted grasping tasks. Results on the estimation and modulation of the robotic assistance, and on the outcomes of the pilot rehabilitation sessions with stroke patients, positively support the validity of the proposed approach for application in stroke rehabilitation.
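The EMG-based force-estimation step can be illustrated with a minimal sketch, assuming a standard rectify-and-smooth envelope followed by a linear EMG-to-force gain with saturation. The window length, gain `k`, and limit `f_max` are illustrative assumptions; the paper's actual estimator is not specified here.

```python
import numpy as np

def emg_envelope(emg, fs=1000.0, win_ms=150.0):
    """Rectify raw EMG and smooth it with a moving-average window
    (a simple linear envelope)."""
    n = max(1, int(fs * win_ms / 1000.0))
    rectified = np.abs(emg)
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")

def estimate_grasp_force(envelope, k=50.0, f_max=40.0):
    """Map the EMG envelope to grasp force [N] with a hypothetical
    linear gain k, saturated at f_max."""
    return np.clip(k * envelope, 0.0, f_max)
```

In a bilateral scheme like the one described, the force estimated from the non-paretic hand's EMG would then serve as the assistance reference for the exoskeleton on the paretic side.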
The contribution of vibration-evoked kinaesthetic feedback led to statistically higher BCI performance (ANOVA, F = 18.1, p < .01) and more stable EEG event-related desynchronization. The obtained results suggest a promising application of the proposed method in neurorehabilitation scenarios: the advantage of improved usability could make MI-BCIs more applicable for patients who have difficulty performing kinaesthetic imagery.
In this paper we propose a full upper-limb exoskeleton for motor rehabilitation of reaching, grasping, and releasing in post-stroke patients. The presented system takes into account hand pre-shaping according to object affordances and is driven by the patient's intentional control through a self-paced, asynchronous motor-imagery-based brain-computer interface (MI-BCI). The developed anthropomorphic eight-DoF exoskeleton (two DoFs for the hand, two for the wrist, and four for the arm) allows full support of the manipulation activity at the level of each upper-limb joint. In this study, we show the feasibility of the proposed system through experimental rehabilitation sessions conducted with three chronic post-stroke patients. Results show the potential of the proposed system for integration into a rehabilitation protocol.
Background The past decade has seen the emergence of rehabilitation treatments using virtual reality. One of the advantages of this technology is the potential to create positive motivation by means of engaging environments and tasks shaped in the form of serious games. The aim of this study is to determine the efficacy of immersive Virtual Environments and weaRable hAptic devices (VERA) for rehabilitation of the upper limb in children with Cerebral Palsy (CP) and Developmental Dyspraxia (DD). Methods A two-period cross-over design was adopted to determine the differences between the proposed therapy and a conventional treatment. Eight children were randomized into two groups: one group received the VERA treatment in the first period and the manual therapy in the second period, and vice versa for the other group. Children were assessed at the beginning and the end of each period through both the Nine Hole Peg Test (9-HPT, primary outcome) and kinesiological measurements obtained while performing similar tasks in a real-world setting (secondary outcomes). Results All subjects, regardless of group, significantly improved in both 9-HPT performance and the kinesiological measurement parameters (movement error and smoothness). No statistically significant differences were found between the two groups. Conclusions These findings suggest that immersive virtual environments and wearable haptic devices are a viable alternative to conventional therapy for improving upper-extremity function in children with neuromotor impairments. Trial registration ClinicalTrials, NCT03353623. Registered 27 November 2017 (retrospectively registered), https://clinicaltrials.gov/ct2/show/NCT03353623
The growing industrial interest in wearable robots for assistance and rehabilitation purposes opens the challenge of developing intuitive and natural control strategies. Myoelectric control, or myo-control, which consists of decoding the human motor intent from muscular activity and mapping it into control outputs, represents a natural way to establish an intimate human-machine connection. In this field, model-based myo-control schemes (e.g., EMG-driven neuromusculoskeletal (NMS) models) represent a valid solution for estimating the moments of the human joints. However, model optimization is needed to adjust the model's parameters to a specific subject, and most of the optimization approaches presented in the literature consider complex NMS models that are unsuitable for use in a control paradigm, since they require long setup and optimization phases. In this work we present a minimal NMS model for predicting the elbow and shoulder torques, and we compare two optimization approaches: a linear optimization method (LO) and a non-linear method based on a genetic algorithm (GA). The LO optimizes only one parameter per muscle, whereas the GA-based approach performs a deep customization of the muscle model, adjusting 12 parameters per muscle. EMG and force data were collected from seven healthy subjects performing a set of exercises with an arm exoskeleton. Although both optimization methods substantially improved the performance of the raw model, the findings of the study suggest that LO may be preferable to GA, as the latter is much more computationally demanding and yields only minimal improvements over the former. From the comparison between the two considered joints, it also emerged that the more accurate the NMS model is, the more effective a complex optimization procedure could be.
Overall, the two optimized NMS models were able to predict the shoulder and elbow moments with low error, demonstrating their potential for use in an admittance-based myo-control scheme. Thanks to the low computational cost and the short setup phase required for wearing and calibrating the system, the obtained results are promising for real-time industrial or rehabilitation scenarios.
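The linear optimization (LO) described above, which fits a single gain per muscle, can be sketched as an ordinary least-squares problem. The matrix layout and function name are illustrative assumptions about how such a one-parameter-per-muscle fit could be posed.

```python
import numpy as np

def fit_muscle_gains(contributions, torques):
    """Linear optimization (LO): fit one gain per muscle so that the
    measured joint torque tau is approximated by contributions @ g,
    where contributions (n_samples x n_muscles) holds each muscle's
    modeled torque contribution at unit gain. Closed-form least squares,
    so no iterative search (unlike the GA) is needed."""
    g, *_ = np.linalg.lstsq(contributions, torques, rcond=None)
    return g
```

The closed-form solution is one reason a linear scheme has a short setup phase: calibration reduces to a single matrix factorization rather than a population-based search over 12 parameters per muscle.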
This study investigates how the sense of embodiment in virtual environments can be enhanced by multisensory feedback related to body movements. In particular, we analyze the effect of combined vestibular and proprioceptive afferent signals on the perceived embodiment within an immersive walking scenario. These feedback signals were applied by means of a motion platform and by tendon vibration of the lower limbs, evoking illusory leg movements. Vestibular and proprioceptive feedback were provided congruently with a rich virtual scenario reconstructing a real city, rendered on a head-mounted display (HMD). The sense of embodiment was evaluated through both self-reported questionnaires and physiological measurements in two experimental conditions: with all sensory feedback active (highly embodied condition), and with visual feedback only. Participants' self-reports show that the addition of vestibular and proprioceptive feedback increases the sense of embodiment and the individual's feeling of presence associated with the walking experience. Furthermore, the highly embodied condition significantly increased the measured galvanic skin response and respiration rate. The obtained results suggest that vestibular and proprioceptive feedback can improve the participant's sense of embodiment in the virtual experience.
It is important for rehabilitation exoskeletons to move with spatiotemporal motion patterns that closely match the kinematic characteristics of the upper-limb joints. However, few efforts have been made to base motion control on human kinematic synergies. This work analyzed the spatiotemporal kinematic synergies of right-arm reaching movements and investigated their potential use in upper-limb assistive exoskeleton motion planning. Ten right-handed subjects were asked to reach 10 target button locations placed on a cardboard panel in front of them. The kinematic data of the right arm were tracked by a motion capture system. Angular velocities over time were computed for shoulder flexion/extension, shoulder abduction/adduction, shoulder internal/external rotation, and elbow flexion/extension. Principal component analysis (PCA) was used to derive kinematic synergies from the reaching task for each subject. We found that the first four synergies explain more than 94% of the variance. Moreover, the joint coordination patterns were dynamically regulated over time as the number of kinematic synergies (PCs) increased. Synergies of different order played different roles in the reaching movement. Our results indicate that the low-order synergies represent the overall trend of the motion patterns, while the high-order synergies describe the fine motions at specific movement phases. A 4-DoF upper-limb assistive exoskeleton was modeled in SolidWorks to simulate assistive exoskeleton movement patterns based on kinematic synergies. An exoskeleton Denavit-Hartenberg (D-H) model was established to estimate the exoskeleton's moving pattern in reaching tasks. The results further confirm that kinematic synergies can be used for exoskeleton motion planning, and that different principal components contribute to the motion trajectory and end-point accuracy to some extent.
The findings of this study may provide novel, simplified strategies for developing rehabilitation and assistive robotic systems that approximate the motion patterns of natural upper-limb motor function.
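The PCA-based synergy extraction described in the abstract can be sketched as follows, assuming each trial's four joint angular-velocity time series are concatenated into one feature row (a minimal SVD-based PCA; this data layout is an illustrative assumption, not necessarily the study's exact preprocessing).

```python
import numpy as np

def kinematic_synergies(trials):
    """Extract spatiotemporal kinematic synergies by PCA (via SVD) from a
    (n_trials x n_features) matrix, where each row concatenates the
    angular-velocity time series of the four joint DoFs for one reach.
    Returns the principal components (rows of Vt, ordered by variance)
    and the fraction of variance each component explains."""
    X = trials - trials.mean(axis=0)          # center the data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_explained = s**2 / np.sum(s**2)
    return Vt, var_explained
```

With real reaching data, one would check (as in the abstract) how many leading components are needed to exceed a variance threshold, e.g. `np.searchsorted(np.cumsum(var_explained), 0.94) + 1`.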