Rehabilitation robotics investigates the application of robotic devices to therapeutic procedures, with the goal of achieving the best possible motor, cognitive, and functional recovery for people with impairments following various diseases. Pneumatic actuators are attractive for robotic rehabilitation applications because they are lightweight, powerful, and compliant, but their control has historically been difficult, limiting their use. This article first reviews the current state of the art in rehabilitation robotic devices with pneumatic actuation systems, reporting the main features and control issues of each therapeutic device. Then, a new pneumatic rehabilitation robot for proprioceptive neuromuscular facilitation therapies and for relearning daily living skills (such as taking a glass, drinking, and placing objects on shelves) is described as a case study and compared with current pneumatic rehabilitation devices.
Arm and finger paralysis, e.g. due to brain stem stroke, often results in the inability to perform activities of daily living (ADLs) such as eating and drinking. Recently, it was shown that a hybrid electroencephalography/electrooculography (EEG/EOG) brain/neural hand exoskeleton can restore hand function to quadriplegics, but it was unknown whether such a control paradigm could also be used for fluent, reliable, and safe operation of a semi-autonomous whole-arm exoskeleton restoring ADLs. To test this, seven able-bodied participants (seven right-handed males, mean age 30 ± 8 years) were instructed to use an EEG/EOG-controlled whole-arm exoskeleton attached to their right arm to perform a drinking task comprising multiple sub-tasks (reaching, grasping, drinking, moving back and releasing a cup). Fluent and reliable control was defined as an average ‘time to initialize’ (TTI) execution of each sub-task below 3 s, with successful initializations of at least 75% of sub-tasks within 5 s. During use of the system, no undesired side effects were reported. All participants were able to fluently and reliably control the vision-guided autonomous whole-arm exoskeleton (average TTI 2.12 ± 0.78 s across modalities, with 75% successful initializations reached at 1.9 s for EOG and 4.1 s for EEG control), paving the way for restoring ADLs in severe arm and hand paralysis.
The paper presents the development of a new robotic system for administering a highly sophisticated therapy to stroke patients. This therapy maximizes patient motivation and involvement and continuously assesses the progress of recovery from the functional viewpoint. Current robotic rehabilitation systems do not include patient information in the control loop. The main novelty of the presented approach is to close the patient in the loop and use multisensory data (such as pulse, skin conductance, skin temperature, position, velocity, etc.) to adaptively and dynamically change the complexity of the therapy and the real-time displays of a virtual reality system in accordance with specific patient requirements. First, an analysis of subjects' physiological responses to different tasks is presented, with the objective of selecting the best candidate physiological signals for estimating the patient's physiological state during the execution of a virtual rehabilitation task. Then, a prototype of a multimodal robotic platform is designed and developed to validate the scientific value of the proposed approach.
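The closed-loop idea above can be illustrated with a minimal sketch. This is an assumed proportional adaptation rule, not the controller described in the paper: the function name, the normalized arousal estimate, and the gain are all hypothetical choices for illustration.

```python
def adapt_difficulty(difficulty, arousal, target=0.5, gain=0.1,
                     lo=0.0, hi=1.0):
    """Illustrative closed-loop update (assumed rule, not the paper's
    controller): raise task difficulty when the patient's normalized
    arousal estimate (e.g. derived from skin conductance) is below a
    target level, and lower it when arousal is above the target.

    All quantities are normalized to [0, 1]."""
    error = target - arousal          # positive -> patient under-challenged
    new = difficulty + gain * error   # proportional adjustment
    return min(hi, max(lo, new))      # keep difficulty within bounds
```

In a full system this update would run once per therapy block, with `arousal` estimated from the fused physiological signals mentioned above.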
The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms with the inverse Jacobian; this approach allows exploiting the available Degrees of Freedom (DoFs) of the robot kinematic chain to achieve the desired end-effector pose. However, if used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied in the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allows successfully assisting patients during Activities of Daily Living (ADLs) in unstructured environments, while ensuring that anthropomorphic criteria are satisfied in the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton, and four patients with Limb Girdle Muscular Dystrophy. Validation was addressed to (i) compare the performance of the proposed motion planning with traditional methods; (ii) assess the generalization capabilities of the proposed method with respect to environment variability. Three ADLs were chosen to validate the system: drinking, pouring, and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured.
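The core mechanism of such motion primitives can be sketched for a single joint. This is a minimal one-dimensional discrete dynamic-movement-primitive sketch, assuming the standard second-order transformation system with hand-picked gains; it is not the paper's multi-joint, patient-specific implementation, and the function names are illustrative.

```python
import numpy as np

def learn_dmp_forcing(y_demo, dt, alpha=25.0, beta=6.25):
    """Invert the DMP transformation system to recover the forcing term
    that reproduces one demonstrated joint trajectory."""
    tau = (len(y_demo) - 1) * dt           # movement duration
    y0, g = y_demo[0], y_demo[-1]          # start and goal positions
    yd = np.gradient(y_demo, dt)           # demonstrated velocity
    ydd = np.gradient(yd, dt)              # demonstrated acceleration
    # tau^2 * ydd = alpha * (beta * (g - y) - tau * yd) + f  ->  solve for f
    f = tau**2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)
    return f, y0, g, tau

def rollout_dmp(f, y0, g, tau, dt, alpha=25.0, beta=6.25):
    """Integrate the DMP forward (explicit Euler), reproducing the
    demonstrated movement toward the goal g."""
    y, yd, traj = y0, 0.0, []
    for fi in f:
        ydd = (alpha * (beta * (g - y) - tau * yd) + fi) / tau**2
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)
```

In full implementations the forcing term is approximated with basis functions of a canonical phase variable, which is what allows generalizing the learnt trajectory to new goals and durations.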
Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to create a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. Alternatively, 2D virtual environments are used to represent the tasks with a lower degree of realism, using two-dimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in kinematic movement patterns when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of virtual environment visualization: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aims of the tasks, which consist of reaching peripheral or perspective targets depending on the virtual environment shown. Parameters such as maximum speed, reaction time, path length, and initial movement were analyzed from the data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories improved as they completed the therapy, suggesting increased motor recovery. Despite the similarity in most of the kinematic parameters, differences in reaction time and path length were higher using the 3D task.
The success rates were very similar. In conclusion, the use of 2D environments in virtual therapy may be a more appropriate and comfortable way to perform upper-limb rehabilitation tasks for post-stroke patients, in terms of the accuracy needed to execute optimal kinematic trajectories.
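The kinematic parameters named above can be computed directly from the robot-sampled end-effector trajectory. The following is a minimal sketch under assumed conventions (a fixed sampling period, a hypothetical speed threshold for movement onset); it is not the PUPArm analysis pipeline.

```python
import numpy as np

def kinematic_metrics(positions, dt, speed_threshold=0.05):
    """Compute illustrative kinematic parameters from a sampled planar
    end-effector trajectory.

    positions: N x 2 array of (x, y) samples in metres.
    dt: sampling period in seconds.
    speed_threshold: assumed onset threshold in m/s (hypothetical value)."""
    velocities = np.gradient(positions, dt, axis=0)      # finite differences
    speed = np.linalg.norm(velocities, axis=1)           # scalar speed
    # path length: sum of distances between consecutive samples
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    max_speed = float(speed.max())
    # reaction time: first sample whose speed exceeds the onset threshold
    # (argmax returns 0 if the threshold is never crossed)
    onset = int(np.argmax(speed > speed_threshold))
    reaction_time = onset * dt
    return {"path_length": float(path_length),
            "max_speed": max_speed,
            "reaction_time": reaction_time}
```

Comparing these metrics between the 2D and 3D conditions is then a matter of computing them per trial and testing for differences across patients.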
Background: End-effector robots are commonly used in robot-assisted neuro-rehabilitation therapies for the upper limbs, where the patient's hand can be easily attached to a splint. Nevertheless, they are not able to estimate and control the kinematic configuration of the upper limb during the therapy. However, the Range of Motion (ROM), together with the clinical assessment scales, offers a comprehensive assessment to the therapist. Our aim is to present a robust and stable kinematic reconstruction algorithm to accurately measure the upper limb joints using only an accelerometer placed on the upper arm.

Methods: The proposed algorithm is based on the inverse of the augmented Jacobian, as in the algorithm of Papaleo et al. (Med Biol Eng Comput 53(9):815–28, 2015). However, the elbow joint location is estimated through the rotation measured by the accelerometer during the arm movement, making the algorithm more robust against shoulder movements. Furthermore, we present a method to compute the initial configuration of the upper limb, necessary to start the integration method; a protocol to manually measure the upper arm and forearm lengths; and a shoulder position estimation. An optoelectronic system was used to test the accuracy of the proposed algorithm while healthy subjects performed upper limb movements holding the end effector of a seven Degrees of Freedom (DoF) robot. In addition, the previous and the proposed algorithms were studied during a neuro-rehabilitation therapy assisted by the ‘PUPArm’ planar robot with three post-stroke patients.

Results: The proposed algorithm reports a Root Mean Square Error (RMSE) of 2.13 cm in the elbow joint location and 1.89 cm in the wrist joint location, with high correlation. These errors lead to an RMSE of about 3.5 degrees (mean of the seven joints), with high correlation in all joints, with respect to the real upper limb acquired through the optoelectronic system.
The estimation of the upper limb joints through both algorithms reveals an instability in the previous one when shoulder movements appear, due to the inevitable trunk compensation in post-stroke patients.

Conclusions: The proposed algorithm is able to accurately estimate the human upper limb joints during a neuro-rehabilitation therapy assisted by end-effector robots. In addition, the implemented protocol can be followed in a clinical environment without optoelectronic systems, using only one accelerometer attached to the upper arm. Thus, the ROM can be accurately determined and could become an objective assessment parameter for a comprehensive assessment.

Electronic supplementary material: The online version of this article (10.1186/s12984-018-0348-0) contains supplementary material, which is available to authorized users.
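The simplest instance of such a kinematic reconstruction can be sketched in closed form. This is a two-link planar arm (shoulder fixed at the origin, manually measured segment lengths), recovering shoulder and elbow angles from the measured end-effector position; it is deliberately far simpler than the seven-joint augmented-Jacobian method described above, and the function name is illustrative.

```python
import numpy as np

def planar_arm_ik(wrist_xy, upper_arm, forearm):
    """Closed-form 2-link planar inverse kinematics: recover the shoulder
    and elbow angles (radians) from a measured wrist position, given the
    manually measured upper arm and forearm lengths (metres).

    Returns the elbow-flexion solution with elbow angle in [0, pi],
    where 0 means a fully extended arm."""
    x, y = wrist_xy
    d2 = x * x + y * y                     # squared shoulder-wrist distance
    # elbow angle from the law of cosines; clip guards rounding errors
    cos_elbow = (d2 - upper_arm**2 - forearm**2) / (2 * upper_arm * forearm)
    elbow = np.arccos(np.clip(cos_elbow, -1.0, 1.0))
    # shoulder angle: direction to the wrist minus the offset introduced
    # by elbow flexion
    shoulder = np.arctan2(y, x) - np.arctan2(
        forearm * np.sin(elbow), upper_arm + forearm * np.cos(elbow))
    return shoulder, elbow
```

The accelerometer measurement in the proposed algorithm serves precisely to disambiguate the redundant configurations that appear once the real 7-DoF arm replaces this planar simplification.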