In this article, we explain an often overlooked process that may significantly contribute to positive correlations between measures of species diversity and community stability. Empirical studies showing positive stability-diversity relationships have, for the most part, used a single class of stability (or, more accurately, instability) measures: the temporal variation in aggregate community properties such as biomass or productivity. We show that for these measures, stability will essentially always rise with species diversity because of the statistical averaging of the fluctuations in species' abundances. This simple probabilistic process will operate in the absence of any strong species interactions, although its strength is driven by the relative abundances of species, as well as by the existence of positive or negative correlations in the fluctuations of species. To explore the possible importance of this effect in real communities, we fit a simple simulation model to Tilman's grassland community. Our results indicate that statistical averaging might play a substantial role in explaining stability-diversity correlations for this and other systems. Models of statistical averaging can serve as a useful baseline for predictions of community stability, to which the influences of both negative and positive species interactions may then be added and tested.
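The statistical-averaging effect described above can be illustrated with a minimal Monte Carlo sketch (not the fitted simulation model from the paper; the means, variances, and species counts below are arbitrary assumptions chosen only to show the trend):

```python
import numpy as np

rng = np.random.default_rng(0)

def community_cv(n_species, n_years=2000):
    """Coefficient of variation (CV) of total community biomass when
    n_species fluctuate independently with identical means and
    variances -- the simplest statistical-averaging scenario."""
    biomass = rng.normal(loc=10.0, scale=2.0, size=(n_years, n_species))
    total = biomass.sum(axis=1)  # aggregate community property
    return total.std() / total.mean()

# CV of the aggregate falls as species are added, with no interactions
cvs = [community_cv(n) for n in (1, 4, 16, 64)]
```

For equal, uncorrelated species the CV falls roughly as 1/sqrt(n); as the abstract notes, skewed relative abundances or positively correlated fluctuations weaken the effect, while negative correlations strengthen it.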
As robotic devices are applied to problems beyond traditional manufacturing and industrial settings, we find that interaction between robots and humans, especially physical interaction, has become a fast developing field. Consider the application of robotics in healthcare, where we find telerobotic devices in the operating room facilitating dexterous surgical procedures, exoskeletons in the rehabilitation domain as walking aids and upper-limb movement assist devices, and even robotic limbs that are physically integrated with amputees who seek to restore their independence and mobility. In each of these scenarios, the physical coupling between human and robot, often termed physical human robot interaction (pHRI), facilitates new human performance capabilities and creates an opportunity to explore the sharing of task execution and control between humans and robots. In this review, we provide a unifying view of human and robot sharing task execution in scenarios where collaboration and cooperation between the two entities are necessary, and where the physical coupling of human and robot is a vital aspect. We define three key themes that emerge in these shared control scenarios, namely, intent detection, arbitration, and feedback. First, we explore methods for how the coupled pHRI system can detect what the human is trying to do, and how the physical coupling itself can be leveraged to detect intent. Second, once the human intent is known, we explore techniques for sharing and modulating control of the coupled system between robot and human operator. Finally, we survey methods for informing the human operator of the state of the coupled system, or the characteristics of the environment with which the pHRI system is interacting. 
At the conclusion of the survey, we present two case studies from the authors' prior work that exemplify shared control in pHRI systems, showing in detail how the framework of intent detection, arbitration, and feedback can be applied in the design of two prototypical systems: an upper-limb robotic rehabilitation device and haptic feedback from an upper-limb robotic prosthesis.
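Of the three themes, arbitration is the most naturally expressed in a few lines of code. One common scheme (a sketch only; the survey covers many richer approaches, and the function name and signature here are hypothetical) is linear blending of the human's and robot's control inputs:

```python
def arbitrate(u_human, u_robot, alpha):
    """Linear-blending arbitration: the commanded input is a convex
    combination of the human's and robot's inputs. alpha = 1 gives the
    human full authority; alpha = 0 gives the robot full authority."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * u_human + (1.0 - alpha) * u_robot
```

In practice, alpha is often modulated online, for example by confidence in the detected human intent.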
This paper presents the design, control, and performance of a high-fidelity four degree-of-freedom wrist exoskeleton robot, RiceWrist, for training and rehabilitation. The RiceWrist is intended to provide kinesthetic feedback during the training of motor skills or rehabilitation of reaching movements. Motivation for such applications is based on findings that show robot-assisted physical therapy aids in the rehabilitation process following neurological injuries. The exoskeleton device accommodates forearm supination and pronation, wrist flexion and extension, and radial and ulnar deviation in a compact parallel mechanism design with low friction, zero backlash, and high stiffness. As compared to other exoskeleton devices, the RiceWrist allows easy measurement of human joint angles and independent kinesthetic feedback to individual human joints. In this paper, joint-space as well as task-space position controllers and an impedance-based force controller for the device are presented. The kinematic performance of the device is characterized in terms of its workspace, singularities, manipulability, backlash, and backdrivability. The dynamic performance of RiceWrist is characterized in terms of motor torque output, joint friction, step responses, behavior under closed-loop set-point and trajectory tracking control, and display of virtual walls. The device is singularity-free, encompasses most of the natural workspace of the human joints, and exhibits low friction, zero backlash, and high manipulability, which are kinematic properties that characterize a high-quality impedance display device. In addition, the device displays fast, accurate response under position control that matches human actuation bandwidth and the capability to display sufficiently hard contact with little coupling between controlled degrees of freedom.
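The virtual-wall display mentioned above is typically rendered as a one-sided spring-damper. A minimal one-DOF sketch (the gains below are illustrative placeholders, not the RiceWrist's):

```python
def virtual_wall_force(x, v, x_wall=0.0, k=1000.0, b=5.0):
    """One-DOF virtual wall: a spring-damper force opposes penetration
    past x_wall; free space outside the wall exerts no force.
    x, v: position (m) and velocity (m/s); k, b: stiffness and damping."""
    penetration = x - x_wall
    if penetration <= 0.0:
        return 0.0  # end effector is in free space
    return -k * penetration - b * v
```

The achievable wall stiffness before instability is a standard benchmark for impedance display devices, which is why the abstract emphasizes low friction and zero backlash.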
This study demonstrates the feasibility of detecting motor intent from brain activity of chronic stroke patients using an asynchronous electroencephalography (EEG)-based brain machine interface (BMI). Intent was inferred from movement related cortical potentials (MRCPs) measured over an optimized set of EEG electrodes. Successful intent detection triggered the motion of an upper-limb exoskeleton (MAHI Exo-II), to guide movement and to encourage active user participation by providing instantaneous sensory feedback. Several BMI design features were optimized to increase system performance in the presence of single-trial variability of MRCPs in the injured brain: (1) an adaptive time window was used for extracting features during BMI calibration; (2) training data from two consecutive days were pooled for BMI calibration to increase robustness to the day-to-day variations typical of EEG; and (3) BMI predictions were gated by residual electromyography (EMG) activity from the impaired arm, to reduce the number of false positives. This patient-specific BMI calibration approach can accommodate a broad spectrum of stroke patients with diverse motor capabilities. Following BMI optimization on day 3, testing of the closed-loop BMI-MAHI exoskeleton on the fourth and fifth days of the study showed consistent BMI performance, with overall mean true positive rate (TPR) = 62.7 ± 21.4% on day 4 and 67.1 ± 14.6% on day 5. The overall false positive rate (FPR) across subjects was 27.74 ± 37.46% on day 4 and 27.5 ± 35.64% on day 5; however, for two subjects who had residual motor function and could benefit from the EMG-gated BMI, the mean FPR was quite low (< 10%). On average, motor intent was detected 367 ± 328 ms before movement onset during closed-loop operation. These findings provide evidence that closed-loop EEG-based BMIs for stroke patients can be designed and optimized to perform well across multiple days without system recalibration.
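The EMG gating in point (3) amounts to a logical conjunction of two detectors. A minimal sketch (the function name and threshold values are hypothetical, chosen only to illustrate the idea):

```python
def gated_intent(p_move, emg_rms, p_threshold=0.6, emg_threshold=0.05):
    """Declare movement intent only when both the EEG decoder's
    probability of movement AND the residual-EMG envelope cross their
    thresholds; requiring both suppresses false positives that the
    EEG decoder would produce on its own."""
    return (p_move >= p_threshold) and (emg_rms >= emg_threshold)
```

This is why only subjects with residual motor function (and hence usable EMG) saw the low false positive rates reported above.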
We focus on learning robot objective functions from human guidance: specifically, from physical corrections provided by the person while the robot is acting. Objective functions are typically parametrized in terms of features, which capture aspects of the task that might be important. When the person intervenes to correct the robot's behavior, the robot should update its understanding of which features matter, how much, and in what way. Unfortunately, real users do not provide optimal corrections that isolate exactly what the robot was doing wrong. Thus, when receiving a correction, it is difficult for the robot to determine which features the person meant to correct, and which features were changed unintentionally. In this paper, we propose to improve the efficiency of robot learning during physical interactions by reducing unintended learning. Our approach allows the human-robot team to focus on learning one feature at a time, unlike state-of-the-art techniques that update all features at once. We derive an online method for identifying the single feature which the human is trying to change during physical interaction, and experimentally compare this one-at-a-time approach to the all-at-once baseline in a user study. Our results suggest that users teaching one-at-a-time perform better, especially in tasks that require changing multiple features.
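The one-at-a-time idea can be sketched as follows. This is a deliberately simplified heuristic, not the paper's online identification method: it updates only the weight of the feature whose count changed most under the human's correction, leaving all other weights untouched.

```python
import numpy as np

def one_feature_update(weights, phi_before, phi_after, lr=0.1):
    """Update only the single feature whose value changed most under
    the human's physical correction (a simplified one-at-a-time rule;
    the paper's online method for identifying the intended feature is
    more involved). Returns the new weights and the chosen index."""
    delta = phi_after - phi_before
    i = int(np.argmax(np.abs(delta)))   # most-changed feature
    new_w = weights.copy()
    new_w[i] += lr * delta[i]
    return new_w, i
```

An all-at-once baseline would instead apply `lr * delta` to every weight, which is exactly how unintended corrections leak into features the person never meant to change.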
In this paper, we analyze the correlations between four clinical measures (Fugl-Meyer upper extremity scale, Motor Activity Log, Action Research Arm Test, and Jebsen-Taylor Hand Function Test) and four robotic measures (smoothness of movement, trajectory error, average number of target hits per minute, and mean tangential speed), used to assess motor recovery. Data were gathered as part of a hybrid robotic and traditional upper extremity rehabilitation program for nine stroke patients. Smoothness of movement and trajectory error, temporally and spatially normalized measures of movement quality defined for point-to-point movements, were found to have significant moderate to strong correlations with all four of the clinical measures. The strong correlations suggest that smoothness of movement and trajectory error may be used to compare outcomes of different rehabilitation protocols and devices effectively, provide improved resolution for tracking patient progress compared to only pre- and post-treatment measurements, enable accurate adaptation of therapy based on patient progress, and deliver immediate and useful feedback to the patient and therapist.
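Movement smoothness for point-to-point reaches is commonly computed from the speed profile. The sketch below uses a dimensionless squared-jerk formulation, which is one standard choice; the paper's own normalized smoothness measure may be defined differently.

```python
import numpy as np

def dimensionless_jerk(speed, dt):
    """Dimensionless squared-jerk smoothness for a sampled speed
    profile: values closer to zero (less negative) mean smoother
    movement. Normalizing by duration**3 / peak**2 makes the metric
    temporally and spatially scale-free."""
    jerk = np.gradient(np.gradient(speed, dt), dt)  # d^2(speed)/dt^2
    duration = dt * (len(speed) - 1)
    peak = speed.max()
    return -(duration ** 3 / peak ** 2) * np.sum(jerk ** 2) * dt
```

Because the metric is normalized for movement time and amplitude, it can track a patient's progress across sessions even as their reaches become faster and larger.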
Shared-control haptic guidance is a common form of robot-mediated training used to teach novice subjects to perform dynamic tasks. Shared-control guidance is distinct from more traditional guidance controllers, such as virtual fixtures, in that it provides novices with real-time visual and haptic feedback from a real or virtual expert. Previous studies have shown varying levels of training efficacy using shared-control guidance paradigms; it is hypothesized that these mixed results are due to interactions between specific guidance implementations ("paradigms") and tasks. This work proposes a novel guidance paradigm taxonomy intended to help classify and compare the multitude of implementations in the literature, as well as a revised proxy rendering model to allow for the implementation of more complex guidance paradigms. The efficacies of four common paradigms are compared in a controlled study with 50 healthy subjects and two dynamic tasks. The results show that guidance paradigms must be matched to a task's dynamic characteristics to elicit effective training and low workload. Based on these results, we provide suggestions for the future development of improved haptic guidance paradigms.
Rehabilitation of the hands is critical for the restoration of independence in activities of daily living for individuals exhibiting disabilities of the upper extremities. There is initial evidence that robotic devices with force-control-based strategies can help in effective rehabilitation of human limbs. However, to the best of our knowledge, none of the existing hand exoskeletons allow for accurate force or torque control. In this work, we present a novel index finger exoskeleton with Bowden-cable-based series elastic actuation allowing for bidirectional torque control of the device with high backdrivability and low reflected inertia. We present exoskeleton and finger joint torque controllers along with an optimization-based offline parameter estimator. We then carry out tests with the developed prototype to characterize its kinematics, dynamics, and controller performance. Results show that the device preserves the characteristics of natural motion of the finger and can be controlled to achieve both exoskeleton and finger joint torque control. Finally, dynamic transparency tests show that the device can be controlled to offer minimal resistance to finger motion. Beyond the present application of the device as a hand rehabilitation exoskeleton, it has the potential to be used as a haptic device for teleoperation.
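The core idea of series elastic actuation is that torque is sensed through the deflection of a known spring placed between the motor and the load, which can then be closed around with a simple feedback law. A minimal sketch (names, gains, and the proportional law are illustrative; the paper's controller also handles Bowden-cable effects):

```python
def sea_torque(theta_motor, theta_joint, k_spring):
    """Series elastic actuation: joint torque is inferred from the
    deflection of the elastic element between motor and joint."""
    return k_spring * (theta_motor - theta_joint)

def sea_velocity_command(tau_desired, theta_motor, theta_joint,
                         k_spring, kp=2.0):
    """Proportional torque controller: command a motor velocity that
    drives the measured spring torque toward the desired torque."""
    error = tau_desired - sea_torque(theta_motor, theta_joint, k_spring)
    return kp * error
```

Commanding `tau_desired = 0` with this loop is one way to realize the dynamic transparency mode described above, actively canceling the device's resistance to finger motion.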