Myocontrol, that is, control of a prosthesis via muscle signals, is still a surprisingly hard problem. Recent research indicates that surface electromyography (sEMG), the traditional technique used to detect a subject's intent, could proficiently be replaced by, or combined with, other techniques (multi-modal myocontrol), with the aim of improving both dexterity and reliability. In this paper we present an online assessment of multi-modal sEMG and force myography (FMG) targeted at hand and wrist myocontrol. Twenty sEMG and FMG sensors in total were used to enable simultaneous and proportional control of hand opening/closing, wrist pronation/supination, and wrist flexion/extension in 12 intact subjects. We found that FMG in general yields better performance than sEMG, and that the main drawback of the sEMG array we used is not the inability to perform a desired action, but rather action interference, that is, the undesired concurrent activation of another action. FMG, on the other hand, causes less interference.
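As a rough illustration of simultaneous and proportional multimodal control, the sketch below maps a combined 20-channel sEMG/FMG feature vector to three degree-of-freedom activations with a plain least-squares regressor. The channel count and degrees of freedom follow the abstract; the regression model, function names, and toy data are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

N_SEMG, N_FMG, N_DOF = 10, 10, 3  # assumed 10+10 channel split; 3 DoFs per the abstract

def fit_linear_map(X, Y, ridge=1e-3):
    """Least-squares map from sensor features X (T x 20) to DoF
    activations Y (T x 3); a stand-in for the actual regressor."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    W = np.linalg.solve(Xb.T @ Xb + ridge * np.eye(Xb.shape[1]), Xb.T @ Y)
    return W

def predict(W, x):
    """Predict hand open/close, pronation/supination, and flexion/extension
    activations in [-1, 1] from one multimodal sample x (20,)."""
    xb = np.append(x, 1.0)
    return np.clip(xb @ W, -1.0, 1.0)

# toy usage: calibrate on guided movements, then predict online
X_cal = np.random.rand(500, N_SEMG + N_FMG)      # stacked sEMG+FMG samples
Y_cal = np.random.uniform(-1, 1, (500, N_DOF))   # recorded ground-truth activations
W = fit_linear_map(X_cal, Y_cal)
print(predict(W, np.random.rand(N_SEMG + N_FMG)))
```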
The recent generation of compliant robots enables kinesthetic teaching of novel skills by human demonstration, allowing tasks to be transferred to the robot more intuitively than with conventional programming interfaces. Physical interactions can be programmed by manually guiding the robot, which learns the behavior from the resulting motion and force data. To let the robot react to changes in the environment, force sensing can be used to identify constraints and act accordingly. Since autonomous exploration of the whole workspace is time-consuming, we propose to learn these exploration schemes from human demonstrations in an object-targeted manner. The presented teaching strategy and learning framework generate adaptive robot behaviors that rely on the robot's sense of touch in a systematically changing environment. A generated behavior consists of a hierarchical representation of skills: haptic exploration skills, which touch the environment with the end effector, and relative manipulation skills, which are parameterized according to previous exploration events. The effectiveness of the approach is demonstrated in a manipulation task in which the adaptive task structure generalizes to unseen object locations; the robot autonomously manipulates objects without relying on visual feedback.
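The hierarchy described above can be pictured as exploration skills that return contact poses, which then parameterize relative manipulation skills. The following is a hypothetical simplification; the class names, the guarded-motion primitive, and the `robot` interface are assumptions, not the authors' implementation.

```python
import numpy as np

class HapticExplorationSkill:
    """Move the end effector along a direction until a contact force
    threshold is exceeded, then return the contact position."""
    def __init__(self, direction, force_threshold=5.0):
        self.direction = np.asarray(direction, dtype=float)
        self.force_threshold = force_threshold

    def execute(self, robot):
        while robot.measured_force() < self.force_threshold:
            robot.step_towards(self.direction)
        return robot.end_effector_position()   # the "exploration event"

class RelativeManipulationSkill:
    """Replay a demonstrated motion expressed relative to a reference
    pose obtained from a previous exploration event."""
    def __init__(self, relative_waypoints):
        self.relative_waypoints = relative_waypoints

    def execute(self, robot, reference_position):
        for offset in self.relative_waypoints:
            robot.move_to(reference_position + offset)

# hypothetical task: locate the object by touch, then manipulate it
# contact = HapticExplorationSkill([0, 0, -1]).execute(robot)
# RelativeManipulationSkill(demo_offsets).execute(robot, contact)
```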
Myocontrol, that is, control of prostheses using bodily signals, has proved over the decades to be a surprisingly hard problem for the scientific community of assistive and rehabilitation robotics. In particular, traditional surface electromyography (sEMG) no longer seems sufficient to guarantee dexterity (i.e., control over several degrees of freedom) and, most importantly, reliability. Multi-modal myocontrol is the idea of using novel signal-gathering techniques as a replacement for, or alongside, sEMG, providing high-density and diverse signals to improve dexterity and make the control more reliable. In this paper we present an offline and online assessment of multi-modal sEMG and force myography (FMG) targeted at hand and wrist myocontrol. A total of twenty sEMG and FMG sensors were used simultaneously, in several combined configurations, to predict opening/closing of the hand and activation of two degrees of freedom of the wrist of ten intact subjects. The analysis aimed at determining the optimal sensor combination and control parameters; the experimental results indicate that sEMG sensors alone perform worst, yielding an nRMSE of 9.1%, while mixing FMG and sEMG, or using FMG only, reduces the nRMSE to 5.2-6.6%. To validate these results, we engaged the subject with median performance in an online goal-reaching task. Analysis of this further experiment reveals that the online behaviour is similar to the offline one.
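For reference, a normalized RMSE of the kind quoted above is typically computed per degree of freedom as the root-mean-square error between predicted and ground-truth activations divided by the signal range. The helper below follows that common convention; the exact normalization used in the paper may differ.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the ground truth, in percent;
    one common convention, assumed here."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

# toy check: predictions within ~5% of a [0, 1] activation signal
t = np.linspace(0, 2 * np.pi, 200)
y_true = 0.5 + 0.5 * np.sin(t)
y_pred = y_true + np.random.normal(0, 0.05, t.shape)
print(f"nRMSE: {nrmse(y_true, y_pred):.1f}%")
```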
Conditional tasks include a decision on how the robot should react to an observation, which requires selecting the appropriate action during execution. For instance, spatially sorting objects may require different goal positions depending on an object's properties, such as weight or geometry. We propose a framework that allows a user to demonstrate conditional tasks, including recovery behaviors for expected situations. In our framework, human demonstrations define the actions required for task completion, which we term solutions; each solution accounts for different conditions that may arise during execution. We exploit a clustering scheme to assign multiple demonstrations to a specific solution, which is then encoded in a probabilistic model. At runtime, our approach monitors the execution of the current solution using the measured robot pose, external wrench, and grasp status. Deviations from the expected state are classified as anomalies, triggering the execution of an alternative solution appropriately selected from the pool of demonstrated actions. Experiments on a real robot show the capability of the proposed approach to detect anomalies online and switch to an appropriate solution that fulfills the task.
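One plausible way to realize the monitoring step is to score the measured state (pose, external wrench, grasp status) against the distribution expected by the current solution's probabilistic model and flag large Mahalanobis distances as anomalies. The sketch below is an assumption about the mechanics, not the paper's exact formulation.

```python
import numpy as np

class SolutionMonitor:
    """Flag execution anomalies by comparing the measured state vector
    (e.g., pose + wrench + grasp flag) to the state expected at the
    current phase of the encoded solution."""
    def __init__(self, expected_mean, expected_cov, threshold=3.0):
        self.mu = np.asarray(expected_mean, dtype=float)
        self.cov_inv = np.linalg.inv(expected_cov)
        self.threshold = threshold  # Mahalanobis cutoff (assumed value)

    def is_anomaly(self, state):
        d = state - self.mu
        mahalanobis = np.sqrt(d @ self.cov_inv @ d)
        return mahalanobis > self.threshold

# toy usage: nominal state passes, a large external wrench trips the monitor
monitor = SolutionMonitor(np.zeros(3), np.eye(3) * 0.01)
print(monitor.is_anomaly(np.array([0.01, 0.0, 0.02])))  # False
print(monitor.is_anomaly(np.array([0.0, 0.9, 0.0])))    # True -> switch solution
```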
Conventional robot programming methods are not suited for non-experts to intuitively teach robots new tasks; for this reason, the potential of collaborative robots in production cannot yet be fully exploited. In this work, we propose an active learning framework in which the robot and the user collaborate to incrementally program a complex task. Starting with a basic model, the robot's task knowledge can be extended over time as new situations require additional skills. To this end, an online anomaly detection algorithm automatically identifies new situations during task execution by monitoring the deviation between measured and commanded sensor values. The robot then triggers a teaching phase, in which the user decides to either refine an existing skill or demonstrate a new one. The different skills of a task are encoded in separate probabilistic models and structured in a high-level graph, guaranteeing robust execution and successful transitions between skills. In the experiments, our approach is compared to two state-of-the-art Programming by Demonstration frameworks on a real system. The results show increased intuitiveness and task performance, allowing shop-floor workers to program industrial tasks with our framework.
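The incremental loop described above (execute, detect deviation between measured and commanded values, then ask the user to refine or add a skill) could be organized roughly as follows. The skill models, deviation metric, and teaching hooks are all illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def incremental_task_loop(task_graph, robot, user, deviation_threshold=0.1):
    """Execute skills along the task graph; when measured sensor values
    deviate too far from commanded ones, pause for a teaching phase in
    which the user refines the current skill or demonstrates a new one.
    All interfaces (task_graph, robot, user) are assumed placeholders."""
    skill = task_graph.start_skill()
    while skill is not None:
        for command in skill.commands():
            robot.apply(command)
            deviation = np.linalg.norm(robot.measured() - command)
            if deviation > deviation_threshold:       # online anomaly
                demo = user.demonstrate()              # kinesthetic teaching
                if user.wants_to_refine(skill):
                    skill.update_from(demo)            # refine existing model
                else:
                    task_graph.add_skill(skill, demo)  # branch: new skill
                break
        skill = task_graph.next_skill(skill)
```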