Assistive devices such as exoskeletons or orthoses often make use of physiological data that allow the detection or prediction of movement onset. Movement onset can be detected at the executing site, the skeletal muscles, e.g., by means of electromyography. Movement intention can be detected by analyzing brain activity, recorded by, e.g., electroencephalography, or from the behavior of the subject, e.g., by eye movement analysis. These different approaches can be applied depending on the kind of neuromuscular disorder, the state of therapy, or the assistive device. In this work we conducted experiments with healthy subjects performing self-initiated and self-paced arm movements. While other studies have shown that multimodal signal analysis can improve prediction performance, we show that a sensible combination of electroencephalographic and electromyographic data can potentially improve the adaptability of assistive technical devices with respect to individual demands, e.g., of early and late stages in rehabilitation therapy. In early stages, for patients with weak muscular or motor-related brain activity, it is important to achieve high positive detection rates to support self-initiated movements: detecting most movement intentions from electroencephalographic or electromyographic data motivates patients and can enhance their progress in rehabilitation. In a later stage, for patients with stronger muscle or brain activity, reliable movement prediction is more important, to encourage patients to behave more accurately and to invest more effort in the task; further, the false detection rate needs to be reduced. We propose that both types of physiological data be used in an AND combination, in which both signals must be detected to drive a movement.
With this approach, the behavior of the patient during later therapy can be controlled better, and false positive detections, which can be very annoying for patients who are further advanced in rehabilitation, can be avoided.
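The stage-dependent combination described above can be sketched as a simple gating rule. This is a minimal illustration, not the authors' implementation: the function name, the `stage` parameter, and the OR fallback for the early stage are assumptions made for clarity; only the AND combination for the later stage is stated in the text.

```python
def assist_decision(eeg_detected: bool, emg_detected: bool, stage: str) -> bool:
    """Decide whether the assistive device should drive the movement.

    Hypothetical gating rule: `stage` selects between a permissive
    early-rehabilitation mode and a strict late-rehabilitation mode.
    """
    if stage == "early":
        # Early stage: support every detected intention to maximize
        # positive detections and motivate the patient.
        return eeg_detected or emg_detected
    # Later stage: AND combination -- both EEG and EMG detections are
    # required, suppressing annoying false positives and demanding
    # more effort from the patient.
    return eeg_detected and emg_detected
```

In the late-stage mode a spurious EEG detection alone no longer triggers the device, which is exactly the false-positive reduction the abstract argues for.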
The rehabilitation of patients should not be limited to the first phases of intensive hospital care; support and therapy should also be guaranteed in later stages, especially during activities of daily living, if the patient's state requires it. However, aid should only be given to the patient when needed and only as much as is required. To allow this, automatic support of self-initiated movements and patient-cooperative control strategies have to be developed and integrated into assistive systems. In this work, we first give an overview of different kinds of neuromuscular diseases, review different forms of therapy, and outline possible fields of rehabilitation and the benefits of robot-aided rehabilitation. Next, the mechanical design and control scheme of an upper-limb orthosis for rehabilitation are presented. Two control models for the orthosis are explained, which compute the triggering function and the level of assistance provided by the device. As input to the models, fused sensor data from the orthosis and physiological data in the form of electromyography (EMG) signals are used.
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach to single-trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR can detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity, such as the P300, which indicates concrete states or brain processes such as target recognition. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single-trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes if the samples of both classes miss a relevant pattern. Classifier transfer is important for the use of BR in application scenarios where only small amounts of training data are available. Finally, we demonstrate a dual BR application in an experimental setup that requires behavior similar to that performed during the teleoperation of a robotic arm; here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
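Supervised single-trial EEG classification, as used by brain reading, can be sketched as follows. This is a generic illustration on synthetic data, not the authors' pipeline: the epoch dimensions, the injected P300-like deflection, and the choice of linear discriminant analysis are all assumptions; real BR systems use recorded EEG and task-specific preprocessing.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic single-trial "epochs": 200 trials x 64 channels x 50 samples.
n_trials, n_channels, n_samples = 200, 64, 50
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)  # 1 = target (event-related), 0 = standard

# Target trials carry an added deflection mimicking a P300-like pattern
# in a mid-epoch time window.
X[y == 1, :, 25:35] += 1.0

# Flatten each epoch into a feature vector and train a supervised classifier.
X_flat = X.reshape(n_trials, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, random_state=0)
clf = LinearDiscriminantAnalysis()
clf.fit(X_tr, y_tr)
print(f"single-trial accuracy: {clf.score(X_te, y_te):.2f}")
```

The classifier's per-trial output on held-out epochs is the kind of signal an HMI could consume to predict the operator's state.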
Advanced man-machine interfaces (MMIs) are being developed for teleoperating robots at remote and difficult-to-access places. Such MMIs make use of a virtual environment and can therefore let the operator immerse himself or herself in the robot's environment. In this paper, we present our MMI for multi-robot control, which can adapt online to changes in task load and task engagement. Applying our approach of embedded Brain Reading, we improve user support and the efficiency of interaction. The level of task engagement was inferred from the single-trial detectability of P300-related brain activity that was naturally evoked during interaction; with our approach, no secondary task is needed to measure task load. The approach is based on research results on the single-stimulus paradigm, the distribution of brain resources, and their effect on the P300 event-related component. It further considers the modulation that a delayed reaction time causes on the P300 component evoked by complex responses to task-relevant messages. We validate our concept using single-trial machine learning analysis, analysis of averaged event-related potentials, and behavioral analysis. As main results we show that (1) the runtime needed to perform the interaction tasks improves significantly compared to a setting in which all subjects could easily perform the tasks; (2) the single-trial detectability of the P300 event-related potential can be used to measure changes in task load and task engagement during complex interaction, while also being sensitive to the operator's level of experience; and (3) this measure can be used to adapt the MMI individually to the different needs of users without increasing total workload. Our online adaptation of the proposed MMI is based on continuous supervision of the operator's cognitive resources by means of embedded Brain Reading.
Operators with different qualifications or capabilities receive only as many tasks as they can handle, avoiding both mental overload and mental underload.
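The adaptation principle above can be illustrated with a toy allocation rule. This is a hypothetical sketch, not the published adaptation mechanism: the threshold values, the one-task step size, and the function interface are invented for illustration; only the underlying idea (map P300 detectability to task allocation to avoid both overload and underload) comes from the text.

```python
def adapt_task_load(detectability: float, current_tasks: int,
                    low: float = 0.6, high: float = 0.85,
                    max_tasks: int = 4) -> int:
    """Return the number of tasks to allocate to the operator next.

    Hypothetical rule: low single-trial P300 detectability suggests
    depleted cognitive resources (overload), so the allocation is
    reduced; high detectability suggests spare capacity (risk of
    underload), so it is increased. Thresholds are illustrative.
    """
    if detectability < low:
        return max(1, current_tasks - 1)   # back off, operator is overloaded
    if detectability > high:
        return min(max_tasks, current_tasks + 1)  # add work, avoid underload
    return current_tasks                    # engagement is in the target band
```

An experienced operator whose P300 stays highly detectable would thus be ramped up toward `max_tasks`, while a struggling one is relieved, without any secondary task being imposed to measure workload.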