In the last few decades, bio-robotics applications such as exoskeletons, prosthetics, and robotic wheelchairs have progressed from science-fiction machines to nearly commercialized products. Although several challenges remain with electromyography (EMG) signals, advances in using EMG signals to control such bio-robotic applications have been enormous. Similarly, recent attempts to develop electroencephalography (EEG)-based control methods have shown the potential of this area in the modern bio-robotics field. However, EEG-based control methods are also yet to be perfected. A new approach that combines these two control methods, exploiting the advantages and diminishing the disadvantages of each, may therefore be promising. In this paper, we review hybrid fusions of EMG- and EEG-based control approaches in the bio-robotics field that have been attempted or developed to date. We provide a design overview of each method and consider the main features and merits/disadvantages of the approaches analyzed. We also discuss the current challenges facing these hybrid EEG-EMG control approaches and propose some potential future directions.
In recent years, many myoelectric arms controlled by electromyogram (EMG) signals from an amputee's stump or residual muscles have been proposed. For above-elbow amputees, however, the muscles that generate the forearm, wrist, and hand motions no longer remain. Most myoelectric arms for above-elbow amputees therefore have fewer degrees of freedom, and their dexterity is relatively poor compared with a human upper limb. To improve the quality of life of above-elbow amputees and to increase their mobility in daily life activities, additional input signals must be prepared. One strong candidate for such an additional input signal is the electroencephalogram (EEG): an electric signal measured on the scalp, so it can be recorded even from an above-elbow amputee. In this study, an artificial arm for above-elbow amputees is controlled based on EMG and EEG signals. This paper proposes an EEG-based motion estimation method to control the forearm supination/pronation motion of the artificial arm. The angle, angular velocity, and angular acceleration of the forearm motion are estimated at several velocities using EEG signals.
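As an illustration of the kind of pipeline involved (not the authors' actual method), estimating forearm kinematics from EEG can be sketched as band-power feature extraction followed by a linear decoder. The function names, band choices, and ridge-regression model below are all illustrative assumptions:

```python
import numpy as np

def band_power(eeg, fs, band):
    """Mean power in a frequency band, per channel (simple FFT periodogram).
    `eeg` has shape (channels, samples); `fs` is the sampling rate in Hz."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / eeg.shape[-1]
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[..., mask].mean(axis=-1)

def eeg_features(eeg_window, fs):
    """Feature vector: mu-band (8-13 Hz) and beta-band (13-30 Hz) power per channel."""
    return np.concatenate([band_power(eeg_window, fs, (8, 13)),
                           band_power(eeg_window, fs, (13, 30))])

def fit_linear_decoder(X, Y, alpha=1.0):
    """Ridge-regularised least squares mapping feature rows of X to kinematic
    targets in Y, e.g. [angle, angular velocity, angular acceleration]."""
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ Y)

def decode(features, W):
    """Predicted kinematics for one feature vector or a batch of them."""
    return features @ W
```

In practice such a decoder would be trained on EEG windows recorded while the forearm moves at measured angles, velocities, and accelerations; the linear model here is only a minimal sketch of that idea.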
Physicians use the pain expressions shown on a patient's face to regulate their palpation methods during physical examination. Learning to interpret the facial expressions of patients of different genders and ethnicities remains a challenge, taking novices a long time to master through experience. This paper presents MorphFace: a controllable 3D physical-virtual hybrid face that represents the pain expressions of patients from different ethnicity-gender backgrounds. It is also an intermediate step toward exposing trainee physicians to the gender and ethnic diversity of patients. We extracted four principal components from the Chicago Face Database to design a four-degrees-of-freedom (DoF) physical face, controlled via tendons, that spans ∼85% of facial variation across gender and ethnicity. Details such as skin colour, skin texture, and facial expressions are synthesized by a virtual model and projected onto the 3D physical face via a front-mounted LED projector to obtain a hybrid, controllable patient face simulator. A user study revealed that certain differences in ethnicity between the observer and MorphFace lead to different perceived pain intensities for the same pain level rendered by MorphFace. This highlights the value of MorphFace as a controllable hybrid simulator for quantifying perceptual differences during physician training.
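The component-extraction step can be illustrated with a generic PCA sketch: choose the smallest set of principal components whose cumulative explained variance reaches the 85% target. The input data, function names, and shapes below are illustrative assumptions, not the paper's actual pipeline on the Chicago Face Database:

```python
import numpy as np

def principal_components(face_params, target_variance=0.85):
    """PCA via SVD: return the smallest set of components whose cumulative
    explained-variance ratio reaches target_variance, plus the mean face.
    `face_params` has shape (n_faces, n_measurements)."""
    mean = face_params.mean(axis=0)
    centered = face_params - mean
    _, S, Vt = np.linalg.svd(centered, full_matrices=False)
    var_ratio = (S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(np.cumsum(var_ratio), target_variance)) + 1
    return Vt[:k], mean

def project(face_params, components, mean):
    """Coordinates of faces in the reduced component space
    (e.g. usable as tendon-actuation targets)."""
    return (face_params - mean) @ components.T
```

Here each retained component would correspond to one tendon-driven DoF of the physical face; four components sufficing for ∼85% of the variance is the paper's reported result, not something this sketch derives.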
Recent technological advances in robotic sensing and actuation have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient's facial expressions during medical examinations or procedures has been a key focus area in medical training. This paper reviews the facial expression rendering systems in medical training simulators reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits, and limitations are outlined. Medical educators, students, and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different age, gender, and ethnicity groups, making them more versatile than purely virtual or physical systems. The overall findings of this review and the proposed future directions will benefit researchers interested in initiating or developing such facial expression rendering systems in medical training simulators.