Surface electromyography (sEMG) signals play an important role in hand-function recovery training. This paper introduces an IoT-enabled stroke rehabilitation system based on a smart wearable armband (SWA), machine learning (ML) algorithms, and a 3-D-printed dexterous robot hand. User comfort is a key issue for wearable devices. The SWA integrates a low-power, compact IoT sensing device with textile electrodes and can measure, pre-process, and wirelessly transmit bio-potential signals. Evenly distributing the surface electrodes over the user's forearm mitigates poor classification accuracy. A new method is proposed to find the optimal feature set. ML algorithms are leveraged to analyze and discriminate the features of different hand movements, and their performance is assessed with classification-complexity estimation algorithms and principal component analysis. Verification results show that all nine gestures can be identified with an average accuracy of up to 96.20%. In addition, a 3-D-printed five-finger robot hand was implemented for hand rehabilitation training. The user's hand-movement intentions are extracted and converted into commands that drive the motors assembled inside the dexterous robot hand. As a result, the robot hand mimics the user's gestures in real time, showing that the proposed system can serve as a training tool to facilitate the rehabilitation process for post-stroke patients.
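The abstract does not specify which sEMG features or classifier were used. A minimal sketch of the usual pipeline, assuming the common Hudgins time-domain feature set over sliding windows, might look like:

```python
import numpy as np

def td_features(window):
    """Common time-domain sEMG features (Hudgins set) for one channel window.
    These are illustrative choices, not necessarily the paper's feature set."""
    mav = np.mean(np.abs(window))                       # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                 # root mean square
    wl = np.sum(np.abs(np.diff(window)))                # waveform length
    zc = np.sum(window[:-1] * window[1:] < 0)           # zero crossings
    return np.array([mav, rms, wl, zc], dtype=float)

def extract(signal, win=200, step=100):
    """Slide a window over a 1-D sEMG channel and stack feature vectors.
    The resulting matrix would feed an ML classifier (e.g., SVM or LDA)."""
    return np.array([td_features(signal[i:i + win])
                     for i in range(0, len(signal) - win + 1, step)])
```

Each row of the feature matrix corresponds to one window; gesture classification then reduces to standard supervised learning on these rows.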
In industrial manufacturing, robots are increasingly required to work alongside human counterparts, with human workers sharing their skills with the robots. Intuitive human–robot interaction raises safety challenges that can be addressed with sensor-based active control technology. In this article, we designed and fabricated a three-dimensional flexible robot skin made of a piezoresistive nanocomposite to enhance the safety of collaborative robots. The robot skin endowed a YuMi robot with human-skin-like tactile perception. The sensing unit in the robot skin showed a one-to-one correspondence between force input and resistance output (percentage change in resistance) over the range 0–6.5 N. Calibration indicated that the sensing unit offers a maximum force sensitivity (percentage change in resistance per newton) of 18.83% N−1 when loaded with an external force of 6.5 N. The fabricated sensing unit showed good reproducibility under cyclic loading (0–5.5 N) at a frequency of 0.65 Hz for 3500 cycles. In addition, to suppress bypass crosstalk in the robot skin, we designed a readout circuit for sampling tactile data. Experiments were then conducted to estimate the contact/collision force between an object and the robot in real time. The results show that the implemented robot skin provides an efficient approach to natural and secure human–robot interaction.
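The sensitivity figure quoted above follows the standard definition S = (ΔR/R₀)/F, expressed as a percentage per newton. A small sketch of that calibration arithmetic (the resistance values below are hypothetical, chosen only to reproduce the reported 18.83% N−1 figure at 6.5 N):

```python
def sensitivity(r0, r, force):
    """Force sensitivity of a piezoresistive unit:
    S = (delta_R / R0) / F * 100, in percent per newton."""
    if force <= 0:
        raise ValueError("force must be positive")
    return (r - r0) / r0 / force * 100.0

# Hypothetical calibration point: baseline 100 ohm, 222.4 ohm under 6.5 N
s = sensitivity(100.0, 222.4, 6.5)   # ~18.83 % / N
```

The same function applied across the full 0–6.5 N calibration sweep would yield the sensitivity curve from which the maximum is reported.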
Conventional industrial robots cannot guarantee inherent safety when working alongside humans, owing to their rigid components and lack of force sensation. To enhance the safety of human–robot collaboration (HRC), a new collaborative robot skin (CoboSkin) featuring softness, variable stiffness, and high sensitivity is designed and studied in this article. The CoboSkin consists of an array of inflatable units and sensing units. The sensing units, made of soft porous materials, measure distributed contact force in real time. Through the foaming process, the sensing units are interconnected with inflatable units fabricated from an elastomer whose deformation is limited by the textile wrapped around it. Stiffness is varied by adjusting the internal air pressure supplied to the inflatable units, thereby changing the sensitivity of the sensing units and reducing the peak impact force. The soft porous materials endow the CoboSkin with increased sensitivity, minimal hysteresis, excellent cycling stability, and millisecond-range response times, enabling sensing feedback for controlling a robot arm at different stiffness levels. Finally, the validation of the CoboSkin
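The claim that lowering stiffness reduces peak impact force can be illustrated with an idealized lossless spring-contact model (this is a generic physics sketch, not the paper's model): equating kinetic energy 0.5·m·v² with stored elastic energy 0.5·F²/k gives F_peak = v·√(k·m), so a softer skin (smaller k) yields a smaller peak force.

```python
import math

def peak_impact_force(mass, velocity, stiffness):
    """Peak contact force for an idealized lossless linear-spring impact:
    0.5*m*v**2 = 0.5*F**2/k  =>  F_peak = v * sqrt(k * m).
    Purely illustrative; real skins add damping and nonlinearity."""
    return velocity * math.sqrt(stiffness * mass)

# Halving stiffness lowers the peak force by a factor of sqrt(2)
soft = peak_impact_force(1.0, 1.0, 2.0)
stiff = peak_impact_force(1.0, 1.0, 4.0)
```

This monotone dependence on k is the mechanism by which deflating the units softens collisions.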
As an emerging research topic for proximity service (ProSe), automatic emotion recognition enables machines to understand human emotional changes, which not only facilitates natural, effective, seamless, and advanced human–robot interaction and human–computer interfaces but also promotes emotional health. Facial expression recognition (FER) is a vital task in emotion recognition; however, a significant gap remains between human and machine performance on FER. In this paper, we present a conditional generative adversarial network-based approach that alleviates intra-class variations by individually controlling facial expressions and learning generative and discriminative representations simultaneously. The proposed framework consists of a generator G and three discriminators (Di, Da, and Dexp). The generator G transforms any query face image into another prototypic facial expression image while preserving other factors. Conditioned on action units, the generator G attends to information relevant to facial expression. Three loss functions (Li, La, and Lexp), corresponding to the three discriminators (Di, Da, and Dexp), were designed to learn generative and discriminative representations. Moreover, after rendering the generated expression back to the original facial expression, a cycle-consistency loss is applied to preserve identity and produce more constrained visual representations. By jointly optimizing the synthesis and classification loss functions, the learned representation is explicitly disentangled from other variations such as identity, head pose, and illumination. Qualitative and quantitative experimental results demonstrate that the proposed FER system is effective for expression recognition.
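The cycle-consistency term mentioned above is typically an L1 distance between the input face and its reconstruction after mapping to a target expression and back, added to the adversarial and expression-classification losses with weighting coefficients. A hedged sketch of how such terms combine (the weights and function names here are illustrative, not the paper's values):

```python
import numpy as np

def cycle_consistency_loss(x, x_rec):
    """Mean absolute (L1) error between an input image and its
    reconstruction after the forward and backward expression mappings."""
    return np.mean(np.abs(x - x_rec))

def generator_objective(l_adv, l_exp, l_cyc, w_exp=1.0, w_cyc=10.0):
    """Weighted sum of adversarial, expression-classification, and
    cycle-consistency terms; weights are illustrative defaults."""
    return l_adv + w_exp * l_exp + w_cyc * l_cyc
```

In training, minimizing this combined objective is what encourages the representation to keep identity fixed while expression is controlled.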