Driven by the shortage of qualified nurses and the rising average age of the population, ambient assisted living based on intelligent service robots and smart home systems has become an attractive way to free up caregivers' time and energy while giving users a sense of independence. However, each user's unique environment and differing ability to express intent through different interaction modalities make intention recognition, and user–system interaction in general, difficult, limiting the adoption of these new nursing technologies. This paper presents a multimodal domestic service robot interaction system and proposes a multimodal fusion algorithm for intention recognition to address these problems, taking both short-term and long-term changes into account. The implemented interaction modalities are touch, voice, myoelectric gesture, visual gesture, and haptics; users can freely choose one or more modalities through which to express themselves. Virtual games and virtual activities of independent living were designed for pre-training and for evaluating users' ability to use the different interaction modalities in their own environments. A domestic service robot interaction system was built, and a set of experiments was carried out to test its stability and intention recognition performance in different scenarios. The results show that the system is stable, effective, and adaptable to different scenarios, achieving an intention recognition rate of 93.62%. Older adults were able to master the system quickly and use it to support their independent living.
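The abstract does not specify the fusion algorithm itself; a common baseline for combining modalities that users select freely is decision-level fusion with per-modality reliability weights (for example, weights estimated during the pre-training games). The sketch below is a minimal illustration under that assumption, with hypothetical modality names and weights, not the paper's actual method:

```python
from collections import defaultdict

def fuse_intentions(modality_outputs, modality_weights):
    """Decision-level fusion: combine per-modality intention
    probability distributions using per-modality reliability weights.

    modality_outputs: dict modality -> {intention: probability}
    modality_weights: dict modality -> reliability weight in [0, 1]
    Returns the (intention, fused_score) with the highest weighted score.
    """
    scores = defaultdict(float)
    # Normalize over the modalities the user actually used this turn.
    total = sum(modality_weights[m] for m in modality_outputs)
    for modality, probs in modality_outputs.items():
        w = modality_weights[modality] / total
        for intention, p in probs.items():
            scores[intention] += w * p
    return max(scores.items(), key=lambda kv: kv[1])

# Hypothetical example: voice recognition is noisy in this user's home,
# so its reliability weight is lower than touch.
outputs = {
    "voice": {"turn_on_light": 0.6, "open_door": 0.4},
    "touch": {"turn_on_light": 0.9, "open_door": 0.1},
}
weights = {"voice": 0.4, "touch": 0.8}
intention, score = fuse_intentions(outputs, weights)
# fused: turn_on_light = 0.6/3 + 0.9*2/3 = 0.8
```

Updating the weights over time would be one way to reflect the short-term and long-term changes the abstract mentions.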
Facial expression is an important carrier of psychological emotion, and a lightweight expression recognition system with a small footprint and high portability is the basis of emotional interaction technology for intelligent robots. With the rapid development of deep learning, fine-grained expression classification based on convolutional neural networks is strongly data-driven, so the quality of the training data has a major impact on model performance. To address the strong dependence of lightweight expression recognition models on their training dataset and their weak generalization in real environments, an application method of confidence learning is proposed. The method modifies the self-confidence estimate and introduces two hyper-parameters to adjust for label noise in facial expression datasets. A lightweight model structure combining depthwise separable convolutions with an attention mechanism is adopted for noise detection and expression recognition. The effectiveness of dynamic noise detection is verified on datasets with different noise ratios. Optimization and model training are carried out on four public expression datasets, improving accuracy by 4.41% on average across multiple test sets. A lightweight expression recognition system is developed whose accuracy is significantly improved, verifying the effectiveness of the proposed method.
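In confidence learning, a sample's self-confidence is the model's predicted probability for its given label, and samples whose self-confidence falls below a class-conditional threshold are flagged as likely mislabeled. The sketch below illustrates that idea only; the `alpha` and `beta` parameters are hypothetical stand-ins loosely mirroring the paper's two noise-adjustment hyper-parameters, not its actual formulation:

```python
import numpy as np

def detect_label_noise(pred_probs, given_labels, alpha=1.0, beta=0.0):
    """Flag likely mislabeled samples via per-class self-confidence.

    pred_probs: (n_samples, n_classes) out-of-sample predicted probabilities
    given_labels: (n_samples,) integer labels as annotated
    alpha, beta: hypothetical hyper-parameters scaling/shifting the
        per-class self-confidence threshold
    Returns a boolean mask, True where the given label looks noisy.
    """
    n, k = pred_probs.shape
    # Self-confidence: probability the model assigns to the given label.
    self_conf = pred_probs[np.arange(n), given_labels]
    noisy = np.zeros(n, dtype=bool)
    for c in range(k):
        idx = given_labels == c
        if not idx.any():
            continue
        # Class-conditional threshold: mean self-confidence within class c,
        # adjusted by the two tuning parameters.
        t_c = alpha * self_conf[idx].mean() + beta
        noisy[idx] = self_conf[idx] < t_c
    return noisy
```

With out-of-sample probabilities from cross-validation, flagged samples can be pruned or re-weighted before the final lightweight model is trained.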