Face recognition is a technology with great potential in robotics, due to its prominent role in human-robot interaction (HRI). This interaction is a keystone for the successful deployment of robots in areas requiring customized assistance, such as education and healthcare, or assisting humans in everyday tasks. These unconstrained environments present additional difficulties for face recognition, extreme head-pose variability being one of the most challenging. In this paper, we address this issue and make a fourfold contribution. First, we designed a tool for gathering a uniform distribution of head-pose images from a person, which we then used to collect a new dataset of faces; both are presented in this work. Next, the dataset served as a testbed for analyzing the detrimental effects of this problem on a number of state-of-the-art methods, showing their decreased effectiveness outside a limited range of poses. Finally, we propose an optimization method to mitigate these negative effects by including key pose samples in the recognition system's set of known faces. The conducted experiments demonstrate that this optimized set of poses significantly improves the performance of a state-of-the-art system based on Multitask Cascaded Convolutional Neural Networks (MTCNNs) and ArcFace.
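The idea of enrolling key pose samples can be illustrated with a minimal sketch (a hypothetical toy example, not the authors' implementation): each identity is represented by several embeddings, one per key pose, and a probe face is matched against all of them by cosine similarity. The 3-D "embeddings", names, and threshold below are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.5):
    """Return the best-matching identity, or None if no similarity
    exceeds the threshold. `gallery` maps identity -> list of
    embeddings, one per enrolled key pose."""
    best_id, best_sim = None, threshold
    for identity, embeddings in gallery.items():
        for emb in embeddings:
            sim = cosine_similarity(probe, emb)
            if sim > best_sim:
                best_id, best_sim = identity, sim
    return best_id

# Toy gallery: "alice" is enrolled with a frontal AND a profile key pose.
gallery = {
    "alice": [[1.0, 0.0, 0.0], [0.7, 0.7, 0.0]],
    "bob":   [[0.0, 1.0, 0.0]],
}
probe = [0.6, 0.8, 0.0]  # a near-profile view of alice
print(identify(probe, gallery))  # -> "alice"
```

Note that with only the frontal sample enrolled, this probe would be closer to "bob" (similarity 0.8 vs. 0.6); the extra profile key pose is what makes the correct match.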
Jupyter notebooks have recently emerged as a valuable pedagogical resource in academia, being adopted by educational institutions worldwide. This is mainly due to their ability to combine the expressiveness of traditional textbook explanations with the interaction capabilities of software applications, which provides numerous benefits for both students and lecturers across different fields. One area that could benefit from their adoption is mobile robotics, whose recent popularity has resulted in an increasing demand for trained practitioners with a solid theoretical and practical background. There is therefore a need for high-quality learning materials adapted to modern tools and methodologies. With that in mind, this work explores how introducing Jupyter notebooks in undergraduate mobile robotics courses contributes to improving both the teaching and learning experiences. To this end, we first present a series of (publicly available) educational notebooks encompassing a variety of topics relevant to robotics, with a particular emphasis on the study of mobile robots and commonly used sensors. These documents have been built from the ground up to take advantage of the Jupyter Notebook framework, bridging the typical gap between theoretical content and interactive code. We also present a case study describing the notebooks' usage in undergraduate courses at the University of Málaga, including a discussion of the promising results and findings obtained.
Assistive robots collaborating with people demand strong human-robot interaction capabilities. In particular, recognizing the person the robot has to interact with is paramount to providing a personalized service and reaching a satisfactory end-user experience. To this end, face recognition, a non-intrusive, automatic identification mechanism based on biometric identifiers from a user's face, has gained relevance in recent years, as advances in machine learning and the creation of huge public datasets have considerably improved state-of-the-art performance. In this work we study different open-source implementations of the typical components of state-of-the-art face recognition pipelines, including face detection, feature extraction, and classification, and propose a recognition system integrating the methods most suitable for use in assistive robots. Concretely, for face detection we consider MTCNN, OpenCV's DNN, and OpenPose, while for feature extraction we analyze InsightFace and Facenet. We have made public an implementation of the proposed recognition framework, ready to be used by any robot running the Robot Operating System (ROS). The methods in the spotlight are compared in terms of accuracy and performance on common benchmark datasets, namely FDDB and LFW, to inform the choice of the final system implementation, which has been tested on a real robotic platform.
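The three-stage pipeline described above (detection, feature extraction, classification) can be sketched as follows. This is a hypothetical illustration only: `detect_faces` and `extract_embedding` are invented stand-ins for a real detector (e.g., MTCNN) and feature extractor (e.g., InsightFace or Facenet), and the classifier is a simple nearest-neighbour match with an "unknown" rejection threshold.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple   # (x, y, w, h) face bounding box
    crop: list   # pixel data of the cropped face region

def detect_faces(image):
    # Placeholder detector: treats the whole image as a single face.
    return [Detection(box=(0, 0, len(image), 1), crop=image)]

def extract_embedding(crop):
    # Placeholder extractor: L2-normalizes the raw values as a fake
    # unit-length embedding vector.
    norm = sum(v * v for v in crop) ** 0.5 or 1.0
    return [v / norm for v in crop]

def classify(embedding, known_faces, threshold=0.6):
    # Nearest-neighbour classification: embeddings are unit-length, so
    # the dot product equals the cosine similarity. Matches below the
    # threshold are rejected as "unknown".
    best_name, best_sim = "unknown", threshold
    for name, ref in known_faces.items():
        sim = sum(a * b for a, b in zip(embedding, ref))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

def recognize(image, known_faces):
    # Full pipeline: detection -> feature extraction -> classification.
    return [classify(extract_embedding(d.crop), known_faces)
            for d in detect_faces(image)]

known = {"carol": extract_embedding([3.0, 4.0, 0.0])}
print(recognize([3.1, 3.9, 0.1], known))  # -> ["carol"]
```

In a deployed system, each stage would be swapped for the corresponding trained model, while the overall structure (and the unknown-rejection threshold) stays the same.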