The Uncanny valley hypothesis, which states that almost-human characteristics in a robot or a device can cause uneasiness in human observers, is an important research theme in the field of Human-Robot Interaction (HRI). Yet the phenomenon is still not well understood. Many studies have investigated the external design of humanoid robot faces and bodies, but only a few have focused on how robot movements influence our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between this feeling of uneasiness and whether we would accept robots holding jobs in an office, a hospital, or elsewhere. To better understand the Uncanny valley, we explore several factors that might influence our perception of robots, whether related to the subjects, such as culture or attitude toward robots, or related to the robot, such as the emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and to state where they would rather see the robot performing a task. Our results suggest that, among the factors we tested, attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and for the acceptability of robotic workers in our society.
In this paper we tackle the problem of visually predicting surface friction in environments with diverse surfaces, and integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion, since the real world abounds with surfaces of varying friction, from wood to ceramic tiles, grass, or ice, which can cause failures or high energy costs if not accounted for. We propose to estimate friction and its uncertainty from visual estimation of material classes using convolutional neural networks, together with probability distribution functions of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planning method for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, considering not only friction and its uncertainty but also collision, stability, and trajectory cost. We show promising friction prediction results on real images of outdoor scenes, and planning experiments on a real robot facing surfaces with different friction.
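To make the friction pipeline concrete: given CNN class probabilities over materials and a per-material friction distribution, the planner needs a single conservative friction bound that holds with high probability. Below is a minimal Python sketch of that step; the material set, distribution parameters, and confidence level are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical per-material friction priors (mean, std of the friction
# coefficient); real values would be fit from measurements.
FRICTION_PRIORS = {
    "wood":    (0.60, 0.10),
    "ceramic": (0.40, 0.08),
    "grass":   (0.55, 0.15),
    "ice":     (0.10, 0.05),
}

def friction_samples(class_probs, n=10000, seed=0):
    """Sample the friction mixture induced by CNN class probabilities."""
    rng = np.random.default_rng(seed)
    materials = list(FRICTION_PRIORS)
    p = np.array([class_probs[m] for m in materials])
    idx = rng.choice(len(materials), size=n, p=p / p.sum())
    means = np.array([FRICTION_PRIORS[m][0] for m in materials])[idx]
    stds = np.array([FRICTION_PRIORS[m][1] for m in materials])[idx]
    return np.clip(rng.normal(means, stds), 0.0, None)

def chance_constrained_friction(class_probs, delta=0.05):
    """Bound mu_safe such that P(friction >= mu_safe) >= 1 - delta."""
    return float(np.quantile(friction_samples(class_probs), delta))

# Example: the CNN is fairly confident the surface is ceramic tile.
mu_safe = chance_constrained_friction(
    {"wood": 0.10, "ceramic": 0.80, "grass": 0.05, "ice": 0.05})
print(mu_safe)
```

A footstep or full-body planner can then require the contact forces at each planned step to lie within the friction cone defined by mu_safe, which is one way to realize a chance constraint on slipping.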
Different legged robot locomotion controllers have different advantages and disadvantages, from speed of motion to energy consumption, computational cost, and safety. In this paper we propose a method for planning locomotion with multiple controllers and sub-planners, explicitly considering the multi-objective nature of the problem. We propose a parameter-free method that plans in the space of body motion and controller choice, using utopian and lexicographic cost aggregation functions. We empirically analyze the behavior of the method in terms of planning success rates, Pareto-optimality, and anytime behavior in cost space. We show that our method is faster than pure footstep planning methods in both computation time (2x) and mission time (1.4x), is safer than pure dynamic-walking methods, and reaches desirable Pareto-optimal solutions up to 8x faster than fairly tuned traditional weighted-sum methods. Our conclusions are drawn from a combination of planning, physics simulation, and real robot experiments.
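Two of the cost aggregation functions named above are easy to write down: a lexicographic comparison orders objectives by priority, and a utopia-point (utopian) aggregation scores a candidate by its distance to the vector of per-objective minima. The sketch below uses an arbitrary priority order, Euclidean distance, and made-up cost values purely for illustration.

```python
import numpy as np

def lexicographic_key(costs, priority):
    """Sort key: compare objectives one at a time, in priority order."""
    return tuple(costs[i] for i in priority)

def utopian_score(costs, utopia):
    """Distance of a cost vector to the utopia point (per-objective minima)."""
    return float(np.linalg.norm(np.asarray(costs) - np.asarray(utopia)))

# Candidate plans scored on (mission time, energy, risk); values illustrative.
candidates = {
    "footstep-only": (120.0, 40.0, 0.05),
    "dynamic-walk":  (80.0, 55.0, 0.30),
    "mixed":         (90.0, 45.0, 0.10),
}
utopia = np.min(list(candidates.values()), axis=0)

# Prioritize risk, then time, then energy (an assumed ordering).
best_lex = min(candidates, key=lambda k: lexicographic_key(candidates[k], (2, 0, 1)))
best_uto = min(candidates, key=lambda k: utopian_score(candidates[k], utopia))
print(best_lex, best_uto)
```

Both aggregations are parameter-free in the sense that they require no hand-tuned objective weights, which is the property the method exploits over weighted-sum baselines.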
This article tackles the problem of designing 3D perception systems for robots with high visual requirements, such as versatile legged robots capable of different locomotion styles. In order to guarantee high visual coverage in varied conditions (e.g., biped walking, quadruped walking, ladder climbing), such robots need to be equipped with a large number of sensors, while at the same time managing the computational requirements that arise from such a system. We tackle this problem at both levels: sensor placement (how many sensors to install on the robot and where) and run-time acquisition scheduling under computational constraints (not all sensors can be acquired and processed at the same time). Our first contribution is a methodology for designing perception systems with a large number of depth sensors scattered throughout the links of a robot, using multi-objective optimization for optimal trade-offs between visual coverage and the number of sensors. We estimate the Pareto front of these objectives through evolutionary optimization, and implement a solution on a real legged robot. Our formulation includes constraints on task-specific coverage and design symmetry, which lead to reliable coverage and fast convergence of the optimization problem. Our second contribution is an algorithm for lowering the computational burden of mapping with such a high number of sensors, formulated as an information-maximization problem with several sampling techniques for speed. Our final system uses 20 depth sensors scattered throughout the robot, which can either be acquired simultaneously or optimally scheduled for low CPU usage while maximizing mapping quality. We show that, when compared with state-of-the-art robotic platforms, our system has higher coverage across a higher number of tasks, thus being suitable for challenging environments and versatile robots. We also demonstrate that our scheduling algorithm achieves higher mapping performance than naïve and state-of-the-art methods by leveraging measures of information gain and self-occlusion at low computational cost.
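The run-time scheduling part can be framed as repeatedly choosing a subset of sensors that maximizes expected information gain under a CPU budget, a selection problem for which a greedy gain-per-cost loop is a standard baseline. The sketch below is such a baseline under assumed gain, cost, and pairwise-overlap numbers; it is not the article's algorithm, which additionally uses self-occlusion measures and sampling techniques for speed.

```python
# Greedy sensor scheduling under a per-cycle CPU budget (illustrative).
def schedule(sensors, budget, overlap):
    """sensors: {name: (expected_gain, cpu_cost)}; overlap: pairwise gain discount."""
    chosen, used = [], 0.0
    remaining = dict(sensors)

    def net_gain(name):
        # Discount a sensor's gain by its overlap with already-chosen sensors.
        gain, _ = remaining[name]
        penalty = sum(overlap.get(frozenset((name, c)), 0.0) for c in chosen)
        return max(gain - penalty, 0.0)

    while remaining:
        # Keep only sensors that still fit in the CPU budget.
        feasible = [n for n in remaining if used + remaining[n][1] <= budget]
        if not feasible:
            break
        best = max(feasible, key=lambda n: net_gain(n) / remaining[n][1])
        if net_gain(best) <= 0.0:
            break
        chosen.append(best)
        used += remaining.pop(best)[1]
    return chosen

sensors = {"head": (5.0, 2.0), "torso": (3.0, 1.0), "left_leg": (2.5, 1.5)}
overlap = {frozenset(("head", "torso")): 1.0}
print(schedule(sensors, budget=3.0, overlap=overlap))  # ['torso', 'head']
```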
Pedestrian detection algorithms are important components of mobile robots, such as autonomous vehicles, and relate directly to human safety. Performance disparities in these algorithms could translate into disparate impact in the form of biased accident outcomes. To evaluate the need for such concerns, we characterize the age and gender bias in the performance of state-of-the-art pedestrian detection algorithms. Our analysis is based on the INRIA Person Dataset, extended with child, adult, male, and female labels. We show that all 24 top-performing methods of the Caltech Pedestrian Detection Benchmark have higher miss rates on children. The difference is statistically significant, and we analyze how it varies with the classifier, features, and training data used by each method. The algorithms were also gender-biased on average, but the performance differences were not significant. We discuss the source of the bias, the ethical implications, possible technical solutions, and barriers.
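The core measurement behind this analysis is simple to reproduce: compute a detector's miss rate separately per demographic group and compare the gaps. A minimal sketch, assuming per-pedestrian (group, detected) outcomes; the field names and toy numbers here are hypothetical, not drawn from the benchmark.

```python
from collections import defaultdict

def miss_rates(outcomes):
    """outcomes: iterable of (group_label, detected) pairs, one per pedestrian."""
    missed, total = defaultdict(int), defaultdict(int)
    for group, detected in outcomes:
        total[group] += 1
        if not detected:
            missed[group] += 1
    return {g: missed[g] / total[g] for g in total}

# Toy outcomes; in the paper these come from running each of the 24 detectors
# on the relabeled INRIA Person Dataset.
data = ([("adult", True)] * 90 + [("adult", False)] * 10
        + [("child", True)] * 70 + [("child", False)] * 30)
rates = miss_rates(data)
print(rates, "gap:", rates["child"] - rates["adult"])
```

A significance test over such per-group counts (e.g., a two-proportion test) is what would support a claim that the child/adult gap is significant while the gender gap is not.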