We present a comprehensive description of the socially compliant mobile robotic platform developed in the EU-funded project SPENCER. The purpose of this robot is to assist, inform, and guide passengers in large and busy airports. One particular aim is to bring travellers with connecting flights conveniently and efficiently from their arrival gate to passport control. The uniqueness of the project stems, on one side, from the strong demand for service robots in this application, with large potential impact for the aviation industry, and, on the other side, from the scientific advances in social robotics achieved in SPENCER. The main contributions of SPENCER are novel methods to perceive, learn, and model human social behavior, and to use this knowledge to plan appropriate actions in real time for mobile platforms. In this paper, we describe how the project advances the fields of detection and tracking of individuals and groups, recognition of human social relations and activities, normative human behavior learning, socially aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess the socio-psychological effects of normative robot behaviors.
Abstract. In this paper we present a robot supervision system designed to execute collaborative tasks with humans in a flexible and robust way. Our system takes into account the different preferences of the human partners, providing three operation modalities for interacting with them. The robot is able to assume a leader role, planning and monitoring the execution of the task for itself and the human; to act as an assistant to the human partner, following their orders; and also to adapt its plans to the human's actions. We present several experiments showing that the robot can execute collaborative tasks with humans.
Human safety and effective human-robot communication are central concerns in HRI applications. To achieve these goals, a system should be very robust, leaving little room for misunderstanding the user's commands. Moreover, the system should permit natural interaction, reducing the time and effort needed to accomplish tasks. The main purpose of this work is to develop a general framework for flexible and multimodal human-robot communication. The proposed architecture should be easy to modify and extend, adding or modifying input channels and changing the multimodal fusion strategies. In this paper, we introduce our general approach and provide a case study with two modalities (gesture and speech).
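The abstract above does not specify the fusion strategy; the following is a minimal late-fusion sketch, assuming each modality independently produces a command hypothesis with a confidence score (the `Interpretation` type, the weights, and the agreement bonus are all illustrative assumptions, not the paper's actual architecture):

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    command: str
    confidence: float  # in [0, 1]

def fuse(speech: Interpretation, gesture: Interpretation,
         w_speech: float = 0.6, w_gesture: float = 0.4) -> Interpretation:
    """Late fusion: combine per-modality hypotheses by weighted confidence."""
    if speech.command == gesture.command:
        # Agreement between modalities reinforces the hypothesis.
        conf = min(1.0, w_speech * speech.confidence
                        + w_gesture * gesture.confidence + 0.2)
        return Interpretation(speech.command, conf)
    # Disagreement: keep the higher weighted score, at reduced confidence.
    s = w_speech * speech.confidence
    g = w_gesture * gesture.confidence
    best = speech if s >= g else gesture
    return Interpretation(best.command, max(s, g) * 0.8)
```

A pluggable design like this makes it easy to swap fusion strategies or add input channels, which is the flexibility the framework aims for.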
Astronomy is undergoing a methodological revolution triggered by an unprecedented wealth of complex and accurate data. The new panchromatic, synoptic sky surveys require advanced tools for discovering patterns and trends hidden in data that are both complex and high-dimensional. We present DAMEWARE (DAta Mining & Exploration Web Application REsource): a general-purpose, web-based, distributed data mining environment developed for the exploration of large data sets, and finely tuned for astronomical applications. By means of graphical user interfaces, it allows the user to perform classification, regression, or clustering tasks with machine learning methods. Salient features of DAMEWARE include its ability to work on large datasets with minimal human intervention and to deal with a wide variety of real problems, such as the classification of globular clusters in the galaxy NGC1399; the evaluation of photometric redshifts; and, finally, the identification of candidate Active Galactic Nuclei in multiband photometric surveys. In all these applications, DAMEWARE allowed us to achieve better results than those attained with more traditional methods. With the aim of providing potential users with all needed information, in this paper we briefly describe the technological background of DAMEWARE, give a short introduction to some relevant aspects of data mining, followed by a summary of some science cases, and, finally, provide a detailed description of a template use case.
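To make the classification task concrete, here is a toy sketch of supervised classification on photometric features, of the kind DAMEWARE automates at scale. The k-nearest-neighbours rule, the feature values, and the labels below are illustrative assumptions, not DAMEWARE's actual methods or data:

```python
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; features could be
    photometric colours or magnitudes.
    """
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

In practice such methods are applied to millions of sources, which is why a distributed, largely unattended environment matters.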
Abstract. In this paper we present a robotic system able to guide a person to a destination in a socially acceptable way. Our robot is able to estimate whether the user is still actively following and react accordingly, by stopping and waiting for the user or by changing its speed to adapt to their needs. We also investigate how the robot can influence a person's behavior by changing its speed, to account for the urgency of the current task or for environmental stimuli, and by interacting with them when they stop following it. We base the planning model on Hierarchical Mixed Observability Markov Decision Processes to decompose the task into smaller subtasks, simplifying the computation of a solution. Experimental results suggest the efficacy of our model.
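The key idea of a Mixed Observability MDP is that the state factors into a fully observable part (e.g. robot pose, measured distance to the user) and a hidden part (e.g. whether the user is still following), so the belief is maintained only over the small hidden factor. The following is a minimal sketch of that belief update and a threshold policy; all probabilities, variable names, and the two actions are illustrative assumptions, not the paper's actual model:

```python
# Hidden factor: is the user actively following? (never directly observed)
# Observable factor: whether the robot-user gap is growing.
# As in a MOMDP, only the hidden factor carries a belief.

def update_belief(b_following, gap_growing,
                  p_keep_following=0.9,
                  p_gap_grows_if_following=0.2,
                  p_gap_grows_if_not=0.8):
    """Bayes update of P(following) after one transition and one observation."""
    prior = b_following * p_keep_following  # user may stop following
    if gap_growing:
        l_f, l_n = p_gap_grows_if_following, p_gap_grows_if_not
    else:
        l_f, l_n = 1 - p_gap_grows_if_following, 1 - p_gap_grows_if_not
    num = prior * l_f
    den = num + (1 - prior) * l_n
    return num / den if den > 0 else 0.0

def choose_action(b_following, threshold=0.5):
    """Threshold policy: keep guiding while we believe the user follows."""
    return "continue" if b_following >= threshold else "wait_for_user"
```

Because the belief lives only over the small hidden factor, planning is far cheaper than over the full joint state, which is what motivates the hierarchical MOMDP decomposition.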
In human-robot interaction scenarios, communication and collaboration during task execution are crucial issues. Since human behavior is unpredictable and ambiguous, an interactive robotic system must continuously interpret intentions and goals, adapting its executive and communicative processes to the user's behavior. In this work, we propose an integrated system that exploits attentional mechanisms to flexibly adapt planning and executive processes to multimodal human-robot interaction.
Many robotic projects use simulation as a faster and easier way to develop, evaluate, and validate software components compared with real-world, on-board settings. In the field of human-robot interaction, some recent works have attempted to integrate humans into the simulation loop. In this paper we investigate how such robotic simulation software can be used to provide a dynamic and interactive environment both to collect a multimodal situated dialogue corpus and to perform an efficient reinforcement learning-based dialogue management optimisation procedure. Our proposition is illustrated by a preliminary experiment involving real users in a Pick-Place-Carry task, for which encouraging results are obtained.
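Reinforcement learning-based dialogue management of this kind is often realised with tabular Q-learning over dialogue states and system actions. The sketch below shows the core update and an epsilon-greedy exploration policy; the action names and hyperparameters are illustrative assumptions, not the paper's actual dialogue manager:

```python
import random

# Hypothetical system actions for a Pick-Place-Carry dialogue.
ACTIONS = ["ask_clarification", "execute_pick", "execute_place", "confirm"]

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Tabular Q-learning: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next
                                              - Q.get((s, a), 0.0))

def epsilon_greedy(Q, s, epsilon=0.1, rng=random):
    """Explore with probability epsilon, otherwise exploit the best known action."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))
```

Running this loop inside a simulator with humans in the loop is precisely what makes the corpus collection and the policy optimisation share one environment.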