When a robot has to execute a shared plan with a human, a number of unexpected situations and contingencies can arise, essentially due to human initiative. For instance, a temporary absence or inattention of the human can leave them with partial, and potentially insufficient, knowledge about the current situation. To ensure a successful and fluent execution of the shared plan, the robot might need to detect such situations and be able to inform its human partner about what they missed without being annoying or intrusive. To do so, we have developed a framework which allows a robot to estimate the other agents' mental states, not only about the environment but also about the state of goals, plans and actions, and to take them into account when executing human-robot shared plans.
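To make the idea concrete, here is a minimal Python sketch of how divergent per-agent belief models could be maintained; it is not the framework's actual implementation, and all class and method names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MentalState:
    """What one agent is believed to know: environment facts plus the
    perceived state of goals, plans and actions in the shared plan."""
    facts: dict = field(default_factory=dict)          # e.g. {"cup": "on_table"}
    action_states: dict = field(default_factory=dict)  # e.g. {"pick_cup": "done"}

class MentalStateTracker:
    """Keeps one belief model per agent (the robot and each human)."""
    def __init__(self, agents):
        self.models = {a: MentalState() for a in agents}

    def observe(self, agent, key, value, kind="facts"):
        # Only agents who actually perceived the event update their model;
        # an absent or inattentive human does not.
        getattr(self.models[agent], kind)[key] = value

    def missed_by(self, reference, other):
        """Facts the reference agent (the robot) holds but the other agent
        missed: candidates for non-intrusive communication."""
        ref = self.models[reference].facts
        oth = self.models[other].facts
        return {k: v for k, v in ref.items() if oth.get(k) != v}

tracker = MentalStateTracker(["robot", "human"])
tracker.observe("robot", "cup", "on_table")   # the human was looking away
print(tracker.missed_by("robot", "human"))    # {'cup': 'on_table'}
```

Under this scheme the robot would communicate only the entries returned by `missed_by`, rather than narrating everything it knows, which matches the goal of informing the partner without being intrusive.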
Despite major progress in Robotics and AI, robots are still basically "zombies", repeatedly performing actions and tasks without understanding what they are doing. Deep-learning AI programs classify tremendous amounts of data without grasping the meaning of their inputs or outputs. We still lack a genuine theory of the underlying principles and methods that would enable robots to understand their environment, to be cognizant of what they do, to take appropriate and timely initiatives, to learn from their own experience, and to show that they know that they have learned and how. The rationale of this paper is that an agent's understanding of its environment (including the agent itself and its effects on the environment) requires self-awareness, which itself emerges as a result of this understanding and of the distinction the agent is able to make between its own mind-body and its environment. The paper develops along five issues: agent perception and interaction with the environment; learning actions; agent interaction with other agents, specifically humans; decision-making; and the cognitive architecture integrating these capacities.
It has been shown that, when a human and a robot have to perform a joint activity, they need to structure their activity around a so-called "shared plan". In this work, we present a scheme and an implemented system which allow the robot to elaborate and execute shared plans that are flexible enough to be achieved in collaboration with a human in a smooth and non-intrusive manner. We identify and analyze the decisions that should preferably be taken at planning time and those that are better postponed until execution. We also show under which conditions the robot can determine whether it has to take a decision by itself or leave it to its human partner. As a consequence, the robot avoids useless communication by smoothly adapting its behavior to the human.
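As an illustration of postponing a decision until execution time, the following is a hedged sketch, not the paper's algorithm: the cost model, threshold, and function name are assumptions introduced for this example.

```python
def choose_decision_maker(options, human_available, ask_cost=1.0):
    """options: estimated plan cost of each alternative, e.g.
    {"use_red_cube": 2.0, "use_blue_cube": 2.1}.
    Returns who should resolve the choice at execution time."""
    costs = sorted(options.values())
    # If the alternatives are nearly equivalent, asking is pure overhead:
    # the robot picks one silently and avoids useless communication.
    if len(costs) < 2 or costs[1] - costs[0] <= ask_cost:
        return "robot"
    # The choice matters; defer to the human if they are engaged,
    # otherwise take the best option rather than interrupting them.
    return "human" if human_available else "robot"

print(choose_decision_maker({"red": 2.0, "blue": 2.1}, human_available=True))  # robot
print(choose_decision_maker({"red": 2.0, "blue": 9.0}, human_available=True))  # human
```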
The domain of human-robot Joint Action is a growing field where roboticists, psychologists and philosophers are starting to collaborate in order to devise robot abilities that are as efficient and convenient for the human partner as possible. Besides studying Joint Action and developing algorithms and schemes to control the robot and manage the interaction, one of the current challenges is to come up with a method to properly evaluate the progress made by the community. Several questionnaires dealing with the evaluation of human-robot interaction have already been proposed to the community. However, these studies mainly concern either specific basic behaviors during Joint Action or human-robot interactions without effective physical Joint Action. When it comes to high-level decisions during physical human-robot Joint Action, there are fewer contributions to the topic, and the methods to evaluate them are even rarer. The aim of this paper is to propose a reusable questionnaire, PeRDITA (Pertinence of Robot Decisions In joinT Action), allowing us to evaluate the pertinence of the high-level decision abilities of a robot during physical Joint Action with a human.
This work proposes a full pipeline for a robot to explore, model and segment an apartment starting from a 2-D map. Viewpoints are computed offline and then visited by the robot to create a 3-D model of the environment. This model is segmented in order to find the various rooms and how they are linked (windows, doors, walls), yielding a topological map. Areas of interest, in this case the planar surfaces of furniture, are also segmented. The method is validated on a realistic three-room apartment. Results show that, despite occlusions, autonomous exploration and modeling cover 95% of the apartment. For the segmentation part, one link out of 14 is wrongly classified, while all existing areas of interest are found.
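As a self-contained illustration of one stage of such a pipeline, the sketch below labels rooms in a 2-D occupancy grid by sealing door cells and extracting connected free-space components. This is an assumed simplification for illustration, not the segmentation method used in the paper.

```python
import numpy as np
from scipy import ndimage

def label_rooms(occupancy, door_cells):
    """occupancy: 2-D bool array, True = free space.
    door_cells: (row, col) cells to treat as closed so rooms separate."""
    free = occupancy.copy()
    for r, c in door_cells:
        free[r, c] = False                 # seal doors between rooms
    labels, n_rooms = ndimage.label(free)  # 4-connected components
    return labels, n_rooms

# Toy map: two free areas joined by a single door cell.
grid = np.zeros((5, 7), dtype=bool)
grid[1:4, 1:3] = True    # room A
grid[1:4, 4:6] = True    # room B
grid[2, 3] = True        # door between them
labels, n = label_rooms(grid, door_cells=[(2, 3)])
print(n)  # -> 2 rooms
```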
Integrating action, joint action and learning in robotic cognitive architectures. In contrast with research in Artificial Intelligence, where algorithms designed for specific problems can be tested under perfectly controlled simulation conditions, Robotics has always had to face the need to integrate perception, decision and action execution in order to operate on physical platforms interacting in and with the real world. This particularity forced roboticists to ask, very early on, how to efficiently coordinate different memory systems, different levels of decision-making and different learning processes within cognitive architectures. As a consequence, proposed robotic architectures have often been confronted with, and nourished by, questions about cognitive architectures as addressed in Philosophy, Psychology, Neuroscience, and Cognitive Science more generally. In this article, we review robotics work that has addressed the problem of integrating different levels of action and their associated learning mechanisms (from goal-directed action planning to simple reflexes; from action monitoring to vision-guided reactive behaviors). We show that this kind of integration is necessary and sufficient to allow an artificial agent to exhibit a basic level of monitoring of its own performance, and consequently to show greater behavioral flexibility, autonomy and generalization to different environments. We illustrate this issue through experimental examples on the coordination of multiple learning systems within a single robot, and on the application to joint action in human-robot cooperation. We find, indeed, that partially similar mechanisms for integrating levels of action can operate both for individual action and for joint action. Finally, we highlight some of the successes and failures of these robotic approaches, hoping to feed into and contribute to the similar debates about action arising in the other fields of Cognitive Science.