In human-human interaction, we use information from gestures, facial expressions, and gaze direction to infer what interaction partners think, feel, or intend to do next. Observing changes in gaze direction triggers shifts of attention to gazed-at locations and helps establish shared attention between gazer and observer, a prerequisite for more complex social skills such as mentalizing, action understanding, and joint action. The ability to follow others' gaze develops early in life, and processing gaze signals is a crucial milestone in human development. While human gaze signals are so essential for social interaction that we follow them automatically, it is unclear whether robot gaze cues are followed to a similar degree, and whether they can establish shared attention between human and robot. Furthermore, most studies on social attention in human-robot interaction (HRI) use robot images and videos in controlled laboratory settings, so it remains to be determined whether gaze following also occurs in real-time social interactions with embodied robot platforms. In the current experiment, we use the humanoid robot Meka to examine whether gaze following can be induced in realistic interactions with social robots. The results indicate that Meka's gaze cues were reliably followed and that they established shared attention in HRI. Implications of this finding for social robotics are discussed.
Abstract: This paper presents our progress toward a user-guided manipulation framework for high-degree-of-freedom robots operating in environments with limited communication. The system we propose consists of three components: (1) a user-guided perception interface that assists the user in providing task-level commands to the robot, (2) planning algorithms that autonomously generate robot motion while obeying relevant constraints, and (3) a trajectory execution and monitoring system that detects errors in execution. We have performed quantitative experiments on these three components, and qualitative experiments of the entire pipeline with the PR2 robot rotating a valve for the DARPA Robotics Challenge. We ran 20 tests of the entire framework with an average run time of two minutes. We also report results for tests of each individual component.
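The perceive-plan-execute structure described above can be sketched as a retry loop. This is a minimal illustrative sketch only: the function names (`perceive`, `plan`, `execute`, `monitor`) and the retry logic are assumptions for exposition, not the authors' implementation.

```python
def run_pipeline(perceive, plan, execute, monitor, max_retries=3):
    """Hypothetical three-stage loop: user-guided perception produces a task,
    a planner generates a constraint-satisfying trajectory, and a monitored
    executor reports success or failure; detected errors trigger a retry."""
    for _ in range(max_retries):
        task = perceive()            # user provides a task-level command
        traj = plan(task)            # autonomous motion planning under constraints
        if traj is None:
            continue                 # no feasible motion found; try again
        if execute(traj, monitor):   # monitor flags execution errors
            return True
    return False
```

In a low-bandwidth setting, only the compact task command and the resulting waypoints would cross the communication link, which is one motivation for splitting the pipeline this way.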
Editorial on the Research Topic: Interdisciplinary approaches to the structure and performance of interdependent autonomous human machine teams and systems

Our Research Topic seeks to advance the physics of autonomous human-machine teams with a mathematical, generalizable model [1]. However, limited team science exists (e.g., aircrews; in [2]). Why? Team science has been hindered by relying on observations of how "independent" individuals act and communicate (viz., i.i.d. data; [3,4]), but independent data cannot reproduce the interdependence observed in teams [5]. In agreement, the National Academy of Sciences stated that the "performance of a team is not decomposable to, or an aggregation of, individual performances" ([6], p. 11), evidence of nonfactorable teams and data dependency, requiring random searches to find well-fitted teammates, all characterized by fewer degrees of freedom and reduced entropy from interdependence. We review what else we know about a physics of autonomous human-machine teams.

First, we argue that state dependency [7] rescues traditional social science from its current validation crisis (e.g., "implicit" bias; [8,9]) and replication crisis ([10]; e.g., attempts to reduce bias are "dispiriting" [11]), both caused by assuming that cognition subsumes individual behavior and that only independent (i.i.d.) data are needed for teams. The result: traditional models include large language models such as OpenAI's ChatGPT as well as game theory. Strictly cognitive, ChatGPT and two-person games are assumed to connect easily to reality, but ChatGPT skeptics exist [12,13]; and in Science [14], real-world multi-agent approaches are "currently out of reach for state-of-the-art AI methods." As previewed in Science, "real-world, large-scale multiagent problems . . . are currently unsolvable" [15].

Second, to describe interdependence between cognition and behavior, Bohr, the quantum pioneer [16,17], borrowed "complementarity" from the psychologist William James [18]. Later, but long before the Academy's 2021 report, Schrödinger [19] wrote that entanglement meant that "the best possible knowledge of a whole does not necessarily include the best possible knowledge of all its parts, even though they may be entirely separate." [20] borrowed
This work develops and implements a multi-agent, time-based path-planning method using A*. The purpose of this work is to develop methods by which multi-agent systems can coordinate actions and complete them simultaneously. We utilize A* with constraints defined by a dynamic model of each agent. The model of each agent is updated at each time step, and the resulting control input is determined. This yields a translational path that each agent is physically capable of completing in synchrony with the others. The resulting path is given to the agents as a sequence of waypoints. As the agents complete the task, the path is periodically recalculated using real-world position and velocity information to account for external disturbances. Our methodology is tested in a dynamic simulation environment as well as on real-world lighter-than-air robotic agents.
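The time-based planning idea can be illustrated with a minimal time-expanded A* search, where time is part of the search state and a wait action lets agents synchronize their arrival. This is a sketch under simplifying assumptions: a 4-connected occupancy grid with unit-time moves stands in for the paper's dynamic agent model, and the function names are illustrative.

```python
import heapq

def time_astar(grid, start, goal, max_t=100):
    """A* over (x, y, t) states on an occupancy grid (0 = free, 1 = blocked).
    Each action (wait in place or a 4-connected move) takes one time step."""
    rows, cols = len(grid), len(grid[0])
    moves = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # wait + 4 neighbours

    def h(p):  # Manhattan distance: admissible for unit-cost moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, t, position, path)
    visited = set()
    while frontier:
        _, t, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path  # waypoint sequence, one waypoint per time step
        if (pos, t) in visited or t >= max_t:
            continue
        visited.add((pos, t))
        for dx, dy in moves:
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                heapq.heappush(frontier, (t + 1 + h(nxt), t + 1, nxt, path + [nxt]))
    return None  # unreachable within the time horizon

def synchronize(paths):
    """Pad shorter paths with wait-in-place waypoints so all agents finish together."""
    horizon = max(len(p) for p in paths)
    return [p + [p[-1]] * (horizon - len(p)) for p in paths]
```

Because every waypoint is indexed by time step, padding the shorter paths is enough to make all agents reach their goals simultaneously; in the full method, the same paths would be periodically recomputed from measured positions and velocities to reject disturbances.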