Gone are the days of robots operating solely in isolation, without direct interaction with people. Rather, robots are increasingly being deployed in environments and roles that require complex social interaction with humans. Human-robot teams are being implemented ever more widely as technology develops in tandem with human-robot interaction (HRI) research. Trust, a major component of human interaction, is an important facet of HRI. However, trust repair and trust violations remain understudied in the HRI literature. Trust repair is the activity of rebuilding trust after one party breaks the trust of another; these breaks are referred to as trust violations. Just as with humans, trust violations with robots are inevitable; as a result, a clear understanding of the process of HRI trust repair must be developed to ensure that a human-robot team can continue to perform well after a trust violation. Previous research on human-automation trust and human-human trust can serve as a starting point for exploring trust repair in HRI. Although existing models of human-automation and human-human trust are helpful, they do not account for some of the complexities of building and maintaining trust in the unique relationships between humans and robots. The purpose of this article is to provide a foundation for exploring human-robot trust repair by drawing upon prior work in the human-robot, human-automation, and human-human trust literature, concluding with recommendations for advancing this body of work.
Suboptimal exchange of information can have tragic consequences for patients' safety and survival. To this end, the Joint Commission lists communication error among the most common attributable causes of sentinel events. The risk management literature further supports this finding, implicating communication error as a major factor in roughly 70% of adverse events. Despite numerous strategies to improve patient safety, many rooted in other high-reliability industries (e.g., commercial and naval aviation), communication remains an adaptive challenge that has proven difficult to overcome in the sociotechnical landscape that defines healthcare. Attributing a breakdown in information exchange to a generic "communication error," without further specification, is ineffective and a gross oversimplification of a complex phenomenon. Further dissection of the communication error is needed, whether through root cause analysis, failure modes and effects analysis, or an event reporting system. Generalizing rather than categorizing clouds clear pattern recognition and thereby prevents focused interventions to improve process reliability. We propose that being more precise when describing communication error is a valid mechanism for learning from these errors. We assert that by deconstructing communication in healthcare into its elemental parts, a more effective organizational learning strategy emerges, enabling more focused patient safety improvement efforts. After defining the barriers to effective communication, we map evidence-based recovery strategies and tools specific to each barrier as a tactic to enhance the reliability and validity of information exchange within healthcare.
Evaluation of team communication can provide critical insights into team dynamics, cohesion, trust, and performance on joint tasks. Although many communication-based measures have been tested and validated for human teams, this review article extends that research by identifying key approaches specific to human-autonomy teams (HATs). No review can identify all approaches for all situations, but those described here appear to generalize, supporting teams of varying size and a variety of military operations. This article therefore outlines several key approaches to assessing communication, their associated data requirements, example applications, verification of the methods through HAT use cases, and lessons learned, where applicable. Some approaches are based on the structure of team communication; others draw from dynamical systems theory to consider perspectives across different timescales; others leverage features of team members' voices or facial expressions to detect emotional states that can provide windows into the inner workings of the team; still others analyze the content of communication to produce insights. Taken together, these approaches comprise a varied toolkit for deriving critical information about how team interactions affect, and are affected by, coordination, trust, cohesion, and performance outcomes. Future research directions describe four critical areas for further study of communication in human-autonomy teams.
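As a rough illustration of the first, structural family of measures, the sketch below is ours, not the article's: the function names and the toy transcript are invented for illustration. It builds directed speaker-addressee counts from a turn-labeled transcript and computes a simple communication density, the fraction of possible directed dyads that actually exchanged messages.

```python
from collections import Counter
from itertools import permutations

def communication_structure(turns, members):
    """Compute simple structural communication metrics.

    turns   -- list of (speaker, addressee) pairs from a transcript
    members -- list of team-member identifiers
    Returns per-dyad message counts and overall density (the fraction
    of possible directed dyads that exchanged at least one message).
    """
    counts = Counter(turns)
    possible = list(permutations(members, 2))  # all directed dyads
    active = sum(1 for dyad in possible if counts[dyad] > 0)
    return counts, active / len(possible)

# Toy example: a three-member team (H1 and H2 are human, A1 is the
# autonomous agent); each turn is a (speaker, addressee) pair.
turns = [("H1", "A1"), ("A1", "H1"), ("H1", "H2"), ("A1", "H1")]
counts, density = communication_structure(turns, ["H1", "H2", "A1"])
print(counts, density)  # density 0.5: 3 of 6 directed dyads are active
```

A real analysis would layer richer structural measures, such as centralization, reciprocity, or flow over time, on top of counts like these.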
The rise of artificial intelligence capabilities in autonomy-enabled systems and robotics has pushed research to address the unique nature of human-autonomy team collaboration. The goal of these advanced technologies is to enable rapid decision making, enhance situation awareness, promote shared understanding, and improve team dynamics. Simultaneously, use of these technologies is expected to reduce risk to those who collaborate with them. Yet for appropriate human-autonomy teaming to take place, especially as we move beyond dyadic partnerships, proper calibration of team trust is needed to effectively coordinate interactions during high-risk operations. Meeting this end requires critical measures of team trust suited to the new dynamics of human-autonomy teams. This paper expands on trust measurement principles and the foundations of human-autonomy teaming to propose a “toolkit” of novel methods that support the development, maintenance, and calibration of trust in human-autonomy teams operating within uncertain, risky, and dynamic environments.
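To make the notion of trust calibration concrete, here is a minimal sketch; it is our illustration, not a method proposed in the paper, and the update rule, learning rate, and function names are all assumptions. A running estimate of the autonomy's observed reliability is compared against a teammate's reported trust to flag over- or under-trust.

```python
def update_reliability(estimate, outcome, rate=0.2):
    """Exponentially weighted estimate of the autonomy's reliability.

    estimate -- current reliability estimate in [0, 1]
    outcome  -- 1 if the autonomy's last action succeeded, else 0
    rate     -- how quickly the estimate tracks recent outcomes
    """
    return estimate + rate * (outcome - estimate)

def calibration_gap(reported_trust, reliability):
    """Positive gap suggests over-trust; negative suggests under-trust."""
    return reported_trust - reliability

# Toy usage: reliability drifts down after two failures; a teammate
# still reporting 0.9 trust shows a positive (over-trust) gap.
reliability = 0.9
for outcome in [1, 0, 0, 1]:
    reliability = update_reliability(reliability, outcome)
print(round(reliability, 3), round(calibration_gap(0.9, reliability), 3))
```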