Urban Search and Rescue (USAR) missions continue to benefit from the incorporation of human–robot teams (HRTs). USAR environments can be ambiguous, hazardous, and unstable. The integration of robot teammates into USAR missions has enabled human teammates to access areas of uncertainty, including hazardous locations. For HRTs to be effective, it is essential to understand the factors that influence team effectiveness, such as shared goals, mutual understanding, and efficient communication. The purpose of our research is to determine how to (1) better establish human trust, (2) identify useful levels of robot transparency and robot explanations, (3) ensure situation awareness, and (4) encourage a collaborative partnership among teammates. By implementing robot transparency and robot explanations, we found that effective HRTs depend on robot explanations that are context-driven and readily available to the human teammate.
Virtual testbeds are fundamental to the success of research on cognitive work in safety-critical domains. A testbed that meets researchers' objectives and creates a sense of reality for participants positively impacts the research process; such testbeds have the potential to allow researchers to address questions not achievable in physical environments. This paper discusses the development of a synthetic task environment (STE) for Urban Search and Rescue (USAR), built in Roblox, to advance the boundaries of Human-Robot Teams (HRTs). Virtual testbeds can simulate USAR task environments and HRT interactions. After assessing alternative STE platforms, we found that Roblox not only met our research requirements but also would prove invaluable for research teams without substantial coding experience. This paper outlines the design process of creating an STE to meet our research team's objectives.
Project Overview. Agent transparency in human-machine teams affords human team members the ability to understand the machine's status, reasoning, and future states (Chen et al., 2018). When humans work with an agent that is transparent, they will have an accurate mental model of that machine's behavior and be able to plan and execute their own actions accordingly. The words a robot uses (Guznov et al., 2020) and the modality through which a robot communicates (Ezenyilimba et al., n.d.; Fernandes et al., 2018) have been shown to affect human teammates' perceptions of trust and situation awareness (SA), as well as workload and performance. Additionally, agent confidence, a component of transparency, has been shown to improve trust in robot teammates (Wang et al., 2016). Graphical modalities have been championed as the primary method of communicating agent transparency (Selkowitz et al., 2017); however, in some scenarios, text-based communication has been shown to benefit ratings of trust in robot teammates, and SA, compared to graphical communication (Ezenyilimba et al., n.d.). In this study, two methods of conveying agent transparency to a human teammate are examined: text displays and graphical displays. We investigate how the presence or absence of agent confidence within each of these displays affects trust, SA, workload, and performance.