Strategies for improving the explainability of artificial agents are a key approach to supporting the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations do not lend themselves to standardization, finding solutions that fit the algorithmic decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. In particular, this paper focuses on non-expert users’ perspectives, since users with little technical knowledge are likely to benefit the most from “post-hoc”, everyday explanations. Drawing upon the explainable AI and social sciences literature, this paper investigates how artificial agents’ explainability and trust are interrelated at different stages of an interaction. Specifically, it examines the possibility of implementing explainability as a trust-building, trust-maintenance, and trust-restoration strategy. To this end, the paper identifies and discusses the intrinsic limits and fundamental features of explanations, such as their structural qualities and communication strategies. Accordingly, this paper contributes to the debate by providing recommendations on how to maximize the effectiveness of explanations in supporting non-expert users’ understanding and trust.
In recent years, the governance of robotic technologies has become an important topic in policy-making contexts. The many potential applications and roles of robots, in combination with steady advances in their uptake within society, are expected to cause various unprecedented issues, which in many cases will increase the demand for new policy measures. One of the major issues is the way in which societies will address potential changes in the moral and legal status of autonomous social robots. The concept of robot standing aims to capture and elaborate on such changes in robots’ status. This paper explores robot standing as a useful concept that can assist in the anticipatory governance of social robots. At the same time, however, the concept necessarily involves forms of speculative thinking, as it anticipates a future that has not yet fully arrived. This paper elaborates on how such speculative engagement with the potential of technology represents an important point of discussion in the critical study of technology more generally. The paper then situates social robotics in the context of anticipatory technology governance by emphasizing the idea that robots are currently in the process of becoming constituted as objects of governance. Subsequently, it explains how a speculative concept like robot standing, specifically, can be of value in this process.
This paper develops an approach to the study of trust in emerging robotics in the context of technology governance. First, a notion of robotics’ speculative character as an emerging technology is developed, pointing to the different expectations regarding its societal impact. Furthermore, robots as speculative objects are explained as important to engage with, thereby arguing for a narrative approach to robot trajectories. Finally, building on the above, a concept for the analysis of trust building through technology governance is developed, one that can engage with the speculative character of emerging robotics at a societal level.
The place of social robots in our social institutions is currently an important topic of discussion. An issue that arises regularly concerns the speculative character of several of the arguments in that discussion. This is unsurprising, as many of those arguments refer to future potentialities. For instance, the argument for robot rights has led to heated debate about the usefulness of this speculative notion. This contribution aims to reflect on the role of speculative concepts in the field of robot ethics. Its goal is first of all to examine how robot ethics as a field is engaged in the development of speculative arguments. As part of this, the speculative components of robotics narratives are reviewed. Furthermore, the contribution zooms in on the discussion around social robots while elaborating on different issues that can be seen as constitutive of an improved speculative robot ethics. Finally, the goal is to provide new directions for further engagement with the contingent futures of social robots.