Abstract: Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence (AI) that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these: the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social robots to enhance human auton…
“…ASAs can improve the autonomy of humans by supporting them to achieve more valuable ends, make more authentic choices, or improve their competencies. On the other hand, our autonomy can be impaired when ASAs restrict us from achieving valuable ends, making authentic choices, and developing competencies, as well as when they disrespect our agency (Formosa, 2021). In three of our autonomy aligned sub-scenarios (1D, 2C, and 4E), the ASAs are negatively impacting human autonomy by restricting authentic choice, disrespecting user agency, and increasing the vulnerability of the user's autonomy.…”
“…What we find in these scenarios is that human autonomy is negatively impacted by the ASA. However, as argued by Formosa (2021), in a given situation social robots and ASAs have the ability to either boost or inhibit human autonomy. ASAs can improve the autonomy of humans by supporting them to achieve more valuable ends, make more authentic choices, or improve their competencies.…”
“…While ethical issues have previously been identified with ASAs, such as whether dialogue systems should be used to change humans' goals or actions (Allwood et al, 2000), and some theoretical work exists exploring significant ethical implications of ASAs, such as their impacts on human autonomy (Formosa, 2021), there have been few studies designed to investigate what ASA behaviours are seen as ethically acceptable. One exception is an early study by van Vugt et al (2009) that explored ASA ethics from the perspective of trustworthiness, reporting that participants found the obese embodied agent who provided weight loss advice more trustworthy, which lends support to a body of work that uses ASAs to challenge stereotypes and biases (Bickmore et al, 2021;Rossen et al, 2008;Sebastian and Richards, 2017;Vugt et al, 2010).…”
“…Social robotic agents demonstrate a degree of sociability and emotional perception, by, inter alia, their engagement in high-level interactive dialogue, responsiveness to social cues, gesturing, mimicking human social behaviour, and voice recognition (Darling, 2016;Formosa, 2021). This serves not only to facilitate the human-robot interface but also to promote their self-maintenance, learning, and decision-making capacity (Breazeal, 2003).…”
Section: Stage 1: Identifying Norms and Normative Principles
“…Such agents can potentially (and paradoxically) serve to either enhance or diminish human well-being. They can allow users to achieve more valuable ends and make more authentic choices or can serve to diminish authentic human choice (Formosa, 2021).…”
With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative sensitivity and compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules. This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context, premised on a range of underlying relevant normative principles. In translating and reducing normative principles to SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant.
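To make the abstract's idea of operational SLEEC rules concrete, the following is a minimal sketch, not the cited paper's actual formalism: each rule pairs a trigger with a required response, may carry defeating ("unless") conditions, and conflicts are resolved by priority. All names here (`SleecRule`, the example triggers and responses) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SleecRule:
    """A hypothetical normatively relevant rule: when the trigger condition
    holds, the agent should perform the response, unless a defeater holds."""
    name: str
    trigger: str                                 # condition label, e.g. "DoorClosed"
    response: str                                # required action, e.g. "DoNotEnter"
    unless: list = field(default_factory=list)   # defeating conditions
    priority: int = 0                            # higher priority wins on conflict

def applicable(rule: SleecRule, situation: set) -> bool:
    """A rule fires if its trigger holds and no defeater is present."""
    return rule.trigger in situation and not any(d in situation for d in rule.unless)

def select_response(rules: list, situation: set):
    """Choose the response of the highest-priority applicable rule, if any."""
    fired = [r for r in rules if applicable(r, situation)]
    return max(fired, key=lambda r: r.priority).response if fired else None

# Two illustrative rules with a built-in conflict: privacy vs. safety.
rules = [
    SleecRule("respect_privacy", "DoorClosed", "DoNotEnter", priority=1),
    SleecRule("safety_override", "UserFallen", "EnterAndAssist", priority=2),
]

print(select_response(rules, {"DoorClosed", "UserFallen"}))  # EnterAndAssist
print(select_response(rules, {"DoorClosed"}))                # DoNotEnter
```

The priority mechanism stands in for the paper's conflict-resolution stage: when both the privacy rule and the safety rule fire, the higher-priority safety rule determines the action.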
As technologies become smarter, they tend to protect their users, much like parents protect their children. However, caring too much about a user can lead to technology paternalism, a construct that is becoming increasingly relevant with the advent of smart technologies. Nonetheless, very little is known about what technology paternalism is or how it can be measured. The authors applied established procedures from scale development methodology, followed by quantitative measurement, to present and validate a three-factor scale (limiting, overruling, and welfare). The approach offers the first empirical evidence linking technology paternalism to associated concepts, showing that it correlates as expected with established constructs in the literature on technology acceptance. This study contributes to the literature by uncovering a construct of interest to a critical discussion of technology paternalism and by providing a measurement tool that can be used by researchers, policy makers, and managers.
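For illustration only, multi-item factor scales like the three-factor instrument described above are typically scored by averaging a respondent's ratings within each factor. The item texts and response values below are invented placeholders, not the validated items from the cited study:

```python
# Hypothetical responses on a 1-7 Likert scale, grouped by the three factors
# named in the abstract (limiting, overruling, welfare). Item wordings are
# invented for illustration; the validated items appear in the cited paper.
responses = {
    "limiting":   [6, 5, 7],   # e.g. "The device keeps me from doing what I want."
    "overruling": [4, 5, 4],   # e.g. "The device overrides my decisions."
    "welfare":    [6, 6, 5],   # e.g. "The device acts for my own good."
}

def subscale_scores(responses: dict) -> dict:
    """Mean rating per factor -- the standard way multi-item factors are scored."""
    return {factor: sum(items) / len(items) for factor, items in responses.items()}

scores = subscale_scores(responses)
print(scores["limiting"])  # 6.0
```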