Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter interrupted Robovie's turn at a game and, against Robovie's stated objections, put Robovie into a closet. Each child was then engaged in a 50-min structural-developmental interview. Results showed that during the interaction sessions, all of the children engaged in physical and verbal social behaviors with Robovie. The interview data showed that the majority of children believed that Robovie had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets). In terms of Robovie's moral standing, children believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed). Developmentally, while more than half the 15-year-olds conceptualized Robovie as a mental, social, and partly moral other, they did so to a lesser degree than the 9- and 12-year-olds. Discussion focuses on how (a) children's social and moral relationships with future personified robots may well be substantial and meaningful and (b) personified robots of the future may emerge as a unique ontological category.
This study investigated whether a robotic dog might aid in the social development of children with autism. Eleven children diagnosed with autism (ages 5-8) interacted with the robotic dog AIBO and, during a different period within the same experimental session, a simple mechanical toy dog (Kasha), which had no ability to detect or respond to its physical or social environment. Results showed that, in comparison to Kasha, the children spoke more words to AIBO, and more often engaged in three types of behavior with AIBO typical of children without autism: verbal engagement, reciprocal interaction, and authentic interaction. In addition, we found suggestive evidence (with p values ranging from .07 to .09) that the children interacted more with AIBO, and, while in the AIBO session, engaged in fewer autistic behaviors. Discussion focuses on why robotic animals might benefit children with autism.
Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.