This article argues in defence of human–robot friendship. I begin by outlining the standard Aristotelian view of friendship, according to which there are certain necessary conditions which x must meet in order to ‘be a friend’. I explain how the current literature typically uses this Aristotelian view to object to human–robot friendships on theoretical and ethical grounds. Theoretically, a robot cannot be our friend because it cannot meet the requisite necessary conditions for friendship. Ethically, human–robot friendships are wrong because they are deceptive (the robot does not actually meet the conditions for being a friend), and could also make it more likely that we will favour ‘perfect’ robots, and disrespect, exploit, or exclude other human beings. To argue against the above position, I begin by outlining and assessing current attempts to reject the theoretical argument—that we cannot befriend robots. I argue that the current attempts are problematic, and do little to support the claim that we can be friends with robots now (rather than in some future time). I then use the standard Aristotelian view as a touchstone to develop a new degrees-of-friendship view. On my view, it is theoretically possible for humans to have some degree of friendship with social robots now. I explain how my view avoids ethical concerns about human–robot friendships being deceptive, and/or leading to the disrespect, exploitation, or exclusion of other human beings.
Ali (Ethics and Information Technology 17:267–274, 2015) and McCormick (Ethics and Information Technology 3:277–287, 2001) claim that virtual murders are objectionable when they show inappropriate engagement with the game or bad sportsmanship. McCormick argues that such virtual murders cannot be wrong on Kantian grounds, because virtual murders only violate indirect moral duties, and bad sportsmanship of the same kind is displayed across competitive sports generally. To condemn virtual murder on grounds of bad sportsmanship, we would therefore also need to condemn other competitive games. I argue, contra McCormick, that virtual murders performed within massively multiplayer online roleplaying games can be wrong on Kantian grounds when they are exploitative. Exploitation occurs when a virtual murder treats the player controlling the victim in a way that they have no opportunity to consent to (i.e. as a mere means, in Kantian terminology). I argue that some virtual murders involving inappropriate engagement (Ali, Ethics and Information Technology 17:267–274, 2015) and bad sportsmanship (McCormick, Ethics and Information Technology 3:277–287, 2001) are exploitative in this way and therefore also wrong on Kantian grounds.
On average, humans sleep for a third of their lives, and sleep disorders are common and treatable. However, services for most sleep disorders are highly variable across the UK, and sleep medicine is neglected in the medical curriculum. We report the findings of an audit of patients with neurological sleep disorders seen in a combined cognitive neurology and sleep disorders clinic over a seven-year period: 75 with hypersomnias, 67 with parasomnias and 39 with insomnia. We also analyse the results of a pilot cognitive behavioural therapy service for insomnia undertaken in the same population.
This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in this article make an important original contribution to the robo-philosophy literature, and particularly to the literature on human–robot relationships (which typically considers only positive relationship types, e.g., love, friendship, etc.). Additionally, as explained at the end of the article, my discussion of robot hate could also have notable consequences for the emerging robot rights movement. Specifically, I argue that understanding human–robot relationships characterised by hate could actually help theorists argue for the rights of robots.