Are acts of violence performed in virtual environments ever morally wrong, even when no other persons are affected? While some such acts surely reflect deficient moral character, I focus on the moral rightness or wrongness of acts. Typically it is thought that, on Kant's moral theory, an act of virtual violence is morally wrong (i.e., violates the Categorical Imperative) only if the act mistreats another person. But I argue that, on Kant's moral theory, some acts of virtual violence can be morally wrong even when no other persons or their avatars are affected. First, I explain why many have thought that, on Kant's moral theory, virtual acts affecting no other persons or their avatars cannot violate the Categorical Imperative. There are real-world acts that clearly do violate it, yet when we consider the same sorts of acts done alone in a virtual environment, they seem not to, because no other persons were involved. How, then, could any virtual acts like these, affecting no other persons or their avatars, violate the Categorical Imperative? I then argue that there indeed can be such cases of morally wrong virtual acts: some due to an actor's having erroneous beliefs about morally relevant facts, and others due not to error, but to the actor's intention leaving out morally relevant facts while immersed in a virtual environment. I conclude by considering some implications of my arguments for both our present technological context and the future.
Leibniz accepts causal independence, the claim that no created substance can causally interact with any other. And Leibniz needs causal independence to be true, since his well-known pre-established harmony is premised upon it. So, what is Leibniz's argument for causal independence? Sometimes he claims that causal interaction between substances is superfluous; sometimes he claims that it would require the transfer of accidents, and that this is impossible. But when Leibniz finds himself under sustained pressure to defend causal independence, those are not the reasons that he marshals in its defense. Instead, deep into his long correspondence with Burchard de Volder, he gives a different sort of argument, one that has gone nearly unnoticed by commentators and has not yet been properly understood. In part, this is because the argument develops slowly over four years of correspondence. It emerges in early 1704, but it is formulated tersely and appears murky unless understood in light of Leibniz and De Volder's tangled exchanges. There Leibniz argues that, on his distinctive ontology of an infinity of created substances, no two created substances could possibly causally interact, for roughly the same reasons that some Cartesians like De Volder deny interaction between minds and bodies on their substance dualist ontology. In this paper I draw out this lost argument, explain it and the metaphysics on which Leibniz builds it, and untangle Leibniz and De Volder's exchanges concerning causation from which this argument results.

1. Even though Leibniz thinks of God as a substance, though certainly a special one, for the sake of convenience I'll use the term 'substance' to mean 'created substance', unless stated otherwise.
Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn a number of objections. In this paper I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against these objections. The result is the fullest explication and defense to date of this well-known and influential but often criticized argument. Third, however, I will raise a new concern about the argument's use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument's advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.