Abstract: Recent technological innovation has made video doctoring increasingly accessible. This has given rise to Deepfake Pornography, an emerging phenomenon in which Deep Learning algorithms are used to superimpose a person's face onto a pornographic video. Although to most people, Deepfake Pornography is intuitively unethical, it seems difficult to justify this intuition without simultaneously condemning other actions that we do not ordinarily find morally objectionable, such as sexual fantasies. In the present arti…
“…A key point to note about the graveness of a wrongdoing is that it is sensitive to both extrinsic and intrinsic factors. This point is related to one made by Öhman (2020), who stated that “the permissibility of some actions appears to depend on the degree to which they are abstracted from their natural context” (p. 133). So, two wrongdoings that might seem equally grave in abstraction (their intrinsic graveness) might not seem equally grave once we also consider other factors (their extrinsic graveness), such as: the social context of the wrongdoing; how long ago the wrongdoing occurred; how distanced from reality a fictional wrongdoing may be; and who committed the wrongdoing (and there are bound to be further such factors).…”
Section: What Is the Grave Resolution? (mentioning)
confidence: 57%
“…And there will be further factors beyond these. The point here is to suggest that the graveness of a wrongdoing can be affected by such factors; factors that we may not notice when wrongdoings are “abstracted from their natural context” (Öhman, 2020, p. 133). And once a wrongdoing is sufficiently grave, it may become off-limits.…”
Section: What Is the Grave Resolution? (mentioning)
confidence: 99%
“… 5 This formulation was originally given by Luck (2017; 2018, p. 157), and has also been followed by Tilson (2018, p. 208), Bartel (2020, p. 122), Nader (2020, p. 239), and Öhman (2020, p. 139). …”
In this paper a new resolution to the gamer’s dilemma (a paradox concerning the moral permissibility of virtual wrongdoings) is presented. The first part of the paper is devoted to strictly formulating the dilemma, and the second to establishing its resolution. The proposed resolution, the grave resolution, aims to resolve not only the gamer’s dilemma, but also a wider set of analogous paradoxes – which together make up the paradox of treating wrongdoing lightly.
“…Over the past years, several articles have appeared that touch on the ethical implications of this technology (Caporusso, 2021; Diakopoulos & Johnson, 2020; Fletcher, 2018; Franks & Waldman, 2019; Meskys et al., 2020; Öhman, 2020; Silbey & Hartzog, 2019; Spivak, 2019; Westerlund, 2019). Concerns have been raised about the potential use of deepfakes for blackmail, intimidation, and sabotage, ideological manipulation (Fletcher, 2018: 467), and incitement to violence.…”
Deepfake technology presents significant ethical challenges. The ability to produce realistic-looking and -sounding video or audio files of people doing or saying things they did not do or say brings with it unprecedented opportunities for deception. The literature that addresses the ethical implications of deepfakes raises concerns about their potential use for blackmail, intimidation, and sabotage, ideological influencing, and incitement to violence, as well as broader implications for trust and accountability. While this literature importantly identifies and signals these potentially far-reaching consequences, less attention is paid to the moral dimensions of deepfake technology and deepfakes themselves. This article helps fill this gap by analysing whether deepfake technology and deepfakes are intrinsically morally wrong, and if so, why. The main argument is that deepfake technology and deepfakes are morally suspect, but not inherently morally wrong. Three factors are central to determining whether a deepfake is morally problematic: (i) whether the deepfaked person(s) would object to the way in which they are represented; (ii) whether the deepfake deceives viewers; and (iii) the intent with which the deepfake was created. The most distinctive aspect that renders deepfakes morally wrong is when they use digital data representing the image and/or voice of persons to portray them in ways in which they would be unwilling to be portrayed. Since our image and voice are closely linked to our identity, protection against the manipulation of hyper-realistic digital representations of our image and voice should be considered a fundamental moral right in the age of deepfakes.
“…Recently, the malicious design of deepfakes has been described as a “[...] serious threat to psychological security” [179]. Adult targets may, despite the synthetic nature of the deepfake samples and their often private character (restricted to the personal possession of the agent in question), perceive their mere existence as degradation [180] – a phenomenon that will certainly require social discourse in the long term.…”
In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two different paradigms with the terms artificial stupidity (AS) and eternal creativity (EC) respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.