Political discourse on social media is seen by many as polarized, vitriolic and permeated by falsehoods and misinformation. Political operators have exploited all of these aspects of the discourse for strategic purposes, most famously during the Russian social media influence campaign during the 2016 presidential election in the United States and current, similar efforts targeting the U.S. elections in 2018 and 2020. The social media study presented in this paper provides evidence that political influence through manipulation of social media discussions is no longer exclusive to political debate but can now also be found in pop culture. Specifically, this study examines a collection of tweets relating to a much-publicized fan dispute over the Star Wars franchise film Episode VIII: The Last Jedi. This study finds evidence of deliberate, organized political influence measures disguised as fan arguments. The likely objective of these measures is increasing media coverage of the fandom conflict, thereby adding to and further propagating a narrative of widespread discord and dysfunction in American society. Persuading voters of this narrative remains a strategic goal for the U.S. alt-right movement, as well as the Russian Federation. The results of this study show that among those who address The Last Jedi director Rian Johnson directly on Twitter to express their dissatisfaction, more than half are bots, trolls/sock puppets or political activists using the debate to propagate political messages supporting extreme right-wing causes and discrimination based on gender, race or sexuality. A number of these users appear to be Russian trolls. The paper concludes that while only a minority of Twitter accounts tweet negatively about The Last Jedi, organized attempts at politicizing the pop culture discourse on social media for strategic purposes are significant enough that users should be made aware of these measures, so they can act accordingly.
Targeted social media advertising based on psychometric user profiling has emerged as an effective way of reaching individuals who are predisposed to accept and be persuaded by the advertising message. In the political realm, psychometrics appears to have been used to spread both information and misinformation through social media in recent elections in the U.S. and Europe, contributing to the current public debate about 'fake news'. This paper questions the ethics of these methods, both in a commercial context and in the context of democratic processes. The ethical approach is based on the theoretical, contractarian work of John Rawls, which serves as a lens through which the author examines whether the rights of citizens, as Rawls attributes them, are violated by this practice. The paper concludes that within a Rawlsian framework, use of psychometrics in commercial advertising on social media platforms is not necessarily unethical, since the user enters freely into a contract that allows for psychometrics to be used, and because this type of advertising is not necessary for full participation in society. The opposite is the case for political information, and thus, the paper concludes that use of psychometrics in political campaigning violates several of Rawls' ethical maxims.
Targeted social media advertising based on psychometric user profiling has emerged as an effective way of reaching individuals who are predisposed to accept and be persuaded by the advertising message. This article argues that in the case of political advertising, this may present a democratic and ethical challenge. Hypertargeting methods such as psychometrics can “crowd out” political communication with opposing views due to individual attention and time limitations, creating inequities in access to information essential for voting decisions. Psychometrics also appears to have been used to spread both information and misinformation through social media in recent elections in the U.S. and Europe. This article is an applied ethics study of these methods in the context of democratic processes, compared with purely commercial situations. The ethical approach is based on the theoretical, contractarian work of John Rawls, which serves as a lens through which the author examines whether the rights of individuals, as Rawls attributes them, are violated by this practice. The article concludes that within a Rawlsian framework, use of psychometrics in commercial advertising on social media platforms, though not immune to criticism, is not necessarily unethical. In a democracy, however, the individual cannot abandon the consumption of political information, and since using psychometrics in political campaigning makes access to such information unequal, it violates Rawlsian ethics and should be regulated.
Inspired by the 2016 case of the encrypted Apple iPhone used by alleged terrorists in the San Bernardino, Calif. attack, this paper explores the question of whether the use of completely unbreakable encryption online or off-line would be considered ethical by the political philosopher John Rawls. Rawls is widely acknowledged as having played an important role in how we perceive freedom and liberty in Western democracies today, and his work on justice, fairness and liberty remains a significant source of guidance for politicians, policy-makers and activists. Several recent events and technological threats to national security have raised ethical questions about the relationship between state and citizen and how technological power should be divided between these two parties, particularly when it comes to the right to privacy. However, in contrast with a widespread perception of Rawls’ work, this article shows that there are cases in which Rawls’ principles actually place a limitation on liberty in these matters. This paper presents a thought experiment in which it becomes clear that Rawls’ advocacy for liberty did not extend to cases in which social cooperation in a well-ordered society would be obstructed. Based on a study of Rawls’ work, the author concludes that whereas Rawls would consider strong encryption both necessary and ethical, completely unbreakable encryption would be considered a violation of social cooperation and thus indefensible for Rawls.
Purpose As interest in technology ethics is increasing, so is the interest in bringing schools of ethics from non-Western philosophical traditions to the field, particularly when it comes to information and communication technology. In light of this development and recent publications that result from it, this paper responds critically to recent work on Confucian virtue ethics (CVE) and technology. Design/methodology/approach Four critiques are presented as theoretical challenges to CVE in technology, claiming that current literature insufficiently addresses: overall applicability, collective ethics issues, epistemic overconfidence within technology corporations and amplification of epistemic overconfidence by the implementation of CVE. These challenges make use of general CVE literature and work on technology critique, political philosophy, epistemology and business ethics. Findings Implementing CVE in technology may yield some benefits, but these may be outweighed by other outcomes, including strengthening hierarchies, widening inequities, and increasing, rather than limiting, predictive activity, personal data collection, misinformation, privacy violations and challenges to the democratic process. Originality/value Though not directly advocating against CVE, the paper reveals hitherto unidentified and serious issues that should be addressed before CVE is used to inform ethics guidelines or regulatory policies. It also serves as a foundation for further inquiry into how Eastern philosophy more broadly can inform technology ethics in the West.