2021
DOI: 10.1017/can.2021.23

What’s Wrong with Automated Influence

Abstract: Automated Influence is the use of Artificial Intelligence (AI) to collect, integrate, and analyse people’s data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of “AI Ethics” in favour of a more structural, political p…


Cited by 19 publications (10 citation statements)
References 64 publications
“…For example, the first data protection principle requires that consent is obtained from the individual for the information to be collected and processed. If a party plans to use your data in a way that was not disclosed during the initial consent process, this original consent becomes invalid and a fresh consent is required. For example, if a party intends to use your postal code for a different purpose than the one you initially agreed to, such as determining insurance rates, they must seek your permission again [2].…”
Section: The Privacy Problem
confidence: 99%
“…The core of the chilling effect we are discussing lies in this disparity. Persistent surveillance fosters an environment of constraint, where individual autonomy is violated [8]. For a democracy to thrive, it necessitates independent and autonomous decision-makers, a condition which becomes challenging to fulfill in the absence of privacy.…”
Section: The Privacy Problem
confidence: 99%
“…Yet one might correctly point out that many more objections to algorithms have recently appeared in the algorithmic ethics literature (Birhane, 2021; Hunkenschroer & Luetge, 2022; Martin, 2019; Müller, 2021; Tasioulas, 2019; Tsamados et al, 2022). For example, there are concerns related to algorithms systemically excluding certain individuals (Creel & Hellman, 2022), eliciting organizational monocultures (Kleinberg & Raghavan, 2021), or disproportionately harming marginalized groups (Birhane, 2021); worries related to the legitimacy and trustworthiness of algorithms (Benn & Lazar, 2022; Martin & Waldman, 2022; Tong, Jia, Luo, & Fang, 2021) and the lack of explainability in the case of opaque algorithms (Anthony, 2021; Kim & Routledge, 2022; Lu, Lee, Kim, & Danks, 2020; Rahman, 2021; Rudin, 2019; Selbst & Powles, 2017; Véliz, Prunkl, Phillips-Brown, & Lechterman, 2021; Wachter, Mittelstadt, & Floridi, 2017); 13 issues related to whether algorithms preclude us from taking people seriously as individuals (Lippert-Rasmussen, 2011; Susser, 2021); and concerns related to whether automated systems create responsibility or accountability gaps (Bhargava & Velasquez, 2019; Danaher, 2016; Himmelreich, 2019; Nyholm, 2018; Roff, 2013; Simpson & Müller, 2016; Sparrow, 2007; Tigard, 2021), among other concerns (Bedi, 2021; Tasioulas, 2019; Tsamados et al, 2022; Yam & Skorburg, 2021). In short, there’s now a rich literature involving a wide range of concerns related to adopting algorithms in lieu of human decision makers (Hunkenschroer & Luetge, 2022; Martin, 2022; Müller, 2021; Tsamados et al, 2022).…”
Section: The Interview Puzzle: The Behavioral and Algorithmic Threats
confidence: 99%
“…Finally, the concentration of economic, social, and political power in the hands of a few firms is problematic. An account which focuses only on bilateral consent transactions will miss the cumulative effects of these transactions on social arrangements and the concentration of power within society (Véliz 2020; Benn and Lazar 2022).…”
Section: Against Autonomous Authorization As the Basis For User-SNS E...
confidence: 99%
“…While these categories call to mind each person’s interests to control their personal data, there are also important interests that others have in a person’s disclosure of data because one person’s disclosure may permit inferences about others. Moreover, there are collective interests in the rules governing the transfer of personal data, including the way that such transfers contribute to concentrations of economic and social power and how they shape democratic discourse (Véliz 2020; Benn and Lazar 2022).…”
Section: Introduction
confidence: 99%