2022 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3531146.3533186
The Algorithmic Imprint

Abstract: When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply its FATE-related issues cease to exist. In this paper, we introduce the notion of the "algorithmic imprint" to illustrate how merely removing an algorithm does not necessarily undo or mitigate its consequences. We operationalize this concept and its implications through the 2020 …

Cited by 12 publications (4 citation statements); references 70 publications.
“…To deconstruct systems of oppression that are reified through algorithmic oppression, we argue that socio-technical researchers must have a common foundational base of sociology, ethics, and self-reflexivity. A prevalent limitation in AI ethics work is the avoidance, or inability, to explicitly state the critical structures that shape the world and examine the impacts of sociotechnical systems on society (Ehsan et al, 2022a). Without systemic analysis, we claim work dedicated to positively improving the impact of technology on society will be performative at best and reify systems of oppression at worst.…”
Section: Liberation Cannot Be Operationalized Under Systems Of Oppres…
confidence: 99%
“…Fairness and bias are common words used when considering technology's impact and repairing its negative lasting effects (Ehsan, Singh, Metcalf, & Riedl, 2022a). Other researchers have commented on the various definitions of fairness, such as Hampton's reference to over 21 definitions presented in (Tal, Batsuren, Bogina, Giunchiglia, Hartman, Loizou, Kuflik, & Otterbacher, 2019a).…”
Section: The Misuse Of Bias And Fairness
confidence: 99%
“…Here, privacy violations may reflect more traditional conceptualizations of privacy attacks or security violations [69,90] and privacy elements beyond what may be protected by regulations or under the traditional purview of a privacy officer [119,159]. For instance, privacy violence may arise from algorithmic systems making predictive inference beyond what users openly disclose [196] or when data collected and algorithmic inferences made about people in one context is applied to another without the person's knowledge or consent through big data flows [119], even after those datasets or systems have been deprecated [50,72]. Even if those inferences are false (e.g., the incorrect assessment of one's sexuality), people or systems can act on that information in ways that lead to discrimination and harm [208].…”
Section: Privacy Violation
confidence: 99%