As of 2020, the Public Employment Service Austria (AMS) makes use of algorithmic profiling of job seekers to increase the efficiency of its counseling process and the effectiveness of active labor market programs. Based on a statistical model of job seekers' prospects on the labor market, the system, which has become known as the AMS algorithm, is designed to classify clients of the AMS into three categories: those with high chances to find a job within half a year, those with mediocre prospects on the job market, and those with a poor employment outlook over the next two years. Depending on the category a particular job seeker is classified under, they will be offered differing support in (re)entering the labor market. Grounded in science and technology studies, critical data studies, and research on fairness, accountability, and transparency of algorithmic systems, this paper examines the inherent politics of the AMS algorithm. An in-depth analysis of relevant technical documentation and policy documents investigates crucial conceptual, technical, and social implications of the system. The analysis shows how the design of the algorithm is shaped not only by technical affordances but also by social values, norms, and goals. A discussion of the tensions, challenges, and possible biases that the system entails calls into question the objectivity and neutrality of data claims and the high hopes pinned on evidence-based decision-making. In this way, the paper sheds light on the coproduction of (semi)automated managerial practices in employment agencies and on the framing of unemployment under austerity politics.
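To make the described three-way classification concrete, here is a minimal sketch of such a profiling rule. The function name, thresholds, and the framing as two model-derived probabilities are illustrative assumptions for this sketch, not the AMS's actual specification, which the paper analyzes from technical documentation.

```python
# Illustrative sketch of a three-way job-seeker profiling rule.
# The thresholds and inputs below are hypothetical assumptions,
# not the actual AMS model specification.

def classify_job_seeker(p_short_term: float, p_long_term: float) -> str:
    """Assign a job seeker to a category from two model outputs:
    p_short_term: predicted probability of finding a job within ~6 months
    p_long_term:  predicted probability of finding a job within ~2 years
    """
    HIGH_THRESHOLD = 0.66  # hypothetical cut-off for "high" prospects
    LOW_THRESHOLD = 0.25   # hypothetical cut-off for "low" prospects

    if p_short_term >= HIGH_THRESHOLD:
        return "high"    # high chance of a job within half a year
    if p_long_term < LOW_THRESHOLD:
        return "low"     # poor outlook over the next two years
    return "medium"      # mediocre prospects on the job market

# Example: a client with a 40% short-term and a 70% long-term chance
# falls into the "medium" group and is offered the corresponding support.
print(classify_job_seeker(0.40, 0.70))  # -> "medium"
```

The sketch makes the paper's point tangible: where the cut-offs sit, and which support each category unlocks, are value-laden design decisions rather than purely technical ones.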
The growth of technologies promising to infer emotions raises political and ethical concerns, including concerns regarding their accuracy and transparency. A marginalized perspective in these conversations is that of data subjects potentially affected by emotion recognition. Taking social media as one emotion recognition deployment context, we conducted interviews with data subjects (i.e., social media users) to investigate their notions about accuracy and transparency in emotion recognition and interrogate stated attitudes towards these notions and related folk theories. We find that data subjects see accurate inferences as uncomfortable and as threatening their agency, pointing to privacy and ambiguity as desired design principles for social media platforms. While some participants argued that contemporary emotion recognition must be accurate, others raised concerns about possibilities for contesting the technology and called for better transparency. Furthermore, some challenged the technology altogether, highlighting that emotions are complex, relational, performative, and situated. In interpreting our findings, we identify new folk theories about accuracy and meaningful transparency in emotion recognition. Overall, our analysis shows an unsatisfactory status quo for data subjects that is shaped by power imbalances and a lack of reflexivity and democratic deliberation within platform governance.
Social media platforms aspire to create online experiences where users can participate safely and equitably. However, women around the world experience widespread online harassment, including insults, stalking, aggression, threats, and non-consensual sharing of sexual photos. This article describes women's perceptions of harm associated with online harassment and their preferred platform responses to that harm. We conducted a survey in 14 geographic regions around the world (N = 3,993), focusing on regions whose perspectives have been insufficiently elevated in social media governance decisions (e.g., Mongolia, Cameroon). Results show that, on average, women perceive greater harm associated with online harassment than men do, especially for non-consensual image sharing. Women also favor most platform responses more strongly than men, especially removing content and banning users; however, women are less favorable toward payment as a response. Addressing global gender-based violence online requires understanding how women experience online harms and how they wish for those harms to be addressed. This is especially important given that the people who build and govern technology are not typically those who are most likely to experience online harms.
Social media has both been hailed for enabling social movements and critiqued for its affordances as a surveillance infrastructure. In this work, I focus on the latter by analyzing research, products, and discourses around the recent history of civil unrest prediction based on social media data and other public data sources, thereby giving insights into current and often opaque protest surveillance and forecasting practices. Technologies to monitor individuals and groups online have been developed, for instance, to predict US protests following the election of President Trump in 2016 and labor strikes across global supply chains. These works are part of an emerging computer science research field focused on “civil unrest prediction” dedicated to forecasting protests across the globe (e.g., Indonesia, Brazil, and Australia). I focus foremost on scholarly literature as my unit of analysis, but I also examine other artifacts that discuss or detail applications for companies, organizations, or governments. I provide a conceptualization of civil unrest prediction technology by illustrating the data sources, features, and methods used, and how prediction and detection are necessarily entangled. I then show how various kinds of unrest activity are framed as risks to be fixed or averted on behalf of actors with differing interests, such as the military, law enforcement, and industry. Finally, I critically unpack justifications and ascribed benefits of the technology and point to how the perspectives of protestors are almost completely absent. My analysis shows a critical need for regulation centering activists and workers, and for reflection within academia, particularly in the fields of computer and data science, on the ethics and politics of protest research and ensuing technological applications.
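As a concrete illustration of the kind of pipeline this literature describes, the following hypothetical sketch derives keyword features from public posts and fits a standard classifier. The protest lexicon, toy data, and model choice are all assumptions for illustration, not a reconstruction of any specific system analyzed in the paper.

```python
# Hypothetical illustration of a civil-unrest-prediction pipeline of the kind
# described in the literature: keyword features from public posts feed a
# standard classifier. Lexicon, data, and model choice are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus of public posts, labeled by whether unrest followed (1) or not (0).
posts = [
    "march downtown tomorrow, bring signs",
    "general strike called at the plant next week",
    "great weather for a picnic this weekend",
    "new cafe opened on main street",
]
labels = [1, 1, 0, 0]

# Bag-of-words counts over an assumed protest lexicon.
vectorizer = CountVectorizer(vocabulary=["march", "strike", "signs", "rally"])
X = vectorizer.fit_transform(posts)

model = LogisticRegression().fit(X, labels)

# Note how "detection" and "prediction" are entangled: the same features that
# flag an ongoing mobilization are used to forecast a future one.
new_post = ["calls for a march and a strike at the square"]
print(model.predict(vectorizer.transform(new_post)))  # -> [1]
```

Even this toy version surfaces the paper's concern: the pipeline treats public speech about collective action as a risk signal, with no input from the people being monitored.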
The use of opaque machine learning algorithms is often justified by their accuracy. For example, IBM has advertised its algorithms as being able to predict when workers will quit with 95% accuracy, an EU research project on lie detection in border control has reported 75% accuracy, and researchers have claimed to be able to deduce sexual orientation from face images with 91% accuracy. Such performance numbers are, on the one hand, used to make sense of the functioning of opaque algorithms and promise to quantify the quality of algorithmic predictions. On the other hand, they are also performative and rhetorical, meant to convince others of the ability of algorithms to know the world and its future objectively, making calculated, partial visions appear certain. This duality marks a conflict of interest when the actors who conduct an evaluation also profit from positive outcomes. Building on work in the sociology of testing and agnotology, I discuss seven ways in which the construction of high accuracy claims also involves the production of ignorance. I argue that this ignorance should be understood as productive and strategic, as it is imbued with epistemological authority by making uncertain matters seem certain in ways that benefit some groups over others. Several examples illustrate how tech companies increasingly produce ignorance strategically, reminiscent of tactics used by controversial industries with a high concentration of market power, such as big oil and tobacco. My analysis deconstructs claims of certainty by highlighting the politics and contingencies of the testing used to justify the adoption of algorithms. I further argue that current evaluation practices in ML are prone to producing problematic forms of ignorance, such as misinformation, and to reinforcing structural inequalities, because human judgment and power structures are rendered invisible, narrow and oversimplified metrics are overused, and pernicious incentive structures encourage overstatements enabled by flexibility in testing. I provide recommendations on how to deal with and rethink incentive structures, testing practices, and the communication and study of accuracy, with the goal of opening possibilities, making contingencies more visible, and enabling the imagination of different futures.
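To see how a headline accuracy figure can obscure what a model actually does, consider a minimal worked example (my own illustration, not taken from the paper): on class-imbalanced data such as attrition prediction, a trivial model that never predicts the rare outcome still reports an impressive accuracy.

```python
# Minimal, self-contained illustration (not from the paper) of how a headline
# accuracy figure can obscure what a model actually does on imbalanced data.

# Suppose 5% of 1,000 workers actually quit (the rare positive class).
y_true = [1] * 50 + [0] * 950

# A trivial "model" that always predicts "will not quit".
y_pred = [0] * 1000

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / 50

print(f"accuracy: {accuracy:.0%}")  # 95% -- sounds impressive in a press release
print(f"recall:   {recall:.0%}")    # 0%  -- it never identifies a single quitter
```

The point is not that any specific vendor used this shortcut, but that a single aggregate number leaves exactly this kind of behavior invisible unless the test setup and base rates are disclosed.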