As of 2020, the Public Employment Service Austria (AMS) makes use of algorithmic profiling of job seekers to increase the efficiency of its counseling process and the effectiveness of active labor market programs. Based on a statistical model of job seekers' prospects on the labor market, the system, which has become known as the AMS algorithm, is designed to classify clients of the AMS into three categories: those with high chances of finding a job within half a year, those with mediocre prospects on the job market, and those with poor employment prospects over the next two years. Depending on the category a particular job seeker is classified under, they are offered differing support in (re)entering the labor market. Grounded in science and technology studies, critical data studies, and research on fairness, accountability, and transparency of algorithmic systems, this paper examines the inherent politics of the AMS algorithm. An in-depth analysis of relevant technical documentation and policy documents investigates crucial conceptual, technical, and social implications of the system. The analysis shows how the design of the algorithm is shaped not only by technical affordances but also by social values, norms, and goals. A discussion of the tensions, challenges, and possible biases that the system entails calls into question the objectivity and neutrality of data claims and the high hopes pinned on evidence-based decision-making. In this way, the paper sheds light on the coproduction of (semi)automated managerial practices in employment agencies and the framing of unemployment under austerity politics.
Large media platforms are now in the habit of providing facts in their products and representing knowledge to various publics. For example, Google's Knowledge Graph is a database of facts that Google uses to provide quick answers to publics who use its products, while Wikipedia has a sister project called Wikidata that similarly stores facts about the world in data formats from which various apps can retrieve the data. Microsoft, Amazon, and IBM use similar fact-storing and retrieval techniques in their products. This panel introduces papers that take a political economy perspective on such platformized versions of fact production and examines the underlying infrastructures, histories, and modeling techniques used in such knowledge representation systems. Knowledge representation, long a central topic in archiving work in library and information sciences, is a key feature of platforms and is practiced by internet companies more broadly. Much of this work has historically centered on metadata models that seek to organize and describe information in standardized ways. In the context of expanding this data organizing and labeling work into the wider web, one of the main facilitators was the "Semantic Web" project proposed by Tim Berners-Lee and the World Wide Web Consortium (W3C). Today, many of the same principles, technologies, and standards proposed by those early metadata modeling projects from groups like the W3C are found at companies like Google and Facebook, organizations like Wikipedia, government portals, and beyond. These platform metadata models are typically produced by industry professionals (e.g., taxonomists, ontologists, knowledge engineers) who help structure information for algorithmic processing on platforms and their recommender systems.
Such structured information is supposed to add a layer of contextual expressivity to web data that would otherwise be more difficult to parse, though the issue of context control is not unproblematic in relation to statements of fact. In many of these automated systems, metadata models contribute to articulating ready-made facts that then travel through these systems and eventually reach the products engaged by everyday web users. This panel connects scholars working in information science, media studies, and science and technology studies to discuss these semantic technologies. The first paper presents data gathered from interviews with semantic web practitioners who build or have built metadata models at large internet and platform companies. It presents results from a qualitative study of these platform data management professionals (collectively referred to as "metadata modelers") and draws on unstructured interviews (n=10) and archival research. The paper describes the image of a metadata ecology along with selected work-related contestations expressed by interview subjects regarding some of the difficulties and intractable problems in metadata modeling work. The paper includes a discussion of the political economy of platform semantics through an examination of critical semantic web literature and ends with some policy concerns. The second paper translates the method of tracing "traveling facts" from science studies to the context of online knowledge about evolving, historic events. The goal is to understand the socio-political impact of the semantic web as it has been implemented by monopolistic digital platforms, and how such practices intersect in the context of Wikipedia, from which the majority of knowledge graph entities are sourced.
The paper describes how the adoption (and domination) of linked data by platform companies has catalyzed a reshaping of web content to accord with question-and-answer linked data formats, weakening the power of open content licenses to support local knowledge and consolidating the power of algorithmic knowledge systems that favor knowledge monopolies. The third paper discusses building a semantic foundation for machine learning and examines how information infrastructures that convey meaning are intimately tied to colonial labor relations. It traces the practice of building a digital infrastructure that enables machines to learn from human language. The paper describes examples from an ethnographic study of semantic computing and its infrastructuring practices to show how such techniques are materially and discursively performative in their co-emergence with techno-epistemic discourses and politico-economic structures. It examines sociomaterial processes in which classifications, standards, metadata, and methods co-emerge with processes of signification that reconstitute and/or shift hegemonic ecologies of knowledge. The fourth paper evaluates and examines the ethics of "free" data (CC0) in Wikidata by evaluating the sources and usage of data from and within Wikidata. From knowledge graphs to AI training, Wikidata is the semantic web platform being used across the Internet to power new platforms. By considering how Wikidata extracts Wikipedia's "share alike" knowledge through metadata scraping, alongside the significant donations and partnerships from large technology firms (Google in particular), this paper addresses ethical concerns within the largest semantic web platform, shows how these transformations of knowledge alienate donated volunteer labor, and offers some ways in which these issues might be mitigated.
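The knowledge graphs discussed above (Google's Knowledge Graph, Wikidata) store facts as subject–predicate–object statements, the "triple" model at the heart of the Semantic Web's RDF standard. A minimal sketch of that idea, using plain Python tuples and illustrative entity names (not real Wikidata identifiers), might look like this:

```python
# Minimal sketch of the subject-predicate-object "triple" model that
# underlies linked-data systems such as Wikidata's knowledge graph.
# Entity and property names below are illustrative, not real IDs.

from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]

# A tiny "knowledge graph": each ready-made fact is one triple.
graph: List[Triple] = [
    ("Douglas_Adams", "instance_of", "human"),
    ("Douglas_Adams", "author_of", "Hitchhikers_Guide"),
    ("Hitchhikers_Guide", "instance_of", "book"),
]

def query(subject: Optional[str] = None,
          predicate: Optional[str] = None,
          obj: Optional[str] = None) -> List[Triple]:
    """Return all triples matching the pattern (None = wildcard),
    roughly what a single SPARQL triple pattern does."""
    return [
        (s, p, o)
        for (s, p, o) in graph
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What is Douglas_Adams the author of?"
print(query(subject="Douglas_Adams", predicate="author_of"))
```

This is the mechanism by which a platform can answer a user's question directly: the question is reduced to a triple pattern, and the matching object is surfaced as a quick answer, bypassing the source document entirely.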
Abstract. The paper reviews various eco-feedback systems, including carbon calculators, and discusses how different disciplinary approaches conceptualise and explain the anticipated impacts of these systems. The European collaborative research project e2democracy investigates how citizen participation combined with long-term CO2 monitoring and feedback can contribute to achieving local climate targets. Empirical results from local climate initiatives in Austria, Germany, and Spain show positive effects in terms of learning about CO2 impacts, increased awareness, enhanced efforts and guidance, and individual empowerment, leading to slightly reduced CO2 emissions. The findings highlight that a combined approach integrating eco-feedback and (e-)participation is promising for fostering sustainability.
Abstract. This paper assesses the status of eParticipation within the political system in Austria. It takes a top-down perspective, focusing on the role of public participation and public policies on eParticipation. The status of eParticipation in Austria, as well as social and political trends regarding civic participation and its electronic embedding, is analysed. The results show a remarkable recent increase in eParticipation projects and initiatives. A major conclusion is that eParticipation is becoming a subject of public policies in Austria; however, the upswing of supportive initiatives for public participation and eParticipation goes together with ambivalent attitudes among politicians and administrators.