Amid good intentions, such as providing humanitarian assistance to refugees, the use of biometric technology in humanitarian refugee management may entail various risks for the implicated refugee populations. Drawing on insights from science and technology studies, this article introduces a distinction between risks stemming from technology failure and risks stemming from successful uses of biometric technology. The article thus departs from the literature, in which technology failure has been the focus, by showing that analysing the effects of technology success adds an important dimension to our analysis of the range of risks that may emerge in the context of humanitarian technology uses. The usefulness of this distinction is then illustrated through an analysis of the use by the United Nations High Commissioner for Refugees (UNHCR) of iris recognition in the repatriation of Afghan refugees; besides risks of failure at the implementation stage, risks also emerged once refugees had successfully registered their biometric data with UNHCR. To recognize how humanitarian refugee biometrics produces digital refugees at risk of exposure to new forms of intrusion and insecurity, we need to appreciate how successful technology can have critical implications arising from how technology is constituted in, and constitutive of, social phenomena.
This article aims to acknowledge and articulate the notion of "humanitarian experimentation". Whether arising through innovation or through uncertain contexts, managing risk is a core component of the humanitarian initiative, but not all risk is created equal. There is a stark ethical and practical difference between managing risk and introducing it, which is mitigated in other fields through experimentation and regulation. This article identifies and historically contextualizes the concept of humanitarian experimentation, which is increasingly pertinent as a range of humanitarian subfields embark on projects of digitization and privatization. This trend is illustrated here through three contemporary examples of humanitarian innovations (biometrics, data modelling, cargo drones), with references to critical questions about adherence to the humanitarian "do no harm" imperative. This article outlines a broad taxonomy of harms, intended to serve as the starting point for a more comprehensive conversation about humanitarian action and the ethics of experimentation.
Surprisingly little attention is paid to the role of digital technology and related forms of data production, storage, processing, and sharing in humanitarian governance. This paper uses Michael Barnett's conceptualisation of humanitarian governance in arguing for a better accounting of technology in the literature on humanitarian governance. Specifically, it proposes a two-fold alertness to governance of (a) the uses of new technology and (b) that which is produced by digital technologies. This elucidates important issues, including that of access to digitalised data collected from humanitarian subjects, with implications for their (in)security. The paper concludes by suggesting that access is no longer 'only' about challenges of gaining access to vulnerable populations, but also about challenges of preventing access to vulnerable digital bodies and their use for aggressive purposes. In short, access and protection acquire a new dimension, and analyses of humanitarian governance must be more attentive to the role of digital technology.
Better management and new technological solutions are increasingly portrayed as the way to improve refugee protection and enhance the accountability of humanitarian actors. Taking concepts of legibility, quantification and co-production as the point of departure, this article explores how techno-bureaucratic practices shape conceptions of international refugee protection. We do this by examining the evolving roles of results-based management (RBM), biometrics and cash-based interventions as 'accountability technologies' in the United Nations High Commissioner for Refugees' international protection efforts. The article challenges the assumption that these technologies produce a seamless form of accountability that is equally attentive to donor requests and the protection needs of refugees. By focusing on how the constitution of these techniques as 'accountability solutions' shapes conceptions of the very meaning of protection (i.e. the problem to be addressed), we also show what dimensions of protection get omitted in this co-production of technical solutions and socio-political problems.
Questions about how algorithms contribute to (in)security are under discussion across international political sociology. Building upon and adding to these debates, our collective discussion foregrounds questions about algorithmic violence. We argue that it is important to examine how algorithmic systems feed (into) specific forms of violence, and how they justify violent actions or redefine what forms of violence are deemed legitimate. Bringing together different disciplinary and conceptual vantage points, this collective discussion opens a conversation about algorithmic violence focusing both on its specific instances and on the challenges that arise in conceptualizing and studying it. Overall, the discussion converges on three areas of concern—the violence undergirding the creation and feeding of data infrastructures; the translation processes at play in the use of computer/machine vision across diverse security practices; and the institutional governing of algorithmic violence, especially its organization, limitation, and legitimation. Our two-fold aim is to show the potential of a cross-disciplinary conversation and to move toward an interactional research agenda. While our approaches diverge, they also enrich each other. Ultimately, we highlight the critical purchase of studying the role of algorithmic violence in the fabric of the international through a situated analysis of algorithmic systems as part of complex, and often messy, practices.
It has been argued that we are witnessing a retreat from democracy promotion in liberal interventionism. Focusing on the roll-out of biometric voter registration (BVR) across Africa, as supported by institutions such as the United Nations Development Programme, this article suggests that rather than a retreat we are seeing the emergence of a new and seemingly lighter approach to liberal democracy promotion. Through an analysis of the use of BVR in Kenyan elections, the article illustrates some key implications of this development. At the local level, the framing of BVR as a 'solution' omits important challenges to democratic elections in Kenya. At the global level, the roll-out of BVR reinforces unequal global power structures, for example by constituting an increasing number of African states as laboratories for the trialling of a technology which, due to fears of hacking, has now been rolled back in the US. To make this argument, the article combines insights from recent debates about the state of liberal interventionism with insights from Michel Foucault and Sheila Jasanoff about the politics of technology.