Existing approaches to ‘algorithmic accountability’, such as transparency, provide an important baseline but are insufficient to address the (potential) harm to human rights caused by the use of algorithms in decision-making. To address this impact effectively, we argue for a framework that sets out a shared understanding and means of assessing harm; that is capable of dealing with multiple actors and different forms of responsibility; and that applies across the full algorithmic life cycle, from conception to deployment. In this article, we suggest that international human rights law already provides this framework, although it is generally overlooked in debates on algorithmic accountability. We apply this framework to illustrate its effect both on the choice to employ algorithms in decision-making in the first place and on the safeguards required. While our analysis indicates that the use of algorithms may be restricted in some circumstances, we argue that these findings are not ‘anti-innovation’ but rather appropriate checks and balances to ensure that algorithms contribute to society while safeguarding against risks.
The digital age has brought new possibilities and potency to state surveillance activities. Of particular significance has been the advent of bulk communications data monitoring, which involves the large-scale collection, retention and subsequent analysis of communications data. The scale and invasiveness of these techniques raise key questions regarding their ‘necessity’ from a human rights law perspective, and they are the subject of ongoing human rights-based litigation. This article examines bulk communications data surveillance through the lens of human rights law, undertaking a critical examination of both the potential utility of bulk communications surveillance and – drawing on social science analysis – the potential human rights-related harm. It argues that utility and harm calculations can conceal the complex nature of contemporary digital surveillance practices, rendering current approaches to the ‘necessity’ test problematic. The article argues that (i) the distinction between content and communications data should be removed; (ii) analysis of surveillance-related harm should extend beyond privacy implications to incorporate society-wide effects; and (iii) a more nuanced approach to bulk communications data should be developed. Suggestions are provided as to how the ‘necessity’ of bulk surveillance measures may be evaluated, with an emphasis on understanding the type of activity that may qualify as ‘serious crime’.
Open source information, particularly digital open source information that is publicly available on the internet, plays an increasingly central role in the landscape of human rights investigations. This article provides a thorough analysis of how open source information is used in practice by UN human rights fact-finding missions, commissions of inquiry and other official human rights investigations. Combining data from semi-structured interviews with investigators experienced in open source human rights investigations and a review of reports and other primary and secondary sources, it examines the utility of open source information to UN human rights investigative bodies. It posits that open source research can offer tremendous benefits in planning investigations, supplying lead evidence, and providing direct evidence of violations, thereby overcoming some of the access barriers that investigators face and potentially giving voice to a wider range of perspectives. On the other hand, this article argues that open source investigations should be approached with a clear eye to their challenges and possible pitfalls. These include the gaps in open source information and the potential for open source investigations to silence already-marginalized communities, as well as the resource-intensive nature of these investigations, the danger that open source information can affect witnesses’ perceptions, and the risks posed by online disinformation. As open source research is likely to comprise an important component of the human rights investigator’s toolbox in the future, this article argues in favour of the institutional buy-in, resourcing, and methodological rigour that it deserves.