“…In highly value-charged areas where inaccuracy costs lives, such as child welfare and abuse, it is common to hear calls after a scandal that a tragedy 'could have been prevented', or that the 'information needed to stop this was there'. Increased accuracy and the avoidance of human bias, rather than just the scalability and cost-efficiency of automation, are cited as major drivers for the development of machine learning models in high-stakes spaces such as these (Cuccaro-Alamin et al 2017).…”
Public bodies and agencies increasingly seek to use new forms of data analysis to provide 'better public services'. These reforms have consisted of digital service transformations generally aimed at 'improving the experience of the citizen', 'making government more efficient' and 'boosting business and the wider economy'. More recently, however, there has been a push to use administrative data to build algorithmic models, often using machine learning, to help make day-to-day operational decisions in the management and delivery of public services, rather than to provide general policy evidence. This chapter asks several questions relating to this shift. What are the drivers of these new approaches? Is public sector machine learning a smooth continuation of e-Government, or does it pose fundamentally different challenges to practices of public administration? And how are public management decisions and practices at different levels enacted when machine learning solutions are implemented in the public sector? Focussing on different levels of government (the macro, the meso and the 'street level'), we map out and analyse current efforts to frame and standardise machine learning in the public sector, noting that they raise several concerns about the skills, capacities, processes and practices that governments currently employ. The forms these take are likely to have value-laden, political consequences worthy of significant scholarly attention.
“…The first (hypothetical) client wishes to develop a child abuse screening tool similar to that of the real cases extensively studied and reported on [11,14,15,21,25,36]. This complex case intersects heavily with applications in high-risk scenarios with dire consequences.…”
Section: SMACTR: An Internal Audit Framework (mentioning)
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development lifecycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
CCS Concepts: • Social and professional topics → System management; Technology audits; • Software and its engineering → Software development process management.
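To make the shape of the stage-by-stage document trail described above concrete, here is a minimal sketch of how per-stage audit documents might be aggregated into an overall report and checked against an organisation's stated principles. All class, field and stage names here are hypothetical illustrations, not SMACTR's actual artifact set.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a staged audit trail; names are illustrative only.

@dataclass
class AuditDocument:
    stage: str                      # e.g. "scoping", "mapping", "testing" (assumed stage names)
    title: str
    findings: list[str] = field(default_factory=list)

@dataclass
class AuditReport:
    principles: list[str]           # the organisation's stated values or principles
    documents: list[AuditDocument] = field(default_factory=list)

    def add(self, doc: AuditDocument) -> None:
        self.documents.append(doc)

    def summary(self) -> str:
        # One line per stage document, forming the overall audit report.
        lines = [f"[{d.stage}] {d.title}: {len(d.findings)} finding(s)"
                 for d in self.documents]
        return "\n".join(lines)

report = AuditReport(principles=["fairness", "safety"])
report.add(AuditDocument(stage="scoping",
                         title="Use-case risk analysis",
                         findings=["high-stakes domain"]))
print(report.summary())
```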
“…There is not a single type of use, a single type of algorithm, uniform types of data, nor a single end user impacted by the use of algorithmic risk prediction tools in child protection. In terms of type of use, algorithmic tools can be used to distribute preventive family support services, to inform child protection screening decision making, or in risk terrain profiling to predict spatially where child abuse reports might occur (Cuccaro-Alamin et al 2017; Daley et al 2016; van der Put et al 2017). The type of algorithm selected categorises data in algorithm-specific ways to generate graded recommendations or binary flags, and can include decision trees or regression methods amongst others, with varying levels of transparency or opacity.…” (A minimal sketch of the graded-recommendation versus binary-flag distinction follows this excerpt.)
Section: Setting the Scene: Algorithms in Context (mentioning)
confidence: 99%
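The excerpt above distinguishes graded recommendations from binary flags, produced by regression methods or decision trees. A minimal sketch of that distinction, using entirely synthetic data standing in for administrative records (the features, threshold and model settings are illustrative assumptions, not any deployed tool):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic placeholder data: four made-up predictor variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A regression method yields a graded recommendation: a risk score in [0, 1].
score = LogisticRegression().fit(X, y).predict_proba(X[:1])[0, 1]

# Thresholding that score, or taking a tree's class output, yields a binary flag.
flag_from_score = score > 0.7                                  # threshold is illustrative
flag_from_tree = DecisionTreeClassifier(max_depth=3).fit(X, y).predict(X[:1])[0]

print(f"graded score={score:.2f}, thresholded flag={flag_from_score}, tree flag={flag_from_tree}")
```

The design point is that the same underlying data can surface to a decision maker either as a continuous grade or as a yes/no flag, with quite different implications for how discretion is exercised.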
“…On the one hand, some argue predictive tools can contribute to the prevention of child abuse and neglect by efficiently predicting future service contact, substantiation or placement through the triage of large linked datasets, drawing on more data than a human could rapidly and accurately appraise, and can select predictor variables based on predictive power in real time (Cuccaro-Alamin et al 2017). Particularly at system intake, when human decision-makers have limited information and time (particularly poor conditions for optimum decision-making), algorithms can quickly compute risks of future system contact (Cuccaro-Alamin et al 2017). On the other hand, issues relating to class and ethnic biases in the data used, other sources of variability in the decisions used as data, data privacy implications, the issue of false positives, limited service user consultation and the lack of transparency of algorithmic processes are cited as serious challenges to the use of algorithmic tools in child protection, particularly where the recipients of services experience high levels of social inequality, marginalisation, and lack of power in the state-family relationship (Keddell 2014, 2015a, 2016; Munro 2019; Eubanks 2017; Dencik et al 2018).…” (A toy sketch of ranking predictor variables by predictive power follows this excerpt.)
Section: Setting the Scene: Algorithms in Context (mentioning)
confidence: 99%
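The claim above that such tools "select predictor variables based on predictive power" can be illustrated with a toy sketch: rank candidate variables by how much information each carries about the outcome. The data, variable count and scoring method here are assumptions for illustration, not those of any real screening tool.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for a linked administrative dataset: six candidate predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))
y = (X[:, 2] - X[:, 4] + rng.normal(scale=0.8, size=1000) > 0).astype(int)

# Score each candidate variable by estimated predictive power for the outcome
# (here, a placeholder for "future system contact"), then rank.
scores = mutual_info_classif(X, y, random_state=1)
ranking = np.argsort(scores)[::-1]
print("predictors ranked by estimated predictive power:", ranking)
```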
“…They are able to draw on more variables derived from large administrative datasets, and to weight them directly in relation to the outcome of interest. They can be updated with data in real time or near real time; they do not rely on a human to input data; and they derive the predictive variables from the data itself, rather than relying on research or professional consensus to identify them (Cuccaro-Alamin et al 2017). They can then be used either to direct limited resources to the most needy or risky families, or to triage notifications to child protection services, serving the utilitarian ideals of demand management in a context of limited resources and of fair distribution based on need, rather than more arbitrary methods of referral or child protection worker decision-making.…”
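One way to picture the "updated with data in real time or near real time" property is incremental learning: the model absorbs each new batch of records without being refitted from scratch. A minimal sketch under that assumption, with synthetic placeholder data (batch sizes, features and loss choice are all illustrative):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
# "log_loss" gives a probabilistic linear model that supports incremental updates.
model = SGDClassifier(loss="log_loss")

for _ in range(10):                                  # e.g. one batch of new records per day
    X_batch = rng.normal(size=(100, 5))
    y_batch = (X_batch[:, 0] > 0).astype(int)        # synthetic outcome labels
    model.partial_fit(X_batch, y_batch, classes=[0, 1])

# Risk score for a newly arrived case, without any full retraining step.
print("risk score for a new case:", model.predict_proba(rng.normal(size=(1, 5)))[0, 1])
```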
Algorithmic tools are increasingly used in child protection decision-making. Fairness considerations of algorithmic tools usually focus on statistical fairness, but there are broader justice implications relating to the data used to construct source databases, and to how algorithms are incorporated into complex sociotechnical decision-making contexts. This article explores how the data that inform child protection algorithms are produced, and relates this production to both traditional notions of statistical fairness and broader justice concepts. Predictive tools face a number of challenging problems in the child protection context, as the data they draw on do not represent child abuse incidence across the population, and child abuse itself is difficult to define, making the key decisions that become data variable and subjective. Algorithms using these data have distorted feedback loops and can contain inequalities and biases. The challenge to justice concepts is that individual and group rights to non-discrimination become threatened as the algorithm itself becomes skewed, leading to inaccurate risk predictions drawing on spurious correlations. The right to be treated as an individual is threatened when statistical risk is based on a group categorisation, and the right of families to understand and participate in the decisions made about them is difficult to realise when they have not consented to data linkage and the function of the algorithm is obscured by its complexity. The use of uninterpretable algorithmic tools may create 'moral crumple zones', where practitioners are held responsible for decisions even when those decisions are partially determined by an algorithm. Many of these criticisms can also be levelled at human decision makers in the child protection system, but the reification of these processes within algorithms renders their articulation even more difficult, and can diminish other important relational and ethical aims of social work practice.
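To make concrete the narrow "statistical fairness" framing the abstract contrasts with broader justice concerns, here is a toy sketch of one common group-fairness check: comparing false positive rates across two groups. All data below are synthetic placeholders; the group labels, flag rule and bias term are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=2000)              # synthetic 0/1 group membership
y_true = rng.integers(0, 2, size=2000)             # synthetic "true" outcome
y_flag = (rng.random(2000) + 0.1 * group) > 0.5    # a deliberately skewed screening flag

# False positive rate per group: P(flagged | no adverse outcome, group g).
for g in (0, 1):
    mask = (group == g) & (y_true == 0)
    fpr = y_flag[mask].mean()
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

A gap between the two rates is what a statistical-fairness audit would surface; the abstract's point is that such a check, while useful, does not by itself address consent, data production, or the distribution of power in the state-family relationship.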