2017
DOI: 10.31235/osf.io/6cdhe
Preprint
Logics and practices of transparency and opacity in real-world applications of public sector machine learning

Abstract: Presented as a talk at the 4th Workshop on Fairness, Accountability and Transparency in Machine Learning (FAT/ML 2017), Halifax, Nova Scotia, Canada. Machine learning systems are increasingly used to support public sector decision-making across a variety of sectors. Given concerns around accountability in these domains, and amidst accusations of intentional or unintentional bias, there have been increased calls for transparency of these technologies. Few, however, have considered how logics and practices concer…

Cited by 9 publications (5 citation statements)
References 7 publications
“…The increased management of data science systems is a priority [65,96,105], but "the complex decision-making structure" needed to manage them often exceeds "the human and organizational resources available for oversight" [68]. One limiting factor is the opacity of data science systems [97,100] arising from a variety of reasons: (a) algorithms are often trade-secrets, (b) data science requires specialized knowledge, and (c) novel analytic methods remain conceptually challenging [14]. Neural networks are often critiqued for their black-boxed nature, 3 but researchers argue that even simpler models are not necessarily more interpretable than their complex counterparts [64].…”
Section: Trust, Objectivity and Justification (mentioning)
confidence: 99%
“…Updates to accounting or audit standards, in both the public and private sectors, would make the assessment of traceability substantially more straightforward. Further, investigating the effectiveness of such standards in furthering the goals of traceability (as measured by, say, perspectives on the performance of a system [30,238]) would provide useful benchmarks for those charged with the test and evaluation of practical systems. Understanding the operationalization of this principle, seemingly amenable to testing and assessment more than other, contested principles dealing with fairness, bias, and equity, would demonstrate a path to operationalizing ethical principles in the development of practical systems, and is thus of paramount importance.…”
Section: Adoption (mentioning)
confidence: 99%
“…In computer science, transparency has historically been a significant challenge, and it has been addressed in different ways by different disciplines. We can mention here algorithmic transparency, algorithmic accountability [12,15,27,28,50] and, more recently, interpretable/explainable systems [23]. The common goal of all these disciplines is that an algorithm's actions must be easily understood by users (expert and non-expert) when it is executed in a particular context.…”
Section: Introduction: AI and Explainability (mentioning)
confidence: 99%