2020
DOI: 10.1007/s00146-020-01069-w
Getting into the engine room: a blueprint to investigate the shadowy steps of AI ethics

Abstract: Enacting an AI system typically requires three iterative phases where AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices with ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination…

Cited by 14 publications (10 citation statements)
References 37 publications (18 reference statements)
“…For example, a technology can fail at the organizational level, because technology development can be halted by cultural susceptibility to innovation, interdepartmental competition and lack of R&D input (e.g., Calantone et al 1993;Souder and Sherman 1993;Schilling 1998). A new technology could also fail in the adoption/diffusion process when potential users may not know about the technology, or decide to reject the technology after being informed (Rogers 1962).…”
Section: Technology Failure and Public Controversy
Confidence: 99%
“…Given this, the decisions or acts made by participators or technology itself during innovation practices should be understandable and explainable, which is essential for establishing solid public trust in technological outcomes [34,77]. Whether in the innovation process or in specific practice, the reason or standard for choosing any options in the decision process should be clearly made explicit and be able to be justified [50,83]. Professionals and participators should be able to explain the rationale and the strengths and weaknesses of innovation and technology to relevant audiences in an interpretable, intuitive, and human-understandable way [34,51,55].…”
Section: Keyword Description Exemplar Reference
Confidence: 99%
“…The trio of deontology, utilitarianism, and virtue ethics [6,58,69], and chief thinkers associated with each tradition, namely Immanuel Kant, Jeremy Bentham, and Aristotle, respectively, are taught as alternative perspectives on professional ethics in universities, so naturally, they are expected to get mentions in AI ethics scholarship. A modified conception of virtue ethics [30], Martha Nussbaum's capabilities approach [10], the republican conception of freedom as non-domination [73], as well as the impossibility of endorsing any philosophical foundation to resolve moral dilemmas [24], have all been discussed at length in the philosophy literature, as well. Some of those writings find their way into the professional training of practitioners and specialized scholars, at least in summary form.…”
Section: Philosophical Foundations
Confidence: 99%
“…Recently, calls to go beyond the instrumental notions of ethics to incorporate intrinsic conceptions of values systems [9], and to work with a wide theoretical and methodological toolkit to eschew the narrow notion of ethics as "good design" [61], have increased. Even though the latest scholarship documents an emerging interest in including more purely philosophical arguments in AI ethics [73], the field's engagement with philosophy remains an open question.…”
Section: Philosophical Foundations
Confidence: 99%