2019
DOI: 10.1108/jices-11-2018-0092
“It would be pretty immoral to choose a random algorithm”

Abstract: Purpose The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, b…

Cited by 10 publications (6 citation statements); references 13 publications.
“…Reporting on empirical work conducted on algorithmic interpretability and transparency, Webb et al (2019) reveal that moral references, particularly on fairness, are consistent across participants discussing their preferences on algorithms. The study notes that people tend to go beyond personal preferences to focus instead on "right and wrong behaviour", as a way to indicate the need to understand the context of deployment of the algorithm and the difficulty of understanding the algorithm and its consequences (Webb et al 2019). In the context of recommender systems, Burke (2017) proposes a multi-stakeholder and multi-sided approach to defining fairness, moving beyond user-centric definitions to include the interests of other system stakeholders.…”
Section: Unfair Outcomes Leading To Discrimination
Confidence: 95%
“…Inscrutable evidence focuses on problems related to the lack of transparency that often characterises algorithms (particularly ML algorithms and models); the socio-technical infrastructure in which they exist; and the decisions they support. Lack of transparency—whether inherent due to the limits of technology or acquired by design decisions and obfuscation of the underlying data (Lepri et al 2018; Dahl 2018; Ananny and Crawford 2018; Weller 2019)—often translates into a lack of scrutiny and/or accountability (Oswald 2018; Fink 2018; Webb et al 2019) and leads to a lack of "trustworthiness" (see Al-Hleg 2019).…”
Section: Inscrutable Evidence Leading To Opacity
Confidence: 99%
“…Although all researchers in this category state that an ethical framework is needed to use AI in organizational decision making, there is no agreement on its design. Some recommend implementing decision rules into AI systems (Webb et al 2019; Wong 2019), while others concentrate on making the machine learn moral guidelines by itself (Bogosian 2017), corresponding to top-down and bottom-up approaches to AI.…”
Section: Ethical Perspectives On Using AI In Strategic Organizational Decision Making
Confidence: 99%
“…In an attempt to offer a new definition, several researchers have analyzed the behavior humans exhibit when working with artificial agents, especially in terms of attributing human values and shortcomings to the machines. The UnBias project by Webb et al (2019) demonstrates that fairness is the guiding principle in decisions, though the understanding of fairness differs among participants. Wong (2019) lists conditions to ensure fairness.…”
Section: Ethical Perspectives On Using AI In Strategic Organizational Decision Making
Confidence: 99%