2017
DOI: 10.1145/3125780
Toward algorithmic transparency and accountability

Cited by 70 publications (50 citation statements). References: 0 publications.
“…Fairness and transparency Alternatively, self-disclosure can be encouraged when firms provide assurances of algorithmic fairness and transparency (Garfinkel et al, 2017). In terms of fairness, audits and certifications provide assurances that the firm's CAs are fair and unlikely to generate biased interactions towards particular subgroups, such as customers of certain race, gender, or socioeconomic status.…”
Section: Encouraging Voluntary Self-disclosure With Conversational Agents (citation type: mentioning)
confidence: 99%
“…ACM [39] introduces a set of principles intended to ensure fairness in the evolving policy and technology ecosystem: awareness, access and redress, accountability, explanation, data provenance, auditability, and validation and testing. We particularly focus on awareness, explanation and auditability, as we do not rely on the Facebook API to collect impressions.…”
Section: Fairness Accountability and Transparency (citation type: mentioning)
confidence: 99%
“…For example, identification of the corpus-the collection of data points from which an AI system learns and bases its decisions-as the source of bias, as in the case of natural language processing that uses only one dialect of English, is similar to the role that literary warrant-or the decision to base classification decisions on the extant collection of literature-plays in the bias of systems such as the Library of Congress Subject Headings toward the historical (often white and male) canon of American literature (Olson, 2000). The way in which knowledge organization scholars have taken these ubiquitous examples of bias to move toward transparency and "responsible bias" (Feinberg, 2007) is a possible path forward for machine learning, consistent with the recent critiques within the field (Garfinkel, Matthews, Shapiro, & Smith, 2017).…”
Section: Bias (citation type: mentioning)
confidence: 89%