Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 2020
DOI: 10.1145/3375627.3375866
Why Reliabilism Is not Enough

Abstract: In this paper we argue that standard calls for explainability, which focus on the epistemic inscrutability of black-box machine learning models, may be misplaced. If we presume, for the sake of this paper, that machine learning can be a source of knowledge, then it makes sense to ask what kind of justification it involves. How do we reconcile, on the one hand, this seeming justificatory black box with, on the other, the observed widespread adoption of machine learning? We argue that, in general, people implicitly adopt reliab…

Cited by 6 publications (1 citation statement) · References 27 publications
“…There continues to be a debate in AGI development around the tension between an AGI's ability to achieve epistemic justification and its ability to complete practical tasks. This amounts to determining whether an AGI will know why it succeeds, or only that it has reliabilist success (Smart et al, 2020). In the project of a self-aware machine, the question is: will AGI have consciousness or merely simulate consciousness (Dong et al, 2020)?…”
Section: Attention-based Models and Their Limitations
confidence: 99%