2010
DOI: 10.1007/s11023-010-9191-1

Varieties of Justification in Machine Learning

Abstract: Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introducing some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiating a discussion as to whether they must be treated separately or rather can be viewed consistently from within a single framework.


Cited by 10 publications (12 citation statements)
References 5 publications
“…From the law of large numbers, we know that the expected standard deviation of this randomly drawn subset is proportional to 1/√n. Definition (15) amounts to the well-known Student's t-test [10] and it can be converted into a p-value [11]. Even though Definition (15) is rather intuitive and has good statistical properties, it is possible to opt for different versions of strength in Algocate (e.g.…”
Section: Strength Relation
confidence: 99%
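The 1/√n scaling and the t-test conversion described in this citation statement can be sketched in a short simulation. This is an illustrative sketch, not the Algocate implementation; the function names are hypothetical, and the p-value uses a normal approximation to the t distribution (adequate for large n) to stay within the standard library:

```python
import math
import random
import statistics

random.seed(0)
# A synthetic "full dataset" to draw random subsets from.
population = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def sd_of_subset_means(n, trials=2000):
    """Empirical standard deviation of the means of random subsets of size n."""
    means = [statistics.fmean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

# Law of large numbers: spread of the subset mean shrinks like 1/sqrt(n),
# so quadrupling n should roughly halve the standard deviation.
ratio = sd_of_subset_means(25) / sd_of_subset_means(100)  # theory: sqrt(100/25) = 2

def t_statistic(sample, mu):
    """One-sample Student's t statistic of `sample` against reference mean `mu`."""
    n = len(sample)
    return (statistics.fmean(sample) - mu) / (statistics.stdev(sample) / math.sqrt(n))

def p_value_normal_approx(t):
    """Two-sided p-value for a t statistic, via the large-n normal approximation."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))
```

For small samples one would instead use the exact t distribution (e.g. `scipy.stats.ttest_1samp`), but the scaling behaviour is the same.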
“…Another distinction is made by Klass and Finin [16] based on the intention, which should be to "produce knowledge in the hearer" for explanations and "to affect the beliefs of the hearer" for justifications. From a different perspective, [10] introduces a classification of justifications in machine learning related to the performance (accuracy) of the systems. A series of works [7, 24, 28] refer to justifications as ways of ensuring that a decision is good (in contrast to understanding a decision), which is in line with the approach followed in this paper and with definitions of explanations and justifications in philosophy [3].…”
Section: Related Work
confidence: 99%
“…A different approach aims at justifying the output of a DLN by supplementing it with additional information intended to strengthen the user's confidence. [Footnote 31: Justification in this sense is not to be equated with mathematical criteria or measures of the performance of learning machines. Different varieties of such measures are discussed in an interesting paper by Corfield (2010).]…”
Section: McKeown Argument
confidence: 99%
“…There are only a few works attempting to provide an epistemological treatment of statistical learning theory [cf. Harman and Kulkarni 2007; Corfield et al. 2009; Corfield 2010; von Luxburg and Schölkopf 2011; Spelda 2018], which, moreover, captures only a part of the story. Second, the recent history of ML has been dominated by an empiricist practice that derives estimates of performance from a posteriori evaluation.…”
Section: Introduction
confidence: 99%