2022
DOI: 10.3390/e24101469
A Formal Framework for Knowledge Acquisition: Going beyond Machine Learning

Abstract: Philosophers frequently define knowledge as justified, true belief. We built a mathematical framework that makes it possible to define learning (an increased number of true beliefs) and knowledge of an agent in precise ways, by phrasing belief in terms of epistemic probabilities, defined from Bayes' rule. The degree of true belief is quantified by means of active information I+: a comparison between the degree of belief of the agent and a completely ignorant person. Learning has occurred when either the agent's …
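For orientation, a minimal sketch of the quantity mentioned in the abstract, written in the standard form used in the active-information literature (the exact notation of the paper may differ):

$$ I^{+}(A) = \log \frac{P(A)}{P_0(A)}, $$

where $P(A)$ is the agent's epistemic probability of a proposition A, obtained via Bayes' rule, and $P_0(A)$ is the probability assigned by a completely ignorant person (for instance a maximum-entropy prior). A positive $I^{+}$ for a true proposition indicates a degree of true belief above ignorance, and learning corresponds to an increase of this quantity.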

Cited by 5 publications (3 citation statements). References 57 publications.
“…where the target A ⊂ Ω is a subset of a search space Ω, whereas P and P_0 are probability distributions on Ω that represent searches of the programmer and blind search, respectively. Since its inception, active information has been used in the measurement of bias for machine-learning algorithms (Montañez 2017a, 2017b; Montañez et al. 2019, 2021), in hypothesis testing (Hössjer et al. 2023; Zhou et al. 2023), and in other applications. In fact, active information can be used as a measure of FT if the search space Ω equals the sample space of the physical parameter X, Equation (4) is large for A = ℓ_X, with P_0(A) = F_0(A) as in Equation (1), and d = …”
Section: Mathematical Framework for Learning and Knowledge Acquisition (mentioning)
confidence: 99%
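As a computational companion to the excerpt above, the following is a minimal sketch of active information over a finite search space; the distributions, the target, and the choice of a base-2 logarithm are illustrative assumptions, not values from the cited paper:

import numpy as np

# Finite search space Omega with 10 outcomes; target A is a (hypothetical) subset of Omega.
omega_size = 10
target = {0, 1}

# Blind search: uniform distribution P0 on Omega.
p0 = np.full(omega_size, 1.0 / omega_size)

# Programmer's search: a distribution P biased towards the target (assumed values).
p = np.array([0.30, 0.25, 0.10, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])

def active_information(p, p0, target):
    # I+ = log2(P(A) / P0(A)): extra information the biased search has about the target.
    pa = sum(p[i] for i in target)
    p0a = sum(p0[i] for i in target)
    return np.log2(pa / p0a)

print(active_information(p, p0, target))  # log2(0.55 / 0.20), roughly 1.46 bits

Here I+ is positive because the programmer's distribution places more mass on the target than blind search does.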
“…as the channel between them distorting the message [11,12]. This interpretation, taken from Shannon's information diagram, is particularly important to analyze bias as a modification of the information inherent to the prevalence parameter in Appendix D [13]. Analogously to (9) with Proposition 1, the naive estimator of individuals with symptoms s, $\hat{p}_T^{s,*}$, and the naive estimator of individuals with infection status i, $\hat{p}_T^{*,i}$, are defined as…”
Section: No Testing Errors (mentioning)
confidence: 99%
“…Then $N_s^{(i)}$, the population size of $I_s^{(i)}$, disappears from the sample estimator, and (…) in the Appendix shows that all information in the sample about the group $I_s^{(i)}$ comes from the sampling mechanism $q\left(I_s^{(i)}\right)$. In fact, $p_s^{(i)}$ can be seen as the message sent, $\hat{\mathbf{p}}_T^{s,i}$ as the message received, and $q\left(I_s^{(i)}\right)=P\left(T=1|I_s^{(i)}\right)$ as the channel between them distorting the message [11,12]. This interpretation, taken from Shannon's information diagram, is particularly important to analyze bias as a modification of the information inherent to the prevalence parameter in Appendix [13]…”
Section: No Testing Errors (mentioning)
confidence: 99%
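To make the channel reading of the two excerpts above concrete, here is a small simulation sketch of how a sampling mechanism q can distort a naive prevalence estimate; all group sizes, prevalences, and testing probabilities are invented for illustration, and the notation only loosely follows the excerpt:

import numpy as np

rng = np.random.default_rng(0)

# Two groups (say, with and without a symptom), each with its own true infection prevalence.
group_sizes = np.array([8000, 2000])
true_prev = np.array([0.02, 0.20])   # the "message sent"
q = np.array([0.05, 0.40])           # sampling mechanism: P(tested | group), the "channel"

# Simulate how many people are tested in each group and how many of them are infected.
tested = rng.binomial(group_sizes, q)
infected = rng.binomial(tested, true_prev)

# Naive estimator: pool all tested individuals and ignore q (the "message received").
naive_prev = infected.sum() / tested.sum()

# Re-weighting each group by its population share undoes the distortion introduced by q.
weights = group_sizes / group_sizes.sum()
corrected_prev = np.sum(weights * (infected / tested))

true_overall = np.sum(weights * true_prev)  # 0.8 * 0.02 + 0.2 * 0.20 = 0.056
print(naive_prev, corrected_prev, true_overall)

Because the high-prevalence group is tested far more often, the naive estimate lands well above the population prevalence, which is the kind of information distortion the excerpt attributes to the channel q.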