2020
DOI: 10.1017/s1755020320000404
Probabilistic Stability, AGM Revision Operators and Maximum Entropy

Abstract: Several authors have investigated the question of whether canonical logic-based accounts of belief revision, and especially the theory of AGM revision operators, are compatible with the dynamics of Bayesian conditioning. Here we show that Leitgeb’s stability rule for acceptance, which has been offered as a possible solution to the Lottery paradox, allows us to bridge AGM revision and Bayesian update: using the stability rule, we prove that AGM revision operators emerge from Bayesian conditioning by an application…
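As a point of reference for the stability rule mentioned in the abstract, the sketch below illustrates the standard formulation of Leitgeb's criterion on a finite probability space: a proposition H is accepted when P(H | E) > 1/2 for every evidence proposition E that has positive probability and is consistent with H. This is not code from the paper; the world names and probabilities are hypothetical, and the threshold 1/2 is the usual textbook choice.

```python
# Minimal sketch (assumptions noted above) of Leitgeb's stability rule and
# Bayesian conditioning on a finite set of possible worlds.
from itertools import chain, combinations

def powerset(worlds):
    """All subsets of a finite set of worlds."""
    ws = list(worlds)
    return chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))

def prob(P, A):
    """Probability of a proposition A (a set of worlds) under the measure P."""
    return sum(P[w] for w in A)

def conditionalize(P, E):
    """Bayesian conditioning: renormalize P on the evidence E (requires P(E) > 0)."""
    pE = prob(P, E)
    return {w: (P[w] / pE if w in E else 0.0) for w in P}

def is_stable(P, H):
    """Stability check: P(H | E) > 1/2 for every E with P(E) > 0
    that is consistent with H (i.e., E intersects H)."""
    H = set(H)
    for E in map(set, powerset(P)):
        if prob(P, E) > 0 and E & H:
            if prob(conditionalize(P, E), H) <= 0.5:
                return False
    return True

# Hypothetical three-world example: w1 carries most of the probability mass.
P = {"w1": 0.90, "w2": 0.06, "w3": 0.04}
print(is_stable(P, {"w1"}))        # True: stays above 1/2 under any consistent evidence
print(is_stable(P, {"w2", "w3"}))  # False: conditioning on {w1, w2} pushes it below 1/2
```

The sketch only checks the acceptance condition; the paper's contribution, per the abstract, is to show how AGM revision operators emerge from Bayesian conditioning via this rule, which the example does not attempt to reproduce.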

Cited by 4 publications (1 citation statement) | References: 30 publications
“…As is widely recognized, the consequence relations that are generated in these two ways behave quite differently. Systems of non-monotonic reasoning can often be given quantitative probabilistic interpretations (see, e.g., Pearl 1989), and there are various ways of ameliorating the inferential tensions in these contexts (Leitgeb, 2017; Mierzewski, 2020). But such resolution usually comes at the expense of the quantitative granularity typical of numerical probabilistic reasoning.…”
Section: Semantic Proposals (mentioning)
confidence: 99%