2021
DOI: 10.1016/j.neunet.2021.08.018

An empirical evaluation of active inference in multi-armed bandits


Cited by 28 publications, with 31 citation statements (1 supporting, 30 mentioning, 0 contrasting) published between 2021 and 2024.
References: 71 publications.
“…If the initial belief π^(0) is a Dirichlet distribution, then π^(t) is also a Dirichlet distribution at any time t for a wide range of learning rules (Faraji et al., 2018; Liakoni et al., 2021; Maheu et al., 2019; Markovic et al., 2021; Meyniel et al., 2016; Modirshanechi et al., 2019; Ryali et al., 2018; Yu & Cohen, 2009), since the Dirichlet distribution is the conjugate prior of the categorical distribution (Efron & Hastie, 2016). We note that the qualitative behavior of confidence in Fig. 5 and Fig. 6 is the same for both definitions C[π^(t)] and CatConf(t).…”
mentioning
confidence: 88%
See 3 more Smart Citations
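The conjugacy property invoked in this statement is easy to state concretely: under exact Bayesian updating (the simplest of the learning rules listed), a Dirichlet belief over the parameters of a categorical distribution stays Dirichlet after every observation, with the observed category's concentration parameter incremented by one. A minimal sketch in Python, for illustration only (the function name is hypothetical):

```python
import numpy as np

# Illustrative sketch of the Dirichlet-categorical conjugacy cited above,
# for the simplest case of exact Bayesian updating.
def dirichlet_update(alpha, observed_category):
    """One conjugate update: a Dirichlet(alpha) prior plus one categorical
    observation gives a Dirichlet posterior with an extra pseudo-count on
    the observed category."""
    alpha_new = np.asarray(alpha, dtype=float).copy()
    alpha_new[observed_category] += 1.0
    return alpha_new

# Example: start from a uniform belief pi^(0) = Dirichlet(1, 1, 1) over three
# categories and observe category 1; the belief stays in the Dirichlet family.
alpha_t = np.ones(3)
alpha_t1 = dirichlet_update(alpha_t, observed_category=1)  # -> [1., 2., 1.]
```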
“…Using this notation, we can write the joint distribution over the observation and the parameter, P^(t)(θ_{t+1}, y_{t+1} | x_{t+1}), as P_{Θ_{t+1}}(θ_{t+1}, y_{t+1} | x_{t+1}; π^(t)), and the updated belief π^(t+1)(θ) as P_{Θ_{t+1}}(θ | y_{t+1}, x_{t+1}; π^(t)). The variational loss or free energy can then be defined as (Friston, 2010; Friston et al., 2017; Liakoni et al., 2021; Markovic et al., 2021; Sajid et al., 2021)…”
Section: Belief-mismatch Surprise 2: Postdictive Surprise
mentioning
confidence: 99%
See 2 more Smart Citations
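The excerpt cuts off before the equation itself. For reference, the standard textbook form of the variational free energy, written here in the notation of the excerpt (the exact parameterization in the cited works may differ), is:

```latex
% Generic variational free energy for an approximate posterior q(theta);
% this is the standard textbook form, not necessarily the exact expression
% omitted from the excerpt above.
F(q) \;=\; \mathbb{E}_{q(\theta)}\!\left[
        \ln q(\theta)
        \;-\; \ln P_{\Theta_{t+1}}\!\bigl(\theta,\, y_{t+1} \mid x_{t+1};\, \pi^{(t)}\bigr)
      \right]
   \;=\; D_{\mathrm{KL}}\!\bigl[\, q(\theta) \,\big\|\, \pi^{(t+1)}(\theta) \,\bigr]
        \;-\; \ln P\bigl(y_{t+1} \mid x_{t+1};\, \pi^{(t)}\bigr)
```

Minimizing F over q therefore both drives q toward the updated belief π^(t+1)(θ) and tightens a bound on the negative log evidence −ln P(y_{t+1} | x_{t+1}; π^(t)).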
“…Although AIF has yet to be scaled (to tackle high-dimensional problems) to the same extent as established approaches, such as deep reinforcement learning [17,18], numerical analyses generally show that active inference performs at least as well in simple environments [9,19-23], and better in environments featuring volatility, ambiguity and context sensitivity [21,22]. In this paper, we consider how AIF's features could help address key technical challenges in robotics and discuss practical robotic applications.…”
Section: Active Inference
mentioning
confidence: 99%
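To make concrete what selecting an arm by active inference involves mechanically, the sketch below scores each arm of a Bernoulli bandit by a simplified expected free energy: a pragmatic term (expected log-preference for reward) plus an epistemic term (expected information gain about the arm's reward probability), and picks the arm that minimizes the score. This is a generic illustration under Beta beliefs, not the specific objective or implementation evaluated by Markovic et al. (2021); all names and the preference value are hypothetical.

```python
import numpy as np
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL divergence KL[Beta(a1, b1) || Beta(a2, b2)]."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

def expected_free_energy(a, b, log_pref_reward=2.0):
    """Simplified per-arm expected free energy for a Bernoulli arm with a
    Beta(a, b) belief: minus (pragmatic value + epistemic value)."""
    p = a / (a + b)                          # predicted reward probability
    pragmatic = p * log_pref_reward          # expected log-preference (reward = 1 preferred)
    # Epistemic value: expected KL between the posterior after the next
    # observation and the current Beta belief (expected information gain).
    epistemic = (p * kl_beta(a + 1, b, a, b)
                 + (1 - p) * kl_beta(a, b + 1, a, b))
    return -(pragmatic + epistemic)

# Pick the arm with the lowest expected free energy; after observing the
# outcome, the chosen arm's Beta belief is updated conjugately (a+1 on
# reward, b+1 otherwise).
beliefs = [(1.0, 1.0), (5.0, 2.0), (2.0, 5.0)]   # Beta parameters per arm
chosen = int(np.argmin([expected_free_energy(a, b) for a, b in beliefs]))
```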