2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9207382

Scaling Active Inference

Abstract: In reinforcement learning (RL), agents often operate in partially observed and uncertain environments. Model-based RL suggests that this is best achieved by learning and exploiting a probabilistic model of the world. 'Active inference' is an emerging normative framework in cognitive and computational neuroscience that offers a unifying account of how biological agents achieve this. On this framework, inference, learning and action emerge from a single imperative to maximize the Bayesian evidence for a niched m…
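For context on the "single imperative" above: in active inference it is standardly formalized as minimizing the variational free energy F, which upper-bounds the negative log evidence, so driving F down maximizes a lower bound on the Bayesian evidence. The standard identity (stated here for context, not quoted from the truncated abstract), with q(s) the approximate posterior over hidden states s and o the observations:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big]}_{\geq 0} - \ln p(o)
```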

Cited by 51 publications (51 citation statements) · References 31 publications
“…Ideally the agent should be trained while interacting with the environment, making the entire system end-to-end. This would require the agent to also evaluate expected free energy during the training process for exploration (Schwartenbeck et al., 2019), i.e., by maintaining a posterior distribution over model parameters similar to Tschantz et al. (2019).…”
Section: Discussion
confidence: 99%
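The recipe this quote sketches — evaluate expected free energy while maintaining a posterior over model parameters — can be illustrated with a small model ensemble standing in for that posterior. A minimal sketch, not the cited papers' implementation; `models`, `state`, `action`, and `preferred_obs` are hypothetical names, and ensemble disagreement is used as a crude proxy for expected information gain:

```python
import numpy as np

def neg_expected_free_energy(models, state, action, preferred_obs):
    """Score one action under an ensemble approximating p(parameters | data).

    Extrinsic term: log of a unit-variance Gaussian prior preference over
    outcomes, evaluated at the mean prediction (up to a constant).
    Epistemic term: predictive disagreement across parameter samples, a
    crude proxy for expected information gain about model parameters.
    """
    preds = np.stack([m(state, action) for m in models])  # (K, obs_dim)
    extrinsic = -0.5 * np.sum((preds.mean(axis=0) - preferred_obs) ** 2)
    epistemic = np.sum(preds.var(axis=0))
    return extrinsic + epistemic  # higher = lower expected free energy

# Toy usage: two sampled linear models that disagree slightly.
models = [lambda s, a: s + a + 0.1, lambda s, a: s + a - 0.1]
state, action, goal = np.zeros(3), np.ones(3), np.ones(3)
print(neg_expected_free_energy(models, state, action, goal))  # ~0.03
```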
“…However, this is impractical for high-dimensional observations such as pixels. Another option is to embed a reward signal in the observation space, as proposed by Tschantz et al. (2019), and put a prior preference on high-reward outcomes.…”
Section: Discussion
confidence: 99%
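The option described here — embed the reward in the observation space and place a prior preference on high-reward outcomes — can be sketched as a log-preference that reads only the reward channel. Illustrative code, assuming the reward is appended as the last entry of the observation vector; all names are hypothetical:

```python
import numpy as np

def log_prior_preference(obs_with_reward, temperature=1.0):
    """Log prior preference p(o) proportional to exp(r(o) / temperature).

    Reading only the embedded reward channel biases planning toward
    high-reward outcomes without scoring raw pixels directly.
    """
    reward = obs_with_reward[..., -1]  # last entry carries the reward signal
    return reward / temperature        # log-preference up to a constant

obs = np.array([0.2, -0.1, 0.7, 1.5])  # three features plus reward = 1.5
print(log_prior_preference(obs))        # 1.5
```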
“…This approach clearly has limitations, in the sense that one has to specify a priori allowable policies, each of which represents a possible path through a deep tree of action sequences. This formulation limits the scalability of the ensuing schemes because only a relatively small number of policies can be evaluated (Tschantz, Baltieri, Seth, & Buckley, 2019). In this letter, we consider active inference schemes that enable a deep tree search over all allowable sequences of [footnote 1: Technically, a functional is defined as a function whose arguments (in this case, beliefs about hidden states) are themselves functions of other arguments (in this case, observed outcomes generated by hidden states).]…”
Section: Introduction
confidence: 99%
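A brute-force version of the deep tree search this quote mentions: score every action sequence up to a fixed depth by a cumulative expected-free-energy-style cost. A sketch only; `step_model` and `efe` are hypothetical stand-ins for a learned transition model and a per-step evaluator, and full enumeration grows as |A|^depth — exactly the scalability pressure the quote describes:

```python
import numpy as np

def tree_search(state, actions, step_model, efe, depth):
    """Exhaustively score all action sequences of length `depth`.

    Returns (best_first_action, best_total_cost), accumulating a per-step
    expected-free-energy-style cost along each branch of the tree.
    """
    if depth == 0:
        return None, 0.0
    best_action, best_cost = None, np.inf
    for a in actions:
        next_state = step_model(state, a)
        _, future = tree_search(next_state, actions, step_model, efe, depth - 1)
        cost = efe(state, a) + future
        if cost < best_cost:
            best_action, best_cost = a, cost
    return best_action, best_cost

# Toy usage: drive a scalar state toward zero over a 3-step horizon.
actions = [-1.0, 0.0, 1.0]
step = lambda s, a: s + a
efe = lambda s, a: (s + a) ** 2  # cost of the state reached by taking a
print(tree_search(2.0, actions, step, efe, depth=3))  # (-1.0, 1.0)
```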