2020
DOI: 10.3389/fncom.2020.574372

Learning Generative State Space Models for Active Inference

Abstract: In this paper we investigate the active inference framework as a means to enable autonomous behavior in artificial agents. Active inference is a theoretical framework underpinning the way organisms act and observe in the real world. In active inference, agents act in order to minimize their so-called free energy, or prediction error. Besides being biologically plausible, active inference has been shown to solve hard exploration problems in various simulated environments. However, these simulations typically re…
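As a toy illustration of the free-energy idea the abstract refers to (not the paper's actual model; the generative model, parameters, and learning rate below are purely illustrative), perception can be cast as gradient descent on the variational free energy of a Gaussian approximate posterior under a linear-Gaussian generative model:

```python
import numpy as np

# Illustrative linear-Gaussian generative model (not the paper's model):
#   prior      p(s)   = N(mu_p, sig_p^2)
#   likelihood p(o|s) = N(g*s, sig_o^2)
# Approximate posterior: q(s) = N(mu_q, sig_q^2)
# Free energy F = (expected prediction error) + KL[q(s) || p(s)]

def free_energy(o, mu_q, sig_q, mu_p=0.0, sig_p=1.0, g=1.0, sig_o=0.1):
    # -E_q[log p(o|s)]: expected (precision-weighted) prediction error
    accuracy = 0.5 * (np.log(2 * np.pi * sig_o**2)
                      + ((o - g * mu_q) ** 2 + g**2 * sig_q**2) / sig_o**2)
    # KL[q(s) || p(s)]: complexity of deviating from the prior
    complexity = (np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p) ** 2) / (2 * sig_p**2) - 0.5)
    return accuracy + complexity

# Perception as free-energy minimisation: descend F with respect to mu_q
o, mu_q, sig_q = 0.8, 0.0, 0.3
for _ in range(200):
    eps = 1e-5  # central finite-difference gradient
    grad = (free_energy(o, mu_q + eps, sig_q)
            - free_energy(o, mu_q - eps, sig_q)) / (2 * eps)
    mu_q -= 0.01 * grad
# mu_q converges to the exact Bayes posterior mean,
# g * sig_p^2 * o / (g^2 * sig_p^2 + sig_o^2)
```

For these values the exact posterior mean is 0.8 * 100/101 ≈ 0.792, which the descent recovers; the same principle, scaled up to learned deep generative models, is what the paper applies to state-space models.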

Cited by 29 publications (41 citation statements)
References 34 publications

“…While promising, the application is in its early days and much work remains to be undertaken in order to resolve practical challenges and fulfil the framework's potential. Current endeavours include scaling AIF to handle high dimensional state-spaces in a variety of applications [10,12,13,59], effectively learning the generative model from data [2,34], and showing its practicality in the real world, beyond the lab boundaries. While significant engineering challenges remain, the state-of-the-art laboratory experiments show AIF's potential as a powerful method in robotics [14].…”
Section: Discussion (mentioning)
confidence: 99%
“…However, a crucial difference is that the (expected) free energy optimised during planning combines exploitative and explorative behaviour [32] in a Bayes optimal fashion [2,7]. The agent's model, i.e., representations and goals, can then be learnt through few-shot learning [21], structure learning, imitation learning, and evolutionary approaches [1,33,34,35].…”
Section: Active Inference (mentioning)
confidence: 99%
“…In the above experiments, we have shown that it is possible to use the active inference paradigm as a natural solution for active vision on complex tasks in which the distribution over the environment is not defined upfront. Similar to prior work on learning state space models for active inference (Çatal et al., 2020), we learn our generative model directly from data.…”
Section: Discussion (mentioning)
confidence: 99%
“…Hence, we assume the environment is static and its dynamics should not be modeled in our generative model as we do not expect an object on the table to suddenly change color, shape, or move around without external interaction. However, one might extend the generative model depicted here to also include dynamics, similar to Çatal et al. (2020).…”
Section: Methods (mentioning)
confidence: 99%
“…This is the case of the memory-equipped models (Figure 4c) and hierarchical models (Figure 4d). Using memory allows preserving more information about other (past) observations, and has shown encouraging results in training latent dynamics models with deep learning memory models, such as LSTMs and GRUs [47,48,49,124]. The memory increases the capacity of the model and allows more accurate predictions of states that are far in the future, especially when the prior model is unknown and must be learned.…”
Section: Variational World Models (mentioning)
confidence: 99%
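The memory-equipped latent dynamics models mentioned in the last excerpt can be sketched with a single recurrent cell whose hidden state accumulates information across a sequence. Below is a minimal, untrained GRU cell in NumPy (the sizes, initialisation, and the random "encoded observation" inputs are all illustrative, not taken from any cited model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: the hidden state h acts as the model's memory."""
    def __init__(self, n_in, n_h):
        s = 1.0 / np.sqrt(n_h)
        # Gate and candidate weights act on the concatenated [input, hidden]
        self.Wz = rng.uniform(-s, s, (n_h, n_in + n_h))  # update gate
        self.Wr = rng.uniform(-s, s, (n_h, n_in + n_h))  # reset gate
        self.Wh = rng.uniform(-s, s, (n_h, n_in + n_h))  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                        # how much to update
        r = sigmoid(self.Wr @ xh)                        # how much past to reuse
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde                 # blended new memory

# Roll a sequence of encoded inputs through the memory state
cell = GRUCell(n_in=3, n_h=8)
h = np.zeros(8)
for t in range(10):
    x = rng.normal(size=3)  # stand-in for an encoded action/observation
    h = cell.step(x, h)
```

In a trained latent dynamics model, `h` would condition the prediction of the next latent state, letting predictions far in the future draw on the whole observed history rather than only the current state.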