2022
DOI: 10.1101/2022.01.20.477125
Preprint
Active Predictive Coding Networks: A Neural Solution to the Problem of Learning Reference Frames and Part-Whole Hierarchies

Abstract: We introduce Active Predictive Coding Networks (APCNs), a new class of neural networks that solve a major problem posed by Hinton and others in the fields of artificial intelligence and brain modeling: how can neural networks learn intrinsic reference frames for objects and parse visual scenes into part-whole hierarchies by dynamically allocating nodes in a parse tree? APCNs address this problem by using a novel combination of ideas: (1) hypernetworks are used for dynamically generating recurrent neural networ…
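
As a rough illustration of idea (1) in the abstract, the toy sketch below uses a hypernetwork to map a higher-level state vector to the weights of a lower-level recurrent network. The linear hypernetwork, the dimensions, and the function names are assumptions made for illustration; this is not the APCN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 16    # higher-level state that parameterizes the lower level (assumed size)
HIDDEN_DIM = 8    # hidden size of the generated lower-level RNN (assumed size)
INPUT_DIM = 4     # input size of the lower-level RNN (assumed size)

# Hypernetwork: a single linear map from the higher-level state to the
# flattened weight matrices of the lower-level RNN.
n_weights = HIDDEN_DIM * (HIDDEN_DIM + INPUT_DIM)
W_hyper = rng.normal(0, 0.1, size=(n_weights, STATE_DIM))

def generate_rnn(higher_state):
    """Generate recurrent (W_hh) and input (W_xh) weights from a higher-level state."""
    flat = W_hyper @ higher_state
    W_hh = flat[:HIDDEN_DIM * HIDDEN_DIM].reshape(HIDDEN_DIM, HIDDEN_DIM)
    W_xh = flat[HIDDEN_DIM * HIDDEN_DIM:].reshape(HIDDEN_DIM, INPUT_DIM)
    return W_hh, W_xh

def rnn_step(h, x, W_hh, W_xh):
    """One step of the dynamically generated lower-level RNN."""
    return np.tanh(W_hh @ h + W_xh @ x)

# Usage: a different higher-level state yields a different lower-level RNN.
higher_state = rng.normal(size=STATE_DIM)
W_hh, W_xh = generate_rnn(higher_state)
h = np.zeros(HIDDEN_DIM)
for _ in range(5):
    h = rnn_step(h, rng.normal(size=INPUT_DIM), W_hh, W_xh)
print(h.shape)  # (8,)
```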

Cited by 5 publications (5 citation statements)
References 10 publications
“…The DPC model can be extended to action-conditioned prediction and hierarchical planning (see, e.g., [77,78] for initial steps in this direction). There is a growing body of evidence that neural activity in the sensory cortex is predictive of the sensory consequences of an animal's own actions [2,4,13,41,79].…”
Section: Discussion (mentioning)
confidence: 99%
“…There is a growing body of evidence that neural activity in the sensory cortex is predictive of the sensory consequences of an animal's own actions [2,4,13,41,79]. These results can be understood in the context of a DPC model in which the transition function at each level is a function of both a state and an action at that level, thereby allowing the hierarchical network to predict the consequences of actions at multiple levels of abstraction [77,78]. Such a model allows probabilistic inference to be used not only for perception but also for hierarchical planning, where actions are selected to minimize the sensory prediction errors with respect to preferred goal states.…”
Section: Discussion (mentioning)
confidence: 99%
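
The action-conditioned extension described in this statement, where the transition function at each level takes both a state and an action, can be pictured with a minimal two-level sketch. The linear-tanh dynamics, the dimensions, and the variable names below are illustrative assumptions, not the DPC or APCN model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_transition(state_dim, action_dim):
    """Return a simple linear-tanh transition f(s, a) -> next state (illustrative)."""
    A = rng.normal(0, 0.1, size=(state_dim, state_dim))
    B = rng.normal(0, 0.1, size=(state_dim, action_dim))
    return lambda s, a: np.tanh(A @ s + B @ a)

# One transition function per level of the hierarchy; the higher level is assumed
# to describe more abstract states and actions than the sensory level.
f_high = make_transition(state_dim=8, action_dim=2)   # abstract level
f_low = make_transition(state_dim=16, action_dim=4)   # sensory level

s_high, a_high = np.zeros(8), rng.normal(size=2)
s_low, a_low = np.zeros(16), rng.normal(size=4)

# Predict the consequences of actions at both levels of abstraction.
s_high_next = f_high(s_high, a_high)
s_low_next = f_low(s_low, a_low)
print(s_high_next.shape, s_low_next.shape)  # (8,) (16,)
```

In such a scheme, planning can then proceed by selecting the actions at each level that minimize prediction errors with respect to preferred goal states, as the quoted passage describes.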
“…The expected information gain underwrites the principles of optimal Bayesian design ( Lindley, 1956 ; Fields et al, 2021 ), while expected cost underwrites Bayesian decision theory ( Berger, 2011 ; Fields et al, 2021 ; Gklezakos and Rao, 2022 ). However, there is a twist that distinguishes active inference from expected utility theory.…”
Section: Generating Hunches: The Bayesian Brain, Active Inference, And ... (mentioning)
confidence: 99%
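
For context, the decomposition alluded to in this statement is usually written as follows in the active inference literature; the notation below is the standard textbook form rather than that of any one of the cited papers.

```latex
% Expected free energy of a policy \pi (standard active-inference form):
% an expected-cost term (deviation from preferred outcomes, Bayesian decision theory)
% minus an expected information gain (epistemic value, optimal Bayesian design).
G(\pi) \;=\;
\underbrace{-\,\mathbb{E}_{q(o \mid \pi)}\!\left[\ln p(o \mid C)\right]}_{\text{expected cost}}
\;-\;
\underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[ D_{\mathrm{KL}}\!\left( q(s \mid o, \pi)\,\|\,q(s \mid \pi) \right) \right]}_{\text{expected information gain}}
```

Selecting policies that minimize G(π) therefore trades off realizing preferred outcomes against gaining information, which is the twist that separates active inference from plain expected utility maximization.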
“…Here we use the decoded patches $\{x^{k-1}_1, \ldots, x^{k-1}_{\tau_{k-1}}\}$ as accumulated evidence to update $z^k_t$ (similar to other predictive coding models [11,3]).…”
Section: Recursive Neural Programs (mentioning)
confidence: 99%
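
One way to picture the update described in this statement is the toy sketch below, where the higher-level state $z^k_t$ is nudged by the average prediction error over the accumulated lower-level patches. The linear decoder, learning rate, and dimensions are assumptions for illustration only, not the authors' inference rule.

```python
import numpy as np

rng = np.random.default_rng(2)

PATCH_DIM = 64   # dimensionality of a decoded lower-level patch (assumed)
Z_DIM = 10       # dimensionality of the higher-level state z (assumed)

W_dec = rng.normal(0, 0.1, size=(PATCH_DIM, Z_DIM))  # illustrative linear decoder

def update_z(z, patches, lr=0.1):
    """Nudge the higher-level state z using the accumulated decoded patches as evidence."""
    errors = [p - W_dec @ z for p in patches]   # prediction error for each patch
    grad = W_dec.T @ np.mean(errors, axis=0)    # average error projected back onto z
    return z + lr * grad

z = np.zeros(Z_DIM)
accumulated_patches = [rng.normal(size=PATCH_DIM) for _ in range(5)]
z = update_z(z, accumulated_patches)
print(z.shape)  # (10,)
```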
“…We introduce recursive neural programs (RNPs), which address this problem by creating a fully differentiable recursive tree representation of sensory-motor programs. Our model builds on past work on Active Predictive Coding Networks [3] in using state and action networks but is fully generative, recursive, and probabilistic, allowing a structured variational approach to inference and sampling of neural programs. The key differences between our approach and existing approaches are: 1) Our approach can be extended to arbitrary tree depth, creating a "grammar" for images that can be recursively applied; 2) our approach provides a sensible way to perform gradient descent in hierarchical "program space," and 3) our model can be made adaptive by letting information flow from children to parents in the tree, e.g., via prediction errors [11,3].…”
Section: Introduction (mentioning)
confidence: 99%
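
To make the recursive tree structure concrete, the toy sketch below expands a parent state into child states by repeatedly applying a state network and an action network, yielding a small part-whole parse tree. The recursion depth, network forms, noise, and dimensions are assumptions for illustration; this is not the RNP architecture itself.

```python
import numpy as np

rng = np.random.default_rng(3)

STATE_DIM, ACTION_DIM, N_CHILDREN = 6, 2, 3  # illustrative sizes

def linear(out_dim, in_dim):
    """Return a small tanh layer with its own random weights (illustrative)."""
    W = rng.normal(0, 0.2, size=(out_dim, in_dim))
    return lambda v: np.tanh(W @ v)

# "State network" proposes a child state from the parent state and action;
# "action network" proposes the action (e.g., where to attend) for the next child.
state_net = linear(STATE_DIM, STATE_DIM + ACTION_DIM)
action_net = linear(ACTION_DIM, STATE_DIM)

def expand(state, depth):
    """Recursively expand a node into a tree of (state, children) dictionaries."""
    if depth == 0:
        return {"state": state, "children": []}
    children = []
    for _ in range(N_CHILDREN):
        # Noise stands in for a stochastic choice of where/what to generate next.
        action = action_net(state) + 0.1 * rng.normal(size=ACTION_DIM)
        child_state = state_net(np.concatenate([state, action]))
        children.append(expand(child_state, depth - 1))
    return {"state": state, "children": children}

tree = expand(rng.normal(size=STATE_DIM), depth=2)  # a small part-whole parse tree
print(len(tree["children"]), len(tree["children"][0]["children"]))  # 3 3
```

Because the same state and action networks are reused at every node, the expansion can in principle be applied to arbitrary depth, which is the sense in which the quoted passage speaks of a recursively applicable "grammar" for images.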