2022
DOI: 10.48550/arxiv.2201.13180
Preprint

Learning on Arbitrary Graph Topologies via Predictive Coding

Abstract: Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training on networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex heterarchical structure …
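
The abstract contrasts BP's forward/backward passes with predictive coding (PC), which trains by minimising local prediction errors and can therefore run on graphs with cycles. The sketch below is a minimal toy illustration of that idea, assuming scalar-valued nodes, a hand-picked cyclic edge list, and tanh activations; these choices are ours for illustration and are not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 5
# Adjacency with a cycle (0 -> 1 -> 2 -> 0) plus feedforward edges:
# a topology plain backpropagation could not train directly.
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4)]
w = {e: rng.normal(scale=0.1) for e in edges}  # one scalar weight per edge

f = np.tanh
def df(v):
    return 1.0 - np.tanh(v) ** 2

def relax(x, clamped, n_steps=50, lr=0.1):
    """Relax node activities to minimise the prediction-error energy
    E = 0.5 * sum_i (x_i - mu_i)^2, where mu_i sums over incoming edges."""
    for _ in range(n_steps):
        mu = np.zeros(n_nodes)
        for (j, i) in edges:
            mu[i] += w[(j, i)] * f(x[j])
        eps = x - mu                       # per-node prediction errors
        dx = -eps.copy()                   # pull each node toward its prediction
        for (j, i) in edges:               # plus error flowing back along outgoing edges
            dx[j] += w[(j, i)] * eps[i] * df(x[j])
        dx[clamped] = 0.0                  # clamped nodes (data/targets) stay fixed
        x = x + lr * dx
    return x, eps

# Clamp node 0 to an input and node 4 to a target, relax to equilibrium,
# then apply a local Hebbian-style weight update from the residual errors.
x = rng.normal(size=n_nodes)
x[0], x[4] = 1.0, 0.5
x, eps = relax(x, clamped=[0, 4])
for (j, i) in edges:
    w[(j, i)] += 0.05 * eps[i] * f(x[j])
```

Note that the weight update uses only quantities local to each edge (the presynaptic activity and the postsynaptic error), which is what lets this kind of scheme operate on arbitrary topologies, cycles included.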

Cited by 8 publications (18 citation statements)
References 42 publications (60 reference statements)
“…The connection to BP for supervised learning suggests that PCNs should perform well on image classification tasks. This is indeed the case: the first formulation of PC for supervised learning, equivalent to the one described in the preliminary section, shows that PC is able to obtain a performance comparable to BP on small multilayer networks trained on MNIST (Whittington & Bogacz, 2017).…”
Section: Classification
confidence: 95%
“…2(b). More complex models, which have a similar structure but are augmented with different kinds of connections, have been shown to generalize to unseen images (Ororbia & Kifer, 2020; Salvatori et al., 2022). In particular, Ororbia and Kifer present three generative models: the first is a novel model with recurrent connections, while the second and third are implementations of Rao and Ballard's original PC (Rao & Ballard, 1999) and of a model designed by K. Friston (K. Friston, 2008).…”
Section: Classification
confidence: 99%