2019
DOI: 10.48550/arxiv.1912.10489
Preprint

Recurrent Feedback Improves Feedforward Representations in Deep Neural Networks

Cited by 2 publications (1 citation statement)
References 15 publications
“…It has been shown that the brain relies on feedback pathways for robust object recognition under challenging conditions [7,8,9,10,11]. In recent years, several approaches have aimed to introduce feedback connections in deep networks to improve not only biological plausibility but also model robustness and accuracy [12,13,14,15]. Importantly, feedback connections can be trained either in a supervised fashion to optimize the task objective (e.g., object recognition) or in an unsupervised way to minimize reconstruction errors (i.e., prediction errors).…”
Section: Introduction
confidence: 99%
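The unsupervised variant described in the citation statement — feedback connections driven to minimize reconstruction (prediction) errors — can be sketched in a few lines. The following is a minimal illustration, not the cited paper's method: all dimensions, weights, and the learning rate are hypothetical, and only the hidden state is updated (weights stay fixed) to show the error-minimization loop itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: flattened input -> hidden representation.
n_in, n_hid = 16, 8
W_ff = rng.normal(scale=0.1, size=(n_hid, n_in))  # feedforward weights
W_fb = rng.normal(scale=0.1, size=(n_in, n_hid))  # feedback (generative) weights

x = rng.normal(size=n_in)  # a toy input
h = W_ff @ x               # initial feedforward pass

# Recurrent feedback loop: refine h by gradient descent on the
# reconstruction error 0.5 * ||x - W_fb @ h||^2.
lr = 0.1
errors = []
for _ in range(50):
    x_hat = W_fb @ h             # feedback prediction of the input
    err = x - x_hat              # prediction error
    errors.append(float(err @ err))
    h = h + lr * (W_fb.T @ err)  # descend the reconstruction loss w.r.t. h

# The squared prediction error shrinks across feedback iterations.
assert errors[-1] < errors[0]
```

In a full predictive-coding setup the same prediction-error signal would also drive learning of `W_ff` and `W_fb`; here only the inference loop over the representation is shown.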