2020
DOI: 10.48550/arxiv.2012.14905
Preprint

Meta Learning Backpropagation And Improving It

Cited by 6 publications (9 citation statements)
References 0 publications

“…A similar direction has been taken by Kirsch et al. [33], where the neurons and synapses of a neural network are also generalized to higher-dimensional message-passing systems, but in their case each synapse is replaced by a recurrent neural network (RNN) with the same shared parameters. These RNN synapses are bi-directional and govern the flow of information across the network.…”
Section: Meta-learning
confidence: 94%
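The mechanism quoted above, every synapse modeled by one and the same small recurrent cell (shared parameters, per-synapse hidden state) exchanging messages in both directions, can be illustrated with a minimal sketch. The plain tanh cell, the sizes, and all variable names below are assumptions made for illustration, not the actual architecture of Kirsch et al.:

```python
# Minimal sketch (NumPy): every synapse is the SAME small recurrent cell
# (shared parameters) with its OWN hidden state, and messages flow both
# forward (activations) and backward (errors) through it.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT, S = 3, 2, 4                  # neuron counts and per-synapse state size

# Shared cell parameters: one set of weights reused by every synapse.
W_state = rng.normal(0, 0.3, (S, S))      # state -> state
W_fwd   = rng.normal(0, 0.3, (S, 1))      # forward input (pre-activation) -> state
W_bwd   = rng.normal(0, 0.3, (S, 1))      # backward input (error) -> state
w_out_f = rng.normal(0, 0.3, (1, S))      # state -> forward message
w_out_b = rng.normal(0, 0.3, (1, S))      # state -> backward message

# Per-synapse hidden states: one state vector for each (input, output) pair.
h = np.zeros((N_IN, N_OUT, S))

def synapse_step(h_ij, fwd_in, bwd_in):
    """One update of the shared cell for a single synapse."""
    h_new = np.tanh(W_state @ h_ij + (W_fwd * fwd_in + W_bwd * bwd_in).ravel())
    return h_new, float(w_out_f @ h_new), float(w_out_b @ h_new)

def message_pass(x, err):
    """One bi-directional pass: x are input activations, err are output errors."""
    global h
    y = np.zeros(N_OUT)                   # aggregated forward messages per output neuron
    back = np.zeros(N_IN)                 # aggregated backward messages per input neuron
    for i in range(N_IN):
        for j in range(N_OUT):
            h[i, j], m_f, m_b = synapse_step(h[i, j], x[i], err[j])
            y[j] += m_f
            back[i] += m_b
    return y, back

x = np.array([1.0, -0.5, 0.2])
y, _ = message_pass(x, err=np.zeros(N_OUT))            # forward-only pass
_, back = message_pass(x, err=np.array([0.3, -0.1]))   # pass that also carries errors
print(y, back)
```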
“…Since RNNs are general-purpose computers, they were able to demonstrate that the system can encode the gradient-based backpropagation algorithm by training the system to simply emulate backpropagation, rather than explicitly calculating gradients via hand-engineering. Both [55] and Kirsch et al. [33] attempt to generalize the accepted notion of artificial neural networks, where each neuron can hold multiple states rather than a scalar value, and each synapse functions bi-directionally to facilitate both learning and inference. In this figure, Kirsch et al. [33] propose using an identical recurrent neural network (RNN) (with different internal hidden states) to model each synapse, and show that the network can be trained by simply running the RNN cells, without the use of backpropagation.…”
Section: Meta-learning
confidence: 99%
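The "emulate backpropagation" idea in the quote above can be sketched as a regression problem: fit a small shared update rule so that the update it emits for each weight matches the true backprop gradient of a toy linear model. The feature choice, the toy model, and all names below are illustrative assumptions, not the cited work's training setup:

```python
# Minimal sketch (NumPy): meta-train a shared per-weight update rule to
# reproduce backprop gradients instead of hand-coding the update.
import numpy as np

rng = np.random.default_rng(1)

def true_grad(w, x, y):
    """Backprop gradient of 0.5*||w @ x - y||^2 w.r.t. w (outer product of error and input)."""
    err = w @ x - y
    return np.outer(err, x)

# Learned update rule: a linear map over per-weight local features [pre, error, pre*error].
theta = rng.normal(0, 0.1, 3)

def local_features(w, x, y):
    err = w @ x - y
    pre = np.broadcast_to(x, w.shape)               # pre-synaptic activity per weight
    post = np.broadcast_to(err[:, None], w.shape)   # error signal per weight
    return np.stack([pre, post, pre * post], axis=-1)

# Meta-training: regress the learned update onto the true gradient.
for step in range(500):
    w = rng.normal(0, 1, (2, 3))
    x = rng.normal(0, 1, 3)
    y = rng.normal(0, 1, 2)
    feats = local_features(w, x, y)
    pred = feats @ theta
    target = true_grad(w, x, y)
    # gradient of the mean squared emulation error w.r.t. theta
    grad_theta = 2 * ((pred - target)[..., None] * feats).mean(axis=(0, 1))
    theta -= 0.1 * grad_theta

print(theta)  # approaches [0, 0, 1]: the rule rediscovers grad = pre * error
```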
“…A meta-learned policy that can adapt the weights of a neural network to its inputs during inference time has been proposed in fast weights [64, 66], associative weights [2], hypernetworks [35], and Hebbian-learning [51, 52] approaches. Recent works [45, 62] combine ideas of self-organization with meta-learning RNNs, and have demonstrated that modular meta-learning RNN systems not only can learn to perform SGD-like learning rules, but can also discover more general learning rules that transfer to classification tasks on unseen datasets.…”
Section: Related Work
confidence: 99%
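One of the ideas listed above, adapting weights at inference time with a Hebbian-style fast-weight update rather than gradient descent, fits in a few lines. The decay, learning rate, and single-layer setup are illustrative assumptions, not any specific method from the cited works:

```python
# Minimal sketch (NumPy): Hebbian fast weights updated while the model runs.
import numpy as np

rng = np.random.default_rng(2)
W_slow = rng.normal(0, 0.5, (4, 3))   # meta-learned "slow" weights (fixed at inference)
W_fast = np.zeros((4, 3))             # fast weights, updated during inference
eta, decay = 0.5, 0.9                 # per-step Hebbian rate and fast-weight decay

def step(x):
    """One inference step: read out with slow+fast weights, then update fast weights."""
    global W_fast
    h = np.tanh((W_slow + W_fast) @ x)
    # Hebbian rule: strengthen connections between co-active pre/post units.
    W_fast = decay * W_fast + eta * np.outer(h, x)
    return h

for x in rng.normal(0, 1, (5, 3)):    # a short input sequence
    print(step(x))
```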
“…Meta-optimizers. Meta-optimizers [17, 18, 71, 91, 92] define a problem similar to our task, but where H_D is an RNN-based model predicting the gradients ∇w, mimicking the behavior of iterative optimizers. Therefore, the objective of meta-optimizers may be phrased as learning to optimize, as opposed to our learning to predict parameters.…”
Section: Related Work
confidence: 99%
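The "learning to optimize" interface described above can be sketched as a recurrent rule applied coordinate-wise: it reads each parameter's gradient and its own hidden state, and emits that parameter's update step (rather than emitting the parameters directly). The toy quadratic objective, the hand-set cell weights, and all names are illustrative assumptions:

```python
# Minimal sketch (NumPy): a coordinate-wise recurrent update rule used as an optimizer.
import numpy as np

def grad(w):
    return 2 * (w - 3.0)              # gradient of the toy objective ||w - 3||^2

# Coordinate-wise "RNN" cell with shared weights (hand-set here; normally meta-learned).
A, B, C = 0.5, -0.05, 1.0             # state transition, gradient input, update readout

w = np.array([10.0, -4.0, 0.0])       # parameters being optimized
h = np.zeros_like(w)                  # one hidden state per parameter

for t in range(50):
    g = grad(w)
    h = np.tanh(A * h + B * g)        # recurrent state tracks gradient history
    w = w + C * h                     # the cell's output is the parameter update

print(w)                              # approaches the optimum at 3.0
```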