2013
DOI: 10.1016/j.neucom.2012.12.047
Intrinsic plasticity via natural gradient descent with application to drift compensation

Abstract: This paper investigates the learning dynamics of intrinsic plasticity (IP), a learning rule that tunes a neuron's activation function such that its output distribution becomes approximately exponential. The information-geometric properties of intrinsic plasticity are analyzed, and the improved natural gradient intrinsic plasticity (NIP) dynamics are evaluated for a variety of input distributions. Together with a further new modification of the IP rule, the high capability of NIP to cope with …

Cited by 4 publications (8 citation statements)
References 26 publications
“…Adapting the conventional gradient with respect to the Riemannian metric corrects for this non-linearity, such that the distance between two parameter sets transfers linearly to the output space. This change of the gradient is termed the natural gradient and leads to a substantial improvement in the convergence rate for IP (Neumann and Steil, 2012; Neumann et al., 2013). Therefore the natural gradient is used in this paper due to these technical benefits, although the adaptation of DNFs with IP proposed in this paper in principle also works with the standard IP adaptation.…”
Section: Methods (mentioning)
Confidence: 99%
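To make the correction described in this excerpt concrete, here is a minimal LaTeX sketch of the natural-gradient update in its usual form; the notation is mine, with F the Fisher information matrix of the output density f_y:

\nabla^{\mathrm{nat}}_{\theta} L \;=\; F(\theta)^{-1}\,\nabla_{\theta} L,
\qquad
F_{ij}(\theta) \;=\; \mathbb{E}_{y \sim f_y}\!\left[ \frac{\partial \log f_y(y;\theta)}{\partial \theta_i}\, \frac{\partial \log f_y(y;\theta)}{\partial \theta_j} \right].

Premultiplying by F(\theta)^{-1} rescales the Euclidean gradient so that steps of equal size in parameter space correspond to steps of equal size between output distributions, which is exactly the correction the excerpt describes.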
“…A natural gradient-based parameter adaptation for IP, termed NIP, has been derived in Neumann and Steil (2012); here only the resulting learning rules are given:…”
Section: Methods (mentioning)
Confidence: 99%
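The NIP learning rules themselves are truncated out of this excerpt and are not reproduced here. For orientation only, the Euclidean baseline that NIP preconditions is the stochastic IP rule of Triesch (2005) for a sigmoid neuron y = 1/(1 + exp(-(a x + b))), with target mean \mu and learning rate \eta:

\Delta b \;=\; \eta\left( 1 - \left( 2 + \tfrac{1}{\mu} \right) y + \tfrac{y^2}{\mu} \right),
\qquad
\Delta a \;=\; \frac{\eta}{a} + x\,\Delta b.

NIP replaces this Euclidean gradient step by its natural-gradient counterpart, i.e. the same gradient premultiplied by the inverse Fisher matrix, as sketched above.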
“…The "wicked-map problem" appears, since it is unclear whether the Euclidean distance in the (a, b)-plane is the most natural way to measure distances between the output distributions f y that are parametrized by a and b. In fact, in 2013 a different dynamics has been predicted for the same cost function, but under the assumption of the Fisher information metric 1 [22] which can be considered a more natural choice to measure distances between dis-1 See Section 4 for more details on the Fisher metric.…”
Section: Gradient Descent In Neurosciencementioning
confidence: 99%
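A one-line illustration of the point raised here, assuming the parametrization \theta = (a, b): the Euclidean choice measures parameter changes by

ds^2_{\mathrm{Euclid}} \;=\; da^2 + db^2,

whereas the Fisher information metric measures them by their effect on the output distribution,

ds^2_{\mathrm{Fisher}} \;=\; \sum_{i,j} F_{ij}(a, b)\, d\theta_i\, d\theta_j,

so two updates of equal Euclidean length can move f_y by very different amounts; this mismatch is the "wicked-map problem" named in the excerpt.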
“…In [5,7,6,24] the principle of intrinsic plasticity is transferred to ELMs and introduced as an efficient pretraining method, aimed at adapting the hidden-layer weights and biases such that the output distribution of the hidden layer is shaped like an exponential distribution. The only parameter of batch intrinsic plasticity is the mean μ_exp of the target exponential distribution.…”
Section: Batch Intrinsic Plasticity (BIP) ELM (mentioning)
Confidence: 99%
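To illustrate the scheme this excerpt refers to, here is a minimal Python sketch of batch intrinsic plasticity pretraining under my reading of the cited method; all names (batch_intrinsic_plasticity, mu_exp, and so on) are mine, not from the papers, and this is a sketch rather than the authors' implementation:

import numpy as np

def batch_intrinsic_plasticity(X, W, mu_exp=0.2, seed=0):
    """Fit per-neuron slopes a and biases b so that
    sigmoid(a * (X @ W) + b) is approximately Exp(mu_exp)-distributed."""
    rng = np.random.default_rng(seed)
    S = X @ W                          # synaptic activations, shape (n, h)
    n, h = S.shape
    a, b = np.ones(h), np.zeros(h)
    eps = 1e-6
    for j in range(h):
        # Desired outputs: samples from the target exponential distribution,
        # clipped to (0, 1) where the inverse sigmoid (logit) is defined.
        t = np.clip(rng.exponential(scale=mu_exp, size=n), eps, 1.0 - eps)
        # Because the sigmoid is monotone, match sorted activations to
        # sorted targets, then solve the least-squares problem
        #   a_j * s + b_j ~ logit(t)   for slope a_j and bias b_j.
        s = np.sort(S[:, j])
        t = np.sort(t)
        y = np.log(t) - np.log(1.0 - t)          # logit of sorted targets
        A = np.column_stack([s, np.ones(n)])
        sol, *_ = np.linalg.lstsq(A, y, rcond=None)
        a[j], b[j] = sol
    return a, b

After this pretraining, the hidden layer is read out as sigmoid(a * (X @ W) + b) and only the linear output weights are trained, as is usual for an ELM.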