2012
DOI: 10.4236/jilsa.2012.43024

Regularization by Intrinsic Plasticity and Its Synergies with Recurrence for Random Projection Methods

Abstract: Neural networks based on high-dimensional random feature generation have become popular under the notions of extreme learning machine (ELM) and reservoir computing (RC). We provide an in-depth analysis of such networks with respect to feature selection, model complexity, and regularization. Starting from an ELM, we show how recurrent connections increase the effective complexity, leading to reservoir networks. In contrast, intrinsic plasticity (IP), a biologically inspired, unsupervised learning rule, acts as …
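For orientation, here is a minimal sketch of the random-projection setup the abstract describes: an ELM with a fixed random hidden layer and a trained linear readout. The activation choice (tanh), shapes, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def train_elm(X, Y, n_hidden=100, seed=0):
    """Minimal ELM sketch: random fixed hidden layer, least-squares readout.
    X: (n_samples, n_in) inputs, Y: (n_samples, n_out) targets."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.normal(size=n_hidden)                   # random biases, never trained
    H = np.tanh(X @ W_in + b)                       # high-dimensional random features
    W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)   # only the linear readout is fit
    return W_in, b, W_out

def predict_elm(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out
```

Adding a fixed random recurrent weight matrix to the hidden layer, and iterating the state over time, is what turns this feedforward ELM into a reservoir network.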

Cited by 9 publications (6 citation statements) · References 27 publications
“…In [5,7,6,24] the principle of intrinsic plasticity is transferred to ELMs and introduced as an efficient pretraining method, aimed at adapting the hidden layer weights and biases such that the output distribution of the hidden layer is shaped like an exponential distribution. The only parameter of batch intrinsic plasticity is the mean μ_exp of the target exponential distribution.…”
Section: Batch Intrinsic Plasticity (BIP) ELM
confidence: 99%
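The quoted description pins BIP down enough for a sketch: per hidden neuron, draw targets from an exponential distribution with mean μ_exp, match them quantile-by-quantile against the neuron's synaptic inputs, and fit a slope and bias by linear regression in the inverse-sigmoid domain. This is a hedged reconstruction from the citation's description; the logistic activation and all names are assumptions.

```python
import numpy as np

def batch_intrinsic_plasticity(S, mu_exp=0.2, seed=0):
    """Batch IP pretraining sketch.
    S: (n_samples, n_hidden) pre-activations (synaptic inputs) of the hidden layer.
    Returns per-neuron slopes a and biases b so that sigmoid(a*s + b)
    is approximately exponentially distributed with mean mu_exp."""
    rng = np.random.default_rng(seed)
    n, h = S.shape
    a = np.ones(h)
    b = np.zeros(h)
    for j in range(h):
        t = rng.exponential(mu_exp, size=n)          # targets from the desired distribution
        t = np.clip(np.sort(t), 1e-4, 1 - 1e-4)     # keep targets in (0, 1) for the inverse
        s = np.sort(S[:, j])                         # quantile-match inputs and targets
        y = np.log(t / (1.0 - t))                    # inverse logistic: desired a*s + b
        Phi = np.column_stack([s, np.ones(n)])
        (a[j], b[j]), *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return a, b
```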
“…One approach for adapting the hidden layer to the context is the mechanism of batch intrinsic plasticity (BIP) [5,6,7]. The idea of BIP is that it adapts the slope and bias of the hidden layer neurons such that their outputs are approximately exponentially distributed.…”
Section: Introduction
confidence: 99%
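To illustrate the claimed effect, a hypothetical quick check reusing `batch_intrinsic_plasticity` from the sketch above: after applying the learned slopes and biases, the mean hidden output should come out near μ_exp.

```python
import numpy as np  # reuses batch_intrinsic_plasticity defined above

X = np.random.default_rng(1).normal(size=(500, 3))
W_in = np.random.default_rng(2).normal(size=(3, 50))
S = X @ W_in                                # pre-activations of 50 hidden neurons
a, b = batch_intrinsic_plasticity(S, mu_exp=0.2)
H = 1.0 / (1.0 + np.exp(-(a * S + b)))      # logistic with learned slope/bias per neuron
print(H.mean())                             # approximately mu_exp = 0.2
```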
“…The regression parameter is α = 10⁻³ in the following experiments. The interplay between IP and ELMs has been analyzed in rigorous detail in [6]. A highly efficient batch version of IP suited for ELMs was proposed in [5].…”
Section: A Real World Example: Learning To Point With The Humanoid Ro…
confidence: 99%
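The quoted α = 10⁻³ is a ridge (Tikhonov) regularizer on the linear readout. A minimal sketch of such a readout; the function name and shapes are assumed for illustration:

```python
import numpy as np

def ridge_readout(H, Y, alpha=1e-3):
    """Ridge-regularized linear readout with the regression parameter
    alpha = 1e-3 quoted in the experiments.
    H: (n_samples, n_hidden) hidden states, Y: (n_samples, n_out) targets."""
    n_hidden = H.shape[1]
    # Closed-form Tikhonov solution: (H^T H + alpha * I)^{-1} H^T Y
    return np.linalg.solve(H.T @ H + alpha * np.eye(n_hidden), H.T @ Y)
```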
“…multiplicative normalization, solves the stability problems and generates an algorithm for principal components analysis [26]. This is a computational form of an effect which is believed to happen in biological neurons.…”
Section: A. Anti-Oja Rule
confidence: 99%
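The quoted statement refers to Oja's classic result: Hebbian learning with multiplicative normalization keeps the weight norm bounded and converges to the first principal component. A standard sketch of Oja's rule follows (the citing paper's anti-Oja variant flips the sign of the update; learning rate and names here are illustrative):

```python
import numpy as np

def oja_pca(X, eta=0.01, epochs=20, seed=0):
    """Oja's rule sketch: w += eta * y * (x - y * w).
    The -y*w term is the multiplicative normalization that bounds ||w||.
    Converges toward the first principal component of zero-mean data
    X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                     # Hebbian post-synaptic activity
            w += eta * y * (x - y * w)    # Hebb term plus normalizing decay
    return w
```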