1991
DOI: 10.1103/PhysRevA.44.2718

Learning processes in neural networks

Cited by 92 publications (52 citation statements)
References 36 publications
“…Because of the random presentation of the input vectors and (possibly) the random initialization of the weights, the learning process is a stochastic process governed by the master equation (Ritter and Schulten 1988; Heskes and Kappen 1991) for ∂ρ(w′, t)…”
Section: Transition Times
confidence: 99%
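The equation itself is cut off in the excerpt. As a reconstruction (not a verbatim quote), the master equation for on-line learning in Ritter and Schulten (1988) and Heskes and Kappen (1991) has the standard form

    \frac{\partial \rho(w, t)}{\partial t} = \int dw' \left[ T(w \mid w')\, \rho(w', t) - T(w' \mid w)\, \rho(w, t) \right],

where ρ(w, t) is the probability density over the weight vector w at time t and T(w | w′) is the transition probability per unit time from w′ to w, induced by the random presentation of a single input vector.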
“…The learning parameter sets the typical scale of the weight change at each update. A large learning parameter leads to large fluctuations in the network's representation [2].…”
Section: Introduction
confidence: 99%
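A minimal sketch of this point, assuming a hypothetical linear teacher–student setup (the names, data distribution, and learning rule here are illustrative, not taken from the cited papers): the steady-state spread of the weights grows with the learning parameter eta.

    import numpy as np

    rng = np.random.default_rng(0)

    def online_updates(eta, n_steps=10000, dim=2):
        # On-line learning: the weights are updated after each
        # randomly presented input vector.
        w = rng.standard_normal(dim)        # (possibly) random initialization
        w_target = np.ones(dim)             # hypothetical optimal weights
        history = np.empty((n_steps, dim))
        for t in range(n_steps):
            x = rng.standard_normal(dim)    # random input presentation
            error = w_target @ x - w @ x    # teacher output minus network output
            w += eta * error * x            # weight change scales with eta
            history[t] = w
        return history

    # Larger eta: faster initial convergence, but larger fluctuations
    # around the target weights in the steady state.
    for eta in (0.01, 0.1):
        h = online_updates(eta)
        print(eta, h[5000:].std(axis=0))    # spread of w after the transient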
“…Recently, some progress has been made in understanding learning processes in neural networks [1]. Instead of considering the dynamics of the fast variables, such as the spins in Hopfield-type neural networks, one focuses on the dynamics of the slow variables, the synapses and the thresholds.…”
Section: Introduction
confidence: 99%
“…(1). Examples are backpropagation [2] for multilayered perceptrons, Kohonen-type learning [3,4] for topological maps, and Hebbian learning [5,6]. In [1] it was possible to compute these constants. In general, however, all the information needed to calculate these constants is not directly available.…”
confidence: 99%
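Equation (1) referred to in this excerpt is not reproduced on this page; in this framework the general on-line learning rule presumably takes the form (again a reconstruction, not a verbatim quote)

    w(n+1) = w(n) + \eta\, f\big(w(n), x(n)\big),

where η is the learning parameter, x(n) the input vector randomly presented at step n, and f the rule-specific update: the backpropagated error gradient, a Kohonen neighborhood update, or a Hebbian product of pre- and postsynaptic activities.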