2000
DOI: 10.1016/s0377-2217(99)00482-8
Training the random neural network using quasi-Newton methods

Abstract: Training in the random neural network (RNN) is generally specified as the minimization of an appropriate error function with respect to the parameters of the network (weights corresponding to positive and negative connections). We propose here a technique for error minimization that is based on the use of quasi-Newton optimization techniques. Such techniques offer more sophisticated exploitation of the gradient information compared to simple gradient descent methods, but are computationally more expensive and di…
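As a rough illustration of the training setup described in the abstract (minimizing an error function over the nonnegative excitatory and inhibitory weights with a quasi-Newton method), the sketch below uses SciPy's L-BFGS-B routine on a placeholder quadratic error. The error function, weight count, and data are invented for the demo and do not reproduce the paper's RNN output equations; only the overall structure follows the abstract.

```python
# Hedged sketch: quasi-Newton minimization of an RNN-style training error.
# The error below is a stand-in quadratic surrogate, NOT the RNN equations
# from the paper; it only mirrors the setup "minimize E over nonnegative
# w+ / w- weights with a quasi-Newton method".
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_weights = 20                              # combined count of w+ and w- parameters (assumed)
target = rng.uniform(0.1, 1.0, n_weights)   # synthetic "ideal" weights for the demo

def error(w):
    # Placeholder error: in the real setting this would compare desired and
    # actual RNN neuron outputs over the training set.
    return 0.5 * np.sum((w - target) ** 2)

def grad(w):
    # Analytic gradient of the placeholder error.
    return w - target

w0 = np.full(n_weights, 0.5)                # initial guess for all weights
res = minimize(error, w0, jac=grad, method="L-BFGS-B",
               bounds=[(0.0, None)] * n_weights)  # keep weights nonnegative
print(res.fun, res.nfev)
```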

Cited by 61 publications (34 citation statements) · References 18 publications
“…If the new weight is still negative, repeat until obtaining a positive number or stop the loop using some control parameter. • An alternative option was presented in [9]. The authors propose a change of variable: instead of using $w^+_{i,j}$ and $w^-_{i,j}$ they use new variables $\beta^+_{i,j}$ and $\beta^-_{i,j}$, such that:…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
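The first strategy quoted above (retry the update with a smaller step until the weight is positive, or give up after a control parameter is exceeded) can be sketched as follows. The learning rate, retry cap, and example values are hypothetical, and the truncated β-reparameterization of [9] is deliberately not reproduced here since its exact form is cut off in the quote.

```python
# Hedged sketch of the "retry with a smaller step" strategy for keeping
# RNN weights nonnegative during gradient-based updates.
import numpy as np

def constrained_update(w, grad, lr=0.1, max_retries=5):
    """Apply one descent step per weight, halving the step whenever it would
    drive the weight negative; after max_retries (the control parameter),
    keep the previous nonnegative value."""
    w_new = w.copy()
    for i in range(len(w)):
        step = lr
        for _ in range(max_retries):
            candidate = w[i] - step * grad[i]
            if candidate >= 0.0:
                w_new[i] = candidate
                break
            step *= 0.5          # repeat with a smaller step
        # if every retry fails, w_new[i] stays at the old value
    return w_new

w = np.array([0.05, 0.8, 0.3])   # example weights (illustrative)
g = np.array([2.0, -0.4, 0.1])   # example gradient (illustrative)
print(constrained_update(w, g))
```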
“…This method is an on-line procedure (meaning that the K data pairs are read one at a time), based on the delta rule [7]. Additionally, quasi-Newton methods for using the RNN to solve supervised learning problems were introduced in [8], [9].…”
Section: A Random Neural Network in Supervised Learning Tasks
Citation type: mentioning (confidence: 99%)
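A minimal sketch of such an on-line, delta-rule-style loop, presenting the K data pairs one at a time and updating the weights after each pattern. A plain linear model stands in for the RNN forward pass, and all names and data are illustrative rather than the exact procedure of [7].

```python
# Hedged sketch of on-line (pattern-by-pattern) delta-rule training:
# each of the K input/output pairs is presented in turn and the weights
# are updated immediately after each presentation.
import numpy as np

rng = np.random.default_rng(1)
K, d = 50, 4
X = rng.normal(size=(K, d))                  # K input patterns (synthetic)
y = X @ np.array([0.5, -0.2, 0.1, 0.3])      # K desired outputs (synthetic)

w = np.zeros(d)
lr = 0.05
for epoch in range(20):
    for k in range(K):                       # one data pair at a time (on-line)
        pred = X[k] @ w                      # stand-in for the RNN forward pass
        err = y[k] - pred
        w += lr * err * X[k]                 # delta-rule update
print(w)
```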
“…, N_a. In a standard RandNN, the neurons' loads are computed by solving the expressions (1), (2) and (3). More precisely, input neurons behave as M/M/1 queues.…”
Section: A New Reservoir Computing Method
Citation type: mentioning (confidence: 99%)
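Expressions (1)-(3) are not reproduced in the snippet. Assuming the classical Gelenbe RNN form, where each neuron behaves like an M/M/1 queue and its load q_i is the fixed point of q_i = λ⁺_i / (r_i + λ⁻_i), the loads can be computed by fixed-point iteration as in this sketch; all rates and weights below are synthetic.

```python
# Hedged sketch of computing RNN steady-state loads under the classical
# Gelenbe formulation: q_i = lambda_plus_i / (r_i + lambda_minus_i), with
# excitatory/inhibitory arrival rates accumulated from the other neurons.
import numpy as np

rng = np.random.default_rng(2)
n = 5
W_plus = rng.uniform(0.0, 0.5, (n, n))        # excitatory weights w+_{j,i} (synthetic)
W_minus = rng.uniform(0.0, 0.5, (n, n))       # inhibitory weights w-_{j,i} (synthetic)
Lambda = rng.uniform(0.1, 0.5, n)             # external excitatory arrival rates
lam = rng.uniform(0.0, 0.2, n)                # external inhibitory arrival rates
r = W_plus.sum(axis=1) + W_minus.sum(axis=1)  # firing rates r_i

q = np.zeros(n)
for _ in range(200):                          # fixed-point iteration
    lam_plus = q @ W_plus + Lambda            # total excitatory arrivals at each neuron
    lam_minus = q @ W_minus + lam             # total inhibitory arrivals at each neuron
    q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)  # M/M/1-like load, capped at 1
    if np.max(np.abs(q_new - q)) < 1e-10:
        q = q_new
        break
    q = q_new
print(q)
```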
“…The architecture of a neural network is determined by all the connections in the network and the transfer functions of the neurons [41]. The backpropagation algorithm proposed by [42] is the most popular algorithm to train ANNs. Moreover, advanced methods like Marquardt [43][44][45], quasi-Newton [46], or conjugate gradient algorithms [47,48] are also very popular. Due to their application in dynamic environments, these classic learning methods have to be modified to fulfill three important requirements:…”
Section: Artificial Neural
Citation type: mentioning (confidence: 99%)