Theoretical Advances in Neural Computation and Learning 1994
DOI: 10.1007/978-1-4615-2696-4_13

Supervised Learning: Can it Escape its Local Minimum?

Cited by 13 publications (3 citation statements) | References 6 publications

“…The Nnet mimics a feed-forward neural network that uses the backpropagation algorithm for training coupled with one hidden layer [86]. To calibrate the Nnet algorithm, there are three important parameters required, namely the size, decay, and maxit, which control the number of neurons in the hidden layer, the weight decay, and the maximum number of iterations, respectively.…”
Section: Deep Learning Classifiers
confidence: 99%
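
As an illustration of the three calibration parameters described in the quoted passage, here is a minimal Python sketch, assuming scikit-learn's MLPClassifier as a stand-in for the single-hidden-layer Nnet (the citing paper's own implementation is not shown); hidden_layer_sizes, alpha, and max_iter play roughly the roles of size, decay, and maxit.

```python
# Minimal sketch (assumption: scikit-learn's MLPClassifier used as a stand-in
# for the single-hidden-layer "Nnet" described above).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Rough mapping of the quoted calibration parameters:
#   size  -> hidden_layer_sizes (neurons in the single hidden layer)
#   decay -> alpha              (L2 weight-decay penalty)
#   maxit -> max_iter           (maximum number of training iterations)
clf = MLPClassifier(hidden_layer_sizes=(5,),  # "size"
                    alpha=1e-3,               # "decay"
                    max_iter=500,             # "maxit"
                    solver="sgd",             # gradient-descent backpropagation
                    random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```
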
“…(1) Even in the "simple" testbed problem of supervised learning (learning a static mapping from an input vector X to a vector of targets or dependent variables Y), we need improved learning speed and generalization ability, exploiting concepts such as "syncretism" (section 2.2) and "simultaneous recurrence" [32]. Also see [34].…”
Section: The Search For True Intelligent Control
confidence: 99%
“…Finding a nonlinear adaptive system like a neural network with an appropriate structure is a nonlinear optimization problem, which can be complex and slow. One of the most widely used algorithms for training neural networks is backpropagation (BP) based on gradient descent [90], [103]. Although this simple approach has been successfully used to train neural networks for a wide range of applications, the algorithm can be notoriously slow when used to train a large network in solving a complex problem.…”
Section: Fast Training Algorithms
confidence: 99%
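
To make the quoted point concrete, below is a minimal sketch of plain gradient-descent backpropagation (BP) for a one-hidden-layer network; the data, network size, learning rate, and iteration count are all illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch of plain gradient-descent backpropagation for a tiny
# one-hidden-layer network; all names and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))                       # 100 samples, 4 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # toy binary target

W1 = rng.standard_normal((4, 8)) * 0.1                  # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.1                  # hidden -> output weights
lr = 0.1                                                # fixed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):                                # fixed iteration budget
    # Forward pass through the single hidden layer.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)

    # Backward pass: gradients of mean squared error w.r.t. the weights.
    d_out = (p - y) * p * (1 - p)                       # error signal at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)                # error signal at the hidden layer

    # Plain gradient descent: one small step along the negative gradient.
    # For large networks this simple update can converge very slowly,
    # which is the issue the quoted passage points to.
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_hid / len(X)

print("final MSE:", float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2)))
```
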