1996
DOI: 10.1109/72.548172

A generalized learning paradigm exploiting the structure of feedforward neural networks

Abstract: In this paper a general class of fast learning algorithms for feedforward neural networks is introduced and described. The approach exploits the separability of each layer into linear and nonlinear blocks and consists of two steps. The first step is the descent of the error functional in the space of the outputs of the linear blocks (descent in the neuron space), which can be performed using any preferred optimization strategy. In the second step, each linear block is optimized separately by using a least squares…
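
The two-step scheme the abstract outlines lends itself to a compact illustration. Below is a minimal sketch, assuming a single tanh hidden layer, a linear output layer, a squared-error functional, and no bias terms; the plain gradient step in the neuron space stands in for "any preferred optimization strategy". This is a plausible reading of the (truncated) abstract, not the paper's actual algorithm, and all names and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_step(X, Y, W1, W2, lr=0.1):
    # Forward pass, keeping the outputs of the linear blocks (pre-activations).
    Z1 = X @ W1          # linear block of layer 1 (neuron-space variables)
    A1 = np.tanh(Z1)     # nonlinear block of layer 1
    Z2 = A1 @ W2         # linear block of layer 2 (network output here)

    # Step 1: descend the squared error in the space of the linear-block
    # outputs (Z1, Z2) rather than in weight space.
    dZ2 = Z2 - Y                               # dE/dZ2
    dZ1 = (dZ2 @ W2.T) * (1.0 - A1 ** 2)       # dE/dZ1 through the tanh block
    T2 = Z2 - lr * dZ2                         # target pre-activations
    T1 = Z1 - lr * dZ1

    # Step 2: refit each linear block separately by least squares so that it
    # reproduces its target pre-activations from its own inputs.
    W1, *_ = np.linalg.lstsq(X, T1, rcond=None)
    W2, *_ = np.linalg.lstsq(np.tanh(X @ W1), T2, rcond=None)
    return W1, W2

# Toy usage: learn y = sin(x) from random samples.
X = rng.uniform(-2, 2, size=(200, 1))
Y = np.sin(X)
W1 = rng.normal(size=(1, 8)) * 0.5
W2 = rng.normal(size=(8, 1)) * 0.5
for _ in range(200):
    W1, W2 = fit_step(X, Y, W1, W2)
print("MSE:", np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
```

The appeal of the split is that each weight update reduces to a linear least-squares problem with a closed-form solution; only the descent step has to cope with the nonlinearity.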

Cited by 77 publications (32 citation statements) | References 20 publications

“…Except for the classical VICOM [17], linearization was performed by a five-parameter monotonic logistic function [30], [31], trained by the BRLS modified Newton algorithm [53] over several runs to minimize the risk of becoming trapped in local minima.…”
Section: Experiments and Results
Citation type: mentioning
confidence: 99%
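
The five-parameter monotonic logistic mentioned in this statement can be sketched as follows. The exact parameterization is given only in the cited references [30], [31], so a generalized (Richards-type) logistic is assumed here, and SciPy's Levenberg-Marquardt fit with random restarts stands in for the BRLS modified Newton algorithm of [53] and the "several runs"; every name below is illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(x, a, k, b, m, nu):
    # Monotonic in x for b > 0, nu > 0; five free parameters a, k, b, m, nu.
    return a + (k - a) / (1.0 + np.exp(-b * (x - m))) ** (1.0 / nu)

def fit_with_restarts(x, y, n_restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    best_p, best_sse = None, np.inf
    for _ in range(n_restarts):
        p0 = rng.uniform(0.1, 2.0, size=5)            # random initial guess
        try:
            p, _ = curve_fit(logistic5, x, y, p0=p0, maxfev=5000)
        except RuntimeError:                          # this run failed to converge
            continue
        sse = np.sum((logistic5(x, *p) - y) ** 2)
        if sse < best_sse:                            # keep the best of several runs
            best_p, best_sse = p, sse
    return best_p

x = np.linspace(-3, 3, 100)
y = logistic5(x, 0.0, 1.0, 1.2, 0.3, 0.8) \
    + 0.01 * np.random.default_rng(1).normal(size=x.size)
print(fit_with_restarts(x, y))
```

Restarting from several random initializations and keeping the lowest-error fit is the standard hedge against the local minima the statement refers to.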
“…In addition, to speed up convergence, the Levenberg-Marquardt algorithm is used together with the backpropagation algorithm. Levenberg-Marquardt is one of the most powerful and popular second-derivative-based algorithms proposed for training feed-forward networks; it combines the local convergence properties of the Gauss-Newton method near a minimum with the consistent error decrease provided by gradient descent far from the solution [15][16][17][18]. A sigmoid function is used as the neuron transfer function.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
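
The blend this statement describes, Gauss-Newton behavior near a minimum and gradient-descent behavior far from it, comes from the damping term in the Levenberg-Marquardt update: small damping recovers the Gauss-Newton step, large damping a scaled gradient step. A minimal sketch on a toy curve-fitting problem (not from the cited papers; the exponential model and all names are illustrative):

```python
import numpy as np

def levenberg_marquardt(x, y, p, iters=50, lam=1e-2):
    # Fit y = a * exp(b * x) by damped least squares; p = [a, b].
    for _ in range(iters):
        a, b = p
        r = y - a * np.exp(b * x)                     # residuals
        J = np.column_stack([-np.exp(b * x),          # d r / d a
                             -a * x * np.exp(b * x)]) # d r / d b
        # Damped normal equations: (J^T J + lam * I) delta = -J^T r.
        # lam -> 0 gives Gauss-Newton; large lam gives scaled gradient descent.
        A = J.T @ J + lam * np.eye(2)
        delta = np.linalg.solve(A, -J.T @ r)
        p_new = p + delta
        a2, b2 = p_new
        r_new = y - a2 * np.exp(b2 * x)
        if r_new @ r_new < r @ r:    # accept step, trust the quadratic model more
            p, lam = p_new, lam * 0.5
        else:                        # reject step, lean toward gradient descent
            lam *= 2.0
    return p

x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x)
print(levenberg_marquardt(x, y, np.array([1.0, 1.0])))  # ~ [2.0, 1.5]
```

Applied to network training, the residuals are the per-sample output errors and the Jacobian is taken with respect to the weights, which is why the method is usually reserved for small and medium-sized feed-forward networks.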
“…But, until now, the most widely used NN, FN, and WN systems have been algebraic systems, reflecting the immense popularity of algebraic neural, fuzzy, and wavelet systems (or feedforward networks), which are usually implemented for the approximation of a non-linear function [13,23,50,52,69,73,75].…”
Section: Introduction
Citation type: mentioning
confidence: 99%