A constructive method for multivariate function approximation by multilayer perceptrons (1992)
DOI: 10.1109/72.143376

Cited by 79 publications (27 citation statements) | References 9 publications
“…The MLP is the most common architecture employed for ANNs [26] and can implement arbitrary mappings between input and output [27]-[30]. It uses backpropagation as its learning algorithm, wherein the synaptic strengths are systematically modified so that the network approximates the desired response more closely.…”
Section: Construction Of Ann Models
mentioning confidence: 99%
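The statement above describes the generic MLP-plus-backpropagation setup rather than this paper's specific construction. A minimal sketch of that idea, with illustrative (assumed) task, layer sizes, and learning rate:

```python
import numpy as np

# Minimal sketch (not the paper's method): a one-hidden-layer MLP whose
# synaptic weights are adjusted by backpropagation so that the network
# output approaches a desired response. XOR target, 4 hidden units, and
# the learning rate are illustrative assumptions.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # desired response (XOR)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)  # hidden -> output

lr, losses = 1.0, []
for _ in range(3000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network response
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: chain rule through output and hidden layers
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"mse: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The training loop is the "systematic modification" the quote refers to: each pass propagates the error backwards and nudges every weight downhill on the squared-error surface.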
“…Furthermore, empirical (Murtagh 1991) and theoretical (Geva & Sitte 1992) considerations suggest that the optimal structure for the approximation of a continuous function is through the use of one hidden layer with 2N + 1 nodes (N being the number of input nodes). It was decided that these guidelines would be followed when setting up the ANNs for our classification problem for as long as it remained computationally feasible.…”
Section: Number Of Hidden Layers And Nodes
mentioning confidence: 99%
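The 2N + 1 sizing rule cited above is simple enough to encode directly; a sketch of using it as a network-design guideline (the helper name is hypothetical):

```python
# Hypothetical helper applying the one-hidden-layer, 2N + 1 guideline
# cited above: size the hidden layer from the number of input nodes N.
def hidden_layer_size(n_inputs: int) -> int:
    """Return 2N + 1 hidden nodes for N input nodes."""
    return 2 * n_inputs + 1

for n in (2, 5, 10):
    print(f"{n} inputs -> {hidden_layer_size(n)} hidden nodes")
```

For example, a classifier with 10 input features would get a single hidden layer of 21 nodes under this guideline.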
“…Often the unknown target function ( ) is inherently complex and cannot be closely approximated by a network comprising a single hidden layer of neurons implementing simple transfer functions (e.g., sigmoid). To overcome this difficulty, some constructive algorithms use different transfer functions (e.g., the Gaussian [21]), while others, such as projection pursuit regression [18], use a summation of several nonlinear transfer functions. Alternatively, algorithms such as the cascade correlation family construct multilayer networks wherein the structural interconnections among the hidden neurons allow the network to approximate complex functions using relatively simple neuron transfer functions like the sigmoid [13], [39], [49].…”
Section: Constructive Algorithms For Pattern Classification
mentioning confidence: 99%
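The common thread in the constructive algorithms described above is growing the network until the fit is good enough. A toy sketch of that growth loop, assuming a stand-in trainer (random tanh hidden features fitted by least squares) rather than any of the cited algorithms:

```python
import numpy as np

# Toy sketch of a constructive strategy: enlarge the hidden layer one
# unit at a time until training error drops below a tolerance. The
# trainer below (random tanh features + least-squares output weights)
# is a cheap stand-in, not cascade correlation or projection pursuit.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 50)[:, None]
y = np.sin(2.0 * np.pi * X)            # target function to approximate

def train_and_score(n_hidden):
    # Fresh random hidden weights each call; only the output layer is
    # fitted (by least squares). Returns the training MSE.
    W = rng.normal(0.0, 4.0, (1, n_hidden))
    b = rng.normal(0.0, 2.0, n_hidden)
    H = np.tanh(X @ W + b)
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.mean((H @ coef - y) ** 2))

n_hidden, tol = 1, 1e-3
while train_and_score(n_hidden) > tol and n_hidden < 50:
    n_hidden += 1                      # grow the network and retry
print("hidden units needed:", n_hidden)
```

Real constructive algorithms differ mainly in *how* each new unit is placed and trained (e.g., cascade correlation wires new units to all earlier hidden units), but the stopping criterion has this general shape.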