1997
DOI: 10.1017/s0890060400001827
Selecting the architecture of a class of back-propagation neural networks used as approximators

Abstract: This paper examines the architecture of back-propagation neural networks used as approximators by addressing the interrelationship between the number of training pairs and the number of input, output, and hidden layer nodes required for a good approximation. It concentrates on nets with an input layer, one hidden layer, and one output layer. It shows that many of the currently proposed schemes for selecting network architecture for such nets are deficient. It demonstrates in numerous examples that overdetermined…
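The abstract's distinction between overdetermined and underdetermined nets can be made concrete by counting unknowns. A minimal sketch (my own illustration, not the paper's notation or exact criterion): a net with one hidden layer has (n_in + 1)·n_hidden + (n_hidden + 1)·n_out weights and biases, while the training set supplies roughly n_pairs · n_out constraint equations; the net is overdetermined when the constraints meet or exceed the unknowns.

```python
def n_unknowns(n_in: int, n_hidden: int, n_out: int) -> int:
    """Weights plus biases in a single-hidden-layer feed-forward net."""
    return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out

def is_overdetermined(n_pairs: int, n_in: int, n_hidden: int, n_out: int) -> bool:
    """True when training constraints (pairs x outputs) meet or exceed unknowns."""
    return n_pairs * n_out >= n_unknowns(n_in, n_hidden, n_out)

# Example: 3 inputs, 4 hidden nodes, 1 output -> 21 unknowns,
# so at least 21 training pairs are needed to overdetermine the net.
print(n_unknowns(3, 4, 1))             # 21
print(is_overdetermined(30, 3, 4, 1))  # True
print(is_overdetermined(10, 3, 4, 1))  # False
```

This simple count is one common way to frame the training-pairs-versus-nodes trade-off the paper studies; the paper's own analysis of which architectures approximate well goes beyond it.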

Cited by 18 publications (11 citation statements). References 22 publications.
“…If the undetermined parameters of the network are too few, the network will overfit and the learning process may easily fall into poor convergence. Experiments show that good computational results are obtained when the number of samples bears the following relationship to the number of undetermined parameters of the network [17]:…”
Section: Number Of Hidden Layers And Hidden Nodes
confidence: 99%
“…Different values of these parameters may give different approximations. The dangers of using underdetermined nets as approximators are discussed in some detail by Carpenter and Hoffman (1997). Therefore, in the present study, only overdetermined networks are examined.…”
Section: Number Of Training Pairs And Number Of Output Nodes
confidence: 99%
“…Therefore, in the present study, only overdetermined networks are examined. Carpenter and Hoffman (1997) pointed out that, to make multiple approximations, it is better in general to use multiple neural networks that each have one output node than to use one neural network with multiple output nodes. Thus, in the present study, neural networks with a single output node are used to make one approximation.…”
Section: Number Of Training Pairs And Number Of Output Nodes
confidence: 99%
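The recommendation quoted above — several single-output nets rather than one multi-output net — can be sketched as a thin wrapper that trains one independent approximator per output quantity. This is my own illustration, not the cited authors' code; `fit_one` is a hypothetical placeholder for whatever single-output training routine is used.

```python
class SingleOutputNets:
    """Train one independent single-output approximator per output quantity,
    following the recommendation attributed to Carpenter and Hoffman (1997).
    'fit_one' is a placeholder: any (X, y) -> callable-model trainer works."""

    def __init__(self, fit_one):
        self.fit_one = fit_one
        self.models = []

    def fit(self, X, Y):
        # Y is a list of output vectors (one per training pair); split it into
        # columns and train a dedicated net on each output quantity alone.
        self.models = [self.fit_one(X, col) for col in zip(*Y)]
        return self

    def predict(self, x):
        # One prediction per output quantity, each from its own net.
        return [model(x) for model in self.models]


# Usage with a trivial stand-in trainer (a constant mean predictor), just to
# show the shape of the interface:
mean_trainer = lambda X, y: (lambda x, m=sum(y) / len(y): m)
nets = SingleOutputNets(mean_trainer).fit([[0], [1]], [[1, 10], [3, 20]])
print(nets.predict([5]))  # [2.0, 15.0]
```

The design choice mirrors the quoted advice: each output gets the full training set and its own weights, so errors in one approximated quantity cannot propagate into another through shared hidden-layer weights.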