1989
DOI: 10.1109/31.31313
On hidden nodes for neural nets

Cited by 261 publications (88 citation statements)
References 2 publications
“…samples of each class which are near to or bordering on those samples in different classes, are better than randomly selected samples for training ANNs constructed for classification purposes [13]. The distributions of survivors and nonsurvivors across the APACHE II score range are shown in Fig.…”
Section: Data Preparation
confidence: 99%
“…proposed by Lippmann (1987), Mirchandani and Cao (1989) and Maren et al. (1988), it is found that a network with two hidden layers, 20 nodes in the first hidden layer and 4 nodes in the second hidden layer, can converge reasonably fast and provide sufficient calculation accuracy. Figure 6 shows the architecture of the MOBP neural network for interpreting the low-resistivity pay zone.…”
Section: Input Layer
confidence: 99%
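The architecture described in the excerpt above (two hidden layers of 20 and 4 nodes) can be sketched minimally as follows. The layer widths come from the quoted statement; the input width, single output, sigmoid activations, and weight initialization are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def init_mlp(n_inputs, seed=0):
    # Two hidden layers with 20 and 4 nodes, as in the cited architecture;
    # the single output and the small random initialization are assumptions.
    rng = np.random.default_rng(seed)
    sizes = [n_inputs, 20, 4, 1]
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Plain feed-forward pass with sigmoid activations at every layer.
    a = x
    for W, b in params:
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))
    return a

params = init_mlp(n_inputs=8)
y = forward(params, np.ones((5, 8)))   # batch of 5 samples, 8 features each
print(y.shape)                         # (5, 1)
```

Each sample passes through the 20-node and 4-node hidden layers before reaching the output, matching the layer counts reported in the excerpt.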
“…Research has proposed various methods for selection of hidden nodes in the hidden layer (see Chang-Xue, ZhiGuang and Kusiak, 2005), as follows: … (Mirchandani and Cao, 1989); H5 = O(I + 1) (Lippmann, 1987). Here, I is the number of inputs, O is the number of output neurons, and n is the number of training data points.…”
Section: Selection Of Hidden Layer And Hidden Nodes
confidence: 99%
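The selection rules quoted above can be sketched in code. H5 = O(I + 1) appears in the excerpt; the region-counting bound is the result commonly attributed to Mirchandani and Cao (1989), and the `min_hidden_nodes` helper is an illustrative application of that bound, not a rule stated verbatim in the excerpt.

```python
from math import comb

def h5_lippmann(n_inputs, n_outputs):
    # H5 = O(I + 1), as listed in the excerpt (Lippmann, 1987).
    return n_outputs * (n_inputs + 1)

def max_regions(H, d):
    # Bound attributed to Mirchandani and Cao (1989): H hyperplanes in a
    # d-dimensional input space create at most sum_{k=0}^{d} C(H, k)
    # separable regions (equal to 2**H when d >= H).
    return sum(comb(H, k) for k in range(d + 1))

def min_hidden_nodes(n, d):
    # Smallest H whose region count covers n training samples, assuming
    # (illustratively) one sample per region in the worst case.
    H = 0
    while max_regions(H, d) < n:
        H += 1
    return H

print(h5_lippmann(4, 2))      # 10
print(max_regions(3, 2))      # 7
print(min_hidden_nodes(8, 10))  # 3, since 2**3 = 8 regions suffice
```

Note how the bound saturates at 2**H once the input dimension d reaches H, which is why log2(n)-style rules of thumb follow from it.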