1999
DOI: 10.1017/s0269888998214044
Neural networks: a comprehensive foundation by Simon Haykin, Macmillan, 1994, ISBN 0-02-352781-7.

Cited by 214 publications (99 citation statements); references 0 publications.
“…According to Haykin [31], a neural network can be defined as "a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use". The structure of a neural network usually consists of an input layer with one or more neurons, one or more hidden layers each with one or more neurons, and an output layer with one or more neurons.…”
Section: Artificial Neural Network (ANN) Methods
confidence: 99%
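To make the layered structure described in this statement concrete, here is a minimal NumPy sketch of a feedforward network with one input layer, one hidden layer, and one output layer. The layer sizes, random weights, and tanh activation are illustrative assumptions, not details taken from the cited text:

import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 4, 1           # illustrative layer sizes
W1 = rng.normal(size=(n_in, n_hidden))    # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out))   # hidden -> output weights
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden layer with a non-linear activation
    return h @ W2 + b2         # output layer with a single neuron

y = forward(rng.normal(size=n_in))
print(y.shape)                 # (1,)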
“…Figure 2 illustrates the schematic topology of artificial neural networks; a detailed description of the different neural network techniques is available in [9,31].…”
Section: Artificial Neural Network (ANN) Methods
confidence: 99%
“…They are the most widely used NNs, particularly in systems and control. Multilayer NNs have an input layer, one or more hidden layers, and an output layer, with no interconnections between nodes within the same layer; the hidden layers lie between the input and output layers [10], [11]. …”
Section: Categories Of Effort Estimation
confidence: 99%
“…In order to update the weights, the error between the predicted and actual output values is back-propagated through the network. Supervised learning proceeds by minimizing this error between the desired and predicted outputs [18]. The architecture of this network contains a hidden layer of neurons with a non-linear transfer function and an output layer of neurons with linear transfer functions.…”
Section: Back Propagation Neural Network
confidence: 99%
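As a hedged sketch of the procedure this statement describes (the output error is propagated backwards to update the weights; the hidden layer is non-linear and the output layer is linear), the following NumPy example trains on made-up data. The layer sizes, learning rate, and iteration count are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 3))   # made-up inputs
Y = rng.normal(size=(32, 1))   # made-up desired outputs

W1 = rng.normal(size=(3, 5)) * 0.1; b1 = np.zeros(5)   # hidden layer
W2 = rng.normal(size=(5, 1)) * 0.1; b2 = np.zeros(1)   # linear output layer
lr = 0.05                                              # assumed learning rate

for _ in range(200):
    # Forward pass: non-linear (tanh) hidden layer, linear output layer.
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    E = P - Y                       # error between predicted and desired output
    # Backward pass: propagate the error to obtain weight gradients.
    dW2 = H.T @ E / len(X)
    db2 = E.mean(axis=0)
    dA1 = (E @ W2.T) * (1 - H**2)   # tanh derivative at the hidden layer
    dW1 = X.T @ dA1 / len(X)
    db1 = dA1.mean(axis=0)
    # Gradient-descent update of all weights and biases.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

Each iteration performs one gradient-descent step on the mean squared error, which is the error-minimization step of the supervised learning procedure described above.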