2005
DOI: 10.1016/j.asoc.2004.10.008

Improving recognition and generalization capability of back-propagation NN using a self-organized network inspired by immune algorithm (SONIA)

Cited by 49 publications (29 citation statements)
References 11 publications
“…As shown previously, Widyanto et al [70] introduce a method to improve the recognition as well as the generalization capabilities of the backpropagation algorithm. Our proposed neural network architecture extends the SONIA network by introducing recurrent links.…”
Section: Dynamic Self-organised Network Inspired By the Immune Algorithm (mentioning)
confidence: 99%
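The cited extension is described only as adding recurrent links to SONIA's hidden layer. A minimal sketch of that idea, assuming Elman-style hidden-to-hidden connections; the weight names, tanh activation, and layer sizes are illustrative choices, not the authors' formulation:

```python
import numpy as np

def recurrent_hidden_step(x, h_prev, W_in, W_rec, b):
    """One hidden-layer step with recurrent links (Elman-style sketch).

    x      : current input vector
    h_prev : hidden activations from the previous step
    W_in   : input-to-hidden weights
    W_rec  : hidden-to-hidden weights -- the added recurrent links
    b      : hidden biases
    """
    # Hidden units see the current input plus their own past state.
    return np.tanh(W_in @ x + W_rec @ h_prev + b)

# Usage: 3 inputs, 4 hidden units, 5 time steps of a toy sequence.
rng = np.random.default_rng(0)
W_in, W_rec, b = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)
h = np.zeros(4)
for _ in range(5):
    h = recurrent_hidden_step(rng.normal(size=3), h, W_in, W_rec, b)
```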
“…This procedure will be repeated until all inputs have found their corresponding hidden unit as follows [70]:…”
Section: Self-organised Multilayer Network Inspired By Immune Algorithm (mentioning)
confidence: 99%
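The quoted procedure (repeat until every input has found a corresponding hidden unit) can be sketched as follows. This loosely mirrors SONIA's B-cell-inspired hidden-unit creation; the Euclidean distance and fixed threshold are assumptions for illustration, not the paper's exact recognition rule:

```python
import numpy as np

def grow_hidden_units(inputs, threshold):
    """Grow hidden-unit centres until every input is recognised.

    An input is 'recognised' when some existing centre lies within
    `threshold` of it, loosely mirroring B-cell recognition in SONIA;
    otherwise a new hidden unit is created (cloned) from that input.
    Distance metric and threshold are illustrative assumptions.
    """
    centres = []
    for x in inputs:
        if not any(np.linalg.norm(x - c) <= threshold for c in centres):
            centres.append(x.copy())  # new hidden unit centred on x
    return np.array(centres)

# Usage: two 2-D clusters typically yield one hidden unit per cluster.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.1, size=(20, 2)) for m in (0.0, 1.0)])
print(grow_hidden_units(data, threshold=0.5).shape)
```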
“…The input layer, hidden layer, and output layer are connected in a feed-forward manner, and the number of neurons in each layer may differ. The connection weights between the input and hidden layers, and between the hidden and output layers, can be trained by the error BP algorithm, and the thresholds of the three layers can be adjusted in the same way [49]. BP networks generally use the steepest descent gradient method (SDGM) to revise the weights and thresholds [50].…”
Section: BP-ANN-based Learning/Recognition (mentioning)
confidence: 99%
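For concreteness, a minimal steepest-descent BP update for a three-layer sigmoid network, revising weights and thresholds (biases) along the negative error gradient as the statement describes; the layer sizes, squared-error loss, and learning rate are illustrative choices, not values from the cited works:

```python
import numpy as np

def sdgm_step(x, t, W1, b1, W2, b2, lr=0.1):
    """One steepest-descent BP update of a three-layer sigmoid network.

    Forward pass input -> hidden -> output, then revise the weights
    (W1, W2) and thresholds/biases (b1, b2) along the negative
    gradient of the squared error 0.5 * ||y - t||^2.
    """
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sig(W1 @ x + b1)                      # hidden activations
    y = sig(W2 @ h + b2)                      # output activations
    d_out = (y - t) * y * (1 - y)             # output-layer delta
    d_hid = (W2.T @ d_out) * h * (1 - h)      # hidden-layer delta
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid
    return y

# Usage: train a 2-4-1 network on XOR-style targets (illustrative).
rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
for _ in range(2000):
    x = rng.integers(0, 2, size=2).astype(float)
    sdgm_step(x, np.array([float(x[0] != x[1])]), W1, b1, W2, b2)
```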
“…The selection of these parameters is very important for improving the performance of neural networks. Furthermore, the MLP neural network suffers from learning-algorithm problems such as overfitting [10][11][12]. This means that the network can map inputs to outputs perfectly on the training data but cannot sufficiently generalise its learning to new data.…”
Section: Introduction (mentioning)
confidence: 99%
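The failure mode described (perfect fit on training data, poor generalisation to new data) is commonly detected by monitoring a held-out set; the sketch below shows generic early stopping. The `patience` rule and tolerance are illustrative conventions, not a remedy prescribed by the cited papers:

```python
def train_with_early_stopping(train_epoch, val_loss,
                              max_epochs=500, patience=10):
    """Stop training once held-out loss stops improving.

    train_epoch : callable running one epoch of weight updates
    val_loss    : callable returning current loss on held-out data
    patience    : epochs without improvement tolerated before stopping
    """
    best, stale = float("inf"), 0
    for _ in range(max_epochs):
        train_epoch()
        loss = val_loss()
        if loss < best - 1e-6:   # still generalising to unseen data
            best, stale = loss, 0
        else:
            stale += 1           # likely fitting noise in the training data
            if stale >= patience:
                break
    return best
```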
“…A number of studies have investigated techniques for improving the generalisation ability of feed-forward neural networks and for automatically selecting the best number of hidden units and their weights. One such technique was proposed by Widyanto et al [12]. They designed a self-organised hidden layer inspired by the immune algorithm (SONIA).…”
Section: Introduction (mentioning)
confidence: 99%