1998
DOI: 10.1016/s0377-2217(97)00292-0

Global optimization for artificial neural networks: A tabu search application


Cited by 129 publications (52 citation statements) | References 26 publications
“…In [140], an improvised TS, called reactive tabu search, was used for optimizing weights. Several studies show that TS, when used for optimizing FNN weights, outperformed the BP and SA algorithms [141,142]. However, SA and TS are single-solution-based algorithms, which limits their scope for exploring the search space to obtain a globally optimal solution.…”
Section: Weight Optimization (mentioning)
Confidence: 99%
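
To make the idea concrete, the following is a minimal sketch (not the procedure from this paper or from [140-142]) of how tabu search can be applied to the flat weight vector of a small one-hidden-layer feedforward network: neighbors are generated by perturbing single coordinates, recently perturbed coordinates are held on a tabu list, and the best weights found so far are tracked. The network size, step size, tabu tenure, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, X, n_hidden):
    """Unpack a flat weight vector into a one-hidden-layer net and predict."""
    n_in = X.shape[1]
    W1 = w[: n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = w[n_in * n_hidden : n_in * n_hidden + n_hidden]
    W2 = w[n_in * n_hidden + n_hidden : -1]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def sse(w, X, y, n_hidden):
    """Sum-of-squared-errors training loss."""
    return float(np.sum((forward(w, X, n_hidden) - y) ** 2))

def tabu_search_weights(X, y, n_hidden=5, n_iter=500, n_neighbors=20,
                        step=0.5, tenure=10):
    dim = X.shape[1] * n_hidden + 2 * n_hidden + 1    # W1 + b1 + W2 + b2
    current = rng.normal(scale=0.5, size=dim)
    best, best_err = current.copy(), sse(current, X, y, n_hidden)
    tabu = []                                         # recently moved coordinates
    for _ in range(n_iter):
        allowed = [i for i in range(dim) if i not in tabu]
        cand, cand_err, cand_move = None, np.inf, None
        # Evaluate a small neighborhood: each neighbor perturbs one allowed coordinate.
        for i in rng.choice(allowed, size=min(n_neighbors, len(allowed)), replace=False):
            trial = current.copy()
            trial[i] += rng.normal(scale=step)
            err = sse(trial, X, y, n_hidden)
            if err < cand_err:
                cand, cand_err, cand_move = trial, err, i
        current = cand                                # accept best neighbor, even if worse
        tabu.append(cand_move)
        if len(tabu) > tenure:
            tabu.pop(0)                               # expire the oldest tabu entry
        if cand_err < best_err:
            best, best_err = cand.copy(), cand_err    # track the best weights seen
    return best, best_err
```

A reactive variant, as in [140], would additionally adapt the tabu tenure during the search based on how often previously visited configurations recur.
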
“…This issue is well known to ANN researchers: consult e.g. Bianchini and Gori (1996), Sexton et al (1998), Abraham (2004), Hamm et al (2007). To overcome this difficulty, the various optimization strategies used in practice to train ANNs include experimental design, sampling by low-discrepancy sequences, theoretically exact global- or local-scope search approaches, as well as popular heuristics such as evolutionary optimization methods, particle swarm optimization, simulated annealing, tabu search and tunneling functions.…”
Section: Postulating and Calibrating a Model Instance (mentioning)
Confidence: 99%
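
As an illustration of one of the simpler strategies listed above, the sketch below shows multi-start local search: a local optimizer is restarted from several random initial weight vectors and the best local minimum is kept. The loss function, problem dimension, and optimizer settings are assumptions for illustration, not taken from the cited studies.

```python
import numpy as np
from scipy.optimize import minimize

def multi_start_train(loss, dim, n_starts=10, seed=0):
    """Restart a local optimizer from several random weight initializations
    and keep the best local minimum found."""
    rng = np.random.default_rng(seed)
    best_w, best_loss = None, np.inf
    for _ in range(n_starts):
        w0 = rng.normal(scale=0.5, size=dim)           # random initial weight vector
        res = minimize(loss, w0, method="L-BFGS-B")    # local, gradient-based search
        if res.fun < best_loss:
            best_w, best_loss = res.x, res.fun
    return best_w, best_loss
```

Because the restarts are independent, they can also be run in parallel, which is one reason this strategy remains a common baseline against global heuristics such as tabu search.
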
“…ANN model parameterization frameworks and numerical studies are presented and discussed e.g. by Watkin (1993), Prechelt (1994), Bianchini and Gori (1996), Sexton et al (1998), Jordanov and Brown (1999), Toh (1999), Ye and Lin (2003), Abraham (2004), Hamm et al (2007).…”
Section: Postulating and Calibrating a Model Instance (mentioning)
Confidence: 99%
“…Engoziner et al [11] noted that BP uses some variation of the gradient technique, which is essentially a local optimization method and thus has some inevitable drawbacks, such as easily becoming trapped in local optima and unsatisfactory generalization capability. Sexton et al [12] showed that the gradient descent algorithm may perform poorly even on simple problems when predicting holdout data. Document [13] suggested that, to mitigate this limitation, the weights and thresholds of neurons in BP networks should be optimized by global search algorithms.…”
Section: Introduction (mentioning)
Confidence: 99%