Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), Neural Computing: New Challenges and Perspectives for the New Millennium, 2000
DOI: 10.1109/ijcnn.2000.857826
Global optimization algorithms for training product unit neural networks

Cited by 54 publications (36 citation statements)
References 9 publications
“…The Ŷ_t is the estimated differential coefficient of the observed gene expression level at time t. The reason we use equation (11) instead of (10) is to evaluate it accurately when the value of equation (10) is small.…”
Section: The Inference Methods Using PUNN Model (mentioning, confidence: 99%)
“…Here, the gradient descent method, which is widely used for training neural networks, generally fails to train the PUNN [9], [10]. In this study, because an optimization algorithm is used to train the PUNN, the minimized function (11) is not called the "error function" but "the objective function".…”
Section: The Inference Methods Using PUNN Model (mentioning, confidence: 99%)
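As background for why these citing works replace gradient descent with a global optimizer, the following is a minimal sketch of a single-hidden-layer product unit network and a sum-of-squared-errors objective over its flattened weights. The layout and all function names are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def product_unit_layer(x, W):
    """Each hidden unit j computes prod_i x_i ** W[j, i].

    For positive inputs this equals exp(W @ log(x)); the exponents make
    the error surface highly multimodal, which is why gradient descent
    tends to fail on product unit networks."""
    return np.exp(W @ np.log(x))

def punn_output(x, W_hidden, w_out):
    """Single-hidden-layer PUNN: product units followed by a linear output unit."""
    return w_out @ product_unit_layer(x, W_hidden)

def objective(weights, X, y, n_hidden):
    """Sum-of-squared-errors objective minimized by a global optimizer
    (e.g. PSO or a genetic algorithm) instead of gradient descent."""
    n_in = X.shape[1]
    W_hidden = weights[: n_hidden * n_in].reshape(n_hidden, n_in)
    w_out = weights[n_hidden * n_in :]
    preds = np.array([punn_output(x, W_hidden, w_out) for x in X])
    return np.sum((preds - y) ** 2)
```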
“…Hence, they are advantageous in comparison to an EA-based algorithm that needs to simulate mutation and crossover operators for real-valued weight vectors [110]. It was found that PSO guides a population of FNN weight vectors towards an optimal population [161,162]. Hence, many researchers resorted to swarm-based metaheuristics for FNN optimization.…”
Section: Weight Optimization (mentioning, confidence: 99%)
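The survey quoted above notes that PSO guides a population of FNN weight vectors towards an optimum. The sketch below shows a basic global-best PSO over a flat weight vector; the inertia and acceleration coefficients are common defaults, not values reported in [161,162], and `objective` stands for any network error function (such as the PUNN objective sketched earlier).

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0)):
    """Minimize `objective` over a flat weight vector with global-best PSO."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))   # candidate weight vectors
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    return gbest, pbest_val.min()
```

A call such as `pso_minimize(lambda w: objective(w, X, y, n_hidden), dim)` would then take the place of gradient descent for the PUNN objective above.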
“…The size of the final network (513 weights) used in this work even makes global optimization techniques such as Particle Swarm Optimization or Genetic Algorithms infeasible. Consequently, for a network of the presented size, higher-order units such as product units cannot be incorporated, because the increased number of local minima would require global optimization techniques (Ismail & Engelbrecht, 2000). Even with only sigmoid units, because backpropagation can get stuck in local minima, a set of at least 10 networks with random initial parameters was always trained.…”
Section: Network Training (mentioning, confidence: 99%)
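The restart strategy described in this last excerpt (training at least 10 sigmoid networks from random initial weights and keeping the best) can be illustrated as follows. The tiny backpropagation trainer and all names are assumptions for illustration; the cited work used a much larger network (513 weights).

```python
import numpy as np

def train_sigmoid_net(X, y, n_hidden, epochs=500, lr=0.1, rng=None):
    """Tiny single-hidden-layer sigmoid network trained with plain gradient
    descent; serves only to illustrate the restart scheme."""
    rng = rng if rng is not None else np.random.default_rng()
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_hidden, n_in))
    w2 = rng.normal(0, 0.5, n_hidden)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1.T)                    # hidden activations
        err = h @ w2 - y                     # linear output minus target
        grad_w2 = h.T @ err / len(X)
        grad_W1 = ((np.outer(err, w2) * h * (1 - h)).T @ X) / len(X)
        w2 -= lr * grad_w2
        W1 -= lr * grad_W1
    mse = float(np.mean((sig(X @ W1.T) @ w2 - y) ** 2))
    return (W1, w2), mse

def best_of_restarts(X, y, n_hidden=5, n_restarts=10, seed=0):
    """Train several networks from random initial weights and keep the one
    with the lowest training error, mitigating poor local minima."""
    rng = np.random.default_rng(seed)
    best_model, best_mse = None, np.inf
    for _ in range(n_restarts):
        model, mse = train_sigmoid_net(X, y, n_hidden, rng=rng)
        if mse < best_mse:
            best_model, best_mse = model, mse
    return best_model, best_mse
```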