2012
DOI: 10.5120/8468-2392

Neural Approach for Resource Selection with PSO for Grid Scheduling

Abstract: The article you are looking for is not available in the public domain. The article is subject to compliance with the 2014-2015 IJCA scientific data guidelines. You may navigate the journal via the menu options on the left of the screen, or contact us at any time regarding any article you are unable to find.

Cited by 2 publications (3 citation statements)
References 7 publications

“…DE has been broadly applied to solve several real-world problems [5,6], in addition to the introduction of the binary version of DE. There have been several advances and enhancements in this area [7][8][9]. DE has been used to derive universal function approximations for any analog function with random updating of weights [42].…”
Section: Related Work
Citation type: mentioning
confidence: 99%
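As a point of reference for the excerpt above, the following is a minimal sketch of the classic DE/rand/1/bin scheme in Python. It is not taken from the cited works; the objective, bounds, population size, F, and CR values are assumed purely for illustration.

```python
import random

def de(objective, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    # Classic DE/rand/1/bin: mutation from three random individuals, binomial
    # crossover, then greedy selection against the current individual.
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [objective(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            j_rand = random.randrange(dim)  # guarantees at least one mutated gene
            trial = [mutant[d] if (random.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            f = objective(trial)
            if f < fitness[i]:
                pop[i], fitness[i] = trial, f
    best = min(range(pop_size), key=lambda j: fitness[j])
    return pop[best], fitness[best]

if __name__ == "__main__":
    # Minimise a simple sum-of-squares objective over a 3-dimensional box.
    print(de(lambda x: sum(v * v for v in x), bounds=[(-5.0, 5.0)] * 3))
```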
“…On the other hand, the synaptic weights are adjusted accordingly to minimize the network error. Therefore, in SpikeProp the actual weights are assigned random initialization values following the radius rule as depicted in (7) and (8). The range of input values, on the other hand, will differ for each dataset used.…”
Section: Proposed Radius Initial Weight (RIW)
Citation type: mentioning
confidence: 99%
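The radius rule cited as equations (7) and (8) is not reproduced in this record, so the snippet below is only a hypothetical illustration of the general idea of radius-bounded random weight initialization for a SpikeProp layer; the function name, the fan-in scaling, and the interval are assumptions, not the RIW rule of the citing paper.

```python
import random

def radius_bounded_init(n_pre, n_post, radius_scale=1.0):
    # Hypothetical helper (not the paper's equations (7)-(8)): draw each initial
    # synaptic weight uniformly from [0, r], where the radius r shrinks with the
    # fan-in so that post-synaptic neurons are neither silent nor saturated.
    r = radius_scale / max(n_pre, 1)  # assumed scaling, for illustration only
    return [[random.uniform(0.0, r) for _ in range(n_post)] for _ in range(n_pre)]

# Example: a 50-input, 10-output SpikeProp layer with radius-bounded random weights.
weights = radius_bounded_init(n_pre=50, n_post=10)
```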
“…This best value is a global best and called gbest. When a particle takes part of the population as its topological neighbors, the best value is a local best and is called lbest [12].…”
Section: Particle Swarm Optimization Algorithm
Citation type: mentioning
confidence: 99%
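Because this excerpt turns on the gbest/lbest distinction, a small generic PSO sketch may help make it concrete. It is not the resource-selection algorithm from the indexed paper; the sphere objective, inertia weight w, acceleration coefficients c1 and c2, and the ring neighbourhood are all assumed values for illustration.

```python
import random

def sphere(x):
    # Toy objective: sum of squares, minimised at the origin (assumed test function).
    return sum(xi * xi for xi in x)

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        neighbourhood=None):
    # neighbourhood=None -> gbest PSO (the whole swarm is the neighbourhood);
    # neighbourhood=k    -> lbest PSO with a ring of k neighbours on each side.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]

    for _ in range(iters):
        for i in range(n_particles):
            # Social attractor: best personal best over the whole swarm (gbest)
            # or over the particle's ring neighbours only (lbest).
            if neighbourhood is None:
                candidates = range(n_particles)
            else:
                candidates = [(i + d) % n_particles
                              for d in range(-neighbourhood, neighbourhood + 1)]
            best_j = min(candidates, key=lambda j: pbest_val[j])
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (pbest[best_j][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]

    best = min(range(n_particles), key=lambda j: pbest_val[j])
    return pbest[best], pbest_val[best]

if __name__ == "__main__":
    print(pso(sphere))                   # gbest variant
    print(pso(sphere, neighbourhood=2))  # lbest variant (ring topology)
```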