2012
DOI: 10.1007/978-3-642-31125-3_12
Swarm Grid: A Proposal for High Performance of Parallel Particle Swarm Optimization Using GPGPU

Cited by 11 publications (5 citation statements)
References 7 publications
“…For example, since IDA-MCS can concentrate samples on the peak area, triangulation of these points produces a mesh with high density in the peak area, which needs more computation, and low density in the flat area, which needs less computation [12].…”
Section: Discussion (mentioning)
confidence: 99%
“…Particle Swarm Optimization is an optimization method that scans the search space with a group of candidate solutions called particles, and these particles are well suited to parallelization. Particle Swarm Optimization has been parallelized effectively with GPU computing [1][2][3][4][5][6], with applications in computer science [7][8][9][10][11][12][13][14], finance [15,16], physics [17], biology [18], etc.…”
Section: Introduction (mentioning)
confidence: 99%
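As a concrete illustration of the parallelization pattern described in this excerpt (not the Swarm Grid scheme proposed in the cited paper), the following CUDA sketch maps one thread to one particle of a one-dimensional swarm; the kernel name psoUpdate and the coefficients w, c1, c2 are illustrative assumptions.

// Minimal sketch: one CUDA thread updates one particle of a 1-D swarm.
#include <cuda_runtime.h>

__global__ void psoUpdate(float *x, float *v, const float *pbest,
                          float gbest, const float *r1, const float *r2,
                          float w, float c1, float c2, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per particle
    if (i >= n) return;

    // Standard PSO velocity update: inertia + cognitive + social terms.
    v[i] = w * v[i]
         + c1 * r1[i] * (pbest[i] - x[i])
         + c2 * r2[i] * (gbest - x[i]);
    x[i] += v[i];                                    // move the particle
}

// Host-side launch, one thread per particle (allocation and error checks omitted):
// psoUpdate<<<(n + 255) / 256, 256>>>(dX, dV, dPbest, gbest, dR1, dR2,
//                                     0.72f, 1.49f, 1.49f, n);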
“…YCoCg [19]. The chroma channels Co and Cg are then subsampled by a user-defined factor N, and their pixel values are normalized to the range [0, 255] by adding 127 to each element.…”
Section: The CVC algorithm for video compression (unclassified)
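For reference, a hedged CUDA sketch of the chroma step described in this excerpt: nearest-neighbour subsampling of one chroma plane by a user factor N, followed by the +127 shift into [0, 255]. The kernel name, the memory layout, and the assumption that the input chroma values are roughly centred on zero are illustrative, not the CVC implementation.

// Sketch: subsample one chroma plane (Co or Cg) by factor N and shift by +127.
__global__ void subsampleChroma(const float *chroma, unsigned char *out,
                                int width, int height, int N)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // output column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // output row
    int outW = width / N, outH = height / N;
    if (x >= outW || y >= outH) return;

    // Take one sample per N x N block (nearest-neighbour subsampling),
    // then shift the zero-centred chroma value into [0, 255].
    float c = chroma[(y * N) * width + (x * N)] + 127.0f;
    out[y * outW + x] = (unsigned char)fminf(255.0f, fmaxf(0.0f, c));
}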
“…Executing the same program for each data element places low demands on sophisticated flow control. Moreover, executing the program on a large number of data elements yields high arithmetic intensity, so memory access latency is hidden by computation rather than by large data caches [19]. Data-parallel processing is implemented by assigning the computation for each data element to a parallel processing thread.…”
Section: General Purpose GPU Computing (GPGPU) (mentioning)
confidence: 99%
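The element-wise mapping described here can be pictured with a minimal CUDA kernel in which every thread executes the same program on a different data element; the SAXPY-style example below is a generic illustration of that pattern, not code from the cited work.

#include <cuda_runtime.h>

// Every thread runs the same program on a different element, so little
// flow control is needed and arithmetic can hide memory latency.
__global__ void saxpy(float a, const float *x, const float *y,
                      float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global element index
    if (i < n)
        out[i] = a * x[i] + y[i];                    // same code, different data
}

// Launch enough blocks to cover all n elements:
// saxpy<<<(n + 255) / 256, 256>>>(2.0f, dX, dY, dOut, n);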