2009 IEEE International Symposium on Parallel & Distributed Processing
DOI: 10.1109/ipdps.2009.5161106

Linear optimization on modern GPUs

Abstract: Optimization algorithms are becoming increasingly important in many areas, such as finance and engineering. Typically, real problems involve several hundred variables and are subject to as many constraints. Several methods have been developed to reduce the theoretical time complexity. Nevertheless, when problems exceed reasonable sizes they end up being very computationally intensive. Heterogeneous systems that couple commodity CPUs and GPUs are becoming relatively cheap, highly perfo…

Cited by 33 publications (21 citation statements). References 9 publications.
“…Spampinato et al. [31] have proposed a parallel implementation of the revised simplex method based on the NVIDIA CUBLAS and LAPACK libraries, with a maximum speedup of 2.5 on a GTX 280 GPU versus a sequential CPU implementation on an Intel Core2 Quad at 2.83 GHz, for randomly generated LP problems of size 2000x2000. In [32] another GPU implementation of the revised simplex method was proposed, which achieves speedups of up to 18 in single precision on a GeForce 9600 GT card compared with the GLPK solver run on an Intel Core 2 Duo 3 GHz CPU.…”
Section: Related Work
confidence: 99%
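The quoted passage describes revised simplex implementations built on dense cuBLAS kernels. Neither the quote nor this page includes code; the sketch below is a hypothetical illustration of the kind of cuBLAS pricing step such implementations rely on, computing the reduced costs d = c_N - N^T y as a single dense matrix-vector product on the GPU. The function name, problem sizes, and column-major layout are assumptions for illustration and are not taken from [31] or [32]; only the cuBLAS/CUDA runtime calls themselves are real API.

```c
/* Minimal sketch of a GPU pricing step for the revised simplex method,
 * using the cuBLAS host API. Error checking is omitted for brevity. */
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <stdio.h>

/* Compute reduced costs d = c_N - N^T * y on the device.
 * d_N : m x n block of nonbasic columns (column-major), device pointer
 * d_y : dual vector of length m, device pointer
 * d_d : on entry holds a copy of c_N (length n), on exit the reduced costs */
static void pricing_step(cublasHandle_t handle, int m, int n,
                         const float *d_N, const float *d_y, float *d_d)
{
    const float alpha = -1.0f;  /* subtract N^T * y ...              */
    const float beta  =  1.0f;  /* ... from the c_N already in d_d   */
    /* d_d = beta * d_d + alpha * N^T * y */
    cublasSgemv(handle, CUBLAS_OP_T, m, n, &alpha, d_N, m, d_y, 1, &beta, d_d, 1);
}

int main(void)
{
    const int m = 4, n = 3;               /* tiny toy sizes for illustration */
    float N[4 * 3], y[4], c[3];
    for (int i = 0; i < m * n; ++i) N[i] = (float)(i % 5) * 0.5f;
    for (int i = 0; i < m; ++i)     y[i] = 1.0f;
    for (int j = 0; j < n; ++j)     c[j] = 2.0f;

    float *d_N, *d_y, *d_d;
    cudaMalloc((void **)&d_N, sizeof(N));
    cudaMalloc((void **)&d_y, sizeof(y));
    cudaMalloc((void **)&d_d, sizeof(c));
    cudaMemcpy(d_N, N, sizeof(N), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, sizeof(y), cudaMemcpyHostToDevice);
    cudaMemcpy(d_d, c, sizeof(c), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    pricing_step(handle, m, n, d_N, d_y, d_d);

    float d[3];
    cudaMemcpy(d, d_d, sizeof(d), cudaMemcpyDeviceToHost);
    for (int j = 0; j < n; ++j)
        printf("reduced cost %d: %f\n", j, d[j]);

    cublasDestroy(handle);
    cudaFree(d_N); cudaFree(d_y); cudaFree(d_d);
    return 0;
}
```

A full revised simplex iteration also needs the ratio test and a basis-inverse (or factorization) update; this sketch covers only the dense pricing product, which is the part that maps most directly onto a single BLAS call.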
“…Thus, we can perform parallel optimization tasks on co-processors concurrently with query processing without slowing down the DBMS. Other communities already use co-processors successfully as accelerators for their optimization problems [12,25,32,34,37]. This is a strong indicator that DBMSs can also benefit from parallel optimization on co-processors, as has already been shown for the selectivity estimation problem [2][3][4][21].…”
Section: Introduction
confidence: 92%
“…Currently, GPUs are only used for query processing and selectivity estimation in DBMSs. Other communities use GPUs for a variety of optimization problems, such as ant colony optimization [12], fast circuit optimization [25], knapsack optimization [32], linear optimization [34], genetic algorithms [35], or particle swarm optimization [37].…”
Section: GPU-accelerated Optimization
confidence: 99%
“…Our framework can be applied at the price of a higher complexity (up to several hours). One may note that the analysis of the BnB subdivided spaces, the LP solver, and the MBM can all be parallelized [32], [33], [34]. Therefore, if necessary, an appropriate GPU implementation could greatly accelerate the processing speed.…”
Section: Real Data
confidence: 99%