2016
DOI: 10.1016/j.compchemeng.2015.10.010
An augmented Lagrangian interior-point approach for large-scale NLP problems on graphics processing units

Abstract: The demand for fast solution of nonlinear optimization problems, coupled with the emergence of new concurrent computing architectures, drives the need for parallel algorithms to solve challenging nonlinear programming (NLP) problems. In this paper, we propose an augmented Lagrangian interior-point approach for general NLP problems that solves in parallel on a graphics processing unit (GPU). The algorithm is iterative at three levels. The first level replaces the original problem by a sequence of bound-constrai…

Cited by 26 publications (21 citation statements)
References 29 publications
“…Moreover, in connection with paper E, an augmented Lagrangian interior-point approach is presented in Cao et al (2016), where the linear KKT system is solved using a preconditioned conjugate gradient method, which is implemented efficiently in parallel on a Graphics Processing Unit (GPU). For sufficiently sparse problems, one possible future work is to distribute the computations of the preconditioned conjugate gradient used for solving the KKT system over the clique tree of the problem, so that parallelism can be exploited.…”
Section: Concluding Remarks and Future Work
confidence: 99%
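The statement above refers to solving the linear KKT system with a preconditioned conjugate gradient (PCG) method. As a minimal illustration of the technique only (a dense NumPy sketch with a Jacobi preconditioner, not the paper's sparse GPU implementation), PCG on a symmetric positive definite system looks like this:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=100):
    """Preconditioned conjugate gradient for A x = b, with A symmetric
    positive definite. M_inv applies the inverse preconditioner to a
    residual vector (here: Jacobi, i.e. division by diag(A))."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    z = M_inv(r)           # preconditioned residual
    p = z.copy()           # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # conjugate direction update
        rz = rz_new
    return x

# Small SPD example with a Jacobi (diagonal) preconditioner
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

The GPU parallelism discussed in the citing work comes from the fact that each PCG iteration is dominated by a sparse matrix–vector product and vector updates, both of which parallelize well.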
“…Other approaches to improve the online applicability of optimization-based methods are based on making the computational process faster. This could be achieved by improving the computational performance of the underlying methods, e.g., by solving the linear system of equations arising at each step of many iterative optimization solvers faster [59], or by not fully solving the optimization problem at each step when a sequence of slightly changing problems is computed. Solutions obtained with sequential quadratic programming methods can in some cases be of sufficient quality for feedback control after just one iteration [55].…”
Section: Computational Aspects of Optimization
confidence: 99%
“…Other approaches to improve online applicability of optimization-based methods are based on making the computational process faster. This could be achieved by improving the computational performance of the underlying methods, e.g., obtaining a solution of the linear system of equations arising at each step of many iterative optimization solvers faster (Cao et al, 2016), or by not fully solving the optimization problem at each step when a sequence of slightly changing problems is computed. For interior-point methods, this could be achieved by early termination based on a small maximum number of iterations, or by not updating the barrier penalty parameter, which also allows taking advantage of warm starting the solver (Wang and Boyd, 2010).…”
Section: Computational Aspects of Optimization
confidence: 99%
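The early-termination and warm-starting ideas mentioned above can be sketched in a toy setting. The following is an illustrative example only (not the cited authors' method): a few damped Newton steps on a one-dimensional log-barrier subproblem with a fixed barrier parameter, warm-started across a sequence of slightly changing problems.

```python
def barrier_newton(x, mu, grad_f, hess_f, max_iter=3):
    """A few damped Newton steps on f(x) - mu*log(x), i.e. the barrier
    subproblem for min f(x) s.t. x >= 0, with a FIXED barrier parameter
    mu. Early termination: stop after max_iter iterations instead of
    solving to optimality."""
    for _ in range(max_iter):
        g = grad_f(x) - mu / x        # gradient of barrier objective
        h = hess_f(x) + mu / x**2     # Hessian of barrier objective
        step = g / h
        while x - step <= 0:          # damp to stay strictly feasible
            step *= 0.5
        x -= step
    return x

# Sequence of slightly changing problems: min (x - t)^2 s.t. x >= 0.
# Each solve is warm-started from the previous (approximate) solution.
x = 1.0  # strictly feasible start
for t in [2.0, 2.1, 2.2]:
    x = barrier_newton(x, mu=1e-3,
                       grad_f=lambda x, t=t: 2 * (x - t),
                       hess_f=lambda x: 2.0)
```

Because consecutive problems differ only slightly, the previous approximate solution is a good starting point, so a small fixed iteration budget still tracks the optimum closely.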