2012
DOI: 10.1137/11082960x

A Stable and Efficient Method for Solving a Convex Quadratic Program with Application to Optimal Control

Abstract. A method is proposed for reducing the cost of computing search directions in an interior point method for a quadratic program. The KKT system is partitioned and modified, based on the ratios of the slack variables and dual variables associated with the inequality constraints, to produce a smaller, approximate linear system. Analytical and numerical results are included that suggest the distribution of eigenvalues of the new, approximate system matrix is improved, which makes it more amenable to being …
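The abstract only outlines the construction, so the NumPy sketch below illustrates the general idea rather than the paper's exact method: for a convex QP min 0.5 x'Hx + c'x subject to Ax >= b, the condensed KKT matrix H + A' diag(z/s) A is approximated by keeping only the constraints whose dual-to-slack ratio z_i/s_i is large. The threshold ratio_tol, the partitioning rule, and the way the small-ratio block is dropped are assumptions made for illustration.

```python
import numpy as np

def reduced_newton_step(H, A, c, b, x, s, z, mu, ratio_tol=1e-2):
    """Hedged sketch of an approximate interior-point step for
        min 0.5 x'Hx + c'x   s.t.   A x >= b,
    with slacks s = A x - b > 0 and duals z > 0.

    The full condensed KKT matrix is H + A' diag(z/s) A.  Constraints with a
    small ratio z_i/s_i contribute little, so (as an illustrative
    approximation, not the paper's exact modification) only the large-ratio
    rows of A are kept when forming the coefficient matrix.
    """
    ratio = z / s
    keep = ratio > ratio_tol                      # "nearly active" constraints

    # Residuals of the perturbed KKT conditions.
    r_d = H @ x + c - A.T @ z                     # dual feasibility
    r_p = A @ x - b - s                           # primal feasibility
    r_c = s * z - mu                              # perturbed complementarity

    # Right-hand side of the condensed system (all constraints kept here).
    rhs = -r_d - A.T @ (r_c / s) - A.T @ (ratio * r_p)

    # Smaller, approximate coefficient matrix built from the kept block only.
    A_keep = A[keep]
    M = H + A_keep.T @ (ratio[keep][:, None] * A_keep)

    # In practice this system would be solved iteratively (e.g. CG), which is
    # where the improved eigenvalue distribution mentioned in the abstract helps.
    dx = np.linalg.solve(M, rhs)
    return dx, keep
```

Near a solution the dropped ratios z_i/s_i tend to zero for inactive constraints, so the reduced matrix stays close to the full one while being cheaper to form and, per the abstract's claim, better suited to iterative solution.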

Cited by 19 publications (14 citation statements)
References 24 publications (39 reference statements)

Citation statements (ordered by relevance):

“…This example is taken from . Consider a system of four equal masses connected by springs and to walls at the ends, see Figure .…”
Section: Examples (mentioning, confidence 99%)
“…NLP (5) incorporates both integration and optimization, which potentially opens the possibility of accelerating both subproblems. Primal-dual interior-point algorithms have proven their efficiency for the numerical solution of optimal control problems [28]. Moreover, with interior-point algorithms, the structure of the KKT matrix associated with the OCP remains fixed (unlike with active set methods), which is desirable for hardware implementations [16].…”
Section: Nonlinear Predictive Control Algorithms (mentioning, confidence 99%)
“…For example, with inexact Newton methods the resulting linear equations are only solved approximately using iterative solvers, such as the conjugate gradient method. This might result in more Newton steps, but since each iteration is more efficient than for an exact Newton method with Cholesky factorizations, the overall time taken by an inexact method might be less than that for an exact method [12].…”
Section: B. Matching Algorithms and Hardware (mentioning, confidence 99%)
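The excerpt above contrasts exact Newton steps (direct Cholesky-based solves) with inexact steps computed by an iterative solver. The sketch below is a minimal illustration of that trade-off; the test matrix, sizes, and tolerance are invented for the example and are not taken from the cited works.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def cg(matvec, b, tol=1e-2, maxiter=50):
    """Bare-bones conjugate gradient, stopped at a loose tolerance to mimic
    the 'inexact Newton' idea: cheaper per step, less accurate directions."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative SPD Newton system K dx = -g (stand-in data, not from the papers).
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 200))
K = J @ J.T + 200 * np.eye(200)
g = rng.standard_normal(200)

dx_exact = cho_solve(cho_factor(K), -g)          # exact step via Cholesky
dx_inexact = cg(lambda v: K @ v, -g, tol=1e-2)   # approximate step via a few CG iterations
print(np.linalg.norm(dx_exact - dx_inexact))     # gap between the two directions
```

An inexact direction like dx_inexact may force a few extra outer Newton iterations, but each one avoids a full factorization, which is the trade-off the excerpt describes.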
“…The decision variables (x_s and x_h) for those problems can be both continuous (discretization frequency, termination tolerance, chip clock frequency) and discrete (parallelization level, pipeline depth, model order), which makes solving the optimization problem non-trivial. Furthermore, derivative information is not always available for all objective functions of (12), or this information might be unreliable. Evaluation of an objective function itself can be a time-consuming task in some cases.…”
Section: Consider a Cost Function (mentioning, confidence 99%)