2018
DOI: 10.1007/s10107-018-1253-9

A parallelizable augmented Lagrangian method applied to large-scale non-convex-constrained optimization problems

Cited by 25 publications (17 citation statements)
References 43 publications
“…After uncovering a collection of theoretical properties on the asymptotic behavior of the inner PCG procedure for MCCFBN problems, we provide extensive computational experiments with linear and quadratic instances of up to one billion arcs and 200 and five million nodes in each subset of the node partition. This is in line with the growing effort to address the design of scalable algorithmic solutions for large-scale optimization [5,9,12,13]. In fact, the presented results provide both theoretical and computational support for the scalability of the specialized IPM for MCCFBN problems.…”
Section: Mathematics Subject Classification (supporting)
confidence: 79%
“…For the computational executions we considered the following algorithms implemented in CPLEX 12.5. While the barrier algorithm can be applied both to linear and (quadratic) convex instances, the remaining solvers are only applied to linear instances. Thus, the barrier algorithm can be regarded as the current benchmark for general MCCFBN problems.…”
Section: Computational Experiments (mentioning)
confidence: 99%
“…For example, the Hestenes-Powell-Rockafellar augmented Lagrangian [1][2][3], the cubic augmented Lagrangian [4], Mangasarian's augmented Lagrangian [5,6], the exponential penalty function [7,8], the log-sigmoid Lagrangian [9], modified barrier functions [8,10], the p-th power augmented Lagrangian [11], and nonlinear augmented Lagrangian functions [12][13][14][15]. Other related discussions of augmented Lagrangians for special classes of constrained optimization include second-order cone programming [16,17], semidefinite programming [18][19][20], cone programming [21][22][23], semi-infinite programming [24,25], min-max programming [26], distributed optimization [27], mixed integer programming [28], stochastic mixed-integer programs [29], generalized Nash equilibrium problems [30], quasi-variational inequalities [31], composite convex programming [32], and sparse discrete problems [33]. Duality theory is closely related to the perturbation of the primal problem.…”
Section: Introduction (mentioning)
confidence: 99%
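For orientation, here is a minimal sketch of the Hestenes-Powell-Rockafellar augmented Lagrangian named first in the excerpt, written for a generic inequality-constrained problem; the notation (f, g_i, r, lambda) is assumed for illustration and is not taken from the cited works.

% Hestenes-Powell-Rockafellar augmented Lagrangian for
%   min f(x)  s.t.  g_i(x) <= 0,  i = 1,...,m   (generic notation, assumed)
\[
  L_r(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2r}\sum_{i=1}^{m}
  \Bigl( \max\{0,\ \lambda_i + r\,g_i(x)\}^2 \;-\; \lambda_i^2 \Bigr),
  \qquad r > 0 .
\]
% The other variants listed in the excerpt (exponential, log-sigmoid,
% p-th power, ...) replace this quadratic kernel with different penalty terms.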
“…Augmented Lagrange multipliers are an important concept in duality theory. Their existence is important for the global convergence analysis of primal-dual type algorithms based on the use of augmented Lagrangians [7,19,29,32,33]. In addition, augmented Lagrange multipliers are closely related to saddle points, the zero duality gap property, and exact penalty representation.…”
Section: Introduction (mentioning)
confidence: 99%
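To make the saddle-point connection in the excerpt concrete, a brief sketch in standard (assumed) notation: when an augmented Lagrange multiplier lambda* exists for penalty parameter r, it forms, together with a primal solution x*, a saddle point of the augmented Lagrangian L_r, which in turn yields the zero duality gap property for the augmented dual.

% Saddle-point characterization (standard textbook form; notation assumed,
% not taken from the cited works)
\[
  L_r(x^\ast,\lambda) \;\le\; L_r(x^\ast,\lambda^\ast) \;\le\; L_r(x,\lambda^\ast)
  \qquad \text{for all } x,\ \lambda,
\]
\[
  \text{so that}\qquad
  \sup_{\lambda}\,\inf_{x}\,L_r(x,\lambda)
  \;=\;\inf_{x}\,\sup_{\lambda}\,L_r(x,\lambda)
  \;=\; f(x^\ast).
\]
% I.e. the augmented dual problem has no duality gap and its optimal value
% is attained at lambda*.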
“…Although the relaxed problem is convex, the resulting Lagrangian relaxation subproblem is usually nonconvex and as difficult to solve as the original problem. To obtain a tighter relaxation with convex subproblems, augmented Lagrangian relaxation [9,36] can be used, at the expense of introducing nonlinear terms. Moreover, both traditional Lagrangian relaxation and the augmented version usually result in nonsmooth problems.…”
Section: Introduction (mentioning)
confidence: 99%
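As a minimal illustration of the trade-off described in the excerpt, using generic notation for coupling constraints Ax = b (assumed here, not taken from the citing paper):

% Ordinary vs. augmented Lagrangian relaxation of coupling constraints Ax = b
% in  min { f(x) : Ax = b, x in X }   (generic notation, assumed)
\[
  \text{Lagrangian relaxation:}\qquad
  \phi(\lambda) \;=\; \min_{x\in X}\ f(x) + \lambda^{\top}(Ax - b),
\]
\[
  \text{augmented Lagrangian relaxation:}\qquad
  \phi_\rho(\lambda) \;=\; \min_{x\in X}\ f(x) + \lambda^{\top}(Ax - b)
  \;+\; \tfrac{\rho}{2}\,\lVert Ax - b\rVert_2^2 .
\]
% The quadratic term tightens the bound but is the nonlinear term noted in
% the excerpt: it couples the variable blocks, and both dual functions
% phi and phi_rho are in general nonsmooth.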