2016
DOI: 10.3934/jimo.2016.12.1349

A subgradient-based convex approximations method for DC programming and its applications

Abstract: We consider an optimization problem that minimizes a function of the form f = f_0 + f_1 − f_2 subject to the constraint g − h ≤ 0, where f_0 is continuously differentiable, f_1 and f_2 are convex, and g and h are lower semicontinuous convex functions. We propose to solve the problem by an inexact subgradient-based convex approximations method. Under mild assumptions, we show that the method is guaranteed to converge to a stationary point. Finally, some preliminary numerical results are given.
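The abstract does not spell out the iteration, so, as orientation only, here is a minimal Python sketch of the standard DC convexification idea the title suggests: at each iterate, the subtracted convex parts f_2 and h are replaced by subgradient linearizations, and the resulting convex subproblem is solved. All functions, the solver choice, and the tolerances below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative DC problem (not from the paper):
#   minimize  f0(x) + f1(x) - f2(x)
#   subject to g(x) - h(x) <= 0
# with f0 smooth and f1, f2, g, h convex.
f0 = lambda x: 0.5 * np.sum(x**2)          # smooth convex
f1 = lambda x: np.sum(np.abs(x))           # convex, nonsmooth
f2 = lambda x: np.sum(np.maximum(x, 0.0))  # convex; subtracted -> DC objective
g  = lambda x: np.sum((x - 1.0)**2)        # convex
h  = lambda x: np.sum(np.abs(x - 1.0))     # convex; subtracted -> DC constraint

def sub_f2(x):   # one valid subgradient of f2 at x
    return (x > 0).astype(float)

def sub_h(x):    # one valid subgradient of h at x
    return np.sign(x - 1.0)

def dc_convex_approx(x0, iters=50, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        s_f, s_h = sub_f2(x), sub_h(x)
        # Convex surrogate: linearize the subtracted parts at the current x.
        surrogate = lambda y: f0(y) + f1(y) - (f2(x) + s_f @ (y - x))
        # SLSQP expects "ineq" constraints of the form fun(y) >= 0.
        con = {"type": "ineq",
               "fun": lambda y: -(g(y) - (h(x) + s_h @ (y - x)))}
        res = minimize(surrogate, x, constraints=[con], method="SLSQP")
        if np.linalg.norm(res.x - x) <= tol:
            return res.x
        x = res.x
    return x

print(dc_convex_approx(np.array([2.0, -1.5])))
```

Each subproblem is convex because only the concave (subtracted) terms are linearized; in the paper's inexact variant these subproblems would be solved only approximately, which this sketch does not model.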

Cited by 2 publications (1 citation statement) · References 21 publications
“…We denote the optimal value of (1) by f* and the optimal solution set by X*. The nonlinear conjugate gradient method is one of the effective algorithms for solving (1); recent results demonstrate its satisfactory performance under suitable conditions, where the search direction not only satisfies the sufficient descent condition but also belongs to a trust region; see [28,29,27,7]. The proximal-like bundle method is another promising and efficient algorithm for nonsmooth optimization problems. Its convergence can be very rapid compared with the conjugate gradient method, and, unlike the nonlinear conjugate gradient method, it is less strict about accepting a candidate as a useful search direction, since it is concerned only with the descent of the objective function.…”
mentioning
confidence: 99%
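For readers unfamiliar with the first method the quote contrasts, below is a minimal Python sketch of a nonlinear conjugate gradient iteration (the PRP+ variant with an Armijo backtracking line search). It is illustrative only: the specific methods cited in [28,29,27,7] add stronger safeguards (sufficient descent and trust-region containment of the direction), which this sketch replaces with a simple restart rule.

```python
import numpy as np

def ncg_prp(f, grad, x0, iters=200, tol=1e-6):
    """Nonlinear conjugate gradient, PRP+ variant, with Armijo
    backtracking -- an illustrative sketch, not the cited methods."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                        # first direction: steepest descent
    for _ in range(iters):
        if np.linalg.norm(g) <= tol:
            break
        # Armijo backtracking line search along the descent direction d.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        # Polak-Ribiere-Polyak beta, truncated at 0 (PRP+).
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0:        # safeguard: restart if d is not descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Usage on a smooth test function (chosen arbitrarily for illustration):
f = lambda x: 0.5 * x @ x + np.sum(np.cos(x))
grad = lambda x: x - np.sin(x)
print(ncg_prp(f, grad, np.array([3.0, -2.0])))
```

The restart rule guarantees every line search starts from a descent direction, which is a weaker property than the sufficient-descent and trust-region conditions the quoted statement attributes to the cited variants.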