1981
DOI: 10.1007/bf00935173
A minimization method for the sum of a convex function and a continuously differentiable function

Cited by 58 publications (46 citation statements)
References 19 publications
“…Problems of minimizing the sum of a convex function and a continuously differentiable function cover a fairly wide range of problems which are encountered in practice, see [19,31]. The form of the problem is as follows…”
Section: Applications and Base Implementations
confidence: 99%
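The quoted passage breaks off before stating the problem. Based on the paper's title, the intended form is presumably the composite model below (the symbols g and h are generic names chosen here, not taken from the quote):

```latex
\min_{x \in \mathbb{R}^n} \; f(x) = g(x) + h(x),
\quad g \text{ convex (possibly nondifferentiable)},
\quad h \text{ continuously differentiable.}
```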
“…Many researchers are involved in developing algorithms for minimizing the sum of different kinds of functions, and this is a very active field. For instance: the sum of two convex functions which are continuously differentiable with Lipschitz continuous gradients, see [13,24]; the sum of two nonsmooth convex functions [21,22], or the sum of two separable convex functions with different variables, see [5,11]; the sum of a nonsmooth convex function and a continuously differentiable function which is convex, see [2,42], or not convex, see [19,31]; the sum of a convex function and a concave function, see [36,43]; and so on. The central interest in these papers is exploiting the separable structure of the objective functions.…”
Section: Introduction
confidence: 99%
“…In [7] Fukushima and Mine adapted their original algorithm reported in [17] by adding a proximal term (ρ/2)‖x − x_k‖² to the objective of the convex optimisation subproblem. As a result they obtain an optimisation subproblem that is identical to the one in Step 2 of DCA, when one transforms (1) into (4) by adding (ρ/2)‖x‖² to each convex function.…”
Section: Preliminaries
confidence: 99%
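Concretely, the proximal-term modification described in the quote yields a subproblem of the following shape (a sketch using generic names: g for the convex part, h for the continuously differentiable part; these symbols are not taken from the quoted paper):

```latex
x_{k+1} \in \operatorname*{arg\,min}_{x}
\; g(x) + \nabla h(x_k)^{\top}(x - x_k)
+ \frac{\rho}{2}\,\lVert x - x_k \rVert^{2}.
```

The proximal term makes the subproblem objective strongly convex, so the minimizer is unique.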
“…The function in problem (1) belongs to two important classes of functions: the class of functions that can be decomposed as a sum of a convex function and a differentiable function (composite functions) and the class of functions that are representable as a difference of convex functions (DC functions). In 1981, Fukushima and Mine [7,17] introduced two algorithms to minimise a composite function. In both algorithms, the main idea is to linearly approximate the differentiable part of the composite function at the current point and then minimise the resulting convex function to find a new point.…”
Section: Introduction
confidence: 99%
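The linearise-then-minimise idea described in the quote can be sketched in a few lines. The sketch below assumes the common special case g(x) = λ‖x‖₁ with a proximal term (ρ/2)‖x − x_k‖², for which the convex subproblem has a closed-form soft-thresholding solution; the function names and the toy least-squares h are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearize_and_minimize(grad_h, x0, lam=0.1, rho=10.0, iters=200):
    """Sketch of the linearisation scheme described above, applied to
    min_x h(x) + lam * ||x||_1 with h continuously differentiable.
    Each step replaces h by its linear approximation at x_k, adds the
    proximal term (rho/2) * ||x - x_k||^2, and solves the resulting
    strongly convex subproblem in closed form via soft-thresholding."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = soft_threshold(x - grad_h(x) / rho, lam / rho)
    return x

# Toy usage: h(x) = 0.5 * ||A x - b||^2 (smooth least-squares part).
A = np.array([[3.0, 0.0], [0.0, 2.0]])
b = np.array([3.0, 0.05])
grad_h = lambda x: A.T @ (A @ x - b)
x_star = linearize_and_minimize(grad_h, np.zeros(2), lam=0.1, rho=10.0)
# The l1 term drives the weakly informative second coordinate to exactly 0.
```

Here rho is chosen larger than the Lipschitz constant of the gradient of h (which is 9 for this A), a standard sufficient condition for such schemes to descend.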
“…Problem (1.1) with P(x) = ‖x‖₁ and Problem (1.2) arise in many applications, including compressed sensing [9,13,24], signal/image restoration [5,19,23], data mining/classification [3,14,21], and parameter estimation [8,20]. There has been considerable discussion on the problem (1.1), see for instance [2,6,7,11,15]. If P is also smooth, then a coordinate gradient descent based on an Armijo-type rule was well developed for the unconditional minimization problem (1.1) in Karmanov [10, pp.…”
Section: Introduction
confidence: 99%