2014
DOI: 10.1007/s11750-014-0326-z

A variable smoothing algorithm for solving convex optimization problems

Abstract: In this article we propose a method for solving unconstrained optimization problems with convex and Lipschitz continuous objective functions. By making use of the Moreau envelopes of the functions occurring in the objective, we smooth the latter to a convex and differentiable function with Lipschitz continuous gradient by using both variable and constant smoothing parameters. The resulting problem is solved via an accelerated first-order method and this allows us to recover approximately the optimal solutions …
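
For context on this record: the Moreau-envelope smoothing the abstract refers to is the standard construction; the notation below is mine and is not copied from the paper. For a proper, convex, lower semicontinuous function g and a smoothing parameter µ > 0,

$$
{}^{\mu}g(x) \;=\; \min_{y}\Big\{\, g(y) + \tfrac{1}{2\mu}\lVert x - y\rVert^{2} \,\Big\},
\qquad
\nabla\big({}^{\mu}g\big)(x) \;=\; \tfrac{1}{\mu}\big(x - \operatorname{prox}_{\mu g}(x)\big),
$$

and the gradient of ${}^{\mu}g$ is (1/µ)-Lipschitz. This is what makes an accelerated first-order method applicable to the smoothed objective; letting µ shrink along the iterations (the "variable" choice) steers the iterates back toward the original nonsmooth problem.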

Cited by 20 publications (7 citation statements)
References 28 publications

“…One should notice that, since the smoothing parameters are constant, (DS) solves continuously differentiable approximations of (5.5) and (5.6) and does therefore not necessarily converge to the unique minimizers of (5.3) and (5.4). As a second smoothing algorithm, we considered the variable smoothing technique (VS) in [8], which successively reduces the smoothing parameter in each iteration and therefore solves the primal optimization problems as the iteration counter increases. We further considered the primal-dual hybrid gradient method (PDHG) as discussed in [26], which is nothing else than the primal-dual method in [15].…”
Section: Algorithm 4.2 (mentioning)
confidence: 99%
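
To make the constant-versus-variable distinction in the quoted passage concrete, here is a minimal Python sketch of Moreau-envelope smoothing on a toy problem. It is mine, not code from [8] or the paper above; all names are illustrative, and it uses plain gradient steps rather than the accelerated scheme. It minimizes ||x - b||_1 + (lam/2)||x||^2 once with a constant smoothing parameter and once with a parameter decreasing to zero.

import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1 evaluated at v
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smoothed_grad(x, b, lam, mu):
    # gradient of the mu-Moreau envelope of g(x) = ||x - b||_1, i.e.
    # (x - prox_{mu*g}(x)) / mu, plus the gradient of the smooth term (lam/2)*||x||^2
    prox = b + soft_threshold(x - b, mu)
    return (x - prox) / mu + lam * x

def run(mu_schedule, b, lam=1.0, iters=2000):
    x = np.zeros_like(b)
    for k in range(iters):
        mu = mu_schedule(k)
        L = lam + 1.0 / mu                    # Lipschitz constant of the smoothed gradient
        x = x - smoothed_grad(x, b, lam, mu) / L
    return x

b = np.array([2.0, -3.0, 0.5])
x_constant = run(lambda k: 0.1, b)            # constant smoothing: converges to a minimizer of the smoothed surrogate
x_variable = run(lambda k: 1.0 / (k + 1), b)  # variable smoothing: mu_k -> 0, approaches the nonsmooth minimizer

With the constant schedule the limit point solves only the Huber-smoothed approximation, mirroring the remark about (DS); with the decreasing schedule the iterates approach the minimizer of the original nonsmooth objective, mirroring (VS).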
“…Notice that in this case the function f + h = h is γ-strongly convex with γ = λ_min, where λ_min is the smallest eigenvalue of the matrix K. Due to the continuity of the functions g_i, i = 1, ..., n, the qualification condition required in Theorem 15 is guaranteed. We solved (46) by Algorithm 14 and used for µ > 0 the following formula (see [8]). As initial choices in Algorithm 14 we took τ_0 = 0.99 · 2γ/‖K‖, λ = ‖K‖ + 1 and σ_{i,0} = (1 + τ_0(2γ − ‖K‖τ_0)/λ)/(n τ_0 ‖K‖²), i = 1, ..., n, and tested different combinations of the kernel parameter σ over a fixed number of 1500 iterations. In Table 2 we present the misclassification rate in percentage for the training and for the test data (the error for the training data is less than the one for the test data).…”
Section: Support Vector Machines Classification (mentioning)
confidence: 99%
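
The initial parameter choices quoted above can be made concrete with a small numpy sketch. This is only my reading of the (imperfectly extracted) formulas; the helper initial_parameters and the synthetic kernel matrix are illustrative and do not come from the cited paper.

import numpy as np

def initial_parameters(K, n):
    # gamma: strong-convexity modulus, taken as the smallest eigenvalue of the
    # (symmetric, positive definite) kernel matrix K, as in the quoted excerpt
    gamma = np.linalg.eigvalsh(K).min()
    normK = np.linalg.norm(K, 2)              # operator norm ||K||
    tau0 = 0.99 * 2.0 * gamma / normK         # my reading of tau_0 = 0.99 * 2*gamma/||K||
    lam = normK + 1.0                         # my reading of lambda = ||K|| + 1
    sigma0 = (1.0 + tau0 * (2.0 * gamma - normK * tau0) / lam) / (n * tau0 * normK**2)
    return tau0, lam, sigma0

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
K = X @ X.T + 0.1 * np.eye(5)                 # small synthetic Gram matrix, positive definite
print(initial_parameters(K, n=5))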
“…A similar technique is also proposed in [55], but only for symmetric primal-dual methods. It is also used in conjunction with Nesterov's smoothing technique in [10] for unconstrained problems but had only an O(ln(k)/k) convergence rate.…”
mentioning
confidence: 99%