2020
DOI: 10.1007/s10915-020-01332-8
Variable Smoothing for Convex Optimization Problems Using Stochastic Gradients

Abstract: We aim to solve a structured convex optimization problem, where a nonsmooth function is composed with a linear operator. When opting for full splitting schemes, primal–dual type methods are usually employed, as they are effective and well studied. However, under the additional assumption of Lipschitz continuity of the nonsmooth function that is composed with the linear operator, we can derive novel algorithms through regularization via the Moreau envelope. Furthermore, we tackle large scale problems by me…
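The abstract describes smoothing the nonsmooth term by its Moreau envelope and then working with (stochastic) gradients of the smoothed composition. As a rough illustration only, the NumPy sketch below uses the hypothetical choice f = ||· − b||_1 composed with a matrix A, differentiates its Moreau envelope in closed form, and estimates the gradient of x ↦ f_μ(Ax) from a sampled minibatch of rows; the minibatch estimator and all parameter choices are illustrative assumptions, not the exact scheme of the paper.

import numpy as np

def prox_l1(z, t):
    # Soft thresholding: proximal map of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def grad_moreau(y, b, mu):
    # Gradient of the Moreau envelope f_mu of f(y) = ||y - b||_1:
    #   grad f_mu(y) = (y - prox_{mu f}(y)) / mu,
    # a (1/mu)-Lipschitz smooth surrogate for the nonsmooth f.
    return (y - b - prox_l1(y - b, mu)) / mu

def stochastic_smoothed_grad(A, b, x, mu, batch, rng):
    # Minibatch estimate of the gradient of x -> f_mu(A x): sample rows of A
    # and rescale so the estimator stays unbiased (valid here because the
    # l1-norm, and hence its Moreau envelope, is separable across components).
    m = A.shape[0]
    idx = rng.choice(m, size=batch, replace=False)
    return (m / batch) * A[idx].T @ grad_moreau(A[idx] @ x, b[idx], mu)

# Toy run: stochastic gradient steps on the smoothed surrogate of
# ||A x - b||_1, with the smoothing parameter mu_k driven to zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 50))
b = rng.standard_normal(500)
x = np.zeros(50)
L_A = np.linalg.norm(A, 2) ** 2            # ||A||^2 (squared spectral norm)
for k in range(1, 2001):
    mu_k = 1.0 / k                         # decreasing smoothing parameter
    step = mu_k / L_A                      # 1 / L_k with L_k = ||A||^2 / mu_k
    x -= step * stochastic_smoothed_grad(A, b, x, mu_k, 32, rng)
print("final l1 residual:", np.abs(A @ x - b).sum())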

Cited by 9 publications (4 citation statements); references 26 publications.
“…We also show that in case some of the involved functions are Lipschitz continuous our methods can be easily combined with the ones proposed in [10]. In [11] one can find a variable smoothing approach to minimize convex optimization problems with stochastic gradients, so that large scale problems can be addressed, where, different to our work, the Moreau-envelope, a special case of Nesterov smoothing, is used. In order to illustrate our theoretical achievements we consider applications in Logistics (Location Optimization), Medical Imaging (Tomography) and Machine Learning (Support Vector Machines) modelled as optimization problems that are iteratively solved via the algorithms we propose in this work.…”
Section: Introduction (mentioning, confidence: 95%)
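For background on the remark in the excerpt above that the Moreau envelope is a special case of Nesterov smoothing: for a proper, convex, lower semicontinuous f and μ > 0, the envelope has the equivalent primal and dual representations

\[
  f_\mu(y) \;=\; \min_{z}\Big\{ f(z) + \tfrac{1}{2\mu}\|z - y\|^2 \Big\}
          \;=\; \max_{p}\Big\{ \langle y, p \rangle - f^{*}(p) - \tfrac{\mu}{2}\|p\|^2 \Big\},
\]

i.e. Nesterov smoothing of f with the strongly convex proximity function d(p) = ½‖p‖². This identity is standard convex analysis and is stated here only as context for the quoted sentence.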
“…Steps of our algorithm have the form for some step length. Accelerated versions of these approaches have been proposed for convex problems in [3–5]. The use of acceleration makes the analysis more complicated than for the gradient case; see [6, 7].…”
Section: Problem Class and Algorithmic Approach (mentioning, confidence: 99%)
“…Typically, we know in advance whether or not h and g in (1) are convex, and if they are, we could choose one of the well-established methods that make use of gradients, proximal operators, and possibly acceleration. See, for example, the proximal accelerated gradient approach of [3], which achieves a rate of . A method in the spirit of [29], which automatically adapts to convexity and simultaneously achieves the optimal rates for both nonconvex and convex problems, would be desirable, but is outside the scope of this work.…”
Section: Variable Smoothing (mentioning, confidence: 99%)
“…Then, an accelerated proximal gradient method is employed on the smooth problem min_x f_μ(Ax) + w(x), [4,14]. The latter approach, in which the smoothing parameter is fixed in advance, can also be refined into an adaptive smoothing scheme that performs one iteration of an accelerated method on the function f_{μ_k}(Ax) + w(x), where μ_k is a decreasing sequence that diminishes to zero as k, the dynamic iteration index, increases; see for instance [7,21].…”
Section: Introduction (mentioning, confidence: 99%)
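A minimal sketch of the adaptive scheme described in the excerpt above: one accelerated (FISTA-style) proximal-gradient iteration is applied to f_{μ_k}(Ax) + w(x) for each value of a smoothing parameter μ_k that decreases to zero. Every specific below (f = ||· − b||_1 smoothed via its Moreau envelope, w taken as a squared ℓ2 penalty handled through its prox, μ_k = μ_0/k, the step-size rule, the extrapolation weights) is an illustrative assumption and not the exact update of the cited methods.

import numpy as np

def prox_l1(z, t):
    # Soft thresholding: proximal map of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def grad_f_mu(y, b, mu):
    # Gradient of the Moreau envelope of f(y) = ||y - b||_1 with parameter mu.
    return (y - b - prox_l1(y - b, mu)) / mu

def prox_w(z, t, lam):
    # Prox of t * w for the illustrative choice w(x) = (lam / 2) * ||x||^2.
    return z / (1.0 + t * lam)

def variable_smoothing(A, b, lam=0.1, mu0=1.0, iters=1000):
    # One accelerated proximal-gradient step on f_{mu_k}(A x) + w(x)
    # per iteration, with the smoothing parameter mu_k decreasing to zero.
    n = A.shape[1]
    x = np.zeros(n)
    x_prev = x.copy()
    L_A = np.linalg.norm(A, 2) ** 2                  # ||A||^2
    for k in range(1, iters + 1):
        mu_k = mu0 / k                               # smoothing parameter -> 0
        step = mu_k / L_A                            # 1 / L_k, L_k = ||A||^2 / mu_k
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev) # FISTA-type extrapolation
        g = A.T @ grad_f_mu(A @ y, b, mu_k)          # gradient of the smoothed term
        x_prev = x
        x = prox_w(y - step * g, step, lam)          # backward step on w
    return x

# Hypothetical usage on random data: approximately minimizes
# ||A x - b||_1 + (lam / 2) * ||x||^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 80))
b = rng.standard_normal(300)
x_hat = variable_smoothing(A, b)
print(np.abs(A @ x_hat - b).sum() + 0.05 * np.linalg.norm(x_hat) ** 2)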