2019
DOI: 10.1109/tsp.2018.2890368

Online Learning With Inexact Proximal Online Gradient Descent Algorithms

Abstract: We consider non-differentiable dynamic optimization problems such as those arising in robotics and subspace tracking. Given the computational constraints and the time-varying nature of the problem, a low-complexity algorithm is desirable, while the accuracy of the solution may only increase slowly over time. We put forth the proximal online gradient descent (OGD) algorithm for tracking the optimum of a composite objective function comprising a differentiable loss function and a non-differentiable regularize…
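The full paper is not reproduced here, but as a rough illustration of the proximal OGD template the abstract describes (one gradient step on the differentiable loss, followed by the proximal operator of the non-differentiable regularizer, at each time instant), here is a minimal sketch. It assumes an ℓ1 regularizer, whose proximal map is soft-thresholding; the function names, step size, and regularization weight are illustrative choices, not the paper's:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_ogd(grad_fns, x0, step=0.1, lam=0.1):
    """Track the minimizer of f_t(x) + lam * ||x||_1 over a stream of losses.

    grad_fns : iterable of gradient oracles for the smooth part f_t
               (one oracle per time instant).
    Performs a single proximal gradient step per instant, matching the
    low-complexity setting the abstract describes.
    """
    x = np.asarray(x0, dtype=float)
    trajectory = []
    for grad in grad_fns:
        # Gradient step on the smooth loss, then prox of the regularizer.
        x = soft_threshold(x - step * grad(x), step * lam)
        trajectory.append(x.copy())
    return trajectory
```

For example, with a quadratic loss f_t(x) = 0.5 * ||x - b_t||^2 whose target b_t drifts slowly, the iterate tracks the (soft-thresholded) target at one gradient evaluation per step.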


Cited by 91 publications (97 citation statements)
References 34 publications
“…When prediction is used, P_T^A accumulates the prediction error. The gradient variation measure we propose in (5) is novel and different from the existing one based on the sup-norm considered in [15], [26], [28], [36]:…”
Section: Preliminaries and Problem Definition
Mentioning confidence: 99%
“…All these methods focus on centralized optimization problems. Distributed online gradient descent algorithms for unconstrained problems are proposed in [12], [28]- [31], while [32] analyzes the dynamic regret of the time-varying constrained case.…”
Section: Introduction
Mentioning confidence: 99%
“…In signal processing, the estimation of time-varying signals on the basis of observations gathered online can be cast as the problem of solving a series of varying problems [4]- [6]. In robotics, path tracking and leader following problems can be cast in the framework of (1), see for example [7]- [9]. Other application domains are economics [10], smart grids [11], and non-linear optimization [12], [13].…”
Section: Introduction
Mentioning confidence: 99%
“…While (2) pertains to problems where the map T is "fixed" during the execution of the KM algorithm and it is known, this paper revisits the convergence of the KM method in the case of time-varying and possibly inexact maps. This setting is motivated by recent efforts to address the design and analysis of running algorithms for time-varying optimization problems [4], [12]- [14], with particular emphasis on feedback-based online optimization [14], [15]; additional works along these lines are in the context of online optimization (see the representative works [16]- [18] and references therein) and learning in dynamic environments [19], [20]. In a time-varying optimization setting, the underlying cost, constraints, and problem inputs may change at every step (or a few steps) of the algorithm; therefore, pertinent tasks in this case involve the derivation of results for the tracking of optimal solution trajectories.…”
Section: Introduction and Problem Formulation
Mentioning confidence: 99%
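The last quote above discusses the Krasnosel'skii–Mann (KM) iteration with time-varying, possibly inexact maps. The quoted paper's equation (2) is not shown here, but the classical fixed-map KM template it departs from is the averaged update x_{k+1} = (1-α) x_k + α T(x_k) for a nonexpansive map T. A minimal sketch of that baseline, with the map and all names being illustrative assumptions:

```python
import numpy as np

def km_iteration(maps, x0, alpha=0.5):
    """Krasnosel'skii-Mann averaged iteration.

    maps  : sequence of (possibly time-varying) nonexpansive maps T_k;
            pass the same map repeatedly for the classical fixed-map case.
    alpha : averaging parameter in (0, 1).
    Applies x <- (1 - alpha) * x + alpha * T_k(x) once per map.
    """
    x = np.asarray(x0, dtype=float)
    for T in maps:
        x = (1.0 - alpha) * x + alpha * T(x)
    return x
```

With a contraction such as T(x) = 0.5 x + 1 (fixed point x* = 2), the averaged iterate converges geometrically to x*; the time-varying/inexact analysis in the quoted work replaces the fixed T with a drifting, approximately evaluated map.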