2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6855218
A saddle point algorithm for networked online convex optimization

Cited by 48 publications (116 citation statements)
References 7 publications
Citation statements by type: 3 supporting, 113 mentioning, 0 contrasting
“…The primal gradient step of the classical saddle-point approach in [11], [13], [14] is tantamount to minimizing a first-order approximation of L_{t−1}(x, λ_t) at x = x_{t−1} plus a proximal term ‖x − x_{t−1}‖²/(2α). We refer to the primal-dual recursion (8) and (9) as a modified online saddle-point approach, since the primal update (8) is not an exact gradient step when the constraint g_t(x) is nonlinear w.r.t.…”
Section: Remark (mentioning)
confidence: 99%
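As a rough illustration of the classical recursion described in the excerpt above, the sketch below performs a plain primal gradient step on an online Lagrangian L_{t−1}(x, λ) = f_{t−1}(x) + λᵀg_{t−1}(x), followed by projected dual ascent. The callables grad_f, g, grad_g and the step sizes alpha and mu are hypothetical placeholders for illustration, not quantities taken from the cited papers.

```python
import numpy as np

def classical_saddle_point_step(x, lam, grad_f, g, grad_g, alpha, mu):
    """One classical online saddle-point iteration (sketch, assumed setup).

    Primal: a gradient step on L_{t-1}(x, lam) = f_{t-1}(x) + lam @ g_{t-1}(x),
    which equals the minimizer of the first-order approximation of L_{t-1}(., lam)
    at x plus the proximal term ||x' - x||^2 / (2 * alpha).
    Dual: projected gradient ascent on the constraint value.
    """
    grad_x = grad_f(x) + grad_g(x).T @ lam          # gradient of the Lagrangian in x
    x_new = x - alpha * grad_x                      # closed-form linearized + proximal minimization
    lam_new = np.maximum(lam + mu * g(x_new), 0.0)  # dual ascent, projected onto lam >= 0
    return x_new, lam_new
```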
“…x. However, when g_t(x) is linear, (8) and (9) reduce to the approach in [11], [13], [14]. Similar to the primal update of OCO with long-term but time-invariant constraints in [12], the minimization in (8) penalizes the exact constraint violation g_t(x) instead of its first-order approximation, which improves control of constraint violations and facilitates performance analysis of MOSP.…”
Section: Remark (mentioning)
confidence: 99%
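To contrast with the modified update described in this excerpt, here is a minimal sketch of a primal subproblem that linearizes only the loss while keeping the constraint g exact, solved numerically. The use of scipy.optimize.minimize with BFGS, the unconstrained domain, and the helper names are assumptions made for illustration; this is not the recursion (8)–(9) from the citing paper itself.

```python
import numpy as np
from scipy.optimize import minimize

def modified_primal_update(x_prev, lam, grad_f, g, alpha):
    """Sketch of a primal update that penalizes the exact constraint g(x).

    The loss is replaced by its first-order model at x_prev, but g enters the
    subproblem without linearization, in the spirit of the quoted remark.
    """
    def subproblem(x):
        linearized_loss = grad_f(x_prev) @ (x - x_prev)       # first-order model of the loss
        exact_constraint = lam @ g(x)                         # exact (non-linearized) constraint term
        proximal = np.sum((x - x_prev) ** 2) / (2.0 * alpha)  # keeps the iterate near x_prev
        return linearized_loss + exact_constraint + proximal
    return minimize(subproblem, x_prev, method="BFGS").x

def dual_update(lam, g_val, mu):
    # Projected dual ascent on the instantaneous constraint value.
    return np.maximum(lam + mu * g_val, 0.0)
```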