2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2019.8918830
On Convergence of Projected Gradient Descent for Minimizing a Large-Scale Quadratic Over the Unit Sphere
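The citation excerpts below discuss convergence rates for this algorithm. For reference, a minimal numerical sketch of projected gradient descent over the unit sphere is given here, assuming the standard quadratic objective f(x) = 0.5·xᵀAx − bᵀx; the paper's exact formulation is not reproduced on this page, so A, b, the step size, and the iteration count are illustrative placeholders.

```python
import numpy as np

# Minimal sketch of projected gradient descent for a quadratic over the unit
# sphere (assumed objective: f(x) = 0.5 * x^T A x - b^T x; not the paper's
# exact formulation). Projection onto the unit sphere is normalization.
def pgd_unit_sphere(A, b, x0, step, num_iters=500):
    x = x0 / np.linalg.norm(x0)           # start on the sphere
    for _ in range(num_iters):
        grad = A @ x - b                   # gradient of the quadratic
        y = x - step * grad                # unconstrained gradient step
        x = y / np.linalg.norm(y)          # project back onto the unit sphere
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100
    M = rng.standard_normal((n, n))
    A = M @ M.T / n                        # symmetric PSD test matrix
    b = rng.standard_normal(n)
    x = pgd_unit_sphere(A, b, rng.standard_normal(n),
                        step=1.0 / np.linalg.norm(A, 2))
    print(np.linalg.norm(x))               # ~1.0: iterate stays on the sphere
```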

Cited by 9 publications (23 citation statements)
References 9 publications
“…Remark 6. The local linear rate in (50) matches the rate provided by Theorem 1 in [35]. Compared to the setting in [35], here we consider a special case of the quadratic that is convex (and hence, λ_d ≥ 0).…”
Section: Least Squares With the Unit Norm Constraint (mentioning)
confidence: 76%
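For reference, the “local linear rate” mentioned in this excerpt is the standard contraction-type guarantee sketched below; ε and ρ are generic placeholders, not the explicit constants of (50) or of Theorem 1 in [35], which are not reproduced in the excerpt.

```latex
% Generic statement of a local linear rate: once the iterates are close enough
% to the constrained minimizer x*, the error contracts by a factor rho < 1.
% epsilon and rho are placeholders, not the constants from [35].
\exists\, \varepsilon > 0,\ \rho \in (0,1):\qquad
\|x_0 - x^{\ast}\| \le \varepsilon \;\Longrightarrow\;
\|x_{k+1} - x^{\ast}\| \le \rho\, \|x_k - x^{\ast}\| \quad \text{for all } k \ge 0 .
```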
“…which means Ax* − b is in the left null space of AV_C^⊥. Step 3: Evaluating the projection in (33) at z*_η = x* − ηA^⊤(Ax* − b) and using the stationarity condition (35) to eliminate the term…”
Section: A Linearly Constrained Least Squares (mentioning)
confidence: 99%
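The point z*_η in this excerpt is the standard projected-gradient stationary point. A generic form of that fixed-point condition is sketched below, assuming a least-squares objective with a constraint set C and Euclidean projection P_C; equations (33), (35) and the matrix V_C belong to the citing paper and are not reproduced here.

```latex
% Generic fixed-point (stationarity) condition for projected gradient descent
% on min_{x in C} (1/2)||Ax - b||^2 with step size eta > 0 and Euclidean
% projection P_C (a sketch under the stated assumptions, not the citing
% paper's exact conditions (33), (35)).
z^{\ast}_{\eta} \;=\; x^{\ast} - \eta\, A^{\top}\bigl(A x^{\ast} - b\bigr),
\qquad
x^{\ast} \;=\; P_{C}\bigl(z^{\ast}_{\eta}\bigr).
```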
“…where the three quantities are real scalars: the initial error is positive, the contraction factor lies strictly between 0 and 1, and the additive term is nonnegative. In optimization, equations of the form (1) emanate from the convergence analysis of iterative first-order methods [6][7][8][9][10], in which the sequence represents the error through the iterations (e.g., the distance from the current update to the optimum). Naturally, the convergence of this error sequence to zero provides guarantees on the performance of these optimization algorithms.…”
Section: Introduction (mentioning)
confidence: 99%
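Equation (1) of the citing paper is not reproduced in the excerpt; a common instance of such a recursion is e_{k+1} = q·e_k + p with 0 < q < 1 and p ≥ 0, and the minimal sketch below simulates that assumed form (the names e0, q, and p are illustrative placeholders, not the citing paper's notation). With p = 0 the error decays geometrically to zero, which is the kind of guarantee the excerpt refers to; with p > 0 it settles at p/(1 − q) instead.

```python
# Minimal sketch under an assumed form of the recursion:
# e_{k+1} = q * e_k + p, with contraction factor 0 < q < 1 and p >= 0.
def error_sequence(e0: float, q: float, p: float, num_iters: int) -> list[float]:
    """Simulate the recursion; returns [e_0, e_1, ..., e_{num_iters}]."""
    errors = [e0]
    for _ in range(num_iters):
        errors.append(q * errors[-1] + p)
    return errors

if __name__ == "__main__":
    # p = 0: geometric decay of the error toward zero.
    print(error_sequence(e0=1.0, q=0.5, p=0.0, num_iters=10)[-1])   # ~9.8e-4
    # p > 0: the error settles near the fixed point p / (1 - q) = 0.2.
    print(error_sequence(e0=1.0, q=0.5, p=0.1, num_iters=50)[-1])   # ~0.2
```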