The Lawson-Hanson Algorithm with Deviation Maximization: Finite Convergence and Sparse Recovery
2021 · Preprint · DOI: 10.48550/arxiv.2108.05345

Cited by 1 publication (2 citation statements) · References: 0 publications
“…Similar to the simplex method, the algorithm is an active-set algorithm that iteratively sets parts of the variables to zero in an attempt to identify the active constraints and solves the unconstrained least squares sub-problem for this active set of constraints. It is still, arguably, the most famous method for solving (NNLS) and several improvements have been proposed in a series of follow-up papers [8,66,47,49,21]. Its caveat, however, is that it depends on the normal equations, which makes it infeasible for ill-conditioned or large scale problems.…”
Section: Related Work - NNLS
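The excerpt above describes the active-set structure of the Lawson-Hanson method: variables move between an active set (clamped at zero) and a passive set on which an unconstrained least squares subproblem is solved. The following is a minimal illustrative sketch in Python; the function name, tolerance handling, and the use of numpy.linalg.lstsq for the subproblem are assumptions made here for illustration, not the implementation discussed in the cited paper.

```python
import numpy as np


def nnls_lawson_hanson(A, b, tol=1e-10, max_iter=None):
    """Illustrative active-set sketch of Lawson-Hanson NNLS.

    Minimizes ||A x - b||_2 subject to x >= 0 by moving indices between
    the passive set P (free variables) and its complement (variables
    clamped at zero), solving an unconstrained least squares subproblem
    restricted to P at every step.
    """
    m, n = A.shape
    if max_iter is None:
        max_iter = 3 * n
    x = np.zeros(n)
    P = np.zeros(n, dtype=bool)          # passive (free) set
    w = A.T @ (b - A @ x)                # negative gradient of 0.5*||Ax - b||^2

    for _ in range(max_iter):
        if P.all() or w[~P].max() <= tol:
            break                        # KKT conditions satisfied
        # Bring the most promising clamped variable into the passive set.
        j = np.argmax(np.where(~P, w, -np.inf))
        P[j] = True

        # Unconstrained least squares subproblem on the passive columns.
        z = np.zeros(n)
        z[P], *_ = np.linalg.lstsq(A[:, P], b, rcond=None)

        # Inner loop: if passive entries turn nonpositive, step back along
        # the segment from x to z until feasibility is restored.
        inner = 0
        while P.any() and z[P].min() <= tol and inner < max_iter:
            inner += 1
            neg = P & (z <= tol)
            alpha = np.min(x[neg] / (x[neg] - z[neg] + 1e-30))
            x = x + alpha * (z - x)
            P &= x > tol                 # drop variables that hit zero
            z = np.zeros(n)
            if P.any():
                z[P], *_ = np.linalg.lstsq(A[:, P], b, rcond=None)
        x = z
        w = A.T @ (b - A @ x)
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    b = rng.standard_normal(50)
    x = nnls_lawson_hanson(A, b)
    print("all nonnegative:", bool((x >= 0).all()))
```

Note the reliance on least squares subproblems (equivalently, normal equations in the original formulation), which is exactly the caveat the excerpt raises for ill-conditioned or large-scale problems.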
“…Its caveat, however, is that it depends on the normal equations, which makes it infeasible for ill-conditioned or large scale problems. Moreover, up to this point there exist no better theoretical guarantees for the algorithm and its modifications than convergence in finitely many steps [44, Chapter 23], [21, Theorem 3]. Another line of research has been developing projected gradient methods for solving (NNLS), which come with linear convergence guarantees [41], [55], [46].…”
Section: Related Work - NNLS
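For contrast with the active-set approach, the projected gradient methods mentioned at the end of the excerpt take gradient steps on the least squares objective and project onto the nonnegative orthant. The sketch below uses assumed names and a 1/L step size with L the Lipschitz constant of the gradient; it is a generic illustration, not code from the cited works.

```python
import numpy as np


def nnls_projected_gradient(A, b, n_iter=500):
    """Illustrative projected gradient sketch for NNLS.

    Takes gradient steps on f(x) = 0.5*||A x - b||^2 and projects onto
    the nonnegative orthant after every step, using step size 1/L with
    L = ||A||_2^2 the Lipschitz constant of the gradient.
    """
    L = np.linalg.norm(A, 2) ** 2          # largest singular value squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - grad / L, 0.0)  # gradient step + projection
    return x
```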