2022
DOI: 10.1007/s11075-022-01291-1

Deviation maximization for rank-revealing QR factorizations

Abstract: In this paper, we introduce a new column selection strategy, named here "Deviation Maximization", and apply it to compute rank-revealing QR factorizations as an alternative to the well-known block version of the QR factorization with the column pivoting method, called QP3 and currently implemented in LAPACK's routine. We show that the resulting algorithm, named QRDM, has similar rank-revealing properties of QP3 and better execution times. We present experimental results on a wide data set of numerically singu…


Cited by 6 publications (9 citation statements)
References 26 publications
“…If u2 is the vector of column norms of C, then the deviation maximization can select numerically linearly independent columns [20]. Here we introduce u1 as the steepest descent direction at the current solution point in order to take into account first-order information to fit the Lawson-Hanson algorithm.…”
Section: The Lawson-Hanson Algorithm with Deviation Maximization
confidence: 99%
“…The set of candidates I is made of those column indices such that the corresponding entries of u1, u2 are "large enough" with respect to thresholds τ1, τ2, respectively. If u2 is the vector of column norms of C, then the deviation maximization can select numerically linearly independent columns [20]. Here we introduce u1 as the steepest descent direction at the current solution point in order to take into account first-order information to fit the Lawson-Hanson algorithm.…”
Section: Active Set Algorithms for Nonnegative Least Squares
confidence: 99%
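The candidate-selection rule quoted above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the relative thresholding (entries of u1 and u2 at least τ1, τ2 times their respective maxima) and the cosine bound `delta` used to discard nearly collinear candidates are hypothetical choices made for the sketch.

```python
import numpy as np

def deviation_maximization_candidates(C, u1, tau1=0.5, tau2=0.5, delta=0.9):
    """Sketch of deviation-maximization column selection.

    Candidate set I: indices whose entries in u1 (here, a generic
    priority vector, e.g. a steepest-descent direction) and in u2
    (the column norms of C) are "large enough" relative to the
    thresholds tau1, tau2. Among the candidates, greedily keep
    columns whose pairwise cosines stay below delta, so the kept
    columns are numerically linearly independent.
    """
    u2 = np.linalg.norm(C, axis=0)  # column norms of C
    I = [j for j in range(C.shape[1])
         if u1[j] >= tau1 * u1.max() and u2[j] >= tau2 * u2.max()]
    kept = []
    for j in I:
        cj = C[:, j] / u2[j]
        # Keep column j only if its cosine with every previously
        # kept column is below delta (small pairwise "deviation").
        if all(abs(cj @ (C[:, k] / u2[k])) < delta for k in kept):
            kept.append(j)
    return kept
```

On a matrix whose last column duplicates the first, the cosine filter rejects the duplicate while the well-separated columns survive, which is the behaviour the quoted statements rely on.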