2017
DOI: 10.1016/j.jsc.2016.11.011
Fast computation of the rank profile matrix and the generalized Bruhat decomposition

Abstract: The row (resp. column) rank profile of a matrix describes the staircase shape of its row (resp. column) echelon form. We describe a new matrix invariant, the rank profile matrix, summarizing all information on the row and column rank profiles of all the leading sub-matrices. We show that this normal form exists and is unique over a field but also over any principal ideal domain and finite chain ring. We then explore the conditions for a Gaussian elimination algorithm to compute all or part of this invariant, …
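The invariant described in the abstract can be illustrated with a short sketch. The function below is not the paper's fast algorithm; it is a naive construction based directly on the defining property (every leading sub-matrix of R_A has the same rank as the corresponding leading sub-matrix of A), using inclusion–exclusion on leading-sub-matrix ranks. It assumes numpy's floating-point rank is adequate for small integer matrices; the paper works with exact field arithmetic.

```python
import numpy as np

def rank_profile_matrix(A):
    """Naive construction of the rank profile matrix R_A.

    R_A is the unique {0,1}-matrix whose leading sub-matrices all have
    the same rank as the corresponding leading sub-matrices of A.  The
    number of ones in the leading i x j block of R_A must therefore
    equal r(i, j) = rank(A[:i, :j]), so each entry follows by
    inclusion-exclusion:
        R_A[i-1, j-1] = r(i, j) - r(i-1, j) - r(i, j-1) + r(i-1, j-1),
    which is 0 or 1 by submodularity of the rank function over a field.
    """
    m, n = A.shape
    # r[i, j] = rank of the leading i x j sub-matrix (row/column 0 stay zero)
    r = np.zeros((m + 1, n + 1), dtype=int)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            r[i, j] = np.linalg.matrix_rank(A[:i, :j])
    return r[1:, 1:] - r[:-1, 1:] - r[1:, :-1] + r[:-1, :-1]
```

The row (resp. column) rank profile can then be read off as the indices of the nonzero rows (resp. columns) of R_A. This sketch costs O(mn) rank computations, far from the paper's complexity bounds; it is purely illustrative of the invariant.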

Cited by 15 publications (27 citation statements)
References 28 publications
“…It proposes two new structured representations for quasiseparable matrices: a Recursive Rank Revealing (RRR) representation that can be viewed as a simplified version of the HSS representation of Chandrasekaran et al (2006), and a representation based on the generalized Bruhat decomposition, which we name the Compact Bruhat (CB) representation. The latter is made possible by the connection that we make between the notion of quasiseparability and a matrix invariant, the rank profile matrix, which we introduced in Dumas et al (2015) and applied to the generalized Bruhat decomposition in Dumas et al (2016). More precisely, we show that the lower and upper triangular parts of a quasiseparable matrix have generalized Bruhat decompositions off of which many coefficients can be shaved.…”
Section: Introduction
confidence: 90%
“…We therefore propose here a compact variation on it, called the compact Bruhat, that will be used to derive algorithms taking advantage of fast matrix multiplication. This structured representation relies on the generalized Bruhat decomposition described in Manthey and Helmke (2007), thanks to the connection with the rank profile matrix made in Dumas et al (2016).…”
Section: The Compact Bruhat Representation
confidence: 99%
“…If the Prover is honest, I is the row rank profile of A and J is the column rank profile of A. Then the application of Protocol 10 will output the correct rank profile matrix of A_{I,J}, which leads the Verifier to the correct rank profile matrix of A, as described in [8, Theorem 37]. Note that one only needs to verify the lower bound on the rank of A once, which is why Certificate 9 is fully executed once, while the second run only verifies that the committed rank profile is indeed a rank profile.…”
Section: Prover
confidence: 99%
“…The rank profile matrix of A, denoted by R_A, is the unique m × n {0, 1}-matrix with r nonzero entries, of which every leading sub-matrix has the same rank as the corresponding sub-matrix of A. It is possible to compute R_A with a deterministic algorithm in O(mnr^{ω−2}) field operations, or with a Monte Carlo probabilistic algorithm in (r^ω + m + n + µ(A))^{1+o(1)} field operations [8], where µ(A) is the worst-case arithmetic cost of multiplying A by a vector.…”
Section: Introduction
confidence: 99%