1992
DOI: 10.1137/0613079

Modifying the QR-Decomposition to Constrained and Weighted Linear Least Squares

Cited by 48 publications (46 citation statements); References 10 publications.
“…The number of arithmetic floating point operations required to solve (3.2) for all pixels in $W_{i-1}$ is proportional to $n_{i-1}$ and therefore modest. The solution of each least-squares problem can be computed, e.g., by determining a modified QR-decomposition of a $9 \times 3$ matrix based on modified Householder transformations; see Gulliksson and Wedin [12] for details on the latter. The modifications are required because of the weights in the least-squares problem.…”
Section: Algorithm 2.2, Multilevel Algorithm
confidence: 99%
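The excerpt amounts to many independent small weighted least-squares solves. As a rough illustration, and not the modified-Householder algorithm of Gulliksson and Wedin that the excerpt cites, the Python sketch below solves one such $9 \times 3$ problem, $\min_x \|W^{1/2}(Ax - b)\|_2$, by scaling the rows with the square roots of the weights and taking an ordinary thin QR factorization. The matrix, right-hand side, and weights are made-up placeholders, and the approach is only adequate when the weights are moderately scaled.

```python
import numpy as np

# A minimal sketch, NOT the modified-Householder QR of Gulliksson and Wedin [12]:
# solve one small weighted least-squares problem  min_x || W^{1/2} (A x - b) ||_2
# for a 9 x 3 matrix, as in the multilevel algorithm quoted above, by scaling the
# rows with sqrt(weights) and using an ordinary thin QR factorization.
# All data below are made-up placeholders.

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 3))          # one 9 x 3 local system
b = rng.standard_normal(9)
w = rng.uniform(0.1, 10.0, size=9)       # positive weights, assumed moderate

sw = np.sqrt(w)
Q, R = np.linalg.qr(sw[:, None] * A)     # thin QR of the row-scaled matrix
x = np.linalg.solve(R, Q.T @ (sw * b))   # solve R x = Q^T (W^{1/2} b)

print("weighted least-squares solution:", x)
```

When the weights vary over many orders of magnitude, plain row scaling loses accuracy, which is, broadly, the motivation for the modified QR-decomposition studied in the cited work.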
“…In [4] Forsgren also used the Binet-Cauchy formula and Cramer's rule to obtain the supremum of weighted pseudoinverses arising from solving the LSE problem [9]…”
Section: Corollary 2.1, Under the Notation and the Conditions in the Theorem
confidence: 99%
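For context, a hedged sketch of the object being bounded: with $X$ of full column rank and $W$ symmetric positive definite, the weighted pseudoinverse is commonly written as below. The symbol $\mathcal{P}(X)$ for the admissible weight class is taken from the excerpt; the precise class used in [4] and in the citing paper may differ from these assumptions.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Assumed standard form of the weighted pseudoinverse: X has full column rank
% and W is symmetric positive definite; P(X) is the weight class named in the
% excerpt (its exact definition is taken from the citing paper).
\[
  X_W^{+} = \left( X^{\mathsf{T}} W X \right)^{-1} X^{\mathsf{T}} W ,
  \qquad
  \sup_{W \in \mathcal{P}(X)} \bigl\| X_W^{+} \bigr\|_{2} < \infty .
\]
% Under these assumptions the weighted pseudoinverses remain uniformly bounded
% over the weight class; this bound is the supremum referred to in the excerpt.
\end{document}
```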
“…When solving (1.2) by an interior method [1], [5], [6], [17], [22]–[24], one will obtain the following weighted least squares (WLS) problem
$$\min_{x \in \mathbb{R}^n} \left\| W^{1/2} (Xx - g) \right\|_2, \qquad (1.3)$$
where $W = W(\tau) \in \mathcal{P}(X)$ and $\tau > 0$ is a parameter. Similarly, when solving the equality constrained least squares problem (LSE) [9]
$$\min_{x \in \mathbb{R}^n} \left\| W_2^{1/2} (Kx - g_2) \right\|_2 \quad \text{subject to} \quad Lx = g_1 \qquad (1.4)$$
by the weighting method, one will also obtain a WLS problem like (1.3). When $\tau \to +\infty$, the minimum 2-norm solution of (1.3) will tend to the minimum 2-norm solution of (1.2) or (1.4).…”
Section: Introduction
confidence: 99%
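The $\tau \to +\infty$ limit mentioned in the excerpt can be checked numerically. The toy sketch below uses the names K, L, g1, g2, W2, and tau from the excerpt, but all dimensions and data are made up; it solves the weighted problem for increasing tau and watches the constraint residual $\|Lx - g_1\|$ shrink, as the weighting method predicts.

```python
import numpy as np

# Toy illustration of the weighting method for the LSE problem (1.4),
#     min || W2^{1/2} (K x - g2) ||_2   subject to   L x = g1.
# Names K, L, g1, g2, W2, tau follow the excerpt; dimensions and data are
# assumptions for illustration only.

rng = np.random.default_rng(0)
K = rng.standard_normal((9, 3))
g2 = rng.standard_normal(9)
L = rng.standard_normal((1, 3))                            # one equality constraint
g1 = rng.standard_normal(1)
W2_sqrt = np.diag(np.sqrt(rng.uniform(0.5, 2.0, size=9)))  # W2^{1/2}

def weighted_solution(tau):
    # Stack the heavily weighted constraint rows above the weighted residual rows.
    A = np.vstack([tau * L, W2_sqrt @ K])
    b = np.concatenate([tau * g1, W2_sqrt @ g2])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

for tau in (1e2, 1e4, 1e6):
    x = weighted_solution(tau)
    print(f"tau = {tau:.0e}   ||L x - g1|| = {np.linalg.norm(L @ x - g1):.2e}")
```

In exact arithmetic the limit solution solves the LSE problem (1.4); in floating point, very large tau eventually degrades accuracy, which is one motivation for the modified QR-decomposition analyzed in the cited paper.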
“…He obtains a backward error bound similar to our Theorem 4.3 and also shows that iterative refinement may be applied. Reid points out that the method is equivalent to a method of Gulliksson and Wedin [9], [10], which is expressed in the language of "M-invariant reflections."…”
Section: Introduction. Consider the Equality Constrained Least Squares
confidence: 99%