2005
DOI: 10.1007/s10957-005-6389-0
Regularization Methods for Uniformly Rank-Deficient Nonlinear Least-Squares Problems

Cited by 22 publications (16 citation statements) · References 8 publications
“…The idea of constructing an iterative method for the computation of the minimal-norm solution of a nonlinear least-squares problem was first studied by Eriksson et al. In [15,16,17], the case where the Jacobian is rank-deficient or ill-conditioned was analyzed, and solution techniques based on the Gauss-Newton method and on Tikhonov regularization in standard form were proposed.…”
Section: r_m(x)
mentioning
confidence: 99%
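The standard-form approach described in this statement amounts to applying Gauss-Newton to the Tikhonov-regularized problem min_x ||f(x)||^2 + mu^2 ||x||^2, whose minimizer approaches the minimal-norm least-squares solution as mu → 0. The sketch below is illustrative only, not the algorithm of [15,16,17]; the function names, tolerances, and toy problem are assumptions.

```python
import numpy as np

def tikhonov_gauss_newton(f, jac, x0, mu=1e-3, tol=1e-10, max_iter=50):
    """Gauss-Newton for min ||f(x)||^2 + mu^2 ||x||^2 (standard form).

    Illustrative sketch: the penalty mu^2 ||x||^2 keeps the linearized
    subproblem well posed even when jac(x) is rank-deficient and steers
    the iterates toward the minimal-norm solution."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = f(x), jac(x)
        # The step s solves the stacked linear least-squares problem
        #   min_s || [J; mu*I] s + [r; mu*x] ||^2,
        # i.e. the normal equations (J^T J + mu^2 I) s = -(J^T r + mu^2 x).
        A = np.vstack([J, mu * np.eye(x.size)])
        b = -np.concatenate([r, mu * x])
        s, *_ = np.linalg.lstsq(A, b, rcond=None)
        x = x + s
        if np.linalg.norm(s) <= tol * (1.0 + np.linalg.norm(x)):
            break
    return x

# Uniformly rank-deficient toy problem: the Jacobian has rank 1 at every x,
# and the minimal-norm least-squares solution is (0.5, 0.5).
f = lambda x: np.array([x[0] + x[1] - 1.0, 2.0 * (x[0] + x[1]) - 2.0])
jac = lambda x: np.array([[1.0, 1.0], [2.0, 2.0]])
print(tikhonov_gauss_newton(f, jac, x0=[3.0, -1.0]))  # approx. [0.5, 0.5]
```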
“…We will explore in this paper the consequences of imposing a regularity constraint directly on the solution x of problem (1.1). Approaches of this kind were studied by Eriksson and Wedin [15,16,17]: they proposed a minimal-norm Gauss-Newton method and a Tikhonov regularization method in standard form. We extend, in Theorem 4.2, the minimal-norm Gauss-Newton method by introducing a regularization matrix L. Moreover, in Section 5 we investigate Tikhonov regularization in general form and the use of truncated SVD/GSVD in the minimal-norm Gauss-Newton method.…”
mentioning
confidence: 99%
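The general-form extension mentioned in this statement changes only the penalty: ||L x||^2 with a regularization matrix L in place of ||x||^2. A minimal sketch of one such linearized step, assuming a forward-difference matrix as L (the choice of L and the function names are illustrative, not taken from the cited papers):

```python
import numpy as np

def general_form_gn_step(r, J, x, L, mu):
    """One Gauss-Newton step for min ||f(x)||^2 + mu^2 ||L x||^2
    (general-form Tikhonov). With L = I this reduces to the
    standard-form step sketched earlier. Illustrative only."""
    # min_s || [J; mu*L] s + [r; mu*L x] ||^2
    A = np.vstack([J, mu * L])
    b = -np.concatenate([r, mu * (L @ x)])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x + s

def first_difference(n):
    """(n-1) x n forward-difference matrix, a common choice of L that
    penalizes non-smooth solutions."""
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
```

A truncated SVD/GSVD variant would instead drop the small singular values of J (or of the pair (J, L)) when forming the step, rather than damping them with mu.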
“…Output: the solution of f(x) = 0 or, if k = k_max, the vector x_{k_max}. A restart (see step 14) is performed if either the step length is too small, indicating not enough descent, or the gradient is small while the norm of f is not small (see step 12). The first case appears when the Gauss-Newton method does not converge locally, i.e., the solution has a large residual f and/or a small curvature; see [17] for details. The second case for a restart may occur when the algorithm is converging to a local minimum where the norm of f is not close to zero.…”
Section: Greedy Gauss-Newton Algorithm
mentioning
confidence: 99%
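A skeleton of the restart logic described in this statement may help. The two tests mirror the quoted conditions; the thresholds and the user-supplied restart() callable are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def greedy_gauss_newton(f, jac, x0, restart, k_max=100,
                        res_tol=1e-8, step_tol=1e-12, grad_tol=1e-8):
    """Gauss-Newton with restarts (illustrative sketch of the logic
    described above).

    restart() is a user-supplied callable returning a fresh starting
    point, e.g. a random vector."""
    x = np.asarray(x0, dtype=float)
    for k in range(k_max):
        r, J = f(x), jac(x)
        if np.linalg.norm(r) <= res_tol:
            return x                          # solved f(x) = 0
        g = J.T @ r                           # gradient of 0.5*||f(x)||^2
        # Restart test 1: small gradient while ||f|| is not small, i.e.
        # the iteration is heading for a local minimum with nonzero residual.
        if np.linalg.norm(g) <= grad_tol:
            x = np.asarray(restart(), dtype=float)
            continue
        s, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # Restart test 2: step too short to give enough descent.
        if np.linalg.norm(s) <= step_tol * (1.0 + np.linalg.norm(x)):
            x = np.asarray(restart(), dtype=float)
            continue
        x = x + s
    return x                                  # k = k_max reached
```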
“…Many practical problems can be recast in this form, such as ill-posed problems, inverse problems, some constrained optimization problems, and model parameter estimation [27,31,28,29,14,15].…”
Section: Numerical Experiments
mentioning
confidence: 99%