2002
DOI: 10.1080/1055678021000049345

Convergence Properties of the Inexact Levenberg-Marquardt Method under Local Error Bound Conditions

Cited by 93 publications (61 citation statements)
References 5 publications

“…However, ∇F(x*) might be singular, but nevertheless F(x) may provide a local error bound at x*. One can refer to the examples provided in [8] and [27]. It is well known that the Levenberg-Marquardt method has local quadratic convergence when the Jacobian at x* is nonsingular.…”
mentioning
confidence: 99%
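For reference, the local error bound condition invoked in this excerpt is commonly stated as follows, where X* denotes the solution set of F(x) = 0 (standard notation, not quoted from the citing paper):

c \,\operatorname{dist}(x, X^{*}) \;\le\; \|F(x)\| \quad \text{for all } x \text{ with } \|x - x^{*}\| \le \varepsilon, \text{ for some constants } c, \varepsilon > 0.

As an illustrative sketch in the spirit of the examples cited in [8] and [27] (not taken from them): F(x_1, x_2) = (x_1 + x_2, x_1 + x_2) has solution set X* = {x : x_1 + x_2 = 0}; its Jacobian is singular everywhere, yet ‖F(x)‖ = 2 dist(x, X*), so the error bound holds with c = 2 even though ∇F(x*) is singular.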
“…We call a solution x* of problem (1.1) a singular solution if G(x*) is singular. Recently, there has been a growing interest in the study of Newton's method for solving (1.1) and nonlinear equations with singular solutions [1, 3, 5–7, 13–16]. These methods retain local quadratic convergence even if the problem has singular solutions.…”
Section: Assumption A
mentioning
confidence: 99%
“…For simplicity, in Table 2, we simply use "1" and "2" to denote the TRN method and the RN method, respectively. We define a ratio Ratio = Time1/Time2 to compare the efficiency of the two methods, where Time1 and Time2 denote the CPU time used by the TRN method and the RN method. A value of Ratio greater than 1 shows that the TRN method performs better than the RN method does.…”
Section: Numerical Experiments
mentioning
confidence: 99%
“…Dan et al. [4] have shown that the LM algorithm is superlinearly convergent if λ_r = ‖F(x_r)‖^δ with 0 < δ ≤ 2. Furthermore, they showed that LM is quadratically convergent if δ = 2. We use the updating formula λ_r = ‖F(x_r)‖^2 [33].…”
Section: Multivariate Minimization Algorithms For Network Training
mentioning
confidence: 99%
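To make the quoted updating rule concrete, the following is a minimal sketch of a Levenberg-Marquardt iteration that sets the regularization parameter to λ_r = ‖F(x_r)‖^δ. It is an assumed illustration, not code from the cited papers; F and its Jacobian J are taken to be user-supplied NumPy callables, and all names are illustrative.

import numpy as np

def lm_step(F, J, x, delta=2.0):
    # One Levenberg-Marquardt step for the nonlinear system F(x) = 0,
    # with the parameter rule lambda_r = ||F(x_r)||**delta, 0 < delta <= 2
    # (delta = 2 is the quadratically convergent choice quoted above).
    Fx = F(x)
    Jx = J(x)
    lam = np.linalg.norm(Fx) ** delta
    # Solve the regularized normal equations (J^T J + lambda I) d = -J^T F.
    A = Jx.T @ Jx + lam * np.eye(x.size)
    d = np.linalg.solve(A, -Jx.T @ Fx)
    return x + d

def lm_solve(F, J, x0, delta=2.0, tol=1e-10, max_iter=100):
    # Iterate until ||F(x)|| is small or the iteration budget is exhausted.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(F(x)) <= tol:
            break
        x = lm_step(F, J, x, delta)
    return x

Any 0 < delta <= 2 gives the superlinearly convergent variant described in the excerpt; delta = 2 gives the quadratic rate.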