2016
DOI: 10.1080/10556788.2015.1124431

A sufficient descent conjugate gradient method and its global convergence

Cited by 11 publications (11 citation statements). References 19 publications.
“…One direction was to find the best fixed value for t, and the other to find the best approximation for t in each iteration. Analyzing the results from [18,26,131,139], we conclude that the scalar t was defined by a fixed value of 0.1 in numerical experiments. Also, numerical experience with the fixed value t = 1 was reported in [26].…”
Section: Dai-Liao Methods and Its Variants: An Extension of the Conjug…
confidence: 85%
“…, µ_3 ∈ (0, +∞) and 1 is any given positive constant. Motivated by these modifications, in [29] the authors defined the modified PRP method; [18] gave a variant of the PRP method, which we call the WYL method, that is,…”
Section: Denominator
confidence: 99%
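For context, the PRP parameter and its WYL variant mentioned in this quote are commonly written as follows (a standard-form sketch; index conventions and the exact modified formulas vary across the cited papers):

```latex
\beta_k^{\mathrm{PRP}}
  = \frac{g_k^{\top}\,(g_k - g_{k-1})}{\|g_{k-1}\|^{2}},
\qquad
\beta_k^{\mathrm{WYL}}
  = \frac{g_k^{\top}\!\left(g_k - \dfrac{\|g_k\|}{\|g_{k-1}\|}\, g_{k-1}\right)}{\|g_{k-1}\|^{2}} .
```

The WYL modification rescales the previous gradient so that the numerator stays nonnegative under exact line search, which is what makes the sufficient-descent analysis tractable.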
“…wherein t > 0 is a scalar. Some well-known formulas for defining β_k have been created by modifying the conjugate gradient parameter β_k^DL [2][3][4][5][6][7][8][9]. One of them is denoted β_k^MHSDL and defined in [7] by…”
Section: Introduction and Background Results
confidence: 99%
“…So far, the research in finding the appropriate value of t has evolved in two directions. One group of methods aims at finding an appropriate fixed value for t [1,2,[6][7][8], while methods from the other group promote rules for computing the value of t in each iteration which ensure a satisfactory decrease of the objective. In our research, we pay attention to the second stream: finding a parameter t whose value changes through iterations so that faster convergence is achieved.…”
Section: Introduction and Background Results
confidence: 99%
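The fixed-t strategy described in the quotes above can be sketched as a Dai-Liao-type conjugate gradient iteration. This is a minimal illustration assuming the standard Dai-Liao update β_k = g_{k+1}ᵀ(y_k − t·s_k)/(d_kᵀ y_k); the function names, the Armijo backtracking line search, and the safeguards are assumptions for the sketch, not the paper's exact method:

```python
# Minimal Dai-Liao conjugate gradient sketch with a fixed scalar t
# (the survey quotes above report t = 0.1 as a common fixed choice).
import numpy as np

def dai_liao_cg(f, grad, x0, t=0.1, tol=1e-8, max_iter=500):
    """Minimize f with CG directions using the Dai-Liao parameter
    beta_k = g_{k+1}^T (y_k - t * s_k) / (d_k^T y_k)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search (illustrative; the cited
        # methods typically use Wolfe conditions).
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        s = x_new - x              # s_k = x_{k+1} - x_k
        y = g_new - g              # y_k = g_{k+1} - g_k
        denom = d @ y
        # Safeguard: fall back to steepest descent on a tiny denominator.
        beta = (g_new @ (y - t * s)) / denom if abs(denom) > 1e-16 else 0.0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: a simple convex quadratic with minimizer at the origin.
x_star = dai_liao_cg(lambda x: x @ x, lambda x: 2 * x,
                     np.array([3.0, -4.0]))
```

Swapping the fixed t for a per-iteration rule (the second research stream in the quote) only changes the single line computing beta.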
“…However, this approach was restricted to parameter identification problems. The common nonlinear least squares iterative solutions [21] are the gradient descent method [22], the Gauss-Newton method [23], and the Levenberg-Marquardt (LM) method [24,25]. For example, in [26], the nonlinear least squares problem of the distributionally robust parameter identification model for time-delay systems is transformed into a single-level optimization problem, and a gradient-based optimization method is developed to solve the transformed problem.…”
Section: Introduction
confidence: 99%
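The Levenberg-Marquardt method mentioned in this quote damps the Gauss-Newton step by solving (JᵀJ + λI)δ = −Jᵀr. A minimal sketch follows; the function names, the multiplicative λ-update rule, and the stopping test are illustrative assumptions, not any cited paper's exact scheme:

```python
# Minimal Levenberg-Marquardt sketch for nonlinear least squares:
# minimize 0.5 * ||r(x)||^2 via damped Gauss-Newton steps.
import numpy as np

def levenberg_marquardt(residual, jacobian, x0,
                        lam=1e-3, tol=1e-10, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        g = J.T @ r                      # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        A = J.T @ J + lam * np.eye(x.size)
        delta = np.linalg.solve(A, -g)   # damped Gauss-Newton step
        r_new = residual(x + delta)
        if 0.5 * (r_new @ r_new) < 0.5 * (r @ r):
            x = x + delta                # accept: behave more like Gauss-Newton
            lam = max(lam / 10.0, 1e-12)
        else:
            lam *= 10.0                  # reject: behave more like gradient descent
    return x

# Usage: fit y = a * exp(b * u) to noise-free data with a = 2, b = 0.5.
u = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * u)
res = lambda p: p[0] * np.exp(p[1] * u) - y
jac = lambda p: np.column_stack([np.exp(p[1] * u),
                                 p[0] * u * np.exp(p[1] * u)])
p_hat = levenberg_marquardt(res, jac, np.array([1.0, 0.0]))
```

Large λ makes the step approach small gradient descent; small λ recovers Gauss-Newton, which is why LM interpolates between the two methods the quote lists.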