2017
DOI: 10.1186/s13660-017-1341-z

Abstract: In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem. They utilize only gradient information and therefore have a broad application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which likewise uses only gradients, show that the methods are efficient.
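For orientation, the sketch below shows the general shape of a Dai-Kou type iteration: only the objective and its gradient are evaluated, the search direction is updated with a DK-style beta (the Hager-Zhang-like formula with factor 1), and a Wolfe line search picks the step. The function name, the nonnegativity safeguard, and the use of scipy's strong-Wolfe line search are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def dai_kou_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Gradient-only nonlinear CG with a Dai-Kou type beta (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:
            break
        # Wolfe line search; scipy's enforces the strong Wolfe conditions
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:               # search failed: steepest-descent restart
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-8
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dty = d @ y
        if abs(dty) < 1e-12:            # degenerate curvature: restart along -g
            d = -g_new
        else:
            # Dai-Kou type beta: Hager-Zhang-like formula with factor 1
            beta = (g_new @ y) / dty - (y @ y) * (g_new @ d) / dty ** 2
            beta = max(beta, 0.0)       # nonnegativity safeguard (PRP+-style, assumed)
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# usage: minimize the Rosenbrock function from a standard start point
x_star = dai_kou_cg(rosen, rosen_der, np.array([-1.2, 1.0]))
```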

Cited by 8 publications (3 citation statements) · References 22 publications
“…The Armijo and weak Wolfe-Powell (WWP) methods are two classic and widely studied line searches in deterministic optimization. Thus, some scholars have chosen to modify the WWP method [14] and have obtained some ideal results. [15] presented a new modified WWP line search, called the YWL line search, as follows:…”
Section: CMME 2023, Journal of Physics: Conference Series 2607 (2023) … (mentioning)
confidence: 99%
“…where $f_k = f(x_k)$, $g_k = g(x_k)$, $0 < \sigma_1 < 1/2$, and $\sigma_1 < \sigma_2 < 1$ [1]. Different inexact line search techniques are presented in [1-5].…”
Section: Introduction (mentioning)
confidence: 99%
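In this notation the quoted parameters belong to the weak Wolfe-Powell conditions: sufficient decrease, $f(x_k + \alpha d_k) \le f_k + \sigma_1 \alpha g_k^T d_k$, and curvature, $g(x_k + \alpha d_k)^T d_k \ge \sigma_2 g_k^T d_k$. A minimal bracketing/bisection sketch that enforces both (the helper name and parameter defaults are illustrative):

```python
import numpy as np

def weak_wolfe(f, grad, x, d, sigma1=1e-4, sigma2=0.9, max_iter=50):
    """Bracketing/bisection search for a step satisfying the weak
    Wolfe-Powell conditions (sufficient decrease + curvature)."""
    fx, gd = f(x), grad(x) @ d          # gd = g_k^T d_k, assumed < 0 (descent)
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + sigma1 * alpha * gd:
            hi = alpha                  # sufficient decrease fails: shrink step
        elif grad(x + alpha * d) @ d < sigma2 * gd:
            lo = alpha                  # curvature condition fails: grow step
        else:
            return alpha                # both weak Wolfe conditions hold
        alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha                        # best effort after max_iter
```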
“…To improve the orthogonality of the gradient vectors generated by the DK method, an advantageous feature of the linear CG methods, Liu et al. [77] developed a special scaled version of the DK method using a matrix obtained from a QN update. Huang and Liu [60] set $\vartheta_k$ in (4.1) as a convex combination of the Oren-Luenberger [87] and Oren-Spedicato [88] scaling parameters and developed modified DK algorithms based on some new line search conditions.…”
(mentioning)
confidence: 99%
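As a rough illustration of the convex-combination idea, the sketch below blends the two classical self-scaling ratios, $s_k^T y_k / \|y_k\|^2$ and $\|s_k\|^2 / (s_k^T y_k)$, with a weight $\mu \in [0, 1]$. Which ratio carries the Oren-Luenberger name and which the Oren-Spedicato name varies across the literature, and the fixed weight here is a free illustrative parameter, not Huang and Liu's rule.

```python
import numpy as np

def scaling_parameter(s, y, mu=0.5):
    """Convex combination of the two classical self-scaling ratios.
    Attribution of each ratio to Oren-Luenberger vs. Oren-Spedicato
    differs across papers; this sketch is purely illustrative."""
    sy = s @ y                      # assumed positive under a Wolfe line search
    theta_a = sy / (y @ y)          # one classical ratio (BB2-type)
    theta_b = (s @ s) / sy          # the other classical ratio (BB1-type)
    return mu * theta_a + (1.0 - mu) * theta_b
```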