2014
DOI: 10.1155/2014/705674

New Mono- and Biaccelerator Iterative Methods with Memory for Nonlinear Equations

Abstract: Acceleration of convergence is discussed for some families of iterative methods for solving scalar nonlinear equations. In fact, we construct mono- and biparametric methods with memory and study their orders of convergence. It is shown that convergence orders 12 and 14 can be attained using only 4 functional evaluations, which yields high computational efficiency indices. Numerical illustrations are also given to verify the theoretical discussion.
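The efficiency claim in the abstract can be checked directly from the standard definition of the computational efficiency index, EI = p^(1/θ), where p is the convergence order and θ is the number of functional evaluations per cycle. A minimal sketch (Python used here purely as an illustrative language; the formula is standard, the script is not from the paper):

```python
# Computational efficiency index EI = p**(1/theta): p is the convergence
# order, theta the number of functional evaluations per cycle.
# Orders 12 and 14 with 4 evaluations, as stated in the abstract.
for p in (12, 14):
    ei = p ** (1 / 4)
    print(f"order {p} with 4 evaluations: EI = {ei:.4f}")
```

Both values exceed Newton's EI = 2^(1/2) ≈ 1.414, which is the motivation for methods with memory.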

Cited by 10 publications (5 citation statements)
References 14 publications
“…EI = 16^(1/5) ≃ 1.741, EI = 32^(1/6) ≃ 1.781, EI = 64^(1/7) ≃ 1.814, respectively. These are better than the indices of the other methods given in [1, 4-20], [22, 25-30, 32-41], and [66]. A comparison between the without-memory, with-memory, and adaptive methods in terms of the maximum efficiency index, alongside the number of steps per cycle, is given in Fig. 5. All algorithms are implemented using the symbolic math of MATHEMATICA [23].…”
Section: Results (mentioning, confidence: 99%)
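The indices quoted in this excerpt follow the same formula, EI = p^(1/θ), here for the (order, evaluations) pairs (16, 5), (32, 6), and (64, 7). A quick check (illustrative script, not from the cited work):

```python
# Recompute EI = p**(1/theta) for the (order, evaluations) pairs
# quoted in the citation statement above.
for p, theta in ((16, 5), (32, 6), (64, 7)):
    print(f"EI = {p}^(1/{theta}) = {p ** (1 / theta):.3f}")
```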
“…Let us describe it a little more. If we use information from the current and only the last iteration, we arrive at the method introduced in [34, 36]. Also, we have considered…”
Section: Recursive Adaptive Methods With Memory (mentioning, confidence: 99%)
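The "current plus last iterate" idea mentioned here can be illustrated with the classic self-accelerating Steffensen-type scheme from Traub's book, where the free parameter γ is re-estimated at each step from the previous iterate. This is a generic hedged sketch, not the specific method of [34, 36]; the test function, starting point, and γ₀ are illustrative choices:

```python
# Steffensen-type method with memory (Traub's self-accelerating scheme):
#   w_k = x_k + gamma_k * f(x_k)
#   x_{k+1} = x_k - f(x_k) / f[x_k, w_k]
# with the accelerator updated from the last iterate:
#   gamma_{k+1} = -1 / f[x_k, x_{k+1}]
def with_memory_solve(f, x0, gamma0=0.1, tol=1e-12, max_iter=50):
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        w = x + gamma * fx                    # auxiliary point
        dd = (f(w) - fx) / (w - x)            # divided difference f[x, w]
        x_new = x - fx / dd                   # Steffensen step
        fx_new = f(x_new)
        if abs(fx_new) < tol:
            return x_new
        # memory: reuse the last iterate to update the accelerator
        gamma = -(x_new - x) / (fx_new - fx)  # approximates -1 / f'(root)
        x, fx = x_new, fx_new
    return x

# Illustrative test equation x^3 - 2 = 0, root = 2^(1/3)
root = with_memory_solve(lambda t: t**3 - 2.0, 1.0)
```

The γ update raises the convergence order from 2 (plain Steffensen) to about 2.414 without any extra function evaluations, which is the basic mechanism the higher-order methods with memory in this paper generalize.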
“…Specifically, we consider the sixth-order method (JMWM6) introduced by Jovana in [3], the sixth-order method (LMWM6) introduced by Lotfi et al. in [4], the seventh-order method (CMWM7) introduced by Cordero et al. in [5], the twelfth-order methods I (LTMWM12I), II (LTMWM12II), III (LTMWM12III), and IV (LTMWM12IV) introduced by Lotfi and Tavakoli in [6], the twelfth-order methods I (EMWM12I), II (EMWM12II), and III (EMWM12III) introduced by Eftekhari in [7], and the fourteenth-order methods I (LMWM14I) and II (LMWM14II) introduced by Lotfi et al. in [8]. Nowadays, high-order methods are important because numerical applications use high precision in their computations; for this reason, the numerical tests have been carried out using variable-precision arithmetic in MATHEMATICA 8 with 100 significant digits.…”
Section: Application To Non-smooth Equations (mentioning, confidence: 99%)
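The "100 significant digits" setting described here can be mimicked with Python's standard decimal module, used below as an assumed stand-in for MATHEMATICA's variable-precision arithmetic; the test equation x^3 - 2 = 0 and the plain Newton iteration are illustrative, not from the cited work:

```python
# High-precision root-finding with 100 significant digits, using Python's
# decimal module in place of MATHEMATICA's variable-precision arithmetic.
# The equation x**3 - 2 = 0 is an illustrative example.
from decimal import Decimal, getcontext

getcontext().prec = 100  # 100 significant decimal digits

def newton_cbrt2(iterations=10):
    x = Decimal(1)
    for _ in range(iterations):
        # Newton step; quadratic convergence roughly doubles the number
        # of correct digits per iteration, so 10 steps suffice here.
        x -= (x**3 - 2) / (3 * x**2)
    return x

root = newton_cbrt2()
```

With quadratic (let alone twelfth- or fourteenth-order) convergence, the working precision, not the iteration count, becomes the limiting factor, which is why such tests are run at 100 digits.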
“…Moreover, they allow us to easily define derivative-free schemes by using standard divided differences. The design of this kind of method has experienced significant growth in recent years, starting with the technique that first appeared in the text of Traub [1], later developed by Petković et al. [2], and used by other authors (see [3-6] and the references therein). Nevertheless, as far as we know, an understanding of their stability has not yet been developed.…”
Section: Introduction (mentioning, confidence: 99%)