Year: 2018
DOI: 10.1016/j.cnsns.2017.11.013

On strong homogeneity of a class of global optimization algorithms working with infinite and infinitesimal scales

Abstract: The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity, meaning that a method produces the same sequences of points where the objective function is evaluated independently both of multiplication of the function by a scaling constant and of adding a shifting constant. In this paper, several aspects of global optimization using strongly ho…
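A minimal formalization of the property described in the abstract (the symbols C, D, and x_k(·) are introduced here only for illustration, and the positivity of the scaling constant follows the convention quoted in the citation statements below): a method is strongly homogeneous if

\[
\{x_k(f)\}_{k \ge 1} \;=\; \{x_k(C f + D)\}_{k \ge 1}
\qquad \text{for all } C > 0 \text{ and all } D \in \mathbb{R},
\]

where x_k(g) denotes the k-th point at which the method evaluates the objective g.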

Cited by 51 publications (23 citation statements)
References 34 publications

“…The Grossone Methodology (GM), which has been proved to stay apart from non-Standard Analysis [23], is a novel way for dealing with infinite, finite and infinitesimal numbers at once and in a numerical way [22]. Originally proposed in 2003, such framework already counts a lot of practical applications, for example it has been applied to non-linear optimization [11,18], global optimization [25], ordinary differential equations [1,15,21,24], control theory [3,12], and game theory [13,14], to cite a few. Also linear optimization positively enjoyed the advent of GM, as shown in [4][5][6][7].…”
Section: The Grossone Methodology and the G-simplex Algorithm
Classification: mentioning; confidence: 99%
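To make the quoted description a little more concrete, the sketch below shows one possible toy representation of numbers of the form c_1·①^{p_1} + … + c_m·①^{p_m} handled by the Grossone Methodology. It is a minimal Python sketch under stated assumptions: the class name GrossNumber and the dictionary-of-exponents layout are illustrative choices, not the arithmetic of the patented Infinity Computer or of any cited implementation.

class GrossNumber:
    """Finite sum of terms c * ①**p, stored as {exponent p: coefficient c}."""

    def __init__(self, terms=None):
        # Drop zero coefficients so equal numbers have equal representations.
        self.terms = {p: c for p, c in (terms or {}).items() if c != 0}

    def __add__(self, other):
        # Addition combines coefficients of like powers of ①.
        out = dict(self.terms)
        for p, c in other.terms.items():
            out[p] = out.get(p, 0) + c
        return GrossNumber(out)

    def __mul__(self, other):
        # Multiplication distributes terms and adds exponents of ①.
        out = {}
        for p1, c1 in self.terms.items():
            for p2, c2 in other.terms.items():
                out[p1 + p2] = out.get(p1 + p2, 0) + c1 * c2
        return GrossNumber(out)

    def __repr__(self):
        terms = sorted(self.terms.items(), reverse=True)
        return " + ".join(f"{c}*①^{p}" for p, c in terms) or "0"

# An infinite number times an infinitesimal: (2·①^1 + 3) * ①^(-1)
x = GrossNumber({1: 2, 0: 3})
eps = GrossNumber({-1: 1})
print(x * eps)  # 2*①^0 + 3*①^-1  (a finite part plus an infinitesimal part)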
“…(Notice that the noncontradictoriness of the ①-based computational methodology has been studied in depth in [15][16][17].) From the practical point of view, this methodology has given rise both to a new supercomputer patented in several countries (see [18]) and called Infinity Computer and to a variety of applications starting from optimization (see [12,[19][20][21][22][23][24]) and going through infinite series (see [13,[25][26][27][28]), fractals and cellular automata (see [25,[29][30][31][32]), hyperbolic geometry and percolation (see [33,34]), the first Hilbert problem and Turing machines (see [13,35,36]), infinite decision making processes and probability (see [13,[37][38][39]), numerical differentiation and ordinary differential equations (see [40][41][42][43]), etc.…”
Section: A Brief Introduction to the ①-Based Computational Methodology
Classification: mentioning; confidence: 99%
“…Finally, in the above matrices L and B we assume (for the sake of simplicity) that the Krylov-based method in [14] has performed all CG steps, with the exception of only one planar iteration (namely the kth iteration; see [14] and [48]), corresponding to have p_k^T A p_k ≈ 0. Then, our novel approach proposes to introduce the numeral grossone, as in [13,[22][23][24], and follows some guidelines from [12], in order to exploit a suitable matrix factorization from (16), such that Lemma 4.1 is fulfilled. In this regard, consider matrix B in (16) and the next technical result.…”
Section: Coupling CG① with the Algorithm in [14]
Classification: mentioning; confidence: 99%
“…In summary, with the change suggested by Finkel and Kelley, DIRECT produces the same sequence of iterates when applied to f(x) as it does when applied to a + b f(x) for any a and any b > 0. This is an obviously desirable property called "strong homogeneity" [12,66,78].…”
Section: Making DIRECT Insensitive to Additive and Multiplicative Scaling
Classification: mentioning; confidence: 99%
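The quoted passage lends itself to a direct numerical check. The toy search below is only an illustrative stand-in for DIRECT (a Python sketch, not the algorithm from the cited works): because its decisions use nothing but order comparisons of objective values, it evaluates f(x) and a + b·f(x), b > 0, at exactly the same points, which is the strong-homogeneity property being discussed.

def comparison_search(f, lo, hi, iters=20):
    """Shrink [lo, hi] toward the better endpoint of a three-point probe."""
    pts = []
    for _ in range(iters):
        m = (lo + hi) / 2.0
        probes = [lo, m, hi]
        pts.extend(probes)
        vals = [f(x) for x in probes]
        # Only order comparisons are used, so any transformation a + b*f
        # with b > 0 leaves every decision (and every probe point) unchanged.
        if vals[0] <= vals[2]:
            hi = m
        else:
            lo = m
    return pts

f = lambda x: (x - 0.3) ** 2 + 0.1 * x ** 4
g = lambda x: 7.0 + 5.0 * f(x)          # a = 7, b = 5 > 0

assert comparison_search(f, -1.0, 1.0) == comparison_search(g, -1.0, 1.0)
print("identical evaluation points for f and a + b*f")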