The multistart clustering global optimization method GLOBAL was introduced in the 1980s for bound-constrained global optimization problems with black-box objective functions. Since then the technological environment has changed considerably. The present paper briefly describes the revisions and updates made to the underlying algorithms to exploit these newer technologies and to improve the method's reliability. We discuss in detail the results of a numerical comparison with the old version and with C-GRASP, a continuous version of the GRASP method. According to these findings, the new version of GLOBAL is both more reliable and more efficient than the old one, and it also compares favorably with C-GRASP.
Four methods for global numerical black-box optimization with origins in the mathematical programming community are described and experimentally compared with the state-of-the-art evolutionary method BIPOP-CMA-ES. The methods chosen for the comparison exhibit various features that are potentially interesting for the evolutionary computation community: systematic sampling of the search space (DIRECT, MCS), possibly combined with a local search method (MCS), or a multi-start approach (NEWUOA, GLOBAL), possibly equipped with a careful selection of the points from which to run a local optimizer (GLOBAL). The recently proposed "comparing continuous optimizers" (COCO) methodology was adopted as the basis for the comparison. Based on the results, we offer suggestions about which algorithm should be used depending on the available budget of function evaluations, and we propose several possibilities for hybridizing evolutionary algorithms (EAs) with features of the other compared algorithms.
GLOBAL is a multi-start type stochastic method for bound-constrained global optimization problems. Its goal is to find the best local minima, which are potentially global. For this reason it combines sampling, clustering, and local search. The role of clustering is to reduce the number of local searches by forming groups of points around the local minimizers of a uniformly sampled domain and starting only a few local searches in each of those groups. We evaluate the performance of the GLOBAL algorithm on the BBOB 2009 noiseless testbed, which contains problems reflecting the typical difficulties arising in real-world applications. The obtained results are also compared with those of a simple multi-start procedure in order to analyze the effect of the applied clustering rule. An improved parameterization is introduced in the GLOBAL method, and the performance of the new procedure is compared with that of the MATLAB GlobalSearch solver using the BBOB 2010 test environment.
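The sample-cluster-search loop described above can be sketched as follows. This is a minimal illustration, not the GLOBAL implementation: the objective, the fixed distance threshold for single-linkage grouping, and all parameter values are toy assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def clustered_multistart(f, bounds, n_sample=200, keep_frac=0.2,
                         link_dist=0.3, seed=0):
    """Sketch of a clustered multistart (illustrative only).

    Sample uniformly, keep the best fraction, group the kept points by
    single linkage, then run one local search per group instead of one
    per sample point.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pts = rng.uniform(lo, hi, size=(n_sample, len(lo)))
    vals = np.apply_along_axis(f, 1, pts)
    # Reduced sample: keep only the best fraction, sorted by value.
    order = np.argsort(vals)[: int(keep_frac * n_sample)]
    kept = pts[order]

    # Greedy single-linkage grouping by Euclidean distance threshold.
    clusters = []
    for p in kept:
        for c in clusters:
            if min(np.linalg.norm(p - q) for q in c) < link_dist:
                c.append(p)
                break
        else:
            clusters.append([p])

    # One local search per cluster, started from its best point.
    results = [minimize(f, c[0], bounds=list(zip(lo, hi))) for c in clusters]
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun, len(clusters)

# Toy usage: a 2-D quadratic with known minimum at (1, -2).
x, fx, k = clustered_multistart(lambda z: (z[0] - 1)**2 + (z[1] + 2)**2,
                                [(-5, 5), (-5, 5)])
```

The point of the clustering step is visible in the return value: the number of local searches equals the number of clusters, not the sample size.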
Multi Level Single Linkage (MLSL) is a multistart, stochastic global optimization method which relies on random sampling and local search. In this paper, we benchmarked three variants of the MLSL algorithm using two gradient-based and one derivative-free local search method on the noiseless function testbed. The three methods were also compared with a commercial multistart solver called OQNLP (OptQuest/NLP). Our experiments showed that the results can be influenced substantially by the applied local search procedure. Depending on the type of the problem, the gradient-based local search methods are faster in the initial stage of the optimization, while the derivative-free method shows superior performance in the final phase for moderate dimensions. Considering the percentage of solved problems, OQNLP is similar or even better (for multi-modal and weakly structured functions) in 5-D than the MLSL method equipped with the gradient-type local search methods, while in 20-D the latter algorithms are usually faster.
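The defining MLSL rule decides from which sample points a local search is started: a point is used as a start unless some other sample point with a lower objective value lies within a critical radius. A minimal sketch of that rule, with a fixed radius (the actual method shrinks the radius as the sample grows):

```python
import numpy as np

def mlsl_start_points(pts, vals, radius):
    """Sketch of the MLSL start-point rule (illustrative only).

    A sample point becomes a local-search start unless another sample
    point with a lower objective value lies within `radius` of it.
    """
    starts = []
    for i, (p, v) in enumerate(zip(pts, vals)):
        better_nearby = any(
            j != i and vals[j] < v and np.linalg.norm(pts[j] - p) < radius
            for j in range(len(pts))
        )
        if not better_nearby:
            starts.append(i)
    return starts

# Toy usage: the first point is dominated by a close, better neighbor,
# so only points 1 and 2 qualify as local-search starts.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
vals = np.array([1.0, 0.5, 0.2])
idx = mlsl_start_points(pts, vals, radius=0.5)
```

This rule is what keeps the number of local searches (and hence the cost of the gradient-based or derivative-free subroutines compared in the paper) far below the sample size.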
Multi Level Single Linkage (MLSL) is a well-known stochastic global optimization method. In this paper, a new hybrid variant (HMLSL) of the MLSL algorithm is presented. The most important improvements are related to the sampling phase: the sample is generated from a Sobol quasi-random sequence, and a small percentage of the population is further improved by using crossover and mutation operators as in a traditional differential evolution (DE) method. The aim of this study is to evaluate the performance of the new HMLSL algorithm on the testbed of 24 noiseless functions. The new algorithm is also compared against a simple MLSL and a traditional DE in order to identify the benefits of the applied improvements. The results confirm that HMLSL outperforms the MLSL and DE methods. The new method has a larger probability of success and is usually faster than the other two algorithms, especially in the final stage of the optimization.
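The described sampling phase — a Sobol sequence refined by DE-style operators — can be sketched as below. This is an illustrative reading of the abstract, not the HMLSL code; the refined fraction, the DE parameters F and CR, and the greedy parent-vs-trial selection are assumptions.

```python
import numpy as np
from scipy.stats import qmc

def hybrid_sample(f, lo, hi, n=64, refine_frac=0.25, F=0.8, CR=0.9, seed=0):
    """Sketch of an HMLSL-style sampling phase (illustrative only).

    Draw the sample from a Sobol sequence, then improve the best
    fraction with DE/rand/1/bin mutation and crossover, keeping a
    trial point only when it beats its parent.
    """
    rng = np.random.default_rng(seed)
    d = len(lo)
    # Quasi-random sample scaled into the box [lo, hi].
    pts = qmc.scale(qmc.Sobol(d, scramble=True, seed=seed).random(n), lo, hi)
    vals = np.apply_along_axis(f, 1, pts)

    # Refine only the best fraction of the population.
    elite = np.argsort(vals)[: max(3, int(refine_frac * n))]
    for i in elite:
        a, b, c = rng.choice(elite, 3, replace=False)
        mutant = pts[a] + F * (pts[b] - pts[c])     # DE/rand/1 mutation
        cross = rng.random(d) < CR                  # binomial crossover mask
        cross[rng.integers(d)] = True               # keep >= 1 mutant gene
        trial = np.clip(np.where(cross, mutant, pts[i]), lo, hi)
        fv = f(trial)
        if fv < vals[i]:                            # greedy selection
            pts[i], vals[i] = trial, fv
    return pts, vals

# Toy usage: sphere function on [-5, 5]^2.
pts, vals = hybrid_sample(lambda z: float(np.sum(z**2)),
                          np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

A power-of-two sample size is used because Sobol sequences are balanced at those lengths.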
The covariance matrix is an important element of many asset allocation strategies. The widely used sample covariance matrix estimator is unstable, especially when the number of time observations is small and the number of assets is large, or when high-dimensional data are involved in the computation. In this study, we focus on the most important estimators applied to a group of Markowitz-type strategies, and also on a recently introduced method based on hierarchical tree clustering. The performance tests of the portfolio strategies using different covariance matrix estimators rely on the out-of-sample characteristics of synthetic and real stock data.
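The instability mentioned above can be demonstrated on synthetic returns. The sketch below is a toy illustration, not one of the estimators tested in the paper: it compares the condition number of the sample covariance matrix with that of a simple shrinkage toward a scaled identity (a crude stand-in for Ledoit-Wolf-style estimators), with an assumed fixed shrinkage intensity.

```python
import numpy as np

def sample_cov_condition(n_assets=50, n_obs=60, shrink=0.2, seed=0):
    """Sketch of sample-covariance instability (illustrative only).

    With few observations per asset, the sample covariance matrix is
    nearly singular; shrinking it toward a scaled identity restores a
    well-conditioned estimate.
    """
    rng = np.random.default_rng(seed)
    returns = rng.normal(size=(n_obs, n_assets))    # i.i.d. toy returns
    S = np.cov(returns, rowvar=False)               # sample estimator
    target = np.trace(S) / n_assets * np.eye(n_assets)
    S_shrunk = (1 - shrink) * S + shrink * target   # convex combination
    return np.linalg.cond(S), np.linalg.cond(S_shrunk)

raw_cond, shrunk_cond = sample_cov_condition()
```

With 50 assets and only 60 observations the true covariance here is the identity, yet the sample estimate is badly conditioned, which is exactly what destabilizes Markowitz-type weight computations.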