In this work, we introduce multiplicative drift analysis as a suitable way to analyze the runtime of randomized search heuristics such as evolutionary algorithms. We give a multiplicative version of the classical drift theorem. This allows easier analyses in those settings where the optimization progress is roughly proportional to the current distance to the optimum. To display the strength of this tool, we regard the classical problem of how the (1+1) Evolutionary Algorithm optimizes an arbitrary linear pseudo-Boolean function. Here, we first give a relatively simple proof of the fact that any linear function is optimized in expected time O(n log n), where n is the length of the bit string. Afterwards, we show that in fact any such function is optimized in expected time at most (1 + o(1))1.39en ln(n), again using multiplicative drift analysis. We also prove a corresponding lower bound of (1 − o(1))en ln(n), which actually holds for all functions with a unique global optimum. We further demonstrate how our drift theorem immediately gives natural proofs (with better constants) for the best known runtime bounds for the (1+1) Evolutionary Algorithm on combinatorial problems like finding minimum spanning trees, shortest paths, or Euler tours.
* Carola Winzen is a recipient of the Google Europe Fellowship in Randomized Algorithms, and this work is supported in part by this Google Fellowship.
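The algorithm this abstract analyzes can be sketched in a few lines. The following is a minimal illustration (not the paper's code): the (1+1) EA with standard bit mutation maximizing a linear pseudo-Boolean function with positive weights, whose unique optimum is the all-ones string. The concrete weight vector is an arbitrary choice for the example.

```python
import random

def one_plus_one_ea(f, n, seed=0, max_iters=200000):
    # (1+1) EA with standard bit mutation: flip each bit independently
    # with probability 1/n; keep the offspring iff it is at least as fit.
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        if all(x):  # all-ones is optimal for positive weights
            return x, t
    return x, max_iters

# Example: an arbitrary linear function with weights 1, 2, ..., n.
n = 16
f = lambda v: sum((i + 1) * b for i, b in enumerate(v))
x, t = one_plus_one_ea(f, n)
```

For positive weights the expected number of iterations is Θ(n log n), matching the bounds stated above; a single run of this sketch typically finishes within a few hundred iterations for n = 16.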
Abstract. Finding optimal inpainting data plays a key role in the field of image compression with partial differential equations (PDEs). In this paper, we optimise the spatial as well as the tonal data such that an image can be reconstructed with minimised error by means of discrete homogeneous diffusion inpainting. To optimise the spatial distribution of the inpainting data, we apply a probabilistic data sparsification followed by a nonlocal pixel exchange. Afterwards we optimise the grey values in these inpainting points in an exact way using a least squares approach. The resulting method allows almost perfect reconstructions with only 5% of all pixels. This demonstrates that a thorough data optimisation can compensate for most deficiencies of a suboptimal PDE interpolant.
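The two ingredients described above, homogeneous diffusion inpainting and exact least-squares tonal optimisation, can be illustrated on a 1-D signal. This is a small sketch, not the paper's method: it uses a simple 1-D discretisation with reflecting boundaries, and exploits the fact that the reconstruction depends linearly on the grey values at the mask points, so the optimal grey values solve an ordinary least-squares problem.

```python
import numpy as np

def inpaint_1d(mask, values, n):
    # Discrete homogeneous diffusion inpainting of a 1-D signal:
    # u[i] = values[i] at mask points, discrete Laplacian of u = 0
    # elsewhere (reflecting boundary conditions).
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if mask[i]:
            A[i, i] = 1.0
            b[i] = values[i]
        else:
            A[i, max(i - 1, 0)] += 1.0
            A[i, min(i + 1, n - 1)] += 1.0
            A[i, i] += -2.0
    return np.linalg.solve(A, b)

def optimal_tonal(mask, f):
    # The reconstruction is linear in the grey values g at the mask
    # points: column j of M is the inpainting of a unit impulse at
    # mask point j.  The error-minimising g is the least-squares
    # solution of M g ≈ f.
    n = len(f)
    idx = np.flatnonzero(mask)
    M = np.column_stack([inpaint_1d(mask, np.eye(n)[:, j], n) for j in idx])
    g, *_ = np.linalg.lstsq(M, f, rcond=None)
    return M @ g, g

# A linear ramp is reconstructed exactly from its two endpoints.
mask = np.array([True, False, False, False, True])
signal = np.arange(5.0)
recon, g = optimal_tonal(mask, signal)
```

Building the influence matrix M column by column is only feasible for small masks; the paper's least-squares approach for real images requires more efficient solvers, which this sketch does not attempt.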
Drift analysis is one of the strongest tools in the analysis of evolutionary algorithms. Its main weakness is that it is often very hard to find a good drift function. In this paper, we make progress in this direction. We prove a multiplicative version of the classical drift theorem. This allows easier analyses in those settings where the optimization progress is roughly proportional to the current objective value. Our drift theorem immediately gives natural proofs for the best known run-time bounds for the (1+1) Evolutionary Algorithm computing minimum spanning trees and shortest paths, since here we may simply take the objective function as drift function. As a more challenging example, we give a relatively simple proof of the fact that any linear function is optimized in time O(n log n). In the multiplicative setting, a simple linear function can be used as drift function (without taking any logarithms). However, we also show that, both in the classical and the multiplicative setting, drift functions yielding good results for all linear functions exist only if the mutation probability is at most c/n for a small constant c.
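The notion of multiplicative drift, expected progress proportional to the current distance, can be checked empirically in a toy case. The sketch below (an illustration, not taken from the paper) uses Randomized Local Search on OneMax, where the distance d(x) is the number of zero bits: one step flips a uniformly random bit and accepts iff fitness does not decrease, so the exact drift is E[d(x) − d(x′)] = d(x)/n, i.e. multiplicative drift with δ = 1/n.

```python
import random

def rls_step(x, rng):
    # Randomized Local Search on OneMax: flip one uniformly random bit,
    # accept the result iff the number of ones does not decrease.
    y = list(x)
    y[rng.randrange(len(x))] ^= 1
    return y if sum(y) >= sum(x) else x

def empirical_drift(x, trials, seed=0):
    # Monte-Carlo estimate of E[d(x) - d(x')] for one RLS step,
    # where d counts the zero bits (distance to the optimum).
    rng = random.Random(seed)
    d = x.count(0)
    dec = sum(d - rls_step(x, rng).count(0) for _ in range(trials))
    return dec / trials

# With n = 20 and d = 8 zero bits, the exact drift is 8/20 = 0.4.
est = empirical_drift([0] * 8 + [1] * 12, trials=20000)
```

Plugging δ = 1/n into a multiplicative drift theorem yields an expected run time of at most (1 + ln d(x₀))/δ ≤ n(1 + ln n), the familiar O(n log n) bound.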
Rigorous runtime analyses of evolutionary algorithms (EAs) mainly investigate algorithms that use elitist selection methods. Two commonly studied algorithms are Randomized Local Search (RLS) and the (1+1) EA, and it is well known that both optimize any linear pseudo-Boolean function on n bits within an expected number of O(n log n) fitness evaluations. In this paper, we analyze variants of these algorithms that use fitness proportional selection. A well-known method for analyzing the local changes in the solutions of RLS is a reduction to the gambler's ruin problem. We extend this method in order to analyze the global changes imposed by the (1+1) EA. By applying this new technique we show that with high probability using fitness proportional selection leads to an exponential optimization time for any linear pseudo-Boolean function with non-zero weights. Even worse, all solutions of the algorithms during an exponential number of fitness evaluations differ with high probability in linearly many bits from the optimal solution. Our theoretical studies are complemented by experimental investigations which confirm the asymptotic results on realistic input sizes.
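For contrast with the elitist (1+1) EA, a non-elitist variant of the kind studied here can be sketched as follows. The acceptance rule, replacing the parent with probability f(y)/(f(x) + f(y)), is one standard way to model fitness-proportional selection between parent and offspring; it is an assumption for illustration, and the paper's precise variant may differ in detail.

```python
import random

def one_plus_one_ea_fprop(f, n, iters, seed=0):
    # (1+1) EA with fitness-proportional selection between parent x and
    # offspring y: y replaces x with probability f(y) / (f(x) + f(y)),
    # so worsenings are accepted regularly (no elitism).
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fx, fy = f(x), f(y)
        if fx + fy == 0 or rng.random() < fy / (fx + fy):
            x = y
    return x

# Example: OneMax (sum of bits) under fitness-proportional selection.
x = one_plus_one_ea_fprop(sum, n=20, iters=1000)
```

Because the acceptance probability is close to 1/2 whenever parent and offspring have similar fitness, the process behaves almost like an unbiased random walk near the optimum, which is the intuition behind the exponential lower bounds stated above.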