TY - JOUR
A1 - Shi, Feng
A1 - Schirneck, Friedrich Martin
A1 - Friedrich, Tobias
A1 - Kötzing, Timo
A1 - Neumann, Frank
T1 - Reoptimization time analysis of evolutionary algorithms on linear functions under dynamic uniform constraints
JF - Algorithmica : an international journal in computer science
N2 - Rigorous runtime analysis is a major approach towards understanding evolutionary computing techniques, and in this area linear pseudo-Boolean objective functions play a central role. Adding a linear constraint then makes the problem equivalent to the NP-hard Knapsack problem; certain classes thereof have been studied in recent works. In this article, we present a dynamic model of optimizing linear functions under uniform constraints. Starting from an optimal solution with respect to a given constraint bound, we investigate the runtimes that different evolutionary algorithms need to recompute an optimal solution when the constraint bound changes by a certain amount. The classical (1+1) EA and several population-based algorithms are designed for that purpose and are shown to recompute efficiently. Furthermore, a variant of the (1+(λ,λ)) GA for the dynamic optimization problem is studied, whose performance is better when the change of the constraint bound is small.
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-605295
SN - 0178-4617
SN - 1432-0541
VL - 82
IS - 10
SP - 3117
EP - 3123
PB - Springer
CY - New York
ER -
TY - JOUR
A1 - Doerr, Benjamin
A1 - Kötzing, Timo
T1 - Lower bounds from fitness levels made easy
JF - Algorithmica
N2 - One of the first and easiest-to-use techniques for proving run time bounds for evolutionary algorithms is the so-called method of fitness levels by Wegener. It uses a partition of the search space into a sequence of levels which are traversed by the algorithm in increasing order, possibly skipping levels. An easy, but often strong upper bound for the run time can then be derived by adding the reciprocals of the probabilities to leave the levels (or upper bounds for these). Unfortunately, a similarly effective method for proving lower bounds has not yet been established. The strongest such method, proposed by Sudholt (2013), requires a careful choice of the viscosity parameters gamma_{i,j}, 0 <= i < j <= n. In this paper we present two new variants of the method, one for upper and one for lower bounds. Besides the level leaving probabilities, they only rely on the probabilities that levels are visited at all. We show that these can be computed or estimated without great difficulty and apply our method to reprove the following known results in an easy and natural way. (i) The precise run time of the (1+1) EA on LEADINGONES. (ii) A lower bound for the run time of the (1+1) EA on ONEMAX, tight apart from an O(n) term. (iii) A lower bound for the run time of the (1+1) EA on long k-paths (which differs slightly from the previous result due to a small error in the latter). We also prove a tighter lower bound for the run time of the (1+1) EA on jump functions by showing that, regardless of the jump size, only with probability O(2^{-n}) can the algorithm avoid jumping over the valley of low fitness.
KW - First hitting time
KW - Fitness level method
KW - Evolutionary computation
Y1 - 2022
U6 - https://doi.org/10.1007/s00453-022-00952-w
SN - 0178-4617
SN - 1432-0541
PB - Springer
CY - New York
ER -
TY - JOUR
A1 - Doerr, Benjamin
A1 - Kötzing, Timo
A1 - Lagodzinski, Gregor J. A.
A1 - Lengler, Johannes
T1 - The impact of lexicographic parsimony pressure for ORDER/MAJORITY on the run time
JF - Theoretical computer science : the journal of the EATCS
N2 - While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat, that is, the unnecessary growth of solution lengths, which may slow down the optimization process. So far, the mathematical runtime analysis could not deal well with bloat and required explicit assumptions limiting bloat. In this paper, we provide the first mathematical runtime analysis of a GP algorithm that does not require any assumptions on the bloat. Previous performance guarantees were only proven conditionally for runs in which no strong bloat occurs. Together with improved analyses for the case with bloat restrictions, our results show that such assumptions on the bloat are not necessary and that the algorithm is efficient without an explicit bloat control mechanism. More specifically, we analyze the performance of the (1+1) GP on the two benchmark functions ORDER and MAJORITY. When using lexicographic parsimony pressure as bloat control, we show a tight runtime estimate of O(T_init + n log n) iterations both for ORDER and MAJORITY. For the case without bloat control, the bounds O(T_init log T_init + n (log n)^3) and Omega(T_init + n log n) (and Omega(T_init log T_init) for n = 1) hold for MAJORITY.
KW - genetic programming
KW - bloat control
KW - theory
KW - runtime analysis
Y1 - 2020
U6 - https://doi.org/10.1016/j.tcs.2020.01.011
SN - 0304-3975
SN - 1879-2294
VL - 816
SP - 144
EP - 168
PB - Elsevier
CY - Amsterdam [u.a.]
ER -
TY - JOUR
A1 - Kötzing, Timo
A1 - Lagodzinski, Gregor J. A.
A1 - Lengler, Johannes
A1 - Melnichenko, Anna
T1 - Destructiveness of lexicographic parsimony pressure and alleviation by a concatenation crossover in genetic programming
JF - Theoretical computer science
N2 - For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation: the variable-size representations, in particular yielding a possible bloat (i.e. the growth of individuals with redundant parts); and the role and the realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had surprisingly little share in this work.
We analyze a simple crossover operator in combination with randomized local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); we denote the resulting algorithm Concatenation Crossover GP. We consider three variants of the well-studied MAJORITY test function, adding large plateaus in different ways to the fitness landscape and thus giving a test bed for analyzing the interplay of variation operators and bloat control mechanisms in a setting with local optima. We show that the Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants independent of employing bloat control.
KW - genetic programming
KW - mutation
KW - theory
KW - run time analysis
Y1 - 2020
U6 - https://doi.org/10.1016/j.tcs.2019.11.036
SN - 0304-3975
VL - 816
SP - 96
EP - 113
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Doerr, Benjamin
A1 - Kötzing, Timo
T1 - Multiplicative Up-Drift
JF - Algorithmica
N2 - Drift analysis aims at translating the expected progress of an evolutionary algorithm (or more generally, a random process) into a probabilistic guarantee on its run time (hitting time). So far, drift arguments have been successfully employed in the rigorous analysis of evolutionary algorithms, however only for the situation that the progress is constant or becomes weaker when approaching the target. Motivated by questions like how fast fit individuals take over a population, we analyze random processes exhibiting a (1+delta)-multiplicative growth in expectation. We prove a drift theorem translating this expected progress into a hitting time. This drift theorem gives a simple and insightful proof of the level-based theorem first proposed by Lehre (2011). Our version of this theorem has, for the first time, the best-possible near-linear dependence on 1/delta (the previous results had an at least near-quadratic dependence), and it only requires a population size near-linear in 1/delta (this was super-quadratic in previous results). These improvements immediately lead to stronger run time guarantees for a number of applications. We also discuss the case of large delta and show stronger results for this setting.
KW - drift theory
KW - evolutionary computation
KW - stochastic process
Y1 - 2020
U6 - https://doi.org/10.1007/s00453-020-00775-7
SN - 0178-4617
SN - 1432-0541
VL - 83
IS - 10
SP - 3017
EP - 3058
PB - Springer
CY - New York
ER -
TY - GEN
A1 - Kötzing, Timo
A1 - Krejca, Martin Stefan
T1 - First-Hitting times under additive drift
T2 - Parallel Problem Solving from Nature – PPSN XV, PT II
N2 - For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding expected local changes of the process - the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary.
Our additive drift theorem for upper bounds only requires the process to be nonnegative, that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
Y1 - 2018
SN - 978-3-319-99259-4
SN - 978-3-319-99258-7
U6 - https://doi.org/10.1007/978-3-319-99259-4_8
SN - 0302-9743
SN - 1611-3349
VL - 11102
SP - 92
EP - 104
PB - Springer
CY - Cham
ER -
TY - GEN
A1 - Kötzing, Timo
A1 - Krejca, Martin Stefan
T1 - First-Hitting times for finite state spaces
T2 - Parallel Problem Solving from Nature – PPSN XV, PT II
N2 - One of the most important aspects of a randomized algorithm is bounding its expected run time on various problems. Formally speaking, this means bounding the expected first-hitting time of a random process. The two arguably most popular tools to do so are the fitness level method and drift theory. The fitness level method considers arbitrary transition probabilities but only allows the process to move toward the goal. On the other hand, drift theory allows the process to move in any direction as long as it moves closer to the goal in expectation; however, this tendency has to be monotone and, thus, the transition probabilities cannot be arbitrary. We provide a result that combines the benefits of these two approaches: our result gives a lower and an upper bound for the expected first-hitting time of a random process over {0, ..., n} that is allowed to move forward and backward by 1 and can use arbitrary transition probabilities. In case the transition probabilities are known, our bounds coincide and yield the exact value of the expected first-hitting time. Further, we also state the stationary distribution as well as the mixing time of a special case of our scenario.
Y1 - 2018
SN - 978-3-319-99259-4
SN - 978-3-319-99258-7
U6 - https://doi.org/10.1007/978-3-319-99259-4_7
SN - 0302-9743
SN - 1611-3349
VL - 11102
SP - 79
EP - 91
PB - Springer
CY - Cham
ER -
TY - GEN
A1 - Kötzing, Timo
A1 - Lagodzinski, Gregor J. A.
A1 - Lengler, Johannes
A1 - Melnichenko, Anna
T1 - Destructiveness of Lexicographic Parsimony Pressure and Alleviation by a Concatenation Crossover in Genetic Programming
T2 - Parallel Problem Solving from Nature – PPSN XV
N2 - For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable-size representations, in particular yielding a possible bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had surprisingly little share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied MAJORITY test function with large plateaus are considered. We show that the Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants independent of employing bloat control.
Y1 - 2018
SN - 978-3-319-99259-4
SN - 978-3-319-99258-7
U6 - https://doi.org/10.1007/978-3-319-99259-4_4
SN - 0302-9743
SN - 1611-3349
VL - 11102
SP - 42
EP - 54
PB - Springer
CY - Cham
ER -
TY - JOUR
A1 - Friedrich, Tobias
A1 - Kötzing, Timo
A1 - Krejca, Martin Stefan
T1 - Unbiasedness of estimation-of-distribution algorithms
JF - Theoretical computer science
N2 - In the context of black-box optimization, black-box complexity is used for understanding the inherent difficulty of a given optimization problem. Central to our understanding of nature-inspired search heuristics in this context is the notion of unbiasedness. Specialized black-box complexities have been developed in order to better understand the limitations of these heuristics - especially of (population-based) evolutionary algorithms (EAs). In contrast to this, we focus on a model for algorithms explicitly maintaining a probability distribution over the search space: so-called estimation-of-distribution algorithms (EDAs). We consider the recently introduced n-Bernoulli-lambda-EDA framework, which subsumes, for example, the commonly known EDAs PBIL, UMDA, lambda-MMAS_IB, and cGA. We show that an n-Bernoulli-lambda-EDA is unbiased if and only if its probability distribution satisfies a certain invariance property under isometric automorphisms of [0, 1]^n. By restricting how an n-Bernoulli-lambda-EDA can perform an update, in a way common to many examples, we derive more concise characterizations, which are easy to verify. We demonstrate this by showing that our examples above are all unbiased.
KW - Estimation-of-distribution algorithm
KW - Unbiasedness
KW - Theory
Y1 - 2019
U6 - https://doi.org/10.1016/j.tcs.2018.11.001
SN - 0304-3975
SN - 1879-2294
VL - 785
SP - 46
EP - 59
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Kötzing, Timo
A1 - Krejca, Martin Stefan
T1 - First-hitting times under drift
JF - Theoretical computer science
N2 - For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding expected local changes of the process - the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be lower-bounded, that is, we remove unnecessary restrictions like a finite, discrete, or bounded state space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift. By bounding the step size of the process, we derive new lower-bounding multiplicative and variable drift theorems. Lastly, we also state theorems that are applicable when the process has a drift of 0, by using a drift on the variance of the process.
KW - First-hitting time
KW - Random process
KW - Drift
Y1 - 2019
U6 - https://doi.org/10.1016/j.tcs.2019.08.021
SN - 0304-3975
SN - 1879-2294
VL - 796
SP - 51
EP - 69
PB - Elsevier
CY - Amsterdam
ER -
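Several of the records above rely on two classical tools, the fitness level method and the additive drift theorem. As a quick reference, their standard textbook formulations read as follows; these are generic statements, not the (more general) versions proved in the cited papers.

Fitness level method (upper bound): if the search space is partitioned into levels A_1, ..., A_m that the algorithm traverses in increasing order, A_m contains only optimal solutions, and the probability to leave level A_i towards a higher level is at least p_i, then the expected run time satisfies
\[ \mathrm{E}[T] \;\le\; \sum_{i=1}^{m-1} \frac{1}{p_i}. \]

Additive drift (upper bound): if a nonnegative process (X_t) with target 0 satisfies \( \mathrm{E}[X_t - X_{t+1} \mid X_t > 0] \ge \delta \) for some \( \delta > 0 \), then the first-hitting time \( T = \min\{t : X_t = 0\} \) satisfies
\[ \mathrm{E}[T \mid X_0] \;\le\; \frac{X_0}{\delta}. \]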
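The (1+1) EA analyzed in several of the abstracts above is simple enough to sketch. The following minimal Python illustration runs it on the OneMax benchmark; the function and parameter names are our own and not taken from the cited papers.

import random

def one_max(x):
    # OneMax benchmark: the fitness is the number of ones in the bit string.
    return sum(x)

def one_plus_one_ea(n, fitness, max_iters=100_000):
    # (1+1) EA: keep a single parent, flip each bit independently with
    # probability 1/n, and accept the offspring if it is not worse.
    parent = [random.randint(0, 1) for _ in range(n)]
    f_parent = fitness(parent)
    for t in range(1, max_iters + 1):
        offspring = [1 - bit if random.random() < 1.0 / n else bit for bit in parent]
        f_offspring = fitness(offspring)
        if f_offspring >= f_parent:
            parent, f_parent = offspring, f_offspring
        if f_parent == n:  # the optimum of OneMax has fitness n
            return t       # number of iterations until the optimum was hit
    return max_iters

if __name__ == "__main__":
    random.seed(0)
    print(one_plus_one_ea(50, one_max))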