Dagstuhl Seminar Proceedings, Volume 8051
Dagstuhl Seminar Proceedings
DagSemProc
https://www.dagstuhl.de/dagpub/1862-4405
https://dblp.org/db/series/dagstuhl
1862-4405
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
8051
2008
https://drops.dagstuhl.de/entities/volume/DagSemProc-volume-8051
08051 Abstracts Collection – Theory of Evolutionary Algorithms
From Jan. 27, 2008 to Feb. 1, 2008, the Dagstuhl Seminar 08051 "Theory of Evolutionary Algorithms" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Evolutionary Computation
Theory of Evolutionary Algorithms
1-15
Regular Paper
Dirk V.
Arnold
Dirk V. Arnold
Anne
Auger
Anne Auger
Carsten
Witt
Carsten Witt
Jonathan E.
Rowe
Jonathan E. Rowe
10.4230/DagSemProc.08051.1
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode
08051 Executive Summary – Theory of Evolutionary Algorithms
The 2008 Dagstuhl Seminar "Theory of Evolutionary Algorithms" was the fifth in a firmly established series of biennial events. In the week from Jan. 27, 2008 to Feb. 1, 2008, 47 researchers from nine countries discussed their recent work and trends in evolutionary computation.
Evolutionary Algorithms
Theory of Evolutionary Algorithms
1-5
Regular Paper
Dirk V.
Arnold
Dirk V. Arnold
Anne
Auger
Anne Auger
Jonathan E.
Rowe
Jonathan E. Rowe
Carsten
Witt
Carsten Witt
10.4230/DagSemProc.08051.2
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode
A Comparison of GAs Penalizing Infeasible Solutions and Repairing Infeasible Solutions on the 0-1 Knapsack Problem
Constraints exist in almost every optimization problem. Different
constraint handling techniques have been incorporated into genetic
algorithms (GAs); however, most current studies are based on
computer experiments. An example is Michalewicz's comparison among
GAs using different constraint handling techniques on the 0-1
knapsack problem. The following phenomena are observed in
experiments: 1) the penalty method needs more generations to find a
feasible solution to the restrictive capacity knapsack than the
repair method; 2) the penalty method can find
better solutions to the average capacity knapsack. Such observations
need a theoretical explanation. This paper aims at providing a
theoretical analysis of Michalewicz's experiments. The main result
of the paper is that GAs using the repair method are more efficient
than GAs using the penalty method on both restrictive capacity and
average capacity knapsack problems. The result for the average
capacity knapsack differs slightly from Michalewicz's experimental
findings, so a supplemental experiment is carried out to support the
theoretical claim. The results confirm the general principle pointed
out by Coello: a better constraint-handling approach should tend to
exploit specific domain knowledge.
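As a rough illustration of the two constraint handlers compared in the paper, the sketch below runs a minimal (1+1)-style GA on a toy 0-1 knapsack instance. The instance data, penalty coefficient, and the greedy profit/weight repair rule are illustrative assumptions, not taken from the paper itself.

```python
import random

random.seed(1)

# Toy knapsack instance (weights, profits, capacity are illustrative only).
WEIGHTS = [3, 5, 7, 4, 6, 2, 8, 5]
PROFITS = [4, 6, 9, 5, 7, 2, 10, 6]
CAPACITY = 15
PENALTY = 20  # assumed penalty coefficient per unit of excess weight

def weight(x):
    return sum(w for w, b in zip(WEIGHTS, x) if b)

def profit(x):
    return sum(p for p, b in zip(PROFITS, x) if b)

def fitness_penalty(x):
    # Penalty method: infeasible solutions lose fitness per unit overweight.
    return profit(x) - PENALTY * max(0, weight(x) - CAPACITY)

def repair(x):
    # Repair method: drop the worst profit/weight item until feasible.
    x = list(x)
    while weight(x) > CAPACITY:
        packed = [i for i, b in enumerate(x) if b]
        worst = min(packed, key=lambda i: PROFITS[i] / WEIGHTS[i])
        x[worst] = 0
    return x

def one_plus_one_ea(handler, generations=300):
    n = len(WEIGHTS)
    x = [0] * n  # start from the empty (feasible) knapsack
    for _ in range(generations):
        # Standard bitwise mutation with rate 1/n.
        y = [b ^ (random.random() < 1 / n) for b in x]
        if handler == "repair":
            y = repair(y)
            if profit(y) >= profit(x):
                x = y
        else:  # penalty
            if fitness_penalty(y) >= fitness_penalty(x):
                x = y
    return x

best_repair = one_plus_one_ea("repair")
best_penalty = one_plus_one_ea("penalty")
```

The repair variant can only ever hold feasible solutions, whereas the penalty variant may wander through infeasible regions, which is the mechanism behind the runtime gap analyzed in the paper.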
Genetic Algorithms
Constrained Optimization
Knapsack Problem
Computation Time
Performance Analysis
1-39
Regular Paper
Jun
He
Jun He
Yuren
Zhou
Yuren Zhou
Xin
Yao
Xin Yao
10.4230/DagSemProc.08051.3
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode
Evaluating Stationary Distribution of the Binary GA Markov Chain in Special Cases
The stochastic process underlying an evolutionary algorithm is well
known to be Markovian. Such processes have been investigated in much of the
theoretical evolutionary computing research. When the mutation rate is
positive, the Markov chain modeling an evolutionary algorithm is
irreducible and, therefore, has a unique stationary distribution,
yet rather little is known about it. On the other
hand, knowing the stationary distribution may provide
some information about the expected time to hit the optimum and about the biases introduced by recombination, and it is of importance in population
genetics for assessing what is called the "genetic load" (see the
introduction for more details). In this talk I will show how the quotient
construction method can be exploited to derive rather explicit bounds on the ratios of the stationary distribution values of various subsets of
the state space. In fact, some of the bounds obtained in the current
work are expressed in terms of the parameters involved in all the
three main stages of an evolutionary algorithm: namely selection,
recombination and mutation. I will also discuss the newest developments which may allow for further improvements of the bounds
Genetic algorithms
Markov chains
stationary distribution
lumping quotient
1-0
Regular Paper
Boris S.
Mitavskiy
Boris S. Mitavskiy
Chris
Cannings
Chris Cannings
10.4230/DagSemProc.08051.4
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode
N-gram GP: Early results and half-baked ideas
In this talk I present N-gram GP, a system for evolving linear GP programs using an EDA-style system to update the probabilities of different 3-grams (triplets) of instructions. I then pick apart some of the evolved programs in an effort to better understand the properties of this approach and identify ways that it might be extended.
Doing so reveals that there are frequently cases where the system needs two triplets of the form ABC and ABD to solve the problem, but can only choose between them probabilistically in the EDA phase. I present the entirely untested idea of creating a new pseudo-instruction that is a duplicate of a key instruction. This could potentially allow the system to learn, for example, that AB is always followed by C, while AB' is always followed by D.
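The core triplet-probability mechanism can be sketched as follows. This is a minimal toy reconstruction, not the authors' system: the abstract instruction set, target program, learning rate, and population size are all assumptions made for illustration.

```python
import random
from collections import defaultdict

random.seed(0)

INSTRUCTIONS = ["A", "B", "C", "D"]       # abstract instruction set (assumed)
PROG_LEN = 6
LEARN_RATE = 0.2                          # assumed EDA learning rate
TARGET = ["A", "B", "C", "A", "B", "C"]   # toy target program

# probs[(i1, i2)][i3]: probability that instruction i3 follows the pair (i1, i2),
# initialized uniformly.
probs = defaultdict(lambda: {i: 1.0 / len(INSTRUCTIONS) for i in INSTRUCTIONS})

def sample_program():
    # Seed with two random instructions, then extend using the 3-gram model.
    prog = [random.choice(INSTRUCTIONS), random.choice(INSTRUCTIONS)]
    while len(prog) < PROG_LEN:
        dist = probs[(prog[-2], prog[-1])]
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        prog.append(nxt)
    return prog

def fitness(prog):
    # Toy fitness: positional matches against the target program.
    return sum(a == b for a, b in zip(prog, TARGET))

def update(best):
    # Shift triplet probabilities toward the triplets seen in the best program.
    for i1, i2, i3 in zip(best, best[1:], best[2:]):
        dist = probs[(i1, i2)]
        for instr in dist:
            goal = 1.0 if instr == i3 else 0.0
            dist[instr] += LEARN_RATE * (goal - dist[instr])

for _ in range(100):
    population = [sample_program() for _ in range(20)]
    update(max(population, key=fitness))
```

The ABC/ABD ambiguity described above is visible directly in this table: a single distribution `probs[("A", "B")]` must split its mass between C and D, which is exactly what the proposed pseudo-instruction B' would disentangle.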
Genetic programming
estimation of distribution algorithms
linear GP
machine learning
1-3
Regular Paper
Nicholas Freitag
McPhee
Nicholas Freitag McPhee
Riccardo
Poli
Riccardo Poli
10.4230/DagSemProc.08051.5
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode
Runtime Analysis of Binary PSO
We investigate the runtime of the Binary Particle Swarm Optimization (PSO) algorithm introduced by Kennedy and Eberhart (1997). The Binary PSO maintains a global best solution and a swarm of particles. Each particle consists of a current position, its own best position, and a velocity vector used in a probabilistic process to update the particle's position. We present lower bounds for a broad class of implementations with swarms of polynomial size. To prove upper bounds, we transfer a fitness-level argument well established for evolutionary algorithms (EAs) to PSO. This method is then applied to estimate the expected runtime on the class of unimodal functions. A simple variant of the Binary PSO is considered in more detail. The 1-PSO maintains only one particle, hence the own best and global best solutions coincide. Despite its simplicity, the 1-PSO is surprisingly efficient.
A detailed analysis for the function OneMax shows that the 1-PSO is competitive to EAs.
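A minimal sketch of the 1-PSO on OneMax, following the Kennedy and Eberhart update (velocity pulled toward the best-so-far position, clamped, then each bit sampled through a sigmoid). The acceleration coefficient, velocity clamp, and iteration budget are assumed defaults for illustration, not the parameters analyzed in the paper.

```python
import math
import random

random.seed(42)

N = 20        # bit-string length (toy OneMax instance)
C = 2.0       # acceleration coefficient (a common default, assumed here)
VMAX = 4.0    # velocity clamp

def onemax(x):
    return sum(x)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# 1-PSO: a single particle, so its own best and the global best coincide.
velocity = [0.0] * N
best = [random.randint(0, 1) for _ in range(N)]
x = best[:]

for _ in range(5000):
    for i in range(N):
        # Velocity is pulled toward the best-so-far bit, then clamped.
        velocity[i] += C * random.random() * (best[i] - x[i])
        velocity[i] = max(-VMAX, min(VMAX, velocity[i]))
        # Each bit is resampled: 1 with probability sigmoid(velocity).
        x[i] = 1 if random.random() < sigmoid(velocity[i]) else 0
    if onemax(x) >= onemax(best):
        best = x[:]
```

Accepting equal-fitness positions (the `>=`) lets the own best drift across plateaus, which plays a role similar to the tie-breaking rules used in runtime analyses of simple EAs.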
Particle swarm optimization
runtime analysis
1-22
Regular Paper
Dirk
Sudholt
Dirk Sudholt
Carsten
Witt
Carsten Witt
10.4230/DagSemProc.08051.6
Creative Commons Attribution 4.0 International license
https://creativecommons.org/licenses/by/4.0/legalcode