
EVOLUTIONARY PROBLEM SOLVING

Thomas Philip Runarsson

Faculty of Engineering Reykjavik 2000

Copyright 2000 Thomas Philip Runarsson. All Rights Reserved.
ISBN 9979-9494-1-4
Háskólaútgáfan, University of Iceland

This thesis has been accepted by the Faculty of Engineering of the University of Iceland for public defence in fulfilment of the requirements for the degree of doctor scientiarum ingeniarius.

Faculty of Engineering Reykjavik, 29th January 2001 Valdimar K. Jonsson, Dean

Thesis Committee:

Professor Magnus Thor Jonsson Professor Xin Yao Professor Hans-Paul Schwefel

Preface

This thesis on evolutionary problem solving is a summation of work accomplished over a period of four years. My interest in this field was aroused when unintentionally attending an IEEE workshop on genetic algorithms in 1994.

There are numerous people to thank for making this work possible: First of all Magnus Thor Jonsson for his encouragement, and for creating the liberty to pursue this area of research. Xin Yao for sharing his expertise and guidance. Einar Arnason for endless discussions on evolutionary processes and natural systems. John Stuart for his patience in discussing the nature of representations. Gudmundur R. Jonsson for helping with difficult statistics. Jörgen Pind for renewing my interest in connectionist models. Mikael Karlson for making me doubt everything. Ruhul Sarker for generating interest in constrained problems. Hans-Paul Schwefel and David B. Fogel for sharing their insight into the working principles of evolutionary computation. Hans-Georg Beyer for discussions on theories for evolution strategies. Richard Sutton for pointing out the problems with evolutionary computation. Finally, I thank the Department of Mechanical Engineering and my many friends there for creating an invigorating academic environment.

Financial support for this work was provided by the Research Council of Iceland and the Research Fund of the University of Iceland. Travel grants were awarded by the Engineering Labour Union and the University of Iceland. Many thanks to the Department of Computer Science at the Australian Defence Force Academy for hosting me in Canberra, in particular Bob McKay and Charles Newton.

Thomas Philip Runarsson
Faculty of Engineering, University of Iceland
Reykjavik, December 2000.

Abstract

The work presented is a study of a problem solving technique inspired by the principles of biological evolution. Solving a problem is the transformation of a given situation into a desired one. The work may be divided into three main interdependent areas of research: (1) describing a situation, (2) operator design for changing a situation, and (3) evaluation by selection. Evolutionary problem solvers are inspired by a model of natural evolutionary processes where organisms are ‘solutions’ to ‘problems’ set up by the environment. Therefore, how an organism and its environment are represented on a computational device is of central interest in the work. Experimental studies and examples are given illustrating the use of symbolic and connectionist representations in evolutionary problem solving. The plausibility that the syntactical manipulation of symbols is an effective and efficient approach to evolutionary search is questioned. For search operators to be ‘intelligent’ they must essentially infer the location of better solutions, and for this the symbols must be transformed based on their meaning. It is often difficult to interpret how evolutionary search methods locate good solutions. If a landscape is described by the parent position and the expected fitness of the best offspring, then its topology will determine when evolutionary search will, on average, locate the desired goal. An interpretation of this landscape assists in the design of variation operators and in determining the number of offspring required. The analysis is also useful for the understanding of a new rank-based selection method presented for the treatment of infeasible solutions.
Contents

Preface  iv
Abstract  v
1 Introduction  1
1.1 Motivation  2
1.2 Research Objectives and Contributions  3
1.3 Overview  6
2 Background and Literature  8
2.1 Problem Solving  8
2.2 Nonlinear Programming  10
2.2.1 Evolutionary Algorithm  11
2.2.2 Constraint Handling  17
2.2.3 Rate of Convergence  19
2.2.4 Global Convergence  22
2.3 Connectionist Models  22
2.4 Summary  24
3 Problem-Solution Abstraction  26
3.1 Environment and Organism  27
3.2 Representing the World  32
3.2.1 Symbolic  32
3.2.2 Connectionist  36
3.3 Summary and Conclusion  39
4 Connectionist Models  41
4.1 Deliberative  42
4.2 Reactive  58
4.3 Summary  68
5 Evolutionary Search  69
5.1 Expected Fitness Landscape  71
5.2 Step Size Control  77
5.3 Directed Search  80
5.4 Hopeful Monsters  81
5.5 Summary  85
6 Constraint Handling  86
6.1 Penalty Method  87
6.2 Stochastic Ranking  89
6.3 Experimental Study  92
6.4 Summary  98
7 Conclusion  100
7.1 Discussion of Contributions  101
7.2 Directions for Future Work  103
A Test Function Suite  104
A.1 Constrained  104
A.2 Unconstrained  110
Bibliography  117
List of Tables  128
List of Figures  130
Mathematical Symbols  133
Glossary  134

Chapter 1

Introduction

Solving a problem is the transformation of a given situation into a desired situation or goal [Hay81]. A solution exists in the form of a sequence of actions necessary to achieve this goal, and the term ‘problem’ refers to a task specified by a set of actions, a goal, an initial state, and a set of reachable states. The work presented is committed to the design of efficient probabilistic problem solving strategies. The technique is referred to as ‘evolutionary problem solving’ because its strategy is inspired by principles of biological evolution. Nature has solved many technical problems and mankind has imitated some of its solutions. By harnessing these principles, additional technical problems may potentially be solved. If a problem domain has strong mathematical structure, strategies may exist that consistently locate an optimal solution in acceptable computation time. The development of novel strategies is challenging but often time consuming. When traditional techniques are applied, the conditions are simplified and may be far removed from the real problem. In real world problems, objectives and constraints may not be analytically tractable, may be noisy, non-stationary, and so on. For these reasons, evolutionary problem solving may be an attractive alternative. All that is needed is a data structure to represent a solution, a means of generating a variation of that structure, and the ability to determine whether one data structure is better than another. This minimalist approach, although inefficient at times, may just be the sensible all round method. Nature has solved numerous ‘problems’ and one may be inclined to infer that its problem solving strategy is in some sense optimal. However, these hopes are dashed by the so called no free lunch theorems [WM97]. According to the

theorems, the average time taken to seek a goal for any pair of search algorithms, which do not resample points, across all possible problems, is identical. Essentially, if nothing is known about the problem, there is no justification for selecting any particular search algorithm [MF00, p. 161].

1.1 Motivation

If the principles of biological evolution are to be used as a problem solving paradigm, their relation to human problem solving must be understood. It is in the end the human designer who sets up the description, operators, and evaluation criteria. Furthermore, the implementation of evolutionary problem solving on a computational device may also introduce limitations. Evolutionary algorithms are inspired by a model of natural evolutionary processes where organisms are ‘solutions’ to ‘problems’ set up by the environment. It is fair to say that most evolutionary algorithms assume that the environment, the problem, is the criterion for survival, which is constant over time [Sch95, p. 106]. For living systems, however, there is a coevolution of organism and environment [Lew00, p. 101] in which each acts as both problem and solution. By isolating the problem from the solution, useful structure is lost that may have made problem solving more efficient and effective. The situation description common to dynamic programming [Ber95] and reinforcement learning [SB98, BT96] is perhaps in this case a better model of natural processes. These are discrete-time dynamic systems whose state transition (environment) depends on a control (organism), which again depends on the current state. The fitness is then an accumulated cost that depends on the states visited.

Solving problems is not about random search in the problem-solution space. The space is a frame into which the designer brings knowledge, and when that knowledge is sufficient the search is unnecessary. An ‘intelligent’ problem solver will not limit itself to direct search but will also perform knowledge search [New90, p. 98], the search for knowledge that guides the search. A living system must also necessarily know something in so far as it can act in the world. In other words, organisms are in some sense cognitive. Similarly, problem solvers may be made ‘cognitive’ by the insertion of specific knowledge, either by hand or as a result of automatic learning. In evolutionary problem solving, the knowledge of how to solve a problem is optimized. The evolutionary algorithm that allows for the greatest amount of automatic learning will be the more ‘intelligent’ problem solver. Domain knowledge may be optimized not only for a single problem instance, but also for the problem class. The search for domain knowledge is also subject to the no free lunch theorems and, therefore, it becomes necessary to know something about the cognitive architecture chosen. What is ultimately of interest is efficiently finding effective solutions to specific problem classes. An evolutionary problem solver may require a long time to optimize the knowledge needed for a given problem class, but once this knowledge is found any instance may be solved instantaneously. An example of this is where learning rules are evolved for the training of neural networks [RJ00]. For natural systems, however, the search for knowledge is never ending because ‘the problem’ is forever changing. Understanding the nature of cognition, and hence how to search a knowledge space, is the key to designing universal evolutionary problem solvers.

1.2 Research Objectives and Contributions

The overall objective of the work is to gain a greater understanding of what makes an evolutionary problem solver effective and efficient. The problem solver is effective in the sense that it will on average locate near optimal solutions, and it is efficient if it does so with little computational effort. The investigation may be broken down into three main interdependent areas of research: describing a situation, operator design for changing a situation, and evaluation by selection. An attempt will be made to contribute an increased understanding in each of the respective areas. The following list describes the primary objectives of the research work presented and the corresponding contributions.

1. ‘Intelligent’ problem-solution description

The objective is to develop a more rigorous methodology for setting up problem-solution architectures for evolutionary algorithms. Specific knowledge must be utilized for effective and efficient search. How this knowledge is embodied will depend on the architecture chosen. The main contribution here lies in the recognition that a problem-solution description is equivalent to knowledge representation. The most commonly cited architectures for knowledge representation are symbolic and connectionist. The most suitable architecture is, however, dependent on the task at hand. It must be easy and transparent for the designer to specify the domain knowledge. The representation must be adequate, e.g. span the entire portion of the world that is of interest. The search for alternative solutions depends on what is represented. The efficiency of the search for alternatives relies on how inferential the representation is. The most desirable search operator will have the ability to infer the most probable location of better solutions. The domain knowledge should be simple to modify to allow for automatic learning. A problem-solution description which permits learning will generate the more ‘intelligent’ problem solver. There will be a price paid in computation time to acquire domain knowledge, but as long as this knowledge is reused, the problem and further instances thereof may be solved in a shorter time. It will be argued that neuro-evolutionary methods are well suited for this task. The benefit of using connectionist models for knowledge representation is that there exists some idea of how to search the knowledge space. This is not the case for a symbolic representation, unless the designer supplies this knowledge. Two new neuro-evolutionary models are presented to illustrate how evolutionary problem solving can be made more ‘intelligent’. This is an attempt to break free from the conventional approaches to evolutionary computation, where all knowledge must be pre-specified by the designer, as is the case for any symbolic representation. The first architecture presented is a choice network that deliberates over a number of plausible actions to be taken for a given environmental state. An application of this model is presented where dispatch scheduling rules are evolved. The second architecture is an example of a reactive connectionist model where the model’s output is its action. This model is used to evolve training rules for neural networks. In this case, the learning rule becomes part of the neural network architecture.

2. Efficient and effective evolutionary search

It is important to understand when evolutionary problem solving may locate an optimum in reasonable time. Every data structure can be assigned a ‘fitness’. This is analogous to the idea in biology that every genotype has a fitness. In evolutionary biology, fitness is often thought of as a function of the genotype frequency in the population [Wri31]. The distribution of fitness over the space of genotypes constitutes a fitness landscape. In natural systems, this landscape deforms in response to changes in the environment and in response to coevolution [Kau93]. In evolutionary problem solving, the problem-solution space is predefined and, therefore, the fitness landscape is fixed in structure. It is intuitive to assume that the topology of the fitness landscape will determine problem difficulty. This means that landscapes that are multipeaked and rugged are difficult to search and those with few smooth peaks are easy. Here it is implied that evolutionary search is a hill climbing strategy. The climbing is performed by the search operators. This leads to the notion of operator landscapes [JF95], where neighbours are considered in operator space. Any fitness landscape topology depends ultimately on how the distances between genotypes are defined, that is, on the distance metric of the space. As long as the search operators work in this metric space, the evolutionary search may perhaps be visualized as a hill climber. However, to complete the landscape description, the influence of selection must be considered. The result is yet another landscape concept, which will be called the expected fitness landscape. In this case, neighbourhoods are determined by the parent individuals, and the fitness by the expected fitness of their best offspring. It will be argued that the efficiency and effectiveness of the evolutionary search is dependent on the topology of the expected fitness landscape. This is the case where the fitness landscape, which may be rugged and multipeaked, is transformed to a smooth single-peaked expected fitness landscape. Conditions are established for when at least linear convergence rates to the global optimum may be expected in general. The approach taken gives a greater insight into the general working mechanisms of evolutionary search, which is otherwise a difficult theoretical problem.

3. Constraint handling technique

All real world problems have constraints. If solutions are set up so that constraints are not ‘naturally’ satisfied by the description and operators, they must be treated by selection. This introduces the dilemma of finding a proper fitness measure for feasible and infeasible individuals. Currently this is an open question. Due to the frequent appearance of constraints in real world problems, it is vital to investigate their handling further. Constraints will often transform a relatively well behaved fitness landscape into a rugged and even disjointed landscape. Functions which may have been easy to optimize become difficult. By establishing a proper fitness measure for feasible and infeasible individuals it may be possible to maintain a smoother landscape and make the traversing of disjointed regions easier. A new approach, called stochastic ranking, is presented, which attempts to accomplish this goal. The method is the result of a new way of interpreting penalty function methods.

1.3 Overview

Chapter 2 presents a literature review and necessary background in the area of problem solving and evolutionary computation. Stepwise methodologies to human problem solving are related to the stepwise approach to designing evolutionary algorithms. Common evolutionary algorithms for nonlinear programming are described and in this context, constraint handling, local convergence rates, and global convergence are discussed. Furthermore, a brief introduction to connectionist models is given as they will become the prime means of problem-solution representation. In chapter 3 procedures for abstracting problems and their solutions for evolutionary search are established. This is discussed in terms of an organism and its environment. Two world models are presented, in one the organism is isolated from its environment, and in the other they are treated as a whole. These worlds may be represented using symbolic or connectionist models. A detailed example is given to emphasize the difference between symbolic and connectionist representations. Further developments of connectionist models for problem and solution representation are discussed in chapter 4. These models are separated into two distinct classes: deliberative and reactive connectionist architectures. The design of a choice network illustrates the application of a deliberative connectionist problem solving architecture. The model is used to evolve adaptive solutions to scheduling problems. The technique is compared with an evolutionary approach using a symbolic representation. An attempt is made to make the symbolic approach adaptive by introducing learnable alleles. For their implementation two learning techniques are examined: Lamarckian learning and the Baldwin effect. The reactive connectionist problem solving architecture is illustrated by evolving training rules for the optimization of linear neural

networks. In chapter 5 an effort is made to gain a greater understanding of how evolutionary search works. This is aided by the examination of expected fitness landscapes. It will be argued that this new landscape concept is more suitable for the determination of problem difficulty than conventional fitness landscapes. The role of recombination in mutative step size control is examined empirically. Directed and hopeful monster mutations are also discussed and investigated. Twenty-four commonly used benchmark functions for unconstrained nonlinear programming are studied. The problem-solution abstracted may sometimes be unable to ‘naturally’ satisfy all of the constraints set, and illegal solutions can then only be weeded out by selection. This introduces a dilemma of how to assign fitness to illegal individuals. The ranking procedure developed in chapter 6 illustrates how this may be accomplished. In order to understand how constraint handling techniques function, it is important to know how evolutionary search works. The method presented may be understood in terms of the concepts developed in chapter 5. Finally, conclusions are presented in chapter 7. This includes a discussion of contributions and directions for future research.


Chapter 2

Background and Literature

In this chapter, theoretical and practical background literature on evolutionary algorithms and their relation to problem solving is described. A detailed account of the evolutionary algorithm applied in the work for numerical optimization problems is given. Local convergence rate theory for these problems is presented, but may be generalized to other problem classes provided a distance metric is defined. Global convergence theorems for evolutionary algorithms are also presented. Finally, a brief overview of connectionist models is provided, as they are frequently used for knowledge representation in this work.

2.1 Problem Solving

To solve a problem there must preexist a description of the situation, operators for changing the situation, and an evaluation to determine whether the goal has been achieved. The prescription for how a problem solver should proceed, as described by Polya [Pol57], is given by four phases: (1) understanding the problem; to see clearly what is required, (2) devising a plan; to see how the various items are connected and how the unknowns are linked to the data in order to obtain an idea of the solution, (3) carrying out the plan, and (4) looking back; review the completed solution. Similar steps have been proposed by other authors [Hay81, BS84] but they usually include an explicit step for finding a representation of the problem. The characteristic sequence of problem solving noticed by Hayes [Hay81] is: (1) finding the problem; recognizing that there is a problem to be solved, (2) representing the problem; understanding the nature of the gap to be crossed, (3) planning the solution; choosing a method for crossing the gap, (4) carrying out the plan, (5) evaluating the solution; asking “how good is the result?” once the plan is carried out, and (6) consolidating gains; learning from the experience of solving. Here Hayes breaks Polya’s looking back down into two components, one focused on evaluating the immediate problem-solving effort and the other on learning something that may be useful in the future. Similar problem solving strategies may also be identified in engineering design [Hub87]. Furthermore, one may recognize the typical steps of an evolutionary algorithm [Hol75, Sch95, Fog95]. In particular, Hayes’ steps may be reinterpreted as shown below in figure 2.1.

problem-solution abstraction
1. identifying the problem; understanding and finding the problem
2. representation; design a data structure for the problem-solution

evolutionary search
3. variation (initialization); devise a plan or modify an existing one
4. genotype to phenotype mapping; carry out the plan
5. selection (fitness evaluation); evaluating its utility
6. self-adaptation; learning from the experience of solving

Figure 2.1: A stepwise approach to evolutionary problem solving

If the desired solution is not found, the process is repeated from step 3. In some cases it may be necessary to review steps 1 and 2. Perhaps the difference between human problem solving and evolutionary problem solving is essentially that a probabilistic operator is used to modify a given situation. Being a probabilistic process, it must necessarily be repeated a number of times for success. The best modified situation is kept and the whole process repeated. Evolutionary search, therefore, relies on multiple individuals engaged in processing the steps presented where the worst situations, relative to


the ones currently available, are abandoned and the better ones are replicated more than once. The probabilistic operator for changing the situation is predefined and will reflect the designer’s understanding of where better solutions are most likely to be found for any given situation. In the standard texts on the application of evolutionary algorithms, see for example [GC97, MF00], it is assumed that the problem-solution has been abstracted, and its representation is left to the designer’s ‘intuition’. This exercise is probably the most challenging and creative aspect of problem solving. The implementation and theory of an evolutionary algorithm is therefore only concerned with the last four steps of problem solving illustrated in figure 2.1, i.e. that of search. There exist, however, numerous approaches to describing a problem-solution. The description chosen will clearly influence the search for alternatives. It is reasonable to inquire how the search for alternatives relates to the problem-solution description. Indeed the situation description and the operator for changing the situation should be co-determined [MF00]. In chapter 3 an attempt will be made to generate a better understanding of the properties of problem-solution descriptions and the probabilistic operators that variate them. The following section describes some theory and application of evolutionary algorithms for nonlinear programming.
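The stepwise loop of figure 2.1 can be sketched in code. The following is only a schematic illustration, not part of the thesis: `init`, `vary`, `decode`, `evaluate` and `adapt` are placeholders for the designer-supplied description, operators and evaluation; a concrete evolution strategy is given in section 2.2.1.

```python
import random

def evolve(init, vary, decode, evaluate, adapt, mu=15, lam=105, generations=100):
    """Schematic of the loop in figure 2.1 (steps 3-6 repeated)."""
    parents = [init() for _ in range(mu)]      # steps 1-2: the designer's abstraction
    best = None
    for g in range(generations):
        offspring = [vary(random.choice(parents)) for _ in range(lam)]  # step 3: variation
        scored = [(evaluate(decode(o)), o) for o in offspring]          # steps 4-5: mapping, evaluation
        scored.sort(key=lambda pair: pair[0])                           # rank by fitness (minimisation)
        if best is None or scored[0][0] < best[0]:
            best = scored[0]
        parents = [o for _, o in scored[:mu]]   # abandon the worst, replicate the better
        adapt(parents)                          # step 6: self-adaptation / learning
    return best
```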

2.2 Nonlinear Programming

The general nonlinear programming problem can be formulated as solving the objective function

minimize f(x),   x = (x_1, …, x_n) ∈ R^n     (2.1)

where x ∈ S ∩ F. S ⊆ R^n defines the search space, an n-dimensional space bounded by the parametric constraints

x̲_j ≤ x_j ≤ x̄_j,   j ∈ {1, …, n}     (2.2)

and the feasible region F is defined by

F = { x ∈ R^n | g_k(x) ≤ 0  ∀ k ∈ {1, …, m} },     (2.3)

where g_k(x), k ∈ {1, …, m}, are the inequality constraints. Only minimization problems need be considered without loss of generality, since max{f(x)} = −min{−f(x)}.
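In the form (2.1)-(2.3), a problem is specified by an objective, a set of inequality constraints and box bounds. The following minimal Python sketch shows one way such a specification might be held on a computer; the particular functions f and g are made up for illustration and are not one of the test problems of appendix A.

```python
import numpy as np

def f(x):
    return float(np.sum(x ** 2))               # objective to be minimised, eq. (2.1)

def g(x):
    return np.array([1.0 - np.sum(x),          # g_1(x) <= 0
                     x[0] - 2.0 * x[1]])       # g_2(x) <= 0, eq. (2.3)

lower = np.array([-5.0, -5.0])                 # parametric constraints, eq. (2.2)
upper = np.array([ 5.0,  5.0])

def is_feasible(x):
    """x is feasible if it lies in the box S and satisfies all g_k(x) <= 0."""
    return bool(np.all(x >= lower) and np.all(x <= upper) and np.all(g(x) <= 0.0))
```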

2.2.1 Evolutionary Algorithm

The most commonly known evolutionary algorithms for optimization are genetic algorithms (GA) [Hol75, Gol89], evolutionary programming (EP) [Fog95], and evolution strategies (ES) [Sch95, Rec94]. The typical evolutionary algorithm may be outlined as follows:

0. Initialization of a population of individuals composed of n objective variables and a number of algorithm control parameters known as strategy parameters. Usually this is done at random, but the population may also be generated from known points. The population size is λ.

1. Recombination of individuals in the population: the averaging or combining of traits among individuals.

2. Mutation of individuals by random variations in the representation.

3. Selection of µ < λ individuals based on their fitness, each replicated at least once such that a constant population size of λ is maintained in the next generation pool.

4. Termination when some criteria have been met, typically when a maximum number of generations is reached. The best individual(s) found during the entire evolution is returned. Otherwise, go to step 1.

Each step of the evolutionary algorithm for numerical optimization will now be discussed in greater detail. Since many possible variations exist, emphasis will be placed on the version most frequently applied in this work. This algorithm is based on the evolution strategy [Sch95, Rec94]. References to modern evolutionary programming and genetic algorithms are mentioned when appropriate.

Initialization

The objective variables are typically initialized randomly as follows

x^{(0)}_{i,j} = x̲_j + (x̄_j − x̲_j) u_j,   i ∈ {1, …, λ}, j ∈ {1, …, n},     (2.4)

where u_j is a random number uniformly distributed over [0, 1]. Here x_{i,j} denotes the j-th component of a vector x_i. Unlike the evolution strategy and


evolutionary programming, the canonical genetic algorithm works on a binary string (bitstring) representation. This means that an individual is a binary vector b = (b_1, …, b_ℓ) ∈ {0, 1}^ℓ of a fixed length ℓ. For continuous parametric optimization problems a means of decoding the space {0, 1}^ℓ to R^n is needed. Each variable x_j is encoded by a bitstring of length ℓ_j and decoded by

x_j = x̲_j + Δx_j Σ_{k=0}^{ℓ_j − 1} b_{(ℓ_j − k)} 2^k     (2.5)

where Δx_j is the resolution of the now discretized interval [x̲_j, x̄_j],

Δx_j = (x̄_j − x̲_j) / (2^{ℓ_j} − 1).     (2.6)
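As an illustration of the decoding in (2.5)-(2.6), the sketch below converts a concatenated bitstring back into real variables. It only illustrates the mapping; the canonical genetic algorithm itself is not the algorithm used in this work.

```python
def decode_bitstring(bits, lower, upper):
    """Decode per-variable bit lists into real values, eqs. (2.5)-(2.6)."""
    x = []
    for b, lo, hi in zip(bits, lower, upper):
        l = len(b)
        delta = (hi - lo) / (2 ** l - 1)                        # resolution, eq. (2.6)
        value = sum(b[l - 1 - k] * 2 ** k for k in range(l))    # weighted sum, eq. (2.5)
        x.append(lo + delta * value)
    return x

# e.g. a two-variable individual with four bits per variable
print(decode_bitstring([[1, 0, 1, 1], [0, 1, 1, 0]], [-1.0, 0.0], [1.0, 10.0]))
```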

The greater the precision required, the larger the string length ℓ_j needed. These substrings are then concatenated to form a string of length ℓ = Σ_{j=1}^{n} ℓ_j.

Strategy parameters, using random mutative control, are typically implemented in evolution strategies and evolutionary programming. These parameters are usually predetermined in genetic algorithms, but this approach may be changing [EHM99]. The most common strategy parameter is the standard deviation of the mutation distribution, also known as the ‘mean step size’. Let Δx be a rough measure of the expected distance to the optimum; then the initial setting for the ‘mean step sizes’ is [Sch95, p. 117]:

σ^{(0)}_{i,j} = Δx_j / √n ∝ (x̄_j − x̲_j) / √n,   i ∈ {1, …, λ}, j ∈ {1, …, n}.     (2.7)

Here each objective variable in the population has a standard deviation associated with it. Sometimes a common strategy parameter is used for a set of objective variables. From practical experience, it has been suggested that this initial value should be smaller, otherwise there is a possibility that the algorithm may diverge [Bäc96, p. 80]. A larger initial step size is, however, more reliable as it guarantees initial coverage over the search space. By using the initial values in (2.7) as upper bounds for the step sizes, divergence was avoided in this work. When each dimension has its own mean step size associated with it, the search is aligned to the coordinate system. To overcome this, another strategy parameter is required. This is the rotation angle α ∈ [−π, π], which is used to correlate the mean step sizes. The coordinate transformation for the two dimensional case may be read from figure 2.2. The rotation angle is initialized uniformly at random over [−π, π]. The angles are related to the covariances and variances by [Bäc96, p. 70]

tan(2α_{ij}) = 2 c_{ij} / (σ_i² − σ_j²).     (2.8)

Figure 2.2: Correlated mutation generated using rotation angle α. The ellipsoid represents the place of equal probability density.
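For the two dimensional case of figure 2.2, a correlated mutation may be sketched as below: an axis-aligned normal step with standard deviations σ1 and σ2 is rotated by the angle α. The function is illustrative only; as noted later, rotation angles are not applied in the work presented.

```python
import numpy as np

rng = np.random.default_rng()

def correlated_mutation_2d(x, sigma1, sigma2, alpha):
    """One correlated mutation step in two dimensions (cf. figure 2.2)."""
    z = np.array([sigma1 * rng.standard_normal(),
                  sigma2 * rng.standard_normal()])     # uncorrelated step
    rot = np.array([[np.cos(alpha), -np.sin(alpha)],   # rotation by the angle alpha
                    [np.sin(alpha),  np.cos(alpha)]])
    return x + rot @ z
```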

As many as n(n − 1)/2 rotation angles are needed when n different standard deviations are used. When n_σ standard deviations are used, the number of angles is n_α = (2n − n_σ)(n_σ − 1)/2.

Recombination

Recombination is the exchange or averaging of traits between one or more individuals in a population. A recombination is called global when all parents are involved in building a single offspring. The traditional approaches are: discrete, the exchange of variables, and intermediary, the averaging of variables. For the genetic algorithm this operator, also known as crossover, combines low-order, highly fit structures to form even more fit higher-order schemata. This argument follows from the controversial building block hypothesis [Gol89]. The role of crossover is frequently debated. In evolution strategies, recombination acts as a statistical error correction which diminishes the harmful effects of mutation [Bey95a]. According to Beyer [Bey95a], the benefits of recombination are genetic variety and genetic repair. For the evolution strategy, a linear speed-up in the convergence rate may be shown when recombination is used on the hypersphere model [Bey95a]. This is useful for local optimization but may be less beneficial to global optimization. Evolutionary programming simply ignores recombination. The number of different recombination operators available grows with each new implementation of an evolutionary algorithm. The most commonly used operators in numerical optimization associated with evolution strategies are (∀ j ∈ {1, …, n}) [Bäc96]:

x̂^{(g)}_{h,j} =
  x^{(g)}_{i,j}                                          no recombination
  x^{(g)}_{i,j}  or  x^{(g)}_{k,j}                        discrete
  x^{(g)}_{i,j}  or  x^{(g)}_{k_j,j}                      global discrete
  (x^{(g)}_{i,j} + x^{(g)}_{k,j})/2                       intermediate
  (x^{(g)}_{i,j} + x^{(g)}_{k_j,j})/2                     global intermediate
  x^{(g)}_{i,j} + χ (x^{(g)}_{k,j} − x^{(g)}_{i,j})       generalized intermediate
  x^{(g)}_{i,j} + χ_j (x^{(g)}_{k_j,j} − x^{(g)}_{i,j})   global generalized intermediate
     (2.9)

where i, k ∈ {1, …, µ} are the two parents selected, g denotes the generation counter, and the index j in k_j indicates that k is sampled anew for each value of j. Genetic algorithms typically use a discrete recombination, where sections of the parent strings are concatenated to form offspring. The traditional operator is one point crossover [Hol75], where a random position ∈ {1, …, ℓ − 1} is chosen and all bits to the right of this point are exchanged between two strings. This operation may be generalized to an n_x-point crossover by repeating the one point crossover n_x times. In order to keep the algorithm as simple as possible, it was decided to ignore the recombination of objective variables and to consider only recombination of the mean step sizes. In particular, global intermediate recombination is recommended [Sch95, p. 148] and implemented as follows

σ̂^{(g)}_{h,j} = (σ^{(g)}_{i,j} + σ^{(g)}_{k_j,j})/2,   k_j ∈ {1, …, µ}.     (2.10)


An experimental investigation into the effect of recombination on the ‘mean step size’ is conducted in section 5.2. Following recombination, the strategy parameters and objective variables are mutated.

Mutation

Variation of strategy parameters is performed before the modification of objective variables. The new λ strategy parameters are generated from the µ parents of the previous generation. These strategy parameters are then used to variate the corresponding objective variables. The mutation of the ‘mean step size’ is a multiplicative variation. The variation should preserve positive values, the median of the modification should be one, and small changes should occur more frequently than large ones [Bäc96, p. 72]. A median of one guarantees an average neutrality of the process in the absence of selection. These conditions may be satisfied by a number of different rules. The ‘mean step sizes’ in both evolution strategies and modern evolutionary programming are updated according to the lognormal update rule [Sch95]: for i = 1, …, µ and h = 1, …, λ,

σ^{(g+1)}_{h,j} = σ̂^{(g)}_{h,j} exp(τ′ N(0, 1) + τ N_j(0, 1)),   j ∈ {1, …, n_σ},     (2.11)

where N(0, 1) is a normally distributed one-dimensional random variable with an expectation of zero and variance one. The subscript j in N_j(0, 1) indicates that the random number is generated anew for each value of j. The learning rates are set to τ = ϕ*/√(2√n) and τ′ = ϕ*/√(2n), where ϕ* is the expected rate of convergence [Sch95, p. 144] and is usually set to one [Bäc96, p. 72]. Most of the theoretical results in evolution strategies are derived using a single global step size. This guarantees spherical symmetry of the search distribution. For a single mean step size, n_σ = 1, the modification reduces to

σ^{(g+1)}_{h} = σ̂^{(g)}_{h} exp(τ_0 N(0, 1)),     (2.12)

where τ_0 = ϕ*/√n. Lower bounds on the mean step sizes are commonly implemented. This parameter sets the desired precision of the objective variable and may be quite useful when known. However, it has also been shown empirically that its choice may influence the performance of the algorithm [LYL+98].

The rotation angles are updated as follows,

α^{(g+1)}_{h,j} = α̂^{(g)}_{h,j} + β N_j(0, 1),   j ∈ {1, …, n_α}     (2.13)

where β ≈ 5π/180. So that the angles remain within the range [−π, π], they are mapped as follows

|α_j| > π ⇒ α_j = α_j − 2π sign(α_j).     (2.14)

Having varied the strategy parameters, each individual (x_i, σ_i), ∀ i ∈ {1, …, µ}, creates λ/µ offspring on average, so that a total of λ offspring are generated by

x^{(g+1)}_{h,j} = x^{(g)}_{i,j} + Δx^{(g)}_{i,j} = x^{(g)}_{i,j} + σ^{(g+1)}_{h,j} N_j(0, 1),     (2.15)

where the rotation angles are ignored. When rotations are used, the transformation of Δx_i shown in figure 2.2 is required. Rotations introduce up to n(n − 1)/2 additional parameters to the search. This increases the complexity of the search and the computational overhead. Rotation angles are not applied in the work presented and the reader is referred to [Bäc96, p. 71] for their implementation in an n dimensional search space. The adaptation is therefore not independent of the chosen coordinate system. The influence of the coordinate system is discussed further in section 5.4. When the parametric bounds are also the parametric constraints of the problem, the variation of an objective variable may be retried until the variable is within its bounds. In order to save computation time, the mutation is retried a number of times and then ignored, leaving the objective variable in its original state within the parameter bounds.

Selection

Selection works solely on the fitness values. In the canonical genetic algorithm, selection is probabilistic, using scaled absolute fitness values to determine the probability of mating. The more common approach to selection is based entirely on ranking a number of individuals λ′ according to some fitness function ψ,

ψ(x_{1:λ′}) ≤ ψ(x_{2:λ′}) ≤ … ≤ ψ(x_{µ:λ′}) ≤ … ≤ ψ(x_{λ′:λ′}),   µ < λ′     (2.16)

where x_{i:λ′} is the i-th order statistic (i = 1, …, λ′). The fitness of an individual may be due to the objective function, multiple objectives, constraints, or even a combination of these. The handling of constraints by selection is discussed in the following section. The best x_{1:λ′}, …, x_{µ:λ′} individuals are then replicated (and varied) at least once for the next generation pool. The evolution strategy commonly ranks the entire population, i.e. λ′ = λ, and replicates each of the best µ individuals an average of λ/µ times; this selection strategy is called (µ, λ). Another selection strategy is the (µ + λ). This is an elitist selection strategy where parents are ranked along with their offspring, i.e. λ′ = µ + λ. The optimal setting of µ/λ will be discussed in a later section. Evolutionary programming also uses an elitist selection strategy, but it is probabilistic. In one version the individuals are selected from a subpopulation of size λ′, sampled randomly with replacement from the pool of λ + λ parents and offspring. The number of parents is equal to the number of offspring. Typically λ′ = 10 and only the best of these is replicated. This tournament style selection is repeated until λ individuals have been generated.

Termination

An evolutionary algorithm is commonly terminated after a number of generations, G. Alternatives involve determining the difference in fitness values over a number of generations and whether it goes to zero or to a prescribed limit [Sch95, p. 114]. It is similarly possible to use the fitness difference between the best and worst individual in the current population.
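Putting the pieces of this subsection together, a bare-bones (µ, λ) evolution strategy might be sketched as follows. This is an illustrative reading of equations (2.4), (2.7), (2.10), (2.11), (2.15) and (2.16), not the exact implementation used in the thesis; for instance, out-of-bounds mutations are simply clipped here rather than retried, and rotation angles are omitted.

```python
import numpy as np

def es_mu_comma_lambda(f, lower, upper, mu=15, lam=105, generations=200, phi=1.0):
    """A bare-bones (mu, lambda) ES with lognormal step-size self-adaptation."""
    rng = np.random.default_rng()
    n = len(lower)
    tau, tau_p = phi / np.sqrt(2 * np.sqrt(n)), phi / np.sqrt(2 * n)   # learning rates
    x = lower + (upper - lower) * rng.random((mu, n))                  # eq. (2.4)
    sigma = np.tile((upper - lower) / np.sqrt(n), (mu, 1))             # eq. (2.7)
    best = None
    for g in range(generations):
        xo = np.empty((lam, n))
        so = np.empty((lam, n))
        for h in range(lam):
            i = rng.integers(mu)                         # parent for this offspring
            k = rng.integers(mu, size=n)                 # k sampled anew per component
            so[h] = 0.5 * (sigma[i] + sigma[k, np.arange(n)])          # eq. (2.10)
            so[h] *= np.exp(tau_p * rng.standard_normal()
                            + tau * rng.standard_normal(n))            # eq. (2.11)
            xo[h] = np.clip(x[i] + so[h] * rng.standard_normal(n),
                            lower, upper)                              # eq. (2.15)
        fit = np.array([f(xi) for xi in xo])
        order = np.argsort(fit)[:mu]                                   # rank, eq. (2.16)
        x, sigma = xo[order], so[order]                                # comma selection
        if best is None or fit[order[0]] < best[1]:
            best = (xo[order[0]].copy(), fit[order[0]])
    return best
```

As a usage example, `es_mu_comma_lambda(lambda x: float(np.sum(x**2)), np.full(10, -5.0), np.full(10, 5.0))` searches the ten-dimensional sphere model; the default µ/λ = 15/105 follows the 1/7 recommendation quoted in the next-but-one subsection.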

2.2.2 Constraint Handling

One common approach to dealing with constrained optimization problems is to introduce a penalty term into the objective function to penalize constraint violations [FM68]. The introduction of the penalty term enables the transformation of a constrained optimization problem into a series of unconstrained ones such as

ψ(x) = f(x) + r_g φ(g_j(x); j = 1, …, m)     (2.17)

where φ ≥ 0 is a real valued function which imposes a ‘penalty’ controlled by a sequence of penalty coefficients {r_g}_{g=0}^{G}. The general form of the function φ includes both the generation counter g (for dynamic penalty) and the population (for adaptive penalty). In the current notation, this is reflected in the penalty coefficient r_g. The function ψ is the transformed fitness function. This transformation, i.e. equation (2.17), has been used widely in evolutionary constrained optimization [KP98, SS89]. In particular, the following quadratic loss function [FM68], whose decrease in value represents an approach to the feasible region, has often been used as the penalty function [MA94, JH94]:

φ(g_j(x); j = 1, …, m) = Σ_{j=1}^{m} max{0, g_j(x)}².     (2.18)
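As a concrete reading of (2.17) with the quadratic loss (2.18), the transformed fitness may be computed as in the short sketch below; the function names are illustrative only.

```python
import numpy as np

def penalised_fitness(x, f, g, r):
    """psi(x) of eq. (2.17) with the quadratic loss of eq. (2.18).

    f is the objective, g returns the vector of inequality constraints g_k(x),
    and r is the penalty coefficient r_g for the current generation.
    """
    violation = np.maximum(0.0, g(x))        # only violated constraints contribute
    return f(x) + r * float(np.sum(violation ** 2))
```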

The penalty function method may work quite well for some problems; however, deciding an optimal (or near-optimal) value for r_g turns out to be a difficult optimization problem itself! If r_g is too small, an infeasible solution may not be penalized enough. Hence an infeasible solution may be evolved by an evolutionary algorithm. If r_g is too large then a feasible solution is very likely to be found, but it could be of very poor quality. A large r_g discourages the exploration of infeasible regions even in the early stages of evolution. This is particularly inefficient for problems where the feasible regions in the whole search space are disjoint. In this case, it may be difficult for an evolutionary algorithm to move from one feasible region to another unless they are very close to each other. Reasonable exploration of infeasible regions may act as bridges connecting two or more different feasible regions. The critical issue here is how much exploration of infeasible regions (i.e., how large r_g is) should be considered as reasonable. The answer to this question is problem dependent. Even for the same problem, different stages of evolutionary search may require different r_g values. There has been some work on the dynamic setting of r_g values in evolutionary constrained optimization [JH94, KP98, MA94]. Such work usually relies on a predefined monotonically nondecreasing sequence of r_g values. This approach worked well for some simple problems but failed for more difficult ones because the optimal setting of r_g values is problem dependent [Ree97]. A fixed and predefined sequence cannot treat a variety of different problems satisfactorily. A trial-and-error process has to be used in this situation in order to find a proper function for r_g, as is done in [JH94, KP98]. An adaptive approach, where r_g values are adjusted dynamically and automatically by the evolutionary algorithm itself, appears to be most promising in tackling different constrained optimization problems. For example, population information can be used to adjust r_g values adaptively [SC97]. Different problems lead to different populations in evolutionary search and thus lead to different r_g values. The advantage of such an adaptive approach is that it can be applied to problems where little prior knowledge is available, because there is no need to find a predefined r_g value, or a sequence of r_g values, which is ‘optimal’ for the problem. According to (2.17), different r_g values define different fitness functions. A fit individual under one fitness function may not be fit under a different fitness function. Finding a near-optimal r_g adaptively is equivalent to ranking individuals in a population adaptively. Hence, the issue becomes how to rank individuals according to their objective and penalty values. Rank-based selection is assumed and a novel method for ranking individuals without specifying an r_g value is proposed in chapter 6. Experimental studies test the effectiveness and efficiency of the method, which can be regarded as an exterior penalty approach.

2.2.3 Rate of Convergence

In optimization, the convergence rate to an optimum point is of major interest. This information can be used to estimate the time complexity of an algorithm. In this section, the formulation of convergence rates for the (1, λ) evolution strategy is presented. It is shown how the optimal number of offspring may be computed, along with the expected running time to reach the vicinity of the optimum. Let s′ be the useful distance covered in the sense of approaching the optimum point. The progress rate is then the expectation of s′:

ϕ = E(s′) = ∫_{s′=s_u}^{∞} s′ w(s′) ds′     (2.19)

where for a comma strategy s_u = −∞ and for the plus strategy s_u = 0. If the function w(s′) is symmetrical and unimodal, then the expected value may be found as the value of s′ at which the probability density w(s′) reaches its maximum. Let w(s = s′) denote the probability density that a particular descendant gets exactly s′ closer to the objective. The probability that a descendant advances less than s′ is p(s < s′). For a single parent (1 +, λ) strategy¹, the probability of the best descendant covering the distance s′ is [Sch95, p. 123]:

w(s′) = λ w(s = s′) [p(s < s′)]^{λ−1}     (2.20)

¹ The +, is shorthand for both “,” and “+” strategies.

where

p(s < s′) = ∫_{s=−∞}^{s′} w(s) ds.     (2.21)

The exact analytical solution of the progress rate has been found for a number of objective functions for the (1 + 1) strategy. Other strategies have been solved using approximate theory and simulation. Even for the inclined plane (a simple linear model), the analytical solution for (1 +, λ) is difficult to find for all λ. For illustrative purposes, a uniform distribution is used here, instead of the normal distribution, to obtain an analytical solution. This example illustrates how the progress rates and the optimal λ may be computed. Imagine the function to be a terrain in the (n + 1)-dimensional space; it appears as an inclined plane [Sch95, p. 124]

f(x) = x_1.     (2.22)

Orient the coordinate system so that the plane only slopes in the direction of one axis x_1 and the starting point under consideration lies at the origin. The useful distance s towards the objective is just the part of the random vector lying along the x_1 axis. For the uniform distribution in the hypercube

w(s = s′) = 1/(2η) for |s′| ≤ η, and 0 otherwise,     (2.23)

¡

¢ 1 λ 2η

Z

η

£ ¤λ−1 0 s0 s0 + η ds .

(2.25)

s0 =su

Then for su = 0, (1 + λ) strategy: ¸ · 1 λ−1 + ϕ=η λ + 1 2λ (λ + 1) and for su = −∞(≡ −η), (1, λ) strategy: · ¸ λ−1 ϕ=η λ+1

(2.26)

(2.27)

Nonlinear Programming

21

For the plus strategy a positive progress rate is maintained when λ = 1, but for the comma strategy the progress will in this case be zero. In order to minimize the number of function evaluations needed per generation on a serial computer and gain maximum progress over G generations, λ may be chosen such that G/λ X ϕ(g) g=1

is maximal or equivalently [Sch95, p. 127] ¯ ∂ ³ ϕ ´¯¯ ! =0 ∂λ λ ¯λ=λopt

(2.29)

For the two strategies then; λopt = 1 (1 + λ), and λopt = 2.4142 (2 or 3) (1, λ). Similarly, optimal values have been computed using the normal distribution for various function models. This analysis has shown that for λ ≥ 5 the maximal rate of progress is practically independent of whether or not the parent survives. It follows that for a (µ, λ) strategy the ratio λ/µ should not be less than 6 [Sch95, p. 145], and from practical experience, it is recommended that µ/λ be approximately 1/7 [Sch95, p. 148]. Finally, the expected running time is of central interest. For a continuous search space, it is possible to determine the number of generations for constant stationary normalized progress rate ϕ? = ϕ n/s and a relative distance change to the optimum so /s [Bey96], G=

n ln(so /s) ϕ?

(2.30)

where so is the initial distance to the local optimum and s is the desired distance. For the case when the progress rate is bounded by c from below, i.e. ϕ? ≥ c > 0, then one may bound the total number of generations needed by G<

n ln(so /s) c

(2.31)

Although these results are for the hypersphere model [Bey96], it is claimed that similar results might hold for the general case2 but proof is unavailable. Showing that c > 0 is a difficult theoretical problem, especially when one considers the full implementation of an evolutionary algorithm. 2

Communication with Hans-Georg Beyer.

22

Background and Literature

It is fair to say that progress rate theory has been limited to single peaked fitness landscapes, or local minima. This guarantees that individuals ranked by fitness values is equivalent to ranking them in accord with their distance to the optimum. This does, however, not hold for a multi-peaked fitness landscape. This case is of greater interest and will therefore be given special attention is chapter 5.

2.2.4

Global Convergence

Another important theoretical requirement of optimization is global convergence. This is just as important as fast convergence but always in direct opposition to approaching an optimum quickly and accurately. A globally optimal solution x∗ is one where f (x∗ ) ≤ f (x),

∀ x ∈ S ∩ F.

(2.32)

An evolutionary algorithm will locate the global optimum, with probability one, by means of the reachability property [B¨ac96, Rud97a], © ª P lim x(g) = x∗ = 1 g→∞

(2.33)

Reachability implies that any point in the search space can be generated from any other arbitrary point by either mutation or recombination. The global convergence property is valid for algorithms that maintain the best found individuals for the entire evolutionary run or use elitist selection. The theorems of global convergence follow from the realization that points generated for a sufficiently long time eventually generate the global optimum. Global convergence theorems of this type don’t help too much for real world applications where good solutions are desired in reasonable times. This is discussed further and illustrated empirically in section 5.4.

2.3

Connectionist Models

Connectionist models, often called parallel distributed models or neural networks, consist of nothing more than nodes and connections. Nodes are abstractions of biological neurons. It is generally understood that biological neural functions are stored in the neurons and the connections between them. The

Connectionist Models

23

nodes are simple processing units. Processing takes place via the propagation of activation among the units; via real-valued weighted connections. The ‘knowledge’ that governs processing consists of the values of the connection weights and learning occurs through the changes of the connection weights. The evolutionary algorithm described in the previous section may be used to optimize these weights and, therefore, ‘evolve knowledge’. The details presented in this section are sufficient for the implementation of connectionist models for knowledge representation in problem solving. The origin of the modern view of a network of artificial neurons may be traced back to the work of McCulloch and Pitts [MP43]. They demonstrated & '()+*-,/.1032$4

5&/66+2$'7,/.1082$4

93)+*()+*:, .1032$4  

   

 

 

 





; 

  

 



  "!   $#

 

  %  "!    #

  

 





 

 









 

Figure 2.3: A multilayered feed-forward neural network. The node activations are labelled by the variable a and the connection weights by w. An input activation layer is fed through the network to the output layer as shown. The number of hidden layers may be any number.

24

Background and Literature

that a neural networks could, in principle, compute any arithmetic or logical function. Later it was shown that a single hidden layer of neurons is sufficient for a network to be a universal function approximator [Fun89, HSW89, Cyb89]. A number of neurons, acting in parallel, form what is known as a layer. The network inputs then form the input layer and the layer whose output is the network’s output is called an output layer. All other layers are hidden layers. Figure 2.3 shows a sample network, a three layered feed-forward neural network where the nodes are the circles and the connections are the vector lines drawn between them. The architecture of network refers to the particular way the network is assembled. The inputs to a node i originate from other nodes or external sources that have a numerical activation value, denoted here by aj . The sending nodes activity travels along a connection line to the receiving node. The connection lines have varying connection strengths that are represented by the numerical value wij . A connection weight multiplies the signal travelling through it. The receiving node will then process the signal P by first summing up the net inputs, that is j wij aj . This net activation may be positive (excitatory) or negative (inhibitory) and is decided by a threshold that is determined by the bias node. The bias nodes are those units in figure 2.3 whose value is one. The activations at any given unit are then usually ‘squashed’ by a nonlinear activation function resembling a sigmoid. The most common functions are the log sigmoid transfer function a0i =

1 P 1 + exp(− j wij aj )

(2.34)

and the hyperbolic tangent sigmoid transfer function a0i =

2.4

2 P − 1. 1 + exp(−2 j wij aj )

(2.35)

Summary

In this chapter, background and literature for problem solving and numerical optimization by evolutionary algorithms was presented. Details of the implementation of a particular version of the evolution strategy, applied in this work, was described in detail. It was shown how the convergence rate for a (1 +, λ) evolution strategy may be computed in terms of the expected useful distance covered towards a known optimum. Optimal offspring numbers for serial computers were discussed and a lower bound on the number

Summary

25

of generations needed to reach the vicinity of the optimum. Conditions for convergence to a global optimum for an arbitrary problem was given. Finally, a brief overview of connectionist models was presented. The following chapter is devoted to setting up problems and their candidate solutions for effective and efficient evolutionary search.

Chapter 3

Problem-Solution Abstraction

The aim of this chapter is to increase the understanding of how a situation should be described on a computational device for evolutionary problem solving. The description must embody the principles of evolution by natural selection, which are [Lew70]:

1. Phenotypic variation – there are processes that produce variations among individuals.

2. Differential fitness – different phenotypes have different rates of survival and replication in different environments.

3. Fitness is heritable – there is a correlation between parents and offspring in the contribution of each to future generations.

In order to apply these principles, the stage must be set. What is needed is a description of the organism and its environment, which, in turn, represents in some way the situation to be optimized. It must then be determined whether the organism and the environment are to be treated in isolation or as a whole. This is the topic of discussion in the following section. Only the fitness need be inherited. Differential fitness is due to different phenotypes. Different phenotypes are in turn due to the processes, the variation operators, which produce them. Understanding this relationship is the key to designing efficient and effective evolutionary problem solvers. With the aid of an example, these issues are addressed in the second section. A summary with the main conclusions is given in the final section.

3.1 Environment and Organism

The notion of a problem–solution model for natural systems is awkward. In problem solving, there is the notion that a problem preexists its solution. This attitude is reflected in a world model where an outside force, the preexisting environment, sets up the ‘problem’ an organism must solve, and inside forces of variation generate the organism’s ‘solution’ to the ‘problem’ [Lew00]. For example, an aquatic environment poses the problem of swimming and the fish is a solution. On a digital computer the problem and its solution must be described by data structures or symbols which the machine is capable of interpreting. The interpreter is a program that is determined by a system of rules governing transitions from one state to the next. An analogy between symbols and genes is commonly drawn. The interpretation of the genome is presupposed and invariant. This is what is required in setting up the situation. In this model, only the genome is randomly varied; hence, the gene is often referred to as the unit of heredity. Differences in the genome encode for differences in the phenotype. In general, there exists a function

f_o : C → S        (3.1)

mapping points in the genotype or representation space C to the phenotype space S. Together f_o and C form a representation of the attributes of the organism. In short, representations are caricatures of selected attributes of the problem–solution. Essentially, it is the designer of the algorithm who sets up the problem and the plausible solutions to it. The contents of the symbolic representation are the contents of the designer’s thought. The rules manipulating and transforming the symbols imitate the designer’s cognitive process, the process by which knowledge is acquired. The designer is, however, free of any commitment as to how this knowledge is represented. Let a function

f : S → F        (3.2)

map points in the phenotype space to a fitness space F. This is the fitness function. In natural systems it is expected that a small change in the phenotype will result in a small change in reproductive fitness. Neighbourhoods in phenotype space map into neighbourhoods in fitness space [Lew85, p. 79].
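To make the two mappings concrete, consider the following minimal sketch (purely illustrative; the bit-string decoding and the sphere-model fitness are assumptions, not the representation used later in this work):

```python
def f_o(genotype, bits_per_gene=8, lo=-5.0, hi=5.0):
    """Genotype-to-phenotype mapping f_o : C -> S, decoding a bit string
    into a vector of real numbers (the phenotype)."""
    phenotype = []
    for k in range(0, len(genotype), bits_per_gene):
        gene = genotype[k:k + bits_per_gene]
        fraction = int(gene, 2) / (2 ** bits_per_gene - 1)   # in [0, 1]
        phenotype.append(lo + fraction * (hi - lo))
    return phenotype

def f(phenotype):
    """Fitness function f : S -> F (here, for illustration, the sphere model)."""
    return sum(x * x for x in phenotype)

genotype = "1100101001110001"      # two genes of eight bits each
print(f(f_o(genotype)))
```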


The phenotype and fitness value taken together form a continuous fitness landscape. As a consequence, correlation analysis of fitness landscapes has been proposed as a means of characterizing difficulty for evolutionary search methods [Man97, VFM00]. This involves estimating the landscape ‘ruggedness’. The efficiency of the evolutionary search is then reflected in the ability to anticipate how differences in the genotype are translated to differences in the phenotype. With this understanding, it is possible to design a variation operator that transforms a symbolic representation to a neighbourhood in the fitness space. However, this is in itself a meta-problem – the coding relationship problem. A different interpretation of the symbols is equivalent to solving another problem. A local optimum under one interpretation may not be one under another. Escaping a local optimum may therefore be possible under a changing representation. The fitness landscape changes, and there exists an interpretation for which the original problem becomes trivial [LV90]. Searching for an effective interpretation of the symbols implies learning something about the problem. It may then also be possible to use this domain knowledge to solve another problem instance of the same class more efficiently in the future. The space of all possible interpretations is, however, larger than the original search space. When the domain knowledge space is represented again with symbols, meta-rules will be needed to interpret them; otherwise the coding relationship problem will persist. Numerous attempts have been made to dynamically adjust symbolic representations in order to escape local optima [BWW00]. However, as long as these approaches consist of pure syntax operations carried out on symbols, no improvement can be expected in general. It is quite possible that the genetic code is some form of symbolic knowledge, but it can only be understood on the basis of pre-existing non-symbolic forms. Symbolic representations are theoretical primitives, and there are no conceptual categories available for specifying the situation before the symbols come into being [Ste95]. Unless the rules for transforming and manipulating the symbols, based on their meaning, are clearly specified by the designer, it is impossible to know how to search in a symbolic space. The frequently cited alternative to a symbolic representation is the connectionist representation. The connectionist representation is propositional because the environment proposes the appropriate action. The symbolic representation requires a symbol for each plausible action. That is, the symbol


currently in use in the symbolic structure dictates the action taken. This kind of representation is compositional, which is characteristic of symbolic approaches. A propositional representation is more suitable for an alternative world view of living systems. This is the view that there is no predetermined problem; rather, it is the activity of living systems that determines both the problems and the solutions simultaneously. There is no environment without an organism and there is no organism without an environment [Lew83]. The organism both constructs and selects its environment. The organism determines the problem by establishing what parts of the environment are relevant. This view is expressed in figure 3.1. There would be no link from environment to organism in the conventional world view; the information flow would only be one way, from genome to environment. The same figure shows how the genome is taken out of the organism-environment loop. The genome is that part of the problem solving architecture which is randomly varied, independently of the organism and its milieu. When identifying a problem, all information potentially relevant to the problem at hand must be abstracted. Information is data structures plus their meaning. Pieces of information that are ‘easy’ to define and compute are known as primitives. The environment primitives are the simple sensations perceived by the organism. Instances of primitives in problem solving may, for example, be the information regarding the contents of every square on a chess board, whose move it is, and so on. Activities of the organism are then reflected in legal chess moves. The unknown is the appropriate activity for a given set of primitives.


Figure 3.1: Interpretation of the environment-organism and genome system.


This interpretation is an ongoing process. There are two approaches to modelling the alternative world view. The first is a model which maps a sensory state directly to action. The second requires a scoring function or approximate cost-to-go function, which can determine the appropriate activity for a given sensory or problem solving state. The plausible activities (actions) and environment primitives must be pre-specified. The world model depicted in figure 3.1 is also found in dynamic programming [Ber95] and reinforcement learning [SB98]. These are so-called actor-critic models [BT96, p. 36]. Figure 3.1 should in this case be reinterpreted as follows: the genome determines the optimal cost-to-go function, the generation of new cost-to-go functions is the result of random variations of the genome, the actor (controller) is the organism, the system corresponds to the environment, and the critic is the result of selection. Scoring functions are based upon features that describe the problem solving states; an evaluation function may be very sensitive to the set of features chosen. In neuro-dynamic programming [BT96], the function is approximated by an artificial neural network. When lookahead is used, only a single output node is needed to estimate the worth of an action given the resulting state. The evaluations of states are used to determine whether one state is preferable to another. If no lookahead is used, then the neural network must have as many output nodes as there are plausible actions. In practice, this is not commonly done, since it requires a more complex scoring function. It may, however, be advantageous if the computational costs are high or lookahead is not possible. Note that the objective of the task, the end goal, is not explicitly programmed. From the organism-environment perspective, it is possible to reinterpret the evolution strategy [Sch95, p. 106]. Consider the sphere model

f(x) = Σ_{i=1}^{n} x_i^2

where x_i, i = 1, . . . , n are environment primitives. They may represent some nutrients absorbed by the organism, and their values are the amounts. The organism transforms these nutrients into products, some of which are harmful to it. The concentration of these products, now part of the environment, is given by the value f(x). In order for the organism to maximize its survival rate, it must possess behaviour which will minimize f(x). The object of evolution is then simply the organism’s environment.


However, it is the activity of the population that determines the environment. For the evolution strategy the ‘self-adaptive parameter’ σ, the mean step size, determines this activity. With the inclusion of random changes, its variation is given by

σ_i^(g+1) = σ_i^(g) Z^(g)        (3.4)

where Z is a random number. The organism is the object of these internal forces of variation, which operate independently of its functional needs or its relation to the outer world. That is what is meant by mutations being ‘random’. The main properties of the random change are that there is no deterministic drift in behaviour without selection and that small changes in behaviour occur more frequently than large ones [Sch95, page 143]. In this particular case, these requirements are satisfied when Z is log-normally distributed. If the mutation of σ is dependent on x, then the mutation is called a directed mutation. The replicated organism selects or constructs its environment according to the following behaviour,

x_i^(g+1) = x_i^(g) + σ_i^(g+1) N_i(0, 1),    i = 1, . . . , n.
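In code, one generation of this variation might be sketched as follows (a minimal illustration, not the implementation used in this work; the learning rate τ and the use of a separate lognormal factor per component are assumptions):

```python
import math
import random

def mutate(x, sigma, tau=None):
    """Vary the mean step sizes by a lognormal factor Z, as in (3.4),
    then vary the behaviour x by normally distributed steps of that size."""
    n = len(x)
    tau = tau if tau is not None else 1.0 / math.sqrt(n)   # assumed learning rate
    z = [math.exp(tau * random.gauss(0.0, 1.0)) for _ in range(n)]
    new_sigma = [s * zi for s, zi in zip(sigma, z)]
    new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma

x, sigma = [1.0, -2.0, 0.5], [0.1, 0.1, 0.1]
offspring_x, offspring_sigma = mutate(x, sigma)
```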

In this interpretation σ is the organism primitive and Z its internal variation. The environment primitives are x and f(x), and N(0, 1) is the source of developmental noise. Evolution is not a property of these primitives, but of the whole ensemble. The ability to select and construct one’s environment comes as a direct consequence of interacting with one’s world. The ability to have knowledge about one’s world necessitates an interaction with it. The organism is the medium for this interaction [Lew83], and the ‘knowledge’ is formed in the organization of the organism-environment system. The sharing of the environment with other organisms is essentially a medium for communication about the world. Problems, interpreted as such, may quite simply be solved by a team of organisms, each performing its behaviour independently. There is then no necessity for a central directing agent in problem solving [Bro91, Hut95]. This is yet another possible problem solving architecture, which is, however, not explored any further here. Both symbolic and connectionist descriptions presuppose the existence of some representation before they can discover other useful representations. This makes the initial architecture extremely important. If not chosen properly, it places additional burdens on problem solving.


The following section illustrates by example the use of symbolic and connectionist representations.

3.2 Representing the World

In this section, the nine-dot problem [NS72, Hay81] is considered to illustrate the concepts developed in the previous section. The problem is commonly found in the psychology literature (see for example [LD85]). The problem is such that it is difficult for the designer to formulate a problem–solution description and, consequently, the change needed to improve a solution. This is the main reason for choosing this particular example. The problem is that a subject is presented with a square array of nine dots,

◦ ◦ ◦
◦ ◦ ◦
◦ ◦ ◦

and is directed to draw four straight lines, without raising the pen from the paper, so that each of the dots is touched by at least one of the lines. If unfamiliar with the problem, the reader may want to try to solve it before proceeding. The appropriate questions for identifying the problem are [Pol57]: What is the unknown? Four sequentially drawn lines. What are the data? The nine dots given in position. And what is the condition? The four lines should pass through all nine dots at least once, drawn without raising the pencil. All possible lines drawn correspond to the problem space [New90, Hay81]. The sketching of four lines represents an idea of the solution. The knowledge required to draw these four lines is abstracted from this representation. The difficulty is to determine how this knowledge is to be represented on a computational device.

3.2.1 Symbolic

The classical view is to treat a representation as a syntactically manipulable symbol. This view is also imposed by the fact that a representation must be implemented in a computer by some data structure. Previously, the reader may have drawn four lines in an attempt to solve the problem. These lines must now be abstracted to symbols, and what will be modelled is the abstract computation achieved by the manipulation of the symbols.


This is the replacement for the reader’s cognitive processes and activity, which are now absent from the system [Hut95, p. 363]. Consider the general idea of a solution to the nine-dot problem as a drawing of four lines. Perhaps the most commonly used symbolic representation of a drawing in a computer is a plot-file. For example, the HP-GL plotting commands needed to draw four lines sequentially are

[ PU x_1,y_1; PD x_2,y_2;$ PD x_3,y_3;$ PD x_4,y_4;$ PD x_5,y_5;$ ].        (3.7)

The string is interpreted by reading it from left to right. The symbol PU represents placing the pen at the specified point; this is also known as the initial state. The symbol PD represents a line from the current point to the specified point; these instructions produce intermediate states. The actual solution is obtained when the entire string has been completely interpreted, and will be referred to as the final state. There are a number of special symbols, like ,;$, which assist the plotter in interpreting these instructions; for example, $ is a terminator character. Other possible symbols may represent a circle, ellipse, arc, pen number, pen speed, pen pressure, etc. The representation is transparent and easy to apply. The plot-file is a symbolic representation of potentially any drawing with plotter pens. However, the plotting commands used in (3.7) do not represent just any drawing, but a drawing which ‘naturally’ satisfies conditions of the problem. These are the conditions that only four lines may be drawn without lifting the pen. The symbols, which are unknown, are the five arbitrary coordinates:

[ (x_1, y_1) (x_2, y_2) (x_3, y_3) (x_4, y_4) (x_5, y_5) ]        (3.8)
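As a small illustration (a hypothetical helper, not part of the thesis), the five coordinate pairs of (3.8) may be rendered into the plot-file string of (3.7) as follows:

```python
def plot_file(points):
    """Render the five coordinate pairs of (3.8) as the HP-GL string of (3.7)."""
    (x1, y1), *rest = points
    commands = [f"PU {x1},{y1};"]                     # pen up: the initial state
    commands += [f"PD {x},{y};$" for x, y in rest]    # pen down: the four lines
    return " ".join(commands)

print(plot_file([(0, 0), (3, 0), (0, 3), (3, 3), (0, 0)]))
# -> PU 0,0; PD 3,0;$ PD 0,3;$ PD 3,3;$ PD 0,0;$
```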

The coordinates represent numbers. Usually the ten digit symbols “0”, “1”, . . . , “9” are used to represent the numbers 0 through 9. All other numbers may be represented by concatenating this small set of symbols. When choosing a representation it is common to hypothesize what the solution may look like. These speculations may be interpreted as hypothetical conditions which could be satisfied by the representation. How are the unknowns linked to the data [Pol57]? How are the lines linked to the nine dots? The common assumption is that the nine-dot problem can be solved as a dot-to-dot puzzle. The lines are simply drawn by connecting the dots. For representation (3.8), this would imply that the coordinates should only correspond to the coordinates of any of the nine dots.


The domain of solutions under this hypothesis does, however, not contain the correct solution. If the coordinates cover the real number space, then most of the search will be devoted to touching any dot at all. Another hypothesis is then that the solution lies on an extended grid rather than just on the nine dots. In this case, the nine dots may be placed on a grid as shown:

·a ·b ·c ·d ·e
·f ◦g ◦h ◦i ·j
·k ◦l ◦m ◦n ·o
·p ◦q ◦r ◦s ·t
·u ·v ·w ·x ·y

Lines may now be drawn simply by connecting dots. It is assumed that the correct solution lies on the grid and, luckily for this hypothesis, it does. This illustrates how important the expressive adequacy of the representation is. It must span that portion of the world which contains the correct solution, but at the same time it should be as small as possible. By labelling the grid with letters from the alphabet, four sequentially drawn lines may then be represented, for example, by the symbolic string

[ g i s q g ].        (3.10)

The string is interpreted in a similar fashion to the previous string representations. The representations presented ‘naturally’ satisfy most of the conditions of the problem and the hypothesis of what the solution may look like. If the representation is complete, then the remaining conditions will be satisfied by at least one of the alternatives specified by it. The search for alternatives will clearly depend on the representation chosen. Operations performed on symbols without regard to their meaning may be viewed as the great strength of the symbolic approach. If an organism is computational, its operations may apply to other computational systems as well. By imitating nature’s computational ways one may hope to discover some universal ‘tricks’ for general problem solving. One-point crossover and point mutations are two examples. The manipulation of symbols without regard to their meaning, together with selection, may then in principle solve any problem. In order to justify such an approach, the schema theorem was proposed [Hol75, Hol00].


Attempts to explain the behaviour of such syntactic systems by interpretations of the schema theorem have, however, been unsuccessful. The notion of implicit parallelism, a misinterpretation of the schema theorem, has been used to argue that symbols of low cardinality are ideal [Gol89], i.e., all binary. In this case, for the nine-dot problem, assuming a 4 × 4 grid, a solution may be represented as

[ 0000 1111 1010 1001 0000 ].        (3.11)

For the reasons given, binary representations are commonly found in the theoretical genetic algorithm literature. However, no efficient non-deterministic approach for arbitrary pseudo-boolean optimization problems exists [Bäc96, p. 56]. Gray codes have been shown to perform better empirically than natural binary encoding on discretized real-valued fitness landscapes [WMRD96]. This is because neighbourhoods in the real-valued space are preserved by Gray codes. In this case, the structure of the problem provides useful topological information for the algorithm.


Figure 3.2: Comparing integer, binary and Gray encoding for the nine dot problem.


To illustrate the effect the representation has on evolutionary problem solving, an empirical comparison of three different symbolic representations for the nine-dot problem is performed. GAs using natural binary and Gray coding of (3.8) are compared. These algorithms use one-point crossover and bit-wise mutations with probability 1/ℓ, where the string length is ℓ = 20. The population size is 50, using binary tournament selection and an elitist strategy (the best found individual is preserved in the population). The average fitness of the best individual from 100 independent runs is depicted in figure 3.2, where the fitness value is simply the number of dots which remain untouched. From these runs the Gray GA performs better than the binary GA on average. If we assume this is because neighbourhoods in the integer-valued space are preserved, then perhaps operating directly on the integer space is best. There exists an ES algorithm for integer programming which does precisely this [Rud94]. The algorithm uses a mutation distribution constructed from the principles of maximum entropy. For a further analysis of integer programming using ES see [RSJ00]. A (10, 50) selection and a lower bound on the global mean step size of one are used. The result is worse than both the GA runs on average. For all experiments, attempts were made to tune the algorithms’ parameters, but the overall results remained the same. Is the reason for the Gray GA’s success that the integer 1 is a neighbour of 4? This example illustrates the coding relationship problem discussed in the previous section. It is not known how to reason with the symbols, and therefore designing the appropriate variation operator is a hopeless task, which can only be solved by trial and error. It may be said that the representation is not inferentially adequate.
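For reference, the fitness evaluation just described – the number of dots left untouched by the four lines – together with a Gray decoding of the bit string, can be sketched as follows (an illustration only; the placement of the nine dots on the coordinates {0, 1, 2} × {0, 1, 2} of the 4 × 4 grid is an assumption, and this is not the code used to produce figure 3.2):

```python
def on_segment(d, p, q):
    # True if dot d lies on the straight line segment from p to q.
    (dx, dy), (px, py), (qx, qy) = d, p, q
    cross = (qx - px) * (dy - py) - (qy - py) * (dx - px)
    in_box = min(px, qx) <= dx <= max(px, qx) and min(py, qy) <= dy <= max(py, qy)
    return cross == 0 and in_box

def fitness(points):
    """Number of the nine dots left untouched by the four lines drawn
    through the five grid points (lower is better)."""
    dots = {(i, j) for i in range(3) for j in range(3)}   # assumed dot positions
    segments = list(zip(points, points[1:]))
    return sum(1 for d in dots
               if not any(on_segment(d, p, q) for p, q in segments))

def gray_to_int(bits):
    # Decode a Gray-coded bit string (most significant bit first).
    value = 0
    for b in bits:
        value = (value << 1) | ((value & 1) ^ int(b))
    return value

# A 20-bit string decodes to ten 2-bit numbers, i.e. five grid points.
bits = "11100100011011000110"
coords = [gray_to_int(bits[k:k + 2]) for k in range(0, 20, 2)]
points = list(zip(coords[0::2], coords[1::2]))
print(points, fitness(points))
```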

3.2.2 Connectionist

Evolutionary algorithms have been criticized for their lack of interaction with the environment. “Evolutionary methods ignore much of the useful structure of the reinforcement learning problem: they do not use the fact that the policy they are searching for is a function from states to actions” [SB98]. This is, however, not so much a criticism of the evolutionary approach but rather of the conventional world view chosen. Ultimately, an evolutionary algorithm need only satisfy the three principles which embody evolution by natural selection. These principles are independent of the particular world view chosen.


The connectionist models are propositional and hence a suitable representation when the environment states propose the appropriate action. The environment states, the connectionist network architecture, and all plausible activities must be pre-specified. For the nine-dot problem the activities are all the possible lines which may be drawn from the current point. If a 4 × 4 grid is assumed, as in the previous section, then the plausible activities are 16 at any given state. A scoring function, approximated by a numerical artificial neural network, will determine which activity is to be performed. The drawing of the four lines starts at a randomly chosen point on the grid. That is, the initial state is random. Now all 16 plausible lines drawn from this point must be examined by the scoring function. This is done by a one-step lookahead procedure. A line is drawn and the resulting state is used as input to a linear network. The input nodes (sensory) correspond to the nine dots and their state (environment). When a dot has been crossed its state is 1, and when not it is 0. If the pen is currently located on one of the nine dots, then its state is set to −1. The network has one output node and no bias node. The connection weights are denoted by

w = [ w_g, w_l, w_q, w_h, w_m, w_r, w_i, w_n, w_s ] ∈ R^9,        (3.12)

corresponding to the lettering of the grid given on page 34, with its first row and column removed. The scoring function is then defined as

f_go(x) = Σ_{i=1}^{9} w_i x_i        (3.13)

where x are the states described. The activity resulting in the state with the highest output node activity, f_go(x), is the chosen activity. The process is repeated until all four lines have been sequentially drawn. The data structures to be varied and selected are the nine connection weights in (3.12). Unlike the symbolic data structures from the previous section, these are numerical. More importantly, it is known that a small change in the weights will result in a small change in behaviour. The coding relationship problem, encountered by the symbolic representation in the previous section, is therefore nonexistent here. The neural network is evolved using the evolution strategy, where the variation of the connection strengths is

w_i^(g+1) = w_i^(g) + σ^(g+1) N(0, 1),    i = 1, . . . , 9        (3.14)


and the ‘mean step size’ is modified as described previously by (3.4). The selection strategy used is (10, 50) and the number of generations is 50. The result is compared with that of the symbolic representations from the previous section and shown in figure 3.3. The result is the average of 100 independent runs. The results illustrate how the connectionist representation has trivialized the nine-dot problem. The ability to gain knowledge about the world necessitates an interaction with it. The solutions become ‘intelligent’ as a consequence. The evolutionary search no longer results in a single best solution but in the best behaviour for a given state. For the nine-dot problem, a random initial state was chosen; as a consequence, the network needed to find the optimal policy for all initial states.


Figure 3.3: Linear neural network compared with symbolic representations for the nine dot problem.


Therefore, the solution found can solve the nine-dot problem starting from any of the dots g, l, h, s or y shown on page 34. An example solution is w = [ −.67, −.03, 1.28, .71, .54, .89, 1.91, .61, .81 ]. The connectionist representation used has eliminated the need for re-optimization, which the symbolic representation would require for any change in the initial conditions. The representation supports generalization and specialization. A small change in information does not require the modification of the representation. This type of representation is called acquisitionally adequate.
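A minimal sketch of the scoring and one-step lookahead procedure just described (illustrative only, reusing the on_segment helper from the earlier sketch; the dot layout on the 4 × 4 grid is again an assumption):

```python
import random

GRID = [(i, j) for i in range(4) for j in range(4)]    # 16 points of the 4 x 4 grid
DOTS = [(i, j) for i in range(3) for j in range(3)]    # assumed positions of the nine dots

def score(w, state):
    # Equation (3.13): weighted sum of the nine dot states.
    return sum(wi * xi for wi, xi in zip(w, state))

def dot_states(crossed, pen):
    # 1 for a crossed dot, 0 for an untouched dot, -1 for the dot under the pen.
    return [-1 if d == pen else (1 if d in crossed else 0) for d in DOTS]

def draw_four_lines(w, on_segment, rng=random):
    """Greedy one-step lookahead: draw four lines, each time choosing the
    target point whose resulting state scores highest; returns the number
    of dots left untouched (the fitness, lower is better)."""
    pen = rng.choice(GRID)                       # random initial state
    crossed = set()
    for _ in range(4):
        best_value, best_target, best_newly = None, None, None
        for target in GRID:
            if target == pen:
                continue
            newly = {d for d in DOTS if on_segment(d, pen, target)}
            value = score(w, dot_states(crossed | newly, target))
            if best_value is None or value > best_value:
                best_value, best_target, best_newly = value, target, newly
        pen, crossed = best_target, crossed | best_newly
    return len(DOTS) - len(crossed)
```

The nine weights w would then be varied according to (3.14) and selected on the fitness returned by this procedure.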

3.3 Summary and Conclusion

In this chapter it has been argued that the process of problem-solution abstraction is essentially that of knowledge abstraction. It follows that in choosing a suitable representation for evolutionary algorithms, similar issues to those of knowledge representation techniques in cognitive science must be addressed. They are [SWC+95, page 153]:

1. Ease of encoding – a concise and transparent encoding. Is simple knowledge encoded simply?

2. Expressive adequacy – the portion of the world spanned by the representational primitives contains the solution sought.

3. Acquisitional adequacy – the modification of information. Is the representation modular? How is old knowledge related to the new?

4. Inferential adequacy – the possibility to make logical deductions, or some sort of induction, when faced with uncertainty or missing knowledge.

5. The scheme and its maintenance – is everything encoded in advance or can the interpreter fill in the gaps? Can a new primitive be created and used?

Representations are chosen so that they can be reasoned with. The reasoning should reflect the designer’s anticipation of how variations in represented situations correspond to fitness change. The properties of the variation operator are as follows:


1. The variation must guarantee an average neutrality in the absence of selective pressure. The variation is random.

2. Small variations correspond to small changes in fitness. This implies that a distance metric space must be pre-specified, and that the fitness landscape is smooth. The variation operator works in the metric space, and should be capable of varying the distances travelled in this space. This introduces the notion of a ‘mean step size’.

3. At least one variation should on average be better than the best parent in the population. This is the attempt to maintain positive progress towards the goal state. The likelihood of this is increased when the landscape is smooth and smaller steps are taken more frequently than larger ones. The rate of progress may be maximized by adapting the mean step size.

An intelligent problem solver does not limit itself to direct search, but will attempt to gain domain knowledge during search. This is the knowledge that guides the problem search in an attempt to maximize progress. This knowledge also requires a representation. However, meta-knowledge is needed to search this space. When this meta-knowledge is not available, connectionist representations are the better alternative. They are numerical and it is possible to design variation operators where small steps correspond to small changes in ‘knowledge’. Therefore, a more detailed discussion on connectionist representation is presented in the following chapter.

Chapter 4

Connectionist Models

The state-of-the-art evolutionary problem solver will use learning to avoid the need for the designer to provide knowledge for search. In the previous chapter it was claimed that connectionist models are well suited for this task. This chapter expounds the idea of using connectionist models as a problem solving architecture. The architectures presented may be classified into two distinct models of cognition. In the first model, ‘intelligent’ search is partly the result of deliberation over possible courses of action based on some learned knowledge about the world. This is a deliberative model of cognition, for which the connectionist model presented in the previous chapter serves as a typical example. In the second model, there is no deliberation but rather a direct mapping from perception (sensory) to action. This is a reactive model of cognition, which avoids the intermediate steps of representation and reasoning. In both models domain knowledge is adapted during search as a result of the random variation of connection weights and selection. There is an additional price paid in computation time for acquiring domain knowledge. This loss in efficiency will hopefully be compensated by the efficiency gained from applying the learned knowledge for search. This is in principle equivalent to the self-adaptation of local search step sizes in evolution strategies in order to maximize progress. In some instances, the domain knowledge may be reused such that further problem instances of the same class may be solved in a shorter time. Furthermore, these architectures will (in some cases) eliminate the need for continued search if problem conditions are modified slightly. Adaptive solutions of this kind are desirable for critical applications where sudden changes in the environment must be met with a compromise solution immediately.


Examples of both a deliberative and a reactive connectionist model for problem solving are presented. These models serve to illustrate two plausible connectionist architectures for problem solving. It will not be claimed that the architectures designed are optimal in any sense. They serve only to illustrate the novelty of using connectionist models for evolutionary problem solving. The claims made will be supported by experimental studies presented previously in [RJ99a, RJ99b, RJ00].

4.1 Deliberative

Consider a production system, a program structure consisting of a simple set of production rules of the form: if condition – then action. The condition part of a production rule consists of a series of condition elements, which describe the conditions that must be true for the rule to be valid. The action part of the rule describes the actions that are taken should the rule fire. Production systems are convenient to program and have been shown to be computationally complete [Pos43]. When building a production system, it must be known what conditions are pertinent and what the appropriate actions are for those conditions. The deliberative connectionist model is equivalent to a production system where these conditions are evolved and, therefore, need not be supplied by the designer. Production rules may be described by feed-forward connectionist models where data attributes are assigned input nodes, target concepts are assigned output units, and intermediate concepts or hypotheses are hidden units. When using a symbolic production system all rules must be pre-specified. In problem solving there may be conflicting opinions on what rules should be used. Hence many different procedures may be suggested for the same premise. The production system then contains rules with equivalent conditions but different action parts. Purely legal actions are those that have no other condition than that of being valid. A genetic algorithm which resolves rule conflicts by evolving rule patterns was first implemented in Holland’s classifier systems [BGH89]. A symbolic string is formed by concatenating production rules. The rules are then fired consecutively by reading the string from one end to the other. In this model a gene represents a set of activities or the behaviour of an organism. Each locus is occupied by alternative forms of a gene, which are known as alleles of one another. Each allele corresponds to a unique production rule.


The allele at any given locus will determine the action to be taken. If the activity suggested is illegal, the interpretation must be relaxed and the rule conflict resolved. Generally, production systems resolve such conflicts by ranking the rules according to some priority. This type of predetermined priority may be a source of bias for the search. The bias could be avoided if a valid rule is chosen in a random manner. Figure 4.1 shows an example of a genetic string of preferred production rules used by a production system. The third gene from the left suggests a rule that cannot be fired because its actions are illegal. This is represented by a dashed box in the figure. The production system will select either the default rule “A” or some other random valid rule “?” instead of “c”. The consequences of selecting a different production rule may have varied effects on the validity of other suggested rules in the string, as shown by the dashed boxes in the figure. This is the brittle nature of a symbolic production system.
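The decipher step illustrated in figure 4.1 below can be sketched as follows (a minimal illustration, not the implementation used in this work; valid, fire, the rule set and the initial state are placeholders for the problem at hand):

```python
import random

def decipher(string, valid, fire, rules="ABC", default="A", random_decipher=False):
    """Interpret a genetic string of preferred production rules from left
    to right.  valid(rule, state) says whether a rule may fire in the
    current state and fire(rule, state) returns the resulting state.
    An illegal suggestion, or a learnable '?' allele, is replaced either
    by the default rule or by a randomly chosen valid rule."""
    state = None                                  # placeholder initial state
    for allele in string:
        rule = allele
        if rule == "?" or not valid(rule, state):
            if random_decipher or rule == "?":
                rule = random.choice([r for r in rules if valid(r, state)])
            else:
                rule = default
        state = fire(rule, state)
    return state
```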

Actual rule values in genetic string:   A  b  c  D  · · ·  Q  b
                                              ⇓
Default rule applied:                   A  b  A  D  · · ·  Q  b
                                              or
Random rule applied:                    A  b  ?  D  · · ·  Q  b

Figure 4.1: An example genetic string of rules. The third rule from the left requires reinterpretation since rule “c” is illegal (denoted by a dashed box); either the default rule “A” or a random valid rule “?” is fired instead.

Hinton and Nowlan [HN87] denoted adaptive alleles as learnable “?”s. This notation is also adopted here, as shown in figure 4.1. The rule fired is randomly chosen when a “?” is encountered. An allele of this type may also be added to the genetic string to force ‘symbolic learning’ between generations. The number of times the string is evaluated denotes the number of learning steps taken. The same string will produce varying solutions, and the fitness value of the best solution found is returned. The learning process therefore influences only selection. Baldwin called this organic selection [Bal96]. Individuals possessing the ability (plasticity) to increase their fitness through learned behaviour have a greater chance of being selected.


This is the first step of the so-called Baldwin effect; the second is genetic assimilation of the learned behaviour. There are two necessary conditions for genetic assimilation to occur: high correlation between genotype and phenotype spaces, and high relative costs for learning [Mar96]. The increase in an individual’s fitness may vary if learning contains a stochastic element. Consequently, the cost of learning is observed in the unreliable increase in fitness, which puts pressure on a more stable behaviour. As a result the number of “?”s in the population will tend to dwindle over generations. Similarly, if random rules are applied to illegals rather than some default hierarchy and the same learning strategy is applied, there will be pressure on the individuals to become ‘more legal’ during evolution. Initially, however, it is expected that symbolic learning of this kind is favoured, but later selected against [Mar96]. In the symbolic production system all described rules are predefined. They reflect the designer’s idea of the most plausible rules for constructing solutions to the problem. It is then effectively the symbolic string that determines the activities performed by the organism. There is no feedback from the environment to the organism as pictured in figure 3.1 on page 29. When a connectionist model is used, complete rules are not supplied by the designer. The designer need only describe the organism’s environment and activities. It is then the environment which proposes the activity of the organism. For example, when a cell encounters an increased concentration gradient of molecules of lactose, certain genes will be activated that allow the organism to feed upon this food source. The connectionist model will learn the appropriate activity for any given environmental state. Here the gene coordination is modelled by simple feed-forward neural networks. An analogy is drawn between a neural network and a network of interactions among information macromolecules responsible for gene coordination [Zuc97]. As a result evolutionary problem solving becomes an adaptive search. The approach produces plastic solutions, which are solutions capable of immediate self-organization in the event of an environmental change. If an allele is deleted, other genes will alter their expression pattern so that a solution with (near) equivalent fitness is obtained. This is accomplished through the local gene interaction networks that coordinate gene expression. Genes regulate each other’s activity directly or through their products via these networks. Like living cells, the solutions will differ because they have dissimilar patterns of gene activity, not because they have different genes [Kau91].


The gene coordination network’s task is to determine which gene is to be expressed as a function of the environmental state of the genome. As genes are expressed, their products change the environment. Through the environment or directly, genes are capable of regulating the activity of other genes, and so the regulation of transcription is a result of transcription itself. There is no predetermined environmental problem for which there is a solution; the genome constructs the environment and hence determines both the solution and the problem simultaneously [Lew82]. The construction and reconstruction of its environment is, however, constrained by what it already is. The genome alters its environment based on patterns of the world, which are presented to the gene coordination network. The network consists of nodes and connections. The nodes are simple processing units whose activation is the weighted sum of their input from the environment and from other nodes. Knowledge is stored in the connections among the processing units, and learning takes place through the adjustment of their connection strengths. Each state and corresponding response could be considered in isolation by the network, if an absolute value judgement were given. The response strength, the gene activity, would then be an intervening variable, reinforced by some function of the individual’s fitness. In essence, the network would be attempting to predict the resulting individual’s fitness, based on the current environmental state and the actions taken. Formulating reinforcement in isolation is, however, not a simple procedure. An evolutionary approach, such as that proposed in [CF99] and the one presented in the previous chapter (see section 3.2.2), produces absolute network outputs, but these only make sense for the ranking of alternatives. It is believed that choice procedures may provide a better measure of the effects of reinforcement. The measures of ‘absolute values’ are only a result of choice dynamics [Wil94], which is the approach taken here. Gene expression is the result of choices made from a set of competitive compromises. In figure 4.2, a section of the genome model, depicted as a genetic string, is illustrated. Two different loci (l and m) are shown. An individual is imagined to contain multiple alleles. In general, however, living organisms have only two alleles, although a greater variety exists within the population. In the model, multiple alleles will compete for the right to be expressed, but only two of them at a time. The competition is resolved by the gene coordination network. As for any connectionist model, assumptions must be made about the number of nodes, the arrangement of connectivity and their interaction with the environment.


Since the network is a competitive or choice network, the input will be the difference between two competing environmental states associated with the alleles. An array of environment state differences at locus l is denoted by ∆E_l^{i,j} = E_l^j − E_l^i, where allele i is competing with allele j. The environmental differences are connected to a hidden layer of nodes, which are connected to two output nodes as shown in the figure. The activations of the two output nodes – O_lhs and O_rhs – are real numbers between 0 and 1. The node with the higher activity is chosen. For each competition performed, two separate inquiries are made as to whether an allele should be chosen over the currently successful one. The results must be consistent: if the challenge of allele j is to be successful over allele i, then O_lhs^{i,j} < O_rhs^{i,j} and O_lhs^{j,i} ≥ O_rhs^{j,i} must be satisfied, where ∆E^{i,j} = −∆E^{j,i}. If the above inequalities do not hold, allele i remains successful. With no useful information available from the environment, the network may respond in a contradictory manner and the successful gene will hold its position independently of changes in the environment. To achieve this, the model must remember which allele was expressed previously at that locus. Loci which are sensitive to the environment are known as plasticity loci, and those insensitive to the environment as mean loci. A genome containing only plasticity loci has been labelled the pleiotropy model by Scheiner [Sch98]. The pleiotropy model is a special case of the epistasis model, which also contains mean loci.


Figure 4.2: Genome model. Competing alleles at loci l and m where (?) marks the currently successful allele.
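A minimal sketch of this pairwise competition (illustrative only; network stands for the evolved gene coordination network and returns the two output activations O_lhs and O_rhs for a given difference vector):

```python
import random

def challenge_succeeds(network, E_i, E_j):
    """Allele j challenges the currently successful allele i at a locus.

    The network is evaluated on both difference vectors; the challenge
    succeeds only if the two inquiries are consistent, as described above."""
    delta_ij = [ej - ei for ei, ej in zip(E_i, E_j)]   # dE^{i,j} = E^j - E^i
    delta_ji = [-d for d in delta_ij]                  # dE^{j,i} = -dE^{i,j}
    lhs_ij, rhs_ij = network(delta_ij)
    lhs_ji, rhs_ji = network(delta_ji)
    return lhs_ij < rhs_ij and lhs_ji >= rhs_ji

def transcribe_locus(network, env_states, successful, m):
    """Hold m random competitions at one locus and return the allele that
    is finally expressed.  env_states maps allele -> environment state."""
    alleles = list(env_states)
    for _ in range(m):
        challenger = random.choice([a for a in alleles if a != successful])
        if challenge_succeeds(network, env_states[successful], env_states[challenger]):
            successful = challenger
    return successful
```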


Two types of perturbations are possible at a given locus. The first is a minimal perturbation, where a previously expressed allele is forgotten and therefore will not hold through to the next generation. If the locus is a plasticity locus, it is likely that the allele will regain its position. If, however, the locus is a mean locus, the allele will lose its position. The second is a structural perturbation, where the gene coordination network itself is modified. This may be a modification of the network architecture or of the connection strengths. Viewed in this manner, a structural perturbation constitutes learning. The names for these perturbations are taken from [Kau91]. Additionally, it is possible to extend this model so that a previously expressed allele, and/or other alleles, may be deleted (removed from a locus) and others added. The entire genome is represented by a string containing ℓ loci, as shown in figure 4.2. The string is transcribed systematically from left to right, processing one locus at a time. Within each locus there exist m alleles competing to be transcribed. Random competitions are held, where alleles compete with the currently successful one for its right to be expressed. The competition continues until a maximum number of competitions is reached or some time has elapsed. Each locus is occupied by a data structure containing a neural network for gene regulation and a list of competing alleles. The data structure may also hold information about which allele was previously expressed, the training history of the network, etc. In the model presented, the previously expressed allele will be remembered and in the next generation will be the default successful allele. If, however, this allele happens to be illegal in the following generation, a random legal allele is selected as the default, with which the other alleles must then compete. There exist a number of possible methods for evolving the connection strengths of the gene coordination network. A simple and effective way would be to apply an evolution strategy as done in the previous chapter. A perhaps more elaborate method would be to use incremental learning procedures such as the methods of temporal difference [Sut88]. Indeed, the gene coordination network may be thought of as having the ability to predict the future based on the organism’s environment and actions. Supervised learning, using backpropagation, is another possible learning strategy. Training data for learning may be sampled from the current population. During transcription, the environmental states (associated with the expressed alleles) are recorded in the loci data structure. This may be used as input training data.

48

Connectionist Models

Once the genome has been expressed completely, its total fitness is known. The output training pattern is then formed using this fitness. From a population of λ unique individuals a training set of size λ(λ − 1) can be sampled. Should there be any useful information in this data, the network may learn it and hopefully generalize this knowledge. New individuals may be formed using standard recombination operators. Networks may be exchanged between two or more individuals using one, two, or multiple crossover sites. Mutation will play an important role in maintaining a diverse environment. Minimal perturbations will attempt to knock out successful alleles. It is expected that minimal perturbation will have less influence on plasticity loci. Structural perturbation will reset the connection strengths of the gene coordination networks and will permanently damage loci. It is also possible to view the training of a network as a more complex structural perturbation. The following section will illustrate the methods discussed on some well-known benchmark problems for job scheduling.

Experimental Study

In this section, the evolutionary problem solving architectures described in the previous section are tested on two frequently studied job-shop scheduling problems. They serve to illustrate further the difference between the symbolic and the connectionist representation. The complete studies are presented in two separate papers devoted to genetic production systems [RJ99b] and gene coordination networks [RJ99a], respectively. In the symbolic model, the algorithm designer supplies the domain knowledge by hand. For the deliberative connectionist model, domain knowledge is acquired during search using the gene coordination network described in the previous section. In the job-shop problem, a set of jobs must be scheduled on a set of machines. Each job consists of a number of operations, which are processed on the machines in a predetermined order. The goal is to assign jobs to machines such that the overall production time, the makespan, is minimal. Schedules are formed by systematically assigning one job after the other at its earliest convenience. In the experiments an allele will determine the activity of the organism; the activity is job assignment.
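As an illustration of this schedule-building scheme, consider the following minimal sketch (not the thesis code; jobs are given as lists of (machine, processing time) operations, and idle gaps on machines are not reused):

```python
def build_schedule(jobs, choices):
    """Assign, for each symbol in `choices`, the next unscheduled operation
    of that job at its earliest convenience; return the makespan."""
    machine_free = {}                         # machine -> time it becomes free
    job_free = {j: 0.0 for j in jobs}         # job -> time its last operation ends
    next_op = {j: 0 for j in jobs}            # job -> index of next operation
    for j in choices:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_free[j], machine_free.get(machine, 0.0))
        job_free[j] = machine_free[machine] = start + duration
        next_op[j] += 1
    return max(job_free.values())

# Two jobs on two machines; the string "abab" schedules their operations in turn.
jobs = {"a": [(0, 3), (1, 2)], "b": [(1, 4), (0, 1)]}
print(build_schedule(jobs, "abab"))   # makespan of this small example: 6
```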


Symbolic

The classic 10-job, 10-machine job-shop scheduling problem posed by Fisher and Thompson is studied [FT63]. There exist numerous rules for scheduling. A survey of one hundred such rules was given by Panwalker and Iskander [PI77]. For the simulations performed in this section, three priority scheduling rules are used. They are: assign the job (A) whose operation has the earliest completion time, (B) that is first in, and (C) whose operation has the shortest processing time. Since the machine order is fixed for each job, the legal moves are limited to selecting any of the 10 uncompleted jobs for scheduling: a, b, . . . , i, j. Adaptive or learnable alleles “?” are added to give the individuals the plasticity to increase their fitness through learned behaviour. These alleles may be regarded as adaptive rules. Two approaches to learning are examined: Lamarckian learning and the Baldwin effect. The number of learning steps is one hundred. The Baldwin effect uses the “?” alleles to target its learning. The Lamarckian learning is a simple hill-climber where a random allele is mutated and kept if the modification is an improvement. This is also repeated one hundred times. The Lamarckian learning modifies the genotype, whereas the Baldwin effect is a result of the selection process. Since there are in all 100 operations to be scheduled, the genetic string of rules will be of the equivalent length ℓ. The production system is simply built by reading the symbolic string of production rules from left to right. The scheduling rules are valid at all times, while a legal move is valid only when the job it represents has not been completed. Two approaches are taken to decipher illegals. The first fires the default heuristic A. The second approach is to interpret illegals as learnable alleles “?” and apply one hundred learning steps. The second approach will be referred to as random decipher with Baldwin effect. The four different evolutionary approaches for the scheduling problem are then: 1) without learning, 2) Baldwin effect with random decipher of illegals, 3) Lamarckian learning, and 4) Baldwin effect with “?” alleles added. All methods use rule A by default for illegals, with the exception of approach 2), which randomly deciphers illegals. The evolutionary algorithm used to evolve the rule patterns is a simple genetic algorithm. Individuals survive to the next generation by competing in a tournament. Two individuals are sampled randomly out of the population and the better of the two is passed on to the next generation. The process is repeated until there are as many individuals in the next generation as in the previous.


Scheduling rule   Without learning   Baldwin effect random decipher   Lamarckian      Baldwin effect with “?”
A                 22.8 (4.2)         30.3 (4.2)                       19.7 (3.5)      20.8 (3.7)
B                 11.5 (3.6)         13.7 (3.5)                       9.9 (3.0)       12.8 (3.6)
C                 4.9 (2.3)          6.1 (2.3)                        1.0 (1.0)       4.6 (2.4)
Default           21.1 (A) (4.1)     10.7 (?) (2.4)                   11.6 (A) (3.0)  19.9 (A) (3.9)
None              39.6 (4.4)         39.2 (4.3)                       57.9 (4.2)      42.0 (4.2)

Table 4.1: The mean and standard deviation of the percentage of time a rule is used to build the best schedule found.


Sampling is done without replacement. All individuals in the new generation undergo recombination, where alleles are randomly exchanged between any two individuals. This is known as uniform crossover [Sys89]. An average of 40% of the alleles on a string will be crossed in this manner. Following recombination, mutation is applied to the string with a probability of 1/ℓ. Mutation modifies alleles by giving them other randomly chosen values. The population size used in the runs is one hundred and the best individual will always survive to the next generation (elitist). One hundred independent runs are performed. The average percentage of scheduling rules used in the final populations is given in table 4.1. Of the scheduling rules used, rule A is most frequently applied, followed by B and then C. The number of times that legal moves are preferred to the priority scheduling rules is, however, high. This would tend to indicate that the scheduling rules applied are inefficient.


Figure 4.3: Average number of illegal alleles in the population as a function of generation for the random decipher method.


The Lamarckian runs, which effectively ‘legalize’ the solutions during learning, still contain 11% illegal rules on average. This also applies for the random decipher method, and tends to indicate that the 1% mutation rate, mutating a single rule in the string, has the effect of making an average of 11 rules illegal. Figure 4.3 shows how the mean number of illegals develops over generations. The characteristic ‘hump’ is observed, where learning (the illegals in this case) is favoured initially and later selected against. In table 4.2 the statistics for the best solutions found in the runs are presented. The difference in performance between evolution without learning and the random decipher method indicates that the default reasoning approach for interpreting illegals is on average not the best method. Learning requires, however, more computation time, proportional to the number of learning steps taken. The evolution was stopped at one thousand generations, and the improvement after 200 generations was minimal. The mutation operator can cause a disastrous change in an individual’s fitness value. The local minima found by Lamarckian learning produce, therefore, too diverse rule patterns. The Lamarckian runs are extremely noisy and, hence, unable to converge.

Makespan   Without learning   Baldwin effect random decipher   Lamarckian   Baldwin effect with “?”
Best       939                940                              977          958
Mode       982                961                              998          963
Mean       974                967                              994          966
Worst      997                985                              1014         985

Table 4.2: Makespan for the 100 independent runs taken, the optimal solution is 930.

Discussion on the symbolic approach

In this study, a genetic production system for optimization and search was presented. The domain-specific knowledge is introduced to the algorithm without having to modify the search process. This was done by letting the genetic algorithm evolve rule patterns, which represent the domain knowledge.


The variation of rules is performed without any regard to their meaning. Although the computational evidence from a single test problem is too small to support a definite conclusion, it does warrant some discussion. The results of the experiments indicate that evolution favours some rules more than others. Different stages of search favour different rules; this may be observed from the adaptive alleles examined over the generations. The frequency with which purely legal moves are preferred may be an indicator of the effectiveness of the heuristic rules suggested by the designer of the algorithm. The Baldwin effect was the superior of the two learning approaches studied. The Lamarckian learning approach was too disruptive for the rules being processed by the genetic algorithm. Alternatively, it may be expected that Lamarckian learning forces premature convergence. Learning using the adaptive alleles converged too rapidly, and the best solution found is similar to the mode found using the random decipher approach. The algorithm without learning did not perform badly, considering that it found the best solution and has the least computation time. The algorithm that assimilated legal solutions via the Baldwin effect and random deciphering of illegals is on average the best. This indicates that improvement may be expected for a genetic algorithm without learning if the illegals are deciphered in a more intelligent manner.

Connectionist

In the symbolic study it was found that designing general rules for problem solving may be difficult. Its success is highly dependent on the designer’s insight into the problem being tackled. Different stages of the search also require different rules. Furthermore, the variation of a single rule may have drastic consequences on the validity of other rules used in the production system. It will now be demonstrated how most of these difficulties may be overcome by using the gene coordination network discussed in the previous section. The evolutionary algorithm used in the experimental study may be summarized as follows [RJ99a]:

1. Initialize the population. Connection strengths for the gene coordination network are randomly set ∈ [−1, 1] at each locus.

2. Loop through the following steps until the termination criterion is met:


(a) Transcribe loci by performing m random competitions at each locus with the successful allele. Record which allele was transcribed and its environmental state. Compute the individual's fitness.

(b) Store the elite individual.

(c) Select individuals from the current population using tournament selection to form the new population for the next generation.

(d) Train gene coordination networks in the new population, at loci which have not been trained before, with a probability Pl. The Pl parameter will essentially dictate the initial rate of learning. Training samples are taken from the old population. When a network has been trained, a training flag T for that locus is set to false.

(e) If the elite has been lost, inject a copy into the new population (elitist).

(f) Shuffle the new population and perform a two-point crossover in order to exchange loci between selected individuals. The probability of crossover is Pc.

(g) Perform a minimal perturbation with probability Pm by exchanging the currently successful allele at a locus with another randomly chosen allele. Perform a structural perturbation with probability Ps by randomly resetting the connection strengths for a network at a locus. In both cases set the training flag T to true.

The test problems taken from [MT63] are of sizes 6 × 6 and 10 × 10. The optimal makespans are known and are 55 and 930 respectively. As a schedule is being constructed, a number of features of the solution become available. These are the environment states which may be associated with a job (allele). Alleles corresponding to jobs that have been completed are illegal and will not compete for expression. For the simulations performed, three environment states are used: the time a job is available, the time it may be expected to finish, and the total time still needed to complete the entire task (work remaining). These environment states are used as input data for the gene coordination network, which has one hidden layer with 6 nodes. For the training of the network, the output pattern used is fi ≤ fj for the left output node and fi ≥ fj for the right output node, where f is the fitness value corresponding to environment states i and j. Note that these are Boolean operators and that the problem is one of minimization. A sample, twice the size of the population, is extracted as discussed in the previous section. Samples for which ∆E = 0 are ignored.
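The pairing of transcribed environment states with fitness comparisons can be illustrated with a short sketch. The following Python fragment is only a rough illustration of how such training patterns might be assembled; the sample structure, the use of the difference between two competing alleles' environment states as the network input, and all variable names are assumptions made here for clarity rather than details taken from the implementation.

```python
import numpy as np

def build_training_set(samples):
    """Assemble input/target pairs for training a gene coordination network.

    `samples` is assumed to be a list of tuples (env_i, env_j, f_i, f_j):
    the recorded environment states of two competing alleles and the fitness
    values of the individuals they were transcribed in.  The two target
    nodes encode the Boolean comparisons f_i <= f_j and f_i >= f_j
    (minimisation); samples with equal fitness (Delta E = 0) are ignored.
    """
    inputs, targets = [], []
    for env_i, env_j, f_i, f_j in samples:
        if f_i == f_j:                                   # Delta E = 0: no preference, skip
            continue
        x = np.asarray(env_i) - np.asarray(env_j)        # assumed input: state difference
        t = np.array([float(f_i <= f_j), float(f_i >= f_j)])
        inputs.append(x)
        targets.append(t)
    return np.array(inputs), np.array(targets)

# Tiny usage example with made-up three-feature environment states.
rng = np.random.default_rng(0)
samples = [(rng.random(3), rng.random(3), 10.0, 12.0),
           (rng.random(3), rng.random(3), 8.0, 8.0),     # ignored (equal fitness)
           (rng.random(3), rng.random(3), 15.0, 9.0)]
X, T = build_training_set(samples)
print(X.shape, T)
```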


The training algorithm used is back-propagation. The log-sigmoid transfer function returns the activations of the nodes squashed between 0 and 1. The mean square error is used as the performance function. A network is trained for 100 epochs and, if it survives, it may be trained further in some future generations. A population size of 30 is used for the 6 × 6 problem and 50 for the 10 × 10 problem. The probability of crossover is Pc = 1, of learning Pl = 0.2, and of minimal perturbations Pm = 1/ℓ. The probability of structural perturbation for the smaller problem was zero. For the larger problem, it was found useful to add a very slight chance (0.01%) of a structural perturbation and an increase in minimal perturbations. Ten independent runs were taken for each problem. For the 6 × 6 problem, the optimal solution was found within 40 generations. The larger problem was stopped after 200 generations and the results varied from a makespan of 960 to 990. Results for a typical solution found for each of the two problems will be presented.

To test the plasticity of the solutions found, all loci are systematically perturbed by deleting the successful allele and putting another in its place. This can be done in m − 1 different ways at each locus.


Figure 4.4: Perturbation results. The top figure shows the fitness at a locus which has had its successful allele deleted and all loci to the right of it have been structurally perturbed. The bottom figure shows the fitness at a locus which has only had its successful allele deleted at that point.


The result of this is that on average 50% of the loci are immune to the perturbation for the 6 × 6 problem. Either other loci performed its function or another phenotype was produced that gave the same fitness value. Figure 4.5 shows six different solutions resulting from these perturbations. The bottleneck remains on the same machine but some job orders have changed. The means by which a solution is constructed, as well as the problem itself, are redundant. The bottom plot in figure 4.4 shows which loci are most immune to the perturbation by deletion. Regions at the start of the string are more susceptible to damage. This is reasonable since they must essentially predict much further into the future. To eliminate the possibility that this is a result of some other factor, such as the constraining of the problem-solution space, all networks to the right of the damaged locus were structurally perturbed. The result of this is shown in the top plot in figure 4.4 and illustrates how the fate of the last m loci is determined independently of the network once M − m loci have been expressed. Figure 4.6 shows the choice landscape for the 6 × 6 problem where the difference in the work remaining has been set to zero.


Figure 4.5: Gantt charts. Six different solutions obtained due to perturbation by deletion for the 6 × 6 job-shop problem’s elite solution.


The horizontal axis displays the difference in time of completion and the vertical axis when a job is available. On both axes the scale is from −50 to 50. The landscape is the difference between the two output nodes, Olhs − Orhs. The darker regions are positive values whereas the lighter (white) ones are negative. The network at locus 24, for example, will prefer a job (allele) with a sooner completion time and later availability, but at locus 34, early completion time is preferred regardless of availability. Some of the nets are contradictory, which will make their corresponding loci mean loci. Examples of these are the first and the last locus. It is understandable that the last locus could be a mean locus since its fate is always decided. The first locus has one environment state always equal to zero. When the choice landscapes are also examined with respect to the work remaining, most loci are found to be of the plastic type.

The same perturbations by deletion were performed on a 10 × 10 solution. The result was that on average 60% of the loci were immune to deletion. The results are depicted in figure 4.7. When an arbitrary gene is deleted, how many genes alter their expression pattern? The single perturbation brings about a cascade of change in the patterns of gene activation.


Figure 4.6: Network landscapes. The choice landscapes for the gene coordination networks per locus. The locus number is given above each map, with the number of times the network has been trained during its evolution over generations in parentheses.



Figure 4.7: Perturbation results. The top figure shows the fitness at a locus which has had its successful allele deleted and all loci to the right of it structurally perturbed. The bottom figure shows the fitness at a locus which has only had its successful allele deleted at that point.

If the mutations are neutral, the resulting solution – the phenotype – remains the same, or the phenotype has changed but its fitness remains unchanged. In figure 4.8 the number of genes which alter their expression is plotted against the locus where the deletion occurred. Only the cases where the equivalent elite fitness was obtained are shown. Commonly it suffices that just one additional gene changes its expression to compensate for the damage. Changes over up to 19% of the string are also observed!

Discussion on the connectionist approach

An application of a general epistasis model for adaptive search and optimization was presented. The development of the model is like creating a language whose rules have not been formalized and where there is no a priori definition of ‘purpose’ or ‘sense’. As the gene coordination networks evolve, their meaning develops alongside them. Gene expression is controlled by these networks, where two alleles are matched up at a time. This is a contingent process. The proposed method performed well on two job-shop benchmark problems and the solutions found were highly plastic.


Figure 4.8: Expression changes. The figure shows how many genes have changed their expression as a function of the locus where the successful allele was deleted.

For plasticity to be selected for, the environment must be predictable. The redundant nature of both the problem and the method by which solutions are built has been a factor in producing highly fit plastic loci. Formulating gene expression as an ‘intelligent’ process introduces new possibilities for the role of learning in difficult search problems. Problem domain knowledge is introduced to the evolutionary algorithm without much involvement from its designer.

4.2 Reactive

The deliberative connectionist model, similar to the one presented in the previous section, is suitable when there is a small number of activities to select from. In this section an example of a reactive connectionist model is described, for which the model's output is the activity and is continuous. The activity of the connectionist model is that of a learning rule for the optimization of neural networks. Connectionist models are universal function approximators and, therefore, the space searched already contains a simple gradient descent search procedure like back-propagation. The reactive connectionist model will rediscover, and perhaps improve, known mathematical methods. In essence, it is proposed that a learning rule be part of the neural network structure and, therefore, globally distributed. Although the method is a non-local learning rule, it does not require an ‘error’ signal to be propagated directly to the neuron in question. The ‘error’ signal will be channeled through the learning rule, i.e. the network. The viewpoint expressed challenges the idea that ‘learning rules’ must be local for biological plausibility.


Indeed, how can a simple local rule give rise to anything cognitively significant like “learning”? If it does, it must be the result of some higher-order organization of the system as a whole. The ‘biological implausibility’ of learning algorithms is commonly debated in the literature [DG99], for example: “it is unlikely that animal networks employ any of the mechanisms used in training back-propagation networks (gradient descent, stochastic descent, genetic algorithms, etc.) because they all use information from across the entire network to decide how to adjust the value of an individual weight” [Bax92]. It is for this reason that Hebb-like [Heb49] local learning rules are regarded as biologically plausible. However, there do exist many return pathways in the brain, although they may not carry all the information needed for back-propagation learning [Cha90]. Another move towards biological relevance [MAJ91] are reinforcement learning methods [WGM73, BA85]. With these methods, learning may be achieved by a single reinforcer.

Apart from these arguments, there is the issue of the implementation of the ‘learning rule’ itself. Previous attempts at evolving learning rules have relied on predefined functions [Cha90, Bax92, BBC95]. For example, a simple quadratic function (where the delta rule could be found as a special case) with evolvable parameters was used by Chalmers [Cha90]. Chalmers recommends genetic programming for the evolution of even more complicated learning rules [Cha90]. It will be shown that sophisticated learning rules may be found using the neural networks themselves. It is well known that a neural network is a capable function approximator and for this reason quite able to both approximate and implement any learning rule. The learning rules will be discovered by an evolutionary search method. The implications of such a system may be the following:

• faster trainers for given problem classes
• a new approach to learning
• different types of feedback (reinforcers)
• a greater variety of learning rules

The current work looks at the single-layered neural network only. Because the decision boundary is linear, the single-layer network can only be used to recognize patterns that are linearly separable. Experimental studies for some linearly separable problems are presented in the following section.


The working principles developed are fundamental to more complicated neural network architectures. For non-linear problems, architectures similar to that of fixed on-line learning [YCC99] may be used. The concept developed is similar to fixed-weight on-line learning, but the weights are not fixed; they are evolved. A learning rule is a procedure for modifying the weights and biases of a network; this procedure is also referred to as the training algorithm. The weight updating rule at time (t) is given by:

w_{ij}^{(t)} = w_{ij}^{(t-1)} + \Delta w_{ij}    (4.1)

where w_{ij} is the weight of the connection from node j to node i. The commonly used learning rule for the change in the weights \Delta w_{ij} is

\Delta w_{ij} = -\eta \, \frac{\partial E_p}{\partial w_{ij}}    (4.2)

where E_p is the error for pattern p, or by the delta rule [RHW86]:

\Delta w_{ij} = \eta \, \delta_{ip} \, o_j .    (4.3)

The change in the weights is a function of the learning rate η, which is typically less than 1 (e.g. 0.1 or 0.3). A small learning rate is chosen because the final weights sought should work for all patterns, and changing the weights per pattern must be done cautiously. The error term δ_{ip} is the product of the actual error output, which is the difference between the target for node i, denoted by t_{ip}, and the actual output of that node, o_{ip}, multiplied by the derivative of the node's activation function. The error for the node is also related to the signal from the sending node, which is o_j. For a pure linear activation function, the output layer is modified by

\Delta w_{ij} = \eta \, (t_{ip} - o_{ip}) \, o_j    (4.4)

and for the commonly used sigmoid activation function by

\Delta w_{ij} = \eta \, o_{ip} (1 - o_{ip}) (t_{ip} - o_{ip}) \, o_j .    (4.5)

In other words, the learning rule is a function of the input and output node activations and the target value:

\Delta w_{ij} = \eta \, f(o_j, o_{ip}, t_{ip}) .    (4.6)


This function may be approximated by a neural network, and hence the proposal to use another part of the network to model the learning rule. Computing the error term for hidden nodes is usually done by letting them inherit the errors of all nodes that they activate. Therefore, a more general learning rule would take the form

\Delta w_{ij} = \eta \, f(o_j, o_{ip}, \delta_{ip})    (4.7)

where now δ_{ip} reflects in some way the ‘error’ assigned to w_{ij} by the principles of credit and blame [RHW86, WGM73], or reinforcement [BA85]. The neural network approximating equation (4.6) or (4.7) would have an output value of \Delta w'_{ij}, which may be either positive or negative. For this reason, the output node's activation function must allow for both signs. Using the hyperbolic tangent sigmoid transfer function, the learning rule becomes

\Delta w_{ij} = \eta \, \Delta w'_{ij} = \eta \left( \frac{2}{1 + \exp(-2 \sum_k \omega_k a_k)} - 1 \right)    (4.8)

where ω_k are the weights of the learning rule and a_k are the signals from the hidden nodes' activation.

Figure 4.9: The neural network structure (bias is not shown). A Π (multiplier) node and an accumulator node have been added to implement the weight update. Both t_p and o_j are sensors or the result of sensory input. The effector, o_{ip}, proposes the appropriate action.


Other activation functions may be equally acceptable, but (4.8) will be used here. The inputs for the neural network implementing the learning rule are the function arguments in equation (4.6) or (4.7). In Chalmers' [Cha90] quadratic function the weights themselves (w_{ij}) are part of the learning rule; however, there is no reason for including them here. The complete neural network structure is given in figure 4.9 for a learning rule with one hidden layer training a single-layer network.
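As a concrete illustration of equations (4.6)–(4.8), the sketch below evaluates a small learning-rule network that maps (o_j, o_ip, t_ip) to a weight change. The log-sigmoid hidden layer, the number of hidden nodes and the random placeholder weights are assumptions made for illustration; in the thesis the rule's weights are the quantities being evolved.

```python
import numpy as np

def learning_rule_delta(inputs, W_hidden, b_hidden, w_out, b_out, eta=0.5):
    """Evaluate a neural-network learning rule in the spirit of (4.6)-(4.8).

    `inputs` holds the arguments of the rule, e.g. (o_j, o_ip, t_ip).
    The hidden layer is assumed to use the log-sigmoid, and the single
    output node the hyperbolic tangent sigmoid of equation (4.8), so the
    returned weight change can be positive or negative.
    """
    z = W_hidden @ inputs + b_hidden
    a = 1.0 / (1.0 + np.exp(-z))                         # hidden activations a_k
    s = w_out @ a + b_out                                # net input to the output node
    return eta * (2.0 / (1.0 + np.exp(-2.0 * s)) - 1.0)  # eta * tanh(s), eq. (4.8)

# Example: a rule with 4 hidden nodes evaluated on one (o_j, o_ip, t_ip) triple.
rng = np.random.default_rng(1)
Wh, bh = rng.uniform(-1, 1, (4, 3)), rng.uniform(-1, 1, 4)
wo, bo = rng.uniform(-1, 1, 4), rng.uniform(-1, 1)
print(learning_rule_delta(np.array([0.3, 0.9, 1.0]), Wh, bh, wo, bo))
```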

Experimental Study

The evolutionary optimization algorithm presented is based on the evolution strategy, ES(µ, λ) [Sch95]. It follows the typical cycle for the evolution of learning rules [Yao99] and may be formulated as follows:

0. Initialize a population of λ individuals randomly, set the generation counter to g = 0 and the desired maximum number of generations G. Each individual is a vector (ω_1, . . . , ω_n, σ_1, . . . , σ_n), where ω = (ω_1, . . . , ω_n) are the neural network weights for the learning rule, and σ = (σ_1, . . . , σ_n) are the ‘mean step’ sizes adapted during the search. The initial weights are randomly distributed between −1 and 1 and the initial mean step sizes are 2/√n.

1. Use each learning rule ω_i, ∀ i ∈ {1, . . . , λ}, to train a neural network on a given set of training patterns. The initial weights w of the network to be trained are randomly distributed between −1 and 1. A number of different training patterns are used. The fitness of the training algorithm, ω_i, is the averaged ‘root mean square error’ of the last epoch.

2. Select the best µ individuals to generate on average λ/µ offspring for the next generation.

3. The ‘mean step sizes’ are updated according to the log-normal update rule, for i = 1, . . . , µ, h = 1, . . . , λ, and j = 1, . . . , n:

\sigma_{h,j}^{(g+1)} = \sigma_{i,j}^{(g)} \exp\bigl(\tau' N(0,1) + \tau N_j(0,1)\bigr)    (4.9)

4. The neural network weights are varied using the normal distribution:

\omega_{h,j}^{(g+1)} = \omega_{i,j}^{(g)} + \sigma_{h,j}^{(g+1)} N_j(0,1)    (4.10)


5. Increment the generation counter, g = g + 1, and go to step 1 unless the maximum number of generations (G) has been reached.

The algorithm presented is a simple evolution strategy which does not use any recombination. Some of the factors which we expect will influence the search for learning rules are:

• whether the training samples can be learned by the chosen network architecture
• the number of training samples used (the set must cover the problem space ‘sufficiently’)
• the number of epochs used in training
• the complexity of the learning rules (the number of hidden nodes)
• repeated training on the same task in order for the results to be statistically significant.

Some of these issues will be examined in the following experimental study.
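Steps 3 and 4 of the algorithm can be sketched as follows. The learning rates τ and τ' are set to the customary evolution strategy recommendations here, which is an assumption; the section does not restate the constants.

```python
import numpy as np

def mutate(parent_w, parent_sigma, rng):
    """Produce one offspring under the lognormal self-adaptation of (4.9)-(4.10)."""
    n = parent_w.size
    tau_prime = 1.0 / np.sqrt(2.0 * n)            # assumed global learning rate
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))         # assumed coordinate-wise learning rate
    global_draw = rng.standard_normal()           # tau' N(0,1), shared over all j
    sigma = parent_sigma * np.exp(tau_prime * global_draw
                                  + tau * rng.standard_normal(n))   # eq. (4.9)
    w = parent_w + sigma * rng.standard_normal(n)                   # eq. (4.10)
    return w, sigma

rng = np.random.default_rng(0)
n = 10
w0 = rng.uniform(-1, 1, n)
s0 = np.full(n, 2.0 / np.sqrt(n))                 # initial mean step sizes 2/sqrt(n)
w1, s1 = mutate(w0, s0, rng)
print(s1.round(3))
```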

Linearly Separable Tasks

Linear prediction is a task for which single-layer networks are commonly used. Here, experiments with Chalmers' [Cha90] eight linearly separable tasks given in table 4.3 are performed. The tasks have only one output unit, since a single-layer network with more than one output unit is equivalent to a number of disjoint networks with a single output unit [Cha90]. Task 3 is, however, not linearly separable and therefore will not be used in the experiments. The effect of learning rule complexity and training time is examined in this study. The inputs and outputs are zeros and ones, and the activation function for the output node of this single-layer network is the log-sigmoid activation function:

o_{ip} = \frac{1}{1 + \exp\bigl(-\sum_j w_{ij} o_j\bigr)}    (4.11)

Both the learning rule and the network being trained use a bias, which is not shown in figure 4.9.
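For reference, a minimal sketch of training a single-layer log-sigmoid network with the delta rule of (4.5) and the output (4.11) is given below, here applied to the logical OR task; the learning rate, epoch count and initial weight range are illustrative assumptions.

```python
import numpy as np

def train_delta_rule(patterns, targets, eta=0.5, epochs=100, seed=0):
    """Train a single output node with a bias using the delta rule (4.5).

    Inputs and targets are 0/1, as in the linearly separable tasks.
    Returns the final weights, the bias, and the last-epoch mean squared error.
    """
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1, 1, patterns.shape[1])
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        errs = []
        for o, t in zip(patterns, targets):
            oip = 1.0 / (1.0 + np.exp(-(w @ o + b)))   # equation (4.11)
            delta = eta * oip * (1 - oip) * (t - oip)  # equation (4.5), without o_j
            w += delta * o
            b += delta
            errs.append((t - oip) ** 2)
    return w, b, float(np.mean(errs))

# Example: the logical OR of two inputs, a linearly separable task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)
print(train_delta_rule(X, y)[2])
```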


Table 4.3: Chalmers’ eight linearly separable tasks [Cha90].

o1 1 0 0 1 1 0 0 0 1 1 1 0

o2 1 0 1 1 0 1 1 1 1 0 0 0

o3 1 0 1 0 1 1 1 0 0 0 1 0

o4 1 0 1 0 0 0 1 0 0 1 1 1

o5 1 0 0 0 1 0 1 0 1 0 0 0

t11 1 0 0 0 1 0 1 0 1 0 0 0

t12 0 0 0 1 0 0 0 0 0 0 0 0

t13 0 1 1 1 0 1 1 1 0 1 0 0

t14 0 0 0 1 0 0 0 1 1 1 0 0

t15 1 0 1 0 0 1 1 1 0 1 0 1

t16 0 1 0 1 0 1 0 1 1 0 0 0

t17 1 0 1 1 1 0 1 0 1 1 1 0

t18 0 1 0 1 1 1 0 1 1 1 0 1

The evolution strategy, ES(30, 200), was run for 400 generations on the following experimental setups:

                      epochs
                 10      30      60
hidden     10     A       B       C
nodes      20     D       E       F
           30     G       H       I

where A, . . . , I denote the experiment labels. Each of the experiments was conducted five times with a learning rate of η = 1/2. In all of the experiments similar response surfaces were evolved. The response surface for experiment F is compared with that of the delta rule given by equation (4.5). This comparison is shown in figure 4.10 for a target value of 1 and in figure 4.11 for a target value of 0. The figures also illustrate the actual change in the network behaviour by multiplying the change in weight by the input signal o_j. The striking difference between the evolved networks and the delta rule is that the delta rule does not change its weights when the output node is zero or one, regardless of the error signal.


Table 4.4: The last epoch error for the best rules found for each experiment.

Experiment   F        I        G        B        E        H        A        C        D
Error        0.0059   0.0070   0.0086   0.0120   0.0127   0.0137   0.0138   0.0227   0.0240

For the evolved network, when the target value is t_{ip} = 1 (see figure 4.10), the change in weights is minimal when o_{ip} ≈ 1, but when o_{ip} ≈ 0 the change is maximal, though again small for o_j ≈ 0. Similar behaviour is observed when the target value is 0 (see figure 4.11), that is, maximal change when o_{ip} ≈ 1, but again minimal for o_{ip}, o_j ≈ 0. When multiplying the input signal with the weight change, as shown in both figures 4.10 and 4.11, it is noticed that the major difference is that the delta rule does not modify weights when the error is at its extremum, whereas the evolved network does. The best evolved learning rule from each of the experiments conducted was used to train a network on Chalmers' tasks again, but now for 200 epochs. The last epoch errors for these learning rules, sorted from best to worst performance, are given in table 4.4. None of the training rules diverged; they fluctuated slightly about a fixed error level.


Figure 4.10: The target value is tip = 1 and the learning rate is one. The learning rule has 20 hidden nodes and a training time of 60 epochs during the evolution.


From these experiments it seems that the more complicated learning rules, with 20 and 30 hidden nodes, performed better. Also, a longer training time resulted in a learning rule with lower errors. Figure 4.12 shows the evolution of error as a function of epoch for the best learning rules from experiments A and F. For comparison, these tasks were trained using the delta rule with an η value five times higher than that used for the evolved rules (see figures 4.10 and 4.11). Indeed, a very large η value is needed for the delta rule to achieve the same learning rate as the evolved rules; however, in this case the weights do oscillate [MMR97, page 81]. The performance of the evolved rules on other unseen tasks was also good; even for Chalmers' third task (which is not linearly separable) all but one pattern was solved. A similar result is obtained using the delta rule on this task. As a further example, consider the following two cases: the logical OR function and the logical AND function. The training results for these cases are depicted in figure 4.13. It is clear from this demonstration that the evolved learning rules are biased.


Figure 4.11: The target value is tip = 0 and the learning rate is one. The learning rule has 20 hidden nodes and a training time of 60 epochs during the evolution.


Figure 4.12: The learning process for the evolved rules from experiment A and F and using the delta rule on Chalmers’ tasks.


Figure 4.13: The learning process for the evolved rules from experiment A and F and using the delta rule on the logical OR (top) and AND (bottom) functions.

Discussion


The experimental results may be summarized as follows: 1) a neural network is capable of representing a learning rule, 2) the evolved rules are fast neural network trainers, 3) the training algorithms do not diverge when a longer period of learning is used, 4) the accuracy achieved is limited, and 5) the learning rule is biased. The results also indicate that the shorter the training time given to the evolved learning rule, the less accurate the results but the faster the learning. Conversely, the longer the training time, the more accurate the results but the slower the learning. This seems reasonable, since even an algorithm like back-propagation requires a certain number of epochs to reach a quality solution. All of the learning rules evolved have a limited accuracy and appear to be noisy. However, a network approximating equation (4.5) would also make for a ‘noisy’ learning rule. The learning rule is biased: it is ‘tuned’ to solve a given problem class fast. Even using the back-propagation algorithm, the tuning of learning and momentum rates is often required when new problems are solved. Note that a recurrent neural network structure could be used to model a learning rule with momentum. It is believed that the approach taken here opens up many possibilities for evolving fast problem-specific optimizers.

4.3 Summary

In this chapter, deliberative and reactive evolutionary problem solvers using connectionist models were presented. The first model requires some sort of deliberation process before action is taken. The second is reactive in the sense that the action taken is a direct mapping from the environment state. Their possible applications have also been presented. Having proposed various architectures for problem-solution abstraction, the question of search must now be addressed. The following chapter is devoted to understanding the conditions needed for efficient and effective evolutionary search.

Chapter 5

Evolutionary Search

The previous two chapters were devoted to describing situations, or problem-solution abstraction. This chapter is committed to the operators that transform described situations to a desired one, or a goal. This is the fundamental heuristic method in problem solving identified by Newell and Simon, known as means-ends analysis [NS72] (see also [Hay81, p. 33]). In means-ends analysis, the differences between the current situation or state x and the goal state x* are listed and the operators appropriate for reducing them are found. In evolutionary problem solving, this operator is probabilistic and denoted by the variation operator ν,

x^{(g+1)} = \nu(x^{(g)}), \qquad x = (x_1, \ldots, x_n) \in C^n .    (5.1)

One way to list the difference between two states is by defining a distance metric d_C(x^{(g)}, x^{(g+1)}) \geq 0. In the metric space (C^n, d_C) the sequence x^{(1)}, x^{(2)}, \ldots, x^{(g)}, \ldots of points in C^n converges in mean to x^* \in C^n if

\lim_{g \to \infty} E\bigl[ d_C(x^{(g)}, x^*) \bigr] = 0 .

For real number spaces, the distance metric commonly used is the Euclidean norm: d_C(x^{(g)}, x^{(g+1)}) = \|x^{(g)} - x^{(g+1)}\|. In order to estimate the operator's effectiveness in reducing d_C(x^{(g+1)}, x^*), an expected rate of progress measure ϕ may be defined. In terms of the average distance to the optimum, the expected rate of progress, using (µ, λ) selection, could be computed as [Sch77]:

\varphi_x = E\left[ \frac{1}{\mu} \sum_{i=1}^{\mu} d_C\bigl(x_i^{(g)}, x^*\bigr) - \frac{1}{\mu} \sum_{i=1}^{\mu} d_C\bigl(x_{i:\lambda}^{(g+1)}, x^*\bigr) \;\middle|\; x_1^{(g)}, \ldots, x_\mu^{(g)} \right]    (5.2)



where x_{i:λ} is the ith order statistic (i = 1, . . . , λ) determined by the fitness,

f(x_{1:\lambda}) \leq f(x_{2:\lambda}) \leq \ldots \leq f(x_{\lambda:\lambda}) .    (5.3)

The progress rate may also be defined in terms of closeness to the optimal fitness function value f(x*) [Bey95b, page 387],

\varphi_f = E\left[ \frac{1}{\mu} \sum_{i=1}^{\mu} \Bigl( f\bigl(x_i^{(g)}\bigr) - f(x^*) \Bigr) - \frac{1}{\mu} \sum_{i=1}^{\mu} \Bigl( f\bigl(x_{i:\lambda}^{(g+1)}\bigr) - f(x^*) \Bigr) \;\middle|\; x_1^{(g)}, \ldots, x_\mu^{(g)} \right]
        = E\left[ \frac{1}{\mu} \sum_{i=1}^{\mu} f\bigl(x_i^{(g)}\bigr) - \frac{1}{\mu} \sum_{i=1}^{\mu} f\bigl(x_{i:\lambda}^{(g+1)}\bigr) \;\middle|\; x_1^{(g)}, \ldots, x_\mu^{(g)} \right] .    (5.4)

For a constant stationary normalized progress rate ϕ* ≥ c > 0, a linear convergence order is expected in general (see equation (2.31) and the discussion on page 21). Computing ϕ analytically is a difficult theoretical problem, especially when considering the full implementation of an evolutionary algorithm. Furthermore, the analysis is only valid for the fitness function investigated. To extrapolate the results to other problem classes it must be assumed that they have similar properties. For example, it is often supposed that the local structure of the fitness landscape can be approximated by a spherical or corridor model [Bey96].

An operator should reflect the designer's anticipation of where improved situations are most likely to be located for any arbitrary situation. In evolution strategies, there is a preference for small phenotypic changes over larger ones, “as common in nature” [BSK96]. This suggests that the fitness landscape is smooth and that evolutionary search is in some sense a hill climbing procedure. The designer may therefore want to choose a distance metric where a small variation is reflected in a small change in fitness. The next step is then to design a variation operator that works in this distance metric space. The operator is probabilistic and has no preference for the direction traversed in this space. The operator must, however, be generalized to jump arbitrary distances. This requires the introduction of a strategy parameter controlling the mean step size taken. This strategy parameter is then adapted during search in order to maintain maximal progress. In this way, an evolution strategy can be formalized for any problem, not just for real-valued functions. The designer's domain knowledge is reflected in the selection of a distance metric space, which now promises to form at least a semi-smooth fitness landscape.


At first glance, it may seem that the variation operator designed is a local search operator. A vector x* is said to be a local minimum of f over the set C^n if it is no worse than its feasible neighbours. Formally, if there exists an ε > 0 such that

f(x^*) \leq f(x) \qquad \forall\, x \in C^n \ \text{with}\ d_C(x, x^*) < \varepsilon .

The operator is therefore only “local” to the extent of the mutation strength, or the mean step size taken. A local minimum is also a global minimum when it is no worse than all other vectors, that is, f(x*) ≤ f(x), ∀ x ∈ C^n. When the search space is bounded and the mutation strength is large, one might call this global search. A function is called unimodal if it has exactly one local minimum; otherwise it is called multimodal. When the fitness landscape is unimodal, a local search corresponds to a global search. The distance metric and, hence, the fitness landscape have now been defined. The landscape promises to be smooth but may be multimodal. An operator has been designed capable of modifying individuals by some average distance in this landscape. There is now one factor which remains to be investigated: selection. It results in a new landscape concept, called the expected fitness landscape, which is introduced in the following section. The properties of this landscape will determine how fast individuals will be located in the global region and whether global convergence in linear time may be expected in general. Step size control and directed search are then discussed in relation to the expected fitness landscape. The final section of this work investigates the use of large step sizes for faster search.

5.1 Expected Fitness Landscape

Essentially, the progress of the single best individual is of central interest. Therefore, the alternative progress rate definition for evolutionary algorithms may be used [Rud97a, page 207]:

\varphi'_f = E\Bigl[ \min\{ f(x_i^{(g)}) : i = 1, \ldots, \mu \} - f\bigl(x_{1:\lambda}^{(g+1)}\bigr) \;\Big|\; x_1^{(g)}, \ldots, x_\mu^{(g)} \Bigr]    (5.5)

and similarly

\varphi'_x = E\Bigl[ \min\{ d_C(x_i^{(g)}, x^*) : i = 1, \ldots, \mu \} - d_C\bigl(x_{1:\lambda}^{(g+1)}, x^*\bigr) \;\Big|\; x_1^{(g)}, \ldots, x_\mu^{(g)} \Bigr]    (5.6)


The progress rates computed from fitness values, such as the ones given by (5.5) and (5.4), indicate the progress towards a local minimum only. Progress towards the global minimum in a multimodal landscape can only be computed in terms of the distance metric, and when the global minimum is known. When the support of the variation distribution covers the feasible region of the optimization problem, global convergence may be proven [Bäc96, p. 130]. These results, however, do not have much practical interest as they require unlimited time for the algorithm to converge. Most of the theoretical analyses of convergence rates for evolutionary algorithms are based on unimodal fitness functions. In this case, there is only one local minimum and the progress in terms of fitness corresponds to progress in terms of the distance metric. Fast global convergence for multimodal functions is nevertheless more interesting. As long as ϕ*_x ≥ c > 0, the evolutionary algorithm works. For this reason an attempt will be made to investigate when this may be the case for multimodal landscapes. The approach taken is to establish the condition for which the progress in fitness corresponds to progress towards the global optimum defined in terms of the distance metric. That is, all parents, x ∈ C^n, satisfy the following condition:

E\bigl[ f(x_{1:\lambda/\mu}) \,\big|\, x \bigr] < f(x) \;\Leftrightarrow\; E\bigl[ d_C(x_{1:\lambda/\mu}, x^*) \,\big|\, x \bigr] < d_C(x, x^*) .    (5.7)

When (5.7) holds, positive progress is maintained and the evolutionary algorithm will converge in mean to the global optimum. Now consider the case when

E\bigl[ f(x_{1:\lambda/\mu}) \,\big|\, x_j \bigr] < E\bigl[ f(x_{1:\lambda/\mu}) \,\big|\, x_k \bigr] \;\Leftrightarrow\; d_C(x_j, x^*) < d_C(x_k, x^*)    (5.8)

where j, k ∈ {1, . . . , µ}, j ≠ k, denote any two parents. This implies that the expected fitness landscape, E[f(x_{1:λ/µ}) | x], is unimodal and monotone decreasing. Furthermore, let the progress rate in terms of the fitness values be positive, i.e.

E\bigl[ f(x_{1:\lambda/\mu}) \,\big|\, x_j \bigr] < f(x_j) \quad \text{and} \quad E\bigl[ f(x_{1:\lambda/\mu}) \,\big|\, x_k \bigr] < f(x_k) .    (5.9)

When (5.8) and (5.9) are satisfied, then (5.7) holds. This can be verified by substituting x_k = x and x_j = E[x_{1:λ/µ} | x] in (5.8) and using condition (5.9). The left hand side of (5.8) will then read: E[f(x_{1:λ/µ}) | (x_{1:λ/µ} | x)] < E[f(x_{1:λ/µ}) | x], which is equivalent to E[f(x_{1:λ/µ}) | x_j] < f(x_j) and therefore holds by (5.9).


The right hand side of (5.8) will simply read: E[d_C(x_{1:λ/µ}, x*) | x] < d_C(x, x*). Also, since x = x_k, then from (5.9), E[f(x_{1:λ/µ}) | x] < f(x). The offspring produced by a parent closer to the optimum will on average be preferred to an offspring whose parent is further away from the optimum. The combined effect of variation and selection is to smooth the fitness landscape. If the expected fitness landscape is unimodal and its minimum is the same as the global minimum, the evolutionary algorithm will converge in mean to the global optimum when the progress rate in terms of fitness is positive. This concept will now be illustrated by a number of numerical examples and its consequences discussed.

Having chosen a meaningful distance metric for the task, the expected fitness landscape will now depend on the chosen variation operator working in this metric space. A mutation operator is random in the sense that the variation resulting from it is independent of the organism and its milieu.

Figure 5.1: Expected fitness landscapes for the step function using the mean step sizes of σ = 0, 0.1, 0.3, 0.6, 1, and 10


The operator should guarantee an average neutrality of the evolution process in the absence of selective pressure. There should be no deterministic drift as a result of this mutation without selection. It is natural selection alone which is responsible for the search bias. That is, the expectation of the variation of x should be x, i.e. E(ν(x)) = x. The central limit theorem of statistics says that data which are influenced by many small and unrelated random effects are approximately normally distributed. Assuming this to be the common case for natural systems, the variation operator may be written as

x^{(g+1)} = x^{(g)} + \sigma N(0, 1) ,    (5.10)

where N(0, 1) is a Gaussian distributed random variable with an expectation of zero and variance one. The variation operator is altered by modifying the mean step size σ. Consider the step function shown in figure 5.1, where the horizontal axis takes on the parent values x^{(g)} and the vertical axis the expected fitness of the best offspring out of λ/µ, i.e. E[f(x_{1:λ/µ}^{(g+1)})]. The expectation is computed numerically by performing 10,000 ‘one generation experiments’ for λ/µ = 10. The mean step sizes taken are σ = 0, 0.1, 0.3, 0.6, 1, and 10. For σ = 0 the function topology corresponds to the original step function. In the case where σ > 0, the fitness landscape is transformed to an expected fitness landscape. A positive progress rate follows from all curves below the original function. Negative progress is obtained when the expected fitness curve lies above the original function curve. No progress is expected where the curves touch. See figure 5.1 for examples of negative, zero, and positive progress for the various mean step sizes used. For an algorithm using variable step sizes, the function topology with the lowest expected fitness value E[f(x_{1:λ/µ}^{(g+1)})] is selected. In other words, not only is the best x selected, but also the corresponding σ and, hence, progress is maximized. A fixed mean step size of 0.6 or 1, for the step function, results in a unimodal expected fitness function and positive progress. In this case the algorithm will converge to the global optimum on average.

Finding the global optimum is more challenging when the expected fitness landscape is multimodal. This is illustrated using Rastrigin's function, shown in figure 5.2. The step sizes applied in the previous example are used again here. In this case positive progress is assured most of the time for step sizes 0.6 and 1. However, as the parents near the global optimum the progress becomes negative. This region is shown enlarged in figure 5.2. Notice that when a parent is located near the global minimum, a lower expected fitness value may be maintained by decreasing the mean step size.
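The numerical procedure just described can be sketched as follows; the simple one-dimensional step function used here is a stand-in for the benchmark definition in the appendix, and the grid of parent values is chosen arbitrarily.

```python
import numpy as np

def expected_best_offspring(f, parents, sigma, n_offspring=10, n_trials=10_000, seed=0):
    """Monte Carlo estimate of the expected fitness landscape E[f(x_{1:lambda/mu})].

    For each parent value x, `n_offspring` mutated offspring x + sigma*N(0,1)
    are generated, the best is kept, and the result is averaged over
    `n_trials` one-generation experiments.
    """
    rng = np.random.default_rng(seed)
    estimates = []
    for x in parents:
        offspring = x + sigma * rng.standard_normal((n_trials, n_offspring))
        estimates.append(np.min(f(offspring), axis=1).mean())
    return np.array(estimates)

# A simple one-dimensional step function stands in for the benchmark problem.
step = lambda x: np.floor(np.abs(x))
xs = np.linspace(-5.0, 5.0, 21)
for s in (0.0, 0.3, 1.0):
    print(f"sigma={s}:", np.round(expected_best_offspring(step, xs, s)[:5], 2))
```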


When the step size is reduced, the expected fitness landscape becomes multimodal and there is a danger of being trapped in a local minimum. If no parent is located in the global region, getting there will now on average be impossible. Only by enforcing a larger step size, or perhaps by using a probabilistic selection method, could an individual escape the local minimum. The only reliable method, however, is to shift the unimodal expected fitness landscape down such that it lies at least below all local minima. This can only be achieved by increasing λ before a decision is made to decrease the mean step size and turn the unimodal expected fitness landscape into a multimodal one. Elitist selection is also a means of extending the number of offspring created by a parent, but this is done over a number of generations.

Assuming the global region is reached, there still exists the possibility of being pushed out of it. This is the case for Shaffer's function, shown in figure 5.3.

Figure 5.2: Expected fitness landscapes for Rastrigin’s function for the mean step sizes of σ = 0, 0.1, 0.3, 0.6, 1, 10


The figure shows the expected fitness landscapes for the mean step sizes 1 and 5. For a step size of 5, the expected fitness landscape is unimodal; however, as the step size decreases a peak emerges at the global optimum, making the neighbouring local minima more attractive locations. The population is effectively pushed out of the global region and will on average be located in a local region. Staying in the global region would require a sharp decrease in the mean step size. To illustrate this effect, the function is optimized by a (15, 100) ES using the lognormal self-adaptation of the mean step size, both with and without global intermediate recombination, and an initial step size of 5. The results of these experimental studies are given in table 5.1, labelled (i) and (ii) respectively. It is anticipated from figure 5.3 that only a fixed step size of 5, which results in a unimodal expected fitness landscape, will on average locate the global optimum. However, the number of function evaluations required is greater, since λ/µ = 10 will result in negative progress near the global optimum.

Figure 5.3: Expected fitness landscapes for Shaffer’s function with the mean step sizes: σ = 0, 1, 5


Two experiments are performed with a fixed step size. The first uses the same population size as the first two experiments; in this case negative progress is expected near some local minima. The second experiment uses a larger population size, λ = 400, and maintains positive progress at all local minima. The total number of function evaluations used for each experiment is fixed at 100 × 1000 and is used as the termination criterion. The results are also summarized in table 5.1, labelled (iii) and (iv) respectively. As expected, only a fixed step size of 5 reliably locates the global optimum. This is guaranteed for experiment (iv) using the larger population size. For experiment (iii), which on average requires as many function evaluations to locate the global optimum, this is not as obvious. It may perhaps be explained by the fact that the individuals are on average located near the global region and that the mutation distribution covers the global optimum point.

Table 5.1: Optimization of Shaffer's function using the ES(15, 100) with an initial step size of 5 using i) lognormal update, ii) global intermediate recombination and lognormal update, iii) no update, and iv) ES(15, 400) no update.

study   best     median   mean     st. dev.   worst    Gm    global hit
(i)     3.0E−3   1.7E−2   3.9E−2   3.9E−2     1.8E−1   7     1/30
(ii)    3.0E−4   9.7E−3   8.0E−3   3.2E−3     1.0E−2   100   9/30
(iii)   6.1E−5   1.3E−3   2.0E−3   1.8E−3     7.4E−3   413   30/30
(iv)    7.1E−5   7.5E−4   1.1E−3   8.8E−4     3.2E−3   119   30/30

5.2 Step Size Control

The expected fitness landscapes plotted for the various examples in the previous section assumed that a fixed mean step size is used to produce all offspring. The actual evolution strategy, however, uses a mutative step-size control that introduces random fluctuations [OGH95]. Intermediate recombination in large populations may lead to a significant reduction in these fluctuations. It has been suggested that this effect may perhaps be explained by the genetic repair hypothesis [Bey97]. This is possibly reasonable for strategy parameters working on the same local expected fitness landscape. For another ‘derandomizing’ step size control and further discussion on this problem see [OGH95].


One of the main differences between the current implementation of step size control in evolution strategies and evolutionary programming is the use of recombination. The evolution strategy prefers the use of global intermediate recombination of the mean step sizes [Sch95, p. 148]. In this section the role of recombination of the mean step sizes is examined empirically. This is done by optimizing the 23 benchmark functions investigated in [YLL99], using the evolution strategy described in section 2.2.1, with and without recombination. A complete description of the benchmark functions is given in appendix A.2. The number of function evaluations used in the experiments is the same as in [YLL99]. For a population size of λ = 100, the number of generations used for the 23 problems are: 100 for test functions fu14, fu16, fu17, fu18, fu19, fu21, fu22, fu23; 1500 for fu1, fu6, fu10, fu12, fu13; 5000 for fu3, fu4, fu9; 2000 for fu2, fu11; and 20000, 3000, 9000, 4000, 200 for functions fu5, fu7, fu8, fu15, fu20 respectively. The experimental results for the test problems studied using no recombination are presented in table 5.2. When comparing these results with those using recombination, shown in table 5.3, it is clear that for these test cases recombination does better on average. However, for test problems fu3 and fu9 a slightly better result is observed using no recombination.

Table 5.2: Traditional ES using no recombination of mean step sizes.

fcn fu1 fu2 fu3 fu4 fu5 fu6 fu7 fu8 fu9 fu10 fu11 fu12 fu13 fu14 fu15 fu16 fu17 fu18 fu19 fu20 fu21 fu22 fu23

optimal 0.0 0.0 0.0 0.0 0.0 0.0 0.0 −12569.5 0.0 0.0 0.0 0.0 0.0 1.0 0.0003075 −1.0316285 0.397887 3.0 −3.86278 −3.322 −10.153 −10.403 −10.536

best 2.1E−05 0.09 431.19 0.36 18.10 73.0 0.063 −9667.65 56.77 5.025 0.039 0.522 0.318 1.003 0.0003075 −1.03163 0.397887 3.0 −3.86278 −3.322 −10.153 −10.403 −10.536

median 13.4 3.34 3266.68 1.38 163.93 773.0 0.269 −8857.22 78.98 9.757 1.422 4.154 6.544 2.677 0.0003087 −1.03163 0.397887 3.0 −3.86278 −3.322 −10.153 −8.235 −5.374

mean 52.6 4.87 3894.93 1.72 9213.54 1175.87 0.471 −8760.21 78.72 9.719 1.830 5.256 33.977 3.153 0.0011395 −1.03163 0.397887 3.0149 −3.86278 −3.283 −7.347 −6.922 −6.711

st. dev. 166.1 7.85 4965.84 1.98 44293.0 2209.04 0.815 1215.7 24.39 3.852 3.761 6.502 179.878 3.399 0.005840 8.1E−16 0.0E+00 0.146 4.0E−16 0.10 6.3 6.5 6.2

worst 372.3 15.02 10073.8 4.25 95941.2 4764 1.909 −7476.32 106.50 13.889 10.848 14.874 528.18 7.874 0.017466 −1.03163 0.397887 3.4465 −3.86278 −3.203 −2.630 −2.326 −2.427

Gm 1499 1999 4999 4999 18106 1146 2408 6975 3579 1499 1999 1499 1499 6 3998 50 52 49 73 186 98 98 99


The results for these two test cases are nevertheless poor for both versions. Note that a lower bound on the step sizes is not implemented. One way of increasing the reliability of the algorithm may be to combine the strategies by using a mixed implementation of the global intermediate recombination. This may be performed as follows,

\hat{\sigma}_{h,j}^{(g)} = \begin{cases} \bigl(\sigma_{i,j}^{(g)} + \sigma_{k_j,j}^{(g)}\bigr)/2, & k_j \in \{1, \ldots, \mu\}, \ \text{if } U_j > 1/2 \\ \sigma_{i,j}^{(g)} & \text{otherwise,} \end{cases}    (5.11)

where k_j is an index generated at random and anew for each j, and U is a random number uniformly distributed over (0, 1]. The experimental results using this mixed strategy are given in table 5.4. The result is a surprising improvement for functions fu3 and fu9 over the previous strategies. On average the results are better than or similar to the best result using the other two strategies. It is clear that different problems, and different stages of search, may require different step size controls, as observed here and in the previous section. There exists yet another means of achieving this goal adaptively.

Table 5.3: Traditional ES using intermediate recombination of mean step sizes.

fcn fu1 fu2 fu3 fu4 fu5 fu6 fu7 fu8 fu9 fu10 fu11 fu12 fu13 fu14 fu15 fu16 fu17 fu18 fu19 fu20 fu21 fu22 fu23

optimal 0.0 0.0 0.0 0.0 0.0 0.0 0.0 −12569.5 0.0 0.0 0.0 0.0 0.0 1.0 0.0003075 −1.0316285 0.397887 3.0 −3.86278 −3.322 −10.153 −10.403 −10.536

best 2.6E−14 5.2E−10 0.25 1.1E−06 8.7E−07 0.0 0.063 −11997 194.05 6.1E−08 0.0 2.8E−14 8.6E−13 0.998 0.0003075 −1.03163 0.397887 3.0 −3.86278 −3.322 −10.153 −10.403 −10.536

median 7.1E−13 6.2E−09 10230.6 1.1E−05 3.6E−05 0.0 0.114 −11464 225.86 1.7E−06 1.7E−15 5.9E−11 9.3E−10 1.047 0.0003075 −1.03163 0.397887 3.0 −3.86278 −3.322 −10.153 −10.403 −10.536

mean 4.1E−11 1.9E−08 8795.25 1.5E−4 0.313 0.0 0.112 −11460 225.08 0.0547 0.0077 0.0138 0.0066 1.602 0.0003075 −1.03163 0.397887 3.0 −3.86278 −3.301 −7.5871 −10.051 −10.273

st. dev. 2.2E−10 5.8E−08 9662.21 8.8E−04 1.852 0.0 0.045 754.9 18.99 0.5379 0.024 0.064 0.036 1.55 2.9E−19 8.1E−16 0.0E+00 0.0E+00 5.8E−15 0.081 5.29 2.40 2.58

worst 5.9E−10 1.5E−07 17861.3 2.3E−03 3.987 0.0 0.177 −10319 245.31 1.6413 0.0492 0.10367 0.1099 4.007 3.1E−04 −1.03163 0.397887 3.0 −3.86278 −3.203 −2.6304 −5.1288 −2.6434

Gm 1498 1998 3492.5 4993 19999 532 1810 2688 2782 1497 1992 1498 1496 25 2427 67 68 68 92 182 99 99 98
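The mixed global intermediate recombination of equation (5.11) above can be sketched as follows; the array shapes and the random number generator are illustrative assumptions.

```python
import numpy as np

def mixed_recombine_sigma(sigmas, i, rng):
    """Mixed global intermediate recombination of mean step sizes, eq. (5.11).

    `sigmas` is the (mu x n) array of parental mean step sizes and `i` the
    index of the parent being varied.  For each component j, with
    probability 1/2 the value is averaged with that of a randomly chosen
    parent k_j, otherwise it is copied unchanged.
    """
    mu, n = sigmas.shape
    k = rng.integers(0, mu, size=n)        # a new random parent index per component
    use_mix = rng.random(n) > 0.5          # U_j > 1/2
    return np.where(use_mix,
                    0.5 * (sigmas[i] + sigmas[k, np.arange(n)]),
                    sigmas[i])

rng = np.random.default_rng(3)
sigmas = rng.uniform(0.1, 1.0, (15, 5))    # 15 parents, 5 mean step sizes each
print(mixed_recombine_sigma(sigmas, 0, rng).round(3))
```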


Assumptions made about the environment may be used to control search direction and step sizes. These are so-called directed search methods and will be discussed briefly in the following section.

Table 5.4: Traditional ES using mixed intermediate recombination of mean step sizes.

fcn fu1 fu2 fu3 fu4 fu5 fu6 fu7 fu8 fu9 fu10 fu11 fu12 fu13 fu14 fu15 fu16 fu17 fu18 fu19 fu20 fu21 fu22 fu23


optimal 0.0 0.0 0.0 0.0 0.0 0.0 0.0 −12569.5 0.0 0.0 0.0 0.0 0.0 1.0 0.0003075 −1.0316285 0.397887 3.0 −3.86278 −3.322 −10.153 −10.403 −10.536

best 3.4E−20 1.1E−14 5.6E−13 8.4E−04 1.5E−09 0.0E+00 4.9E−02 −11108.7 18.9 2.2E−11 0.0E+00 2.9E−20 3.7E−20 0.998 3.1E−04 −1.03163 0.397887 3.0 −3.86278 −3.32237 −10.1532 −10.403 −10.536

median 1.2E−17 1.2E−11 9.0E−10 9.6E−03 4.1E−06 0.0E+00 6.2E−02 −10447.4 42.8 1.6E+00 2.0E−02 1.0E−01 1.1E−02 1.992 3.1E−04 −1.03163 0.397887 3.0 −3.86278 −3.32237 −5.38515 −10.403 −10.536

mean 6.3E−15 1.4E−05 4.1E−09 2.4E−02 1.3E+00 1.2E+00 6.4E−02 −10368.4 44.2 1.7E+00 3.5E−02 2.8E−01 2.3E−01 2.276 1.3E−03 −1.03163 0.397887 3.0 −3.86278 −3.3075 −6.79 −8.6612 −9.1502

st. dev. 5.3E−14 1.4E−04 1.8E−08 4.8E−02 3.4E+00 1.2E+01 2.1E−02 788.2 18.5 2.2E+00 8.7E−02 1.0E+00 1.3E+00 2.60 5.0E−03 8.1E−16 0.0E+00 0.0E+00 4.1E−15 7.0E−02 5.7 5.4 5.1

worst 1.6E−13 4.3E−04 5.4E−08 1.2E−01 4.0E+00 36.0E+00 9.8E−02 −9310.8 676.6 4.8E+00 2.2E−01 2.9E+00 3.7E+00 6.903 1.2E−02 −1.03163 0.397887 3.0 −3.86278 −3.20316 −2.63047 −2.7519 −2.5497

Gm 1499 1999 4999 4999 19999 709 1774 1337 2169 1423 1744 1499 1499 11 3588 53 55 55 72 146 98.5 98 99

5.3 Directed Search

A directed variation is dependent on the organism-environment system. The mutation itself is under environmental feedback. Consider a direct search technique like the method of Nelder-Mead [NM65]: the algorithm maintains a simplex of approximations to an optimal point.


The algorithm proceeds by changing the worst individual, a ((λ − 1) + λ) selection¹, to

x^{(g+1)} = \frac{(1 + \chi)}{\lambda - 1} \sum_{i=1}^{\lambda - 1} x_{i:\lambda}^{(g)} - \chi \, x_{\lambda:\lambda}^{(g)}    (5.12)

¹ When no improvement is obtained, all but the best individual are modified in a shrink step.


where λ = n + 1 and typically χ ∈ {1, 2, 1/2, −1/2}, depending on whether a reflection, expansion, outside or inside contraction step is taken in the Nelder-Mead iteration [Kel99]. Similar strategies using different vertices are employed in differential evolution [SP97], generalized intermediate recombination [Bäc96, p. 74], and other direct search methods [DT91]. In the case of evolutionary gradient search [Sal98],

x^{(g+1)} = x^{(g)} - \sigma^{(g+1)} \frac{\tilde{\nabla} f}{\|\tilde{\nabla} f\|}    (5.13)

where the estimated gradient is computed as a weighted sum of λ trial points t_i,

\tilde{\nabla} f = \sum_{i=1}^{\lambda} \bigl( f(t_i) - f(x^{(g)}) \bigr)\bigl( t_i - x^{(g)} \bigr), \quad \text{and} \quad t_i = x^{(g)} + N_i\bigl(0, \sigma_n^{2\,(g)}\bigr) .

The weights are determined by the fitness function. Using local landscape information may be misleading. It is expected that the simplex and gradient approximations will find the global optimum of some well-behaved functions only. The methods do not utilize the smooth expected fitness landscape. If they did, then the estimated gradient in (5.13) would use the parent positions and the best offspring fitness. Similarly, for the directed search strategy, the simplex would be estimated from the parents but ranked according to their best offspring. The strategies (5.13) and (5.12) are effectively single-parent strategies and therefore cannot utilize information from the smoothed fitness landscape resulting from variation and selection.
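The evolutionary gradient step of (5.13) can be sketched as follows; the sphere function, the number of trial points and the fixed step size are illustrative assumptions, and the self-adaptation of σ described in [Sal98] is omitted.

```python
import numpy as np

def evolutionary_gradient_step(f, x, sigma, n_trials=20, seed=0):
    """One step of evolutionary gradient search in the sense of (5.13).

    Trial points are drawn around the parent, the gradient is estimated as
    the fitness-weighted sum of the trial displacements, and the parent is
    moved a step of length sigma against the normalised estimate
    (minimisation).  A sketch only.
    """
    rng = np.random.default_rng(seed)
    t = x + sigma * rng.standard_normal((n_trials, x.size))
    grad = ((f(t) - f(x))[:, None] * (t - x)).sum(axis=0)   # weighted sum of trials
    return x - sigma * grad / np.linalg.norm(grad)

sphere = lambda z: np.sum(np.atleast_2d(z) ** 2, axis=1)    # simple test function
x = np.array([2.0, -1.5])
print(evolutionary_gradient_step(sphere, x, sigma=0.5))
```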

5.4 Hopeful Monsters

The variation needed to satisfy the reachability property is called a hopeful monster mutation.


According to the reachability property, discussed in section 2.2.4, the variation guarantees the location of the global optimum with probability one, although the waiting time may be long. It has been expressed that large mutations, or jumps, may keep the evolutionary algorithm from being caught in local minima [YL96, Kap96]. For this reason, a Cauchy mutation has been proposed as an alternative to the Gaussian one. However, for a decreasing step size even the Cauchy mutation will get stuck in a local minimum [Rud97b]. Only a Cauchy mutation with a fixed step size, or a lower bound on the step size, can secure coverage [Rud97b]. The mutation described in this section is based on a uniform distribution over the search space and is independent of the parent's position. The hopeful monster mutation is defined as

x_{h,j}^{(g+1)} = \begin{cases} \underline{x}_j + U (\overline{x}_j - \underline{x}_j), & \text{if } U_h < \mu/\lambda \text{ and } U_j < p_m \\ x_{i,j}^{(g)} + \sigma_{h,j}^{(g+1)} N_j(0, 1) & \text{otherwise,} \end{cases}    (5.14)

where U is a random number uniformly distributed over [0, 1].
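A sketch of the hopeful monster mutation (5.14) is given below; the bounds, population parameters and random number generator are placeholders chosen for illustration.

```python
import numpy as np

def hopeful_monster_mutation(parent, sigma, lower, upper, mu, lam, pm, rng):
    """Offspring variation with the hopeful monster mutation of (5.14).

    With probability mu/lam the offspring is a 'monster': each component is,
    with probability pm, redrawn uniformly over its bounds [lower_j, upper_j].
    Remaining components (and all components of ordinary offspring) receive
    the usual Gaussian mutation with mean step sizes `sigma`.
    """
    n = parent.size
    child = parent + sigma * rng.standard_normal(n)           # Gaussian part
    if rng.random() < mu / lam:                                # U_h < mu/lambda
        jump = rng.random(n) < pm                              # U_j < p_m
        child[jump] = (lower[jump]
                       + rng.random(np.count_nonzero(jump)) * (upper - lower)[jump])
    return child

rng = np.random.default_rng(7)
n = 5
lower, upper = -500.0 * np.ones(n), 500.0 * np.ones(n)
x = rng.uniform(lower, upper)
print(hopeful_monster_mutation(x, np.full(n, 3.0), lower, upper,
                               mu=15, lam=100, pm=1 / n, rng=rng))
```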

Table 5.5: Hopeful monster mutation and mixed recombination of mean step sizes.

func. fu1 fu2 fu3 fu4 fu5 fu6 fu7 fu8 fu9 fu10 fu11 fu12 fu13 fu14 fu15 fu16 fu17 fu18 fu19 fu20 fu21 fu22 fu23

optimal 0.0 0.0 0.0 0.0 0.0 0.0 0.0 −12569.5 0.0 0.0 0.0 0.0 0.0 1.0 0.0003075 −1.0316285 0.397887 3.0 −3.86278 −3.322 −10.153 −10.403 −10.536

best 3.2E−16 3.2E−12 2.6E−10 1.4E−04 5.6E−09 0.0E+0.0 4.3E−02 −12569.5 0.0E0.0 8.0E−09 0.0E+00 1.2E−17 1.1E−15 0.998004 0.0003075 −1.0316 0.397887 3.0E+0.0 −3.86278 −3.322 −10.153 −10.403 −10.536

median 2.3E−14 7.7E−10 3.5E−09 1.3E−02 5.4E−05 0.0E+0.0 6.6E−02 −12569.5 0.0E0.0 1.3E−05 2.1E−02 4.4E−13 1.3E−10 0.998004 0.0003075 −1.0316 0.397887 3.0E+0.0 −3.86278 −3.322 −10.153 −10.403 −10.536

mean 6.2E−12 2.7E−06 5.5E−08 2.3E−02 7.3E−02 0.0E+0.0 6.7E−02 −12569.5 1.4E−15 1.5E−01 3.4E−02 3.5E−03 4.2E−03 0.998011 0.0003075 −1.0316 0.397887 3.0E+0.0 −3.86278 −3.303 −7.746 −9.290 −7.364

st. dev. 4.5E−11 2.5E−05 3.1E−07 6.0E−02 4.3E−01 0.0E+0.0 2.8E−02 3.3E−12 1.3E−14 6.1E−01 6.5E−02 3.4E−02 9.6E−03 5.1E−05 2.1E−18 1.2E−15 0.0E+00 9.6E−15 4.1E−15 8.1E−02 5.5 4.3 6.3

worst 1.4E−10 7.6E−05 8.6E−07 1.7E−01 1.2E+00 0.0E+0.0 1.0E−01 −12569.5 3.9E−14 1.2E+00 1.2E−01 1.0E−01 1.1E−02 0.998146 0.0003075 −1.0316 0.397887 3.0E+0.0 −3.86278 −3.203 −2.683 −2.752 −2.427

Gm 1499 1998 4999 4999 19999 686 1891 1810 3272 1498 1877 1499 1499 93 3942 77 57 83 80 163 99 98 98


Table 5.6: Result for 30 independent runs using Gaussian mutation only.

function   optimal    best       mean       st. dev.   worst     Gm     global hit
fu8        −12569.5   −11108.7   −10368.4   788.2      −9310.8   1337   0/30
fu9        0.0        18.9       44.2       18.5       676.6     2169   0/30
fu14       1.0        0.998      2.276      2.60       6.903     11     15/30
f'u14      1.0        0.998      2.073      0.99       4.002     8      14/30

On average only one offspring will undergo this mutation. The mutation is aligned to the coordinate system when p_m = 1/n, and p_m = 1 is required to cover the entire search space uniformly. The univariate Cauchy mutation [Rud97b], as well as small mutation rates [Sal96], bias the search toward a grid that is aligned with the coordinate system. This is also true for a Gaussian mutation when there is no correlation between multiple mean step sizes. In order to evaluate the effectiveness of the hopeful monster mutation, experiments are carried out on the 23 benchmark functions described in [YLL99]. The complete set of results is given in table 5.5. The hopeful monster experiments using p_m = 1/n showed a remarkable improvement for generalized Schwefel's problem 2.26 (fu8), generalized Rastrigin's function (fu9), and Shekel's foxholes function (fu14). These results, summarized in table 5.7, are better than those obtained using the univariate Cauchy mutation [YLL99]. The results using p_m = 0 for the same problems are summarized in table 5.6. The reason for this great success of the hopeful monster mutation on the generalized Schwefel and Rastrigin functions is simply the independence of the parameters, which the p_m = 1/n mutation exploits [Sal96]. The dependence on the rotation of the coordinate system can be further demonstrated by optimizing a rotated function. Shekel's foxholes function, shown in figure 5.4, may be rotated by π/4 by letting x' = [(x_1 − x_2), (x_1 + x_2)]/2. The rotated function is then denoted by f'_{u14}(x) = f_{u14}(x').

Table 5.7: Result for 30 independent runs using the hopeful monster mutation along coordinate axes and Gaussian mutation.

function     fu8        fu9       fu14      f′u14
optimal      −12569.5   0.0       1.0       1.0
best         −12569.5   0.0E+00   0.998     0.998
mean         −12569.5   1.4E−15   0.998     1.837
st. dev.     3.3E−12    1.3E−14   5.1E−05   0.96
worst        −12569.5   3.9E−14   0.998     4.950
Gm           1810       3272      93        49
global hit   30/30      30/30     30/30     14/30


Table 5.8: Result for 30 independent runs using the hopeful monster mutation uniformly distributed over the search space and Gaussian mutation.

function     fu8        fu9     fu14    f′u14
optimal      −12569.5   0.0     1.0     1.0
best         −11286.4   29.8    0.998   0.998
mean         −10457.2   49.7    1.841   1.459
st. dev.     470.0      11.9    1.189   0.688
worst        −9529.5    85.6    5.916   3.968
Gm           2006       2116    15      71
global hit   0/30       0/30    19/30   21/30

using none. This may be verified by comparing the last rows in tables 5.6 and 5.7. Finally, to implement the hopeful monster mutation properly, i.e. independently of the coordinate system, pm = 1 must be used. These results are given in table 5.8. In comparison with the results in table 5.6, it seems that no significant improvement may be expected from hopeful monster mutations. They do, however, guarantee global convergence in infinite time, but the experiments were conducted in finite time.
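As an illustration of equation (5.14), the following is a minimal Python sketch of offspring generation with the hopeful monster jump. The arrays lower, upper and sigma, and the simple parent-assignment rule, are assumptions made for the sketch rather than details taken from the experiments.

```python
import numpy as np

def hopeful_monster_offspring(parents, sigma, lower, upper, lam, p_m, rng):
    """Sketch of one generation's offspring with the hopeful monster mutation (5.14).

    parents : (mu, n) array of parent object variables
    sigma   : (lam, n) array of already self-adapted mean step sizes
    lower, upper : (n,) arrays giving the box bounds of the search space
    """
    mu, n = parents.shape
    offspring = np.empty((lam, n))
    for h in range(lam):
        i = h % mu                              # assumed parent assignment for offspring h
        u_h = rng.random()                      # decides whether h may be a 'hopeful monster'
        u = rng.random(n)                       # per-coordinate decisions
        monster = (u_h < mu / lam) & (u < p_m)  # condition in the first case of (5.14)
        gauss = parents[i] + sigma[h] * rng.standard_normal(n)
        uniform = lower + rng.random(n) * (upper - lower)
        offspring[h] = np.where(monster, uniform, gauss)
    return offspring

# usage: offspring = hopeful_monster_offspring(P, S, lo, hi, 200, 1.0/n, np.random.default_rng())
# On average only about one offspring per generation takes the uniform jump.
```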


Figure 5.4: Shekel’s foxholes function

5.5  Summary

This chapter has given a new interpretation of when one may expect evolutionary search to perform effectively and efficiently. This is done by visualizing an expected fitness landscape. The topology of this landscape aids in the interpretation of when and how evolutionary search works. When this landscape is unimodal and its optimum is also the global optimum, the population will be located in the global region. Furthermore, if positive progress in fitness is maintained, the evolutionary algorithm will converge to the global optimum in mean.

Different step size controls are needed for different problems. A small change in the mean step size is not necessarily the optimal strategy for all problems; Schaffer's function is an example of this. Directed search techniques do not take advantage of the structure of the fitness landscape smoothed out by selection. They are effectively single parent strategies. Finally, although larger step sizes may increase the probability of escaping local minima for some problems, they may not contribute to faster evolutionary search for other problems. A more reliable means of escaping local minima is to preserve a larger step size and increase the number of offspring. One means of achieving this adaptively would be to generate the pre-specified number of offspring and, when no improvement over the parent is found, continue generating individuals until an improvement is obtained.

Up until now it has been assumed that all situation descriptions naturally satisfy any constraint. However, this is not possible in every case. The following chapter addresses this issue in detail.

Chapter 6

Constraint Handling

When a representation and variation operators are unable to 'naturally' satisfy the constraints of a problem, then the only approach available is to eliminate infeasible solutions by selection. This requires transforming the fitness function so that it includes a measure of the constraint violation. This is known as the penalty function method discussed in section 2.2.2 and is written as

ψ(x) = f(x) + rg φ(gj(x); j = 1, . . . , m)

(6.1)

where φ ≥ 0 is a real valued function which imposes a 'penalty' controlled by a sequence of penalty coefficients {rg}, g = 0, . . . , G. The penalty function φ used in this study is the quadratic loss function (2.18), but any other penalty function is equally valid. Different penalty functions characterize different problems. It is unlikely that a generic penalty function exists which is optimal for all problems. The introduction of penalties may transform the original smooth objective function into a rugged multimodal landscape. As discussed in the previous chapter, the search may then easily become trapped in local minima. For this reason, it is necessary to develop a penalty function method which attempts to preserve the topology of the objective function and yet enables the search to locate feasible solutions. A new way of interpreting penalty function methods for evolutionary search is discussed in the following section. The result is a new ranking method for constraint handling, which is presented in the second section. This is then followed by a section devoted to experimental studies illustrating the effectiveness of the proposed approach.
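As an illustration, a minimal sketch of the transformation (6.1) is given below. The quadratic loss form used for φ is an assumption here (the exact function (2.18) is defined earlier in the thesis), and the names f, constraints and r_g are placeholders.

```python
def penalty(g_values):
    """Quadratic-loss style constraint violation: sum of squared positive violations.

    g_values: iterable of inequality constraint values g_j(x), feasible when g_j(x) <= 0.
    """
    return sum(max(0.0, g) ** 2 for g in g_values)

def psi(x, f, constraints, r_g):
    """Penalised fitness psi(x) = f(x) + r_g * phi(x), cf. equation (6.1)."""
    return f(x) + r_g * penalty(c(x) for c in constraints)
```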

6.1  Penalty Method

For a given penalty coefficient rg > 0 let the ranking of λ individuals be ψ(x1 ) ≤ ψ(x2 ) ≤ . . . ≤ ψ(xλ )

(6.2)

where ψ is the transformation function given by equation (6.1). Now examine the adjacent pair i and i + 1 in the ranked order:

$$f_i + r_g \varphi_i \le f_{i+1} + r_g \varphi_{i+1}, \qquad i \in \{1, \ldots, \lambda - 1\}, \qquad (6.3)$$

where the notation fi = f(xi) and φi = φ(gj(xi); j = 1, . . . , m) is used for convenience. Introduce a parameter, ři, which will be referred to as the critical penalty coefficient for the adjacent pair i and i + 1,

$$\check{r}_i = (f_{i+1} - f_i)/(\varphi_i - \varphi_{i+1}), \qquad \text{for } \varphi_i \ne \varphi_{i+1}. \qquad (6.4)$$

For the given choice of rg ≥ 0, there are three different cases which may give rise to the inequality (6.3):

1. fi ≤ fi+1 and φi ≥ φi+1: the comparison is said to be dominated by the objective function and 0 < rg ≤ ři, because the objective function f plays the dominant role in determining the inequality. When both individuals are feasible, φi = φi+1 = 0 and ři → ∞.

2. fi ≥ fi+1 and φi < φi+1: the comparison is said to be dominated by the penalty function and 0 < ři < rg, because the penalty function φ plays the dominant role in determining the inequality.

3. fi < fi+1 and φi < φi+1: the comparison is said to be nondominated and ři < 0. Neither the objective nor the penalty function can determine the inequality by itself.

When comparing nondominated and feasible individuals, the value of rg has no impact on the inequality (6.3). In other words, it does not change the order of ranking of the two individuals. However, the value of rg is critical in the first two cases, as ři is the flipping point that determines whether the comparison is objective or penalty function dominated. For example, if rg is increased to a value greater than ři in the first case, the order of the pair flips and individual i + 1 becomes the fitter of the two. For the entire population,


the chosen value of rg used for comparisons will determine the fraction of individuals dominated by the objective and penalty functions. Not all possible rg values can influence the ranking of individuals. They have to be within a certain range, r̲g < rg < r̄g, to influence the ranking, where the lower bound r̲g is the minimum critical penalty coefficient computed from adjacent individuals ranked only according to the objective function, and the upper bound r̄g is the maximum critical penalty coefficient computed from adjacent individuals ranked only according to the penalty function. In general, there are three different categories of rg values:

1. rg < r̲g: all comparisons are based only on the fitness function. rg is too small to influence the ranking of individuals; call this under-penalization.

2. rg > r̄g: all comparisons are based only on the penalty function. rg is so large that the impact of the objective function can be ignored; call this over-penalization.

3. r̲g < rg < r̄g: comparisons are based on a combination of objective and penalty functions.

All penalty methods can be classified into one of the above three categories. Some methods may fall into different categories during different stages of the search. It is important to understand the difference among these three categories because they indicate which function (or combination of functions) is driving the search process and how the search progresses. For example, most dynamic methods start with a low rg value (i.e., rg < r̲g) in order to find a good region that may contain both feasible and infeasible individuals. Towards the end of the search, a high rg value (i.e., rg > r̄g) is often used in order to locate a good feasible individual. Such a dynamic method works well for problems whose unconstrained global optimum is close to the constrained global optimum. It is unlikely to work well for problems whose constrained global optimum is far away from the unconstrained one, because the initial low rg value drives the search towards the unconstrained global optimum and thus further away from the constrained one. The traditional constraint-handling technique used in evolution strategies falls roughly into the category of over-penalization, since all infeasible individuals are regarded as worse than feasible ones [Sch95, HS96, Deb99, JV99]. In fact, canonical evolution strategies allow only feasible individuals in the initial


population. To perform constrained optimization, an ES may be used to find a feasible initial population by minimizing the penalty function ([Sch95], page 115). Once a feasible population is found, the ES algorithm will then minimize the objective function and reject all infeasible solutions generated.

It has been widely recognized that neither under- nor over-penalization is a good constraint-handling technique and that there should be a balance between preserving feasible individuals and rejecting infeasible ones [GC97]. In other words, ranking should be dominated by a combination of objective and penalty functions, and so the penalty coefficient rg should be within the bounds r̲g < rg < r̄g. It is worth noting that the two bounds are not fixed. They are problem dependent and may change from generation to generation, as they are also determined by the current population. A simple way to measure the balance of dominance of the objective and penalty functions is to count how many comparisons of adjacent pairs are dominated by the objective and penalty function, respectively. Such a count can be computed for any given rg by counting the number of critical penalty coefficients given by (6.4) which are greater than rg. If there is a predetermined preference for the number of adjacent comparisons that should be dominated by the penalty function, then a corresponding penalty coefficient could be found. It is clear from the analysis in this section that all a penalty method tries to do is to obtain the right balance between the objective and penalty functions, so that the search moves towards the optimum in the feasible space, not just towards the optimum in the combined feasible and infeasible space. One way to achieve such a balance effectively and efficiently is to adjust it directly and explicitly. This is what stochastic ranking, described in the next section, does.
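To make the notion of dominance counting concrete, the following sketch evaluates the critical penalty coefficients (6.4) of a ranked population and reports what fraction of adjacent comparisons a given rg leaves objective-dominated. The function names are illustrative and not part of the original study.

```python
def critical_coefficients(f_vals, phi_vals):
    """Critical penalty coefficients (6.4) for adjacent pairs in a ranked population."""
    r_check = []
    for i in range(len(f_vals) - 1):
        if phi_vals[i] != phi_vals[i + 1]:
            r_check.append((f_vals[i + 1] - f_vals[i]) /
                           (phi_vals[i] - phi_vals[i + 1]))
        else:
            # both feasible (or equal violation): rg cannot flip this comparison
            r_check.append(float('inf'))
    return r_check

def objective_dominated_fraction(f_vals, phi_vals, r_g):
    """Fraction of adjacent comparisons left objective-dominated by a given rg.

    Nondominated pairs (negative critical coefficient) are simply not counted
    as objective-dominated in this simplified sketch.
    """
    r_check = critical_coefficients(f_vals, phi_vals)
    return sum(1 for r in r_check if 0 < r_g <= r or r == float('inf')) / len(r_check)
```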

6.2  Stochastic Ranking

Since the optimal rg is hard to determine, a different approach is used here to balance the dominance of the objective and penalty function. A probability Pf of using only the objective function for comparisons in ranking in the infeasible regions of the search space is introduced. That is, given any pair of two adjacent individuals, the probability of comparing them (in order to determine which one is fitter) according to the objective function is 1 if both


individuals are feasible, otherwise it is Pf. This appears to be similar to the use of a probability by Surry and Radcliffe [SR97] in deciding the outcome of competitions between two individuals in tournament selection. The technique presented is, however, quite different because it uses rank-based selection and does not require any extra computational cost for self-adapting Pf. More importantly, the motivation of stochastic ranking comes from the need for balancing objective and penalty functions directly and explicitly in optimization. Surry and Radcliffe's method [SR97] does not attempt to balance the dominance of penalty and objective functions in a population.

Ranking is achieved in this work by a bubble-sort-like procedure (it can be regarded as the stochastic version of the classic bubble sort). The procedure provides a convenient way of balancing the dominance in a ranked set. In the bubble-sort-like procedure, λ individuals are ranked by comparing adjacent individuals in at least λ sweeps (it would be exactly λ sweeps if the comparisons were not made stochastic). The procedure is halted when no change in the rank ordering occurs within a complete sweep. Figure 6.1 shows the stochastic bubble sort procedure used to rank individuals in a population. The probability of an adjacent individual winning a comparison, i.e., holding the higher rank, in the ranking procedure is

$$P_w = P_{fw} P_f + P_{\varphi w}(1 - P_f) \qquad (6.5)$$

given that at least one individual is infeasible. Pfw is the probability of the individual winning according to the objective function and Pφw is the probability of the individual winning according to the penalty function. In the case where both adjacent individuals are feasible, Pw = Pfw. Now the probability of winning k more comparisons than losses is examined. The total number of wins must be k′ = (N + k)/2, where N is the total number of comparisons made. The probability of winning k′ comparisons out of N is given by the binomial distribution (whose standard deviation is √(N Pw(1 − Pw))):

$$P_w(y = k') = \binom{N}{k'} P_w^{k'} (1 - P_w)^{N - k'} \qquad (6.6)$$

The probability of winning at least k′ comparisons is

$$P_w(y \ge k') = 1 - \sum_{j=0}^{k'-1} \binom{N}{j} P_w^{j} (1 - P_w)^{N - j} \qquad (6.7)$$
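As a quick numerical check of (6.6) and (6.7), the sketch below evaluates the probability of winning at least k′ of N comparisons; the particular numbers in the example are arbitrary.

```python
from math import comb

def prob_at_least(N, k_prime, p_w):
    """P(y >= k') for N independent comparisons, each won with probability p_w, cf. (6.7)."""
    return 1.0 - sum(comb(N, j) * p_w**j * (1.0 - p_w)**(N - j) for j in range(k_prime))

# Example: probability of winning at least 6 out of N = 10 comparisons when p_w = 0.45.
print(prob_at_least(10, 6, 0.45))
```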


    Ij = j  ∀ j ∈ {1, . . . , λ}
    for i = 1 to N do
        for j = 1 to λ − 1 do
            sample u ∈ U(0, 1)
            if (φ(Ij) = φ(Ij+1) = 0) or (u < Pf) then
                if (f(Ij) > f(Ij+1)) then
                    swap(Ij, Ij+1)
                fi
            else
                if (φ(Ij) > φ(Ij+1)) then
                    swap(Ij, Ij+1)
                fi
            fi
        od
        if no swap done break fi
    od

Figure 6.1: Stochastic ranking using a bubble-sort-like procedure, where U(0, 1) is a uniform random number generator and N is the number of sweeps going through the whole population.

When Pf = 0 the ranking is an over-penalization and for Pf = 1 the ranking is an under-penalization. The initial ranking is always generated at random. Equations (6.6) and (6.7) show that the greater the number of comparisons (N), the less influence the initial ranking will have. It is worth noting that the probability Pw usually varies for different individuals in different stages of ranking (sorting). Now consider a case where Pw is constant during the entire ranking procedure, which is the case when fi < fj, φi > φj; j ≠ i, j = 1, . . . , λ. Then Pfw = 1 and Pφw = 0. If Pf = 1/2 is chosen, then Pw = 1/2. There will be an equal chance for a comparison to be made based on the objective or the penalty function. Only a feasible individual is desired as the final solution; therefore, Pf should be less than 1/2 so that there is a bias against infeasible solutions. The strength of the bias can be adjusted easily by adjusting only Pf. When the parameter N, the number of sweeps, approaches ∞, the ranking will be determined by the bias Pf. That is, if Pf > 1/2 the ranking is based on the objective function, and when Pf < 1/2 the ranking is the over-penalty ranking. Hence, an increase in the number of ranking sweeps is effectively equivalent to changing the parameter Pf, i.e., making it smaller or larger.
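For concreteness, the procedure in figure 6.1 can be transcribed almost line for line into Python. The sketch below assumes the objective values f and the constraint violations φ have already been evaluated for the λ individuals; variable names are illustrative.

```python
import random

def stochastic_ranking(f, phi, pf, n_sweeps=None):
    """Rank individuals with the stochastic bubble-sort of figure 6.1.

    f, phi : sequences of objective values and constraint violations (phi >= 0)
    pf     : probability of comparing an infeasible pair by the objective only
    Returns the index list I, best-ranked first.
    """
    lam = len(f)
    if n_sweeps is None:
        n_sweeps = lam                      # N = lambda, as used in the experiments
    I = list(range(lam))
    for _ in range(n_sweeps):
        swapped = False
        for j in range(lam - 1):
            u = random.random()
            if (phi[I[j]] == phi[I[j + 1]] == 0) or (u < pf):
                if f[I[j]] > f[I[j + 1]]:
                    I[j], I[j + 1] = I[j + 1], I[j]
                    swapped = True
            else:
                if phi[I[j]] > phi[I[j + 1]]:
                    I[j], I[j + 1] = I[j + 1], I[j]
                    swapped = True
        if not swapped:                     # no change within a complete sweep: halt
            break
    return I
```

With pf = 0 the sketch reduces to the over-penalization ranking, and with pf = 1 it ranks by the objective function alone, mirroring the discussion above.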


Thus N = λ can be fixed and Pf adjusted to achieve the best performance. These points are illustrated by optimizing a set of benchmark functions presented in appendix A.1 using different Pf values. Table 6.1 presents the average results for 30 independent runs of the evolution strategy presented in section 2.2.1 using stochastic ranking. The numbers in the table indicate the percentage of feasible individuals in the final population. The details of the experiment are given in the following section. It is quite clear from the table that for Pf > 1/2 finding feasible solutions becomes very difficult unless the unconstrained optimum happens to coincide with the constrained optimum, as is the case for problem g12.

Table 6.1: Average percentage of feasible individuals in the final population as a function of Pf (N = λ) for the test functions presented in appendix A.1.

fcn\Pf   0.525   0.500   0.475   0.450   0.000
g01      0       0       11      83      82
g02      4       30      50      57      58
g03      0       0       0       2       22
g04      0       17      78      78      80
g05      0       0       78      82      81
g06      0       0       0       53      81
g07      0       1       71      80      79
g08      0       100     100     100     100
g09      0       16      83      83      86
g10      0       0       0       74      73
g11      0       0       85      90      70
g12      100     100     100     100     5
g13      0       0       73      79      44

6.3  Experimental Study

The evolutionary optimization algorithm applied in this study is the evolution strategy described in section 2.2.1. One reason for choosing an evolution strategy is that it does not introduce any specialized constraint-handling variation operators. It will be shown that specialized and complex variation operators for constrained optimization problems are unnecessary, although they may be quite useful for particular types of problems (see for example [MNM96]). A simple extension to the evolution strategy, i.e., the use of the stochastic ranking scheme proposed in the previous section, can achieve significantly better results than other more complicated techniques [RY00].


Thirteen benchmark functions are studied. The first 12 are taken from [KM99] and the 13th from [Mic95]. The details, including the original sources, of these functions are listed in appendix A.1. Problems fc2, fc3, fc8, and fc12 are maximization problems. They may be transformed to minimization problems using −f(x). For each of the benchmark problems 30 independent runs are performed using a (30, 200)-ES and the stochastic ranking procedure described in the previous section. All runs are terminated after G = 1750 generations, except for fc12, which was run for 175 generations. Problem fc12 is the harder version studied in [KM99], where the feasible region of the search space consists of 9³ disjoint spheres with a radius of 0.25. Table 6.2 summarizes the experimental results obtained using Pf = 0.45. The median number of generations for finding the best solution in each run is indicated by Gm in the table. The table also shows the known 'optimal' solution for each problem and statistics for the 30 independent runs. These include the best objective value found, the median, mean, standard deviation and worst found. The statistics are based on feasible solutions only. All equality constraints have been converted into inequality constraints, |h(x)| − δ ≤ 0, using the degree of violation δ = 0.0001. As a result of this approximation, some results might be better than the optimum. However, the tolerated violation is more stringent than in other studies [Mic95], where δ = 0.001 was used. In order to evaluate the impact of Pf on the results generated by the algorithm, the same set of experiments is run many times using Pf = 0, 0.025, . . .,

Table 6.2: Experimental results on thirteen benchmark functions using ES with stochastic ranking (Pf = 0.45). 30 independent runs were carried out.

fcn    optimal      best         median       mean         st. dev.   worst        Gm
fc1    −15.000      −15.000      −15.000      −15.000      0.0E+00    −15.000      741
fc2    −0.803619    −0.803515    −0.785800    −0.781975    2.0E−02    −0.726288    1086
fc3    −1.000       −1.000       −1.000       −1.000       1.9E−04    −1.000       1146
fc4    −30665.539   −30665.539   −30665.539   −30665.539   2.0E−05    −30665.539   441
fc5    5126.498     5126.497     5127.372     5128.881     3.5E+00    5142.472     258
fc6    −6961.814    −6961.814    −6961.814    −6875.940    1.6E+02    −6350.262    590
fc7    24.306       24.307       24.357       24.374       6.6E−02    24.642       715
fc8    −0.095825    −0.095825    −0.095825    −0.095825    2.6E−17    −0.095825    381
fc9    680.630      680.630      680.641      680.656      3.4E−02    680.763      557
fc10   7049.331     7054.316     7372.613     7559.192     5.3E+02    8835.655     642
fc11   0.750        0.750        0.750        0.750        8.0E−05    0.750        57
fc12   −1.000000    −1.000000    −1.000000    −1.000000    0.0E+00    −1.000000    82
fc13   0.053950     0.053957     0.057006     0.067543     3.1E−02    0.216915     349


Table 6.3: Experimental results on thirteen benchmark functions using ES with stochastic ranking (Pf = 0). 30 independent runs were carried out.

fcn    optimal      best         median       mean         st. dev.   worst        Gm
fc1    −15.000      −15.000      −15.000      −15.000      0.0E+00    −15.000      697
fc2    −0.803619    −0.803578    −0.785253    −0.783049    1.5E−02    −0.750656    1259
fc3    −1.000       −0.327       −0.090       −0.105       7.2E−02    −0.014       61
fc4    −30665.539   −30665.539   −30665.538   −30664.710   3.8E+00    −30644.897   632
fc5    5126.498     5126.945     5225.100     5348.683     2.7E+02    6050.566     213
fc6    −6961.814    −6961.814    −6961.814    −6961.814    1.9E−12    −6961.814    946
fc7    24.306       24.322       24.367       24.382       5.9E−02    24.598       546
fc8    −0.095825    −0.095825    −0.095825    −0.095825    2.7E−17    −0.095825    647
fc9    680.630      680.632      680.657      680.671      3.8E−02    680.772      414
fc10   7049.331     7117.416     7336.280     7457.597     3.4E+02    8464.816     530
fc11   0.750        0.750        0.953        0.937        5.4E−02    0.973        1750
fc12   −1.000000    −0.999972    −0.999758    −0.999766    1.4E−04    −0.999488    90
fc13   0.053950     0.919042     0.997912     0.993372     1.5E−02    0.998316     1750

0.525. As expected, neither small nor large (i.e., > 0.5) Pf gave very good results. The best results are obtained when 0.4 < Pf < 0.5. This indicates that a minor bias toward the dominance of the penalty function encourages the evolution of feasible solutions while still maintaining infeasible regions as potential ‘bridges’ to move among feasible regions in the whole search space. This is illustrated by plotting the expected fitness landscape for the highly disjointed test function fc12 . The result is shown in figure 6.2 for a mean step size of 0.2 and the two values of Pf = 0 and Pf = 0.45. The dotted line in the figure represents the unconstrained function. The second and third variables are held at the constant value 5. For an over-penalization (Pf = 0) there exist regions of negative progress. Positive progress may be regained in these regions by increasing the value of Pf towards 0.5. The rate of progress for this particular problem will be at a maximum when optimizing the unconstrained problem (Pf = 1). By maintaining positive progress towards the optimum in the infeasible regions the evolutionary algorithm is able to traverse these areas. Tables 6.3 and 6.4 give two sets of experimental results when Pf = 0 and Pf = 0.475 respectively. Pf = 0 is an extreme case where all infeasible individuals were ranked lower than feasible individuals. Among feasible solutions, the ranking was based solely on the objective function. Among infeasible solutions, the ranking is based only on the penalty function. This extreme case is somewhat similar to [Deb99], but not the same because it does not use


the worst fitness value of feasible solutions. Although the algorithm did not perform as well as when Pf = 0.45 for problems fc3, fc4, fc5, fc11, fc12, and fc13, it performed roughly the same as when Pf = 0.45 for the other problems. When Pf = 0.475, the penalty against infeasible solutions is weakened. The algorithm could only find a feasible solution 6 times out of 30 runs for problem fc10, although it found a feasible solution in 100% of the runs for all other problems. In general, the algorithm improved its performance and found better solutions when Pf was changed from 0.45 to 0.475, except for problems fc1, fc3, and fc6. The improvement is especially noticeable for functions fc13 and fc4. It is important to emphasize that the performance of any evolutionary algorithm for constrained optimization is determined by the constraint-handling technique used as well as the evolutionary search algorithm (including parameters). Throughout the study, modifications of the ES have been kept to a

Figure 6.2: Expected fitness landscapes for Michalewicz and Koziel's function (fc12) for the mean step sizes of σ = 0.2 and Pf = 0, 0.45. The dotted line represents the unconstrained function value.


Table 6.4: Experimental results on thirteen benchmark functions using ES with stochastic ranking (Pf = 0.475). 30 independent runs were carried out. *) Based on the 6 feasible solutions found out of the 30 runs.

fcn       optimal      best         median       mean         st. dev.   worst        Gm
fc1       −15.000      −15.000      −5.736       −6.793       3.5E+00    −2.356       951
fc2       −0.803619    −0.802760    −0.792272    −0.786084    1.8E−02    −0.731226    1301
fc3       −1.000       −0.998       −0.995       −0.995       2.1E−03    −0.990       935
fc4       −30665.539   −30665.539   −30665.539   −30665.539   1.1E−11    −30665.539   349
fc5       5126.498     5126.518     5127.276     5128.538     3.1E+00    5141.085     448
fc6       −6961.814    −6871.345    −6603.846    −6572.309    2.1E+02    −6058.588    14
fc7       24.306       24.307       24.317       24.328       2.6E−02    24.392       1568
fc8       −0.095825    −0.095825    −0.095825    −0.095825    2.7E−17    −0.095825    463
fc9       680.630      680.630      680.634      680.640      1.4E−02    680.676      1449
fc10 *)   7049.331     7202.108     7343.603     7384.116     1.8E+02    7688.864     1321
fc11      0.750        0.750        0.750        0.750        8.7E−06    0.750        118
fc12      −1.000000    −1.000000    −1.000000    −1.000000    0.0E+00    −1.000000    69
fc13      0.053950     0.053945     0.054000     0.054179     5.0E−04    0.056224     573

minimum, i.e. changing only the selection scheme. The parameters were also set according to previous recommendations in published books and articles. This, however, does not imply that the search algorithm plays an unimportant role in constrained optimization. To illustrate that the combined effect of a constraint handling technique and a search algorithm can make a big difference, the experiment on function fc10 is repeated using ϕ∗ = 1/4 (instead of 1) for the computation of the learning rates τ and τ′. These results are given in table 6.5. A significant improvement was achieved in comparison with the results in tables 6.2 and 6.3.

Table 6.5: Experimental result on function fc10 using ES with stochastic ranking and ϕ∗ = 1/4.

Pf     optimal    best       median     mean       st. dev.   worst      Gm
0.45   7049.331   7049.852   7054.111   7056.163   5.7E+00    7068.633   1733
0.00   7049.331   7049.955   7062.673   7074.044   3.1E+01    7196.647   1745

An interesting question that arises naturally here is whether or not the ES was fully responsible for the good results obtained, e.g., those in table 6.2; in other words, whether stochastic ranking contributed anything to them. To answer this question, an additional set of experiments was carried out using exactly the same ES but with a different constraint handling technique — the dynamic penalty method of [JH94]. The results are summarized


Table 6.6: Experimental results using the dynamic penalty method of [JH94] with rg = g/2. The subscript in the function name indicates the number of feasible solutions found if it was between 1 and 29 inclusive. "–" means no feasible solutions were found.

fcn         optimal      best        median      mean        st. dev.   worst       Gm
fc1         −15.000      −14.990     −14.970     −14.968     1.3E−02    −14.943     122
fc2         −0.803619    −0.803597   −0.783042   −0.777241   2.3E−02    −0.710725   1007
fc3         −1.000       –           –           –           –          –           –
fc4         −30665.539   −30648.7    −30007.6    −30021.8    1.7E+02    −29804.6    5
fc5         5126.498     –           –           –           –          –           –
fc6         −6961.814    −6897.969   −6534.206   −6502.478   2.3E+02    −5962.775   12
fc7         24.306       24.347      24.417      24.479      1.6E−01    24.934      109
fc8         −0.095825    −0.095825   −0.095825   −0.095354   1.5E−03    −0.087752   278
fc9         680.630      680.632     680.638     680.648     2.7E−02    680.761     109
fc10        7049.331     –           –           –           –          –           –
fc11 (16)   0.750        0.750       0.750       0.758       2.5E−02    0.850       14
fc12        −1.000000    −1.000000   −1.000000   −1.000000   0.0E+00    −1.000000   65
fc13 (25)   0.053950     0.281242    0.448114    0.474460    1.1E−01    0.901263    1750

in tables 6.6 and 6.7. Comparing tables 6.2 and 6.6, it is clear that stochastic ranking performed better than the dynamic penalty method with rg = g/2 [JH94] according to all four criteria (best, median, mean, and worst) for all benchmark functions except for fc2 , fc9 , and fc12 . The two methods performed the same on problem fc12 . The dynamic penalty method found a better best than stochastic ranking for problem fc2 , but performed worse than stochastic ranking according to median, mean, and worst. On the other hand, stochastic ranking found a better best (i.e., the optimum) for problem fc9 , but performed worse than the dynamic penalty method according to median, mean, and worst. The results in table 6.7 with rg = (g/2)2 improved those in table 6.6 for most, but not all, problems. Feasible solutions can now be found for problem fc3 . The results for several problems are now better than those in table 6.6. However, none of these improvements has changed the general picture. The results from stochastic ranking are still better than those from the dynamic penalty method [JH94] with rg = (g/2)2 . In fact, the dynamic penalty method with rg = (g/2)2 is only better than stochastic ranking for problem fc2 , but has lost its advantage for problem fc9 . This is not very surprising because the dynamic penalty method relies on a predefined sequence of rg while stochastic ranking is an adaptive method without any predefined sequence.


Table 6.7: Experimental results using the dynamic penalty method of [JH94] with rg = (g/2)². "–" means no feasible solutions were found.

fcn    optimal      best         median       mean         st. dev.   worst       Gm
fc1    −15.000      −15.000      −15.000      −15.000      7.9E−05    −15.000     217
fc2    −0.803619    −0.803587    −0.785907    −0.784868    1.5E−02    −0.75162    1235
fc3    −1.000       −0.583       −0.045       −0.103       1.4E−01    −0.001      996
fc4    −30665.539   −30365.488   −30060.607   −30072.458   1.2E+02    −29871.44   4
fc5    5126.498     –            –            –            –          –           –
fc6    −6961.814    −6911.247    −6547.354    −6540.012    2.6E+02    −5868.028   13
fc7    24.306       24.309       24.375       24.421       2.2E−01    25.534      180
fc8    −0.095825    −0.095825    −0.095825    −0.095825    2.8E−17    −0.095825   421
fc9    680.630      680.632      680.648      680.659      3.2E−02    680.775     1739
fc10   7049.331     –            –            –            –          –           –
fc11   0.750        0.750        0.750        0.750        9.1E−06    0.750       61
fc12   −1.000000    −1.000000    −0.999818    −0.999838    1.3E−04    −0.99957    68
fc13   0.053950     0.514152     0.996674     0.965397     9.4E−02    0.99816     1750

Predefined sequences are unable to adjust rg according to different problems and different search stages for a problem. While a predefined sequence may work well for one problem it may not for a different problem. This is what happened to the dynamic penalty method. On the other hand, an adaptive method like stochastic ranking can adjust the balance between objective and penalty functions automatically for different problems and during different stages of evolutionary search. However, there is no free lunch in optimization [WM97]. The price paid by an adaptive method is slightly longer search time for the algorithm to adapt. This can be observed from the last column in tables 6.2 and 6.6.

6.4  Summary

This chapter proposed a new constraint handling technique — stochastic ranking. The technique does not introduce any specialized variation operators. It does not require a priori knowledge about a problem, since it does not use any penalty coefficient rg in a penalty function. Stochastic ranking is motivated by the analysis of penalty methods from the point of view of dominance. The balance between the objective and penalty functions is achieved through a ranking procedure based on the stochastic bubble sort algorithm. The introduction of a single probability Pf provides a convenient means of specifying a suitable bias towards the objective function when ranking


individuals. Experimental results have been presented and are an improvement on the performance of previous evolutionary optimization methods [RY00]. The experimental results suggest that a value of 0.4 < Pf < 0.5 would be appropriate for many constrained optimization problems.

Chapter 7

Conclusion

The difference between human problem solving and problem solving by natural systems is that, in the latter case, problems are not predefined but rather co-evolve with their solution. Essentially, evolutionary problem solving requires setting up a situation in order to get the simulation off the ground. The form of a solution must be pre-specified. The problem-solution must be abstracted, and so the search space is defined. On a computational device, this information appears in the form of a data structure, similar to what base sequences of nucleic acids seem to provide in natural systems. With information comes meaning, instructions to build solutions, which only make sense in relation to the problem (environment). The search for solutions requires variations of the data structure. For intelligent search, these variations effectively reason with the structure. The variation is an inference that determines the location of the most viable solutions. It is often thought that random variations operate blindly on data structures without any regard to their meaning. This gives the impression of mutations doing more harm than good, but it can only be presumed when the meaning of the data is unknown. For 'intelligent' search, the meaning of the data and its reasoning by variation must be co-adapted for every problem-solution. It is the designer of the algorithm who supplies this knowledge and, when it is sufficient, search may not be necessary at all. The more 'intelligent' problem solver is nevertheless capable of automatic learning, requiring fewer instructions from its designer, but this also requires search. For 'intelligent' general problem solving, an architecture must be chosen such that the search is meaningful, regardless of what the problem may be. Symbolic structures do not have this property; they can only be understood on the


basis of pre-existing non-symbolic forms. Connectionist models do, however, provide meaningful search: small changes in the connection weights in general produce small changes in behaviour. The meaning of the connectionist model adapts to its variation.

7.1  Discussion of Contributions

The primary objective of this work has been to enhance the understanding of how evolutionary problem solving works. This has been done by treating the method like any other problem solving strategy using a description of a situation, operators for changing the situation, and an evaluation of the situation.

1. Problem-solution description An attempt has been made to gain a wider understanding of how problems and their candidate solutions may be described for ‘intelligent’ problem solving. The problem corresponds to the environment and the solution to the organism. This is illustrated in figure 3.1. The most common approach in evolutionary computation is to isolate the problem from its solution. By doing so all knowledge about the problem must be reflected in the solution representation and variation operators. How this may be done is not always trivial, but when done properly search is probably unnecessary. This difficulty is inherent for symbolic representations as illustrated by example in sections 3.2.1 and 4.1. ‘Intelligent’ search requires automatic learning about the problem. This can be done more effectively and efficiently when the solution and problem interact as shown in figure 3.1. The search for the knowledge of how to solve a problem requires itself meta-knowledge. A meaningful search for this knowledge was illustrated using connectionist models in section 3.2.2. These connectionist models were extended in chapter 4 and two novel architectures for evolutionary problem solving were presented. The first architecture, a deliberative connectionist model, produces plastic solutions. It not only solves the problem set but also variations of it. This implies that re-optimization is not necessary when any conditions are changed. This would, however, be the case for the symbolic model. The novelty of the approach was demonstrated for scheduling problems. The second architecture, a reactive connectionist model,


learned how to optimize neural network weights. Any new linearly separable task is, therefore, solved without the need for any further evolutionary search.

2. Evolutionary search It may be that the “building block thesis” [Hol00] holds, but understanding the algorithm’s behaviour remains difficult when the ‘blocks’ are manipulated without any regard to their meaning. When symbolic representations are used, it must first be understood how the system functions without a code and then it may be imagined what would be necessary so that some aspects of this functioning could be ‘encoded’. Once the ‘coding relationship’ is in place, it will be known how the ‘blocks’ should be varied. It is believed that the architecture of natural systems is such that random variations correspond to intelligent inferences. Meaningful variations are created by specifying meaningful distance metrics for the problem-solution studied. Small changes correspond to small changes in fitness. The variation operates in the metric space. The data structures and corresponding fitness values form a fitness landscape. A number of probabilistic inferences are made in order to locate fitter solutions. Only one of these need be successful on average in order to maintain search progress. The result of generating a number of alternative solutions, followed by selection, is effectively to smooth the fitness landscape. This introduces a new landscape concept, called the expected fitness landscape. Neighbourhoods are determined by the parents and the fitness by the expected fitness of their best offspring. This landscape is shaped by the distance metric chosen, the operators working in this space, the number of offspring produced, and their fitness value. In order for the evolutionary search to work, this landscape must be unimodal, or at least locally so, and the local optimum of the expected fitness landscape must be equivalent to that of the goal. Furthermore, if global convergence in mean is to be obtained, positive progress in terms of fitness must also be maintained on average. In some cases, this may be achieved by adaptively increasing the number of offspring generated.

3. Constraint handling When constraints are not 'naturally' satisfied by the problem-solution description or the variation operator, they must be handled by selection. This requires an additional measure for constraint violations. Individuals are selected based


on their original fitness value and the degree of constraint violation. Preference must be given to solutions that satisfy all constraints. In order to understand how different constraint handling techniques perform, evolutionary search must first be understood. Clearly the evolutionary search must also work for the unconstrained problem. This understanding was enhanced by the description of expected fitness landscapes. This concept can now be applied to interpret the influence of a constraint handling technique on evolutionary search. By introducing a measure for constraint violation a smooth landscape may be transformed to a rugged one. The more emphasis that is put on the constraints, the more rugged it becomes. This was illustrated by example in figure 6.2 by plotting the expected fitness landscape for when the infeasible individuals are over-penalized. When a balance is struck between the unconstrained fitness and the measure of constraint violations, as illustrated in the same figure using stochastic ranking, then the degree of ruggedness may be controlled. Individuals are able to traverse infeasible regions, but only when they locate superior feasible regions. For this a balance must be maintained between venturing across infeasible regions and exploring the current feasible region. This is what the new constraint handling technique, stochastic ranking, does effectively.

7.2  Directions for Future Work

A problem solving architecture must include in some form representations which can be interpreted by the human designer; otherwise the solutions will have no meaning. Similarly, designers must be capable of describing their goals and conditions. In other words, some representation will always be presupposed; this is what is required in setting up a situation. An 'intelligent' problem solver will be capable of adapting any further representations, and variations thereof, for efficient and effective search. This, in turn, requires a greater understanding of the architectures of cognition and the nature of representation. With this insight, it may be possible to design general 'intelligent' problem solvers in the future. Currently, connectionist models look as if they are the more suitable universal representation for evolutionary search.

Appendix A

Test Function Suite

A.1  Constrained

Floudas & Pardalos

Minimize [FP87]:

$$f_{c1}(x) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i \qquad (1.1)$$

subject to:

g1(x) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0
g2(x) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0
g3(x) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0
g4(x) = −8x1 + x10 ≤ 0
g5(x) = −8x2 + x11 ≤ 0
g6(x) = −8x3 + x12 ≤ 0
g7(x) = −2x4 − x5 + x10 ≤ 0
g8(x) = −2x6 − x7 + x11 ≤ 0
g9(x) = −2x8 − x9 + x12 ≤ 0      (1.2)

where the bounds are 0 ≤ xi ≤ 1 (i = 1, . . . , 9), 0 ≤ xi ≤ 100 (i = 10, 11, 12) and 0 ≤ x13 ≤ 1. The global minimum is at x∗ = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), where six constraints are active (g1, g2, g3, g7, g8 and g9) and fc1(x∗) = −15.
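For illustration, fc1 could be coded as follows; this sketch is not part of the original test suite implementation.

```python
def fc1(x):
    """Objective of test function fc1; x is a sequence of 13 numbers."""
    return 5 * sum(x[:4]) - 5 * sum(xi ** 2 for xi in x[:4]) - sum(x[4:13])

def fc1_constraints(x):
    """Inequality constraints g1..g9 of fc1, each feasible when <= 0."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13 = x
    return [
        2*x1 + 2*x2 + x10 + x11 - 10,
        2*x1 + 2*x3 + x10 + x12 - 10,
        2*x2 + 2*x3 + x11 + x12 - 10,
        -8*x1 + x10,
        -8*x2 + x11,
        -8*x3 + x12,
        -2*x4 - x5 + x10,
        -2*x6 - x7 + x11,
        -2*x8 - x9 + x12,
    ]
```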


Keane

Maximize [KM99]:

$$f_{c2}(x) = \left| \frac{\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i x_i^2}} \right| \qquad (1.3)$$

subject to:

$$g_1(x) = 0.75 - \prod_{i=1}^{n} x_i \le 0, \qquad g_2(x) = \sum_{i=1}^{n} x_i - 7.5n \le 0 \qquad (1.4)$$

where n = 20 and 0 ≤ xi ≤ 10 (i = 1, . . . , n). The global maximum is unknown; the best we found is fc2(x∗) = 0.803619 (which, to the best of our knowledge, is better than any reported value). Constraint g1 is close to being active (g1 = −10⁻⁸).

Michalewicz et al.

Maximize [MNM96]:

$$f_{c3}(x) = (\sqrt{n})^n \prod_{i=1}^{n} x_i \qquad (1.5)$$

subject to:

$$h_1(x) = \sum_{i=1}^{n} x_i^2 - 1 = 0 \qquad (1.6)$$

where n = 10 and 0 ≤ xi ≤ 1 (i = 1, . . . , n). The global maximum is at x∗i = 1/√n (i = 1, . . . , n), where fc3(x∗) = 1.

Himmelblau, problem 11

Minimize [Him72]:

$$f_{c4}(x) = 5.3578547x_3^2 + 0.8356891x_1x_5 + 37.293239x_1 - 40792.141 \qquad (1.7)$$

subject to:

g1(x) = 85.334407 + 0.0056858x2x5 + 0.0006262x1x4 − 0.0022053x3x5 − 92 ≤ 0
g2(x) = −85.334407 − 0.0056858x2x5 − 0.0006262x1x4 + 0.0022053x3x5 ≤ 0
g3(x) = 80.51249 + 0.0071317x2x5 + 0.0029955x1x2 + 0.0021813x3² − 110 ≤ 0
g4(x) = −80.51249 − 0.0071317x2x5 − 0.0029955x1x2 − 0.0021813x3² + 90 ≤ 0
g5(x) = 9.300961 + 0.0047026x3x5 + 0.0012547x1x3 + 0.0019085x3x4 − 25 ≤ 0
g6(x) = −9.300961 − 0.0047026x3x5 − 0.0012547x1x3 − 0.0019085x3x4 + 20 ≤ 0      (1.8)

where 78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45 and 27 ≤ xi ≤ 45 (i = 3, 4, 5). The optimum solution is x∗ = (78, 33, 29.995256025682, 45, 36.775812905788), where fc4(x∗) = −30665.539. Two constraints are active (g1 and g6).

Hock & Schittkowski, problem 74

Minimize [HS81]:

$$f_{c5}(x) = 3x_1 + 0.000001x_1^3 + 2x_2 + (0.000002/3)x_2^3 \qquad (1.9)$$

subject to:

g1(x) = −x4 + x3 − 0.55 ≤ 0
g2(x) = −x3 + x4 − 0.55 ≤ 0
h3(x) = 1000 sin(−x3 − 0.25) + 1000 sin(−x4 − 0.25) + 894.8 − x1 = 0
h4(x) = 1000 sin(x3 − 0.25) + 1000 sin(x3 − x4 − 0.25) + 894.8 − x2 = 0
h5(x) = 1000 sin(x4 − 0.25) + 1000 sin(x4 − x3 − 0.25) + 1294.8 = 0      (1.10)

where 0 ≤ x1 ≤ 1200, 0 ≤ x2 ≤ 1200, −0.55 ≤ x3 ≤ 0.55 and −0.55 ≤ x4 ≤ 0.55. The best known solution [KM99] is x∗ = (679.9453, 1026.067, 0.1188764, −0.3962336), where fc5(x∗) = 5126.4981.

Floudas & Pardalos

Minimize [FP87]:

$$f_{c6}(x) = (x_1 - 10)^3 + (x_2 - 20)^3 \qquad (1.11)$$

subject to:

g1(x) = −(x1 − 5)² − (x2 − 5)² + 100 ≤ 0
g2(x) = (x1 − 6)² + (x2 − 5)² − 82.81 ≤ 0      (1.12)

where 13 ≤ x1 ≤ 100 and 0 ≤ x2 ≤ 100. The optimum solution is x∗ = (14.095, 0.84296), where fc6(x∗) = −6961.81388. Both constraints are active.

Hock & Schittkowski, problem 113

Minimize [HS81]:

$$f_{c7}(x) = x_1^2 + x_2^2 + x_1x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45 \qquad (1.13)$$

subject to:

g1(x) = −105 + 4x1 + 5x2 − 3x7 + 9x8 ≤ 0
g2(x) = 10x1 − 8x2 − 17x7 + 2x8 ≤ 0
g3(x) = −8x1 + 2x2 + 5x9 − 2x10 − 12 ≤ 0
g4(x) = 3(x1 − 2)² + 4(x2 − 3)² + 2x3² − 7x4 − 120 ≤ 0
g5(x) = 5x1² + 8x2 + (x3 − 6)² − 2x4 − 40 ≤ 0
g6(x) = x1² + 2(x2 − 2)² − 2x1x2 + 14x5 − 6x6 ≤ 0
g7(x) = 0.5(x1 − 8)² + 2(x2 − 4)² + 3x5² − x6 − 30 ≤ 0
g8(x) = −3x1 + 6x2 + 12(x9 − 8)² − 7x10 ≤ 0      (1.14)

where −10 ≤ xi ≤ 10 (i = 1, . . . , 10). The optimum solution is x∗ = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927), where fc7(x∗) = 24.3062091. Six constraints are active (g1, g2, g3, g4, g5 and g6).

Schoenauer & Xanthakis

Maximize [KM99]:

$$f_{c8}(x) = \frac{\sin^3(2\pi x_1)\sin(2\pi x_2)}{x_1^3 (x_1 + x_2)} \qquad (1.15)$$

subject to:

g1(x) = x1² − x2 + 1 ≤ 0
g2(x) = 1 − x1 + (x2 − 4)² ≤ 0      (1.16)

where 0 ≤ x1 ≤ 10 and 0 ≤ x2 ≤ 10. The optimum is located at x∗ = (1.2279713, 4.2453733), where fc8(x∗) = 0.095825. The solution lies within the feasible region.

Hock & Schittkowski, problem 100

Minimize [HS81]:

$$f_{c9}(x) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6x_7 - 10x_6 - 8x_7 \qquad (1.17)$$

subject to:

g1(x) = −127 + 2x1² + 3x2⁴ + x3 + 4x4² + 5x5 ≤ 0
g2(x) = −282 + 7x1 + 3x2 + 10x3² + x4 − x5 ≤ 0
g3(x) = −196 + 23x1 + x2² + 6x6² − 8x7 ≤ 0
g4(x) = 4x1² + x2² − 3x1x2 + 2x3² + 5x6 − 11x7 ≤ 0      (1.18)

where −10 ≤ xi ≤ 10 (i = 1, . . . , 7). The optimum solution is x∗ = (2.330499, 1.951372, −0.4775414, 4.365726, −0.6244870, 1.038131, 1.594227), where fc9(x∗) = 680.6300573. Two constraints are active (g1 and g4).

Hock & Schittkowski, heat exchanger design

Minimize [HS81]:

$$f_{c10}(x) = x_1 + x_2 + x_3 \qquad (1.19)$$

subject to:

g1(x) = −1 + 0.0025(x4 + x6) ≤ 0
g2(x) = −1 + 0.0025(x5 + x7 − x4) ≤ 0
g3(x) = −1 + 0.01(x8 − x5) ≤ 0
g4(x) = −x1x6 + 833.33252x4 + 100x1 − 83333.333 ≤ 0
g5(x) = −x2x7 + 1250x5 + x2x4 − 1250x4 ≤ 0
g6(x) = −x3x8 + 1250000 + x3x5 − 2500x5 ≤ 0      (1.20)

where 100 ≤ x1 ≤ 10000, 1000 ≤ xi ≤ 10000 (i = 2, 3) and 10 ≤ xi ≤ 1000 (i = 4, . . . , 8). The optimum solution is x∗ = (579.3167, 1359.943, 5110.071, 182.0174, 295.5985, 217.9799, 286.4162, 395.5979), where fc10(x∗) = 7049.3307. Three constraints are active (g1, g2 and g3).

Maa & Shanblatt

Minimize [KM99]:

$$f_{c11}(x) = x_1^2 + (x_2 - 1)^2 \qquad (1.21)$$

subject to:

h(x) = x2 − x1² = 0      (1.22)

where −1 ≤ x1 ≤ 1 and −1 ≤ x2 ≤ 1. The optimum solution is x∗ = (±1/√2, 1/2), where fc11(x∗) = 0.75.

Michalewicz & Koziel

Maximize [KM99]:

$$f_{c12}(x) = (100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2)/100 \qquad (1.23)$$

subject to:

g(x) = (x1 − p)² + (x2 − q)² + (x3 − r)² − 0.0625 ≤ 0      (1.24)

where 0 ≤ xi ≤ 10 (i = 1, 2, 3) and p, q, r = 1, 2, . . . , 9. The feasible region of the search space consists of 9³ disjoint spheres. A point (x1, x2, x3) is feasible if and only if there exist p, q, r such that the above inequality holds. The optimum is located at x∗ = (5, 5, 5), where fc12(x∗) = 1. The solution lies within the feasible region.

Hock & Schittkowski

Minimize [HS81]:

$$f_{c13}(x) = e^{x_1 x_2 x_3 x_4 x_5} \qquad (1.25)$$

subject to:

h1(x) = x1² + x2² + x3² + x4² + x5² − 10 = 0
h2(x) = x2x3 − 5x4x5 = 0
h3(x) = x1³ + x2³ + 1 = 0      (1.26)

where −2.3 ≤ xi ≤ 2.3 (i = 1, 2) and −3.2 ≤ xi ≤ 3.2 (i = 3, 4, 5). The optimum solution is x∗ = (−1.717143, 1.595709, 1.827247, −0.7636413, −0.763645), where fc13(x∗) = 0.0539498.

A.2  Unconstrained

Sphere Model

$$f_{u1}(x) = \sum_{i=1}^{30} x_i^2, \qquad -100 \le x_i \le 100, \qquad \min(f_{u1}) = f_{u1}(0, \ldots, 0) = 0$$

Schwefel's Problem 2.22

$$f_{u2}(x) = \sum_{i=1}^{30} |x_i| + \prod_{i=1}^{30} |x_i|, \qquad -10 \le x_i \le 10, \qquad \min(f_{u2}) = f_{u2}(0, \ldots, 0) = 0$$
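The unconstrained functions translate directly into code. As an example, a sketch of the first two follows (the use of NumPy and the function names are illustrative).

```python
import numpy as np

def f_u1(x):
    """Sphere model: sum of squares."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f_u2(x):
    """Schwefel's problem 2.22: sum plus product of absolute values."""
    x = np.abs(np.asarray(x, dtype=float))
    return float(np.sum(x) + np.prod(x))
```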

Schwefel's Problem 1.2

$$f_{u3}(x) = \sum_{i=1}^{30}\Big(\sum_{j=1}^{i} x_j\Big)^2, \qquad -100 \le x_i \le 100, \qquad \min(f_{u3}) = f_{u3}(0, \ldots, 0) = 0$$

Schwefel's Problem 2.21

$$f_{u4}(x) = \max_i\{|x_i|,\ 1 \le i \le 30\}, \qquad -100 \le x_i \le 100, \qquad \min(f_{u4}) = f_{u4}(0, \ldots, 0) = 0$$

Generalised Rosenbrock's Function

$$f_{u5}(x) = \sum_{i=1}^{29}\big[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\big], \qquad -30 \le x_i \le 30, \qquad \min(f_{u5}) = f_{u5}(1, \ldots, 1) = 0$$

Step Function

$$f_{u6}(x) = \sum_{i=1}^{30}\big(\lfloor x_i + 0.5 \rfloor\big)^2, \qquad -100 \le x_i \le 100, \qquad \min(f_{u6}) = f_{u6}(0, \ldots, 0) = 0$$

Quartic Function with Noise

$$f_{u7}(x) = \sum_{i=1}^{30} i x_i^4 + \mathrm{random}[0, 1), \qquad -1.28 \le x_i \le 1.28, \qquad \min(f_{u7}) = f_{u7}(0, \ldots, 0) = 0$$

Generalised Schwefel's Problem 2.26

$$f_{u8}(x) = -\sum_{i=1}^{30} x_i \sin\big(\sqrt{|x_i|}\big), \qquad -500 \le x_i \le 500, \qquad \min(f_{u8}) = f_{u8}(420.9687, \ldots, 420.9687) = -12569.5$$

Generalised Rastrigin's Function

$$f_{u9}(x) = \sum_{i=1}^{30}\big[x_i^2 - 10\cos(2\pi x_i) + 10\big], \qquad -5.12 \le x_i \le 5.12, \qquad \min(f_{u9}) = f_{u9}(0, \ldots, 0) = 0$$

Ackley's Function

$$f_{u10}(x) = -20\exp\Big(-0.2\sqrt{\tfrac{1}{30}\sum_{i=1}^{30} x_i^2}\Big) - \exp\Big(\tfrac{1}{30}\sum_{i=1}^{30}\cos 2\pi x_i\Big) + 20 + e, \qquad -32 \le x_i \le 32, \qquad \min(f_{u10}) = f_{u10}(0, \ldots, 0) = 0$$

Generalised Griewank Function

$$f_{u11}(x) = \frac{1}{4000}\sum_{i=1}^{30} x_i^2 - \prod_{i=1}^{30}\cos\Big(\frac{x_i}{\sqrt{i}}\Big) + 1, \qquad -600 \le x_i \le 600, \qquad \min(f_{u11}) = f_{u11}(0, \ldots, 0) = 0$$

Generalised Penalised Functions

$$f_{u12}(x) = \frac{\pi}{30}\Big\{10\sin^2(\pi y_1) + \sum_{i=1}^{29}(y_i - 1)^2\big[1 + 10\sin^2(\pi y_{i+1})\big] + (y_{30} - 1)^2\Big\} + \sum_{i=1}^{30} u(x_i, 10, 100, 4)$$

−50 ≤ xi ≤ 50, min(fu12) = fu12(1, . . . , 1) = 0

$$f_{u13}(x) = 0.1\Big\{\sin^2(3\pi x_1) + \sum_{i=1}^{29}(x_i - 1)^2\big[1 + \sin^2(3\pi x_{i+1})\big] + (x_{30} - 1)^2\big[1 + \sin^2(2\pi x_{30})\big]\Big\} + \sum_{i=1}^{30} u(x_i, 5, 100, 4)$$

−50 ≤ xi ≤ 50, min(fu13) = fu13(1, . . . , 1) = 0

where

$$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases} \qquad \text{and} \qquad y_i = 1 + \tfrac{1}{4}(x_i + 1)$$

Shekel’s Foxholes Function 

−1 25 X 1 1  fu14 (x) =  + P 500 j + 2i=1 (xi − aij )6 j=1

−65.536 ≤ xi ≤ 65.536,

min(fu14 ) = fu14 (−32, −32) ≈ 1

where à (aij ) =

−32 −16 0 16 32 −32 · · · −32 −32 −32 −32 −32 −16 · · ·

0 16 32 32 32 32

!

Kowalik’s Function ¸2 11 · X x1 (b2i + bi x2 ) fu15 (x) = ai − 2 bi + bi x3 + x4 i=1 −5 ≤ xi ≤ 5,

min(fu15 ) ≈ fu15 (0.1928, 0.1908, 0.1231, 0.1358) ≈ 0.0003075

Six-hump Camel-Back Function 1 fu16 = 4x21 − 2.1x41 + x61 + x1 x2 − 4x22 + 4x42 3 −5 ≤ xi ≤ 5 xmin = (0.08983, −0.7126), (−0.08983, 0.7126) min(fu16 ) = −1.0316285

Table A.1: Kowalik's Function fu15

i       1       2       3       4       5       6       7       8       9       10      11
ai      0.1957  0.1947  0.1735  0.1600  0.0844  0.0627  0.0456  0.0342  0.0323  0.0235  0.0246
1/bi    0.25    0.5     1       2       4       6       8       10      12      14      16

Branin Function

$$f_{u17}(x) = \Big(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\Big)^2 + 10\Big(1 - \frac{1}{8\pi}\Big)\cos x_1 + 10, \qquad -5 \le x_1 \le 10, \quad 0 \le x_2 \le 15$$

xmin = (−3.142, 12.275), (3.142, 2.275), (9.425, 2.425), min(fu17) = 0.398

Goldstein-Price Function

$$f_{u18}(x) = \big[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\big] \times \big[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\big]$$

−2 ≤ xi ≤ 2, min(fu18) = fu18(0, −1) = 3

Hartman's Family

$$f(x) = -\sum_{i=1}^{4} c_i \exp\Big(-\sum_{j=1}^{n} a_{ij}(x_j - p_{ij})^2\Big)$$

with n = 3, 6 for fu19(x) and fu20(x), respectively, and 0 ≤ xj ≤ 1. The coefficients are defined by Tables A.2 and A.3, respectively.

Table A.2: Hartman Function fu19

i    aij (j = 1, 2, 3)    ci     pij (j = 1, 2, 3)
1    3     10    30       1      0.3689     0.1170     0.2673
2    0.1   10    35       1.2    0.4699     0.4387     0.7470
3    3     10    30       3      0.1091     0.8732     0.5547
4    0.1   10    35       3.2    0.038150   0.5743     0.8828

Table A.3: Hartman Function fu20

i    aij (j = 1, . . . , 6)                ci     pij (j = 1, . . . , 6)
1    10    3     17    3.5   1.7   8      1      0.1312   0.1696   0.5569   0.0124   0.8283   0.5886
2    0.05  10    17    0.1   8     14     1.2    0.2329   0.4135   0.8307   0.3736   0.1004   0.9991
3    3     3.5   1.7   10    17    8      3      0.2348   0.1415   0.3522   0.2883   0.3047   0.6650
4    17    8     0.05  10    0.1   14     3.2    0.4047   0.8828   0.8732   0.5743   0.1091   0.0381

For fu19 (x) the global minimum is equal to −3.86 and it is reached at the point (0.114, 0.556, 0.852). For fu20 (x) the global minimum is −3.32 at the point (0.201, 0.150, 0.477, 0.275, 0.311, 0.657).

Shekel’s Family f (x) = −

m X [(x − ai )(x − ai )T + ci ]−1 i=1

with m = 5, 7 and 10 for fu21 (x), fu22 (x) and fu23 (x), respectively, 0 ≤ xj ≤ 10. These functions have 5, 7 and 10 local minima for fu21 (x), fu22 (x), and fu23 (x), respectively. xlocal opt ≈ ai , f (xlocal opt ) ≈ 1/ci for 1 ≤ i ≤ m. The coefficients are defined by Table A.4.


Table A.4: Shekel Functions fu21, fu22, fu23

i     aij (j = 1, . . . , 4)    ci
1     4     4     4     4      0.1
2     1     1     1     1      0.2
3     8     8     8     8      0.2
4     6     6     6     6      0.4
5     3     7     3     7      0.4
6     2     9     2     9      0.6
7     5     5     3     3      0.3
8     8     1     8     1      0.7
9     6     2     6     2      0.5
10    7     3.6   7     3.6    0.5

Bibliography

[BA85]

A.G. Barto and P. Anandan. Pattern recognizing stochastic learning automata. IEEE Trans. Syst. Man Cybern., 4:360–375, 1985.

[Bäc96]

T. Bäck. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York, 1996.

[Bal96]

J. M. Baldwin. A new factor in evolution. American Naturalist, 30:441–451, 1896.

[Bax92]

J. Baxter. The evolution of learning algorithms for artificial neural networks. Complex Systems, 1992.

[BBC95]

S. Bengio, Y. Bengio, and J. Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):1–5, 1995.

[Ber95]

D. P. Bertsekas. Dynamic Programming and Optimal Control, Vols. I and II. Athena Scientific, Belmont, Massachusetts, 1995.

[Bey95a]

H.-G. Beyer. Toward a Theory of Evolution Strategies: On the Benefit of Sex – the (µ/µ, λ)-Theory. Evolutionary Computation, 3(1):81–111, 1995.

[Bey95b]

H.-G. Beyer. Toward a Theory of Evolution Strategies: The (µ, λ)-Theory. Evolutionary Computation, 2(4):381–407, 1995.

[Bey96]

H.-G. Beyer. Toward a Theory of Evolution Strategies: Self-Adaptation. Evolutionary Computation, 3(3):311–347, 1996.

[Bey97]

H.-G. Beyer. An Alternative Explanation for the Manner in which Genetic Algorithms Operate. BioSystems, 41:1–15, 1997.


[BGH89]

L. Booker, D. E. Goldberg, and J. H. Holland. Classifier systems and genetic algorithms. Artificial Intelligence, 40(1-3):235–282, 1989.

[Bro91]

R. A. Brooks. Intelligence without representation. Artificial Intelligence, 47:139–159, 1991.

[BS84]

J. D. Bransford and B. S. Stein. The ideal problem solver: A guide for improving thinking, learning, and creativity. Freeman, New York, 1984.

[BSK96]

T. Bäck, M. Schütz, and S. Khuri. Evolution strategies: An alternative evolutionary computation method. In J. M. Alliot, E. Lutton, E. Ronald, M. Schoenauer, and D. Snyers, editors, Artificial Evolution, pages 3–20. Springer, Berlin, 1996.

[BT96]

D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996.

[BWW00]

L. Barbulescu, J.-P. Watson, and D. Whitley. Dynamic representations and escaping local optima: Improving genetic algorithms and local search. In The Seventeenth National Conference on Artificial Intelligence. AAAI, 2000.

[CF99]

K. Chellapilla and D. B. Fogel. Evolving neural networks to play checkers without expert knowledge. IEEE Transactions on Neural Networks, 10(6):1382–1391, November 1999.

[Cha90]

D. Chalmers. The evolution of learning: An experimentation in genetic connectionism. In Connectionists models: Proceedings of the 1990 summer school, pages 81–90. San Mateo, CA: Morgan Kaufmann Publishers, 1990.

[Cyb89]

G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2:303–314, 1989.

[Deb99]

K. Deb. An efficient constrained handling method for genetic algorithms. In Computer Methods in Applied Mechanics and Engineering, page in press, 1999.


[DG99]

I. E. Dror and D. P. Gallogly. Computational analysis in cognitive neuroscience: In defence of biological implausibility. Psychonomic Bulletin & Review, 6(2):173–182, 1999.

[DT91]

J. E. Dennis and V. Torczon. Direct search methods on parallel machines. SIAM Journal on Optimization, 1(4):448–474, November 1991.

[EHM99]

Á. E. Eiben, R. Hinterding, and Z. Michalewicz. Parameter control in evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 3(2):124–141, 1999.

[FM68]

A. V. Fiacco and G. P. McCormick. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, NewYork, 1968.

[Fog95]

D. B. Fogel. Evolutionary Computation: Towards a New Philosophy of Machine Intelligence. IEEE Press, 1995.

[FP87]

C. Floudas and P. Pardalos. A Collection of Test Problems for Constrained Global Optimization, volume 455 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, Germany, 1987.

[FT63]

H. Fisher and G. Thompson. Probabilistic learning combinations of local job-shop scheduling rules. In J. F. Muth and G. L. Thompson, editors, Industrial Scheduling. Prentice Hall, 1963.

[Fun89]

K. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2:183–192, 1989.

[GC97]

M. Gen and R. Cheng. Genetic Algorithms and Engineering Design. Wiley, New-York, 1997.

[Gol89]

D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.

[Hay81]

J. R. Hayes. The Complete Problem Solver. The Franklin Institute Press, USA, 1981.

[Heb49]

D. O. Hebb. The Organization of Behaviour. Wiley, New York, 1949.

[Him72]

D. Himmelblau. Applied Nonlinear Programming. McGraw-Hill, New York, 1972.

[HN87]

G. Hinton and S. Nowlan. How learning can guide evolution. Complex Systems, 1:495–502, 1987.

[Hol75]

J. H. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI, 1975.

[Hol00]

J. H. Holland. Building blocks, cohort genetic algorithms and hyperplane-defined functions. Evolutionary Computation, 8(4):373–391, 2000.

[HS81]

W. Hock and K. Schittkowski. Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems. Springer-Verlag, Berlin, Germany, 1981.

[HS96]

F. Hoffmeister and J. Sprave. Problem independent handling of constraints by use of metric penalty functions. In L. J. Fogel, P. J. Angeline, and Th. Bäck, editors, Proceedings of the Fifth Annual Conference on Evolutionary Programming, pages 289–294, Cambridge MA, 1996. The MIT Press.

[HSW89]

K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359–366, 1989.

[Hub87]

V. Hubka. Principles of Engineering Design. Edition Heurista, Zürich, Switzerland, 1987.

[Hut95]

E. Hutchins. Cognition in the Wild. The MIT Press, Cambridge, Massachusetts, 1995.

[JF95]

T. Jones and S. Forrest. Fitness distance correlation as a measure of problem difficulty for genetic algorithms. In L. J. Eshelman, editor, Proceedings of the 6th International Conference on Genetic Algorithms. Morgan Kaufmann, 1995.

[JH94]

J. Joines and C. Houck. On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs. In Proc. IEEE International Conference on Evolutionary Computation, pages 579–584. IEEE Press, 1994.

[JV99]

F. Jiménez and J. L. Verdegay. Evolutionary techniques for constrained optimization problems. In Proc. of the 7th European Congress on Intelligent Techniques and Soft Computing (EUFIT'99), Berlin, Germany, 1999. Springer-Verlag.

[Kap96]

C. Kappler. Are evolutionary algorithms improved by large mutations? In Parallel Problem Solving From Nature – PPSN IV, Berlin, 1996. Springer.

[Kau91]

S. A. Kauffman. Antichaos and adaptation. Scientific American, 265(2):64–70, 1991.

[Kau93]

S. A. Kauffman. The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press, New York, 1993.

[Kel99]

C. T. Kelley. Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition. SIAM Journal on Optimization, 10:43–55, 1999.

[KM99]

S. Koziel and Z. Michalewicz. Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7(1):19–44, 1999.

[KP98]

S. Kazarlis and V. Petridis. Varying fitness functions in genetic algorithms: Studying the rate of increase in the dynamic penalty terms. In Parallel Problem Solving from Nature, volume 1498 of Lecture Notes in Computer Science, pages 211–220, Berlin, Germany, 1998. Springer.

[LD85]

C.-T. Lung and R. L. Dominowski. Effects of strategy instructions and practice on nine-dot problem solving. Journal of Experimental Psychology: Learning, Memory and Cognition, 11(4):804–811, 1985.

[Lew70]

R. C. Lewontin. The units of selection. Annual Review of Ecology and Systematics, 1:1–18, 1970.

[Lew82]

R. C. Lewontin. Organism and environment. In H.C. Plotkin, editor, Learning, Development and Culture, page 162. John Wiley & Sons Ltd, New York, 1982.

[Lew83]

R. C. Lewontin. The organism as the subject and object of evolution. Scientia, 118:63–82, 1983.

[Lew85]

R. C. Lewontin. The Dialectical Biologist, chapter Adaptation, pages 65–84. Harvard University Press, New York, 1985.

[Lew00]

R. C. Lewontin. The Triple Helix: Gene, Organism, and Environment. Harvard University Press, Cambridge, Massachusetts, 2000.

[LV90]

G. Liepins and M. Vose. Representation issues in genetic algorithms. Journal of Experimental and Theoretical Artificial Intelligence, 2:101–115, 1990.

[LYL+ 98]

K.-H. Liang, X. Yao, Y. Liu, C. Newton, and D. Hoffman. An experimental investigation of self-adaptation in evolutionary programming. In V. W. Porto et al., editors, Proc. of the 7th Annual Conference on Evolutionary Programming, pages 291–300, Berlin, Germany, 1998. Springer-Verlag.

[MA94]

Z. Michalewicz and N. Attia. Evolutionary optimization of constrained problems. In L. J. Fogel and A.V. Sebald, editors, Proc. of the 2nd Annual Conference on Evolutionary Programming, pages 98–108, River Edge, NJ, 1994. World Scientific Publishing.

[MAJ91]

P. Mazzoni, R. A. Andersen, and M. I. Jordan. A more biologically plausible learning rule for neural networks. Proc. Natl. Acad. Sci. USA, 88:4433–4437, May 1991.

[Man97]

B. Manderick. Fitness landscapes: Correlation analysis. In T. Bäck, D. B. Fogel, and Z. Michalewicz, editors, Handbook of Evolutionary Computation, pages B2.7:10–14. IOP Publishing Ltd and Oxford University Press, 1997.

[Mar96]

G. Mayley. Landscapes, learning costs, and genetic assimilation. Evolutionary Computation, 4(3):213–234, 1996.

[MF00]

Z. Michalewicz and D. B. Fogel. How to Solve It: Modern Heuristics. Springer-Verlag, Berlin Heidelberg, 2000.

[Mic95]

Z. Michalewicz. Genetic algorithms, numerical optimization and constraints. In L. J. Eshelman, editor, Proceedings of the 6th International Conference on Genetic Algorithms, pages 151–158, San Mateo, CA, July 15-19 1995. Morgan Kaufmann.

[MMR97]

K. Mehrotra, C. K. Mohan, and S. Ranka. Elements of Artificial Neural Networks. The MIT Press, Cambridge, Massachusetts, 1997.

[MNM96]

Z. Michalewicz, G. Nazhiyath, and M. Michalewicz. A note on usefulness of geometrical crossover for numerical optimization problems. In L. J. Fogel, P. J. Angeline, and T. Bäck, editors, Proc. of the 5th Annual Conference on Evolutionary Programming, pages 305–312. MIT Press, Cambridge, MA, 1996.

[MP43]

W. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133, 1943.

[MT63]

J. F. Muth and G. L. Thompson. Industrial Scheduling. Prentice Hall, New Jersey, 1963.

[New90]

A. Newell. Unified Theories of Cognition. Harvard University Press, 1990.

[NM65]

J. A. Nelder and R. Mead. A simplex method for function minimization. The Computer Journal, 7:308–313, 1965.

[NS72]

A. Newell and H. A. Simon. Human Problem Solving. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1972.

[OGH95]

A. Ostermeier, A. Gawelczyk, and N. Hansen. A derandomized approach to self-adaptation of evolution strategies. Evolutionary Computation, 2(4):369–380, 1995.

[PI77]

S. S. Panwalkar and W. Iskander. A survey of scheduling rules. Operations Research, 25(1):45–61, 1977.

[Pol57]

G. Polya. How to Solve It: A New Aspect of Mathematical Method. Princeton University Press, Princeton, New Jersey, second edition, 1957.

[Pos43]

E. L. Post. Formal reductions of the general combinatorial decision problem. American Journal of Mathematics, 65:197–215, 1943.

[Rec94]

I. Rechenberg. Evolutionsstrategie ’94. Frommann-Holzboog, Stuttgart, 1994.

[Ree97]

C. R. Reeves. Genetic algorithms for the operations researcher. INFORMS Journal on Computing, 9(3):231–247, 1997.

[RHW86]

D. E. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Vol. 1. Foundations, pages 318–362. MIT Press, Cambridge, MA, 1986.

[RJ99a]

T. P. Runarsson and M. Th. Jonsson. Evolution of gene coordination networks. In B. McKay, X. Yao, C.S. Newton, J.-H. Kim, and T. Furuhashi, editors, Lecture Notes in Artificial Intelligence 1585, pages 430–437. Springer, 1999.

[RJ99b]

T. P. Runarsson and M. Th. Jonsson. Genetic production systems for intelligent problem solving. Special Issue: Computational Intelligence in Intelligent Manufacturing, 10(3):181–186, April 1999.

[RJ00]

T. P. Runarsson and M. Th. Jonsson. Evolution and design of distributed learning rules. In The first IEEE symposium on Combinations of Evolutionary Computation and Neural Networks. IEEE, May 2000.

[RSJ00]

T. P. Runarsson, R. Sarker, and M. Th. Jonsson. Constrained nonlinear integer programming, self-adaptation and evolution strategies. International Journal of Knowledge-Based Intelligent Engineering Systems, 4(3):164–171, July 2000.

[Rud94]

G. Rudolph. An evolutionary algorithm for integer programming. In Parallel Problem Solving From Nature, volume 3, pages 139– 148, 1994.

[Rud97a]

G. Rudolph. Convergence Properties of Evolutionary Algorithms. Verlag Dr. Kovač, Hamburg, 1997.

[Rud97b]

G. Rudolph. Local convergence rates of simple evolutionary algorithms with Cauchy mutations. IEEE Transactions on Evolutionary Computation, 1(4):249–258, 1997.

[RY00]

T. P. Runarsson and X. Yao. Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4(3):284–294, September 2000.

[Sal96]

R. Salomon. Some comments on evolutionary algorithm theory. Evolutionary Computation Journal, 4(4):405–415, 1996.

[Sal98]

R. Salomon. Evolutionary algorithms and gradient search: Similarities and differences. IEEE Transactions on Evolutionary Computation, 2(2):45–55, 1998.

[SB98]

R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.

[SC97]

A. E. Smith and D. W. Coit. Penalty functions. In T. Bäck, D. B. Fogel, and Z. Michalewicz, editors, Handbook of Evolutionary Computation, pages C5.2:1–6. Oxford University Press, 1997.

[Sch77]

H.-P. Schwefel. Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie, volume 26 of Interdisciplinary Systems Research. Birkhäuser, Basel, Switzerland, 1977.

[Sch95]

H.-P. Schwefel. Evolution and Optimum Seeking. Wiley, New York, 1995.

[Sch98]

S. M. Scheiner. The genetics of phenotypic plasticity. VII. Evolution in a spatially structured environment. Journal of Evolutionary Biology, 11(3):303–320, 1998.

[SP97]

R. Storn and K. Price. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11:341–359, 1997.

[SR97]

P. D. Surry and N. J. Radcliffe. The COMOGA method: Constrained optimisation by multiobjective genetic algorithms. Control and Cybernetics, 26(3), 1997.

[SS89]

W. Siedlecki and J. Sklansky. Constrained genetic optimization via dynamic reward-penalty balancing and its use in pattern recognition. In International Conference on Genetic Algorithms, pages 141–149, 1989.

[Ste95]

J. Stewart. Cognition = life: Implications for higher-level cognition. Behavioural Processes, 35:311–326, December 1995.

[Sut88]

R. S. Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3:9–44, 1988.

[SWC+ 95]

N. A. Stillings, S. E. Weisler, C. H. Chase, M. H. Feinstein, J. L. Garfield, and E. L. Rissland. Cognitive Science: An Introduction. MIT Press, Cambridge, Massachusetts, second edition, 1995.

[Sys89]

G. Syswerda. Uniform crossover in genetic algorithms. In Third International Conference on Genetic Algorithms, pages 2–9, 1989.

[VFM00]

V. K. Vassilev, T. C. Fogarty, and J. F. Miller. Information characteristics and the structure of landscapes. Evolutionary Computation, 8(1):31–60, 2000.

[WGM73]

B. Widrow, N. K. Gupta, and S. Maitra. Punish/reward: Learning with a critic in adaptive threshold systems. IEEE Trans. Syst. Man Cybern., 4:455–465, 1973.

[Wil94]

B. A. Williams. Reinforcement and choice. In N.J. Mackintosh, editor, Animal Learning and Cognition, pages 81–108. Academic Press Inc., London, 1994.

[WM97]

D. H. Wolpert and W. G. Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.

[WMRD96]

D. Whitley, K. Mathias, S. Rana, and J. Dzubera. Evaluating evolutionary algorithms. Artificial Intelligence, 85, 1996.

[Wri31]

S. Wright. Evolution in Mendelian populations. Genetics, 16:97–159, 1931.

[Yao99]

X. Yao. Evolving artificial neural networks. Proceedings of the IEEE, 87(9):1423–1447, 1999.

[YCC99]

A. S. Younger, P. R. Conwell, and N. E. Cotter. Fixed-weight online learning. IEEE Transactions on Neural Networks, 10(2):272– 283, March 1999.

[YL96]

X. Yao and Y. Liu. Fast evolutionary programming. In Proceedings of the Fifth Annual Conference on Evolutionary Programming, pages 451–460, Cambridge, MA, 1996. MIT Press.

[YLL99]

X. Yao, Y. Liu, and G. Lin. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation, 3(2):82–102, July 1999.

[Zuc97]

E. Zuckerkandl. Neutral and nonneutral mutations: the creative mix-evolution of complexity in gene interaction systems. Journal of Molecular Evolution, 44(Suppl 1):S2–S8, 1997.

List of Tables

4.1  The mean and standard deviation of the percentage of time a rule is used to build the best schedule found.
4.2  Makespan for the 100 independent runs taken, the optimal solution is 930.
4.3  Chalmers' eight linearly separable tasks [Cha90].
4.4  The last epoch error for the best rules found for each experiment.
5.1  Optimization of Shaffer's function using the ES(15, 100) with an initial step size of 5 using i) lognormal update, ii) global intermediate recombination and lognormal update, iii) no update and iv) ES(15, 400) no update.
5.2  Traditional ES using no recombination of mean step sizes.
5.3  Traditional ES using intermediate recombination of mean step sizes.
5.4  Traditional ES using mixed intermediate recombination of mean step sizes.
5.5  Hopeful monster mutation and mixed recombination of mean step sizes.
5.6  Result for 30 independent runs using Gaussian mutation only.
5.7  Result for 30 independent runs using the hopeful monster mutation along coordinate axis and Gaussian mutation.
5.8  Result for 30 independent runs using the hopeful monster mutation uniformly distributed over the search space and Gaussian mutation.
6.1  Average percentage feasible individuals in the final population as a function of Pf (N = λ) and test function presented in appendix A.1.
6.2  Experimental results on thirteen benchmark functions using ES with stochastic ranking (Pf = 0.45). 30 independent runs were carried out.
6.3  Experimental results on thirteen benchmark functions using ES with stochastic ranking (Pf = 0). 30 independent runs were carried out.
6.4  Experimental results on thirteen benchmark functions using ES with stochastic ranking (Pf = 0.475). 30 independent runs were carried out. ⋆) Based on the 6 feasible solutions found out of the 30 runs.
6.5  Experimental result on function fc10 using ES with stochastic ranking and ϕ∗ = 1/4.
6.6  Experimental results using the dynamic penalty method of [JH94] with rg = g/2. The subscript in the function name indicates the number of feasible solutions found if it was between 1 and 29 inclusive. "–" means no feasible solutions were found.
6.7  Experimental results using the dynamic penalty method of [JH94] with rg = (g/2)². "–" means no feasible solutions were found.
A.1  Kowalik's Function fu15.
A.2  Hartman Function fu19.
A.3  Hartman Function fu20.
A.4  Shekel Functions fu21, fu22, fu23.

List of Figures

2.1  A stepwise approach to evolutionary problem solving.
2.2  Correlated mutation generated using rotation angle α. The ellipsoid represents the place of equal probability density.
2.3  A multilayered feed-forward neural network. The node activations are labelled by the variable a and the connection weights by w. An input activation layer is fed through the network to the output layer as shown. The number of hidden layers may be any number.
3.1  Interpretation of the environment-organism and genome system.
3.2  Comparing integer, binary and Gray encoding for the nine dot problem.
3.3  Linear neural network compared with symbolic representations for the nine dot problem.
4.1  An example genetic string of rules is pictured; the third rule from the left requires an interpretation since rule "c" is illegal (denoted by a dashed box), and either the default rule "A" or a random valid rule "?" is fired instead.
4.2  Genome model. Competing alleles at loci l and m where (⋆) marks the currently successful allele.
4.3  Average number of illegal alleles in the population as a function of generation for the random decipher method.
4.4  Perturbation results. The top figure shows the fitness at a locus which has had its successful allele deleted and all loci to the right of it structurally perturbed. The bottom figure shows the fitness at a locus which has only had its successful allele deleted at that point.
4.5  Gantt charts. Six different solutions obtained due to perturbation by deletion for the 6 × 6 job-shop problem's elite solution.
4.6  Network landscapes. The choice landscapes for the gene coordination networks per locus. The locus number is given above each map with the number of times the network has been trained during its evolution over generations in parentheses.
4.7  Perturbation results. The top figure shows the fitness at a locus which has had its successful allele deleted and all loci to the right of it structurally perturbed. The bottom figure shows the fitness at a locus which has only had its successful allele deleted at that point.
4.8  Expression changes. The figure shows how many genes have changed their expression as a function of the locus where the successful allele was deleted.
4.9  The neural network structure (bias is not shown). A Π (multiplier) node and an accumulator node have been added to implement the weight update. Both tp and oj are sensors or the result of sensory input. The effector, oip, proposes the appropriate action.
4.10 The target value is tip = 1 and the learning rate is one. The learning rule has 20 hidden nodes and a training time of 60 epochs during the evolution.
4.11 The target value is tip = 0 and the learning rate is one. The learning rule has 20 hidden nodes and a training time of 60 epochs during the evolution.
4.12 The learning process for the evolved rules from experiments A and F and using the delta rule on Chalmers' tasks.
4.13 The learning process for the evolved rules from experiments A and F and using the delta rule on the logical OR (top) and AND (bottom) functions.
5.1  Expected fitness landscapes for the step function using the mean step sizes of σ = 0, 0.1, 0.3, 0.6, 1, and 10.
5.2  Expected fitness landscapes for Rastrigin's function for the mean step sizes of σ = 0, 0.1, 0.3, 0.6, 1, 10.
5.3  Expected fitness landscapes for Shaffer's function with the mean step sizes σ = 0, 1, 5.
5.4  Shekel's foxholes function.
6.1  Stochastic ranking using a bubble-sort-like procedure, where U(0, 1) is a uniform random number generator and N is the number of sweeps going through the whole population. When Pf = 0 the ranking is an over-penalization and for Pf = 1 the ranking is an under-penalization. The initial ranking is always generated at random.
6.2  Expected fitness landscapes for Michalewicz and Koziel's function (fc12) for the mean step size σ = 0.2 and Pf = 0, 0.45. The dotted line represents the unconstrained function value.

Mathematical Symbols

Symbol      Short Description
n           search space dimension
x           vector of objective variables
x∗          global minimum point
λ           number of offspring individuals
µ           number of parent individuals
σ           mean step size
τo          learning rate, ϕ∗/√n
τ           individual step size learning rate, ϕ∗/√(2√n)
τ′          global step size learning rate, ϕ∗/√(2n)
ϕx          rate of progress in terms of distance metric
ϕf          rate of progress in terms of fitness
ϕ∗          expected rate of convergence
ω           connectionist model's connection strength
ψ           fitness function
φ           penalty function
rg          penalty coefficient
ℓ           length of string
E[·]        expectation value
P{·}        probability of an event
Pf          probability of using objective function in stochastic ranking
N(0, 1)     normally distributed random variable with 0 mean and variance 1
U, u        random number uniformly distributed over [0, 1]
G, Gm       total and median number of generations
(g)         generation counter

Glossary

Allele: alternative forms of a gene.

Building block: a short, above-average schema.

Building block hypothesis: states that a genetic algorithm using crossover works well when short, low-order, highly fit schemata recombine to form even more highly fit, higher-order schemata.

Chromosome: a chain of genes.

Gene: a unit of heredity.

Genome: the collection of genes held by the organism.

Locus: the position on a chromosome occupied by a gene.

Phenotype: the manifested attributes of an organism.

Schema: a description of a subset of strings that have similarities at certain positions. Schemata are defined as strings of length ℓ over the extended representation space I⋆ = I ∪ {⋆}, where ⋆ denotes a 'wildcard' matching any symbol in I.

Schema theorem: a theorem devised by Holland [Hol75] that gives a lower bound for the expected number of instances of a schema represented in the next generation of a canonical genetic algorithm using one-point crossover, mutation, and proportional selection; a standard statement of the bound is given below.
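
For reference, the bound behind the schema theorem entry above is usually written as follows. This is the standard textbook form for a canonical genetic algorithm with proportional selection, one-point crossover applied with probability p_c and bitwise mutation with probability p_m; it is the conventional statement, not a quotation from this thesis.

\[
E[m(H, g+1)] \;\ge\; m(H, g)\,\frac{f(H)}{\bar{f}}
\left[ 1 - p_c\,\frac{\delta(H)}{\ell - 1} - o(H)\,p_m \right]
\]

Here m(H, g) is the number of strings matching schema H in generation g, f(H) is their average fitness, \bar{f} is the population average fitness, δ(H) and o(H) are the defining length and order of the schema, and ℓ is the string length. The mutation term is often written (1 - p_m)^{o(H)}, of which o(H) p_m is the usual first-order approximation.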
