
Hybrid Particle Swarm-Based-Simulated Annealing Optimization Techniques

Nasser Sadati, Hamid Reza Feyz Mahdavian, Majid Zamani

Intelligent Systems Laboratory, Electrical Engineering Department, Sharif University of Technology, Tehran, IRAN

Abstract - Particle Swarm Optimization (PSO) algorithms have recently been introduced as intelligent optimizers with several highly desirable attributes. In this paper, two new hybrid Particle Swarm Optimization schemes are proposed. The proposed hybrid algorithms are based on using Particle Swarm Optimization techniques in conjunction with the Simulated Annealing (SA) approach. By simulating three different test functions, it is shown how the proposed hybrid algorithms offer the capability of converging toward the global minimum or maximum points. More importantly, the simulation results indicate that the proposed hybrid particle swarm-based simulated annealing approaches have much better convergence characteristics than previously developed PSO methods.

I. INTRODUCTION

PSO is a particle swarm optimization algorithm for global optimization that was originally introduced by Kennedy and Eberhart in 1995 [1], [2]. This approach differs from other well-known Evolutionary Algorithms (EA) that have already been developed, as shown in [1], [3]-[6], in that no operators inspired by evolutionary procedures are applied to the population to generate new promising solutions. Instead, in PSO each individual of the population, called a particle, adjusts its trajectory toward its own previous best position and toward the previous best position attained by any member of its topological neighborhood [7]; the population itself is called the swarm. In the global variant of PSO, the whole swarm is considered as the neighborhood. Thus, global sharing of information takes place and the particles profit from the discoveries and previous experience of all other companions during the search for promising regions of the landscape. For example, in the single-objective minimization case, such regions possess lower function values than others visited previously. Nevertheless, PSO can have difficulty in reaching the global minimum or maximum points.

The simulated annealing (SA) algorithm is a global optimization method that mimics the way a thermal system in nature is cooled down to its lowest-energy states, and it has an explicit strategy for escaping local minima [8]. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them a greater chance of finding configurations with lower internal energy than the initial one.



The SA algorithm can find the global minimum by means of a stochastic, probability-based search, and it guarantees that a global minimum will be found provided the parameter space is sampled infinitely many times during the annealing period.

II. PARTICLE SWARM OPTIMIZATION

Assuming that the search space is D-dimensional, the i-th particle of the swarm is represented by the D-dimensional vector X_i = (x_i1, x_i2, ..., x_iD), and the best particle in the swarm, i.e. the particle with the smallest function value, is denoted by the index g, with P_g = (p_g1, p_g2, ..., p_gD). The best previous position of the i-th particle is recorded and represented as P_i = (p_i1, p_i2, ..., p_iD), while the position change (velocity) of the i-th particle is represented as V_i = (v_i1, v_i2, ..., v_iD), which is clamped to a maximum velocity V_max = (v_max,1, v_max,2, ..., v_max,D) specified by the user. Following this notation, the particles are manipulated according to the following equations:

    v_id = w * v_id + c1 * rand(.) * (p_id - x_id) + c2 * rand(.) * (p_gd - x_id)    (1)

    x_id = x_id + v_id    (2)

where w is the inertia weight [9], c1 and c2 are the acceleration constants which influence the convergence speed of each particle [10], and rand(.) is a random number in the range [0, 1]. In equation (1), the first part represents the inertia of the previous velocity; the second part is the "cognition" part, which represents the particle's private thinking; and the third part is the "social" part, which represents the cooperation among the particles [11]. If the summation in (1) would cause the velocity v_id on some dimension to exceed v_max,d, then v_id is limited to v_max,d. V_max determines the resolution with which regions between the present position and the target position are searched [11], [12]. If V_max is too large, the particles might fly past good solutions; if V_max is too small, the particles may not explore sufficiently beyond local solutions. In many applications of PSO, V_max is set to the maximum of the dynamic range of the variables on each dimension. The constants c1 and c2 represent the weighting of the stochastic acceleration terms that pull each particle toward the p_i and p_g positions. Low values allow particles to roam far from the target regions before being tugged back, while high values result in abrupt movement toward, or past, the target regions. Hence, the acceleration constants c1 and c2 are often set to 2.0, based on past experience [3]. Suitable selection of the inertia weight w provides a balance between global and local exploration, and thus requires fewer iterations on average to find a sufficiently optimal solution. As originally developed, w often decreases linearly from about 0.9 to 0.4 during a run. In general, the inertia weight w is set according to the following equation:

    w = w_max - ((w_max - w_min) / iter_max) * iter    (3)

where iter_max represents the maximum number of iterations, iter is the current number of iterations (generations), and w_max and w_min are the maximum and minimum weight values, respectively. From the above discussion, it is obvious that PSO resembles, to some extent, the "mutation" operator of Genetic Algorithms through the position update equations (1) and (2). It should be noted, however, that in PSO the "mutation" operator is guided by each particle's own "flying" experience and benefits from the swarm's "flying" experience. In other words, PSO can be considered as performing mutation with a "conscience", as pointed out by Eberhart and Shi [13].
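As a concrete illustration of equations (1)-(3), the short Python sketch below performs one velocity/position update for a whole swarm. The function names, the use of NumPy arrays, and the default constants are illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np

def inertia_weight(it, it_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, equation (3)."""
    return w_max - (w_max - w_min) / it_max * it

def pso_update(x, v, pbest, gbest, w, c1=2.0, c2=2.0, v_max=None):
    """One PSO iteration: velocity update (1), optional clamping to V_max, position update (2)."""
    r1 = np.random.rand(*x.shape)   # rand(.) of the "cognition" term
    r2 = np.random.rand(*x.shape)   # rand(.) of the "social" term
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if v_max is not None:
        v = np.clip(v, -v_max, v_max)   # limit each component to v_max
    return x + v, v
```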

III. SIMULATED ANNEALING

As its name implies, simulated annealing (SA) exploits an analogy between the way a metal cools into a minimum-energy crystalline structure (the annealing process) and the search for a minimum in a more general system. Computationally, simulated annealing consists of four basic components: 1) a new state generator, which generates a new solution based only on the previous one; the changes to the previous solution are randomly generated in each iteration, with a Cauchy- or Gaussian-type distribution; 2) an acceptance function, which accepts or rejects the new solution based on the change of the cost function; the new solution is always accepted if the cost function decreases, otherwise it is accepted randomly with a specified probability; 3) a temperature schedule, which determines how the temperature is cooled and generates the temperature value used in the next iteration; and 4) a stop criterion, which determines when to stop the temperature cooling and finish the optimization.

SA has proved to be an effective global optimization algorithm because of the following advantages: 1) it is applicable to a wide range of problems, 2) it places no restriction on the form of the cost function, 3) it has a high probability of finding the global optimum, and 4) it is easy to implement in software. However, SA is not universal, and its performance depends mainly on four "enough" conditions: 1. the initial temperature is high enough, 2. the temperature is cooled slowly enough, 3. the parameter space is sampled often enough, and 4. the stop temperature is low enough. These requirements make SA converge very slowly in most cases.
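As an illustration of these four components, here is a minimal simulated-annealing sketch in Python. The geometric cooling factor, the Gaussian state generator, and the fixed number of moves per temperature are assumptions made for the example; the paper does not prescribe a particular schedule.

```python
import math
import random

def simulated_annealing(cost, x0, t0=100.0, t_stop=1e-3, alpha=0.95,
                        moves_per_temp=50, step=0.1):
    """Minimal SA loop: state generator, Metropolis acceptance, cooling, stop criterion."""
    x, fx = list(x0), cost(x0)
    best_x, best_f = x, fx
    t = t0
    while t > t_stop:                                        # 4) stop criterion
        for _ in range(moves_per_temp):
            cand = [xi + random.gauss(0.0, step) for xi in x]  # 1) new state generator
            fc = cost(cand)
            # 2) acceptance: always accept improvements, otherwise accept with prob exp(-dE/T)
            if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x, fx
        t *= alpha                                           # 3) temperature schedule
    return best_x, best_f
```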

IV. PARTICLE SWARM-BASED-SIMULATED ANNEALING

PSO-B-SA is an optimization algorithm that combines PSO with SA; by combining the two, the strong points of SA can be exploited within PSO. This is the basic idea of PSO-B-SA. The PSO-B-SA search starts by initializing a group of random particles. In this paper, if only p_g, the leader of the swarm, is refined by SA, independently of the other particles, the algorithm is called PSO-B-SA1; if all particles are refined by SA, so that a whole group of new individuals is generated, the algorithm is called PSO-B-SA2. In either case, the particles of the new generation are obtained by transforming each particle's velocity and position according to equations (1) and (2). This process evolves through time until the terminating condition is satisfied. In the simulated annealing step, the new individuals are generated randomly around the original individuals; here a parameter r_t is assigned to each particle:

    present = present + r_t * rand(.)    (4)

where rand(.) is a random number between 0 and 1. Now, to find the global minimum of the optimization problem

    min f(x_1, x_2, ..., x_n)    s.t.  x_i in [a_i, b_i],  i = 1, 2, ..., n    (5)

the steps of the particle swarm-based-simulated annealing optimization are as follows:

1) Initialize a group of particles (of size m), including random positions and velocities.
2) Evaluate each particle's fitness.
3) If the chosen algorithm is PSO-B-SA1, then p_g is refined by SA independently and a new global best position p_g is obtained. If the algorithm is PSO-B-SA2, then each particle is refined by SA independently and a group of new individuals is obtained.
4) For each particle, compare its fitness with its personal best position p_i; if its fitness is better, replace p_i with the current position.
5) For each particle, compare its fitness with the global best position p_g; if its fitness is better, replace p_g with the current position.
6) Update each particle's velocity and position according to expressions (1) and (2).
7) Repeat this process until the terminating condition is satisfied.

The proposed approaches are used to optimize three different test functions, as described in Tables I and II; a sketch of this loop is given below.
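The following Python sketch is an illustrative reconstruction of the PSO-B-SA2 variant of the above steps, not the authors' code. The geometric cooling schedule, the initial temperature, the goal-based termination test, and the re-evaluation of fitness after the PSO move are assumptions made to keep the example self-contained.

```python
import numpy as np

def pso_b_sa2(cost, dim, lo, hi, m=40, iters=10000, goal=None,
              w=0.729, c1=1.494, c2=1.494, r_t=0.01, t0=1.0, alpha=0.95):
    """Sketch of PSO-B-SA2: every particle gets an SA-style perturbation (eq. 4)
    and Metropolis acceptance before the usual PSO update (eqs. 1-2)."""
    x = np.random.uniform(lo, hi, (m, dim))       # step 1: random positions
    v = np.random.uniform(lo, hi, (m, dim))       # ...and velocities
    f = np.array([cost(p) for p in x])            # step 2: evaluate fitness
    pbest, fp = x.copy(), f.copy()                # personal bests
    g_idx = fp.argmin()
    gbest, fg = pbest[g_idx].copy(), fp[g_idx]    # global best
    t = t0
    for _ in range(iters):
        # step 3 (PSO-B-SA2): perturb every particle around its position (eq. 4)
        cand = x + r_t * np.random.rand(m, dim)
        fc = np.array([cost(p) for p in cand])
        delta = fc - f
        accept = (delta < 0) | (np.random.rand(m) < np.exp(-np.maximum(delta, 0.0) / t))
        x[accept], f[accept] = cand[accept], fc[accept]
        # steps 4-5: update personal and global bests
        better = f < fp
        pbest[better], fp[better] = x[better], f[better]
        if fp.min() < fg:
            g_idx = fp.argmin()
            gbest, fg = pbest[g_idx].copy(), fp[g_idx]
        if goal is not None and fg <= goal:       # step 7: terminating condition
            break
        # step 6: PSO velocity/position update, equations (1) and (2)
        r1, r2 = np.random.rand(m, dim), np.random.rand(m, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([cost(p) for p in x])        # re-evaluate after the move
        t *= alpha                                # cool the SA temperature
    return gbest, fg
```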

TABLE I
THE TEST FUNCTIONS

Sphere:      F_Sph(x) = sum_{i=1..n} x_i^2

Rosenbrock:  F_Ros(x) = sum_{i=1..n-1} [ 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ]

Rastrigin:   F_Ras(x) = sum_{i=1..n} [ x_i^2 - 10*cos(2*pi*x_i) + 10 ]

TABLE II
PARAMETERS OF THE TEST FUNCTIONS

Function     Dim.   Initial Range      Goal
Sphere        30    [-100; 100]^n      0.01
Rosenbrock    30    [-30; 30]^n        100
Rastrigin     30    [-5.12; 5.12]^n    100
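For reference, the three benchmark functions of Table I can be written directly in Python as below. This is a plain transcription of the formulas; the use of NumPy and the function names are implementation choices, not something specified in the paper.

```python
import numpy as np

def sphere(x):
    """F_Sph(x) = sum_i x_i^2; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)

def rosenbrock(x):
    """F_Ros(x) = sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]; minimum 0 at (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rastrigin(x):
    """F_Ras(x) = sum_i [x_i^2 - 10 cos(2 pi x_i) + 10]; multimodal, global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
```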

V. SIMULATION RESULTS

In this section, the experiments that have been done to compare the different variants of PSO with PSO-B-SA1 and PSO-B-SA2 for continuous function optimization are described. Since we are interested in understanding whether the proposed modifications of the standard PSO algorithm can improve its performance, we have focused our experimental evaluation on the comparison with other PSO algorithms. It should be noted, however, that PSO is known to be a competitive method which often produces results that are comparable to, or even better than, those produced by other metaheuristics (e.g., see [14]). In all our experiments, the PSO algorithms use the parameter values w = 0.729 and c1 = c2 = 1.494, as recommended in [15], unless stated otherwise. Each run has been repeated 100 times and the average results are presented. The particles have been initialized with random positions and velocities where, in both cases, the values in every dimension have been chosen according to a uniform distribution over the initial range [X_min, X_max]. The values of X_min and X_max depend on the objective function. During a run of an algorithm, the position and velocity of a particle are not restricted to the initialization intervals, but a maximum velocity V_max = X_max has been used for every component of the velocity vector v_i. The set of test functions (see Table I) contains functions that are commonly used in the field of continuous function optimization. Table II shows the values that have been used for the dimension of these functions, the range of the corresponding initial positions and velocities of the particles, and the goals that have to be achieved by the algorithms. The first two functions (Sphere and Rosenbrock) are unimodal functions (i.e., they have a single local optimum that is also the global optimum) and the remaining function (Rastrigin) is multimodal (i.e., it has several local optima). All tests have been run over 10000 iterations. The swarm size used in the experiments is 40 (m = 40). In this experiment, the number of iterations required to reach a certain goal for each test function has been determined, comparing PSO-g, PSO-l, H-PSO, AH-PSO, VH-PSO, PSO-B-SA1 and PSO-B-SA2. We used the results of [16] for the different variants of PSO. For PSO-g, PSO-l and H-PSO, two parameter sets have been used, both taken from the literature. One parameter set is w = 0.6 and c1 = c2 = 1.7, as suggested in [15] for a faster convergence rate. The other parameter set has the common parameter values w = 0.729 and c1 = c2 = 1.494. Algorithms that use the first (respectively, second) set of parameter values are denoted by appending "-a" (respectively, "-b") to their names; e.g., PSO-g-a denotes PSO-g with the first parameter set.

A. Significance

In the experiments done using the above algorithms, the number of iterations required to reach a specified goal for each function was recorded. If the goal could not be reached within the maximum number of 10000 iterations, the run was considered unsuccessful. The success rate denotes the percentage of successful runs. For the successful runs, the average, median, maximum, and minimum number of iterations required to achieve the goal value were calculated. The expected number of iterations has been determined as (average / success rate); for example, PSO-g-a reaches the Rastrigin goal in 104 iterations on average with a success rate of 0.98, giving an expected value of about 106.1 iterations. The results are all shown in Table III.

B. Results

The proposed approaches are first compared with the standard PSO; the results are shown in Fig. 1. Moreover, PSO-B-SA1 and PSO-B-SA2 are compared with the other PSO algorithms. The comparison is based on the number of iterations required to reach a certain goal for each of the test functions. The algorithms all use a swarm size of m = 40. Table III shows the average, median, maximum and minimum number of iterations required to achieve the goal value, together with the success rate and the expected number of iterations (average / success rate) to reach the goal. PSO-B-SA2 is the fastest algorithm, achieving the desired goal in the fewest iterations; only for the Sphere function does PSO-B-SA1 require a lower average (9.6 iterations, against 10 for PSO-B-SA2). For all other test functions, PSO-B-SA2 obtains the lowest average and expected number of iterations required to reach the goal.

[Figure 1: Solution quality over iterations on the test functions (Rastrigin and Rosenbrock panels) for PSO, PSO-B-SA1 and PSO-B-SA2, with w = 0.729, c1 = c2 = 1.494, r_t = 0.01 and swarm size m = 40.]

VI. CONCLUSION

In this paper, two novel approaches for optimization based on Particle Swarm and Simulated Annealing optimization techniques are presented. The particle swarm-based-simulated annealing approach can narrow the field of search and continually speed up the rate of convergence during the optimization process, as shown in Table III. It is shown that the proposed approaches have higher search efficiency, and they can also escape from local minima, as shown in Fig. 1. The two algorithms are applied to several test functions as optimization problems. The PSO-B-SA1 algorithm is faster than PSO-B-SA2, but PSO-B-SA2 has better convergence than PSO-B-SA1. However, both require a much smaller number of iterations and have faster convergence rates than the other PSO algorithms. It is shown that PSO-B-SA2 can be used as a new and promising technique for solving optimization problems.

VII. REFERENCES

[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Networks, vol. 4, 1995, pp. 1942-1947.
[2] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proc. 6th Int. Symposium on Micro Machine and Human Science, 1995, pp. 39-43.
[3] R. C. Eberhart, P. Simpson, and R. Dobbins, Computational Intelligence PC Tools, Academic Press, 1996, pp. 212-226.
[4] J. Kennedy and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann Publishers, 2001.
[5] R. C. Eberhart and Y. Shi, "Comparison between genetic algorithms and particle swarm optimization," in Proc. of the 7th Annual Conf. on Evolutionary Programming, 1998, pp. 611-616.
[6] P. J. Angeline, "Evolutionary optimization versus particle swarm optimization: philosophy and performance difference," in Proc. of the 7th Annual Conf. on Evolutionary Programming, 1998, pp. 601-610.
[7] J. Kennedy, "The behavior of particles," Evolutionary Programming VII, 1998, pp. 581-587.
[8] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, 1983, pp. 671-680.
[9] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proc. IEEE Int. Conf. on Evolutionary Computation, 1998, pp. 69-73.
[10] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proc. IEEE Int. Conf. on Evolutionary Computation, 2001, pp. 81-86.
[11] J. Kennedy, "The particle swarm: social adaptation of knowledge," in Proc. IEEE Int. Conf. on Evolutionary Computation, 1997, pp. 303-308.
[12] Y. Shi and R. Eberhart, "Parameter selection in particle swarm optimization," in Proc. of the 7th Annual Conf. on Evolutionary Programming, 1998, pp. 591-600.
[13] R. Eberhart and Y. H. Shi, "Evolving artificial neural networks," in Proc. Int. Conf. on Neural Networks and Brain, 1998.
[14] J. Kennedy and W. M. Spears, "Matching algorithms to problems: an experimental test of the particle swarm and some genetic algorithms on the multimodal problem generator," in Proc. Int. Conf. Evolutionary Computation, 1998, pp. 78-83.
[15] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Inform. Process. Lett., vol. 85, 2003.
[16] S. Janson and M. Middendorf, "A hierarchical particle swarm optimizer and its adaptive variant," IEEE Trans. on Systems, Man, and Cybernetics, vol. 35, 2005.

TABLE III
STEPS REQUIRED TO ACHIEVE A CERTAIN GOAL FOR PSO-g, PSO-l, H-PSO, AH-PSO, VH-PSO, PSO-B-SA1 AND PSO-B-SA2. AVERAGE (AVG), MEDIAN (MED), MAXIMUM (MAX), MINIMUM (MIN) AND EXPECTED (EXP := AVG/SUCC) NUMBER OF ITERATIONS; "-a" AND "-b" DENOTE THE PARAMETER SETS DESCRIBED IN SECTION V.

Sphere
Algorithm         Avg     Med    Max    Min   Succ   Exp
(1)  PSO-g-a      309.4   303    530    238   1      309.4
(2)  PSO-l-a      449.4   452    482    406   1      449.4
(3)  H-PSO-a      360     361    414    301   1      360
(4)  PSO-g-b      363     355    539    289   1      363
(5)  PSO-l-b      563.2   563    625    516   1      563.2
(6)  H-PSO-b      453.9   453    529    388   1      453.9
(7)  AH-PSO       351.5   348    503    301   1      351.5
(8)  VH-PSO       209.6   205    319    173   1      209.6
(9)  PSO-B-SA1    9.6     10     25     3     1      9.6
(10) PSO-B-SA2    10      9      14     5     1      10

Rastrigin
Algorithm         Avg     Med    Max    Min   Succ   Exp
(1)  PSO-g-a      104     101.5  191    64    .98    106.1
(2)  PSO-l-a      185.7   176    611    102   .99    187.6
(3)  H-PSO-a      432.7   291    2482   95    .99    437.1
(4)  PSO-g-b      142     132    282    82    .98    144.9
(5)  PSO-l-b      270     220.5  3657   119   1      270
(6)  H-PSO-b      500.9   367.5  1703   142   1      500.9
(7)  AH-PSO       151.2   127    677    65    1      151.2
(8)  VH-PSO       184.4   117    4192   52    1      184.4
(9)  PSO-B-SA1    147     148    231    98    1      147
(10) PSO-B-SA2    38.8    36     124    13    1      38.8

Rosenbrock
Algorithm         Avg     Med    Max    Min   Succ   Exp
(1)  PSO-g-a      497.1   302.5  4189   195   1      497.1
(2)  PSO-l-a      704.3   497    7851   326   1      704.3
(3)  H-PSO-a      528.3   340.5  5924   247   1      528.3
(4)  PSO-g-b      641.2   382.5  5165   230   1      641.2
(5)  PSO-l-b      798     615    5164   408   1      798
(6)  H-PSO-b      780.2   471    5315   295   1      780.2
(7)  AH-PSO       702     369.5  4183   238   1      702
(8)  VH-PSO       352.7   262.5  2251   144   1      352.7
(9)  PSO-B-SA1    270.2   260    389    203   1      270.2
(10) PSO-B-SA2    52.64   54     132    37    1      52.64
