EVALife Technical Report no. 2002-02
Abstract
The particle swarm optimization (PSO) algorithm is a new population based search strategy, which has exhibited good performance on well-known numerical test problems. However, on strongly multi-modal test problems the PSO tends to suffer from premature convergence. This is due to a decrease of diversity in search space that leads to a total implosion and ultimately fitness stagnation of the swarm. An accepted hypothesis is that maintenance of high diversity is crucial for preventing premature convergence in multi-modal optimization.
We introduce the attractive and repulsive PSO (ARPSO) in trying to overcome the problem of premature convergence. It uses a diversity measure to control the swarm. The result is an algorithm that alternates between phases of attraction and repulsion. The performance of the ARPSO is compared to a basic PSO (bPSO) and a genetic algorithm (GA). The results show that the ARPSO prevents premature convergence to a high degree, but still keeps a rapid convergence like the basic PSO. Thus, it clearly outperforms the basic PSO as well as the implemented GA in multi-modal optimization.
Keywords
Particle Swarm Optimization, Diversity-Guided Search
1 Introduction
The PSO model is a new population based optimization strategy introduced by J. Kennedy et al. in 1995 (Kennedy95). It has already been shown to be comparable in performance with traditional optimization algorithms such as simulated annealing (SA) and the genetic algorithm (GA) (Angeline98; Eberhart98; Krink01; Vesterstrom01).
A major problem with evolutionary algorithms (EAs) in multi-modal optimization is premature convergence (PC), which results in great performance loss and sub-optimal solutions. As far as GAs are concerned, the main reason for premature convergence is a too high selection pressure or a too high gene flow between population individuals. With PSOs the fast information flow between particles seems to be the reason for clustering of particles. Diversity declines rapidly, leaving the PSO algorithm with great difficulties of escaping local optima. Consequently, the clustering leads to low diversity with a fitness stagnation as an overall result.
The problem with premature convergence will always persist, since we obviously must check the whole search-space in order to ensure that a result is not sub-optimal. In spite of this fact, and although the goals of maintaining high diversity and obtaining fast convergence
1 For more information on neighborhood topology we refer to (Kennedy99) and (Krink01).
2 For a more generalized velocity update formula we refer to (Kennedy99).
\mathrm{diversity}(S) = \frac{1}{|S| \cdot |L|} \cdot \sum_{i=1}^{|S|} \sqrt{\sum_{j=1}^{N} \left(p_{ij} - \bar{p}_{j}\right)^{2}}, \qquad (4)

where S is the swarm, |S| is the swarm size, |L| is the length of the longest diagonal in the search space, N is the dimensionality of the problem, p_{ij} is the j'th value of the i'th particle, and \bar{p}_{j} is the j'th value of the average point \bar{p}. Note that this diversity measure is independent of swarm size, the dimensionality of the problem as well as the search range in each dimension.
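Eq. 4 translates directly into code. A minimal sketch, assuming the swarm is stored as a NumPy matrix with one row per particle (function and variable names are ours, not from the report):

```python
import numpy as np

def diversity(swarm, diagonal_length):
    """Eq. 4: mean distance of the particles to the average point,
    normalized by |L|, the length of the longest diagonal of the
    search space.

    swarm: array of shape (|S|, N), one row per particle.
    diagonal_length: |L|.
    """
    avg_point = swarm.mean(axis=0)                         # p-bar
    dists = np.sqrt(((swarm - avg_point) ** 2).sum(axis=1))
    return dists.sum() / (len(swarm) * diagonal_length)
```

Because of the 1/(|S|·|L|) normalization, a fully collapsed swarm yields diversity 0 regardless of swarm size or search range.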
Finally, the velocity-update formula, eq. 2, is changed by multiplying the sign-variable dir onto the two last terms in it. This decides directly whether the particles attract or repel each other:

\vec{v}(t+1) = \omega \cdot \vec{v}(t) + dir \cdot \left( \phi_{1}(\vec{p}(t) - \vec{x}(t)) + \phi_{2}(\vec{g}(t) - \vec{x}(t)) \right). \qquad (5)

3 Experimental Settings
3.1 Benchmark Functions
We have tested the modified PSO model on four standard multi-modal objective functions. All four are widely known benchmark functions for testing the performance of different evolutionary optimization strategies such as evolutionary programming, simulated annealing, genetic algorithms and particle swarm optimization. The four test functions are:

Griewank, n-dimensional

f_{1}(\vec{x}) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_{i}^{2} - \prod_{i=1}^{n} \cos\left(\frac{x_{i}}{\sqrt{i}}\right), \qquad (6)

where −600 ≤ x_i ≤ 600

Ackley, n-dimensional

f_{2}(\vec{x}) = -20 \cdot \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_{i}^{2}}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi \cdot x_{i})\right) + 20 + e, \qquad (7)

where −30 ≤ x_i ≤ 30

Rosenbrock, n-dimensional

f_{3}(\vec{x}) = \sum_{i=1}^{n-1} \left( 100 \cdot (x_{i+1} - x_{i}^{2})^{2} + (x_{i} - 1)^{2} \right), \qquad (8)

where −100 ≤ x_i ≤ 100

Rastrigin, n-dimensional

f_{4}(\vec{x}) = \sum_{i=1}^{n} \left( x_{i}^{2} + 10 - 10 \cdot \cos(2\pi x_{i}) \right), \qquad (9)

where −5.12 ≤ x_i ≤ 5.12
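The four benchmarks translate directly into NumPy. A sketch (function names are ours); all four have a known global minimum of 0, attained at the origin for Griewank, Ackley and Rastrigin, and at (1, ..., 1) for Rosenbrock:

```python
import numpy as np

def griewank(x):
    # f1: sum-of-squares bowl modulated by a product of cosines
    i = np.arange(1, len(x) + 1)
    return 1.0 + (x ** 2).sum() / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def ackley(x):
    # f2: nearly flat outer region with many local minima around a deep hole
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt((x ** 2).sum() / n))
            - np.exp(np.cos(2.0 * np.pi * x).sum() / n) + 20.0 + np.e)

def rosenbrock(x):
    # f3: narrow curved valley leading to the optimum at (1, ..., 1)
    return (100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2).sum()

def rastrigin(x):
    # f4: regular grid of local minima superimposed on a quadratic bowl
    return (x ** 2 + 10.0 - 10.0 * np.cos(2.0 * np.pi * x)).sum()
```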
of time t, decreasing over the run length t_max, to emphasize local search towards the end of the test run.4 The diversity parameters d_low and d_high were set at 5.0·10−6 and 0.25, respectively.5
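The alternation between attraction and repulsion can be sketched as follows. The hysteresis rule and eq. 5 follow the text; the function names, and folding the stochastic coefficients of the basic PSO into phi1 and phi2, are our assumptions:

```python
D_LOW = 5.0e-6    # d_low: below this diversity, switch to repulsion
D_HIGH = 0.25     # d_high: above this diversity, switch back to attraction

def update_direction(direction, div):
    """Hysteresis rule: attract (dir = +1) until diversity collapses
    below d_low, then repel (dir = -1) until diversity recovers
    above d_high."""
    if direction > 0 and div < D_LOW:
        return -1
    if direction < 0 and div > D_HIGH:
        return 1
    return direction

def velocity_update(v, x, p, g, omega, phi1, phi2, direction):
    """Eq. 5: dir multiplies the two attraction terms, so dir = -1
    pushes particles away from both the personal best p and the
    global best g."""
    return omega * v + direction * (phi1 * (p - x) + phi2 * (g - x))
```

Because the two thresholds differ by several orders of magnitude, the swarm spends long stretches in each phase instead of oscillating around a single threshold.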
3.3 Genetic Algorithm
The genetic algorithm used in the comparison is more or less a straight-forward implementation of the standard, real-valued GA. It uses tournament-selection with a tournament size of 2, one-point crossover (applied to an individual with probability p_c) and a N(0, σ²)-distributed mutation operator (applied independently at each gene-coordinate with probability p_m). We optimized the variance parameter σ² to be a decreasing function of time to emphasize local search towards the end of the test run. The variance was either set to the linearly decreasing σ²(t) = 1 − t/t_max or to σ²(t) = 1/(1 + √t).
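A minimal sketch of the described operators for a real-valued, minimizing GA (function names and the list-based representation are ours):

```python
import random

def tournament_select(pop, fitness, k=2):
    """Tournament selection with tournament size k (minimization):
    pick k random individuals, return the fittest of them."""
    contestants = random.sample(range(len(pop)), k)
    return pop[min(contestants, key=lambda i: fitness[i])]

def one_point_crossover(a, b, p_c):
    """One-point crossover, applied to a pair with probability p_c;
    otherwise the parents are copied unchanged."""
    if random.random() < p_c and len(a) > 1:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(ind, p_m, sigma2):
    """N(0, sigma^2) mutation, applied independently at each
    gene-coordinate with probability p_m."""
    s = sigma2 ** 0.5
    return [g + random.gauss(0.0, s) if random.random() < p_m else g
            for g in ind]
```

The decreasing σ²(t) schedule would simply be evaluated once per generation and passed to `mutate`.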
3 See appendix D for further information on benchmark functions.
4 t_max denotes the time when the algorithm stops.
5 The settings are copied from [Urm01] and proved to be reasonable in preliminary tests.
6 I.e. the number of evaluations in the different dimensions is 20d = 400,000 evaluations, 50d = 1,000,000 evaluations, and 100d = 2,000,000 evaluations.
Performance on Griewank, Ackley, Rosenbrock and Rastrigin:

            Griewank      Ackley        Rosenbrock    Rastrigin
GA          8.51·10−2     0.376         199.41        45.36
bPSO        1.35·10−2     0.668         30.08         47.14
ARPSO       3.05·10−2     0.027         10.43         0.20·10−1
ARPSO*      2.99·10−2     0.96·10−2     0.116         0
7 See table 8.2 on the Griewank function.