Swarm Intelligence
虞台文
Content Overview
– Particle Swarm Optimization (PSO)
  – Example
– Ant Colony Optimization (ACO)
Swarm Intelligence
Overview
Swarm Intelligence
Collective system capable of accomplishing difficult tasks in dynamic and varied environments without any external guidance or control and with no central coordination
Achieving a collective performance which could not normally be achieved by an individual acting alone
Constituting a natural model particularly suited to distributed problem solving
Swarm Intelligence
http://www.scs.carleton.ca/~arpwhite/courses/95590Y/notes/SI%20Lecture%203.pdf
Swarm Intelligence: Particle Swarm Optimization (PSO)
Basic Concept
The Inventors
Russell Eberhart (electrical engineer) and James Kennedy (social psychologist)
Particle Swarm Optimization (PSO)
PSO is a robust stochastic optimization technique based on the movement and intelligence of swarms. It applies the concept of social interaction to problem solving. PSO was developed in 1995 by James Kennedy and Russell Eberhart.
PSO Search Scheme
It uses a number of agents, i.e., particles, that constitute a swarm moving around in the search space looking for the best solution.
Each particle is treated as a point in an N-dimensional space which adjusts its “flying” according to its own flying experience as well as the flying experience of other particles.
Particle Flying Model
pbest: the best solution achieved so far by that particle.
gbest: the best value obtained so far by any particle in the neighborhood of that particle.
The basic concept of PSO lies in accelerating each particle toward its pbest and the gbest locations, with a random weighted acceleration at each time step.
Particle Flying Model
[Diagram: a particle at position $s^k$ with velocity $v^k$ moves to position $s^{k+1}$ with velocity $v^{k+1}$; $d^k_{pbest}$ and $d^k_{gbest}$ denote the displacements from $s^k$ toward $pbest^k$ and $gbest^k$.]
$$\Delta v^k = w_1\, d^k_{pbest} + w_2\, d^k_{gbest}, \qquad w_1 = c_1 \cdot \mathrm{rand}(),\; w_2 = c_2 \cdot \mathrm{rand}()$$
Particle Flying Model
Each particle tries to modify its position using the following information:
– the current position,
– the current velocity,
– the distance between the current position and pbest,
– the distance between the current position and gbest.
Particle Flying Model
$$s_i^{k+1} = s_i^k + v_i^{k+1}$$
$$v_i^{k+1} = v_i^k + \Delta v_i^k$$
$$\Delta v_i^k = c_1 \cdot \mathrm{rand}() \cdot (pbest_i - s_i^k) + c_2 \cdot \mathrm{rand}() \cdot (gbest - s_i^k)$$
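As a concrete illustration, a minimal per-particle update following these equations might look like this in Python, assuming NumPy arrays for positions and velocities and the acceleration coefficients $c_1 = c_2 = 2$ (the coefficient values are an assumption, not taken from the slides):

```python
import numpy as np

def update_particle(s, v, pbest, gbest, c1=2.0, c2=2.0):
    """One flying step for a single particle (1-D NumPy arrays of equal length).
    c1 and c2 are assumed acceleration coefficients, not specified on the slides."""
    # Delta v^k = c1*rand()*(pbest - s^k) + c2*rand()*(gbest - s^k)
    dv = c1 * np.random.rand() * (pbest - s) + c2 * np.random.rand() * (gbest - s)
    v_new = v + dv        # v^{k+1} = v^k + Delta v^k
    s_new = s + v_new     # s^{k+1} = s^k + v^{k+1}
    return s_new, v_new
```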
PSO Algorithm
For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            Set current value as the new pbest
    End
    Choose the particle with the best fitness value of all the particles as the gbest
    For each particle
        Calculate particle velocity according to equation (*)
        Update particle position according to equation (**)
    End
While maximum iterations or minimum error criterion is not attained

(*)  $v_i^{k+1} = v_i^k + \Delta v_i^k$, with $\Delta v_i^k = c_1 \cdot \mathrm{rand}() \cdot (pbest_i - s_i^k) + c_2 \cdot \mathrm{rand}() \cdot (gbest - s_i^k)$
(**) $s_i^{k+1} = s_i^k + v_i^{k+1}$
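Putting the pieces together, here is a minimal runnable sketch of the algorithm above in Python (maximization, with gbest taken as the best over the whole swarm). The swarm size, iteration count, search bounds, acceleration coefficients, and the clipping of positions to the bounds are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def pso(fitness, dim, n_particles=30, n_iters=100, bounds=(-500.0, 500.0),
        c1=2.0, c2=2.0):
    """Minimal PSO loop following the pseudocode above (maximizing `fitness`).
    All parameter defaults are illustrative assumptions."""
    lo, hi = bounds
    s = np.random.uniform(lo, hi, size=(n_particles, dim))   # initialize positions
    v = np.zeros((n_particles, dim))                          # initialize velocities
    pbest = s.copy()
    pbest_val = np.array([fitness(x) for x in s])
    gbest = pbest[np.argmax(pbest_val)].copy()

    for _ in range(n_iters):
        for i in range(n_particles):
            f = fitness(s[i])
            if f > pbest_val[i]:                              # better than pbest in history
                pbest_val[i], pbest[i] = f, s[i].copy()
        gbest = pbest[np.argmax(pbest_val)].copy()            # best particle of the swarm
        for i in range(n_particles):
            dv = (c1 * np.random.rand() * (pbest[i] - s[i])   # equation (*)
                  + c2 * np.random.rand() * (gbest - s[i]))
            v[i] = v[i] + dv
            s[i] = np.clip(s[i] + v[i], lo, hi)               # equation (**), kept in bounds
    return gbest, float(np.max(pbest_val))
```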
Swarm Intelligence: Particle Swarm Optimization (PSO)
Examples
Schwefel's Function
$$f(x) = \sum_{i=1}^{n} -x_i \sin\!\left(\sqrt{|x_i|}\right), \qquad -500 \le x_i \le 500$$

Global minimum: $f(x) = -418.9829\,n$ at $x_i = 420.9687$, $i = 1, \ldots, n$.
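To tie this example to the PSO sketch above: the summary table below reports gBest climbing toward 418.9829·n ≈ 837.9658 for n = 2, which corresponds to maximizing the negated function, −f(x) = Σ x_i sin(√|x_i|). The following snippet makes that assumption explicit; the swarm parameters in the commented usage are again illustrative:

```python
import numpy as np

def schwefel_score(x):
    """-f(x) = sum_i x_i * sin(sqrt(|x_i|)); its maximum 418.9829*n is at x_i = 420.9687."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * np.sin(np.sqrt(np.abs(x)))))

# Hypothetical usage with the pso() sketch above, for n = 2 as in the summary table:
# gbest, best_val = pso(schwefel_score, dim=2, n_particles=30, n_iters=500)
# print(gbest, best_val)   # best_val should approach 837.9658
```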
[Simulation figures: snapshots at initialization and after 5, 10, 15, 20, 25, 100, and 500 generations.]
Summary
Iterations    gBest
0             416.245599
5             515.748796
10            759.404006
15            793.732019
20            834.813763
100           837.911535
5000          837.965771
Optimum       837.9658

[Plot: gBest versus iterations (log-scale axis, 1 to 4096), plotted from "sample.dat"]
Exercises
– Compare PSO with GA.
– Can we use PSO to train neural networks? How?
Ant Colony Optimization (ACO)
Facts
– Many discrete optimization problems are difficult to solve, e.g., NP-hard problems.
– Soft computing techniques to cope with these problems:
  – Simulated Annealing (SA): based on physical systems
  – Genetic Algorithm (GA): based on natural selection and genetics
  – Ant Colony Optimization (ACO): modeling ant colony behavior
Ant Colony Optimization
Background
Introduced by Marco Dorigo (Milan, Italy) and others in the early 1990s.
A probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.
ACO algorithms are inspired by the behaviour of ants in finding paths from the colony to food.
Typical Applications
– Traveling Salesman Problem (TSP)
– Quadratic assignment problems
– Scheduling problems
– Dynamic routing problems in networks
Natural Behavior of Ants
ACO Concept
– Ants (blind) navigate from the nest to a food source.
– The shortest path is discovered via pheromone trails:
  – each ant moves at random, probabilistically
  – pheromone is deposited on the path
  – ants detect the lead ant’s path and are inclined to follow it, i.e., more pheromone on a path increases the probability of that path being followed
ACO System
– Virtual “trail” accumulated on path segments
– Starting node selected at random
– Path selection philosophy:
  – based on the amount of “trail” present on possible paths from the starting node
  – higher probability for paths with more “trail”
– Ant reaches the next node, selects the next path
– Continues until the goal (e.g., the starting node for TSP) is reached
– A finished “tour” is a solution

ACO System, cont.
– A completed tour is analyzed for optimality
– “Trail” amount is adjusted to favor better solutions:
  – better solutions receive more trail, worse solutions receive less trail
  – higher probability of an ant selecting a path that is part of a better-performing tour
– A new cycle is performed
– Repeated until most ants select the same tour on every cycle (convergence to a solution)
Ant Algorithm for TSP
Randomly position m ants on n cities
Loop
    for step = 1 to n
        for k = 1 to m
            Choose the next city to move to by applying
            a probabilistic state transition rule (to be described)
        end for
    end for
    Update pheromone trails
Until End_condition
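A Python skeleton of this loop might look as follows. The helper names transition_probabilities() and update_pheromone() are hypothetical; they are sketched after the transition-rule and pheromone-update slides below. The distance matrix d and all parameter defaults (number of ants, cycles, rho, Q, alpha, beta, initial trail C) are illustrative assumptions:

```python
import numpy as np

def tour_length(tour, d):
    """Total length of a closed tour (returning to the starting city)."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(d, n_ants=10, n_cycles=100, rho=0.5, Q=100.0, alpha=1.0, beta=2.0, C=0.1):
    """Ant algorithm for the TSP following the pseudocode above.
    d is an (n x n) symmetric distance matrix; all defaults are illustrative."""
    n = len(d)
    tau = np.full((n, n), C)                      # tau_ij(0) = C for all i, j
    best_tour, best_len = None, float("inf")
    for _ in range(n_cycles):                     # Loop ... Until End_condition
        tours = []
        for _k in range(n_ants):                  # for k = 1 to m
            tour = [np.random.randint(n)]         # ants positioned on random cities
            visited = set(tour)
            for _step in range(n - 1):            # for step = 1 to n
                i = tour[-1]
                allowed = [j for j in range(n) if j not in visited]
                p = transition_probabilities(i, allowed, tau, d, alpha, beta)
                j = int(np.random.choice(allowed, p=p))   # probabilistic state transition rule
                tour.append(j)
                visited.add(j)
            tours.append(tour)
            length = tour_length(tour, d)
            if length < best_len:
                best_tour, best_len = tour, length
        tau = update_pheromone(tau, tours, d, rho, Q)      # Update pheromone trails
    return best_tour, best_len
```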
Pheromone Intensity
$\tau_{ij}(t)$: the pheromone intensity on edge $(i, j)$ at time $t$, initialized as $\tau_{ij}(0) = C$, $\forall\, i, j$.
Ant Transition Rule
Probability of ant $k$ going from city $i$ to city $j$:

$$p_{ij}^k(t) = \frac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in J_i^k} [\tau_{il}(t)]^{\alpha}\,[\eta_{il}]^{\beta}}, \qquad j \in J_i^k$$

$J_i^k$: the set of nodes applicable to ant $k$ at city $i$
$\eta_{ij} = 1/d_{ij}$: the visibility of city $j$ from city $i$
$\alpha, \beta \ge 0$
Ant Transition Rule, cont.
$\alpha = 0$: a greedy approach; only the visibility $\eta_{ij} = 1/d_{ij}$ matters, so the nearest city tends to be selected.
$\beta = 0$: only pheromone matters, giving a rapid selection of tours that may not be optimal.
Thus, a tradeoff between $\alpha$ and $\beta$ is necessary.
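A direct transcription of this rule into Python, as assumed by the skeleton above (the function name and signature are illustrative):

```python
import numpy as np

def transition_probabilities(i, allowed, tau, d, alpha=1.0, beta=2.0):
    """p_ij = tau_ij^alpha * eta_ij^beta / sum_{l in allowed} tau_il^alpha * eta_il^beta,
    with visibility eta_ij = 1 / d_ij."""
    weights = np.array([(tau[i][j] ** alpha) * ((1.0 / d[i][j]) ** beta) for j in allowed])
    return weights / weights.sum()
```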
Pheromone Update
$$\Delta\tau_{ij}^k(t) = \begin{cases} Q / L_k(t) & \text{if } (i, j) \in T_k(t) \\ 0 & \text{otherwise} \end{cases}$$

$$\Delta\tau_{ij}(t) = \sum_{k=1}^{m} \Delta\tau_{ij}^k(t)$$

$$\tau_{ij}(t+1) = (1 - \rho)\,\tau_{ij}(t) + \Delta\tau_{ij}(t)$$

$Q$: a constant
$T_k(t)$: the tour of ant $k$ at time $t$
$L_k(t)$: the tour length for ant $k$ at time $t$
$\rho$: the pheromone evaporation rate
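And the corresponding update step, again as an illustrative sketch that plugs into the skeleton above (it reuses tour_length() from that sketch; depositing on both directions of each edge assumes a symmetric TSP):

```python
import numpy as np

def update_pheromone(tau, tours, d, rho=0.5, Q=100.0):
    """tau_ij(t+1) = (1 - rho) * tau_ij(t) + sum_k delta_tau_ij^k(t), where
    delta_tau_ij^k = Q / L_k if edge (i, j) lies on ant k's tour T_k, else 0."""
    delta = np.zeros_like(tau)
    for tour in tours:
        L = tour_length(tour, d)                  # tour length L_k for ant k
        for step in range(len(tour)):
            i, j = tour[step], tour[(step + 1) % len(tour)]
            delta[i][j] += Q / L                  # deposit on edge (i, j)
            delta[j][i] += Q / L                  # symmetric TSP assumption
    return (1.0 - rho) * tau + delta
```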
Demonstration