Monday, July 24, 2023

Particle Swarm Optimization


The Concept of "Optimization"

Optimization is a fundamental aspect of many scientific and engineering disciplines. It involves finding the best solution from a set of possible solutions, often with the goal of minimizing or maximizing a particular function. Optimization algorithms are used in a wide range of applications, from training machine learning models to solving complex problems in many other fields.  

In logistics, optimization algorithms can be used to find the most efficient routes for delivery trucks, saving time and fuel. In finance, they can be used to optimize investment portfolios, balancing the trade-off between risk and return. In manufacturing, they can be used to optimize production schedules, maximizing efficiency and minimizing downtime. In energy production, they can be used to optimize the operation of power plants, reducing costs and emissions. The list goes on.


What is PSO?

One of the most famous and widely used optimization algorithms is Particle Swarm Optimization (PSO). PSO is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling. It was developed by Dr. Eberhart and Dr. Kennedy in 1995, and since then it has been applied to solve a variety of complex optimization problems.

The PSO algorithm works by initializing a group of random solutions in the search space, known as particles. Each particle represents a potential solution to the optimization problem. These particles then move through the space in search of the best solution, with a velocity that is dynamically adjusted according to each particle's own results and those of its companions. A visual example is the following:

Consider a swarm of ants searching for the best place in their anthill to keep digging and expanding it; these ants are scattered throughout the whole space. At some moment, one ant finds a good place and warns the others, who approach it to continue searching around that point. Then another ant finds a better place; it warns the others, and they move toward it. This is repeated until the swarm finds the best possible place.



The beauty of PSO lies in its simplicity and its ability to efficiently solve complex optimization problems. It doesn't require gradient information, which makes it suitable for non-differentiable and discontinuous functions. Moreover, it's easy to implement and has few parameters to adjust, making it a practical choice for many optimization tasks.


Mathematical Description

The PSO algorithm consists of initializing a swarm of particles, where each particle has a position vector $P$ and a velocity vector $V$. The position vector, denoted as $P_i$ (for the $i$-th particle), represents the current solution, while the velocity vector $V_i$ determines the direction and distance that the particle moves in the respective iteration. In addition to its current position and velocity, each particle remembers the best position it has ever encountered, denoted as $P_{bi}$ (personal best position). The best position among all particles in the swarm is also tracked, denoted as $G_b$ (global best position).

The PSO algorithm updates the position and velocity of each particle at each iteration. The velocity is computed based on the equation:


\begin{equation*}V_i^{t+1} = W \cdot V_i^t + C_1 U_1^t \otimes \left (  P_{bi}^t - P_i^t \right) +  C_2 U_2^t \otimes \left (  G_{b}^t - P_i^t \right) \end{equation*}


Where $W$ is a constant value known as the inertia weight, $C_1$ and $C_2$ are two constant values, and $U_1$ and $U_2$ are two random vectors with values in $[0, 1]$ and the same dimension as the position. It is important to notice that the operator $\otimes$ represents a component-by-component multiplication.

The inertia weight $W$ controls the impact of the particle's previous velocity on its current velocity; the "personal component" $C_1 U_1^t \otimes \left( P_{bi}^t - P_i^t \right)$ represents the particle's tendency to return to its personal best position; and the "social component" $C_2 U_2^t \otimes \left( G_{b}^t - P_i^t \right)$ represents the particle's tendency to move towards the global best position.

To update the position of each particle, we simply follow the equation:


\begin{equation*}P_i^{t+1} = P_i^{t} + V_i^{t+1} \end{equation*}
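
To make these update rules concrete, here is a small worked example in one dimension, using rounded values of the constants defined later in the implementation ($W = 0.7$, $C_1 = C_2 = 1.5$). Take a current velocity $V_i^t = 1.0$, current position $P_i^t = 2.0$, personal best $P_{bi}^t = 1.5$, global best $G_b^t = 0.0$, and random draws $U_1^t = 0.4$, $U_2^t = 0.8$. Then:

\begin{equation*}V_i^{t+1} = 0.7 \cdot 1.0 + 1.5 \cdot 0.4 \cdot (1.5 - 2.0) + 1.5 \cdot 0.8 \cdot (0.0 - 2.0) = 0.7 - 0.3 - 2.4 = -2.0\end{equation*}

\begin{equation*}P_i^{t+1} = 2.0 + (-2.0) = 0.0\end{equation*}

The particle overshoots its personal best and lands on the global best, illustrating how the social component pulls the swarm together.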


We can describe the PSO algorithm as follows:
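
In outline (reconstructing the steps from the description above):

1. Initialize each particle with a random position and velocity within the search space, and set its personal best position $P_{bi}$ to its initial position.
2. Set the global best position $G_b$ to the best personal best in the swarm.
3. For each iteration, and for each particle:
   - Compute the new velocity $V_i^{t+1}$ with the velocity equation.
   - Compute the new position $P_i^{t+1}$ with the position equation.
   - If the new position improves the particle's personal best, update $P_{bi}$.
   - If the particle's personal best improves the global best, update $G_b$.
4. After the last iteration, return $G_b$ and its function value.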



Python Implementation


Importing Libraries

Before we can implement the Particle Swarm Optimization (PSO) algorithm, we first need to import the necessary libraries. For this implementation, we use numpy for numerical computations, and matplotlib for data visualization.

# Importing libraries
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Seed for numpy random number generator
np.random.seed(0)


Defining and Visualizing the Optimization Function

Before we can apply the PSO algorithm, we first need to define the function that we want to optimize. We select the Griewank function, a commonly used test function in optimization. The Griewank function is known for its numerous, regularly distributed local minima, which makes it a challenging optimization problem.

The Griewank function takes a 2-dimensional array as input and returns a scalar output. The function is composed of a sum of squares term and a cosine product term, which together create a complex landscape of local minima.
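
Concretely, in the two-dimensional case used here, this is:

\begin{equation*}f(x, y) = 1 + \frac{x^2 + y^2}{4000} - \cos\left(\frac{x}{\sqrt{1}}\right) \cos\left(\frac{y}{\sqrt{2}}\right)\end{equation*}

which is exactly what the code below implements.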

# Griewank function
def griewank(x):
    sum_sq = x[0]**2/4000 + x[1]**2/4000
    prod_cos = np.cos(x[0]/np.sqrt(1)) * np.cos(x[1]/np.sqrt(2))
    return 1 + sum_sq - prod_cos
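
As a quick sanity check (an extra line, not part of the original workflow), we can verify that the function returns its known global minimum of $0$ at the origin:

# Sanity check: the global minimum of the Griewank function is 0 at (0, 0)
print(griewank(np.array([0.0, 0.0])))  # prints 0.0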

To better understand the optimization problem, it's helpful to visualize the function that we're trying to optimize. We can do this by creating a grid of points within a specified range, calculating the value of the function at each point, and then plotting the function as a 3D surface.

# Defining the limits of the x and y axes
xlim = [-11, 11]
ylim = [-11, 11]
# Creating a grid of points within these limits
x = np.linspace(xlim[0], xlim[1], 500)
y = np.linspace(ylim[0], ylim[1], 500)
grid = np.meshgrid(x, y)
# Calculating the value of the Griewank function at each point in the grid
Z = griewank(grid)

# Creating a 3D figure
plt.rcParams['font.size'] = '16'
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')
ax.patch.set_alpha(0)

# Plotting the surface
surf = ax.plot_surface(grid[0],grid[1], Z, cmap='viridis', edgecolor='none')

# Configure style
fig.colorbar(surf, shrink=0.5, aspect=5)
ax.set_box_aspect([1, 1, 0.6])
ax.set_zticks([])
ax.grid(False)
plt.tight_layout()

# Add a title
plt.suptitle("Griewank Function", y=0.95, x=0.45, fontsize=24)

# Display the plot
plt.show()

The resulting plot gives us a visual representation of the optimization problem that we're trying to solve with the PSO algorithm.


We can notice that this function has many local minima distributed throughout the space, so it can be a complex challenge to find its minimum value.


Defining the Particle Class

The PSO algorithm operates on a swarm of particles, where each particle represents a potential solution to the optimization problem. To implement this in Python, we can define a Particle class that encapsulates the corresponding properties and behaviors. Here's the code to define the Particle class:

class Particle:
    def __init__(self, lim_range, no_dim):
        self.dim = no_dim
        # Initialize positions and velocities randomly
        self.pos = np.random.uniform(lim_range[0],lim_range[1],no_dim)
        self.vel = np.random.uniform(lim_range[0],lim_range[1],no_dim)
        # Set best position as the current one
        self.best_pos = np.copy(self.pos)
        # Set best value as the current one
        self.best_val = griewank(self.pos)

    def compute_velocity(self, global_best):
        # Update the particle's velocity based on its personal best and the global best
        # Inertia term, with W = 0.7298
        t1 = 0.7298 * self.vel
        # Personal component: C1 = 1.49618 is folded into the uniform range,
        # so U1 is equivalent to C1 * U(0, 1)
        U1 = np.random.uniform(0, 1.49618, self.dim)
        t2 = U1 * (self.best_pos - self.pos)
        # Social component: likewise, U2 is equivalent to C2 * U(0, 1)
        U2 = np.random.uniform(0, 1.49618, self.dim)
        t3 = U2 * (global_best - self.pos)
        self.vel = t1 + t2 + t3

    def compute_position(self):
        # Update the particle's position based on its velocity
        self.pos += self.vel

    def update_bests(self):
        new_val = griewank(self.pos)
        if new_val < self.best_val:
            # Update the particle's personal best position and value
            self.best_pos = np.copy(self.pos)
            self.best_val = new_val

We defined a Particle class with several methods:

The __init__ method initializes a particle with random positions and velocities within a specified range. It also sets the particle's best position and best value to its initial position and value.

The compute_velocity method updates the particle's velocity based on its current velocity, the difference between its best previous position and its current position, and the difference between the global best position and its current position. We defined the constants $W=0.7298$ and $C_1 = C_2 = 1.49618$; note that in the code, $C_1$ and $C_2$ are folded into the upper bound of the uniform random vectors, which is equivalent to drawing $U_1$ and $U_2$ from $U(0, 1)$ and multiplying by the constants.

The compute_position method updates the particle's position by adding its velocity to its current position.

The update_bests method updates the particle's best position and best value if its current position has a better value.


PSO Implementation

Now that we have defined the Particle class, we can use it to implement the PSO algorithm described earlier.

def PSO(no_part, it, limits, no_dim):
    # Initialization
    # Create a list to hold the swarm
    swarm = []
    # Fill the swarm with particles
    for i in range(no_part):
        swarm.append(Particle(limits, no_dim))
        # Set the first particle as the best in the swarm
        if i == 0:
            # Best values in the swarm
            G_best_position = np.copy(swarm[i].pos)
            G_best_value = swarm[i].best_val
        # Compare with the current global best
        if i > 0 and swarm[i].best_val < G_best_value:
            # If this particle is better, update the global best
            G_best_value = swarm[i].best_val
            G_best_position = np.copy(swarm[i].best_pos)

    # Main loop
    for _ in range(it):
        for i in range(no_part):
            # Compute new velocity
            swarm[i].compute_velocity(G_best_position)
            # Compute new position
            swarm[i].compute_position()
            # Update best personal values
            swarm[i].update_bests()
            # Update the best global values of the swarm
            if swarm[i].best_val < G_best_value:
                G_best_position = np.copy(swarm[i].best_pos)
                G_best_value = swarm[i].best_val

    return G_best_position, G_best_value

The PSO function takes the number of particles, the number of iterations, the limits of the search space, and the number of dimensions as inputs. The function initializes a swarm of particles, then enters a loop where it updates the positions and velocities of the particles, evaluates the objective function at the new positions, and updates the personal best positions and the global best position. The function returns the global best position and its corresponding value at the end of the iterations.


Applying the PSO Algorithm

Now we can use the PSO function defined earlier to find the minimum of the Griewank function. We run the algorithm with 300 particles for 150 iterations, and we search in the range from -10 to 10 in both dimensions. We store the resulting best position and its corresponding value.

# Applying the PSO algorithm
best_pos, best_val = PSO(no_part = 300, it = 150, limits = [-10, 10], no_dim = 2)
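
To inspect the numbers directly, we can print the result (a small addition for convenience):

# Printing the best position and value found by the swarm
print("Best position:", best_pos)
print("Best value:", best_val)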

To visualize the result, we plot the level curves of the Griewank function and mark the minimum found by the PSO algorithm as a red point.

# Creating the figure
fig, ax = plt.subplots(1,1, figsize=(8,6), facecolor='#F5F5F5')
fig.subplots_adjust(left=0.1, bottom=0.06, right=0.97, top=0.94)
plt.rcParams['font.size'] = '16'
ax.set_facecolor('#F5F5F5')

# Plotting of the Griewank function's level curves
ax.contour(grid[0], grid[1], Z,  levels=10, linewidths = 1, alpha=0.7)
# Plotting the minimum found by the PSO algorithm
ax.scatter(best_pos[0], best_pos[1], c = 'red', s=100)
ax.set_title("Griewank Level Curves")

# Display the figure
plt.show()

The resulting plot gives us a visual representation of the optimization problem and the solution found by the PSO algorithm.



The PSO algorithm found that the minimum value of the Griewank function is $\mathbf{0}$, located at the position $\mathbf{(0,0)}$. Below is a video showing the process of the PSO algorithm, visualizing all the particles at each iteration.



Conclusions

Through this post, we've gained a deeper understanding of the Particle Swarm Optimization algorithm and its application in optimization problems. We've explored the mathematical background of the algorithm, implemented it in Python, and applied it to find the minimum of the Griewank function, a commonly used benchmark function in optimization. The PSO algorithm has proven to be an effective method for solving the optimization problem, as evidenced by the results we obtained. The visualization of the Griewank function and the minimum found by the PSO algorithm provided a clear illustration of the algorithm's ability to navigate the search space and converge to a solution.

This exploration of the PSO algorithm serves as a foundation for further study and application of swarm intelligence algorithms in optimization. Whether you're a data scientist looking for a new optimization technique, a researcher studying swarm intelligence, or a curious reader interested in machine learning, I hope this post has provided valuable insights and sparked further interest in this fascinating area.

