
Food and Scientific Reports

ISSN 2582-5437

Particle Swarm Optimization and its applications in agricultural research

Santosha Rathod¹, Amit Saha² and Kanchan Sinha³

¹ICAR-Indian Institute of Rice Research, Hyderabad; ²Central Sericultural Research & Training Institute (CSRTI), Mysuru; ³ICAR-Indian Agricultural Statistics Research Institute, New Delhi
ABSTRACT

Particle Swarm Optimization (PSO) is a derivative-free, nature-inspired evolutionary optimization algorithm for solving complex real-time problems. It is a robust stochastic optimization technique based on the movement and intelligence of swarms. Like the Genetic Algorithm (GA), PSO works with a fitness function; unlike GA, which performs only local optimization, PSO exploits both local (personal best) and global best information. PSO can be employed in many areas of agriculture, namely precision farming, irrigation scheduling, machinery power optimization, fertilizer application optimization, active ingredient optimization in the chemical treatment of plants, parameter optimization of numerical crop simulation models, stock market price determination, cost optimization, optimal control of plant growth, etc. As a contextual investigation, the monthly maximum temperature (°C) of nine districts of north Karnataka has been considered to evaluate PSO in optimizing the parameters of the Space Time Autoregressive Moving Average (STARMA) model. The proposed STARMA-PSO model outperformed the classical STARMA model on both the training and testing data sets.
Particle swarm optimization (PSO) is a nature-inspired evolutionary optimization technique for solving computationally hard optimization problems. It is a robust stochastic optimization technique based on the movement and intelligence of swarms. It was developed by James Kennedy and Russell Eberhart in 1995, based on the social behaviour of biological organisms that move in groups (swarms), such as birds and fishes. It has been applied successfully to a wide variety of search and optimization problems by abstracting the working mechanism of this natural phenomenon. PSO is a population-based (swarm) evolutionary algorithm and has some similarities with GA. However, a fundamental difference between the two paradigms is that evolutionary algorithms are built on natural-evolution concepts, i.e. on a competitive philosophy in which only the fittest individuals tend to survive. Conversely, PSO takes a cooperative approach: all individuals (particles) survive, change themselves over time, and one particle's successful adaptation is shared and reflected in the performance of its neighbours. The basic element of PSO is a particle, which can fly through the search space towards an optimum by using its own information as well as the information provided by the other particles comprising its neighbourhood. In PSO, a swarm of particles (individuals) communicates either directly or indirectly with one another through shared search directions. The algorithm can be viewed as a set of particles flying over a search space to locate the global optimum. During an iteration of PSO, each particle updates itself according to its previous experience and the experience of its neighbours. PSO has numerous applications in many areas such as geology, agriculture, finance, climate and ecology, electrical science, etc. Coming to agricultural applications, PSO can be employed in all tasks where optimum and efficient utilization of resources is required.

Let $A \subset \mathbb{R}^d$ be the search space and let the swarm be defined as a set $S = \{X_1, X_2, \ldots, X_M\}$ of $M$ particles (candidate solutions), where $M$ is a user-defined parameter of the algorithm. The $i$th particle of dimension $d$ is defined as $X_i = (X_{i1}, \ldots, X_{id})$, $i = 1, 2, \ldots, M$. Each particle is a potential solution to the problem, characterized by three quantities: velocity $V_i = (V_{i1}, \ldots, V_{id})$, current position $X_i = (X_{i1}, \ldots, X_{id})$ and personal best position $pbest_i = (pbest_{i1}, \ldots, pbest_{id})$. Let $t$ denote the current iteration and $gbest$ denote the global best position achieved so far by any of the swarm's particles. Initially, the swarm is randomly dispersed within the search space and a random velocity is assigned to each particle. Particles interact with one another by sharing information to discover the optimal solution. Each particle moves in the direction of its personal best position ($pbest$) and the global best position ($gbest$). To search for the optimal solution, each particle changes its velocity according to the cognitive and social parts given by:

$$V_{ij}(t+1) = w(t)V_{ij}(t) + c_1 R_1 \left(pbest_{ij}(t) - X_{ij}(t)\right) + c_2 R_2 \left(gbest_j(t) - X_{ij}(t)\right)$$

where $i = 1, 2, \ldots, M$ and $j = 1, 2, \ldots, d$.


However, to guard against the swarm-explosion effect, each velocity component is restricted to the closest velocity bound:

$$V_{ij}(t+1) = \begin{cases} -V_{max} & \text{if } V_{ij}(t+1) < -V_{max} \\ V_{max} & \text{if } V_{ij}(t+1) > V_{max} \end{cases}$$

After updating its velocity, each particle moves to a new potential solution by updating its position as follows:

$$X_{ij}(t+1) = \begin{cases} X_{min}, & \text{if } X_{ij}(t+1) < X_{min} \\ X_{ij}(t) + \beta V_{ij}(t+1), & \text{if } X_{min} \le X_{ij}(t+1) \le X_{max} \\ X_{max}, & \text{if } X_{ij}(t+1) > X_{max} \end{cases}$$

where $i = 1, 2, \ldots, M$; $j = 1, 2, \ldots, d$. In the above equations $V_{ij}$, $X_{ij}$ and $pbest_{ij}$ are respectively the velocity, current position and personal best position of particle $i$ on the $j$th dimension, and $gbest_j$ is the $j$th dimension of the global best position achieved so far among all particles at iteration $t$. $R_1$ and $R_2$ are random values, mutually independent and uniformly distributed over $[0,1]$; $\beta$ is a constraint factor used to control the velocity weight, whose value is usually set equal to 1. The positive constants $c_1$ and $c_2$ are usually called "acceleration factors"; $c_1$ is sometimes referred to as the "cognitive" parameter, while $c_2$ is referred to as the "social" parameter. The inertia weight at iteration $t$ is $w(t)$ and is used to balance global exploration and local exploitation. It can be determined by:

$$w(t) = w_{max} - (w_{max} - w_{min})\,t/T$$

where $t$ is the current iteration number, $w_{min}$ and $w_{max}$ are the desired lower and upper limits of $w$, and $T$ is the maximum number of iterations (Dai et al., 2018; Gilli et al., 2011; Mohanty, 2018).

Framework of PSO:

Fig 1: Graphical framework of PSO
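To make the update rules above (and the flow of Fig 1) concrete, the following is a minimal Python/NumPy sketch of the PSO loop just described: velocity update with a linearly decreasing inertia weight, velocity clamping to $[-V_{max}, V_{max}]$, position clamping to $[X_{min}, X_{max}]$, and $\beta = 1$. The sphere function used as the objective, the bounds, and the random seed are illustrative stand-ins, not values from the original study.

```python
import numpy as np

def pso(objective, d, M=30, T=100, x_min=-5.0, x_max=5.0,
        c1=1.5, c2=2.0, w_min=0.4, w_max=0.9, beta=1.0):
    """Minimal PSO sketch following the update rules described above."""
    rng = np.random.default_rng(42)
    v_max = 0.1 * (x_max - x_min)                      # velocity bound

    X = rng.uniform(x_min, x_max, size=(M, d))         # random initial positions
    V = rng.uniform(-v_max, v_max, size=(M, d))        # random initial velocities
    pbest = X.copy()                                   # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, X)   # personal best fitness
    gbest = pbest[np.argmin(pbest_val)].copy()         # global best position

    for t in range(T):
        w = w_max - (w_max - w_min) * t / T            # linearly decreasing inertia
        R1 = rng.random((M, d))                        # cognitive random factors
        R2 = rng.random((M, d))                        # social random factors
        V = w * V + c1 * R1 * (pbest - X) + c2 * R2 * (gbest - X)
        V = np.clip(V, -v_max, v_max)                  # restrict to velocity bounds
        X = np.clip(X + beta * V, x_min, x_max)        # position update, beta = 1

        vals = np.apply_along_axis(objective, 1, X)
        improved = vals < pbest_val                    # update personal bests
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()     # update global best
    return gbest, pbest_val.min()

# Illustrative objective: the sphere function, minimised at the origin.
best_x, best_f = pso(lambda x: np.sum(x ** 2), d=3)
print(best_x, best_f)
```

Minimising the sphere function, the returned `best_x` should approach the zero vector, which is an easy visual check that the cooperative pbest/gbest mechanism is working.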

Numerical Example

As a contextual investigation, the monthly maximum temperature (°C) of nine districts of north Karnataka has been considered to evaluate PSO in optimizing the parameters of the Space Time Autoregressive Moving Average (STARMA) model.

Data on the monthly maximum temperature (°C) of the north Karnataka districts from January 2000 to August 2016 were gathered from http://globalweather.tamu.edu and www.indiawaterportal.org. The data from January 2000 to August 2015 were used for model building, and the data from September 2015 to August 2016 for model validation (forecasting performance).
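The chronological split just described (January 2000 to August 2015 for training, September 2015 to August 2016 for validation) could be performed along the following lines. The file name and column layout are assumptions made only for illustration, since the compiled data set is not distributed with the article.

```python
import pandas as pd

# Hypothetical file: one row per month, one column per district (assumed layout).
data = pd.read_csv("max_temperature_north_karnataka.csv",
                   parse_dates=["month"], index_col="month")

# Chronological split as in the article: time series data are never shuffled.
train = data.loc["2000-01":"2015-08"]   # model building
test = data.loc["2015-09":"2016-08"]    # validation / forecasting performance
print(train.shape, test.shape)
```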
Space-time models capture systematic dependencies over both space and time; these dependencies are modelled through the class of STARMA models developed by Pfeifer and Deutsch (1980). The STARMA model is characterized by a single variable $Z_i(t)$, observed at $N$ fixed spatial locations ($i = 1, 2, \ldots, N$) over $T$ time periods ($t = 1, 2, \ldots, T$). The $N$ spatial locations can be geographical locations, countries, states, etc. The spatial dependence between the $N$ time series is incorporated through $N \times N$ spatial weight matrices. Analogous to a univariate time series, $Z(t)$ is expressed as a linear combination of past observations and errors. The STARMA model (Pfeifer and Deutsch, 1980), denoted by $STARMA(p_{\lambda_1,\ldots,\lambda_p}, q_{m_1,\ldots,m_q})$, can be represented in matrix form as follows:

$$Z(t) = \sum_{k=1}^{p}\sum_{l=0}^{\lambda_k} \phi_{kl} W^{(l)} Z(t-k) - \sum_{k=1}^{q}\sum_{l=0}^{m_k} \theta_{kl} W^{(l)} \varepsilon(t-k) + \varepsilon(t)$$

where
$z(t) = (z_1(t), \ldots, z_N(t))'$ is an $N \times 1$ vector of observations at time $t = 1, \ldots, T$,
$p$ is the autoregressive (AR) order with respect to time,
$q$ is the moving average (MA) order with respect to time,
$\lambda_k$ is the spatial order of the $k$th AR term,
$m_k$ is the spatial order of the $k$th MA term,
$\phi_{kl}$ is the AR parameter at temporal lag $k$ and spatial lag $l$ (scalar),
$\theta_{kl}$ is the MA parameter at temporal lag $k$ and spatial lag $l$ (scalar), and
$W^{(l)}$ is the $N \times N$ spatial weight matrix of spatial order $l$, with zero diagonal elements and non-diagonal elements expressing the relation between sites.

The spatial weight matrix $W^{(0)} = I_N$, i.e. the identity matrix, and each row of $W^{(l)}$ must add up to one. The random error vector $\varepsilon(t) = [\varepsilon_1(t), \varepsilon_2(t), \ldots, \varepsilon_N(t)]'$ is normally distributed at time $t$ with $E[\varepsilon(t)] = 0$,

$$E[\varepsilon(t)\varepsilon'(t+s)] = \begin{cases} \sigma^2 I_N & \text{if } s = 0 \\ 0 & \text{otherwise} \end{cases}$$

and $E[Z(t)\varepsilon'(t+s)] = 0$ for $s > 0$.
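The weight-matrix conventions above (zero diagonal, rows summing to one, $W^{(0)} = I_N$) can be illustrated with a toy first-order contiguity matrix for three hypothetical neighbouring sites. The adjacency pattern, coefficient values and temperatures below are made up for illustration and are not taken from the fitted model.

```python
import numpy as np

# Toy binary adjacency for three sites in a line: site 1 - site 2 - site 3.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)

W1 = adj / adj.sum(axis=1, keepdims=True)   # row-normalise: each row sums to one
W0 = np.eye(3)                              # W(0) is the identity matrix

# One step of a STAR(1_1) recursion:
# Z(t) = phi10 * W0 @ Z(t-1) + phi11 * W1 @ Z(t-1) + e(t)
phi10, phi11 = 0.5, 0.3                     # illustrative scalar AR coefficients
rng = np.random.default_rng(0)
z_prev = np.array([30.0, 32.0, 31.0])       # previous month's temperatures (toy)
z_next = phi10 * W0 @ z_prev + phi11 * W1 @ z_prev + rng.normal(0, 0.5, 3)
print(W1, z_next, sep="\n")
```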
The process of optimizing the parameters of the STARMA model by PSO is described in the following steps:

Step 1: Select the data under consideration: The data for the proposed methodology have to be selected by studying their characteristics.

Step 2: Data preparation: To develop the model and the learning algorithm, it is necessary to split the given data points into two groups, viz. a training data set and a testing data set.

Step 3: Stationarity checking: To build a suitable STARMA model, the data under consideration should satisfy the time series assumption of stationarity. If the given data set is non-stationary, differencing and other measures should be applied to make it stationary before proceeding further.

Step 4: Model identification: The suitable STAR and STMA orders can be obtained based on the lowest AIC and BIC values.

Step 5: Parameter estimation: The parameters of the STARMA model are first estimated using the maximum likelihood method, and the estimated parameters are taken as the basis for selecting initial values in the PSO technique.

Step 6: Initialization and PSO parameter setting: The search spaces of the STAR and STMA parameters are restricted to their maximum ranges; $M$ initial particles of the swarm set $S$ are generated randomly, along with initial particle velocities ($V_i$) restricted to the range $[-V_{max}, V_{max}]$.

Step 7: Define the objective function: The objective function is defined as a fitness function based on the actual and fitted values of the model; here the fitness function is the Mean Absolute Percentage Error (MAPE):

$$fitness = MAPE = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right| \times 100$$

where $y_t$ and $\hat{y}_t$ are the actual and fitted values and $n$ is the number of observations.
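A sketch of such a MAPE-based fitness function is given below. Here `fitted_values(params)` stands in for whatever routine computes the STARMA fitted series for a given parameter vector; it is not defined in the article and is only a placeholder.

```python
import numpy as np

def mape(actual, fitted):
    """Mean Absolute Percentage Error between actual and fitted series."""
    actual, fitted = np.asarray(actual), np.asarray(fitted)
    return np.mean(np.abs((actual - fitted) / actual)) * 100.0

def fitness(params, actual, fitted_values):
    """Fitness of a candidate STARMA parameter vector: lower MAPE is better.

    `fitted_values` is a placeholder callable mapping a parameter vector to
    the model's fitted series; the article does not specify its internals.
    """
    return mape(actual, fitted_values(params))
```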


Step 8: Evaluate each particle's fitness value: For each candidate particle of the initial swarm, train the model using k-fold cross-validation and evaluate the minimum cross-validation error. Then find and update the personal best position ($pbest$) achieved so far by each particle and the global best position ($gbest$) achieved so far by any particle, according to the minimum cross-validation value.

Step 9: Set the iteration number: Run the iteration counter from 1 to the maximum number of iterations, evaluating the inertia weight $w(t)$ generation by generation.

Step 10: Compute and update the velocity of each particle, as explained in the methodology section.

Step 11: Compute and update the position of each particle.

Step 12: Termination: Repeat the search process from Step 7 to Step 11 until a stopping condition, such as the maximum number of iterations, is met.

Step 13: Finally, the optimal parameters are used to build the STARMA-PSO model on the training data set and the test data set (validation set).

Step 14: Modelling and forecasting performance: The modelling performance of the proposed STARMA-PSO model is assessed by calculating the MAPE under both the training and testing data sets (for model validation).
Table 1: PSO Parameter Specifications

Cost function                    CostFunction=@(x) rathod(x);
Number of decision variables     nVar=6
Variable size                    VarSize=[1 nVar]
Lower bound                      VarMin=[0.5, -0.19, -0.0090, -0.15, 0.10, -0.040]
Upper bound                      VarMax=[1, -0.09, -0.0010, -0.10, 0.30, -0.020]
Maximum iterations               MaxIt=200
Swarm size/population size       nPop=100
Personal learning coefficient    c1=1.5
Global learning coefficient      c2=2.0
Velocity limits                  VelMax=0.1*(VarMax-VarMin); VelMin=-VelMax
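Table 1 is written in MATLAB-style notation; the same settings can be mirrored in Python as below. The `rathod` cost function referenced in the table is the authors' own routine, which is not published in the article, so it is represented here only by a named placeholder.

```python
import numpy as np

# PSO settings transcribed from Table 1 (MATLAB notation in the original).
n_var = 6                                               # number of decision variables
var_min = np.array([0.5, -0.19, -0.0090, -0.15, 0.10, -0.040])  # lower bounds
var_max = np.array([1.0, -0.09, -0.0010, -0.10, 0.30, -0.020])  # upper bounds
max_it = 200                                            # maximum iterations
n_pop = 100                                             # swarm / population size
c1, c2 = 1.5, 2.0                                       # personal / global learning
vel_max = 0.1 * (var_max - var_min)                     # velocity limits
vel_min = -vel_max

def cost_function(x):
    """Placeholder for the authors' rathod(x) cost function (not published)."""
    raise NotImplementedError("rathod(x) is not reproduced in the article")
```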
Table 2: STARMA-PSO model parameters

Spatial lag      Slag 0          Slag 1          Slag 2
                 AR      MA      AR      MA      AR      MA
Parameters       -0.43   0.27    0.11    0.24    0.58    0.36

Multivariate Box-Pierce non-correlation test of residuals: Chi-square = 67.12 (p = 0.36). Values in parentheses indicate the standard error.

Table 3: Modelling performance (MAPE) under the training data set

Sl. No.   Location   STARMA   STARMA-PSO
1         Gulbarga   1.30     1.21
2         Bijapur    1.29     1.17
3         Raichur    1.24     1.09
4         Bagalkot   1.49     1.30
5         Belgaum    2.07     1.39
6         Dharwad    1.69     1.42
7         Gadag      1.56     1.40
8         Koppal     1.41     1.28
9         Bellary    1.24     1.08

Table 4: Modelling performance (MAPE) under the testing data set

Sl. No.   Location   STARMA   STARMA-PSO
1         Gulbarga   3.89     1.53
2         Bijapur    3.28     2.42
3         Raichur    3.93     1.76
4         Bagalkot   2.77     1.10
5         Belgaum    3.08     1.57
6         Dharwad    3.63     1.05
7         Gadag      2.92     1.44
8         Koppal     3.19     1.33
9         Bellary    3.10     1.25

The mean absolute percentage error (MAPE) was computed to compare the forecasting performance of all the models under consideration on both the training and validation data sets for each location separately. For the maximum temperature series of north Karnataka, the MAPE values of the STARMA-PSO model were lower under both the training and testing data sets; it therefore outperformed the traditional STARMA model on both.


Applications in Agriculture:

As illustrated above, Particle Swarm Optimization has numerous applications in many areas such as geology, agriculture, finance, climate and ecology, electrical science, etc. Coming to agricultural applications, PSO can be employed in precision farming, irrigation scheduling, machinery power optimization, fertilizer application optimization, active ingredient optimization in the chemical treatment of plants, parameter optimization of numerical crop simulation models, stock market price determination, cost optimization, optimal control of plant growth, etc. PSO can be employed in all tasks where optimum and efficient utilization of resources is required.

References

Dai, H.-P., Chen, D.-D. and Zheng, Z.-S. (2018). Effects of Random Values for Particle Swarm Optimization Algorithm. Algorithms, 11, 23. https://www.mdpi.com/1999-4893/11/2/23

Gilli, M., Maringer, D. and Schumann, E. (2011). Numerical Methods and Optimization in Finance. Elsevier.

Kennedy, J. (1997). The particle swarm: social adaptation of knowledge. IEEE International Conference on Evolutionary Computation, Indianapolis, IN. https://ieeexplore.ieee.org/document/592326

Mohanty, P. (2018). NPTEL online certification course on selected topics on decision modelling: Particle Swarm Optimization. IIT Kharagpur. https://www.youtube.com/watch?v=uwXFnzWaCY0

Pfeifer, P.E. and Deutsch, S.J. (1980). A Comparison of Estimation Procedures for the Parameters of the STAR Model. Communications in Statistics - Simulation and Computation, B9(3), 255-270.
