Economics Ecology Socium Vol. 8 No.1 2024
The probability that a system operates satisfactorily for a predetermined amount of time under specified conditions is defined as the reliability given by Wang et al. [38]. To highlight the necessity for additional studies using actual case studies, a brief discussion on integrating human aspects into systems to analyze lower levels of autonomous cars is also included. Zhang et al. [39] suggested a brand-new general reliability optimization model for k-out-of-n: G subsystems, based on a mixed redundancy strategy and established using a continuous-time Markov chain. RAP and RRAP optimizations were performed using a pseudo-parallel genetic algorithm (PPGA). Negi et al. [40] optimized the complex system's reliability and cost using GWO.

3. Methodology.
This section gives the details of the proposed algorithms.

3.1. Moth Flame Optimization Algorithm (MFO).
The Moth-Flame Optimization (MFO) algorithm is a nature-inspired metaheuristic optimization algorithm developed based on the behaviour of moths attracted to light sources. It was proposed by Seyedali Mirjalili in 2015 [17]. MFO falls under the category of swarm intelligence algorithms and is designed to solve optimization problems by mimicking the natural behaviour of moths.
Here is a concise mathematical representation of the MFO algorithm:
I. Initialization. Initialize a population of Moths with random positions:
M = {x1, x2, ..., xn},
where xi represents the position of the ith Moth in the search space. Evaluate the fitness of each Moth in the population: f(xi).
II. Main Loop. Set the maximum number of iterations: max_iterations. Initialize the iteration counter: iteration = 0. Initialize the global best solution (brightest flame): best_solution = argmin f(xi).
III. While Loop. While iteration < max_iterations:
Sort the Moths based on their fitness (brightness): M = sort(M, f(xi)). Update the global best solution (brightest flame): best_solution = argmin f(xi).
IV. For Each Moth. For each Moth xi in the population:
Calculate the light intensity (fitness) for the current position: I(xi) = f(xi).
Calculate the distance to the brightest flame: d(xi, best_solution) = ||xi - best_solution||.
Calculate the movement vector towards the brightest flame:
move_vector = (best_solution - xi) / (||xi - best_solution|| + epsilon),
where epsilon is a small positive constant to avoid division by zero.
Apply a random perturbation to the Moth's position for exploration: rand_vector = random_vector().
Update the position of the Moth:
xi = xi + step_size * move_vector + exploration_step_size * rand_vector,
where step_size and exploration_step_size are control parameters.
Evaluate the fitness of the new position: f_new = f(xi). If f_new is better than the previous fitness, update xi and f(xi).
V. Optional Mechanisms. Increment the iteration counter: iteration = iteration + 1. End of the While Loop.
VI. Result. Return the best solution found: best_solution.

3.2. Whale Optimization Algorithm (WOA).
The Whale Optimization Algorithm (WOA) is a metaheuristic optimization algorithm introduced by Seyedali Mirjalili in 2016 [19]. The WOA approach is inspired by humpback whales' social behaviour and hunting strategies, in which the whales work together to catch prey. The main idea behind the WOA is to mimic the hunting behaviour of whales to find optimal solutions to optimization problems.
Here is a concise mathematical representation of the WOA:
I. Initialization. Initialize a population of Whales:
X = {x1, x2, ..., xn},
where xi represents the position of the ith Whale in the search space. Define the search space boundaries: xmin and xmax.
II. Objective Function Evaluation. Evaluate the fitness of each Whale's position: f(xi) for i = 1, 2, ..., N.
III. Main Optimization Loop. While a termination requirement is not satisfied:
(I) Exploration Phase. Select a leader Whale with the best fitness: xleader = argmin f(xi). Update the positions of the follower Whales using the encircling equation:
D = |xleader - xi|,
A = 2*r1 - 1 (where r1 is a random number between 0 and 1),
C = 2*r2 (where r2 is another random number between 0 and 1),
b = 1 (constant),
p = A * D * e^(b*A) * cos(C * D).
Update the position of the ith follower Whale: xi = xleader - p.
(II) Exploitation Phase. Update the positions of the follower Whales using the spiral equation, with D, A, C, b, and p computed as above. Update the position of the ith follower Whale: xi = xleader - p.
IV. Boundary Check. Ensure that the positions of the Whales remain within the defined search space boundaries: xi = clip(xi, xmin, xmax).
V. Update Best Solution. After each iteration, update the best solution found so far: xbest = argmin f(xi).
This mathematical representation summarizes the key components and equations of the WOA and provides a structured overview of the algorithm's main operations.

3.3. Coati Optimization Algorithm (COA).
The Coati Optimization Algorithm (COA) is a new bio-inspired metaheuristic optimization approach introduced by Dehghani in 2023. The Coatis are regarded as the members of this metaheuristic's population-based strategy. The placement of each Coati within the search region defines the values of the decision variables; consequently, each Coati's position is a possible solution to the problem in the COA.
The mathematical representation of the COA is:
I. Initialization.
Xi: x_ij = lb_j + r * (ub_j - lb_j),
where Xi = the ith Coati's location in the search space, x_ij = the jth decision variable's value, n, m = the number of Coatis and the number of decision variables, respectively, r = a random real number in the interval [0, 1], and lb_j, ub_j = the lower bound and upper bound of the jth decision variable, respectively.
II. The population matrix of Coatis. The positions of all Coatis are collected in a population matrix. By placing the proposed solutions in the decision variables, the corresponding values of the problem's objective function are assessed, where F = the vector of obtained objective function values and Fi = the objective function value obtained using the ith Coati.
III. Updating the position of the COA. The new positions of the Coatis are updated using the exploration and exploitation phases.
(I) Exploration phase (hunting and attacking strategy on the Iguana).
Half of the Coatis climb towards the Iguana (the best member of the population), and their new positions are:
x_ij^P1 = x_ij + r * (Iguana_j - I * x_ij), for i = 1, 2, ..., [N/2].
Once the Iguana touches down, its location within the search region is randomly chosen:
Iguana_j^G = lb_j + r * (ub_j - lb_j).
Based on this random position, the ground-based Coatis move in the search space:
x_ij^P1 = x_ij + r * (Iguana_j^G - I * x_ij) if F_Iguana^G < F_i; otherwise x_ij^P1 = x_ij + r * (x_ij - Iguana_j^G).
Here, x_i^P1 = the new position of the ith Coati, F_i^P1 = its objective function value, r = a random real number between 0 and 1, Iguana = the Iguana's position in the search space, I = a random integer from {1, 2}, Iguana^G = the position of the Iguana on the ground, and [.] = the floor function.
The updated new position is accepted only if it improves the objective function value:
X_i = X_i^P1 if F_i^P1 < F_i; otherwise X_i is kept.
(II) Exploitation phase. Relative to each Coati's location, a random position is produced using the equation below:
x_ij^P2 = x_ij + (1 - 2r) * (lb_j^local + r * (ub_j^local - lb_j^local)),
with lb_j^local = lb_j / t and ub_j^local = ub_j / t,
where r = a random real number in [0, 1], t = the iteration counter, lb_j^local, ub_j^local = the jth decision variable's lower and upper local bounds, respectively, and lb_j, ub_j = the decision variable's lower and upper bounds, respectively.
IV. Update the best solution found.
V. End of the COA algorithm.

3.4. Gazelle Optimization Algorithm (GOA).
The Gazelle Optimization Algorithm (GOA) is a nature-inspired population-based optimization method introduced by Agushaka in 2023 [25].
The mathematical representation of the GOA is as follows:
I. Initialization. Initialize a population of Gazelles in matrix form, where each current candidate position X is generated randomly using the equation below:
X_ij = lb_j + rand * (ub_j - lb_j),
where ub_j, lb_j = the upper and lower bounds, rand = a random number, and X_ij = the position of the jth dimension of the ith population member.
II. Compute the fitness function of each Gazelle.
III. Construct the Elite Gazelle matrix.
The strongest Gazelles are selected as top Gazelles to build an Elite matrix for selecting
the next step for the Gazelles, and the best outcomes are established after each iteration. Each row of the Elite matrix is a copy of the top Gazelle vector:
Elite_ij = x*_j,
where x* = the top Gazelle vector.
IV. Start the main loop of the GOA. The new positions are updated using the exploitation and exploration phases.
(I) Exploitation Phase. Update the position of the Gazelles:
X_(t+1) = X_t + s * R * R_B * (Elite_t - R_B * X_t),
where X_t = the solution at the current iteration, X_(t+1) = the solution at the next iteration, R_B = a vector containing random numbers, R = a vector of uniform random numbers in [0, 1], and s = the grazing speed of the Gazelles.
(II) Exploration Phase. Update the position of the Gazelles in the exploration phase:
X_(t+1) = X_t + S * R * R_L * (Elite_t - R_L * X_t),
where S = the top speed and R_L = a vector of random numbers based on the Levy distribution.
The predator's chase of the Gazelle is modelled as:
X_(t+1) = X_t + S * CF * R * R_B * (Elite_t - R_L * X_t),
where CF represents the parameter that controls the movement of the predator.
V. Elite update.
VI. Applying the PSR (predator success rate) effect and update.
VII. Update the best solution.
VIII. End of the GOA.

3.5. Dragonfly Algorithm (DA).
The Dragonfly Algorithm (DA) is a nature-inspired optimization algorithm that draws inspiration from the hunting behavior of dragonflies. It was first proposed by Mirjalili in 2016 [25]. The algorithm was designed to solve optimization problems by simulating the foraging behavior of dragonflies in search of prey.
The key features and principles of the DA are as follows:
I. Initialization.
Initialize a population of Dragonflies:
D = {d1, d2, ..., dn},
where di represents the position of the ith Dragonfly in the search space. Define the search space boundaries: d_min and d_max.
II. Objective Function Evaluation. Evaluate the fitness of each Dragonfly's position: f(di) for i = 1, 2, ..., N.
III. Main Optimization Loop. While a termination condition is not met (e.g., a maximum number of iterations or a convergence criterion):
(I) Exploration Phase (Swarming Behavior). Randomly select a leader Dragonfly from the population as the "group leader". Update the positions of the other Dragonflies (followers) using the swarming equation, which encourages exploration. Update the position of the ith follower Dragonfly:
step_i = c1 * r1 * (d_leader - d_i) + c2 * r2 * v,
d_i = d_i + step_i,
where r1 and r2 are random numbers between 0 and 1, v is a random vector, c1 and c2 are control parameters, and step_i is the step size.
(II) Exploitation Phase (Hunting Behavior). Update the position of the leader Dragonfly based on hunting behavior to exploit promising areas. Adjust the leader's position using an equation inspired by the hunting behavior of dragonflies.
IV. Boundary Check.
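All five algorithms above share the same skeleton: initialize a random population, evaluate fitness, then repeatedly move agents between exploration and exploitation while clipping to the search bounds and tracking the best solution. As a minimal sketch only, the loop below implements the MFO update of Section 3.1; the step sizes, the Gaussian perturbation, and the greedy acceptance are illustrative choices, not the paper's tuned settings:

```python
import numpy as np

def mfo_sketch(f, lb, ub, n_moths=10, max_iterations=100, step_size=0.5,
               exploration_step_size=0.1, epsilon=1e-12, seed=0):
    """Minimal sketch of the MFO loop of Section 3.1 (greedy variant)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # I. Initialization: random positions and their fitness values.
    M = rng.uniform(lb, ub, size=(n_moths, dim))
    fit = np.array([f(x) for x in M])
    for _ in range(max_iterations):
        # III. Sort moths by brightness (fitness); best moth = brightest flame.
        order = np.argsort(fit)
        M, fit = M[order], fit[order]
        best = M[0].copy()
        # IV. Move each moth towards the brightest flame.
        for i in range(n_moths):
            d = np.linalg.norm(M[i] - best)
            move = (best - M[i]) / (d + epsilon)      # unit vector towards the flame
            cand = (M[i] + step_size * move
                    + exploration_step_size * rng.standard_normal(dim))
            cand = np.clip(cand, lb, ub)              # boundary check
            f_new = f(cand)
            if f_new < fit[i]:                        # keep only improvements
                M[i], fit[i] = cand, f_new
    # VI. Result: return the best solution found.
    i_best = int(np.argmin(fit))
    return M[i_best], float(fit[i_best])
```

Swapping the inner update for the WOA, COA, GOA, or DA rules changes only the body of the for-loop; the initialization, boundary check, and best-solution tracking stay the same.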
where Rl = the reliability of the lth component. Fig. 2 shows the block diagram of the LSSSC.
Fig. 2. Block diagram of the LSSSC.

Parameter Settings
No. of iterations: 350
Lower boundary limit: 0.50
Upper boundary limit: 0.99
No. of variables for CBS: 5
No. of variables for LSSSC: 4
No. of search agents: 10

5.2. Analysis of the Cost Using the Suggested Techniques for Complex Systems.
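The system reliability entering the cost objective is a function of the component reliabilities Rl. As an illustration only (the paper's exact CBS topology is the one in Fig. 2 and is not reproduced here), the reliability of a classic five-component bridge network can be evaluated by decomposition on the bridge component; the component numbering below is hypothetical:

```python
def bridge_reliability(r1, r2, r3, r4, r5):
    """Reliability of a classic five-component bridge network (illustrative
    topology, not the paper's Fig. 2), by conditioning on bridge component 3:
    components 1, 2 feed the bridge from IN; 4, 5 lead to OUT."""
    # Case 1: bridge works -> (1 parallel 2) in series with (4 parallel 5).
    r_bridge_up = (1 - (1 - r1) * (1 - r2)) * (1 - (1 - r4) * (1 - r5))
    # Case 2: bridge fails -> two independent series paths 1-4 and 2-5 in parallel.
    r_bridge_down = 1 - (1 - r1 * r4) * (1 - r2 * r5)
    # Total probability over the two cases.
    return r3 * r_bridge_up + (1 - r3) * r_bridge_down
```

The optimizers then search over the component reliabilities (here bounded in [0.50, 0.99], per the parameter settings above) to trade this system reliability off against cost.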
Fig. 3. CBS Convergence Graph for All Approaches (COA, WOA, GOA, MFO, DA).
From the observation of the graph, the authors found that COA is the best algorithm for the CBS's cost and reliability parameters compared to the other existing approaches, and it also provided a faster solution. The suggested methods provide the most cost-effective solutions for the LSSSC, as shown in the convergence graph below (Fig. 4). The analysis of all five approaches (COA, GOA, DA, MFO, and WOA) provided the best solution for the overall system's cost and reliability parameters.
Fig. 4. LSSSC Convergence Graph for All Approaches (COA, WOA, GOA, MFO, DA).
The Friedman test was used to assess the significance of the data. This non-parametric test is also employed to rank the algorithms for the complex system costs that have been examined.
The Friedman test's null hypothesis, H0 (p-value > 5%), denotes that there is no discernible difference between the algorithms under comparison. The counter-hypothesis, H1, which holds across all 10 runs, indicates a significant variation between the five compared algorithms.
Each algorithm is assigned a rank based on how well it performs in this test; the best algorithms are those with small ranks. The Friedman rank test results are displayed in Table 2 and the corresponding Fig. 5 for the COA, GOA, WOA, DA, and MFO algorithms. Furthermore, Fig. 6 shows the CPU time of all the approaches.

Table 2. Friedman Ranking Test.
Algorithms  Mean Rank  Sum Rank
COA         1.4        1
GOA         3.2        3
MFO         2.9        3
DA          5.1        5
WOA         4          4

A comparison of the best results obtained by COA for complex systems with those of the other four approaches is presented in Table 3 and Table 4. The optimal solution is highlighted in the tables below.
REFERENCES
1. Rocco, C. M., Miller, A. J., Moreno, J. A., & Carrasquero, N. (2000). A cellular
evolutionary approach applied to reliability optimization of complex systems. In Annual Reliability
and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality
and Integrity (Cat. No. 00CH37055) (pp. 210-215). IEEE.
2. Tilman, F. A., Hwang, C., Fan, L., & Lal, K. C. (1970). Optimal reliability of a complex
system. IEEE Transactions on Reliability, R-19(3), 95-100.
3. Kumar, A., Pant, S., & Singh, S. B. (2017). Reliability optimization of complex systems
using a cuckoo search algorithm. In Mathematical Concepts and Applications in Mechanical
Engineering and Mechatronics (pp. 94-110). IGI Global.
4. Kumar, A., Pant, S., & Ram, M. (2017). System reliability optimization using gray wolf
optimizer algorithm. Quality and Reliability Engineering International, 33(7), 1327-1335.
5. Kumar, A., Pant, S., Singh, M. K., Chaube, S., Ram, M., & Kumar, A. (2023). Modified
Wild Horse Optimizer for Constrained System Reliability Optimization. Axioms, 12(7), 693.
6. Umamaheswari, E., Ganesan, S., Abirami, M., & Subramanian, S. (2018). Reliability/risk
centered cost-effective preventive maintenance planning of generating units. International Journal
of Quality & Reliability Management, 35(9), 2052-2079.
7. Beji, N., Jarboui, B., Eddaly, M., & Chabchoub, H. (2010). A hybrid particle swarm
optimization algorithm for the redundancy allocation problem. Journal of Computational
Science, 1(3), 159-167.
8. Wei, Y., & Liu, S. (2023). Reliability analysis of series and parallel systems with
heterogeneous components under random shock environment. Computers & Industrial
Engineering, 179, 109214.
9. Dahiya, B. P., Rani, S., & Singh, P. (2019). A hybrid artificial grasshopper optimization
(HAGOA) meta-heuristic approach: A hybrid optimizer for discover the global optimum in given
search space. International Journal of Mathematical, Engineering and Management Sciences, 4(2),
471.
10. Durgadevi, A., & Shanmugavadivoo, N. (2023). Availability Capacity Evaluation and
Reliability Assessment of Integrated Systems Using Metaheuristic Algorithm. Computer Systems
Science & Engineering, 44(3).
11. Choudhary, S., Ram, M., Goyal, N., & Saini, S. (2023). Reliability and cost optimization
of series–parallel system with metaheuristic algorithm. International Journal of System Assurance
Engineering and Management, 1-11.
12. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in
Engineering Software, 69, 46-61.
13. Pahuja, G. L. (2020). Solving reliability redundancy allocation problem using grey wolf
optimization algorithm. In Journal of Physics: Conference Series, 1706(1), 012155. IOP
Publishing.
14. Mahdavi-Nasab, N., Abouei Ardakan, M., & Mohammadi, M. (2020). Water cycle
algorithm for solving the reliability-redundancy allocation problem with a choice of redundancy
strategies. Communications in Statistics-Theory and Methods, 49(11), 2728-2748.
15. Gandhi, B. R., & Bhattacharjya, R. K. (2020). Introduction to shuffled frog leaping
algorithm and its sensitivity to the parameters of the algorithm. Nature-Inspired Methods for
Metaheuristics Optimization: Algorithms and Applications in Science and Engineering, 16, 105-
117.
16. Mirjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic
paradigm. Knowledge-Based Systems, 89, 228-249.
17. Sahoo, S. K., Saha, A. K., Ezugwu, A. E., Agushaka, J. O., Abuhaija, B., Alsoud, A. R., &
Abualigah, L. (2023). Moth flame optimization: theory, modifications, hybridizations, and
applications. Archives of Computational Methods in Engineering, 30(1), 391-426.
18. Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in
Engineering Software, 95, 51-67.