Masterthesis
My deepest gratitude goes to the almighty God for giving me the gift of life, health
and strength, which has enabled me to pursue my academic goals to the best of my
ability.
I would like to express my sincere gratitude to my supervisors, Dr. Majd Saeid
and Dr. Clovis Francis. It was my pleasure to work with such a supportive, inspiring,
motivating and tolerant team. Your encouragement, guidance and trust enabled me
to elaborate this work and complete this thesis.
I would also like to thank the members of my dissertation committee, Dr. Abdellatif
Samhat and Pr. Youssef Harkous, for their great attention, discussion, ideas and
feedback, which have been absolutely invaluable and finally enabled me to finish this
work.
Finally, I must express my very profound gratitude to my family for providing me
with unfailing support and continuous encouragement throughout my years of study
and through the process of researching and writing this thesis. This accomplishment
would not have been possible without their love.
Publications
Abstract
An Unmanned Aerial Vehicle (UAV) is a powered flying vehicle that does not carry
a human operator; it can be operated remotely or autonomously and can carry a
payload. UAVs can be used in both military and civilian applications, and they can
carry out tasks without placing human pilots at risk. However, in some cases a single
UAV cannot perform complex missions well, such as transporting large payloads or
searching for objects over a large area. Motivated by these potential applications,
researchers are increasingly attracted by the cooperation of multiple UAVs. The main
challenge in this field is to find the local interaction rules between these vehicles that
achieve a desired global behavior.
To deal with this issue, different formation control structures and architectures
are proposed in the literature. In addition, several methods exist for synthesizing the
control laws. Optimization approaches are widely used in the control of multi-agent
systems, in which the objective is formulated as an optimization problem, providing
effective tools to find the optimal solution with respect to the considered criteria.
However, if a fault/failure occurs in the formation, these control laws can be very
limited, resulting in an unstable system behavior. The development of fault tolerant
control tools becomes then essential to maintain control performance in the presence
of faults.
In this thesis, we propose the use of the Artificial Bee Colony (ABC) algorithm to
solve an online optimization problem for the leaderless distributed control of a fleet of
hexarotor UAVs. The ABC algorithm is used to minimize a cost function for each
UAV, guaranteeing convergence to the desired position while avoiding obstacles and
collisions with other agents. We then study the loss of agents due to actuator faults
in the fleet. The method is tested in simulation on a fleet of hexarotor UAVs, and
test results for various cases are presented.
Key Words: Formation control, Distributed control, ABC algorithm, Fault Tol-
erance.
General Introduction
In recent years, the control of multiple vehicles has attracted increasing attention
from scholars around the world, owing to its potential applications in both military
and civilian fields, and to the many benefits obtained when a single complicated
vehicle is replaced by multiple yet simpler vehicles. This is partly made possible by
technological developments such as the miniaturization of electronic and mechanical
components, constantly improving wireless communication devices, and increasing
storage and computation capabilities.
The study of collective behaviors of animals that lead to a global objective was
inaugurated by biologists. They studied flocks of birds, swarms of insects, herds
of quadrupeds and schools of fish, trying to understand the secret of the collision-free,
harmonious and collective motion of individuals. Moreover, they sought to discover
the motivations that lead animals to aggregate in groups like flocks of birds. The
natural behavior of animals operating as a team has inspired scientists in different
disciplines to investigate the possibilities of networking a group of systems in order
to accomplish a given set of tasks without requiring an explicit supervisor. Therefore,
multi-agent systems have appeared broadly in several applications, including multi-vehicle
systems, formation flight of unmanned aerial vehicles (UAVs), clusters of satellites, etc.
Motivation
Recently, relatively cheap Unmanned Aerial Vehicles (UAVs), equipped with sensing
and actuation capabilities, have been emerging rapidly. Moreover, with the miniaturization
revolution, these devices are able to perform local decision-making as well as short-range
communications. These novelties motivate scientists and researchers to raise
different questions about the techniques that could be introduced to coordinate and
control these devices. One of the biggest challenges in this new field of research is
to define local interaction rules between these devices that lead to a desired global
behavior; in other words, "Think globally, act locally". A desired global behavior
could be any objective or mission achieved by a single device, for instance surveillance,
inspection or transportation. However, such an objective, if carried out by one device,
could be expensive in terms of time, reliability and computational power. Therefore,
the question can be reformulated as "How should the interaction rules and the
control algorithms be designed to efficiently achieve the desired global objective?"
Different control architectures for autonomous vehicle fleet control exist in
the literature, in addition to the theoretical tools used to synthesize the control laws.
Some methods propose the use of consensus protocols to achieve a defined fleet goal,
where algebraic graph theory is the main theoretical tool used to solve control
problems. Other methods are based on optimization approaches where the controllers
are designed by minimizing or maximizing an optimization criterion. This method is
used in this thesis, where the primary goal is to implement an efficient, fast-converging
and suitable algorithm for fleet problems.
Formation control is based on advanced control techniques that aim to achieve
high performance. Nevertheless, if a fault or failure occurs in a fleet, these control
strategies may be very limited, inducing unwanted behaviors that may even lead to
instability of the agents. In such situations, it is necessary to develop fault-tolerant
control (FTC) tools and methods. By definition, the fault tolerance of a system is its
ability to perform its function despite the presence or occurrence of defects, whether
physical degradations of hardware or software defects. It should be noted that fault
tolerance in multi-agent systems is more general than for a single system: the operation
of a multi-agent system is affected not only by the appearance of defects on each agent,
but also by collisions, the loss or drift of agents, and the modification or loss of the
communication network, all of which can be considered defects for multi-agent
systems.
Thesis Contribution
The main contributions of this thesis are:
• Proposition and simulation validation of the use of the Artificial Bee Colony
algorithm (ABC) for the leaderless distributed control of a fleet of hexarotor
UAVs with collision avoidance.
• Convergence time comparison between the ABC and the Particle Swarm Opti-
mization algorithm (PSO) in four different scenarios.
Thesis Outline
The thesis is organized as follows:
• Chapter 1: The formation control structures, architectures and control methods
existing in the literature are reviewed, together with the fault tolerance concept.
• Chapter 2: The mathematical model of the hexarotor vehicle used for simulation
and control design is derived and the control strategy applied to the vehicle is
presented.
• Chapter 3: The Artificial Bee Colony algorithm principle and procedure are
explained and the optimization problem for the control of the fleet is formulated.
• Chapter 4: The ABC results are presented and compared to PSO results. A
further improvement in ABC is then introduced and its results are shown.
Finally, the thesis concludes with a summary of the obtained results and an
outlook on potential improvements of our work.
Chapter 1
Literature Review
1.1 Introduction
Automatic formation control is a field that addresses the control of multiple robots
in order to realize certain geometrical patterns. Inspired by nature, these geometrical
patterns could be regular, such as the V-shape seen in migratory birds, or irregular
as seen in a flock of birds. Several techniques of automatic formation control are
used, developed and applied on aircrafts and multiple aerial robots [1]. This chapter
describes the different classifications of structures and architectures existing in this
field, in addition to the different applied control laws and the theoretical tools used
to design them. It also defines the fault tolerance concept and its applied techniques
for single and multi-agent systems.
leader, the whole formation will be affected. Moreover, error propagation and non-self-
organization formation are some disadvantages of this structure. Solutions of some of
these drawbacks are found in [3] and [6]. In [6], the leader in the formation is replaced
by a virtual one: the virtual leader is a reference trajectory sent to all the agents in
the formation. This solution resolves the reliability problem of depending on a physical
leader as well as the problem of error propagation. In [3], a multiple-leader solution
is proposed to solve the problem of dependency on a single leader. However, the
self-organization problem persists in this structure.
agents applying the rules of Reynolds. The work in [12] introduces Null Space-based
behavioral control laws to follow the rules of Reynolds.
This structure offers many advantages. One of the main ones is that a
behavior-based structure is easily self-organized, since each agent has to follow a set
of rules and knows the objective trajectory or destination. Another crucial advantage
is scalability: when the number of agents in the formation increases or decreases,
each agent still interacts only with its neighbors, so its computational load is not
affected. The interaction with neighbors also reflects the distributed control aspect.
Yet, one main disadvantage of this structure is that a fixed geometrical pattern of
the formation cannot be ensured. Patterns such as V-shapes or rectangles cannot be
guaranteed; only lattice patterns with fixed inter-agent distances can be formed.
Moreover, the stability analysis of systems using this structure is relatively difficult.
In the literature, we can find several works such as [14] and [15]. This kind of
architecture is clearly identified in the aforementioned virtual structure. Moreover, a
centralized control architecture could be implemented in the leader-follower structure,
especially in the leader mode. The works in [7] and [8] show experimental works that
used the centralized control architecture. Its main advantage is the global knowledge
of the states of all agents in the formation. This global knowledge is beneficial in
designing a controller to achieve an optimal path planning and self-organization.
using other techniques [28], [29]. Different works in the literature have proposed the
use of the ABC approach to solve multi-robot path planning in a given environment,
with the ultimate objective of selecting the shortest path for all the robots without
hitting any obstacle in the world map [30]. In [31], the satellite formation keeping
problem was treated, in which the feedback gain parameter that ensures bounded
motion and minimum fuel consumption was estimated using ABC. These works have
shown the feasibility of applying this technique for path planning and for optimizing
feedback controller gains, with good solution quality and fast convergence.
After generating the trajectories for a formation of UAVs, a control law must be
designed to make the vehicles track the calculated reference trajectories. Control
laws may be classified into two major types, linear and nonlinear. For linear control,
the model is first linearized, then a linear control technique is used, such as PID,
Linear Quadratic control, etc.
An efficient and reliable quaternion-based PID technique for hexacopter control
and stabilization has been presented and discussed through extensive experimentation
in [32]. In [33], an optimal linear control algorithm is proposed that is able to stabilize
the attitude of a hexarotor micro aerial vehicle in indoor environments.
Some nonlinear control techniques are backstepping, sliding mode, nested satura-
tion, or Lyapunov redesign. In [34], a control algorithm based on the backstepping
technique is presented and applied to the trajectory tracking problem for a hexarotor
aerial robot. In [35], the Sliding Mode Control method, which is known to be insen-
sitive to external disturbances and parameter uncertainties, was used to control the
altitude, attitude and position of the hexarotor.
the theoretical value provided by the model under nominal operating conditions
[38], [39]. On the other hand, external methods are suitable when no analytical model
is available for the system to be controlled; the only knowledge available is based on
human expertise associated with strong experience feedback. These methods rely on
Artificial Intelligence techniques, including pattern recognition, artificial neural
networks, expert systems, etc.
The objective of the fault-tolerant control is then to determine a control strategy
that can prevent, or at least limit, the effects of faults on the stability and performance
of the system. In the presence of a low-amplitude defect, a simple robust control
can preserve the nominal stability and performance of the controlled system; this is
called passive fault-tolerant control [40], [41]. However, in the presence of severe defects,
it is necessary to implement an active fault-tolerant control strategy. In this case,
one distinguishes accommodation, reconfiguration and restructuring according to the
desired performance after the occurrence of the fault. This approach makes it possible
to deal with unforeseen faults, but it requires the synthesis of an FDI scheme to provide
information as precise as possible.
The problem of fault tolerance has been widely discussed in the last two decades.
The major problem encountered when designing such control is that most FDI tech-
niques are used as a monitoring tool to detect and locate open loop faults and do
not integrate the control part. The study of multi-agent systems and particularly
autonomous vehicle fleet control requires, in comparison to the control of a single
system, the consideration of additional problems which may compromise the execution
of a given mission such as loss of communication, changes in network topology, data
loss between agents, collision avoidance, defects or loss of agents in the fleet or
problems of error propagation. All these aspects are considered in the fleet control
framework as defects in the multi-agent system. Although the study of FDI and FTC
methods for a single dynamic system is widely known and developed, few results relate
to the concept of multi-agent systems. In [42] the consensus problem in multi-agent
systems in the case of actuator fault detection is studied. By adopting an approach
based on virtual actuators, it is shown that the effects of faults can be effectively
compensated. The authors in [43] propose a predictive-based (MPC) fault-tolerant
method for stabilizing and navigating a heterogeneous formation in a 3D space. In
[44], the authors studied problems related to the detection of defects occurring in
formation control. They were particularly interested in the case of agent loss in the
fleet and actuator faults. Simulation and experimental results were presented on a
fleet of quadrotors based on two algorithms, consensus and the PSO algorithm.
1.6 Conclusion
This chapter presents the formation control structures and architectures found
in the literature, focusing on the advantages and disadvantages of each. It then
lists some control laws used for controlling hexarotor UAV systems and describes the
different theoretical methods used to synthesize these control laws for multi-agent
systems. Finally, the fault tolerance concept is explained, together with its existing
types and methods and some applied examples on formation control problems.
Chapter 2
Modelling
2.1 Introduction
Hexarotors are aerial robots that have been extensively used in recent decades. In
both research and industrial fields, hexarotors appear to be a promising platform.
They are a type of Unmanned Aerial Vehicle (UAV) with useful properties compared
to fixed-wing airplanes and helicopters. This chapter presents the basic mathematical
model of a hexarotor UAV, which is used to develop proper methods for stabilization
and trajectory control. The vehicle is controlled by adjusting the angular velocities
of the rotors, which are spun by electric motors. The model of a fleet of hexarotors is
then deduced using graph theory. A general idea of the PID controller is finally
given, focusing on the advantages of using it in our work.
• The center of gravity and the origin of the body-fixed frame are assumed to
coincide
• The thrust and the drag are proportional to the square of the rotors speed
Figure 2.1: Hexarotor "+" configuration
Figure 2.2: Hexarotor "X" configuration
Describing the dynamics of the hexarotor under the external forces applied to its
center of mass using the Newton-Euler formalism results in the following nonlinear
model:
ζ̇ = v (2.1)
mζ̈ = G + RU (2.2)
η̇ = W Ω (2.3)
where:
• ζ = [x y z]ᵀ is the position of the center of mass of the hexarotor in the inertial frame.
• v = [vx vy vz]ᵀ is the vector of linear velocities in the inertial frame.
• η = [φ θ ψ]ᵀ is the vector of Euler angles in the body-fixed frame, named Roll, Pitch and Yaw respectively.
• Ω = [ωBx ωBy ωBz]ᵀ is the vector of angular velocities in the body-fixed frame.
• R is the rotation matrix from the body frame to the inertial frame.
• U = [0 0 F]ᵀ is the thrust vector.
• τ = [τφ τθ τψ]ᵀ is the torque vector.
fi = Kf · ωi²  (2.4)
where Kf is a constant that includes aerodynamic parameters of the blade. The overall
thrust generated by the six rotors is given as follows:
F = f1 + f2 + f3 + f4 + f5 + f6 (2.5)
Moreover, a drag moment is induced on each rotor due to aerodynamic forces. The
direction of the drag moment is opposite to the rotation of the rotor. This moment is
given by:
τi = Kt · ωi²  (2.6)
where Kt is a constant that includes the aerodynamic parameters relating the angular
velocities and the drag moment. These forces produce moments around the x, y and
z axes called the roll τφ , pitch τθ and yaw τψ moments respectively, and given by the
following equations:
τφ = l(−(1/2)f1 + (1/2)f2 + f3 + (1/2)f4 − (1/2)f5 − f6)  (2.7)

τθ = l(√3/2)(f1 + f2 − f4 − f5)  (2.8)

τψ = τ2 + τ3 + τ5 − τ1 − τ4 − τ6  (2.9)

In matrix form, the thrust and the three moments are related to the squared rotor speeds by:

U1 = BW  (2.10)

where:

U1 = [F  τφ  τθ  τψ]ᵀ  (2.11)

B = [   Kf          Kf          Kf        Kf          Kf          Kf
      −Kf l/2      Kf l/2      Kf l      Kf l/2     −Kf l/2     −Kf l
       Kf l √3/2   Kf l √3/2   0        −Kf l √3/2  −Kf l √3/2   0
      −Kt          Kt          Kt       −Kt          Kt         −Kt   ]  (2.12)

W = [ω1²  ω2²  ω3²  ω4²  ω5²  ω6²]ᵀ  (2.13)
Thus, by adjusting the angular velocities of the rotors, the system states, including
the translational and rotational position and velocity vectors, are controlled using a
suitable controller acting on the model described above for each UAV, tracking it
along the desired trajectory and thereby achieving the overall formation goal.
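As an illustration, the allocation U1 = BW of equation 2.10 can be sketched in Python. The coefficient values Kf, Kt and the arm length l below are assumptions for the example, not values from this thesis:

```python
import numpy as np

# Illustrative parameters (assumed, not from the thesis): thrust coefficient,
# drag coefficient and arm length.
Kf, Kt, l = 3e-5, 7e-7, 0.25
s3 = np.sqrt(3) / 2  # the sqrt(3)/2 factor appearing in the pitch row

# Allocation matrix B of equation (2.12), mapping the squared rotor speeds
# to [F, tau_phi, tau_theta, tau_psi].
B = np.array([
    [Kf,        Kf,       Kf,    Kf,        Kf,       Kf],
    [-Kf*l/2,   Kf*l/2,   Kf*l,  Kf*l/2,   -Kf*l/2,  -Kf*l],
    [Kf*l*s3,   Kf*l*s3,  0.0,  -Kf*l*s3,  -Kf*l*s3,  0.0],
    [-Kt,       Kt,       Kt,   -Kt,        Kt,      -Kt],
])

def allocate(omega):
    """Compute U1 = B W from the six rotor angular velocities (eq. 2.10)."""
    W = np.asarray(omega, dtype=float) ** 2
    return B @ W

# Equal rotor speeds produce pure thrust and zero moments.
U1 = allocate([400.0] * 6)
```

With all six rotors at the same speed the three moment rows cancel, which is a quick sanity check on the signs of the reconstructed matrix.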
Referring to Fig 2.3 and to equations 2.1 to 2.9, the full dynamic model of the
hexarotor UAV is described by the following equations [45]:

ẍ = (U/m)(cos φ sin θ cos ψ + sin φ sin ψ)  (2.14)

ÿ = (U/m)(cos φ sin θ sin ψ − sin φ cos ψ)  (2.15)

z̈ = (U/m)(cos θ cos φ) − g  (2.16)

and:

φ̈ = θ̇ψ̇ (Jy − Jz)/Jx + τφ/Jx  (2.17)

θ̈ = φ̇ψ̇ (Jz − Jx)/Jy + τθ/Jy  (2.18)

ψ̈ = φ̇θ̇ (Jx − Jy)/Jz + τψ/Jz  (2.19)
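A minimal sketch of the translational and rotational accelerations of equations 2.14 to 2.19 follows; the mass, gravity and inertia values are illustrative assumptions, not parameters from this thesis:

```python
import math

# Illustrative physical parameters (assumed, not from the thesis)
m, g = 1.2, 9.81
Jx, Jy, Jz = 0.02, 0.02, 0.04

def translational_acc(U, phi, theta, psi):
    """Inertial-frame accelerations, equations (2.14)-(2.16)."""
    xdd = (U / m) * (math.cos(phi) * math.sin(theta) * math.cos(psi)
                     + math.sin(phi) * math.sin(psi))
    ydd = (U / m) * (math.cos(phi) * math.sin(theta) * math.sin(psi)
                     - math.sin(phi) * math.cos(psi))
    zdd = (U / m) * (math.cos(theta) * math.cos(phi)) - g
    return xdd, ydd, zdd

def rotational_acc(phid, thetad, psid, tau_phi, tau_theta, tau_psi):
    """Euler-angle accelerations, equations (2.17)-(2.19)."""
    phidd = thetad * psid * (Jy - Jz) / Jx + tau_phi / Jx
    thetadd = phid * psid * (Jz - Jx) / Jy + tau_theta / Jy
    psidd = phid * thetad * (Jx - Jy) / Jz + tau_psi / Jz
    return phidd, thetadd, psidd
```

At hover (level attitude, thrust U = mg) the vertical acceleration vanishes, which matches equation 2.16.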
γi = {j ∈ V : aij ≠ 0}
The control signal is thus a sum of three terms, as in Fig 2.4: the P-term
(proportional to the error), the I-term (proportional to the integral of the error),
and the D-term (proportional to the derivative of the error). The controller
parameters are the proportional gain K, the integral time Ti, and the derivative
time Td. The proportional action helps reduce the steady-state error, thus making
the system more stable; it also reduces the rise time, so the slow response of an
overdamped system can be made faster. The integral action is used to eliminate the
steady-state error, improving accuracy, but it degrades the transient response. The
derivative action reduces the overshoot and improves the transient response of the
control system, thus increasing its stability.
When compared to other control algorithms, such as model-based or matrix-based
ones, the PID controller presents several advantages: it is easier to implement (just a
simple equation) and easier to tune by simple trial and error, it uses fewer resources,
it is more robust to tuning mismatches and has a better response to unmeasured
disturbances. Model-based controllers recover from unmeasured disturbances with
only an integral type of action, while PID also has proportional and derivative
actions that act immediately on an unknown disturbance.
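The three terms above can be sketched as a small discrete PID class. This is a generic textbook form with the gains K, Ti, Td named as in the text; the numerical values in the usage line are hypothetical, not tuned for any hexarotor:

```python
class PID:
    """Discrete PID in the textbook form u = K(e + (1/Ti) * integral(e) + Td * de/dt)."""

    def __init__(self, K, Ti, Td, dt):
        self.K, self.Ti, self.Td, self.dt = K, Ti, Td, dt
        self.integral = 0.0       # accumulated I-term state
        self.prev_error = 0.0     # previous error for the D-term

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.K * (e + self.integral / self.Ti + self.Td * derivative)

# Hypothetical altitude loop: gains chosen only for illustration.
pid = PID(K=2.0, Ti=10.0, Td=0.1, dt=0.01)
u = pid.update(1.0, 0.0)
```

A practical onboard implementation would typically add output saturation and anti-windup, which are omitted here for brevity.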
τθ = −σαθ (Kθ1 θ̇ + Kθ2 (θ − θref))  (2.24)

where the saturation function is defined as:

σαi(s) = s if −αi ≤ s ≤ αi
σαi(s) = αi if s > αi
σαi(s) = −αi if s < −αi

The Kai (a = φ, θ; i = 1, 2) are positive tuning gains. φref and θref are the desired
references, i.e. the outputs of the x-y position controllers.
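A rough sketch of a single-saturation version of such a law follows (the full scheme nests several saturations; the gain and bound values here are illustrative assumptions):

```python
def sat(s, alpha):
    """Saturation function: clamps s to the interval [-alpha, alpha]."""
    return max(-alpha, min(alpha, s))

def pitch_torque(theta, theta_dot, theta_ref, K1, K2, alpha):
    """Single-saturation pitch law in the spirit of equation (2.24):
    tau_theta = -sat(K1*theta_dot + K2*(theta - theta_ref), alpha).
    Gains K1, K2 and bound alpha are illustrative, not thesis values."""
    return -sat(K1 * theta_dot + K2 * (theta - theta_ref), alpha)

# The commanded torque is bounded regardless of how large the error gets.
tau = pitch_torque(0.2, 0.0, 0.0, 1.0, 1.0, 0.1)
```

The bound alpha is what guarantees that the commanded torque, and hence the resulting angular rates, stay bounded during flight.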
2.5 Conclusion
In this chapter, we presented the model of a hexarotor system, followed by the model
of a fleet of hexarotors, in addition to the control law used to control the attitude
and altitude of the hexarotor in this work. This control law is based on nested
saturation control for the roll and pitch and on PID control for the yaw and the
altitude. Nested saturation control presents the advantage of being simple to
implement onboard the UAV, like a PID controller. In addition, it is suitable for a
hexarotor UAV since it guarantees that the angles and the angular velocities remain
bounded during the flight.
Chapter 3
Proposed Methodology
3.1 Introduction
This chapter discusses a distributed planner method for multiple unmanned aerial
vehicles (UAVs), based on a leaderless approach, using the Artificial Bee Colony (ABC)
optimization algorithm. It explains how this algorithm proceeds and how the problem
is formulated.
associated solution. The number of employed bees and onlooker bees is equal to the
number of solutions in the population.
3.2.2 Procedure
In the ABC algorithm, each cycle of the search consists of three steps [29]:
1. Initialization
ABC generates a randomly distributed initial population P of Np solutions (food
source positions) and evaluates the nectar amount (fitness value) of each of them.
Each solution Xi (i = 0, 1, 2, ..., Np − 1) is a D-dimensional vector, where D is the
number of optimization parameters. This initialization is modeled as:

xij = xij,min + rand(0, 1)(xij,max − xij,min)  (3.1)
where Λ(Xi) represents the value of the cost function at the solution Xi.
Pi = fit(Xi) / Σ_{j=1}^{Np} fit(Xj)  (3.3)
where fit(Xi) is the fitness value of the solution Xi evaluated by its employed bee,
which is proportional to the nectar amount of the food source at position i, and Np
is the number of food sources, equal to the number of employed bees. After that, as
in the case of the employed bee, the onlooker bee produces a modification of the
position (solution) in her memory according to equation 3.4 and checks the nectar
amount of the candidate source (solution). Provided that its fitness is better than
that of the previous one, the bee memorizes the new position and forgets the old one.
In order to find a solution X′i in the neighborhood of Xi, a solution parameter j and
another solution Xk are selected at random.
Except for the value of the chosen parameter j, all other parameter values of X′i are
the same as in the solution Xi, as shown below:

X′i = (xi0, xi1, xi2, ..., xi(j−1), x′ij, xi(j+1), ..., xi(D−1))  (3.4)

The value of the x′ij parameter in the X′i solution is computed using the following
expression:

x′ij = xij + u(xij − xkj)  (3.5)
The set of elements ddij representing the desired distances between each two agents
i and j forms the desired adjacency matrix Ad. In addition, collision between agents
with c ∈ R being the smallest acceptable distance between any two neighbors. In the
matrix A(t), this constraint is formulated as:
To introduce this constraint in the cost function, a new function αij(t) is defined
between each hexarotor i and its neighbor j [24]:

αij(t) = 1 + e^((c − aij(t))/σ)  (3.10)

The value of the αij(t) function depends on the difference between aij(t), the distance
between agents i and j, and the safety distance c: it becomes larger when aij(t) < c
and converges to 1 when aij(t) ≫ c. Collisions with obstacles should also be avoided.
This constraint is written as:
where qk is the position vector of obstacle k. To take obstacle avoidance into account
in the fleet control strategy, an additional term is also introduced into the cost function:

Obsik(t) = e^((c − dik(t))/σ)  (3.12)

The value of the Obsik(t) function also depends on the difference between dik(t), the
distance between agent i and obstacle k, and the safety distance c; it converges to
zero when this distance is much greater than the safety distance.
3.4 Methodology
The design of the whole controller is schematized in the diagram in Fig 3.2.
First, let us define a cost function Λi(t) for each UAV i such that:

Λi(t) = ρ(‖Pd − [qi(t) + h]‖ − ddip) + Σ_{j=0}^{q} αij(t)(‖qj(t) − [qi(t) + h]‖ − ddij) + Σ_{k=1}^{m} Obsik(t)  (3.13)
3.5 Conclusion
This chapter has presented a distributed path planning approach for the control of
a fleet of Unmanned Aerial Vehicles. The control objective has been formulated as an
online optimization problem, and the minimization of a cost function generates, in a
distributed way, the trajectories of the UAVs that guarantee the control of the fleet.
The main advantage of the proposed approach is that it concerns only the trajectory
generation part and is independent of the model and of the control law of the UAVs,
which makes it easier to use.
Chapter 4
Simulation and Results
4.1 Introduction
The UAV fleet control method proposed in this thesis was used in [24] to control
a fleet of quadrotors based on a leaderless strategy, where the formation problem
was formulated as an optimization problem with the same cost function and the same
constraints, and solved using the Particle Swarm Optimization (PSO) algorithm. The
present method has been validated by simulations in Matlab under different scenarios,
using both the ABC and PSO algorithms. The resulting trajectories are then shown
when the algorithm is tested on a hexarotor fleet simulator in Matlab. This chapter
shows the results of the different scenarios and compares ABC and PSO; it then
presents a modified version of the ABC to further improve its performance.
• First, the ABC technique is applied to a single agent to generate its path from
an initial position (3, 0) to a desired position (−3, 0) without obstacles. Fig 4.1
shows the generated path.
• In a third scenario, two agents should converge to a rendezvous point (0,0) while
keeping a circle configuration of radius 2. The generated trajectories are shown
in Fig 4.3.
• The fourth scenario is similar to the third one but with six agents where the
rendezvous point is Pd = (15, 10) and the radius is r = 6 m. The time constant
is set to 1 second (τ = 1s). The rendezvous point’s position Pd and the desired
configuration for the UAVs set formed by the adjacency matrix Ad are fixed.
The matrix Ad of elements ddij that describe the desired distance between UAVs
is constructed to have a configuration in a circle around the rendezvous point
Figure 4.2: ABC path for one agent with obstacle avoidance
Figure 4.3: ABC path for two agents with collision avoidance
and ddip = r, ∀i ∈ [1, 6], with ddip being the desired distance between UAV i and
the rendezvous point Pd. The initial positions considered are given in table 4.3.
Fig 4.4 shows the evolution paths generated by the ABC for each UAV from the
initial position to the final position to reach the desired circle as shown by the
final positions in Fig 4.5.
Figure 4.4: ABC paths for six agents with collision avoidance
Figure 4.5: ABC final positions for six agents with collision avoidance
To assess the temporal behavior of the fleet, the cost functions Λi (t) of each
UAV i and the distances between the agents and the rendezvous point ddip (t) are
shown in Fig 4.6 and Fig 4.7 respectively. The convergence time of the fleet is
estimated to be around 80 seconds.
Figure 4.6: ABC Cost function convergence for six agents with collision avoidance
Figure 4.7: ABC Radius convergence for six agents with collision avoidance
b1(it) = (b1i − b1f) · (MaxIteration − it)/MaxIteration + b1f  (4.4)

b2(it) = (b2i − b2f) · (MaxIteration − it)/MaxIteration + b2f  (4.5)
where "it" is the current iteration number, with the parameters shown in table 4.4.

Parameter          | Value
-------------------|----------------------------
Inertia weight     | amin = 0.4, amax = 0.9
Cognitive weight   | b1i = 2.6, b1f = 0.4
Social weight      | b2i = 0.4, b2f = 2.6
Population size    | 100
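The linearly varying weights of equations 4.4 and 4.5 can be sketched as a single schedule function; the sampled iteration counts below are illustrative:

```python
def linear_schedule(start, end, it, max_it):
    """Linear interpolation from `start` (at it = 0) to `end` (at it = max_it),
    matching the form of equations (4.4)-(4.5)."""
    return (start - end) * (max_it - it) / max_it + end

# Cognitive weight decreases 2.6 -> 0.4; social weight increases 0.4 -> 2.6.
b1 = [linear_schedule(2.6, 0.4, it, 100) for it in (0, 50, 100)]
b2 = [linear_schedule(0.4, 2.6, it, 100) for it in (0, 50, 100)]
```

This shifts the swarm from exploration (strong cognitive weight) early on to exploitation (strong social weight) near the end of the run.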
• Fig 4.8 shows the path generated by the PSO for the first scenario.
• In the second scenario, PSO succeeded in obstacle avoidance and the generated
trajectory is shown in Fig 4.9.
• In the third scenario, the two agents reach the desired circle at diametrically
opposite points, following the trajectories generated by the PSO, as shown in Fig 4.10.
• In the last scenario, the six agents succeed in reaching the rendezvous point
with a circle configuration around it using the PSO. The evolution paths of the six
agents are shown in Fig 4.11. Fig 4.12 shows that the convergence of the fleet
using the PSO algorithm takes 112 seconds.
Figure 4.9: PSO path for one agent with obstacle avoidance
4.3 Comparison
To compare the performance of the two algorithms (ABC and PSO), 100 simulations
were run and the average results were taken. Table 4.5 summarizes the results of
the two algorithms: it presents the average convergence time over 100 runs for each
algorithm under the different scenarios, with the same conditions and the same
solution quality.
Figure 4.10: PSO paths for two agents with collision avoidance
Figure 4.11: PSO paths for six agents with collision avoidance
Table 4.5: Average convergence time of ABC and PSO for different scenarios
According to the obtained results, it is clear that ABC converges faster than PSO.
The results reveal that the ABC algorithm is efficient, fast-converging and suitable
for such applications. The whole approach, inspired by previous similar works using
Figure 4.12: PSO Cost function convergence for six agents with collision avoidance
PSO optimization offers many advantages, in particular its generic aspect, since it can be applied to various types of vehicles, its simplicity of implementation and tuning, and its distributed architecture and real-time operation.
global best (gbest) solution to guide the search of candidate solutions, we modify the
solution search equation described by equation 3.5 as follows:
x′ij = xij + u(xij − xkj ) + Ψ(yj − xij ) (4.6)
Where the third term on the right-hand side of equation 4.6 is a newly added term called the gbest term, yj is the j-th element of the global best solution, and Ψ is a uniform random number in [0, C], where C is a non-negative constant chosen to take the value 1.5 according to [50]. According to equation 4.6, the gbest term can drive the new candidate solution towards the global best solution; therefore, the modified solution search equation can increase the exploitation ability of the ABC algorithm. This modified version of the ABC algorithm (GABC) was then tested on the fourth scenario (six agents are supposed to reach a fixed target point Pd while making a circle around it, with a fixed radius r, with collision avoidance). The results are shown in Fig 4.13 and 4.14. Fig 4.13 shows that the trajectories generated
Figure 4.13: GABC paths for six agents with collision avoidance
by GABC are smoother than those generated by the ABC, since the position steps are smaller. This is because the search equation is improved and is now influenced by the global best position found among the population, which decreases the algorithm's randomness. Fig 4.14 shows that the convergence of the GABC took around 60 seconds, which is much less than the convergence time taken by the ABC. These two advantages revealed by the GABC lead us to consider it superior to the classical ABC and to use it for testing the behavior of the fleet on a hexarotor fleet simulator in Matlab.
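The gbest-guided search step of equation 4.6 can be sketched as follows, assuming candidate solutions are stored as plain Python lists (the function and variable names are illustrative):

```python
import random

def gabc_candidate(x, neighbour, j, gbest, C=1.5):
    """Generate a new candidate from food source x on dimension j using
    the modified search equation 4.6: u in [-1, 1] is the classical ABC
    perturbation towards a random neighbour, and psi in [0, C] weights
    the added gbest term (C = 1.5 following [50])."""
    u = random.uniform(-1.0, 1.0)
    psi = random.uniform(0.0, C)
    new = list(x)                           # copy; only dimension j changes
    new[j] = x[j] + u * (x[j] - neighbour[j]) + psi * (gbest[j] - x[j])
    return new
```

The psi-weighted term pulls the candidate towards the global best solution, which is what improves the exploitation of the classical ABC.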
Figure 4.14: GABC cost function convergence for six agents with collision avoidance
Figure 4.15: GABC paths for six agents with obstacle and collision avoidance
GABC. Fig 4.17 shows these distances according to the real positions of the UAVs. We can see that all the inter-distances are larger than the safety distance, which ensures that each UAV avoids collision with the obstacle.
Figure 4.16: GABC calculated inter-distance between each UAV and the obstacle
Figure 4.17: Real inter-distance between each UAV and the obstacle
nested saturation control for the roll and pitch, and the PID controller for the yaw
and altitude were able to allow each hexarotor UAV to follow its desired trajectory
generated by the GABC algorithm.
Fig 4.21 shows the inter-distances between each pair of hexarotor UAVs; it is
clear that all the inter-distances calculated from the positions generated by GABC
are greater than the predefined security distance which validates that the collision
avoidance constraint is verified.
Fig 4.22 shows the actual inter-distances between each pair of UAVs, tracking the calculated inter-distances.
4.6 Conclusion
This chapter presented the results of the ABC algorithm on four different scenarios, showing that it is an efficient and fast-converging algorithm compared to the PSO algorithm previously applied to fleet problems. In an attempt to further improve its performance, a modified version of the ABC was presented and applied on a hexarotor fleet simulator in Matlab, showing satisfying results in reaching the desired configuration and avoiding collisions between UAVs and with obstacles.
Chapter 5
Fault Tolerant Control for Formation Control
5.1 Introduction
The objective of this chapter is to extend the results obtained with the control technique proposed in chapter three to cases where faults are present. We are particularly interested in the loss of agents due to actuator faults in the fleet. Simulations on a fleet of hexarotors are carried out in order to validate the proposed approaches. Three cases are considered when an agent is lost. In the first, there is no recovery of the faulty agent, which is detected as faulty before the simulation starts. The second is the same, but the fault is detected suddenly during the flight of the fleet. In both cases, the remaining healthy agents in the fleet are supposed to continue the mission. The last case studied equips each agent with a fault tolerant control allocation module in order to recover it.
CHAPTER 5. FAULT TOLERANT CONTROL FOR FORMATION CONTROL 43
A zero value of a parameter kj (t) would then cancel the influence of an agent j on the
trajectory generation of the other agents of the fleet. The loss of an agent would then
be taken into account in the objective functions of the optimization modules, which
greatly facilitates the formation control.
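As an illustration of how such a gain can mask a lost agent, the sketch below weights the collision term of a simplified objective by the gains k; both the cost form and the names are hypothetical stand-ins for the thesis' objective functions:

```python
import math

def formation_cost(pos, targets, k, d_safe=1.0, w_coll=100.0):
    """Illustrative objective for the fleet optimizer: a target-tracking
    term plus a collision penalty in which every pair (i, j) is weighted
    by k[i] * k[j]. With k[j] in {0, 1}, setting k[j] = 0 cancels the
    influence of a lost agent j on trajectory generation.
    pos, targets: dicts mapping agent id -> (x, y)."""
    ids = list(pos)
    cost = sum(math.dist(pos[i], targets[i]) for i in ids)   # tracking term
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            i, j = ids[a], ids[b]
            d = math.dist(pos[i], pos[j])
            if d < d_safe:                                   # too close:
                cost += k[i] * k[j] * w_coll * (d_safe - d)  # penalize pair
    return cost
```

Zeroing k[j] removes every penalty term involving agent j, so the remaining agents plan as if the faulty one were absent.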
Figure 5.3: GABC trajectories for six agents with a faulty agent
In Fig 5.6 and Fig 5.7, we can see that UAV4 is detected as defective at the start of the simulation and remains in the same position. The other five hexarotor UAVs continue the mission and move in formation towards the calculated desired positions.
Figure 5.4: GABC cost function convergence in the case of loss of an agent
each agent. The faulty agent becomes motionless after the fault injection.
In Fig 5.9 and Fig 5.10, we can see that UAV4 is detected as defective after 50 seconds of simulation and remains in the same position. The other five hexarotor UAVs continue the mission and move in formation towards the new calculated desired positions. The vertical dashed lines indicate the fault injection time.
Figure 5.6: Desired and Real Positions in X for the six UAVs with a faulty agent detected
at the start
Figure 5.7: Desired and Real Positions in Y for the six UAVs with a faulty agent detected
at the start
Figure 5.8: GABC trajectories for six agents with a faulty agent detected during flight
Figure 5.9: Desired and Real Positions in X for the six UAVs with a faulty agent detected
during flight
Figure 5.10: Desired and Real Positions in Y for the six UAVs with a faulty agent detected
during flight
desired adjacency matrix remain unmodified. For this purpose, fault tolerant control
allocation is suggested to recover the faulty hexarotor.
One major advantage of a hexarotor is its motor redundancy: thanks to this redundancy, it can continue flying when one motor fails. The main objective of control allocation is to determine how to generate a desired control effect from a redundant set of actuators and effectors. In particular, if the set of actuators and effectors is partially affected by faults, one can modify the control allocation scheme by preventing the use of inefficient/ineffective devices in the generation of the control effect, or by compensating for the loss of efficiency. However, one key point for successfully re-allocating the control is the availability of adequate information about the faults that have occurred. Indeed, accurate fault estimation and correct isolation of the faulty actuators or effectors are necessary to address the reconfiguration problem. In our case, we suppose that there is a functioning FDI module which detects the fault and isolates it properly. The type of fault considered is an actuator fault in one of the six motors of the hexarotor UAV.
As in the first case, the FDI module provides a signal informing about the detected fault. This signal is received by the fault tolerant control allocation, which acts on the control input to the UAV system by eliminating the influence of the defective
part through a matrix W(t) and modifying its output. In other words, the motor speeds are modified so as to compensate for the fault.
Let W(t) be an N × N diagonal matrix with elements wi (t), where:
For a hexarotor UAV, the virtual control input v of the system consists of the total thrust F and the roll, pitch and yaw moments τφ , τθ and τψ around the three axes, computed according to the control laws to make the hexarotor move as desired. The real control input is the motor speed vector. The relation between these two input vectors is described by the control effectiveness matrix B, which depends on the hexarotor configuration and the rotor distribution. The low-level controller is the motor speed controller. In case of one or more actuator failures, the healthy actuators can be reconfigured by the control allocation system without having to change the motion controller structure and tuning. The weighted pseudo-inverse method is often used to solve unconstrained linear control allocation problems. Neglecting any constraints and saturations on the input, an explicit solution to the control allocation problem can be obtained from the minimization of a quadratic problem as follows:
u = W B^T (B W B^T )^−1 τ (5.6)
where W is the matrix described above, τ = [F, τφ , τθ , τψ ]^T is the virtual control input vector, and u = [ω1², ..., ω6²]^T is the vector of squared motor speeds. The fixed matrix B is defined as in equation 2.12.
Because we neglect constraints and saturations, there is no guarantee that the solution will respect them, and it may become unacceptable when the saturations on the control inputs are considered. This is the main drawback of this method. However, its main advantages are its easy construction and high allocation efficiency.
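A sketch of the weighted pseudo-inverse allocation of equation 5.6 is given below; the 4×6 effectiveness matrix used here is a generic hexarotor layout for illustration only, not the B of equation 2.12:

```python
import numpy as np

def allocate(B, tau, w):
    """Weighted pseudo-inverse allocation u = W B^T (B W B^T)^-1 tau.
    w is the diagonal of W(t); a zero entry excludes that rotor from
    the allocation, so its (squared) speed command becomes zero."""
    W = np.diag(w)
    return W @ B.T @ np.linalg.inv(B @ W @ B.T) @ tau

# Generic hexarotor effectiveness matrix (illustrative): rows map the
# squared rotor speeds to [F, tau_phi, tau_theta, tau_psi].
ang = np.deg2rad([0, 60, 120, 180, 240, 300])
B = np.vstack([np.ones(6),                            # total thrust
               np.sin(ang),                           # roll moment
               -np.cos(ang),                          # pitch moment
               0.1 * np.array([1, -1, 1, -1, 1, -1])])  # yaw (drag)

tau = np.array([10.0, 0.0, 0.0, 0.0])                 # pure thrust demand
u_healthy = allocate(B, tau, np.ones(6))
u_faulty = allocate(B, tau, np.array([1.0, 0.0, 1.0, 1.0, 1.0, 1.0]))  # motor 2 out
```

In both cases B u reproduces τ, and with w2 = 0 the command sent to the faulty motor is zero while the five healthy rotors absorb the demand.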
For validation purposes, we considered the failure of motor 2 of UAV4 in the fleet of hexarotors, occurring 40 seconds after the beginning of the simulation, and we tested the behavior of the fleet after recovering this faulty UAV by fault tolerant control allocation on a hexarotor fleet simulator in Matlab. The results are shown in the figures below.
Figures 5.12 to 5.17 show the angular speeds of the six rotors of the faulty hexarotor. The first dashed line indicates the failure injection time and the second the detection and recovery time.
The control allocation succeeded in eliminating the influence of the defective rotor, modifying the control input to the UAV4 system, and recovering it within 1 second. After the recovery of UAV4, it tracks the desired trajectory and continues its flight with the other members of the fleet to reach the formation target.
5.4 Conclusion
This chapter introduced the idea of fault tolerance for formation control of UAVs.
It studied the problem of losing an agent from the fleet in three cases. Two of them
do not consider the recovery of the faulty agent and one aims to design fault tolerant
control allocation for the faulty UAV in an attempt to recover it. Simulations on a
fleet of hexarotors are carried out in order to validate the proposed approaches. In all
cases, the fleet succeeded in completing the mission after detecting the fault.
In this final section, we first summarize the contributions concerning the development and simulation validation of applying the ABC to the formation control problem. We then suggest possible improvements to our work and topics for future studies.
Conclusions
The main objective of this thesis was to explore an autonomous vehicle fleet control
technique and approach for the purpose of designing and realizing UAV formation
flight control in real time. We also aimed to take into account the case of faults and
failures in the fleet by proposing fault-tolerant control methods in the multi-agent
context.
We started with a bibliographic review of the existing works. Given the richness of the field covered by this topic, we chose specific research directions in order to propose relevant contributions. We were interested in fleet control based on optimization techniques; a large number of algorithms derived from swarm intelligence are proposed in the literature. An approach based on distributed trajectory generation is proposed. In this approach, an optimization block, implemented on each vehicle, minimizes an objective function to calculate the optimal reference trajectory guaranteeing the formation control of the fleet to reach a target point with a desired topology while avoiding collisions and obstacles. The minimization of the objective functions in each vehicle is done in a fully distributed manner. The real-time aspect also requires the choice of an optimization algorithm with minimal computation time. The ABC algorithm, used in our approach, is very suitable for this dynamic optimization problem; it was compared with the PSO applied in the literature to the same problem and showed better performance regarding the convergence time. However, there is still an insufficiency in the ABC algorithm regarding the solution search equation, which is used to generate new candidate solutions based on the information of previous solutions. Inspired by the PSO, which, in order to improve exploitation, takes advantage of the information of the global best solution to guide the search of candidate solutions, an improved version of the ABC was presented in which the solution search equation is modified. The whole approach, inspired by previous similar works using PSO optimization, offers many advantages, in particular its generic aspect, since it can be applied to various types of vehicles, its simplicity of implementation and tuning, and its distributed architecture and real-time operation.
Our method was first tested in simulation on Matlab and then on a hexarotor
fleet simulator. The hexarotor was modeled with nonlinear dynamics and controlled
by nested saturation and PID controller. The main advantage of our approach is the
ability to separate the local control of each vehicle, which can be done independently, from the path generation for the fleet control. The control by nested saturations has the advantage of being simple to implement onboard the UAV, like a PID controller. In addition, it is suitable for a hexarotor UAV since it guarantees that the angles and the angular velocities remain bounded during the flight. However, the disadvantage of this method was that the generation of positions after each sample time, done by the ABC algorithm, was faster than the controller's ability to track these desired positions in the same time. This is due to the fact that the PID controller is a simple linear controller while the hexarotor was modeled as a nonlinear system.
In the last part of this thesis we have proposed the design of fault tolerant control
laws for the proposed fleet control approach. We were interested in problems related
to the loss of agents in the fleet, and actuator faults. We have assumed in these
developments that an almost perfect fault detection and isolation (FDI) module is implemented upstream of our controller.
Perspective
In future works, we intend to consider a nonlinear controller, such as a sliding mode controller, so that the hexarotor tracks the desired trajectory with a faster response to the new position generated after each sample time. The proposed architecture will be tested experimentally on a fleet of real hexarotors.
The extensions proposed in the last chapter on fault tolerance remain simplistic
for the moment and represent only preliminary results. We were interested only in
cases of loss of agents and actuator faults. The aspect of detection and fault tolerance
should be studied in more detail.
In addition, further work is to be considered concerning other fault types, such as problems related to telecommunications, data loss, delays and localization errors. It would be interesting to develop the proposed approaches in this direction in order to be able to perform formation flights in an outdoor environment.
Bibliography
[1] J-A. Guerrero, “UAV flight formation control,” John Wiley-ISTE, 2012.
[2] F. Giulietti, “Autonomous formation flight,” IEEE Control Systems, vol. 20, no.
6, pp. 34-44, 2000.
[3] Z. Hou, “Distributed leader-follower formation control for multiple quadrotors with
weighted topology,” 10th System of Systems Engineering Conference (SoSE), pp.
256-261, 2015.
[4] M. Chiaramonti, “Formation control laws for autonomous flight vehicles,” 14th
Mediterranean Conference on Control and Automation, pp. 1-5, 2006.
[5] G. Vasarhelyi, “Outdoor flocking and formation flight with autonomous aerial
robots,” IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), pp. 3866-3873, 2014.
[6] H. Shi, “Virtual leader approach to coordinated control of multiple mobile agents
with asymmetric interactions,” Physica D: Nonlinear Phenomena,vol. 213, no. 1,
pp. 51-65, 2006.
[9] CW. Reynolds, “Flocks, herds, and schools: A distributed behavioral model,”
Computer Graphics, pp. 25-34, 1987.
[12] G. Antonelli, “Flocking for multi-robot systems via the null-space-based behav-
ioral control,” Swarm Intelligence, vol. 4, no. 1, pp. 37-56, 2010.
[14] J. Bellingham, “Cooperative path planning for multiple UAVs in dynamic and
uncertain environments,” Proceedings of the 41st IEEE Conference on Decision and
Control, pp. 2816-2822, 2002.
[15] A. Richards, “Aircraft trajectory planning with collision avoidance using mixed integer linear programming,” Proceedings of the American Control Conference, pp. 1936-1941, 2002.
[16] X. Zhang, “An output feedback nonlinear decentralized controller design for multiple unmanned vehicle coordination,” American Control Conference, 2006.
[17] D. Roberson, “Decentralized control and estimation for a platoon of autonomous
vehicles with a circulant communication network,” American Control Conference,
2006.
[18] L. Bakule, “Decentralized control: An overview,” Annual Reviews in Control, vol. 32, no. 1, pp. 87-98, 2008.
[19] MR. Jovanovic, “Modeling, Analysis, and Control of Spatially Distributed
Systems,” PhD Thesis, University of California Santa Barbara, 2004.
[20] L. Brunet, “Consensus-Based Auction Approaches for Decentralized Task Assignment,” AIAA Guidance, Navigation and Control Conference and Exhibit, 2008.
[21] X WASAS, “Consensus in the network with uniform constant communication
delay,” IEEE 51st Annual Conference on Decision and Control (CDC), pp. 3252-
3257, 2012.
[22] T. Yang, “Consensus for multi-agent systems-synchronization and regulation for
complex networks,” American Control Conference (ACC), pp. 5312-5317, 2011.
[23] J. Qin, “Cluster consensus control of generic linear multi-agent systems under
directed topology with acyclic partition,” Automatica, vol. 49, no. 9, pp. 2898-2905,
2013.
[24] A. Belkadi, “UAVs Fleet Control Design Using Distributed Particle Swarm
Optimization: A Leaderless Approach,” International Conference on Unmanned
Aircraft Systems (ICUAS), 2016.
[25] A. Belkadi, “Distributed Path Planning for Controlling a Fleet of UAVs:
Application to a Team of Quadrotors,” IFAC-PapersOnline, vol. 50, no. 1, pp.
15983-15989, 2017.
[26] Z. Abdmouleh, “Review of optimization techniques applied for the integration of distributed generation from renewable energy sources,” Renewable Energy, vol. 113, pp. 266-280, 2017.
[27] D. Basturk, “Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems,” Springer-Verlag, Berlin, Germany, pp. 789-798, 2007.
[28] D. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Appl. Soft Comput., vol. 8, no. 1, pp. 687-697, 2008.
[31] H. SOKaD, “Satellite formation keeping via chaotic ABC,” Aircraft Engineering
and Aerospace Technology, vol. 89, no. 2, pp. 246-256, 2015.
[33] N.-D. Salim, “PID plus LQR Attitude Control for Hexarotor MAV in Indoor
Environments,” IEEE International Conference on Industrial Technology, 2014.
[34] C.-A. Arellano, “Backstepping control with sliding mode estimation for a
hexacopter,” 10th International Conference on Electrical Engineering, Computing
Science and Automatic Control (CCE), 2013.
[37] R.J. Patton, “Robust model-based fault diagnosis: the state of the art,” IFAC Proceedings Volumes, vol. 27, no. 5, pp. 1-24, 1994.
[41] K. Zhou, “A new controller architecture for high performance, robust and fault
tolerant control,” Proceedings of the 39th IEEE Conference on Decision and Control,
2000.
[42] Zhou, “Cooperative control for consensus of multi-agent systems with actuator
faults,” Computers Electrical Engineering, vol. 40, no. 7, pp. 2154-2166, 2014.
[44] A. Belkadi, “Conception de commande tolérante aux défauts pour les systèmes multi-agents : application au vol en formation d’une flotte de véhicules autonomes aériens,” PhD Thesis, Université de Lorraine, 2017.
[46] D. Karaboga, “An Idea Based On Honey Bee Swarm for Numerical Optimization,” Tech. Report TR06, Kayseri, Turkey: Dept. Comput. Eng., Erciyes Univ., 2005.
[47] R. Diestel, “Graph Theory,” vol. 173 of Graduate Texts in Mathematics, 3rd ed.
Heidelberg: Springer-Verlag, 2005.
[48] R. Eberhart, “A new optimizer using particle swarm theory,” Proceedings of the
sixth international symposium on micro machine and human science, pp. 39-43,
1995.
[49] M.-M. Zirkohi, “Design of Radial Basis Function Network Using Adaptive Particle
Swarm Optimization and Orthogonal Least Squares,” Software Engineering &
Applications, 2010.
[50] G. Zhu, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Applied Mathematics and Computation, 2010.