
Control Systems for Quadcopters and UAVs

Mg. Ing. Horacio M. Simonetti, Dr. Ing. Alejandro H. Molina, Ing. Mabel Bottoni
Grupo de Desarrollo de Herramientas Computacionales
Facultad Regional Bahía Blanca, Universidad Tecnológica Nacional
Bahía Blanca, Argentina
piddef0912@gmail.com, ale.molina@frbb.utn.edu.ar, ing_mabel_bottoni@hotmail.com

Abstract
Unmanned Aerial Vehicles (UAVs), and particularly quadcopters, have four motor/propeller assemblies, typically mounted in a cross arrangement; one pair rotates clockwise and the other counterclockwise to balance torque, actions that are monitored and executed by a control system.
In this review, the various existing control schemes are analyzed in order to find control configurations that optimize flight tasks in the presence of strong winds, within the geographic environment of the Southwest of the Province of Buenos Aires, Argentina. There, the applications of these vehicles are limited mainly by winds that exceed 35 km/h (20 knots) and affect flight conditions for precision work. Decisions on the use of quadcopters in this area are based primarily on their simplicity and versatility.
Quadcopters do not have complex mechanical control linkages because they rely on fixed-pitch rotors and use variations in motor speed for vehicle control (Huo et al., 2014). Nevertheless, the control of a quadcopter is a difficult task, mainly due to its associated physics and its particular design methods (Mahony et al., 2012). In addition, the dynamics of a quadcopter are nonlinear, which generates uncertainties during long flights (Lee et al., 2013); this makes flight control a challenge, and therefore different control algorithms have been proposed, as the literature on the subject shows.
In this work, the available literature is reviewed to analyze the different control systems applied to quadcopters. To this end, a short review of the mathematical model of the device is made, followed by an evaluation of the control systems used in quadcopters and a tabular comparison of them, on the basis of which some conclusions are drawn regarding their use in control models applicable to vehicles operating in high-wind conditions.

Keywords: Drones, UAVs, Control Systems, Methods, Techniques.

1. Introduction

This review analyzes the various existing control schemes in order to determine control configurations that mitigate flight problems in the presence of strong winds, within the geographic environment of the Southwest of the Province of Buenos Aires, Argentina. In this area, the applications of these vehicles are limited mainly by winds that exceed 35 km/h (20 knots) and affect flight conditions for precision work.
Moreover, the dynamics of the quadcopter are nonlinear, which generates uncertainties during long flights (Lee et al., 2013). This makes flight control a challenge, and therefore different control algorithms have been proposed in the literature on the subject.
In this work, the available literature is reviewed to analyze the different control systems applied to quadcopters. To this end, a short review of the mathematical model of the device is made, followed by an evaluation of the control systems used in quadcopters and a tabular comparison of them, on the basis of which some conclusions are drawn regarding their use in control models applicable to vehicles operating in high-wind conditions.

2. Materials and Methods.

2.1. Mathematical model of the dynamics of the quadcopter

The model is obtained starting from Fig. 1.


Figure 1
Bouabdallah (Bouabdallah, 2007) proposes as equations of motion of the quadcopter:

where φ, θ and ψ are the roll, pitch and yaw angles, respectively; Ix, Iy, Iz are the moments of inertia about the x, y and z axes, respectively; Jr is the rotor inertia; Ω is the rotor angular velocity; l is the length of the rotor arm from the origin of the coordinate system; and the inputs produced by the four rotors are given by the expressions:

where b and d are, respectively, the thrust and drag coefficients.
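For reference, these equations take the following standard form in Bouabdallah's work (a reconstruction; rotor numbering and sign conventions vary slightly between publications), with m the vehicle mass and g the gravitational acceleration:

```latex
\ddot{\phi} = \dot{\theta}\dot{\psi}\,\frac{I_y - I_z}{I_x} - \frac{J_r}{I_x}\,\dot{\theta}\,\Omega + \frac{l}{I_x}\,U_2, \qquad
\ddot{\theta} = \dot{\phi}\dot{\psi}\,\frac{I_z - I_x}{I_y} + \frac{J_r}{I_y}\,\dot{\phi}\,\Omega + \frac{l}{I_y}\,U_3, \qquad
\ddot{\psi} = \dot{\phi}\dot{\theta}\,\frac{I_x - I_y}{I_z} + \frac{1}{I_z}\,U_4
\\
\ddot{z} = -g + (\cos\phi\cos\theta)\,\frac{U_1}{m}, \qquad
\ddot{x} = (\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi)\,\frac{U_1}{m}, \qquad
\ddot{y} = (\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi)\,\frac{U_1}{m}
\\
U_1 = b\,(\Omega_1^2 + \Omega_2^2 + \Omega_3^2 + \Omega_4^2), \quad
U_2 = b\,(\Omega_4^2 - \Omega_2^2), \quad
U_3 = b\,(\Omega_3^2 - \Omega_1^2), \quad
U_4 = d\,(\Omega_2^2 + \Omega_4^2 - \Omega_1^2 - \Omega_3^2), \quad
\Omega = \Omega_2 + \Omega_4 - \Omega_1 - \Omega_3
```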

2.2. Description of the most used control algorithms

Until recently, developing an autonomously controlled miniature-scale aerial vehicle was a dream of many researchers, who were limited by the restrictions imposed by the hardware previously available. What made the construction of autonomous aerial robots possible were recent technological advances in small-scale actuators and sensors (MEMS, Micro-Electromechanical Systems), as well as in energy storage and data processing.
Furthermore, the development of control systems for this type of vehicle is not trivial, mainly due to the complex dynamics inherent in aerodynamic systems, which are multivariable, under-actuated and have various nonlinear characteristics. This means that classical linear, single-variable control laws can have a very limited basin of attraction, causing instabilities even when operating in conditions not far from equilibrium. On the other hand, the techniques developed for fully actuated robots do not apply directly to under-actuated nonlinear mechanical systems (Fantoni and Lozano, 1995).
To increase both the reliability and the performance of these systems, advanced control strategies are usually required that take into account, on the one hand, the complexity of these systems and, on the other, the uncertainties inherent in any modeling. Such requirements can be met using nonlinear modeling techniques and nonlinear control theory, which make it possible to achieve high performance in autonomous flight (Castillo et al., 2005) and in different flight conditions (hovering, fixed point, landing/take-off). The objectives of a flight control system can be classified into three levels, depending on the autonomy achieved by the system:
• Stability Augmentation Systems (SAS): these systems seek to assist the piloting of the vehicle, stabilizing the system with low-level control. This spares the pilot from having to act on the dynamic behavior of a system which, once away from a certain equilibrium point, ceases to be intuitive for human reasoning.
• Control Augmentation Systems (CAS): these systems sit at a higher hierarchical level than SAS. In addition to stabilizing the vehicle, they must be capable of providing a response with certain performance guarantees to references from the pilot, such as pitch-angle tracking.
• Autopilots: they constitute the hierarchically superior level of control. They are fully automatic control systems capable of performing certain types of maneuvers by themselves, such as take-off, landing, or hovering at a certain height.

Figure 2

There are several characteristics of control systems to keep in mind when they are applied to drone control. To achieve good operation of a control system when it is implemented, the following must be taken into account: adequate noise filtering, the high-frequency response characteristics, the adjustment of the degrees of freedom with respect to the equilibrium point, actuator saturation effects, the parameter-tuning method, and the computational implementation. These considerations require establishing a model that includes disturbances and noise, extending the traditional model to the one shown in Figure 2.

Figure 3
In this new model, the process P is subject to disturbances: the load disturbance d (which represents the effects that drive the process away from its desired behavior) and the measurement noise n. The process variable x is the true physical variable that we want to control, but control is based on the measured signal y, which is corrupted by the noise n. The controller is shown divided into two parts: the feedback compensator C and the feed-forward compensator F. The controller influences the process through the control variable u. The process thus turns out to be a system with three inputs (u, d, n) and one output (y). Figure 3 shows the load disturbance acting at the input of the process; in reality the disturbance can enter the process in many different ways, and this representation is adopted to simplify the description.
Summarizing the general design considerations for a controller, the basic requirements are: stability, the ability to follow reference signals, reduction of the effects of load disturbances, reduction of the effects of measurement noise, and rejection of variations in process parameters and/or uncertainties in the model used. Depending on the specific application, one or more of these requirements will prevail over the others. This new model features three inputs, r, d and n, which affect the variables u, x and y that describe the operation of the system. Assuming that the system is linear, there are nine relationships expressible as transfer functions between the input and output variables. If X, Y, U, D, N, R represent the Laplace transforms of x, y, u, d, n, r (leaving aside the complex argument s for simplicity), it can be expressed:
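Writing out the loop relations of Figure 3 (x = P(u + d), y = x + n, u = C(Fr − y)) and solving for X, Y and U gives:

```latex
X = \frac{PCF}{1+PC}\,R + \frac{P}{1+PC}\,D - \frac{PC}{1+PC}\,N
\\
Y = \frac{PCF}{1+PC}\,R + \frac{P}{1+PC}\,D + \frac{1}{1+PC}\,N
\\
U = \frac{CF}{1+PC}\,R - \frac{PC}{1+PC}\,D - \frac{C}{1+PC}\,N
```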

We observe that several of the transfer functions are equal and that all relations are expressed as combinations of a set of six functions, as stated by Åström (Åström, 2002).

The transfer functions in the first column determine the responses of the process variable x and the control variable u to the command variable r. The second column gives the same signals for the case of pure error feedback, that is, assuming F = 1. The function P/(1 + PC) in the third column defines the reaction of the process variable x to a load disturbance d, while C/(1 + PC) gives the response of the control signal to the measurement noise. The system with F = 1 is called pure error feedback control, and in this case the system is completely characterized by four transfer functions:
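In Åström's notation, these four functions (the "Gang of Four") are:

```latex
S = \frac{1}{1+PC} \;\;\text{(sensitivity function)}, \qquad
T = \frac{PC}{1+PC} \;\;\text{(complementary sensitivity function)},
\\
PS = \frac{P}{1+PC} \;\;\text{(load disturbance sensitivity)}, \qquad
CS = \frac{C}{1+PC} \;\;\text{(noise sensitivity)}
```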

These functions are obtained by considering the closed-loop transfer function T of the system and the variation it suffers when the process P experiences a small perturbation around the nominal value of its parameters. The sensitivity function S then expresses the relative variation of the closed-loop transfer function under small variations of the process. From these equations, it is observed that:
S + T = 1

So the closed-loop transfer function T is also called the complementary sensitivity function. The analysis is the same if another control system configuration is used, such as the one represented in Figure 4, where the same results as before are obtained by setting the condition F = A/C.

Figure 4

Various control algorithms have been used according to the nature of drone dynamics. Each control scheme has advantages and disadvantages; as a first approximation, the control systems used can be divided into linear and nonlinear. For a first categorization, this analysis can be restricted to the different control systems within these categories.

2.3. Proportional and Integral plus Derivative (PID) (LINEAR) control system
The PID controller was patented in 1939 by Albert Callender and Allan Stevenson of Imperial Chemical Industries (Northwich, England). It represented a huge advance over previous automatic control methods. The PID control algorithm combines three different actions: proportional, integral and derivative. The proportional value depends on the current error, the integral depends on past errors, and the derivative is a prediction of future errors. The sum of these three actions is used to adjust the process by means of a control element.

2.3.1. PID Control System Fundamentals


Given a system whose variables are made explicit in the control loop of Figure 5,

Figure 5
The system presents a behavior described by the equation:
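In its standard form, with the error e(t) = r(t) − y(t) between the reference and the measured output, the PID law is:

```latex
u(t) = K_p\,e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt}
```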

2.3.2. Use of PID Control Systems in drones

PID control systems have been used in a large number of applications (Bouabdallah, 2007) due to their simplicity and great robustness. One of the main problems in their application to drones is the nonlinearity associated with the mathematical model, and the imprecision arising from the lack of exact mathematical models or of detailed models that include all the dynamic variables of the drone.
Therefore, applying a PID control system to drones limits their performance.
In one study (Lee et al., 2012), a PID controller was used to control a quadcopter, and a DSC scheme was employed for altitude control. The study showed, by applying the Lyapunov stability criteria, that all quadcopter signals were uniformly bounded during hovering, which meant that the quadcopter control worked properly in unstable flight conditions. However, from the analysis of the simulation and experimental graphs, it was verified that the PID controller performed better in tracking the tilt angle, while steady-state errors were observed in tracking the roll angle.
In another work (Li & Li, 2011), a PID controller was used to control the position and orientation of a quadcopter. The performance of the PID controller in terms of altitude stabilization was relatively good, as was the response time; the steady-state error was almost zero, and it only presented a slight overshoot as an unwanted characteristic.
Houari (Houari et al., 2020) used multiple attitude and altitude PID controllers on a simple linear model of a tilt-rotor aircraft in order to follow a desired trajectory. Since the multiple-PID model did not take into consideration all the coupling that exists between the degrees of freedom of the model, an LQR controller was adopted for complex maneuvers, and a linearization of the model was used to facilitate its implementation. The results compared several performance indices, such as overshoot, response time and control precision; the LQR controller was more efficient than the PID controller in all performance indices, for both vertical take-off and longitudinal movements.
In general, the literature indicates that this type of controller has been applied successfully to quadcopter control, although with some limitations. Tuning the PID controller can be inconvenient, as the operating point must remain close to the equilibrium point for good performance.
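To make the preceding discussion concrete, the sketch below closes a PID loop around a hypothetical first-order plant dy/dt = −y + u; the plant and the gains kp, ki, kd are illustrative assumptions, not values taken from the cited studies:

```python
# Minimal discrete PID controller regulating a hypothetical first-order plant
# dy/dt = -y + u, integrated with forward-Euler steps.

def run_pid(setpoint=1.0, kp=2.0, ki=1.5, kd=0.1, dt=0.01, steps=2000):
    y = 0.0                      # plant output
    integral = 0.0               # accumulated error (I term)
    prev_error = setpoint - y
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative   # PID law
        prev_error = error
        y += (-y + u) * dt       # Euler step of the plant
    return y

if __name__ == "__main__":
    print(run_pid())
```

With these (assumed) gains the integral term removes the steady-state error, which is the property the cited works rely on for altitude hold.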

2.4. Linear Quadratic Gaussian Control Systems (LQR/G) (LINEAR)

Linear Quadratic Gaussian (LQG) control is one of the fundamental methods of optimal control. It deals with uncertain linear systems disturbed by additive white Gaussian noise, where the state information is incomplete (that is, not all state variables are measured and available) and the control is subject to quadratic cost functions. The solution in these cases is unique and is easily calculated and implemented. The LQG controller is also essential for the optimal control of disturbed nonlinear systems. The LQG controller is simply the combination of a Kalman filter (a linear quadratic estimator, LQE) with a linear quadratic regulator (LQR). The separation principle ensures that these can be designed and calculated independently. LQG control applies both to time-invariant and to time-varying linear systems.
2.4.1. Fundamentals of the LQR/G Control System
The block diagram of an LQG controller for a quadcopter is shown in Figure 6.
Figure 6
The generic description in terms of a linear dynamical system would be:
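In standard state-space notation (a reconstruction consistent with the variables defined below), this reads:

```latex
\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t) + v(t), \qquad
y(t) = C(t)\,x(t) + w(t)
```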

where x represents the vector of system state variables, u the vector of control inputs, and y the vector of measured outputs available for feedback. Both the additive white noise of the system, v(t), and the measurement noise, w(t), affect the system. The objective is to find, for each time t, the value of the control input u(t), depending only on the past measurements y(t') for 0 < t' < t, such that the following cost function is minimized:
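A standard form of this quadratic cost (a reconstruction; F is the terminal-state weight, and Q and R are the state and control weights that reappear in the Riccati equations below) is:

```latex
J = E\!\left[\, x^{\mathsf T}(T)\,F\,x(T)
    + \int_{0}^{T} \bigl( x^{\mathsf T}(t)\,Q(t)\,x(t) + u^{\mathsf T}(t)\,R(t)\,u(t) \bigr)\,dt \,\right]
```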

where E is the expected value. The final time T can be finite or infinite; if it tends to infinity, the first term of the cost function J can be neglected, and to keep the cost finite, J/T can be used instead of J. The LQG controller solves the problem by means of the equations described below.

The matrix L(t) is the Kalman gain of the Kalman filter represented by the first equation. For each t, the filter generates estimates of x(t) using the previous measurements and inputs. The Kalman gain L(t) is calculated from the matrices A(t) and C(t), the two intensity matrices V(t) and W(t) of the white noises v(t) and w(t), and the expected value of the state at the origin t = 0. These matrices determine the Kalman gain through a matrix Riccati differential equation with an initial condition at t = 0; its solution P(t) for 0 < t < T yields the Kalman gain. The feedback gain, in turn, is obtained from the solution S(t), for 0 < t < T, of a second matrix Riccati differential equation, integrated backward from t = T.
It is observed that the two matrix Riccati differential equations are similar; this property is called duality. The first equation solves the linear quadratic estimation (LQE) problem and the second solves the linear quadratic regulator (LQR) problem; since the problems are dual, the two equations together solve the linear quadratic Gaussian (LQG) control problem. For this reason, the LQG problem is said to be separable: it is solved by solving the LQE and LQR problems separately.
When A(t), B(t), C(t), Q(t), R(t) and the noise matrices V(t) and W(t) do not depend on t and T approaches infinity, the LQG controller becomes a time-invariant dynamical system, and the second Riccati differential equation can be replaced by the algebraic Riccati equation. If the discrete-time version of this problem is posed, the solution is reached in a way similar to that of the continuous-time analysis.
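For reference, a standard continuous-time statement of these controller equations, reconstructed to be consistent with the description above, is:

```latex
% LQG controller: Kalman filter combined with state feedback
\dot{\hat{x}}(t) = A(t)\hat{x}(t) + B(t)u(t) + L(t)\bigl(y(t) - C(t)\hat{x}(t)\bigr),
\qquad u(t) = -K(t)\,\hat{x}(t)
\\
% Kalman gain from the filter Riccati equation (forward in time)
L(t) = P(t)\,C^{\mathsf T}(t)\,W^{-1}(t), \qquad
\dot{P} = A P + P A^{\mathsf T} - P C^{\mathsf T} W^{-1} C P + V, \qquad
P(0) = E\!\left[x(0)\,x^{\mathsf T}(0)\right]
\\
% Feedback gain from the regulator Riccati equation (backward in time)
K(t) = R^{-1}(t)\,B^{\mathsf T}(t)\,S(t), \qquad
-\dot{S} = A^{\mathsf T} S + S A - S B R^{-1} B^{\mathsf T} S + Q, \qquad
S(T) = F
```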

2.4.2. LQR / G Control Systems in drones

Bouabdallah and Siegwart used the LQR algorithm on a quadcopter and compared its performance with that of a PID controller (Bouabdallah & Siegwart, 2004). The PID controller was applied using a simplified dynamic model, while the LQR controller used the full model. The results were comparable, but the LQR approach performed better, bearing in mind that it ran on a more complete model.
An LQR controller was also applied to the tracking of a simple trajectory using a full dynamic model of the quadrotor; this work by Cowling (Cowling et al., 2007) demonstrated that precise trajectory tracking could be accomplished in simulation in real time, even in the presence of wind and other disturbances.
Giraldo Suárez (Giraldo Suárez et al., 2016) used a dynamic model of a quadcopter and applied a control system that reaches a stable state in 100 ms, applying a least-squares MIMO identification algorithm to the non-linearized model. Using an LQR controller, they were able to control and identify the initial system in a shorter settling time than with a conventional PID controller, and also achieved better tracking of a fixed path.
Chovancová (Chovancová et al., 2020) developed a test bench on which three control systems applied to a quadcopter were evaluated: a PID system, an LQR controller and a Backstepping controller. They were compared on the basis of absolute position error and power consumption. The results indicated that the LQR controller followed a fixed path with the smallest absolute error and the lowest power consumption of the controlled system, while the Backstepping controller had the highest absolute errors and the highest power consumption.
LQR controllers allow better trajectory tracking without the need to determine the system state separately; in combination with other controllers, they far exceed their individual performance.
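As a minimal sketch of the LQR machinery discussed above, the scalar case can be solved in closed form; the plant numbers below are hypothetical:

```python
import math

# Scalar infinite-horizon LQR: plant dx/dt = a*x + b*u,
# cost J = integral of (q*x^2 + r*u^2) dt.  All numbers are hypothetical.

def lqr_gain(a, b, q, r):
    # Scalar algebraic Riccati equation:
    #   2*a*S - (b**2 / r) * S**2 + q = 0  ->  positive root in closed form
    s = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * s / r   # optimal state-feedback gain K, with u = -K*x

if __name__ == "__main__":
    a, b, q, r = 1.0, 1.0, 1.0, 1.0    # unstable plant (a > 0)
    k = lqr_gain(a, b, q, r)
    print(a - b * k)   # closed-loop pole; negative means stabilized
```

For the matrix case the same algebraic Riccati equation is solved numerically with a dedicated solver rather than in closed form.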

2.5. Sliding Mode Control Systems (SMC) (NON-LINEAR)

This type of control is a non-linear control algorithm.

2.5.1. Fundamentals of Slider Mode Control Systems

Sliding mode control (SMC) is a nonlinear control method that alters the dynamics of a nonlinear system by applying a discontinuous control signal (or, more precisely, a pre-set family of control signals) that forces the system to "slide" along a cross-section of its normal behavior.
The control signal follows feedback rules that are not continuous functions of time; instead, it can switch from one continuous structure to another based on the current position in state space. Sliding mode control is therefore a control method with a variable structure. These control structures are designed so that the trajectories always move toward an adjacent region with a different control structure; the final trajectory therefore does not lie within a single control structure but slides along the boundaries between them. The motion of the system as it slides along these boundaries is called the sliding mode, and the geometric locus formed by these boundaries is called the sliding (hyper)surface.
Consider a nonlinear dynamic system described by ẋ(t) = f(x, t) + B(x, t) u(x, t), where x is the state vector of dimension n and u is the input vector of dimension m used as feedback. The functions f and B are assumed continuous and sufficiently differentiable to guarantee that the solution x(t) exists and is unique.
The design of the control rules must guarantee the stability of the equation at the origin, so that whenever the system is perturbed it quickly returns to it.
The design of a sliding mode controller involves:
1. Selecting a hypersurface (the sliding surface) so that the trajectory of the system exhibits the desired behavior when confined to it.
2. Finding feedback gains so that the trajectory of the system reaches and remains on the hypersurface.
Because the SMC rules are not continuous, they can bring the system onto these trajectories in finite time; that is, the stability of the sliding surface is stronger than asymptotic. However, once the trajectories reach the sliding surface, the system takes on the character of the sliding mode and, for example, has only asymptotic stability toward the origin on this surface.
Therefore, when designing an SMC control system, a real-valued function σ(x) must be chosen that represents how "far" x is from the sliding surface: if σ(x) = 0, the state is on the sliding surface, and if σ(x) ≠ 0, the state is off it.
The set of control rules switches from one structure to another depending on the value of σ, so as to drive the state toward σ(x) = 0. The trajectories x(t) approach the sliding surface in finite time (because the control rules are not continuous). The sliding surface has dimension n − m (n being the number of states x and m the number of input signals u). Thus, the set of SMC control rules must ensure that the surface σ(x) = 0 exists, that it can be reached by the trajectories of the system from any initial condition, and that once on it, the state can be kept there. Figure 7 shows the scheme of this type of control.
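A minimal sketch of the mechanism described above, for a hypothetical double integrator with a bounded disturbance (all numbers are assumptions):

```python
import math

# Sliding mode control of a hypothetical double integrator
#   x1' = x2,  x2' = u + d(t)
# with unknown bounded disturbance d.  Sliding variable s = c*x1 + x2;
# the discontinuous law u = -c*x2 - k*sign(s) gives s' = -k*sign(s) + d,
# so s reaches 0 in finite time whenever k exceeds the disturbance bound.

def simulate_smc(c=1.0, k=2.0, dt=1e-3, t_end=10.0):
    x1, x2 = 1.0, 0.0
    t = 0.0
    while t < t_end:
        d = 0.5 * math.sin(5.0 * t)            # bounded disturbance, |d| <= 0.5 < k
        s = c * x1 + x2                        # sliding variable
        u = -c * x2 - k * (1 if s > 0 else -1 if s < 0 else 0)
        x1 += x2 * dt                          # Euler integration
        x2 += (u + d) * dt
        t += dt
    return x1, x2

if __name__ == "__main__":
    print(simulate_smc())
```

The sign function makes the control discontinuous across s = 0, which is exactly the variable-structure switching described above; in practice the sign is often smoothed (e.g. with a saturation function) to reduce chattering.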

Figure 7
2.5.2. Use of SMC Control Systems in Quadcopters

The main advantage of SMC is that, by not simplifying the quadcopter dynamics through linearization, good trajectory tracking is obtained. Xu and Ozguner (Xu and Ozguner, 2006) applied SMC to the stabilization of under-actuated cascade systems; the quadcopter control system was divided into fully actuated and under-actuated subsystems.
Runcharoon and Srichatrapimuk studied an SMC based on Lyapunov stability theory. By applying an SMC controller, they were able to steadily steer the quadcopter to the desired position with minimal pitch. The tests were repeated while injecting noise, demonstrating the good robustness of the SMC controller. Vieira (Vieira et al., 2020) developed a new type of vector backstepping SMC (BSMC) for positioning and trajectory tracking of a quadcopter, together with a unified theoretical framework for the design and analysis of vector BSMC. The simulation results for the positioning and tracking of the quadcopter showed good performance; in addition, they showed that this type of control could also be applied to any autonomous airship.
The studies analyzed indicate that SMC has the advantage of not simplifying the dynamics of the quadcopter through linearization, and it obtains good tracking of different trajectories.

2.6. Control systems Backstepping (Integrator) (NON-LINEAR)

This control technique divides the system to be controlled and progressively stabilizes each of the resulting subsystems by means of a recursive algorithm. Its main advantage is that the algorithm converges quickly, requiring fewer computational resources, and can handle many types of disturbances well.

2.6.1. Backstepping Control Systems Principles

The Backstepping approach provides a recursive method to stabilize the origin of a system using feedback alone. Consider a system of the form:
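The standard strict-feedback form assumed by this method (a reconstruction consistent with the symbols used below) is:

```latex
\dot{x} = f_x(x) + g_x(x)\,z_1
\\
\dot{z}_1 = f_1(x, z_1) + g_1(x, z_1)\,z_2
\\
\;\vdots
\\
\dot{z}_{k-1} = f_{k-1}(x, z_1, \ldots, z_{k-1}) + g_{k-1}(x, z_1, \ldots, z_{k-1})\,z_k
\\
\dot{z}_k = f_k(x, z_1, \ldots, z_k) + g_k(x, z_1, \ldots, z_k)\,u
```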

where x is the state, z1, …, zk are scalars, u is the scalar input, f1, …, fk vanish at the origin (that is, fi(0, 0, …, 0) = 0), and g1, …, gk are nonzero over the domain of interest. It is also assumed that the subsystem x is stable at its origin (x = 0) for some control input ux(x) with ux(0) = 0; that is, the x subsystem is stabilized in some way, and backstepping extends this stability to the surrounding z states. In strict-feedback backstepping around a stable subsystem x, the control input u exerts its stabilizing effect on zk; the state zk acts as a stabilizer for the previous state zk−1, and this process continues until each state zi is stabilized by the state zi+1. The backstepping approach stabilizes the subsystem x using z1, then drives z2 toward z1 while keeping x stable, and the procedure continues until the control input u is reached. Figure 8 shows a diagram of this type of control.
Figure 8

2.6.2. Use of Backstepping Control Systems (BSC) in drones

The main problem of this approach is its poor robustness. Madani et al. (Madani et al., 2006) applied this type of control system to stabilize a quadcopter divided into fully actuated, under-actuated and propeller subsystems. Good tracking of the yaw angle and of position was achieved in the study, and Lyapunov theory helped stabilize the pitch and roll angles.
To reduce the problems produced by external disturbances, an integrator can be added to the Backstepping algorithm, turning it into an integrator BSC (Fang and Gao, 2011). This integral approach has been shown to eliminate steady-state errors, reduce response time, and restrict overshoot of the control parameters.
Benaddy (Benaddy et al., 2020) obtained a model that uses the Newton-Euler method to capture the dynamics of a quadcopter, designed nonlinear controllers based on Lyapunov theory, and compared three nonlinear control synthesis strategies: feedback linearization, Backstepping and SMC. The results show that the Backstepping control improves the behavior against changes in the parameters of the controlled system.
This control method responds well to changes in navigation conditions, but this ability comes at the cost of robustness; combining it with other methods increases robustness and improves response time.
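A minimal backstepping sketch on a hypothetical two-state chain x' = z, z' = u illustrates the recursion described above (the gains are assumptions):

```python
# Integrator backstepping on the hypothetical two-state chain
#   x' = z,  z' = u
# Step 1: the virtual control alpha(x) = -k1*x would stabilize x if z == alpha.
# Step 2: with the error e = z - alpha = z + k1*x, choose u so both x and e decay:
#   u = alpha' - x - k2*e,  where alpha' = -k1*z.
# Lyapunov function V = (x^2 + e^2)/2 then gives V' = -k1*x^2 - k2*e^2.

def simulate_backstepping(k1=1.0, k2=2.0, dt=1e-3, t_end=15.0):
    x, z = 1.0, -0.5
    t = 0.0
    while t < t_end:
        e = z + k1 * x                  # deviation from the virtual control
        u = -k1 * z - x - k2 * e        # backstepping law
        x += z * dt                     # Euler integration
        z += u * dt
        t += dt
    return x, z

if __name__ == "__main__":
    print(simulate_backstepping())
```

The recursion is exactly the one described in the text: the virtual control stabilizes the inner state, and the actual input stabilizes the error with respect to that virtual control.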

2.7. Adaptive algorithm control systems (NON-LINEAR)


Adaptive control algorithms aim to adapt to changes in the parameters of a system; these parameters may be uncertain or time-varying.

2.7.1. Fundamentals of Adaptive Algorithm Control Systems


Adaptive control differs from robust control in that it does not need a priori information on the bounds of the parameter variations or on their variation over time. Robust control ensures that if parameter changes remain within certain limits, the control rules need not be changed, whereas adaptive control is concerned with control rules that change, and with how they can change themselves. The basis of adaptive control is the on-line estimation of these parameters, which forms part of the control system.
Common estimation methods include recursive least squares and gradient descent. Both methods provide update laws that are used to modify the estimates in real time (that is, while the system is running). Lyapunov stability is used to derive these update laws and to establish convergence criteria (typically persistent excitation; the relaxation of this condition is studied in concurrent-learning adaptive control). Projection and normalization are commonly used to improve the robustness of the estimation algorithms.
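A minimal sketch of the gradient-descent estimation mentioned above; the scalar relation and all constants are hypothetical:

```python
import random

# Gradient-descent estimation of an unknown scalar gain theta_true in the
# static relation y = theta_true * u + noise.  The update
#   theta += gamma * u * (y - theta*u)
# descends the instantaneous squared prediction error, as in the update
# laws discussed above.  All values are hypothetical.

def estimate_gain(theta_true=2.5, gamma=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    theta = 0.0                          # initial estimate
    for _ in range(steps):
        u = rng.uniform(-1.0, 1.0)       # persistently exciting input
        y = theta_true * u + rng.gauss(0.0, 0.01)
        error = y - theta * u            # prediction error
        theta += gamma * u * error       # gradient update law
    return theta

if __name__ == "__main__":
    print(estimate_gain())
```

Note that if the input u were held at zero (no persistent excitation), the estimate would never move, which is why the excitation condition appears in the convergence criteria.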
In general, it is convenient to distinguish between feedforward adaptive control and feedback adaptive control, the distinction referring to the location of the estimator. A distinction must also be made between direct, indirect and hybrid adaptive methods. Direct methods are those in which the estimated parameters are used directly in the controllers. In contrast, indirect methods are those in which the estimated parameters are used to calculate the required controller parameters. Hybrid methods rely on both parameter estimation and direct modification of the control rules.
There are several classifications of adaptive control systems (they may vary according to different authors), the following
general classification can be used as a guide:
• Dual adaptive controllers: based on dual control theory
◦ Optimal dual controllers: difficult to design
◦ Suboptimal dual controllers
• Non-dual adaptive controllers
◦ Adaptive Pole Placement
◦ Extremum-seeking controllers
◦ Iterative learning control
◦ Model Reference Adaptive Controllers (MRACs): incorporate a reference model that defines the desired closed-loop
performance.
▪ Gradient optimization MRAC
▪ MRAC with optimized stability
◦ Adaptive controllers
◦ Model Identification Adaptive Controllers (MIACs): perform identification while the system is running
▪ Cautious adaptive controllers: use the current system identification (SI) to modify the control law, allowing for SI uncertainty
▪ Certainty-equivalence adaptive controllers: take the current SI to be the true system and assume no uncertainty
• Non-parametric adaptive controllers
• Adaptive parametric controllers
◦ Explicit-parameter adaptive controllers
◦ Implicit-parameter adaptive controllers
◦ Multiple models: uses a large number of models distributed over the uncertainty region, based on the responses of
the controlled system and of the models; at each moment, the model that is most suitable according to some metric
is chosen.
A diagram of a system with direct adaptive control is shown in Figure 9.

Figure 9

2.7.2. Use of Adaptive algorithm control systems in quadcopters

Diao et al. implemented a continuous adaptive controller that showed good performance (Diao et al., 2011).
Palunko and Fierro implemented an adaptive control system that used FBL (feedback linearization) for quadcopters with dynamic variations of the center of gravity (Palunko and Fierro, 2011).
De Monte and Lohmann (De Monte and Lohmann, 2013) used an adaptive control technique based on the L1 norm, with a trade-off between control performance and robustness. The modified (linearized) model was able to compensate for constant and moderate wind gusts.
Recently, Sánchez-Fontes (Sánchez-Fontes et al., 2020) performed numerical simulations to analyze the behavior of a model based on a steerable rotor for the execution of different flight modes under adaptive control; the system demonstrated great stability by construction.

2.8. Control systems using Robust Control (RC) algorithms (NON-LINEAR)

RC algorithms are designed to handle uncertainty in system parameters or disturbances, ensuring good performance within acceptable ranges of disturbances or unmodeled system parameters. A major limitation found is poor tracking capability.

2.8.1. Fundamentals of Control Systems with Robust Control algorithms

Robust control is an approach to controller design that explicitly deals with uncertainty: controllers are designed to work
correctly for any member of a set of uncertain parameters or disturbances. Robust methods aim to achieve robust
performance and/or stability in the presence of bounded model errors.
The early methods of Bode and others were quite robust, but in the 1960s and 1970s more extensive tests showed that they
lacked robustness to parameter variation, prompting research to improve them. This was the beginning of robust control
theory, which took shape in the 1980s and 1990s and continues today. In contrast to adaptive control, robust control is
static: instead of adapting to measured variations, the controller is designed to work under the assumption that certain
variables will be unknown but bounded.
A controller designed for a particular set of parameters is said informally to be robust if it also performs well under a
different set of parameters. High-gain feedback is a simple example of a robust control method; with a sufficiently high
gain, the effect of any parameter variation will be negligible. From the perspective of the closed-loop transfer
function, a high open-loop gain leads to substantial rejection of disturbances in the face of system parameter
uncertainty. The main obstacle to achieving high gains, however, is the need to maintain loop stability; a loop
configuration that allows stable high-gain operation can be a technical challenge.
Robust control systems often incorporate advanced topologies that include multiple feedback loops and direct loops. The
control rules would thus be represented by high-order transfer functions required to simultaneously achieve the desired
disturbance rejection performance with robust closed-loop operation.
One of the most important examples of a robust control technique is H-infinity loop-shaping, developed by Duncan
McFarlane and Keith Glover of the University of Cambridge; this method minimizes the sensitivity of a system over its
frequency spectrum, which ensures that the system will not deviate far from the expected trajectory when disturbances
enter the system.
Another form of robust control is sliding mode control, a variation of variable structure control. While robust
control has traditionally been treated with deterministic approaches, in recent decades this approach has been criticized for
being too rigid to describe actual uncertainty. Robust probabilistic control has been introduced as an alternative, which
interprets robust control within the so-called scenario optimization theory. Another example is loop transfer recovery (LQG
/ LTR), which was developed to overcome robustness problems of linear-quadratic-Gaussian (LQG) control. Other robust
techniques include quantitative feedback theory (QFT), passivity-based control, Lyapunov-based control, etc.
When the behavior of the system varies considerably from its normal operation, multiple control rules may have to be
devised. Each rule addresses a specific behavior mode of the system.
One of the challenges is to design a control system that takes into account the different operating modes of a system and
allows a smooth transition from one mode to the next as quickly as possible. Such a state machine driven composite control
system is an extension of the gain scheduling idea where the entire control strategy changes based on changes in the
behavior of the system.
A diagram of a Robust Control system is shown in Figure 10.
Figure 10
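The sliding mode idea mentioned above can be sketched concretely: a switching control term whose gain dominates the disturbance bound keeps the state on a sliding surface despite an unmodeled, bounded disturbance. The double-integrator plant and all gains here are illustrative assumptions.

```python
import math

# Sliding-mode control sketch for a double integrator x'' = u + d(t),
# with an unknown but bounded disturbance |d| <= D. Values are assumptions.
def simulate_smc(lam=2.0, D=1.0, eta=0.5, dt=1e-4, T=10.0):
    x, v = 1.0, 0.0                          # start away from the origin
    t = 0.0
    while t < T:
        d = D * math.sin(3.0 * t)            # disturbance, |d| <= D
        s = v + lam * x                      # sliding surface s = xdot + lam*x
        # control: cancel lam*xdot, then dominate the disturbance bound
        u = -lam * v - (D + eta) * (1.0 if s > 0 else -1.0)
        v += (u + d) * dt
        x += v * dt
        t += dt
    return abs(x)

final_err = simulate_smc()   # near zero despite the persistent disturbance
```

The switching gain D + eta guarantees s·ds/dt ≤ −eta·|s|, so the surface is reached in finite time; the price is the high-frequency switching (chattering) discussed later in this review.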

2.8.2. Use of Robust Control Systems in quadcopters

Bai et al. implemented a robust controller for an altitude control subsystem based on linear control with robust
compensation (Bai et al., 2012).
Tony and Mackunisy proposed a robust tracking algorithm that achieves asymptotic stability under unknown parametric
uncertainties and non-linear disturbances (Tony and Mackunisy, 2012). This was achieved with few control rules, without
function approximators or online adaptive update rules.
Elhennawy and Habib (Elhennawy and Habib, 2018) designed a robust second-order sliding mode controller (2-SMC). The
controller is divided into two sub-controllers, one for the actuated and one for the underactuated degrees of freedom. The
stability of the proposed controller was proven using the Lyapunov criterion, and several simulations were carried out to
validate the proposed control scheme. The experimental results show the robustness of the proposed SMC against external disturbances.
Jasim and Veres (Jasim and Veres, 2020) noted that drones often need to fly close to buildings and over people in adverse
wind conditions, and therefore require high maneuverability, precision, and rapid response capabilities to ensure safety.
That is why they presented a new and robust non-linear multirotor controller based on essential modifications of the
standard dynamic inversion control, which makes it insensitive to changes in payload and also to large gusts of wind. The
conformation of the system consisted of a robust attitude controller and a lateral and vertical position control in an outer
loop. The authors provided theoretical evidence of robust performance, using a simulation with a realistic non-linear
dynamic model of an aircraft that includes rotor dynamics and its speed limitations to show robustness. Lyapunov stability
methods were used to demonstrate the stability of the robust control system.
2.9. Control systems using Optimal Control algorithms - OCA (LINEAR and NON-LINEAR)

OCAs minimize a cost function over a set of admissible options. A special case is convex optimization, a mathematical
optimization technique that minimizes a convex function over a convex set of variables. Many algorithms address the
Linear Quadratic Gaussian (LQG) problem, which combines a Kalman filter, that is, a Linear Quadratic Estimator (LQE),
with a Linear Quadratic Regulator (LQR). A major limitation of many optimization algorithms is, in general, their poor
robustness.

2.9.1. Fundamentals of Control Systems using Optimal Control algorithms

The relevant behavior of many systems can be quantified by n variables. Identifying these variables and describing the
system in terms of them is the hardest part of building a mathematical model of the system. Once a model has been built,
the behavior of the system is described by the values of the state vector x, whose components xi, i = 1, ..., n, are the state
variables. The system evolves with time t, so the state variables are functions of t, governed by n first-order differential
equations of the general form:

ẋ = f (t, x, u)
The equation of state depends on the state of the system, the time, and the control variables u, which form an m-
dimensional control vector; the minimum and maximum values of each control that can be used may be restricted. In that
case, if αi ≤ ui ≤ βi, we can make a linear change of variable to replace ui with another variable vi, defined by

ui = (βi + αi) / 2 + ((βi − αi) / 2) vi

so that −1 ≤ vi ≤ 1, whence the realizable values of u must meet this condition.
The evolution of the system begins at t = t0, where x (t0) = x0, towards x (t1) = x1 at t = t1; this final time can be fixed
or free. The final state of the system may be known, or it may be required to lie on a certain curve or surface. In general,
the condition on the terminal state x1 is that it belongs to a certain target set F. The final ingredient of this general
formulation of an optimal control problem is the cost, whose most general form is:

J (x, u) = φ (t1, x1) + ∫ f0 (t, x, u) dt,  integrated from t0 to t1
The cost has two parts. The first, the terminal cost φ, is a penalty on the final state of the system (it applies only when
the final state is not predetermined). The second part depends on the state and control trajectories over time and, most
importantly, on the control values used in the solution. An important special case is f0 ≡ 1 and φ ≡ 0, so there is no
terminal cost; the cost J (x, u) then equals the time required for the system to go from the initial state to the final one.
We then have an optimal-time problem, which must of course be a free terminal-time problem.

The system can evolve from its initial position to its final position by applying the controls of the set U, while the evolution
of the state is governed by ẋ = f (t, x, u). If there are no appropriate controls, we will say that the system is not controllable
from the initial state to the target set F, and the problem has no solution. The optimal control problem consists of
determining the minimum value of J (x, u) or optimal cost and the value of the control vector u, for which this minimum
cost is obtained; in this case u is the optimal control. We can also determine the solution corresponding to the equations of
state, which give the optimal trajectory, as well as the final value of time, the optimal time, and the final state, if these have
not been predetermined.
Along with the classification of optimal control problems into continuous and discrete, we can also classify them
according to whether the time variable t appears explicitly in the equations of state. We thus have autonomous problems,
where the equation of state does not explicitly depend on time, ẋ = f (x, u), and non-autonomous problems, where t is
present: ẋ = f (t, x, u). On the other hand, the set U of admissible controls can be unbounded, bounded, or bang-bang,
depending on whether the values of U are unbounded, bounded, or vary only between discrete values (for example 0 or 1).
An example of the diagram of a system with an optimal controller is shown in Figure 11.

Figure 11
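The LQR part of the LQG problem mentioned above admits a very small sketch in the scalar, discrete-time case: iterating the Riccati recursion to a fixed point yields the optimal state-feedback gain, which stabilizes even an unstable plant. The system and cost weights below are illustrative assumptions.

```python
# Scalar discrete-time LQR sketch: x_{k+1} = a x_k + b u_k with cost
# sum(q x^2 + r u^2). System and weights are illustrative assumptions.
def lqr_scalar(a=1.1, b=0.5, q=1.0, r=1.0, iters=500):
    P = q
    for _ in range(iters):
        K = a * b * P / (r + b * b * P)     # candidate optimal gain
        P = q + a * a * P - a * b * P * K   # discrete Riccati recursion
    return K

K = lqr_scalar()
a, b = 1.1, 0.5
x = 1.0
for _ in range(50):
    x = (a - b * K) * x    # closed loop is stable even though a > 1
# x decays close to zero under the optimal feedback u = -K x
```

For vector-valued states the same recursion holds with matrices; in practice one would use a dedicated Riccati solver rather than plain fixed-point iteration.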

2.9.2. Use of control through Optimal Control algorithms in quadcopters

Satici et al. applied this technique to a relatively robust controller for altitude and heading tracking (Satici et al., 2013).
The controller's performance was confirmed experimentally and was efficient in minimizing errors and rejecting
certain disturbances. Raffo et al. realized an optimal H-infinity integral predictive controller for stabilizing the
rotational motion of a quadcopter and for tracking the trajectory, using model predictive control (Raffo et al., 2010). The
controller consistently rejected the disturbances and exhibited robust behavior; the integral action was key to achieving
good tracking.
Falkenberg et al. applied an H-infinity loop-shaping optimization algorithm to a quadcopter for iterative parameter
identification and altitude control (Falkenberg et al., 2012). The algorithm performed well with regard to disturbance
rejection, also under saturation conditions. Nevertheless, the wind-thrust compensator, based on the Riccati equations,
did not prove computationally efficient.
Pawłowski and Konatowski (Pawłowski and Konatowski, 2020) used the Particle Swarm Optimization (PSO) algorithm for
linear control of a BSP drone, to allow precise tracking of a predefined trajectory. The PSO algorithm was implemented
with a quadratic cost function; an unstable behavior of the algorithm was observed, so a coefficient was introduced that
suppresses the movement of particles in the search space, ensuring the convergence of the solution. In the simulation
research on tracking, the previously generated trajectory, speed, and heading from the linear dynamic BSP model were used.
The results of these tests showed that, by selecting appropriate parameters, the algorithm allowed repeatability of the
results, as well as adequate mapping of the trajectory while operating in optimal time.
Qu and others (Qu et al., 2020) formulated the problem of planning a route for an unmanned aerial vehicle to obtain an
optimal route in a domain that is difficult to model. To do this they used a novel hybrid algorithm called HSGWO-MSOS
that combines a simplified grey wolf optimizer (SGWO) with modified symbiotic organisms search (MSOS), efficiently
combining exploration and exploitation capabilities. The SGWO component accelerated the convergence rate while
retaining exploration capacity, and the MSOS component improved exploitation. The simulation results showed that the
HSGWO-MSOS algorithm can obtain a feasible and efficient route, with performance superior to the GWO, SOS and SA
algorithms.

2.10. Control systems using feedback linearization algorithms (FL) (Non-linear)

FL control algorithms convert a non-linear system into an equivalent linear one through a change of variables. Limitations
of this method are the loss of precision when linearizing and the need for an exact model of the system for its
implementation.

2.10.1 Fundamentals of Control Systems using FL algorithms


Feedback linearization is an approach used to control non-linear systems. It proposes a transformation of the
nonlinear system into an equivalent linear system through a change of variables and an appropriate control input. It applies
to nonlinear systems of the form:

ẋ = f (x) + g (x) u,  y = h (x)

where x is a vector of real components, as is the vector of inputs u. The goal is to obtain a control input of the form:
u=a(x) + b(x)v
which generates a linear input-output map between the new input v and the output y, resulting in a linear control system that
can be controlled by an external control loop. In the case of FL for a single-input single-output (SISO) system, results are
obtained that can be extended to multiple-input multiple-output (MIMO) systems. If both u and y are real, the objective is to find
a coordinate transformation z = T (x) that transforms the system equation through the feedback already
indicated. This leads to a new linear map between each input v and each output y. To guarantee that the transformed
system is an equivalent representation of the original, the transformation must be not only bijective
(invertible), but infinitely differentiable (admitting derivatives of any order) at the origin of coordinates. Assuming
that the system has relative degree n:
It follows from the equations that the input u does not appear in the first n − 1 derivatives of the output. The coordinate
transformation T (x) that takes the system to normal form stacks the output and its first n − 1 derivatives:

z = T (x) = (y, ẏ, ÿ, ..., y^(n−1))

This transformation from the original x coordinate system to the z system must satisfy the diffeomorphism condition,
leading to a single z coordinate system described by a chain of integrators:

ż1 = z2, ż2 = z3, ..., żn = α (x) + β (x) u

where α (x) and β (x) collect the terms appearing in the n-th derivative of the output. The control rule

u = (v − α (x)) / β (x)

then produces an input-output map that is linear for z1 = y, leading to the linearization żn = v.
The implementation requires a cascade of n integrators and an outer control loop v, with value:

v = −Kz

where z is the state vector formed by the output and its first n − 1 derivatives, so the closed loop becomes the linear
system:

ż = Az

with A determined by the chain of integrators and the gain vector K. Choosing K appropriately fixes the poles of the
linearized closed-loop system. A diagram of these systems is presented in Figure 12:
Figure 12
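The cancel-then-stabilize structure can be sketched on a pendulum-like system: the inner law cancels the known nonlinearity exactly, and the outer loop v = −Kz places the poles of the resulting chain of integrators. The pendulum model and the gains below are illustrative assumptions.

```python
import math

# Feedback-linearization sketch: pendulum th'' = -(g/l) sin(th) + u.
# The control cancels the nonlinearity; v = -k1*th - k2*om is the outer loop.
def simulate_fl(g_over_l=9.81, k1=4.0, k2=4.0, dt=1e-3, T=10.0):
    th, om = 2.0, 0.0                        # large initial angle (rad)
    t = 0.0
    while t < T:
        v = -k1 * th - k2 * om               # outer linear loop v = -K z
        u = g_over_l * math.sin(th) + v      # cancel the known nonlinearity
        om += (-g_over_l * math.sin(th) + u) * dt
        th += om * dt
        t += dt
    return abs(th)

residual = simulate_fl()
# closed loop behaves exactly as th'' = -4 th - 4 om (poles at s = -2, -2)
```

Note the fragility discussed in the text: if sin(th) in the controller did not match the plant model exactly, the cancellation would be imperfect and the residual nonlinearity would act as an unrejected disturbance.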

2.10.2 Use of Control Systems using FL algorithms in quadcopters

In a work by Palunko and Fierro (Palunko and Fierro, 2011), output feedback linearization is used as an adaptive control
approach for stabilizing and tracking the trajectory of a quadcopter with dynamic changes in its center of gravity. The
device stabilized and could be reconfigured in real time whenever the center of gravity varied.
Roza and Maggiore (Roza and Maggiore, 2012) implemented FL and dynamic input inversion to create a path-following
controller that lets the designer specify the speed profile and yaw angle as functions of displacement along the path. In
addition, two simulation cases were considered, with the quadcopter flying at different velocities along a trajectory,
showing convergence of speed and yaw angle.
Lee et al. (Lee et al., 2009) compared FL with adaptive sliding mode control and found that the FL controller with
simplified dynamics was very susceptible to sensor noise and not very robust. The SMC performed well in noisy conditions
and could estimate uncertainties such as ground effect. Non-linear FL control therefore offers good tracking but reduced
disturbance rejection; combined with another algorithm of lower noise sensitivity, however, FL provides good
performance.
Nguyen and Hong (Nguyen and Hong, 2018) presented a new FL approach to schedule the proportional-derivative gains (PD-
GS) of a quadcopter's position control in high-wind flight conditions. The advantage of the method was to provide
smooth, easy-to-implement tracking for real-time applications and rejection of the influence of the wind for position
maintenance. A mathematical model of an unmanned quadcopter in an inertial frame was used to design the controller. The
response between the proposed PD-GS controller and the conventional one was compared through a simulation and the
results obtained showed the efficacy of the proposed approach. This same problem was also addressed by Craig and others
(Graig et al., 2020) seeking greater stability and performance in high wind flight for a quadcopter equipped with flow
probes that measure wind speed. Wind speed measurements were used for attitude and position control through the use of
models based on the estimation of aerodynamic forces and moments. An FL controller with inputs based on the
aerodynamic models was used. The controller also provides for thrust saturation using a variable gain algorithm. The
simulation and experimental results showed the benefits of the FL of the flow and of the variable gain for the control of the
attitude and position.

2.11. Control by Fuzzy Logic Control (FLC) systems (Non-linear)

AI systems apply various methods, some of them biologically inspired, to control a system. Examples include fuzzy
logic, neural networks, machine learning, and genetic algorithms, which generally involve significant uncertainty and
mathematical difficulty. A fuzzy control system is a control system based on fuzzy logic, a mathematical system that
analyzes analog input values in terms of logic variables taking continuous values between 0 and 1, in contrast to classical
or digital logic, which uses the discrete values 0 or 1. This complexity and the amount of computational resources required
are limitations for its use. While AI control is not limited to fuzzy logic and neural networks, these are the two most widely
used.

2.11.1. Fundamentals of Fuzzy Logic Control


The FLC is a type of control, usually of feedback type, that is based on rules. It aims to improve the characteristics
of "classical" control, for example by incorporating knowledge that cannot be described in the analytical model on which the
design of the control algorithm is based, and that in "classical" control is usually left to manual modes of operation or
other limit or safety mechanisms. FLC applications can be divided into two classes: those in which the FLC is a
supervisory control, that is, it complements the conventional feedback control, and those in which the FLC replaces the
conventional control.
The FLC works by applying a set of rules combined using fuzzy logic. A rule is activated ("triggered") if the
conditions described in its premises are met. The evaluation of those conditions is carried out in a fuzzy way, taking
into account the inherent uncertainty of the available knowledge. The input variables are interpreted as linguistic variables.
It is not unusual for more than one rule to be triggered by the same combination of input variables; in this case, the
inference engine of an FLC acts as a parallel processor: all the rules whose premises have some degree of truth
are triggered and contribute to the output fuzzy set. Applying Mamdani's inference, the result produced by each of the
rules is combined to give the result of the set, which is the union of the outputs of each of the triggered rules.
The consequents of all the triggered rules, valued in the range [−1, 1], are combined locally by a logical OR. A logical
OR is a T-conorm, for example the pointwise maximum function; although any T-conorm could be used for this, the max
function is the most common in real-time applications. The expression for the fuzzy set of the output variable
given by Mamdani's inference is then:

μout (y) = max k [ min (αk, μk (y)) ]

where αk is the firing degree of rule k and μk is the membership function of its consequent.
There are several methods to build the defuzzification interface of an FLC, such as center of gravity, mean of maxima, or
weighted average. The center of gravity method is the most widely used; it obtains the abscissa of the centroid of the area
under the function that represents the combined output fuzzy set. The mean of maxima considers only the points with the
maximum membership value within the whole set. The weighted-average method considers the value (equivalent to the
degree of certainty) obtained by each of the individually triggered rules: the center of gravity of each rule consequent,
which is known beforehand (typically trapezoids or triangles), is weighted by its height, and a weighted average of all the
consequents represented in the output set is obtained. Figure 13
shows the differences between the "classical" control process and the FLC.

Figure 13
An FLC applied to quadcopter control is shown in Figure 14.
Figure 14
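The triggered-rules, max-combination, and center-of-gravity steps described above can be sketched with triangular membership functions and a one-input rule base. All membership functions and rules here are illustrative assumptions.

```python
# Minimal Mamdani fuzzy controller sketch (one input, three rules),
# using max-min inference and center-of-gravity defuzzification.
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    # Rule premises: error is Negative / Zero / Positive
    alphas = [tri(error, -2, -1, 0), tri(error, -1, 0, 1), tri(error, 0, 1, 2)]
    # Rule consequents over the output universe (assumed triangles)
    consequents = [(-2, -1, 0), (-1, 0, 1), (0, 1, 2)]
    # Clip each consequent with min, aggregate with max (Mamdani),
    # then take the centroid of the aggregated fuzzy set.
    num = den = 0.0
    for i in range(-200, 201):
        y = i / 100.0
        mu = max(min(a, tri(y, *c)) for a, c in zip(alphas, consequents))
        num += mu * y
        den += mu
    return num / den if den else 0.0

u = fuzzy_control(0.5)   # a positive error yields a positive correction
```

By symmetry of this rule base, a zero error produces a zero correction and opposite errors produce opposite corrections, which is the qualitative behavior one expects from a PD-like fuzzy law.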

2.11.2. Using Fuzzy Logic Control in Quadcopters

Fuzzy logic algorithms deal with many-valued logic, not with discrete levels of truth. Santos and others (Santos et al., 2010)
applied an intelligent fuzzy controller to control the position and orientation of a quadcopter, obtaining a good response
in simulations. However, an important limitation of the work was the use of a trial-and-error method to tune the input
variables. Fu et al. (Fu et al., 2016) proposed exploring the potential of non-singleton fuzzy logic controllers (NSFLCs)
for the control of a quadcopter under various input-noise conditions, with different defuzzification
levels, and a comparator. The study was carried out using three controllers of the aforementioned type. Simulation-based
experiments showed that the control performance of the NSFLC is better than that of the singleton FLC, especially under noisy conditions.
Mesdaghi and Moghadam (Mesdaghi and Moghadam, 2019) addressed the complexity of the manual control of a
quadcopter, where the user must observe the vehicle in addition to controlling it. They used a fuzzy logic
algorithm to derive the main and subsidiary control movements from voluntary physical movements of the human head
or wrist. The user can thus control the robot without looking at the control board, using only head or wrist
movements. The simulation results showed that using a fuzzy algorithm to determine the bending
scale at different angles can decrease human errors and processor load. The fuzzy logic algorithm can optimally
track the voluntary physical movements of the user, and the output noise of the system can be adjusted using the
involuntary movements of the user.
Kemik et al. (Kemik et al., 2020) used adaptive neuro-fuzzy controllers for tracking control of the trajectory of an
unmanned aerial vehicle. The proposed controller update rules were derived using the sliding mode control theory, a sliding
surface was generated using neuro-fuzzy controller parameters to achieve zero error in steady state. To evaluate the
effectiveness of the proposed control scheme, a Parrot AR.Drone 2.0 drone was used, in which PID controllers were also
installed to compare the performance of the proposed controller. Based on different trajectories, the experimental results
obtained showed good performance against initial positioning errors other than zero.
Ferdaus et al. (Ferdaus et al., 2020) analyzed the state of the art of fuzzy systems, from basic to evolving fuzzy
systems, and their application to rotary-wing unmanned aerial vehicles (RUAVs) for vertical takeoff and landing,
hovering, and quick maneuvers, to contribute to the development of fully autonomous control models. They
discussed existing limitations and possible opportunities of fuzzy control in quadcopters, its
impediments, and possible solutions using fuzzy controllers recently applied in other fields.

2.12. Control by Neural Network systems (Nonlinear)

A neural controller performs adaptive control, learning the dynamics of the controlled system. The control takes the form
of a multilayer nonlinear network whose adaptive parameters are the connection weights between neurons. The
network learns to control the system by interacting with it, without needing a priori knowledge of its characteristics.

2.12.1. Fundamentals of Control through Neural Networks (NN) systems

Artificial neural networks have characteristics similar to those of the human brain. For example, they are able to learn from
experience, to generalize from previous cases to new cases, and to abstract essential characteristics from inputs that include
irrelevant information. This means that they offer numerous advantages and that this type of technology is being
applied in multiple areas. Advantages include: adaptive learning (the ability to learn tasks from training); self-
organization (the ability to create their own representation of information through learning); fault tolerance (certain
capabilities of an NN can persist after considerable damage); real-time operation (NN computations can be carried
out in parallel using special hardware); and easy insertion into existing technology (specialized NN chips
improve the capabilities of certain tasks, facilitating their integration into existing systems).
A schematic of an NN is shown in Figure 15:
Figure 15

The input function of the NN can be described as follows:

inputi = (ini1 • wi1) ∗ (ini2 • wi2) ∗ ... ∗ (inin • win)

where ∗ represents the appropriate combining operator (for example: maximum, sum, product, etc.), n is the number of
inputs to neuron Ni, and wi are the weights.

The activation function calculates the activity state of a neuron, transforming the global input (minus the threshold Θi)
into an activation value (state) whose range normally goes from 0 to 1 or from −1 to 1, since a neuron can
be totally inactive (0 or −1) or fully active (1). The most commonly used activation functions are:

Linear function: f (x) = x

Sigmoid function: f (x) = 1 / (1 + e^(−x)), with range (0, 1)

Hyperbolic tangent function: f (x) = tanh (x) = (e^x − e^(−x)) / (e^x + e^(−x)), with range (−1, 1)
Output function:

Its value is the output of neuron i (out i); hence, the output function determines what value is passed to the linked neurons.
If the activation function is below a certain threshold, no output is passed to the subsequent neuron. Normally, not just any
value is allowed as an input for a neuron, therefore, the output values are in the range [0, 1] or [-1, 1]. They can also be
binary {0, 1} or {-1, 1}

Two of the most common output functions are:

None: the output is the same as the input. It is also called the identity function.

Binary: the output is 1 if the activation exceeds a given threshold, and 0 (or −1) otherwise.
An NN applied to the control of a controlled system is shown in Figure 16.

Figure 16
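The input, activation, and output functions above can be combined into a single-neuron sketch. The weights, inputs, and threshold are illustrative assumptions.

```python
import math

# Sketch of a single artificial neuron as described above: a weighted-sum
# input function, a sigmoid activation, and an identity output function.
def neuron(inputs, weights, theta=0.0):
    total = sum(i * w for i, w in zip(inputs, weights))    # input function (sum)
    activation = 1.0 / (1.0 + math.exp(-(total - theta)))  # sigmoid in (0, 1)
    return activation                                      # identity output

out = neuron([1.0, 0.5, -1.0], [0.8, 0.4, 0.3], theta=0.1)
# out lies strictly between 0 and 1
```

A full network stacks layers of such neurons, and learning consists of adjusting the weights w and thresholds Θ, which is exactly the set of adaptive parameters mentioned in the introduction to this section.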

2.12.2. Using NN Control in Quadcopters

Nicol and others (Nicol et al., 2008) applied artificial neural networks, biologically inspired by the central nervous system
and brain, in a robust control algorithm for a quadcopter, in order to compensate for modeling errors
and considerable disturbances caused by the wind. The method showed improvements in reaching
the desired altitude and in reducing drift due to weight.
Dierks and Jagannathan (Dierks and Jagannathan, 2010) implemented output feedback using neural networks, for the
formation of quadcopter squadrons in tracking mode, to learn the complete dynamics of the quadcopter, including the
unmodeled dynamics. A virtual neural network controlled the six degrees of freedom using the four control inputs;
applying Lyapunov stability theory, the control errors were shown to be bounded. Boudjedir et al. (Boudjedir et al., 2012)
applied an adaptive neural network scheme for the stabilization of the quadcopter in the presence of sinusoidal disturbances.
The proposed solution of two parallel individual hidden layers was successful, as the tracking error was reduced and there
was no drift due to the effect of weight.
Cibiraja and Varatharajana (Cibiraja and Varatharajana, 2017) implemented attitude control of a quadcopter under
external disturbances. The proposed method was based on a sliding mode control (SMC) algorithm, a robust
control method whose robustness has the drawback of high-frequency switching (chattering). To eliminate this problem, a
gain-scheduling sliding mode control technique with an adaptive neural network (ANGS-SMC) was used; the SMC
control gain was adapted using a feedforward neural network. The performance of the designed
controller was compared by simulation in MATLAB, verifying that with a simple neural network the vibration effect can
be reduced, without the need for a fuzzy logic system.
Mohanty and Misra (Mohanty and Misra, 2020) explored the use of neural network control on top of a Linear Quadratic
Regulator (LQR) and a Sliding Mode Control (SMC) designed for a 3-degree-of-freedom (DOF) quadcopter based on
Quanser's Hover model. The main benefits of this approach were the model's ability to adapt quickly to unmodeled
aerodynamics, disturbances, and component failure. In this way they reduced the costs and time associated with
wind tunnel tests and the generation of additional control devices for UAVs.

2.13. Control Systems using Hybrid algorithms

Clearly, even the best linear and non-linear algorithms have limitations, and no single controller has all the optimal
characteristics. Different approaches try to solve this by combining two or more algorithms.

2.13.1 Fundamentals of Hybrid algorithms


A hybrid algorithm is one that combines two or more algorithms that solve the same problem, either choosing one of them
(depending on the data) or switching between them over the course of execution. This is generally done to combine the
desired characteristics of each, so that the overall algorithm is better than its individual components. "Hybrid algorithm"
does not refer to merely combining algorithms to solve a problem, since many algorithms are simply combinations of
simpler pieces, but to combining algorithms that solve the same problem while differing in particular characteristics, such
as execution time for a given input size.
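A minimal sketch of the switching idea: a supervisor picks between two control laws for the same plant, a sliding-mode law far from the target (robust) and a smooth PD law near it (no chattering). The double-integrator plant, surfaces, and gains are illustrative assumptions.

```python
# Hybrid control sketch: a supervisor switches between two laws for the
# double integrator x'' = u, based on distance to the sliding surface.
def hybrid_control(x, v, lam=2.0, bound=1.5, switch=0.2):
    s = v + lam * x
    if abs(s) > switch:                       # far from target: robust SMC
        return -lam * v - bound * (1.0 if s > 0 else -1.0)
    return -4.0 * x - 3.0 * v                 # close: smooth PD, no chattering

def simulate_hybrid(dt=1e-3, T=10.0):
    x, v, t = 1.0, 0.0, 0.0
    while t < T:
        v += hybrid_control(x, v) * dt
        x += v * dt
        t += dt
    return abs(x) + abs(v)

residual = simulate_hybrid()   # both states settle near zero
```

This is the simplest instance of the state-machine-driven composite control mentioned for robust control: each mode handles a region of the state space, and the supervisor provides the transition between them.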

2.13.2 Use of Hybrid algorithms in quadcopters

Some examples can be cited. Zeghlache et al. (Zeghlache et al., 2012) built a hybrid controller using fuzzy logic with
backstepping and sliding mode control that successfully eliminated the chattering effect of the sliding mode control algorithm.
Benallegue et al. (Benallegue et al., 2006) used an FL controller in parallel with a high-order SMC to command a quadcopter,
where the SMC acted as an observer and estimator of external disturbances. The system showed good disturbance
rejection and great robustness.
Madani and Benallegue (Madani and Benallegue, 2008) performed adaptive control by backstepping combined with
neural networks. Backstepping was used to achieve good tracking of the yaw angle and the positions while maintaining the
stability of the roll and pitch angles; neural networks were used to compensate for unmodeled dynamics. The main
contribution of the paper was that the controller did not require the dynamic model and other parameters,
which resulted in greater versatility and robustness.
Sharim et al. (2017) proposed a new control strategy for a quadcopter composed of three parts: system modeling, integral backstepping control, and a fuzzy logic compensator. The non-linear model took into account non-linearities and variables that are usually neglected; the controller based on the integral backstepping algorithm made the quadcopter follow a desired route, while a fuzzy logic compensator in parallel with the integral backstepping controller improved trajectory tracking under critical conditions (high wind speed, mass variation, etc.). Simulations demonstrated the efficacy of the proposed approach. Also in this line of work, for the trajectory tracking control of an unmanned aerial vehicle, Kemik et al. (2020) used adaptive neuro-fuzzy controllers. The controller update rules were derived from sliding mode control theory, generating a sliding surface that drives the neuro-fuzzy controller parameters so that the error converges to zero in a stable way. The analytical claims were verified by real-time experiments in the presence of non-zero initial errors.

2.13.3 Use of Hybrid algorithms in AUV


The difficulty with trajectory tracking control of an autonomous underwater robot in three-dimensional space is that Brockett's necessary condition for feedback stabilization is not satisfied, and there is no time-invariant control law that makes the system reach a specific equilibrium in an asymptotically stable way. The uncertainty of the hydrodynamic parameters, together with the non-linear dynamics of the underwater vehicle, makes navigation control and tracking a difficult task. Mohan and Asokan (2010) proposed a hybrid control system combining sliding mode control (SMC) and classical PID control methods to reduce the tracking errors that arise due to disturbances, as well as variations in buoyancy. A trajectory planner calculates the linear and angular velocities and vehicle orientations corresponding to a given 3-D inertial trajectory and produces a feasible trajectory. This trajectory is used to calculate the control signals for the three available hybrid controller inputs. A supervisory controller switches between SMC and PID control according to a predefined switching rule, whose parameters were optimized using Taguchi techniques. The efficiency and performance of the proposed controller were investigated by comparing it numerically with classic SMC and traditional linear control systems in the presence of disturbances. The simulations used a full set of nonlinear equations of motion and yielded a controller response with less tracking error.
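The supervisory switching scheme can be sketched in miniature as follows. This is a generic illustration rather than the controller of Mohan and Asokan: the gains, the sliding-surface slope, and the error-magnitude switching rule are placeholder assumptions (in the cited work the switching-function parameters were tuned with Taguchi techniques).

```python
import math

def pid_step(err, state, kp=2.0, ki=0.5, kd=1.0, dt=0.01):
    """Classical PID on the tracking error; state = (integral, previous error)."""
    integral, prev = state
    integral += err * dt
    deriv = (err - prev) / dt
    return kp * err + ki * integral + kd * deriv, (integral, err)

def smc_step(err, err_dot, lam=3.0, eta=5.0, phi=0.1):
    """Sliding mode control with a tanh boundary layer to limit chattering."""
    s = err_dot + lam * err          # sliding surface s = e_dot + lambda * e
    return eta * math.tanh(s / phi)  # smoothed switching term

def hybrid_control(err, err_dot, pid_state, threshold=0.2):
    """Supervisory rule: robust SMC far from the trajectory, smooth PID near it."""
    if abs(err) > threshold:                      # large error or disturbance
        return smc_step(err, err_dot), pid_state  # PID state frozen meanwhile
    u, pid_state = pid_step(err, pid_state)       # small error: precise tracking
    return u, pid_state
```

Here the switching variable is simply the magnitude of the tracking error; any measurable indicator of disturbance level could play the same role.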
Deng et al. (2014) developed an autonomous and remotely operated vehicle (ARV), which combines the characteristics of an autonomous underwater vehicle (AUV) and a remotely operated vehicle (ROV). They used a fuzzy-PID controller for motion, depth, and heading control. The controller synthesizes the advantages of fuzzy control and PID control, using fuzzy rules to adjust the PID controller parameters online and thereby achieving a better control effect. The conventional ROV cable is replaced by a fiber-optic cable, which enables high-bandwidth real-time video, data telemetry, and high-quality teleoperation. In addition, with the help of real-time manual remote operation and scanning sonar, the vehicle overcomes the limitations of a pure AUV, so that it can adapt to the real marine environment and meet the needs of unknown missions. The experiments carried out were successful.
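Online fuzzy adjustment of PID gains can be sketched roughly as below. The membership function, scaling rules, and base gains are illustrative assumptions, not the design of Deng et al.; they encode the common rules of thumb of raising proportional action and cutting integral action when the error is large.

```python
def fuzzy_membership(err, small=0.1, large=1.0):
    """Degree (0 to 1) to which the absolute error is considered 'large'."""
    x = (abs(err) - small) / (large - small)
    return max(0.0, min(1.0, x))

def fuzzy_pid_gains(err, base=(2.0, 0.5, 1.0)):
    """Fuzzy rules of thumb: a large error raises Kp (faster correction) and Kd
    (more damping) while cutting Ki (less integral windup)."""
    mu = fuzzy_membership(err)
    kp0, ki0, kd0 = base
    return kp0 * (1.0 + 0.5 * mu), ki0 * (1.0 - 0.8 * mu), kd0 * (1.0 + 0.3 * mu)

def fuzzy_pid_step(err, state, dt=0.05):
    """One PID step with its gains rescheduled online by the fuzzy rules."""
    kp, ki, kd = fuzzy_pid_gains(err)
    integral, prev = state
    integral += err * dt
    u = kp * err + ki * integral + kd * (err - prev) / dt
    return u, (integral, err)
```

A production fuzzy-PID controller would use full rule bases over both the error and its derivative, but the structure, membership evaluation followed by gain rescheduling, is the same.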
Wang et al. (2016) designed a cuttlefish-inspired biomimetic underwater vehicle (BUV), powered by an undulating fin, that can perform flexible movements by wave propulsion in tight spaces. Its hybrid heading control combined an active disturbance rejection control (ADRC) system with a fuzzy logic system. The experimental results demonstrated the viability and efficacy of the control mechanism and system.
Li et al. (2019) proposed a method to reduce the influence of unknown external disturbances, internal coupling effects, and model uncertainties, using a modified disturbance observer within a hybrid coordinated-control strategy with a new nonlinear disturbance observer for autonomous systems. The algorithm is a redundancy-resolution scheme, often used to obtain the joint motions corresponding to the desired trajectory of the vehicle. However, due to calibration errors, mounting errors, and numerical errors, these trajectories may not reach the desired point accurately. The hybrid strategy was tested using numerical simulations based on a UVMS (underwater vehicle-manipulator system) to verify the effectiveness of the proposed coordinate controller and the hybrid strategy. During the simulations, random disturbances were introduced into the system, but the controller kept its robustness compared to other controllers. Experiments were also carried out that verified acceptable performance.
Cai et al. (2020) used a hybrid-drive underwater vehicle-manipulator system (HD-UVMS) to capture products on the seabed. The purpose of the proposed hybrid propulsion system was to improve the maneuverability of the HD-UVMS by improving the stability of its turning mechanism through two unique long-fin thrusters. The control of the propellers and long-fin thrusters is based on a fuzzy logic control method. A vision system allows the HD-UVMS to gradually approach marine products with the help of monocular vision and to capture them with the help of binocular vision. The system, which combines the vision system with fuzzy logic, was experimentally tested, verifying that the proposed approach is practical and valid.
3. Comparison of Algorithms

Table 1 compares the algorithms described above as applied to the control of quadcopters and AUVs, although it should be noted that the performance of a specific algorithm depends on so many factors that it cannot be modeled completely. Table 1 therefore serves only as a rough guide, as it emerges from the previous analysis and from theoretical knowledge.
Table 1. Characteristics of the algorithms
Despite the low precision in determining the characteristics of the algorithms, a table can be built that indicates roughly how each algorithm responds to certain characteristics, for example by assigning the values 1 = good, 0 = indifferent, and -1 = bad. This allows building the following table:
Table 2. Performance of the Algorithms

From this table, an individual applicability index (IAI) is proposed, which allows evaluating the performance of a control algorithm for a given application. The index is obtained by the following formula:

IAI = ( Σ_{i=1}^{NC} CI_i ) / NC

With:
NC = number of characteristics required
CI_i = characteristic index of characteristic i (1 = good, 0 = indifferent, -1 = bad)

This index returns a value between -1 and 1: the algorithm chosen for the desired characteristics will have a good performance if the value obtained is close to 1, and a poor performance if it is close to -1.

With this index, the combined use of two or more algorithms can also be evaluated, through a joint applicability index (IAC), which is obtained from:

IAC = ( Σ_{i=1}^{NA} IAI_i ) / NA

With:
NA = number of algorithms to be used
IAI_i = individual applicability index of algorithm i
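As a worked illustration, both indices reduce to simple averages and can be computed directly; the characteristic scores below are hypothetical placeholders, not the entries of Table 2:

```python
def iai(scores):
    """Individual applicability index: mean of the characteristic indices
    (1 = good, 0 = indifferent, -1 = bad) over the NC required features."""
    return sum(scores) / len(scores)

def iac(score_sets):
    """Joint applicability index: mean of the IAI values of the NA algorithms."""
    return sum(iai(s) for s in score_sets) / len(score_sets)

# Hypothetical scores for, say, robustness, adaptability, simplicity, speed:
pid = [1, -1, 1, 0]     # simple and reasonably robust, but not adaptive
smc = [1, 1, -1, 1]     # robust and adaptive, but more complex
print(iai(pid))         # 0.25
print(iac([pid, smc]))  # 0.375
```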

4. Discussion and Conclusions

The review carried out, together with the algorithm evaluation method for a specific use, provides analysis tools for future work on the control of quadcopters or AUVs. As this review shows, no algorithm has all the desirable characteristics for all the problems that must be faced when designing a control system. It is also clear from the works analyzed that better performance is obtained by combining different algorithms that provide the best combination of the desired characteristics, such as robustness, adaptability, simplicity, traceability, rapid response, and disturbance rejection, among others. However, this does not guarantee good overall performance; a compromise must be reached on which characteristics are the most appropriate for a given application. Although there is no consensus on which model provides the best global performance, the proposed a priori evaluation method constitutes a useful tool for selecting a combination of control algorithms.

5. Bibliography
Huo, X., Huo, M. and Karimi, H.R. (2014) Attitude Stabilization Control of a Quadrotor UAV by Using Backstepping
Approach. Mathematical Problems in Engineering, 2014, 1-9.
Mahony, R., Kumar, V. and Corke, P. (2012) Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor.
Robotics Automation Magazine, 19, 20-32. http://dx.doi.org/10.1109/MRA.2012.2206474
Bouabdallah, S. (2007) Design and Control of Quadrotors with Application to Autonomous Flying. Ph.D. Dissertation,
Lausanne Polytechnic University, Lausanne.
Lee, K.U., Kim, H.S., Park, J.-B. and Choi, Y.-H. (2012) Hovering Control of a Quadrotor. 12th International Conference on Control, Automation and Systems (ICCAS), 17-21 October 2012, 162-167.
Runcharoon, K. and Srichatrapimuk, V. (2013) Sliding Mode Control of Quadrotor. Proceedings of the 2013 International
Conference on Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), Konya, 9-11 May
2013, 552-557.
Madani, T. and Benallegue, A. (2006) Backstepping Control for a Quadrotor Helicopter. Proceedings of the 2006 IEEE/RSJ
International Conference on Intelligent Robots and Systems, Beijing, 9-15 October 2006, 3255-3260.
Fang, Z. and Gao, W. (2011) Adaptive Integral Backstepping Control of a Micro-Quadrotor. Proceedings of the 2nd
International Conference on Intelligent Control and Information Processing (ICICIP), Harbin, 25-28 July 2011, 910-915.
Palunko, I. and Fierro, R. (2011) Adaptive Control of a Quadrotor with Dynamic Changes in the Center of Gravity.
Proceedings of the 18th IFAC World Congress, Milan, 28 August-2 September 2011, 2626-2631.
De Monte, P. and Lohmann, B. (2013) Position Trajectory Tracking of a Quadrotor Helicopter Based on L1 Adaptive
Control. Proceedings of the 2013 European Control Conference (ECC), Zurich, 17-19 July 2013, 3346-3353.
Bai, Y., Liu, H., Shi, Z. and Zhong, Y. (2012) Robust Control of Quadrotor Unmanned Air Vehicles. Proceedings of the 31st
Chinese Control Conference (CCC), Hefei, 25-27 July 2012, 4462-4467.
Tony, C. and Mackunisy, W. (2012) Robust Attitude Tracking Control of a Quadrotor Helicopter in the Presence of
Uncertainty. Proceedings of the IEEE 51st Annual Conference on Decision and Control (CDC), Maui, 10-13 December
2012, 937-942.
Satici, A., Poonawala, H. and Spong, M. (2013) Robust Optimal Control of Quadrotor UAVs. IEEE Access, 1, 79-93.
http://dx.doi.org/10.1109/ACCESS.2013.2260794
Falkenberg, O., Witt, J., Pilz, U., Weltin, U. and Werner, H. (2012) Model Identification and H∞ Attitude Control for Quadrotor MAVs. Intelligent Robotics and Applications, 460-471.
Raffo, G.V., Ortega, M.G. and Rubio, F.R. (2010) An Integral Predictive/Nonlinear H∞ Control Structure for a Quadrotor Helicopter. Automatica, 46, 29-39. http://dx.doi.org/10.1016/j.automatica.2009.10.018
Roza, A. and Maggiore, M. (2012) Path Following Controller for a Quadrotor Helicopter. Proceedings of the American
Control Conference (ACC), Montreal, 27-29 June 2012, 4655-4660. http://dx.doi.org/10.1109/ACC.2012.6315061
Lee, D., Kim, H.J. and Sastry, S. (2009) Feedback Linearization vs. Adaptive Sliding Mode Control for a Quadrotor Helicopter. International Journal of Control, Automation and Systems, 7, 419-428. http://dx.doi.org/10.1007/s12555-009-0311-8
Santos, M., Lopez, V. and Morata, F. (2010) Intelligent Fuzzy Controller of a Quadrotor. Proceedings of the 2010
International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Hangzhou, 15-16 November 2010,
141-146.
Nicol, C., Macnab, C.J.B. and Ramirez-Serrano, A. (2008) Robust Neural Network Control of a Quadrotor Helicopter.
Proceedings of the Canadian Conference on Electrical and Computer Engineering, Niagara Falls, 4-7 May 2008, 1233-
1237.
Dierks, T. and Jagannathan, S. (2010) Output Feedback Control of a Quadrotor UAV Using Neural Networks. IEEE
Transactions on Neural Networks, 21, 50-66. http://dx.doi.org/10.1109/TNN.2009.2034145
Boudjedir, H., Yacef, F., Bouhali, O. and Rizoug, N. (2012) Adaptive Neural Network for a Quadrotor Unmanned Aerial
Vehicle. International Journal in Foundations of Computer Science and Technology, 2, 1-13.
http://dx.doi.org/10.5121/ijfcst.2012.2401
Zeghlache, S., Saigaa, D., Kara, K., Harrag, A. and Bouguerra, A. (2012) Backstepping Sliding Mode Controller Improved
with Fuzzy Logic: Application to the Quadrotor Helicopter. Archives of Control Sciences, 22, 315-342.
Madani, T. and Benallegue, A. (2008) Adaptive Control via Backstepping Technique and Neural Networks of a Quadrotor
Helicopter. Proceedings of the 17th World Congress of the International Federation of Automatic Control, Seoul, July 6-11
2008, 6513-6518.
Åström, K.J. (2002) Control System Design. Lecture Notes for ME155, University of California at Santa Barbara.
Houari, A., Bachir, I., Krachai Mohamed, D. and Kara Mohamed, M. (2020). PID vs LQR controller for tilt rotor airplane,
International Journal of Electrical and Computer Engineering (IJECE). Vol. 10, No. 6.
Giraldo Suarez, E., Muñoz Gutiérrez, P. and Bonilla Becerra, J. (2016). Identificación y control de un Vehículo Aéreo no Tripulado tipo Quadcopter, USBMed, Vol. 7, No. 1.
Chovancová, A.; Fico, T.; Duchoň, F.; Dekan, M.; Chovanec, Ľ.; Dekanová, M. (2020). Control Methods Comparison for
the Real Quadrotor on an Innovative Test Stand. Appl. Sci.,10, 2064
H. S. Vieira, E. C. de Paiva, S. K. Moriguchi and J. R. H. Carvalho, (2020),"Unified Backstepping Sliding Mode
Framework for Airship Control Design," in IEEE Transactions on Aerospace and Electronic Systems, vol. 56, no. 4, pp.
3246-3258, doi: 10.1109/TAES.2020.2975525.
Benaddy, A., Bouzi, M. and Labbadi, M. (2020). "Comparison of the different control strategies for Quadrotor Unmanned Aerial Vehicle," 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, pp. 1-6, doi: 10.1109/ISCV49265.2020.9204143.
Sanchez-Fontes, E. et al. (2020). Nuevo vehículo aéreo autónomo estable por construcción: configuración y modelo dinámico. Revista Iberoamericana de Automática e Informática industrial, No. 3, p. 264-275, ISSN 1697-7920.
A. M. Elhennawy and M. K. Habib, (2018), "Nonlinear Robust Control of a Quadcopter: Implementation and Evaluation,"
IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, pp. 3782-3787, doi:
10.1109/IECON.2018.8591344.
Jasim, O.A. and Veres, S.M. (2020). A robust controller for multi rotor UAVs. Aerospace Science and Technology, Volume 105, 106010.
Craig, W., Yeo, D. and. Paley, D. (2020). Geometric Attitude and Position Control of a Quadrotor in Wind. Journal of
Guidance, Control, and Dynamics. https://doi.org/10.2514/1.G004710
C. Fu, A. Sarabakha, E. Kayacan, C. Wagner, R. John and J. M. Garibaldi, (2016). "A comparative study on the control of
quadcopter UAVs by using singleton and non-singleton fuzzy logic controllers," IEEE International Conference on Fuzzy
Systems (FUZZ-IEEE), Vancouver, BC, 2016, pp. 1023-1030, doi: 10.1109/FUZZ-IEEE.2016.7737800.
Mesdaghi, S., Dosaranian-Moghadam, M. (2019). 'A Fuzzy Logic Control System for Quadcopter by Human Voluntary-
Physical Movements', Journal of Computer & Robotics, 12(1), pp. 39-45.
Ferdaus, M.M., Anavatti, S.G., Pratama, M. et al. (2020). Towards the use of fuzzy logic systems in rotary wing unmanned
aerial vehicle: a review. Artif Intell Rev 53, 257–290. https://doi.org/10.1007/s10462-018-9653-z
Cibiraja, N. and Varatharajana, M. (2017). Chattering reduction in sliding mode control of quadcopters using neural networks. Energy Procedia, Volume 117, June 2017, Pages 885-892.
Mohanty, S. and Misra, A. (2020) Autonomous Control Analysis of a Quadcopter Using Artificial Neural Network. In: Gunjan V., Zurada J., Raman B., Gangadharan G. (eds) Modern Approaches in Machine Learning and Cognitive Science: A Walkthrough. Studies in Computational Intelligence, vol 885. Springer, Cham. https://doi.org/10.1007/978-3-030-38445-6_4
Kemik H., Dal M.B., Oniz Y. (2021) Trajectory Tracking of a Quadcopter Using Adaptive Neuro-Fuzzy Controller with
Sliding Mode Learning Algorithm. In: Kahraman C., Cevik Onar S., Oztaysi B., Sari I., Cebi S., Tolga A. (eds) Intelligent
and Fuzzy Techniques: Smart and Innovative Solutions. INFUS 2020. Advances in Intelligent Systems and Computing, vol
1197. Springer, Cham. https://doi.org/10.1007/978-3-030-51156-2_126
