
2020 International Conference on Unmanned Aircraft Systems (ICUAS)

Athens, Greece. September 1-4, 2020

Aerial Following of a Non-holonomic Mobile Robot subject to Velocity Fields: A Case Study for Autonomous Vehicles Surveillance
Anand Sánchez-Orta1 , Pedro Castillo2 , Fatima Oliva-Palomo3 , Julio Betancourt 2 ,
Vicente Parra-Vega1 , Luis Gallegos-Bermudez1 , and Francisco J. Ruiz-Sanchez1

1 Robotics and Advanced Manufacturing Dept., Center for Research and Advanced Studies–CINVESTAV, 25900 Mexico. Emails: anand.sanchez@cinvestav.mx, vparra@cinvestav.mx, luis.gallegos@cinvestav.edu.mx, fruiz@cinvestav.mx
2 Sorbonne Universités, Université de Technologie de Compiègne, CNRS UMR 7253 Heudiasyc Lab., CS 60319, 60203 Compiègne Cedex, France. Emails: pedro.castillo@hds.utc.fr, gubetanc@hds.utc.fr
3 National Institute of Technology, Saltillo, 25270 Mexico. Email: fatima.oliva.palomo@gmail.com

Abstract— Surveillance is a major concern nowadays for the development of autonomous vehicle (AV) technology, in particular during the prototyping stage. However, it is unclear what an effective strategy is for aerial imagery with drones. On one hand, the dynamics of such an autonomous vehicle is commonly subject to non-holonomic constraints, and its steering wheel is driven under the look-ahead paradigm, i.e. users drive a car by looking forward, instead of minimizing the instantaneous position error, as if navigating in a smooth velocity field. On the other hand, aerial footage is typically conducted with underactuated drones, whose admissible position trajectories are limited to a subset of paths.

In this paper, a scheme for aerial visual servoing of a mobile ground robot tracking a smooth vector field is proposed. The scheme is based on structural properties and constraints of both systems, such as non-holonomy, nonlinear dynamics and underactuation. The result is aerial surveillance of an autonomous vehicle that mimics how we drive a real vehicle, by redefining a locally smooth velocity field toward the next target through admissible paths. Experiments show the feasibility of the proposed scheme by controlling the quadrotor's underactuated positions (x, y) from the velocity field, the altitude z to facilitate a monocular camera, and the yaw angle ψ to recover the direction of the field.

Index Terms— Aerial Surveillance; Aerial Eye-in-hand configuration; Image-based visual servoing; Autonomous vehicles; Velocity fields

Fig. 1. Proof of concept of the problem: The chauffeur commands the vehicle in an urban setting, picking up the next contour A, then B, then C, to plan smoothly ahead, as if navigating in smooth vector fields; this facilitates tracking with an airborne monocular camera, since height can be regulated efficiently, assuming the target remains in the FoV.

I. Introduction

Aerial visual tracking is a major hurdle to tackle for feasible prototyping of an autonomous vehicle (AV). AVs are subject to non-holonomic constraints, since the rear wheels are fixed, preventing lateral velocity in any direction. On the other hand, when we drive such a car, we pick visual landmarks ahead (for instance, as shown in Fig. 1, a corner at the end of the block) in the visual field to plan how to command the steering wheel to intersect such landmark ahead. The result is smooth car trajectories that point ahead with a small admissible turning ratio of the steering wheel. These trajectories can be modeled as vector fields pointing to a landmark (known as the contour); for instance, when we need to turn at the next corner of the block, we solve what the steering wheel angle is, based on the corner ahead of us, as if we were drawing smooth vector fields. Then, if we abstract that, such driving motion corresponds to an autonomous car that mimics human driving, drawing smooth vector fields. The problem of aerial surveillance becomes the aerial tracking of the car Center of Mass (x, y) and of the yaw angle ψ that matches the heading direction of the car, at a given altitude z. However, such aerial tracking of (x, y, z, ψ) requires solving the vector field and the underactuated visual tracking. Therefore, we aim at exploiting structural properties of the car, such as non-holonomy, and the vector field control to mimic driving, and consider the underactuation of the quadrotor, to establish a tractable problem statement. To this end, we assume that the car evolves in the (x, y) plane subject to non-holonomic constraints while tracking smooth vector fields designed using the contour ahead of the road in the ψ direction; thus such vector fields provide the desired admissible trajectories to the car, and an underactuated quadrotor equipped with an airborne monocular camera is considered to track the car at a given desired altitude zd, see Fig. 2.

In this paper, we propose a vector field control for non-holonomic mobile robots steered by differential wheels, inspired by our previous scheme [24]; then the full nonlinear underactuated quadrotor dynamics are considered and the underactuation is solved for a regulator control system. The next desired velocity is recomputed at each

time to obtain a vector field based on the car position and the constant contour, for a motion-to-motion control regimen in the image plane. The smoothness of the field facilitates the airborne visual tracking of the altitude z, since such state is controlled independently, as usual for quadrotors; thus it is possible to use monocular vision, assuming that the mobile robot's CoM remains in the field of view (FoV), which is a reasonable assumption due to the very smooth path of the field. Overall, the contribution amounts to aerial visual servoing of terrestrial vector fields drawn by an autonomous car that mimics human driving commands. Two propositions with proofs are presented, and experiments are discussed. Briefly, the manuscript is organized as follows. Section II introduces the preliminaries, and Section III introduces the dynamic models of the airborne monocular camera, the quadrotor and the mobile robot, as well as the vector field design. Section IV presents the proposal, and experiments are shown in Section V. Then, discussions and conclusions are addressed in Sections VI and VII.

Fig. 2. Experimental test-bed showing the AR Drone 2.0 as the quadrotor, and the Jumping Sumo as the mobile robot (with a chess board on top for image processing), both from Parrot.

II. The Problem

In this section we review the relevant literature on quadrotor dynamics and visual servoing for tracking terrestrial targets. Therefore, we leave out important contributions that neglect either dynamics or stability.

A. Relevant Literature

Regulation of visual relative position and yaw with a quadrotor, assuming a 2D ground target, has been studied for landing [1], and saturated visual servoing [2], using spherical projection [3], also assuming that the target remains in the FoV, though [4] addressed how to handle constraints to enforce that, improving the control loop with a backstepping scheme in [5]. Moreover, simplified control design can be obtained with the virtual camera approach, [6], or by exploiting the conventional perspective projection, [7], which represents image plane velocity at a predefined height. In contrast, most approaches use the well-known eye-in-hand Image-Based Visual Servoing of [8] to derive an orthodox visual servoing of a ground target by quadrotors, [9]. However, neither approach considers the physical constraints of the mobile robot nor the quadrotor capabilities and limitations. Due to the complexity of airborne visual tracking, multiple processing units have been considered to switch and balance computational load, [10]. Optical flow is a cautious alternative to provide damping in the image plane, [11], given that it is not a derivative, similar to [10].

Advanced control schemes have been proposed, such as fractional control with conventional vision, [14]; however, the power of such a scheme is not exploited due to its assumptions, which are based on conventional ideal conditions. The difficulty of implementing that scheme arises since it considers attitude acceleration measurements, thus chattering is introduced in the second derivative of the position control, unlike [15], where complex disturbances are compensated with continuous fractional control.

Path tracking is also considered in [16] by formulating a constrained quadratic programming problem given the field of view and maximum acceleration; however, the numerical solution may not guarantee smooth enough trajectories, which may demand high battery consumption while attempting to reduce the L2 norm. Interestingly, a fuzzy engine is proposed to smooth out and produce allowable quadrotor trajectories from a monocular camera, [17], however neglecting the physics of the quadrotor and the mobile robot, similar to [18], where a hybrid scheme is proposed with a mobile manipulator, with dual-task planning.

More general sensory data are considered for drones to propose obstacle evasion and target tracking, however assuming in addition perfect knowledge of obstacles and target for intersection planning in looking-ahead trajectories, [12], even in civil cluttered flying conditions, [13], all based on potential fields and civil aviation rulings, [19]. For missile drones, the look-ahead scheme is more critical, [20], similar to heavy oil tankers, [21], to avoid obstacles and converge to the target. In addition, it is customary to assume a monocular camera based on the virtual camera approach that aligns the image plane collinear to the ground plane of the vehicle.

In contrast to the previous approaches, when the moving target is a vehicle driven by a human driver, an important consideration arises for an effective visual tracking scheme. In this case, the driver evidently avoids obstacles and maneuvers smoothly toward the target, as if the vehicle's CoM trajectory were immersed in a vector field. Hereby, we consider the challenging case of exploiting the physics of the vehicle, such driving mode, and the drone with its monocular airborne camera system. Then, we need several assumptions for the problem statement of quadrotor tracking of visual velocity fields when the vehicle avoids obstacles as if immersed in smooth vector fields.

B. Assumptions

We aim at studying the visual tracking of a mobile robot that smoothly draws lines pointing ahead, as if following a vector field converging to a contour located ahead of the AV in the FoV; in addition, we consider that the mobile robot lies in the field of view of an airborne camera mounted on an underactuated drone. Then, we assume the following:

1) The AV mimics the motion of a car driven by a human, implying that its CoM draws a smooth vector field toward the contour ahead;
2) The AV's path curvature is finite, implying a bounded yaw angle that complies with the non-holonomic constraint, and a bounded steering wheel angle.
3) The quadrotor is underactuated in the (x, y) coordinates; however, the controlled Degrees of Freedom (DoF) are the yaw angle ψ and the altitude z, as well as the (x, y) motions, where the latter are controlled indirectly using the roll and pitch angles.
4) The monocular camera is mounted along the optical axis collinear to z in body coordinates, with the mobile robot lying in its FoV. Since the quadrotor altitude can be controlled quickly and independently, 2D visual servoing suffices to capture the AV's CoM.
5) For experiments, we consider the 6 DoF Parrot AR.Drone with monocular camera and airborne processing only, [22], to track a differentially driven mobile robot in the image plane.
6) Moreover, the hardware, software and firmware of the airborne sensors have limitations, including a monocular camera with noisy, delayed and quantized measurements, yielding an approximation of the required virtual camera transformations; then, we assume that the fast quadrotor actuation and the smoothness of the trajectories produce small roll and pitch angles, so that the diffeomorphism of the virtual camera projection is almost an identity. This assumption is substantiated since the proposed quadrotor controls ensure asymptotic convergence to the vicinity of stability along very smooth trajectories.

C. Problem Statement

Considering all these assumptions, the research problem can be stated as follows: "Design an aerial surveillance system for an autonomous vehicle subject to 1)-6)."

D. Proposed Solution

The trajectories of the non-holonomic differential wheeled mobile robot, under Velocity Field Control in the inertial frame, are captured by the monocular camera as an image-based field in the aerial body frame. Such image is processed on board, without relaying data to a ground station, to produce the desired image-based trajectories for the controller of the underactuated quadrotor dynamics. Fig. 1 shows an illustration of the proof of concept of the problem.

The mobile robot differential kinematics is processed by a kinematic controller on a PC, then the velocity control commands are sent wirelessly. Trajectories are processed with a pinhole camera to produce an image-vector field. Then, considering the full nonlinear dynamics, a quadrotor controller in a well-posed attitude representation is implemented in closed loop, resolving the underactuation through well-posed underactuated coordinates.

III. Dynamics of the AV-Quadrotor System

A. The Aerial Drone System

1) Dynamic Model: Consider a rigid body quadrotor subject to 3 moments and a vertical force, in a body fixed frame B = {e_x^b, e_y^b, e_z^b} whose origin coincides with its center of mass (CoM), [15]. Euler angles ψ, θ, ϕ (yaw, pitch, roll, respectively) parametrize the orientation with respect to an earth fixed frame E = {e_x, e_y, e_z} through the rotation matrix R : B → E ∈ SO(3). The nonlinear equations of motion of the quadrotor can be described as follows:

m\ddot{\xi} = -T\,R(\bar{q})e_z + m g e_z   (1)
\dot{q} = \tfrac{1}{2}\, q \otimes \Omega   (2)
J\dot{\Omega} = -\Omega \times J\Omega + \tau   (3)

where ξ = [x, y, z]^T ∈ R^3 denotes the CoM of frame B relative to E, vector Ω = [Ω_1, Ω_2, Ω_3]^T ∈ B represents the angular velocity, m and g are the known quadrotor mass and the constant gravitational acceleration, respectively, J ∈ R^{3×3} stands for the constant inertia matrix with respect to the CoM in B, and [Ω×] represents the skew-symmetric matrix of vector Ω. The scalar thrust T ∈ R^+ represents the magnitude of the principal non-conservative exogenous force acting along the z-axis, and τ ∈ R^3 stands for the control torque in B.

2) Well-posed Kinematic Representation: We consider a rotation matrix R(q) expressed in terms of a unit quaternion q = [q_0, q_1, q_2, q_3]^T ∈ S^3; then R(q) represents the rotation of the vehicle, whose scalar and vector parts are q_0 ∈ R and \bar{q} = [q_1, q_2, q_3]^T, respectively. This leads to

q = \begin{bmatrix} q_0 \\ \bar{q} \end{bmatrix} = \begin{bmatrix} \cos(\theta) \\ \lambda \sin(\theta) \end{bmatrix}   (4)

The quaternion product ⊗ is given by, [25],

p \otimes q = \begin{bmatrix} p_0 & -\bar{p}^T \\ \bar{p} & I_{3\times3}\,p_0 + [\bar{p}\times] \end{bmatrix} \begin{bmatrix} q_0 \\ \bar{q} \end{bmatrix}   (5)

and its conjugate is

q^* = [q_0, -q_1, -q_2, -q_3]^T   (6)

Since the quadrotor has 6 DoF but only four control inputs are available, the translation coordinates (x, y) are considered as the underactuated coordinates.

3) Solving Underactuation: To command the translation coordinates (x, y), it is required to solve the underactuation; in this case, consider the virtual control approach, where the input u_d defines the virtual input, or control, to actuate the position dynamics (1), and can be interpreted as the desired force needed to command such (x, y). To this end, consider

u_d = -T_d\,R(\bar{q}_d)e_z = [u_{d1}, u_{d2}, u_{d3}]^T   (7)

where T_d represents the magnitude of a desired force, i.e. T_d = ∥u_d∥, and −R(q̄_d)e_z its desired direction, wherein the unit vector defines the direction as follows

\hat{u}_d = \frac{u_d}{\|u_d\|} = -R(\bar{q}_d)e_z = \begin{bmatrix} -2q_{1d}q_{3d} - 2q_{0d}q_{2d} \\ -2q_{2d}q_{3d} + 2q_{0d}q_{1d} \\ -1 + 2q_{1d}^2 + 2q_{2d}^2 \end{bmatrix} = \begin{bmatrix} \hat{u}_{d1} \\ \hat{u}_{d2} \\ \hat{u}_{d3} \end{bmatrix}   (8)

Now, consider the unit quaternion constraint q_{0d}^2 + q_{1d}^2 + q_{2d}^2 + q_{3d}^2 = 1; from (8), the scalar part of the quaternion, expressed in terms of q_{3d} and û_{d3}, results in

q_{0d} = \tfrac{1}{2}\sqrt{-2\hat{u}_{d3} + 2 - 4q_{3d}^2}   (9)

Then, the solution for q_{1d} and q_{2d}, using (8)-(9), is given by

\begin{bmatrix} q_{1d} \\ q_{2d} \end{bmatrix} = \frac{1}{2(q_{0d}^2 + q_{3d}^2)} \begin{bmatrix} -q_{3d} & q_{0d} \\ -q_{0d} & -q_{3d} \end{bmatrix} \begin{bmatrix} \hat{u}_{d1} \\ \hat{u}_{d2} \end{bmatrix} = \begin{bmatrix} \frac{q_{3d}}{\hat{u}_{d3}-1} & -\frac{\sqrt{-2\hat{u}_{d3}+2-4q_{3d}^2}}{2(\hat{u}_{d3}-1)} \\ \frac{\sqrt{-2\hat{u}_{d3}+2-4q_{3d}^2}}{2(\hat{u}_{d3}-1)} & \frac{q_{3d}}{\hat{u}_{d3}-1} \end{bmatrix} \begin{bmatrix} \hat{u}_{d1} \\ \hat{u}_{d2} \end{bmatrix}

Notice that q_{0d}, q_{1d}, q_{2d} depend on q_{3d} and on the direction given by û_d. Since there is no constraint on q_{3d}, let the desired rotation axis be defined in the xy plane, i.e. q_{3d} = 0; therefore the desired quaternion becomes

q_{dxy} = \begin{bmatrix} \tfrac{1}{2}\sqrt{-2\hat{u}_{d3}+2} \\ \hat{u}_{d2}/\sqrt{-2\hat{u}_{d3}+2} \\ -\hat{u}_{d1}/\sqrt{-2\hat{u}_{d3}+2} \\ 0 \end{bmatrix}   (10)

Now, given that the information provided by the camera is in the quadrotor's frame, which has to track the yaw of the AV or mobile robot, the axis of the quadrotor must be aligned to the robot frame by a desired yaw angle ψ_d about the z-axis of the body frame. This rotation is defined by

q_{dz} = [\cos(\psi_d/2),\ 0,\ 0,\ \sin(\psi_d/2)]^T   (11)

From this, see Fig. 3, the desired quaternion for the attitude is given by the consecutive rotations of q_{dz} and q_{dxy} as follows

q_d = q_{dz} \otimes q_{dxy} = \begin{bmatrix} \tfrac{1}{2}\sqrt{-2\hat{u}_{d3}+2}\,\cos(\psi_d/2) \\ \frac{\hat{u}_{d1}\sin(\psi_d/2) + \hat{u}_{d2}\cos(\psi_d/2)}{\sqrt{-2\hat{u}_{d3}+2}} \\ \frac{-\hat{u}_{d1}\cos(\psi_d/2) + \hat{u}_{d2}\sin(\psi_d/2)}{\sqrt{-2\hat{u}_{d3}+2}} \\ \tfrac{1}{2}\sqrt{-2\hat{u}_{d3}+2}\,\sin(\psi_d/2) \end{bmatrix}   (12)
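The closed form (12) is straightforward to transcribe; below is a hedged C++ sketch (function and variable names are ours) that returns the desired attitude quaternion from the unit thrust direction û_d and the desired yaw ψ_d, assuming û_{d3} < 1 so the square root does not vanish. For ψ_d = 0 it reduces to q_{dxy} in (10), as expected.

```cpp
#include <array>
#include <cmath>

using Quat = std::array<double, 4>; // [q0, q1, q2, q3]

// Desired attitude (12) from the unit direction u_hat = -R(q_d)e_z and the
// desired yaw psi_d; assumes u_hat[2] < 1 (thrust never points straight down).
Quat desiredQuaternion(const std::array<double, 3>& u_hat, double psi_d) {
    const double s  = std::sqrt(-2.0 * u_hat[2] + 2.0); // sqrt(-2*u3 + 2)
    const double ch = std::cos(0.5 * psi_d);
    const double sh = std::sin(0.5 * psi_d);
    return {0.5 * s * ch,
            ( u_hat[0] * sh + u_hat[1] * ch) / s,
            (-u_hat[0] * ch + u_hat[1] * sh) / s,
            0.5 * s * sh};
}
```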
Fig. 3. Rotations between the quadrotor (a Parrot AR Drone 2.0) and the mobile robot (a Parrot Jumping Sumo with a chess board on top, for visual detection).

B. Vision Algorithm

A chess-pattern target was placed onto the mobile robot to process its orthogonal axes in its frame through a high density of corners, assuming a background floor without cluttered lines or corners. The camera, together with the processing unit of the AR Drone 2.0 quadrotor, detects corners with the following steps:

1) Find the 64 most confident corners of the chess board in a pinhole camera image.
2) Determine the correlation between images using the Lucas-Kanade algorithm, [28].
3) Determine the centroid based on the matched chess corners, as follows

p_t = \sum_{i=0}^{N} p_{f_i} / N   (13)

where p_t is the estimated centroid of the target, p_{f_i} is the position of each matching corner feature, and N is the total number of matching corners.
4) Update the matching corners, then determine the centroid (13).
5) The target is detected if the area centered at the centroid, with a radius of 20 pixels, contains at least 15 corners.

In Figure 4 there is no target detected, even though a centroid is calculated with only one corner, N_f = 1, but in Figure 5 the algorithm determines target detection for N_f = 22 corners.
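Steps 3) and 5) reduce to a few lines of code; the following self-contained C++ sketch uses the 20-pixel radius and 15-corner threshold from the list above, while the type and function names are ours:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pixel { double u, v; };

// Step 3): centroid (13) of the matched chess-board corners.
Pixel centroid(const std::vector<Pixel>& corners) {
    Pixel c{0.0, 0.0};
    for (const Pixel& p : corners) { c.u += p.u; c.v += p.v; }
    const double n = static_cast<double>(corners.size());
    if (n > 0) { c.u /= n; c.v /= n; }
    return c;
}

// Step 5): the target is declared detected when at least 15 of the
// matched corners fall within 20 pixels of the centroid.
bool targetDetected(const std::vector<Pixel>& corners) {
    const Pixel c = centroid(corners);
    std::size_t inside = 0;
    for (const Pixel& p : corners) {
        const double du = p.u - c.u, dv = p.v - c.v;
        if (std::sqrt(du * du + dv * dv) <= 20.0) ++inside;
    }
    return inside >= 15;
}
```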
C. Onboard Computation of Centroid and Optical Flow

The estimation of the optical flow is performed on a dedicated DSP to reduce the computational cost on the main control board, whose processing time takes between

15 and 20 ms, after a Kalman filter was implemented for robust processing. To determine the direction of the centroid, see Fig. 6, let the vector be

p_t = [u_c,\ v_c]^T   (14)

which can be used, together with its image velocity approximation [\dot{u}_c,\ \dot{v}_c]^T, for control purposes, by considering that the image axes are parallel to the x and y axes of the body fixed frame.

Fig. 4. Frame when the target is not in the FoV.

Fig. 5. Frame when the target is in the FoV.

Fig. 6. Image position of the mobile robot frame and the camera frame.

D. The Autonomous Vehicle as a Mobile Differential Robot

Consider a 2D terrestrial differential wheeled robot steered by the difference of angular wheel velocities, with the wheels connected along an axis perpendicular to the longitudinal motion, see Figure 7. The state equation is as follows

\dot{X} = A\omega   (15)

where X = [x, y, θ]^T ∈ R^3 is the robot pose, with (x, y) representing the position of its CoM and θ its orientation, both with respect to the inertial reference frame, and ω = [ω_1, ω_2]^T the angular speeds of the robot's wheels. The flow matrix is given by

A = \begin{bmatrix} \frac{r_1}{2}\cos\theta & \frac{r_2}{2}\cos\theta \\ \frac{r_1}{2}\sin\theta & \frac{r_2}{2}\sin\theta \\ -\frac{r_1}{L} & \frac{r_2}{L} \end{bmatrix}   (16)

which maps the generalized angular velocities ω to the operational velocity Ẋ, with r_1 = r_2 the radii of the left and right wheels, respectively, and L the length to the center of each wheel.

Fig. 7. Kinematics of a differential robot as an abstract representation of an AV.
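For illustration, a minimal C++ sketch of the forward kinematics (15)-(16); the Euler-integration helper and the function names are ours, and the numeric values anticipate Table I in Section V:

```cpp
#include <array>
#include <cmath>

struct Pose { double x, y, theta; };

// Differential-drive flow map (15)-(16): wheel speeds -> pose rates.
// r1, r2: wheel radii [m]; L: length to the center of each wheel [m].
std::array<double, 3> poseRates(const Pose& X, double w1, double w2,
                                double r1, double r2, double L) {
    const double c = std::cos(X.theta), s = std::sin(X.theta);
    return {0.5 * (r1 * w1 + r2 * w2) * c,   // xdot
            0.5 * (r1 * w1 + r2 * w2) * s,   // ydot
            (-r1 * w1 + r2 * w2) / L};       // thetadot
}

// One explicit-Euler step, e.g. for simulating the Jumping Sumo model.
Pose step(Pose X, double w1, double w2, double dt) {
    const auto Xdot = poseRates(X, w1, w2, 0.055, 0.055, 0.092); // Table I
    X.x += dt * Xdot[0]; X.y += dt * Xdot[1]; X.theta += dt * Xdot[2];
    return X;
}
```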
E. Vector Fields as Desired Smooth Trajectories

Velocity fields stand for Cartesian vector distributions, or fields, and describe the kinematic motion of a moving particle, i.e. the instantaneous changes in direction toward the objective, or contour. This is useful in our case if we consider that the AV driver commands the vehicle toward a target, or contour, and such driver steers the vehicle within prescribed standards, thus producing smooth maneuvers that can be modeled as a vector field converging to the contour. Then, we assume that the driver's mental map commands toward the final destination by picking local targets as soon as they appear in the FoV. Once the driver picks a target ahead, for instance to turn at the next corner, we somehow compute how to turn the AV, avoiding obstacles along the way, until converging tangentially to the contour. Once we determine

this objective is achieved, we pick the next target (contour) in the next block. Thus, this abstraction stands for how we plan the next trajectory based on a local contour in our FoV. Similar to this phenomenon, we assume that the AV produces such a vector field; then the surveillance problem is to produce a visual servoing that maintains the AV within the FoV.

To this end, consider a parametric curve G(s), where s stands for the arc length, or the contour ahead in the FoV. The problem is to determine the R^2 directional components (\vec{N}_n(s), \vec{T}(s)) of the field as a radial attraction to the curve s, or contour, where \vec{N}_n(s) and \vec{T}(s) stand for the Normal and Tangential components, respectively, of the field in R^2, whose resultant points at the target corresponding to G(s) = 0. There are several methods in the literature to produce such (\vec{N}_n(s), \vec{T}(s)) that smoothly converge to G(s) = 0, the arc to turn at the corner, asymptotically and with uniform speed, as we usually compute when we drive our cars through urban settings. The method we develop considers picking the objective path, or contour, as a parametric curve, to compute from the position data the velocity vector that intersects such contour G(s) = 0 ahead, see Figure 8. Thus, consider \vec{v} : R^2 → R^2 the velocity vector that depends on the position of a given point \vec{r}, with \vec{N}_n(s) = -\vec{R}_n(s) (where \vec{R}_n(s) is a radial unit vector) and \vec{T}(s), components gradually transitioning from a given CoM position to an attractive curve G(s) = 0 standing for the turn at the corner. Such transition uses variable coefficients as a function of the distance to the curve. Then, consider

\vec{v}(r) = \mu_{close}(d_r)\vec{T}(s) + \mu_{far}(d_r)(-\vec{R}_n(s))   (17)

where μ_close(d_r) and μ_far(d_r) are the variable coefficients depending on the distance d_r to the curve G(s) = 0. To keep a uniform speed motion along the attractive trajectories, ||\vec{v}(r)|| = 1, consider that the coefficients of the weighted sum are defined using sinusoidal functions,

\mu_{close}(d_r) = \cos(c_1 d_r); \quad \mu_{far}(d_r) = \sin(c_1 d_r)   (18)

where c_1 is a normalizing constant such that c_1 d_r ∈ [0, π/2]. Recalling that the arc length s along G(s) is determined by \vec{r} at its closest point, \vec{v} becomes

\vec{v}(r) = \cos(c_1 d_r)\vec{T}(r) + \sin(c_1 d_r)(-\vec{R}_n(r))   (19)

thus \vec{v}(r) is a gradual uniform-speed transition from the attractive to the guiding direction as the position gets closer to the curve, and represents the reference of the desired velocities, [\dot{x}_d,\ \dot{y}_d]^T, for the differential robot.

Fig. 8. Three-step velocity field design method based on differential geometry properties of the desired trajectory: (a) parametric curve of the motion objective, (b) main directional components of the field, (c) calculation of the velocity vectors.

IV. Vector Field and Control Design

Without loss of generality, and aiming at illustrating our proposal, let the desired trajectory of the mobile robot, or AV, be a 2D circumference. This simplifies the velocity field design because the radial and tangential vectors to the curve can be calculated with respect to the center of the circumference, r_c = (x_c, y_c), and its radius, ρ; then

\vec{T}(x, y) = -\operatorname{sgn}\!\left(\sqrt{(x-x_c)^2+(y-y_c)^2} - \rho\right) \frac{(y-y_c,\, -(x-x_c))}{\sqrt{(x-x_c)^2+(y-y_c)^2}},
\vec{R}_n(x, y) = -\operatorname{sgn}\!\left(\sqrt{(x-x_c)^2+(y-y_c)^2} - \rho\right) \frac{(x-x_c,\, y-y_c)}{\sqrt{(x-x_c)^2+(y-y_c)^2}},

such that the velocity field becomes

\vec{v}(r) = \cos(c_1 d_r)\vec{T}(r) + \sin(c_1 d_r)(-\vec{R}_n(r))   (20)

with c_1 = \frac{\pi}{2r}.
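For the circular contour, the field (20) is only a few lines of code. In the following C++ sketch the sign conventions are fixed so that, consistent with the stated intent of (17)-(19), the field is attractive toward the circle and tangential on it (the signs as extracted above are ambiguous on this point); the clamping of c_1 d_r to [0, π/2] follows (18), and all names are ours:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Velocity field (20) for a circular contour with center (xc, yc) and
// radius rho; c1 normalizes the distance so c1*d stays in [0, pi/2].
Vec2 velocityField(double x, double y, double xc, double yc,
                   double rho, double c1) {
    const double kHalfPi = 1.5707963267948966;
    const double dx = x - xc, dy = y - yc;
    const double dist = std::hypot(dx, dy);
    if (dist < 1e-9) return {1.0, 0.0};            // undefined at the center
    const double rx = dx / dist, ry = dy / dist;   // radial unit, outward
    const double tx = -ry,       ty = rx;          // counterclockwise tangent
    const double sgn = (dist >= rho) ? 1.0 : -1.0; // outside (+) / inside (-)
    const double nx = -sgn * rx, ny = -sgn * ry;   // unit normal toward curve
    double a = c1 * std::fabs(dist - rho);         // blended angle, per (18)
    if (a > kHalfPi) a = kHalfPi;
    return {std::cos(a) * tx + std::sin(a) * nx,
            std::cos(a) * ty + std::sin(a) * ny};
}
```

Since cos²(a) + sin²(a) = 1 and the tangent and normal are orthonormal, the returned field has unit norm, matching the uniform-speed requirement ||v(r)|| = 1.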
A. Integration of the Overall Scheme

The image-based control law, based on [23], is

u_d = \begin{bmatrix} b_1 \tanh(k_1 u_c + k_2 \dot{u}_c) \\ b_2 \tanh(k_3 v_c + k_4 \dot{v}_c) \\ b_3 \tanh(k_5(z - z_d) + k_6\dot{z} - mg) \end{bmatrix}   (21)

where [u_c, v_c]^T and [\dot{u}_c, \dot{v}_c]^T are, respectively, the image coordinates of the target's centroid and their rates of change, b_i and k_j are positive constants for i ∈ {1, 2, 3} and j ∈ {1, . . . , 6}, and z_d refers to the desired altitude, which is controlled independently; thus it can be considered that the visual projection occurs in 2D, at a given altitude z = z_d. Then, the well-posed desired quaternion, \bar{q}_d, is computed from (12) using the normalized vector of (21) and the AV's yaw angle, which represents such mapping.
and T⃗ (s), components gradually transitioning from a q̄d , is computed from (12) using the normalized vector of
given position CoM to an attractive curve G(s) = 0 (21) and AV’s yaw angle, which represents such mapping.
standing for the turning in the corner. Such transition To design the kinematic control of the AV, forward
uses a variable coefficients as a function of the distance kinematics yields the following map
to the curve. Then, consider [ ] [ ]
x + l cos θ ẋ − lθ̇ sin θ
ζ= =⇒ ζ̇ = (22)
⃗v (r) = µclose (dr )T⃗ (s) + µf ar (dr )(−R
⃗ n (s)) (17) y + l sin θ ẏ + lθ̇ cos θ
where µclose (dr ) and µf ar (dr ) are the variable coefficients By using the non-holonomic constraint, the change of
depending on the distance dr to the curve G(s) = 0. coordinates ẋ = u1 cos θ, ẏ = u1 sin θ, θ̇ = u2 , allows the
To keep a uniform speed motion along the attractive differential kinematic map to be written as follows,
trajectories, ||⃗v (r)|| = 1, consider that the coefficient of [ ][ ]
cos θ −l sin θ u1
the weighted sum are defined using sinusoidal functions, ζ̇ =
sin θ l cos θ u2
µclose (dr ) = cos(c1 dr ); µf ar (dr ) = sin(c1 dr ) (18) = Bu (23)
where c1 is a normalizing constant such that c1 dr ∈ Since B is a positive definite matrix, consider the
[0, π2 ]. Recalling the arc length s along G(s) is determined kinematic control as follows
by ⃗r at its closest point, ⃗v becomes [ ]
−1 ẋd
u = αB (24)
⃗v (r) = cos(c1 dr )T⃗ (r) + sin(c1 dr )(−R
⃗ n (r)) (19) ẏd

where α > 0 is used to scale the desired velocity [\dot{x}_d,\ \dot{y}_d]^T of the field. Then, substituting (24) into (23) yields \dot{\zeta}(t) = \alpha[\dot{x}_d,\ \dot{y}_d]^T, which ensures that (24) computes the desired angular wheel velocities based on the given velocity field components.

Notice that α becomes an extra degree of freedom needed in practice to adjust engineering units and for calibration purposes; otherwise it can be set to α = 1.

Finally, notice that this map preserves the velocity field properties in the camera image plane, since the quadrotor controller guarantees asymptotic convergence to the CoM, wherein a hover configuration is assured, see Fig. 9.

Fig. 9. The left image shows the velocity field captured by the aerial camera for zero roll and pitch angles, that is, when the camera is coplanar to the 2D inertial frame of the mobile robot. When the roll and pitch angles are different from zero, right image, the velocity field suffers only minor deviation. Notably, since the controller guarantees that the drone converges asymptotically to a desired hover configuration, the desired velocity field is the ideal one, left image, despite no virtual camera projection being used.
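Since det B = l, the inverse in (24) has a closed form; the following short C++ sketch of the resulting kinematic controller, fed by the field components, uses our own names, with α and l taken from Table I:

```cpp
#include <array>
#include <cmath>

// Kinematic control (24): u = alpha * B^{-1} [xd_dot, yd_dot]^T, with B
// from (23) and det(B) = l. alpha = 10 and l = 0.08 m follow Table I.
std::array<double, 2> kinematicControl(double theta,
                                       double xd_dot, double yd_dot,
                                       double alpha = 10.0, double l = 0.08) {
    const double c = std::cos(theta), s = std::sin(theta);
    return {alpha * ( c * xd_dot + s * yd_dot),       // u1: forward speed
            alpha * (-s * xd_dot + c * yd_dot) / l};  // u2: turning rate
}
```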
radius 1.5m, starting from an arbitrary initial condition
V. Experiments in the plane. While the sumo starts its motion, the
drone vision algorithm detects the target (the centroid
A. The Experimental System
of the visual landmarks placed over the mobile robot,
The AV, as the 2D mobile robot, is a novel mobile which corresponds to its CoM). Then, the velocity field
jumping robot called Jumping Sumo (the jumping func- algorithm computes online the kinematic controller to
tion was disabled), and the quadrotor is the AR Drone provide its smooth velocity field components depending
2.0 with monocular camera pointing downwards, both of the instantaneous position with respect to the contour,
from Parrot, see Fig. 2. Experiments were conducted at see Fig. 10.
Heudiasyc Laboratory of the Université de Technologie Figures 11 to 15 show the resultant path of the
de Compiègne. A custom-made programming platform is quadrotor immersed into a visual velocity field that
configured to command both robots based on personal converges to the ideal 2D mobile robot’s velocity field,
computers, giving rise to a platform called FL-AIR, see Fig. 11. This figure also shows the relative position
which decodes communication protocols of peripheral of quadrotor and the mobile robot. The mobile robot
devices by using a custom made software, including the control signal and that of the quadrotor are shown
closed-loop control, [29], [30]. The AR Drone 2.0 uses in Figures 12, and 13, respectively. These continuous
a board with an ARM Cortex A8 1GHz processor with signals have been well handled by the actuators of both
1GB of RAM, running the controller programmed in robots. The video of the experiment can be seen in
C++, for a IMU update at 5 ms, to handle all system https://www.youtube.com/watch?v=zB6dsH3zVg0
processes, including control outputs, interruptions, data
fusion and communication to ground station. VI. Discussions
The mobile sumo robot’s Wi-Fi node enables control Assuming the AV mimics the driving style of a human
command in velocity mode using a Broadcom BCM43526 chauffeur, who tracks in practice trajectories modeled by
transceiver, but for higher range, a pair of Skyworks vector fields to obtain a smooth drive ride, the quadrotor
SKY85803 Wi-Fi front-ends is used. It has an MPU- tracks the CoM and heading angle of the AV, thus
6050 invenSense 6-Axis gyroscope and accelerometer. fulfilling the mission of camera surveillance.

Fig. 10. Desired velocity field components ("dx" and "dy" [m/s] vs. time [s]) computed online from the real position of the mobile robot at each sampling period. Notice that the initial transient corresponds to the transient of the mobile robot; then a smoother and periodic desired trajectory is obtained in both axes.

Fig. 11. Cartesian position trajectories (x, y [m]) and errors [m], of both the mobile and drone robots. The smoothness stands out over the tracking errors, noticing that improved results can be obtained by tuning on a trial-and-error basis. These plots correspond to the second trial.

Fig. 12. Kinematic control signals u1 and u2 (digital signal vs. time [s]) of the velocity field.

Fig. 13. Control signals of momentum [τx, τy, τz] of the quadrotor. Experiments end at 65 seconds; however, the plots are presented up to 70 seconds to match the time basis of all plots.

Clearly, we are assuming that the CoM of the mobile robot remains in the FoV of the airborne camera, so the desired contour, the resultant velocity field and the quadrotor tuning are required to comply with it. This stands for a problem beyond the scope of this paper; however, the smoothness of the fields facilitates complying with this assumption.

Despite that it is theoretically proved that the quadrotor tracks asymptotically the vector field of the AV in the image plane, care must be exercised in the algorithm that resolves the underactuation, since this scheme may introduce large accelerations. We are implementing a decentralized regulator; however, our previous proposal [15] could be used to introduce fast and smooth tracking in finite time by using fractional sliding modes, even when the full nonlinear 6D position-attitude dynamics are uncertain and subject to disturbances.

Experiments are presented to evaluate the performance using a proof of concept experimental design

based on a differentially driven mobile robot as the AV and a popular drone. However, a high-end firmware runs all the processes in soft real time, in a decentralized fashion, with local processing for each robot. Notice at this point that Optitrack is used to measure ground truth positions, but only the position of the mobile robot is used for control purposes.

Reasonably, and arguably, we claim that this scheme may lead to and promote further research on visual tracking of AVs with multirotors, because it is feasible (cost) and viable (technology):

a) Since the limited airborne battery is a major concern, our proposal requires only a lightweight 2D monocular camera as long as the height is controlled independently, which is possible for quadrotors; thus no bulky 3D camera is required (in addition to the battery consumption and signal processing of typical stereo cameras).

b) Smooth trajectory tracking is the result of a smooth task, thus requiring a low frequency power system, without involving high frequency commutation that may deplete the battery; thus no additional heavy batteries may be required.

c) The control system exploits, and complies with, the physics of each subsystem, without neglecting nonlinear dynamics. If a tracking regime is required or disturbances are present, powerful controllers can be synthesized, such as our previous scheme, [31], that handles even gust wind or non-differentiable disturbances, due to the rigid body-air boundary that may yield complex aerodynamical phenomena.

Fig. 14. Thrust [N] of the quadrotor to regulate the altitude at a constant value.

Fig. 15. Resultant experiment for tracking a circumference of radius 1.5 m (position x vs. y [m]). These signals are obtained with ground truth measurements provided by the Optitrack system.

VII. Conclusion

A novel scheme for the aerial surveillance of AVs using quadrotors is proposed, and experimentally verified, under reasonable assumptions, to exemplify a solution to this problem. The two most stringent assumptions introduced in our proposal were that the mobile robot remains in the FoV of the airborne camera, and that the roll and pitch angles are small, so that no virtual camera was needed. Moreover, without loss of generality, our scheme admits introducing solutions to these issues to remove such assumptions.

Two commercial robots were used to this end under a custom-made programming platform. Preliminary results illustrate the feasibility of aerial surveillance for rapid prototyping of AVs, where physical constraints and properties were used for the proposed scheme, such as underactuation for the drone and non-holonomy for the AV, and tracking smooth fields, as is the real case when a real car is driven.

References

[1] P. Serra, R. Cunha, T. Hamel, D. Cabecinhas, C. Silvestre, "Landing of a quadrotor on a moving target using dynamic image-based visual servo control", IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1524-1535, 2016.
[2] H. Jabbari Asl, J. Yoon, "Bounded-input control of the quadrotor unmanned aerial vehicle: A vision-based approach", Asian Journal of Control, vol. 19, no. 3, pp. 840-855, 2017.
[3] R. Mebarki, B. Siciliano, "Velocity-free image-based control of unmanned aerial vehicles", Proceedings of the 2013 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1522-1527, 2013.
[4] D. Zheng, H. Wang, W. Chen, Y. Wang, "Planning and tracking in image space for image-based visual servoing of a quadrotor", IEEE Transactions on Industrial Electronics, vol. 65, no. 4, pp. 3376-3385, Apr. 2018.
[5] D. Zheng, H. Wang, J. Wang, S. Chen, W. Chen, X. Liang, "Image-Based Visual Servoing of a Quadrotor Using Virtual Camera Approach", IEEE/ASME Transactions on Mechatronics, vol. 22, issue 2, pp. 972-982, 2017.
[6] G. Fink, H. Xie, A. F. Lynch, M. Jagersand, "Nonlinear dynamic image-based visual servoing of a quadrotor", Journal of Unmanned Vehicle Systems, vol. 3, no. 1, pp. 1-21, 2015.
[7] H. Jabbari, G. Oriolo, H. Bolandi, "An adaptive scheme for image-based visual servoing of an underactuated UAV", International Journal of Robotics and Automation, vol. 29, no. 1, pp. 92-104, 2014.
[8] F. Chaumette, S. Hutchinson, "Visual servo control: Basic approaches", IEEE Robotics and Automation Magazine, vol. 13, issue 4, pp. 82-90, 2006.
[9] J. Gomez-Avila, C. Lopez-Franco, A.-Y. Alanis, N. Arana-Daniel, M. Lopez-Franco, "Ground Vehicle Tracking with a Quadrotor using Image Based Visual Servoing", 2nd IFAC Conf. on Modelling, Identification and Control of Nonlinear Systems, vol. 51, issue 3, pp. 344-349, 2018.

[10] E. Altuğ, J. P. Ostrowski, C. J. Taylor, "Control of a quadrotor helicopter using dual camera visual feedback", The International Journal of Robotics Research, vol. 24, no. 5, pp. 329-341, 2005.
[11] B. Herisse, T. Hamel, R. Mahony, F. X. Russotto, "A nonlinear terrain-following controller for a VTOL unmanned aerial vehicle using translational optical flow", IEEE International Conference on Robotics and Automation, pp. 3251-3257, 2009.
[12] S. Ahmad, R. Fierro, "Real-time Quadrotor Navigation Through Planning in Depth Space in Unstructured Environments", International Conference on Unmanned Aircraft Systems, pp. 1467-1476, 2019.
[13] M. Wan, G. Gu, W. Qian, K. Ren, X. Maldague, Q. Chen, "Unmanned Aerial Vehicle Video-Based Target Tracking Algorithm Using Sparse Representation", IEEE Internet of Things Journal, vol. 6, issue 6, pp. 9689-9706, 2019.
[14] H. Maurya, A. Kamath, N. Verma, L. Behera, "Vision-based Fractional Order Sliding Mode Control for Autonomous Vehicle Tracking by a Quadrotor UAV", 28th IEEE International Conference on Robot and Human Interactive Communication, 2019.
[15] F. Oliva-Palomo, A.-J. Muñoz-Vazquez, A. Sánchez-Orta, V. Parra-Vega, C. Izaguirre-Espinosa, P. Castillo, "A fractional nonlinear PI-structure control for robust attitude tracking of quadrotors", IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 6, pp. 2911-2920, 2019.
[16] J. Thomas, J. Welde, G. Loianno, K. Daniilidis, V. Kumar, "Autonomous Flight for Detection, Localization, and Tracking of Moving Targets with a Small Quadrotor", IEEE Robotics and Automation Letters, vol. 2, issue 3, pp. 1762-1769, 2017.
[17] Ch.-M. Huang, M.-L. Chiang, T.-S. Hung, "Visual servoing of a micro quadrotor landing on a ground platform", International Journal of Control, Automation and Systems, vol. 15, issue 6, pp. 2810-2818, 2017.
[18] V. Lippiello, R. Mebarki, F. Ruggiero, "Visual coordinated landing of a UAV on a mobile robot manipulator", IEEE International Symposium on Safety, Security, and Rescue Robotics, 2013.
[19] H. Alturbeh, J. Whidborne, "Visual Flight Rules-Based Collision Avoidance Systems for UAV Flying in Civil Aerospace", Robotics, vol. 9, issue 1, pp. 1-35, 2020.
[20] D. Li, J. Cruz, "Better cooperative control with limited look-ahead", American Control Conference, 2006.
[21] A. Siddiqui, M. Verma, D. A. Tulett, "A periodic planning model for maritime transportation of crude oil", EURO Journal on Transportation and Logistics, vol. 2, issue 4, pp. 307-335, 2013.
[22] A. Chakrabarty, R. Morris, X. Bouyssounouse, R. Hunt, "Autonomous indoor object tracking with the Parrot AR.Drone", International Conference on Unmanned Aircraft Systems, pp. 25-30, 2016.
[23] A. Sanchez, J. Escareño, O. Garcia, R. Lozano, "Autonomous Hovering of a Noncyclic Tiltrotor UAV: Modeling, Control and Implementation", Proceedings of the 17th IFAC World Congress, pp. 803-808, 2008.
[24] A.-J. Muñoz-Vázquez, J.-D. Sánchez-Torres, V. Parra-Vega, A. Sánchez-Orta, F. Martínez-Reyes, "Robust contour tracking of nonholonomic mobile robots via adaptive velocity field motion planning scheme", International Journal of Adaptive Control and Signal Processing, vol. 33, issue 6, pp. 890-899, 2019.
[25] J. B. Kuipers, "Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace and Virtual Reality", Princeton University Press, 1999.
[26] O. Araar, N. Aouf, I. Vitanov, "Vision Based Autonomous Landing of Multirotor UAV on Moving Platform", Journal of Intelligent & Robotic Systems, vol. 85, issue 2, pp. 369-384, 2017.
[27] L. Wang, X. Bai, "Quadrotor Autonomous Approaching and Landing on a Vessel Deck", Journal of Intelligent & Robotic Systems, vol. 92, issue 1, pp. 125-143, 2018.
[28] J.-Y. Bouguet, "Pyramidal implementation of the Lucas Kanade feature tracker", Intel Corporation, Microprocessor Research Labs, 2000.
[29] C. Izaguirre-Espinosa, A.-J. Muñoz-Vázquez, A. Sánchez-Orta, V. Parra-Vega, P. Castillo, "Fractional-Order Control for Robust Position/Yaw Tracking of Quadrotors With Experiments", IEEE Transactions on Control Systems Technology, vol. 27, issue 4, pp. 1645-1650, 2019.
[30] FL-AIR Framework. Accessed: Feb. 27, 2020. [Online]. Available: https://uav.hds.utc.fr/software-flair/
[31] H. Ramirez-Rodriguez, V. Parra-Vega, A. Sanchez-Orta, O. Garcia-Salazar, "Robust backstepping control based on integral sliding modes for tracking of quadrotors", Journal of Intelligent & Robotic Systems, vol. 73, issue 1-4, pp. 51-66, 2014.
