

Unmanned Aerial Vehicles: Robotic Air Warfare 1917-2007


Chapter 1: "Pioneers of Aerial Automation: The Early Years (1917-1939)"
[mh]The Birth of Aerial Robotics: Early Innovators

[h]Design and Development of Aerial Robotic Systems for Sampling Operations in Industrial Environment
The development of aerial robots has become one of the most active fields of research in the last decade. Innovations in multiple fields, such as lithium polymer batteries, microelectromechanical sensors, more powerful propellers, and the availability of new materials and prototyping technologies, have opened the field to researchers and institutions that used to be denied access because of the costs, both economic and in specialized personnel. The new-found popularity of this research field has led to a proliferation of advances in several areas that used to ignore the possibilities of aerial robots given the limited capabilities they offered.

One of the environments where aerial robots are gaining a foothold for the first time is industry. Given the level of accountability, certification, and responsibility required in industry, the field has always been reluctant to introduce experimental state-of-the-art technologies. But thanks to wider experimentation with aerial robots, resilience and performance robustness have been improving, making them an option for solving several industrial problems. Currently, these problems are focused on logistical aspects and related operations, such as the distribution of goods and their placement in otherwise hard-to-reach points. An example of this kind of application would be the surveying and monitoring of fluids in installations with multiple basins and tanks, for example, a wastewater processing plant.

Basin sampling operations generally require multiple fluid samples at several points with a given periodicity. This makes the task cumbersome, repetitive, and, depending on the features of the environment and other factors, potentially dangerous. As such, automation of the task would provide great benefits, reducing the effort and risks taken by human personnel, and opening options in terms of survey scheduling.

One of the challenges that any autonomous Unmanned Aerial Vehicle (UAV) has to face is that of accurately estimating its pose with respect to the relevant navigational frames. The estimation methodology discussed here was formulated for estimating the state of the aerial vehicle. In this case, the state is composed of the variables defining the location and attitude as well as their first derivatives. The visual features seen by the camera are also included in the system state. On the other hand, the orientation can be estimated in a robust manner by most flight management units (FMUs), with the output of the attitude and heading reference system (AHRS) frequently used as feedback to the control system for stabilization.

In order to account for the uncertainties associated with the estimation provided by the attitude and heading reference system (AHRS), the orientation is included in the state vector and explicitly fused into the system. The problem of position estimation, in contrast, cannot be solved for applications that require performing precise maneuvers, even with a global positioning system (GPS) signal available. Therefore, additional sensory information, namely monocular vision, is integrated into the system in order to improve its accuracy.

The use of a monocular camera as the only sensory input of a simultaneous localization and mapping (SLAM) system comes with a difficulty: the robot trajectory, as well as the feature map, can only be estimated without metric information. This problem has been pointed out since the earliest approaches. If the metric scale is to be recovered, it is necessary to incorporate some source of metric information into the system. In this case, the GPS and the monocular vision can operate in a complementary manner. In the proposed method, the noisy GPS data are used to incorporate metric information into the system during periods when they are available. On the other hand, the monocular vision is used for refining the estimates when the GPS is available or for performing purely visual-based navigation in periods when the GPS is unavailable.

In this work, we present an automated system designed with the goal of automating sampling tasks in an open-air plant and propose a solution for the localization problem in industrial environments when GPS data are unreliable. The final system uses two autonomous vehicles, a robotic ground platform and a UAV, which collaborate to collect batches of samples from several tanks. The specifications and design of the system are described, focusing on the architecture and the UAV. The next section describes the vision-based solution proposed to deal with the localization-for-navigation problem, commenting on several contributions made with respect to a classical visual SLAM approach. Results discussing the performance and accuracy of the localization technique are presented, based both on simulations and on real data captured with the UAV described in Section 2. Finally, the conclusions discuss the next steps in the testing and development of the system and the refinement of the localization technique.

[h]System architecture

The architecture proposed to deal with the fluid sampling task aims to maximize the capability to reach the desired points of operation and measurement with accuracy, and to minimize the risks associated with the process. The risks for human operators are removed or minimized, as they can perform their tasks without exposing themselves to the outdoor industrial environment. To achieve this, the system presents two different robotic platforms: a quadcopter UAV acts as the sample collector, picking fluid samples from the tanks, and an Unmanned Ground Vehicle (UGV) platform acts as the collector carrier, transporting both the collector and the samples. As the risks associated with the operation of the sample collector UAV are mainly related to flight operations, the UAV generally travels safely landed on the collector carrier, where it can be automatically serviced with replacement sample containers or battery charges.

[h]Sampling system architecture and communications

The designed architecture of the system and its expected operation process can be observed in Figure. The architecture has been divided into several blocks so that it can fit into a classical deployment scheme in an industrial production environment. The analytics technicians at the laboratory can use the scheduling and control module interface to order the collection of a batch of samples. This is called a collection order, detailing the number of samples, which basins they must come from, and any required sampling patterns or preferences. The process can be set to start at a scheduled time, and stops may be enforced, i.e., samples in a specific tank cannot be taken before a set hour.
Figure. Proposed system distribution and communications diagram.

The collection order is formed by a list of GPS coordinates (with optional parameters, such as times to perform operations), which is processed in the central server of the system. This produces a mission path, which includes a route for the sample carrier with one or more stops. The mission path also includes the sampling flight that the sample collector UAV has to perform from each sample carrier stop. The data of this flight, called a collection mission, are transmitted by the sample carrier to the sample collector and contain a simple trajectory that the sample collector has to approximately follow, with height indications to avoid obstacles, until the sampling point is reached.
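As an illustration, the collection order and collection mission described above could be represented with minimal data structures like the following sketch; the field names and types are assumptions for illustration, not the system's actual message format.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SamplePoint:
    # WGS-84 coordinates of the sampling location plus optional timing constraints
    lat: float
    lon: float
    not_before: Optional[str] = None   # e.g. "14:00", earliest allowed sampling time
    not_after: Optional[str] = None

@dataclass
class CollectionOrder:
    # Ordered by the analytics technicians through the scheduling interface
    points: List[SamplePoint]
    scheduled_start: Optional[str] = None

@dataclass
class CollectionMission:
    # Flight handed from the sample carrier to the sample collector UAV:
    # a simple waypoint trajectory with height indications to clear obstacles
    waypoints: List[Tuple[float, float, float]]  # (north_m, east_m, height_m) in the local frame
    sampling_point: Tuple[float, float]          # final (north_m, east_m) over the basin
```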

The different paths and trajectories are generated by the path planner module, which works on a two-dimensional (2D) grid map model of the outdoor environment. When data are available, the grid is textured with the map obtained from the proposed visual SLAM technique. The use of the grid model allows introducing additional data, such as occupancy, possible obstacles, or even the scheduled accessibility of an area, e.g., a path normally used by workers during certain hours can be set to be avoided at those times. An energy-minimization planning technique, essentially a simplified approach to Ref. , is used to obtain the trajectories, considering the following set of criteria (a minimal sketch of such a weighted grid search follows the list):

1. All samples in the same basin/tank should be captured consecutively;
2. Minimize the expected flight effort for the UAV (heuristically assuming that the trajectory is a polyline of vertical and horizontal movements);
3. Minimize penalties for restricted-access areas;
4. Minimize the trajectory of the UGV.
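Below is a minimal, illustrative sketch of the kind of cost-weighted grid search referred to above, not the authors' planner: a Dijkstra search over a 2D occupancy grid in which restricted-access cells carry an extra penalty. The grid layout, costs, and function names are assumptions.

```python
import heapq

def plan_path(grid, penalties, start, goal):
    """Dijkstra search on a 2D grid.
    grid[r][c] == 1 marks an obstacle; penalties[r][c] is an extra cost
    for restricted-access cells (0 where unrestricted)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected moves
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0 + penalties[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Reconstruct the path by walking the predecessor links backwards
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1]
```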

The communications with the autonomous platforms of the system are performed over 4G in order to allow video streaming during the prototyping and testing phases. The data produced by the different elements of the system are stored and logged to allow later analysis. The rest of the communications can be performed through any channel usual in industrial environments, be it a local network, the Internet, or a VPN, as the segmented architecture allows discrete deployment. The communications between the sample carrier and the sample collector are performed through ZigBee, although both also present Wi-Fi modules to ease development and maintenance tasks.

A subset of the communication protocols is considered priority communications. This includes the supervision and surveying messages in normal planned operation, and those signals and procedures that can affect or override normal operation, e.g., emergency recall or landing. For the UAV, the recall protocol uses a prioritized list of possible fallback points that the UAV tries to reach in order. If the sample carrier is present, the UAV will try to land on it, searching for its landing area through fiducial markers. If that is not possible, the UAV will try the next fallback point in the list, until it lands or the battery falls below a certain threshold; in that case, it will simply land on the first clear patch of ground available, though this will generally require dropping the sampling device.
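The prioritized recall behaviour described above can be sketched as follows; this is illustrative pseudologic, not the actual implementation, and the UAV method names are assumptions.

```python
def recall(uav, fallback_points, battery_threshold=0.15):
    """Illustrative sketch of the prioritized recall protocol described above
    (not the authors' implementation); uav method names are assumptions."""
    for point in fallback_points:          # list ordered by priority, carrier first
        uav.fly_to(point)
        if uav.battery_level() < battery_threshold:
            break                          # no margin left to keep trying fallbacks
        if uav.detect_landing_marker():    # fiducial marker on the sample carrier
            uav.land_on_marker()
            return "landed_on_fallback"
    # Battery critical or all fallback points exhausted: emergency landing,
    # dropping the sampling device if needed to land on the first clear patch.
    uav.drop_sampling_device()
    uav.emergency_land()
    return "emergency_landing"
```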

[h]Sample collector aerial robot architecture

The UAV built as a sample collector is a 0.96 m diameter quadcopter deploying four 16” propellers with T-Motor MN4014 actuators. The custom-built frame supports the propeller blocks at a 5° angle and is made of aluminum and carbon fiber. A PIXHAWK kit is used as the flight management unit (FMU), with custom electronics to support two 6S 8000 mAh batteries. An Odroid U4 single-board computer is used to perform the high-level tasks and deal with all noncritical processes. Beyond the sensors present in the FMU (which include an AHRS and GPS), the UAV presents a front-facing USB camera (640×480 @ 30 fps) for monitoring purposes, an optical flow sensor with ultrasound facing downward, and a set of four ultrasound sensors deployed in a planar configuration to detect obstacle collisions. The PX4 stack is used to manage flying and navigation, while a Robot Operating System (ROS) distribution for ARM architectures runs on the Odroid Single Board Computer (SBC), supporting MAVLink for communications. The communication modules and sensors are described at the hardware level in Figure.
Figure. Sample collector aerial robot hardware and communications diagram.

The hardware used weighs approximately 4250 g, while the propellers provide a theoretical maximum lift of 13,900 g. The sample collection device, including the container, is being designed with a weight below 300 g, meaning that the UAV, with the sample capture system and up to 1000 ml of water-like fluid, would keep the weight/lift ratio around 0.4.
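As a quick check of that figure, taking 1000 ml of water-like fluid as roughly 1000 g: (4250 g + 300 g + 1000 g) / 13,900 g = 5550 / 13,900 ≈ 0.40, consistent with the quoted ratio.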

A simplified diagram of the operation process is shown in Figure. The communications are divided into two blocks: those connecting the UAV with the main network for supervision and emergency control, including video streaming with MAVLink over 4G, and those connecting it to the sample carrier during routine operation. This routine operation includes receiving the collection mission detailing the trajectory to the sampling point and the signal to start the process. The action planner module in the UAV supervises the navigation tasks, making sure that the trajectory waypoints are reached, and manages the water sample collector control module.
Figure. Sample collector UAV robot block diagram.

The localization and positioning of the UAV are solved through a combination of GPS and visual odometry estimation. To approach the water surface, the downward-looking ultrasonic height sensor is used, as vision-based approaches are unreliable on reflective surfaces. To ease the landing problem, the sample carrier presents fiducial markers used to estimate the pose of the drone. This allows landing operations while inserting the sample container into a socket, similarly to an assisted peg-in-the-hole operation. After the UAV has landed, the sample collector control module releases the sample container, and the sample carrier replaces the container with a clean one. After the container is replaced, the carrier sends a signal to the UAV so that the new container is properly locked, while the filled one is stored.
As autonomous navigation depends mainly on localization and positioning, it is largely based on GPS and visual odometry. This combination, together with the ultrasound sensors used to find height and avoid possible obstacles, allows navigating the environment even if the GPS signal is unreliable. Though there are more accurate alternatives for solving the SLAM problem in terms of sensors, namely RGB-D cameras or Light Detection and Ranging (LiDAR)-based approaches, they present several limitations that make them unsuitable for outdoor industrial environments. These limitations are in addition to the penalty imposed by their economic cost, especially for models with industry-level performance and reliability.

In the case of RGB-D, most of the sensors found on the market are unreliable outdoors as they use IR or similar light frequencies. The subset of time-of-flight RGB-D sensors presents the same limitations as LiDAR sensors: they are prone to spurious measurements in environments where the air is not clear (presence of dust, pollen, particles, or vapors from tanks), they present large latencies, making them unfit for real-time operation of UAVs, and they are generally considered not robust enough for industrial operation. In the case of LiDAR, and of RGB-D if the SLAM/localization technique used focuses on depth measurements, there is an additional issue: these approaches normally rely on computationally demanding optimization techniques to achieve accurate results, and the computing power available on a UAV is limited.

[h]UAV localization and visual odometry estimation

The drone platform is considered to move freely in any direction in R3 × SO(3), as shown in Figure.
Figure. Coordinate systems: the local tangent frame is used as the navigation reference frame N. AHRS: attitude and heading reference system.

As the proposed system is mainly intended for local autonomous vehicle navigation, i.e., localizing the sample collector during the different collection missions, the local tangent frame is used as the navigation reference frame. The initial position of the sample collector, landed on the sample carrier, is used to define the origin of the navigation coordinate frame, and the axes are oriented following the NED (North, East, Down) navigation convention. The magnitudes expressed in the navigation, sample collector drone (robot), and camera frames are denoted, respectively, by the superscripts N, R, and C. All the coordinate systems are defined right-handed. The proposed method mainly takes into account the AHRS and the downward monocular camera, though it also uses data from the GPS during an initialization step.

The monocular camera is assumed to follow the central-projection camera model, with the image plane in front of the origin, thus forming a noninverted image. The camera frame C is considered right-handed with the z-axis pointing toward the field of view. The pixel coordinates are denoted with the [u, v] pair convention and follow the classical direct and inverse observation models.
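As a reference for those direct and inverse observation models, a minimal pinhole-camera sketch is given below; the intrinsic values are placeholders, not the calibration of the camera actually used.

```python
import numpy as np

# Placeholder intrinsics: focal lengths and principal point in pixels
fu, fv, u0, v0 = 400.0, 400.0, 320.0, 240.0

def project(p_c):
    """Direct observation model: 3D point in the camera frame -> pixel [u, v]."""
    x, y, z = p_c
    return np.array([u0 + fu * x / z, v0 + fv * y / z])

def back_project(u, v):
    """Inverse observation model: pixel -> unit ray in the camera frame
    (the depth along the ray remains unknown for a monocular camera)."""
    ray = np.array([(u - u0) / fu, (v - v0) / fv, 1.0])
    return ray / np.linalg.norm(ray)
```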

The attitude and heading reference system (AHRS) is a device used for estimating the vehicle orientation while it is maneuvering. The most common sensors integrated in AHRS devices are gyroscopes, accelerometers, and magnetometers. Advances in micro-electro-mechanical systems (MEMS) and microcontrollers have contributed to the development of inexpensive and robust AHRS devices.

In the case of the deployed FMU, the accuracy and reliability provided by its AHRS are enough to directly fuse
its data into the estimation system. Thus, AHRS measurements are assumed to be available at high rates (50–200 Hz) and modeled according to

yNa = aN + va (1)

where aN = [φv, θv, ψv]T, with φv, θv, and ψv being Euler angles denoting, respectively, the roll, pitch, and yaw of the vehicle, and va being Gaussian white noise. The global positioning system (GPS) is a satellite-based navigation
system that provides 3D position information for objects on or near the Earth’s surface and has been studied in several works. The user-equivalent range error (UERE) is a measure of the cumulative error in GPS position measurements caused by multiple sources of error. These error sources can be modeled as a combination of random noise and slowly varying biases. According to a study, the UERE is around 4.0 m (σ); of this, 0.4 m (σ) corresponds to random noise.
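A minimal sketch of this kind of GPS error model (white noise plus a slowly varying bias, here simulated as a random walk) is shown below; the numeric values simply echo the figures quoted above and do not constitute a calibrated model.

```python
import numpy as np

def simulate_gps_error(n_steps, dt=1.0, noise_sigma=0.4, bias_rw_sigma=0.01, seed=0):
    """Per-axis GPS position error: white noise (sigma ~0.4 m) plus a
    slowly varying bias simulated as a random walk (assumed parameters)."""
    rng = np.random.default_rng(seed)
    bias = np.cumsum(rng.normal(0.0, bias_rw_sigma * np.sqrt(dt), n_steps))
    noise = rng.normal(0.0, noise_sigma, n_steps)
    return bias + noise

errors = simulate_gps_error(600)  # ten minutes of 1 Hz GPS error samples
```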

In this work, it is assumed that position measurements yr can be obtained from the GPS unit, at least at the beginning of the trajectory, and that they are modeled by

yr = rN + vr (2)

where vr is Gaussian white noise and rN is the position of the vehicle. As GPS measurements are usually given in geodetic coordinates, Eq. (2) assumes that they have been converted to the corresponding local tangent frame used for navigation, accounting for the transformation between the robot collector frame and the antenna.

[h]Problem formulation

The objective of the estimation method is to compute the system state x:

x = [xv, yN1, yN2, …, yNn]T (3)

where the system state x can be divided into two parts: one defined by xv and corresponding to the sample collector UAV state, and the other one corresponding to the map features. In this case, yNi defines the position of the ith map feature. The UAV state xv is, in turn, composed of

xv = [qNR, ωR, rN, vN]T (4)

where qNR represents the orientation of the vehicle with respect to the local world (navigation) frame as a unit quaternion, and ωR = [ωx, ωy, ωz] represents the angular velocity of the UAV expressed in the robot frame. rN = [px, py, pz] represents the position of the UAV expressed in the navigation frame, with vN = [vx, vy, vz] denoting the corresponding linear velocities. The location of a feature yNi (noted as yi for simplicity) is parameterized in its Euclidean form:

yi = [pxi, pyi, pzi]T (5)
The proposed system follows the classical loop of prediction-update steps of the extended Kalman filter (EKF) in its direct configuration, working at the frequency of the AHRS. Thus, both the vehicle state and the feature estimates are propagated by the filter, see Figure.
Figure. Block diagram showing the architecture of the system. EKF-SLAM: extended Kalman filter simultaneous localization and mapping.

At the start of a prediction-update loop, the collector UAV state estimation xv takes a step forward through the following unconstrained constant-acceleration (discrete) model:

rNk+1 = rNk + (vNk + VN)Δt
vNk+1 = vNk + VN
qNRk+1 = qNRk ⊗ q((ωRk + ΩR)Δt) (6)
ωRk+1 = ωRk + ΩR

In the model represented by Eq. (6), a closed-form solution of q̇ = ½Ω(ω)q is used to integrate the current angular velocity ωR over the quaternion qNR.

At every step, it is assumed that there are unknown linear and angular accelerations, modeled as zero-mean Gaussian processes with known covariances σv and σω, producing impulses of linear and angular velocity: VN = σv²Δt and ΩR = σω²Δt. It is assumed that the map features yi remain static (rigid scene assumption). Then, the state covariance matrix P takes a step forward by

Pk+1 = ∇Fx Pk ∇FxT + ∇Fu Q ∇FuT (7)

where Q is the noise covariance matrix of the process (a diagonal matrix with the position and orientation uncertainties); ∇Fx is the Jacobian of the nonlinear prediction model (Eq. (6)) with respect to the state; and ∇Fu is the Jacobian with respect to the input process represented by the assumed unknown linear and angular velocity impulses.
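A compact, illustrative sketch of this prediction step for the position and velocity part of the state (quaternion handling omitted for brevity) is given below; it is not the authors' implementation, and the impulse standard deviation is an assumed value.

```python
import numpy as np

def predict(x, P, dt, sigma_v=0.5):
    """EKF prediction for a [px, py, pz, vx, vy, vz] sub-state:
    position integrates velocity, velocity is perturbed by an impulse
    with standard deviation sigma_v (assumed value)."""
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)           # p_{k+1} = p_k + v_k * dt
    G = np.zeros((6, 3))
    G[0:3, :] = dt * np.eye(3)             # impulse also shifts the position
    G[3:6, :] = np.eye(3)                  # and the velocity directly
    Q = (sigma_v ** 2) * np.eye(3)         # covariance of the velocity impulse
    x_pred = F @ x
    P_pred = F @ P @ F.T + G @ Q @ G.T     # covariance propagation as in Eq. (7)
    return x_pred, P_pred
```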

[h]Visual features: detection, tracking, and initialization

In order to retrieve the depth of a visual feature, the monocular camera carried by the UAV must observe it at least two times while moving along the flight trajectory. The parallax angle is defined by the two projections of those measurements. The convergence of the depth of a feature depends on the evolution of the parallax angle. In this work, a method based on a stochastic triangulation technique is used for computing an initial hypothesis of depth for each new feature prior to its initialization into the map. The initialization method is based on the authors' previous work.

[h]Detection stage

The visual-based navigation method requires a minimum number of visual features yi observed at each frame. If the number of visual features falls below a threshold, new visual features are initialized into the stochastic map. The Shi-Tomasi corner detector is used for detecting new visual points in the image, which are taken as candidates to be initialized into the map as new features.

When a feature is detected for the first time in frame k, a candidate feature cl is stored:

cl = {zuv, yci, Pyci}

where zuv = [u, v] is the location of the visual feature in pixel coordinates in image frame k, and yci = [tNc0, θ0, φ0]T complies with the inverse observation model yci = h(x̂, zuv). Thus, yci models a 3D ray originating at the optical center of the camera when the feature is first observed and pointing to infinity, with azimuth and elevation θ0 and φ0, respectively, according to previous work. At the same time, Pyci stores, as a 5 × 5 covariance matrix, the uncertainties of yci. Pyci is computed as Pyci = ∇Yci P ∇YciT, where P is the system covariance and ∇Yci is the Jacobian of the observation model h(x̂, zuv) with respect to the state and the coordinates u, v. A square patch around [u, v], generally 11 pixels per side, is also stored to keep the appearance of the landmark.
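A minimal detection-stage sketch in this spirit, using OpenCV's Shi-Tomasi detector and storing the pixel location and appearance patch of each candidate, is shown below; the ray and covariance terms, which depend on the filter state, are omitted, and the parameter values are assumptions.

```python
import cv2

def detect_candidates(gray, existing_uv, max_new=20, patch_half=5):
    """Detect Shi-Tomasi corners and store a candidate record for each:
    pixel location plus an 11x11 appearance patch."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_new,
                                      qualityLevel=0.01, minDistance=15)
    candidates = []
    if corners is None:
        return candidates
    for c in corners.reshape(-1, 2):
        u, v = int(round(c[0])), int(round(c[1]))
        # Skip image borders so the full patch fits, and points already tracked
        if (patch_half <= u < gray.shape[1] - patch_half and
                patch_half <= v < gray.shape[0] - patch_half and
                all(abs(u - eu) + abs(v - ev) > 15 for eu, ev in existing_uv)):
            patch = gray[v - patch_half:v + patch_half + 1,
                         u - patch_half:u + patch_half + 1].copy()
            candidates.append({"uv": (u, v), "patch": patch})
    return candidates
```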

[h]Tracking of candidate features

The active search technique can be used for tracking visual features that form part of the system state. In that case, the information included in the system state and the system covariance matrix is used to define image regions where each visual feature is searched for. For the candidate features, on the other hand, there is not enough information to apply the active search technique. One possibility for tracking the candidate features is to use a general-purpose visual tracking approach. This kind of method uses only the visual input and does not need information about the system dynamics; however, its computational cost is commonly high. In order to improve the computational performance, a different technique is proposed. The idea is to define very thin elliptical search regions in the image, computed from the epipolar constraints.

After the frame k in which the candidate feature was first detected, it is tracked in subsequent frames k + 1, k + 2, …, k + n. The candidate feature is predicted to lie inside an elliptical region Sc. The elliptical region Sc is centered on the point [u, v], and its major axis is aligned with the epipolar line defined by the image points e1 and e2.

Figure. The search region established to match candidate points is constrained to ellipses aligned with the epipolar line.

The points e1 and e2 are computed by projecting the ray stored in cl onto the current image plane using the camera projective model. As no depth information is available, the semiline stored in cl is considered to have its origin at tNc0 and a length d = 1, producing the points e1 and e2. The orientation of the ellipse Sc is determined by αc = atan2(ey, ex), where ey and ex represent the y and x coordinates, respectively, of the vector e = e2 − e1. The size of the ellipse Sc is determined by its major and minor axes, respectively a and b. The ellipse Sc represents a probability region in which the candidate point must lie in the current frame. The proposed tracking method is intended to be used during an initial short period of time, applying cross-correlation operators. During this period, information is gathered in order to compute a depth hypothesis for each candidate point, prior to its initialization as a new map feature.

On the other hand, during this stage the available depth information for the candidate features is not well conditioned. For this reason, it is not easy to determine an adaptive, optimal size for the search region in the image. Therefore, depending on the kind of application, the parameter a is chosen empirically. In the experiments presented in this work, good results were obtained with a = 20 pixels.
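As an illustration of this search-region construction, the sketch below computes the ellipse orientation from the two projected points and tests whether a pixel falls inside the thin, rotated ellipse; the semi-axis values follow the empirical choice quoted above and the helper names are assumptions.

```python
import math

def inside_search_ellipse(uv, center, e1, e2, a=20.0, b=3.0):
    """Return True if pixel uv lies inside the thin ellipse of semi-axes (a, b)
    centered on `center` and aligned with the segment e1 -> e2 (epipolar line)."""
    ex, ey = e2[0] - e1[0], e2[1] - e1[1]
    alpha = math.atan2(ey, ex)                    # ellipse orientation
    dx, dy = uv[0] - center[0], uv[1] - center[1]
    # Rotate the offset into the ellipse-aligned frame
    du = math.cos(alpha) * dx + math.sin(alpha) * dy
    dv = -math.sin(alpha) * dx + math.cos(alpha) * dy
    return (du / a) ** 2 + (dv / b) ** 2 <= 1.0
```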

[h]Depth estimation

Each time a new image location zuv = [u, v] is obtained for a given candidate cl, a depth hypothesis is computed through stochastic triangulation, as described in previous work. In the authors' previous work, it was found that the estimates of the feature depth can be improved by passing the depth hypotheses through a low-pass filter. Thus, only when the parallax αi is greater than a specific threshold (αi > αmin) is a new feature ynew = [pxi, pyi, pzi]T = h(cl, d) added to the system state vector x, as in Eq. (3).
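A minimal two-view triangulation sketch in that spirit is shown below: given the two camera positions and the bearing rays of the two observations, it computes the parallax angle and a depth hypothesis from the resulting triangle. It illustrates the geometry only, not the authors' stochastic formulation or low-pass filtering.

```python
import numpy as np

def depth_hypothesis(c0, c1, ray0, ray1):
    """c0, c1: camera positions (3-vectors); ray0, ray1: unit bearing rays in the
    navigation frame. Returns (parallax_angle, depth along ray0 from c0)."""
    baseline = c1 - c0
    b = np.linalg.norm(baseline)
    parallax = np.arccos(np.clip(np.dot(ray0, ray1), -1.0, 1.0))
    # Angle at the second camera between the back-pointing baseline and ray1
    beta = np.arccos(np.clip(np.dot(-baseline / b, ray1), -1.0, 1.0))
    depth = b * np.sin(beta) / np.sin(parallax)   # law of sines in the triangle
    return parallax, depth
```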

For the proposed method to operate correctly, a minimum number of map features must be maintained inside the camera field of view. For example, in Ref. , it is stated that a minimum of three features is required for the operation of monocular SLAM. In practice, of course, it is better to operate with a higher minimum number of features. This requires continuously initializing new features into the map.

The initialization period that a candidate point takes to become a map feature depends directly on the evolution of the parallax angle. At the same time, the evolution of the parallax depends on the depth of the feature and the movement of the camera. For example, the parallax of a near point can increase very quickly with a small movement of the camera. In practice, it is assumed that the dynamics of the aerial vehicle allow tracking a sufficient number of visual features for initialization and measurement purposes. Experimentally, at least for typical nonaggressive flight maneuvers, we have not found major problems in meeting this requirement.

[h]Prediction‐update loop and map management

Once initialized, the visual features yi are tracked by means of an active search technique, using zero-mean normalized cross-correlation to match a given feature, through the patch that records its appearance, to a given pixel to be found in a search area. This search area is defined using the innovation covariance. The general methodology for the visual update step can be found in previous works, where the details in terms of mathematical representation and implementation are discussed. In this work, the loop-closing problem and the application of SLAM to large-scale environments are not addressed, although it is important to note that SLAM methods that perform well locally can be extended to large-scale problems using different approaches, such as global mapping or global optimization.

On the other hand, when an attitude measurement yNa is available, the system state is updated. Since most low-cost AHRS devices provide their output in Euler angle format, the following measurement prediction model ha = h(x̂v) is used:

ha = [φv, θv, ψv]T

that is, the Euler angles corresponding to the current quaternion estimate qNR. During the initialization period, position measurements yr are incorporated into the system using the simple measurement model hr = h(x̂v):

hr = rN
The regular Kalman update equations are used to update attitude and position whenever required, using the corresponding Jacobian ∇H and the measurement noise covariance matrix R of each observation model.
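For completeness, the standard EKF update referred to above can be sketched generically as follows (not tied to the specific Jacobians of the attitude or position models).

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF update: state x, covariance P, measurement z,
    predicted measurement h = h(x), Jacobian H, measurement noise R."""
    y = z - h                                  # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```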

In this work, the GPS signal is used to incorporate metric information into the system in order to retrieve the metric scale of the estimates. For this reason, it is assumed that the GPS signal is available at least during some period at the beginning of the operation of the system. For proper operation, this period of GPS availability should be long enough to allow the convergence of the depth of some initial features. After this initialization period, the system is capable of operating using only visual information.

In Ref. , it is shown that only three landmarks are required to set the metric scale of the estimates. Nevertheless, in practice, there is often a chance that a visual feature is lost during the tracking process. In this case, it is convenient to choose a threshold n ≥ 3 of features presenting convergence in order to end the initialization process. In this work, good experimental results were obtained with n = 5.

One approach for testing feature convergence is the Kullback distance. Nevertheless, the computational cost of this test is quite high. For this reason, the following criterion is proposed for this purpose:

λmax(Pyi) < di/100

where Pyi is the covariance matrix of the feature yi, extracted from the system covariance matrix P, λmax(Pyi) is its largest eigenvalue, and di is the distance between the feature and the camera. If the largest eigenvalue of Pyi is smaller than a hundredth of the distance between the feature and the camera, then it is assumed that the uncertainty of the visual feature is small enough to consider it an initial landmark.
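A direct translation of this criterion into code could look as follows; the function and argument names are illustrative.

```python
import numpy as np

def feature_converged(P_yi, feature_pos, camera_pos):
    """True if the largest eigenvalue of the 3x3 feature covariance is smaller
    than one hundredth of the feature-to-camera distance (criterion above)."""
    largest_eig = np.linalg.eigvalsh(P_yi)[-1]   # eigvalsh returns ascending order
    distance = np.linalg.norm(np.asarray(feature_pos) - np.asarray(camera_pos))
    return largest_eig < distance / 100.0
```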

In this section, the results related to the proposed visual odometry approach for UAV localization in industrial environments are discussed. Results obtained using synthetic data from simulations, as well as results obtained from experiments with real data, are presented. The experiments were performed in order to validate the performance, accuracy, and viability of the proposed localization method in a real outdoor scenario. Previous works by the authors and other researchers have shown that similar solutions can reach real-time performance, so the main interest here is the development of robust solutions that can be optimized further down the line. The proposed method was implemented in MATLAB©.

[h]Experiments with simulations

The values of the parameters used to simulate the proposed method were taken from the following sources: (i) the parameters of the AHRS were taken from Ref. , (ii) the model used for emulating the GPS error behavior was taken from Ref. , and (iii) the parameters of the monocular camera are the same as those of the real camera used in the real-data experiments.

In order to validate the performance of the proposal, two different scenarios were simulated:

1. The environment of the aerial robot is composed of landmarks uniformly distributed over the ground. The quadrotor performs a circular flight trajectory, at a constant altitude, after taking off.
2. The environment of the aerial robot is composed of landmarks randomly distributed over the ground. The quadrotor performs a figure-eight flight trajectory, at a constant altitude, after taking off.

Figure. Estimation of two flight trajectories obtained with the filtered GPS data (left plots) and with the proposed visual-based navigation scheme (right plots).

In the simulations, the data association problem is not considered, that is, it is assumed that visual features can be detected and tracked without errors. It is also assumed that the aerial robot can be controlled perfectly.

In order to obtain a statistical interpretation of the simulation results, the mean absolute error (MAE) was computed over 20 Monte Carlo runs. The MAE was calculated for the following cases (a small sketch of this computation follows the list):

1. Trajectory estimated using only filtered data from the GPS;
2. Trajectory estimated using visual information in combination with GPS data during the whole simulation;
3. Trajectory estimated using visual information and GPS data, the latter only during the initialization period.
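For reference, the MAE over a set of Monte Carlo runs can be computed as sketched below (illustrative code, not the authors' MATLAB implementation).

```python
import numpy as np

def mean_absolute_error(estimated_runs, ground_truth):
    """estimated_runs: array of shape (runs, timesteps, 3) with estimated positions;
    ground_truth: array of shape (timesteps, 3). Returns the MAE per run and overall."""
    errors = np.linalg.norm(estimated_runs - ground_truth[None, :, :], axis=2)
    mae_per_run = errors.mean(axis=1)
    return mae_per_run, mae_per_run.mean()
```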

In the results presented in Figure, the typical SLAM behavior can be appreciated every time the aerial robot flies near its initial position when the visual-based scheme is used: when the initial visual features are recognized again, the error is minimized. On the other hand, in the case where GPS data are fused into the system during the whole trajectory, an influence of the GPS error can be appreciated when the aerial robot flies near its initial position, and the error is reduced to a lesser extent. These results suggest that, for some scenarios, it is better to navigate relying only on visual information.

Figure. Mean absolute error (MAE) in position computed from two simulations (a and b) over 20 Monte Carlo runs: (upper plot) simulation (a) results; (lower plot) simulation (b) results.

It is also important to note that, for trajectories where the aerial robot moves far away from the initial position, the use of GPS data can be very useful, because it imposes an upper bound on the error drift inherent to a navigation scheme based only on vision.

It is important to note that the errors related to the slowly varying bias of the GPS can be modeled in Eq. (2) by considering a larger measurement noise covariance matrix. However, in experiments, it was found that if this matrix is increased too much, the convergence of the initial visual features can be affected. Future work could include an adaptive approach for fusing GPS data or, for instance, including the GPS bias in the system state so that it can be estimated.

[h]Experiments with real data

The custom-built sample collector UAV was used to perform experiments with real data. In these experiments, the quadrotor was manually radio-controlled to capture data. The data captured from the GPS and AHRS, and the frames from the camera, were synchronized and stored in a ROS data set. The frames, with a resolution of 320 × 240 pixels in gray scale, were captured at 26 fps. The flights of the quadrotor were conducted in industry-like facilities.

The surface of the field is mainly flat and composed of asphalt, grass, and dirt, but the experimental environment also included some small structures and other man-made elements. On average, eight to nine GPS satellites were visible at the same time.
In order to evaluate the estimates obtained with the proposed method, the flight trajectory of the quadrotor was determined using an independent approach. In this case, the position of the camera was computed, at each frame, with respect to a known reference composed of four marks placed on the floor forming a square of known dimensions. The perspective-from-four-points (P4P) technique described in was used for this purpose.
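A comparable camera-pose computation from four known ground marks can be sketched with a generic PnP solver as below; this only illustrates the idea and is not the specific P4P formulation cited. The square side length and function names are assumptions.

```python
import cv2
import numpy as np

# Four ground marks forming a square of assumed side length 1 m,
# expressed in a world frame on the floor plane (z = 0).
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)

def camera_pose_from_marks(image_points, camera_matrix, dist_coeffs):
    """image_points: 4x2 array of pixel coordinates of the marks in the frame.
    Returns the camera position expressed in the world frame of the square."""
    ok, rvec, tvec = cv2.solvePnP(object_points, np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # rotation from world to camera frame
    return (-R.T @ tvec).ravel()      # camera center in world coordinates
```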

As explained earlier, an initialization period was used to incorporate GPS readings in order to set the metric scale of the estimates. After the initialization period, the estimation of the trajectory of the aerial robot was carried out using only visual information.

The same methodology used with the simulated data was employed with the real data. Therefore, the same experimental variants were tested: (i) GPS, (ii) GPS + camera, (iii) camera (with GPS only during the initialization period). All the results were obtained by averaging 10 experimental outcomes. Figure shows the evolution of the estimated flight trajectory over time obtained with each experimental variant. Table shows a summary of these experimental results. In this case, in order to compute the error in position, the trajectory computed with the P4P technique was used as ground truth.
Figure. Flight trajectory estimated with: (i) P4P visual reference, (ii) camera, (iii) camera + GPS, and (iv) GPS. The position is expressed in north coordinates (upper plot), east coordinates (middle plot), and altitude coordinates (lower plot). In every case, an initialization period of 5 s was considered with GPS availability.
Method          NOF              aMAE (m)
GPS             –                1.70 ± 0.77σ
Camera + GPS    56.4 ± 10.2σ     0.21 ± 0.11σ
Camera          57.9 ± 9.3σ      0.20 ± 0.09σ

Table. Summary of results with real data.

NOF stands for the average number of visual features contained within the stochastic map. The aMAE is the
average mean absolute error in position (in meters).

It is worthwhile to note that, analyzing the above results, conclusions similar to those obtained with the simulations can be verified: the exclusive use of GPS can be unreliable for determining the vehicle position in the case of fine maneuvers, and for flight trajectories near the initial reference, the error can be slightly lower when relying only on visual information.

Regarding the application of the proposed method in a real-time context, the number of features maintained in the system state is considerably below the upper bound that should allow real-time performance, for instance by implementing the algorithm in C or C++. In particular, since early works in monocular SLAM such as Davison's, the feasibility of real-time operation of EKF-based methods has been shown for maps composed of up to 100 features.

An industrial system to automate the water sampling process in the outdoor basins of a wastewater treatment plant has been proposed, with novel research to solve the localization-for-navigation problem under unreliable GPS. The architecture of the whole system has been described, specifications at the hardware level have been presented for those elements already fully designed, including the sample collector UAV, and the proposed localization technique has been described and validated with experiments performed on simulated and real data. The localization technique presented can be described as a vision-based navigation and mapping system applied to a UAV.

The proposed estimation scheme is similar to a pure monocular SLAM system, where a single camera is used for concurrently estimating both the position of the camera and a map of visual features. In this case, a monocular camera mounted on the aerial robot and pointing to the ground is used as the main sensory source. On the other hand, in the proposed scheme, additional sensors commonly available in this kind of robotic platform are used to solve the typical technical difficulties of purely monocular systems.

One of the most important challenges regarding the use of monocular vision in SLAM is the difficulty of estimating the depth of visual features. In this case, a method based on a stochastic triangulation technique is proposed for this purpose.

Perhaps the other most relevant challenge regarding the use of monocular vision has to do with the difficulty of retrieving the metric scale of the estimates. In this work, it is assumed that the GPS signal is available during an initial period of time, which is used to set the metric scale of the estimates. After the initialization period, the proposed system is able to estimate the position of the flying vehicle using only visual information.

For some scenarios, it was seen that the exclusive use of filtered GPS data can be unreliable for performing fine maneuvers. This is due to the noise inherent to GPS measurements. In this context, the following conclusions were drawn from the experiments with real data as well as from the simulations:
1. The use of even very noisy GPS data, during a short initial period of time, can be enough to set the metric scale of the estimates obtained by a monocular SLAM system.
2. The integration of GPS measurements can be avoided for flight trajectories near the origin of the navigation frame.

[mh]World War I Experiments: The Dawn of Unmanned Flight


Nowadays, the proliferation of small unmanned aerial systems or vehicles (UAS/Vs), commonly known as drones, coupled with an increasing interest in tools for environmental monitoring, has led to an exponential use of these unmanned aerial platforms for many applications in the most diverse fields of science. In particular, ecologists require data collected at appropriate spatial and temporal resolutions to describe ecological processes. For these reasons, we are witnessing the proliferation of UAV-based remote sensing techniques, because they provide new perspectives on ecological phenomena that would otherwise be difficult to study. Therefore, we propose a brief review regarding the emerging applications of low-cost aerial platforms in the field of environmental sciences, such as assessment of vegetation dynamics and forest biodiversity, wildlife research and management, mapping changes in freshwater marshes, river habitat mapping, and conservation and monitoring programs. In addition, we describe two applications of habitat mapping from UAS-based imagery along the Central Mediterranean coasts as study cases: (1) the upper limit of a Posidonia oceanica meadow was mapped to detect impacted areas, and (2) a high-resolution orthomosaic was used to support underwater visual census data in order to visualize juvenile fish densities and microhabitat use in four shallow coastal nurseries.

Mankind has always been fascinated by the dream of flight; in fact, in many ancient cultures, myths and legends depicted deities with the extraordinary ability to fly like birds. It should be sufficient to recall the Egyptian
winged goddess Isis or the Greek myth of Icarus; even Christian iconography preserves and recovers the figures
of winged beings as intermediaries between man and God, reinterpreting them as angels. From those ancient
times, passing through the Renaissance intuitions of Leonardo Da Vinci, to the first balloon flights of the
Montgolfier brothers in 1783, we came to the early twentieth century which witnessed the first sustained and
controlled flight of a powered, heavier-than-air machine with a pilot aboard. Just over a hundred years have passed since that fateful day, December 17, 1903, when the Wright Flyer took off near Kill Devil Hills, about four miles south of Kitty Hawk, North Carolina, USA. Nowadays the beauty of flying characterizes our daily lives, having become an indispensable tool to move people and things within a few hours to all parts of the world. We can state that the ability to fly has strongly changed our vision of the world.

Long before this first powered flight of the Wright brothers, one of the first recorded uses of unmanned aircraft systems was by the Austrians on August 22, 1849. They launched 200 pilotless balloons, carrying 33 pounds of explosives and armed with half-hour time fuses, against the city of Venice. On May 6, 1896, Samuel P. Langley’s Aërodrome No. 5, a steam-powered pilotless model, was flown successfully along the Potomac River near Washington. During World War I and World War II, radio-controlled aircraft were used extensively for aerial surveillance and for training antiaircraft gunners, and they also served as aerial torpedoes (e.g. in 1917, the Hewitt-Sperry Automatic Airplane, developed by Elmer Sperry, was an early version of today’s aerial torpedo). During the Cold War, the drone was seen as a viable surveillance platform able to capture intelligence in denied areas. Reconnaissance UASs were first deployed on a large scale in the Vietnam War. By the dawn of the twenty-first century, unmanned aircraft systems were used more and more frequently for a variety of missions, especially since the war on terror, becoming a lethal hunter-killer. Due to these historical aspects, the public perception of most UAV applications is still mainly associated with military use, but nowadays the drone concept is being refashioned as a new promise for citizen-led applications with several functions, ranging from monitoring climate change to carrying out search operations after natural disasters, photography, filming, and ecological research.

The interpretation of photos from airborne and satellite-based imagery has become one of the most popular tools for mapping vast surfaces, playing a pivotal role in the habitat mapping, measuring, and counting performed in ecological research, as well as in environmental monitoring of land-use change. However, both satellite and airborne imagery techniques have some disadvantages. For example, piloted aircraft have limitations related to their reliance on weather conditions, flight altitude, and speed, which can affect the possibility of using such methods. In addition, satellite high-resolution data might not be accessible for
aircrafts must be considered in regard to their reliance on weather conditions, flight altitude, and speed that can
affect the possibility to use such method. In addition, satellite high-resolution data might not be accessible for
many developing-country researchers due to financial constraints. Furthermore, some areas such as humid
biotopes and tropic coasts are often obscured by a persistent cloud cover, mostly making cloud-free satellite
images unavailable for a specific time period and location; moreover the temporal resolution is limited by the
availability of aircraft platforms and orbit characteristics of satellites. In addition, the highest spatial resolution
data, available from satellites and manned aircraft, is typically in the range of 30–50 cm/pixel. Indeed, for the
purpose of monitoring highly dynamic and heterogeneous environments, or for real-time monitoring of land-use
change in sensitive habitats, satellite sensors are often limited due to unfavorable revisit times (e.g. 18 days for
Landsat) and spatial resolution (e.g. Landsat and Modis ~30 m/pixel). To address these limitations, new satellite
sensors (Quickbird, Pleiades-1A, IKONOS, GeoEye-1, WorldView-3) have become operational over the past
decade, offering data at finer than 10-m spatial resolution. Such data can be used for ecological studies, but
hurdles such as high cost per scene, temporal coverage, and cloud contamination remain.

Emerging from a military background, there is now a proliferation of small civilian unmanned aerial systems or vehicles (UAS/Vs), commonly known as drones. Modern technological advances such as long-range transmitters,
increasingly miniaturized components for navigation and positioning, and enhanced imaging sensors have led to
an upsurge in the availability of unmanned aerial platforms both for recreational and professional uses. These
emerging technologies may provide unprecedented scientific application in the most diverse fields of science. In
particular, UAVs offer ecologists a new way to responsive, timely, and cost-effective monitoring of
environmental phenomena, allowing the study of individual organisms and their spatiotemporal dynamics at
close range.

Two main categories of unmanned aerial vehicles (UAVs) exist: rotor-based copter systems and fixed-wing platforms. Rotor-wing units have hovering and VTOL (Vertical Take-Off and Landing) capabilities, while fixed-wing units tend to have longer flight durations and range. However, a more detailed classification can be made according to size, operating range, operational flight altitude, and duration. For additional information regarding the classification of UAVs, please refer to Refs.

Size: Very large (3–8 tons)
Nomenclature: HALE (High Altitude, Long Endurance)
Specifics: Fly at the highest altitude (>20 km), with operating ranges that extend thousands of km, long flight time (over 2 days), very heavy payload capacity (more than 900 kg in under-wing pods)
Operational requirements: Prohibitively expensive for most users (high maintenance, sensor, and crew training costs), long runway for takeoff and landing, ground-station support and continuous air-traffic control issues, challenging deployment/recovery and transport
Application areas: Assessments of climate variable impacts at global scales, remote sensing collection, and earth/atmospheric science investigations
Examples: Global Hawk, Qinetiq Zephyr, NASA PathFinder

Size: Large (1–3 tons)
Nomenclature: MALE (Medium Altitude, Long Endurance)
Specifics: Medium altitude (3–9 km), over 12 h flight time with broad operating range (>500 km), heavy payload capacity (~100 kg internally, external loads of 45 up to 900 kg)
Operational requirements: Similar requirements as for HALE but with reduced overall costs
Application areas: Near-real-time wildfire mapping and surveillance, investigation of storm electrical activity and storm morphology, remote sensing and atmospheric sampling, arctic surveys, atmospheric composition and chemistry
Examples: NASA Altus II, NASA Altair, NASA Ikhana, MQ-9 Reaper (Predator B), Heron 2, NASA SIERRA

Size: Medium (25–150 kg)
Nomenclature: LALE (Low Altitude, Long Endurance), LASE (Low Altitude, Short Endurance)
Specifics: Fly at moderate altitude (1–3 km) with operating ranges that extend from 5 to 150 km, flight time over 10 hours, moderate payload capacity (10–50 kg)
Operational requirements: Reduced costs and requirements for takeoff and landing compared to MALE (hand-launched and catapult-launched platforms), simplified ground-control stations
Application areas: Remote sensing, mapping, surveillance and security, land cover characterization, agriculture and ecosystem assessment, disaster response and assessment
Examples: ScanEagle, Heron 1, RQ-11 Raven, RQ-2 Pioneer, RQ-14 Dragon Eye, NASA J-FLiC, Arcturus T-20

Size: Small, mini, and nano (less than 25 kg for small UAVs, up to 5 kg for mini, and less than 5 kg for nano)
Nomenclature: MAV (Micro) or NAV (Nano) Air Vehicles
Specifics: Fly at low altitude (<300 m), with short flight duration (5–30 min) and range (<10 km), small payload capacity (<5 kg)
Operational requirements: Low costs and minimal takeoff/landing requirements (hand-launched), often accompanied by ground-control stations consisting of laptop computers, flown by flight planning software or by direct RC (Visual Line Of Sight or Beyond Visual Line Of Sight when allowed), usually fixed-wing (small UAVs) and copter-type (mini and nano UAVs)
Application areas: Aerial photography and video, remote sensing, vegetation dynamics, disaster response and assessment, precision agriculture, forestry monitoring, geophysical surveying, photogrammetry, archeological research, environmental monitoring
Examples: AR-Parrot, BAT-3, SenseFly eBee, DJI Inspire 3, DJI Phantom 4, Draganflyer X6, Walkera Voyager 4

Table. Summary of UAV classes with examples.

In this brief review and in our case studies, we only discuss and illustrate the use of small and mini UAVs, because these portable and cost-effective platforms have shown great potential to deliver high-quality spatial data to a range of science end users.

[h]Some recent ecological applications of lightweight UASs

Although lightweight UASs represent only a small fraction of the full list of unmanned systems capable of
performing the so-called “three Ds” (i.e. dull, dirty, or dangerous missions), they have been used in a broad range
of ecological studies.

[h]Forest monitoring and vegetation dynamics

Tropical forests play a critical role in the global carbon cycle and harbor around two-thirds of all known species.
Tropical deforestation is a major contributor to biodiversity loss, so an urgent challenge for conservationists is to
be able to accurately assess and monitor changes in forests, including near real-time mapping of land cover,
monitoring of illegal forest activities, and surveying species distributions and population dynamics.

Koh and Wich provided, with a simple RC fixed-wing UAV, helpful data for the monitoring of the tropical forests of Gunung Leuser National Park in Sumatra, Indonesia. In fact, the acquired images allowed the detection of different land uses, including oil palm plantations, maize fields, logged areas, and forest trails.
UAVs have also been used for the successful monitoring of streams and riparian restoration projects in inaccessible areas on Chalk Creek near Coalville, as well as to perform nondestructive, nonobtrusive sampling of the dwarf bear-claw poppy (Arctomecon humilis), a short-lived perennial herb of the soil crust community which is very sensitive to off-road vehicle (ORV) traffic. A fixed-wing platform and a quadcopter (Phantom 2 Vision+, DJI) were used to acquire high-spatial-resolution photos of an impounded freshwater marsh, demonstrating that UAVs can provide a time-sensitive, flexible, and affordable option to capture dynamic seasonal changes in wetlands, in order to collect effective data for determining the percent cover of floating and emergent vegetation.

Dryland ecosystems provide ecosystem services (e.g. food, but also water and biofuel) that directly support 2.4 billion people and cover 40% of the terrestrial area; they characteristically have distinct vegetation structures that are strongly linked to their function. For these reasons, Cunliffe et al. acquired aerial photographs using a 3D Robotics Y6 hexacopter equipped with a global navigation satellite system (GNSS) receiver and a consumer-grade digital camera (Canon S100). They then processed these images using structure-from-motion (SfM) photogrammetry in order to produce three-dimensional models describing the vegetation structure of these semi-arid ecosystems. This approach yielded ultrafine (<1 cm²) spatial resolution canopy height models at landscape level (10 ha). The study demonstrated how ecosystem biotic structures can be efficiently characterized at cm scales by processing aerial photographs captured from an inexpensive lightweight UAS, providing an appreciable advance in the tools available to ecologists. Getzin et al. demonstrated how fine-spatial-resolution photography (7-cm pixel size) of canopy gaps, acquired with the fixed-wing UAV ‘Carolo P200,’ can be used to assess the floristic biodiversity of the forest understory. Also in riparian contexts, UAS technology provides a useful tool to quantify riparian terrain, to characterize riparian vegetation, and to identify standing dead wood and canopy mortality, as demonstrated by Dunford et al.

[h]Wildlife research

Often population ecology requires time-series and accurate spatial information regarding habitats and species
distribution. UASs can provide an effective means of obtaining such information. Jones et al. used a 1.5-m wingspan UAV equipped with an autonomous control system to capture high-quality, progressive-scan video of a number of landscapes and wildlife species (e.g. Eudocimus albus, Alligator mississippiensis, Trichechus manatus). Israel dealt with the problem of roe deer fawns (Capreolus capreolus) being mortally injured by mowing machinery and demonstrated a technically sophisticated ‘detection and carry away’ solution to avoid these accidents. In fact, he presented a UAV-based (octocopter Falcon-8 from Ascending Technologies) remote sensing system using thermal imaging for the detection of fawns in meadows.

Considering that, in butterflies, imagoes and their larvae often demand specific and diverging microhabitat structures and resources, Habel et al. took high-resolution aerial images using a DJI Phantom 2 equipped with an H4-3D Zenmuse gimbal and a lightweight digital action camera (GoPro HERO 4). These aerial pictures, coupled with information on the larvae's habitat preferences from field observations, were used to develop a habitat suitability model to identify the preferential microhabitats of two butterfly larvae inhabiting calcareous grassland.

Moreover, UAVs may offer advantages for studying marine mammals; in fact, Koski et al. used the Insight A-20 equipped with an Alticam 400 (a camera model developed for the ScanEagle UAV) to successfully detect simulated whale-like targets, demonstrating the value of such a methodology for performing marine ecological surveys. In a similar manner, Hodgson et al. captured 6243 aerial images in Shark Bay (Western Australia) with a ScanEagle UAS, equipped with a digital SLR camera, of which 627 contained dugongs, underlining that UAS systems may not be limited by sea state conditions in the same manner as sightings from manned surveys.
Whitehead et al. described efforts to map the annual sockeye salmon run along the Adam’s River in southern
British Columbia, providing an overview of salmon locations through high-resolution images acquired with a
lightweight fixed-wing UAV.

[h]Case studies along temperate Mediterranean coasts

Although over the past decade there has been an increasing interest in tools for ecological applications such as ultrahigh-resolution imagery acquired by small UAVs, few studies have used them for environmental monitoring and classification of marine coastal habitats. Indeed, in this section we outline two case studies regarding the
application of a small UAV for mapping coastal habitats. These applications represent a cross section of the
types of applications for which small UAVs are well-suited, especially when one considers the ecological aspect
related to marine species biology and habitat monitoring. Despite the fact that there are a number of advanced
sensors that have been developed and many proposed applications for small UASs, here we carried out our
studies using a commercially available and low-cost camera. As such, it can be considered a simple, inexpensive,
and replicable tool that can be easily implemented in future research which could also be carried out by
nonexperts in the field of UASs technologies.

For each survey we used a modified rotary-wing platform (Quanum Nova CX-20, Figure), which included an integrated autopilot system (APM v2.5) based on the ‘ArduPilot Mega’, developed by an online community. The APM includes a computer processor, a geographic positioning system module (Ublox Neo-6 GPS), a data logger with an inertial measurement unit (IMU), pressure and temperature sensors, an airspeed sensor, a triple-axis gyro, and an accelerometer. This quadcopter is relatively inexpensive (<$500) and lightweight (~1.5 kg). The camera used to acquire the imagery was a consumer-grade RGB, Full-HD action camera (GoPro Hero 3 Black Edition; sensor: Complementary Metal-Oxide-Semiconductor; sensor size: 1/2.3″ (6.17 × 4.55 mm); pixel size: 1.55 μm; focal length: 2.77 mm). In addition, a brushless 3-axis camera gimbal (Quanum Q-3D) was installed to ensure good stabilization of the acquired images, avoiding motion blur. Both drone and gimbal were powered by a ZIPPY 4000 mAh (14.8 V) 4S 25C LiPo battery, which allowed a maximum flying time of about 13 min, depending on wind. In addition, by combining the APM with the open-source mission planner software (APM Planner), the drone can perform autonomous flight paths and survey grids.

Figure.The Quanum Nova CX-20 Quadcopter ready to fly just before a coastal mapping mission.

The marine phanerogam P. oceanica (L.) Delile is the most widespread seagrass in the Mediterranean Sea. It plays a pivotal role in the ecosystems of shallow coastal waters in several ways by (i) providing habitat for juvenile stages of commercially important species; (ii) significantly reducing coastal erosion, promoting the deposition of particles with its dense leaf canopy and thick root-rhizome layer (‘matte’); and (iii) offering a nursery area for many fish and invertebrate species. Although P. oceanica is known to be a reef-building organism capable of long-term sediment retention, its meadows are experiencing a steep decline throughout the Mediterranean Sea. Along the Mediterranean coasts, the decline of seagrasses on a large spatial scale has been attributed to anthropogenic disturbances such as illegal trawling, fish farming, construction of marinas, and sewage discharge and pollution. In contrast, on a smaller spatial scale, particularly in coastal areas subject to intense recreational activity, seagrasses are impacted by mechanical damage caused by boat anchoring or moorings. Major damage to seagrasses seems to be caused by dragging anchors and anchor chains scraping along the bottom as boats swing back and forth, generally resulting in dislodgement of plant rhizomes or leaves. In most published works the mapping of P. oceanica meadows has been based on satellite or airborne imagery, multibeam bathymetry, and side-scan sonar mosaics. Remote sensing data from satellites and piloted aircraft can be used to map large areas, but they either do not have adequate spatial resolution or are too expensive for mapping fine-scale features, whereas small UAVs are particularly well suited to mapping the upper limits of meadows at a smaller spatial scale (i.e. 1–5 km).

The case study for this application was carried out along a sandy cove (Arenella bay) with a well-established P. oceanica meadow, approximately 2 km north of Giglio Porto, in late November 2016. Our goals were to show the high level of detail that can be reached with UAV-based imagery with respect to other freely available remote sensing techniques, and to detect impacted areas of the meadow. In this study site there are two principal sources of disturbance: a direct adverse effect on the meadow due to boat anchoring during the summer season, and the presence of a granite quarry (no longer operational) that in the past may have caused an increase in sedimentation rates, resulting in a reduction of cover and shoot density.

We set the GoPro Hero 3 camera to take photos every 2 s (time lapse mode) in Medium Field of View (M FOV:
7 Megapixel format, 3000 × 2250 pixels), and we set the camera pointing 90° downward with auto white
balance. Flight speeds were maintained between 5 and 7 m/s to allow for 75% in-track overlap. The drone was
programmed to fly at 30 m above mean sea level in order to obtain a Ground Sampling Distance (GSD) of ~2.5 cm per pixel, according to the formula

GSD = (SW × FH × 100) / (FL × IMW)

where GSD is the ground sample distance (i.e. the photo resolution on the ground, in cm/pixel), SW is the sensor width (mm), FH is the flight height (m), FL is the focal length of the camera (mm), and IMW is the image width (pixels). By multiplying the GSD by the image size (width and height), the resulting photo footprint was 66 × 50 m.
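As a quick sanity check of these figures, the short sketch below (our own illustrative addition, written in Python) plugs the camera and flight parameters reported above into the GSD formula and also derives the maximum ground speed compatible with the 2 s time-lapse and 75% in-track overlap; it assumes the shorter image side lies along the flight direction.

```python
# Reproduce the GSD, footprint and overlap figures for the GoPro Hero 3 setup
# described in the text. Parameter values are taken from the text; the speed
# check at the end is an illustrative addition, not part of the original study.

sensor_width_mm = 6.17     # SW: sensor width
focal_length_mm = 2.77     # FL: focal length
image_width_px = 3000      # IMW: image width (Medium FOV, 7 MP)
image_height_px = 2250
flight_height_m = 30.0     # FH: planned flight altitude

# GSD in cm/pixel: GSD = (SW * FH * 100) / (FL * IMW)
gsd_cm = (sensor_width_mm * flight_height_m * 100.0) / (focal_length_mm * image_width_px)

# Photo footprint on the ground (m): GSD multiplied by the image size
footprint_w_m = gsd_cm * image_width_px / 100.0
footprint_h_m = gsd_cm * image_height_px / 100.0
print(f"GSD: {gsd_cm:.2f} cm/px, footprint: {footprint_w_m:.0f} x {footprint_h_m:.0f} m")

# With photos every 2 s and 75% in-track overlap (shorter side along track),
# the drone may advance at most 25% of the footprint height between shots.
max_speed_ms = footprint_h_m * (1.0 - 0.75) / 2.0
print(f"Maximum speed for 75% overlap: {max_speed_ms:.1f} m/s")  # close to the 5-7 m/s reported
```

Running the sketch gives a GSD of about 2.2 cm/px and a footprint of roughly 67 × 50 m, consistent with the survey values quoted above.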

The bay (1.96 ha) was flown in 16 strips with a total flight duration of 6.34 min. In total, the survey yielded 184
images, which were processed in Adobe Photoshop Lightroom 5.0 (Adobe Systems Incorporated, San Jose,
California, USA) using the lens correction algorithm for the GoPro HERO 3 Black Edition camera, in order to
remove lens distortion (fish-eye effect). Since high spatial accuracy was not required for this application, five ground-control points (GCPs) were placed at accessible locations along the coast (on easily recognizable natural features such as rocks) and surveyed with a handheld GPS + GLONASS receiver (Garmin Etrex 30), leading to horizontal errors of ±5 m. Subsequently, the images were used to produce a high-resolution orthoimage mosaic in Agisoft Photoscan 1.0. This structure from motion (SfM) package allows a high degree of automation and makes it possible for nonspecialists to produce accurate orthophoto mosaics in less time than it would take using conventional photogrammetric software.

Figure shows how the high spatial resolution of the RGB imagery acquired from the UAV allowed us to detect the impacted areas of the meadow. In particular, we identified 1437 m² of dead ‘matte’ by analyzing satellite imagery, 1686 m² with Bing aerial orthophotos, and 1711 m² with the UAV-based orthomosaic. Due to the higher spatial resolution of the UAS imagery, we were able to detect even the smallest areas where dead ‘matte’ was exposed as a result of meadow degradation.
Figure.The bay of Arenella with impacted Posidonia oceanica meadow mapped using three different free/low-cost
remote sensing techniques: (a) Google Earth Satellite image; (b) Bing aerial orthoimage; and (c) UAV-based
orthomosaic. The enclosed area highlighted by the red box is shown at greater scale (1:100) in order to visualize the
increasing level of detail. In (c), red dots represent the position of GCPs.

The imagery acquired provides a new perspective on P. oceanica mapping and clearly shows how comparative
measurements and low-cost monitoring can be made in shallow coastal areas. In fact, in this kind of
environment, anthropogenic drivers such as boat mooring and creation of coastal dumping areas are significantly
affecting ecosystem structure and function. In addition, considering that drone surveying is relatively inexpensive, regular time-series monitoring can be adopted to assess the evolution of coastal meadows.

[h]Case study 2: integration of underwater visual census (UVC) data with UAV-based aerial maps for the characterization
of juvenile fish nursery habitats

Most demersal fishes have complex life cycles, in which the adult life-stage takes place in open deeper waters,
while juvenile life-stages occur in benthic inshore habitats. The presence of suitable habitats becomes an
essential requirement during the settlement of juvenile stages. In fact, these habitats are the key to success for the
conclusion of early life phases, providing shelter from predators and abundance of trophic resources. As a result
of this site-attachment, juveniles exhibit systematic patterns of distribution, influenced by the availability of
microhabitats. Habitat identification has generally been achieved by underwater visual census (UVC) techniques, which have been considerably improved in recent years by underwater video technologies. However, these studies require deep knowledge of the environment in addition to considerable effort in terms of time and experienced staff. Small UASs potentially offer low-cost support to conventional UVC techniques, providing a time-saving tool aimed at improving data from underwater surveys. Indeed, our aim is to couple UVC data (e.g. numbers of juvenile fish) with remote sensing data (high-resolution UAV-based imagery) and to extrapolate habitat features from image analysis, allowing a considerable saving of both time and effort, especially for underwater operators.

The case study for this application involved the same UAS used in the previous example, an underwater
observer, and was focused on a common coastal fish species: the white seabream. D. sargus is abundant in the
Mediterranean and dominates fish assemblages in shallow rocky infralittoral habitats. It inhabits rocky bottoms
and P. oceanica beds, from the surface to a depth of 50 m. In common with other sparid fishes, it is an
economically important species of interest for fisheries and aquaculture.

Between early May and late June 2016, juvenile white seabream were censused from Cannelle Beach to Cape
Marino, along a rocky shoreline (~1.5 km long) south of Giglio Porto. Counts of fish were obtained from two
visual census surveys per month: the diver swam slowly along the shoreline (from 0 to 6 m depth) and recorded
the numbers of individuals encountered while snorkeling. When juvenile fish or shoals of settlers (size range 10–
55 mm) were observed, the abundance and size of each species were recorded on a plastic slate. In addition, the
diver towed a rigid marker buoy with a handheld GPS unit with WAAS correction in order to accurately record
the position of each shoal of fish.

Two mapping missions were successfully carried out in late July 2016, along the same shoreline, in order to
produce a high-resolution aerial map of the coast.

Figure.The high-resolution (2.5 cm/pix) mosaic representing the rocky coast (~1.5 km) south of Cannelle Beach, derived from two mapping missions (204 images) of the Quanum Nova CX-20.

The quadcopter flew at 40 m, yielding a ground resolution of ~2.5 cm/pix. The two surveys covered 1446 m of shoreline and took approximately 16 min, resulting in 204 images. Since many stretches of the coast were inaccessible, and GCPs could not be physically measured on the ground, we used a direct georeferencing approach. The GPS coordinates of the camera are determined using the UAV onboard GPS receiver, so that the GPS position at the moment of each shot can be written to the EXIF header of the corresponding image after estimating the time offset with the Mission Planner (v.1.3.3 or higher) geotagging tool (for better results, preflight synchronization of the camera's internal clock with GPS time is recommended). In addition, these measured values (from the onboard GPS) may be useful for estimating the camera's approximate exterior orientation parameters, speeding up the photogrammetric workflow (bundle adjustment) in Agisoft Photoscan. However, since they are typically captured at relatively low accuracy by the UAV's consumer-grade GPS, we also registered the final orthomosaic by importing it as a raster image (TIFF format) into ArcMap 10.1. We aligned the raster with an existing 1:5000 scale aerial orthophoto using 8 control points and a 2nd-order polynomial transformation. Afterward, the control points were used to check the reliability of the transformation: the total error was computed as the root mean square of all the residuals (RMSE), a value that describes how consistent the transformation is across the different control points. The RMSE achieved was 0.15 pixels, well under the conventional requirement of less than 1 pixel. The successful georegistration allowed direct visualization on the map of the UVC data (i.e. lat/long coordinates of fish shoals) after downloading the GPS eXchange (.gpx) information from the GPS unit. These GPS data were imported as a point shapefile in ArcMap using the DNRGPS 6.1.0.6 application.
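The registration check described above reduces to a root mean square of the control-point residuals. The minimal sketch below is our own illustration; the residual values are hypothetical placeholders, since the real ones come from the ArcMap georeferencing report.

```python
import math

# Residuals (dx, dy) of the control points after the 2nd-order polynomial
# transformation, expressed in pixels. Hypothetical values for illustration only.
residuals = [(0.10, -0.05), (-0.12, 0.08), (0.05, 0.14), (-0.07, -0.11),
             (0.18, 0.02), (-0.03, -0.16), (0.09, 0.10), (-0.13, 0.04)]

# RMSE = square root of the mean squared residual magnitude
rmse = math.sqrt(sum(dx * dx + dy * dy for dx, dy in residuals) / len(residuals))
print(f"RMSE: {rmse:.2f} px")  # the study reports 0.15 px, well below the 1 px threshold
```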

As all the juvenile fish positions, with their relative abundances (number of fish per shoal), are now available in a GIS environment, it is a straightforward process to model them with interpolation methods, available, for example, in ArcMap. The point data, measured from irregularly spaced locations, were converted into continuous surfaces using an inverse distance weighting (IDW) method and then rasterized into a grid format. We used the local inverse distance weighting interpolator because its underlying concept (i.e. the assumption that each point has a local influence that diminishes with distance) is relevant for juvenile fish, where closer points are thought to be similar as a result of the habitat characteristics. The two figures below show the spatial distribution of D. sargus juvenile density collected through underwater visual census, after IDW interpolation. GIS data integration allowed us to identify two important aspects: (1) four areas with high densities of juveniles were clearly visible, suggesting that such zones serve as nursery grounds for juvenile white seabreams; and (2) as the juveniles grew larger in size (>40 mm), dispersal out of the nursery areas was evident and the preference for a given habitat type decreased, leading to an increase in the number of shoals but with lower densities within shoals.
Figure.Spatial distribution of small-sized (10–40 mm) juvenile D. sargus. IDW-interpolated fish density after UVC data collection in May 2016. The four areas (a–d) with the highest densities of juveniles are highlighted by red circles.

Figure.Spatial distribution of large-sized (41–55 mm) juvenile D. sargus. IDW-interpolated density after UVC data
collection in late June 2016.
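For readers who want to reproduce the interpolation step outside ArcMap, the sketch below implements a minimal inverse distance weighting interpolator in Python; the shoal coordinates, abundances and grid parameters are hypothetical placeholders, not data from the study.

```python
import numpy as np

def idw_grid(xs, ys, values, grid_x, grid_y, power=2.0):
    """Inverse distance weighted interpolation of scattered points onto a grid.

    Each cell value is a weighted average of all observations, with weights that
    diminish with distance, mirroring the 'local influence' assumption of IDW.
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.zeros_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(xs - gx[i, j], ys - gy[i, j])
            if np.any(d < 1e-9):               # grid node coincides with a sample point
                out[i, j] = values[np.argmin(d)]
            else:
                w = 1.0 / d ** power
                out[i, j] = np.sum(w * values) / np.sum(w)
    return out

# Hypothetical shoal positions (m, local coordinates) and abundances (fish per shoal)
xs = np.array([10.0, 120.0, 340.0, 610.0, 890.0])
ys = np.array([15.0, 40.0, 22.0, 55.0, 30.0])
counts = np.array([35.0, 12.0, 60.0, 8.0, 22.0])

density = idw_grid(xs, ys, counts,
                   grid_x=np.linspace(0, 1000, 200),
                   grid_y=np.linspace(0, 100, 20))
print(density.shape)  # (20, 200) grid, ready to be rasterized and mapped
```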

These four nurseries were investigated through image analysis: we performed a Maximum Likelihood Classification, followed by a postprocessing workflow and manual polygon editing for edge refinement, in order to highlight the most important habitat features, such as substrate type and extent. In fact, due to the high site-attachment of juvenile fish, the presence of specific habitats plays a key role in the development of early life-stages; hence the fine characterization of these environments becomes an important aspect of ecological studies focused on juvenile fish. However, underwater data collection by SCUBA operators requires a large effort to acquire such detailed information; therefore, UAS-based remote sensing techniques become useful, reliable, and feasible tools for mapping coastal fish habitats and for supporting ecological investigations.
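As a rough illustration of the classification step (not the exact ArcMap workflow used in the study), a per-pixel Gaussian maximum likelihood classifier of the kind applied to RGB orthomosaics can be sketched as follows; the class names and training samples here are hypothetical.

```python
import numpy as np

def fit_classes(training):
    """Estimate a Gaussian (mean, covariance) per class from training pixels.

    training: dict mapping class name -> (n_pixels, 3) array of RGB samples,
    e.g. pixels digitized from training polygons over the orthomosaic.
    """
    params = {}
    for name, samples in training.items():
        mean = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)   # regularized covariance
        params[name] = (mean, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return params

def classify(pixels, params):
    """Assign each RGB pixel to the class with the highest Gaussian log-likelihood."""
    names = list(params)
    scores = np.empty((pixels.shape[0], len(names)))
    for k, name in enumerate(names):
        mean, inv_cov, log_det = params[name]
        d = pixels - mean
        scores[:, k] = -0.5 * (log_det + np.einsum("ij,jk,ik->i", d, inv_cov, d))
    return np.array(names)[np.argmax(scores, axis=1)]

# Hypothetical RGB training samples (0-255) for three habitat classes
rng = np.random.default_rng(0)
training = {
    "sand":      rng.normal([200, 190, 160], 10, size=(100, 3)),
    "rock":      rng.normal([110, 105, 95], 12, size=(100, 3)),
    "posidonia": rng.normal([40, 70, 60], 8, size=(100, 3)),
}
params = fit_classes(training)
print(classify(np.array([[195.0, 188.0, 158.0], [45.0, 72.0, 61.0]]), params))
```

In practice, the automatic labels are then refined by manual polygon editing, as described above.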

Nursery | Substrata type | Habitat description | Habitat cover (m²) | Percent cover (%) | Depth (m) | Total extent (m²)
Sandy cove (a) | Sand | Granite coarse sand | 243.9 | 36.2 | 0–3 | 674.4
Sandy cove (a) | Rock | Large- (mean ± SD diameter: 3.4 ± 16) and medium-sized (mean ± SD diameter: 0.9 ± 0.3) boulders with photophilic algae biocenosis | 382.5 | 56.7 | |
Sandy cove (a) | Posidonia oceanica | Small patches on sand | 48 | 7.1 | |
Rocky cove (b) | Sand | Granite coarse sand | 11.6 | 3.7 | 0–3.5 | 317.8
Rocky cove (b) | Rock | Small-sized (mean ± SD diameter: 0.6 ± 0.2) blocks and pebbles with photophilic algae biocenosis | 306.2 | 96.3 | |
Small port (c) | Sand | Fine sand and mud | 218.7 | 23.8 | 0–2.8 | 918.3
Small port (c) | Rock | Cranny rock with semisciaphilic algae and isolated boulders on soft sediment | 402.2 | 43.8 | |
Small port (c) | Debris | Dead P. oceanica leaves on mud | 297.4 | 32.4 | |
Rocky/sandy cove (d) | Sand | Sandy patches | 129.1 | 5.1 | 0–5.5 | 2521.2
Rocky/sandy cove (d) | Gravel and pebbles | Small- and medium-sized pebbles on sand | 25.2 | 1 | |
Rocky/sandy cove (d) | Rock | Cranny rock with photophilic algae biocenosis and isolated boulders | 1698.9 | 67.4 | |
Rocky/sandy cove (d) | Posidonia oceanica | Posidonia meadow and ‘matte’ | 621 | 24.6 | |
Rocky/sandy cove (d) | Debris | Dead P. oceanica leaves on sand | 47 | 1.9 | |
Table.Main habitat features characterizing the four nursery areas (a–d).

Measures are derived from high-resolution mosaic image analysis in ArcMap 10.1.
Figure.Thematic maps of the four nursery areas derived from Maximum likelihood classification and manual editing.
Different colors represent main habitat types.
In this brief review we have provided an overview of ecological studies carried out with small drones. Through case studies we demonstrated that UAV-acquired imagery has substantial potential to revolutionize the study of coastal ecosystem dynamics. The future of UAS applications looks very promising given their relatively low cost with respect to the benefits obtained. In fact, the field of ecology is severely hindered by the difficulties of acquiring appropriate data, and particularly
data at fine spatial and temporal resolutions, at reasonable costs. As demonstrated in this study, unmanned aerial vehicles
offer ecologists new opportunities for scale-appropriate measurements of ecological phenomena providing land cover
information with a very high, user-specified resolution, allowing for fine mapping and characterization of coastal habitats.
Although the camera equipment used herein only captures three color (RGB) channels with relatively low resolution (max
16 megapixel), it was possible to distinguish impacted areas in sensitive habitat types, as well as preferred sites for juvenile
fish species. Moreover, high-spatial resolution data derived from UAVs combined with traditional underwater visual
census techniques enable the direct visualization of field data into geographic space bringing spatial ecology toward new
perspectives. High-resolution aerial mosaics allow rapid detection of key habitats, and thus can be used to identify areas of
high relevance for species protection and areas where management action should be implemented to improve or maintain
habitat quality. UASs are potentially useful to investigate population trends and habitat use patterns, and to assess the effect
of human activities (e.g. tourism, pollution) on abundance, particularly in coastal and shallow habitats, where visibility
enables animal detection from the surface, as demonstrated for elasmobranch species in coral reef habitats. Finally, although the flexibility of UASs has the potential to revolutionize the way we address and solve ecological problems, we must consider government approvals, navigational stipulations and social implications that impose restrictions on the use of UASs before undertaking research projects involving them.

Chapter 2: "The Role of UAVs in World War II (1939-1945)"


[mh]UAV Contributions to the War Effort
UAVs are aircraft that are guided autonomously, by remote control, or by both means and that carry some
combination of sensors, electronic receivers and transmitters, and offensive ordnance. They are used for strategic
and operational reconnaissance and for battlefield surveillance, and they can also intervene on the battlefield—
either indirectly, by designating targets for precision-guided munitions dropped or fired from manned systems, or
directly, by dropping or firing these munitions themselves.

The earliest UAVs were known as remotely piloted vehicles (RPVs) or drones. Drones were small radio-
controlled aircraft first used during World War II as targets for fighters and antiaircraft guns. They fell into two
categories: small, inexpensive, and often expendable vehicles used for training; and, from the 1950s, larger and
more sophisticated systems recovered by radio-controlled landing or parachute. The vehicles were typically fitted
with reflectors to simulate the radar return of enemy aircraft, and it soon occurred to planners that they might
also be used as decoys to help bombers penetrate enemy defenses.

It also occurred to planners that RPVs could be used for photographic and electronic reconnaissance. One result of this
idea was the AQM-34 Firebee, a modification of a standard U.S. target drone built in various versions since about 1951 by
the Ryan Aeronautical Company. First flown in 1962, the reconnaissance Firebee saw extensive service in Southeast Asia
during the Vietnam War. It was also used over North Korea and, until rapprochement in 1969, over the People’s Republic
of China. A swept-wing, turbojet-powered subsonic vehicle about one-third the size of a jet fighter, the AQM-34 penetrated
heavily defended areas at low altitudes with impunity by virtue of its small radar cross section, and it brought back
strikingly clear imagery. Firebees fitted with receivers to detect electronic countermeasures returned intelligence about
Soviet-built surface-to-air missiles that enabled American engineers to design appropriate detection and jamming
equipment.
AQM-34s operated with the limitations of 1960s technology: they carried film cameras, were launched from underwing
pylons on a C-130 Hercules transport plane, and were recovered by parachute—snagged from the air by a harness hung
from a helicopter. The full advantages of UAVs were to remain unexploited on a large scale until the 1980s, when reliable
miniaturized avionics combined with developments in sensors and precision-guided munitions to increase the capabilities
of these vehicles dramatically. One critical development was small high-resolution television cameras carried in gimbaled
turrets beneath a UAV’s fuselage and remotely controlled via a reliable digital downlink and uplink. Often, the vehicles
also carried a laser designator for homing munitions. Global positioning system (GPS) sensors provided precise location
information for both the UAVs and their guided munitions. Employing these new technologies, the United States has
fielded strategic-range UAVs, using communications satellites to relay control signals and sensor readouts between UAVs
and control centres over global distances. For instance, in 2003 Ryan produced the first of a series of RQ-4 Global Hawk
UAVs. The Global Hawk is capable of carrying a wide array of optical, infrared, and radar sensors and takes off from and
lands on a runway. Its service ceiling of 65,000 feet (20,000 metres), its relatively small size, and the reach of its sensors
render it effectively immune to surface-based defensive systems. Prototype Global Hawks were pressed into wartime use
over Afghanistan in 2002 and over Iraq as early as 2003. They are currently the most important strategic-range UAVs in
service.
The advantages of strategic UAVs notwithstanding, the emergent technologies described above were first exploited in war
by Israeli battlefield UAVs. The first of these was the Tadiran Mastiff, a twin-boom aircraft introduced in 1975 that
resembled a large model airplane weighing just over 90 kg (200 pounds) with a boxy fuselage and a pusher propeller driven
by a small piston engine. It could be catapulted from a truck-mounted ramp, launched by rocket booster, or operated from a
runway. The Mastiff and the larger but similar Scout, produced by Israeli Aircraft Industries (IAI), proved effective in
identifying and locating surface-to-air missiles and marking them for destruction during hostilities in Lebanon in 1982. The
U.S. Marine Corps procured the Mastiff, and it followed up this vehicle with the IAI-designed and U.S.-built RQ-2
Pioneer, a slightly larger vehicle with secure up- and downlink. The Pioneer, fielded in 1986, was used by the Marine
Corps and Navy in the Persian Gulf War of 1990–91. Meanwhile, the U.S. Army promoted the development of a similar
but still larger UAV, the Israeli-designed RQ-5 Hunter, which had a gross weight of 1,600 pounds (720 kg) and was
propelled by both pusher and tractor propellers. Although not procured in quantity, Hunters served in the 2003 invasion of
Iraq.
Following the lead of Israel, the United States has aggressively developed UAVs. The most important UAV in operational
use is the General Atomics MQ-1 Predator, powered by a piston engine driving a pusher propeller. The Predator entered
service in 1995 and, after initial problems, developed into a capable surveillance craft carrying a wide variety of optical,
infrared, electronic, and radar sensors. The first operational use of armed UAVs involved Predators carrying antitank
missiles and operated by the Central Intelligence Agency during the 2001 invasion of Afghanistan. However, Predators are
operated mainly by the U.S. Air Force, often to locate and mark targets for heavily armed fighter-bombers or gunships.
Supplementing the MQ-1 is General Atomics’ MQ-9 Reaper, a larger version of the Predator powered by a turboprop
engine. The Reaper can carry some 3,000 pounds (1,360 kg) of ordnance and external fuel and has a significantly higher
service ceiling than the Predator. It entered operations over Afghanistan in the autumn of 2007. Predators and Reapers have
been purchased by allies of the United States, notably the United Kingdom.
All major military powers and even some militia groups employ battlefield surveillance UAVs to extend the view of
ground and naval forces and to enhance the reach and accuracy of their supporting fire. For example, in its conflict with
Israel, the Lebanese group Hezbollah has used the Iranian-built Ababil (“Swallow”), a vehicle with a wingspan of 3.25
metres (10 feet 8 inches) that is powered by a pusher propeller and launched either from a truck-mounted pneumatic
launcher or by a booster rocket. Tactical surveillance craft range in sophistication from vehicles that, like the Ababil, loiter
over battlefields acquiring and designating targets to hand-launched “mini-UAVs” carrying a single visible- or infrared-
spectrum television camera. An early example of the latter is the U.S. AeroVironment FQM-151 Pointer, a UAV weighing
less than 10 pounds (4.5 kg) and resembling a powered model sailplane. The Pointer first saw service with the U.S. Marine
Corps in the Persian Gulf War. It is being replaced by the Puma, a development of the Pointer with more-advanced sensors,
by the RQ-11 Raven, a scaled-down version of the Puma, and by the Wasp, a tiny vehicle weighing about 1 pound (less
than half a kilogram) with a wingspan of 2 feet 4.5 inches (72 cm); the last is being issued to air force ground combat
control teams as well as marines down to the platoon level.

Hovering UAVs have entered service—for example, the U.S. Honeywell RQ-16 T-Hawk, a ducted-fan vehicle
weighing 18.5 pounds (8 kg), fielded in 2007 and used to locate improvised explosive devices, and the Russian
Kamov Ka-137, a 280-kg (620-pound) helicopter powered by coaxial contrarotating blades and carrying a
television camera for border patrol. The much larger Northrop Grumman MQ-8 Fire Scout, a 3,150-pound
(1,420-kg) single-rotor craft resembling an unmanned helicopter, has been operational with the U.S. Navy since
2009; it was first used in anti-drug-smuggling operations off the coasts of the United States.

In 1997 the U.S. Defense Advanced Research Projects Agency (DARPA) began to fund feasibility studies of
extremely small “micro UAVs” no larger than 6 inches (15 cm). These studies (and similar studies conducted
since 2003 in Israel) have produced a bewildering variety of designs powered by electric motors or tiny gas
turbines the size of a watch battery, but no publicly acknowledged use has yet been found for them.

The next wave of UAV development is likely to be so-called uninhabited combat air vehicles (UCAVs). If the experimental
Boeing X-45 and Northrop Grumman X-47 are representative of these vehicles, they will resemble small B-2 Spirit stealth
bombers and will vary in size from one-third to one-sixth the gross weight of a single-seat fighter-bomber. They will most
likely supplement or even replace piloted fighter-bombers in the attack role in high-threat environments. Finally, large,
extremely light solar-powered “endurance UAVs” have been flown in order to test the feasibility of communications and
surveillance vehicles that would stay on station at high altitude for months or even years at a time.

[mh]Technological Advancements during the War Years


The chapter addresses the technological evolution of the global economy from the early postwar years until the beginning of the world crisis in 2008. The author explains the spectacular growth demonstrated by the world economy through the application of technologies invented during “the golden age of technologies”; the dual-use character of the majority of them and their subsequent transfer from the leading countries to less developed ones enforced the extension in scale of production and markets. It should be recognized that the technological system launched after World War II represents the backbone of the contemporary global economy, despite the different roles of its main drivers: manufacturing production, trade in goods and services, and foreign direct investment. The theoretical model of steady-state growth most appropriately describes how increments in capital and investment enforce economic growth, no matter where they originate from. The 2008 global crisis reveals the exhaustion of the “technological source” of the continuing growth of the world economy, reflecting in many ways the emerging discrepancy between technological development and economic growth: deindustrialization of the leading economies, the “bubble effect” eroding the foundation of economic sustainability, the “Dutch disease” of the oil-dependent countries, the bias toward energy resources in world trade in general and, of course, growing worldwide militarization. The chapter highlights the necessity of revising that state of affairs in the world economy and proposes where to start creating the new global technological system as the new backbone for restarting economic growth and international civil cooperation.

Globalization means that the world has been functioning as a single system, in which any country-specific fluctuations spread to other countries and, when substantial in scope and scale, urge them to adjust, even through revision of domestic policy. The increasing circulation of resources – labor, financial and physical – among regions, countries and continents makes “resource scarcity” meaningless, thereby impeding overall technological development and, in that way, global economic growth. In terms of technological and local diversity, the world became more standardized and less unique, which, in turn, engenders its fragility. Undoubtedly, diversity makes the world stronger and, on the contrary, squeezing diversity makes the world weaker!

Scientists monitoring economic development could probably be divided into those who are formal or informal followers of the “steady-state growth” theory and deny the evidently approaching destruction of the existing global economic system, and those who recognize the realities of a Schumpeterian “creative destruction” epoch, similar to or even more profound than that of the Great Depression of the 1930s. The different standpoints are reflected in different insights into economic development and its prospects: the “followers” preserve the existing order of things (through reallocation of financial and labor resources, application of austerity measures, and confrontation of national identification processes), while the “Schumpeterians” recognize the cyclical mode of economic development, with its essential downturn stage and depression (leading further to new highs in growth and prosperity, enforced by the application of technological innovations); the emergence of new innovative companies replacing the “old champions”; and the new individual innovators coming to the scene and taking a lead over societies with their exceptional ability to take the risk of novelty rather than exploiting the way things go on.

Among the multifaceted contradictions of our epoch, the profound transformation of the global economic and technological system spurs all the other political, ecological and social challenges confronting global society. Probably the new and more sustainable global politico-economic system will originate from technologically transformed local societies, intersecting with one another and in that way establishing a new model of interregional and subsequently international division of labor as the core of the renovated market economy. This is the main standpoint underpinning our further insights into globalization.

[h]Creating the global economy after the World War II

The twentieth century is historically unprecedented in terms of economic growth, elevated by growth in production output and trade and the emergence of new businesses in services, tourism and information, spreading across countries and involving their populations in economic activities, thereby increasing personal incomes and consumption capabilities almost all over the world.

The peak of global economic development after World War II was reached in 2008, just before the first signs of the incoming economic downturn appeared. During the period 1950–2007 the Gross World Product increased from $5.3 trillion to $65.6 trillion – more than 12 times. Over the same period world exports increased from $295 billion to $12 trillion – more than 40 times. The growing welfare is witnessed by the following fact: in 2007, on the eve of the world economic crisis, world car sales reached their highest level of $1183 billion (or 8.7% of total world exports), over twice the value of trade in textiles and clothing. Another spectacular fact is the pace of growth of average income in China, one of the poorest countries of the twentieth century: only 445 CNY per year in 1952, while in 2016 it amounted to 67,569 CNY per year – an increase of almost 152 times!

The explanation of that fabulous global economic growth would be incomplete without mentioning the critically low level from which it originated. After World War II, manufacturing capacities, transportation systems and housing in many countries of Europe and Asia were almost totally destroyed. Margaret MacMillan
writes: “In Germany, it has been estimated, 70% of housing had gone and, in the Soviet Union, 1700 towns and
70,000 villages. Factories and workshops were in ruins, fields, forests and vineyards ripped to pieces. Millions of
acres in north China were flooded after the Japanese destroyed the dykes. Many Europeans were surviving on
less than 1000 calories per day; in the Netherlands they were eating tulip bulbs. Apart from the United States and
allies such as Canada and Australia, who were largely unscathed by the war’s destruction, the European powers
such as Britain and France had precious little to spare. Britain had largely bankrupted itself fighting the war and
France had been stripped bare by the Germans”. The twentieth-century economic story is, most probably, one of postwar recovery, steady growth in production output, successive extension of trade and markets, and the appearance of new profitable businesses in services, with the involvement of new countries and companies in global trade.

Figure shows the linkage between world GDP and growth in manufacturing output, revealing the key role of industrial production in driving economic growth, creating new jobs and extending markets, supplying producers with raw resources and materials, and providing sales of final and semi-final goods. Figure displays the increasing role of trade in the development of the global economy from 1990 onward. The elimination of trade barriers pursued by the World Trade Organization (successor of the General Agreement on Tariffs and Trade established in 1947, with 164 member states as of 2017 out of 196 states in the world) has played a significant role in developing world trade and the global economy in general.
Figure.World manufacturing output and GDP (rebased, 1800 = 100).

Figure.The development of world trade.

At the same time the circulation of the production factors – capital and labor – among the countries has
intensified substantially. Gross cross-border capital flows rose from about 5% of world GDP in the mid-1990s to
about 20% in 2007, or about three times faster than world trade flows. According to the International Labour
Organization, in 2015 the migrant workers accounted for 150.3 million of the world’s approximately 232 million
international migrants. The vast majority of migrant workers are in the services sectors, with 106.8 million
workers accounting for 71.1% of the total, followed by industry, including manufacturing and construction, with
26.7 million (17.8%) and agriculture with 16.7 million (11.1%).

The interdependence among countries became critical. The United States, for instance, being the dominant country for the rest of the world, depends substantially on inflows of foreign capital. According to James K. Jackson, US direct investment at current cost in 2015, included in the “net international investment position of the United States”, amounted to $7280.6 billion. Another case of the increasing dependence of a national state on its international disposition is demonstrated by Russia, with its crucial dependence on trade in oil and gas. And almost every country in the world has been falling into economic dependence on China in terms of trade, investment and world economic growth in general. The fragility of the global economy became so high that a minor fluctuation somewhere in the world might be a trigger for dismantling the whole postwar economic architecture.

[h]Technological system: the backbone of the post war economy

Economists often treat monetary policy, embracing the manipulation of interest rates, exchange rates and taxes, as the main regulatory intervention of the state in the market economy and the means of spurring economic growth. Charles I. Jones writes on that matter: “…it is helpful to think of the economist as a laboratory scientist. The economist sets up a model and has a control over the parameters and exogenous variables. The ‘experiments’ is the model itself. Once the model is setup, the economist starts the experiment and watches to see how the endogenous variables evolve over time”.

On the contrary, our perception of economic growth is based on the assumption that it is driven by a technological nucleus created endogenously within the economic system. Technological systems overlapping with economic systems can either facilitate economic growth or impede it. After World War II and until 2008, the technological input into economic development produced an output – explosive economic growth, gradually spreading over an increasing number of countries.

The technologies invented during the so-called “golden age of technologies” (the period embracing the second half of the nineteenth century and the first decades of the twentieth century) composed the technological backbone for the postwar recovery and the forthcoming economic growth. In his book, entitled “Creating the
Twentieth Century” Vaclav Smil states that: “The greatest technical discontinuity in history took place between
1867 and 1914. This era was distinguished by the most extraordinary concatenation of a large number of
scientific and technical advances the synergy of which produced bold and imaginative innovations as well as
ingenious improvements of older ideas, by the rapidity with which these innovations were improved after their
introduction, by their prompt commercial adoption and widespread diffusion, and by the extent of the resulting
socio-economic impacts. Even the most rudimentary list of these epoch-defining innovations must include
telephones, sound recordings, light bulbs, practical typewriters, chemical pulp, and reinforced concrete for the
pre-1880 years. The astonishing 1880s, the most inventive decade in history, brought reliable electric lights,
electricity-generated plants, electric motors and trains, transformers, steam turbines, gramophone, popular
photography, practical gasoline-fueled internal combustion engines, motorcycles, cars, aluminum production,
crude oil tankers, air-filled rubber tires, steel-skeleton skyscrapers and pre-stressed concrete. The 1890s saw Diesel engines, x-rays, movies, liquefaction of air, and the first wireless signals. And the period between 1900
and 1914 witnessed mass-produced cars, the first airplanes, tractors, radio broadcasts, vacuum diodes and triodes,
tungsten light bulbs, neon lights, common use of halftones in printing, stainless steel, air conditioning, and the
Haber-Bosch synthesis of ammonia (without which at least 40% of humanity would not be alive)” (Smil). The mismatch between these technologies, which required long-term investments in capital-intensive production and a gradual extension of sales, on the one side, and “market short-termism” on the other, was overcome by state intervention. In the war economies of the twentieth century the leading countries – the United States, the USSR, Germany, the United Kingdom, Japan and many others – pursued policies of state intervention, replacing the market in terms of the provision of long-term financial resources, state military procurement, R&D facilitation and protection of national companies from international competition. Margaret MacMillan writes:
“Under the stimulus of war, governments poured resources into developing new medicines and technologies.
Without the war, it would have taken us much longer, if ever, to enjoy the benefits of penicillin, microwaves,
computers – the list goes on. In many countries, social change also speeded up”.

After World War II, the enormous intellectual capital and R&D capabilities accumulated during the war to facilitate the production of armaments were shifted to non-military production, especially in the countries in which military spending was prohibited (Japan, Germany and the like). As a result, the world was divided into fast-growing countries competing in the market for non-military production (the most spectacular being the rise of the so-called “catching-up countries” of East Asia) and countries producing innovations directed predominantly at military application, the United States and the USSR among them.

That was a time of permanently cumulating welfare around the globe: the scale and diversity of production output in the leading countries grew steadily, and international trade, propelled by the gradual elimination of trade barriers, expanded quite rapidly. The emergence of newcomer countries, “producing the same at less cost”, filled the fast-growing market for consumer goods, which appeared first in the leading countries and then in the rest of the world. Besides, new economic activities, such as tourism, logistics and transportation, appeared. The technological backbone created at the beginning of the twentieth century gave birth to an enormous variety of economic activities, embracing countries, companies and ordinary people.

[h]Endogenous insight into the technological input into production output

There are two different theoretical insights into the role of technologies in economic development: one considers technologies as an exogenous input into production output and economic growth, while the other considers them as an endogenous input. Critics of the “exogenous theory” write: “Growth has occurred not by producing more of the same, using static techniques, but by creating new products, new processes, and new forms of organization”.

The general principles of the endogenous theory are reflected in the “steady-state growth model”. Smriti Chand
explains the essence of that model in the following way: “The concept of steady-state growth is the counterpart
of long-run equilibrium in static theory. It is consistent with the concept of equilibrium growth. In steady-state
growth all variables, such as output, population, capital stock, saving, investment, and technical progress, either
grow at a constant exponential rate, or are constant.”

To some extent the Cobb–Douglas production function, based on the assumption that output increases with increments of labor and capital, could be referred to the endogenous theory of economic growth based on steady-state principles. Peter Howitt writes: “The first version of endogenous growth theory was AK theory, which did not make an explicit distinction between capital accumulation and technological progress”. Charles I. Jones explains that theoretical model in the following way: the production function describes how inputs such as bulldozers, semiconductors, engineers, and steel-workers combine to produce output. To simplify the model, we group these inputs into two categories, capital, K, and labor, L, and denote output as Y. The production function is assumed to have the Cobb–Douglas form and is given by

Y = F(K, L) = K^α · L^(1−α)

where α is some number between 0 and 1. Notice that this production function exhibits constant returns to scale:
if all of the inputs are doubled, output will exactly double.
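For the record, the constant-returns property can be verified in one line (our own illustrative derivation, using the exponents from the Cobb–Douglas form above):

```latex
F(2K, 2L) = (2K)^{\alpha}(2L)^{1-\alpha}
          = 2^{\alpha}\,2^{1-\alpha}\,K^{\alpha}L^{1-\alpha}
          = 2\,K^{\alpha}L^{1-\alpha} = 2\,F(K, L)
```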

In other words, to produce more output, additional inputs of labor and capital are required – this is the core of the “steady-state” vision of economic growth. Is there any other theoretical justification for large corporations exploiting more and more resources all over the world, absorbing more and more people, and ignoring their national identities and personal dignity?

The Cobb-Douglas model was further developed by the Nobel laureate in economics Robert Solow, by adding a technology variable, A, to the production function:

Y = A · K^α L^(1−α)

where A represents the technology variable.

Charles I. Jones writes: “An important assumption of the Solow model is that technological progress is
exogenous: in common phrase, technology is like ‘manna from heaven’, in that it descends upon the economy
automatically and regardless of whatever else is going on in the economy”. Evidently, the “steady-state growth model”, based on an exogenous view of the role of technology in the production function and in economic growth in general, does not coincide with the “new reality” of increasing economic flexibility.

The exogenous sourcing of technological change in economies was further criticized by followers of the endogenous economic growth theory. Paul Romer, Kenneth J. Arrow, Robert Lucas and others proposed a quite different insight into the role of technologies in economic growth. As Peter Howitt underlines: “The neoclassical
growth theory of Solow (1956) and Swan (1956) assumes the rate of technological progress to be determined by
a scientific process that is separate from, and independent of, economic forces. Neoclassical theory thus implies
that economists can take the long-run growth rate as given exogenously from outside the economic system.
Endogenous growth theory challenges this neoclassical view by proposing channels through which the rate of
technological progress, and hence the long-run rate of economic growth, can be influenced by economic factors.
It starts from the observation that technological progress takes place through innovations, in the form of new
products, processes and markets, many of which are the result of economic activities”. However, both theories consider labor and capital as the dominant factors of economic growth, even when explaining the role of the “technological input” into production output. To quote James Morley, who writes for the World Economic Forum: “Economist Paul Romer has developed a theory of economic growth with ‘endogenous’ technological change — that is, it can depend on population growth and capital accumulation”.

Presumably, the notion of “endogeneity” for explaining modern economic development based on technological innovations is precisely what the new economic mainstream should have as its core. However, the theories briefly described above have a critical shortcoming – they are academic. Robert L. Heilbroner describes this shortcoming as economic irrelevance: “As a rule, the aspect of economics that upsets those who begin to
study it is its abstractness, its seeming removal from life, but any instructor worth his salt can reassure his
students that this abstract quality is a strength and not a weakness if we are to study large-scale questions, and
that the “unreality” of many economic conceptions conceals a sharp cutting edge”.

Perhaps there are at least two main aspects of the “endogeneity” of modern technology-driven economic development stemming from real life and practice:

 technological innovations represent the main tool, invented by human beings, for overcoming various obstacles restricting economic development or, even worse, threatening human existence. Probably, the J. Watt steam engine would never have been invented but for the deforestation encountered by England in the seventeenth and eighteenth centuries. In this sense, successful technologies start not when they are supported by the government but when they are focused on overcoming precisely identified economic necessities;
 the emergence of technological innovations per se results from a very complicated economic configuration. It is no easy matter to produce a fabulous tune on that piano! Only a few remarks on the subject: technological innovations are very unattractive for private investment (they require substantial long-term investments with a low proportion of commercially successful outcomes, high “spillover”, and an immense number of “imitators” “waiting at the door”). Moreover, successful technologies embrace not only the stage of their creation in R&D laboratories, but also their selection by private companies, their adoption in production processes, their commercialization on the market, their diffusion among as many national companies as possible (for cumulating national economic growth in general), and their technological feedback from the markets. That is why it is a substantial simplification to consider technological development within the narrow framework of government financial support for R&D and education.

In fact, the intersection between technology and the economy represents one of the most significant theoretical and practical gaps. In this exploration we would like to explain only two among many other distinctive features of the core twentieth-century technologies: their dual-use capabilities and their large-scale character.

The majority of the key technologies of the twentieth century were initially applied in the production of armaments and then diffused into non-military production. Vernon W. Ruttan, in his book “Is War Necessary for Economic Growth? Military Procurement and Technology Development”, writes about the dual-use properties of military technologies: “It is difficult to overemphasize the importance of the historical role that military procurement has played in the process of technology development. Knowledge acquired in making weapons was an important source of the industrial revolution”. The Scandinavian scientist T. Cronberg underlines the role of the state in R&D development in the United States: “State expenditure on research and development for the
military has been the way the US government created a national technology base. In contrary where industrial
policy and state intervention in the affairs of commercial companies are not accepted, military technology has
been a way to go around this sensitive theme. Military research and development has constituted the industrial
policy of the US, the very nature of the spin-off paradigm bears witness to this. The dual-use handshake is simply
a new way of defending industrial policy”.

Military procurement by the United States Department of Defense gave rise to basic industries, such as the Aerospace, Communications and Electronics (ACE) triangle, incubating large American companies such as Rockwell, Lockheed, McDonnell Douglas, General Dynamics, Hughes, Northrop and others. A number of new industries emerged as a consequence of advances in military technologies and their spin-off to civil production: computers, jet aircraft, nuclear power and space communications.

The graph depicted in Figure shows the changing role of military and civil aircraft manufacturing during the wars and in between. There can be no doubt that military aircraft production fertilizes civil aircraft manufacturing in terms of technologies, R&D capabilities, training of personnel, provision of equipment and other common production characteristics.

Figure.U.S. aircraft production in the twentieth century.

Similar to aircraft manufacturing, other industries in the United States acquired advantages from the dual-use capabilities of key technologies. One example is the electronics industry, about which T. Cronberg writes: “Early military
and space programs helped the US electronic industry to achieve research and production superiority over its
competitors through the early 1970s. The military requirement-for example, for miniaturization and lower power
consumption- coincided almost exactly with the likely needs of commercial uses in the computer industry”.

Undoubtedly, the war economy dominated throughout the entire twentieth century and cross-fertilized other industries and countries through the process of technology spin-off. However, the impact of military technology on civil production has changed since then. Ann Markusen and Joel Yudken underline: “The military
military requirement no longer coincides with the likely needs of commercial users. This is due to the more
complex nature of the military technology, its special product development environment and the general
dynamics of the military-industrial complex itself. Innovation in the military becomes scrutinized and leads only
to incremental improvements…. Submarines become faster and faster, quieter, bigger and with longer ranged
instead of becoming simpler and more efficient. At the same time the military industry becomes more dependent
on commercial technology, such as computers”. Therefore, military technologies can no longer play the role of technological drivers for national and global economic growth.

The other economic peculiarity of the core twentieth-century technologies which we would like to mention in our investigation is their reliance on “economies of scale”. In other words, in terms of capital turnover, profitability and cost competitiveness, their application in industry requires a gradual extension of production scale and subsequently of market sales. When only a few units are produced, as happened with aircraft manufacturing in Russia at the beginning of the 1990s, the industry cannot survive.

The extension of production scale lowers unit cost, requiring expansion of product sales on international markets, the conquest of competitors and advocacy of a free trade regime. Ultimately, large corporations take the lead on the markets, swallowing up competitors through mergers and acquisitions, erecting “entry barriers” and establishing multinational networks of production and sales. Offshore production, launched in countries with cheap labor, tax holidays and devalued currencies, enables large corporations to reduce production costs through the exploitation of “geographic advantage” rather than the application of process technologies. Logistically, offshore production finds its sales in the United States, Europe and other countries with high average incomes and large numbers of consumers, and in countries where domestic production was replaced by imports (Russia and the other ex-USSR countries, to mention a few). The advantage of being “large” became more lucrative and less risky than testing something new. The technological drive is not what the large corporations would like to accept. Needless to say, this path of economic development became resource-consuming and ecologically unfriendly, and leads to the suppression of national business in more countries than ever before. Most probably, the “national identity” agenda, which is being adopted today in many countries and is usually referred to by the political establishment as “populism of the political opposition”, has as its fundamental nucleus the exhausted capacity of the “large”, whether they are corporations or any other actors, in their mission to lead the global world. The global world needs to reconstruct the whole economic architecture, reconsidering the roles of large and small as its important links.

[h]Changing role of the technological system: from driving to impeding global economic growth

The global crisis interrupted spectacular economic growth in 2008. Many experts agreed that the main causes of the downturn emerged in the financial sector, which is why the episode was labeled the global financial crisis. We take a quite different standpoint on the nature of the global economic crisis, explaining it by technological causes. Generally speaking, large companies, the main actors of the global economy, are no longer technological drivers, as we noted earlier. The theoretical conceptualization of this fact is still insufficient, which is why we quote participants of the 2017 Davos forum, who expressed strong concern about emerging “short-termism” in global economic affairs, something essentially inconsistent with the long-term nature of technology-driven development. The British businessman Martin Sorrell evaluated the technologies applied by global companies as follows: “big companies made incremental, but not fundamental innovations”. Sharing that view of large business, the Ivorian businessman Tidjane Thiam said: “Once you became big your natural impulse is to be incrementalist and conservative and protect your position”. Therefore, most probably, large multinational companies today act as opponents rather than supporters of the technological change of society.

Among the other factors restricting further technological development and therefore negatively affecting the growth of the global economy are the following:

1. The deindustrialization of the leading economies, reflected in the increasing share of services in GDP and employment. For example, when the global economic crisis started in 2008, the share of services in GDP amounted to over 80% in the United States, about 69% in Germany, 77.4% in France, and 72.3% in Japan. This means that manufacturing production, whether knowledge-intensive or not, has been losing ground. Contributing circumstances include the rising cost of labor, policies favoring consumption over production, stringent ecological requirements imposed on manufacturing, and import-favoring exchange rates of national currencies in many leading countries.
2. The “Dutch disease” phenomenon, which has affected Russia, Saudi Arabia, Venezuela, and other oil-dependent countries. The Russian economic drama began in the early 1990s, when the market-driven view of the path the country should follow ignored the substantial pool of R&D capabilities accumulated during Soviet times. As a result, Russia's dependence on oil and natural gas is now enormous: crude oil accounts for 33% of total exports, refined oil and gas products for 24%, and natural gas for 14% (together 71% of total exports), as depicted in the Figure. It is worth remembering that the second half of the twentieth century was marked by a strong technological confrontation between the United States and the USSR, driving worldwide progress in science, technology, and education. When Russia, the main successor of the USSR and one of the leading participants in the global technology race, abandoned that position, it negatively affected other countries as well.
3. The general bias of the global economy toward an increasing role of raw materials, specifically energy resources, in trade, as depicted in the Figure.

In 2015, crude oil sales exceeded all other traded items, reaching $786.3 billion. Car sales, estimated at $672.9 billion, held second position, while processed petroleum oils occupied third place with $605.9 billion.

4. The escalation of military spending and defense procurement is not the least of the factors undermining further technological development and growth in the global economy. The Figures depict military spending in the world, which is growing especially fast in China.
Figure. Russian export breakdown.

Figure. World consumption of primary energy by energy type.

Figure. World military expenditure, 1988–2016.

Figure. China published military budget.

The growing militarization of economies means that commercial technologies are gradually being replaced by military ones, the state is replacing the market in the creation and application of new technologies, and companies are forced to compete for state military contracts rather than for market share and cost reduction. Moreover, the cost of new types of armaments is growing rapidly, burdening state budgets and undermining financial stability in military-dependent countries. In his article “The Jet That Ate the Pentagon: The F-35 Is A Boondoggle. It's Time to Throw It In the Trash Bin”, Winslow Wheeler writes: “The F-35 will actually cost multiples of the $395.7 billion cited above. That is the current estimate only to acquire it, not the full life-cycle cost to operate it. The current appraisal for operations and support is $1.1 trillion - making for a grand total of $1.5 trillion, or more than the annual GDP of Spain”.

Therefore, the modern economy, which flourished on a substantial technological base, has become unfriendly to further technological development.

[h]Creating a new technological system in the twenty-first century

The challenges confronting global society have been steadily multiplying in scale and variety: escalating trade wars, financial flexibility, disintegration processes in Europe, the migration crisis, the militarization of economies, and global warming, among many other problems, have been undermining the sustainable life of human beings. To cope with these challenges, governments pursuing the monetary paradigm of regulation spend ever more from their budgets (consider the 50 billion pounds the UK will pay for its exit from the EU), aggravating rather than improving the state of affairs. The entire global system has been working badly, provoking the manifold disruptions listed above. What could be more useless than efforts focused on improving an obsolete, ill-working politico-economic system? Let us recall here the statement Simon Kuznets made in his Nobel lecture of 1971: “… the changing course of economic history can perhaps be subdivided into economic epochs, each identified by the epochal innovation with the distinctive characteristics of growth that it generates”.

Epochal innovations and the epochal transformation of the global economy require new knowledge, new mindsets, and new individuals pursuing novelty in theory and in practical decision making. When Charles Jones describes the economist “as a laboratory scientist, setting up a model…”, we cannot agree. In our perception, economics is a kind of social engineering science, and economists are social engineers occupied with constructing the edifice of a new economy, rather than manipulating data, drafting economic scenarios, or merely criticizing governments.

What should we know about technological adjustment to the ongoing processes of global disruption? Let us offer some preliminary insights, which will probably be elaborated in further investigations.

First, general evolution leads to the gradual replacement of labor-intensive and capital-intensive modes of production by labor-saving and capital-saving ones. In this regard, any labor-intensive or investment-intensive economic decision should be treated as inconsistent with the general trend and prospects. When we investigated the prospects of economic development of the Russian Far East, we concluded that for that particular region, marked by a shortage of population, substantial dependence on external energy resources, and scarcity of capital, a labor-intensive and capital-intensive (and energy-intensive) model of economic development is inapplicable. This led us to search for new technologies that substitute for the scarce regional resources, and they were found. To cite just one case among many: three scientists and five engineers began producing extracts from Holothuroidea, which brought them substantial benefit and contributed to the improvement of regional economic performance in general. Production of this kind brings substantial export revenues and accumulates financial resources to pay salaries and taxes and to make further investments in extending production facilities. Moreover, it is an energy-, capital-, and labor-saving technology and, just as important, an environmentally friendly one. This finding encouraged us to study the technology-intensive capabilities existing in other Russian regions. In fact, the reserves for technology-driven economic growth, even in lagging regions, are enormous. Other countries are probably also endowed with such “hidden” localized capabilities.

Second, another decisive characteristic of the prospective key technologies for restarting economic growth stems from the shortening of the technology life cycle. It increases the speed with which new technologies replace those already in use, requiring close cooperation (face-to-face interaction) between scientists from various fields of knowledge (cross-disciplinary interaction), small and medium innovation companies, local administrations, financial institutions, universities, and other “innovation stakeholders”. The greater flexibility of small and medium companies, suited to the increasing rapidity of technological change, makes them the new technological drivers, replacing large companies in that mission. The “economy of scope”, in which production of technologically unique products dominates over the exploitation of given technologies under the “economy of scale” paradigm, signals the appearance of a new stage in technological development.

Third, there is no longer any justification for developing military technologies, either in terms of their spin-off effects on civil production and employment or in terms of R&D advancement and the multiplication of investments. Militarization is a resource-consuming process, harmful to the environment and risky for the financial stability of national states. Politicians who justify expanding armaments production by the anticipated job creation, R&D facilitation, and investment growth should be equally clear about its long-term destructive consequences.

Fourth, we would like to emphasize the process of “technological localization”, a notion that is not yet common but that helps in understanding the new role of regions in the global reconstruction. Regions may become a new shell for incubating not only specific technologies and technological innovations but whole new technological systems, embracing small and medium innovative companies; universities providing R&D and educational capabilities; and local administrations. Simon Kuznets noted a special technological advantage of small nations in the following statements: “Obviously, community of feeling, a sense of common destiny, and subordination of individual or group interest to that of the whole, are far easier to attain in small and homogenous nations than in large nations with their regional, racial and other diversities”…. “Another possible advantage of small units is the rapidity with which they can adjust to changing situations. In a sense this rapidity is related to the greater possible ease of reaching secular decisions. And since economic growth is a process of continuous adjustment to a changing technological potential and a changing constellation of national structures, the speed with which small nations may be able to make such an adjustment is a great advantage”.

Obviously, regions are endowed with specific resources that differentiate them from one another. Usually this regional specificity is referred to as “regional comparative advantage”. Technologies are a means by which regions overcome their specific “comparative disadvantages” and resource scarcities, develop their specific “comparative advantages”, and, within their particular niches, accumulate R&D capabilities to produce technologically sophisticated products, increasing value-added production, spurring economic growth, and making local society more sustainable, wealthy, and resilient to exogenous turmoil.

Localities or regions would then be a space where technologies are strongly related to real economic needs, which is quite different from the vision of technology as know-how produced in R&D laboratories and transferred to innovation companies with the assistance of venture capital. In our view, technology-oriented networked local communities, or clusters, are the new “drivers” of technological change and global economic growth, replacing the large companies, the “old champions”, in that mission. From that standpoint, phenomena such as “Brexit” or “Catalonia's challenge” could be interpreted as the first signs of the forthcoming technological transformation of national and local societies, based on their specific “national (regional) comparative advantage” or “national (regional) economic identity”. These processes need to be carefully governed in a proper economic, rather than political, manner. Technologically advanced local societies pursuing technology-based economic growth will no doubt make the global community more interactive, sustainable, and civilized.

The growing contradictions in international and economic relations between countries are leading to the destruction of the established world order, which can negatively affect humanitarian stability in the world. The solution to the world's major problems should be sought not in the military sphere, or even in political dialog, but in the economy and its transformation to a new stage of development. Today, as never before, intellectuals from various countries should come together to propose a new agenda for global development based on new fundamental principles.

Chapter 3: "The Cold War Era: Spies in the Sky (1946-1991)"


[mh]Surveillance and Reconnaissance: Cold War Intelligence Gathering

[h]High-Tech Defense Industries: Developing Autonomous Intelligent Systems

Advances in intelligent defense systems are accelerating and paving the way for high-technology defense industries. Technologies such as robotics, artificial intelligence, and the Internet of Things are driving considerable change in defense industries. These technologies can be used to develop autonomous intelligent systems, increasingly designed for military applications and capable of operating efficiently in conflict areas and war zones. Technology is thus often considered the key to many military revolutions in history, the so-called major military innovations or, as others have called them, radical changes in military affairs. Three main factors are driving the latest revolution in military affairs: (1) rapid technological advancement that moved the Industrial Age into the Information Age; (2) the end of the Cold War; and (3) the decline in United States defense budgets.
In recent years, we have identified several studies on the development of autonomous defense systems in the industry sector. Some examples are presented in Zhang et al., who focus on autonomous defense systems and relevant technological applications (e.g., the X-47B, Predator, and Global Hawk). However, apart from the division into warfare applications, to the best of our knowledge no article has so far characterized the autonomous defense systems literature in terms of modes. Our intention is therefore to explore and understand the following two research questions:
Research Question 1. How can the several modes of autonomous defense systems in the defense industry be categorized?
Research Question 2. How does the characterization of the autonomous defense systems modes contribute to making the
defense industry highly technological?
Considering these research questions, it is quite evident that this research aims to understand and describe a real-life
phenomenon, as suggested by Yin. Although the research is of a qualitative and descriptive nature, it falls within the
domain of applied sciences since it aims to identify the existing gaps and fill these gaps by initiating new scientific research
in robotics and artificial intelligence applications at the various levels of war. It would thus seem inappropriate to carry out a more in-depth study aimed at improving the high-tech defense industry without first having a holistic view of the use of autonomous defense systems.
This topic is important since there has been an exponential investment in autonomous intelligent systems in the military
sector. However, to the best of our knowledge, there has been no prior characterization of the modes of operations of these
systems. In that regard, our article provides a comprehensive characterization of autonomous intelligent systems according
to the various levels of war and different types of decisions and artificial intelligence. In addition to presenting a
characterization of the various modes of autonomous intelligent systems in the defense industry, this article also
characterizes how these modes contribute to making the defense industry highly technological. The novelty of this article is
associated with the need to increase the degree of intelligent automation at the lowest levels of war. The need for
automation is justified by the argument that, at the tactical/operational level, the military follows clear orders and makes
structured decisions. Therefore, if structured decisions are made at the tactical level, which requires the performance of
tasks of an analytical-cognitive nature, the type of intelligence needed is mechanical, thus, justifying the replacement of
military personnel by machines, creating an excellent opportunity for the technological defense industries. On the other
hand, at the strategic level of war, there is limited scope for the use of lethal autonomous weapon systems, given the need
for human intervention so as not to run afoul of current United Nations ethical, moral, and legal guidelines. This article's specific strengths therefore also lie in its managerial contribution, insofar as it indicates where to invest innovation and development resources for intelligent autonomous technologies, namely where they have the most significant growth perspective (i.e., in field operations at the tactical level).
The remainder of the paper is organized as follows: the next section presents a conceptualization of the most relevant terms and applications; this is followed by an explanation of the methodological process; the results of this study are presented in Section 4, where the answers to the research questions can be found; in the last section, conclusions are drawn, focusing on theoretical and managerial contributions, research gaps, and suggestions for future work.

[h]Concepts and Definitions


In recent years, one of the most profound changes in the defense sector has been the use of robots aiming to replace human beings in tasks of high precision and analytical-cognitive complexity, tasks of high risk to human life, and/or physically and physiologically strenuous tasks. To that end, the defense industry has developed various sophisticated warfare applications that can operate in multiple domains (i.e., space, cyberspace, air, sea, and land). Due to the wide range of defense technologies, it is currently difficult, if not virtually impossible, to describe warfare applications in a way that is consensual among all defense researchers and academics. In addition to the growing demand for autonomous defense systems, a relevant argument that justifies the writing of this article is the degree of maturity that some military technologies have achieved. This degree of maturity is related to the rapid learning curve of autonomous defense systems, owing to the increased connectivity of these systems via remote and network connections (e.g., the fifth-generation technology standard for broadband cellular networks, 5G) and easy access to big data. These characteristics allow autonomous defense systems to make decisions quickly, without empathetic or emotional impasses, which is an important distinguishing characteristic compared with human beings. Thus, autonomous defense systems work without human interference, boosted by the latest advances in intelligence, cognition, computing, and system sciences. Most autonomous defense systems are robots that detect, identify, understand, and interact autonomously with the external environment. Their capacity is based essentially on three functionalities: (1) sensors, which detect the characteristics of the environment; (2) artificial intelligence, which identifies and understands the surrounding reality; and (3) mechanisms, which allow real interaction.
Several articles in the literature have analyzed automation, autonomy, and intelligence, which allows us to understand the most relevant terms. For instance, Insaurralde and Lane define automation as the ability of a system to automatically carry out a certain process by following a code. They define autonomy as the ability of a system to carry out an automatic process by making decisions, implementing the choices made, and checking the evolution of those actions, i.e., to make choices on its own. The human intelligence literature, on the other hand, usually defines intelligence as the human ability to learn over time and adapt to the external environment, while the artificial intelligence literature holds that intelligence is the capacity to mimic human intelligence, such as the ability to hold knowledge and reason, solve problems, communicate, interact, and learn. Huang and Rust and Huang et al. also proposed three artificial intelligences: mechanical, thinking, and feeling.

Figure. Three artificial intelligences.


Mechanical artificial intelligence is used for simple, standardized, repetitive, and routine tasks. An example of a civil application of mechanical intelligence is the use of service robots to clean hotel rooms, replacing humans in routine and standardized tasks. Thinking artificial intelligence is used for complex, systematic, rule-based, and well-defined tasks. An example is Boston Dynamics' quadrupedal robots, which are highly adaptable, versatile, and capable of capturing the mobility, autonomy, and speed of living creatures. Feeling artificial intelligence is used for social, emotional, communicative, and interactive tasks. Due to the limited empathic and socioemotional capacity of machines, we believe that feeling artificial intelligence in the military domain should be performed by human beings. This argument is justified by the current state of social and emotional evolution of intelligent machines, which has not yet reached the desirable stage. For instance, it is difficult to imagine a lethal autonomous weapon system in a combat situation with the socioemotional ability to decide whether to kill a child holding a weapon. Such complex and socioemotional tasks require skills that are currently at the limit of the human being; some activities can eventually be shared with intelligent machines, but within the decision-making process. We will hardly see a robot crying for a human at its current stage of evolution, whereas the opposite is already true. For example, when Boomer “died” in Iraq, American soldiers offered him an impromptu military funeral; Boomer was not a human being but a robot whose job was to search for and defuse bombs. Perhaps due to examples such as this one, robots have been increasingly integrated into and accepted by military teams.
It is likely that the types of intelligence are also related to the types of decisions, as we will see in the Results section.
Laudon and Laudon classified decisions as structured, semi-structured, and unstructured. According to these authors,
unstructured decisions are those in which decision-makers must evaluate and decide about something that is new, not
routine, and for which there is no previously defined procedure. These decisions are made at the strategic level of the
organization. In contrast, structured decisions are repetitive and routine, where established rules and accepted procedures
are previously defined. Decisions of this type are taken at the operational management level. Intermediate decisions are
semi-structured, where only part of the problem has a clear answer defined by a well-accepted procedure.
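
Purely as an illustration of the mapping just described (our own sketch, not code from the studies cited), the association between decision types, the level at which they are typically taken, and the kind of intelligence the text suggests for them can be written as a small lookup table:

```python
# Illustrative sketch only: a hypothetical mapping of decision types to the
# levels at which they are taken and the kind of artificial intelligence
# suggested in the discussion above.
from enum import Enum

class DecisionType(Enum):
    STRUCTURED = "structured"            # repetitive, routine, rule-based
    SEMI_STRUCTURED = "semi-structured"  # only part of the problem has a set procedure
    UNSTRUCTURED = "unstructured"        # novel, no predefined procedure

# Hypothetical association table based on the text:
# structured -> tactical/operational level -> mechanical AI,
# semi-structured -> intermediate management -> thinking AI,
# unstructured -> strategic level -> human (feeling) intelligence.
SUGGESTED_INTELLIGENCE = {
    DecisionType.STRUCTURED: ("tactical/operational", "mechanical AI"),
    DecisionType.SEMI_STRUCTURED: ("intermediate management", "thinking AI"),
    DecisionType.UNSTRUCTURED: ("strategic", "human (feeling) intelligence"),
}

def recommend(decision: DecisionType) -> str:
    level, intelligence = SUGGESTED_INTELLIGENCE[decision]
    return f"{decision.value} decision at the {level} level -> {intelligence}"

if __name__ == "__main__":
    for d in DecisionType:
        print(recommend(d))
```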

[h]Military Applications of Autonomous Intelligent Systems


The defense industry is increasingly conducting autonomous defense systems research and development in several
domains: space, cyberspace, air, sea, and land. In that regard, we present an overview of military applications of autonomous defense systems in each of the aforementioned domains:

 Space robotics and autonomous intelligent systems;
 Autonomous intelligent cyber-defense agents;
 Intelligent unmanned autonomous systems—in the air, at sea, and on land.

Space robotics autonomous intelligent systems are machines capable of operating in space and carrying out exploration in adverse environments that are not found in the natural conditions of Earth. In general, space robots are divided into two types: orbital robots, characterized by robotics in low Earth orbit or in geostationary orbit, and planetary robots, which are capable of closely examining extraterrestrial surfaces. As mentioned by Gao et al., depending on the application, space robots are often designed to be mobile and to manipulate, grab, drill, and/or collect samples, such as the National Aeronautics and Space Administration's recent Mars 2020 Perseverance rover. These robots are expected to have several levels of autonomy, from tele-operation by humans to fully autonomous operation. Depending on the level of autonomy, a space robot can act as: a robotic agent that can perform tele-operation up to semi-autonomous operations; a robotic assistant, which can help human astronauts and ranges from semi- to fully-autonomous operation; or a robotic explorer capable of exploring unknown territory in space using fully autonomous operation. As an exemplary method, Giordano et al. is presented as a validating benchmark. They reported an efficient fuel control strategy for a spacecraft equipped with a manipulator. Their key findings are associated with the strategy of using the thrusters, the reaction wheels, and the arm drives in a coordinated way to limit the use of the thrusters and achieve the ideal of zero fuel consumption in non-contact maneuvers. The authors were able to validate the method via a hardware-in-the-loop simulator composed of a seven-degrees-of-freedom arm mounted on a simulated six-degrees-of-freedom spacecraft. Some military space robotics autonomous intelligent systems are related to the spy satellites of some military powers and to space-based missile defense systems.
Autonomous cyber defense is an area that has been driven by the defense sector in anticipation of threats to military
infrastructures, systems, and operations. These systems will be implemented through autonomous and intelligent cyber-
defense agents that will fight against intelligent autonomous malware and are likely to become primary cyber fighters on
the future battlefield.
The popularity of advanced applications in the domains of air, land, and sea has also been steadily increasing with the
intelligent unmanned autonomous systems, which can perform operations without human intervention with the help of
artificial intelligence. In that regard, intelligent unmanned aircraft systems have aimed at autonomous flight, navigation,
sensory, and decision-making capabilities above conventional unmanned aircraft systems or unmanned aerial vehicles.
Currently, unmanned aircraft systems and unmanned aerial vehicles have often been used for intelligence, surveillance, and reconnaissance activities or to carry out attack missions against high-value targets. In addition, several methodologies have targeted these systems, such as Jourdan et al., who designed an approach to mitigate common unmanned aerial vehicle failures, including primary control surface damage, airframe damage, and complete engine failure. Intelligent unmanned aircraft systems are beginning to be developed mainly in the civil context, for example in package delivery, agriculture, and agroindustry, to name a few. The intelligent unmanned maritime and underwater systems have pushed the
technology beyond imaginable limits in order to deal with complex ocean and sea missions. These systems present new
opportunities for naval use, in particular for dangerous missions, such as highly efficient mine sweeping. Recognizing the
potential of autonomous underwater vehicles for both science and the military, in 1997 the Massachusetts Institute of
Technology and the North Atlantic Treaty Organization joined efforts to develop robotic technologies applicable to mine
countermeasures. These systems have evolved with the use of disruptive technologies until today. For example, Sands
developed very relevant methodologies in this regard. He studied deterministic artificial intelligence for unmanned
underwater vehicles, proposing an automated control and learning methodology that requires simple user inputs.
Noteworthy is also the attention that has been given to unmanned ground vehicles. Of these, an interesting example is the
BigDog, developed by Boston Dynamics, which is a four-legged robot for transporting loads in difficult terrain and has
been adapted with autonomous navigation. Moreover, mathematical structures and expectation-maximization and Gaussian
mixture models algorithms have been developed. Besides its applications in robotics and unmanned ground vehicles, these
algorithms can also be used in various domains such as cybersecurity, object detection, and military logistics. The practical
application of these algorithms has resulted in modern technology that can be applied on the battlefield, as presented by
Bistron and Piotrowski, who give examples of solutions such as Spot and Atlas (also developed by Boston Dynamics).
In addition to the potential for civil society, much is speculated about the use of these autonomous robots in the revolution
of military ground operations.
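
For readers unfamiliar with the expectation-maximization and Gaussian mixture model algorithms mentioned above, the following minimal sketch (using scikit-learn on synthetic data; an illustration only, not code from the works cited) fits a two-component GMM to simulated 2-D detections:

```python
# Illustrative only: fitting a Gaussian mixture model with expectation-maximization
# to synthetic 2-D points, e.g., clustering candidate object detections.
# Requires numpy and scikit-learn; not taken from the cited works.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic clusters standing in for two groups of detections.
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
cluster_b = rng.normal(loc=[3.0, 3.0], scale=0.7, size=(100, 2))
points = np.vstack([cluster_a, cluster_b])

# EM is run internally by fit_predict(); two components are assumed here.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(points)

print("Estimated means:\n", gmm.means_)
print("First ten component assignments:", labels[:10])
```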
This study was carried out as a systematic literature review, based on the original guidelines proposed by Moher et al. This research strategy is well justified by Fink, who argues that it is: systematic, since it embraces a methodological approach; explicit, as all data collection procedures are described; and comprehensive, as it brings together a wide range of scientific knowledge. The search was undertaken in Elsevier's Scopus, one of the world's largest abstract and citation databases of peer-reviewed literature. By choosing Scopus over other scientific databases and/or academic search engines (e.g., Google Scholar), this article takes advantage of greater transparency and easier replicability of results, superior coverage of journals in the fields of applied and technological sciences, peer-reviewed articles (which increase the quality of the results and the credibility of the research), wider application of filters, and immediate generation of search results in graphs and tables. Based on these advantages, the use of a single scientific database has been accepted by the academic community. Following the assumptions of Moher et al., this research uses the PRISMA Statement, which consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items considered essential for the transparent reporting of a systematic review. As Page points out, each item on the checklist is accompanied by an “explanation and elaboration” providing additional pedagogical justification and guidance, along with examples to demonstrate complete reporting. Adoption of the guideline has been extensive, as indicated by its citation in tens of thousands of systematic reviews and its frequent use as a tool to assess the integrity of published systematic review reports. The search was initially conducted on 14 April 2021 by identifying documents with the terms “Defense Industry” and “Intelligent Systems” in all search fields.
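
As a rough illustration of how such a four-phase PRISMA flow (identification, screening, eligibility, inclusion) can be tracked programmatically, the sketch below uses invented placeholder counts and an illustrative Scopus-style query string, not the actual figures or syntax of this study:

```python
# Minimal sketch of a PRISMA-style four-phase record count tracker.
# All record counts and the query string are hypothetical placeholders.

def prisma_flow(identified, duplicates, excluded_on_screening, excluded_on_eligibility):
    """Return the number of records surviving each PRISMA phase."""
    screened = identified - duplicates
    assessed = screened - excluded_on_screening      # full-text eligibility assessment
    included = assessed - excluded_on_eligibility    # studies included in the synthesis
    return {
        "identification": identified,
        "screening": screened,
        "eligibility": assessed,
        "included": included,
    }

if __name__ == "__main__":
    query = 'TITLE-ABS-KEY("Defense Industry" AND "Intelligent Systems")'  # illustrative only
    counts = prisma_flow(identified=250, duplicates=30,
                         excluded_on_screening=150, excluded_on_eligibility=40)
    print(query)
    for phase, n in counts.items():
        print(f"{phase}: {n}")
```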

[h]American satellite surveillance systems

Weapons System 117L - Weapons System 117L (WS-117L) was the first program designed to develop space-based reconnaissance satellite systems. Several satellite systems would be developed through this program, including Corona, the Satellite and Missile Observation System (SAMOS), and the Missile Defense Alarm System (MIDAS).

SAMOS - SAMOS, the Satellite and Missile Observation System, was one of the first of a series of short-lived satellite systems developed by the United States, operated from October 1960 to November 1962. The SAMOS satellite system was one of the first systems developed through the WS-117L program. The system consisted of three primary components: visual reconnaissance, communications, and electronic intelligence gathering. Of the eleven launches attempted, eight were successful. The program was likely cancelled because of poor image quality; the satellite could produce photos with only 100-foot resolution. It was also heavily overshadowed by the Corona system operated by the CIA.

MIDAS - MIDAS, the Missile Defense Alarm System, was a satellite system operated by the United States between 1960 and 1966. The system was designed to consist of twelve satellites, although only nine were successfully launched. It was designed to provide early warning of Soviet missile launches. The system was eventually cancelled because of its high cost and slow warning times; it was, however, directly responsible for technologies used in its successors.

Corona - Corona was among the first of the reconnaissance satellite systems developed through the WS-117L program. The Corona program was headed by the Central Intelligence Agency along with the Air Force. Corona satellites were used to photograph Soviet and other installations. The first successful Corona mission began on August 10, 1960. The Corona program was expedited in large part because of the U-2 incident of May 1960. All of the Corona missions made use of photographic film, which had to survive re-entry through the atmosphere and be recovered; only the much later KH-11 Kennan program would move away from film.

Keyhole - Keyhole was the designation for the initial Corona launches, which included KH-1, KH-2, KH-3, KH-
4A, and KH-4B. The name was used because it is analogous to spying through the keyhole of a door. 144
satellites would be launched through this program, and 102 returned usable photos.

Keyhole/Argon (KH-5) - Argon was the designation of surveillance satellites, manufactured by Lockheed, used by the United States from February 1961 to August 1964. Argon made use of photographic film in a way similar to the original Corona satellites. Of the twelve known launches, seven were unsuccessful for varying reasons.

Keyhole/Lanyard (KH-6) - Lanyard was the designation for the first, albeit unsuccessful, attempt to establish a high-resolution optical satellite system. The program was led by the newly established National Reconnaissance Office and operated from March to July 1963. Because it was only able to achieve a resolution similar to that of the KH-4 satellites, it was discontinued after only three launches.
Keyhole/Gambit (KH-7) - Gambit was the next in the series of satellites operated by the United States, through the National Reconnaissance Office, from July 1963 to June 1967. It was among the first successful high-resolution reconnaissance missions, producing some of the first high-resolution photos, typically of Soviet and Chinese missile emplacements. Gambit satellites made use of a three-camera system, and missions typically lasted up to eight days.

Keyhole/Gambit III (KH-8) - The Gambit III satellite system was among the longest-serving satellite reconnaissance programs operated by the United States during the Cold War, operating from July 1966 to April 1984. Of the fifty-four launch attempts, only three failed, all of them attributed to rocket failure. The average mission time of the Gambit III satellites was thirty-one days. The Gambit III satellites differed from the Gambit I satellites in that Gambit III had a four-camera system, carried over twelve thousand feet of film, and was able to produce resolutions as fine as four inches.

Keyhole/Hexagon (KH-9) - The Hexagon satellite system, commonly known as Big Bird, was operated from 1971 to 1986. The Hexagon system was officially known as the Broad Coverage Photo Reconnaissance satellite. These satellites photographed large areas of the Earth at a time with moderate resolution and were also used for mapping missions, which supported map making. Three Hexagon missions also included electronic intelligence (ELINT) gathering capabilities, used to eavesdrop on Soviet communications and Soviet missile launches. Twenty launches were attempted; only one was unsuccessful. These satellites operated for increasing durations, with the longest mission lasting 275 days.

Keyhole/Kennan (KH-11) - The KH-11 Kennan, which goes under the code name Crystal, was first launched in
1976, and missions are still ongoing. According to leaked documents, this program is currently operating under
the code name Evolved Enhanced Crystal. The Kennan satellite system was the first satellite system to use
electro-optical imaging, which gives real time imaging capabilities. The resolution of these satellites is estimated
to be as low as 2 inches. These satellites are unique in that they have been placed in Sun-synchronous orbits,
which allow them to use shadows to help discern ground features. These satellites became famous in 1978 when
a CIA employee tried to leak the design of the satellite to the Soviets. He was tried and convicted of espionage.

Keyhole/Improved Crystal (KH-12) - The later KH-11 satellites have been called KH-12, Improved Crystal, or Ikon satellites. These satellites also have improved downlink capabilities, which allow for faster processing of photos. It is also suspected that these satellites may have stealth technology to avoid detection by other satellites.

Vela - The Vela satellite system was developed by the United States to ensure that the Soviet Union complied
with the Partial Test Ban Treaty which was ratified in 1963. The satellites were designed to monitor for nuclear
explosions in space and in the atmosphere by measuring for neutrons and gamma rays. A total of twelve Vela
satellites were launched during the course of the Cold War. The Vela satellites became publicly famous because
of the Vela Incident which occurred on 22 September 1979. It was theorized at the time that it was a nuclear test
conducted by South Africa and Israel, but new evidence does not support this theory. They have also been used
in the study of Gamma-Ray Bursts.

[h]American surveillance aircraft

U-2 – The U-2 aircraft, known as the "Dragon Lady", was developed by the Lockheed Skunk Works, and was
first flown on August 1, 1955. The aircraft was initially flown by the CIA, but control was later transferred to the
Air Force. The aircraft was designed to fly at altitudes of 70,000 ft. The U-2 was equipped with a camera which
had a resolution of 2.5 feet at an altitude of 60,000 ft. The first overflights of the Soviet Union by the U-2 began
in May 1956. The aircraft became publicly prominent after a U-2 flown by Francis Gary Powers was hit by an SA-2 missile and brought down over the USSR in May 1960.

SR-71 – The SR-71 Blackbird is a long-range reconnaissance aircraft developed by Lockheed Skunk Works, and
designed by Kelly Johnson. The aircraft is known for setting speed and altitude records. The SR-71 was equipped
with optical and infrared imaging systems, electronic intelligence gathering systems, side looking airborne radar,
and recorders for those systems. The aircraft carried a crew of two: a pilot, and a Reconnaissance Systems Officer (RSO) to operate the systems.

[h]Soviet satellite surveillance systems

Zenit (Zenith) - The Zenit satellite program was a reconnaissance satellite program developed by the Soviet Union and used from 1961 to 1994. Over 500 Zenit satellites were launched during its lifespan, making it the most used satellite system ever. The Soviets concealed the nature of the system by giving the satellites the Kosmos designation. The Zenit satellites had an advantage over the satellites developed by the United States in that they could be reused. A total of eight variants would be developed for a variety of mission types, ranging from high-
resolution photography to ELINT to cartographic and topographic missions.

Yantar - The Yantar satellites, designed to replace the Zenit satellite system, are among the newest Russian
satellites. Yantar satellites have been launched as recently as 2015. Two variants of the satellite have been
developed for high-resolution photography and for medium-resolution broad spectrum imaging.

Since the 1990s, new memoirs and archival materials have opened up the study of espionage and intelligence during the Cold War. Scholars are reviewing how its origins, course, and outcome were shaped by the intelligence activities of the United States, the Soviet Union, and other key countries. Special attention is paid to how complex images of one's adversaries were shaped by secret intelligence that is now publicly known.

The Soviet Union proved especially successful in placing spies in Britain and West Germany. It was largely unable to repeat its successes of the 1930s in the United States. NATO, for its part, also had a few successes of importance, of whom Oleg Gordievsky was perhaps the most influential. He was a senior KGB officer who became a double agent on behalf of Britain's MI6, providing a stream of high-grade intelligence that had an important influence on the thinking of Margaret Thatcher and Ronald Reagan in the 1980s. He was betrayed by Aldrich Ames, a CIA officer who was spying for the Soviets, but he was successfully exfiltrated from Moscow in 1985. His biographer Ben Macintyre argues he was the West's most valuable human asset, especially for his deep psychological insights into the inner circles of the Kremlin. He convinced Washington and London that the fierceness and bellicosity of the Kremlin were a product of fear and military weakness rather than an urge for world conquest. Thatcher and Reagan concluded they could moderate their own anti-Soviet rhetoric, as successfully happened when Mikhail Gorbachev took power, thus ending the Cold War.

Chapter 4: "Revolutionizing Warfare: The Rise of Modern UAVs"


[mh]Gulf War Innovations: UAVs
Modern warfare strategies and the technological capabilities of the latest war technology are closely tied to planning for contemporary war. In recent years, one of the most notable developments has been the military drone. Modernized drones are becoming vital tools for surveillance, intelligence gathering, and, occasionally, offensive action.

This article outlines the development of drones as weaponry and examines their role in the combat of tomorrow, touching on some of the most sophisticated devices now in existence: futuristic military drones.

Under these circumstances, the whole process of intelligence gathering by military agencies has undergone a radical transformation; what is more, drones substantially influence special-mission operations. These units are self-operated and can be managed remotely, which makes them well suited to modern warfare.

[h]Landscapes of Modern Day Military Drones

Modern warfare is now incomplete without military drones, which are important for many kinds of military tasks. They are suitable for conducting real-time information-gathering missions and precise pinpoint strikes.

Upgraded drone models are crafted to address the requirements of modern warfare and incorporate up-to-date sensors and powerful armament. The application of futuristic military drones has grown with today's demands: these machines offer longer flight times, greater durability, very low observability, and some of the most potent weaponry available to an army.
[h]Drones and Military: A Symbiotic Relationship

The synergy between drones and military operations yields tactical and strategic benefits that make drones crucial tools. They play an essential role in observing adversary activities and identifying hazards on site. This valued information helps decision-makers make sound decisions while avoiding the risks of putting soldiers in the field.

In addition, drones allow the military to accomplish essential objectives within seconds and with maximum precision. The use of military drones coupled with guided munitions reduces possible danger to civilians while maintaining a high level of security.

[h]The Role of Drones in the Near Future


AI-Powered Autonomous Drones:

Military drones will increasingly include artificial intelligence intended to expand their capabilities. AI algorithms allow these drones to process large amounts of information in flight, detect patterns, and respond immediately. This lets the drones adapt to changing environments and ultimately enhances the efficiency and flexibility of operations on the battlefield.

Swarm Technology Advancements:

Swarm technology, in which multiple advanced drones operate independently yet in harmony, can be employed almost anywhere. Research is under way on improving swarm intelligence so that drones can cooperate, coordinate, and synchronize with one another; a simple cohesion rule of the kind studied in this area is sketched below. Swarm technology improves scalability, and armed forces can use this ability to overwhelm an adversary with many drones at once, gaining a tactical advantage.
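
The cohesion-and-separation idea behind such coordination can be illustrated with a toy rule (our own sketch, not a description of any fielded system): each simulated drone moves toward the swarm centroid while keeping a minimum distance from its neighbors.

```python
# Illustrative sketch of a simple cohesion/separation rule for a drone swarm.
# This is a toy model, not a description of any operational military system.
import numpy as np

def step(positions, cohesion_gain=0.05, min_separation=1.0, repulsion_gain=0.1):
    """Move each agent toward the swarm centroid while avoiding close neighbors."""
    centroid = positions.mean(axis=0)
    new_positions = positions + cohesion_gain * (centroid - positions)
    # Simple pairwise repulsion to keep a minimum separation.
    for i in range(len(new_positions)):
        for j in range(len(new_positions)):
            if i == j:
                continue
            offset = new_positions[i] - new_positions[j]
            dist = np.linalg.norm(offset)
            if 0 < dist < min_separation:
                new_positions[i] += repulsion_gain * offset / dist
    return new_positions

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    swarm = rng.uniform(-10, 10, size=(8, 2))   # eight drones in a plane
    for _ in range(50):
        swarm = step(swarm)
    print("Final spread (std of positions):", swarm.std(axis=0))
```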

Energy Efficiency and Extended Flight Durations:

Improvements in battery technology and more efficient use of energy have extended flight durations, long one of the main hurdles to wider adoption. This progress allows greater range, prolonged surveillance, and longer operational durations, increasing the value of unmanned vehicles in war operations.

Adaptation to Urban Warfare:

To cope with the urban settings of conflict zones, military drones will be equipped with sophisticated navigation systems that help them avoid obstacles. Engineers are working on capable, innovative systems that let a drone fly through tricky areas such as urban canyons. This is important because conventional means of reconnaissance and surveillance fall short in urban environments.
Interoperability and Collaboration with Other Technologies:

Military drones can also be combined with other state-of-the-art technologies. Drones may work with ground-based robots, satellites, and various sensors to form a large network of information-gathering devices. Sharing interoperable situational awareness between military forces supports decision-making across disparate data sources.

Next-generation combat drones combine AI-driven autonomy, high energy efficiency, and adaptation to urban warfare to keep pace with contemporary combat contexts.

[h]Ethical Considerations and Future Challenges

As we welcome the development of combat drones and their crucial role in modern warfare, it is imperative to
consider ethical implications and prepare for future challenges:

Ethical Implications:

The use of autonomy in weapon systems raises ethical issues regarding liability, command and control, and compliance with the law. The ethics of using combat drones demand, on most occasions, close scrutiny of the balance between military necessity and humanity.

Cybersecurity Threats:

Like any other military equipment, drones are not exempt from cyber warfare. Hacking can disrupt operators and compromise military operations, with potentially fatal results. If next-generation drones are to remain viable, their systems must be guarded against cyber threats.

International Regulations:

The world needs proper rules and guidelines to build effective international consensus on conduct in the use of military technology such as drones, so that future military drones do not violate the human rights conventions reflected in international treaties.

[mh]Ethical Debates and Legal Challenges: The Beginnings of UAV Controversies

[h]Artificial Intelligence and Blockchain: Debate around Legal Challenges

Recent discussions have revolved around the need to regulate the AI sphere itself and set limits, so as to prevent so-called artificial general intelligence from developing unchecked. Blockchain, meanwhile, is becoming popular, and continuing attempts to implement it have, curiously, given confidence to the digital society of the future, although it too poses serious challenges.

Although artificial intelligence (AI) will pose challenges in its application, we must not lose sight of blockchain, which, while focused on validation, permanence, and achieving higher levels of certainty, control, and trust, will face the challenge of acting together with AI; that is, blockchain has the mission of generating trust and transparency and of acting as a mediator, and it will therefore face the challenge of making it possible for AIs to act and connect with one another.
However, we cannot forget commercialized cloud (computing) services either. The cloud has become the perfect setting for AI, and also for blockchain, since for the development of these applications not only is significant computing capacity necessary during learning

[h]Artificial intelligence: aspects to consider

AI-based systems can simply consist of software (e.g., voice assistants, image analysis programs, search engines,
voice and facial recognition systems), but AI can also be embedded in hardware devices (e.g., advanced robots,
self-driving cars, drones, or Internet of Things applications).

AI is used daily, for example, to translate from one language to another, generate subtitles in videos, or block
unsolicited email (spam). Far from being science fiction, AI is already part of our lives, in the use of a personal
assistant to organize our workday, in travel by self-driving vehicle, or in the songs or restaurants
suggested by our phones.

AI is about developing systems capable of solving problems and performing tasks by simulating intellectual
processes. The AI can be taught to solve a problem, but it can also study the problem and learn how to solve it
by itself without human intervention. Different systems can achieve different levels of autonomy and can act independently. In this sense, their operation and results can be unpredictable, since these systems function as “black boxes”.

Today there are various definitions of artificial intelligence, but none has been universally accepted, a fact that leads us to the first challenge: producing a timeless, general, and at the same time robust definition of AI, especially when one thinks of AI and its normative regulation.

An issue cannot be regulated without a solid definition of what is being regulated. It is therefore essential to establish a generally accepted definition of AI that is common and flexible and does not hinder innovation, considering that AI is becoming ever more sophisticated.

The principles enunciated by UNCITRAL in its Model Laws on Electronic Commerce and on Electronic Signatures, which establish basic procedures and principles to facilitate the use of modern techniques for recording and communicating information in various circumstances, can serve as a starting point: nondiscrimination, neutrality with respect to technical means, and functional equivalence. These principles are widely recognized as fundamental elements of electronic commerce and are reflected in the requirements that electronic communications must meet.

In this way, a common European definition must be established, including definitions of its subcategories, taking into account the following characteristics:

1. the ability to acquire autonomy through sensors and/or through the exchange of data with its environment (interconnectivity) and the analysis of that data;
2. the ability to learn through experience and interaction;
3. the form of the physical support of the robot;
4. the ability to adapt its behavior and actions to the environment.

[h]Debate of regulations

In our opinion, we must consider a series of methodological and substantive questions regarding the regulation of technology in general and of robotics and AI in particular. In the first place, it is worth asking whether sufficient arguments can be identified to accommodate these new technologies and therefore justify a change in the existing legal framework. That is, are existing laws sufficient to meet the regulatory challenges of the technology? If not, should some laws be adapted to include the new technology, generally by making the language of the law more technology-neutral, or should sui generis laws be enacted instead? Under the existing administrative regulations, a lack of interoperability is all but guaranteed.

As we know, interoperability is a very important issue. By this we mean that the processes, technologies, and protocols required to ensure the integrity of data and the identity of the citizen, when these are transferred from one system to another, must by definition entail correct interconnection of systems and data exchange. However, this is not always achieved; consider, for example, the right of access.

At the technical level, although standards abound, the lack of common basic standards for some technologies must be highlighted. At the legal level, laws that prescribe a specific predominant technology are pointed out as factors impeding progress, along with the difficulty those responsible have in understanding each other's frameworks of mutual trust, and even the rules on liability and compensation. An example of the interoperability problem is, and sometimes continues to be, the situation at the regional level in Spain and in the EU: state authorities throughout Europe offer electronic access focused above all on national needs and national media, which has generated a complex system of divergent solutions, giving rise to new obstacles to cross-border exchange that hamper the functioning of the single market for companies and citizens.

Second, it is necessary to clarify the direct and indirect role that ethics can play in the regulation of technology. The impact of AI is cross-border, and the EU is realizing this, trying to regulate the sphere and establish limits. The debates point to 11 areas: ethics; security; privacy; transparency and accountability; work; education and capacity development; inequality and inclusion; legislation and regulation; governance and democracy; war; and superintelligence.

These debates are justified and should be taken into account. However, they are part of a larger problem, related both to society's insufficient understanding of artificial intelligence, which makes trust difficult, and to the current laws, which have not yet recognized the specific characteristics of artificial intelligence. In this way, the need arises to analyze and deepen an ethical framework so that both citizens and companies can trust the technology with which they interact, operate in a predictable legal environment, and have an effective guarantee that their fundamental rights and freedoms will be protected.

Let us think that technologies based on artificial intelligence influence aspects such as health, safety,
productivity, or leisure, and in the medium term, they will have a great impact on energy, transport, education,
and domestic activities. Regarding education, it is essential to find new models and methodologies that integrate
ethical concerns in relation to the impact of artificial intelligence on humanity, especially in everything related to
security, freedom, privacy, integrity, and dignity; self-determination and nondiscrimination, and the protection of
personal data.

The complexity of AI entails the need to create an ethical and efficient framework, which must be based on the principle of transparency: it must always be possible to justify any decision that has been adopted with the help of artificial intelligence and that can have a significant impact on the life of one or more people. In addition, it should always be possible to reduce the calculations of the AI system to a form understandable to humans.

Blockchain is a distributed information processing technique on which different treatments and business models can be implemented. A blockchain is a large database that is distributed among the various nodes that participate in the chain. It functions as an immutable logbook that contains the complete history of all transactions that have been executed on the network. These nodes are connected in a decentralized network, without a main computer, in what are called P2P (peer-to-peer) networks, which communicate with each other using the same language and transmit messages called tokens. A P2P network is a computer network in which all or some aspects work without fixed clients or servers; instead, a series of nodes behave as equals, acting simultaneously as clients and servers with respect to the other nodes of the network and allowing the direct exchange of information in any format between the interconnected computers. Usually, this type of network is implemented as an overlay network built in the application layer of public networks, as in the case of the Internet. A token (a symbol or signal) is a representation of the information that the network contains. The information travels encrypted and, due to this, is distributed without revealing its content. As the number of transactions grows, the chain of blocks grows, and each block has its own digital footprint. The scope of the technology is immense: platforms such as Ethereum could replace basically any intermediary, substituting products and services that depend on third parties with totally decentralized alternatives.
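
To make this description concrete, the following minimal sketch (Python, standard library only; the transaction strings and field names are purely illustrative) chains blocks by storing, in each block, the previous block's digital footprint together with its own. Real networks such as Ethereum add peer-to-peer propagation, consensus, and cryptographic signatures on top of this basic structure.

    import hashlib, json, time

    def block_hash(block):
        # The fingerprint covers the whole block except its own hash field.
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def new_block(transactions, previous_hash):
        block = {
            "timestamp": time.time(),
            "transactions": transactions,       # the messages/tokens recorded in this block
            "previous_hash": previous_hash,     # link to the preceding block's fingerprint
        }
        block["hash"] = block_hash(block)
        return block

    # Genesis block plus two further blocks: the chain grows as transactions accumulate.
    chain = [new_block(["genesis"], previous_hash="0" * 64)]
    chain.append(new_block(["A pays B 5"], previous_hash=chain[-1]["hash"]))
    chain.append(new_block(["B pays C 2"], previous_hash=chain[-1]["hash"]))
    print(chain[-1]["hash"])

Because every block embeds the fingerprint of its predecessor, altering an old transaction changes that block's fingerprint and breaks every later link in the chain.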

[h]Types

Depending on the permissions required to be part of a blockchain, three categories can be distinguished:

1. Public: Anyone can download the necessary programs on their computer, set up a node, and participate in the consensus process; anyone who is a party can send transactions through the Internet, which will be included in the blockchain.
2. Federated or consortium: These blockchains do not allow just anyone to configure a node on their PC and participate in the transaction validation process, since access permission is needed, which is usually granted to the members of a certain group, for example, a group of financial entities.
3. Private: In these blockchains, the authorizations to carry out transactions are granted by private organizations, which determine the conditions under which they will allow the transactions carried out to be read.

[h]Features

1. Immutability. Nobody can alter or delete the data in the registry or add new content without validation. When a transaction occurs, all nodes on the network have to agree that it is valid, or it will not be added to the record (a minimal validation sketch follows this list).
2. Decentralization. There is no single person or governing authority that reviews the framework.
3. Origin. All nodes can verify the moment at which a certain asset was registered in the blockchain, who its first owner was, and all the subsequent changes of ownership that have occurred up to that moment.
4. Security. All registry data are strongly encrypted using cryptography based on complex mathematical algorithms.
5. Each block has a unique hash ID (fingerprint), and changing it without detection is computationally infeasible.
6. To carry out a transaction in the blockchain, the use of public and private keys is necessary.
7. Distributed ledger. Due to the nature of the technology, all nodes maintain the registry; therefore, the computational power is distributed among them, and the nodes act as verifiers.
8. Consensus. It is a determining factor. Without consensus, the system does not work. For the information contained in a block to be considered valid, all participants must agree, and the network developer needs to implement some kind of consensus algorithm.
9. Speed. It offers faster results; for example, a transaction might take only a few minutes to complete.
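
The following minimal sketch (Python, standard library only; the five-node network and the unanimity rule are illustrative assumptions rather than a real consensus protocol) brings features 1, 5, and 8 together: every node recomputes a candidate block's fingerprint and checks its link to the previous block, and the block is appended only if all participants agree, so a block tampered with after fingerprinting is rejected.

    import hashlib, json

    def fingerprint(block):
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def node_validates(block, previous_block):
        # Each node independently checks the link and the fingerprint (features 1 and 5).
        return (block["previous_hash"] == previous_block["hash"]
                and block["hash"] == fingerprint(block))

    def network_accepts(block, previous_block, n_nodes=5):
        # Feature 8 (toy version): the block is appended only if every participant agrees.
        return all(node_validates(block, previous_block) for _ in range(n_nodes))

    genesis = {"transactions": ["genesis"], "previous_hash": "0" * 64}
    genesis["hash"] = fingerprint(genesis)
    candidate = {"transactions": ["A pays B 5"], "previous_hash": genesis["hash"]}
    candidate["hash"] = fingerprint(candidate)

    print(network_accepts(candidate, genesis))      # True: all nodes agree, block is added
    candidate["transactions"] = ["A pays B 500"]    # tampering after the fingerprint was taken
    print(network_accepts(candidate, genesis))      # False: the altered block is rejected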

[h]Artificial intelligence and blockchain: the need for a joint study

Recent developments in AI are the result of increased processing power, improvements in algorithms, and
exponential growth in the volume and variety of digital data. Many AI applications have begun to enter our daily
lives, from machine translations to image recognition to music generation, and are increasingly being
implemented in industry, government, and commerce. Connected and autonomous vehicles and AI-supported
medical diagnostics are soon-to-be-common application areas.

Now, for this to happen, there must be communication between machines that can be validated with a high level of certainty and control. For this reason, we consider that joint action between AI and blockchain, even if it is not yet obvious that it must take place, will be important. In fact, it is an issue that is being debated more and more globally.

In this context, a question of trust, transparency, reliability, speed, and effectiveness in automatic electronic
transactions arises.
The emergence of new traceability and authentication systems, such as blockchain, can make it possible to record
assets, transactions, and participants, which can provide valuable information about origin and history.

In this way, solutions based on blockchain can enable the rapid detection, within the system itself, of possible illicit or defective actions, including the diversion of products to illicit markets. It is thus evident that the deployment of digital technologies such as blockchain is key to the development of AI.

In this context, Regulation (EU) 2018/1807 of the European Parliament and of the Council, of November 14, 2018, on a framework for the free flow of non-personal data in the European Union, arises from the need to establish administrative cooperation based on the review of the European Interoperability Framework; we cite this standard, however, only to show that such development is already a reality.

Now, it should be borne in mind that the current regulations have not yet recognized the specific characteristics of the contracts that may arise, nor of blockchain technology or artificial intelligence, although it is true that, in the cloud (computing) environment, some work has been developed within UNCITRAL.

For example, in the absence of direct legal regulation of AI, article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts is relevant: it establishes that a person (either a natural or a legal person) on whose behalf a programmed computer was used should be responsible for any messages generated by the machine. However, in the explanatory note, UNCITRAL makes it clear that this article is an enabling provision and should not be misunderstood as transforming an automated message system or a computer into a subject of rights and duties. Electronic communications that are generated automatically by a computer or messaging system without human intervention should be interpreted as “coming from” the legal entity on behalf of which the computer or messaging system operates. The issues relating to the subject of the action that could arise in this context must be settled in accordance with rules outside the Convention, which returns us to the previous point. In any case, in our opinion, it is of vital importance to establish a legal framework, especially in an international context, in which its real applicability is included.

To give security to the transactions to be carried out, blockchain arises as a decentralized technology; however, it entails some legal uncertainties, such as the legal nature of blockchains and shared digital records, which includes problems of judicial jurisdiction and applicable law. Each network node may be located in a different place, and there is no “central party” responsible for the digital registry whose nationality could serve as a basis for regulation.

As we say, as the Internet becomes part of everyday life, the need arises to study the adaptation of private international law systems to the new demands. For this reason, our intention is to analyze, deepen the debate, and respond to the problems succinctly raised, in order to contribute, as far as possible, to giving certainty to responsibility, due diligence, and contracts on artificial intelligence systems, as well as to the legal condition of artificial intelligence, the attribution of its acts of legal significance, and the use of blockchain technology in the formation of smart contracts, to mention some pertinent issues, from a private international law perspective.

[h]The existing international law

The agreements attributing jurisdiction included in the underlying contract are relevant to the extent that they are
effective in accordance with the rules of our private international law system (in principle, in accordance with the
provisions of Regulation 1215/2012 – Brussels Reg. I bis), which significantly restricts its operability in
transactions with consumers.

Aside from these transactions, it will be necessary to observe the eventual incidence of special rules such as article 25.2 of the Brussels I bis Regulation with respect to the written form in electronic contracting, a precondition for the effectiveness of the agreements attributing jurisdiction. Likewise, the situations in which the special jurisdiction of article 7.1 of Regulation 1215/2012 becomes applicable must be observed, and it may be controversial to what extent the automation of certain performances conditions the determination of the place of fulfillment of the obligation for the purposes of that rule.

In relation to the applicable Law, we are going to bear in mind issues that have to do with the meaning of the
automated fulfillment of certain commitments and its interaction with the underlying relationship between the
parties. The basic criterion is that the provisions of the Rome I Regulation (including its specific consumer
protection regime and the application of rules transposing the Directives on consumers) must be followed in
principle in relation to the underlying transaction between the parties, without ignoring that the application of
certain provisions may be a source of controversy—for example, its art. 14.2 and the specification in this
framework of the place of compliance—as well as that other regulatory instruments may also be relevant.

For example, the origin criterion of the Directive on electronic commerce applies in relation to the contractual aspects included in the coordinated scope of the Directive when it is applicable. It will also be necessary to take into account the existence of issues subject to autonomous connection, among others, the capacity to contract and, very especially, everything related to the applicable regime in terms of personal data protection, in which it will be necessary to comply with the provisions of article 3 GDPR in the situations included within its scope of territorial application.

However, notwithstanding the foregoing, the evidentiary effectiveness to accredit transactions or other
circumstances within the framework of judicial proceedings will in principle be determined, as a procedural
matter, by the lex fori.

In the context of the EU, Regulation 910/2014, regarding electronic identification and trust services for electronic
transactions in the internal market and which repeals Directive 1999/93/EC, is of particular relevance for these
purposes.

[h]Problems associated with AI, blockchain

All this is in accordance with the principle of the pre-existence of existing law. Now, in the underlying contractual relationship, we are going to find cases in which the technology can be regarded as having no nationality, as we have seen before, and we may encounter problems with the connecting criteria, cross-border insolvencies that blockchain detects, and so on. These are problems that, if not now, will arise over time, as artificial intelligence advances and companies also move to the cloud.

The blockchain poses different risks because of the technology itself and the way it operates. One of the main problems that will affect the blockchain is the inability to control and stop its operation. In addition, the lack of control over the operation can lead to a lack of responsibility on the part of the company that manages the platform. Let us remember that, in its simplest form, blockchain is a decentralized technology or a distributed ledger in which transactions are recorded anonymously. This means that the transaction ledger is simultaneously maintained on a network of unrelated computers or servers.

Therefore, the allocation and attribution of risk and responsibility in relation to a blockchain service that is not
working properly will have to be carefully analyzed, not only at the provider-customer level, but also around all
the participants in the system.

It should be noted, regarding the process, that blockchain has the ability to cross-jurisdictional boundaries since
the nodes in a blockchain can be located anywhere in the world.

This can pose a series of complex problems that require careful consideration in relation to citizen-State, company-State, company-company, citizen-company, company-administration, and citizen-administration relations, both within the same State and between different ones. In this regard, it should be noted that, in a decentralized environment, it may be difficult to identify the appropriate set of rules to apply. Estonia offers one answer.
It does so by proposing electronic identity as a connecting criterion, since it is related to residence, in this case electronic residence, insofar as the need to link information and its management solely with the person who issues it becomes essential for numerous different interactions. This requires an organizational infrastructure (identity management) and a technical infrastructure (identity management systems) to develop, define, designate, manage, and specify authorization levels, assigning roles and identity attributes related to specific groups of people, such as company directors, employees, or customers.

The evolution of technology is creating large electronic files and, with them, large commercial and state databases. A national identifier, contained in an identity card, makes it possible to capture information about a person that is held in different databases, so that it can be easily linked and analyzed through certain data analysis techniques. At the same time, ID cards are also getting smarter. The data generated also have the potential to be offered in a medium where they can be directly processed. In this way, files that can be cross-referenced and structured, as well as transferred, are created. For this reason, special attention has to be paid to any identity management system, to see who people are and where they are.

At its simplest level, each transaction could fall within the jurisdiction of the location of each node in the
network. With this, it should be noted that in an online environment, authenticating the identity of the remote
party is more important than ever. It plays a key role in the fight against identity fraud and is also essential to
establish the necessary trust that facilitates any type of electronic transaction.

At this point, it should be taken into account that the relationship between Law and IT goes beyond what has
been seen so far. That is why one of the main issues that arise, with respect to cross-border services, is the
security and confidentiality of information transmitted over the Internet, which should lead us to guarantee the
protection of personal data that lead to the identification of its owner.

By this, we mean that the electronic identity is an identity that is made up of information stored and transmitted
to the different users of it. Let us think that the identity is a fundamental element, which links the information to
its owner, located in some State, giving rise to its location, and therefore, to the effective and safe handling of the
specific data that enter the cloud.

We must keep in mind that all electronic identity schemes depend on two processes: first, identity authentication and, later, identity verification. When authenticated, the identity is registered in the system and can then be used for transactions. Identity is verified at the time of each transaction, from within the cloud itself. From the information registered at that moment arises the identification information that will identify the person, as if it were a signature, and that will later be used to link the individual in an inseparable way.

We observe that, within the cloud, two elements will come together to establish the identity of the person who intends to access it: one fixed to the individual and another fixed to the transaction that is carried out. The first identifies the parties and, therefore, has a direct effect on the formation and enforceability of the contract, thus determining the capacity to be contractually bound; it includes elements such as the name of the legal person, its legal form, its registration number in the registry (if applicable), and its registered office or business address, together with the mention of its founding documents. The second is the larger body of transaction information, which is continually updated on the basis of the transactions the user makes in the cloud.

If the above is not enough, it will be necessary to assess the specific contract and its specific links with the different countries, having regard, as indicated by the CJEU in its Judgment of October 23, 2014, case C-305/13, to the “overall assessment of all the objective elements that characterize the contractual relationship and assess the element or elements that, in his opinion, are more significant”; and, “in the event that it is alleged that a contract has closer ties with a country” other than the country whose law is designated by virtue of the presumption established in said section, the national court must compare the ties between the contract and the country whose law is designated by virtue of the presumption, on the one hand, and between the contract and the other country in question, on the other. The national judge must consider all the circumstances that concur, including the existence of other contracts related to the contract in question.

In this way, issues related to identity management must be regulated by the different legal systems that discipline
the multiple activities of the specialized operators that carry out the identification tasks and the functional
operators. In this context, it must be considered that the eIDAS Regulation does not impose the creation of
national electronic identification schemes as such, but rather aims to guarantee their interoperability by applying
the principle of mutual recognition.

On the other hand, we could find ourselves in situations in which the parties act on unequal terms, and the applicable contract laws usually treat the contract in question as a contract of adhesion. Service providers are often familiar with only a limited number of local laws, especially the local laws governing contracts and the right to privacy. For that reason, they will proceed to choose an applicable law that establishes requirements related to the protection of information that the service provider in question can, or is willing to, comply with, and that offers rules for drawing up contracts that are predictable and acceptable for their purposes.

These issues may force the client company to assume certain responsibilities toward its end users in relation to the determination of the applicable law, which may be abusive.

Given this, as indicated by the Judgment of the CJEU, dated July 28, 2018, case C-191/15, when assessing the abusive nature of a certain contractual clause in the framework of an action for an injunction, it follows from article 6, section 2, of the Rome I Regulation that the choice of the applicable law is made without prejudice to the application of the mandatory provisions provided for by the law of the country in which the consumers whose interests are defended by means of that action reside. Such provisions may include those transposing Directive 93/13, provided that they guarantee a higher level of protection for the consumer. At this point, the main legal risk that arises for the client company is not being able to fully assess the risks linked to the contract, for example, ignorance of the weak points inherent in the technology being used, missing or inadequate security features, and economic risks linked to the loss of data or breaches of the agreement.

Chapter 5: Into the 21st Century: UAVs in the War on Terror


[mh]Drone Warfare in Iraq: The Impact of UAVs on the Battlefield

[h]Advancements and Applications of Drone-Integrated Geographic Information System Technology

Geographic Information System (GIS) technology is rooted in geography, the field of science that studies the lands, features, inhabitants, and phenomena of the Earth. GIS itself is a comprehensive framework that includes the processes of acquiring, retaining, modifying, examining, and presenting geographical data in an effective and streamlined way. GIS software may be described as a tool that effectively establishes a connection between graphical information and attribute data recorded in a database, and vice versa. GIS provides a range of tools that may enhance operational efficiency and effectiveness in handling both spatial and non-spatial characteristic data.
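
As an illustration of that two-way link between graphics and attributes (a toy sketch in Python with made-up parcel coordinates and attribute values, not a description of any particular GIS package), a layer can be modeled as a set of features in which each geometry carries a row of attribute data, so that attributes can be queried by location and locations by attribute.

    # A toy "layer": each feature couples a geometry (here, a point) with attribute data.
    parcels = [
        {"id": 1, "geometry": (40.417, -3.703), "attributes": {"land_use": "residential", "area_m2": 950}},
        {"id": 2, "geometry": (40.420, -3.690), "attributes": {"land_use": "industrial", "area_m2": 4200}},
    ]

    # Attribute query -> geometries: where are the industrial parcels?
    industrial_locations = [f["geometry"] for f in parcels
                            if f["attributes"]["land_use"] == "industrial"]

    # Spatial query -> attributes: what is recorded at a clicked map position?
    def attributes_at(point, layer, tolerance=0.005):
        return [f["attributes"] for f in layer
                if abs(f["geometry"][0] - point[0]) < tolerance
                and abs(f["geometry"][1] - point[1]) < tolerance]

    print(industrial_locations)                  # [(40.42, -3.69)]
    print(attributes_at((40.417, -3.703), parcels))
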
The integration of drones with advanced technologies such as artificial intelligence (AI), machine learning (ML), cutting-
edge equipment including high-definition (HD) cameras, precision lenses, light detection and ranging (LiDAR), and the
computational efficiency afforded by cloud storage and computing has witnessed exponential growth across diverse
domains. Accordingly, GIS has revolutionized the field of data collection and analysis. Drones help to procure high-
resolution images and collect a plethora of data. GIS, on the other hand, is a technology that captures, manages, analyzes,
and displays spatial or geographic data. A simple pictorial representation highlighting the different functions of drones and
GIS as well as their potential integrated applications is shown in Figure. The integration of these two technologies has
brought numerous benefits, especially in areas such as mapping, surveying, disaster management, and agriculture.

Figure. The potential functions and applications of GIS-integrated drone technology.

[h]Incorporation of GIS–Drone Technology


Drones equipped with high-resolution cameras can capture detailed images of the terrain and produce accurate maps of an
area. These maps can be used in urban planning, land-use management, and environmental monitoring. Surveyors can also
use drones to capture topographic data and produce digital elevation models (DEMs) that are useful in various fields such
as civil engineering and mining. Another important application of GIS-integrated drone technology is disaster management
and emergency response assistance. Drones equipped with thermal cameras can be used to detect and monitor wildfires and
other natural disasters. Drones can also be used to assess the damage caused by disasters and identify areas that need
immediate attention. GIS can be used to analyze the data collected by drones and produce maps that show the extent of the
damage and the areas that need assistance.

[h]Use Cases and Applicative Scope


In agriculture, drones integrated with GIS can be used to monitor crop growth and health. Drones can capture multispectral
images of crops, which can be used to detect the early signs of disease or stress. GIS can then be used to analyze data and
produce maps that show the areas that need attention. This can help farmers take timely action and improve their yields.
Apart from these applications, drone-integrated GIS can also be used in wildlife monitoring, infrastructure inspection, and
search and rescue operations. In wildlife monitoring, drones can be used to capture images of animals and track their
movement. In infrastructure inspection, drones can be used to inspect bridges, power lines, and other structures that are
difficult to access. In search and rescue operations, drones can be used to search for missing persons or to locate stranded
individuals.
Furthermore, drones integrated with GIS can play a crucial role in smart city management and sustainable development.
GIS is a technology that captures, manages, analyzes, and presents geographical data in a way that helps decision-makers
make informed choices. By integrating drones with GIS, city managers can collect and analyze data on various aspects of
the city such as traffic management , public safety , and environmental monitoring. Drones integrated with GIS have the
ability to map and monitor changes in urban landscapes. By capturing aerial imagery and analyzing it using GIS, city
planners can track changes in land use, detect illegal construction, and identify areas in need of development. This
information can be used to develop more sustainable urban planning strategies that promote environmentally friendly and
socially equitable growth. A generalized flowchart for the implementation of GIS–drone technology is depicted in Figure.
Figure. Process and formulation of GIS using drones.

[h]Motivation
Numerous review and research articles have been presented related to drones and GIS. This review aims to present the multi-faceted applicative potential of GIS–drones through a comprehensive review of their many applications, processes, and innovations, while highlighting their collective major challenges. The motivations of this review are as follows:

 To help understand the utilization of drones towards complementing and augmenting data acquisition, and their
implementation towards geographical and geospatial analysis.
 To review multiple sectors, including agriculture, smart cities, advanced supply chain management, mapping,
monitoring, surveillance, and tracking while highlighting their respective use-cases to visualize a comprehensive
scope of GIS–drone technology.
 To facilitate a resource for students, academics, researchers, and legislators to access the latest developments in this
field and progress towards necessary standardization and innovation.

As a powerful tool, GIS technology has a wide range of applications in various fields, including urban planning,
environmental management, health, agriculture, transportation, and emergency response. Although GIS was traditionally
used for spatial data analysis and visualization for several decades, with evolving technologies, GIS has also evolved and
has significantly contributed to different fields and applications. For example, the integration of remote sensing technology
with GIS in the 1980s and 1990s was a significant milestone in the evolution of GIS technology. Remote sensing
technology uses satellite imagery to collect data on the Earth’s surface. These data are then integrated into GIS systems to
create detailed maps of the environment. This integration allowed for the creation of more accurate and comprehensive
maps, which were used for a wide range of applications, including environmental monitoring, land-use planning, and
natural resource management.
Another breakthrough integration was that of global positioning systems (GPSs) and GIS. GPS technology uses satellite
signals to determine the location of objects on the Earth’s surface. The integration of GPS technology with GIS allowed for
the creation of real-time maps that could track the movement of objects, including vehicles, people, and animals. The GIS
and GPS integration revolutionized the transport and logistics industry. One of the earliest applications of GIS and GPS
integration was in the field of wildlife tracking. Researchers used GPS technology to track the movement of animals and
GIS technology to analyze the spatial data collected. This helped to understand the behavior and movement patterns of
animals and inform wildlife conservation efforts. The recent development in GIS technology integrated with mobile
devices such as smartphones and tablets has further revolutionized GIS applications. Computing devices allow us to collect
spatial data in the field and transmit these to a central database for analysis. For example, in the field of emergency
response, mobile devices equipped with GIS technology allow emergency responders to collect real-time data on the
location and extent of emergencies, which is then transmitted to a central database for analysis. This helps inform
emergency response efforts and improve response times.
An interesting cross-pollination of GIS is drone technology. The technology of drones has evolved over time with
numerous upgrades, installations, and optimizations increasing its potential as well as application in numerous different
studies. Drones or unmanned aerial vehicles (UAVs) have revolutionized the field of GIS by providing high-resolution,
accurate, and up-to-date imagery data that can be used for a variety of GIS applications. Initially, drones were used for
aerial photography and videography, but with the introduction of specialized sensors such as LiDAR and hyperspectral
cameras, drones can now collect data on vegetation health, topography, and water quality. These sensors can provide highly
accurate and detailed data that can be used for a variety of GIS applications, including precision agriculture, land-use
planning, and natural resource management. The integration of drone technology with GIS has also become more
streamlined, with the development of software and platforms that can process and analyze drone-collected data. This has
made it easier for GIS professionals to incorporate drone data into their existing GIS workflows and analysis.
The use of drones in GIS applications is on the rise and is a popular topic for future research. The global GIS–drone
mapping market is expected to reach a valuation of USD 349.5 million in 2023 and is projected to grow at a compound annual growth rate of 16.5%, reaching up to USD 1609.6 million in the next ten years. Therefore, in this paper, we present
an insightful review of the integration of drone technology with GIS in different fields of study.

[h]Integration of Drone with GIS Technique—Use Cases


Drone-integrated GIS is a rapidly evolving technology that combines the capabilities of drones with GIS software to
generate spatial data and maps for various sectors. This technology has proved to be a game-changer in various industries,
including agriculture, health, disaster management, security and surveillance, wildlife monitoring, delivery services, and
military applications. In the following section, the drone-integrated GIS application for various fields is comprehensively
discussed, as shown in Figure.
Figure. Pictorial representation of drone-integrated GIS applications.

[h]Precision Farming
In the agricultural sector, drone-integrated GIS has revolutionized farming practices by enabling farmers to identify crop
diseases, monitor crop growth, and optimize water usage. Drones equipped with multispectral cameras can capture high-
resolution images of crops, which can be used to identify areas that require attention, such as irrigation and fertilization.
This information is then processed by GIS software to generate accurate maps, allowing farmers to make informed
decisions and maximize yields.
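
As a minimal sketch of how such multispectral imagery is commonly turned into an attention map, the following example computes the widely used normalized difference vegetation index (NDVI) with NumPy on made-up red and near-infrared reflectance values; the threshold and the data are illustrative assumptions rather than values from the studies discussed here.

    import numpy as np

    # Toy red and near-infrared reflectance rasters from a multispectral drone camera.
    red = np.array([[0.10, 0.30], [0.08, 0.35]])
    nir = np.array([[0.60, 0.32], [0.55, 0.30]])

    # NDVI = (NIR - Red) / (NIR + Red): healthy vegetation scores high, stressed areas low.
    ndvi = (nir - red) / (nir + red + 1e-9)

    # Flag pixels below a chosen threshold as candidate problem areas for the GIS layer.
    needs_attention = ndvi < 0.3
    print(np.round(ndvi, 2))
    print(needs_attention)

The resulting raster can be georeferenced and overlaid with field boundaries in the GIS so that the flagged cells translate directly into treatment zones.
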
A detailed survey on UAV application in precision agriculture (PA) was performed in. The author explicitly addressed the
different GIS applications in PA. PA applications rely on numerical data that describe specific parameters, field
observations, and agrochemical quantities, along with geolocation data from GPS systems, to create production maps. Due
to the vast amount of data, appropriate software, such as ArcGIS, is necessary to process, organize, analyze, and visualize
the information as digital maps. These systems can also include statistical analyses, simulation data, and information
extracted from various databases to support decision making. A GIS generally comprises several components, including a
spatial data input system that integrates various types of data such as maps, satellite imagery, and multi-spectral imagery.
Additionally, this includes a data storage system to store the collected information. Furthermore, a data visualization
system is employed to present the data in the form of maps, tables, and shapes. Moreover, a data analysis system is utilized
to identify and rectify potential data errors, as well as analyze geospatial data. Lastly, a user interface system is
implemented to facilitate user interaction with the GIS system. An illustrative image showing PA using the quasi-zenith satellite system by Hitachi is shown in Figure.
Figure. Precision agriculture using the quasi-zenith satellite (QZS) system.
Drone-integrated GIS applications in agriculture offer numerous benefits, such as:

 Improved crop monitoring: Drones can capture high-resolution images of crop fields and use GIS to create accurate
maps of crop health, density, and yield potential. This helps farmers monitor crop growth and identify any potential
issues early on.
 Precision farming: Drone-integrated GIS applications allow farmers to apply fertilizers, pesticides, and other inputs
precisely where they are needed, reducing costs and improving crop yield.
 Time and cost savings: Drones can cover large areas of farmland in a short amount of time, allowing farmers to
identify potential issues quickly and efficiently, saving time and reducing costs.
 Enhanced data collection and analysis: GIS technology can be used to collect and analyze data from drones,
weather stations, and other sensors to provide insights into soil health, weather patterns, and crop performance.
 Better decision making: By providing real-time data and imagery, drone-integrated GIS applications can help
farmers make more informed decisions about irrigation, fertilization, and other management practices.
 Increased safety: Drones can be used to monitor farmland and identify potential safety hazards, such as areas with
poor drainage or uneven terrain.
 Reduced environmental impact: By providing more accurate data and analysis, drone-integrated GIS applications
can help farmers reduce their use of fertilizers, pesticides, and other chemicals, leading to a reduction in
environmental impact.

[h]Optimal Site Selection and Asset Deployment


GIS and drones have revolutionized the process of optimal site selection and asset deployment such as finding the best
region for harnessing solar or wind energy farms or the implementation of smart grids. By integrating GIS data with drone
technology, governments and agencies can collect real-time geospatial data, enabling them to make informed decisions. In
site selection, GIS aids in identifying suitable locations based on factors like accessibility, proximity to resources, and
environmental impact. Drones play a crucial role in this process by capturing high-resolution aerial imagery and surveying
terrains, offering unparalleled insights. In particular, for the optimal site selection for the development of solar and wind
energy farms, GIS enables the analysis of vast datasets, such as terrain, solar irradiance, wind patterns, and environmental
constraints, to identify the most suitable locations for renewable energy installations. By integrating drone imagery into
GIS, detailed and up-to-date aerial views of potential sites can be obtained, allowing for accurate topographical assessments
and the identification of obstacles. This combination facilitates informed decision making, reducing development risks and
maximizing energy output. The synergy between GIS and drones streamlines the site selection process, promoting the
expansion of sustainable solar and wind energy projects, and contributing to a greener and more efficient energy landscape.
Moreover, in asset deployment, GIS allows for efficient resource allocation by visualizing infrastructure and understanding
its interconnectivity. When it comes to smart grids, the combination of GIS and drones assists in monitoring power
distribution, analyzing energy consumption patterns, and facilitating maintenance. This seamless integration enhances
operational efficiency and promotes sustainable energy management.
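
A common GIS technique behind this kind of multi-criteria site selection is a weighted overlay. The sketch below (NumPy, with made-up, already normalized criterion rasters; the weights and the exclusion mask are illustrative assumptions) combines irradiance, terrain, and grid proximity into a single suitability surface and masks out protected cells.

    import numpy as np

    # Normalized (0-1) criterion rasters over the same grid: higher is more favorable.
    solar_irradiance = np.array([[0.9, 0.6], [0.8, 0.4]])
    flat_terrain     = np.array([[0.7, 0.9], [0.5, 0.6]])
    grid_proximity   = np.array([[0.8, 0.3], [0.9, 0.5]])
    protected_area   = np.array([[0, 1], [0, 0]])          # 1 = excluded (environmental constraint)

    weights = {"irradiance": 0.5, "terrain": 0.3, "grid": 0.2}

    suitability = (weights["irradiance"] * solar_irradiance
                   + weights["terrain"] * flat_terrain
                   + weights["grid"] * grid_proximity)
    suitability[protected_area == 1] = 0                   # hard constraint: mask out protected cells

    best_cell = np.unravel_index(np.argmax(suitability), suitability.shape)
    print(np.round(suitability, 2), best_cell)

In practice, the criterion rasters would come from drone-derived elevation models, irradiance or wind models, and vector layers rasterized in the GIS, and the weights would be set by the analyst.
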
[h]Urban Planning
The integration of drones and GIS has transformed the way urban planning is carried out in many cities around the world.
In recent years, drones have become increasingly popular due to their ability to capture high-resolution aerial imagery and
collect data in a non-invasive and cost-effective manner. This technology has allowed urban planners to better understand
the dynamics of built environments, identify key areas of concern, and make informed decisions about land use and
development. Drones provide high-resolution aerial imagery for urban planning, allowing planners to analyze building
footprints, road networks, and land use patterns with GIS software to identify areas for improvement. Additionally, drone-
integrated GIS technology enables non-invasive and cost-effective data collection on air quality, noise pollution, and
pedestrian flow to create detailed maps for identifying areas of concern and prioritizing interventions.
Drone-integrated GIS technology has been used in a number of urban planning projects around the world. A route planning
technique for multi-UAV cooperative data collection for 3D building model reconstruction in response to emergencies was
presented in. In Singapore, drones were used to collect data on the condition of the city’s green spaces, which was then
analyzed using GIS software to identify areas where improvements were needed. In Kota Bharu, multi-rotor drones were
used to create 3D models of the city for the purpose of studying urban development and conserving the city heritage.
There are several benefits of using drone-integrated Geographic Information System (GIS) applications for urban planning:

 Accurate and detailed data collection: Drones equipped with high-resolution cameras and sensors can capture the
accurate and detailed data of urban areas, which can be used to create accurate GIS maps. These data can help
urban planners make informed decisions about land use, transportation, and infrastructure development.
 Improved efficiency and cost savings: drone technology can significantly reduce the time and cost involved in
collecting data for GIS applications. With drones, data can be collected faster and more efficiently than with
traditional surveying methods, resulting in cost savings for urban planning projects.
 Enhanced data visualization: drone imagery can be integrated with GIS applications to provide the 3D
visualizations of urban areas. This can help urban planners better understand the spatial relationships between
different features and infrastructure and make more informed decisions about urban planning.
 Improved public participation: Drone imagery and GIS applications can be used to engage the public in urban
planning processes. By providing detailed visualizations of proposed developments and infrastructure, the public
can better understand the potential impacts of urban planning decisions and provide feedback to planners.
 Better disaster response: Drones can be used to quickly assess damage and collect data in the aftermath of natural
disasters, providing critical information to aid in disaster response and recovery efforts.

[h]Smart City Management


In smart city applications, drone-integrated GIS technology can be used to collect data on various aspects of the city’s
infrastructure, such as traffic flow, building density, and energy usage. This information can be fed to the GIS software for
different types of smart city management applications. This information can be used to optimize resource allocation,
improve public safety, and enhance the overall quality of life for residents. For example, drones equipped with thermal
imaging cameras can be used to detect heat loss from buildings, which can help city planners to identify areas that require
insulation or other energy-saving measures. Drones can also be used to monitor traffic flow, identify congested areas, and
optimize traffic signal timings to reduce traffic congestion and improve public transportation. A graphical representation of smart city management using the integrated GIS–drone technology is shown in Figure.

Figure. Conceptual representation of drones and GIS application in smart cities management.
There are several benefits of using drone-integrated GIS applications for smart city development and management:

 Accurate and detailed data collection: Drones equipped with high-resolution cameras and sensors can capture the
accurate and detailed data of urban areas, which can be used to create accurate GIS maps. These data can help city
planners make informed decisions about infrastructure development, traffic management, and environmental
management.
 Improved efficiency and cost savings: Drone technology can significantly reduce the time and cost involved in
collecting data for smart city development and management. With drones, data can be collected faster and more
efficiently than with traditional methods, resulting in cost savings for smart city projects.
 Enhanced data visualization: Drone imagery can be integrated with GIS applications to provide 3D visualizations
of urban areas. This can help city planners better understand the spatial relationships between different features and
infrastructure and make more informed decisions about smart city development.
 Improved traffic management: Drones can be used to monitor traffic patterns and identify areas in need of traffic
management interventions. This can help reduce traffic congestion and improve the overall efficiency of
transportation systems.
 Better environmental management: Drones can be used to monitor environmental indicators such as air quality,
water quality, and noise levels. This can help city planners to make informed decisions about environmental
management and ensure that smart city development is sustainable and environmentally friendly.

[h]Smart Supply Chain


The utilization of GIS–drone technology has the potential to enhance the ever-increasing landscape of supply chain
management. The combined technology associated with GIS and drones entails harnessing the power of spatial data
analysis and unmanned aerial vehicles to optimize supply chain operations in diverse sectors, such as healthcare, disaster
management, and delivery services. By leveraging GIS and drone technology, researchers aim to enhance the efficiency,
accuracy, and responsiveness of supply chain processes, thereby revolutionizing the way logistics are managed. This
section explores the transformative capabilities of smart supply chains and sheds light on the manifold benefits they bring
to these critical areas, paving the way for more effective and agile supply chain management practices.

Figure. GIS–drone utilization in various fields of supply chain enhancement.

[h]Health
The innovative study conducted by John Snow about the cholera epidemic that occurred in London in 1854 serves as a
prominent and well-recognized illustration of the efficacy of mapping and spatial methodologies within the field of public
health. Shiode et al. in 2015 recreated the London cholera outbreak using modern GIS technology. GIS has become
increasingly prevalent in the healthcare industry for investigating the influence of distance and non-spatial variables on
healthcare accessibility and usage. The use of GIS enables researchers to understand the complex relationship between the
physical environment and healthcare outcomes. However, there are differing methodological perspectives employed when
exploring the spatial and temporal patterns of healthcare utilization. For instance, some researchers focus on measuring the
accessibility of healthcare services and facilities by analyzing the distance between them and the population they serve.
Others explore the spatial distribution of healthcare resources and the utilization patterns of these resources by analyzing
demographic and socioeconomic factors, such as income and education level. Furthermore, some researchers examine the
temporal patterns of healthcare utilization, such as seasonal variations or changes over time due to policy interventions.
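
The distance-based accessibility analyses mentioned above can be approximated, in their simplest form, by a nearest-facility computation. The sketch below uses the haversine great-circle distance on made-up clinic and village coordinates; it is a hypothetical illustration rather than the method of any cited study.

    import math

    def haversine_km(a, b):
        # Great-circle distance between two (lat, lon) points in kilometres.
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    clinics = {"Clinic A": (51.50, -0.12), "Clinic B": (51.55, -0.02)}
    villages = {"Village 1": (51.52, -0.10), "Village 2": (51.58, 0.05)}

    # For each population point, find the closest facility and the travel distance to it.
    for name, location in villages.items():
        nearest = min(clinics, key=lambda c: haversine_km(location, clinics[c]))
        print(name, nearest, round(haversine_km(location, clinics[nearest]), 1), "km")
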
In recent years, drone-integrated GIS technology has been adopted for disease surveillance and control in the healthcare
industry. GIS-equipped drones can be used to map and track disease outbreaks and the spread of viruses, including the
COVID-19 pandemic. By collecting information on high-risk areas and populations, healthcare professionals can take
preventive measures to mitigate the spread of disease. Additionally, drones can deliver vaccines to remote and inaccessible
areas, such as rural regions or areas affected by natural disasters, where access to healthcare services is limited. This helps
to reduce the spread of diseases and improve the overall health outcomes of these populations. The integration of GIS and
drone technology is an innovative approach that is transforming the healthcare industry, enabling more effective disease
surveillance and control, and improving healthcare access and utilization in remote and underserved areas.

 Faster response times: Drones equipped with medical supplies can be used to quickly respond to medical
emergencies in remote or hard-to-reach areas. This can significantly reduce response times, and potentially save
lives.
 Improved supply chain management: Drone technology can be used to monitor the supply chain of medical
equipment and supplies, ensuring that they are delivered to the right place at the right time. This can help to reduce
waste, optimize inventory levels, and improve the overall efficiency of healthcare supply chains.
 Enhanced data visualization: Drone imagery can be integrated with GIS applications to provide detailed
visualizations of healthcare facilities and infrastructure. This can help healthcare managers to better understand the
spatial relationships between different facilities and make more informed decisions about healthcare management.
 Improved disease surveillance: Drones can be used to collect data on disease outbreaks and monitor the spread of
infectious diseases. This can help healthcare managers to better understand the patterns of disease transmission and
develop effective response strategies.
 Better disaster response: Drones can be used to quickly assess damage and collect data in the aftermath of natural
disasters, providing critical information to aid in disaster response and recovery efforts. This can help ensure that
medical resources are deployed where they are needed most.

[h]Disaster Management

Drones have been widely used in disaster management for rescue operations, emergency delivery, and
monitoring the extent of damage. A detailed review of drone applications in the field of disaster management is
presented in. Drones coupled with GIS software will be an amalgamation of two powerful tools that can allow
swift action based on accurate and precise location. Drones can be used to assess the extent of damage caused by
natural disasters such as hurricanes, earthquakes, and floods. The integration of drones and GIS technology
provides a comprehensive platform for capturing, analyzing, and visualizing data in real time. Drones can be
equipped with sensors and cameras that capture data and transmit these to the GIS system, where they are
analyzed and visualized in real time. GIS technology can provide a spatial analysis of the data captured by
drones, which can help disaster management agencies in decision making. Spatial analysis can help in identifying the affected areas, the extent of the damage, and the resources required for relief operations. For example, during a flood, drones can be used to capture images of the affected areas, which can be analyzed by the GIS system to identify the extent of the damage and the areas that require immediate relief (a minimal change-detection sketch is given after the list below). The drone-integrated GIS application provides several benefits in disaster management, some of which are as follows:

 Real-time data: Drones equipped with GIS technology can provide real-time data that can help disaster
management agencies in decision-making. Real-time data can help in identifying the affected areas and
the resources required for relief operations.
 Accurate data: Drones equipped with sensors and cameras can capture accurate data, which can help in
identifying the extent of the damage and the areas that require immediate relief.
 Cost-effective: The use of drones equipped with GIS technology is cost-effective compared to traditional
methods of data collection and analysis.
 Time-saving: Drones equipped with GIS technology can capture data in a short period, which can help in
decision making and relief operations.
 Improved safety: The use of drones equipped with GIS technology can improve safety during relief
operations, as they can capture data from a safe distance.
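
Returning to the flood example above, the following change-detection sketch (NumPy, with made-up pre- and post-event water masks assumed to have been classified from drone imagery; the 5 m cell size is an illustrative assumption) estimates the newly flooded area that relief teams should prioritize.

    import numpy as np

    # Binary water masks classified from pre- and post-event drone imagery (1 = water).
    pre_event  = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1]])
    post_event = np.array([[0, 1, 1], [1, 1, 1], [0, 1, 1]])

    # Cells that are water now but were dry before the event.
    newly_flooded = (post_event == 1) & (pre_event == 0)

    cell_area_m2 = 5 * 5                       # each raster cell covers 5 m x 5 m on the ground
    flooded_m2 = int(newly_flooded.sum()) * cell_area_m2
    print(newly_flooded.astype(int))
    print("newly flooded area:", flooded_m2, "m^2")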

[h]Delivery Services
Delivery services have also embraced drone-integrated GIS technology. Drones equipped with GPS technology can be used
to deliver packages quickly and efficiently, reducing delivery times and costs. GIS technology allows users to collect,
analyze, and visualize geographic data, while drones offer the ability to navigate through difficult terrain and areas with
limited accessibility.
Research has shown that drone-integrated GIS applications have the potential to significantly reduce delivery times and
costs. In a survey study conducted in , the use of drones for delivery has shown considerable economic and environmental
implications. Specifically, the implementation of drone-assisted delivery has resulted in a reduction in carbon emissions by
24.90%, a decrease in overall cost by 22.13%, and a reduction in delivery time by 20.65% when compared to conventional
delivery methods. Furthermore, GIS technology can be used to optimize delivery routes and improve delivery accuracy.
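
Route optimization in practice is handled by dedicated solvers, but the underlying idea can be sketched with a simple greedy nearest-neighbour ordering of delivery stops (planar coordinates in kilometres; the depot and stop positions are made-up values, and the heuristic is an illustration rather than the method used in the cited survey).

    import math

    depot = (0.0, 0.0)                                   # drone hub, planar km coordinates
    stops = {"P1": (2.0, 1.0), "P2": (5.0, 4.0), "P3": (1.0, 3.0), "P4": (4.0, 0.5)}

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Greedy nearest-neighbour ordering: always fly to the closest unvisited stop.
    route, here, remaining = [], depot, dict(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here, remaining[s]))
        route.append(nxt)
        here = remaining.pop(nxt)

    legs_from = [depot] + [stops[s] for s in route][:-1]
    legs_to = [stops[s] for s in route]
    total_km = sum(dist(a, b) for a, b in zip(legs_from, legs_to))
    print(route, round(total_km, 2), "km")

A production system would add battery and payload constraints, no-fly zones, and exact or metaheuristic solvers on top of such an ordering.
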
There are several benefits of UAV-integrated GIS applications in delivery and logistics, including:

 Efficient delivery: UAVs equipped with GIS technology can be used to optimize delivery routes, reducing delivery
times, and costs. This can lead to improved customer satisfaction and increased efficiency in delivery operations.
 Improved accuracy: GIS technology can be used to map and analyze delivery routes, enabling delivery companies
to identify potential obstacles and optimize their routes accordingly. This can help reduce delivery errors and
improve accuracy.
 Remote access: UAVs equipped with GIS technology can be used to deliver goods to remote areas that are difficult
to access by traditional delivery methods. This can open up new markets and increase the reach of delivery
services.
 Reduced environmental impact: UAVs have a smaller carbon footprint compared to traditional delivery vehicles,
which can help reduce the environmental impact of delivery operations. Additionally, by optimizing delivery
routes, UAVs can help reduce fuel consumption and emissions.
 Real-time tracking: UAVs equipped with GIS technology can provide the real-time tracking of deliveries, enabling
customers to track their packages and delivery companies to monitor their operations.

[mh]Mapping, Tracking, and Monitoring


[h]Crowd Management and Control
Drone-integrated GIS can also help in crowd management and control. Overcrowding in public areas, particularly stadiums and concerts, can result in stampedes. GIS allows the mapping and visualization of crowd flow patterns, enabling
organizers to identify potential bottlenecks and plan effective evacuation routes in the case of emergencies. By integrating
real-time drone surveillance, event organizers can monitor crowd density, identify potential overcrowded areas, and
respond proactively to ensure crowd safety. Drones equipped with high-resolution cameras provide live feeds to security
personnel, allowing them to closely monitor the situation and detect any suspicious activities. The combination of GIS and
drones revolutionizes crowd management practices, enhancing the overall event experience by ensuring the safety and
security of attendees in large gatherings. In , a drone with a GIS technique and remote sensing were used for crowd
management and risk mitigation during the COVID-19 pandemic and lockdowns.

[h]Security and Surveillance


UAVs have revolutionized the way security and surveillance tasks are performed. With the integration of GIS, drones
provide accurate and real-time spatial information, enhancing situational awareness and decision-making capabilities.
GIS integration with drones enables the mapping of critical infrastructure, high-value targets, and areas of interest. Drones
equipped with high-resolution cameras can be used to monitor large areas, providing real-time data that can be used to
detect potential threats and respond quickly. The drones can provide real-time video and thermal imagery to enhance the
ability to identify and track targets. The integration of GIS allows for the creation of 3D models, which enable the better
visualization and analysis of the area under surveillance. This capability is particularly useful in large areas, where ground
surveillance can be difficult. Drones can also be used for border patrol, allowing authorities to monitor and secure borders
more effectively. The U.S. Customs and Border Protection Agency (CBP) uses drones equipped with GIS technology to
monitor and secure the borders of the United States. The drones can cover vast areas and provide real-time video and
thermal imagery to detect and track illegal activity. The drones also enable the CBP to quickly respond to breaches of
security.
A model for a command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) system
with a focus on detection and reconnaissance using drone technology and GIS was developed in. A drone-integrated GIS
technique was used in to examine the archeological site of Nineveh, situated in Mosul, Iraq. The researchers employed
drone data and GIS technologies to examine surface morphology. Satellite images, drone images, and digital surface
models (DSMs) were visually and digitally analyzed to identify any possible discovery that could contribute to the
comprehension of the site’s original surface.
There are several benefits of using drone-integrated Geographic Information System (GIS) applications for security
surveillance and monitoring:

 Improved situational awareness: Drones equipped with cameras and sensors can provide real-time situational
awareness of a given area, allowing security personnel to respond quickly to potential security threats. This can
help to prevent crime and improve public safety.
 Enhanced data visualization: Drone imagery can be integrated with GIS applications to provide detailed
visualizations of security risks and vulnerabilities. This can help security personnel to better understand the spatial
relationships between different security features and infrastructure and make more informed decisions about
security management.
 Cost-effective surveillance: Drone technology can significantly reduce the cost of security surveillance and
monitoring. With drones, large areas can be monitored quickly and efficiently, reducing the need for expensive
manned security patrols.
 Reduced risk to personnel: Drones can be used to collect data in hazardous or inaccessible areas, reducing the risk of injury or harm to security personnel and improving their overall safety.
 Improved emergency response: Drones can be used to assess damage and collect data in the aftermath of security
incidents, providing critical information to aid in emergency response efforts. This can help to ensure that
emergency resources are deployed where they are needed most.

[h]Wildlife and Forest Monitoring


Wildlife monitoring is another area where drone-integrated GIS has proved useful. Drones can be used to monitor wildlife
populations, track migration patterns, and identify critical habitat areas. This information is essential in conservation efforts
and helps wildlife officials to develop effective management strategies. In one study, drones were used to investigate and analyze forest geospatial information; the technique is useful for efficiently calculating the area and volume of forest damage while saving time and resources. Remote sensing, GIS, and UAVs have also been employed to study deforestation in the mangrove forest of the Niger Delta.
In another study, drone technology was used to identify potential sites for healing forests. Vegetation density was analyzed with GIS and the green-red vegetation index (GRVI), while slope classification was derived from a digital terrain model (DTM). A survey on the use of drones for monitoring wildlife during forest fires has also been published; it additionally covers the use of satellite remote sensing and GIS in visualizing the size and devastation caused by forest fires at varying time intervals and scales.
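Since the green-red vegetation index (GRVI) mentioned above has a simple closed form, GRVI = (Green - Red) / (Green + Red), a minimal sketch of its computation over the green and red bands of a drone orthomosaic might look as follows. The array values and the use of zero as a vegetation threshold are illustrative assumptions:

import numpy as np

def grvi(green, red, eps=1e-6):
    # Green-red vegetation index, computed per pixel; eps avoids division by zero.
    green = green.astype(np.float64)
    red = red.astype(np.float64)
    return (green - red) / (green + red + eps)

# Synthetic 2 x 2 reflectance tiles: vegetated pixels have green > red.
green = np.array([[0.30, 0.10], [0.25, 0.05]])
red = np.array([[0.10, 0.20], [0.05, 0.15]])
index = grvi(green, red)
print(index.round(2))        # positive values are commonly read as vegetation cover
print(index > 0.0)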
There are several benefits of using drone-integrated Geographic Information System (GIS) applications for wildlife and
forest monitoring and management and conservation:

 Improved data collection: Drones equipped with high-resolution cameras and sensors can collect data on wildlife
and forest ecosystems more efficiently and accurately than traditional methods. This can help identify patterns and
changes in the ecosystem over time and inform conservation strategies.
 Reduced disturbance to wildlife: Drones can collect data on wildlife without disturbing them, which is particularly
important for monitoring sensitive or endangered species. This can help to minimize human impact on the
ecosystem.
 Enhanced data visualization: Drone imagery can be integrated with GIS applications to provide detailed
visualizations of wildlife and forest ecosystems. This can help wildlife and forest managers to better understand the
spatial relationships between different species and habitats and make more informed decisions about conservation
management.
 Improved forest management: Drones can be used to monitor forest health, detect forest fires, and identify areas in
need of reforestation. This can help forest managers to make informed decisions about forest management and
ensure the long-term sustainability of forest ecosystems.
 Improved law enforcement: Drones can be used to monitor wildlife populations and detect illegal activities such as
poaching and deforestation. This can help law enforcement agencies to better protect wildlife and forest
ecosystems.

[h]Military Application
Drones play an important role in different military applications such as the monitoring and surveillance of borders, the delivery of essential equipment in war zones, and, most importantly, supporting air combat. All of these applications require precision, and the key technology that supports them is GIS. Drones equipped with cameras and sensors can be used to collect real-time data on enemy movements and positions, providing critical information to military planners. In one study, a quantitative technique using UAVs was developed for evaluating hidden regions and military targets in mountainous terrain; its main objective was the assessment of invisible areas and military objects in mountainous terrain and the successful execution of UAV reconnaissance flights in mountainous combat situations. Another work proposed an IoT-based UAV network for military applications, supported by GPS and GSM, for geographic surveillance, security monitoring, the radar detection of unwanted signals, and the tracking of UAVs.
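A core GIS operation behind the hidden-region assessment described above is a line-of-sight test against a terrain model. The sketch below is a deliberately simplified version over a synthetic digital elevation model; the grid, observer position, and observer altitude are assumptions for illustration, not data from the cited work:

import numpy as np

def visible(dem, obs_rc, obs_alt, tgt_rc, samples=50):
    # True if the straight line from the observer to the target cell clears the terrain.
    (r0, c0), (r1, c1) = obs_rc, tgt_rc
    z0, z1 = obs_alt, dem[tgt_rc]
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        cell = (int(round(r0 + t * (r1 - r0))), int(round(c0 + t * (c1 - c0))))
        if cell == tgt_rc:
            continue                                  # the target cell cannot occlude itself
        if dem[cell] > z0 + t * (z1 - z0):
            return False                              # intervening terrain blocks the sight line
    return True

# Synthetic terrain: a 300 m ridge in column 4 of an otherwise flat grid.
dem = np.zeros((5, 9))
dem[:, 4] = 300.0
observer, altitude = (2, 0), 200.0
hidden = [(2, c) for c in range(9) if not visible(dem, observer, altitude, (2, c))]
print(hidden)    # cells in the terrain shadow behind the ridge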
The integration of drones and GIS offers several benefits for military applications, including:

 Enhanced situational awareness: By collecting aerial data, drones can provide a detailed and up-to-date picture of
the terrain, buildings, and other features in the area of operation. When this information is integrated with GIS
technology, it can help military personnel gain a better understanding of the situation and make more informed
decisions.
 Improved accuracy: Drones can capture high-resolution images and data that can be used to create highly accurate
maps and 3D models of the terrain. This level of detail can be especially useful for military applications where
precise measurements and analysis are required.
 Increased efficiency: Traditional methods of mapping and data collection can be time-consuming and labor-
intensive. Drones can cover large areas quickly and efficiently, reducing the time and resources required for data
collection.
 Reduced risk: By using drones to collect data, military personnel can avoid putting themselves in harm’s way. This
can be especially important in dangerous or hostile environments.
 Flexibility: Drones can be deployed quickly and easily, making them a flexible tool for a wide range of military
applications. They can also be equipped with a variety of sensors and cameras to meet different operational needs.

[h]Oil and Gas Pipeline Monitoring


Drone-integrated GIS technology is being increasingly used in the oil and gas industry for pipeline monitoring and
inspection. The technology has proven to be an effective solution for improving safety, reducing costs, and increasing
operational efficiency. Similarly, it has also found use in smart city applications, where it helps city planners to better understand the city's infrastructure and optimize resources.
In the oil and gas industry, drone-integrated GIS technology can be used to inspect pipelines and detect leaks, corrosion,
and other defects that could lead to a potential spill or pipeline failure. Drones equipped with high-resolution cameras and
sensors can capture detailed images of pipelines and their surroundings, which can be analyzed by GIS software to identify
potential problem areas. Additionally, drones can also be used for pipeline surveillance, monitoring activities such as
excavation and construction near pipelines, and ensuring that safety protocols are being followed. GIS software can then be
used to generate accurate maps, which provide a comprehensive view of the pipeline network, including its location,
condition, and potential risks. The integration of drones and GIS technology can bring numerous benefits to oil and gas pipeline monitoring (a simple anomaly-flagging sketch follows the list below). Some of these benefits include:

 Improved safety: Drone-integrated GIS applications can help oil and pipeline companies monitor their
infrastructure from a safe distance. This can help to identify potential safety hazards, such as leaks or spills before
they become a major issue. This can also help to reduce the need for manual inspections, which can be dangerous
for workers.
 Cost savings: Drone-integrated GIS applications can help to reduce costs associated with traditional monitoring
methods, such as manual inspections or helicopter flyovers. Drones can cover large areas quickly and efficiently,
which can help to save time and reduce labor costs.
 Real-time monitoring: Drones can provide real-time data on pipeline and oil infrastructure conditions. This can
help companies quickly identify issues and respond to them in a timely manner, reducing the risk of damage or
downtime.
 Increased accuracy: GIS technology can help to accurately map and track pipeline and oil infrastructure. This can
help companies identify potential issues and more effectively plan maintenance and repairs.
 Improved environmental monitoring: Drone-integrated GIS applications can help monitor the environmental
impact of oil and pipeline operations. Drones can collect data on air and water quality, as well as wildlife habitats,
helping companies to minimize their impact on the environment.
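Returning to the leak-detection workflow described before the list above, the following minimal sketch shows one way drone-collected surface-temperature readings along a pipeline could be screened for anomalies. The deviation threshold, neighbourhood window, and readings are illustrative assumptions rather than field values:

import statistics

ANOMALY_DELTA_C = 5.0    # assumed deviation, in degrees Celsius, worth inspecting

def flag_anomalies(readings, window=2):
    # readings: list of (chainage_m, temp_c) ordered along the pipeline.
    flagged = []
    for i, (pos, temp) in enumerate(readings):
        neighbours = [t for j, (_, t) in enumerate(readings)
                      if j != i and abs(j - i) <= window]
        if abs(temp - statistics.median(neighbours)) > ANOMALY_DELTA_C:
            flagged.append((pos, temp))
    return flagged

# Synthetic survey: a warm spot at chainage 100 m stands out from its neighbours.
survey = [(0, 21.0), (50, 21.5), (100, 29.3), (150, 21.2), (200, 20.8)]
print(flag_anomalies(survey))    # [(100, 29.3)] -> a candidate hot spot for inspection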

[h]Challenges for GIS–Drone Applications


Drone-integrated GIS technology is a powerful tool for collecting and analyzing geospatial data. However, there are several
technical challenges associated with this technology, such as:

 Data acquisition: One of the primary challenges with drone-integrated GIS technology is acquiring accurate and
reliable data. The quality of data depends on the accuracy of the drone’s sensors and the quality of the camera used
to capture the images. The drone’s battery life, flight stability, and wind conditions can also affect data acquisition.
 Data processing: Another challenge is that of processing the vast amounts of data collected by the drone. Large
datasets generated by drone-integrated GIS technology require significant computational resources for processing
and analysis. The data processing software must also be capable of handling different types of data formats and
sources. The cost associated with data storage and processing adds a further challenge.
 Data management: Managing the data generated by drone-integrated GIS technology is another challenge. The data
need to be efficiently stored, organized, and accessed to ensure that these are available when needed. Proper data
management also requires security protocols to protect sensitive data.
 Data safety and security: Drone and GIS both may be vulnerable to cyberattacks, data interception, and hacking,
which may lead to a loss of crucial data or incorrect data manipulation. Therefore, it is necessary to implement
robust cybersecurity protocols, encryption techniques, and secure communication channels for both the drones and
the GIS systems. Regular security assessments, updates, and employee training are also crucial for maintaining the
integrity and privacy of drone-based GIS data.
 Accuracy and precision: Drone-integrated GIS technology requires high levels of accuracy and precision to
generate reliable data. Factors such as sensor calibration, GPS accuracy, and image resolution can affect the
accuracy of the data.
 Environmental factors: Environmental factors such as weather, terrain, and vegetation can also pose technical
challenges for drone-integrated GIS technology. For example, vegetation can obstruct the view of the drone’s
camera, making it difficult to capture accurate data. Wind, rain, and other weather conditions can also affect the
drone’s stability and data quality.
 Drone regulation: The main hurdle to the widespread utilization of drones is inadequate drone regulations. Despite
an estimated 7 million drones being shipped worldwide by 2020, only 57 out of 174 recognized countries have
publicly available drone regulations, and these regulations often outright ban commercial drones. Furthermore,
regulatory requirements lack harmonization across national boundaries, which is problematic for isolated
communities that may need drones the most and lie along these borders.
 Lack of trained drone operators: Drone control relies on human operators who may make mistakes, leading to
accidents or unintended flights. Typically, the majority of risks and mishaps associated with drone operations are
attributed to human operators. Given the shortage of specialized drone operators, standardized training and user-friendly control interfaces are essential both for technological readiness and for accelerating implementation and usage.

Drone technology represents a cutting-edge innovation that offers highly efficient and sustainable alternatives to complex conventional methods. In this direction, drone-based methodologies are combined with multi-domain technologies to optimize and accomplish challenging tasks with higher efficiency. In this review, we discussed an exemplary instance of such integrated technical cooperation between drones and GIS technology across various domains.
In the realm of precision farming, drone technology is poised to revolutionize the agriculture sector. Drones are
rapidly becoming the industry norm for tasks such as area mapping and GIS applications. Future research
endeavors should prioritize the development of innovative land use and crop management methods, as well as the
optimization of supply chains within the agricultural sector. It is strongly recommended that GIS data analytics be enhanced to support informed decision-making, since the vast data available from various sources requires meticulous and thorough analysis to unearth hidden insights. Conversely, missing or incomplete information can create both short-term and long-term challenges, as misinformed decisions can have irreversible impacts on crop production and the environment.
Drone flight path modeling introduces an innovative, GIS-based approach to enhance monitoring and
surveillance efforts. The benefits of employing UAVs become evident when we compare their surveillance flight
time interval to the time it would take for ground patrols to cover the same territory. The recorded response times
in guard post analysis highlight how GIS methods contribute towards achieving comprehensive and efficient
coverage in protected areas. This type of analysis has the potential to aid conservation planners in identifying
optimal ground crew station placements, consequently reducing response times to high-risk zones and enhancing
the ability of ground personnel to combat intrusion and poaching. Furthermore, it can uncover vulnerabilities in
the current protection coverage by assessing response times based on the locations of existing guard stations.
Similarly, the technological development in this field has the potential to be applied in large crowd management
during social events.
In urban planning and resource allocation, integrated GIS–drone technology proved to be a formidable asset,
offering potent capabilities for spatial analysis. In light of the rapid depletion of land resources, the imperative of
judicious land use planning to identify novel urban development zones has become increasingly apparent.
Drones, in this context, will be utilized as an invaluable tool for crafting high-resolution 3D urban models,
thereby furnishing comprehensive insights into the intricacies of smart city infrastructure. Therefore, augmented
reality technology could be integrated with GIS–drone technology to provide urban planners with immersive,
real-time visualizations of proposed developments. Better planning and deployment will be achieved when developers, designers, and stakeholders can "walk through" digital models of future urban landscapes, making it
easier to assess the impact of design choices on the environment and community. Furthermore, establishing a
global drone network for disaster management will enable local as well as international collaboration and the
rapid deployment of resources in the affected regions.
Accordingly, the large-scale implementation of GIS and drone technology has ushered in a transformative
operational era in terms of supply chain management, encompassing inventory tracking, route optimization, risk
assessment, and sustainability endeavors. Therefore, it is imperative to underscore the significance of seamlessly
integrating GIS data with existing supply chain management systems. This integration facilitates the continuous
flow of data between GIS platforms and enterprise resource planning systems, paving the way for real-time
decision making and in-depth analysis. Moreover, it is necessary to leverage GIS and drone technology, not only
for route optimization, but also as pivotal tools in the pursuit of reduced carbon emissions and minimized
environmental impact.
Considering the extensive collaboration between the drone and GIS in different domains and applications, the
future prospects of GIS and drone technology are highly promising. Given the continuous advancements in both
technologies and their potential synergies with other sectors, it is imperative to look for futuristic applications.
Therefore, some considerations are needed to accelerate and appropriately utilize this combined technology:

 Enhance the interoperability between different GIS software and hardware platforms. Promote the
development and adoption of standardized data formats and protocols to ensure seamless data sharing and
integration between systems. This will facilitate collaboration and data exchange across organizations and
industries.
 Incorporate AI and ML algorithms into GIS and drone technology. This will enable these systems to analyze and interpret data more effectively, automate repetitive tasks, and provide actionable insights. For example, AI can help in the automated recognition of objects in drone imagery (a georeferencing sketch for such detections follows this list).
 Explore edge computing solutions for processing GIS and drone data in real-time, closer to the data
source. This reduces latency and allows for quicker decision making, making it particularly useful for
applications that require a rapid response, such as autonomous drones or emergency response systems.
 Work closely with regulatory bodies to shape drone and GIS regulations that foster innovation while
ensuring safety, privacy, and security. Advocate for the responsible and ethical use of these technologies
and collaborate with authorities to develop guidelines for data collection, storage, and sharing.
 Foster collaboration between the public and private sectors to leverage GIS and drone technology in the
interest of public welfare. This can include initiatives for disaster response, infrastructure development,
and environmental conservation.
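As a concrete complement to the AI and edge-computing considerations above, a recurring post-processing step is placing an image-space detection onto a GIS layer. The sketch below does this under strong simplifying assumptions (nadir-pointing camera, flat terrain, north-aligned image, small offsets), and every parameter value is illustrative rather than taken from any particular platform:

from math import cos, radians

def pixel_to_latlon(px, py, drone_lat, drone_lon, alt_m,
                    img_w=4000, img_h=3000, focal_px=3000.0):
    # Project a detection at pixel (px, py) to approximate geographic coordinates.
    gsd = alt_m / focal_px                    # ground sample distance, metres per pixel
    east_m = (px - img_w / 2) * gsd           # offset east of the drone, metres
    north_m = (img_h / 2 - py) * gsd          # offset north (image y axis points down)
    dlat = north_m / 111320.0                 # metres per degree of latitude (approx.)
    dlon = east_m / (111320.0 * cos(radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon

# A detection near the top-right of the frame while the drone hovers at 120 m.
print(pixel_to_latlon(3500, 500, 40.0, -105.0, 120.0))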
Chapter 6: "From Reconnaissance to Precision Strikes: UAVs in Modern Warfare "
[mh]Intelligence, Surveillance, and Reconnaissance (ISR): Enhancing Battlefield
Awareness

Figure: Joint Surveillance Target Attack Radar System (JSTARS)

ISTAR stands for intelligence, surveillance, target acquisition, and reconnaissance. In its macroscopic sense,
ISTAR is a practice that links several battlefield functions together to assist a combat force in employing its
sensors and managing the information they gather.

Information is collected on the battlefield through systematic observation by deployed soldiers and a variety of
electronic sensors. Surveillance, target acquisition and reconnaissance are methods of obtaining this information.
The information is then passed to intelligence personnel for analysis, and then to the commander and their staff
for the formulation of battle plans. Intelligence is processed information that is relevant and contributes to an
understanding of the ground, and of enemy dispositions and intents. Intelligence failures can happen.

[h]ISR (Intelligence, surveillance and reconnaissance)


Figure: USNS Sea Hunter, an unmanned ocean-going surface vessel suited for freedom of navigation operations (FONOPS)

ISR is the coordinated and integrated acquisition, processing and provision of timely, accurate, relevant, coherent
and assured information and intelligence to support commander's conduct of activities. Land, sea, air and space
platforms have critical ISR roles in supporting operations in general. By massing ISR assets, an improved clarity
and depth of knowledge can be established. ISR encompasses multiple activities related to the planning and
operation of systems that collect, process, and disseminate data in support of current and future military
operations.

On 28 July 2021 the NDAA budget markup by the House Armed Services Committee sought to retain ISR
resources such as the RQ-4 Global Hawk and the E-8 Joint Surveillance Target Attack Radar System (JSTARS), which
the Air Force is seeking to divest. Examples of ISR systems include surveillance and reconnaissance systems
ranging from satellites, to crewed aircraft such as the U-2, to uncrewed aircraft systems (UAS) such as the US
Air Force's Global Hawk and Predator and the US Army's Hunter and PSST Aerostats, to unmanned ocean-going
vessels, to other ground-, air-, sea-, or space-based equipment, to human intelligence teams, or to AI-based ISR
systems.
Figure: MQ-9 Reaper at a FARP

The intelligence data provided by these ISR systems can take many forms, including optical, radar, or infrared
images or electronic signals. Effective ISR data can provide early warning of enemy threats as well as enable
military forces to increase effectiveness, coordination, and lethality, and demand for ISR capabilities to support
ongoing military operations has increased. In December 2021, the US Navy began testing the usefulness and
effectiveness of unmanned "saildrones" at recognizing targets of interest on the high seas.

For space-based targeting sensors, in a 2019 Broad Agency Announcement, the US government defined ISR in
this case as "a capability for gathering data and information on an object or in an area of interest (AOI) on a
persistent, event-driven, or scheduled basis using imagery, signals, and other collection methods. This includes
warning (to include ballistic missile activity), targeting analysis, threat capability assessment, situational
awareness, battle damage assessment (BDA), and characterization of the operational environment." Persistence
was in turn described: "Persistent access provides predictable coverage of an area of interest (AOI). Most space-
based intelligence collection capabilities consist of multiple satellites operating in concert, or supplemented by
other sensors, when continuous surveillance of an area is desired. Persistent sensors must provide sufficient
surveillance revisit timelines to support a weapon strike at any time."
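As a rough, back-of-the-envelope illustration of the revisit-time idea in the passage above (the orbital period and satellite counts are assumed values, not figures from the announcement), evenly phasing several satellites over the same area of interest shortens the interval between passes roughly in proportion to their number:

# Idealized mean revisit interval for an AOI whose ground track is seen once per orbit.
ORBITAL_PERIOD_MIN = 95.0    # assumed low-Earth-orbit period, minutes
for n_sats in (1, 4, 8):
    print(n_sats, "satellite(s): about", ORBITAL_PERIOD_MIN / n_sats, "minutes between passes")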

The United States Space Force, National Reconnaissance Office (NRO), and National Geospatial-Intelligence
Agency (NGA) share the satellite-based ISR task as of 2021. See Space Delta 7

NGA uses Data transformation services (DTS), a program begun in 2018, to convert raw sensor data into a
format usable by its mission partners, who are government agencies whose names are classified. In light of the
2022 Russian invasion of Ukraine, NGA has taken operational control of DoD's Project Maven, the AI ISR
project for area defense, to identify point targets for ISR. NGA is currently using OREN, the Odyssey GEOINT
Edge Node for National System for Geospatial Intelligence, or NGS; the Joint Regional Edge Node (JREN) is
on-deck for distributing nearly a petabyte to the Combatant Commands in the next year (for 2023, an increase by
a factor of 10).

NRO "has a proven track record in [ISR]", insists one of the founders of the US Space Force, who defends the
capability of the NRO over the ambition of the Space Force to take over the role of ISR. GMTI (ground moving
target indicator) data is an objective for Space Force, NGA, and NRO.

[h]ISR at platoon level

Junior (3rd year) and Senior (4th year) cadets at West Point had hands-on experience building and using drones
with various tactical capabilities, guided by faculty from the Electrical Engineering and Computer Science
departments in tactical applications during Cadet Leadership Development Training in July 2022.

Ukraine's soldiers are using FPV drones on the battlefield, armed with munitions.

ISR concepts are also associated with certain intelligence units, for instance Task Force ODIN, ISR TF
(Company+) in Bosnia, Kosovo and Afghanistan. In the United States, a similar entity exists within the Marine Corps's Surveillance, Reconnaissance, and Intelligence Group (SRIG). The SRIG is modelled as a consolidated military intelligence collection agency; most of its intelligence is gathered from many sources (i.e., STA Sniper platoons, Marine reconnaissance assets, signals intelligence, etc.).

[h]Commercial ISR

In light of the 2022 Russian invasion of Ukraine, commercial satellite imagery is being used to track troop
movements, broadcast world events in real time, and conduct war.—NHK World-Japan

[h]ISTAR

ISTAR is the process of integrating the intelligence process with surveillance, target acquisition and
reconnaissance tasks in order to improve a commander's situational awareness and consequently their decision
making. The inclusion of the "I" is important as it recognizes the importance of taking the information from all
the sensors and processing it into useful knowledge.

ISTAR can also refer to:

 a unit or sub unit with ISTAR as a task (e.g.: an ISTAR squadron)


 equipment required to support the task

[h]Variations of ISTAR

There are several variations on the "ISTAR" acronym. Some variations reflect specific emphasis on certain
aspects of ISTAR.

[h]Surveillance, target acquisition, and reconnaissance (STAR)

A term used when emphasis is to be placed on the sensing component of ISTAR.

[h]Reconnaissance, surveillance and target acquisition (RSTA)


Main article: Reconnaissance, surveillance, and target acquisition (United States)

RSTA is used by the US Army in place of STAR or ISTAR. Also, a term used to identify certain US Army units:
for instance, 3rd Squadron, 153rd RSTA. These units serve a similar role to the below mentioned US Marine
Corps STA platoons, but on a larger scale.

[h]Surveillance and target acquisition (STA)

Used to designate one of the following:

 A US Military Occupational Specialty (MOS) - specifically a STA United States Marine Corps Scout Sniper
 The role of a unit (e.g. STA patrol) or equipment
 A doctrine similar to ISTAR; for the US, and its allies and partners, its basis is the US National Defense Space
Architecture (NDSA) as realized by layered constellations of Earth satellites and Earth stations

[h]ISTAR units and formations

 Space Delta 7 (U.S. Space Force)


 Reconnaissance Surveillance and Target Acquisition (RSTA) Units (U.S. Army)
 Long-Range Surveillance (LRS) Units (U.S. Army)
 Northrop Grumman E-8 Joint STARS, Raytheon Sentinel, Alliance Ground Surveillance Aircraft
 Sayeret Matkal (Israeli Army)
 Shaldag Unit (Israeli Air Force)
 Brigade de renseignement (French Army)
 13th Parachute Dragoon Regiment (French Army)
 Intelligence Battalion (Norwegian Army)
 Jegerkompaniet (Norwegian Army)
 Kystjegerkommandoen (Norwegian Coastal Ranger Command)
 Artillerijeger (Norwegian Army)
 Garnisonen i Sør-Varanger (Borderguard, Norwegian Army)
 ISTAR battalion (Norwegian Army)
 ISTAR HQ platoon (Norwegian Army)
 Joint ISTAR Command (Dutch Army)
 103 ISTAR battalion (Dutch Army)
 Cavalry Corps (Irish Army)
 Jagers te Paard Battalion (Belgian Army)
 ISTAR Battalion (Portuguese Army)
 NBG ISTAR TF (EU Nordic Battle Group)
 ISTAR (Canadian Army)
 62nd Svarzochna Brigada (Bulgarian Armed Forces)
 Särskilda Inhämtningsgruppen (SIG) (Swedish Armed Forces)
 61 Special Reconnaissance Regiment (Jordan Royal Guard, Jordanian Armed Forces)
 Strategic Reconnaissance Company (28th Ranger Brigade, Jordanian Armed Forces)
 Acquisition and Survey Regiment (Jordanian Armed Forces)
 Special Support & Reconnaissance Company - SSR (Danish Defence)
 ISTAR Bat TF 11 [ad hoc] (Swiss Armed Forces)
 350th Military Intelligence Battalion (350. vojnoobavještajna bojna) (Croatian Army)
 Razuznavacki bataljon na ARM (Reconnaissance Battalion), Republic of Macedonia
 5. obveščevalno-izvidniški bataljon, 5th Intelligence-Reconnaissance Battalion, Military of Slovenia
 ISTAR battalion (Slovakian army)
 Regimiento de Inteligencia 1 (1st Intelligence Regiment, Spanish Army)
 32nd Regiment Royal Artillery UAS Regiment (British Army)
 47th Regiment Royal Artillery UAS Regiment (British Army)
 30 Commando Information Exploitation Group (Royal Marines)
 21 SAS and 23 SAS (British Army), now part of 1st Intelligence, Surveillance and Reconnaissance Brigade
 Special Reconnaissance Regiment (British Army)
 Honourable Artillery Company (British Army)
 Surveillance, Reconnaissance, Intelligence Group (SRIG) U.S. Marine Corps
 STA Sniper U.S. Marine Corps
 6th Brigade (Australia)
 Intelligence Center (Croatian: Središnjica za obavještajno djelovanje) (Croatia)
 Fernspäher (German Bundeswehr)
 Meritiedustelupataljoona (Coastal Brigade, Finnish Defence Forces)

Figure: Army Modular Force


The battlefield surveillance brigade (BfSB) was a United States Army surveillance/reconnaissance formation
introduced from 2006 to 2015. The United States Army planned for the creation and transformation of nine
intelligence brigades to a 'battlefield surveillance' role in 2007. The first battlefield surveillance brigade was
deployed the same year conducting Intelligence, Surveillance and Reconnaissance (ISR) operations.

However, gathering information was only part of the challenge these brigades faced. Along with the structural changes and intelligence capabilities, the sustainment capabilities of the brigade also changed. The United States Army is
currently reorganizing these BfSB formations into expeditionary military intelligence brigades. These brigades
were designed to be self-sufficient Army modular forces.

[h]Mission

The BfSBs were meant to improve situational awareness for commanders at division or higher so they can focus
joint combat power in current operations while simultaneously preparing for future operations. The units had the
tools to respond to commanders' needs, from unmanned aerial vehicles to signals-gathering equipment and
human intelligence collectors. One of the major initiatives of the modernization plan involves migrating the army
from a division-centric force designed to fight one or two potential major-theater wars toward a modular,
brigade-centric force that is expeditionary in nature and deployed continuously in different parts of the world.

[h]Structure

Each BfSB consisted of a headquarters and headquarters company; active component units had two military
intelligence battalions while the Army National Guard BfSBs had one; each brigade had a reconnaissance &
surveillance squadron consisting of a headquarters troop; two ground troops (Troops A and B) and a Long-range
surveillance (LRS) company; a signal company (network support company, or NSC); and a brigade support
company (BSC). In 2012, the active component brigades started grouping the brigade HHC, the signal company,
and the support company under a special troops battalion.

Before Schoomaker's appointment, the Army was organized around large, mostly mechanized, divisions of
around 15,000 soldiers each, with the aim of being able to fight in two major theatres simultaneously. Under the
new plan, the Army would be organized around modular brigades of 3,000–4,000 soldiers, intended to deploy
continuously in different parts of the world and to organize the Army closer to the way it fights.

An additional 30,000 soldiers were recruited as a short-term measure to ease the structural changes, although a
permanent end-strength change was not expected because of fears of funding cuts. This forced the Army to pay
for the additional personnel from procurement and readiness accounts. Up to 60% of the defense budget is spent
on personnel; at the time, each 10,000 soldiers cost roughly US$1.4 billion annually.

In 2002, the Belfer Center for Science and International Affairs held a key conference: the "Belfer Center
Conference on Military Transformation". Co-sponsored by the United States Army War College and the Dwight
D. Eisenhower National Security Series, on November 22 and 23, it brought together present and former defense
officials and military commanders to assess the Department of Defense's progress in achieving a
"transformation" of U.S. military capabilities.

In 2004, the United States Army Forces Command (FORSCOM), which commands most active and reserve
forces based in the Continental United States, was tasked with supervising the modular transformation of its
subordinate structure.

In March 2004, a contract was awarded to Anteon Corporation (later a part of General Dynamics) to provide
"Modularity Coordination Cells" (MCCs) to each transforming corps, division and brigade within FORSCOM.
Each MCC contained a team of functional area specialists who provided direct, ground-level support to the unit.
The MCCs were coordinated by the Anteon office in Atlanta, Georgia.
In 2007 a new deployment scheme known as Grow the Army was adopted that enabled the Army to carry out
continuous operations. The plan was modified several times including an expansion of troop numbers in 2007
and changes to the number of modular brigades. On 25 June 2013, plans were announced to disband 13 modular
brigade combat teams (BCTs) and expand the remaining brigades with an extra maneuver battalion, extra fires
batteries, and an engineer battalion.

In 2009 an "ongoing campaign of learning" was the capstone concept for force commanders, meant to carry the
Army from 2016 to 2028.

[h]Planning process, evolution, and transformation

The commander-in-chief directs the planning process, through guidance to the Army by the Secretary of
Defense. Every year, Army Posture Statements by the Secretary of the Army and the Chief of Staff of the Army
summarize their assessment of the Army's ability to respond to world events, and also to transform for the future.
In support of transformation for the future, TRADOC, upon the advice of the Army's stakeholders, has assembled
20 warfighting challenges. These challenges are under evaluation during annual Army warfighting assessments,
such as AWA 17.1, held in October 2016. AWA 17.1 was an assessment by 5,000 US Soldiers, Special
Operations Forces, Airmen, and Marines, as well as by British, Australian, Canadian, Danish, and Italian troops.
For example, "reach-back" is among the capabilities being assessed; when under attack in an unexpected
location, a Soldier on the move might use Warfighter Information Network-Tactical (WIN-T). At the halt, a light
Transportable Tactical Command Communications (T2C2 Lite) system could reach back to a mobile command
post, to communicate the unexpected situation to higher echelons, a building block in multi-domain operations.

[h]Implementation and current status

Grow the Army was a transformation and re-stationing initiative of the United States Army which began in 2007
and was scheduled to be completed by fiscal year 2013. The initiative was designed to grow the army by almost
75,000 soldiers, while realigning a large portion of the force in Europe to the continental United States in
compliance with the 2005 Base Realignment and Closure suggestions. This grew the force from 42 Brigade
Combat Teams (BCTs) and 75 modular support brigades in 2007 to 45 Brigade Combat Teams and 83 modular
support brigades by 2013.

On 25 June 2013, 38th Army Chief of Staff General Raymond T. Odierno announced plans to disband 13 brigade
combat teams and reduce troop strengths by 80,000 soldiers. While the number of BCTs will be reduced, the size
of remaining BCTs will increase, on average, to about 4,500 soldiers. That will be accomplished, in many cases,
by moving existing battalions and other assets from existing BCTs into other brigades. Two brigade combat
teams in Germany had already been deactivated and a further 10 brigade combat teams slated for deactivation
were announced by General Odierno on 25 June. (An additional brigade combat team was announced for
deactivation 6 November 2014.) At the same time the maneuver battalions from the disbanded brigades will be
used to augment armored and infantry brigade combat teams with a third maneuver battalion and expanded
brigades' fires capabilities by adding a third battery to the existing fires battalions. Furthermore, all brigade
combat teams—armored, infantry and Stryker—will gain a Brigade Engineer Battalion, with "gap-crossing" and
route-clearance capability.

On 6 November 2014, it was reported that the 1st Armored Brigade Combat Team, 2nd Infantry Division,
currently stationed in South Korea, was to be deactivated in June 2015 and be replaced by a succession of U.S.-
based brigade combat teams, which are to be rotated in and out, at the same nine-month tempo as practiced by
the Army from 2001 to 2014.

Eleven brigades were inactivated by 2015. The remaining brigades as of 2015 are listed below. On 16 March
2016, the Deputy Commanding General (DCG) of FORSCOM announced that the brigades would now also train
to move their equipment to their new surge location as well as to train for the requirements of their next
deployment.
By 2018, 23rd Secretary of the Army Mark Esper noted that even though the large deployments to Iraq and
Afghanistan had ceased, at any given time, three of the Armored Brigade Combat Teams are deployed to
EUCOM, CENTCOM, and INDOPACOM, respectively, while two Infantry Brigade Combat Teams are
deployed to Iraq, and Afghanistan, respectively.

[At any given time,] there are more than 100,000 Soldiers deployed around the world —23rd Secretary of the
Army Mark Esper

In 2019 the 23rd Secretary of the Army asserted that the planning efforts, including Futures Command, the
SFABs, and the Decisive Action readiness training of the BCTs are preparing the Army for competition with
both near-peer and regional powers. The Army and Marine Corps have issued "clear explanations and guidance
for the 429 articles of the Geneva Conventions".

The Budget Control Act could potentially restrict funds by 2020. By 2024–2025, the Future Years Defense Program (FYDP) will have reallocated $10 billion more into development of the top 6 modernization priorities,
taking those funds from legacy spending budgets.

[h]Reorganization plans by unit type

The Army has now been organized around modular brigades of 3,000–4,000 soldiers each, with the aim of being
able to deploy continuously in different parts of the world, and effectively organizing the Army closer to the way
it fights. The fact that this modernization is now in place has been acknowledged by the renaming of the 'Brigade Modernization Command' to the 'U.S. Army Joint Modernization Command' on 16 February 2017.

By 2021 the Army of 2030 was envisioned to consist of Brigades for the close fight, Divisions for large-scale combat operations, Corps for enduring, sustained operations, and Theater-scale commands. See Transformation of the United States Army.

[h]Modular combat brigades

Modular combat brigades are self-contained combined arms formations. They are standardized formations across
the active and reserve components, meaning an Armored BCT at Fort Cavazos is the same as one at Fort Stewart.

Reconnaissance plays a large role in the new organizational designs. The Army felt the acquisition of the target
was the weak link in the chain of finding, fixing, closing with, and destroying the enemy. The Army felt that it
had already sufficient lethal platforms to take out the enemy and thus the number of reconnaissance units in each
brigade was increased. The brigades sometimes depend on joint fires from the Air Force and Navy to accomplish
their mission. As a result, the amount of field artillery has been reduced in the brigade design.

The three types of BCTs are Armored Brigade Combat Teams (ABCTs), Infantry Brigade Combat Teams
(IBCTs) (includes Light, Air Assault and Airborne units), and Stryker Brigade Combat Teams (SBCTs).

Armored Brigade Combat Teams, or ABCTs, consist of 4,743 troops. This includes the third maneuver battalion
as laid out in 2013. The changes announced by the U.S. army on 25 June 2013, include adding a third maneuver
battalion to the brigade, a second engineer company to a new Brigade Engineer Battalion, a third battery to the
FA battalion, and reducing the size of each battery from 8 to 6 guns. These changes will also increase the number
of troops in the affected battalions and also increase the total troops in the brigade. Since the brigade has more
organic units, the command structure includes a deputy commander (in addition to the traditional executive
officer) and a larger staff capable of working with civil affairs, special operations, psychological operations, air
defense, and aviation units. An Armored BCT consists of:

 the brigade headquarters and headquarters company (HHC): 43 officers, 17 warrant officers, 125 enlisted
personnel – total: 185 soldiers.
 the Brigade Engineer Battalion (BEB) (formerly Brigade Special Troops Battalion (BSTB)), consisted of
a headquarters company, signal company, military intelligence company with a TUAV platoon and two
combat engineer companies (A and B company). The former BSTB fielded 28 officers, 6 warrant
officers, 470 enlisted personnel – total: 504 soldiers. Each of the combat engineer companies fields 13×
M2A2 Bradley Fighting Vehicle (BFV) Operation Desert Storm-Engineer (ODS-E), 1× M113A3
Armored Personnel Carrier (APC), 3× M1150 Assault Breacher Vehicle (ABV), 1× M9 Armored Combat
Earthmover (ACE), and 2× M104 Heavy Assault Bridge (HAB).
 a Cavalry (formerly Armed Reconnaissance) Squadron, consisting of a headquarters troop (HHT) and
three reconnaissance troops and one armored troop. The HHT fields 2× M3A3 Cavalry Fighting Vehicles
(CFVs) and 3× M7A3 Bradley Fire Support Vehicles, while each reconnaissance troop fields 7× M3A3
CFVs. The squadron fields 35 officers and 385 enlisted personnel – total: 424 soldiers.
 three identical combined arms battalions, flagged as a battalion of an infantry, armored or cavalry
regiment. Each battalion consists of a headquarters and headquarters company, two tank companies and
two mechanized infantry companies. The battalions field 48 officers and 580 enlisted personnel each –
total: 628 soldiers. The HHC fields 1× M1A2 main battle tank, 1× M2A3 infantry fighting vehicle, 3×
M3A3 cavalry fighting vehicles, 4× M7A3 fire support vehicles and 4× M1064 mortar carriers with
M120 120 mm mortars. Each of the two tank companies fields 14× M1A2 main battle tanks, while each
mechanized infantry company fields 14× M2A3 infantry fighting vehicles. In 2016, the ABCT's
combined arms battalions adopted a triangle structure, of two armored battalions (of two armored
companies plus a single mechanized infantry company) plus a mechanized infantry battalion (of two
mechanized companies and one armored company). This resulted in the reduction of two mechanized
infantry companies; the deleted armored company was reflagged as a troop to the Cavalry Squadron.
 a Field Artillery battalion, consisting of a headquarters battery, two cannon batteries with 8× M109A6
self-propelled 155 mm howitzers each (the changes announced by the U.S. Army on 25 June 2013,
include adding a third battery to the FA battalion, and reducing the size of each battery from 8 to 6 guns;
these changes also increase the number of troops in the affected battalions and also increase the total
troops in the Brigade), and a target acquisition platoon. 24 officers, 2 warrant officers, 296 enlisted
personnel – total: 322 soldiers.
 a brigade support battalion (BSB), consisting of a headquarters, medical, distribution and maintenance
company, plus six forward support companies, each of which support one of the three combined arms
battalions, the cavalry squadron, the engineer battalion and the field artillery battalion. 61 officers, 14
warrant officers, 1,019 enlisted personnel – total: 1,094 soldiers.

Infantry Brigade Combat Teams, or IBCTs, comprised around 3,300 soldiers in the pre-2013 design, which did
not include the 3rd maneuver battalion. The 2013 end-strength is now 4,413 Soldiers:

 Special Troops Battalion (now Brigade Engineer Battalion)


 Cavalry Squadron
 (2), later (3) Infantry Battalions
 Field Artillery Battalion
 Brigade Support Battalion

Stryker Brigade Combat Teams, or SBCTs, comprised about 3,900 soldiers, making it the largest of the three
combat brigade constructs in the 2006 design, and over 4,500 Soldiers in the 2013 reform. Its design includes:

 Headquarters Company
 Cavalry Squadron (with three 14-vehicle, two-120 mm mortar reconnaissance troops plus a surveillance
troop with UAVs and NBC detection capability)
 (3) Stryker infantry battalions (each with three rifle companies with 12 infantry-carrying vehicles, 3
mobile gun platforms, 2 120 mm mortars, and around 100 infantry dismounts each, plus an HHC with
scout, mortar and medical platoons and a sniper section.)
 Engineer Company (folded into the Brigade Engineer Battalion)
 Signal Company (folded into the Brigade Engineer Battalion)
 Military Intelligence Company (with UAV platoon) (folded into the Brigade Engineer Battalion)
 Anti-tank company (9 TOW-equipped Stryker vehicles) (folded into the Brigade Engineer Battalion)
 Field Artillery Battalion (three 6-gun 155 mm Howitzer batteries, target acquisition platoon, and a joint
fires cell)
 Brigade Support Battalion (headquarters, medical, maintenance, and distribution companies)

[h]Modular support brigades

Combat support brigades

Figure: Heavy Combat Aviation Brigade Structure

Figure: Full Spectrum Combat Aviation Brigade Structure

Similar modularity will exist for support units which fall into five types: Aviation, Fires (artillery), Battlefield
Surveillance (intelligence), Maneuver Enhancement (engineers, signal, military police, chemical, and rear-area
support), and Sustainment (logistics, medical, transportation, maintenance, etc.). In the past, artillery, combat
support, and logistics support only resided at the division level and brigades were assigned those units only on a
temporary basis when brigades transformed into "brigade combat teams" for particular deployments.

Combat Aviation Brigades are multi-functional, offering a combination of attack helicopters (e.g., Boeing AH-64 Apache), reconnaissance helicopters (e.g., OH-58 Kiowa), medium-lift helicopters (e.g., UH-60 Black Hawk), heavy-lift helicopters (e.g., CH-47 Chinook), and medical evacuation (MEDEVAC) capability. Aviation will not
be organic to combat brigades but will continue to reside at the division-level due to resource constraints.

Heavy divisions (of which there are six) will have 48 Apaches, 38 Blackhawks, 12 Chinooks, and 12 Medevac
helicopters in their aviation brigade. These are divided into two aviation attack battalions, an assault lift battalion, and a general aviation support battalion. An aviation support battalion will have headquarters, refuelling/resupply,
repair/maintenance, and communications companies. Light divisions will have aviation brigades with 60 armed
reconnaissance helicopters and no Apaches, with the remaining structure the same. The remaining divisions will
have aviation brigades with 30 armed reconnaissance helicopters and 24 Apaches, with the remaining structure
the same. Ten Army Apache helicopter units will convert to heavy attack reconnaissance squadrons, with 12 RQ-
7B Shadow drones apiece. The helicopters to fill out these large, combined-arms division-level aviation brigades
come from aviation units that used to reside at the corps level.

Figure: Fires Brigade Structure

Field Artillery Brigades (known as "Fires Brigades" prior to 2014) provide traditional artillery fire (M109
Paladin self-propelled howitzer, M270 MLRS and HIMARS rocket artillery) as well as information operations
and non-lethal effects capabilities. After the 2013 reform, the expertise formerly embodied in the pre-2007
Division Artillery (DIVARTY) was formally re-instituted in the Division Artillery (DIVARTY) of 2015, with a
colonel as commander. The operational Fires battalions will now report to this new formulation of DIVARTY,
for training and operational Fires standards, as well as to the BCT.

Air Defense: The Army was no longer to provide an organic air defense artillery (ADA) battalion to its divisions
as of 2007. Nine of the ten active component (AC) divisional ADA battalions and two of the eight reserve
(ARNG) divisional ADA battalions will deactivate. The remaining AC divisional ADA battalion along with six
ARNG divisional ADA battalions will be pooled at the Unit of Employment to provide on-call air and missile
defense (AMD) protection. The pool of Army AMD resources will address operational requirements in a
tailorable and timely manner without stripping assigned AMD capability from other missions. Maneuver short-range air defense (MSHORAD) was to field with laser cannon prototypes by 2020. By 2015, however, the Division Artillery had been restored.

Maneuver Enhancement Brigades are designed to be self-contained, and will command units such as chemical,
military police, civil affairs units, and tactical units such as a maneuver infantry battalion. These formations are
designed so that they can operate with coalition, or joint forces such as the Marine Corps, or can span the gap
between modular combat brigades and other modular support brigades.
Figure: Combat Sustainment Brigade Structure

Sustainment Brigades provide echelon-above-brigade-level logistics. On its rotation to South Korea, 3rd ABCT,
1st Armored Division deployed its supply support activity (SSA) common authorized stockage list (CASL) as
well. The CASL allows the ABCT to draw additional stocks beyond its pipeline of materiel from GCSS-A. The
DoD-level Global Combat Support System includes an Army-level tool (GCSS-A), which runs on tablet
computers with bar code readers which 92-A specialists use to enter and track materiel requests, as the materiel
makes its way through the supply chain to the brigades. This additional information can then be used by GCSS-A
to trigger resupply for Army pre-positioned stocks, typically by sea. The data in GCSS-Army is displayed on the
Commander's Dashboard, the Army Readiness-Common Operating Picture (AR-COP); this dashboard is also
available to the commander at BCT, division, corps, and Army levels.

Figure: Battlefield Surveillance Brigade Structure

The former Battlefield Surveillance Brigades, now denoted Military Intelligence Brigades (Expeditionary), will
offer additional UAVs and long-term surveillance detachments. Each of the three active duty brigades is attached
to an Army Corps.
Figure: Maneuver Enhancement Brigade Structure

[h]Security Force Assistance Brigades

Security force assistance brigades (SFABs) are brigades whose mission is to train, advise, and assist (TAA) the
armed forces of other states. The SFABs are bound neither to conventional decisive operations nor to counter-insurgency operations. Operationally, a 500-soldier SFAB would free up a 4,500-soldier BCT from a TAA
mission. On 23 June 2016 General Mark Milley revealed plans for train/advise/assist Brigades, consisting of
seasoned officers and NCOs with a full chain of command, but no junior Soldiers. In the event of a national
emergency the end-strengths of the SFABs could be augmented with new soldiers from basic training and
advanced individual training.

An SFAB was projected to consist of 500 senior officers and NCOs, which, the Army says, could act as a cadre
to reform a full BCT in a matter of months. In May 2017, the initial SFAB staffing of 529 soldiers was
underway, including 360 officers. The officers will have had previous command experience. Commanders and
leaders will have previously led BCTs at the same echelon. The remaining personnel, all senior NCOs, are to be
recruited from across the Army. Promotable E-4s who volunteer for the SFAB are automatically promoted to
Sergeant upon completion of the Military Advisor Training Academy. A team of twelve soldiers would include a
medic, personnel for intelligence support, and air support, as cited by Keller.

These SFABs would be trained in languages, how to work with interpreters, and equipped with the latest
equipment such as Integrated Tactical Network (ITN) using T2C2 systems including secure, but unclassified,
communications and weapons to support coalition partners, as well as unmanned aircraft systems (UASs). The
first five SFABs would align with the Combatant Commands (SOUTHCOM, AFRICOM, CENTCOM,
EUCOM, and USINDOPACOM, respectively); an SFAB could provide up to 58 teams (possibly with additional
Soldiers for force protection).

Funding for the first two SFABs was secured in June 2017. By October 2017, the first of six planned SFABs (the
1st Security Force Assistance Brigade) was established at Fort Moore. On 16 October 2017, BG Brian Mennes of
Force Management in the Army's G3/5/7 announced accelerated deployment of the first two SFABs, possibly by
Spring 2018 to Afghanistan and Iraq, if required. This was approved in early July 2017, by the 27th Secretary of
Defense and the 39th Chief of Staff of the Army. On 8 February 2018, 1st SFAB held an activation ceremony at
Fort Moore, revealing its colors and heraldry for the first time, and then cased its colors for the deployment to
Afghanistan. 1st Security Force Assistance Brigade deployed to Afghanistan in spring 2018.

On 8 December 2017, the Army announced the activation of the 2nd Security Force Assistance Brigade, for
January 2018, the second of six planned SFABs. The SFAB are to consist of about 800 senior and
noncommissioned officers who have served at the same echelon, with proven expertise in advise-and-assist
operations with foreign security forces. Fort Liberty was chosen as the station for the second SFAB in
anticipation of the time projected to train a Security Force Assistance Brigade. On 17 January 2018 39th Chief of
Staff Mark Milley announced the activation of the third SFAB. 2nd SFAB undergoes three months of training
beginning October 2018, to be followed by a Joint Readiness Training Center Rotation beginning January 2019,
and deployment in spring 2019. The 3rd, 4th, and 5th SFABs are to be stationed at Fort Cavazos, Fort Carson,
and Joint Base Lewis-McChord, respectively; the headquarters of the 54th Security Force Assistance Brigade,
made up from the Army National Guard, will be in Indiana, one of six states to contribute an element of 54th
SFAB. It is likely that these brigades will be seeing service within United States Central Command.

The Security Force Assistance Command (SFAC), a one-star division-level command, and all six SFABs will be
activated by 2020. The Security Force Assistance Directorate, a one-star Directorate for the SFABs, is part of
FORSCOM in Fort Liberty. SFAD will be responsible for the Military Advisor Training Academy as well. The
1st SFAB commander was promoted to Brigadier General in Gardez, Afghanistan on 18 August 2018. The 2nd
SFAB commander was promoted to Brigadier General 7 September 2018. SFAC and 2nd SFAB were activated
in a joint ceremony at Fort Liberty on 3 December 2018. 2nd SFAB deployed to Afghanistan in February 2019.
3rd SFAB activated at Fort Hood on 16 July 2019; 3rd SFAB will relieve 2nd SFAB in Afghanistan for the
Winter 2019 rotation.

Security Assistance is part of The Army Strategy 2018's Line of Effort 4: "Strengthen Alliances and
Partnerships". The Security Assistance Command is based at Redstone Arsenal (but the SFAC is based at Fort
Liberty).

[h]Army Field Support Brigades

Army Field Support Brigades (AFSBs) have been utilized to field materiel in multiple Combatant Commands'
Areas of Responsibility (AORs). Initially 405th AFSB prepositioned stocks for a partial brigade; eventually, the
405th was to field materiel for an ABCT, a Division headquarters, a Fires Brigade, and a Sustainment Brigade in
their AOR, which required multinational agreements. Similarly, 401st AFSB configured materiel for an ABCT in
their AOR as well. The objective has been combat configuration: maintain their vehicles to support a 96-hour
readiness window for a deployed ABCT on demand. In addition, 403rd Army Field Support Brigade maintains
prepositioned stocks for their AOR.

[h]Command headquarters

Below the Combatant Command echelon, division commands will command and control their combat and
support brigades. Divisions will operate as plug-and-play headquarters commands (similar to corps) instead of
fixed formations with permanently assigned units. Any combination of brigades may be allocated to a division
command for a particular mission, up to a maximum of four combat brigades. For instance, the 3rd Infantry
Division headquarters could be assigned two armor brigades and two infantry brigades based on the expected
requirements of a given mission; on its next deployment, the same division might have one Stryker brigade and
two armor brigades assigned to it. The same modus operandi holds true for support units. With regard to
logistics, the goal of reorganization is to streamline the logistics command structure so that combat service
support can fulfill its support mission more efficiently.

The division headquarters itself has also been redesigned as a modular unit that can be assigned an array of units
and serve in many different operational environments. The new term for this headquarters is the UEx (Unit of
Employment, X). The headquarters is designed to be able to operate as part of a joint force, command joint
forces with augmentation, and command at the operational level of warfare (not just the tactical level). It
includes organic security personnel and signal capability plus liaison elements. As of March 2015, nine of the ten
Regular Army division headquarters and two National Guard division headquarters were committed in support of
Combatant Commands.
When not deployed, the division has responsibility for the training and readiness of a certain number of modular
brigades. For instance, the 3rd Infantry Division headquarters module based at Fort Stewart, GA is responsible
for the readiness of its combat brigades and other units of the division (that is, 3rd ID retains administrative
control, or ADCON, of its downtrace units), assuming they have not been deployed separately under a different
division.

The re-designed headquarters module comprises around 1,000 soldiers including over 200 officers. It includes:

 A Main Command Post where mission planning and analysis are conducted
 A mobile command group for commanding while on the move
 Two Tactical Command Posts to exercise control of brigades
 Liaison elements
 A special troops battalion with a security company and signal company

Divisions will continue to be commanded by major generals, unless coalition requirements dictate otherwise.
Regional army commands (e.g., 3rd Army, 7th Army, 8th Army) will remain in use, but with changes to the
organization of their headquarters designed to make the commands more integrated and relevant in the structure
of the reorganized Army, as the chain of command for a deployed division headquarters now runs directly to an
Army service component command (ASCC), or to FORSCOM.

In January 2017, examples of pared-down tactical operations centers, suitable for brigades and divisions, were
demonstrated at a command post huddle at Fort Bliss. The huddle of the commanders of FORSCOM, United
States Army Reserve Command, First Army, I and III Corps, nine of the Active Army divisions, and other
formations discussed standardized solutions for streamlining command posts. The Army is paring down its
tactical operations centers and making them more agile, to increase their survivability. From July to October
2020, the C5ISR Center of CCDC ran a series of experiments (Network Modernization Experiment 2020, or
NetModX 20) to determine whether using LTE to connect nodes in a distributed command post environment was
feasible.

[h]Training and readiness

Under Schoomaker, combat training centers (CTCs) emphasized the contemporary operating environment (such
as an urban, ethnically sensitive city in Iraq) and stressed units according to the unit mission and the commanders'
assessments, collaborating often to support holistic collective training programs, rather than by exception as was
formerly the case.

Schoomaker's plan was to resource units based on the mission they are expected to accomplish (major combat
versus SASO, or stability and support operations), regardless of component (active or reserve). Instead of using
snapshot readiness reports, the Army now rates units based on the mission they are expected to perform given
their position across the three force pools ('reset', 'train/ready', and 'available'). The Army now deploys units upon
each commander's signature on the certificate of the unit's assessment (viz., Ready). As of June 2016, only one-
third of the Army's brigades were ready to deploy. By 2019, two-thirds of the Active Army's brigades and half of
the BCTs of the Total Army (both Active and Reserve components) were at the highest level of readiness.
According to Maj. Gen. Paul Chamberlain, the FY2021 budget request allows two-thirds of the Total Army
(1,012,200 Soldiers by 2022) to reach the highest level of readiness by FY2022.

[h]Soldiers need to be ready 100 percent of the time

The 39th Chief of Staff Mark Milley's readiness objective is that all operational units be at 90 percent of
authorized strength in 2018, at 100 percent by 2021, and at 105 percent by 2023. The observer coach/trainers at
the combat training centers, recruiters, and drill sergeants are to be filled to 100 percent strength by the end of
2018. In November 2018, written deployability standards (Army Directive 2018–22) were set by the Secretary
and the Chief of Staff of the Army; a soldier who fails to meet the standard has six months to remedy this or face
separation from the Army. The directive does not apply to about 60,000 of the 1,016,000 Soldiers of the Army;
70–80 percent of those 60,000 are non-deployable for medical reasons. Non-deployables have declined from
121,000 in 2017. The Army combat fitness test (ACFT) will test all soldiers; at a minimum, the 3-Repetition
Maximum Deadlift, the Sprint-Drag-Carry, and an aerobic event will be required of all soldiers, including those
with profiles (meaning there is an annotation in their record; see PULHES factor); the assessment of the
alternative aerobic test will be completed by 19 October 2019.

[h]Soldier and Family Readiness Groups

By 2022, surveys of military servicemen, veterans, spouses, and family members were indicating that financial
and other difficulties were raising questions about the viability of an all-volunteer force.

Soldiers and Army spouses belong to Soldier and Family Readiness Groups (SFRGs), renamed from Family
Readiness Groups (FRGs), which mirror the command structure of an Army unit; the spouse of the 40th Chief of
Staff of the United States Army has served on an FRG at every echelon of the Army. The name change to SFRG
is intended to be more inclusive of single soldiers, single parents, and those with nontraditional families. An
S/FRG seeks to meet the needs of soldiers and their families, for example during a deployment, to address
privatized housing deficiencies, or to help spouses find jobs. As a soldier transfers in and out of an installation,
the soldier's entire family will typically undergo a permanent change of station (PCS) to the next post. PCS to
Europe and Japan is now uniformly for 36 months, regardless of family status (formerly 36 months for families).
Transfers typically follow the cycle of the school year to minimize disruption to an Army family. By policy, DoD
families stationed in Europe and Japan who have school-aged children are served by American school systems,
the Department of Defense Dependents Schools. Noncombatant evacuation operations are a contingency which
an FRG could publicize and plan for, should the need arise. In 2021, a new Exceptional Family Member Program
(EFMP) was being tested by 300 families undergoing a permanent change of station (PCS).

When a family emergency occurs, the informal support of that unit's S/FRG is available to the soldier. (The Army
Emergency Relief fund is also available to any soldier with a phone call to their local garrison, and seventy-five
Fisher Houses maintain home-away-from-home suites for families undergoing medical treatment of a loved one.
The Army, Navy, and Air Force Medical Treatment Facilities (MTFs) are scheduled to complete their transfer to
the Defense Health Agency (DHA) no later than 21 October 2021, the end of a ten-year process; the directors of
each home installation's MTF continue to report to the commanders of their respective installations, and the
change transfers all civilian employees of each MTF to the DHA.) The name change links Soldier Readiness with
Family Readiness. Commanders retain full responsibility for Soldier sponsorship after a move, especially for
first-term Soldiers in that move.

In response to Army tenant problems with privatized base housing, IMCOM was subordinated to Army Materiel
Command (AMC) on 8 March 2019. By 2020, AMC's commander and the Residential Communities Initiative
(RCI) groups had formulated a 50-year plan. The Army's RCI groups, "seven private housing companies, which
have 50-year lease agreements" on 98% of Army housing at 44 installations, will work with the Army for long-
term housing improvements and remediation.

In 2020, Secretary McCarthy determined that the Sexual Harassment/Assault Response & Prevention (SHARP)
program had failed to meet its mandate, particularly for young unmarried Soldiers at Fort Hood and Camp Casey,
South Korea. Missing soldiers were previously classified as absent without leave until enough time had elapsed
for them to be denoted deserters, rather than treated as possible victims of a crime; the Army has established a
new classification for missing Soldiers that merits police investigation.

In response to the report of the Fort Hood Independent Review Committee, the Army has established the People
First Task Force (PFTF), an Army-wide task force headed by three chairs: Lt. Gen. Gary M. Brito, the Deputy
Chief of Staff G-1; Diane M. Randon, the Assistant Deputy Chief of Staff G-2; and Sgt. Maj. Julie A.M. Guerra,
the Assistant Deputy Chief of Staff G-2 Sergeant Major. Cohesion assessment teams (CATs), part of the People
First Task Force, work with brigade commanders on their brigade's command climate. A Cohesion assessment
team interviews members of a brigade or battalion to identify any problems; the CAT then works with the unit
commanders to address the root causes of those problems. On 13 May 2022, Fort Hood's People First Center
opened its doors; the center is to offer immersive experiences for participants over several days, centered on
"family advocacy, sexual harassment and assault prevention, equal opportunity, resiliency, substance abuse,
suicide [prevention], and spiritual readiness... all housed at the center with training focused on immersion",
collocated with subject matter experts. (The Senate Armed Services Committee is requesting that the military
track suicides by MOS.)

Plans are being formulated to mobilize the Army Reserve (42,000 to 45,000 soldiers) very quickly. For example,
'Ready Force X' (RFX) teams have fielded Deployment Assistance Team Command and Control Cells to
expedite the movement of the equipment required by specific Reserve personnel, who have been notified that
they are deploying, to the various ports and vessels. FORSCOM's mobilization and force generation installations
(MFGIs) have fluctuated from two primary installations (2018) to an envisioned eleven primary and fourteen
contingency MFGIs, in preparation for future actions against near-peers.

[h]National Guard training

The 29th chief of the National Guard Bureau, as director of the Army National Guard, plans to align existing
ARNG divisions with subordinate training formations. This plan increases the number of divisions in the Total
Army from 10 to 18 and increases the readiness of the National Guard divisions by aligning their training plans
with large-scale combat operations. Additional advantages of the August 2020 plan are increased opportunity for
talent management, from the company to the division level, and opportunity for leader development unfettered
by geographical restriction.

[h]"Associated units" training program

The Army announced a pilot program, 'associated units', in which a National Guard or Reserve unit would now
train with a specific active Army formation. These units would wear the patch of the specific Army division
before their deployment to a theater; the 36th Infantry Division headquarters deployed to Afghanistan in May
2016 for a train, advise, and assist mission.

The Army Reserve, whose headquarters are co-located with FORSCOM, and the National Guard are testing the
associated units concept in a three-year pilot program with the active Army. The program will use First Army in
its training role at the Army combat training centers at Fort Irwin and Fort Polk, and at regional and overseas
training facilities.

The pilot program complements FORSCOM's total force partnerships with the National Guard, begun in 2014.
Summer 2016 saw the first of these units.

 Associated units
o 3rd Infantry BCT, 10th Mountain Division, stationed at Fort Polk, Louisiana, associated with the 36th
Infantry Division, Texas Army National Guard
o 48th Infantry BCT, Georgia ARNG, associated with the 3rd Infantry Division, stationed at Fort Stewart,
Georgia
o 86th Infantry BCT, Vermont ARNG, associated with the 10th Mountain Division, stationed at Fort Drum,
New York
o 81st Armored BCT, Washington ARNG, associated with the 7th Infantry Division, stationed at Joint Base
Lewis-McChord, Washington
o Task Force 1-28th Infantry Battalion, 3rd Infantry Division, stationed at Fort Moore, Georgia, associated
with the 48th Infantry BCT, Georgia Army National Guard
o 100th Battalion, 442nd Infantry Regiment, USAR, associated with the 3rd Infantry BCT, 25th Infantry
Division, stationed at Schofield Barracks, Hawaii
o 1st Battalion (Airborne), 143rd Infantry Regiment Texas ARNG, associated with the 173rd Airborne BCT,
stationed in Vicenza, Italy
o 1st Battalion, 151st Infantry Regiment, Indiana ARNG, associated with the 2nd Infantry BCT, 25th
Infantry Division, stationed at Schofield Barracks
o 5th Engineer Battalion, stationed at Fort Leonard Wood, Missouri, associated with the 35th Engineer
Brigade, Missouri ARNG
o 840th Engineer Company, Texas ARNG, associated with the 36th Engineer Brigade, stationed at Fort
Cavazos, Texas
o 824th Quartermaster Company, USAR, associated with the 82nd Airborne Division's Sustainment Brigade,
stationed at Fort Liberty, North Carolina
o 249th Transportation Company, Texas ARNG, associated with the 1st Cavalry Division's Sustainment
Brigade, stationed at Fort Cavazos
o 1245th Transportation Company, Oklahoma ARNG, associated with the 1st Cavalry Division's
Sustainment Brigade, stationed at Fort Cavazos
o 1176th Transportation Company, Tennessee ARNG, associated with the 101st Airborne Division's
Sustainment Brigade, stationed at Fort Campbell, Kentucky
o 2123rd Transportation Company, Kentucky ARNG, associated with the 101st Airborne Division's
Sustainment Brigade, stationed at Fort Campbell

[h]Rifleman training

Soldiers train for weapons handling and marksmanship first individually, on static firing ranges, and then on
simulators such as the Engagement Skills Trainer (EST). More advanced training on squad-level simulators (the
Squad Advanced Marksmanship-Trainer, or SAMT) places a squad in virtual engagements against avatars of
various types, using M4 carbine, M249 light machine gun, and M9 Beretta pistol simulated weapon systems.
Home stations are to receive Synthetic Training Environments (STEs) for mission training, as an alternative to
rotations to the national combat training centers, which operate brigade-level training against an Opposing Force
(OPFOR) with near-peer equipment.

Some installations have urban training facilities for infantrymen, in preparation for brigade-level training.

The 2019 marksmanship manual TC 3-20.40, Training and Qualification-Individual Weapons (the "Dot-40"),
now mandates the use of the simulators, as if the soldier were in combat. The Dot-40 is to be used by the entire
Army, from the Cadets at West Point to the Active Army, the Army Reserve, and the Army National Guard; the
Dot-40 tests how rapidly soldiers can load and reload while standing, kneeling, lying prone, and firing from
behind a barrier. The simulators also record the marksmanship tests of a soldier's critical thinking (selecting
which targets to shoot at and in which order) and the accuracy of each shot.

[h]Stryker training

Up to a platoon-sized unit of a Stryker brigade combat team, plus dismounted infantry, can train on Stryker
simulators (the Stryker Virtual Collective Trainer, or SVCT), which are in the process of being installed at eight
home stations; the fourth was being completed as of 2019. Forty-five infantrymen (four Stryker shells) or thirty-
six scouts (six Stryker shells) can rehearse their battle rhythm on a virtual battlefield, record their lessons learned,
give their after-action reports, and repeat, as a team. The Stryker gunner's seat comes directly from a Stryker
vehicle and has a Common Remotely Operated Weapon Station (CROWS) and joystick to control a virtual .50
caliber (12.7 mm) heavy machine gun or a virtual 30 mm autocannon; other CROWS configurations are possible.

[h]Digital air ground integration ranges (DAGIRs)

Live-fire digital air ground integration ranges (DAGIRs) were first conceptualized in the 1990s and established
in 2012, with follow-on ranges in 2019. The ranges initially included 23 miles of tank trails, targets, battlefield
effects simulators, and digital wiring for aerial scorekeeping. These ranges are designed for coordinating air and
ground exercises before full-scale sessions at the National Training Centers.
[h]Training against OPFORs

Figure: Opposing-Forces Surrogate Vehicles (OSVs) undergoing maintenance at Anniston Army Depot

Serving as an Opposing Force (OPFOR) can be a mission for an Army unit on temporary duty (TDY), during
which soldiers might wear old battle dress uniforms, perhaps inside out. TRADOC's Mission Command Training
Program, as well as Cyber Command, designs tactics for these OPFORs. When a brigade trains at Fort Irwin,
Fort Polk, the Joint Pacific Multinational Readiness Center, or the Joint Multinational Training Center (in
Hohenfels, Germany), the Army tasks the 11th Armored Cavalry Regiment, 1st Battalion, 509th Infantry
Regiment (Abn), 196th Infantry Brigade, and 1st Battalion, 4th Infantry Regiment, respectively, with the OPFOR
role, and provides the OPFOR with modern equipment (such as the FGM-148 Javelin anti-tank missile) to test
that brigade's readiness for deployment. Multiple integrated laser engagement systems serve as proxies for actual
fired weapons, and soldiers are lost to the commander through "kills" by laser hits.

[h]Training against cyber

Deceptive data intended to divide deployed forces are making their way into news feeds, falsely implicating
actual soldiers who are deployed at the time of the false social media reports, which mix fact and fiction.

The Army now has its tenth direct-commissioned cyber officer: a Sergeant First Class with a computer
engineering degree and a master's in systems engineering was commissioned as a major in the National Guard's
91st Cyber Brigade on 30 July 2020.

[h]Soldier integration facility

PEO Soldier has established a Soldier integration facility (SIF) at Fort Belvoir which allows prototyping and
evaluation of combat capabilities for the Army Soldier. The CCDC Soldier Center in Natick, Massachusetts, the
Night Vision Lab at Fort Belvoir, Virginia, and the Maneuver Battle Lab at Fort Moore, Georgia, have prototyped
ideas at the SIF.

[h]Applications for Synthetic Training Environment (STE)

The Squad Advanced Marksmanship Training (SAMT) system, developed by the STE Cross-functional team
from Futures Command, has an application for 1st SFAB. Bluetooth-enabled replicas of M4 rifles and M9 and
Glock 19 pistols, with compressed-air recoil, approximate the form, fit, and function of the weapons that the
Soldiers use in close combat. For 1st SFAB, scenarios included virtual reality attacks which felt like
engagements in a room. The scenarios can involve the entire SFAB Advisor team, and engagements can be
repeated over and over again. Advanced marksmanship skills, such as firing with the non-dominant hand and
firing on the move, can be practiced.

Nine Army sites are now equipped with the SAMT, and over twenty systems are planned for locations in the
United States. The Close combat tactical trainers are in use, for example, to train the 3rd Infantry Division
headquarters for a gunnery training event (convoy protection role), and 2nd BCT, 82nd Airborne Division for
close combat training. The concept has been extended to the Live, Virtual, Constructive Integrating Architecture
(LVC-IA), to integrate the National Guard and the Reserves with the Active Army.

 "A simulation places leadership teams in a situation akin to a Combat Training Center rotation, an intellectually
and emotionally challenging environment that forgives the mistakes of the participants" —Dr. Charles K. Pickar
 "It is important for Soldiers to have an open and clear mind during the simulation so that they learn something from
the experience." —Tim Glaspie
 "Repetition increases a team's situational understanding of the tactics they'll use" —Maj. Anthony Clas

Other training environments include MANPADS for SHORAD in the 14P MOS at Fort Sill.

The force generation system, posited in 2006 by General Schoomaker, projected that the U.S. Army would be
deployed continuously. The Army would serve as an expeditionary force to fight a protracted campaign against
terrorism and stand ready for other potential contingencies across the full spectrum of operations (from
humanitarian and stability operations to major combat operations against a conventional foe).

Under ideal circumstances, Army units would have a minimum "dwell time," a minimum duration during which
a unit would remain at home station between deployments. Active-duty units would be prepared to deploy once
every three years, Army Reserve units once every five years, and National Guard units once every six years. A
total of 71 combat brigades would form the Army's rotation base, 42 from the active component with the balance
from the reserves.

Thus, around 15 active-duty combat brigades would be available for deployment each year under the 2006 force-
generation plan, with an additional 4 or 5 brigades available from the reserve component. The plan was designed
to provide more stability to soldiers and their families. Within the system, a surge capability would exist so that
about 18 more brigades could be deployed beyond the 19 or 20 scheduled brigades.
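To make the rotation arithmetic explicit, the short Python sketch below reproduces the figures quoted above from the dwell times. The brigade counts and cycle lengths come from the text; treating the 29 reserve-component brigades as rotating on roughly a six-year cycle is an assumption made here purely for illustration.

```python
# Rotation arithmetic for the 2006 force-generation plan (illustrative sketch).
# Assumption: the 29 reserve-component brigades are approximated with a single
# six-year cycle; the text gives separate 5- and 6-year cycles for the Army
# Reserve and the National Guard.

def annual_available(brigades: int, cycle_years: int) -> float:
    """Brigades available for deployment per year under a fixed rotation cycle."""
    return brigades / cycle_years

active = annual_available(42, 3)    # 42 active brigades, 3-year cycle -> 14.0 ("around 15")
reserve = annual_available(29, 6)   # 29 reserve brigades, ~6-year cycle -> ~4.8 ("4 or 5")

print(f"Active brigades available per year: {active:.1f}")
print(f"Reserve-component brigades available per year: {reserve:.1f}")
```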

From General Dan McNeill, former Army Forces Command (FORSCOM) Commander: within the Army Force
Generation (ARFORGEN) model, brigade combat teams (BCTs) would move through a series of three force
pools; they would enter the model at its inception, the "reset force pool", upon completion of a deployment cycle.
There they would re-equip and reman while executing all individual predeployment training requirements,
attaining readiness as quickly as possible. Reset or "R" day, recommended by FORSCOM and approved by
Headquarters, Department of the Army, would be marked by BCT changes of command, preceded or followed
closely by other key leadership transitions. While in the reset pool, formations would be remanned, reaching
100% of mission-required strength by the end of the phase, while also reorganizing and fielding new equipment,
if appropriate. In addition, it is there that units would be confirmed against future missions, either as deployment
expeditionary forces (DEFs: BCTs trained for known operational requirements), ready expeditionary forces
(REFs: BCTs that form the pool of available forces for short-notice missions), or contingency expeditionary
forces (CEFs: BCTs earmarked for contingency operations).

Based on their commanders' assessments, units would move to the ready force pool, from which they could
deploy should they be needed, and in which the unit training focus would be on the higher collective levels.
Units would enter the available force pool when approximately one year remained in the cycle, after validating
their collective mission-essential task list proficiency (either core or theater-specific tasks) via battle-staff and
dirt mission rehearsal exercises. The available phase would be the only phase with a specified time limit: one
year. Not unlike the division-ready brigades of past decades, these formations would deploy to fulfill specific
requirements or stand ready to fulfill short-notice deployments within 30 days.
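As a way of visualizing the ARFORGEN progression just described, the minimal Python sketch below models a BCT cycling through the reset, train/ready, and available pools while carrying a DEF, REF, or CEF designation assigned during reset. The class and field names are purely illustrative assumptions, not an official Army data model.

```python
# Illustrative sketch of the ARFORGEN force-pool progression (not an Army system).
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ForcePool(Enum):
    RESET = auto()        # re-equip, reman, individual predeployment training
    TRAIN_READY = auto()  # higher collective-level training
    AVAILABLE = auto()    # deployable; the only pool with a fixed one-year limit

class ForcePackage(Enum):
    DEF = "deployment expeditionary force"   # trained for a known operational requirement
    REF = "ready expeditionary force"        # pool of forces for short-notice missions
    CEF = "contingency expeditionary force"  # earmarked for contingency operations

@dataclass
class BCT:
    name: str
    pool: ForcePool = ForcePool.RESET          # units enter the model in reset
    package: Optional[ForcePackage] = None     # DEF/REF/CEF designation made during reset

    def advance(self) -> None:
        """Move to the next force pool; after 'available', the unit returns to reset."""
        order = [ForcePool.RESET, ForcePool.TRAIN_READY, ForcePool.AVAILABLE]
        self.pool = order[(order.index(self.pool) + 1) % len(order)]

bct = BCT("example BCT", package=ForcePackage.CEF)
bct.advance()   # reset -> train/ready
bct.advance()   # train/ready -> available
print(bct.name, bct.pool.name, bct.package.value)
```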

The goal was to generate forces 12–18 months in advance of combatant commanders' requirements and to begin
preparing every unit for its future mission as early as possible in order to increase its overall proficiency.

Personnel management would also be reorganized as part of the Army transformation. Previously, personnel
were managed on an individual basis, in which soldiers were rotated without regard for the effect on unit
cohesion. This system required unpopular measures such as "stop loss" and "stop move" in order to maintain
force levels. In contrast, the new personnel system would operate on a unit basis to the maximum extent possible,
with the goal of allowing teams to remain together longer and enabling families to establish ties within their
communities.

Abrams noted in 2016 that mid-level Army soldiers found they faced an unexpectedly high tempo in their
requirements, while entry-level soldiers in fact welcomed the increased challenge.

[h]Readiness model

ARFORGEN, "a structured progression of increased unit readiness over time, resulting in recurring periods of
availability of trained, ready, and cohesive units prepared for operational deployment in support of geographic
Combatant Commander requirements", was utilized in the 2010s; it was replaced by the Sustainable Readiness
Model (SRM) in 2017. In 2016 the 39th Chief of Staff of the Army identified the objective of a sustainable
readiness process as over 66 percent of the Active Army in a combat-ready state at any time; in 2019 the
readiness objective for National Guard and Army Reserve units was set at 33 percent, and Total Army readiness
for deployment was 40 percent in 2019.

The Regionally Aligned Readiness and Modernization Model (ReARMM) is a unit lifecycle model which goes
into effect in October 2021; it was introduced in October 2020. It is a force generation model which considers the
Total Army, the Reserve components as well as the Active component, when planning. Dynamic force
employment (DFE) will be used more often. The operational tempo will decrease, which gives commanders
more time, in "training windows", during which their units can train, first at the small-unit level, and then
proceed to larger-step modernization of their formations. The units can then train at echelon for large-scale
combat operations (LSCO) at a more measured pace.

The requested strength of the Active Army in FY2020 increases by 4,000 troops from the current 476,000
soldiers; the request covers near-term needs for cyber, air and missile defense, and fires (Army modernization).

[h]Organic industrial base (OIB)

The Army's Organic industrial base (OIB) Modernization Implementation Plan was refreshed in 2022, with a
review of the "23 depots, arsenals and ammunition plants that manufacture, reset and maintain Army equipment",
in light of the 2022 Russian invasion of Ukraine.

The Acting CG of FORSCOM, Lt. Gen. Laura Richardson, has noted that the Sustainable Readiness Model uses
the Army standard for maintenance readiness, denoted TM 10/20, which makes commanders responsible for
maintaining their equipment to that standard, meaning that "all routine maintenance is executed and all
deficiencies are repaired". But Richardson has also spoken out about aviation-related supplier deficiencies
hurting readiness both at the combatant commands and at the home stations.

Preface
Unmanned Aerial Vehicles (UAVs) have emerged as transformative tools in modern warfare, reshaping the
landscape of aerial combat and reconnaissance since their inception in the early 20th century. As the title
suggests, "Unmanned Aerial Vehicles: Robotic Air Warfare 1917-2007" delves into the rich history and
evolution of UAV technology, spanning nearly a century of development, innovation, and deployment.

In this comprehensive exploration, we embark on a journey through time, tracing the origins of aerial robotics
from their humble beginnings during World War I to their pivotal role in contemporary military operations. The
period from 1917 to 2007 witnessed remarkable advancements in UAV technology, reflecting the relentless
pursuit of military powers to gain strategic superiority and enhance battlefield capabilities. Throughout the pages
of this book, readers will encounter a tapestry of historical events, technological breakthroughs, and strategic
insights that have shaped the trajectory of robotic air warfare. From the rudimentary radio-controlled aircraft of
the interwar period to the sophisticated autonomous drones of the 21st century, each chapter unveils new
dimensions of innovation and adaptation in response to the evolving demands of warfare.

Moreover, this book transcends mere technical discourse by exploring the broader implications of UAV
proliferation, including ethical dilemmas, legal frameworks, and geopolitical ramifications. As UAVs transitioned
from reconnaissance platforms to lethal weapons, they sparked debates about the ethics of remote warfare,
civilian casualties, and the erosion of traditional notions of combat.

In narrating the saga of unmanned aerial vehicles, this book also pays homage to the pioneers, engineers, and
military strategists who contributed to their development and deployment. Their ingenuity, perseverance, and
vision have left an indelible mark on the history of warfare, reshaping the dynamics of air power and
revolutionizing the conduct of military operations.

As we delve into the annals of robotic air warfare, we invite readers to embark on a captivating journey of
discovery, reflection, and enlightenment. Through meticulous research, vivid storytelling, and insightful analysis,
"Unmanned Aerial Vehicles: Robotic Air Warfare 1917-2007" seeks to illuminate the past, present, and future of
UAV technology, offering a compelling narrative that resonates with scholars, military professionals, and
enthusiasts alike.

About the book


The book "Unmanned Aerial Vehicles: Robotic Air Warfare 1917-2007" is a comprehensive exploration of the
evolution of UAV technology over the span of nearly a century. From their modest beginnings in World War I to
their contemporary significance in modern warfare, this book traces the remarkable advancements and strategic
implications of unmanned aerial vehicles. Readers will be taken on a journey through history, witnessing the
progression from early radio-controlled aircraft to sophisticated autonomous drones. Each chapter illuminates
key developments, technological innovations, and the shifting landscape of military tactics and strategy.
Moreover, the book delves into the broader implications of UAV proliferation, exploring ethical dilemmas, legal
considerations, and geopolitical consequences. As UAVs transitioned from reconnaissance tools to lethal
weapons, they raised important questions about the ethics of remote warfare and civilian casualties. Throughout
the narrative, the contributions of pioneers, engineers, and military strategists are acknowledged for their role in
shaping the trajectory of robotic air warfare. Their ingenuity and vision have left an enduring impact on the
history of warfare, reshaping the dynamics of air power and military operations. "Unmanned Aerial Vehicles:
Robotic Air Warfare 1917-2007" invites readers to engage with a captivating exploration of UAV technology,
offering insights that resonate with scholars, military professionals, and enthusiasts alike. Through meticulous
research and compelling storytelling, this book seeks to illuminate the past, present, and future of unmanned
aerial vehicles.
