
UAVs and Defense: Prospects for Transformational Change

Chapter 1: "The Evolution of UAV Technology"

[mh]Advancements in UAV Design

Over the past few decades, there have been significant advancements in the design of unmanned aerial
vehicles (UAVs), also known as drones. These advancements have enabled drones to become more
efficient, versatile, and reliable. In this section, we will explore some of the key advancements in
UAV design that have revolutionized various industries.

One significant advancement in UAV design is the improvement in battery technology. Early drones
had limited flight times, often lasting only a few minutes. However, modern UAVs are equipped with
high-capacity batteries that can power the aircraft for several hours. This has expanded the range of
applications for drones, allowing them to perform tasks such as aerial photography, surveillance, and
even package delivery.
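
As a rough, back-of-the-envelope illustration (the battery capacity and power draw below are assumed values, not figures for any particular aircraft), endurance can be estimated by dividing usable battery energy by the average electrical power consumed in flight:

t_{\text{flight}} \approx \frac{E_{\text{battery}}}{P_{\text{avg}}} = \frac{200\ \text{Wh}}{100\ \text{W}} = 2\ \text{h},

which is why gains in battery energy density translate almost directly into longer mission times.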

Another important advancement in UAV design is the development of lightweight materials.
Traditionally, drones were made from heavy metals, which limited their agility and maneuverability.
However, the introduction of advanced composite materials, such as carbon fiber, has significantly
reduced the weight of drones while maintaining their structural integrity. This has made drones more
nimble and capable of performing complex maneuvers, making them ideal for tasks that require
precision and agility.

Furthermore, advancements in miniaturization have played a crucial role in the design of UAVs. In the
early days, drones were bulky and cumbersome, making them difficult to transport and deploy.
However, the miniaturization of components, such as sensors, cameras, and processing units, has
allowed for the development of smaller and more portable drones. These miniaturized drones are now
widely used in industries such as agriculture, where they can be easily transported to remote locations
and used for crop monitoring and analysis.

In addition to miniaturization, advancements in sensor technology have greatly enhanced the
capabilities of UAVs. Drones are now equipped with a wide range of sensors, including GPS,
gyroscopes, accelerometers, and even thermal cameras. These sensors allow drones to maintain stable
flight, navigate autonomously, and gather data in real-time. For example, drones with thermal cameras
can be used in search and rescue operations, as they can detect heat signatures to locate missing
persons or identify hotspots in wildfires.
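
To make the role of these sensors concrete, the sketch below shows one common way a flight controller can fuse gyroscope and accelerometer readings into a stable pitch estimate, a so-called complementary filter. It is a minimal illustration only: the 100 Hz loop, the 0.98 blending weight, and the simulated sensor readings are assumptions for the example, not values from any specific autopilot.

import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend gyro integration (smooth but drifts) with an accelerometer tilt
    estimate (noisy but drift-free) to track the pitch angle, in degrees."""
    pitch_gyro = pitch_prev + gyro_rate * dt                   # integrate angular rate
    pitch_accel = math.degrees(math.atan2(accel_x, accel_z))   # tilt from the gravity vector
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel

# Illustrative 100 Hz loop: the vehicle pitches up at a steady 5 deg/s for 1 s.
pitch_est, dt = 0.0, 0.01
for step in range(1, 101):
    true_pitch = math.radians(5.0 * step * dt)                 # simulated ground truth
    pitch_est = complementary_filter(pitch_est,
                                     gyro_rate=5.0,            # deg/s reported by the gyro
                                     accel_x=math.sin(true_pitch),
                                     accel_z=math.cos(true_pitch),
                                     dt=dt)
print(f"Estimated pitch after 1 s: {pitch_est:.1f} deg (true: 5.0 deg)")

The gyro term keeps the estimate smooth from one control cycle to the next, while the small accelerometer contribution prevents the slow drift that pure integration would accumulate.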

One area of significant advancement in UAV design is autonomous flight capabilities. Early drones
relied heavily on manual control, requiring skilled operators to fly them. However, with advancements
in artificial intelligence and automation, drones can now perform autonomous flight missions, using
pre-programmed routes or even real-time decision-making algorithms. This advancement has made
drones more accessible to a broader range of users and has opened up new possibilities for tasks such
as mapping, surveying, and inspection.
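
As a minimal sketch of what a pre-programmed route can look like in software (the coordinates, speed, and arrival radius are illustrative assumptions, and real autopilots also handle altitude, wind, and vehicle dynamics), the following toy example steps a point-mass "drone" through a list of waypoints:

import math

def next_heading(pos, waypoint):
    """Bearing (degrees) from the current position to the next waypoint (x, y in metres)."""
    return math.degrees(math.atan2(waypoint[1] - pos[1], waypoint[0] - pos[0]))

def fly_route(pos, waypoints, speed=5.0, dt=1.0, arrival_radius=2.0):
    """Advance a simple point-mass vehicle through a pre-programmed list of waypoints."""
    for wp in waypoints:
        while math.dist(pos, wp) > arrival_radius:       # keep flying until close enough
            heading = math.radians(next_heading(pos, wp))
            pos = (pos[0] + speed * dt * math.cos(heading),
                   pos[1] + speed * dt * math.sin(heading))
        print(f"Reached waypoint {wp}")
    return pos

fly_route(pos=(0.0, 0.0), waypoints=[(50.0, 0.0), (50.0, 50.0), (0.0, 50.0)])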

To conclude, advancements in UAV design have revolutionized various industries, making drones
more efficient, versatile, and reliable. These advancements include improvements in battery
technology, the use of lightweight materials, miniaturization of components, advancements in sensor
technology, and the development of autonomous flight capabilities. With these advancements, drones
can now perform tasks that were previously unimaginable, offering endless opportunities in fields such
as surveillance, agriculture, search and rescue, and aerial photography.

[mh] Payload Innovations

[h]Earth Observation Technologies: Low-End-Market Disruptive Innovation

Although Earth Observation (EO) started as an activity affordable only for governments or big
players in space with vast financial resources to sustain expensive programmes, it is no longer an
exclusive and expensive industry. This shift has allowed the emergence of start-ups and spin-offs from
academia and from emerging countries, which form the foundations of the New Space movement. This
phenomenon, known as the democratization of space, changes the accessibility, affordability, and
commercialization of space products and services for companies of all types and sizes.

According to, New Space can be understood as a disruptive trend whose aim is to transform space into
a commodity by taking advantage of the convergence of Information Technology (IT) and EO. Even
though its origins were in Silicon Valley, the trend has now spread worldwide.

According to the Union of Concerned Scientists (UCS) satellite database, 1,980 operational satellites were
orbiting the Earth at the end of April 2018, with 684 of these devoted to EO. This represents a growth of
250% compared to January 2014, when there were only 192 active EO satellites. From this huge
increase, it is clear that EO data acquisition is an emerging market. With the number of companies
growing year by year and optimistic forecasts, it can be argued that “EO is in its early days and
there are still a lot of improvements to make and problems to solve”.

Looking at the new EO-based markets, some signs of potential disruptive innovation in the space sector
can be observed. Several technological drivers promote this innovation: low-cost access
to Earth imagery; availability of high-quality spatial, spectral, and temporal imagery; innovations in
computer science, such as cloud computing and machine learning; and specific programmes, like
Copernicus, that transfer high-technology developments from governmental programmes to other
industries and services.

Beyond technological drivers, other factors promote disruptive innovation in the space sector, such
as the sharing economy, the increasing demand from the commercial sector (for example, smart cities), and
government interest in environmental monitoring. For example, Spaceflight offers companies a global
launching opportunity, working with almost every launch vehicle provider on the planet.

All of these drivers encourage new companies, like Skybox, SpaceX, OneWeb, Telesat, Planet, and
OpenCosmos, among others, to develop new business models that make space more accessible and
affordable for nongovernmental organizations over shorter time frames.

Figure summarizes the total number of micro- and nano-satellites launched per year between 2001 and
2017. Micro-satellites are loosely defined as any satellite weighing between 10 and 100 kg, while
nano-satellites weigh less than 10 kg. They have been classified by sector to emphasize the huge
increase in commercial micro- and nano-satellites launched in recent years compared to those launched
by defense departments and governments.
Figure. Micro- and nano-satellites launched between 1997 and 2017, classified by sectors.

Hypotheses: disruption in EO technologies

The term disruptive innovation was popularized in 2003 by Clayton M. Christensen, professor at
Harvard Business School. In, he distinguished between sustaining and disruptive technologies and
later, in, he replaced the term technology with innovation, since disruption does not come from
technology but from businesses. According to, sustaining innovations foster improved product
performance, while disruptive innovations bring to the market a very different value proposition, with
performance that is initially below that of mainstream products but with low prices or unique features that
compensate for it.

Additionally, in, a distinction between low-end-market and new-market innovations is made. Low-end-
market innovations are those that do not result in better product performance but offer lower prices,
such as Walmart and its cheap retailing malls. On the other hand, new-market innovations, like the
iPod, serve new users who had not owned or used the previous generation of products.

Christensen approached disruptive innovations from the point of view of both management and
industry. However, his recommendations remain at the industry level. Despite his product-performance
and business-strategy analysis, his definition does not identify the intrinsic characteristics of the
innovation itself, relying instead on external factors that change over time, such as customer perception
or government regulations.
In, the concept of innovation is applied to the space environment. The author states that several factors
affect the likelihood of innovation within the European space sector. Table summarizes these
factors, dividing them between those that promote space innovation (+) and those that prevent it (−).

Factor affecting innovation / Space situation

Challenging objectives and attractive environments (+): Space missions remain technically very challenging, and their components and technologies are still one-off prototypes, custom-designed and optimized for specific missions.

Closed sector (−): Space is a closed sector with little exchange of resources outside of aerospace and defense. However, innovations, especially disruptive ones, appear from the intersection of domains and disciplines.

Risk aversion (−): Space activities are high-risk efforts and do not offer opportunities for error correction after launch. This leaves little freedom for innovation and leads to a risk-averse culture.

Highly skilled workforce (+): The space workforce is highly educated and mobile and has a diverse cultural background.

High entrance barriers and open competitive markets (−): Without open competitive markets, space innovations are likely considered useless for businesses. Additionally, high entry barriers and huge launch costs reduce the incentive of industrial and private actors to invest in space innovations.

Table.

Factors affecting space innovation.

In, the previous concept of disruptive innovation is refined by identifying three innovation
characteristics: functionality, discontinuous technical standards, and ownership models. This definition
broadens the meaning of low-end-market and new-market innovations.

Low-end-market innovations are those with discontinuous technical standards that disrupt markets by
using new, less costly materials or new production processes in the creation of existing technologies, or
new forms of ownership. These forms dictate how innovations are received in a marketplace, as they
establish prices and innovation-related services, among others.

New-market innovations are those with a disruptive functionality that provides the user with the ability
to undertake a new behavior or accomplish a new task that was impossible before.

Taking the above signs of innovation in the space sector and following the strategy developed in,
recent micro- and nano-satellite EO missions seem to show the key characteristics of disruptive
innovation. Table summarizes the characteristics of micro- and nano-satellites as disruptive innovations
according to different authors.
Characteristics of disruptive innovations, as attributed by Christensen and Raynor, Summerer, Nagy et al., and Denis et al.:

High level of risk
Discontinuous technical standards (simplicity)
Accessibility
Enabling new market opportunities
Inferior performance
Performance improvement
Disruptive functionality
Affordability
Forms of ownership

Table. Characteristics of micro- and nano-satellites as disruptive innovation in space.

By combining the above-presented characteristics of disruptive micro- and nano-satellite innovations
with the main specificities of the EO space market, a set of six hypotheses for micro- and nano-space
market disruption has been developed and is presented in Table. In this section, all mentioned
characteristics are tested to verify whether the authors’ hypotheses hold, in order to clarify whether micro-
and nano-satellites are disruptive for the EO market.

Characteristic of disruptive innovation / Hypothesis to test / Hypothesis label

Standardization (Hypothesis 1): The space sector has a low level of risk acceptance, which leaves little freedom for innovation. However, micro- and nano-satellites provide simplicity and standardization in terms of design and manufacturing. This leads to a higher level of risk acceptance and, consequently, more innovation.

New market opportunities (Hypothesis 2): Data accessibility and technology standardization are essential conditions to open new market opportunities.

Performance (Hypothesis 3): Micro- and nano-satellites improve their performance at a pace that meets market needs even though their performance is inferior to that of traditional EO spacecraft.

Affordability (Hypothesis 4): Traditional, established space companies are ignoring the market due to very low profit margins. This fact leaves room for new entrants with totally different business models. These new actors bet on low-cost technology to produce more affordable space systems for Earth Observation.

Ownership forms (Hypothesis 5): Recent evolutions in micro- and nano-satellite technologies are affecting the forms of ownership and operability of EO systems, which were formerly owned by governments or public organizations.

Disruptive functionality (Hypothesis 6): Micro- and nano-satellite missions offer disruptive functionalities that provide novel products or services that were unthinkable or impossible with traditional spacecraft missions.

Table.

Summary of studied hypotheses related to the disruptive innovation characteristics.

Analysis: disruption in EO technologies

In this section, the six hypotheses stated in Table for micro- and nano-space market disruption are
analyzed. The first hypothesis concerns space market standardization; the second, market opportunities;
the third, micro- and nano-satellite performance; the fourth, the affordability of the new space
technologies for EO; the fifth, the forms of ownership and operability of EO systems; and, finally, the
sixth, the disruptive functionalities that provide novel products.

3.1 Hypothesis 1: micro- and nano-satellite simplicity and standardization

H1: The space sector has a low level of risk acceptance, which leaves little freedom for innovation.
However, micro- and nano-satellites provide simplicity and standardization in terms of design and
manufacturing. This leads to a higher level of risk acceptance and, consequently, more innovation.

In mainstream space platforms, each design, development, and test campaign tends to be almost
unique, custom-made for the specific mission. Long project durations and high costs are consequences
of the complexity implied by the need to guarantee maximum quality and, hence, minimum risk
for the mission.

On the other hand, micro- and nano-satellite constellations are based on the concept of standardization,
which opens up the possibility of using commercial electronic components and choosing among numerous
technology suppliers. In that way, it is possible to create less expensive satellites in shorter periods.
Depending on the specifications, a micro-satellite can be built and placed in orbit for a few million
euros and a nano-satellite for roughly a quarter of a million. In comparison, the cost of a large satellite can
rise to 500 million euros.

Apart from cost and size, the main benefit of micro- and nano-satellites is the time required to
design and implement each model. On average, a micro- or nano-satellite can be designed,
manufactured, and launched within less than 2 years. This means that large constellations of small
satellites can be regularly renewed with state-of-the-art systems, ensuring optimal performance even if
some units are lost or fail. This is not the case for conventional satellites, which are developed and
launched within expensive and long projects that last between 5 and 10 years and, accordingly, cannot
afford any failure in the platform without risking the entire mission.

Particularly worthy of mention is the recent emergence of many dedicated micro-launchers designed to
place small satellites in orbit. So far, micro- and nano-satellites have mostly been launched at marginal cost as
“piggyback” payloads alongside traditional spacecraft. However, new micro-launcher concepts may
bring simplicity and standardization to the whole launch process, lowering launch costs if
they demonstrate reliability and good performance.

For the stated reasons, H1 can be supported, since micro- and nano-satellite design and manufacturing
is focused on simple, standard equipment, which may eventually increase the associated level of risk acceptance.

3.2 Hypothesis 2: new market opportunities

H2: Data accessibility and technology standardization are essential conditions to open new market
opportunities.

EO is a promising, fast-growing field boosted by a wide range of applications across various economic
sectors, including precision farming, natural resource monitoring, oil and gas exploration, meteorology,
civil protection, insurance, and urban monitoring. The emergence of low-cost micro- and nano-
satellites has enabled EO start-ups to attract new markets interested in the tremendous amount of
accessible and affordable high-resolution imagery they produce. Additionally, more and more countries
invest in their EO capacity, confirming the soft-power dimension of space but also opening new market
opportunities for international or regional cooperation.

Not only is space becoming more accessible through new launch technology, but data from
programs like the US Landsat and Europe’s Sentinel programs are also already available to all. This allows third
parties to develop new services and applications on top of high-quality databases supported by different
funding programs. For instance, OneAtlas updated the base map of the whole world with high-
resolution imagery without taking a single picture itself, and OneWeb plans to use small spacecraft technology to
make satellite Internet available on a global scale.

It is clear that some of these new markets are only now gaining access to EO data because it is cheaper
than before. However, a very important entry barrier was also the traditional space companies
themselves, because data owned and controlled by defense and public organizations were not available
at any price.

In Figure, it can be seen that in 2017 defense represented more than 60% of the commercial
data market ($1.8 billion), with the infrastructure and natural resources verticals accounting for similar shares
to each other. These three vertical markets represented 80% of the commercial data market in 2017.
Looking to the future, Euroconsult forecasts that the market for commercial EO data will
reach $3 billion (a 5% Compound Annual Growth Rate (CAGR)) by 2026.
Figure. Commercial EO data market in 2017 (left) and value-added services market in 2017 (right).
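
For reference, a compound annual growth rate relates a start value and an end value over n years as

\mathrm{CAGR} = \left( \frac{V_{\text{end}}}{V_{\text{start}}} \right)^{1/n} - 1.

Purely as an illustrative consistency check (the 2017 baseline below is an assumption for the worked example, not a figure reproduced from the report), a market of roughly $1.9 billion in 2017 growing at 5% per year over the nine years to 2026 would reach about 1.9 \times 1.05^{9} \approx 2.9 billion dollars, the same order of magnitude as the forecast quoted above.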

In the short term, growth is expected to continue to be driven by defense, with ongoing regional
unrest and the growing image-intelligence needs of countries without proprietary military systems. By
2026, defense is expected to represent 46% of the total market value ($1.7 billion). Therefore,
although defense will continue to be the major client for EO imagery, its share will shrink in the
coming years. Other applications, such as maritime, infrastructure, and resource monitoring, will
support growth in the long term. Together with defense, these applications should sustain a 5% CAGR
through 2026. Emerging applications in these sectors, such as critical infrastructure monitoring and
precision agriculture, benefit from more capable satellite systems (i.e., a combination of higher ground
resolution with higher temporal resolution). Location-Based Services (LBS) applications, including
financial and insurance services, have been slow to develop, but the longer-term outlook for these
services remains positive with the availability of new satellite capacity. For LBS applications, greater
emphasis is expected to be put on integrated product offerings, which require the development
of change-detection analytics. In terms of revenue generation by data type, VHR optical is expected to
remain the most significant in terms of data sales. More moderate-resolution datasets will be
challenged by the availability of free solutions and low-cost systems offering comparable data.

According to, in 2016 the market for Value-Added Services (VAS) was $3.5 billion. This figure excludes
the purchase of commercial data used to develop geospatial solutions. Key markets for VAS do not mirror
those for commercial data sales. Defense, while representing 61% of the commercial data market, represents
only 15% of the VAS market; conversely, infrastructure and engineering (which incorporates
cartography, cadastre, etc.) is only 10% of the commercial data market but 33% of the value-added
market.

According to, the reasoning for this is relatively straightforward: defense end-users purchase data and perform
much of the value-added analytics in-house. On the other hand, lower-cost, coarser-resolution, and
lower-geolocation-accuracy data can be combined with value-added processing to form higher-value products and
services. Environment-monitoring users, for instance, procure limited commercial data but are
developing solutions using scientific and coarse-resolution data, for example, pollution/aerosol
monitoring and climate modeling. Many infrastructure applications for mapping can also be developed
using Landsat and Sentinel data, which are free of charge.
In, it is also forecast that by making coarser-resolution data free, the value-added services industry can
leverage this to build higher-value services, with the potential for two very different businesses: a
“high-end” data market to support defense and free/low-cost data sources to support commercial and
civil government applications.

For these reasons, H2 would also be supported, since new market opportunities are growing and
standardization has already been shown under H1.

3.3 Hypothesis 3: micro- and nano-satellites performance

H3: Micro- and nano-satellites improve their performance at a pace that meets EO market needs
despite having inferior performance compared to traditional EO spacecraft.

EO optical imaging satellite performance is defined in terms of spatial and temporal resolution. Spatial
resolution relates to the level of detail obtained from an image and can be measured by the Ground
Sample Distance (GSD), which is the distance between adjacent pixel centers measured on the ground.
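
As a simple worked illustration under a pinhole-camera approximation (the pixel pitch, altitude, and focal length below are assumed values for the example, not the parameters of any satellite discussed in this chapter), GSD follows from the detector pixel pitch p, the orbital altitude H, and the focal length f:

\text{GSD} = \frac{p \, H}{f} = \frac{(5 \times 10^{-6}\ \text{m}) \times (500 \times 10^{3}\ \text{m})}{0.5\ \text{m}} = 5\ \text{m},

so a smaller pixel pitch, a lower orbit, or a longer focal length all drive the GSD down, that is, toward finer detail.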

Figure shows the evolution of EO micro- and nano-satellite GSD over the last 20 years. The solid lines
depict how the concepts of Medium Resolution (MR), High Resolution (HR), and Very High
Resolution (VHR) have evolved through time. While MR has remained roughly constant at around 15 m, HR and
VHR have decreased to 2 and 0.3 m, respectively.

Figure. Evolution of EO satellite GSD during the period 1999–2018.

Dots in the figure represent GSD values for the EO micro- and nano-satellites analyzed in this research.
Cross marks show that between 1999 and 2013 governmental and defense platforms were almost the only micro-
and nano-satellites devoted to obtaining HR and VHR images of the Earth. However, 2013 marks a
turning point in the EO market, with the irruption of private start-ups launching small space platforms
able to achieve resolutions from 4 to 1 m.

Nano-satellites can obtain a GSD below 3 m, thanks to new sensor technologies and the use of Low
Earth Orbit and Very Low Earth Orbit. In the micro-satellite range, a GSD of around 1 m can be
achieved, with the prospect of even lower values in the coming years. These values are small enough to
be of great interest for many commercial applications, mainly in the agriculture, transportation,
energy, and infrastructure markets.

If temporal resolution is also taken into account, micro- and nano-satellites show their huge
potential for EO commercial applications. A satellite’s revisit time is the time elapsed between
observations of the same point on Earth’s surface. Figure summarizes the GSD, year of launch, mass, and
revisit time of EO micro- and nano-satellites. The number of satellites displayed in the figure is smaller than
in Figures 1 and 3 because the information about revisit time was not available for many satellites.
Besides, the revisit time of the Flock (Planet) and ÑuSat (Satellogic) constellations is calculated using their
final future configuration.

Figure. GSD, year of launching, mass, and revisit time of EO micro- and nano-satellites.

In Figure, each circle represents a satellite or a constellation of identical satellites. GSD and launch
year can be read on the two axes, while each circle gives information about mass and revisit time
(the bigger the circle, the more massive the satellite; the darker the circle, the shorter the revisit time). Looking at the characteristics of
the different satellites, it is easy to see that the Flock and ÑuSat constellations are the only platforms able
to provide revisit times lower than one day. This capability makes their data more appealing than that of any
of the other platforms, even though they have slightly lower spatial resolution. The key to this performance is the
possibility of designing, launching, and operating constellations of more than 100 satellites, something that
seems possible only thanks to the reduced costs associated with micro- and nano-satellite technology,
considering the several-hundred-million-dollar cost of traditional EO satellites. These massive micro-
and nano-satellite constellations aim to transform EO imagery into a commercial product (e.g.,
analytical solutions derived from the big data obtained from the constellations), taking advantage of their
near-high resolution and high revisit frequency.

For all these arguments, H3 would be supported, as performance is increasingly high in new satellites
and constellations, even though it still lags behind that of conventional satellites.

3.4 Hypothesis 4: affordability of the new space technologies for EO

H4: Traditional, established space companies are ignoring the market due to very low-profit margins.
This fact leaves room for new entrants with totally different business models. These new actors bet on
low-cost technology to produce more affordable space systems for Earth Observation.

During the last decades of the twentieth century, EO systems were mainly dedicated platforms owned
and operated by public organizations or governments, often at a national level. This status quo was
sustained by economic and policy barriers to space commerce. Traditionally, costs associated with
satellite development and operation have been extremely high, both in Low Earth Orbit (LEO) and in
Geostationary Orbit (GSO). However, platform standardization, continued progress in technology miniaturization, and
Components Off-The-Shelf (COTS) are not only leading to cheaper satellite development and launch
but also reducing manufacturing time. The possibility of using many small satellites in a constellation
is enabling near real-time Earth Observation and addressing the issue of temporal resolution.
Consequently, increasingly large amounts of data are being gathered every day.

This novel combination of price reduction and data generation has created in the last decade new
business opportunities favoring the emergence of new space companies dedicated to the EO market.
These companies base their innovative business models on the generation of near real-time high-
resolution images (close to 1 m) that are sold on user-oriented data access platforms (around $1/km²).

It is important to note that this new model is mainly driven by start-ups with substantial investment
capacity. Figure depicts the evolution of investment and the number of start-ups founded in the EO
market between 2013 and 2017. It can be seen that almost 60 EO companies were founded in these
5 years. More significantly, the solid line clearly shows an increase in investment, from less than
5 million dollars in 2013 and 2014 to almost 160 million dollars in 2017. According to, one of the
reasons why new start-ups have been so successful in raising capital may well be that they challenge
traditional space enterprises in technology implementation, spacecraft deployment, business-model
innovation, etc. Some of these new start-ups that embrace open innovation and knowledge sharing
are Planet, Satellogic, PLD Space, Deimos, and GomSpace, among others.
Figure. Earth Observation start-ups and investments during the period 2013–2017.

For all these reasons, H4 would be supported, since the new players in the space sector are mainly new
companies rather than established ones.

3.5 Hypothesis 5: forms of ownership and operability of EO systems

H5: Recent evolutions in micro- and nano-satellite technologies are affecting the forms of ownership
and operability of EO systems, which were formerly owned by governments or public organizations.

These new opportunities foster innovation and commercial growth, but they also leave room for
establishing a legal framework. Such regulation would aim to maintain a safe and predictable space
environment that can accommodate the rapid changes of technology without restricting the
innovation freedom of the space sector.

The disruption in the space market extends beyond technological improvements. For instance, an increase
in the supply of EO imagery has implications for new business models, lower costs, and more
flexible ownership models for commercializing imagery.

Emerging start-ups and spin-offs in the space sector are transforming the operability of EO systems
formerly owned by governments or public organizations. This transformation extends from the satellites
themselves to the data processing and, finally, to the data analysis that constitutes VAS for commercial and
public organizations.

The idea of a “sharing economy” implies a revolution in the ownership of space imagery: all users
have access to relevant and free data under a distributed ownership scheme. This trend is being driven
by multiple technological innovations, for example, reusable launchers, online platforms where users
can combine different data (e.g., Blacksky), or launch service platforms (e.g., Spaceflight).
Nevertheless, there are questions over the sustainability of this ownership model, especially for
commercial organizations that need to generate profit. Additionally, there are certain applications
related to security or defense where such a shared-ownership model may not be appropriate.

The Radiant Earth Foundation is trying to address some of these challenges by building a place where
the development community can go for Earth imagery and geospatial data, with access to market
analytics, best-practice guides, return-on-investment methodologies, and discussion of policy issues.

For all these reasons, H5 would also be supported, since the new players identified under H4 are also
defining new forms of ownership and service.

3.6 Hypothesis 6: disruptive functionality

H6: Micro- and nano-satellite missions offer disruptive functionalities that provide novel products or
services that were unthinkable or impossible with traditional spacecraft missions.

Although nano- and micro-satellites are dramatically changing the EO market, it cannot be said that
they are providing novel products or services to the final data users. Satellite imaging has been in use
since the early 1970s, when the Landsat program started. As stated before, the irruption of new start-ups
with novel business models is based not on the generation of new data, but on improved access to it,
complementing traditional space business models.

Therefore, considering that hypotheses 2 and 4 proved to be true, micro- and nano-satellites in the EO
market can be categorized as low-end-market disruptive innovations. This conclusion is also supported
in, where it is stated that new space business models do not drastically change the satellite EO business,
since both fulfill a similar kind of customer need. What New Space businesses provide, in contrast with
traditional ones, is accessibility, affordability, and commercialization of space products or services to
commercial and noncommercial companies.

New Space has usually been considered a disruptive market. Technological drivers such as low-cost,
high-quality imagery promote innovation and encourage new companies to develop new business
models that make space more accessible and affordable for nongovernmental organizations over
shorter development periods.

This chapter has measured how disruptive the micro- and nano-satellite innovations are within the EO
space market, testing a series of hypotheses based on established criteria for disruptive innovation.

As a result, we have observed that micro- and nano-satellite technologies represent a low-end-market
disruptive innovation, since they standardize the production process, which reduces the cost of the design and
manufacturing phases. Additionally, thanks to this standardization, the forms of ownership and
operability of EO platforms have shifted toward a private model. This allows lower
prices and the creation of innovative services that not only open new market opportunities for new
business models and data accessibility for commercial companies, but also improve space market
performance, even though their characteristics are inferior to those of traditional EO
spacecraft. As a consequence, it cannot be said that micro- and nano-satellites drastically
change the satellite EO business, but they do provide accessible and affordable data to commercial and
noncommercial companies, in contrast with traditional providers.

[mh] Integration of AI and Autonomy

[h]Human-Machine Collaboration in AI-Assisted Surgery: Balancing Autonomy and Expertise

Artificial Intelligence (AI) is an exponentially growing field that has already impacted many industries.
Although the foundations of AI were laid out by Alan Turing as early as the Second World War, recent
advances in machine learning and deep learning have bolstered the field, making it one of the most
exciting areas of research and development in today’s technology landscape. AI focuses on developing
systems to perform tasks that would normally require human intelligence, including activities such as
problem solving, pattern recognition, decision making, and even creativity. In recent years, as AI's
popularity has increased, its impact on various elements of the medical industry has become more
visible, a harbinger of the future integration of AI technology into the surgeon’s daily toolbox.

As this technology continues to mature and integrate into surgical practice, the questions surrounding
its role in the operating room will become more complex. While the primary question of “what can AI
do for surgeons?” might soon have an obvious answer, it will open the door to the more nuanced
inquiry of “how will surgeons adopt this technology, and how can we mark the boundaries of what we
should permit AI to do in the operating room?”

We herein discuss the information necessary for the surgical community to understand the issues
surrounding AI, and we lay out a framework to assist in making appropriate choices when it comes
to balancing human-AI collaboration in the operating room (OR). The chapter is divided into three
sections: the human perspective of the collaboration, the machine side of the collaboration, and the
balance between surgeon and machine.
The human perspective of the collaboration

3.1 The surgeon’s responsibility in the operating room

Surgeons are trained to make complex decisions under pressure and to act on those decisions with
appropriate speed. This requires constant situation assessment and analysis, followed by reassessment and
reanalysis. When leading a multidisciplinary team, the surgeon is held responsible for the patient’s
welfare, safety, and wellbeing. From the very beginning of a surgeon’s professional life, this personal
responsibility for their patients’ outcomes is instilled in them and is constantly reinforced throughout
their career. The American College of Surgeons describes the surgical profession as one of
responsibility and leadership, where the surgeon is ultimately in charge of every aspect of the patient’s
well-being, even in aspects in which they are not directly involved. While some of these responsibilities might be
obvious, others may perhaps be less so, as laid out in Table.

Responsibility of the surgeon to ensure patient safety (Responsibility: Description)

Preoperative preparation: Oversee proper preoperative preparation of the patient with standardized protocols. Achieving optimal preoperative preparation frequently requires consultation with other physicians from different disciplines; however, the responsibility for attaining this goal rests with the surgeon.

Informed consent: Obtain informed consent from the patient regarding the indication for surgery and surgical approach, with known risks.

Consultation with OR team: Consult with anesthesia and nursing teams to ensure patient safety. Oversee all appropriate components of the surgical time-out (identification of patient, procedure, approach, etc.).

Safe and competent operation: Lead the surgical team in performing the operation safely and competently, mitigating the risks involved. Ensure the anesthesia type is appropriate for the patient and procedure, including planning the optimal anesthesia and postoperative analgesic method with the anesthesia team.

Specimen labeling and management: Oversee specimen collection, labeling, and management with completion of the pathology requisition.

Disclose operative findings: Disclose operative findings and the expected postoperative course to the patient.

Postoperative care: Personal participation in and direction of the postoperative care, including the management of postoperative complications. If some aspects of the postoperative care may be best delegated to others, the surgeon must maintain an essential coordinating role.

Follow up: Ensure appropriate long-term follow-up for evaluation and management of possible extenuating problems associated with or resulting from the patient’s surgical care.

Table.

Responsibilities of the surgeon as the treating physician.

The tremendous weight of carrying all this responsibility often creates a psychological mindset in which
the delegation of responsibilities becomes a difficult task that must be managed with great assiduity.
Surgeons learn via their training to “trust no one”, to delegate tasks with caution, and to personally
review all data. This constant and obviously essential need for oversight raises several questions: What
does it take for surgeons to feel comfortable delegating responsibility? When do surgeons feel at ease
relinquishing part of this control? And, subsequently, what does it take for the surgical profession
to adopt new technologies that take part of this burden of responsibility away from the surgeon?

3.2 The surgeon as an innovator and the process of adopting new technology

Although surgical training is based on apprenticeship, where the student learns from the master and
replicates the master’s actions exactly, the advancement of surgical capabilities has always relied
heavily on the innovation and adoption of new technologies. Throughout history, the desire to help
their patients has motivated surgeons worldwide to be creative in finding new solutions to their
problems. The evolution and adoption of change within actual surgical practice, however, is rather
complicated. Some surgeons are constantly innovating by customizing therapies and procedures to
meet the uniqueness of each patient, while most continue to follow the path laid out by their
mentors, often reluctant to adopt new technologies. As such, the integration of novel technologies or
procedures into a surgeon’s daily practice is influenced by many factors, including the possible benefit
the innovation provides to the patient, the patient’s demand for it, the learning curve required for skill
acquisition by the surgeon, and the amount of disruption it would create within their practice. Take, for
example, laparoscopic cholecystectomy: it took only four years from its introduction to become the
gold standard for gallbladder removal, as this procedure had obvious and very tangible benefits for
patients compared to open cholecystectomy, and the amount of disruption to surgical practice was
low. In contrast, laparoscopic simple nephrectomy attained a mere 20% acceptance rate among
surgeons thirteen years after its introduction, most likely due to surgeons' lack of perceived benefit in
changing the standard of care. The question then arises: how does one promote and
move forward a new concept so that it can be adopted?

The process by which a cohort adopts a new concept (idea, technology, procedure, etc.) can be studied
and understood with the Technology Adoption Curve (TAC). The TAC is a sociological model that divides
individuals into five types of people with different desires and demands, and explains what it takes for
each of these groups to adopt an innovation. These five groups are the innovators, the early adopters,
the early majority, the late majority, and the laggards.
Figure. Technology adoption curve. The bell curve represents the variation of adoption, and the S-curve
represents the accumulated adoption over time.
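
As a small numerical illustration of how the S-curve in the figure accumulates from the bell curve, the sketch below uses the classic Rogers diffusion-of-innovations shares for the five adopter groups (2.5%, 13.5%, 34%, 34%, and 16%); these percentages are standard reference values used here only for illustration, not figures taken from this chapter.

from itertools import accumulate

# Classic Rogers adopter-category shares (percent of the population); illustrative only.
groups = {"innovators": 2.5, "early adopters": 13.5, "early majority": 34.0,
          "late majority": 34.0, "laggards": 16.0}

# The bell curve is the share per group; the S-curve is its running (cumulative) sum.
cumulative = list(accumulate(groups.values()))
for name, total in zip(groups, cumulative):
    print(f"after the {name:<15} ~{total:5.1f}% of the cohort has adopted")

Note that the running total crosses roughly 20% adoption early in the early-majority phase, which is the adoption threshold discussed below.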

The TAC model, used to describe adoption in the general population, can be extrapolated and applied to
the adoption of technology by surgeons.

Innovator surgeons are enthusiastic about new technologies and are willing to take the risk of failure.
They are willing to test a new procedure even if it is in experimental stages. Early adopters are the
trendsetters, they are also comfortable with risk, but they want to form a solid opinion of the
technology before they vocally support it. These surgeons are comfortable trying a novel procedure
that has enough published literature to be regarded as safe.

Surgeons in the early majority are interested in innovation but want definitive proof of effectiveness.
The benefits of a procedure are more important to them than its novelty. The late majority are averse
to risk; as such, they need to be convinced that the new procedure is worth their time. Laggards
are skeptical and wary of change, preferring to continue with what is
familiar to them.

In an effort to better relate the model to evidence-based practices like surgery, Barkun et al. proposed
some adaptations, allowing for critical appraisal and assessment of the technology. In their model,
every stage would require peer review, thereby promoting a more scientific approach to the application
of new technology in surgery.
Technology adoption curve / Stages of surgical innovation

Innovators: Development
Early Adopters: Exploration
Early Majority: Assessment
Late Majority: Long Term Implementation
Laggards

Table.

Stages of surgical innovation according to Barkun et al. and how they compare to the TAC model.

On average, for a new concept to be considered adopted, 20% of people must have already begun to
use the technology; in other words, some but not all people in the early majority group of the TAC. For
this to happen with AI in the OR, the benefit of the technology must be proven beyond the proof-of-
concept stage. Once the technology has been proven to be safe and beneficial, it will be easier to
convince more individuals to try it, thereby promoting more widespread acceptance, adoption, and
eventually integration into daily practice.

The machine side of the collaboration

4.1 The basics of artificial intelligence

Artificial intelligence (AI) is defined as the simulation of human intelligence in machines programmed
to think and learn like humans. The aim of AI is to create machines with the ability to perceive their
environment, reason about it, and act in ways that would normally require human intelligence, or to
process data whose scale exceeds what humans can analyze. In other words, the aim is to create systems that have
a certain degree of autonomy. Within the framework of autonomy in AI there is a hierarchy comprising
three main tiers:

4.1.1 Artificial narrow intelligence

Systems designed to perform a specific task or solve a specific problem. As such, they have a narrow
range of parameters allowing them to simulate human behaviors in specific contexts such as face or
speech recognition and processing, voice assistance, or autonomous driving. They are “intelligent”
only within the specific task they are programmed to do.

4.1.2 Artificial general intelligence

Systems designed to perform any intellectual task that a human can. Apart from mimicking human
intelligence, these systems have the ability to learn and adapt. Additionally, they can think, learn,
understand, and act in a way that is indistinguishable from that of a human being in any situation.
4.1.3 Artificial superintelligence

A system designed to surpass human intelligence in every aspect with the ability to improve its own
capabilities rapidly. This system is designed to have consciousness and be sentient, surpassing humans
in every way: science, analysis, medicine, sports, as well as emotions and relationships.

While the tiers of AI are each fascinating in their own way, currently the only type of AI that exists is
Artificial Narrow Intelligence. The remaining tiers are merely theoretical and philosophical concepts;
as such, they have yet to be achieved and are beyond the scope of this chapter.

To further understand how AI works, it is important to discuss the concepts of Machine Learning,
Artificial Neural Networks, and Deep Learning. These terms describe the techniques that form the basis
of AI systems and refer to different ways of training machines on data, each one building upon the
prior one in order to reach more complex results.

 Machine Learning (ML) is the process by which an AI system can automatically improve with
experience; this process allows a system to learn from data without being explicitly
programmed. Machine learning algorithms can analyze large amounts of information to identify
patterns and make predictions or decisions based on that analysis.
 Artificial Neural Network (ANN) is a type of machine learning algorithm based on a collection
of connected units called “neurons” that loosely model neurons in the biological brain. Each
connection can transmit a signal to other “neurons” which in turn receive the signal, process it
and forward a new signal to other neurons connected to it. A neuron can only transmit its
processed signal if it crosses a certain threshold, a process similar to the depolarization of
biological neurons, hence the term neural network.
 Deep Learning (DL) is a type of neural network that is designed to learn and make decisions
based on multiple hidden layers of interconnected neurons. Deep learning algorithms are
capable of learning and representing complex relationships in multiple datasets automatically.
Figure. Deep learning example of an artificial neural network where an image is pushed through
several algorithms in hidden layers. Once all layers are processed, the outcome can be reached, in this
case a classification of the image.
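
To make the layer-by-layer picture above concrete, the sketch below runs a single forward pass through a tiny feed-forward network. It is only an illustration of how signals flow through connected "neurons" with a threshold-like activation: the weights are random and untrained, the four-number input stands in for an image, and nothing here constitutes a working classifier.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of 'neurons': a weighted sum of inputs plus bias, passed through
    a ReLU activation that only fires when the sum crosses zero (the threshold)."""
    return np.maximum(0.0, inputs @ weights + biases)

# A toy network: 4 input features -> two hidden layers of 8 neurons -> 3 output scores.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 3)), np.zeros(3)

x = np.array([0.2, 0.7, 0.1, 0.9])   # a made-up input "image" flattened to 4 numbers
h1 = layer(x, w1, b1)                # first hidden layer
h2 = layer(h1, w2, b2)               # second (deeper) hidden layer
scores = h2 @ w3 + b3                # output layer: one score per class
print("class scores:", scores)

In a trained network, the weights would be adjusted from data (the machine learning step), and many more layers and neurons would be stacked to capture complex relationships (the deep learning step).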

Now that the techniques that serve as the basis of AI have been clarified, it is important to understand
how they are applied to create actual usable systems that can perform a task. These basic applications
of AI include Natural Language Processing, Computer Vision, and Expert Systems, which leverage
Machine Learning, Artificial Neural Networks and Deep Learning to solve specific problems.

 Natural Language Processing (NLP) is the ability of AI systems to understand, process, and
interpret human language.
 Computer Vision (CV) is the ability of AI systems to interpret and understand visual data, such
as images and videos.
 Expert Systems (ES) is the ability of AI systems to emulate the decision-making capacity of a
human expert.

For the purpose of simplification, one can say that there are different techniques for training artificial
intelligence (ML, ANN, and DL), which are applied to specific task domains (CV, NLP, ES) in order to solve a
specific problem.

Figure. Relationship between basic concepts of artificial intelligence.

Most of the cutting-edge AI systems available today use a variety of these algorithms in tandem to
accomplish their tasks. Table presents examples of technologies that use specific AI algorithms for each
concept discussed above.
Principle / Analogy / Real world / Medical world

Machine Learning. Analogy: Teaching a child to recognize an object by showing it pictures of the object, without telling the child what it is. Real world: Netflix, Inc. uses machine learning to recommend personalized content to each user. Medical world: Owkin, Inc. is a company that uses machine learning to improve drug discovery and clinical trial design.

Neural network. Analogy: The human brain processes and interprets information from the senses to make decisions and control the body. Real world: AlphaZero™ by Deepmind, Ltd. is a chess engine which, after 24 hours of training, defeated world-champion chess programs. Medical world: PhysIQ, Inc. is a company using neural networks to continuously monitor at-risk patients remotely and alert their physicians in real time.

Deep learning. Analogy: A student who starts learning basic concepts in class and continues to self-teach, building up to more complex ideas. Real world: Tesla, Inc. uses deep learning algorithms to constantly improve their cars’ self-driving system. Medical world: AIdoc, Ltd. is a company that uses deep learning algorithms for image analysis to detect and prioritize acute abnormalities in radiology.

Natural language processing. Analogy: A translator between people who speak different languages. Real world: Alexa™ by Amazon, Inc. is a virtual assistant that can understand, process and respond to language prompts. Medical world: The UNITE algorithm developed at Harvard University can automatically assign ICD codes based on clinical notes without human supervision.

Computer Vision. Analogy: A child that can see any picture of a dog and know it’s a dog. Real world: Google Lens™ can process an image and offer actions depending on what it sees. Medical world: DeePathology, Ltd. has created an algorithm that can autonomously detect H. pylori in pathology slides.

Expert Systems. Analogy: A firm that has a lawyer on retainer to answer any question at any time. Real world: AITax, Ltd. has an AI system that can automatically check and file user taxes. Medical world: Merative™ (formerly IBM Watson Health) is a clinical decision-support system for diagnosis and treatment planning.

Table. AI basic concepts with examples used in the real world and in the medical world today.

4.2 AI in medicine and its current applications

Today AI is already being utilized in medicine, and thus far its applications have shown promising
results, as demonstrated by improved patient outcomes, optimized clinical workflows, and accelerated
research. To date, there are 521 AI-enabled medical devices approved by the FDA, with the
overwhelming majority of these products being in the field of radiology, used to process images for all
pathologies, excluding cancer. Other AI applications currently available are used in the fields of
anesthesiology, cardiology, gastroenterology, general and plastic surgery, hematology, microbiology,
neurology, obstetrics and gynecology, ophthalmology, orthopedics, pathology, and urology. Given the
broad spectrum of applications within varying fields of medicine, one understands that AI utilization is
classified not only by type but also by its end goal. Generally, AI technologies in medicine can be
classified by the end goal they achieve; these include everything from screening and diagnosis to
triage and clinical trial management. Multiple applications are currently being utilized and under
further development, including:

Computer-Aided Detection (CADe) technology is being developed to aid in marking/localizing regions
that may reveal specific abnormalities. Its goal is to increase the sensitivity of screening tests.
Curemetrix, Inc.'s product cmAssist™, for example, has shown a substantial and statistically significant
improvement in radiologists’ accuracy and sensitivity for the detection of breast cancers that were
originally missed.

Computer-Aided Diagnosis (CADx) is being developed to help characterize or assess diseases, disease
type, severity, stage, and progression. An example of this technology is GI Genius™,
an intelligent endoscopy module by Medtronic plc that can analyze a colonoscopy in real time and
estimate the likely histology of colorectal polyps.

Computer-Aided Triage (CADt) aids in prioritizing the detection of time-sensitive patients. VIZ™ LVO is a
software by Viz.ai, Inc. that detects large vessel occlusion strokes in brain CT scans and directly alerts
the relevant specialists in a median time of 5 minutes and 45 seconds, as opposed to the roughly 1 hour that is
the standard of care today, significantly shortening the time to diagnosis and treatment.

Computer-Aided Prognosis (CAP) can provide personalized predictions about a patient’s disease
progression. The EU-funded CLARIFY project (Cancer Long Survivor Artificial Intelligence Follow-
Up) is working on harnessing big data and AI to provide accurate and personalized estimates of a
cancer patient’s risk of complications, including rehospitalization, cancer recurrence, treatment
response, treatment toxicity, and mortality.

Clinical Decision Support Systems (CDSS) are being employed to aid healthcare providers in the
diagnosis and treatment of patients in the most effective way possible. Babylon AI, by Babylon, Inc.,
for example, is a system that uses data to determine, and provide information about, the likely cause of
people’s symptoms. It can then suggest possible next steps, including treatment options. The system
has demonstrated its ability to diagnose as well as or even better than physicians.

Remote Patient Monitoring (RPM) systems are being used to monitor patients, and Virtual
Rehabilitation is being developed to help patients recover from illnesses and injuries. Systems like
CardiacSense Ltd. Medical Watch continuously monitor heart rate and blood pressure, process the data
and update the physician in real time. This noninvasive monitoring system allows the physician to
change treatment according to data that would not have been available otherwise.

Health Information Technology (HIT) is being employed to improve disease prevention and population
health. Medial EarlySign, Ltd. mines data from electronic medical records for early detection of
patients with high risk of colorectal cancer. Patients determined to have a high risk by the system are
flagged and consequently scheduled for colonoscopy. This system has achieved early detection of an
additional 7.5% of colorectal cancers that would otherwise have been caught in more advanced stages.

Clinical Trials Management Systems (CTMS) are being developed to help streamline all aspects of
clinical trials, including preclinical drug discovery, clinical study protocol optimization, trial participant
management, and data collection and management. These systems enable researchers to
improve study design by providing guidance in choosing the best design, determining the
number of patients needed for each study arm, optimizing candidate selection, and tracking and
analyzing large amounts of data. CTMS are helping researchers create stronger and more efficient
trials.

As demonstrated by the above systems, the implementation of these types of AI has significantly and
measurably improved the field of medicine. As the benefit of AI continues to be appreciated, through an
understanding of how it helps provide better and more efficient care to patients, more
professionals will begin to utilize it. With improved acceptance, the previously discussed adoption
model proposed by Barkun et al. will continue to shift towards long-term implementation.

4.3 Potential benefits of AI in surgery

Improved patient care has historically been linked to technological advancements. Laparoscopic
cameras have evolved from simple VHS quality to HD and 4K cameras, and even 3D vision with near-
infrared capabilities that allow the surgeon to see beyond the naked eye. Laparoscopic instruments
have evolved from simple straight and rigid instrumentation to articulating and flexible tools, providing a
nearly limitless range of motion. Standardization and precision surgery have infiltrated the OR in the form of
staplers for the creation of anastomoses, advanced energy tools for cutting and coagulation, and robotic-
assisted surgery that combines all of the above technologies to enhance human precision.
Most recently, AI has started to appear in the surgical field, albeit in the perioperative setting. These
systems are helping surgeons with decision-making processes both pre- and post-operatively by
predicting complications and managing different aspects of patient variables. Nevertheless, AI has yet
to penetrate the walls of the OR.

The disparity between the advancement of AI in surgery and other fields in medicine is probably
because most applicable AI technologies today are focused on vision and reporting, i.e. diagnosis and
big data analysis. Surgery at its core is about both vision and action, which presents a much more
complex challenge. This challenge, however, has not stopped research efforts in the field of Computer
Assisted Surgery. A PubMed query revealed that in 2022, there were more than 5200 publications
discussing AI in surgery, and according to The Growth Opportunities in Artificial Intelligence and
Analytics in Surgery study, by 2024 the AI market for surgery will reach $225.4 million.

Prototypes, proof of concept and pilot studies are being developed around the world, focusing mainly
on improving patient safety and refining workflows in the OR. There are already published reports of
AI projects in Expert Systems, Computer Vision, image classification, as well as data acquisition and
management that show promising results. Studies have reported success of Computer Vision systems
for recognition of surgical tools, surgical phases and anatomic landmarks.

Research on videos of laparoscopic cholecystectomy, for example, has reported success of tool
recognition such as graspers, hooks and dissectors; other studies have been successful in phase
recognition during laparoscopic cholecystectomy. The tested systems have demonstrated the ability of
understanding and reporting when the surgeon is dissecting the cystic duct, separating the gallbladder
from the hepatic bed or removing it from the body. More advanced systems have demonstrated the
ability to recognize and mark the critical view of safety.

While these research efforts are certainly demonstrating promising results, the application of AI within
the operating room itself remains in its infancy.
The balance between human and machine

When trying to find an adequate balance between human and machine collaboration in the OR the
subject of autonomy is a natural starting point. Surgical teams today are comprised of highly
specialized professionals that need to work in perfect synchrony for surgical procedures to run
smoothly. The surgeon, as the leader, must find balance between managing everything going on with a
high degree of control, whilst still allowing for the autonomy and independence of each team member.
Most surgeons are authoritative leaders within these teams, meaning that they retain control while still
empowering the freedom of self-management where each member can be engaged, motivated and
focused on their personal tasks at hand. Although the surgeon is ultimately responsible, he or she will
not intervene in a nurse’s needle or instrument counts, or check whether the anesthesia machine is
properly working. Surgeons allow themselves to relinquish this direct control because, through a strong
culture, shared values and clear guidelines, they continue to provide the critical oversight and supervision
required for effective risk management.

Besides team management, the surgeon may be liable for equipment malfunctions, therefore there is a
certain underlying hesitancy in giving autonomy to machines. A 2013 systematic review of surgical
technology and operating room safety failures found that up to 24% of errors within the OR are due to
equipment malfunction. This has not, however, stopped us from relinquishing control in certain parts
of the surgery and delegating it to tools which we cannot always fully control. Advanced hemostatic
devices like Ligasure™, for example, automatically adjust and discontinue the delivery of energy
based on their own calculations, without any surgeon input. Similarly, the Signia™ Stapling System has
Adaptive Firing™ technology that automatically and autonomously makes adjustments depending on
the tissue conditions it senses. So, while there is hesitancy on the surgeon's side towards adopting new
autonomous devices, if the surgeon is able to see the benefits as with the Ligasure™ and Signia™
systems, these types of tools can in fact break the barrier of more advanced machines into daily OR
practice.

5.1 Machine autonomy in other fields and how they can relate to the OR

Whether we are aware of it or not, AI is already affecting the world and making our everyday lives
easier. It is there every time we search for something online. It automatically recognizes us in pictures
we take, it recommends new music, food or products we will like. AI helps us hear what is written and
read what is spoken. It protects us from credit card fraud and helps us make smarter investments. At
home it manages our thermostat and decides when and where to vacuum clean the floors.

Moreover, machines are already responsible for millions of human lives on a daily basis, albeit
indirectly. The oldest and most famous example is probably the autopilot in airplanes; multiple studies
have shown that in 95% of commercial flights, pilots spend less than 440 seconds manually flying the
plane. Other examples include the automation of emergency medical service dispatchers and the
automation of trains and metros, where nearly a quarter of the world’s metro systems have at least one
fully automated line in operation.

The advancements of automation in settings where human lives are at stake have pushed society to
further debate the autonomy versus control issue. Depending on the field, different scales have been
proposed to define levels of automation and autonomy. These scales have been important as they help
define the capabilities and limitations of a system's autonomous features and establish expectations
around the operator's behavior and responsibilities. In addition, they have helped build trust and reduce
anxiety around autonomous machines, while ensuring that legal and ethical concerns are considered as
technologies continue to develop.

The most prominent autonomy scales revolve around the automotive and aviation industries. The main
difference between the two is that the automotive scale treats all the systems in a car as a single unit,
and the vehicle is labeled depending on its capability as a whole, whereas in the aviation scale, each
system in an airplane receives a level of automation independent of the other autonomous systems
available on the same plane. It is important to note that the scales defining the levels of autonomy in
cars, trains, and planes all have basic similarities which are adapted to each specific industry. These
adaptations are dependent on the level of complexity of each industry, and the training of the average
operator. All the scales, across the various industries, begin at level 0, where there is no automation at
all, and gradually increase to level 5 (or a maximum of 4 for trains), where there is full machine
autonomy without the need for any human input.

In the field of surgery, the question of how to define the levels of autonomy in systems within the OR
has already begun, and although surgical systems are not yet as advanced as other industries’, it is
important to have a standardized language when referencing this subject. Yang GZ et al.’s proposal for
defining the levels of autonomy for medical robotics has been extremely effective in catalyzing the
debate of defining the levels of autonomy in surgery. This scale is loosely based on the automotive
levels of autonomy as it grades a robotic system as a whole depending on all of its capabilities.

The scale is composed of 6 levels (0–5) as follows:

• Level 0: No autonomy. This includes currently available robots, which are master-slave systems that follow the surgeon's movements.
• Level 1: Robot assistance. The robot provides some mechanical assistance, while the human has continuous and full control of the system.
• Level 2: Task autonomy. The robot can autonomously perform specific tasks when asked by the surgeon.
• Level 3: Conditional autonomy. A system suggests and then performs a number of tasks when approved by the surgeon.
• Level 4: High autonomy. The robot can make medical decisions while being supervised by the surgeon.
• Level 5: Full autonomy. The robot can perform an entire surgery without the need for a human surgeon.

Others have built upon this scale, using similar classification methods for surgical robot autonomy.
Current technology in robotic surgery is only at Level 0, but when the objectives of the research
projects described above are met, we might reach levels 1 and 2.

As surgeons, our experience in the OR environment is more comparable to flying a sophisticated
airplane than driving a car. A surgeon's professional responsibility is similar to that of a pilot, as such
the expected capabilities from autonomous systems in the OR might be similar to those in an airplane’s
cockpit, where each system has their own level of autonomy, independent from other available
systems. Therefore, it may be more beneficial to expand the robotic surgery scale, creating a more
comprehensive autonomy scale in surgery encompassing all the types of technology used within the
OR. To this end one must first understand the flow of a surgical procedure. Every surgery is built on a
series of different phases, each of which are divided into tasks based on specific steps. A surgeon’s job
in the OR is to perform a series of steps in order to complete tasks in different phases of a procedure.
After fulfilling all steps within each task and phase, the surgery is said to have been completed.

Figure. Divisions of a surgery. The tasks and phases can be done in tandem or can be partially
achieved and completed following completion of another task.

The following scale, adapted from the levels of automation in aviation, may be used to address the role
of automation and autonomy in surgery as it encompasses every type of technology. It proposes a
description of each level of automation, taking into consideration the division of a surgery into phases,
tasks and steps.

Level 0: Complete human autonomy
  Description: The surgeon performs all steps of every task.
  Supervision: The surgeon is in complete control.

Level 1: Task assistance
  Description: The system executes a specific step of a task.
  Supervision: The surgeon is in complete control.

Level 2: Task automation
  Description: The surgeon delegates execution of multiple steps of a task to one or more systems.
  Supervision: The surgeon monitors performance of the system during the execution of the specific steps. The system requires active permission from the surgeon to advance to the next step.

Level 3: Phase automation
  Description: The surgeon delegates most steps of multiple tasks to the system. The surgeon performs a limited set of actions in support of the tasks.
  Supervision: The surgeon monitors performance of the system and responds if intervention is requested/required by the system. The system reviews its own work in order to advance to the next step.

Level 4: Full autonomy
  Description: The surgeon delegates execution of all steps of a task in any phase to the system. The system can manage most steps of the task under most conditions.
  Supervision: The surgeon actively supervises the system and has full authority over the system.

Level 5: Complete system autonomy
  Description: Execution of all steps of a task in all phases is done by an automated system. The system can manage all steps of the task under complicated conditions.
  Supervision: The surgeon passively monitors performance of the system.

Table. Levels of automation in surgery.

5.2 The P.A.D. taxonomy—A novel scale for automation and autonomy in the OR

Every surgical procedure is completed based on a series of perceptions, actions and decisions made by
the surgeon. These three different duties are important aspects of surgery and must be included in the
conversation regarding AI and its application in surgery.

Level 0: Complete human autonomy
  Perception / Action / Decision: None; the surgeon perceives, acts, and decides without system involvement.

Level 1: Task assistance
  Perception: The system has the ability of basic sensing.
  Action: The system performs a step in a specific task.
  Decision: The system may give basic warnings.

Level 2: Task automation
  Perception: The system has the ability of general phase, tool and anatomy recognition.
  Action: The system performs multiple steps of a task within a phase.
  Decision: The system understands the current step and reacts accordingly.

Level 3: Phase automation
  Perception: The system recognizes most phases, tools, and anatomical variables. The system can detect abnormal events.
  Action: The system can perform most tasks within a phase.
  Decision: The system understands the current task, can predict next actions and react accordingly.

Level 4: Full autonomy
  Perception: The system can identify every aspect of a procedure under most conditions.
  Action: The system can perform all tasks of every phase in a procedure under most conditions.
  Decision: The system has full understanding of the current phase under most conditions. It plans and reacts accordingly.

Level 5: Complete system autonomy
  Perception: The system recognizes every aspect and abnormal event of a procedure under any condition.
  Action: The system can perform all tasks of every phase of a procedure under any condition.
  Decision: The system has full understanding of every aspect of the procedure and its variables. It plans and reacts according to any event under any condition.

Table. The PAD (perception, action, decision) scale for surgical autonomy.

Perception refers to the recognition of variables in the surgical environment. Surgeons do this
instinctively using their senses. Systems sense using sensors like cameras with computer vision, heat
detectors, impedance measurements, etc., to convert data from a physical environment into a
computational system. As an example, basic bipolar devices transfer a fixed amount of energy through
the target tissue for as long as it is activated by the surgeon regardless of the state of the tissue. While
using the basic bipolar it is important for the surgeon to use their own senses to see that the tissue
appears to have undergone coagulation in order to stop applying energy and prevent inadvertent injury.
Over-activation after the tissue has already been coagulated will create a different path of energy
transfer that could damage nearby tissues. Advanced bipolar devices, in contrast, sense the tissue's
impedance, regulate the amount of energy dispensed, and automatically discontinue the activation
when the tissue is coagulated.

Action refers to the maneuvers performed in order to execute a task. Surgeons perform actions
depending on their perception of a specific scenario. Basic tools and systems can perform actions
without having the ability to sense. Advanced systems have the ability to perform an action depending
on what they sense. The advanced bipolar device, for example, acts to continue energy or stop it
according to its own perception (sensing).
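
To make this perception-action coupling concrete, the following toy Python sketch (our own illustration; the impedance values, threshold, and stopping rule are invented for this example and do not correspond to any real device's parameters) shows a control loop that keeps delivering energy until the sensed impedance indicates the tissue is coagulated.

```python
import itertools

def run_sealing_cycle(read_impedance, apply_energy,
                      stop_threshold_ohms=400.0, max_steps=200):
    """Toy perception-action loop for an 'advanced bipolar'-style device.

    read_impedance: callable returning the sensed tissue impedance (ohms).
    apply_energy:   callable delivering one increment of energy.
    The loop stops on its own once the impedance rises above the threshold,
    used here as a stand-in for 'the tissue is coagulated'.
    """
    for step in range(max_steps):
        impedance = read_impedance()           # perception: sense the tissue
        if impedance >= stop_threshold_ohms:   # decision: coagulation reached
            return step                        # action: discontinue energy
        apply_energy()                         # action: continue the cycle
    return max_steps

# Minimal simulated sensor: impedance rises as the tissue desiccates.
simulated = itertools.count(start=100, step=25)
steps = run_sealing_cycle(read_impedance=lambda: next(simulated),
                          apply_energy=lambda: None)
print(f"Energy discontinued after {steps} increments")
```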

Decision refers to the capability of reaching a conclusion after considering different variables.
Advanced systems can give real-time feedback to the surgeon during a procedure, either passively in
the form of alerts, warnings and suggestions, or actively in the form of whole system halts, or action
restrictions. For example, an advanced laparoscopic stapler can sense the cartridge type inserted to the
device as well as the distance and physical resistance between its two jaws. When the stapler is ready
to fire, if these variables exceed the stapler’s ability, it makes the decision not to fire.

With this taxonomy, one can easily describe the level of AI autonomy by combining each section into a
shortened form. As such, the current standard of care is at P1A1D2, because although AI is not yet
commercially available, we do have tools like advanced devices that perform certain actions
autonomously. Applying the scale to these commercially available devices, we can say that advanced
bipolar devices are Level 1 automated systems, as they measure the impedance of a tissue to
automatically decide when a cycle is completed. A procedure using this device would therefore be
characterized as a P1A1D1 procedure. Smart staplers such as the Signia™ would also be a Level 1
system, and a surgeon using one would be performing a P1A1D2 procedure. As current technologies
are further developed with the evolution of AI into more clinical applications, procedures at the level
of P2A1D3 may in fact be in our near future.
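
As a minimal illustration of how the P.A.D. notation can be encoded, the sketch below (a toy formalization of the scale described here, not part of any published implementation) represents the three axes as bounded integer levels and renders the shorthand such as P1A1D2.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PADLevel:
    """Toy encoding of the PAD (perception, action, decision) taxonomy.

    Each axis takes a level from 0 (complete human autonomy) to 5
    (complete system autonomy), following the scale described in the text.
    """
    perception: int
    action: int
    decision: int

    def __post_init__(self):
        for axis in ("perception", "action", "decision"):
            level = getattr(self, axis)
            if not 0 <= level <= 5:
                raise ValueError(f"{axis} level must be in 0..5, got {level}")

    @property
    def shorthand(self) -> str:
        return f"P{self.perception}A{self.action}D{self.decision}"

# Example from the text: a procedure using current advanced (smart) devices.
print(PADLevel(perception=1, action=1, decision=2).shorthand)  # -> P1A1D2
```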

It is important to note that according to these Levels of Autonomy in Surgery, the responsibility still
always falls upon the surgeon, independently of the amount of control and relative autonomy that the
system has. The natural path of the debate in the field will bring surgeons (and healthcare professionals
in general) to reach a consensus on the amount of control we are willing to give up, and on whether it
would be ethical and legal for a surgeon to actually relinquish control and autonomy to the point where
the burden of responsibility no longer rests on them.

5.3 Will AI replace surgeons?

As with any industry, the perceived threat of AI taking control and pushing away human involvement
holds true in medicine. Although at the peak of the hype of AI in radiology and pathology many
experts predicted that humans would soon be replaced by machines in these fields, they quickly revised
their opinions, with the realization that rather than replacement, the technology had arrived to augment
their field’s possibilities. This is true in the surgical field as well, and as part of the adoption of AI,
surgeons will have to adapt training methods to include these new systems. Not as a way of replacing,
but as a way of augmenting the surgeon’s capabilities. As such, it is imperative that surgeons
understand the capabilities and limitations of the technology, that they know how to use it and problem
solve with it, with enough exposure during their training to feel comfortable adding it to their
armamentarium. More importantly, as the technology advances, it remains imperative that surgeons retain
the ability to perform a surgery with all the necessary tasks safely, even without the use of automated
systems. This is particularly important when it comes to ensuring patient safety: imagine the
problematic hypothetical scenario of a surgeon who is unable to perform a cholecystectomy because,
relying solely on AI, they have lost the ability to recognize the triangle of safety. Conversely, imagine
the exciting scenario in which a surgeon who is trained to recognize the triangle of safety uses AI tools
to augment its visualization in a patient with complex anatomy, bringing added benefit to both the
patient and the surgeon.

Fundamentally, it is possible to continue to build on the basis of the surgeon’s knowledge while
maintaining control and delegating specific tasks to AI in order to augment their capabilities, not
replace them. As long as the human understands the capabilities and limitations of an AI system as laid
out above, the loss of control is thereby mitigated.

It cannot be stressed enough that medicine is a profession of empathy. As physicians we consider more
than just the patient’s diagnosis in order to propose an appropriate treatment and management.
Surgeons must weigh the patient’s prognosis, social support system, risks involved in surgery, and
patient expectations in order to propose the best treatment. Moreover, during surgery we make an
immeasurable number of decisions and subsequent actions based on the unique patient lying on our
table. We cannot say that AI has the ability to consider a patient's environment, desires and
expectations, nor can we say that such a capability is forever beyond machines, but an AI system able
to make such decisions with empathy remains only a theoretical concept for now.

The goal of this chapter was to present the factors that both humans and machines face in the evolution
of surgery, as well as the balance needed to have a fruitful collaboration. As the field of artificial
intelligence has been catapulted into the medical field with many new innovations, the transformation
of the medical field is inevitable. The question of how AI technology will affect the surgical profession
has become pivotal, as the technology continues to grow, finding new ways to benefit surgeons and
patients alike. AI should be viewed not as a threat, but rather as another tool in the surgeon's
armamentarium for augmenting their skills, further benefiting patients. The challenges facing AI
integration into the operating room are not simple, but as presented herein, we already have some AI
available at our fingertips. In this chapter we proposed a novel taxonomy encompassing, in a
comprehensive manner, every type of technology that could one day be used in the OR. The PAD
taxonomy for Surgical Autonomy may help bring more awareness to surgeons. With a simple method
for stratifying AI, surgeons may begin to feel more confident and be more willing to adopt newer
options by understanding what they are utilizing.

Questions remain with regards to the legality and ethics of AI in surgery, specifically with regards to
autonomy and task delegation, which may take time to understand and develop. As with any
innovation, it is imperative to continue discussions within the surgical community to find the ideal way
of collaboration between surgeons and advanced AI systems, to ensure a beneficial partnership.

Chapter 2: "Military Applications and Strategic Implications"

[mh] Surveillance and Reconnaissance Capabilities

[h]Evolution of Attacks on Intelligent Surveillance Systems and Effective Detection Techniques

The modern smart city infrastructure has advanced by integrating multimedia-based information input
and the development of an edge computing paradigm. An increase in visual and auditory input from
the deployed sensors has enabled multiple network layer-based processing of incoming information.
While most of the intelligent infrastructure depends on a cloud computing-based architecture, edge
computing has been attracting more and more attention to meet the increasing challenges in terms of
scalability, availability, and the requirements of instant, on-site decision making. Advancements in
artificial intelligence (AI) have equipped the edge computers to process the incoming multimedia feed
and deploy recognition and detection software. Machine learning (ML)-based models such as object
detection, tracking, speech recognition, and people identification are commonly deployed to enhance
the security in infrastructure and private properties. With an increase in such technological
advancements, the system’s reliability has also exponentially increased where the trust factor
established on the system is directly depending on the information retrieved by the multimedia sensor
nodes. The edge devices are enhanced with multi-node communication and equipped with Internet
connections to provide continuous functionality and security services.

Due to their significance in infrastructure security and functionality, edge computing devices are
commonly targeted through network attacks over Wi-Fi and RF links. The devices are compromised
through malicious firmware updates, creating a backdoor with admin privileges. The perpetrators then
control the device Input/Output (IO) and compromise the network and home security. Specifically,
visual layer attacks are developed to manipulate the visual sensor in edge devices and create a false
perception of the live events monitored by the control station. Simple frame manipulation such as frame
duplication or shuffling allows the perpetrator to mask the original frame, so that the security of the
infrastructure can be easily compromised. Without trustworthy surveillance recordings there is also no
evidence of crimes, which undermines the very purpose of such security devices. Along with the
visual layer, the audio channel of the edge nodes is equally targeted. Modern home security is enabled
with voice commands and a home assistant system that functions based on the voice commands
received. The audio devices are equipped with voice-based home assistant computers and Voice Over
IP (VoIP) surveillance recorders. The attackers can target the audio channel through hidden voice
commands, control the system, or completely mask the audio channel with noise to disable its
functionality.

While ML-based models have enhanced surveillance systems' capabilities, they have also spurred the
development of frame manipulation attacks. Beginning with traditional copy-move style forgery attacks
in spatial regions of a frame, modern deep learning (DL) has enabled generative networks capable of
creating a frame based on the user's input. Adversarial attacks have rendered some ML models useless
by deliberately targeting and disabling their functionality. Generative adversarial networks (GANs)
have produced DeepFakes, which have become one of the most challenging problems in current
multimedia forgery attacks. DeepFake models can be trained to run on low-compute systems such as
edge devices and produce manipulations such as face swaps, facial re-enactments, and complete
manipulation of a targeted person's movements, resulting in very realistic media output. It is clear
that both the visual and auditory channels require robust security measures and reliable authentication
schemes to detect such malicious attacks and secure the network.

Advancements in forgery attacks have always been countered with detection schemes. Traditional
frame forgery attacks were first detected using watermark technology and compression artifacts.
However, when the edge device is compromised, the frames are manipulated at the source level,
creating watermarks on false frames. Similarly, as DeepFakes were developed, counterpart detection
schemes were also trained. Early-stage DeepFakes carry visual artifacts, such as face recordings
without any eye blinking, or face warping artifacts. Still, with more training data and better networks,
DeepFakes have evolved to a point where they are almost indistinguishable from real images. Although
the technology has its own merits when used ethically in fields such as medicine and entertainment,
without a reliable detector perpetrators can always use DeepFake technology with malicious intent.
Creating a reliable detection scheme that clearly distinguishes between real and fake remains an
ongoing effort.

This chapter provides an overview of the evolution of multimedia-based attacks to compromise the
edge computing nodes such as surveillance systems and their counterpart forgery detection schemes.
The essential features required by a reliable detection system are analyzed and a framework using an
environmental fingerprint is introduced that has proven to be effective against such attacks.

An overview of audio-visual layer attacks

The networked edge devices are commonly deployed through Wi-Fi or RF links in a private network.
The primary means of hijacking a secure device is through network layer attacks where the
communication between the devices is intercepted and modified. This allows the source and the
destination to believe that the information exchange was secure, while a perpetrator alters the
intercepted message as required. Malicious firmware is updated through direct physical access to the
USB interface or remote web interface, which allows a perpetrator to gain admin privileges to the edge
devices. Some devices are sold through legitimate channels with malicious firmware pre-installed.
With complete access to the visual and audio sensor nodes, the attacker can manipulate the media
capturing module itself, rendering network-level security measures ineffective.

Surveillance systems are the most targeted edge devices due to their importance and access medium.
Network attacks like Denial of Service (DoS) can disable the network connections of the devices and
negate their purposes. Common admin mistakes like using the default credentials on the networks and
devices login are primary reasons for backdoor entry. Once the device or the network is compromised,
the attacker typically encodes the trigger mechanism into the system. This allows the perpetrator to
remotely trigger the selected attack based on remote commands without re-accessing the device.
Malicious inputs can be encoded into the multimedia encoding scheme of the edge device. QR codes
presented to the video recording and interpreted as commands, face detection-based triggers, and
hidden voice commands sent through the audio channels are a few examples of how an attack can be
remotely controlled. Wearable technologies like Google Glass have also been affected through backdoor
firmware, where QR-code-based input was used to hack the device.

With remote trigger mechanisms, a device can be controlled to manipulate the incoming media signal.
Face detection software can be re-programmed to blur selected faces and car plate registrations or
disable certain functionality like detecting prohibited items like guns. Popular Xerox scanners and
photocopiers were hacked to manipulate the contents of the documents that are scanned and insert
random numbers instead of actual data. Surveillance cameras with Pan-Tilt-Zoom (PTZ) capabilities
can be controlled to re-position the cameras so that the number of blind spots is increased in a
surveillance area. Audio Event Detectors (AED) are commonly deployed in surveillance devices to
raise the alarm based on suspicious audio activity or in-home assistant devices to detect the wake
commands. Still, the AED system can be directly targeted using the hidden voice commands to
interpret its input falsely. Using the adversarial networks, popular ML models on edge devices are
targeted so that the input itself can be modified. Frame-level pixel manipulations are made to confuse
the ML models and result in the false categorization of object recognition models. A wearable patch is
trained to target the person identification ML model, which can be worn by a perpetrator in the form of
a t-shirt and escape the identification module.

Access to the multimedia sensor nodes can result in many variants of visual and audio layer attacks. To
study effective detection methods, we first narrow our focus to the video frame manipulation and audio
overlay attacks commonly designed to target edge-based media inputs such as surveillance devices and
online conferencing technologies.

2.1 Frame manipulation attacks

Video recordings used for temporal correlation of the live events are primarily targeted using frame
shuffling or duplication attacks. The perception of live events is affected, which disables the
effectiveness of live monitoring. Adaptive replay attacks are designed such that the frame duplication
attack can adapt to the changes in the environments such as light intensity variations, object
displacement, and camera alignments. With adjusting frame masking, the operator in the monitoring
station cannot distinguish between the real and fake images since the duplicated frames are originally
copied from the same source camera. The effect of source device identification and watermarking
techniques is negated since the frames originated in the same camera. The figure below represents a
frame replay attack where the attack is triggered remotely by either a QR code or a face detection
module, and the resulting frame is masked with a static background.
Figure. Frame duplication attack to manipulate the perception of live events triggered by the
perpetrator’s face detection.
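
To make the replay mechanism concrete, the following Python sketch (a simplified simulation with synthetic frames; the function names and trigger logic are illustrative assumptions, not code from any documented attack) shows how a compromised capture pipeline could substitute a pre-recorded loop for the live feed once a trigger fires.

```python
import numpy as np

def replayed_stream(live_frames, recorded_loop, trigger_index):
    """Yield genuine frames until the trigger fires, then loop recorded frames.

    live_frames:   iterable of numpy arrays from the real sensor.
    recorded_loop: list of frames previously captured from the same camera.
    trigger_index: frame index at which the (simulated) remote trigger fires.
    """
    for i, frame in enumerate(live_frames):
        if i < trigger_index:
            yield frame                                                    # genuine feed
        else:
            yield recorded_loop[(i - trigger_index) % len(recorded_loop)]  # masked feed

# Synthetic demo: 10 'live' frames, masked by a 3-frame loop after frame 4.
live = [np.full((4, 4), v, dtype=np.uint8) for v in range(10)]
loop = live[:3]
masked = list(replayed_stream(live, loop, trigger_index=4))
print([int(f[0, 0]) for f in masked])   # [0, 1, 2, 3, 0, 1, 2, 0, 1, 2]
```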

Spatial manipulation of a frame includes changes to the pixels like object addition or deletion, while
the static frame is maintained. Frame-level manipulations are commonly made to deceive the viewer
with the presence of a subject. The figure shows the spatial manipulation of the video frame.

2.2 Audio masking and overlay

Most edge nodes are equipped with audio recording capabilities, making them a target for forgery
attacks. Many households are equipped with surveillance cameras, home assistants, and edge devices
capable of two-way communication. The AED module is responsible for wake-command detection or
for event detection based on audio such as gunshot sounds. The input audio sensor nodes can be
disabled by compromising the AED module and replacing the actual event with quiet static noise. The
input can also be degraded by adding white noise to disrupt the AED module.
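
As a simple illustration of noise masking, the sketch below (a generic signal-processing example; the sampling rate and SNR target are arbitrary choices for the demo) adds white Gaussian noise at a chosen signal-to-noise ratio, which is the effect an attacker aims for when burying an audio event that the AED should have detected.

```python
import numpy as np

def mask_with_white_noise(audio, snr_db):
    """Add white Gaussian noise so the mixture has the requested SNR (in dB).

    Low or negative SNR values bury the original event (e.g. a gunshot or a
    wake word) under noise, disrupting an audio event detector.
    """
    audio = np.asarray(audio, dtype=np.float64)
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

# Demo with a synthetic 1 kHz tone sampled at 16 kHz for one second.
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 1000 * t)
masked = mask_with_white_noise(tone, snr_db=-10)   # noise 10 dB above signal
```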

2.3 DeepFake attack

DeepFake attacks developed using GAN architecture have resulted in a large quantity of fake media
generation. With enough training data available and the computation resources, the quality of the
generated media keeps improving to a point where a person cannot distinguish between the real and
fake media. Although DeepFake technology has its application merits, any technology can cause more
harm than good in the wrong hands. Evolving software tools have made it easy and convenient for users
to generate DeepFake media on their mobile phones.

Simple face manipulation software, in which two people can swap their facial landmarks, first appeared
in the form of mobile applications. Soon, more advanced techniques were developed to make the swap
more realistic. Many organizations and institutions rely on online conferencing solutions for their daily
communications, and face-swapping technologies allow perpetrators to mimic a source's facial
landmarks and duplicate their online persona. Moreover, with the capability to extract facial landmarks
and skeletal features from a source subject, a new form of DeepFake emerged that projects the source's
movements onto a targeted subject.

Figure. DeepFake Face Swap Attack to project a source face on a target.

Facial re-enactment software extracts the facial landmark movements of a source subject. These
landmarks are projected onto a targeted victim, resulting in media in which the victim appears to act
out whatever the perpetrator wishes. Although such models were created to demonstrate the capabilities
of deep learning, they have been used to target politicians and celebrities with fake media. GAN models
have also been created in which a source's body actions are projected onto a targeted person; while
introduced as entertainment applications, they could alternatively be used to frame a victim by forging
their actions in surveillance media. Style-based transfer learning has enabled GAN technology to create
even more realistic and indistinguishable output.

Introducing perturbations in real objects or images can cause edge layer object classifiers to make
incorrect predictions, which could have serious repercussions. A study showed that making small
changes to a stop sign could cause an object detector to wrongly classify it as a different object, as
depicted in panel (a) of the figure below. This phenomenon has been analyzed and the Fast Gradient
Sign Method (FGSM) attack was proposed, which uses the gradient of the classifier's loss function to
construct the perturbations necessary to carry out the attack. The attack begins by targeting an image
and observing the confidence of the classifier in its class predictions. Next, the minimum perturbation
that maximizes the loss function of the classifier is found iteratively. Using this method, the image can
be manipulated such that incorrect classification is achieved without producing any difference
discernible to the human eye, as shown in panel (b). The Jacobian-based Saliency Map Attack
algorithm computes the Jacobian matrix of
the CNN being used for object classification and produces a salient map. The map denotes the scale of
influence each pixel of the image has on the prediction of the CNN-based classifier. The original image
is manipulated in every iteration, such that the two most influential pixels, which are chosen from the
saliency map, are changed. The salient map is updated in each iteration, and each pixel is changed only
once. This stops when the adversarial image is successfully classified to the target label.

Figure. a) Adversarial patches cause the classifier to wrongly classify the stop sign. b) FGSM attack
based on introducing pixel-based perturbations.
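
As a concrete sketch of the Fast Gradient Sign Method described above, the PyTorch snippet below (a generic one-step illustration; the model, preprocessing, and epsilon value are placeholders rather than any specific published implementation) perturbs an input image in the direction of the sign of the loss gradient.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y)).

    model: a classifier returning logits, already switched to eval() mode.
    image: input tensor of shape (1, C, H, W) with values in [0, 1].
    label: ground-truth class index tensor of shape (1,).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # step that increases the loss
    return adversarial.clamp(0.0, 1.0).detach()         # keep a valid image
```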

The table below summarizes the multimedia attack techniques and their respective targeted systems.
Along with video manipulation, audio is equally targeted when creating realistic fake media. Paired
with technology like facial re-enactment, DeepFake audio can create the illusion of a targeted person
performing manipulated actions. Software like Descript can recreate a source's voice from as little as
10 minutes of training data. Emerging technologies like DeepFakes need a reliable detector that can
distinguish between real and fake media to preserve security and privacy in the modern digital era. Due
to the inconsistencies in earlier stages of DeepFake media, many detector modules were created to
identify the artifacts introduced during media generation. However, with more training data and
advanced computing, the generated output improved and rendered previous detection schemes useless.
In the following section, we study the key parameters a reliable detector requires to establish an
authentication system for digital media.
Attack type: Frame Manipulation
  Attack surface (a): Visual layer attack (duplication/shuffling)
  Trigger: QR code scan; face/object detection; remote trigger
  Attack vector (a): Denial of Service; access via malicious firmware injection; live event monitoring
  Complexity (b): Easy; low computation

Attack type: Audio Masking
  Attack surface (a): Auditory layer attack (noise addition/audio suppression)
  Trigger: Voice command; programmable noise input
  Attack vector (a): Compromised AED system; access via malicious firmware injection
  Complexity (b): Easy; low computation

Attack type: DeepFake Manipulation
  Attack surface (a): Visual layer attack; auditory layer attack
  Trigger: Target face detection; remote trigger
  Attack vector (a): Access via face/object detection; live event monitoring; identity spoofing
  Complexity (b): Medium; high computation

Attack type: Adversarial Perturbations
  Attack surface (a): Visual layer attack; auditory layer attack
  Trigger: Target object detection; pretrained noise broadcast
  Attack vector (a): Access via object detection/classification; AED systems
  Complexity (b): High (reconnaissance required); high computation

Table. Summary of attack vectors and affected modules.


(a) Targeted/compromised systems and attack technique.

(b) Attack launching complexity: varies based on ease of access and computational requirements.

Detection techniques against multimedia attacks

Countering forgery attacks led to the development of detection techniques relying on artifacts related to
the in-camera processing module or the post-processing methods. The prior knowledge of the source of
the media recordings has been an advantage in detecting forgery; however, without that knowledge,
some techniques depend on the artifacts introduced by the forgery itself. Conventional methods, whether
blind, based on prior knowledge, or based on forgery artifacts, are discussed first, followed by neural
networks trained to identify forgeries.
3.1 Conventional detection methods

The in-camera processing modules and the post-processing applied to captured media generate unique
features and artifacts, which can be exploited to identify frame forgeries. Each image
capturing device is equipped with wide or telescopic lenses, where the unique interaction between the
lens and the imaging sensor creates chromatic aberrations. A profile of unique chromatic aberrations is
created to identify foreign frames inserted from a different lens and sensor. Along with lens distortion
artifacts, another module present in in-camera processing after image acquisition is the Color Filter
Array (CFA). The CFA is used to record light at a certain wavelength, and the demosaicing algorithm
is used to interpolate the missing colors. A periodic pattern emerges due to the in-built CFA module,
and whenever a frame is forged, it disrupts the periodic pattern. For frame region splicing attacks, the
interrupted periodic pattern from CFA is analyzed to detect the forgery and localize the attack.

Each camera sensor manufactured has a unique interaction with the light capturing mechanism due to
its sensitivity and photodiode. A unique Sensor Pattern Noise (SPN) is generated for every source
camera. It can identify the image acquisition device based on prior knowledge of the camera's sensor
noise fingerprint. The SPN is similar for RGB and infrared video; however, it is weaker in infrared due
to low light. Since SPN is used for source device identification, frames moved in from an external
camera can be identified, as can localized in-frame manipulation. The frame and audio acquisition
processes also introduce a noise level into the media recordings that depends on the sensor's light
sensitivity and on localized room reverberations. Using Error Level Analysis, rich features can be
extracted from the noise level present and reveal possible anomalies from image splicing.

In the media capturing post-processing, each compression algorithm uses unique encoding. Therefore,
multiple processing of the same media and multiple compression can result in some artifacts
identifying prior changes. Analyzing the compression algorithms used by H.264 coding, the presence
of any recompression artifacts is used to identify frame manipulations. The spatial and temporal
correlation is used to create motion vector features. The de-synchronization caused by removing a
group of frames introduces spikes in the Fourier transform of the motion vectors. However, these
techniques are sensitive to resolution and noise in the recordings.

Frame manipulations also inadvertently introduce their own unique artifacts, and attacks can be
identified with prior knowledge of the attack's nature. Many approaches have been developed using
custom hand-crafted features. Scale-invariant feature transform (SIFT) key points are used as features for the
comparison of duplicated frames in a video recording. The features comprise illumination, noise,
rotation, scaling, and small changes in viewpoint. For a continuous frame capture, the standard
deviation of residual frames can result in inter-frame duplication detection. Histograms of Oriented
Gradients (HOGs) are a unique presentation of pixel value fluctuations, which can be used to identify
copy-move forgery based on the HOG feature fluctuation. The optical flow represents the pattern of
apparent motion of an image between consecutive frames and its displacement. Using the feature
vector designed from the optical flow, copy-move forgery can be identified. Features are generated for
each frame and then lexicographically sorted. The Root Mean Square Error (RMSE) is calculated for
the frames, and any frame that crosses the threshold is identified as the duplicated frame. However, the
technique takes higher processing time due to the sorting and RMSE algorithm and is not applicable in
real-time applications.
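
A minimal sketch of RMSE-based duplicate-frame detection is shown below (simplified to a pairwise scan rather than the lexicographic sorting used in the literature; the threshold value is an arbitrary placeholder).

```python
import numpy as np

def find_duplicate_frames(frames, rmse_threshold=1.0):
    """Flag frame pairs whose RMSE falls below a threshold (likely duplicates).

    frames: list of equally sized numpy arrays (grayscale or RGB frames).
    Returns (i, j) index pairs with i < j. Simplified to an O(n^2) pairwise
    scan; the literature sorts per-frame feature vectors lexicographically
    to reduce the number of comparisons.
    """
    arrays = [np.asarray(f, dtype=np.float64) for f in frames]
    duplicates = []
    for i in range(len(arrays)):
        for j in range(i + 1, len(arrays)):
            rmse = np.sqrt(np.mean((arrays[i] - arrays[j]) ** 2))
            if rmse < rmse_threshold:
                duplicates.append((i, j))
    return duplicates
```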
3.2 Machine learning-based detection methods

The development of AI in computer vision has efficiently enabled media processing for forgery
detection using trained neural networks. The anomalies introduced in the media recordings result in
forgery-specific artifacts, which many research approaches exploit.

3.2.1 Artifacts and feature-based detection

The convolutional neural network (CNN) is the most commonly used feed-forward neural network
model for frame processing, enabling pixel-level data processing. Forgery attacks such as frame
manipulation in the temporal and spatial domains and DeepFakes create underlying artifacts that can be
extracted to identify the forgery. In the initial stages of DeepFake development, the resulting media
contained visible frame-level artifacts such as inconsistent eye blinking, face warping, and implausible
head poses. CNN models were subsequently trained to identify the abnormalities introduced by
DeepFakes by looking for face warping artifacts. The synthesized face region is spliced into the
original image, and a 3D head pose estimation model is
created to identify the pose inconsistencies. With the help of pixel information obtained from videos,
filters can be designed to identify any tampering. Filters based on discrete cosine transform and video
re-quantization errors combined with Deep CNN are used.

DeepFake generation tools can be integrated with online conferencing tools to create a fake virtual
presence by mimicking a targeted person. Video chat liveness detection can identify the fake
personality through its unnatural behavior: the model is trained on behavioral expression in online
presence, and any abnormality is marked as fake. For offline media, the audio and video are manipulated to
create a video statement; however, the underlying synchronization error for the video lip sync and its
corresponding audio are used to identify fake media. To counter DeepFake videos in edge-based
computers and online social media, lightweight machine learning models are trained based on the
facial presence and its respective spatial and temporal features. Video conferencing solutions are also
protected by analyzing the live video stream and passing it through a 3D convolution neural network to
predict video segment-wise fakeness scores. The fake online person is identified by the CNN trained
on large DeepFake datasets such as Deeperforensics, DFDC, and VoxCeleb.

Along with video forgeries, audio forgeries have been designed to target the AED systems in IoT
devices like Amazon's Echo Dot and Google's Nest Hub. Using audio perturbations, the AED system
is made to misclassify incoming voice commands or to ignore them completely. Training CNN and
recurrent neural network (RNN) models has helped secure AED systems against white noise intended
to disrupt the commands.

3.2.2 Fingerprint-based detection

Modern DeepFake videos are almost perfect, with no visible inconsistencies. However, the underlying
pixel information is modified by the projection of foreign information onto existing media. With
advancing DeepFake technology, current research has developed techniques to identify the underlying
pixel fluctuations and to use the unique fingerprints left by GAN models and by in-camera processing.
Researchers have identified that a GAN leaves unique fingerprints in the media generated by its
network. By creating a profile of these unique fingerprints, the forgery can be detected, and the source
GAN model used to create the forgery can also be identified. DeepFake models introduce pixel-level
frequency fluctuations, which result in spectral inconsistencies. Inspecting the spectral inconsistencies
of a fake image shows that the frequency artifact is introduced by the up-sampling convolutions of the
CNN model within the GAN. Filter-based designs have been used to highlight the frequency component
artifacts introduced by GANs: one filter operates in the high-frequency region of an image and another
at the pixel level to observe changes in the background pixels of the image. A biological signature can
also be created from portrait videos by collecting signals from different image sections, such as facial
regions, even under image distortions.
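
The following sketch illustrates one simple way such spectral inconsistencies can be examined (an azimuthally averaged power spectrum computed with NumPy; this is a generic illustration of the idea, not the specific filter design referenced above).

```python
import numpy as np

def radial_power_spectrum(gray_image, n_bins=64):
    """Azimuthally averaged power spectrum of a grayscale image.

    GAN up-sampling tends to leave characteristic energy in the highest
    frequency bins, so comparing these curves between a suspect image and
    known-genuine images is one simple spectral-consistency check.
    """
    img = np.asarray(gray_image, dtype=np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
    bins = np.minimum((radius / radius.max() * n_bins).astype(int), n_bins - 1)
    profile = np.array([power[bins == b].mean() if np.any(bins == b) else 0.0
                        for b in range(n_bins)])
    return profile / profile.sum()   # normalize so different images are comparable

# Example: the high-frequency tail of the profile is where GAN artifacts tend to show.
suspect = np.random.rand(256, 256)          # stand-in for a decoded video frame
tail_energy = radial_power_spectrum(suspect)[-8:].sum()
```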

3.2.3 Adversarial training-based detection

Deep neural networks have been proven to be effective tools in extracting features exclusive to
DeepFaked images and can thus detect DeepFake-based image forgery. The traditional approach uses a
dataset containing real and fake images to train a CNN model, and to identify artifacts that point to
forgery. However, this could lead to the problem of generalization as the validation dataset is often a
subset of the training dataset. To avoid this, the images can be preprocessed by using Gaussian Blur
and Gaussian noise. Doing so suppresses noise due to pixel-level-high-frequency artifacts. Hybrid
models have also been proposed that use multiple streams in parallel to detect fake images: one branch
uses a GoogLeNet-based model to differentiate between benign and faked images, and another branch
uses a steganalysis feature extractor to capture low-level details. Results from both branches are then
fused to make the final decision on whether a particular image has been tampered with.

There are various approaches to detecting fake or tampered videos using machine learning techniques,
and they can be broadly categorized into those that use biological features for detection and those that
observe spatial and temporal relationships to achieve the same objective. One study proposed a novel
approach based on eye blinking to detect tampered videos, exploiting the fact that early forgery
techniques such as DeepFakes produce little to no eye blinking in the fake videos they generate.
Using a combination of CNNs and RNNs that were trained on an eye blinking-based dataset, a binary
classifier can be produced, which in turn can be used to detect fake videos with reasonable accuracy.
Facial regions of interest were used to train models to differentiate between real and DeepFaked
videos. Specifically, photoplethysmography (PPG), which uses color intensities to detect heartbeat
variations, was used to train a GAN to distinguish between real and fake face videos. However, the
drawback lies in the fact that this method is limited to high-resolution videos containing faces only.

Spatiotemporal analysis-based methods treat videos as a collection of frames related to time. Here, in
addition to CNNs, Long-Short Term Memory (LSTM) models are used due to their ability to learn
temporal characteristics. One such combination that used a CNN to extract frame level features and an
LSTM for temporal sequence analysis was proposed. Simply put, the input to the LSTM is a
concatenation of features extracted per frame by the CNN. The final output is a binary prediction as to
whether the video is genuine or not. GANs have also been proposed as means of analyzing
spatiotemporal relationships of videos. An information theory-based approach was used to study the
statistical distribution of fake and real frames, and the differential between them was used to make a
decision.
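
A minimal sketch of such a CNN-plus-LSTM detector is given below (a generic PyTorch illustration with arbitrary layer sizes, not the architecture of any particular published study).

```python
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    """Minimal CNN + LSTM video forgery detector (generic sketch).

    A small CNN embeds each frame independently; an LSTM consumes the
    per-frame embeddings in temporal order; the final hidden state yields
    a single real/fake logit for the whole clip.
    """
    def __init__(self, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))     # (B*T, embed_dim)
        _, (h_n, _) = self.lstm(feats.view(b, t, -1))
        return self.head(h_n[-1])                 # (B, 1) real/fake logit

# Smoke test on a random two-clip batch of 8 frames each.
logits = CnnLstmDetector()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)   # torch.Size([2, 1])
```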

Measure of effective detection techniques

Evaluating the current state of media authentication, existing state-of-the-art techniques rely either on a
fundamental forgery-related artifact or on training a deep neural network to identify a specific forgery.
However, the same deep learning technology has allowed perpetrators to hijack existing detection
schemes and counteract their purpose. Source device identification methodologies that locate the
device used to capture a recording by leveraging the Sensor Pattern Noise fingerprint can be spoofed:
a counter-method uses a GAN-based approach to inject camera traces into synthetic images, deceiving
the detectors into concluding that the synthetic images are real. Developments in GAN technology and
abundantly available computing resources have generated a great deal of fake media that is
indistinguishable from real. A style transfer technique can project facial features onto a targeted person
and re-create a realistic image.

Modern infrastructure relying on machine learning algorithms for seamless people detection and
tracking is targeted through adversarial training. A wearable patch can be trained and worn to escape
detection or to fool the detector into misclassifying the object. Remote trigger mechanisms for frame-
level attacks are activated using visual cues and avoid detection through face blurring or frame
duplication. Tools with simple instructions have been designed that allow users to create DeepFakes in
online video conferences by impersonating a targeted person.

The need for secure media authentication that spans multiple media categories becomes more and more
compelling because of an increase in counterattacks on existing detection techniques. Based on our
analysis of the current state-of-the-art detection methods and their counterattacks, here we highlight the
key ingredients of the most successful and reliable approaches:

• Spatial and temporal correlation: Forgeries involve manipulating spatial frame regions or shuffling the frame itself, which affects the temporal region. A reliable detector should exploit both spatial and temporal correlations to identify forgeries in both layers.
• Unique Fingerprint: Deep learning has enabled architectures that are capable of replicating unique device-related fingerprints given sufficient training data. The detector should utilize a fingerprint that is independent of external factors and of the device, to prevent prediction and re-creation of the fingerprint. Inability to control the source of fingerprint generation correlates with difficulty in recreating its unique nature.
• Multimedia Applicability: Detectors that target specific attacks allow a perpetrator to adjust the artifacts and bypass detection. Both audio and video recordings are the primary input sources for edge devices, and it is equally important to secure both media channels against attacks. A detector should equally account for changes and manipulations in both channels, thereby creating a redundant system capable of dual authentication.
• Heterogeneous Platform: Modern smart infrastructure consists of many different types of edge-based IoT smart devices. Each device has its designated functionality relying on either video or audio sensors. Each edge device is also limited in its computational capability in order to preserve its power source. The forgery detection technique should enable its authentication measures across all devices capable of capturing any multimedia.
• Online Detection: Attacks are focused on interrupting the active state of the detection system, and most existing techniques are offline systems. Given the state of infrastructure security, it is crucial to raise the alarm immediately upon forgery detection. Instant, online detection can actively observe the media capture and processing for any manipulations.
• Attack Localization: Lastly, it is important to localize the forgery for further inspection along with attack detection. A detection method that is capable of tracking spatial and temporal changes to the media can locate changes made to the collected samples.

Analyzing the critical traits of a reliable detection system, we propose an environmental fingerprint
capable of justifying the qualities aforesaid using the power system frequency. The following section
discusses the rationale behind our fingerprint-based authentication system for edge-based IoT devices.
Environmental fingerprint-based detection

Electrical Network Frequency (ENF) is a power system frequency with a nominal value of 60 Hz in the
United States and 50 Hz in most European and Asian countries. The power system frequency fluctuates
around its nominal values, making it a time variant, and the resulting signal is referred to as the ENF
signal. The ENF-based media authentication was first introduced for audio forgery detection in law
enforcement systems. The fluctuations in ENF are similar throughout a power grid interconnection and
originate from the balance of power supply and demand, making them unique, random, and
unpredictable. For audio recordings, ENF is induced in the recordings through electromagnetic
induction when the recorder is connected to the power grid. Later, it was discovered that
battery-operated devices could also capture ENF
fluctuations due to the background hum generated by grid-powered devices. In the case of video
recordings, ENF is captured in the form of illumination frequency from artificially powered light
sources. The capturing of ENF signal through photos depends on the type of imaging sensor used in the
camera. For a CCD sensor with a global shutter mechanism, one sample is captured per frame since the
whole sensor is exposed at one time instant. However, for a CMOS sensor with a rolling shutter
mechanism, each row in the sensor is exposed sequentially, resulting in collecting the ENF samples
from spatial and temporal regions of a frame.

ENF estimation from media recordings allows many applications due to its time-varying unique nature.
For geographical tagging of media recordings, the ENF signal estimated is compared with the global
reference database, and its recording location can be identified. Similar fluctuations in ENF signal
throughout the power grid are used to synchronize the multimedia recordings in audio and video
channels. The fluctuations in ENF and the standard deviations of the signal from its nominal value are
observed to study the load effects on the grid and predict blackouts.

The estimation of ENF from media recordings has been thoroughly studied, both for reliable signal estimation and for the factors that affect how the signal is embedded. Because of the nature of the ENF signal, an ENF-based authentication system can detect false-frame forgeries in both spatial and temporal regions. In DeFake, the distributed nature of ENF is exploited by using ENF as a consensus mechanism for distributed authentication among edge-based devices. The media collected from online systems are processed, the ENF signal is estimated along with the consensus ground truth signal, any mismatch between the signals is located with the help of the correlation coefficient, and an alarm is raised. For detailed system implementation and ENF integration techniques, interested readers are referred to papers on the ENF-based authentication system.
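
A simplified sketch of that correlation check is given below: the estimated ENF and the consensus ground truth are compared window by window, and any window whose correlation drops below a threshold is flagged, which both raises the alarm and localizes the suspect segment in time. The window length and threshold are illustrative assumptions, not values taken from the DeFake papers.

import numpy as np

def flag_mismatches(enf_estimated, enf_ground_truth, win=120, threshold=0.8):
    """Compare an estimated ENF trace against the consensus ground truth.

    Both traces are cut into fixed-length windows; windows whose Pearson
    correlation falls below the threshold are returned as suspected
    manipulations, localizing the forgery in time.
    """
    flagged = []
    n = min(len(enf_estimated), len(enf_ground_truth))
    for start in range(0, n - win + 1, win):
        a = np.asarray(enf_estimated[start:start + win], dtype=float)
        b = np.asarray(enf_ground_truth[start:start + win], dtype=float)
        corr = np.corrcoef(a, b)[0, 1]
        if not np.isfinite(corr) or corr < threshold:
            flagged.append((start, start + win, float(corr)))
    return flagged  # an empty list means no mismatch was found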

State of multimedia authentication

Detection systems and forgery attacks never reach an equilibrium in which a single detection scheme can serve as a solution for all types of attacks. This chapter discussed the evolution of forgery attacks, from subtle frame-level modifications to advanced generated images of fake people, along with the parallel development of detection methods. Based on the critical observations discussed in Section 4, the Table below presents a comparison of several current forgery detection techniques.

System         FakeCatcher  FakeBuster  Noiseprint  UpConv  MesoNet  DeFake (a)

Spatial        ✓ ✓ ✓ ✓ ✓ ✓
Temporal       ✓ ✓
Unique         ✓ ✓ ✓
Multimedia     ✓
Heterogeneous  ✓ ✓ ✓
Online         ✓ ✓ ✓ ✓
Localization   ✓ ✓ ✓

Table. A comparison of recently proposed forgery detection techniques.
(a) ENF-based authentication system.

ENF is a reliable detection method provided the signal is embedded in the media recordings. The current limitation of this approach is recording environments where no ENF-inducing equipment is present. In outdoor recordings made without artificial lighting, ENF is not captured in the video channel. However, in the case of outdoor surveillance recordings, the device is connected directly to the power grid, and the ENF signal is induced in the audio recordings.

Most of the DeepFake detection techniques presented require substantial computational resources to analyze each frame, and edge devices are generally not equipped with such power. A different approach is to design lightweight algorithms that detect forgeries from artifacts or fingerprints. The DeFake approach avoids any training step, and ENF estimation can be performed on low-power hardware such as a Raspberry Pi. Although computer vision has advanced with the emergence of deep learning architectures, DeFake is an environmental fingerprint-based approach that relies on signal processing techniques and shows encouraging results.

The development of forgery attacks has accelerated rapidly with growing computer vision technologies, and the need for a reliable and secure authentication system has become more compelling. Most detection systems are exploited for their weaknesses, and attackers frequently launch attacks targeting both the system and its security mechanisms. This chapter studied the evolution of multimedia attacks, from traditional frame-level modification to advanced machine learning-based techniques such as DeepFakes. For each forgery, we analyzed the detection techniques proposed over time and how they have kept pace with the attacks. We then set out the vital ingredients a reliable detection and authentication system should possess to counter forgery attacks, and performed a thorough analysis and comparison of existing detection techniques to capture the current state of multimedia authentication. Based on the key qualities introduced for a reliable system, we highlighted DeFake, an environmental fingerprint-based authentication system, and described its application to frame forgeries such as a DeepFake attack. Given the state of current edge computing technologies and the constant attacks aimed at disabling such systems, DeFake has the potential to provide a unique approach for detecting these forgery attacks and protecting information integrity.

[mh] Precision Strike Operations

[h]Basic Rules of Air Supremacy in the Last Thirty Years


Basic rules and patterns

In 1991, operations with precision air strikes lasted 2 to 3 hours, while from 1999 to 2003 the duration of such strike operations reached up to 7 consecutive hours. This became possible due to precision strike means, quality intelligence, and effective command-and-control systems.

The speed of detecting the enemy and its intentions, together with information gathering, processing and decision making, will be one of the decisive issues in future conflicts. Decision-making time will be further reduced, so that the commander's decision is not a reaction to the ongoing situation but preparation for the predicted one. That is to say, a decision should be made for the expected action, for the future. Combat operations will be carried out in parallel or in an interconnected way in all four dimensions with the same intensity; moreover, operations in the virtual dimension will outnumber those in the other dimensions. The air dimension will reach cosmic heights, while the sea dimension will go down to depths of kilometers. Distances will become smaller and speeds will increase unprecedentedly. Most combat arms will be able to carry out operations not only in their own dimension but also in the others. The GTs (ground troops) will receive toolkits to operate in the air and sea dimensions. New-generation weapons and drones, such as the "F/A-XX," "FCAS," "XQ-58A Valkyrie" and "MQ-25," and extended long-range missiles, such as the "AARGM-ER," "AIM-260" and "JASSM-ER," will make air and anti-air operations "smarter," longer-reaching and more effective. Ground-based Air Defense is becoming more vulnerable. Most operations in the different dimensions will be combined and interconnected, and not necessarily carried out at the same time. The leading armies will try to minimize the involvement of human resources in combat operations, or at least reduce it to remote intervention, supported by various robotic military systems built on a modular basis, designed to operate in different environments (on land, in the air, in the water, or combined), equipped with different fire and strike means providing high fire density and accuracy, and endowed with as much artificial intelligence as possible to minimize the factor of human intervention (even if various international humanitarian laws are tightened). Robots will take a wide place alongside people, especially in special combat arms. They will be produced with modular 3D printers, which will make it possible to field them in large quantities.

During 2020 the Azerbaijani Air Force applied the above-mentioned Western military-scientific achievements and technologies, with which it overcame the Armenian Air Defense. In particular, mainly Israeli "Harop," "AirStriker" and "Orbiter" strike UAVs, long-range anti-tank "Spike-NLOS" missiles, long-range rocket artillery and ballistic "LORA" missiles, under the supervision of reconnaissance and air control points and with extensive use of fake targets, delivered massive and combined airstrikes on the control points of the Artsakh Defense Army, large weapons depots, Air Defense means, and the heavy military equipment and artillery still at their permanent deployment locations. The Azerbaijani airstrike campaign, with certain exceptions, was planned in accordance with the specified standards. All the reserves and military targets of the Defense Army were simultaneously subjected to intensive strikes; that is, a deep strike operation was carried out, implemented for the first time in the post-Soviet area from the perspective of operational art. During this strike, the Azerbaijanis used old Soviet "An-2" airplanes as the extremely important fake targets, and used UAVs and "F-16" fighter jets in place of expensive American air control points. "E-7A (AEW)" air control airplanes of the Turkish Air Force were also used during the combat operations. REW (Radio-Electronic Warfare) jamming was actively employed so that the Armenian Air Defense means could not fight effectively. The Israeli experience of 1982 and the American experience of 1991 were what the Azerbaijani Army needed to paralyze the Armenian Air Defense system. This part of the operation was carried out at a high level and seriously paralyzed the control system of the Armenian Army. This strike, whose success is often exaggerated, in any case caused a lot of damage to the Armenian Air Defense and gave a free hand to the Azerbaijani troops. It is true that some planned targets were struck to little effect; for example, no damage was caused when striking the headquarters of the Defense Army or the divisions. The problem was not one or two massive strikes taken separately, but the fact that the Azerbaijani Air Force saw the theater of operations in real time along the entire front and almost to its full depth, and therefore had the opportunity to strike everywhere, along the entire front and throughout its depth.

The Azerbaijani helicopters began to use "Spike-NLOS" missiles to launch long-range strikes even against individual armored vehicles and other vehicles. The Turkish "Bayraktar TB2" UAVs gained more popularity in this role, but the Israeli missiles did the main job. The long-range missiles also began to deliver precision strikes on the Armenian logistics network and on newly discovered command points and Air Defense means.

During the 44-day war, the Armenian Air Defense Forces did not have the strength, means, skills and mastery to disrupt the highly competent Azerbaijani air operations, even when those were local. In fact, Azerbaijan applied the above-mentioned concept at the local level, where air management points, reconnaissance-strike complexes, fake targets and direct strike means broke down the Armenian anti-aircraft defense, which had been built on the Soviet model.

In September 2023, the Azerbaijani Armed Forces again used these technologies and solutions against the population of Artsakh, which had been under blockade and suffering hardships for several months already. They launched the first strikes on the Air Defense and REW installations of Artsakh, and to ensure the effectiveness of those strikes they used the Israeli "LORA" ballistic missiles. In a short period of time the Azerbaijani Armed Forces launched intensive strikes on the Air Defense means, artillery divisions and heavy equipment of the Armenian Army. The battles of the ground troops were very tough and intense, but lasted only a short time. The months-long blockade had yielded its results: the Armenian population, which had relied on the Russian peacekeepers but had been completely disappointed, was exhausted and not ready for persistent resistance.

Achieving success within a short time, they depopulated Artsakh with the tools of crimes against humanity.

Unfortunately, advanced weapons also enable dictators to target the civilian population, and they achieve their goals by violating the world order. Aliyev committed a new crime in the 21st century with advanced Israeli, Turkish and other weapons. The tyrant of Baku forcibly depopulated Artsakh using Syrian mercenaries, targeting the Armenian civilian population and destroying millennia-old Armenian culture. Even after the war of 2020 Armenians relied on the Russian peacekeepers, who, however, could not or did not want to perform their duties. Even the deaths of Russian officers did not change anything.

During the 2020–2023 Russian-Ukrainian war, both sides actively used precision means, which are often decisive for solving the main challenges of military operations; however, because precision means were in short supply, both sides relied more on conventional ones. Even though the Russian side also has strategic long-range cruise missile systems and other such means, their effectiveness is low, owing to their insufficient numbers as well as the lack of tools to support their use, in particular the lack of reconnaissance-strike complexes. By the summer of 2023, the Russian army had used about 2475 long-range missiles, mainly air- and sea-launched weapons, on the territory of Ukraine. According to Forbes experts, their value amounts to 12.5 billion dollars. At the same time, the Russian army expended about 2300 operational-tactical long-range missiles, mainly Iskanders, at a cost of 3.5 billion dollars. The number of launched missiles is also confirmed by other sources. The main reason such a large number of missile strikes is ineffective is the failure to deliver combined, complex and massed strikes, itself a result of the low efficiency of reconnaissance-strike complexes. At best, the Russian side launches a dozen or two missiles at a time, of one or two types. The number of launches of Iranian-made UAVs by the Russian army is certainly increasing, but it cannot be considered part of a complex combined air attack. In May 2023, 413 UAVs were launched, after which the numbers decreased significantly. However, in September they increased again to 503 units, of which, according to Ukrainian sources, 396, that is 78.7%, were hit. The real number of downed UAVs matters less than how many UAVs were launched, and the increased number of launched UAVs carries no strategic weight. The Russian Air Force and Navy are unable to carry out large-scale combined operations, especially with two or three branches of service; later we will refer to the complex American strikes lasting 6–9 hours. In the Ukrainian war, the Russian side also tried to disrupt the small, local multi-layered strikes organized by the Ukrainian side, which included a small number of airstrike means, fake targets, and similar elements. With the help of air control points, fighter aviation and anti-aircraft warfare divisions, the Russian Air Force tried to disrupt these attacks, which included several high-quality cruise missiles and/or multiple-launch rocket system missiles. However, it was enough for the Ukrainian side to use a few high-quality fake targets alongside equally cheap, low-quality ones, together with cheap strike UAVs and proper planning. Nearly all Ukrainian local strikes achieved their goals and proved effective. As a result of those strikes, the Russian cruiser "Moscow" and several other important ships were sunk, the building of the Black Sea Fleet command, several strategic bridges and several Russian airfields were hit, and dozens of high-quality aircraft and anti-aircraft warfare divisions were destroyed.

Contrary to that, the Russian side tried to implement mixed multi-layered strikes from the beginning of the war, but these later became poorer in terms of means. Cheap Iranian-made UAVs were replaced by cruise missiles, yet without much effect, because the organization and support of the strikes failed seriously. Deficiencies in continuous reconnaissance before and during the strikes, in escort and radio-electronic support, the repetitive tactical tricks, the poverty of the strike means, and the shortcomings in organizing the interaction of the military branches make these strikes ineffective. The help of American reconnaissance means allows the high-quality Ukrainian anti-aircraft defense to hit most of these weapons en route, even though those anti-aircraft means are not quantitatively sufficient.

Today it is evident that significant revolutionary changes are taking place in military affairs. For example, unmanned aerial vehicles and strike systems of various sizes have been created. Another important step in the command system is that every soldier should be visible on the commander's screen. After the C4I and C4ISR command and control systems, the US military now uses newer, more advanced digital command and control systems.

Fighter jets of the new generation already operate within the C5ISR system. These systems ensure the unity of brigade and air-component control systems and together form the control component of network-centric warfare. In the case of NPCW (network platform-centric wars), these systems even make it possible to control joint or separate wars that formally run on one strategic network but are actually waged in different operational and strategic theaters. In 2021, the name of that system was fixed as "Combined Joint All-Domain Command and Control (CJADC2)".

Some experts believe that simultaneous horizontal and vertical integration of these systems will make it possible to increase the accuracy of destruction and enhance the control of actions. Here the theory of "Prompt Global Strike (PGS)" is important. Thanks to new types of airstrike systems, the American armed forces will be able to strike at any point in the world within a maximum of 1 hour. Already, within 48 hours, the American army can concentrate the number of airstrike systems needed to provide the necessary superiority at any point on the planet. The "Prompt Global Strike (PGS)" project is being promoted in parallel with the network-centric war theory; it is a highlighted toolkit for the "Multi-Domain Battle" and "Multi-Domain Operations/Joint All-Domain Operations (MDO/JADO)" concepts. War is the set of simultaneous actions in all dimensions and domains. It is the highest display of the employment of means and the management of operations, a finished version of American military culture that has been developing over the decades. The latest doctrinal document of the US Army, "MDO/JADO," clearly states that wars and military operations have many dimensions and domains, that the separations between them are often artificial, and that the cognitive and cyber domains are as important as the physical ones. In May 2023, the third operational group of the US Army built according to the "MDO/JADO" concept reached the appropriate level of operational readiness.

The conclusion can be made that at the operational and strategic levels of war, air supremacy will remain a decisive factor. However, this does not mean that traditional, classical service branches and weapons will be pushed into the background.

Today we can clearly see all this in Ukraine, where no large-scale offensive operation is carried out
without ensuring air supremacy, but ground troops and especially artillery are widely used.

The Russian Air Force has constantly tried to carry out complex air operations in Ukraine, but they have never been of a high level. The maximum number of aircraft simultaneously involved in military operations did not reach a hundred. As a rule, they were used with very poor coordination, sometimes with intervals of two to three hours, while the maximum number of launches during a single operation generally amounted to no more than a hundred means of airstrike, of which about a quarter were strike unmanned aerial vehicles. This is a result of the limited ability to implement the six rules we have previously mentioned. The Russian Air Force does not have sufficient space-based and airborne reconnaissance and control capabilities, and the accuracy of its strike means and the coordination of complex strikes are weak. The most massive strike was carried out on December 29, 2023, when a total of 158 means of airstrike were launched: about 40 were strike UAVs, the rest were mainly air-launched cruise missiles, and only a very small number were aeroballistic missiles, which are difficult to shoot down. A few days later, on the second day of January 2024, 100 similar means of airstrike were launched, 10 of which were the well-known "Kinzhal" aeroballistic missiles. According to some information, they are launched in the following sequence: first the UAVs with a small number of anti-radiolocation missiles, then various cruise missiles, after which the aeroballistic and ballistic missiles are launched, accompanied by old anti-aircraft missiles used in a ballistic role. In total, during those five days, according to official Ukrainian data, more than 500 means of airstrike were launched, of which 300 were cruise and aeroballistic missiles, 200 were attack UAVs, and a very small number were anti-radiolocation and other means of airstrike. Before that, the record of the Russian army was 50–60 strike UAVs and cruise missiles per day, or about 500 means of airstrike per month, most of which were strike UAVs. In May 2023, the Russian army launched 400 "Shahed/Geran" type strike UAVs and only a very small number of high-quality means of airstrike. In September of the same year, they launched more than 500 of the same UAVs and again a small number of other means of airstrike.

In December 2023, 780 units of "Shahed-136/131/Geran" strike UAVs were launched.


This ratio expresses the main trend of the last two decades, in which high-quality means of airstrike are supported by inexpensive, lower-quality ones, as we also saw during the second Iraq war. According to Ukrainian and Western official data, as well as our personal sources, most of these means of airstrike, or at least more than half, are hit by anti-aircraft missiles, which is a consequence first of the poor organization of the strikes and then of the technical quality of the means themselves. These data are very impressive at first glance, and never has the Russian Air Force carried out more strikes, but a detailed study shows that, as military operations, they present several problems and are not effective, in particular:

1. In terms of density, the strikes are not unprecedented in the art of war; moreover, they were stretched over a long time, which means the synchronization was poor and gave the Ukrainian Air Defense an opportunity to defend itself competently.
2. The strikes are not accompanied by extensive radio-electronic warfare, decoys and fake targets, or by many anti-radiolocation missiles.
3. Efficiency was significantly reduced because the strikes were scattered over a large area.
4. Strikes were made only against stationary targets.
5. Strikes were not accompanied by constant, guided reconnaissance.

In other words, even while organizing strikes unprecedentedly large for it, the Russian side did not manage to satisfy the six rules we mentioned. It is not surprising that there are no accompanying videos of these strikes or evidence of their accuracy; only eyewitness videos are cited as proof of effective strikes, and sometimes there are ridiculous claims, for example, that those strikes destroyed a Ukrainian factory that used to produce uniforms.

On January 15, 2024, the Ukrainian Air Defense, according to its claim, hit the "A-50" air control plane of the Russian Air Force over the Azov Sea basin. If this fact is confirmed, it is unprecedented in the history of military art. According to some experts' interpretation, the forces of the Ukrainian Air Defense acted in a very interesting and complex combination: they used a Soviet "S-300" division and detected the Russian control airplane, after which the concealed "Patriot PAC-2/3" system, presumably a single launching station, was activated for a short time and fired the missiles that hit the target. At the same time, the crew of a "Su-34" airplane recorded the launch by the "S-300" division. Everything lasted a very short time, during which the strike means of the Russian Air Force did not manage to detect and suppress or hit the Ukrainian Air Defense assets. This operation, too, is a complex one, with networked control and delicate cooperation.

Russian authors such as I. Popov and M. Khamzatov discussed the failures of the Russian Air Force in detail in their work "War of the Future".

All this proves that the claim of some theoreticians that UAVs will forever push the classic tactical air force into the background is not very valid. There are similar discussions in the USA, where, for example, theorists such as Lieutenant General Samuel Hainot, Colonel Maximilian Bremer and Kelly Grieco believe that the era of air superiority is ending. They claim that UAVs are replacing conventional air forces wherever possible. We do not agree with these claims, and we have the following arguments:

1. First of all, cheap strike UAVs have little effect at the operational and strategic levels, are severely affected by weather conditions, and are weak owing to their lack of sophisticated equipment.
2. They are effective at the tactical level, which is the domain of artillery. For example, FPV attack drones replace mortars, but not always.
3. Although such cheap means are important, they fill only narrow niches.
4. High-quality means of airstrike are expensive and therefore require high-level technical and combat skills.
5. It is not possible to launch high-quality means of airstrike from UAVs; for that you need fighters and bombers.

Contrary to that, the Air Force is transforming and becoming more flexible: all kinds of UAVs, as varieties of airstrike means, are being integrated with the classic Air Force, and the two complement each other. Modern means of airstrike are getting smaller while performing high-level combat tasks. UAVs and helicopters supplement and often replace attack aircraft. The capabilities of attack aircraft are approaching those of the tactical air force, the American tactical air force has long been conducting strategic strikes, and the new strategic bombers are shrinking in size while maintaining the capabilities of their predecessors. The American "B-21" bomber is almost half the size of the earlier types.

[mh] Force Multiplication and Operational Flexibility

[h] Integrated Risk Management System – Key Factor of the Management System of the Organization

The management of any organization, whether in the public or the private sector, aims to monitor and reduce risks in order to achieve its objectives. Risk control is achieved by managing risks effectively, namely by implementing an adequate risk management system.

Risk management is an important concept related to safety and financial integrity of an organization,
and risk assessment is an important part of its strategic development.

An organization's risk management strategy should ensure that all the risks it faces are identified, assessed, monitored and managed so that they are kept within limits accepted by the entity's management.

Risk management – Defining function within the organization

Risk management is the process of identifying, analyzing and responding to the risks the organization faces and is exposed to. The costs of implementing this system depend on the methods used to manage unexpected events.

The risk management process is an ongoing one, and its results are embodied in the decisions taken on accepting, reducing or eliminating the risks that affect the achievement of objectives. The aim is to optimize the organization's exposure to risk in order to prevent losses, avoid threats and exploit opportunities.

2.1. Conceptual approaches for risk

In general terms, risk is part of any human endeavor. From the moment we leave home until we return, we are exposed to risks of different levels and degrees. It is significant that some risks are taken completely voluntarily, while others are created by us through the nature of our activities.

The word "risk" derives from the Italian word "risicare", which means "to dare". In this sense, risk is a choice, not fate1. From this it follows that risk is not optional: we are permanently exposed to it in everyday life, and what really matters is to gain control over it each time.

Nowadays there is no definition of the concept of risk unanimously accepted by all specialists in the field. Among the most commonly used definitions are the following:

"Risk is the possibility of obtaining favorable or unfavorable results in a future action, expressed in terms of probabilities."

or

"Risk is a possible future event whose occurrence could cause certain losses."

or

"Risk is the threat that an event or action will affect in a negative manner the capacity of an organization to achieve its planned goals.2"

The analysis of these definitions of risk gives rise to the following conclusions:

1. Probability versus consequences. While some definitions of risk focus only on the probability of an event occurring, others are more comprehensive, including both the probability of the risk manifesting and the consequences of the event.
2. Risk and threat. In defining the concept, some experts have put an equal sign between risk and threat. We specify that a threat is an event with a low probability of manifestation but with high negative consequences, whose probability is difficult to assess. A risk is an event with a higher probability of occurrence, for which there is sufficient information to rate the probability and consequences.
3. Considering only negative results. Some concepts of risk focus only on negative events, while others take into account all variables, both threats and opportunities.
4. Risk is related to profitability and loss. Achieving the expected result of an activity is under the influence of random factors that accompany it at all stages of its development, regardless of the domain of activity.

In conclusion, risk can be defined as a problem (situation, event etc.) that has not yet occurred but may occur in the future, threatening the achievement of agreed outcomes. Viewed in this context, risk is the uncertainty in obtaining expected results and should be treated as a combination of probability and impact.

The probability of risk occurrence is the possibility that the risk will materialize; it can be appreciated or determined by measurement when the nature of the risk and the available information permit such an evaluation.

The risk impact is the consequence for the results (objectives) when the risk materializes. If the risk represents a threat, the consequence for the results is negative; if the risk represents an opportunity, the consequence is positive.

The probability of risk occurrence and its impact on the results together establish the risk value. Based on the concepts presented above, in our opinion risk is a permanent reality, an inherent phenomenon that accompanies all the activities and actions of an organization and that materializes or not depending on the conditions created for it. It can cause negative effects by deteriorating the quality of management decisions, reducing profit and affecting the organization's functionality, with consequences that can even block the implementation of activities.
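
To make the combination of probability and impact concrete, the short sketch below scores risks on 1-5 scales and filters out those whose value exceeds an assumed tolerance. The scales, the tolerance threshold and the sample register entries are illustrative assumptions, not values prescribed by this chapter.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int  # 1 (rare) .. 5 (almost certain), illustrative scale
    impact: int       # 1 (negligible) .. 5 (critical), illustrative scale

    @property
    def value(self) -> int:
        # Risk value as the combination of probability and impact.
        return self.probability * self.impact

def above_tolerance(risks, tolerance=9):
    """Return the risks whose value exceeds the accepted tolerance, largest first."""
    return sorted((r for r in risks if r.value > tolerance),
                  key=lambda r: r.value, reverse=True)

register = [
    Risk("Key supplier failure", probability=2, impact=5),
    Risk("Data entry errors", probability=4, impact=2),
    Risk("Regulatory change", probability=3, impact=4),
]
for r in above_tolerance(register):
    print(f"{r.name}: risk value {r.value} -> treat with internal controls")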

In the literature, but also in practice, besides the concept of risk other concepts are used, respectively:

Inherent risk is the risk that exists naturally in any activity and is defined as "the risk existing before the implementation of internal control measures to reduce it" or as "all risks that threaten the entity/organization and may be internal or external, measurable or immeasurable".

Residual risk is the risk remaining after implementation of internal control measures. Applying these
measures should have as effect the limitation of inherent risk to a level accepted by the organization.
The residual risk should be monitored in order to maintain it at accepted levels.

Risk appetite is the level of exposure that the organization is prepared to accept, namely the risk
tolerated by the organization.

Practitioners recommend that organizations' management bear in mind that risks cannot be avoided and, under these conditions, concern itself with evaluating them so as to keep them "under control" at levels considered acceptable and tolerated by the organization, rather than seeking their total elimination, as this can lead to other unexpected and uncontrolled risks.

2.2. Risk – Threat and opportunity

The internal and external environment in which the organization operates generates risks. In these circumstances the organization should identify its weaknesses and the threats it faces in order to manage and minimize them. It must also capitalize on its strengths and exploit opportunities.

In this respect, designing and implementing a risk management process at the corporate level is appropriate and necessary, given the uncertainty of the threats to achieving organizational goals.

The implementation of this concept leads to certain changes within the organization, whose effects should be materialized through a better use of available funds and the achievement of planned levels of profitability, namely:

 risk management requires modifications in leadership style; besides treating the consequences of events that have occurred, the organization's management is forced to devise and implement adequate internal control devices to limit or eliminate the possibility of risks manifesting. Implementing these control devices should enable the organization to master risks within acceptable limits and to achieve its objectives.
 risk management ensures the efficient and effective achievement of objectives; mastering the threats the organization deals with allows risks to be ranked by probability of materialization, magnitude of impact, and the costs of mitigating or limiting unwanted effects.
 risk management requires a healthy internal control system; designing and implementing adequate internal controls and ensuring their operation provide reasonable assurance that objectives will be achieved. Enhancing and strengthening the internal/management control system is not possible without designing and implementing appropriate risk management.

Risk management is characterized by the establishment and implementation of concrete activities and actions for identifying and assessing risks, determining the risk level and, on that basis, implementing adequate internal control devices that limit the probability of the risk occurring or its consequences if it materializes. The process must be coherent and integrated with the objectives, activities and operations carried out within the organization.

The staff within the organization, regardless of hierarchical level, should be aware of the importance of risk management for achieving planned results and should develop the skills needed to perform monitoring and control based on principles of efficiency and effectiveness.

Those responsible for the functional structures within the organization have the task of identifying and regularly analyzing the risks related to their activities, proposing and substantiating appropriate measures to limit the possible consequences of those risks, and ensuring their approval by the decision makers within the organization.

Practice3 recommends that any organization manage its risks, because in many cases the occurrence of risks can have serious consequences for its activities, sometimes even jeopardizing the very existence of the organization4.

The growing complexity of risks has led organizations' management to understand that it is better to manage a risk than to cover a loss. Based on this requirement, many organizations have proceeded to implement risk management, developing specific strategies that define the organization's behavior toward risk and its risk management arrangements.

2.3. The importance of risk management organization

Risk management is a preventive attitude toward eliminating or limiting damage whenever there is a possibility of a risk materializing, namely a process of identifying, analyzing and responding to the potential risks of an organization.

In these conditions, the role of risk management is to help understand the risks the organization is exposed to, so that they can be managed. This role varies depending on when the analysis is done, as follows:

 if the risk assessment is conducted before the risk materializes, the goal is to avoid the occurrence of this event;
 if the risk assessment is carried out after the risk has materialized, the goal is to ensure the continuation of the activities and the organization's business continuity.

The advantage of implementing a risk management system within the organization is that it ensures economic efficiency. To achieve this requirement, the organization's management has the responsibility to know the risks it faces and to manage them properly, in order to avoid the consequences of their materialization.

2.4. Responsibility for risk management

Risk management is the responsibility of the organization's management, and the central objective of this process is to manage risks so that resources are used efficiently and effectively in order to maximize profit and minimize threats, while safeguarding the interests of employees and customers. In this respect, the entity's management must act in the following directions:

 establishing a definition of risk, and of the types of risk, that is widely accepted and understood across the organization;
 assessing current risks and monitoring potential sources of internal and external risks;
 establishing clear responsibilities at each hierarchical level and for each employee concerning the implementation of the risk management process;
 developing an adequate management information system on risks and a risk assessment system;
 setting the tolerance for taking risks and the corresponding limits of exposure;
 permanently analyzing achievements and poor results in risk management and continuously improving the risk management process;
 ensuring an adequate level of knowledge and skills of employees in accordance with the requirements of implementing the risk management process.

To ensure efficient risk management it is necessary to create organizational structures appropriate to the policies and strategies of the organization. In this respect, the organization should adopt appropriate policies regarding its organizational plan, in order to monitor effectively each risk or category of risk and, in an integrated manner, the whole system of risks accompanying its activities.

Policies and strategies that may be adopted regarding the organizational plan relate to:

 establishing and developing its own system of rules and procedures which, once implemented, ensure that risks are avoided or minimized;
 establishing an appropriate functional structure based on a clear concept, which should provide departments able to contribute to identifying and monitoring risks.

Given that risk can be identified, evaluated and limited, but never completely eliminated, the
organization must develop both general policies and specific policies to limit exposure.

2.5. Effectiveness of risk management

The activity of an organization is characterized by all its processes, procedures, inputs, outputs, resources (financial, material, human and informational) and the technical means for recording, processing, transmitting and storing data and information about the activities and the environment in which the system operates.

Through the internal/management control programs it prepares, each functional structure should identify the risks it faces and, by using risk management procedures and policies, ensure that they are kept at acceptable levels.

Risk management is an ongoing, structured process that allows risks to be identified and assessed, and opportunities and threats affecting the achievement of objectives to be reported. The benefits of implementing the risk management process include:

 a higher probability of achieving the entity's goals;
 an improved understanding of risks and their implications;
 increased attention to major issues;
 limitation of consequences by implementing adequate internal controls;
 assumption of an acceptable level of tolerance to risk;
 broader information for adequate decision making in terms of risks.

The organization’s management and staff perform risk management activities in order to identify,
assess, manage and control all types of events or situations that may affect its activities.

In today's world it has become increasingly imperative for corporate managers to monitor and manage risk5 in all respects. Good risk management means avoiding or minimizing losses, and also treating opportunities in a favorable manner.

Risk management is necessary because organizations face uncertainty, and the biggest challenge for the leadership is to determine what level of risk it is prepared to accept in pursuing its mission, in order to add value to activities and achieve planned goals.

Risk management is an essential component of the organization’s success and must become an intrinsic
part of its functioning. It must be closely related to corporate governance and internal control, but also
connected with performance management.

Integrated approach to risk

The integrated risk management process is designed and established by management and implemented by the whole staff of the organization. This process is not linear: managing one risk may have an impact on other risks, and control devices identified as effective in limiting one risk and keeping it within acceptable limits may prove beneficial in controlling other risks.

Risk management currently enjoys growing appreciation and recognition, both in theory and in practice, reflected on the one hand in the increasing number of specialists in the field and, on the other hand, in the interest of managers in designing and implementing effective risk management systems to meet their objectives.

Mastering risk drives organizational development and performance growth, both for the organization as a whole and for its individual activities.

3.1. COSO and integrated risk management

Referring to risk management, COSO initially presented a framework methodology for implementing internal controls, built into policies, rules, procedures and regulations, which has been used by various organizations to keep control over how plans are executed and objectives are met.

Later, after the appearance of the great fraud scandals and the need to improve corporate governance processes, large corporations began to set up risk management departments to help implement procedures for identifying, assessing and controlling risk.

Following the emergence of these needs, the Treadway Commission, the promoter of the COSO model, initiated a program to develop a general methodology that organizations' management can use to improve risk management.

Risk management within organizations was built on the concept of internal control, but with the focus placed particularly on risk management. It was not intended to replace internal control, but to incorporate the basic concepts of internal control into this process. Thus, a strong connection was preserved between risk management and internal control, with common, interrelated concepts and elements.

3.1.1. Risk management and internal control

The main objectives of the internal control/management system are to ensure the efficiency and effectiveness of activities, the reliability of reporting and compliance with the regulations in the field. The internal control/management system is developed and monitored by the organization's management, which is responsible for designing adequate internal control devices to limit significant risks and keep them within acceptable limits, aiming to give assurance that the organization's objectives will be met.

The risk management system was structured on the components of internal/management control as defined by the COSO model, namely on five elements whose implementation ensures that the internal control tools/devices exist and function as intended.

These components were defined as:

 the control environment, specific to the organization, sets the foundations of the internal control system, influences the control awareness of employees and forms the basis for the other components;
 risk assessment is carried out by management at both the corporate and the activity level and includes identifying and analyzing the risks that affect the achievement of objectives. In general, risk assessment involves determining the importance of the risk, assessing the probability that the risk will occur and deciding how to manage it;
 control activities are the policies and procedures that ensure management's directives are respected. Through them, it is ensured that all necessary measures are taken to manage risks and achieve the objectives set by management;
 information and communication supports the other components by properly communicating to employees their responsibilities with regard to internal control and by providing relevant, reliable, comparable and understandable information so that they can perform their duties and tasks;
 monitoring implies verification by management of how the internal controls it demanded are implemented, or by the responsible staff of whether the internal controls imposed actually work and are sufficient for activities or actions to take place as planned.

3.1.2. Objective of risk management system

COSO defines integrated risk management as “the process conducted by the Board, management and
others, applied in setting strategy and across the organization, designed to identify potential events that
may affect the entity and to manage risk within the risk appetite to provide a reasonable assurance
regarding the achievement of organizational objectives”7.

From the content of this definition follow some essential elements characteristic of integrated risk management:

 the process is conducted permanently throughout the organization and is interwoven with its other activities;
 the purpose is to manage the risks associated with objectives and to secure the expected results through their implementation;
 the whole staff is involved in the process, regardless of hierarchical level;
 the approach starts from the strategic goals rather than from operational objectives;
 the process is applied to the entire organization, not to individual functional structures.

The general objective of integrated risk management is to effectively manage uncertainties, risks and
opportunities. The need for risk management stems from the fact that uncertainty is a reality and the
reaction to uncertainty is a constant concern.

Risk management involves establishing actions to respond to risk and implementing adequate internal control devices with which to limit the possibility of the risk occurring, or its consequences if it materializes. In order to ensure efficiency in achieving objectives, the process must be coherent and convergent, integrated with the objectives, activities and operations carried out within the organization.

Also, staff at every hierarchical level should be aware of the importance that risk management has in achieving their own objectives and should thus develop the skills needed to perform monitoring and control based on principles of efficiency and effectiveness.

In order to ensure the success of this approach and to achieve effective risk management, the organization needs to create a risk culture, namely to develop a risk management philosophy specific to the organization and its management, and an awareness of the negative effects of risk at all levels of the organization.

From the above it follows that the need for internal/management control is determined by the existence of threats or opportunities in carrying out planned activities or actions, with negative consequences for the organization. This requires the establishment and implementation of internal control devices in order to prevent or limit the risks.

Also, the need for risk management stems from the fact that risk is everywhere, in everything we want to achieve. It cannot be removed; any attempt to eliminate risk can lead to the emergence of new, uncontrolled risks, which may affect the organization to a much greater extent. Under these conditions, risk needs to be minimized, a process that can be achieved by establishing and implementing adequate internal controls.

3.2. The role of integrated risk management system

The risk management process is considered to be a set of activities and actions carried out in a certain manner and order to prevent or reduce exposure to risk resulting from one or several operations.

In practice, the most commonly applied concept of risk management is that risks should be managed separately, within departments organized independently in the organization's functional structure. This method provides simplicity and efficiency in making decisions on risk management, but it leads to multiple actions and records for the same risk exposure and does not address the correlations between different exposures.

There are other practices too, which consider that each employee must be responsible for risk management, having the competence to identify risks and implement appropriate internal controls to mitigate the probability of their manifestation. This way of managing risks does not lead to results and does not guarantee that activities are conducted as planned, because it does not ensure consistent treatment of exposures affecting the same activities, and the process depends on how well employees know and understand the risk management system implemented within the organization.

These traditional risk management processes are usually fragmented, meaning they are found
implemented at the operation or transaction level and are aimed at preventing losses. Managing risks in
these cases “does not consider the fact that risks are a source of competitive advantage”.

Recent research on risk management models and strategies focuses on the competitive advantages of risks when they are approached as a whole, at system level. In this case the system is considered to be composed of all the processes and activities necessary to achieve the objectives.

This approach requires that all relevant functions within the organization (personnel, finance and accounting, manufacturing, commercial, procurement, IT, legal, internal control, internal audit, strategic development, marketing etc.) participate in the risk management process.

For implementing integrated risk management it is necessary to view the organization as a system, both as a link in the industry in which it operates and as a part of it, acting in accordance with certain principles, its features being: complexity, limited resources, the factors that influence its activity, the nature of events, and the possibilities for development.

In this view, risks should be managed in an integrated way, to eliminate multiple records of the same risk exposure and to analyze the correlations between different exposures. This risk management approach is complex; it requires a large volume of information for decision making and higher administration costs. At the same time, a wrong decision can have a high impact on the business, or even on the organization.

The integrated risk management system, based on this concept, must be interdependent with the
organization’s development needs and to include the processes of development and establishment of
elements concerning assessment, monitoring and risk management. At the same time, integrated risk
management must be also approached in correlation with all types of risk management for each
functional structure of the organization.

The integrated risk management system operates with broad categories of risk (personnel risk, financial risk, legal risk etc.), with different risks attached to various activities, with risks associated with different operations or transactions, and also with external risks that may affect the development of the organization as a whole (such as risks related to legislative changes) or the conduct of one or more activities carried out within the organization.

Under these conditions, implementing the concept of integrated risk management within the organization is more than necessary, because the risk management process should address all the types of risk that are present and that affect all the functional structures of the organization.

Approaching the exposures in this unitary manner, namely as a coherent system of exposures to various risks and of the connections and mutual conditioning between them, will enable effective management of the risks that may affect the achievement of objectives and will contribute to improving activities and growing performance within the organization.

The integrated risk management system can identify all risks that affect the implementation of
processes and activities attached to an organizational goal; it can assess the overall consequences and
adopt measures depending on the level of uncertainty and the existing inherent risk that affects
achieving objectives set.

Also, integrated risk management supports decision making at the lower hierarchical levels of the organization as well as at the top level, and ensures the coordination of activities to solve current problems between functional structures. It also helps increase efficiency within the organization through other administrative or managerial means, such as a better allocation of resources.

The implementation of integrated risk management within the organization will provide shareholders and potential investors with more concrete and reliable information on the risks to which it is exposed, allowing them to base their decisions on better grounds.

As the organization's activities develop, the old risk management systems become inadequate and risk exposures, especially the risk of fraud and error, increase significantly. Implementing the integrated risk management system involves designing evaluation criteria capable of measuring the risks related to all activities, considering the relationships and connections between them, and thus determining, at any time, the exposure of the organization or of its functional structures to any risk factor.

This risk management process, characterized by the development of an integrated risk management methodology, includes the following steps: establishing the organizational and risk management context; identifying, analyzing and assessing risks; treating risks; controlling risks; and communicating and monitoring the risk management plan.

The process should not be linear; managing one risk may have an impact on other risks, and measures identified as effective in limiting one risk and keeping it within acceptable limits may prove beneficial in controlling other risks.

3.3. Integrated risk management system functions

The effectiveness of implementing an integrated risk management system, compared with traditional risk management, comes from the fact that it integrates all activities related to risk and risk management into a single system. This system is operated and controlled from a single management level, thus eliminating the duplication and the disruptions of communication and action that can occur within a classical system.

The functions that the integrated risk management system fulfills within the organization's management system can be classified as follows:

1. defining goals and setting objectives of the organization on risk. Setting goals represents a
defining requirement for the identification, assessment and risk response planning. The
organization must define properly its objectives, so to be understood and carried out by people
who were assigned to.

The basic role of integrated risk management is to provide to the management and organization’s board
a reasonable assurance regarding the achievement of objectives. In this respect, COSO8 states that in
order to identify associated risks it should be established in advance the organization’s objectives,
which shall be grouped into four categories as follows:

 strategic objectives, which define the mission and long-term development directions;
 operational objectives, which refer to the effective and efficient use of available resources;
 reporting objectives, which refer to the reliability of reporting;
 compliance objectives, which refer to compliance with the regulations, standards and rules
applicable to the organization.

In defining the objectives, the key is to define the strategic objectives first and then to derive from
them the other types of goals: operational, reporting and compliance.

Also, for each goal it is necessary to establish the risk tolerance, that is, the accepted margin
concerning the degree of achievement of the indicators attached to the objectives for them to be
considered achieved.

2. determining courses of action to manage risk. To achieve risk management within the
organization, the lines of action of integrated risk management are:

 defining the organization's strategy on risk;
 setting the activities to be carried out if the risk occurs;
 evaluating results and measuring performance;
 risk monitoring at corporate level;
 reviewing the corporate strategy on risk.

The risk strategy must be coherent, specify how to recover losses caused by an adverse event, and
integrate the risk response measures.

The activities to be carried out if the risk materializes deal with establishing measures to address the
consequences of the risk, recovering losses, and identifying and implementing appropriate control
devices to eliminate the causes that led to the risk occurrence.

Applying the decisions taken rigorously, so as to ensure the effective functioning of integrated risk
management, will ensure the continuity of operations and the achievement of the expected results.

Monitoring risk at corporate level refers to observing the functioning of the integrated risk management
system, identifying and reporting existing weaknesses, and adopting the necessary remedial measures.

The risk strategy needs to be updated whenever the organization changes its development strategy or
strategic objectives, and also when management's risk policy changes.

Also, periodic review of risks involves the redistribution and concentration of resources in areas of
interest.

3. determining the relations between the integrated risk management system and the other subsystems of
the organization. The organization's management must permanently ensure the interdependence
between the organization's objectives, its functional departments and risk management.

The risk management process aims to identify and assess the risks that can affect the achievement of
objectives and to establish risk response measures. It should “become part of the organization's
functioning as the base of management approaches”.

Considering that objectives concern all levels of the organization (strategic, general and operational)
and are defined at strategy level, at functional department level and even at individual job level,
risk management must be aware of all the relationships that occur or develop between or within them.

Incomplete determination of the relationship between the risk management system and the other
subsystems of the organization will lead to inadequate identification and management of the risks
associated with the objectives, with major negative consequences for the organization.

4. setting activities and responsibilities on risk. This seeks to identify all activities carried out
within the integrated risk management process and to establish responsibilities for implementing
each activity. Since the process involves all functions and functional departments of the
organization, the activities and responsibilities on risks, defined and agreed at their level, must
be communicated to the employees involved in carrying out the activities.
5. defining performance indicators. For each strategic, operational, reporting or compliance
objective defined at corporate level, performance indicators must be established by which the
degree of achievement of the goals can be measured. Also, setting targets for each indicator will
allow establishing the performance resulting from the risk measures imposed within each goal.
6. allocating the resources necessary to carry out the activities and training the staff involved.
For each planned activity, the resources necessary for its achievement must be identified:
financial, human, material and information resources. The resources necessary to accomplish the
activities must be available and approved in the budgets.
7. communication and consultation on the results, and performance evaluation related to risk
compared with the planned objectives. Communication involves the timely and clear transmission of
the necessary information about risk, as follows:

 those responsible for risk management communicate information about the content of the process and
also on management decisions relating to any measure on risk;
 those responsible for risk in the functional structures communicate information on the risks
associated with the established objectives and on how those risks are managed;
 the entire staff reports information on identified risks whose management needs to be ensured.

Consultation on the results aims to provide information on risk exposure after its evaluation and the
implementation of control measures. Its role is to establish the effectiveness of the control measures
applied.

Risk performance evaluation aims to determine the performance obtained from the risk response compared
with the costs involved in implementing the control measures taken to reduce the risk and maintain its
level within the risk appetite.

8. monitoring the effects and reviewing the formulated strategy. This involves evaluating the
efficiency and effectiveness of the risk management process within the organization and, according
to the results obtained, carrying out the appropriate review of the risk strategy, in order to
minimize adverse events and appropriately integrate the risk response measures.

In our opinion, the implementation and operation of integrated risk management is necessary; it can be
achieved through ongoing risk monitoring and the integration of risk response measures, based on risk
strategies that ensure the achievement of objectives and deliver the expected results in case of an
event causing loss.
The firm implementation of the decisions taken, as an effect of the effective operation of the
integrated risk management system, provides the premises for the continuation of activities and for
obtaining performance across the organization.

Knowing the threats that affect the achievement of the goals will allow their classification according
to the likelihood of materialization, the extent of the impact on the objectives and the costs involved
in the measures necessary to minimize risk effects. Establishing a hierarchy of threats will lead to an
order of priorities in resource allocation.

4. Integrating risk management into the management system

The conception, implementation and operation of an integrated risk management system must ensure
ongoing monitoring of risk and the integration of the risk response measures in a coherent risk strategy.

The risk strategy should contain clear objectives for the risk policy promoted and applied within the
organization, and should define exposure levels and the response to risk in all circumstances in which
risk is analyzed and evaluated. It should also set out the terms and conditions for the recovery of
losses whenever the risk manifests itself and has, or will have, financial consequences.

4.1. Integrated risk management system - Part of the organization’s management system

Implementing integrated risk management within the organization will allow the organization's
management to focus its resources on those risks that affect the achievement of objectives, in order to
protect assets, ensure the continuity of the organization's activities and adopt effective decisions.

The risk management function must be a defining function within the organization, providing a complete
and coherent set of activities and actions that define the organization's decision-making if a risk
materializes and guide staff in risk management.

An effective integrated risk management system must ensure the recovery of the organization in case of
an interruption of activity, by maintaining its essential functions at least at minimal levels from the
occurrence of the event until its remediation.

A decisive part in the functioning of an integrated risk management system is the planning carried out
to ensure business continuity, because it contains recovery measures for the activities affected by a
risk event.

The approach, implementation and functioning of an integrated risk management system in the
organization depend on the processes undertaken, the organization's situation and the leadership
style. However, to ensure the efficiency of the process, the following must primarily be taken into
account:

 the COSO principles on integrated risk management, compliance with which involves designing
and implementing efficient risk management that contributes to achieving objectives and to the
efficient use of resources;
 the risk approach within integrated risk management, starting from strategic risks and continuing
with operational, reporting and compliance risks;
 risk analysis and assessment must be carried out in terms of the relevant factors: materiality,
impact, probability;
 preparing reports on risk management with practical value that can be used by management in
making decisions.

The role of the integrated risk management system is to ensure the implementation of the risk
management function within the organization's management system. Its functions are activated when the
organization's management system signals the existence of a threat to achieving its objectives and
delivering the expected results of its activities.

Figure. The management system of an organization

From the scheme presented above it can be seen that developing and implementing an integrated risk
management enables entity’s management to focus efforts on the risks affecting the achievement of the
objectives.

Also, the integrated risk management system reflects the integration of all activities and actions
related to risk and risk management into a single system, so that it can act upon them at one level. In
this way, the parallelism and dysfunctions of action and communication that occur when systems operate
independently of each other are eliminated.

Implementing an integrated risk management system within the organization leads to the following:

 analysis of the strategic, operational, reporting and compliance risks that may affect the
achievement of organizational objectives;
 definition and prioritization, according to the risk level and the costs required, of the control
devices needed to eliminate the consequences if the risk has manifested, or to limit the risk if it
constitutes a threat to the organization;
 identification and evaluation of the internal controls related to the activities and actions
attached to the objectives, both existing and expected, and the establishment of the areas that
require implementing control measures, so that the targets set for the objectives are achieved
under the planned conditions;
 designing and implementing internal control measures to improve activities;
 providing the conditions for the organization to remain compliant in different situations;
 establishing critical data and information concerning the organization's environment, which may be
used in strategic analysis and decision-making.

Exercising the risk management function, as a defining function within an organization, involves
carrying out through the integrated risk management system a coherent set of processes, activities and
operations that ensure effective risk management and define the decision-making process if a risk
occurs.

Depending on the types of risks identified, on the risk response determined according to the risk
appetite, on the costs involved and on the levels at which risks can be maintained after their
treatment, the integrated risk management system can guide the organization to improve its work
according to the benefits of good risk management.

4.2. Assessing and measuring risks – Component of integrated risk management system

In the integrated risk management process, the risk assessment component is a major step that aims to:

 identify the significant risks within the organization, associated with the objectives;
 assess the capacity of the internal control/management system to prevent or manage risk
effectively;
 determine the significant risks that are not adequately controlled by the organization and that are
to be treated in order to reduce exposure levels.

Risk assessment depends on the probability of occurrence and the severity of the consequences if the
risk materializes, that is, the impact of the risk, and uses risk assessment criteria as tools. These
criteria should cover the purpose for which the risk was identified, in terms of compliance and
performance.

Through prioritization, the medium and high risks for which risk responses will be established are selected.

The risk assessment process includes the assessment of inherent risks, existing before the
implementation of control measures, and of residual risks, resulting after the implementation of
control measures, and has two phases, namely:

1. Assessing probability is a qualitative element and is carried out by evaluating the potential for
risk occurrence, by considering qualitative factors specific to the context in which goals are
defined and achieved. This can be expressed on a scale of values on three levels as follows: low
probability, medium probability and high probability. Illustration:

PROBABILITY / If:

LOW: rare modifications in the regulatory framework (over 3 years); low complexity of activities and
actions; experienced staff; objectives and targets are not changed; reliable, adequate and updated
information; well-designed, formalised and properly conducted processes.

MEDIUM: the legal framework is relatively new or has experienced significant changes; average
complexity of activities and actions; average level of staffing and staff experience; rare changes of
objectives and targets; information existing from many sources, but insufficient; processes related to
practice.

HIGH: very frequent modifications in the regulatory framework; high complexity of activities and
actions; inexperienced and newly employed staff; frequent changes of objectives and targets; poorly
designed and led processes; insufficient and outdated information.

2. Assessing impact is a quantitative element and is carried out by evaluating the effects of risk if
it would materialize, by considering quantitative factors specific to the financial nature of the
context of achieving objectives. This can be expressed on a scale of values on three levels as
follows: low impact, moderate impact and high impact. Illustration:

IMPACT / If:

LOW: cost of implementing activities/actions below the planned level; no losses of financial assets,
employees or materials; good image of the organization; competencies and responsibilities provided for
in decision making; good quality services; continuity of activities is ensured.

MODERATE: costs of implementing activities/actions equal to the planned level; reduced losses of
financial assets, employees and materials; moderate image of the organization; decisions made without
assuming responsibilities; moderate quality of the services provided; very rare interruptions of
activity.

HIGH: high costs in relation to the implementation planning of activities/actions; poor image of the
organization; decision making without ensuring competence and responsibilities; poor quality of the
services provided; significant interruption of activity.

The risk analysis criteria are the assessment of the probability of risk occurrence and the assessment
of the impact level if the risk were to materialize, as follows:

 the probability assessment is made based on the analysis and evaluation of qualitative factors
specific to the context in which objectives are defined and met;
 the assessment of the impact level is made based on the analysis and evaluation of quantitative
factors specific to the financial nature of the context of achieving objectives.
Figure. The level of risk depending on the probability and impact

Establishing the response to risk, and checking whether it falls within the risk appetite agreed by the
organization's management, is carried out by multiplying the probability and the impact of the risk,
using the formula:

PT = P x I

where: PT = total risk score, P = probability, I = impact.

Depending on the outcome of the risk measurement process, applied to all the risks the organization
faces and that affect the achievement of objectives, the classification shall be low risk, medium risk
or high risk, as follows (a brief illustrative sketch follows the list):

 for PT = 1 or 2: low risk;
 for PT = 3 or 4: medium risk;
 for PT = 6 or 9: high risk.
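
As an illustrative sketch only (the function names below are ours, not the source's), the scoring and
classification described above could be computed as follows:

# Illustrative sketch of the scoring described above; names are illustrative, not from the source.

def total_risk_score(probability: int, impact: int) -> int:
    """Total risk score PT = P x I, with both factors rated on the 1-3 scale."""
    if probability not in (1, 2, 3) or impact not in (1, 2, 3):
        raise ValueError("probability and impact must be rated 1, 2 or 3")
    return probability * impact

def classify_risk(pt: int) -> str:
    """Map PT to the three bands used in the text (1-2 low, 3-4 medium, 6-9 high)."""
    if pt <= 2:
        return "low risk"
    if pt <= 4:
        return "medium risk"
    return "high risk"

# Example: medium probability (2) and high impact (3) give PT = 6, i.e. high risk.
print(classify_risk(total_risk_score(2, 3)))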
3. Assessing internal control

To assess internal control, the risks associated with the objectives that the organization faces, and
which have already been measured, are considered.

The internal control assessment process involves the identification and analysis of the expected and
existing internal controls implemented by the entity to manage risks, and aims to establish the areas
where internal control does not work or works improperly. The result can be expressed on a scale of
three levels, as follows: compliant internal control, partially compliant internal control and
non-compliant internal control. Illustration:

INTERNAL CONTROL / If:

COMPLIANT: the implemented internal control system prevents the risk from materializing; the regulatory
framework for risk management and internal control is known; positive attitude towards internal
control/management and risks; internal control/management is integrated into the organization's
activities and actions; risk management ensures the identification of significant risks, their
assessment, the establishment of risk management measures and the monitoring of their effectiveness;
systematic reporting on the development of activities; objectives are met and violations are
appropriately remedied.

PARTIALLY COMPLIANT: the internal control system is implemented, but does not prevent the risk from
materializing; neutral attitude towards internal control/management and risks; internal
control/management is only partially integrated into the organization's activities and actions; the
risk management process ensures the identification and assessment of risks, but the risk management
measures are not always adequate and effective; systematic reporting on the development of activities,
but objectives are merely stated as met.

NON-COMPLIANT: the internal control system is not implemented; the regulatory framework for internal
control/management is not known; uncooperative or indifferent attitude towards internal
control/management; internal control/management is perceived as a separate activity, conducted in
parallel with the activities of the entity; risk management does not ensure the identification of
significant risks; systematic reporting on the development of activities, but the information is not
reliable.

Risk response involves establishing and implementing possible actions, selecting those appropriate to
the risk appetite and the costs required to implement risk management measures, by considering the
following:

 for objectives whose risks were classified as low risk and for which internal controls have been
assessed as compliant, the risk is residual, so the organization's exposure is below the accepted
level. In these cases, the organization accepts the risk as such, without intervening to treat it,
but will ensure permanent monitoring so that the exposure level does not change;
 for objectives whose risks were classified as medium risk or high risk and for which internal
controls have been assessed as partially compliant or non-compliant, the risk is inherent, so the
organization's exposure is above the accepted level. In these cases, the organization will
proceed to treat, avoid or transfer the risks (see the sketch below).
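
A minimal sketch of this selection logic (our own illustration, not a procedure prescribed by the
text):

# Illustrative sketch: selecting a risk response from the risk level and the internal
# control assessment, following the two cases described in the bullets above.

def risk_response(risk_level: str, internal_control: str) -> str:
    """risk_level: 'low' | 'medium' | 'high';
    internal_control: 'compliant' | 'partially compliant' | 'non-compliant'."""
    if risk_level == "low" and internal_control == "compliant":
        # Residual risk below the accepted level: accept it and keep monitoring.
        return "accept and monitor"
    # Inherent risk above the accepted level: treat, avoid or transfer,
    # depending on costs and on whether the risk can be controlled at all.
    return "treat, avoid or transfer"

print(risk_response("low", "compliant"))             # accept and monitor
print(risk_response("high", "partially compliant"))  # treat, avoid or transfer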

5. The structure of integrated risk management

Achieving the objectives of integrated risk management within an organization presupposes carrying out,
in a logical sequence, specific and required activities, as follows: setting the context, setting the
objectives, risk identification, risk assessment, setting a risk response, implementation of control
measures, information and communication, and monitoring.

5.1. Integrated risk management process

Integrated risk management is structured on the component elements of the COSO model, noting that the
control environment is defined by the internal environment and that risk assessment consists of setting
goals, identifying events, assessing risks and responding to risk.

5.1.1. The internal environment

This represents the theoretical and conceptual stage of the risk management process, which presupposes
an organizational culture of risk and knowledge of the risk management operating concepts, and of
whether they are implemented and known at all levels within the organization.
This stage involves carrying out specific activities to implement risk management within the
organization, as follows:

 establishing the organizational context, that is, the analysis of objectives, the operating
structure, the delineation of duties and responsibilities and the main conditions in which the
organization operates; this also sets the requirements for the future development of the
organization and the key risk exposures, including their characteristics and consequences;
 setting the risk management context, that is, the organization's conception of the risks it faces
and the level of acceptance in relation to risk exposure.

In relation to the context established for the implementation of risk management, the risk management
policy is designed and established, together with the objectives and tasks of implementation and the
methods and methodologies for the identification, evaluation, treatment and control of risk. At the
same time, the structure responsible for risk management, and its powers and responsibilities, are
determined, taking into account the fact that “management activity means achieving together the
objectives necessary for the purpose of the organization”.

Characteristic of this stage is the tone set by the organization on risk management, the methodology it
uses in risk management, the way the concepts of risk are communicated and the response of staff to the
risk management philosophy.

5.1.2. Objectives establishment

Implementing an integrated risk management system involves identifying and assessing the risks that
threaten the accomplishment of objectives.

These include risks related to input activities and actions, risks of the actual processes undertaken
within the organization, risks that prevent achieving the intended results and risks concerning the
impact of the activities carried out on organizational development.

Identification of the events that may affect the achievement of the expected results is only possible
if the objectives are set in advance and, under each one, the activities necessary to ensure their
implementation, and therefore the delivery of the expected results, have been defined.

If we consider the approach according to which performance is characterized as "achieving
organizational objectives regardless of their nature and variety", we believe that goals should be
established so as to represent a challenge for management and employees.

Management by objectives has a beneficial effect for the organization, it facilitates the exercise of
effective control over all activities, motivates employees to participate in the objectives and it creates a
coherent organizational framework which stimulates the collaboration between all structures within the
institution.

Control over the achievement of objectives is considered necessary for the management of the
organization and requires each manager to have established controls for each activity and objective for
which he has responsibility. At the same time, the impact of the likely risks that may jeopardize the
attainment of these objectives must be taken into account, so it is necessary to design and implement
appropriate risk management systems.
5.1.3. Identification of events

To ensure that activities are achieved as planned, the management must identify all events, internal
and external, that positively or negatively affect the objectives; depending on the probability of the
event and the type of consequences that can be produced in the organization, they are divided into
risks and opportunities.

Risk identification, depending on the time at which the process takes place, involves the following
stages:

 the initial risk identification, specific to newly established organizations and to those that have
not previously identified risks;
 the permanent identification of risks, specific to organizations that have implemented risk
management, necessary for assessing risks that have not previously manifested, for changes in
their circumstances, and for limiting the probability of their manifestation.

Effective risk management involves identifying risks at any level where there is a threat to the goals
and taking specific measures to limit the problems caused by these risks. Risks can be identified and
defined only in relation to the objectives affected by their materialization.

Risk identification can be achieved in two ways:

 self-evaluation of risks, carried out by each employee involved in the objectives and activities,
regardless of their hierarchical position, by monitoring the risks they face daily;

or

 establishing a special department within the entity, with responsibilities regarding the
evaluation of the operations and activities within the organization and, on this basis, the
identification of the risks that characterize the organization's objectives and the individual
goals set for the employees.

Applying either of the two ways of identifying risks on its own can have negative consequences for the
entity. First, each employee has a certain culture and training, which leads to a different
understanding of risk management, so that the monitoring carried out to identify risks differs from
employee to employee. Also, some employees may be more involved in current tasks and pay less attention
to risk management.

Second, establishing a specialized department with responsibilities for risk identification does not
always ensure effective risk management: however well prepared the staff of this department is, it is
very difficult for them to know in detail how the activities are carried out and therefore to identify
all the threats that may affect the achievement of objectives.

Practical and effective risk identification combines the two forms presented. Thus, employees from all
levels of the organization are responsible for identifying and reporting threats to the achievement of
objectives to the specialized compartment, and the compartment is responsible for assessing each
reported event and, if it finds that the reported event is a risk, for registering, evaluating and
treating it.
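
A minimal sketch of a risk register entry supporting this combined approach (the field names are
illustrative assumptions, not taken from the source):

# Illustrative sketch: any employee reports a threat, the specialised compartment assesses it
# and, if it is confirmed as a risk, registers, evaluates and treats it.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RiskRegisterEntry:
    objective: str                      # objective threatened by the reported event
    description: str                    # the threat as reported by the employee
    reported_by: str                    # any employee, at any hierarchical level
    confirmed_as_risk: bool = False     # set by the specialised compartment after assessment
    probability: Optional[int] = None   # 1-3 scale, filled in during evaluation
    impact: Optional[int] = None        # 1-3 scale, filled in during evaluation
    response: Optional[str] = None      # accept / treat / avoid / transfer
    notes: List[str] = field(default_factory=list)

entry = RiskRegisterEntry(
    objective="Timely monthly reporting",
    description="The reporting system has no tested backup",
    reported_by="employee, reporting department",
)
entry.confirmed_as_risk = True
entry.probability, entry.impact = 2, 3
print(entry)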

In identifying and defining risks, the following rules should be considered:

 a risk must be an uncertainty, so it must be determined whether it concerns a possibility or an
existing situation, which is an existing problem and not a risk;
 difficult issues identified should be assessed, as they can become recurrent risk situations;
 problems that will not occur are not risks; this means that the organization has control over them,
and analysing them may only consume resources;
 problems that are guaranteed to arise are certainties, and measures are to be taken as such, with
that certainty as a starting point;
 a risk should not be defined by its impact on the objectives, as the impact is the result of risk
materialization;
 risks are identified by correlation with the objectives; the aim is to identify those threats that
could lead to the failure of objectives;
 risks have a cause and an effect: the effect is the consequence of materialization, and the cause
is a circumstance which favors the appearance of the risk;
 the distinction between inherent and residual risk must be made. Inherent risk is related to the
objectives and exists before internal control measures intervene. Residual risk is the risk
remaining after the implementation of internal controls. The residual risk resulting from the
inherent risks cannot be controlled completely; whatever measures are taken, uncertainty remains.

As for opportunities, they are identified by employees within the organization regardless of their
position, and their capitalization is the responsibility of management, so that they are used to
increase the efficiency and effectiveness of activities.

5.1.4. Risk assessment

This step involves assessing the likelihood of risks materializing and the impact of the risk if it
were to occur, and classifying risks on three levels (high, medium or low) based on a risk analysis
matrix.

After the risk assessment process is completed, priorities are established so that high risks are
submitted to management for treatment.

The purpose of risk assessment is to establish a hierarchy of risks within the organization and to
establish the most appropriate ways of dealing with risk.

The risk assessment process involves consideration of the following:

 the probability of materialization of the risk stems from the fact that, at some point in the
progress of activities, there may be conditions that favor the emergence of the risk. In these
conditions, analysis of the causes which favored the emergence of the risk can lead to an
appreciation of its chances of materializing;
 the impact of the risk on the objectives represents the consequence of risk materialization, that
is, how the risk affected the achievement of the objective;
 risk exposure represents the extent to which the risk could be accepted by the organization if it
were to materialize;
 determination of the specific outcome involves risk assessment after the deployment of controls.
The result may be a risk exposure exceeding the limits of acceptance, which means that the risk is
inherent and requires the review of the existing internal control mechanisms, or an exposure below
the limits of acceptance, which means that the risk is residual.
The risk assessment is performed to identify the likelihood and impact of risk and thus to determine
how it can be managed.

Risk assessment must be an essential component and a constant concern of the organization's management,
as people change, regulations change, and objectives are reviewed or new ones established. All of these
contribute to the continuous change of the risk map, namely the emergence of new risks, the
modification of existing risks and changes in the level of risk that the organization accepts.

5.1.5. Reaction to risk

The information collected following the risk assessment is processed and measures to diminish risk
exposure are identified. To limit exposure, the organization should identify opportunities to reduce
the risk or the probability of the event or, if this is not possible, establish measures to eliminate
the risk.

Also, the organization should develop appropriate risk management criteria to reduce the likelihood of
the risk and its consequences. If risks are not well managed or costs are high relative to the benefits
of the activities, the criteria should be directed towards transferring or eliminating the risk.

The management of the organization, based on the risk assessment, will determine the response to risk,
as follows:

 accept the risks as they are, without mitigation measures and without establishing and implementing
internal control devices. Acceptance or tolerance of risk as the risk response strategy is
recommended for inherent risks with low exposure, below the risk tolerance.

After acceptance, the risk becomes residual and will be monitored regularly, to ensure that it does not
exceed the level of acceptance.

Setting the risk tolerance limit is the responsibility of management and involves establishing the
exposure that can be assumed, in conjunction with the costs and control measures to be taken. If risk
exposure is a probabilistic measure on a scaled dimension (a combination of probability and impact),
then the risk tolerance must respect the same features.

 treat risks, which means identifying and implementing appropriate control devices to limit the
probability of risk manifestation and keep the risk within acceptable limits.

In practice, for risk treatment the following categories of control instruments are used:

 preventive control tools are used when it is intended that an undesirable outcome should not
materialize, or to limit the risks that may materialize;
 corrective control tools are used when it is sought to correct the undesirable results of a
materialized risk, and represent a way to recoup losses;
 direct control tools are used when seeking to ensure a particular outcome, that is, when we intend
a particular risk that may materialize to be oriented in a direction tolerable for the
organization;
 detective control tools are used when the aim is to identify new situations arising from the
materialized risk.

If risks materialize, the cause lies in internal controls that either have not been implemented or were
implemented but did not function properly.
 avoid risks: risks that cannot be treated, and whose treatment costs are higher than the expected
results, will be eliminated or kept within reasonable limits by reducing or abolishing the related
activities;
 transfer risks: risks that cannot be controlled will be transferred to other units or organizations.
This option is especially beneficial for financial or economic risks. Risk transfer is a measure
that helps reduce the exposure of one functional structure of the organization, while another
functional structure or organization, capable of or specialized in managing such risks, takes over
the risk exposure.

The diversity of internal controls is considerable, covering all aspects of activities, and they can be
classified as: objectives, resources (means), information systems, organization, procedures and
supervision.

Objectives - group the internal control tools/devices implemented through measures aimed at: clearly
defining the objectives, breaking them down in a pyramid down to job level, ensuring their convergence
and measurability, associating measurable outcome indicators with them and monitoring them through the
information system.

Means - group the internal control devices/tools implemented through measures ensuring the adequacy of
resources to the objectives.

Information system - groups the internal control devices/instruments put into operation to achieve a
complete, reliable, comprehensive and appropriate information and steering system.

Organization - groups the internal control devices/instruments resulting from the application of
measures aimed at correcting anomalies detected in the procedural and structural organization, anomalies
which constitute circumstances favoring the manifestation of risk.

Procedures - tools/internal control mechanisms which control the risks arising from the lack of
processes and rules to be observed while activities are carried out.

Supervision - groups the internal control instruments/devices designed to control the risks arising
from the abnormal exercise of hierarchical control. Such internal control tools are aimed at the
management style of decision-makers at different levels.

5.1.6. Risk control

Risk control represents the policies, procedures, controls and other management practices established
by the organization to ensure prudent risk management and the implementation of activities as intended.
To control risks is also to ensure that objectives are met and significant risks are properly managed.

To prevent conflicts, it is recommended to ensure the independence of risk control from the functional
structures of the organization that run the identified risk. Any measure taken to control risks should
be placed within the "internal control system", which is responsible for directing its implementation.

Risk control requires the functional structure where a risk exists to carry out continuous monitoring
of risks and appropriate mitigation of the probability of manifestation or of the risk impact.
Otherwise, the risks are uncontrollable and there are no means of intervention to limit the probability
and impact of the risk.
5.1.7. Information and communication in the supervision of risk

Activities are initiated by the entity's management to communicate to employees their responsibilities
regarding the identification and monitoring of risks.

At the same time, for employees to ensure proper risk monitoring in accordance with the requirements of
the risk management process established within the organization, management must provide appropriate
and timely information so that they can accomplish the tasks set.

5.1.8. Risk monitoring and supervision

Risk monitoring involves reviewing and checking whether the risk profile changes following the
implementation of internal controls. Review processes are implemented to assess whether risks persist,
new risks have emerged, the impact and likelihood of risks have changed, internal controls are
effectively put into practice, or risks should be redefined.

Risk monitoring also involves keeping track of the strategies applied to risk management, of their
implementation and of the evaluation of performance after implementation. Risk-sensitive areas are
monitored continuously, and the results are sent back to the initial stage for reconsideration, for the
identification and implementation of adequate internal control tools, or for the application of other
ways to reduce exposure to risk.

The maintenance of the risk register, which contains summary information and the decisions made in risk
analysis, attests that the organization has introduced a risk management system and that it works.

The process of risk identification, assessment and treatment must ensure that risk analysis is carried
out periodically and that mechanisms are established for managing information on new or emerging risks
and on changes in already identified risks, so that these changes are addressed properly.

Risk monitoring is necessary to track the evolution of risk profiles and to ensure that risk management
remains appropriate; it is achieved by revising the risks.

Risk monitoring is carried out through internal control, which must be flexible, developing appropriate
control tools in areas where risk is not sufficiently controlled and reducing those instruments where
risks are controlled excessively.

Risk management must consider the internal control system implemented in the organization, both the
expected internal controls and the existing ones; taking their sufficiency into account, it identifies
the risks, subjects them to evaluation and, based on the results, establishes the internal controls
that need to be implemented in order to limit exposure.

5.2. Internal and external environment and its influence over the integrated risk management

The implementation of a risk management system within the organization requires establishing
relationships both within the organization and beyond. The people responsible for implementing
integrated risk management also maintain relationships with the entity's management and with the staff
of the entity's functional structures.
The entity's management decides on the risk management strategy adopted in the organization and
approves any measure relating to risks. In this regard, it is regularly informed of the results of risk
management and carries out reviews in order to establish the ways in which risk management is
performed.

Those responsible for risk management in the organization communicate to all employees the risk
strategy and policy promoted, as well as any decision taken by management on risks, and carry them out.
They receive from the functional structures any information on risks, analyse and process it, make
proposals to management on the appropriate measures to be taken and, depending on their nature,
implement these measures.

Communication of risks, and of how they are required to be managed, flows from the management level
down to the execution level and shall ensure that:

 the risk strategy and all the risks associated with the objectives are known by all the staff
involved in achieving the objectives;
 the staff of the organization are aware of the risks they assume and of the system for monitoring
them.

The nature of the relationships established in the risk management process is a functional one: those
responsible for risks have the authority to transmit to the functional structures of the entity
information on the risk strategy and on the risk management process implemented. At the same time, they
request from these structures information about the identification and management of risks.

Increased confidence in the risk management system promoted and implemented at organizational level is
achieved by:

 developing a clear risk management strategy;
 sufficient support at all levels to ensure risk management;
 developing simple risk management systems;
 communication with all parties involved in risk management;
 communication, balanced relations and cooperation between the different functional structures of
the organization;
 improving the risk assessment.

The entity's activities are influenced by several external factors, which are by nature threats that
affect achievement. The integrated risk management system must identify the nature of the risks posed
by these threats, analyse and evaluate them and determine the response to risk. In some cases,
establishing a risk response does not bring the risk within the accepted appetite, as risk reduction
measures depend on the activities and objectives of the organization.

To ensure acceptable levels of risk, a system of relationships with the various external factors should
be established which, once put in place, ensures the reduction of exposure.

Building and implementing an integrated risk management system helps direct resources to the risks
which particularly affect the activities and supports the organization in achieving its objectives.

Impact of integrated risk management on the organization

To ensure good risk management it is important to provide assurance that each employee understands
properly the risk management process within the organization and knows his role and responsibilities
in this process.

The risk management process does not only involve the identification and elimination of the negative
events that may affect the carrying out of activities if the risk occurs; it also aims to analyse and
evaluate risk against the risk appetite and, accordingly, to design and implement control devices that
limit the probability of the risk. It provides management with a “framework approach to effective risk
management and its possibilities”.

The objective of risk management is to identify risks and the causes that generated them and to
establish the appropriate control devices to reduce their level, at the lowest cost.

By implementing an integrated risk management system shall ensure:

 the development of the strategy, objective setting and risk management mechanisms considering the
risk appetite. The organization will define its development strategy in relation to the risks it
faces and to how they are managed, taking into account the limit of the appetite to which it may be
exposed. The objectives depend on the planned development requirements and on the performance
levels established, but the risks to the objectives and the costs necessary to manage these risks
must also be considered.
 the development of a framework for the level of response to risk. This involves performing analyses
and diagnostics in order to determine the level of risk to which the organization can be exposed
and, considering the results obtained, proceeding with acceptance, treatment, avoidance or transfer
of the risk.
 improving the expertise for identifying events that threaten the organization and making decisions
efficiently and effectively. Applying an integrated risk management process will allow the
evaluation of risks by providing a link between the objectives, the functional departments of the
organization and the components of risk assessment. Carrying out this process will help increase
the expertise in knowing the events facing the organization, the nature of the risks threatening
the objectives and the nature of the opportunities.
 identifying and managing the risks that affect the achievement of objectives and of the planned
results, not the risks of every single operation or activity. The integrated risk management system
is not fragmented, identifying and assessing risks in isolation, only at the level of an operation
or action, but is a system for identifying and addressing risks to the objectives in an integrated
way. This ensures that, by implementing a single control measure, more risks can be managed. It
also allows knowledge of the risks affecting achievement, which ensures that decisions are
substantiated and consider the risk exposures.
 identifying opportunities by monitoring events, and capitalizing on them with benefits in
increasing the efficiency and effectiveness of activities. The integrated risk management system
takes into account, in analysis and evaluation, the events that may affect the achievement of
objectives. These can be negative events, which are risks, and positive events, which are
opportunities.
 appropriate use of capital. Knowledge of the risks the organization faces in achieving its
objectives allows management to direct decisions towards those activities where the risks are well
managed, thus ensuring a better use of available resources.

The integrated risk management model has some limitations due to errors, the circumvention of checks
and human judgment in decision making, which can sometimes be wrong. These limitations make it
impossible to provide absolute assurance that objectives will be achieved.
At the same time, the responsibility for designing and implementing appropriate risk management belongs
to the organization's management, while the rest of the staff must support the risk management
philosophy and apply the established risk management rules, each in their own area of responsibility.

The classic risk management process, adopted and implemented by most organizations, is a fragmented
one, in the sense that each functional structure within the organization manages its own risks
independently. Thus, each compartment, according to the procedures and methodologies it has developed,
identifies and manages the risks associated with its objectives independently, without a coordinated
approach and without taking into account the interdependencies of risks within the entity.

The advantages of implementing integrated risk management

Integrated risk management has mechanisms to help the recovery of the organization in the situation of
work stoppage, major incident or disaster, by maintaining minimum levels of business critical
functions.

The main feature of the integrated risk management system is that it integrates the risk monitoring
mechanisms of the organization's functional departments and its culture, with a focus on the risks
associated with strategic objectives. The emphasis is also on monitoring, controlling and minimizing
risk.

The advantages of integrated risk management can be summarized as follows:

 ensuring the correlation of the organization's strategy with the risk appetite. The risk appetite is
the limit up to which risk can be accepted and to which the organization may be exposed. Management
weighs the options available for achieving its goals and selects the most advantageous one in
conjunction with the risk profile;
 helping to improve decisions about risk treatment. Management identifies the options for limiting
risk, assesses their correlation with the risk appetite and with costs, and determines the
appropriate risk management measures;
 the integrated approach to risk allows a single internal control measure to handle several risks,
or one risk that is found in several functional structures of the organization;
 the capitalization of opportunities. Integrated risk management takes into consideration not only
events of a negative nature, which are risks, but also events of a positive nature, which are
opportunities;
 the improvement of management decisions. Knowledge of the risks the organization faces and of the
level of risk exposure contributes to a more realistic analysis and substantiation of managerial
decisions. Substantiated decisions can be made by considering the following requirements: the
existence of one or more objectives to be achieved; the existence of several alternatives; the
inclusion of economic factors in the decision-making plan; making the decision; the unity of
decision and action; a clear and optimal fit between

There must be a direct relationship between the components of integrated risk management and the
objectives of the organization. The analysis and assessment of risks, following the components of
integrated risk management, namely the internal environment, risk identification, analysis and
assessment, risk treatment, risk control, information and communication, and risk monitoring, is done
for each functional structure of the organization and for each objective.
By applying this method, it is shown that risks are assessed and treated for all the objectives of the
organization, regardless of their type (strategic, operational, reporting and compliance) and
regardless of the compartment or structure in which they are defined.

Meanwhile, the integrated risk management process represents an instrument that allows a coordinated
approach across the organization to identify and analyze the mechanisms of risk whose initial starting
point is the strategic dimension.

Integrated risk management is a powerful tool that enables the management of the organization to have
a picture of the risks affecting the achievement of strategic and operational objectives, and provides at
the same time, leverage for the foundation and management decision making.

The process of identification, analysis and assessment takes into account the events of the organization,
which can take negative shape and are associated with risks or positive shape and are associated with
opportunities.

Chapter 3: "Challenges and Opportunities in UAV Deployment"

[mh] Regulatory Hurdles and Airspace Integration

[h]Risk-Based Framework for the Integration of RPAS in Non-Segregated Airspace

The integration of Remotely Piloted Aircraft Systems (RPAS) in non-segregated airspace is one of the
most complex and demanding challenges for the aviation community in the years ahead. According to the
European RPAS Steering Group, the beginning of RPAS integration in non-segregated airspace is expected
by the 2025 time frame. This aim requires a broad and structured analysis of the current situation as
well as of the potential solutions to be implemented. In this sense, the development of a risk-based
framework to ensure the safe integration of RPAS is crucial for its achievement.

RPAS operation in upper airspace does not require major technological developments, but it demands a
detailed analysis of the safety of their integration with conventional aircraft. The European Aviation
Safety Agency (EASA) and the Federal Aviation Administration (FAA) require that the integration of RPAS
must not imply a reduction of current safety levels. This requirement means that further research is
needed to accomplish this goal. A new framework will be compulsory in the future to take the
operational features of RPAS into account. One of the goals of this framework is to allow assessing the
safety of RPAS operation jointly with conventional aircraft.

Could RPAS fly safely in non-segregated airspace? The complexity of the answer does not lie in a yes or
no question, because the answer must be yes; instead, we must focus on how. Currently, conventional
aircraft fly along pre-defined routes that are modelled according to air traffic flow patterns,
although there are several airspaces based on free-route operations. RPAS must therefore adapt to the
current airway network and current air traffic patterns. One of the main concerns is that RPAS
operational patterns can differ from those of conventional aircraft. Although RPAS could be assumed to
be modelled as slow conventional aircraft, there are uncertainties about communications, navigation and
surveillance issues that must be analysed in advance.

Due to this lack of operational and technical knowledge about RPAS operation, regulators and Air
Navigation Service Providers (ANSPs) seek to introduce RPAS with minimum interaction with conventional
aircraft. The problem arises when both airspace users operate jointly in the same scenario, where the
interaction between them cannot be avoided. The first solution to this problem is the segregation of
specific air traffic volumes for the different airspace users. However, this segregation should only
focus on specific flight levels (FLs) or airways, as airspace cannot be completely segregated into
different air traffic volumes for RPAS and conventional aircraft. One of the expected outcomes of this
work is to appraise airway or FL segregation for RPAS.

The most complex assessments of RPAS integration focus on three research areas. The first deals with
the global problem of risk management. Clothier et al. developed a framework for structuring the safety
case of RPAS operation. Moreover, various regulators assessed the primary difficulties that must be
solved before RPAS operation. The second research area analyses the risk imposed by a single flight of
one RPAS in terms of the number of casualties. Several authors developed different risk models to
calculate which kinds of populated areas are riskier for pedestrians on the ground. The third research
area involves the development of collision/conflict-risk models for the integration of RPAS. There are
several studies about RPAS collision avoidance (similar to conventional aircraft situations), but few
of them focus on conflict risk. Conflict risk is a prior indicator of collision risk. However, none of
those studies responds to either how the RPAS integration should be carried out or where RPAS could fly
in non-segregated airspace.

With the goal of responding to the above research questions, it is necessary to assess the safety level
of the airspace and to develop a specific methodology. Manual 9689 of the International Civil Aviation
Organization (ICAO) sets out that airspace planning requires a thorough analysis of every factor that
could affect safety. Other authors have claimed the need for airspace designs fulfilling levels of
safety under different operational features. Different models were developed to evaluate the collision
risk based on airspace geometry. A step further, Netjasov developed a conflict-risk model to assess the
level of safety, including air traffic flows. However, there is no single methodology that allows
analysing the airspace risk state for the integration of RPAS.

Therefore, the main goal of this research is to develop a risk-based framework to provide geographical
and temporal restrictions for the safe integration of RPAS. The risk-based framework is split into two
different temporal horizons: design and operation. It evaluates the state of the scenario using
different risk-based indicators, which rely on geometrical and operational features of the airspace.
The risk-based indicators sort airways and crossing points to detect airways (or flight levels): (1)
where RPAS can operate because their integration is safe, and (2) when the operation of RPAS should be
planned depending on a particular schedule of conventional aircraft. A further aim is to set out the
pillars of a future decision-making process for ANSPs.

The rest of this work is structured as follows. Section 2 presents the structure of the risk-based
framework and defines the different types of variables and indicators that must be considered. The
risk-based indicators constitute the main outputs of the methodology and permit assessing the viability
of the RPAS integration. Section 2 also describes the methodology for the design phase and the
operational phase. Section 3 presents the case study, applies it to one Spanish airspace volume and
discusses the results. Lastly, Section 4 summarises the main contributions and further work.

Risk-based framework

The risk-based framework aims to analyse the safe integration of RPAS in non-segregated airspace. In
non-segregated airspace, both conventional aircraft and RPAS must operate together. The problem
arises when RPAS operate with different technical and operational features than conventional aircraft.
Then, the integration of RPAS focuses on reducing their impact on conventional aircraft; in other
words, RPAS must adapt themselves to current operations reducing their impact on current aviation.
The risk-based framework is split into two phases depending on the operational information available:

 design phase: this phase aims to appraise the impact of RPAS in non-segregated airspace at the
strategic stage. It can be applied both for design purposes and for analysing the operation of
one particular scenario. This phase works with basic information about an airspace volume: airway
structure and air traffic flow; and
 operational phase: this phase addresses a temporal horizon in which a 1-hour schedule of
conventional aircraft is evaluated. The goal is to analyse how the introduction of RPAS affects
one specific schedule.

2.1 Design phase

This phase evaluates the way the integration of RPAS affects the airspace at a design or strategic
stage. Thus, the analysis covers input variables such as the morphology or geometry of the airspace and
the main characteristics of the air traffic flow that operates in it. The main results of this phase are:

 thorough knowledge of the current airspace state, where it is intended to integrate RPAS jointly
with conventional aircraft; and
 identification of the airways and flight levels (FLs) that allow their segregated use by RPAS.
Segregated use implies that RPAS can fly without affecting conventional aircraft.

Design-phase indicators provide information about the state of the airways and the crossing points.
They are the most elementary components to analyse the current operational situation of the airspace.
These indicators separately evaluate the morphological and geometrical features of the airspace (static
indicators) and their operation (dynamic indicators).

2.1.1 Static indicators

Static indicators provide information to analyse the current state of the airspace based on its
morphology and geometry. The goal is to perform a prior analysis setting out the airspace design.
Static indicators focus on the basic airspace components: airways and crossing points.

2.1.1.1 Static indicator of airway complexity

The complexity of an airway is characterised by the sections that are exposed to risk. The risk on an
airway is modelled through the locations of the airway that are exposed to conflict with aircraft on other
airways. These sections are denoted as critical sections (di,j) around the crossing point. The static
indicator of airway complexity is the ratio between the total length of the airway exposed to conflict
(the sum of its critical sections) and the whole length of the airway (Li). Each critical section di,j
depends on the angle αi,j between the two airways i and j that intersect at the crossing point and on the
separation minima Smin (typically 5 Nautical Miles, NM).

2.1.1.2 Static indicator of crossing-point complexity

The complexity of a crossing point depends on the number of intersections between the airway pairs
that coincide at it and on the angles between those airway pairs. Combining both factors, the static
indicator of crossing-point complexity is calculated as the ratio between the sum of all critical sections
at the crossing point and the elementary critical section. The elementary critical section is the critical
section calculated for a crossing angle of 90°, which provides the minimum critical section.
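The exact expressions are not reproduced in this text, so the following is only a minimal sketch of how
the two static indicators could be evaluated. It assumes the critical section takes the form
di,j = Smin / sin(αi,j), which is consistent with the remark that a 90° crossing yields the minimum,
elementary critical section; that functional form, the normalisation used for the crossing-point
indicator, and the Python helper names are assumptions for illustration, not the original formulation.

import math

S_MIN_NM = 5.0  # en-route separation minima assumed in the text (5 NM)


def critical_section(alpha_deg, s_min=S_MIN_NM):
    # Length of airway exposed to conflict around a crossing point.
    # Assumed form d_ij = s_min / sin(alpha_ij); a 90-degree crossing
    # gives the minimum (elementary) critical section, equal to s_min.
    return s_min / math.sin(math.radians(alpha_deg))


def airway_complexity(crossing_angles_deg, length_nm):
    # Static indicator of airway complexity: share of the airway length L_i
    # that lies inside critical sections.
    return sum(critical_section(a) for a in crossing_angles_deg) / length_nm


def crossing_point_complexity(pair_angles_deg):
    # Static indicator of crossing-point complexity: sum of the critical
    # sections of the airway pairs at the point, normalised by the
    # elementary critical section (90-degree crossing).
    elementary = critical_section(90.0)
    return sum(critical_section(a) for a in pair_angles_deg) / (len(pair_angles_deg) * elementary)


# Example: an 80 NM airway crossed at 30 and 60 degrees.
print(airway_complexity([30.0, 60.0], 80.0))     # about 0.20
print(crossing_point_complexity([30.0, 60.0]))   # about 1.58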

2.1.2 Dynamic indicators

Dynamic indicators focus on the operational features of the airspace. They allow the operational
characteristics of the air traffic flows to be analysed in order to select the airways that favour or inhibit
the RPAS integration.

2.1.2.1 Dynamic indicator of airway density

This indicator provides information about the number of aircraft that operate on an airway. It relates the
real airway density (Qi) to the theoretical maximum air traffic flow through it (Qmaxi), which depends
on the average speed of aircraft in airway i.

2.1.2.2 Dynamic indicator of crossing-point density

Taking into account the operational characteristics of the airspace, the dynamic indicator of crossing-
point density provides a measure of the number of aircraft that pass through the crossing point.

2.1.2.3 Dynamic indicator of airway conflict

This indicator evolves from the previous dynamic indicators with a different goal: δ and ϵ are relative
counters of the air traffic through the airways and the crossing points, while ζ is the dynamic indicator
of airway conflict, which combines them. This indicator provides information about the possibility of
conflict depending on the airspace operational features.

Moreover, this indicator also works as a reference value to analyse the air traffic segregation by
airways and FLs. Therefore, the total value for the whole airspace is calculated as the sum of the
conflict indicators ζ of every airway.
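The precise expressions for δ, ϵ and ζ are not reproduced in this text, so the sketch below only
illustrates the bookkeeping. In particular, the way Qmax is derived from the average speed and the way
ζ combines δ and ϵ are assumptions chosen for illustration, not the original formulas; only the airspace
total, computed as the sum of the airway conflict indicators, follows directly from the description
above.

S_MIN_NM = 5.0


def airway_density_indicator(q_i, v_avg_kt, s_min=S_MIN_NM):
    # delta_i: real flow Q_i (aircraft/h) over the theoretical maximum flow.
    # Assumption: at average speed v (kt), at most v / s_min aircraft per hour
    # can fit on the airway while respecting the separation minima.
    q_max = v_avg_kt / s_min
    return q_i / q_max


def crossing_density_indicator(deltas_of_converging_airways):
    # epsilon_p: relative counter of traffic through a crossing point, taken
    # here (as an assumption) as the mean density of the converging airways.
    return sum(deltas_of_converging_airways) / len(deltas_of_converging_airways)


def airway_conflict_indicator(delta_i, epsilons_on_airway):
    # zeta_i: possibility of conflict on airway i; assumed here to combine the
    # airway density with the densities of the crossing points it traverses.
    return delta_i * sum(epsilons_on_airway)


def airspace_conflict_total(zetas):
    # Total value for the whole airspace: sum of every airway conflict indicator.
    return sum(zetas)


# Example: one airway carrying 12 aircraft/h at 450 kt, crossing two points.
d = airway_density_indicator(12.0, 450.0)     # 12 / 90 = 0.13
z = airway_conflict_indicator(d, [0.2, 0.1])
print(airspace_conflict_total([z]))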

2.2 Operational phase

The operational phase focuses on a different temporal horizon from the design phase. It is characterised
by the disappearance of generic air traffic flows (modelled by airway density and average ground
speed) and instead relies on a one-hour schedule. This schedule of air traffic fulfils the operational
characteristics of the scenario, but each aircraft has its own characteristics (speed and entry time).
Besides, this concept will rely on further work based on 4D trajectories. The operational phase allows
the introduction of RPAS into specific schedules. Apart from analysing how this introduction affects
the risk indicators, this phase provides the following results:

 in-depth knowledge of the path evolution from the conventional aircraft schedule;
 safety assessment for the RPAS integration for different schedules based on the risk indicators;
and
 identification of airways and FLs that favour or inhibit the introduction of RPAS based on the
airway availability.

Operational-phase indicators provide information about the whole airspace. In this way, they permit
appraisal of the airspace situation resulting from the RPAS integration. These indicators determine
whether the integration of RPAS is feasible and which temporary restrictions are required.
2.2.1 Number of conflicts

Nc is the number of times that the separation minima are infringed (5 NM in European en-route
airspace); that is, the number of aircraft pairs for which the minimum distance between them, min(s(t)),
falls below the separation minima.

2.2.2 Conflict severity

Conflict severity (θ) is an indicator of the seriousness of a conflict, as not every conflict implies the
same severity. Conflict severity is calculated by combining the conflict time span (τ) with the minimum
distance reached by the aircraft pair.
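The severity expression itself is not reproduced in this text. The sketch below therefore counts
conflicts exactly as described above, but uses an illustrative weighting for severity (longer and closer
infringements score higher); that weighting, along with the callsigns in the example, is an assumption,
not the original formula.

S_MIN_NM = 5.0  # European en-route separation minima


def count_conflicts(min_separations_nm, s_min=S_MIN_NM):
    # N_c: number of aircraft pairs whose minimum distance min(s(t)) falls
    # below the separation minima.
    return sum(1 for d in min_separations_nm.values() if d < s_min)


def conflict_severity(tau_s, min_distance_nm, s_min=S_MIN_NM):
    # theta: combines the conflict time span tau with the minimum distance
    # reached by the pair. Illustrative weighting, not the original formula.
    return tau_s * (s_min - min_distance_nm) / s_min


# Example: two aircraft pairs, only one of which infringes the 5 NM minima.
pairs = {("ACA101", "RPAS7"): 3.2, ("IBE440", "RPAS7"): 6.1}
print(count_conflicts(pairs))          # 1 conflict
print(conflict_severity(90.0, 3.2))    # 90 s * (5 - 3.2) / 5 = 32.4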

2.2.3 Airway availability

This indicator aims to calculate the risk exposition of an aircraft flying an airway. It is called airway
availability because it links the time span during which an aircraft can safely fly an airway with the
time span during which it can suffer a conflict. By knowing which airways present higher availability
(the time span during which an aircraft can fly without suffering a conflict), the airways that favour or
inhibit the integration of RPAS can be identified.

The λi indicator is based on the Temporary-Blocking Windows (TBWs) concept. The TBWs are
calculated for every aircraft pair, i.e. the time spans during which the airways are blocked because a
separation minima infringement would occur. The primary features of the TBWs are:

 the time duration of the TBWs depends on the crossing angle of the airways and the ground
speed of the aircraft involved; and
 the time location of the TBWs depends on the entry times of the conventional aircraft and RPAS,
the length of the airways, the ground speed, and the distance between the airway entry point and the
crossing point.

λi is calculated from the size of the overall TBWs (dBW) that affect airway i. Therefore, the risk
exposition of an aircraft relates the non-available time (tNAi) to the exposition time (texp), which herein
corresponds to the one-hour schedule. A smaller TBW implies a larger airway availability, which
reduces the risk exposition. Moreover, airway availability is a novel indicator defined in this work, and
there is no previous knowledge about the threshold it should take. The authors therefore propose a
division into four stretches (0–25%, 25–50%, 50–75% and 75–100%). Airways with an availability
greater than 50% are airways where RPAS could be included.
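A minimal sketch of the TBW bookkeeping follows. It assumes that availability is computed as one
minus the blocked fraction of the one-hour exposition time, with the non-available time obtained by
merging overlapping blocking windows; this assumed form and the function names are illustrative, not
the original formulation.

def merge_windows(windows):
    # Merge overlapping Temporary-Blocking Windows, given as (start, end) in seconds.
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


def airway_availability(tbws, t_exp=3600.0):
    # lambda_i: share of the exposition time (one-hour schedule) during which
    # the airway is not blocked by any TBW. Assumed form: 1 - t_NA / t_exp,
    # where t_NA is the total blocked (non-available) time on the airway.
    t_na = sum(end - start for start, end in merge_windows(tbws))
    return 1.0 - t_na / t_exp


# Example: two blocking windows totalling 15 minutes over a one-hour schedule.
print(airway_availability([(600.0, 1200.0), (2000.0, 2300.0)]))  # 0.75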

Chapter 4: "Defense Industry Perspectives and Market Trends"

[mh] Challenges for the Promotion of Innovation and R&D in Latin America

Although Latin America can be considered the second emerging region in the world after Asia, its
growth and the reduction of its gap with respect to the developed world remain uncertain. By 2022,
GDP per capita reached an average of US$7,950, much lower than in more industrialized countries,
and in the last ten years it has not shown significant growth. Even in representative countries of the
region such as Brazil and Argentina, there is a certain tendency towards decline that is worrying in
general terms. This condition differs from other countries with industrialized economies, or even
emerging Asian economies, which are showing signs of sustained growth.

Figure. Historical comparison of GDP per capita. Source: Adapted from.

Latin America is marked by what different authors have historically called technological dependency,
that is, a vocation for technology transfer or the importation of technological services. This dependency
reinforces the region's original vocation as a supplier of raw materials. In fact, its export economy has
an industrial and productive composition based on the extraction of natural resources, mainly in the
mining and oil, manufacturing, and agricultural sectors.
Although governments have focused on developing policies for industrial promotion, innovation, and
productive diversification in recent years, the results in terms of growth are still not clearly visible. The
latter can be visualized with the economic complexity index (ECI), which measures the relative
knowledge intensity of an economy or a product.

As shown in the figure, some Latin American economies show difficulties in diversifying their
productive and export capacity, and some even show negative trends. Notable differences can also be
seen between countries: Peru, for example, still shows a significant gap even with respect to other Latin
American countries because of the low level of sophistication of its exportable products.

Figure. Historical comparison of economic complexity index (ECI). Source: Adapted from.

Like all countries, Latin America experiences the impact of global events and faces significant modern
challenges. The COVID-19 pandemic raised poverty levels, and there are further challenges related to
climate change, deforestation, social inequality, the migratory crisis, and growing violence. All this
adds to a historical stigma of corruption and political instability, reducing confidence and the
possibility of long-term policies.

Boubakri, in his study “Does national culture affect corporate innovation?”, based on an analysis of a
comprehensive innovation database around the world, concludes that the probability that a company
innovates is greater in individualistic, indulgent, and long-term-oriented societies. Sagasti, in his book
“Science, Technology and Innovation: Policies for Latin America”, quoted the prominent Latin
American thinker Jorge Sábato: “It takes fifteen years to create a world-class research institution, but
only two years to destroy it”. Sagasti compared efforts in Latin America to promote innovation to the
tragic fate of Sisyphus, the mythical and cunning king of Corinth who tricked the gods more than once
and was punished by having to roll a boulder up a mountain, only for it to roll back down whenever it
neared the top, forcing him to start again, eternally. According to this author, Latin America is an
example of Sisyphus since, after having designed and implemented policies with considerable effort
(such as investment in science and technology, institution-building, and training scientists and
engineers), it “has only seen them disappear almost without a trace”.

Box 1.

Political instability in Latin America and its impact on the promotion of innovation.

Despite this, Latin American countries have actively sought new and better ways to promote industrial
and technological development. In recent years, interest has been focused on promoting science,
technology, and innovation as a development strategy. Different types of initiatives are implemented
with their own nuances and particularities in each country and region.

Innovation in Latin America

Innovation and entrepreneurship have gained strength in Latin America during the last decade.
Technological startups have tripled in number since 2017 and have multiplied their estimated value
from USD 7 billion to USD 221 billion, reaching twenty-seven unicorns (private startups valued at
more than one billion dollars) in 2021.

Despite this, the region is still characterized by low levels of public and private investment in research
and development (R&D). As can be seen in the figure, by 2020 most countries were still far from
reaching R&D investment of 0.5% of GDP, a fact that contrasts with the average of 2.3% in OECD
countries during the last decade and with the notable advances of some Asian countries such as
Thailand. Moreover, in several countries the registered investment comes primarily from public funds,
with minimal private funding.

Figure. R&D expenditures as a percentage of GDP. Source: Adapted from.

The latter implies that the development of innovation and technological industries continues to be a
difficult challenge. Added to this is the fact that, in the Latin American context, markets are not well
developed and the business climate is deficient, with limited inventiveness, a lack of resources and
scientific and technological infrastructure, weak institutional development, and a weak policy framework.

The process of promoting innovation in Latin America

Policies and initiatives to promote innovation in Latin America have changed and evolved. The table
below summarises the most relevant aspects that marked the promotion of innovation in Latin America.
Although general issues were experienced in a similar way across the region, the process must consider
the particularities of each country or region.

Phase: Science push
Period: 1950–1970
Main characteristics: Emergence of the promotion of science, technology, and innovation.
 Linear conception of the relationships between science, technology, and innovation.
 Support for the scientific community and establishment of research councils.
 Creation of scientific research centers and institutes, especially universities.

Phase: Industrial development policy and regulation of technology transfer
Period: 1965–1985
Main characteristics: Industrial policy and technology transfer.
 Aggressive support policies for the national industry and import substitution.
 Focus on technology transfer: emphasis on selecting appropriate technologies for the conditions of
the region and concern for reducing some negative impacts.
 Creation of public entities to regulate foreign direct investment, license agreements, and intellectual
property rights.

Phase: Policy instruments and system approaches
Period: 1970–1995
Main characteristics: Scientific-technological development as a possibility.
 Emphasis on the implementation of science and technology policies (policy instruments).
 Analysis of interactions and overcoming of inconsistencies between explicit and implicit science and
technology policies.
 A comprehensive perspective on the actors that participate in the generation, import, demand, use,
and absorption of science and technology.

Phase: Adjustment and transformation of science and technology policy
Period: 1980–2000
Main characteristics: Focus on market liberalization, neutrality, and economic stability; slowdown in
innovation and STI.
 Reduction of investments in science and technology.
 Displacement of technological research and development in companies.
 Improvements in the organization of production to increase productivity.

Phase: Innovation and competitiveness systems
Period: 2000–2020
Main characteristics: Transition and strengthening of national innovation systems; market failure
approach.
 Emphasis on business technological behavior, the public policy environment, and the absorption of
knowledge and technology from abroad.
 Support for companies to improve their competitiveness and face their entry into markets.
 The emergence of the market failure correction approach.
 Financial mechanisms to support innovation, especially the focus on innovation developed by
companies (seed funds, competitive fund schemes).

Phase: Vertical approach, territorial approach, and specialization
Period: 2010–present
Main characteristics: Inclusion of vertical policies for innovation, territory, and specialization.
 Inclusion of the concepts of decentralization and territoriality for the promotion of innovation.
 Implementation of regional innovation and specialization plans.
 Implementation of vertical policies for the promotion of certain sectors and areas of specialization as
centers of excellence.
 Implementation of regional innovation agencies.
 Implementation of regional funds for the promotion of innovation.

Phase: Focus on the development of the venture capital market and acceleration of startups
Period: 2015–present
Main characteristics: Focus on innovation and entrepreneurship ecosystems; focus on the venture
capital market.
 Investments in the promotion of intermediate institutions that foster technological entrepreneurship.
 Boost for the development of the venture capital market.

Table. Evolution of innovation promotion policies in Latin America.

Source: adapted from.


These particularities are due to external factors such as changes and innovations in public policies
implemented in more industrialized countries, which are seen as reference models. On the other hand,
internal factors can be: economic, such as the productive vocation focused on raw materials;
technological, marked by human capital and technological infrastructure; political and even cultural,
for example, the recognition of a weak, unstable, and short-term policy and the historical aversion to
the application of sectorial policies of industrial support.

This aversion emerged after the 1970s and 1980s, when several Latin American countries devoted
efforts to implementing an industrialization process through aggressive import substitution and
industrial development policies. Although this type of policy gave rise to high-level activities that
contributed to initial industrial development, the absence of sanction mechanisms allowed a wide range
of inefficient sectors and activities to proliferate. In the face of market liberalization, international
competition, and the lack of incentives typical of the 1990s, these ended up leaving the market and
producing a severe economic and political crisis. These events explain a recurrence in Latin American
innovation policy of approaches based purely on demand and on addressing market failures from a
transversal perspective.

In analyzing this historical process, reference cases can be identified that show important positive
advances. Among these, Chile stands out, with historical experience in implementing different sets of
innovation promotion policies: first with a horizontal emphasis focused on correcting market failures
(1980 to 2000), and later with a more vertical emphasis focused on certain industrial clusters. Among
the horizontal policies, loan guarantees for small businesses, subsidies for new exports, and financing
programs for innovation (Innova Chile) operated by CORFO were applied. On the other hand, vertical
(sectoral and target-oriented) policies were implemented, such as programs to attract foreign direct
investment (FDI) in high technology and the operation of Fundación Chile, through its technology
transfer and business generation model.

Some Latin American institutions have maintained an important role in the construction and
articulation of technological innovation systems, for example, the aeronautical technical center in
Brazil, for the gestation of the aeronautical industry, or SENASA (National Service of Health and Food
Quality) and INTA (National Institute of Agricultural Technology) for the promotion of agri-food
chains in Argentina; however, the case of Fundación Chile deserves a special distinction. For more than
four decades this institution has had an active agenda in promoting innovation in collaboration with the
government; it has been a pioneer in promoting and enabling different productive sectors such as the
aquaculture industry, the agro-industrial sector, and the movement toward renewable energies such as
solar, and it created one of the first venture capital funds in Latin America, among other initiatives. The
role of Fundación Chile has been that of a “system builder”, that is, an engine for the development of
technological innovation systems in their early phase.

Fundación Chile is a public-private organization with financing from the Chilean government and the
mining sector, whose purpose is to promote the transformation of Chile toward sustainable
development. For more than 45 years, it has collaboratively created high-impact innovative solutions
for Chile, especially in promoting new industrial sectors that have even been success stories reviewed
by international literature, such as the salmon industry.
Over the years, this institution has undergone four phases of evolution that mark, in different ways,
the type of impact in promoting innovation and economic development in Chile. The first, called the
autarchic model (1982-1998), focused on using its own funds to develop industrial demonstration
projects based on technology transfer, covering activities from opportunity identification, R&D,
company creation, administration, and transfer to the private sector. The second, called the alliance
model (1999-2009), continued to carry out complete business incubation and technology transfer
activities, now using additional public funds (CORFO) and alliances with investors. The third, the
Limited VaR model (2010-2014), focused on promoting high-impact companies, especially in the
mining and energy sectors. The fourth and last phase, called “venture capital” (2012-present), focuses
on boosting the innovation ecosystem in Chile, based on support for the players in the venture capital
industry and investment in startups that develop high-impact innovations in sectors of high potential
and interest for the country.
On the other hand, it can be noted that Fundación Chile serves as an operational instrument for the
implementation of particular innovation policies, which Mazzucato calls “mission-oriented”, that seek
to solve national challenges such as the energy crisis, water scarcity, and the sustainable use of
resources.
This model could be considered as an effective mechanism to counteract the negative impacts of
political, economic, or technological factors that affect the innovation promotion environment.

New approaches for innovation promotion

Already in 2007, it was observed that economies in the process of industrialization marked an
evolution in the design of STI policies. This evolution begins with a focus on the promotion of R&D,
entrepreneurship, and business development under a general approach and then moves toward a
targeted approach, that is, one aimed at promoting specific sectors, specialization, and the promotion of
venture capital markets.

Similarly, some Latin American thinkers devoted their attention to innovation promotion policies in the
country’s specific sectors or strategic areas. These types of policies were called vertical. This type of
approach became more relevant in the 2010s (although in other countries, it was earlier), with the
adoption of new European criteria for the administration of regional funds for innovation, focused on
smart specialization (RIS3: Research and Innovation Strategies for Smart Specialization) and the
territorial approach to promoting innovation. This approach focuses on supporting the existing
strengths in the regions and searching for new possibilities for their development. Therefore, R&D&I
resources are concentrated on limited priorities, which in a short time can lead to a technical
transformation of existing sectors, and in the long term, to the emergence of new technologically
advanced industries; this marks a shift in thinking that goes beyond market failure arguments toward
arguments based more on the logic of strengthening regional innovation systems.

From 2013 onwards, specialization initiatives have been developed in Latin America to promote
innovation at the regional level. In each country, these processes have been taking place at different
times and different levels, beginning with the development of pilot projects to be later replicated in
other regions. Countries, such as Colombia and Chile, have been pioneers; other countries, such as
Peru, started later and are still in the implementation process.

However, innovation strategies based on smart specialization are not unique examples of regional
initiatives with a vertical orientation. For instance, in Argentina, between 2000 and 2008, technological
poles were created in the provinces of Rosario, Córdova, and Buenos Aires, focusing on developing
high-potential technological areas; this gave rise to different initiatives of technological clusters in
other regions. On the other hand, in Chile, from the year 2000, there was the progressive creation of
fourteen regional research centers, each in specialized topics according to regional vocations, led by
regional governments.

In 2022, Alatrista A. carried out an analysis of the policies implemented in 12 Latin American regions:
San Juan, Santa Fe, and Formosa in Argentina; O'Higgins, Coquimbo, Aysén, and Biobío in Chile;
Antioquia, Cundinamarca, and Arauca in Colombia; and Arequipa, Ayacucho, Cajamarca, and
Huancavelica in Peru. In the study, 22 innovation promotion policies were grouped into four types.
Type A: “Strengthen and maintain human capital, create critical mass, and increase connectivity
between actors”; Type B: “Stimulate knowledge absorption and entrepreneurial dynamics”; Type C:
“Modernize productive activities toward value-added niches: form innovative ecosystems”; Type D:
“Strengthen excellence in knowledge creation and development of high-tech industries”.

The table shows the number of regional policies implemented by local organizations, organized by
each type, and differentiating whether they are horizontal (h) or vertical (v). For example, the
Argentine regions implemented three of the nine horizontal policies identified for type A, having also
implemented the only vertical policy identified for type A.

As shown in the table, the analyzed regions of Argentina, Chile, and Colombia showed more significant
activity in implementing innovation policies than the Peruvian regions. This increased activity is also
reflected in the implementation of vertically oriented policies, which shows interest in developing
certain sectors or specific technologies. The policy implementation activity of regional and local bodies
shows that regions have different capacities to promote innovation.

Type A Type B Type C Type D Total %

h v h v h v h v h v

Argentina 3/9 1/1 3/7 1/3 2/5 0/1 0/1 1/2 32% 43%

Chile 4/9 1/1 1/7 1/3 1/5 0/1 1/1 1/2 32% 29%

Colombia 4/9 1/1 2/7 1/3 2/5 0/1 0/1 0/2 36% 43%

Peru 1/9 0/1 0/7 1/3 0/5 0/1 0/1 0/2 9% 14%

Table. Innovation policies implemented by 12 Latin American regions (h = horizontal, v = vertical;
each cell shows policies implemented out of policies identified for that type).

The study mentioned above, based on the perception of different experts in each country, presented a
comparative analysis of the capacity of regional organizations to implement initiatives to promote
regional innovation using four criteria: the degree of decentralization of decision-making and resources
regarding STI policy, the degree of formalization/maturity of the regional policy, the development of
the executing body of innovation policies, and the resources deployed for the regional promotion of
innovation. The results show that regions in Argentina, Chile, and Colombia have a medium capacity,
while the Peruvian regions show low or even nonexistent capacity.

These results show the vital importance of the empowerment and leadership of regional and local
governments to promote their innovation policies. Here, the national context and the decentralization
efforts of science, technology, and innovation policy play a relevant role.
In recent years, strengthening regional capacities to promote innovation has become a priority in
several Latin American countries. Colombia's national innovation policy aims to improve multilevel
governance of the national system of science, technology, and innovation (SNCTI) based on the
strengthening of regional capacities in STI. The national innovation policy of Chile identifies among its
essential priorities the development of regional R&D&I capacities, together with the national plan of
centers of excellence. In Argentina, the national science, technology, and innovation plan (PNCTI)
establishes territorial and regional agendas as one of its four main agendas, which focus on identifying
priority issues for regional development and defining them within the framework of the regional
science and technology councils (CRECYT). Other countries, such as Peru, have presented recent
proposals to promote regional capacities, such as the initiative of regional development agencies;
however, the national innovation policy still needs to prioritize strengthening regional capacity. In fact,
unlike the other countries analyzed, Peru has not yet implemented a Ministry of Science, Technology,
and Innovation.

In recent decades, Peru has implemented different mechanisms to promote innovation in a context of
low scientific-technological infrastructure capacity. Since 2010, the Ministry of Production has
implemented the main financing mechanisms focused on business innovation. Likewise, strategic
documents have been prepared, such as the Framework Law on Science, Technology and Technological
Innovation, promulgated in 2004; the National CTI Plan, published in 2006; and the National
Innovation Policy, published in 2016. In that same year, the territorial concept of identifying areas of
specialization in the regions was introduced: two pilot projects began in 2016 (Piura and Arequipa),
and by 2020 six additional regions had been financed for the same purpose. Additionally, since 2022,
regional development agencies have been promoted. However, the eight principles and the four
objectives set out in the national innovation policy are detached from specialization and territoriality,
and no objective proposes a decentralized strategy built from the regions. This shows the challenge of
adapting national strategies to modern approaches and suggests that the delay in adopting certain
practices could force premature policy changes, leading to a short maturation time that could affect
their performance. Adaptation thus becomes an important challenge for the promotion of innovation in
Peru.

Although the capacities of regional or local governments are crucial to promoting innovation, other
factors could also limit regions' ability to boost areas or technologies with high potential. One factor
may be the strong economic dependence on the exploitation of raw materials, since a kind of “core
rigidity” can be generated. The latter finds an analogy in the new approaches to productive
diversification, which show the vital importance of a specific region's trajectory and natural capacities
for jumping into new sectors or areas of specialization; in the words of one author, “the probability that
a country will become a significant exporter of a product in a four-year period increases with the
fraction of related products already exported by that country.”

On the other hand, the concept of technological innovation systems (TIS) and their interactions with
different contexts or structures, such as technological, sectoral, geographical, and political, have
become important. The TIS approach can help transcend the limitations and low regional capacities for
promoting innovation. Under this approach, regions can use internal networks or connections with
other regions that have developed related technologies. These collaboration networks can even become
cases of extra-regional exchange, as in some regions of China. Recent initiatives to promote the
renewable energy sector in the Arequipa region of Peru have been joined by different actors from Puno,
a region that has not yet implemented a specialization strategy. Here, it is important to note that
some Latin American regions have identified common areas of specialization. For example, mining is
an area of specialization for various regions, including San Juan (Argentina), Arequipa (Peru),
Antioquia (Colombia), and Antofagasta (Chile); on the other hand, biotechnology was identified in San
Juan (Argentina), Lima (Peru), Antioquía (Colombia), and Valparaíso (Chile) among other regions.

Beyond vertical policies promoting strategic clusters, areas, or sectors, some Latin American countries
have recently focused on policies to support and foster entrepreneurship ecosystems and the generation
of impact companies, especially startups. Among them, the boost to the venture capital markets
becomes relevant.

Between 2005 and 2011, the venture capital market was born in Latin America, with slow growth,
limited investment, limited development of high-tech ventures, and certain failures. In recent years
these figures have evolved strongly: according to reports from the Latin American Venture Capital
Association (LAVCA), venture capital investments in Latin America doubled in 2020, reaching
USD 5.4 billion.

As can be seen in the figure, there is a clear trend toward boosting the venture capital market. Although
these figures are increasing in all countries, the proportion of investment is higher in Argentina, Chile,
and Colombia, which, to a certain extent, narrows the investment gap with China.

Figure. Investment in venture capital as a percentage of GDP.

The development of the venture capital market has different drivers in Latin American countries. In
some cases, private sector institutions lead the process, as with EPM, Sura, Bancolombia, and Grupo
Bios in Colombia. In cases like Chile, the governmental public sector stands out, as with CORFO and
Fundación Chile and their initiatives “Startup Chile” and “Chile Global Ventures”. In Peru, UTEC, a
privately funded technological university, stands out for having promoted the venture capital market.

As has been shown, some Latin American universities are becoming business centers that support
innovation in their ecosystems. Many have implemented accelerators, incubators, and centers for
developing entrepreneurship and innovation, and they are frequently open to external stakeholders such
as entrepreneurs and SMEs. Additionally, many universities use their multicampus strategy to amplify
their impact and engagement. The leadership of the academic sector can be of special relevance for
strengthening regional innovation systems with limited capacities.

Innovation is a central element that Latin American countries have adopted to promote economic and
sustainable development. Initiatives to promote innovation have followed a historical process marked
by political, cultural, and social factors. In addition, events and forms of thought that emerged in
different periods shaped the ways of promoting these initiatives for years or even decades, such as the
aversion to non-horizontal policies aimed at developing specific sectors or clusters.

Modern concepts of promoting innovation, based on territoriality and regional innovation systems,
imply new challenges for Latin American countries and regions. The regions have shown different
levels of development in promoting such initiatives: some have more activity and experience, even
testing different initiatives, while in others the capacity to implement policies is almost nonexistent. In
the latter, the maturation time of initiatives can be very short. In some cases, by adopting specific
policy approaches late, regions find themselves facing the dilemma of making changes and adopting
new approaches that are already being taken up in other, more advanced regions.

Political empowerment and the capacity of regional or local governments are crucial in national
innovation policies, and developing these capabilities has become a modern challenge. However, other
challenges are linked to political or social factors, such as instability, and to economic or technological
factors, such as the still limited technological infrastructure and the dependence or rigidity of
productive sectors based on the exploitation and export of raw materials.

All this means that the regions must look for new strategies to reduce gaps. The focus on technological
innovation systems can be an alternative, since these systems focus on specific technologies and thus
cross geographical limits. In this way, the actors of a given system can seek extra-regional alliances to
promote their regional innovation systems.

On the other hand, universities and actors related to knowledge development are playing an
increasingly important role in different regions. This role is especially linked to initiatives to promote
startups, the generation of technology companies, and the development of the venture capital market,
and it may be especially important in regions where government actors have limited leadership to
implement initiatives to promote innovation.

Chapter 5: "UAVs in Modern Warfare: Case Studies"

[mh] UAVs in Counterinsurgency Operations

Factories, refineries, utilities (water/wastewater, electric), and related industrial sites are complex
systems and structures with inspection and maintenance procedures required for optimal operation and
regulatory compliance. For example, consider just the bulk electric power system, which comprises
more than 200,000 miles of high-voltage transmission lines, thousands of generation plants, and
millions of digital controls. More than 1,800 entities own and operate portions of the grid system, with
thousands more involved in the operation of distribution networks across North America. The
interconnected and interdependent nature of the bulk power system requires a consistent and
systematic application of risk mitigation across the entire grid system to be truly effective. Similar
situations are found throughout the automation industry, which also frequently has aging infrastructure.
Consider, for example, the situation in a refinery or chemical processing setting with the requirement
for leak detection inspection of pipes, interconnects, and systems across the plant. The current practices
and challenges relating just to this task of detecting any fugitive emissions and documenting all
measurements, and thereby meeting air compliance regulations, are typically "handled" by a small
army of individuals with handheld or backpack-sized detectors. They crawl through piping racks
conducting measurements at each flange. Such work is performed in difficult conditions (in terms of
temperature, humidity, and physical challenges) and frequently has a high level of employee turnover.
Enter low-cost sensors and mobile platforms: in other words, unmanned aerial systems (UASs or
drones) with enhanced sensing capabilities.

Drones - More than flying cameras

Remotely piloted aerial vehicles have been used, primarily by military forces, since the Second World
War. With recent technological advancements in microprocessor computing power, sensor
miniaturization, and purpose-built software, UAS technology has established a significant new niche in
the evolution of aviation. Alternatively labeled unmanned aircraft systems, unmanned aerial vehicles,
or simply drones, small UASs (sUASs) are becoming readily accessible for commercial, governmental,
and private use across a myriad of far-ranging applications.

Figure. Some individuals view a drone as a camera with wings.

Although a "traditional" drone has a camera where video and still images may be stored on an onboard
memory device or, in some instances, wirelessly transmitted to a handheld device, the operational
situation changes when the drone's sensors can directly communicate with an industrial control or
supervisory control and data acquisition (SCADA) system. Such a level of integration requires
bidirectional communication transmission security, as well as logical protocol synchronization. Such
communications have been demonstrated using cellular telephony as well as licensed- and unlicensed-
band wireless. With respect to the information presented in figure, the sensor-laden drones were
controlled via approved Federal Aviation Administration (FAA) rules, but with the sensor "pods"
communicating directly into the utility's core communication network via 900 MHz wireless and made
available into the SCADA system (as opposed to using the same wireless channel for the drone
control).
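The radio transport and the SCADA-side tag mapping used in such demonstrations are site-specific and
are not detailed here. The sketch below only illustrates, under stated assumptions, how a sensor-pod
sample might be packaged for hand-off to a gateway that feeds the utility's network; all field names and
example values are hypothetical.

import json
import time
from dataclasses import asdict, dataclass


@dataclass
class PodReading:
    # One sample from the sensor pod; field names are illustrative only.
    timestamp: float
    temperature_c: float
    humidity_pct: float
    e_field_v_per_m: float
    methane_ppm: float


def sample_pod():
    # Placeholder for reads from the pod's microcontrollers and ADCs.
    return PodReading(time.time(), 24.1, 51.0, 130.0, 2.3)


def frame_for_gateway(reading):
    # Serialise a reading for the radio link into the utility's core network.
    # The 900 MHz transport and the SCADA-side tag mapping are site-specific
    # and are represented here only by the returned JSON payload.
    return json.dumps(asdict(reading)).encode("utf-8")


if __name__ == "__main__":
    print(frame_for_gateway(sample_pod()))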

Figure. Multiple drones with sensors were used to measure a wide range of parameters at the EPB
training site in Chattanooga, Tenn.

Multiple sensors within the "pod" measured parameters including temperature, humidity, atmospheric
pressure, motion (via accelerometers), electric and magnetic field strength, coronal arc discharge,
forward-look infrared (FLIR) thermal imagery, visual imagery, cell phone signals (Verizon, AT&T, T-
Mobile, Sprint), and CH4 (methane). Additional microcontrollers and specialized miniaturized network
equipment were placed within the sensor pod, along with a separate battery-based power supply system
tailored for the sensor package. A photo of the drone and sensor pod as it inspects an electrical
distribution transformer is shown in figure. These proof-of-concept demonstrations-specifically
showing airborne sensors providing real-time measurements of automation systems-are a glimpse into
future applications.
Figure. Sensor-laden drone inspecting an electrical distribution transformer

Being able to conduct three-dimensional assessments and inspections using a remotely operated sensor
platform has led to a flood of potential uses of UASs, both envisioned and realized. UASs of all types
have already been used in a wide variety of applications in practical ways, such as aerial photography,
agriculture, commercial delivery, entertainment, exploration, national defense, public safety,
surveying, and thermography. Envisioned future applications will help advance precision agriculture,
energy-sector remote sensing, national security and law enforcement reconnaissance, and utilities
analysis. Such "future applications" benefit from the remotely or autonomously controlled mobile
platform bringing a wide range of sensors to a location of interest.

Operation and flight times

Weight versus power versus flight (operational) time presents the classic trade-off in any limited-fuel
flight operation, be it aircraft, drones, or spacecraft. Empirical data have given way to estimators of the
battery-powered operating time of a drone. The figure presents the estimated range for a wide array of
parameters.
Figure. The range that a drone can go (and return) for a variety of parameters.

Note that the figure's calculation incorporates overall drone weight, specifically within the context of
sensor-laden drones. It does not include the possibility of the sensor being powered directly from the
same battery source as the drone itself. The working estimate for a battery-operated drone is a flight
time of approximately 20 minutes. Note that flight times vary widely across drone configurations,
ranging from a few minutes to an hour. That said, the working rule of approximately 20 minutes for a
standard consumer drone is accurate.
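As a rough illustration of the weight/power/endurance trade-off, the sketch below estimates hover
endurance from battery energy and all-up mass. The specific-power figure and the usable-battery
fraction are generic assumed values, not data taken from the figure, and the example masses are
hypothetical.

def hover_time_minutes(battery_wh, airframe_kg, payload_kg=0.0,
                       usable_fraction=0.8, specific_power_w_per_kg=150.0):
    # Rough endurance estimate for a battery-powered multirotor.
    # Assumed, generic figures: ~150 W of electrical power per kilogram of
    # all-up mass to hover, and only ~80% of the battery usable in practice.
    total_mass_kg = airframe_kg + payload_kg
    power_draw_w = specific_power_w_per_kg * total_mass_kg
    return 60.0 * battery_wh * usable_fraction / power_draw_w


# A 1.2 kg drone with a 90 Wh battery carrying a 0.3 kg sensor pod:
print(round(hover_time_minutes(90.0, 1.2, 0.3), 1))  # about 19 minutes

Adding payload mass or shrinking the battery pulls the estimate down quickly, which is why the
roughly 20-minute working rule above holds for standard consumer configurations.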

As battery technology (specifically energy density) continues to improve, longer battery-powered flight
ranges and durations are anticipated. Given the FAA rules regarding drone operation and flight control,
a set of questions arises, typically associated with beyond-visual-line-of-sight (BVLOS) operation (and
its implications for drone use in industrial settings) and flight times.

Again, the FAA rules are quite clear regarding BVLOS, beginning with the definition: "BVLOS means
flight crew members are not capable of seeing the aircraft with vision unaided by any device other than
corrective lenses."

Although most current UAS applications are carried out in VLOS missions, there are obvious limits to
VLOS inspections. With respect to BVLOS operation for electric utilities, BVLOS allows personnel to
monitor power lines over longer corridor stretches. Via the FAA Extension, Safety, and Security Act of
2016, Congress authorized the FAA to develop new rules specifically to benefit the electric power
industry and other operators of critical infrastructure. Under this new legislation, the FAA will begin to
develop rules enabling BVLOS flights and night flights. Other changes are expected to streamline the
permitting of UAS flights and improve commercial viability and safety while facilitating inspection of
critical infrastructure. Any new rules are not expected to become part of formal regulations until 2018
or beyond. However, many operators under Part 107 are expected to apply for waivers to a number of
Part 107 regulations, such as flying at BVLOS distances. Some of these waivers are likely to be
granted in advance of more formal regulation changes that will arise from the 2016 act. On 28
December 2016, the FAA approved a certificate of authorization for the Northern Plains UAS Test Site
in North Dakota to be the first in the U.S. to have BVLOS operability. Other locations within the U.S.
now have similar exemptions to the BVLOS rule.

Coordinated flight and collaborative sensing

Advancements in control systems for UAS flight dynamics operating on inexpensive, lightweight
microcontrollers with networked wireless communications have led to instances where multiple drones
fly in formation. Such coordinated flight was demonstrated during the 2017 Super Bowl halftime show
and at several amusement parks worldwide. In an automation setting, the coordinated flight of multiple
drones, each equipped with a variety of sensors, leads to collaborative sensing by mobile sensors.
Examples of such "coordinated flight, collaborative sensing" applications include sensing chemicals
such as CH4, as well as environmental and ambient conditions, associated with storage tanks at
fracking sites, bridge inspection, and power generation facilities.

Figure. A drone as a mobile sensor platform allows for measurements to occur in variety of situations,
such as those associated with fracking.

UAS traffic management systems are critically important for the drone industry. These systems help
maintain control of flying drones and manage separation between unmanned and manned traffic.
Developments within academia, national laboratories, and the private sector are underway toward
reasonable and deployable UAS detection systems.

Using drone-based sensing

It is not just drone-based sensing, but rather how to use the measurements made via such platforms that
is significant. The following statements from Thomas Haun, vice president of strategy and
globalization at PrecisionHawk UAV Technology, are applicable throughout the application areas
where drone-based sensors may be used:

"Commercial drones are flying in. The industry needs to see beyond the UAV and focus on the real
disruptor: actionable analytics via aerial data. As we prepare for widespread adoption and integration
across major markets, such as agriculture, oil and gas, insurance, infrastructure, emergency response,
and life sciences, businesses need an intelligent solution that combines UAV hardware and automated
data analysis software to deliver tangible results at scale."

The intersection of the Industrial Internet of Things (IIoT), cyber-physical security, and sensor-laden
drones presents an array of opportunities for use in automation. Standards and guidelines can help
carve an orderly path forward. This path will allow industry to incorporate advanced technologies into
procedures and practices as this booming market sector introduces devices and systems. It is envisioned
that in the very near future drones of varying sizes and complexities, equipped with sensors, will be
operated from a remotely located control center, with real-time measurements intermixed with sensor
data from other fixed and mobile platforms. This combined data stream will be monitored and recorded
within an industrial control or SCADA system, such as that shown in the figure.

ISA's Test & Measurement Division and Communication Division currently have a joint working
group focused on IIoT, cyber-physical security, and unmanned aerial systems with the associated
examination of functional and operational security for when these devices are deployed into a control
system. Additional information on these and related topics-including videos of the operation of sensor-
laden drones-are available on each division's website.

Let's risk the machines, not the humans.

Figure. Integration of fixed and mobile sensors, UAS- and truck-mounted, with command, control, and
real-time data coordinated at the center.

New FAA rules

The FAA's comprehensive new regulations for the routine, nonrecreational use of sUASs, more
popularly known as drones, went into effect on 29 August 2016. The provisions of the new rule, known
as Part 107 or Rule 107 (14 CFR Part 107), are designed to minimize risks to other aircraft and to
people and property on the ground.
The FAA has put several processes in place to help you take advantage of the rule:

Waivers: If your proposed operation does not completely comply with Part 107 regulations, you need
to apply for a waiver of some of the restrictions. You must prove the proposed flight will be conducted
safely under a waiver. Users must apply for these waivers at the online portal www.faa.gov/uas.

Airspace authorization: You can fly your drone in Class G (uncontrolled) airspace without air traffic
control authorization, but operations in any other airspace (i.e., instrument flight rules) need air traffic
approval. You must request access to controlled airspace via the electronic portal at www.faa.gov/uas,
not from the individual air traffic facilities.

Summary of 14 CFR Part 107

The FAA Part 107 regulations, which legalize commercial drone use, dramatically increase the
potential number of UAS users. Anyone can now legally operate a UAS as part of a business after first
passing an aeronautical knowledge test and then registering with the FAA. In the first two days after
the test became available, 1,338 people had completed the test with an 88 percent pass rate.

Small unmanned aerial system operational limitations

The following restrictions and limitations are based on "Operation and Certification of Small
Unmanned Aircraft Systems: Final Rule" as published in the Federal Register, volume 81(124), 28
June 2016, pp. 42063-42214.

 sUASs must weigh less than 55 pounds (25 kg).


 sUASs must remain within visual line of sight of the remote pilot in command and the person
manipulating the flight controls of the sUASs. Alternatively, the sUAS must remain within VLOS
of the visual observer.
 sUASs may not operate over any persons not directly participating in the operation, under a
covered structure, or inside a covered stationary vehicle.
 sUASs are limited to daylight-only operations, or civil twilight (30 minutes before official
sunrise to 30 minutes after official sunset, local time) with appropriate anticollision lighting.
 sUASs must yield right of way to other aircraft.
 First-person view cameras cannot satisfy "see-and-avoid" requirements, but can be used as
long as requirements are satisfied in other ways.
 sUASs are limited to a maximum ground speed of 100 mph (87 knots) and a maximum altitude
of 400 ft above ground level (AGL) or, if higher than 400 ft AGL, remain within a 400-ft radius of
a structure. They must fly no higher than 400 ft above a structure's uppermost limit.
 External load operations are allowed if the object being carried by the UAS is securely attached
and does not adversely affect the flight characteristics or controllability of the aircraft.
 Transportation of property for compensation or hire is allowed if:
o the aircraft, including its attached systems, payload, and cargo weigh less than 55
pounds, total
o the flight is conducted within VLOS and not from a moving vehicle or aircraft
o the flight occurs wholly within the bounds of a state and does not involve transport
between (1) Hawaii and another place in Hawaii through airspace outside Hawaii; (2)
the District of Columbia and another place in the District of Columbia; or (3) a territory
or possession of the U.S. and another place in the same territory or possession
Most of the restrictions enumerated above are waivable if the applicant demonstrates that his or her
operation can safely be conducted under the terms of a certificate of waiver.
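As an illustration of how the numeric limits enumerated above might be screened programmatically
before a mission, the following sketch checks a planned flight against a few of them. It is a simplified
illustration only; the flight attributes modelled here are a small, assumed subset, and the full rule text
and any granted waivers govern actual operations.

from dataclasses import dataclass

MAX_WEIGHT_LB = 55.0
MAX_GROUND_SPEED_MPH = 100.0
MAX_ALTITUDE_FT_AGL = 400.0


@dataclass
class PlannedFlight:
    # A small, assumed subset of the attributes relevant to Part 107.
    weight_lb: float
    ground_speed_mph: float
    altitude_ft_agl: float
    within_400_ft_of_structure: bool
    daylight_or_lit_civil_twilight: bool
    within_vlos: bool


def part107_issues(f):
    # Screen a planned sUAS flight against the numeric limits quoted above.
    issues = []
    if f.weight_lb >= MAX_WEIGHT_LB:
        issues.append("aircraft must weigh less than 55 pounds")
    if f.ground_speed_mph > MAX_GROUND_SPEED_MPH:
        issues.append("ground speed above 100 mph")
    if f.altitude_ft_agl > MAX_ALTITUDE_FT_AGL and not f.within_400_ft_of_structure:
        issues.append("above 400 ft AGL and not within 400 ft of a structure")
    if not f.daylight_or_lit_civil_twilight:
        issues.append("outside the daylight/civil-twilight window")
    if not f.within_vlos:
        issues.append("beyond visual line of sight")
    return issues


print(part107_issues(PlannedFlight(10.0, 40.0, 450.0, False, True, True)))
# ['above 400 ft AGL and not within 400 ft of a structure']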

Chapter 6: "Ethical, Legal, and Moral Implications"

[mh] Social Media, Ethics and the Privacy Paradox

The use of social media is growing at a rapid pace, and the twenty-first century could be described as
the “boom” period for social networking. According to reports provided by Smart Insights, as of
February 2019 there were over 3.484 billion social media users. The Smart Insights report indicates that
the number of social media users is growing by 9% annually, and this trend is estimated to continue.
Presently, social media users represent 45% of the global population. The heaviest users of social media
are “digital natives”, the group of persons who were born or have grown up in the digital era and are
intimate with the various technologies and systems, and the “Millennial Generation”, those who became
adults at the turn of the twenty-first century. These groups of users utilize social media platforms for
just about anything, ranging from marketing, news acquisition, teaching, health care, civic engagement,
and politicking to social engagement.

The unethical use of social media has resulted in breaches of individual privacy and impacts both
physical and information security. Reports in 2019 reveal that persons between the ages of 8 and 11
spend an average of 13.5 hours weekly online and that 18% of this age group are actively engaged on
social media. Those between ages 12 and 15 spend on average 20.5 hours online, and 69% of this group
are active social media users. While children and teenagers represent the largest Internet user groups,
for the most part they do not know how to protect their personal information on the Web and are the
most vulnerable to cyber-crimes related to breaches of information privacy.

In today’s IT-configured society, data is one of the most valuable assets, if not the most valuable, for
most businesses and organizations. Organizations and governments collect information via several
means, including invisible data gathering, marketing platforms, and search engines such as Google.
Information can be obtained from several sources, which can be fused using technology to develop
complete profiles of individuals. The information on social media is very accessible and can be of great
value to individuals and organizations for reasons such as marketing; hence, data is retained by most
companies for future use.

Privacy or the right to enjoy freedom from unauthorized intrusion is the negative right of all human
beings. Privacy is defined as the right to be left alone, to be free from secret surveillance, or unwanted
disclosure of personal data or information by government, corporation, or individual. In this chapter we
will define privacy loosely, as the right to control access to personal information. Supporters of privacy
posit that it is a necessity for human dignity and individuality and a key element in the quest for
happiness. According to Baase, in the book titled “A Gift of Fire: Social, Legal and Ethical Issues for
Computing and the Internet,” privacy is the ability to control information about one's self as well as
the freedom from surveillance, that is, from being followed, tracked, watched, and eavesdropped on. In
this regard, ignoring privacy rights often leads to encroachment on natural rights.

Privacy, or even the thought that one has this right, leads to peace of mind and can provide an
environment of solitude. This solitude can allow people to breathe freely in a space that is free from
interference and intrusion. According to Richards and Solove, legal scholar William Prosser argued
that privacy cases can be classified into four related “torts,” namely:
1. Intrusion—this can be viewed as encroachment (physical or otherwise) on one's
liberties/solitude in a highly offensive way.
2. Privacy facts—making public, private information about someone that is of no “legitimate
concern” to anyone.
3. False light—making public false and “highly offensive” information about others.
4. Appropriation—stealing someone’s identity (name, likeness) to gain advantage without the
permission of the individual.

Technology, the digital age, the Internet and social media have, however, redefined privacy, as surveillance is no longer limited to a certain pre-defined space and location. An understanding of the problems and dangers of privacy in the digital space is therefore the first step to privacy control. While there can be clear distinctions between informational privacy and physical privacy, as pointed out earlier, intrusion can be both physical and otherwise.

This chapter will focus on informational privacy, which is the ability to control access to personal information. We examine privacy issues in the social media context, focusing primarily on personal information and the ability to control external influences. We suggest that a breach of informational privacy can impact solitude (the right to be left alone), intimacy (the right not to be monitored), and anonymity (the right to have no public personal identity), and by extension can impact physical privacy. The right to control access to facts or personal information is, in our view, a natural, inalienable right, and everyone should have control over who sees their personal information and how it is disseminated.

Under the General Data Protection Regulation (GDPR), which became applicable in May 2018, it is unlawful to process personal data without the consent of the individual (the data subject). It is a legal requirement under the GDPR that privacy notices be given to individuals that outline how their personal data will be processed, and there are conditions that must be met for the consent to be valid (a brief illustrative sketch of how such conditions might be checked in software follows the list). These are:

1. “Freely given—an individual must be given a genuine choice when providing consent and it
should generally be unbundled from other terms and conditions (e.g., access to a service should
not be conditional upon consent being given).”
2. “Specific and informed—this means that data subjects should be provided with information as
to the identity of the controller(s), the specific purposes, types of processing, as well as being
informed of their right to withdraw consent at any time.”
3. “Explicit and unambiguous—the data subject must clearly express their consent (e.g., by
actively ticking a box which confirms they are giving consent—pre-ticked boxes are
insufficient).”
4. “Under 13s—children under the age of 13 cannot provide consent and it is therefore necessary
to obtain consent from their parents.”
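
To make these conditions concrete, the following is a minimal, purely illustrative Python sketch of how a system might record and check consent against them. The record fields, the function name, and the age threshold of 13 are assumptions drawn only from the quoted text above, not from any actual GDPR compliance library or legal authority.

# Illustrative sketch only: a hypothetical consent record checked against the
# four conditions quoted above. Names and the age threshold of 13 follow the
# chapter text, not any specific law or library.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    freely_given: bool           # not bundled with other terms or made a condition of service
    controller_identified: bool  # the data subject knows who the controller is
    purposes_explained: bool     # specific purposes and types of processing explained
    withdrawal_explained: bool   # right to withdraw consent at any time explained
    box_actively_ticked: bool    # explicit action by the subject; pre-ticked boxes do not count
    subject_age: int
    parental_consent: bool = False

def is_valid_consent(c: ConsentRecord) -> bool:
    """Return True only if every condition listed in the chapter is met."""
    specific_and_informed = (c.controller_identified and c.purposes_explained
                             and c.withdrawal_explained)
    age_ok = c.subject_age >= 13 or c.parental_consent
    return c.freely_given and specific_and_informed and c.box_actively_ticked and age_ok

# Example: consent gathered via a pre-ticked box is rejected.
record = ConsentRecord(True, True, True, True, box_actively_ticked=False, subject_age=25)
print(is_valid_consent(record))  # prints False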

Arguments can be made that privacy is a cultural, universal necessity for harmonious relationships among human beings and creates the boundaries for engagement and disengagement. Privacy can also be viewed as an instrumental good because it is a requirement for the development of certain kinds of human relationships, intimacy and trust. However, achieving privacy is much more difficult in light of constant surveillance and the inability to determine the levels of interaction with various publics. Some also argue that privacy provides protection against anti-social behaviors such as trickery, disinformation and fraud, and is thought to be a universal right. However, privacy can also be viewed as relative, as privacy rules may differ based on several factors such as "climate, religion, technological advancement and political arrangements". The need for privacy is an objective reality, though it can be viewed as "culturally rational," where the need for personal privacy is regarded as relative based on culture. One example is the push by the government, businesses and Singaporeans to make Singapore a smart nation. According to GovTech 2018 reports, there is a push by the government in Singapore to harness data, the "new gold," to develop systems that can make life easier for its people. The report points out that Singapore is using sensors and robots, such as the Smart Water Assessment Network (SWAN), to monitor water quality in its reservoirs, and is seeking to build a smart health system and a smart transportation system, to name a few. In this example privacy can be described as "culturally rational," and the rules in general could differ based on technological advancement and political arrangements.

In today’s networked society it is naïve and ill-conceived to think that privacy is over-rated and there is
no need to be concerned about privacy if you have done nothing wrong. The effects of information
flow can be complex and may not be simply about protection for people who have something to hide.
Inaccurate information flow can have adverse long-term implications for individuals and companies.
Consider a scenario where someone's computer or tablet is stolen. The perpetrator uses identification information stored on the device to access the victim's social media pages, which can lead to access to their contacts, friends and friends of their "friends." The perpetrator may then participate in illegal activities and engage in anti-social activities such as hacking, spreading viruses, fraud and identity theft. The victim is now in danger of being accused of criminal intentions, or worse. These kinds of situations are possible because of technology and networked systems. Users of social media need to be aware of the risks that are associated with participation.

Social media

The concept of social networking pre-dates the Internet and mass communication, as people are said to be social creatures who, when working in groups, can achieve results of greater value than the sum of their parts. The explosive growth in the use of social media over the past decade has made it one of the
most popular Internet services in the world, providing new avenues to “see and be seen”. The use of
social media has changed the communication landscape resulting in changes in ethical norms and
behavior. The unprecedented level of growth in usage has resulted in the reduction in the use of other
media and changes in areas including civic and political engagement, privacy and safety. Alexa, a company that keeps track of traffic on the Web, indicates that as of August 2019, YouTube, Facebook and Twitter were among the top four most visited sites, with only Google, the most popular search engine, surpassing these social media sites.

Social media sites can be described as online services that allow users to create profiles which are
“public, semi-public” or both. Users may create individual profiles and/or become a part of a group of
people with whom they may be acquainted offline. They also provide avenues to create virtual
friendships. Through these virtual friendships, people may access details about their contacts ranging
from personal background information and interests to location. Social networking sites provide
various tools to facilitate communication. These include chat rooms, blogs, private messages, public
comments, ways of uploading content external to the site and sharing videos and photographs. Social
media is therefore drastically changing the way people communicate and form relationships.

Today social media has proven to be one of the most effective media, if not the most effective, for the
dissemination of information to various audiences. The power of this medium is phenomenal and
ranges from its ability to overturn governments (e.g., Moldova), to mobilize protests, assist with
getting support for humanitarian aid, organize political campaigns, organize groups to delay the
passing of legislation (as in the case with the copyright bill in Canada) to making social media
billionaires and millionaires. The enabling nature and the structure of the media that social networking
offers provide a wide range of opportunities that were nonexistent before technology. Facebook and
YouTube marketers and trainers provide two examples. Today people can interact with and learn from people thousands of miles away. The global reach of this medium has removed all former pre-defined
boundaries including geographical, social and any other that existed previously. Technological
advancements such as Web 2.0 and Web 4.0 which provide the framework for collaboration, have
given new meaning to life from various perspectives: political, institutional and social.

Privacy and social media

Social media and the information/digital era have "redefined" privacy. In today's Information Technology-configured societies, where there is continuous monitoring, privacy has taken on a new meaning. Technologies such as closed-circuit television (CCTV) cameras are prevalent in public spaces and in some private spaces, including our workplaces and homes. Personal computers and devices such as smartphones, enabled with the Global Positioning System (GPS), geolocation and geo-mapping services, make privacy as we know it a thing of the past. Recent reports indicate that some of the largest companies, such as Amazon, Microsoft and Facebook, as well as various government agencies, are collecting information without consent and storing it in databases for future use. It is almost impossible to say privacy exists in this digital world.

The open nature of the social networking sites and the avenues they provide for sharing information in
a “public or semi-public” space create privacy concerns by their very construct. Information that is
inappropriate for some audiences is many times inadvertently made visible to groups other than those intended and can sometimes result in future negative outcomes. One such example is a well-known case recorded in an article entitled "The Web Means the End of Forgetting," which involved a young woman who was denied her college degree because of backlash from photographs posted on social media in her private life.

Technology has reduced the gap between professional and personal spaces and often results in
information exposure to the wrong audience. The reduction in the separation of professional and
personal spaces can affect image management especially in a professional setting resulting in the
erosion of traditional professional image and impression management. Determining the secondary use
of personal information and those who have access to this information should be the prerogative of the
individual or group to whom the information belongs. However, engaging in social media activities has
removed this control.

Privacy on social networking sites (SNSs) is heavily dependent on the users of these networks because
sharing information is the primary way of participating in social communities. Privacy in SNSs is
“multifaceted.” Users of these platforms are responsible for protecting their information from third-
party data collection and managing their personal profiles. However, participants are usually more
willing to give personal and more private information in SNSs than anywhere else on the Internet. This
can be attributed to the feeling of community, comfort and family that these media provide for the most
part. Privacy controls are not the priority of social networking site designers and only a small number
of the young adolescent users change the default privacy settings of their accounts. This opens the door
for breaches especially among the most vulnerable user groups, namely young children, teenagers and
the elderly. The nature of social networking sites such as Facebook and Twitter and other social media
platforms cause users to re-evaluate and often change their personal privacy standards in order to
participate in these social networked communities.

While there are tremendous benefits that can be derived from the effective use of social media there are
some unavoidable risks that are involved in its use. Much attention should therefore be given to what is
shared in these forums. Social platforms such as Facebook, Twitter and YouTube are said to be the most effective media for communicating with Generation Y (Gen Y), as teens and young adults are the largest user groups on these platforms. However, according to Bolton et al., Gen Y's use of social media, if left unabated and unmonitored, will have long-term implications for privacy and engagement in civic activities, as this continuous use is resulting in changes in behavior and social norms as well as increased levels of cyber-crime.

Today social networks are becoming the platform of choice for hackers and other perpetrators of
antisocial behavior. These media offer large volumes of data/information ranging from an individual’s
date of birth, place of residence, place of work/business, to information about family and other
personal activities. In many cases users unintentionally disclose information that can be both dangerous
and inappropriate. Information regarding activities on social media can have far reaching negative
implications for one’s future. A few examples of situations which can, and have been affected are
employment, visa acquisition, and college acceptance. Indiscriminate participation has also resulted in
situations such as identity theft and bank fraud, just to list a few. Protecting privacy in today's networked
society can be a great challenge. The digital revolution has indeed distorted our views of privacy,
however, there should be clear distinctions between what should be seen by the general public and
what should be limited to a selected group. One school of thought is that the only way to have privacy
today is not to share information in these networked communities. However, achieving privacy and
control over information flows and disclosure in networked communities is an ongoing process in an
environment where contexts change quickly and are sometimes blurred. This requires intentional
construction of systems that are designed to mitigate privacy issues.

Ethics and social media

Ethics can be loosely defined as “the right thing to do” or it can be described as the moral philosophy
of an individual or group and usually reflects what the individual or group views as good or bad. It is
how they classify particular situations by categorizing them as right or wrong. Ethics can also be used
to refer to any classification or philosophy of moral values or principles that guides the actions of an
individual or group. Ethical values are intended to be guiding principles that if followed, could yield
harmonious results and relationships. They seek to give answers to questions such as “How should I be
living? How do I achieve the things that are deemed important such as knowledge and happiness or the
acquisition of attractive things?” If one chooses happiness, the next question that needs to be answered
is “Whose happiness should it be; my own happiness or the happiness of others?” In the domain of
social media, some of the ethical questions that must be contemplated and ultimately answered are:

 Can this post be regarded as oversharing?
 Has the information in this post been distorted in any way?
 What impact will this post have on others?

As previously mentioned, users within the ages 8–15 represent one of the largest social media user
groups. These young persons within the 8–15 age range are still learning how to interact with the
people around them and are deciding on the moral values that they will embrace. These moral values
will help to dictate how they will interact with the world around them. The ethical values that guide our
interactions are usually formulated from some moral principle taught to us by someone or a group of
individuals including parents, guardians, religious groups, and teachers just to name a few. Many of the
Gen Y’s/“Digital Babies” are “newbies” yet are required to determine for themselves the level of
responsibility they will display when using the varying social media platforms. This includes
considering the impact a post will have on their lives and/or the lives of other persons. They must also
understand that when they join a social media network, they are joining a community in which certain
behavior must be exhibited. Such responsibility requires a much greater level of maturity than can be
expected from them at that age.

It is not uncommon for individuals to post even the smallest details of their lives from the moment they
wake up to when they go to bed. They will openly share their location, what they eat at every meal or
details about activities typically considered private and personal. They will also share likes and
dislikes, thoughts and emotional states, and for the most part this has become an accepted norm. Oftentimes, however, these shares do not only contain information about the person sharing but information about others as well.
individuals attempt to ensure that all persons within their social circle are kept updated on their
activities. With this openness of sharing risks and challenges arise that are often not considered but can
have serious impacts. The speed and scale with which social media creates information and makes it
available—almost instantaneously—on a global scale, added to the fact that once something is posted
there is really no way of truly removing it, should prompt individuals to think of the possible impact a
post can have. Unfortunately, more often than not, posts are made without any thought of the far-
reaching impact they can have on the lives of the person posting or others that may be implicated by
the post.

Social media has provided a platform for people to share their thoughts and express concerns with
others for what they regard as a worthy cause. Cause related posts are dependent on the interest of the
individual. Some persons might share posts related to causes and issues happening in society. In one example, the parents of a baby with an aggressive form of leukemia, having been told that their child had only 3 months to live unless a suitable donor for a blood stem cell transplant could be found, made an appeal on social media. The appeal was quickly shared and a suitable donor was soon found. While that was for a good cause, many view social media merely as a platform for freedom of speech because anyone can post any content they create. People think the expression of their thoughts on social media regarding any topic is permissible. The problem with this is that the content may not be permissible under law, or it could violate someone's rights, thus giving rise to ethical questions.

Chapter 7: "Future Trends and Emerging Technologies"

[mh] Swarm Intelligence - Recent Advances, New Perspectives, and Applications

Swarm intelligence has emerged as one of the most studied artificial intelligence branches during the last decade, constituting today the fastest-growing stream in the bioinspired computation community. A clear trend can be deduced by analyzing some of the most renowned scientific databases available, showing that the interest aroused by this branch has grown at a notable pace in recent years. Undoubtedly, the main influences behind the conception of this stream are the extraordinarily famous particle swarm optimization (PSO) and ant colony optimization (ACO) algorithms. These meta-heuristics lit the fuse of the success of this knowledge area, being the origin and principal inspiration of subsequent research. Such remarkable success has led to the proposal of a myriad of novel methods, each one based on a different inspirational source such as the behavioral patterns of animals, social and political behaviors, or physical processes. The constant proposal of new methods showcases the capability and adaptability of this sort of solvers to reach near-optimal performance over a wide range of highly demanding academic and real-world problems, this versatility being one of the main advantages of swarm intelligence-based meta-heuristics.
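
Since PSO is repeatedly referenced in this chapter as the seminal swarm method, the following minimal Python sketch of its canonical velocity and position update may help fix ideas. It is a generic, textbook-style illustration under assumed settings: the sphere objective, swarm size, and the coefficients w, c1, and c2 are illustrative defaults, not values prescribed by any cited work.

# Minimal, illustrative particle swarm optimization (PSO) sketch.
# Objective function, swarm size, and coefficients are assumed for demonstration only.
import random

def sphere(x):
    # Assumed toy objective: sum of squares, minimized at the origin.
    return sum(v * v for v in x)

def pso(obj, dim=5, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                   # each particle's best position so far
    pbest_val = [obj(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position found by the whole swarm

    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Canonical update: inertia term + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(sphere)
print(best_val)  # should approach 0 for the sphere function
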
Brief history of swarm intelligence

The consolidation of the swarm intelligence paradigm came after years of hard and successful scientific work, as a result of several groundbreaking and incremental studies, as well as the establishment of some cornerstone concepts in the community.

In this regard, two decisive milestones can be highlighted in swarm intelligence history. The first of these breakthrough landmarks can be placed between the 1960s and 1970s. Back then, influential researchers such as Schwefel, Fogel, and Rechenberg revealed their first theoretical and practical works related to evolution strategies (ES) and evolutionary programming (EP). An additional innovative notion came to the fore some years later from John H. Holland. This concept is the genetic algorithm (GA), which was born in 1975, sowing the seed of the knowledge field today known as bioinspired computation. All three outlined streams (i.e., ES, EP, and GA) coexisted in a separated fashion until the 1990s, when they all emerged as linchpin elements of the unified concept of evolutionary computation.

The second milestone that definitely contributed to the birth of what is currently conceived as swarm intelligence is the conception of two highly influential and powerful methods. These concrete algorithms are ACO, envisaged by Marco Dorigo in 1992, and PSO, proposed by Russell Eberhart and James Kennedy in 1995. Being more specific, PSO was the method that definitely lit the fuse of the overwhelming success of swarm intelligence, being the main inspiration of a plethora of upcoming influential solvers. Therefore, since the proposal of PSO, algorithms inheriting its core concepts have gained great popularity in the related research community, and this acclaim lasts until the present day. For the modeling and design of these novel approaches, many inspirational sources have been considered, which are commonly collected in three recurring groups:

 Patterns found in nature: we can spotlight two different branches that fall together within this category. The first one is related to biological processes, such as the natural flow of water (water cycle algorithm), the chemotactic movement of bacteria (bacterial foraging optimization algorithm), the pollination process of flowers (flower pollination algorithm), or the geographical distribution of biological organisms (biogeography-based optimization). The second inspirational stimulus is the behavioral patterns of animals. This specific trend has been quite outstanding in recent years, yielding designs based on creatures such as bats (bat algorithm), cuckoos (cuckoo search), bees (artificial bee colony), or fireflies (firefly algorithm).
 Political and social behaviors: several human conducts or political philosophies have also inspired the proposal of successful techniques. Regarding the former, we can find promising adaptations of political concepts such as anarchy (anarchic society optimization) or imperialism (imperialist competitive algorithm). With respect to the latter, social attitudes have also served as inspiration for several methods, such as the one coined society and civilization, which emulates the mutual interactions of human and insect societies, or the hierarchical social meta-heuristic, which mimics the hierarchical social behavior observed in a great diversity of human organizations and structures.
 Physical processes: physical phenomena have also stimulated the design of new swarm intelligence algorithmic schemes, covering a broad spectrum of processes such as gravitational dynamics and kinematics (gravitational search algorithm), optic systems (ray optimization), or the electromagnetic theory (electromagnetism-like optimization). A recent survey published by Salcedo-Sanz revolves around this specific sort of methods.
In addition to the above-defined categories, many other fresh branches spring up under a wide range of inspirations, such as business tools (brainstorming optimization) or objects (grenade explosion method).

It is also worth mentioning that, besides the monolithic approaches mentioned above, there is an additional trend that prevails at the core of the research activity: the hybridization of algorithms. Since the dawn of evolutionary computation, many efforts have been devoted to the combination of diverse solvers and functionalities, aiming at enhancing some capabilities or overcoming the disadvantages of well-established meta-heuristic schemes. Memetic algorithms (MAs), conceived by Moscato and Norman in the 1980s, are the most prominent example of this trend. Although MAs were initially defined as the hybridization of GAs with local search mechanisms, they rapidly evolved to a broader meaning. Related to SI, it is today straightforward to find in the literature hybridizations of SI meta-heuristic schemes with separate local improvement and individual learning mechanisms. Some examples of this research trend can be found in the literature.
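
As a sketch of what such a hybridization might look like in practice, the snippet below adds a simple local-improvement step (greedy hill climbing around a candidate solution) that a swarm method, such as the PSO sketch given earlier, could apply to its best solution at each iteration. It is a generic illustration of the memetic idea under assumed settings (step size, trial count, and the sphere objective), not the scheme of any particular cited work.

# Illustrative memetic-style refinement: a simple hill-climbing step that a
# swarm algorithm could apply to its best solution each iteration.
import random

def sphere(x):
    return sum(v * v for v in x)

def local_refine(x, obj, step=0.1, trials=30):
    """Greedy local search: keep small random perturbations that improve obj."""
    best, best_val = x[:], obj(x)
    for _ in range(trials):
        cand = [v + random.uniform(-step, step) for v in best]
        val = obj(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# A hybrid scheme would call local_refine(gbest, obj) after each swarm iteration,
# replacing gbest whenever the refined value is better.
refined, refined_val = local_refine([1.0, -2.0, 0.5], sphere)
print(refined_val)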

Finally, up to now, SI methods have been applied to a wide variety of interesting topics over the years. Since it is impossible to gather in this introductory chapter all the applications already addressed by SI paradigms, we refer the reader to some remarkable and highly valuable survey works specially devoted to outlining the application of SI algorithms in specific domains. One such survey is dedicated to geophysical data inversion, while another studies the latest findings in portfolio optimization. An additional interesting work is focused on summarizing the intensive research done on the feature selection problem. Intelligent transportation systems are the crossroads of another collection of works, while other authors have conducted a comprehensive review of SI meta-heuristics for dynamic optimization problems. We acknowledge that the literature focused on all these aspects is immense, which leads us to refer interested readers to the corresponding significant and in-depth surveys.

With reference to scientific production, SI represents the fastest-growing stream in today's related community, with more than 15,000 works published since the beginning of the twenty-first century. Analyzing the renowned Scopus® database, a clear upward trend can be deduced. Specifically, scientific production related to SI has grown at a remarkable rate, from nearly 400 papers in 2007 to more than 2000 in 2018. In fact, the interest in SI has grown at such a pace that the number of scientific publications in this field has exceeded that of other classical streams, such as evolutionary computation, every year since 2012.

Thus, taking advantage of the interest that this topic arouses in the community, the edited book that this chapter introduces gravitates around the prominent theories and recent developments of swarm intelligence methods and their application in all the fields covered by engineering. This material offers a great opportunity for researchers, lecturers, and practitioners interested in swarm intelligence, optimization problems, and artificial intelligence as a whole.

Preface

Unmanned Aerial Vehicles (UAVs), often referred to as drones, have emerged as a transformative
technology with the potential to revolutionize defense operations across the globe. From their humble
beginnings as remote-controlled reconnaissance aircraft to their current sophisticated iterations capable
of autonomous flight and precision strikes, UAVs have rapidly evolved, offering unparalleled
advantages and posing unique challenges to defense establishments worldwide.
The proliferation of UAV technology presents an opportunity for transformational change in defense
strategies, operations, and tactics. This preface explores the prospects for such transformation and
delves into the key factors driving this evolution.

First and foremost, UAVs offer a cost-effective alternative to manned aircraft for a wide range of
missions, including reconnaissance, surveillance, intelligence gathering, and precision strikes. By
eliminating the need for onboard pilots, UAVs reduce operational costs and risk to human life, while
simultaneously enhancing operational flexibility and endurance. This cost-effectiveness opens up new
possibilities for defense establishments to allocate resources more efficiently and adapt to evolving
security challenges.

Moreover, the advancement of autonomy and artificial intelligence (AI) technologies has empowered
UAVs to operate with increased autonomy and intelligence, enabling them to execute complex
missions with minimal human intervention. AI-driven algorithms facilitate autonomous navigation,
target recognition, and decision-making, enhancing the efficiency and effectiveness of UAV operations
while reducing the cognitive burden on human operators.

Additionally, the versatility and adaptability of UAVs make them invaluable assets across a spectrum
of military operations, from conventional warfare to counterinsurgency and counterterrorism. Their
ability to operate in contested or denied airspace, conduct persistent surveillance, and deliver precision
strikes with minimal collateral damage provides defense establishments with a strategic advantage on
the modern battlefield.

However, along with these opportunities come challenges and ethical considerations. The proliferation
of UAV technology raises concerns about privacy, civilian casualties, and the potential for autonomous
weapons systems to undermine human control over military operations. Addressing these challenges
will require careful regulation, ethical guidelines, and international cooperation to ensure that UAVs
are used responsibly and in accordance with international law.

In conclusion, the prospects for transformational change in defense enabled by UAV technology are
vast and promising. By leveraging the capabilities of UAVs in conjunction with other emerging
technologies, defense establishments can enhance their operational effectiveness, achieve strategic
objectives, and adapt to the evolving security landscape in the 21st century.

About the book

Unmanned Aerial Vehicles (UAVs), or drones, have evolved significantly, offering transformative
potential in defense operations globally. This introduction explores the prospects for this
transformation and the driving forces behind it.

UAVs provide a cost-efficient alternative to manned aircraft for various missions, including
reconnaissance and precision strikes, reducing operational costs and risks while enhancing flexibility.
Advancements in autonomy and AI enable UAVs to operate autonomously, improving mission
efficiency and reducing human workload.

Their versatility makes UAVs invaluable in diverse military operations, including contested
environments, surveillance, and precision strikes. However, challenges such as privacy concerns and
ethical considerations arise with their proliferation.

UAV technology presents transformative opportunities in defense, optimizing resource allocation and
operational efficiency. Addressing ethical and regulatory challenges is crucial to ensure responsible
UAV use and compliance with international law. By leveraging UAV capabilities alongside other
emerging technologies, defense establishments can adapt to evolving security dynamics effectively.
