
UAVs: Design, Development and Deployment

Chapter 1: Inception: The Birth of UAV Technology

[mh]Historical Evolution of UAVs


Unmanned Aerial Vehicles (UAVs) include both autonomous (capable of operating without human input)
drones and remotely piloted vehicles (RPVs). A UAV is capable of controlled, sustained level flight and is
powered by a jet, reciprocating, or electric engine. In the twenty-first century, technology reached a point of
sophistication that the UAV is now being given a greatly expanded role in many areas of aviation.

A UAV differs from a cruise missile in that a UAV is intended to be recovered after its mission, while a
cruise missile impacts its target. A military UAV may carry and fire munitions on board, while a cruise
missile is a munition. Loitering munitions are a class of unmanned aircraft intermediate between them.

[h]Austrian incendiary balloon attack on Venice


The earliest recorded use of unmanned aerial vehicles for warfighting occurred in July 1849, when Austrian forces employed a balloon carrier (the precursor to the aircraft carrier) in what was also the first offensive use of air power in naval aviation. The Austrian forces besieging Venice attempted to float some 200 incendiary balloons, each carrying a 24- to 30-pound bomb that was to be released over the besieged city by a time fuse. The balloons
were launched mainly from land; however, some were also launched from the Austrian ship SMS Vulcano.
The Austrians used smaller pilot balloons to determine the correct fuse settings. At least one bomb fell in the
city; however, due to the wind changing after launch, most of the balloons missed their target, and some
drifted back over Austrian lines and the launching ship Vulcano.

[h]World War I
The first pilotless aircraft were built during World War I. From a suggestion that A. M. Low's expertise in early television and radio technology be used to develop a remotely controlled pilotless aircraft to attack the Zeppelins, a remarkable succession of British drone weapons evolved in 1917 and 1918. Designers from Sopwith Aviation and its contractor Ruston Proctor, from de Havilland and from the Royal Aircraft Factory all became involved. These aircraft were all designed to use Low's radio control system, developed at the Royal Flying Corps' secret Experimental Works at Feltham. Of these, Low confirmed that Geoffrey de Havilland's monoplane was the one that flew under control on 21 March 1917. Low is known as the "father of radio guidance systems", and in 1976 he was inducted into the International Space Hall of Fame. John Taylor, alternatively, suggested Low was the "Father of the Remotely Piloted Vehicle".

Soon after, on September 12, the Hewitt-Sperry Automatic Airplane, otherwise known as the "flying bomb", made its first flight, demonstrating the concept of an unmanned aircraft. Such aircraft were intended for use as "aerial torpedoes", an early version of today's cruise missiles. Control was achieved using gyroscopes developed by Elmer Sperry of the Sperry Gyroscope Company.

Later, in November 1917, the Automatic Airplane was flown for representatives of the US Army. This led
the army to commission a project to build an "aerial torpedo", resulting in the Kettering Bug which first flew
in 1918. While the Bug's revolutionary technology was successful, it was not in time to fight in the war,
which ended before it could be fully developed and deployed.
[h]Interwar period
After World War I, three Standard E-1s were converted to drones. The Larynx was an early cruise missile in
the form of a small monoplane aircraft that could be launched from a warship and flown under autopilot; the
Royal Navy tested it between 1927 and 1929. The early successes of pilotless aircraft led to the development
of radio controlled pilotless target-aircraft in Britain and the US in the 1930s. In 1931 the British developed
the Fairey Queen radio-controlled target from the Fairey IIIF floatplane, building a small batch of three, and
in 1935 followed up this experiment by producing larger numbers of another RC target, the "DH.82B Queen
Bee", derived from the de Havilland Tiger Moth biplane trainer. The name of "Queen Bee" allegedly led to
the use of the term "drone" for pilotless aircraft, particularly when they are radio-controlled. During this
period, the U.S. Navy, continuing work that reached back to 1917, also experimented with radio-controlled aircraft. In 1936 the head of a US Navy research group used the term "DRONE" to designate radio-controlled aerial targets. From 1929 the Hungarian scientist Kálmán Tihanyi worked on television guidance for defense applications, building prototypes of a camera for remotely guided aircraft in London for the British Air Ministry and later adapting it for the Italian Navy. In 1929, Tihanyi also invented the first infrared-sensitive (night-vision) electronic television camera for anti-aircraft defense in Britain. The solutions Tihanyi described in his 1929 patent proved so influential that American UAV manufacturers were still using many of them half a century later, into the mid-1980s.

Subsequent British "drones" included the Airspeed Queen Wasp, the Miles Queen Martinet, and the US-
supplied Curtiss Queen Seamew. After WW II these would be replaced by the jet-powered Anglo-Australian
GAF Jindivik.

[h]World War II

Figure: A Radioplane OQ-3 and its launcher, Wright Field, October 1945

Figure: A US Navy OQ-2 shot down by the USS Makin Island during a gunnery exercise off Wakanoura,
Japan (October 1945)

[h]Reginald Denny and the Radioplane


The first large-scale production, purpose-built drone was the product of Reginald Denny. He served with the
British Royal Flying Corps during World War I, and after the war, in 1919, he returned to the United States
to resume his career in Hollywood. Denny was a successful leading man, and between acting jobs he pursued his interest in radio-controlled model aircraft, opening a model shop in the 1930s.

The shop evolved into the "Radioplane Company". Denny believed that low-cost RC aircraft would be very
useful for training anti-aircraft gunners, and in 1935 he demonstrated a prototype target drone, the RP-1, to
the US Army. Denny then bought a design from Walter Righter in 1938, began marketing it to hobbyists as the "Dennymite", and demonstrated it to the Army as the RP-2, then, after modifications, as the RP-3 and RP-4 in 1939. In 1940, Denny and his partners won an Army contract for their radio-controlled RP-4, which became the Radioplane OQ-2. They manufactured nearly fifteen thousand drones for the Army during World
War II.

The true inventor of a radio-controlled aircraft that could fly out of sight was Edward M. Sorensen, as evidenced by his US patents. His invention was the first that allowed a ground terminal to know what the airplane was doing: climbing, altitude, banking, direction, engine rpm and other instrument readings. Without these patents, early radio-controlled aircraft could only operate within visual sight of the ground pilot.

[h]Aerial torpedoes
The US Navy began experimenting with radio-controlled aircraft during the 1930s as well, resulting in the
Curtiss N2C-2 drone in 1937. The N2C-2 was remotely controlled from another aircraft, called a TG-2.
N2C-2 anti-aircraft target drones were in service by 1938.

The US Army Air Forces (USAAF) adopted the N2C-2 concept in 1939. Obsolescent aircraft were put into
service as "A-series" anti-aircraft target drones. Since the "A" code would be also used for "Attack" aircraft,
later "full-sized" targets would be given the "PQ" designation. USAAF acquired hundreds of Culver "PQ-8"
target drones, which were radio-controlled versions of the tidy little Culver Cadet two-seat light civil
aircraft, and thousands of the improved Culver PQ-14 Cadet, a derivative of the PQ-8. On a small scale during World War II, the US also used radio-controlled aircraft in combat as very large aerial torpedoes, including modified B-17 Flying Fortress and B-24 Liberator heavy bombers in the Aphrodite and Anvil operations, though with no great success and with the loss of aircrew including Joseph P. Kennedy, Jr.

The "TDN-1" was an unmanned aerial vehicle that was developed for use in 1940. The TDN was capable of
delivering a 1,000-pound bomb but never saw operational duty.

The Naval Aircraft Factory assault drone "Project Fox" installed an RCA television camera in the drone and
a six-inch television screen in the TG-2 control aircraft in 1941. In April 1942 the assault drone successfully
delivered a demonstration torpedo attack on a US destroyer at a range of 20 miles from the TG-2 control
aircraft. Another assault drone was successfully crashed into a target moving at eight knots. The Navy
Bureau of Aeronautics then proposed a television-assisted remote control assault drone program of 162
control planes and 1,000 assault drones. Disagreements arose within the Navy concerning the relative
advantages of the proposed program for full-scale combat implementation versus a small-scale combat test
with minimum aircraft resource expenditure which might reveal the concept to the enemy and allow
development of countermeasures prior to full production. Assault drones remained an unproven concept in the minds of military planners through the major Allied advances of 1944. Utilization was limited to a four-drone attack on a beached Japanese merchant ship in the Russell Islands at the end of July, followed by the expenditure of 46 drones in the northern Solomon Islands. Two hits and two near-misses were scored on the stationary ship. Several of the later drones failed to reach their targets, but most were effective.

The V-1 flying bomb was the first cruise missile ever built. It was developed at the Peenemünde Army Research Center and first tested in 1942. The V-1 was aimed principally at London and was fired in large numbers, at times exceeding one hundred launches a day. It was launched from a rail system to reach the speed needed to operate its pulsejet engine, had a range of about 250 kilometres, and flew at up to 640 km/h.

McDonnell built a pulsejet-powered target, the TD2D-1 Katydid, later the KDD-1 and then KDH-1. It was
an air-launched cigar-shaped machine with a straight mid-mounted wing, and a vee tail straddling the
pulsejet engine. The Katydid was developed in mid-war and a small number were put into service with the
US Navy.

After the war, the Navy obtained small numbers of another pulsejet-powered target, the Curtiss KD2C Skeet
series. It was another cigar-shaped machine, with the pulsejet in the fuselage and intake in the nose. It
featured straight, low-mounted wings with tip tanks, and a triple-fin tail.

[h]Balloons

Japan launched long distance attacks on the US Mainland using their Fu-Go unmanned balloons. They used
the high-altitude jet stream and a novel ballast system to reach the northwestern US. Though intended to
cause forest fires and widespread panic, their impact was not significant.

[h]Target drone evolution


In the post-World War II period, Radioplane followed up the success of the OQ-2 target drone with another
very successful series of piston-powered target drones, what would become known as the Basic Training
Target (BTT) family (the BTT designation wasn't created until the 1980s, but is used here as a convenient
way to resolve the tangle of designations), including the OQ-19/KD2R Quail and the MQM-33/MQM-36
Shelduck. The BTTs remained in service for the rest of the 20th century. The first target drone converted for the battlefield unmanned aerial photo-reconnaissance mission was an MQM-33 conversion for the US Army in the mid-1950s, designated the RP-71 and later re-designated the MQM-57 Falconer.

The US military acquired a number of other drones similar in many ways to the Radioplane drones. The
Globe company built a series of targets, beginning with the piston-powered KDG Snipe of 1946, which
evolved through the KD2G and KD5G pulsejet-powered targets and the KD3G and KD4G piston-powered
targets, to the KD6G series of piston-powered targets. The KD6G series appears to have been the only one of
the Globe targets to be built in substantial numbers. It was similar in size and configuration to the BTT
series, but had a twin-fin tail. It was redesignated "MQM-40" in the early 1960s, by which time it was
generally out of service.

The use of drones as decoys goes back to at least the 1950s, with the Northrop Crossbow tested in such a
role. The first operational decoy drone was the McDonnell Douglas "ADM-20 Quail", which was carried by
Boeing B-52 Stratofortress bombers to help them penetrate defended airspace.

By the late 1950s combat aircraft were capable of Mach 2, and so faster targets had to be developed to keep
pace. Northrop designed a turbojet-powered Mach 2 target in the late 1950s, originally designated the Q-4
but later given the designation of AQM-35. In production form, it was a slender dart with wedge-shaped
stubby wings, swept conventional tail assembly, and a General Electric J85 turbojet engine, like that used on
the Northrop F-5 fighter.

[h]Nuclear tests
In 1946, eight B-17 Flying Fortresses were transformed by American airmen into drones for collecting
radioactive data. They were controlled at takeoff and landing from a transmitter on a jeep, and during flight
by a transmitter on another B-17. They were used on Bikini Atoll (Operation Crossroads) to gather samples
from inside the radioactive cloud. During test Baker, two drones were flown directly above the explosion;
when the shock wave reached them, both gained height, and the lowest was damaged. The U.S. Navy
conducted similar tests with Grumman F6F Hellcat drones. The B-17 drones were employed in a similar manner in Operation Sandstone in 1948 and in Operation Greenhouse in 1951. In the latter test, several Lockheed P-80 Shooting Star jets modified into drones by Sperry Corporation were also used; however, the complex system resulted in a very high accident rate. One of the B-17 drones, tail number 44-83525, is
currently under restoration at Davis–Monthan Air Force Base.

[h]Reconnaissance platforms
In the late 1950s, along with the Falconer, the US Army acquired another reconnaissance drone, the Aerojet-
General SD-2 Overseer. It had a similar configuration to the Falconer, but featured a vee tail and was about
twice as heavy.

The success of drones as targets led to their use for other missions. The well-proven Ryan Firebee was a
good platform for such experiments, and tests to evaluate it for the reconnaissance mission proved highly
successful. A series of reconnaissance drones derived from the Firebee, the Ryan Model 147 Lightning Bug
series, were used by the US to spy on North Vietnam, Communist China, and North Korea in the 1960s and
early 1970s.

The Lightning Bugs were not the only long-range reconnaissance drones developed in the 1960s. The US
developed other, more specialized reconnaissance drones: the Ryan "Model 154", the Ryan and Boeing
"Compass Copes", and the Lockheed D-21, all of which were more or less cloaked in secrecy.

[h]Soviet Union projects


The USSR also developed a number of reconnaissance drones, though since many programs the Soviets
pursued were cloaked in secrecy, details of these aircraft are unclear and contradictory.

Figure: Yakovlev Pchela-1K on Stroy-P launcher

Known drone systems planned or developed by the former Soviet Union include (in alphabetical order):

 Lavochkin La-17
 Tupolev Tu-123
 Tupolev Tu-141
 Tupolev Tu-143
 Tupolev Voron
 Yakovlev Pchela

[h]Vietnam War: Reconnaissance drones


By late 1959, the only spy plane available to the US was the U-2. Spy satellites were another year and a half away, and the SR-71 Blackbird was still on the drawing board. In such a climate, concerns arose about the negative publicity that would follow the foreseeable capture of US airmen over communist territory. These fears were realized in May 1960, when U-2 pilot Francis Gary Powers was shot down over the USSR. Not surprisingly, work intensified on an unmanned drone capable of penetrating deep into enemy territory and returning with precise military intelligence. Within three months of the downing of the U-2, a highly classified UAV (then called RPV) program was born under the code name Red Wagon.

Just after the incident involving the US Navy destroyers USS Maddox (DD-731) and USS Turner Joy (DD-951), and even before it escalated into the presidential "Tonkin Gulf Resolution" and war with North Vietnam, the USAF issued an order for the UAV units to deploy immediately to Southeast Asia on any available C-130s or C-133s. The first birds (drones) were Ryan 147Bs (AQM-34s) piggy-backed on C-130s; after completing their missions they were parachuted for recovery near Taiwan.

USAF drones (UAVs) of the Strategic Air Command deployed to the Republic of South Vietnam (RVN) as
the 4025th Strategic Reconnaissance Squadron, 4080th Strategic Reconnaissance Wing in 1964. In 1966 the
unit was redesignated as the 350th Strategic Reconnaissance Squadron, 100th Strategic Reconnaissance
Wing.

The Squadron operated Ryan Firebees, launching them from modified DC-130A Hercules transport aircraft,
normally two drones under each wing, each Hercules carrying 4 drones total. The UAVs deployed
parachutes upon completing their missions and were usually recovered by helicopters which were tasked for
those missions.

The North Vietnamese Air Force used U.S. drone flights to practice its aerial combat skills, and although it claimed several successful interceptions, only 6 drones are known to have been shot down by NVAF MiGs. But chasing drones had its drawbacks: one North Vietnamese MiG ran out of fuel, forcing its pilot to eject; a North Vietnamese SAM shot down an NVAF MiG-17 while it was in "hot pursuit of a drone"; and another NVAF MiG-17 shot down a MiG that got into its line of fire while chasing a drone.

Between 1967 and the end of major U.S. involvement in the war in 1972, various models of the 147SC Lightning Bug flew over half of the missions over enemy territory. The average drone survived three missions before being lost. The most famous Lightning Bug was a 147SC drone named "Tom Cat", which flew sixty-eight missions before an enemy gunner finally brought it down over Hanoi on 25 September 1974. From August 1964 until their last combat flight on 30 April 1975 (the fall of Saigon), the USAF 100th Strategic Reconnaissance Wing launched 3,435 Ryan reconnaissance drones over North Vietnam and its surrounding areas, at a cost of about 554 UAVs lost to all causes during the war.

[h]Iran–Iraq War
During the Iran–Iraq War, Iran needed a new reconnaissance platform in addition to the RF-4. Development of the Qods Mohajer-1 began in the early 1980s, and production started in 1985. The drones were operated by the IRGC's Raad brigade in many key battles of the war, including Operation Karbala 5 and Operation Valfajr 8. They took part in 619 missions, taking nearly 54,000 photographs. Iran also armed them with RPGs, as some images show; however, it is unknown whether they were used in combat in that configuration.

[h]Post-war reflections
The usefulness of robot aircraft for reconnaissance had been demonstrated in Vietnam. At the same time, early steps were being taken to use them in active combat at sea and on land, but battlefield unmanned aerial vehicles (UAVs) would not come into their own until the 1980s.

During the early years, target drones were often launched from aircraft, off a rail using solid-fuel rocket-assisted takeoff (RATO) boosters, or by hydraulic, electromagnetic, or pneumatic catapult. Very small target drones can be launched by an elastic bungee catapult. Few target drones have landing gear, so they are generally recovered by parachute or, in some cases, by a skid landing. Beginning in April 1966, and lasting through the end of the war in 1975, the USAF successfully conducted approximately 2,655 Mid-Air Retrieval System (MARS) catches out of 2,745 attempts, primarily using the Ryan 147J model drone.

The most combat sorties flown during the war were made by the Ryan 147SC (military designation AQM-34L), with 1,651 missions. About 211 AQM-34Ls were lost during the war. The highest-mission bird was a 147SC named "Tom Cat", which accomplished 68 combat missions in Vietnam before failing to return on 25 September 1974. Tom Cat was followed by Budweiser (63 missions), Ryan's Daughter (52 missions), and Baby Duck (46 missions).

The largest UAVs in Vietnam were the 147T, TE, and TF (military designations AQM-34P, -34Q, and -34R). These machines were 30 feet long, had 32-foot wing spans, and were powered by 2,800 lb thrust engines. They flew 28, 268, and 216 combat sorties respectively; 6 AQM-34P models never made it home, 23 AQM-34Q drones were lost, and a number of AQM-34R machines were destroyed.

[h]War on Terror
The use of armed drones came into its own with the start of the War on Terror. The global audience was exposed to armed drones and their lethal uses after the September 11, 2001 attacks, when a November 2002 American drone strike killed six people, including Qaed Salim Sinan al-Harethi (aka Abu Ali al-Harithi), the alleged mastermind of the 2000 USS Cole bombing.

The attitude towards UAVs, which were often seen as unreliable and expensive toys, changed dramatically
with the Israeli Air Force’s victory over the Syrian Air Force in 1982. Israel’s coordinated use of UAVs
alongside manned aircraft allowed the state to quickly destroy dozens of Syrian aircraft with minimal losses.
Israeli drones were used as electronic decoys, electronic jammers as well as for real time video
reconnaissance.

The US military is entering a new era in which UAVs carrying SIGINT payloads or electronic countermeasures systems should be in widespread use after 2010, with the UAVs controlled over high-bandwidth data links and relaying data back in real time to ground, air, sea, and space platforms. The trend had been emerging before the American war in Afghanistan began in 2001, but was greatly accelerated by the use of UAVs in that conflict. The General Atomics RQ-1L Predator was the first such UAV deployed, to the Balkans in 1995 and to Iraq in 1996, and it proved very effective in Operation Iraqi Freedom as well as in Afghanistan.

[h]Miniature and Micro UAVs


Another growth field in UAVs is miniature UAVs, ranging from micro aerial vehicles (MAVs) and miniature UAVs that can be carried by an infantryman, to UAVs that can be carried and launched like an infantry man-portable air-defense system.

[h]Endurance UAVs
The idea of designing a UAV that could remain in the air for a long time has been around for decades, but
only became an operational reality in the 21st century. Endurance UAVs for low-altitude and high-altitude
operation, the latter sometimes referred to as "high-altitude long-endurance (HALE)" UAVs, are now in full
service.

On August 21, 1998, an AAI Aerosonde named Laima became the first UAV to cross the Atlantic Ocean, completing the flight in 26 hours.

The idea of using UAVs as a cheaper alternative to satellites for atmospheric research, earth and weather
observation, and particularly communications goes back at least to the late 1950s, with conceptual studies
focused on UAVs with conventional propulsion, or new forms of propulsion using microwave beamed
power or photovoltaic solar cells.

Raytheon suggested what would now be described as a UAV using beamed power, flying at an altitude of 15
kilometers (9.3 mi), as far back as 1959, and actually performed a proof-of-concept demonstration in 1964,
with a transmitting antenna powering a helicopter on a 20-meter (65 foot) tether. The helicopter carried a
rectifying antenna or "rectenna" array incorporating thousands of diodes to convert the microwave beam into
useful electrical power.

The 1964 demonstration received a good deal of publicity, but nothing came of it, since enthusiasm for Earth
satellites was very high and the rectenna system was heavy and inefficient. However, in the 1970s, NASA
became interested in beamed power for space applications, and, in 1982, published a design for a much
lighter and cheaper rectenna system.

The NASA rectenna was made of a thin plastic film, with dipole antennas and receiving circuits embedded
in its surface. In 1987, the Canadian Communications Research Center used such an improved rectenna to
power a UAV with a wingspan of 5 meters (16 feet 5 inches) and a weight of 4.5 kilograms (9.9 pounds), as
part of the Stationary High Altitude Relay Platform (SHARP) project. The SHARP UAV flew in a circle at
150 meters (490 feet) above a transmitting antenna. The UAV required 150 watts, and was able to obtain this
level of power from the 6 to 12 kilowatt microwave beam.
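
As a rough cross-check on these figures, the short Python sketch below computes what fraction of the radiated beam power the UAV actually had to capture and convert; only the 6 to 12 kilowatt beam and the 150 watt requirement come from the account above, the rest is purely illustrative.

# Rough power check for the 1987 SHARP figures quoted above. Only the
# 6-12 kW beam power and the 150 W requirement come from the text.

def required_conversion_fraction(p_required_w: float, p_beam_w: float) -> float:
    """Fraction of the radiated beam power the rectenna must deliver."""
    return p_required_w / p_beam_w

for p_beam_w in (6_000.0, 12_000.0):
    frac = required_conversion_fraction(150.0, p_beam_w)
    print(f"beam {p_beam_w / 1000:.0f} kW -> UAV needs {frac:.1%} of radiated power")

# Prints roughly 2.5% and 1.2%: most of the beam never reaches the aircraft
# or is lost in conversion, which is why a light, cheap rectenna film mattered.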

In the 1980s, new attention was focused on aircraft propelled by solar power. Solar photovoltaic (PV) cells
are not very efficient, and the amount of power provided by the Sun over a unit area is relatively modest. A
solar-powered aircraft must be lightly built to allow low-powered electric motors to get it off the ground.
Such aircraft had been developed in the competition for the Kremer prize for human-powered flight. In the
early 1970s, Dr. Paul B. MacCready and his AeroVironment company took a fresh look at the challenge, and
came up with an unorthodox aircraft, the "Gossamer Condor", to win the Kremer Prize on 23 August 1977.

In 1980, Dupont Corporation backed AeroVironment in an attempt to build a solar-powered piloted aircraft
that could fly from Paris, France, to England. The first prototype, the "Gossamer Penguin", was fragile and
not very airworthy, but led to a better aircraft, the "Solar Challenger". This success led in turn to
AeroVironment concepts for a solar-powered UAV. A solar-powered UAV could in principle stay aloft
indefinitely, as long as it had a power-storage system to keep it flying at night. The aerodynamics of such an
aircraft were challenging, since to reach high altitudes it had to be much lighter per unit area of wing surface
than the Solar Challenger, and finding an energy storage system with the necessary high capacity and light
weight was troublesome as well.
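
A back-of-the-envelope power budget shows why the margins were so tight. All the numbers in the sketch below (irradiance, cell efficiency, wing area, cruise power) are illustrative assumptions chosen to be plausible for 1980s technology; none of them comes from the text.

# Illustrative daytime power budget for a hypothetical solar HALE UAV.
# Every numeric input is an assumption made for this example.

SOLAR_IRRADIANCE_W_M2 = 1000.0   # clear sky, sun near zenith (assumption)
PV_EFFICIENCY = 0.12             # early-1980s class solar cells (assumption)
WING_AREA_M2 = 70.0              # large, lightly loaded flying wing (assumption)
CRUISE_POWER_W = 3000.0          # power needed to stay aloft (assumption)

peak_electrical_w = SOLAR_IRRADIANCE_W_M2 * PV_EFFICIENCY * WING_AREA_M2
surplus_w = peak_electrical_w - CRUISE_POWER_W

print(f"peak electrical power: {peak_electrical_w:.0f} W")
print(f"daytime surplus:       {surplus_w:.0f} W")

# The noon surplus looks generous, but it must also charge a storage system
# heavy enough to cover the entire night, and it shrinks rapidly as the sun
# drops toward the horizon, which is exactly the storage problem noted above.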

In 1983, AeroVironment investigated the concept, which was designated "High Altitude Solar (HALSOL)".
The HALSOL prototype first flew in June 1983. HALSOL was a simple flying wing, with a span of 30
meters (98 feet 5 inches) and a chord of 2.44 meters (8 feet). The main wing spar was made of carbon fiber
composite tubing, with ribs made of styrofoam and braced with spruce and Kevlar, and covered with thin
Mylar plastic film. The wing was light but remarkably strong.
The wing was built in five segments of equal span. Two gondolas hung from the center segment, which
carried payload, radio control and telemetry electronics, and other gear. The gondolas also provided the
landing gear. Each gondola had dual baby-buggy wheels in front and a bicycle wheel in back for landing
gear. HALSOL was propelled by eight small electric motors driving variable-pitch propellers. There were
two motors on the center wing segment, two motors on each inner wing segment, and one motor on each
outer wing segment. The aircraft's total weight was about 185 kilograms (410 pounds), with about a tenth of
that being payload.
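
Using only the dimensions and weight quoted above, and treating the flying wing as a simple 30 m by 2.44 m rectangle (an approximation made for this sketch), the wing loading works out as follows:

# Wing loading estimate for HALSOL from the figures quoted in the text.
# Treating the flying wing as a plain rectangle is an approximation.

SPAN_M = 30.0
CHORD_M = 2.44
TOTAL_MASS_KG = 185.0
PAYLOAD_FRACTION = 0.1      # "about a tenth" of the total weight

wing_area_m2 = SPAN_M * CHORD_M                # about 73 m^2
wing_loading = TOTAL_MASS_KG / wing_area_m2    # about 2.5 kg/m^2
payload_kg = TOTAL_MASS_KG * PAYLOAD_FRACTION  # about 18.5 kg

print(f"wing area:    {wing_area_m2:.1f} m^2")
print(f"wing loading: {wing_loading:.2f} kg/m^2")
print(f"payload:      {payload_kg:.1f} kg")

# For comparison, a small piloted light aircraft carries tens of kilograms per
# square metre of wing; HALSOL's loading is more than an order of magnitude
# lower, which is what the quoted figures imply.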

Nine HALSOL flights took place in the summer of 1983 at the isolated and secret Groom Lake base in
Nevada. The flights were conducted using radio control and battery power, as the aircraft had not been fitted
with solar cells. HALSOL's aerodynamics were validated, but the investigation led to the conclusion that
neither PV cell nor energy storage technology was mature enough to make the idea practical for the time being. HALSOL was put into storage and, as it turned out, would later be resurrected for greater glories, as discussed below. For the moment, though, it remained a complete secret.

In the mid-1980s, not long after HALSOL went into mothballs, NASA awarded a contract to Lockheed to
study a solar-powered HALE UAV named the "Solar High Altitude Powered Platform (Solar HAPP)" for
missions such as crop monitoring, military reconnaissance, and communications relay. The Solar HAPP
effort did not result in a prototype. Solar-powered HALE UAVs were a concept a bit ahead of their time, and
early practical work on endurance UAVs focused on more conventional concepts.

[h]Amber
In 1984, DARPA issued a $40 million US contract to Leading Systems Incorporated (LSI) of Irvine,
California, to build an endurance UAV named "Amber". Amber was to be used for photographic
reconnaissance, ELINT missions, or as a cruise missile. The US Army, Navy, and Marine Corps were
interested, and DARPA eventually passed control over to the Navy.

Amber was designed by a team under Abraham Karem of Leading Systems. Amber was 4.6 meters (15 feet)
long, had a wingspan of 8.54 meters (28 feet), weighed 335 kilograms (740 pounds), and was powered by a
four-cylinder liquid-cooled piston engine providing 49 kW (65 hp), driving a pusher propeller in the tail. The
wing was mounted on a short pylon above the fuselage. The cruise missile version of Amber would discard
the wing when it made its final dive on a target.

Amber had an inverted v-tail, which would prove a popular configuration for a pusher UAV, since it
protected the propeller during takeoff and landing. The airframe was made of plastic and composite
materials, mostly Kevlar, and the UAV had retractable stiltlike tricycle landing gear to ensure propeller
clearance. Amber had a flight endurance of 38 hours or more.

The initial contract specified three "Basic Amber" A-45 cruise missile prototypes and three B-45
reconnaissance prototypes. Initial flights were in November 1986, with long-endurance flights the next year.
Up to this time, Amber was a deep secret, but in 1987 details of the program were released.

Amber was only one of a number of different US UAV programs in planning at the time, and the US
Congress became impatient with what was perceived as confusion and duplication of effort. Congress
ordered a consolidation of UAV programs in 1987, freezing funding until June 1988, when the centralized
Joint Program Office for UAV development, mentioned earlier, was established. Amber survived the
consolidation of UAV efforts into JPO, resulting in the first "Amber I" reconnaissance UAV, which first
flew in October 1989. Seven Amber Is were built, and were used in evaluations along with Basic Ambers
through 1990. However, funding for reconnaissance assets was being cut, and in 1990 the Amber program
was killed. LSI was faced with bankruptcy, and was bought out by General Atomics in 1991, who would
later develop the Amber into an operational platform, the MQ-1 Predator.
[h]U.S. domestic use
The U.S. Customs and Border Protection agency has experimented with several models of UAVs, and has
begun purchasing a fleet of unarmed MQ-9 Reapers to survey the U.S. border with Mexico. "In more than
six months of service, the Predator's surveillance aided in nearly 3900 arrests and the seizure of four tons of
marijuana", border officials say.

On May 18, 2006, the Federal Aviation Administration (FAA) issued a certificate of authorization which
will allow the M/RQ-1 and M/RQ-9 aircraft to be used within U.S. civilian airspace to search for survivors
of disasters. Requests had been made in 2005 for the aircraft to be used in search and rescue operations
following Hurricane Katrina, but because there was no FAA authorization in place at the time, the assets
were not used. The Predator's infrared camera with digitally enhanced zoom has the capability of identifying
the heat signature of a human body from an altitude of 10,000 feet, making the aircraft an ideal search and
rescue tool.

According to a 2006 Wall Street Journal report, "After distinguished service in war zones in recent years,
unmanned planes are hitting turbulence as they battle to join airliners and weekend pilots in America's
civilian skies. Drones face regulatory, safety and technological hurdles – even though demand for them is
burgeoning. Government agencies want them for disaster relief, border surveillance and wildfire fighting,
while private companies hope to one day use drones for a wide variety of tasks, such as inspecting pipelines
and spraying pesticides on farms."

Recreational drones became popular in the United States in 2015, with approximately one million expected
to be sold by the end of the year.

[h]Drones Over Canada


The Government of Canada is considering the purchase of UAVs for Arctic surveillance. It wants to buy at least three high-altitude unmanned aerial vehicles for potential Arctic use, and to modify the existing Global Hawk drone, which can operate at 20,000 metres, to meet the rigours of flying in Canada's Far North.

[h]Small-player use
At one time the cost of miniature technology limited the use of UAVs to larger and better-funded organizations such as the US military, but as the cost of UAV technology has fallen, simpler vehicles and monitoring equipment have become available to groups that previously would not have had the funding to use them. Beginning in 2004, it was reported that the Lebanese Shi'ite militia organization Hezbollah
began operating the Mirsad-1 UAV, with the stated goal of arming the aircraft for cross-border attacks into
Israel. According to one blogger, however, the drone was actually an Iranian Ababil-2 loitering munition.
Iranian-backed militias across the Middle East now operate advanced UAVs, including the Houthis in
Yemen who used Samad drones in an effective attack on Aramco facilities in Saudi Arabia in 2019.

[mh]The Rise of Military Applications


[h]New Technologies and Decision-Making for the Military
While the qualities required of a leader to be a good commander and a good decision-maker have remained constant throughout human history in the face of the complexity of battle, the leader of tomorrow will have to adapt to the use of new technologies. These will allow him to be better informed and consequently more reactive, keeping the initiative in the manoeuvre, but also to carry his action further and to delegate certain tasks to the machines at his disposal. Such adaptations are not trivial, because they call existing military doctrines into question and can challenge the very principle of hierarchy that gives armies their strength. The military must therefore learn, through training, how to use these new technologies, but also how to keep control of new systems that integrate a certain degree of autonomy. Above all, the military leader must preserve the essence of his identity: to give meaning to military action and to command in order to achieve his goals.

Primarily, a military leader must command, which implies legitimate decision-making authority and responsibility for the soldiers entrusted to him and for the mission he must accomplish.

Command is the very expression of the leader's personality. It depends on the tactical situation, which includes the risks and the obligations of the mission to be carried out.

Being a good military leader implies several additional qualities: being demanding, being competent, having high moral strength in the face of the difficulties of war, having confidence in one's own abilities and in the means made available, taking responsibility for one's decisions and in turn giving responsibility to one's subordinates, and finally being able to decide in complete freedom.

He is the one who decides and commands. He is the one to whom all eyes turn in difficulty, but the exercise of his command requires a demanding discernment between reflection and action.

The military world is very demanding and dangerous. Having to take into account the danger for his soldiers,
the danger for himself and the responsibility of the mission he has been given, the military leader should:

 discern in complexity (deploy true situational intelligence);
 decide in uncertainty (have the strength of character to accept calculated risks);
 act in adversity (unite energies, encourage collective action and make conscious decisions).

This forms the basis of the educational project of the Saint-Cyr Coëtquidan military academy, and perfectly
synthesises the objectives of a training system adapted to the officers of the 21st century. However, this
initial training must take into account the technological evolutions allowing military decision-makers of
today and tomorrow to reduce the fog of war.

[h]Military leader is accountable for the decision


What is decision-making for a military officer? It consists of choosing between different possibilities and opting for one conclusion among the possible solutions, having analysed all the effects that this decision implies.

In order to decide, the leader must master several areas: a perfect knowledge of the mission entrusted to him, of the means at his disposal and of his troops. Nothing is worse than indecision when the lives of soldiers are in danger. His decision must call on moral and intellectual courage.

“The unknown is the governing factor in war,” said Marshal Foch. However, the role of the leader is above
all to be able to adapt and modify his analysis and the behaviour of his troop in order to respond to
unforeseen situations. This ability to adapt is essential to maintain the freedom of action that allows for
initiative on the battlefield, and to be able to innovate according to the constraints.
The leader must show discernment in action, appreciating facts according to their nature and their true value. This implies being cautious in his choices and aware of their scope.

Finally, the leader must be lucid and control his stress, pressure and emotions, all in order to preserve his “esprit d’initiative”.

[h]Information, the key to victory


To meet all these requirements, information is one of the major foundations of the exercise of the chief's command. It is the keystone of all military action, enabling him to keep the initiative and maintain supremacy on the ground.

In fact, information allows the chief to plan the military action, taking into account the means at his disposal,
ensuring the transport logistics, and confronting the possible friendly and enemy modes of action in order to
determine the manoeuvre that he will conduct.

The management of the information received is reflected “en conduite” by the regular rhythm of reports and
situation updates to higher or subordinate levels, in order to anticipate threats and maintain a capacity to
react as quickly and efficiently as possible in the face of adversity or any obstacle hindering the manoeuvre.

For the decision-making process to run smoothly, the information must be updated regularly because the
situation can change very quickly and the leader will have to adapt his analysis accordingly.

Thus, there is no single decision of the military commander in operation, but a continuum of decisions, some
of which are almost routine or implicit, while others require extensive analysis. Some decisions are
ultimately critical, as they can result in a favourable or tragic outcome to a given situation.

[h]What is fundamentally changing


This chapter addresses the change in the art of decision-making for a military officer, implied by the use of
some technologies that will gradually invade the battlefield.

Indeed, some technologies will allow the leader to be better informed, but also to be more reactive in order
to keep the initiative. Their management requires a mastery of new data management processes resulting
from the digitisation of the battlefield, in particular the possible influx of operational data from the field and
their synthesis for the military leader.

[h]A more accurate and faster remote information acquisition


The one who sees further and before the others is the one who dominates the military manoeuvre. This is
what enables him to gain a tactical advantage because the one who acts first with determination is most often
the one who wins. Moreover, the ability to see further and more accurately thanks to remote sensors or
cameras brings an undeniable advantage to the military leader, enabling him to react faster than his enemy.

Today, distances are shrinking, and information can be transmitted in a few milliseconds to any point on
the planet, provided that the sensor capturing the information is available. This is done through cyberspace
which must be secured for military forces so that they can be sure of the veracity of the data they use. This
immediacy of information is a new parameter in the art of command. It forces the leader to make a quick
analysis and to be reactive in his response.
It also raises the question of his capacity to process the information when there is too much data. In that case, it will be necessary to process the data automatically as soon as it is received by the systems, extracting only the relevant information. And if those systems are unable to do this, the leader will have to be assisted in analysis and decision-making by a third party, which may itself be a machine. This raises the question of how to control the decision aids on which he must rely.

[h]Act remotely to remove the danger and increase the area of action
One of the major military revolutions that began at the start of the 21st century in the Iraq and Afghanistan
wars is the robotisation of the battlefield. It is unavoidable and will gradually spread across the battlefield, because the use of unmanned robots (UAVs, USVs, UUVs and UGVs) offers many advantages to the armies that use them on the ground.

Firstly, it avoids exposing our own combatants, which is all the more important in modern armies where combatants are a scarce and expensive resource to train.

Secondly, it extends the area of perception and action of a military unit. In a sense, they are the “5 deported
senses” of the fighter, i.e. his eyes (camera), his ears (reception), his mouth (transmission), his touch
(actuator arm) and even his sense of smell and taste (detection of CBRN products).

As tools placed at the disposal of the combatant, robots will allow him to control the battlefield by deploying effectors or sensors remotely, giving control over the various dimensions and spaces of the battlefield: on land, in the air, at sea and even in the electromagnetic spectrum. They will thus progressively move the combatant back from the contact zone, to keep him away from the dangerous area and reduce the risks, or allow him to go in with the maximum of means at his disposal, thereby significantly reducing the vulnerability of the combatants.

Finally, the ability to act remotely while preserving the lives of his men will allow the leader to act even before the enemy can deploy his forces for his manoeuvre.

Robotic systems will thus become new tactical pawns that the military leader will now use to prepare his
action, to facilitate his progress, allowing him new effects on the enemy, the terrain, the occupation of space
and on the rhythm of the action. Especially since these machines will eventually be more efficient, more
precise and faster for specific tasks than a human being can be. This is currently evident in industrial
manufacturing and assembly plants.

[h]The disruption of autonomy


This military revolution of deporting action with robotic systems is accompanied by another, no less
disruptive, that of the autonomy of these systems. Autonomy will allow for omnipresence of action in the
area, 24 hours a day, subject to energy sufficiency. It will allow the machines to adapt to the terrain and its
unforeseen events in order to carry out the mission entrusted to them by the military leaders. Autonomous systems will be able to react to complex situations by adapting their positioning strategy, and even adapting the effects they produce on the battlefield. For example, this may be an automatic reorganisation of the
swarm formation adopted by a group of robots to follow an advancing enemy, followed by the decision to
block an axis of progression with smoke or obstacles to hinder enemy progression.

However, autonomy is not fundamentally new for a leader. A section or platoon leader has combat groups under his command, and each group leader who receives a mission has full autonomy to carry it out. What is new is that while robots are tactical pawns at the disposal of the combatant, and while they can have a certain form of autonomy in the execution of their action, they do not have, and will never have, awareness of their action or the capacity for discernment that characterise the human being. This opens up a number of ethical questions regarding the opening of fire that will not be addressed in this chapter.
[h]The contribution of new technologies to military decision-making
These upheavals are based on technologies that create new opportunities in military decision-making
processes.

[h]All deployed systems are interconnected


The digitisation of the battlefield stems from the constant trend towards the integration of electronic
components in all future military equipment, which, coupled with a means of transmission, allow for their
interconnection and the dissemination of the information collected. It affects all systems deployed in the
field (from weapons systems to military vehicles), right down to the dismounted combatant who, just like
any civilian with a smartphone, will be connected to the great digital web of the battlefield and therefore
traceable and reachable. Just like every individual in the civil society, every actor on the battlefield is
traceable and able to communicate.

[h]Enriched information
As explained above, technology will enable faster detection of threats on the battlefield. Moore's Law has sometimes been used to describe the increase in the capabilities of digital cameras, according to a ratio of "twice as far", "twice as cheap" or "twice as small" every 3 years. In fact, each innovation allows one to see further with a smaller footprint. Digital zoom allows high magnification, but at the cost of algorithmic processing of the image, which reduces definition quality. It is often paired with optical zoom, which consists of adapting the focal length to the target one wants to look at. Cameras can now merge data from multiple sensors of different types: in particular, thermal imaging, which allows one to see a large fraction of the spectrum and to view and measure the thermal energy emitted by equipment or a human, to which can be added light-intensification processes that amplify the existing residual light to recreate an image usable by the human eye in low-light conditions.
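
The "twice as far every three years" rule of thumb quoted above compounds quickly. The sketch below simply applies that stated doubling period over a few sensor generations; the 1 km starting range and the 15-year horizon are arbitrary choices made for illustration.

# Compounding the "twice as far every 3 years" rule of thumb quoted above.
# The baseline range and the time horizon are illustrative only.

DOUBLING_PERIOD_YEARS = 3
BASELINE_RANGE_KM = 1.0   # hypothetical detection range of the first generation

for years in range(0, 16, 3):
    factor = 2 ** (years / DOUBLING_PERIOD_YEARS)
    print(f"after {years:2d} years: x{factor:5.1f} -> {BASELINE_RANGE_KM * factor:6.1f} km")

# After 15 years the same rule implies a 32-fold improvement, which is why
# each sensor generation reshapes what "seeing further" means on the ground.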

All of this fused data can enrich the field of vision of the combatant by superimposing additional data that
completes his knowledge of the tactical situation. This is the principle of augmented reality.

[h]The immediacy of information processing


If data acquisition and transmission are possible, the information must nevertheless be processed. However, processing it requires easily accessible hardware and software resources offering the necessary computing capacity to react as quickly as possible, particularly in situations where the analysis time is too short for a human to do it by himself. Embedded computer software can provide such capacity at the core of deployed systems, but this capability can also be moved to a secure cloud, which can be either a tactical cloud, i.e. a cloud deployed on the battlefield in support of the manoeuvre, or a more distant, sovereign and highly secure cloud.

[h]To the detriment of human decision-making


This immediacy of information processing allows a hyper-reactivity of systems, foreshadowing the concept of "hyperwar" formulated by General John Allen and Amir Husain in 2019, which puts forward the idea that the advent of hyperwar is the next fundamentally transformative change in warfare.

“What makes this new form of warfare unique is the unparalleled speed enabled by automating decision-
making and the concurrency of action that become possible by leveraging artificial intelligence and machine
cognition… In military terms, hyperwar may be redefined as a type of conflict where human decision-
making is almost entirely absent from the observe-orient-decide-act (OODA) loop. Consequently, the time
associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these
developments are many and game changing”.

[h]A support for information processing


For information processing, the volume of data produced is increasing exponentially, and the accuracy and granularity of the data produced by sensors keep growing. This trend will become more and more pronounced over time.

Military experts usually process observation data retrieved from the battlefield by satellites, reconnaissance aircraft, drones or unattended ground sensors. However, as human resources are scarce and the volume of data is constantly increasing, it will be necessary to delegate the processing of this data to AI algorithms in support of the human being, at the risk of otherwise not being able to process it all.

On the ground, the deployed combatant will be increasingly burdened cognitively by the complexity of the systems he operates and the amount of information he must process. It will be vital to automate the processing of certain information in order to relieve him, so that only what is really necessary is presented, and to do so in an extremely ergonomic way. This requires defining which data can be subjected to automated processing, and up to what hierarchical level that processing can be automated.

[h]The contribution of artificial intelligence


Automated management of routine, repetitive and time-consuming procedures could emerge. In a headquarters, for example, report management and the automatic production of summaries adapted to the level of command would immediately make the chain of command more fluid. The AI could take the form of a dashboard stimulating the reflection of the commander and his advisers by dynamically delivering relevant information and updated assessments.

During operational preparation, depending on the tactical situation, the leader must confront the possible modes of action he envisages with the reference enemy situation and the possible enemy modes of action. Very often he does not have the time to test his action against several enemy modes of action, and he only anticipates certain contingencies that he considers probable. Artificial intelligence could be more exhaustive, confronting his plan with more possible enemy modes of action and thus presenting a more complete analysis of the options to the military leader, who could then decide accordingly.

[h]Reduction of the OODA decision cycle


The technologies listed above have a direct effect on the OODA decision cycle, which will be profoundly impacted by them.

This concept was defined in 1960 by an American military pilot by the name of John Boyd to formalise the decision cycle in air combat. It has since been used to schematise any decision cycle. The author will use it here in the light of the potential offered by the technologies detailed above.
Figure. OODA cycle time reduction: A better reactivity.
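
To make the figure's point concrete, the sketch below compares a fully manual OODA cycle with one in which observation, orientation and action are machine-assisted while the decision itself stays with the commander. Every duration is an assumption chosen only to make the compression visible; none is a measured value.

# Illustrative OODA cycle comparison. All durations are assumptions.

MANUAL_S = {"observe": 30.0, "orient": 60.0, "decide": 20.0, "act": 10.0}
ASSISTED_S = {
    "observe": 0.5,   # automated detection on the sensor feed
    "orient": 2.0,    # machine pre-analysis, human confirmation
    "decide": 10.0,   # the human commander still decides
    "act": 1.0,       # near-real-time tasking of a remote effector
}

def cycle_time(phases: dict) -> float:
    return sum(phases.values())

manual_s = cycle_time(MANUAL_S)
assisted_s = cycle_time(ASSISTED_S)
print(f"manual cycle:   {manual_s:5.1f} s")
print(f"assisted cycle: {assisted_s:5.1f} s ({manual_s / assisted_s:.0f}x faster)")

# Note that the "decide" phase dominates the assisted cycle: automation has
# compressed everything around the human decision, not replaced it.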

[h]Observe: a better detection


“Seeing without being seen” is essential in military operations and remains a common adage. Technology is helping, with the extended ranges made possible by long-range cameras and their deployment on remote robotic systems. It can now also help to overcome several natural detection constraints such as night, fog or walls.

Moreover, digitised systems can operate 24 hours a day with great consistency, where humans are subject to fatigue and inattention, thereby avoiding the risk of missing information.

For surveillance or patrol missions, where human resources are often lacking, the leader can delegate to
systems the analysis of images of the area for the detection of movements and the potential presence of
enemies. It should be noted that this detection should filter out false alarms as much as possible, such as the
movement of leaves in the trees when the wind picks up.
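
A minimal sketch of the kind of filtering described above is given below: raw frame differencing flags any change, including wind-blown foliage, so a candidate detection is only reported when sizeable motion persists over several consecutive frames. The thresholds, window lengths and synthetic test data are all illustrative assumptions; a real system would also check that the moving region is spatially coherent and drifts in a consistent direction.

import numpy as np

# Illustrative persistent-motion filter for a fixed surveillance camera.
# All thresholds below are arbitrary choices made for this sketch.

DIFF_THRESHOLD = 25       # pixel intensity change counted as "motion"
MIN_ACTIVE_PIXELS = 50    # ignore tiny flickers such as isolated leaves
PERSISTENCE_FRAMES = 5    # motion must last this many consecutive frames

def detect_persistent_motion(frames: list) -> bool:
    """frames: list of grayscale images (2-D uint8 arrays) from a fixed camera."""
    consecutive = 0
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        active_pixels = int((diff > DIFF_THRESHOLD).sum())
        consecutive = consecutive + 1 if active_pixels >= MIN_ACTIVE_PIXELS else 0
        if consecutive >= PERSISTENCE_FRAMES:
            return True      # sustained motion: worth alerting the operator
    return False             # transient flutter: filtered out

# Synthetic check: a static scene produces no alert.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
static_frames = [scene.copy() for _ in range(20)]
print(detect_persistent_motion(static_frames))   # False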

[h]Orient: a better analysis


Remote vision will make it possible to identify a potential target from afar, to discriminate it (is the target a combatant?) and to characterise its behaviour (is it hostile or not?). If these criteria are met, the object becomes a potential target that can easily be geolocated, and this information is then transmitted to the decision-making levels. The gain here is that of anticipating the analysis for better decision-making.

The leader will also be able to rely on the automatic processing of data acquired within the digital
environment of the battlefield. Faced with the potential ‘infobesity’ of the battlefield, artificial intelligence
will enable massive data processing, subject to the availability of a computing capacity directly embedded in
remote robotic platforms, or by remote processing of information via long-distance communications. It will
allow constant monitoring of the analysis of captured images or sounds, a task that the best human experts
can only supervise because they are subject to fatigue and inattention. This is particularly the case with
satellite images or images captured by surveillance drones, which can monitor an area 24 hours a day.
Finally, it will also enable the detection of weak signals that would be invisible to humans, by correlation
between several distinct events, or by cross-checking.
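
The cross-checking idea can be illustrated with a very small sketch: detections that are individually weak are only reported when independent sensors agree in time and space. The sensor names, coordinates and the 30-second and 500-metre windows below are illustrative assumptions.

from itertools import product

# Illustrative correlation of weak detections from two independent sensors.
# Timestamps are in seconds, positions in metres; all values are made up.

TIME_WINDOW_S = 30.0
DISTANCE_WINDOW_M = 500.0

acoustic_hits = [(1200.0, (2300.0, 4100.0)), (5400.0, (9800.0, 1200.0))]
radar_hits = [(1215.0, (2450.0, 4050.0)), (7000.0, (3000.0, 3000.0))]

def coincide(a, b) -> bool:
    (t_a, (x_a, y_a)), (t_b, (x_b, y_b)) = a, b
    distance = ((x_a - x_b) ** 2 + (y_a - y_b) ** 2) ** 0.5
    return abs(t_a - t_b) <= TIME_WINDOW_S and distance <= DISTANCE_WINDOW_M

correlated = [(a, b) for a, b in product(acoustic_hits, radar_hits) if coincide(a, b)]
for a, b in correlated:
    print(f"correlated event near t={a[0]:.0f} s: acoustic {a[1]}, radar {b[1]}")

# Only the first pair coincides; the two isolated hits are left for a human
# analyst to review later rather than raised as alerts.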

There remain two essential components of situation analysis that a machine can never integrate: firstly, instinct and intuition, which a machine cannot have and which are the fruit of a lifetime of human experience; and secondly, the transcendence of military action, which only a metaphysical dimension in the literal sense can provide.

[h]Decide: a better reaction


The military commander is the decision-maker for military action. It is therefore up to him to take the
decision according to the information at his disposal. He can of course rely on a deputy or on operational
advisers who help him analyse the situation, if time permits.

For example, France has intervened in Mali and the Sahara as part of the Barkhane military operation to combat Salafist jihadist armed groups infiltrating the entire Sahel region. Launched on 1 August 2014, this operation replaced operations Serval and Épervier. The following scenario is fictitious: an armed Reaper drone of the French army flies over a region of the Malian desert at night, and its cameras (incorporating AI for automatic motion-detection processing of the captured images) detect a suspicious movement. The drone's sensor operator is alerted and zooms in on the area, detecting via the infrared camera a jihadist 4x4 occupied by armed personnel. The vehicle is moving towards a village 20 kilometres away. Mounting an operation with Special Forces is not possible because they are not in the area, and there is a great risk that the occupants of the 4x4 will disperse once they reach the village. The legal advisor on duty quickly confirms that the drone may fire on the target because no collateral damage is possible in this desert area. The head of the operation decides to give the order for the drone to fire.

This example clearly shows the drastic reduction in the OODA decision cycle offered by the new technologies: the chief is informed as soon as possible through automatic detection of the suspicious movement of an enemy vehicle. He confirms with his image operator the positive identification (PID) of the target as an enemy. He then reports it to his hierarchy and receives the order to open fire. He can thus, in compliance with IHL, open fire from a distance. The enemy has not even spotted him.

There are still situations where time is critical and the leader will not have time to make a decision due to the
rapidity of the attack. The automation of response processes then becomes a possible option, i.e. he can
delegate to a machine the possibility of giving an appropriate response to a situation by itself. This is already
the case with missiles or ballistic threats, which require armies to use automatic systems to counter them.
This requires automatic systems that are faster and more precise than human beings (e.g. coupling weapons
and radar). Tomorrow, faced with future systems that will develop unpredictable trajectory strategies (enemy
missiles with AI), faced with saturating threats that risk overwhelming our defences, faced with swarms of
offensive robots, our systems will have to adapt in real time to counter the threat. Only a certain autonomy of the defensive systems will make it possible to face them, an autonomy which will have to remain under the control of the leader who has these systems at his disposal.

[h]Act: a quicker and more accurate reaction


A quicker reaction: A man reacts in a few seconds, the machine in a few milliseconds or less. Where a human thinks in a few seconds at best, the machine will analyse the parameters in a few milliseconds and propose a response in near real time.

A more accurate action: A human shooter who moves, breathes and shakes is less accurate than a machine
that does not move, breathe or shake because it is not subject to emotion. Precision in action will therefore
increasingly be the prerogative of the machine.
The outcome of a fight or a counter-measure may depend on these factors of ten, one hundred, or even one thousand.

[h]Technology as a decision aid for the leader


Military decision-making is centred on the military leader, because he is at the heart of the command
situation. He takes responsibility for military action, a mission given to him by the legitimately elected
political power.

The leader must therefore control the decisions taken within the framework of military action because he is
the guarantor and he assumes the consequences.

What lessons can one learn from the opportunities offered by new technologies for military decision-making
and the possible resulting changes in the art of command?

[h]To reduce the “fog of war”


The leader must rely on technology to reduce the uncertainty and fog of war. It will allow him to be more
aware of his tactical situation by searching for intelligence. Furthermore, it will enable him to delegate to
machines the management of repetitive tasks that do not require constant situational intelligence.

Depending on the circumstances and if he has time to reflect, the digitisation of battlefield information will
also allow the leader to replay certain possible scenarios before taking a decision. Finally, it will give him
the possibility to select the information he has received that he deems important, to view it several times
(especially if the information is imprecise) before making a decision.

[h]For decision support


A digital aid will be welcome for synthesising the information from the multiplying digital actors on the ground with whom the leader is in contact, or whom he must command or coordinate.

One of the consequences of the digitization of the battlefield is that it may lead to information overload for
the leader who is already very busy and focused on his tasks of commanding and managing. It is already
accepted in the military community that a leader can manage a maximum of seven different information
sources at the same time, and even less when under fire.

Delegating is one way to avoid cognitive overload. Thus, one possible solution is to create a “digital
assistant” who can support the leader in the information processing steps.

This digital deputy would be an autonomous machine that assists the leader in filtering and processing information, thereby supporting him in the decision-making process.

Nevertheless, the leader will have to fight against the easy way out, take a step back, allow himself time to
reflect, and reason with a critical sense when faced with machines that will think for him. This process will
help him fight against a possible inhibition of human reasoning. Artificial intelligence does not mean
artificial ignorance if it is used as an intellectual stimulant, although it can have this flaw.

[h]For an optimization of his resources


The chief will be able to entrust machines with the execution of certain time-consuming and tedious tasks,
such as patrols or the surveillance of sectors, and thus conserve his human resources for missions where they
will have a higher added value.
The same applies to missions that require reactivity and precision, especially if there is a need to be
extremely quick to adapt to the situation. For example, it will be useful in the case of saturating threats,
where targeted destruction or multi-faceted and omnipresent threats such as swarms of drones must be dealt
with.

[h]But technology as a decision aid subject to control and confidence


Delegation of tasks to increasingly autonomous machines raises the question of the place of humans who
interface with these systems and should stay in control.

[h]The leader must always control execution of an autonomous system


First of all, the military will not use equipment or tools that they do not control, whatever the army in the world. Every military leader must be in control of the military action and, for this purpose, must be able to
control the units and the means at his disposal. He places his confidence in them to carry out the mission,
which is the basis of the principle of subsidiarity.

For this reason, it is not in his interest to have a robotic system that governs itself with its own rules and
objectives. Moreover, this system could be disobedient or break out of the framework that has been set for it.
Thus, machines with a certain degree of autonomy must be subordinate to the chain of command, and
subject to orders, counter-orders, and reporting.

[h]Operators must have confidence when delegating tasks to an autonomous system


The military will never use equipment or tools that they do not trust. This is the reason why a leader must
have confidence in the way a machine behaves or could behave. For that, military engineers should develop
autonomous systems capable of explaining their decisions.

Automatic systems are predictable; thus, one can easily anticipate how they will perform the task entrusted to them. However, this becomes more complex with autonomous systems, especially self-learning systems, where one may well know the objective of the task to be performed by the machine but has no idea how it will go about it.
This raises a serious question of trust in this system. As an example, when I ask an autonomous mowing
robot to mow my lawn, I know my lawn will be mowed, but I do not know exactly how the robot will
proceed.

The best example to focus on is the set of expectations the soldier has of artificial intelligence embedded in autonomous systems.

AI should be trustworthy. This means that adaptive and self-learning systems must be able to explain their reasoning and decisions to human operators in a transparent and understandable manner;

AI should be explainable and predictable: one must understand the different steps of reasoning carried out by
a machine that delivers a solution to a problem or an answer to a complex question. For this, a human-
machine interface (HMI) that explains its decision-making mechanism is needed.

One must therefore focus on more transparent and personalised human-machine interfaces for the operator
and the leader.

[h]Tunnel effect
Easy access to information or possible information overload both favour a possible tunnel effect. This effect,
due to a sudden rise in adrenaline, causes a failure in the analysis of signals and data received by a brain that
is no longer able to step back and analyse the situation. For the military, this tunnel effect is clearly the
enemy of the soldier who has to concentrate on a screen, on a precise task, forgetting to look at the enemy
threat around him and thus exposing himself seriously. It is also the enemy of the leader who, because he
focuses on a piece of information that he finds crucial, becomes unable to step back and fulfil his role as a
leader, which is to take into account the globality of the military action, and not one of its particular aspects
highlighted by this information. Too much information should not prevent the commander from stepping
back and reflecting.

The question of the gender of the soldier operator may be an avenue of exploration here, as women may
have the capacity to manage several tasks simultaneously better than men.

[h]Inhibit the action


Easy access to information encourages another possible flaw in decision-making: that of not deciding anything until all the information is at one's disposal. This flaw may well become a major concern in the future. With his responsibility at stake, the soldier may hesitate until the last moment to take a decision because he lacks information that he hopes to obtain by technological means. This is the death of daring and of manoeuvre by surprise, which often secure victory for the leaders who dare to practise them.

[h]AI will influence the decision of the leader


Stress is an inherent component of taking responsibility. It is common for a military leader to have the
feeling of being overwhelmed in a complex (military) situation. In such contexts, the leader will most often
be inclined to trust an artificial intelligence because it will appear to him, provided he has confidence in it, as
a serious decision-making aid not influenced by any stress, having superior processing capabilities, and able
to test multiple combinations for a particular effect.

[h]Too much predictability in operational decision-making patterns


The modelling of human intelligence by duly validated but very fixed algorithmic processes can lead to the
inhibition of human intelligence. In particular, there will be a risk that military thinking will be locked into
decision-triggering software: in other words, a formatting of military thought into controlled and controllable decision-making processes, shaped by the need to respect the rules of engagement and international rules, particularly those governing the decision to open fire. These processes will certainly be validated, but once activated they may become completely rigid technological gems, admirably designed but incorporating doctrinal biases that cannot be challenged in the face of unpredictable enemy behaviour. By the time these systems and their uses are adapted, it will be too late and the battle will be lost.

Another major risk is the predictability of the behaviour of these systems by the enemy. As these systems are
known, their vulnerability will also be known. It will therefore be easy for the enemy to circumvent them by
manoeuvres combining cunning and opportunity, with victory only reflecting the inability of these highly
technical systems to adapt to an unpredictable or simply illegal conflict.

The leader must therefore anticipate these pitfalls and use the means at his disposal with intelligence. On
these aspects, the French army has developed the concept of the “major effect” to be achieved. This major effect conceptualises the way in which the leader intends to seize the initiative in the execution of his mission, and it makes it possible to adapt the means and methods of execution to the final effect sought.

[h]A principle of subsidiarity undermined


As a corollary to the extraordinary potential of the digitisation of the battlefield, namely to allow all levels of
the hierarchy to access information in real time and simultaneously, there is also a new risk at every level of
the military hierarchy: that of the leader having the possibility of directly accessing ‘target information’, thus
breaking the principle of subsidiarity, which requires him to delegate to his subordinates the responsibility
for and the use of the means made available to him. The temptation to interfere in the decisions of
subordinates and to decide in their place will be great, given his experience and his position. In order to
avoid this possible risk, it will be necessary to define precisely the right level of information to be
communicated for the right strategic level, in order to respect the freedom of action of each level and to
avoid a general and systematic dissemination of information without intermediate processing and filtering.

Chapter 2: Wings of Innovation: UAV Design Principles

[mh]Aerodynamics and Flight Dynamics


Flight dynamics, in aviation and spacecraft applications, is the study of the performance, stability, and control of vehicles
flying through the air or in outer space. It is concerned with how forces acting on the vehicle determine its
velocity and attitude with respect to time.

For a fixed-wing aircraft, its changing orientation with respect to the local air flow is represented by two
critical angles, the angle of attack of the wing ("alpha") and the angle of attack of the vertical tail, known as
the sideslip angle ("beta"). A sideslip angle will arise if an aircraft yaws about its centre of gravity and if the
aircraft sideslips bodily, i.e. the centre of gravity moves sideways. These angles are important because they
are the principal source of changes in the aerodynamic forces and moments applied to the aircraft.

Spacecraft flight dynamics involve three main forces: propulsive (rocket engine), gravitational, and
atmospheric resistance. Propulsive force and atmospheric resistance have significantly less influence over a
given spacecraft compared to gravitational forces.

[h]Aircraft

Flight dynamics is the science of air-vehicle orientation and control in three dimensions. The critical flight dynamics parameters are the angles of rotation about the aircraft's three principal axes through its center of gravity, known as roll, pitch and yaw.

Aircraft engineers develop control systems for a vehicle's orientation (attitude) about its center of gravity.
The control systems include actuators, which exert forces in various directions, and generate rotational
forces or moments about the center of gravity of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw.
For example, a pitching moment results from a vertical force applied at a distance forward or aft of the center of gravity of the aircraft, causing the aircraft to pitch up or down.
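As a minimal illustration of this relationship, the sketch below computes the nose-up pitching moment produced by a vertical force acting at a lever arm from the centre of gravity. The sign convention and the tailplane numbers are assumptions chosen for the example, not values from the text.

```python
# Illustrative sketch (assumed convention): nose-up pitching moment from a
# vertical force applied at a lever arm relative to the centre of gravity.

def nose_up_pitching_moment(force_up_n: float, arm_forward_m: float) -> float:
    """Nose-up pitching moment (N*m) about the CG.

    force_up_n:    vertical force, positive upward (N)
    arm_forward_m: point of application, positive forward of the CG (m)
    An upward force ahead of the CG, or a downward force behind it,
    both pitch the nose up (positive result).
    """
    return force_up_n * arm_forward_m

# Example: 500 N of download on a tailplane 4 m behind the CG (assumed numbers)
m = nose_up_pitching_moment(force_up_n=-500.0, arm_forward_m=-4.0)
print(f"nose-up pitching moment: {m:.0f} N*m")   # 2000 N*m, nose pitches up
```

Moving the point of application fore or aft, or reversing the force, changes the sign of the moment, which is exactly how elevators and tailplanes command pitch.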

Roll, pitch and yaw refer, in this context, to rotations about the respective axes starting from a defined
equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle, equivalent to a
level heeling angle on a ship. Yaw is known as "heading".

A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down
by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a
fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is
streamlined from nose to tail to reduce drag making it advantageous to keep the sideslip angle near zero,
though aircraft are deliberately "side-slipped" when landing in a cross-wind, as explained in slip
(aerodynamics).
The forces acting on space vehicles are of three types: propulsive force (usually provided by the vehicle's
engine thrust); gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and
drag (when flying in the atmosphere of the Earth or another body, such as Mars or Venus). The vehicle's
attitude must be controlled during powered atmospheric flight because of its effect on the aerodynamic and
propulsive forces. There are other reasons, unrelated to flight dynamics, for controlling the vehicle's attitude
in non-powered flight (e.g., thermal control, solar power generation, communications, or astronomical
observation).

The flight dynamics of spacecraft differ from those of aircraft in that the aerodynamic forces are of very
small, or vanishingly small effect for most of the vehicle's flight, and cannot be used for attitude control
during that time. Also, most of a spacecraft's flight time is usually unpowered, leaving gravity as the
dominant force.

Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been
harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight
appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental
concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes.

In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of
the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with
Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow
velocity for incompressible flow known today as Bernoulli's principle, which provides one method for
calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which
could be applied to both compressible and incompressible flows. The Euler equations were extended to
incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations.
The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve
for the flow around all but the simplest of shapes.

Figure: A replica of the Wright brothers' wind tunnel is on display at the Virginia Air and Space Center. Wind tunnels
were key in the development and validation of the laws of aerodynamics.

In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight,
lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward
achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first
wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le
Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical
engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal,
the first person to become highly successful with glider flights, was also the first to propose thin, curved
airfoils that would produce high lift and low drag. Building on these developments as well as research
carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17,
1903.

During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky
independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went
on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is
credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with
boundary layers.

As aircraft speed increased designers began to encounter challenges associated with air compressibility at
speeds near the speed of sound. The differences in airflow under such conditions lead to problems in aircraft
control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The
ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach who was one of
the first to investigate the properties of the supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot
independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret
led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh
Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and
Mach 1 where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to
disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the
Bell X-1 aircraft.

By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low
supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-
performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around
complex objects and has rapidly grown to the point where entire aircraft can be designed using computer
software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding
of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists
have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably
with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to
improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new
research in aerodynamics, while work continues to be done on important problems in basic aerodynamic
theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–
Stokes equations.

[h]Fundamental concepts

Figure: Forces of flight on a powered aircraft in unaccelerated level flight

Understanding the motion of air around an object (often called a flow field) enables the calculation of forces
and moments acting on the object. In many aerodynamics problems, the forces of interest are the
fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e.
forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption
that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as
flow velocity, pressure, density, and temperature, which may be functions of position and time. These
properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with
the equations for conservation of mass, momentum, and energy in air flows. Density, flow velocity, and an
additional property, viscosity, are used to classify flow fields.
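As a hedged sketch of how these forces are evaluated in practice, the snippet below applies the standard relations L = q·S·CL and D = q·S·CD with dynamic pressure q = ½ρV²; the wing area and coefficients are illustrative assumptions, not data from the text.

```python
# Sketch: lift and drag from dynamic pressure and dimensionless coefficients.
# The numbers in the example (wing area, CL, CD) are assumed for illustration.

def dynamic_pressure(rho_kg_m3: float, v_m_s: float) -> float:
    """Dynamic pressure q = 0.5 * rho * V^2 (Pa)."""
    return 0.5 * rho_kg_m3 * v_m_s ** 2

def aero_forces(rho_kg_m3: float, v_m_s: float, wing_area_m2: float,
                cl: float, cd: float) -> tuple[float, float]:
    """Return (lift, drag) in newtons for a given flight condition."""
    q = dynamic_pressure(rho_kg_m3, v_m_s)
    return q * wing_area_m2 * cl, q * wing_area_m2 * cd

# Example: small fixed-wing UAV at sea level (rho ~ 1.225 kg/m^3), 20 m/s,
# 0.5 m^2 wing, CL = 0.6, CD = 0.05 -- all assumed figures.
lift, drag = aero_forces(1.225, 20.0, 0.5, 0.6, 0.05)
print(f"lift = {lift:.1f} N, drag = {drag:.1f} N")   # ~73.5 N and ~6.1 N
```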
[h]Flow classification
Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which
the air speed field is always below the local speed of sound. Transonic flows include both regions of
subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic
flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A
fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of
sound. Aerodynamicists disagree on the precise definition of hypersonic flow.

Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as
incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible,
and calculations that neglect the changes of density in these flow fields will yield inaccurate results.

Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very
small, and approximate solutions may safely neglect viscous effects. These approximations are called
inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic
problems may also be classified by the flow environment. External aerodynamics is the study of flow around
solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of
flow through passages inside solid objects (e.g. through a jet engine).
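A small sketch of this classification is given below. Because transonic flow is defined by the presence of mixed subsonic and supersonic regions, classifying by a single Mach number, using the commonly quoted 0.8–1.2 transonic band and the Mach 5 hypersonic rule of thumb, is only an approximation.

```python
# Illustrative rule-of-thumb classifier by Mach number; the regime boundaries
# below are conventional approximations, not exact physical limits.

def flow_regime(mach: float) -> str:
    if mach < 0.8:
        return "subsonic"
    elif mach < 1.2:
        return "transonic"   # mixed subsonic/supersonic regions are likely
    elif mach < 5.0:
        return "supersonic"
    return "hypersonic"

for m in (0.3, 0.95, 2.0, 7.0):
    print(m, flow_regime(m))
```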

[h]Continuum assumption
Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of
the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual gas molecules with one another and with solid surfaces. However, in most aerodynamics
applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a
continuum. This assumption allows fluid properties such as density and flow velocity to be defined
everywhere within the flow.

The validity of the continuum assumption is dependent on the density of the gas and the application in
question. For the continuum assumption to be valid, the mean free path length must be much smaller than
the length scale of the application in question. For example, many aerodynamics applications deal with
aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and
where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a
few meters to a few tens of meters, which is much larger than the mean free path length. For such
applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely
low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or
satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the
problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between
statistical mechanics and the continuous formulation of aerodynamics.
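A minimal sketch of this check follows, assuming the commonly quoted Kn < 0.01 threshold for continuum flow and a sea-level mean free path of roughly 68 nm.

```python
# Sketch of the continuum check via the Knudsen number Kn = lambda / L,
# where lambda is the mean free path and L a characteristic length.
# The 0.01 threshold is a commonly used rule of thumb, not a hard limit.

def knudsen(mean_free_path_m: float, char_length_m: float) -> float:
    return mean_free_path_m / char_length_m

def continuum_ok(kn: float, threshold: float = 0.01) -> bool:
    """True if the continuum assumption is reasonable for this Knudsen number."""
    return kn < threshold

# Sea-level air: mean free path ~ 68 nm; aircraft length scale ~ 10 m.
kn_aircraft = knudsen(68e-9, 10.0)
print(kn_aircraft, continuum_ok(kn_aircraft))   # ~6.8e-9 -> continuum holds
```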

[h]Conservation laws
The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics
conservation laws. Three conservation principles are used:

Conservation of mass
Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical
formulation of this principle is known as the mass continuity equation.
Conservation of momentum
The mathematical formulation of this principle can be considered an application of Newton's Second Law.
Momentum within a flow is only changed by external forces, which may include both surface forces, such as
viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be
expressed as either a vector equation or separated into a set of three scalar equations (x,y,z components).
Conservation of energy
The energy conservation equation states that energy is neither created nor destroyed within a flow, and that
any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and
out of the region of interest.

Together, these equations are known as the Navier–Stokes equations, although some authors define the term
to only include the momentum equation(s). The Navier–Stokes equations have no known analytical solution
and are solved in modern aerodynamics using computational techniques. Because computational methods using high-speed computers were not historically available, and because solving these complex equations remains computationally expensive now that they are, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect
viscosity and may be used in cases where the effect of viscosity is expected to be small. Further
simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a
solution in one dimension to both the momentum and energy conservation equations.

The ideal gas law or another such equation of state is often used in conjunction with these equations to form
a determined system that allows the solution for the unknown variables.
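As a small worked illustration of how these simplified relations are used together, the sketch below combines the ideal gas law with the incompressible form of Bernoulli's equation to recover airspeed from a pitot-static pressure difference; the numerical conditions are assumed standard sea-level values, not data from the text.

```python
# Sketch: density from the ideal gas law p = rho*R*T, then airspeed from the
# incompressible Bernoulli relation p_total = p_static + 0.5*rho*V^2.
import math

R_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def air_density(p_static_pa: float, temp_k: float) -> float:
    """Ideal gas law: rho = p / (R*T)."""
    return p_static_pa / (R_AIR * temp_k)

def airspeed_from_pitot(dp_pa: float, rho_kg_m3: float) -> float:
    """Bernoulli (incompressible): V = sqrt(2 * (p_total - p_static) / rho)."""
    return math.sqrt(2.0 * dp_pa / rho_kg_m3)

rho = air_density(101325.0, 288.15)        # assumed sea-level standard conditions
print(airspeed_from_pitot(245.0, rho))     # ~20 m/s for a 245 Pa differential
```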

[h]Branches of aerodynamics

Figure: computational modelling

Aerodynamic problems are classified by the flow environment or properties of the flow, including flow
speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of
various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose
of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through
passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a
jet engine or through an air conditioning pipe.

Aerodynamic problems can also be classified according to whether the flow speed is below, near or above
the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of
sound, transonic if speeds both below and above the speed of sound are present (normally when the
characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is
greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of
sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers
flows with Mach numbers above 5 to be hypersonic.

The influence of viscosity on the flow dictates a third classification. Some problems may encounter only
very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to
these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous
flows.

An incompressible flow is a flow in which density is constant in both time and space. Although all real
fluids are compressible, a flow is often approximated as incompressible if the density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are
significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to
or above the speed of sound. The Mach number is used to evaluate whether the incompressibility can be
assumed, otherwise the effects of compressibility must be included.

[h]Subsonic flow
Subsonic (or low-speed) aerodynamics describes fluid motion in flows whose speed is much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow but one special case arises
when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the
differential equations that describe the flow to be a simplified version of the equations of fluid dynamics,
thus making available to the aerodynamicist a range of quick and easy solutions.

In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the
effects of compressibility. Compressibility is a description of the amount of change of density in the flow.
When the effects of compressibility on the solution are small, the assumption that density is constant may be
made. The problem is then an incompressible low-speed aerodynamics problem. When the density is
allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the
Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per
hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible
aerodynamics.

[h]Compressible aerodynamics
According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a
streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the
case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is
used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%.
Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow
speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic,
supersonic, and hypersonic flows are all compressible flows.
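The 5% figure can be checked with the isentropic stagnation-density relation; the short sketch below evaluates it for several Mach numbers, assuming γ = 1.4 for air.

```python
# Sketch of the claim above: for isentropic flow of air (gamma = 1.4), the
# stagnation-to-freestream density ratio is
#   rho0 / rho = (1 + (gamma - 1)/2 * M^2) ** (1 / (gamma - 1)),
# so at Mach 0.3 the maximum density change is only a few percent.

GAMMA = 1.4

def stagnation_density_ratio(mach: float, gamma: float = GAMMA) -> float:
    return (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (1.0 / (gamma - 1.0))

for m in (0.1, 0.3, 0.5, 0.8):
    change = (stagnation_density_ratio(m) - 1.0) * 100.0
    print(f"M = {m:.1f}: max density change ~ {change:.1f} %")
# M = 0.3 gives roughly 4.6 %, consistent with the sub-5 % rule quoted above.
```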

[mh]Structural Design and Materials

[h]Framework for Design and Additive Manufacturing of Specialised Multirotor UAV Parts

In the last 10 years, the market for unmanned aerial vehicles (UAVs) in the civil sector has been growing
enormously. This was certainly preceded by a period of intensive research that continues to this day, so an even greater step forward is expected in the future. Technological advances in the design and manufacture of
mechatronic system components have enabled many applications from the aspect of automation. The
development of control, propulsion, power supply components, and other subsystems has contributed to
greater speed of data processing and greater autonomy, which enables the performance of complex flight
missions. The development of propulsion components and numerous studies of propulsion configurations
have facilitated applications in various sectors, such as precision agriculture, surveillance, and aerial photography. The application possibilities of UAVs are plentiful in many other sectors, such as transport, construction, fire protection, and more.

The propulsion configuration defines how the aircraft will move in three-dimensional space and it depends
on the type of application or mission that the UAV needs to perform. Numerous types of aircraft with
various propulsion configurations are used to perform different tasks, activities, and for research and
development. In addition to conventional types of UAVs with fixed wings and rotary wings, a number of hybrid configurations and bioinspired propulsion configurations are being investigated. Fixed-wing aircraft can achieve high speeds and, compared to other types, consume less energy to achieve movement, but on the other hand they are unable to perform stationary flight. Generally, they need a runway or a special launchpad to be able to take off. Aircraft with rotary wings do not have this problem because they have the ability to take off and land vertically (VTOL), and can thus hover and fly at moderate speeds. This makes them suitable for missions that require complex manoeuvres and a higher degree of system autonomy. Within the rotary-wing UAV type, there are numerous subtypes of aircraft. It is important to highlight two typical representatives: aircraft with variable-pitch propellers, such as helicopters, and multirotor aircraft (multicopters), consisting of N rotors on which fixed-pitch propellers are mounted.

The multirotor type of UAV has greater agility and manoeuvrability, which allows such aircraft to perform missions that involve precise and complex movements. On the other hand, they are characterised by high energy consumption, so it is extremely important to choose the right components and parameters of the system. The most commonly used configuration utilises four rotors (the so-called quadrotor) and, to a lesser extent, configurations with six (hexarotor) and eight rotors (octorotor). Generally, conventional configurations are characterised by a planar geometric arrangement of an even number of rotors. In addition to conventional purposes, the variety of propulsion configurations makes the multirotor type of UAV suitable for use as an aerial robotic system. Since this type of application is expected for specialised tasks, there is a need to design custom aircraft and to make small series or customised systems. It is also important to save time in the design and production phases and to lower production costs compared to conventional manufacturing technologies. Rapid prototyping technologies, such as additive manufacturing (AM), allow the fabrication of assembly parts for such systems. Numerous studies have shown the possibilities of rapid prototyping technologies and their applications.

In this chapter, the framework for design and AM of specialised multirotor UAV parts is presented. In the
system design phase, it is necessary to select components and design the multirotor UAV based on the purpose
of the aircraft. The division into modules (subsystems) allows a greater degree of modularity that leads to a
wider range of applications (by fitting the aircraft with different equipment). In the prototyping and
production phase, the procedure for making parts using three different AM technologies is described.
Depending on the mechanical and other requirements, which are defined in the system design phase, FDM,
SLS, and SLA technologies are used within this framework. Professional and hobby 3D printers and related
software packages were used in the production process. The procedure was validated for the two considered case studies: a small fully-actuated modular aircraft and a heavy-lift multirotor UAV. The last part of this
chapter presents experimental testing in certain phases of the specialised UAV development, which is
necessary for this type of aircraft to be safely used.

[h]Multirotor UAV system description


The multirotor type of UAV is classified as a rotary-wing UAV, that is, aircraft that are heavier than air and powered by motors. The ability to take off and land vertically, hover, and fly at moderate speeds, amongst other flight
manoeuvres, allows multirotor UAVs to perform complex movements, making them suitable for a wide
range of tasks. From a mechanical point of view, the multirotor type of UAV system is described as a rigid
body consisting of N rotors (propulsion units) that exist in 3D space; hence, it has six degrees of freedom
(DOF). Such a multivariable system is mathematically described by a dynamic model with six second-order
differential equations. The geometric arrangement of the propulsion subsystem defines the aircraft
configuration. To perform missions such as aerial filming, conventional configurations characterised by a
planar arrangement of the even number of rotors are generally used. Commercial aircraft for these and
similar purposes are mainly quadrotor (quadcopter), hexarotor (hexacopter), and octorotor (octocopter)
aircraft. The listed configurations can be in + and × arrangement (layout), such as configurations shown in
Figure.
Figure. Conventional multirotor UAV configurations in ×-layout.

The design of the aircraft system primarily depends on the purpose, respectively, the mission profile that the
aircraft should typically perform. To allow easier analysis of aircraft parameters and design, the aircraft
system can be divided into four key subsystems. The equipment and payload to be carried by the aircraft dictate
the choice of parameters and components of other subsystems. The rotors of the propulsion subsystem are
mainly electric propulsion units (EPUs) whose central part is a brushless DC (BLDC) motor with a
corresponding electronic speed controller (ESC), and a fixed-pitch propeller mounted on a motor rotor. By
their rotation, the propellers create aerodynamic forces and moments and directly affect the flight dynamics,
which means that the rotor angular velocities are the input variables of the propulsion subsystem. A characteristic of multirotor UAVs is high energy consumption, so the energy subsystem must deliver a large amount of energy. For conventional EPUs, the power subsystem mainly consists of one or more lithium-
polymer (LiPo) batteries with associated electronics. The design of the control subsystem or the selection of
components primarily depends on the mission or the degree of autonomy that determines the selection of the
flight controller, sensors, and other peripheral modules (telemetry, RC, VTx, and others). It follows that the
performance of a multirotor type of UAV is determined by the parameters and components of the propulsion
and energy subsystems. These two subsystems are interdependent because, for example, as the power of the
aircraft increases, the energy demand increases, resulting in a higher mass of the aircraft. The energy
requirements of the propulsion subsystem must be taken into account when selecting batteries, which, in
turn, depends on the weight and size of the aircraft and the number of EPUs. When designing a system, the
ratio of battery mass to capacity is one of the key design parameters.

Figure. Multirotor UAV main subsystems.
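As a rough, illustrative sizing sketch of this trade-off (the pack size, hover power, and usable-capacity fraction below are assumptions, not values from the chapter), hover endurance can be estimated from battery energy and hover power:

```python
# Rough endurance estimate showing why battery mass-to-capacity ratio matters:
# more capacity means more energy, but also more mass and thus more hover power.

def hover_endurance_min(capacity_mah: float, cells: int, hover_power_w: float,
                        usable_fraction: float = 0.8,
                        cell_voltage: float = 3.7) -> float:
    """Approximate hover time in minutes for a LiPo pack.

    Energy (Wh) = capacity (Ah) * pack voltage; only part of it is usable
    if the pack is not to be over-discharged.
    """
    energy_wh = (capacity_mah / 1000.0) * cells * cell_voltage
    return 60.0 * usable_fraction * energy_wh / hover_power_w

# Example: 6S 10,000 mAh pack and 1,200 W hover power (assumed values)
print(f"{hover_endurance_min(10000, 6, 1200):.1f} min")   # roughly 9 minutes
```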

In this chapter, the design of specialised multirotor aircraft is considered, and two case studies are presented
through the design, production, and testing phases. Aircraft, such as those used in the case study, cannot be
procured in the form of commercial aircraft produced in large series. They are produced in small series or even
as unique models designed to perform a specialised task. The first case is an experimental modular
multirotor (EMMR) UAV with a power of 350–700 W, which has so far been proposed as an engineering
educational platform. EMMR can be used as an aerial robotic system since fully-actuated UAV
configurations can be assembled. Such a platform represents a suitable engineering educational tool due to
the complexity of the system, which requires an interdisciplinary approach in the field of mechanical
engineering, electrical engineering, and computing. The second case is a heavy-lift aircraft with a power of approximately 10–20 kW, depending on the number of rotors. Such an aircraft is considered for
use in precision agriculture for smart spraying tasks. In addition to the fact that these aircraft are not
commercially available in a form that would allow change of the parameters within open-source software, it
is also important to point out that in small series production the cost per unit increases dramatically. For this
reason, technologies for rapid prototyping were chosen, mostly AM in which the cost per unit is the same
regardless of the number of units produced, which is a known fact described in numerous studies. AM is
often appropriate for small to medium-sized production series but there is always an inflexion point at which
other manufacturing methods become more cost-effective.

Figure. Cost per unit with respect to quantity for conventional and additive manufacturing technologies.
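The cost behaviour sketched in the figure can be illustrated with a toy break-even calculation; all cost figures below are invented for illustration only.

```python
# Illustrative break-even sketch: conventional manufacturing amortises a fixed
# tooling cost over the batch, while AM has a roughly constant cost per unit.

def conventional_unit_cost(tooling_cost: float, unit_cost: float, qty: int) -> float:
    return tooling_cost / qty + unit_cost

def break_even_qty(tooling_cost: float, conv_unit: float, am_unit: float) -> float:
    """Quantity at which conventional and additive unit costs are equal."""
    return tooling_cost / (am_unit - conv_unit)

tooling, conv_unit, am_unit = 5000.0, 4.0, 25.0   # assumed cost figures
print(f"break-even at ~{break_even_qty(tooling, conv_unit, am_unit):.0f} units")
for qty in (10, 100, 500):
    print(qty, round(conventional_unit_cost(tooling, conv_unit, qty), 2), am_unit)
# Below the break-even quantity (here ~238 units) the additive route is cheaper.
```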

[h]Additive manufacturing technologies


In this chapter, AM technologies are used for the rapid prototyping and development of specialised
multirotor UAVs. In addition to the fact that for small batches AM is cheaper compared to conventional
processes, it also significantly shortens the development time through rapid iteration and the possibility of testing many different designs, or partial designs with critical features, early and often, which further reduces the cost of the final product. Conventional production technologies are much more expensive for small batches
due to preparation, tool selection, manufacturing of tools, and other costs. AM, on the other hand, allows the
production of parts directly from solid CAD models using software packages, so-called slicers. AM is also
suitable for the production of spare parts for damaged aircraft.

There are a large number of low-cost 3D printers on the market, so for low-power multirotor aircraft, parts
can be produced very cheaply and quickly. 3D printers may vary greatly in price, size, material, and AM
technology used. This chapter further considers three AM technologies: FDM, SLS, and SLA. 3D printing uses a wide range of materials, the choice of which is related to the AM technology and the purpose of the part. In the case of aircraft parts, plastic materials in the raw form of filament, powder, or resin are mainly used. To determine whether certain materials and AM technologies are suitable for the production of a particular part, the desired strength, stiffness, and weight of the part must be taken into account, but the influence of environmental conditions and the expected service life of the part must also be considered. In addition to the choice of material, the mechanical properties of the part can be altered and adjusted by changing the printing parameters and the orientation of the printed part. Because parts are fabricated gradually, layer by layer, the inevitable result is anisotropic properties of the printed parts: better mechanical properties are achieved along the printing layers and worse in the direction normal to them. There are many ways in which the mechanical properties of materials can be tested. Also, greater precision and more detailed geometry can be achieved in planes parallel to the print layer, where print accuracy is higher. Table shows the main characteristics of the 3D printers used in combination with the associated software.

AM technology | 3D printer | Raw material form | Build volume | Software
FDM | Prusa i3 MK3S+ | Continuous thermoplastic filaments | 250 × 210 × 210 mm | PrusaSlicer
FDM | Markforged Onyx Pro | Composite base filaments | 320 × 132 × 154 mm | Eiger
SLS | Sinterit Lisa Pro | Powder | 150 × 200 × 260 mm | Sinterit Studio
SLA | Formlabs Form 3 | Resin | 145 × 145 × 185 mm | PreForm

Table. Used 3D printers with associated software.

[h]Fused deposition modelling


Fused deposition modelling (FDM), also known as fused filament fabrication (FFF), is a manufacturing technology in which objects are created by extruding polymer filament onto a build platform through a
heated nozzle. There are numerous versions of FDM printers with various price ranges. In this research,
Prusa i3 MK3 is used as a low-cost FDM printer where the platform moves in the Y-axis and the nozzle in
the X- and Z-axes. When one layer is done, the nozzle will move up vertically to allow a new layer to be
applied to the previous one. The thickness of the layer (slice) depends on the print parameters, and in the
case of the Prusa printer used, the slices are between 0.05 and 0.30 mm thick. Prior to the AM process, the
constructed CAD model must be exported in a compatible file format, such as STL. Such a model is then cut
into horizontal slices in a software package (so-called slicer). The paths of the platform and the nozzle are
calculated by the software according to the parameters set by the user. In addition to the mentioned layer
thickness, which significantly affects the accuracy, some of the other variable parameters are the number of
layers in the outer wall and the number of layers at the bottom and top of the part, the percentage and
structure of the infill, extrusion speed, and others. Because each printing layer is deposited on top of the last
one, supporting structures are required to print large overhangs or holes. They are printed together with the
part and removed after printing is done. In general, overhangs should be avoided by proper orientation of the
part or by using angled overhangs where possible. The most common materials used in FDM technology are
ABS, PLA, PC, ASA, PPSF/PPSU, ULTEM, PE-HD, PE-LD, PET, TPU, and others. Figure shows a
working principle of the FDM technology.
Figure. The principle of operation of FDM technology.
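The overhang rule mentioned above can be expressed as a simple geometric test on a facet normal. The sketch below assumes the common 45-degree printable overhang limit and a hypothetical facet normal; it is not taken from any slicer's actual implementation.

```python
# Sketch: flag downward-facing facets that exceed the printable overhang angle
# in FDM and would therefore need a support structure. The 45-degree limit and
# the example normals are assumptions for illustration.
import math

def needs_support(normal_xyz: tuple[float, float, float],
                  max_overhang_deg: float = 45.0) -> bool:
    """True if a downward-facing facet exceeds the printable overhang angle.

    The angle is measured between the facet normal and the downward build
    direction (0, 0, -1): 0 degrees means a flat underside.
    """
    nx, ny, nz = normal_xyz
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    if length == 0.0 or nz >= 0.0:      # degenerate, vertical, or upward-facing facet
        return False
    angle_from_down = math.degrees(math.acos(-nz / length))
    return angle_from_down < (90.0 - max_overhang_deg)

print(needs_support((0.0, 0.0, -1.0)))        # flat underside -> True, needs support
print(needs_support((0.0, 0.7071, -0.7071)))  # 45-degree underside -> False, printable
```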

[h]Continuous fibre fabrication


In addition to classic FDM technology, devices that can produce parts from composite materials using FDM
processes are known as continuous fibre fabrication (CFF). In this chapter, the Markforged Onyx Pro is used, in which the platform moves in the Z-axis and the nozzle in the X- and Y-axes. Compared to the Prusa printer, it is a much more expensive device, but it allows 3D printing of composite materials made of a plastic matrix with embedded fibreglass fibres, giving better mechanical properties and an increased lifetime compared to plastic alone.
The strength and stiffness of a fibre-reinforced part can be comparable to aluminium. The software package
allows adjustment of the classic print parameters and further adjustment of the composite reinforcements
parameters as shown in Figure.

Figure. Fibre reinforcement layout—CFF technology.

[h]Selective laser sintering


The next AM technology considered in the chapter is selective laser sintering, which with the advent of
cheaper 3D printer systems allows the application not only for industrial purposes but also for research. The
material used in this technology is available in the form of powder that is laser-sintered to create a designed
geometry. The powder delivery mechanism consists of two chambers, in the first, there is construction
powder that is delivered to the second chamber through rollers and a piston in form of a powder layer. In the
second chamber, a layer is precisely sintered to the desired shape utilising laser beams. This technology does
not require a support structure, as the unsintered powder provides support to the object under construction.
This allows the production of parts of more complex geometry from different types of materials, and it is
possible to produce prefabricated assemblies with movable joints. After the production process, further
processing of the part or assembly is required to achieve certain mechanical properties of the finishing
quality. In this chapter, the SLS system is discussed, which consists of the SLS 3D printer Sinterit Lisa Pro
and the associated equipment for the preparation of powder materials (nylon 11, nylon 12, TPU, TPE, and
polypropylene) and processing of parts and assemblies. Figure shows the working principle of the SLS.

Figure. The principle of operation of SLS technology.

[h]Stereolithography
Stereolithography (SLA) is the first commercially available AM technology developed in 1986 by 3D
Systems. With this technology, CAD models are created by curing polymer resin using a laser beam system.
With SLA technology, the laser is directed by a mirror scanning system and cures the polymer resin with very high precision. When one layer is cured by the laser, the build platform moves in the z-direction and the next layer can be treated. The materials for SLA are thermoset polymers in the form of photosensitive resin. SLA
technology makes it possible to achieve high accuracy and a smooth surface, making it the most cost-
effective AM technology. Compared to the previously considered technologies, SLA parts have poorer
mechanical properties; therefore, SLA technology is not recommended for structurally loaded parts. Figure
shows the scheme of the SLA procedure.
Figure. The principle of operation of SLA technology.

[h]Framework for additive manufacturing of specialised UAV parts


The design of the multirotor type of UAV propulsion subsystem is considered and the additive
manufacturing framework is shown. This framework can also be used for rapid prototyping of parts from
carbon fibre plates. The process of making parts is presented for two experimental aircraft that can be used
for specialised purposes, such as performing tasks involving complex and precise movements and tasks involving the transfer of heavy cargo.

[h]Propulsion subsystem design considerations


The propulsion subsystem is defined by the parameters of the geometric arrangement and characteristics of
the EPUs. A suitable fixed-pitch propeller is mounted on the rotor of the outrunner BLDC motor. The basic
parameter of a propeller is its diameter. As the diameter of the propeller increases, the angular velocity of the
motor rotor decreases. The motor is defined by a motor velocity constant kV. Motors with a lower motor
constant are used in combination with larger diameter propellers and are driven at higher voltages. The ESC
is responsible for starting the motor and, depending on the control signal, controls the motor speed. The
EPUs are connected to one or more LiPo batteries of the appropriate number of cells and capacity.

Figure. Electric propulsion unit of the multirotor type of UAV.
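As a rough, hedged sizing sketch of how these parameters interact (not a method from the chapter), the no-load speed can be estimated from the kV rating and pack voltage, and static thrust from the standard propeller relation T = C_T·ρ·n²·D⁴ with an assumed thrust coefficient:

```python
# Rough EPU sizing sketch under common simplifying assumptions: the thrust
# coefficient Ct and the 75% operating speed below are assumed values.

RHO = 1.225  # air density at sea level, kg/m^3

def no_load_rpm(kv_rpm_per_volt: float, cells: int, cell_voltage: float = 3.7) -> float:
    """No-load motor speed from the kV rating and LiPo pack voltage."""
    return kv_rpm_per_volt * cells * cell_voltage

def static_thrust_n(rpm: float, prop_diameter_m: float, ct: float = 0.1) -> float:
    """T = Ct * rho * n^2 * D^4 with n in revolutions per second."""
    n = rpm / 60.0
    return ct * RHO * n ** 2 * prop_diameter_m ** 4

# Example loosely inspired by the small EPU in the table above: 1400 Kv motor
# on a 3S pack with a 7-inch (0.178 m) propeller.
rpm = no_load_rpm(1400, 3)
print(f"no-load speed ~ {rpm:.0f} rpm, "
      f"static thrust ~ {static_thrust_n(0.75 * rpm, 0.178):.1f} N at 75% speed")
```

The same relations show the interdependence noted above: a lower kV motor with a larger propeller produces the same thrust at lower rotational speed, but requires a higher pack voltage.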

The motor stator must be connected to the aircraft assembly which consists of a central part and the rotor
arms. Propulsion assembly design is the most complex part of the overall design in terms of the mechanical
properties that assembly parts should possess. The aircraft can be used in a wide range of powers, from a few
tens of watts to several tens of kilowatts. It is necessary to choose materials and technologies with respect to the selected propulsion components. Figure (a) shows the stator geometry, which is important from the aspect of mounting the motor to the aircraft assembly, while Figure (b) shows the characteristics of the propulsion unit considered in the case of the heavy-lift aircraft.
Figure. Electric propulsion unit: (a) BLDC motor geometry; (b) characteristics.

The configuration of the multirotor UAV is defined by the geometric arrangement of the rotors. Mostly
conventional configurations with a planar rotor layout are commercially available. It is possible to select
configuration parameters that will result in an increased degree of actuation, which potentially allows the
performance of complex tasks in the field of aerial robotics. A fully-actuated aircraft with passively tilted
rotor arms is considered in this research.

Figure. Fully-actuated multirotor configurations with passively tilted rotors: (a) PTX6; (b) PTX8.

[h]Additive manufacturing procedure


A framework for the production of parts for specialised multirotor UAVs using additive manufacturing is
presented. It consists of an aircraft design stage in which various software packages can be used for the
needs of 3D modelling of parts and assemblies, and also for simulations. In this research, the
SOLIDWORKS software package is used in the design stage. After the process of creating a model is done,
triangulation of the 3D CAD model is performed and the model is exported into an STL format. In the
prototyping stage, it is necessary to adjust the parameters of the 3D print in accordance with the selected AM
technology using associated software, the so-called slicer. The next step is the execution of the g-code by
which the given parts are produced. After finishing the print, the parts need to be post-processed.
Figure. Additive manufacturing procedure.
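To illustrate the triangulated STL representation that the slicer consumes, the sketch below writes a single facet in the ASCII STL format by hand; real models are exported directly from the CAD package, and the geometry here is purely hypothetical.

```python
# Minimal sketch of the STL export step: an ASCII STL file stores the
# triangulated model as a list of facets, each with a normal and three vertices.

def facet_to_stl(normal, v1, v2, v3) -> str:
    lines = [
        f"  facet normal {normal[0]:.6e} {normal[1]:.6e} {normal[2]:.6e}",
        "    outer loop",
    ]
    for v in (v1, v2, v3):
        lines.append(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

# A single triangular facet lying in the XY plane (hypothetical geometry).
stl_text = "\n".join([
    "solid example_part",
    facet_to_stl((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)),
    "endsolid example_part",
])
print(stl_text)
```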

[h]Experimental verification
Manufactured parts of specialised multirotor UAVs are connected together with other components into
functional assemblies. Through the prototyping phase, different test phases were conducted for the two
aircraft based on propulsion units with the parameters given in Table. By assembling and testing individual
subsystems, potential design errors can be identified, and improvements offered.

Multirotor configuration | BLDC motor | Propeller | ESC
PTX6 (D = 500 mm) | MN1806, 1400 Kv | CF7024, d = 7″ | Air 10A, 3S
X4 (D = 1500 mm) | P80, 100 Kv | G32x11, d = 32″ | Flame 80A, 12S

Table. Considered multirotor configuration main parameters.

The control subsystem of the experimental aircraft is based on the open-source Pixhawk FC. To operate a
fully-actuated aircraft, custom firmware has been developed. Figure shows the indoor testing phase where
attitude control experiments were conducted. Indoor testing provides a safe way to set the basic parameters
of the control subsystem and set up and test all safety elements. It is also possible to tune the parameters of
the control algorithm. After the indoor phase, the remote control of the aircraft was tested in two cases that
differ by control inputs from the RC transmitter. The first case is represented with conventional control
inputs (thrust, roll, pitch, and yaw), while in the second, control inputs were three forces and yaw moment
with respect to body axes.

Figure. Experimental testing of the PTX6 configuration in the case of attitude control.
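A hedged sketch of the kind of mapping such firmware has to perform is given below: the commanded wrench [Fx, Fy, Fz, Mz] is allocated to individual rotor thrusts through the pseudo-inverse of an allocation matrix. The rotor positions, tilt angles, spin directions, and drag-to-thrust ratio are made-up placeholders, not the PTX6 geometry or the actual firmware.

```python
# Generic control-allocation sketch for a multirotor with tilted rotors.
# All geometric values below are illustrative assumptions.
import numpy as np

def allocation_matrix(positions, directions, spins, k_m=0.016):
    """Build A such that A @ thrusts = [Fx, Fy, Fz, Mz].

    positions:  (N, 3) rotor positions in the body frame [m]
    directions: (N, 3) unit thrust directions of each rotor
    spins:      (N,)   +1 / -1 propeller spin sense (for reaction torque)
    k_m:        assumed drag-torque to thrust ratio of the propeller
    """
    rows_f = directions.T                              # force contributions
    mz_geom = np.cross(positions, directions)[:, 2]    # yaw moment from thrust offset
    mz_drag = spins * k_m * directions[:, 2]           # propeller reaction torque
    return np.vstack([rows_f, mz_geom + mz_drag])

def allocate(A, wrench):
    """Least-squares rotor thrusts for the commanded wrench."""
    return np.linalg.pinv(A) @ wrench

# Toy quad: rotors on a 0.25 m arm, tilted 15 degrees tangentially with a
# sense matching the spin direction (placeholder geometry).
theta = np.deg2rad([0.0, 90.0, 180.0, 270.0])
pos = 0.25 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros(4)])
spins = np.array([1.0, -1.0, 1.0, -1.0])
tilt = np.deg2rad(15.0)
dirs = np.column_stack([-np.sin(theta), np.cos(theta), np.zeros(4)]) * (np.sin(tilt) * spins)[:, None]
dirs[:, 2] = np.cos(tilt)

A = allocation_matrix(pos, dirs, spins)
print(allocate(A, np.array([0.0, 0.0, 20.0, 0.1])))   # rotor thrusts (N) for hover + small yaw demand
```

In a real fully-actuated platform the matrix would also include roll and pitch moment rows and respect thrust limits, but the pseudo-inverse allocation step itself is the same idea.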

For the second experimental aircraft, the propulsion unit was tested in different operating regimes across the full power range. Characteristics were obtained and other parameters, such as heating, were monitored. Given the power of the aircraft, the described framework is used in a wider range of rapid prototyping, which includes the cutting of carbon plates that, together with printed parts and prefabricated tubes, form the rotor arm assembly. In the coming period, it is planned to assemble the propulsion subsystem into a functional assembly so that tests can be carried out as in the case of the first experimental aircraft.
Figure. Heavy-lift aircraft propulsion: (a) EPU testing; (b) EPU assembly.

[mh]Modular and Scalable Architectures


Modular design, or modularity in design, is a design principle that subdivides a system into smaller parts
called modules (such as modular process skids), which can be independently created, modified, replaced, or
exchanged with other modules or between different systems. A modular design can be characterized by
functional partitioning into discrete scalable and reusable modules, rigorous use of well-defined modular
interfaces, and making use of industry standards for interfaces. In this context modularity is at the
component level, and has a single dimension, component slottability. A modular system with this limited
modularity is generally known as a platform system that uses modular components. Examples are car
platforms or the USB port in computer engineering platforms.
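As an illustrative sketch of "rigorous use of well-defined modular interfaces" expressed in software terms (hypothetical names, not from the text), any module that satisfies the interface can be created, replaced, or exchanged independently of the platform that uses it:

```python
# Sketch: a well-defined module interface and two interchangeable modules.
# The payload example and its numbers are invented for illustration.
from abc import ABC, abstractmethod

class PayloadModule(ABC):
    """Interface every payload module must implement."""

    @abstractmethod
    def power_draw_w(self) -> float: ...

    @abstractmethod
    def activate(self) -> None: ...

class CameraPayload(PayloadModule):
    def power_draw_w(self) -> float:
        return 6.0

    def activate(self) -> None:
        print("camera streaming")

class SprayerPayload(PayloadModule):
    def power_draw_w(self) -> float:
        return 45.0

    def activate(self) -> None:
        print("sprayer armed")

def mission_power_budget(modules: list[PayloadModule]) -> float:
    # The platform depends only on the interface, never on concrete modules.
    return sum(m.power_draw_w() for m in modules)

print(mission_power_budget([CameraPayload(), SprayerPayload()]))   # 51.0
```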

In design theory this is distinct from a modular system which has higher dimensional modularity and degrees
of freedom. A modular system design has no distinct lifetime and exhibits flexibility in at least three
dimensions. In this respect modular systems are very rare in markets. Mero architectural systems are the
closest example to a modular system in terms of hard products in markets. Weapons platforms, especially in
aerospace, tend to be modular systems, wherein the airframe is designed to be upgraded multiple times
during its lifetime, without the purchase of a completely new system. Modularity is best defined by the
dimensions affected or the degrees of freedom in form, cost, or operation.

Modularity offers benefits such as reduction in cost, interoperability, shorter learning time, flexibility in
design, non-generationally constrained augmentation or updating (adding a new solution by merely plugging
in a new module), and exclusion. Modularity in platform systems offers benefits such as returns to
scale, reduced product development cost, reduced O&M costs, and shorter time to market. Platform systems have
enabled the wide use of system design in markets and the ability for product companies to separate the rate
of the product cycle from the R&D paths. The biggest drawback with modular systems is the designer or
engineer. Most designers are poorly trained in systems analysis and most engineers are poorly trained in
design. The design complexity of a modular system is significantly higher than a platform system and
requires experts in design and product strategy during the conception phase of system development. That
phase must anticipate the directions and levels of flexibility necessary in the system to deliver the modular
benefits. Modular systems could be viewed as a more complete or holistic design, whereas platform systems
are more reductionist, limiting modularity to components. Complete or holistic modular design requires a
much higher level of design skill and sophistication than the more common platform system.

Cars, computers, process systems, solar panels, wind turbines, elevators, furniture, looms, railroad signaling
systems, telephone exchanges, pipe organs, synthesizers, electric power distribution systems and modular
buildings are examples of platform systems using various levels of component modularity. For example, one
cannot assemble a solar cube from extant solar components, easily replace the engine on a truck, or
rearrange a modular housing unit into a different configuration after a few years, as would be the case in a
modular system. The only
only extant examples of modular systems in today's market are some software systems that have shifted
away from versioning into a completely networked paradigm.

Modular design inherently combines the mass production advantages of standardization with those of
customization. The degree of modularity, dimensionally, determines the degree of customization possible.
For example, solar panel systems have 2-dimensional modularity which allows adjustment of an array in the
x and y dimensions. Further dimensions of modularity would be introduced by making the panel itself and
its auxiliary systems modular. Dimensions in modular systems are defined as the affected parameters, such as
shape, cost, or lifecycle. Mero systems have 4-dimensional modularity: x, y, z, and structural load capacity.
As can be seen in any modern convention space, the space frame's extra two dimensions of modularity
allow far greater flexibility in form and function than solar's 2-d modularity. If modularity is properly
defined and conceived in the design strategy, modular systems can create significant competitive advantage
in markets. A true modular system does not need to rely on product cycles to adapt its functionality to the
current market state. Properly designed modular systems also introduce the economic advantage of not
carrying dead capacity, increasing the capacity utilization rate and its effect on cost and pricing flexibility.

Aspects of modular design can be seen in cars or other vehicles to the extent of there being certain parts to
the car that can be added or removed without altering the rest of the car.

A simple example of modular design in cars is the fact that, while many cars come as a basic model, paying
extra will allow for "snap in" upgrades such as a more powerful engine, vehicle audio, ventilated seats, or
seasonal tires; these do not require any change to other units of the car such as the chassis, steering, electric
motor or battery systems.

[h]In machines and architecture


Modular design can be seen in certain buildings. Modular buildings (and also modular homes) generally
consist of universal parts (or modules) that are manufactured in a factory and then shipped to a build site
where they are assembled into a variety of arrangements.

Modular buildings can be added to or reduced in size by adding or removing certain components. This can
be done without altering larger portions of the building. Modular buildings can also undergo changes in
functionality using the same process of adding or removing components.

Figure: Modular workstations

For example, an office building can be built using modular parts such as walls, frames, doors, ceilings, and
windows. The interior can then be partitioned (or divided) with more walls and furnished with desks,
computers, and whatever else is needed for a functioning workspace. If the office needs to be expanded or
redivided to accommodate employees, modular components such as wall panels can be added or relocated to
make the necessary changes without altering the whole building. Later, this same office can be broken down
and rearranged to form a retail space, conference hall or another type of building, using the same modular
components that originally formed the office building. The new building can then be refurnished with
whatever items are needed to carry out its desired functions.

Other types of modular buildings offered by companies such as Allied Modular include guardhouses,
machine enclosures, press boxes, conference rooms, two-story buildings, clean rooms and many more applications.

Many misconceptions are held regarding modular buildings. In reality, modular construction is a viable
construction method for projects with quick turnaround and for fast-growing companies. Industries that would benefit from
this include healthcare, commercial, retail, military, and multi-family/student housing.

[h]In computer hardware

Figure: Modular computer design

Modular design in computer hardware is the same as in other things (e.g. cars, refrigerators, and furniture).
The idea is to build computers with easily replaceable parts that use standardized interfaces. This technique
allows a user to upgrade certain aspects of the computer easily without having to buy another computer
altogether.

A computer is one of the best examples of modular design. Typical computer modules include a computer
chassis, power supply units, processors, mainboards, graphics cards, hard drives, and optical drives. All of
these parts should be easily interchangeable as long as the user uses parts that support the same standard
interface.

The idea of a modular smartphone was explored in Project Ara, which provided a platform for manufacturers
to create modules for a smartphone which could then be customised by the end user. The Fairphone uses a
similar principle, where the user can purchase individual parts to repair or upgrade the phone.

[h]In televisions

In 1963 Motorola introduced the first rectangular color picture tube, and in 1967 introduced the modular
Quasar brand. In 1964 it opened its first research and development branch outside of the United States, in
Israel under the management of Moses Basin. In 1974 Motorola sold its television business to the Japan-
based Matsushita, the parent company of Panasonic.

[h]In weaponry
Some firearms and weaponry use a modular design to make maintenance and operation easier and more
familiar. For instance, German firearms manufacturer Heckler & Koch produces several weapons that, while
being different types, are visually and, in many instances, internally similar. These are the G3 battle rifle,
HK21 general-purpose machine gun, MP5 submachine gun, HK33 and G41 assault rifles, and PSG1 sniper
rifle.

[h]In trade show exhibits and retail displays


The concept of modular design has become popular with trade show exhibits and retail promotional displays.
These kinds of promotional displays involve creative custom designs but require a temporary structure that
can be reused. Thus, many companies are adopting a modular approach to exhibit design, in which
pre-engineered modular systems act as building blocks to create a custom design. These can then be
reconfigured into another layout and reused for a future show. This enables the user to reduce the cost of
manufacturing and labor (for set-up and transport) and is a more sustainable way of creating experiential
set-ups.

Some authors observe that modular design has generated, in the vehicle industry, a constant increase of
weight over time. Trancossi advanced the hypothesis that modular design can be coupled with
optimization criteria derived from the constructal law. In fact, the constructal law is modular by its nature
and can be applied with interesting results to simple engineering systems. It applies a typical bottom-up
optimization schema:

 a system can be divided into subsystems (elemental parts) using tree models;
 any complex system can be represented in a modular way and it is possible to describe how different
physical magnitudes flow through the system;
 by analyzing the different flow paths, it is possible to identify the critical components that affect the
performance of the system;
 by optimizing those components and substituting them with better-performing ones, it is possible to
improve the performance of the system.

A better formulation has been produced during the MAAT EU FP7 Project. A new design method that
couples the above bottom-up optimization with a preliminary system level top-down design has been
formulated. The two-step design process has been motivated by considering that constructal and modular
design do not, by themselves, refer to any objective to be reached in the design process. A theoretical formulation has been
provided in a recent paper, and applied with success to the design of a small aircraft, the conceptual design
of innovative commuter aircraft, the design of a new entropic wall, and an innovative off-road vehicle
designed for energy efficiency.

Chapter 3: Navigating the Sky: Avionics and Control Systems

[mh]Air Traffic Control Tracking Systems Performance Impacts with New Surveillance
Technology Sensors

Nowadays, the radar is no longer the only technology able to ensure the surveillance of air traffic. The
extensive deployment of satellite systems and air-to-ground data links has led to the emergence of other means
and techniques, on which a great deal of research and experimentation has been carried out over the past ten
years.

In such an environment, the sensor data processing, which is a key element of an Air Traffic Control center,
has been continuously upgraded so as to follow the sensor technology evolution and, at the same time,
ensure a more efficient tracking continuity, integrity and accuracy.

In this book chapter we propose to measure the impacts of the use of these new technology sensors in the
tracking systems currently used for Air Traffic Control applications.

The first part of the chapter describes the background of new-technology sensors that are currently used by
sensor data processing systems. In addition, a brief definition of the internal core tracking algorithms used in
sensor data processing components is given, as well as a comparison between their respective advantages
and drawbacks.

The second part of the chapter focuses on the Multi Sensor Tracking System performance requirements.
Investigations regarding the use of Automatic Dependent Surveillance – Broadcast reports, alone and/or
combined with a multi-radar configuration, are conducted.

The third part deals with the impacts of the “virtual radar” or “radar-like” approaches that can be used with
ADS-B sensors, on the multi sensor tracking system performance.

The fourth and last part of the chapter discusses the impacts of sensor data processing performance on
subsequent safety net functions, which are:

 Short term conflict alerts (STCA),


 Minimum Safe Altitude Warnings (MSAW), and
 Area Proximity Warnings (APW).

[h]Air traffic control


Air Traffic Control (ATC) is a service provided to regulate air traffic. The main functions of the ATC
system are used by controllers to (i) avoid collisions between aircraft, (ii) avoid collisions on maneuvering
areas between aircraft and obstructions on the ground, and (iii) expedite and maintain the orderly flow
of air traffic.

[h]Surveillance sensors
Surveillance sensors are at the beginning of the chain: the aim of these systems is to detect the aircraft and
to send all the available information to the tracking systems.
Figure. Surveillance sensor environment

Current surveillance systems use redundant primary and secondary radars. The progressive deployment of
GPS-based ADS systems shall gradually change the role of the ground-based radars. The evolution to the
next generation of surveillance systems shall also take into account interoperability and compatibility with
the current systems in use.

The figure above shows a mix of radar, ADS and multilateration technologies which will be integrated and
fused in ATC centers in order to provide high-integrity and high-accuracy surveillance based on
multiple sensor inputs.

[h]Primary Surveillance Radar (PSR)


Primary radars use the electromagnetic waves reflection principle. The system measures the time difference
between the emission and the reception of the reflected wave on a target in order to determine its range. The
target position is determined by measuring the antenna azimuth at the time of the detection.

Reflections occur on the targets (i.e. aircraft) but unfortunately also on fixed objects (buildings) or mobile
objects (trucks). These kinds of detections are considered parasitic returns (clutter), and the “radar data processing”
function is in charge of their suppression.

The primary surveillance technology applies also to Airport Surface Detection Equipment (ASDE) and
Surface Movement Radar (SMR).
[h]Secondary Surveillance Radar (SSR)
Secondary Surveillance Radar includes two elements: an interrogating ground station and a transponder on
board the aircraft. The transponder answers the ground station interrogations, allowing the station to determine
the aircraft's range and azimuth.

The development of the SSR proceeded with the use of Mode A/C and then Mode S for civil aviation.

Mode A/C transponders give the identification (Mode A code) and the altitude (Mode C code).
Consequently, the ground station knows the 3-dimensional position and the identity of the targets.

Mode S is an improvement of Mode A/C, as it contains all of its functions and allows selective
interrogation of the targets thanks to the use of a unique 24-bit address, as well as a bi-directional
data link which allows the exchange of information between air and ground.

[h]Multilateration sensors


A multilateration system is composed of several beacons which receive the signals emitted by the
aircraft transponder. The purpose is still to be able to localize the aircraft. These signals are either unsolicited
(squitters) or answers (SSR or Mode S) to the interrogations of a nearby interrogator system (which can be a radar).
Localization is performed using the Time Difference Of Arrival (TDOA) principle. For each pair of
beacons, a hyperbolic surface on which the difference in distance to the two beacons is constant is determined. The
aircraft position is at the intersection of these surfaces.

The accuracy of a multilateration system depends on the geometry of the system formed by the aircraft and
the beacons, as well as on the precision of the time-of-arrival measurements.
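
As a rough illustration of the TDOA principle described above, the following sketch solves the hyperbolic equations for a 2D aircraft position from time differences measured at four hypothetical receiver sites using a Gauss-Newton step; the station layout and the timing-noise level are assumptions chosen for illustration only.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

# Hypothetical receiver (beacon) positions in a local ENU frame [m] -- illustrative only.
stations = np.array([[0.0, 0.0], [30_000.0, 0.0], [0.0, 30_000.0], [30_000.0, 30_000.0]])

def tdoa_measurements(target, sigma_t=20e-9):
    """Simulated times of arrival differenced against station 0, with timing noise."""
    toa = np.linalg.norm(stations - target, axis=1) / C
    toa += np.random.normal(0.0, sigma_t, size=toa.shape)
    return toa[1:] - toa[0]

def solve_tdoa(tdoa, guess, iters=20):
    """Gauss-Newton solution of the hyperbolic TDOA equations for a 2D position."""
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(stations - x, axis=1)
        residual = C * tdoa - (d[1:] - d[0])
        # Jacobian of (d_i - d_0) with respect to the position x.
        unit = (x - stations) / d[:, None]
        J = unit[1:] - unit[0]
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x += dx
    return x

if __name__ == "__main__":
    true_pos = np.array([12_000.0, 8_000.0])
    est = solve_tdoa(tdoa_measurements(true_pos), guess=[15_000.0, 15_000.0])
    print("estimated position [m]:", est)
```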

Nowadays, multilateration is used mainly for ground movement surveillance and for airport approaches
(MLAT). Its use for en-route surveillance (wide area multilateration, WAM) is being deployed.

[h]Automatic Dependent Surveillance – Broadcast (ADS-B)


The aircraft uses its satellite-based or inertial systems to determine its position and other sorts of information
and sends them to the ATC center. Aircraft position and speed are transmitted at least once per second.

ADS-B messages (squitters) are broadcast, in contrast to ADS-C messages, which are transmitted via a point-to-
point communication. As a consequence, the ADS-B system is used both for ATC surveillance and for on-
board surveillance applications.

[h]Sensor data processing


As shown in the figure below, a sensor data processing system is generally composed of two redundant trackers.
Radar (including Surface Movement Radar) data are received directly by the trackers, while the ADS-B and
WAM sensor gateways help in reducing the data flow as well as checking integrity and consistency.
Figure. Sensor Data Processing

As shown in the figure above, trackers are potentially redundant in order to protect against sub-system failures.

Sensor Data Processing architectures have been shown and discussed in detail in .
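
The core tracking algorithms inside such trackers are recursive state estimators. As a minimal stand-in (not the algorithm of any particular ATC tracker), the sketch below implements a constant-velocity Kalman filter fed with position reports, which also illustrates why 1 s ADS-B-like updates improve accuracy compared with 4–12 s radar-like scans; all noise figures are assumed values.

```python
import numpy as np

class ConstantVelocityTracker:
    """Minimal 2D constant-velocity Kalman filter for position reports (illustrative)."""

    def __init__(self, pos, q=0.5):
        self.x = np.array([pos[0], pos[1], 0.0, 0.0])   # state: [x, y, vx, vy]
        self.P = np.diag([1e4, 1e4, 1e2, 1e2])          # initial covariance
        self.q = q                                      # process-noise intensity

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        G = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * G @ G.T

    def update(self, z, sigma):
        """Incorporate a position report z = [x, y] with standard deviation sigma [m]."""
        H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
        R = (sigma**2) * np.eye(2)
        y = np.asarray(z) - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

if __name__ == "__main__":
    # Track an aircraft flying east at 200 m/s; compare 4 s radar-like reports
    # (150 m noise) against 1 s ADS-B-like reports (30 m noise) -- assumed values.
    rng = np.random.default_rng(0)
    for period, sigma, label in [(4.0, 150.0, "radar-like"), (1.0, 30.0, "ADS-B-like")]:
        trk, errs = ConstantVelocityTracker([0.0, 0.0]), []
        for k in range(1, 121):
            truth = np.array([200.0 * k * period, 0.0])
            trk.predict(period)
            trk.update(truth + rng.normal(0.0, sigma, 2), sigma)
            errs.append(np.linalg.norm(trk.x[:2] - truth))
        print(f"{label}: RMS position error ~ {np.sqrt(np.mean(np.square(errs))):.0f} m")
```

Running this example typically shows a several-fold reduction of the RMS position error for the faster, more accurate reports, which is qualitatively consistent with the comparisons discussed in the following sections.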

[h]Sensor characteristics and scenarios


Radar sensor characteristics are available in table.

ADS-B sensor characteristics are available in table.

Scenarios that are used to compare the horizontal tracking performance among all possible sensor
configurations are composed of straight line motion followed by a set of maneuvers including turn with
different bank angles.

These scenarios are mainly derived from the EUROCONTROL performances described in . They have been
used to provide relative comparisons. Extrapolation of the results to live data feeds must take into account the
sensor configuration, the traffic distribution over the surveillance coverage and the specific sensor characteristics.

RADAR CHARACTERISTICS                                            PSR              SSR                    PSR + SSR

Range                                                            Up to 250 NM     Up to 250 NM           Up to 250 NM
Antenna rotation time                                            4 up to 12 s     4 up to 12 s           4 up to 12 s
Probability of detection                                         ≥ 90 %           ≥ 97 %                 ≥ 95 %
Clutter density (number of plots per scan)                       40               –                      –
Nominal measurement accuracy: Range (m) / Azimuth (deg)          40 / 0.07        30 / < 0.06            30 / < 0.06
Measurement quantization (ASTERIX): Range (NM) / Azimuth (deg)   1/256 / 0.0055   1/256 / 0.0055         1/256 / 0.0055
SSR false plots (%): Reflection / Side lobes / Splits            –                < 0.2 / < 0.1 / < 0.1  < 0.2 / < 0.1 / < 0.1
Mode A code detection probability                                –                ≥ 98 %                 ≥ 98 %
Mode C code detection probability                                –                ≥ 96 %                 ≥ 96 %
Mode C measurement accuracy (m)                                  –                7.62                   7.62
Time stamp error                                                 <= 100 ms        <= 100 ms              <= 100 ms
Nominal time stamp error (time disorder)                         50 ms            50 ms                  50 ms

Table. Radar sensor characteristics

ADS-B CHARACTERISTICS (1090ES)        NOMINAL VALUE

Range                                 250 NM
Refresh period                        1 s
Probability of detection              ≥ 95 %
Nominal Position Standard Deviation   Figure Of Merit 7
Altitude Standard Deviation           25 ft
ADS-B transponder consistency         100 %

Table. ADS-B sensor characteristics

[h]Simulation results


Multi sensor tracking accuracy has been evaluated among 5 sensor configurations that are:

1. PSR only: radar with 4s revolution period,


2. SSR only: radar with 4s revolution period,
3. Multi radars configuration including 1 PSR radar, 1 SSR radar and 1 PSR + SSR radar,
4. ADS-B only: one ADS-B ground station at 1s update rate,
5. Multi sensors configuration that includes both multi radars configuration and the ADS-B ground
station.
Figure. RMS position error comparison

Multi sensor tracking coverage helps to globally improve the tracking performance in terms of:

 Latency metrics: latency reduced in update/broadcast modes to several hundreds of milliseconds
instead of several seconds thanks to:
1. the update rate of new-technology sensors (1 s) compared to radar sensors (at least 4 s and up
to 12 s),
2. the variable update technique used, which does not buffer new-technology sensor data.
 Continuity/integrity metrics:
1. possible reduction of the multi sensor track broadcast cycle thanks to the update rate of new-
technology sensors,
2. quicker track initiation,
3. bigger coverage areas, including airport areas (MLAT) and desert areas (ADS-B) where no
radar data are available.
 Accuracy metrics:
1. improved accuracy even if the multi sensor configuration relies on one ADS-B ground station
only, as can be seen in the figure.

[h]Virtual radar emulation – “radar like” solutions


As can be seen in the previous paragraph, introduction of new technology sensors in the tracking systems
that are used for Air Traffic Control applications improves the global performance of the systems compared
to what is used at the current time (multi radar tracking systems).

The use of these new-technology sensors requires an evolution that leads from multi-radar tracking systems to
multi-sensor tracking systems.
Figure. Virtual radar concept

However, in most cases, the transition from the existing radar-based surveillance means (network, radar data
processing, …) cannot be done straight away, and the Air Navigation Service Providers mainly ask for an
integration of these new sensors into the existing system through a “radar-like” or “virtual radar” approach. The
decision can then be made to have the WAM/MLAT reports or ADS-B reports appear as if they came from
a radar. This process is explained in detail in (Thompson et al.). This concept is shown in the figure.
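
The core of a “radar-like” gateway is the re-encoding of each ADS-B (or WAM) position report as a plot from a virtual radar site. The sketch below shows a minimal version of that conversion, using a flat-earth approximation, an arbitrary virtual site and the range/azimuth quantization steps taken from the radar characteristics table above; a real gateway would also handle altitude, time stamping and the sector-based delivery of plots.

```python
import math

# Illustrative "radar-like" (virtual radar) conversion: an ADS-B position report is
# re-expressed as range/azimuth from a chosen virtual radar site and then quantized
# like a radar plot. The site location is an assumption; the quantization steps are
# the ones listed in the radar characteristics table above.

EARTH_RADIUS_NM = 3440.065          # mean Earth radius in nautical miles
RANGE_LSB_NM = 1.0 / 256.0          # range quantization step
AZIMUTH_LSB_DEG = 0.0055            # azimuth quantization step

def adsb_to_radar_like(lat, lon, site_lat, site_lon):
    """Approximate flat-earth conversion of a lat/lon report to quantized range/azimuth."""
    dlat = math.radians(lat - site_lat)
    dlon = math.radians(lon - site_lon) * math.cos(math.radians(site_lat))
    north_nm = dlat * EARTH_RADIUS_NM
    east_nm = dlon * EARTH_RADIUS_NM
    rng = math.hypot(north_nm, east_nm)
    azi = math.degrees(math.atan2(east_nm, north_nm)) % 360.0
    # Quantize as a radar plot would be encoded; this is where accuracy is lost.
    rng_q = round(rng / RANGE_LSB_NM) * RANGE_LSB_NM
    azi_q = round(azi / AZIMUTH_LSB_DEG) * AZIMUTH_LSB_DEG
    return rng_q, azi_q

if __name__ == "__main__":
    # Hypothetical virtual radar site and ADS-B report position.
    print(adsb_to_radar_like(lat=45.10, lon=5.30, site_lat=45.00, site_lon=5.00))
```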

Most of the advantages of the “radar-like” or “virtual radar” approaches are discussed in and in .

"Radar like" approach with new


Multi radars tracking system
technology sensors as ADS-B and WAM

Multi sensor coverage allowed: provides Only multi radars coverage. When an area is
Multi sensor coverage
coverage where none currently exist. covered by ADS-B only, no control can be done.

Transition from former to New technology sensors not used in existing


Allow transition and test environment
new technology systems

Table. “Radar like” solution main advantages

A comparison between a “radar like” approach and an integrated multi sensor fusion with Variable Update
technique is done in the following table.

                              "Radar like" approach                                        Integrated multi sensor fusion with
                                                                                           Variable Update technique

Existing radar data           Degrades the quality of the ADS-B / WAM report by            No latency introduced by any radar data
network impacts               introducing an additional latency (at least 1 s) to buffer   network. The refresh rate is the one
                              the reports. The refresh rate is increased to typically      provided by the sensor itself.
                              4 s (3 reports ignored out of 4) or 12 s (11 reports
                              ignored out of 12).

Time stamping                 Depending on the radar data format, the time stamping is     Time stamping available in the ADS-B
                              sometimes not available.                                     and WAM standards.

Fitting accurate polar        This approach is not able to associate a correct standard    Information available in the ADS-B /
data into a useless           deviation with the radar coordinates. For radar, the error   WAM standards.
radar format: impacts         standard deviations in range and azimuth are fixed. For an
                              ADS-B / WAM report, the standard deviation is not constant
                              and mainly depends either on the satellite configuration /
                              Inertial Navigation System precision/bias or on the
                              geometry of the receivers.

Down-linked Aircraft          Does not allow the transmission of DAPs information,         Information available in the ADS-B /
Parameters (DAPs)             including Mode S data, if CD2 or ASTERIX Category 001/002    WAM standards.
                              is used to transmit the ADS-B / WAM data.

Table. “Radar like” solution discussion

Figure provides a comprehensive comparison of the RMS position error accuracy between three
configurations:

1. ADS-B data are fitted into a multi sensor tracking system using the Multiple Report Variable Update
technique,
2. ADS-B data are fitted into standard radar data, and the multi sensor tracking system uses these
ADS-B data as if they were radar data,
3. ADS-B data are fitted into a useless radar data format (introducing high quantization in range and in
azimuth: Common Digitizer 2 format), and the multi sensor tracking system uses these ADS-B
data as if they were radar data.

Figure. RMS position error comparison between “radar like” and standard data fusion

By way of conclusion, we can say that:


 the “radar like” solution is interesting, whatever the kind of coverage:
1. when the tracking system is based on a track-to-track data fusion technique, and
2. when the ADS-B data have a high level of integrity.
 the “radar like” solution is interesting only when the area to cover is not yet covered by another kind of
sensor, when the existing tracking system uses a multiple report variable update technique,
 the accuracy of the “radar-like” solution is worse than when the available ADS-B standards are used,
 the gain in terms of accuracy is very low when the area is covered by multiple radars.

[h]Safety Nets impacts


Safety Nets are functions intended to alert air traffic controllers to potentially hazardous situations in an
effective manner and with sufficient warning time so that they can issue appropriate instructions to resolve
the situation.

Safety Nets monitoring systems typically include:

 Short term conflict alerts (STCA),


 Minimum safe altitude warnings (MSAW),
 Area proximity warnings (APW).

[h]Definitions
STCA (Short Term Conflict Alert) checks possible conflicting trajectories over a time horizon of about 2 or 3
minutes and alerts the controller prior to the loss of separation. In some systems, the algorithms used may also
provide a possible vectoring solution, that is, the manner in which to turn, descend, or climb the aircraft in
order to avoid infringing the minimum safety distance or altitude clearance.
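
As an illustration of the kind of short-term prediction STCA performs, the sketch below linearly extrapolates two track state vectors and tests whether the closest point of approach within a two-minute horizon violates an assumed separation minimum; the thresholds and the purely linear prediction are simplifying assumptions and do not represent a certified STCA implementation.

```python
import numpy as np

LOOKAHEAD_S = 120.0        # prediction horizon [s] (about 2 minutes, as in the text)
MIN_HORIZ_SEP_M = 9260.0   # assumed horizontal separation minimum (~5 NM)
MIN_VERT_SEP_M = 300.0     # assumed vertical separation minimum (~1000 ft)

def stca_check(track_a, track_b):
    """Linear-extrapolation conflict check between two tracks.

    Each track is (position[3], velocity[3]) in metres and metres/second.
    Returns (alert, time_of_closest_approach_s).
    """
    dp = np.asarray(track_b[0]) - np.asarray(track_a[0])   # relative position
    dv = np.asarray(track_b[1]) - np.asarray(track_a[1])   # relative velocity
    dp_h, dv_h = dp[:2], dv[:2]
    # Time of horizontal closest point of approach, clamped to the look-ahead window.
    denom = float(dv_h @ dv_h)
    t_cpa = 0.0 if denom < 1e-9 else float(np.clip(-(dp_h @ dv_h) / denom, 0.0, LOOKAHEAD_S))
    horiz_miss = np.linalg.norm(dp_h + dv_h * t_cpa)
    vert_miss = abs(dp[2] + dv[2] * t_cpa)
    alert = horiz_miss < MIN_HORIZ_SEP_M and vert_miss < MIN_VERT_SEP_M
    return alert, t_cpa

if __name__ == "__main__":
    # Two converging aircraft at the same flight level (illustrative numbers).
    a = ([0.0, 0.0, 10_000.0], [220.0, 0.0, 0.0])
    b = ([30_000.0, 8_000.0, 10_050.0], [-60.0, -70.0, 0.0])
    print(stca_check(a, b))
```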

Minimum Safe Altitude Warning (MSAW) is a sub-system that alerts the controller if an aircraft appears to
be flying too close to the ground or will impact terrain based on its current altitude and heading.

Area Penetration Warning (APW) is a tool that informs any controller that a flight will penetrate a restricted
area.

[h]Performance impacts discussion


The most widely used safety net is STCA, which is mandatory in many areas and appreciated by air traffic
controllers. STCA requires short-term trajectory predictions of up to 2 minutes. This is the maximum time
over which it is considered valid to predict aircraft paths based solely on surveillance data. The trajectory
data are

The utility of safety nets depends on both the reliability of conflict detection and the false alert rate. The
false alert rate tends to be highest in the areas where such tools are most needed, i.e. in the Terminal
Manoeuvring Areas, and particularly during the approach and climb-out phases of flight.

The safety nets function directly benefits from the more accurate state vector (position and velocity on both
the horizontal and vertical axes) provided by a multi sensor tracking system. Indeed, the use of more accurate
information and Down-linked Aircraft Parameters from sources such as ADS-B or MLAT/WAM, specifically in
Terminal Manoeuvring Areas, improves the tracking accuracy.

These enhancements of the safety nets ensure safer and more efficient operations, by taking into account the
development of new approach and climb procedures and by generalizing the use of user defined routes and
closely spaced route networks.
The possibility of using additional information (such as Aircraft Derived Data) for improving prediction
(with regard to safety issues) needs to be mentioned, as well as the technical feasibility of adapting safety
nets separation parameters to aircraft types.

Figure. RMS heading error comparison between update and broadcast at several update rates

Figure. RMS velocity error comparison between update and broadcast at several update rates

Multi sensor tracking performance helps to globally improve the STCA sub-system performance in terms of:

 Quicker STCA detection thanks to the reduction of the multi sensor track broadcast cycle:
1. the update rate of new-technology sensors (1 s) compared to radar sensors (at least 4 s and up
to 12 s),
2. the variable update technique used, which does not buffer new-technology sensor data.
 Reduction of the tolerances required for STCA,
 A more accurate multi sensor track velocity vector, as can be seen on figures 6 and 7, which leads to
fewer false STCAs, especially for maneuvering aircraft,
 Transmission of down-linked parameters, including rate of turn and trajectory intent information, that
helps the STCA to enhance and predict the track state vector more accurately.

Nowadays, the development of advanced ATM systems is realized by the implementation of advanced
means of communication, navigation and surveillance for air traffic control (CNS/ATM).

The definition of a new set of surveillance standards has allowed the emergence of a post-radar infrastructure
based on data-link technology. The integration of this new technology into gate-to-gate architectures has
notably the following purposes:

 improving the flow of air traffic, which is growing continuously,


 increasing safety related to aircraft operations,
 reducing global costs (fuel cost is increasing quickly and this seems to be a long-term tendency), and
 reducing radio-radiation and improving the ecological situation.

In this context, sensor data processing will continue to play its key role, and its software as well as its
hardware architecture is expected to evolve in the meantime. In a previous paper , we investigated the past
and future of sensor data processing architectures. In this chapter, we have demonstrated the interest of
integrating new-technology sensors, either in existing centers through the use of “radar-like” solutions or in
future ATC centers, in order to improve the global performance of the system.

Chapter 4: Propelling Progress: UAV Power and Propulsion

[mh] Wetland Monitoring Using Unmanned Aerial Vehicles with Electrical Distributed
Propulsion Systems

The Andean region, which comprises paramos and wetlands, is considered a biodiversity hotspot that
contains about one-sixth of earth's plant life. This extension of land is of great importance, since it
represents the main water reservoir for major cities in Colombia, Ecuador, and Peru. Both wetlands and
paramos are endangered ecosystems, and, hence, efficient and suitable monitoring solutions are urgently
needed. To this end, different monitoring techniques, including satellite imagery and the use of high-
resolution cameras mounted on manned airplanes, have been utilized. Nonetheless, the aforesaid methods are
not commonly affordable because they are costly and require long setup times.

The advent of unmanned aerial vehicles (UAVs) has encouraged periodical and low-cost management of
threatened ecosystems through real-time data acquisition. The incursion of aerial platforms into forestry
remote sensing has had a positive impact thanks to the usage of high-resolution sensors to gather data
regarding flora health, species inventory, or mapping in a periodic way. In this respect, multicopters have
been seen as the first option for monitoring; however, their low autonomy limits the area covered per flight.
Conversely, fixed-wing UAVs have been introduced to overfly larger areas. The imagery provided by these
tools has been collected using different payloads, ranging from basic RGB cameras to sophisticated radars.
Nevertheless, the time required for a specific mission profile is higher when operating in the Andean
highlands because of the harsh atmospheric conditions, which constrain the UAV autonomy and
performance.

Commercial UAVs usually perform under sea-level conditions with low wind gusts (lower than 16 m/s) and
higher air density. This means that an improvement in some UAV subsystems is required to tailor them for
high-altitude monitoring applications . Among the different characteristics that need to be upgraded to
enhance the UAV performance, the following can be summarized: a robust flight control system able to
withstand strong wind gusts (18 m/s), an aerodynamic and high-volumetric fuselage to store the avionics
and payload, and a highly efficient and eco-friendly propulsion system, which reduces energy consumption. The
latter two options are linked, and, thus, their implementation into the conceptual design requires the
assessment of their suitability to explore synergies for a more efficient UAV configuration.

The purpose of the present chapter is to investigate the performance of an electric-powered blended wing
body (BWB) UAV deployed on the aforesaid ecosystems. This baseline configuration has been selected
based on previous research , where it has been found that BWB configurations offer high volumetric
efficiency while providing good aerodynamic performance as a result of the elliptical lift distribution
improvement over the whole airframe . Furthermore, the BWB model facilitates the integration of different
propulsion architectures, which results in a broader spectrum of configurations for distributed propulsion .

Regarding the power source, the electric option has been seen as attractive because of the reduction of polluting
gas emissions, moderate cost, lighter weight, and high reliability. In the next section, a deeper explanation
of the propulsion configuration for this conceptual design is given.

[h]Distributed electric propulsion


Reconnaissance and surveillance of endangered environments through overflight missions require short
setup times, versatility, and noise mitigation. In this sense, electric propulsion has emerged as a reliable and
promising solution to accomplish the aforesaid requirements thanks to its high efficiency, moderate cost, and
eco-friendly essence. Nonetheless, the capacity of commercial batteries remains an issue because of
their lower energy density compared with their counterpart, fossil fuels .

Distributed propulsion is a revolutionary technology that seeks to reduce the noise and weight of an
aircraft by replacing large propulsors with a moderate number of small ones along the airframe, as
depicted in Figure. This offers the possibility of increasing the propulsive efficiency because a larger
propulsive area is considered, which, in turn, implies a lower jet velocity. Its application to small fixed-wing
UAVs has not been formally studied , and, consequently, the present chapter aims to assess the performance
of small UAV configurations with electric distributed propulsion. The study will focus mainly on power
consumption and performance improvements to demonstrate the feasibility of employing this technology in
small UAVs. Propulsive efficiency has not been considered as a figure of merit in the present study, because
of the low operating speeds and the electrical propulsion system, where the use of this parameter does not
capture well the improvement in the aircraft performance, as it does for turbofan engines.

Figure. Difference between distributed and non-distributed propulsion in a BWB.

It is important to note that distributed propulsion may offer other numerous benefits such as the elimination
of the aircraft control surfaces (thrust vectoring), flexible maintenance, decrease in noise, and reduction in
aircraft weight through inlet-wing integration . The study of these advantages is beyond the scope of the
present work, since this chapter is aimed at setting the basic conceptual configurations and assessing their
suitability for the case study. Nonetheless, these various features will be implemented in further research,
where the selected conceptual configurations will be assessed using a more holistic perspective.

For the assessment of suitable UAV configurations for wetland monitoring, parametric sizing and
aerodynamic assessment approaches were implemented into the conceptual design stage. Then, a brief
insight about the influence of electric distributed propulsion into the performance of the UAV configuration
using a semiempirical approach is exposed. In the next sections, initial sizing and modelling of the UAV
systems are further explained.

[h]Initial sizing
The design procedure starts by defining the mission requirements such as flight altitude, velocities, and
payload sensors. In this sense, a precise study of wetlands and paramos demands the usage of special sensors
applied in monitoring activities such as crop scouting, precision agriculture, surveillance, and air quality
monitoring. Some sensors used in these monitoring tasks are listed in Table. In this work, the payload's
weight was assumed to be 1 kg for practical purposes.

Main camera                        Mass [g]   Resolution [MP]

Logitech C510                      225        8
Canon PowerShot S60                230        5
Kodak Professional DCS Pro Back    770        16
Sony DSC-R1                        929        10.3

Table. Monitoring sensors applied for wetlands and forestry .

Next, the UAV layout is carried out, and the main aircraft characteristics such as the preliminary weight (WTOp),
wing planform area (S), and preliminary power required (PRp) are delineated through the constraint analysis
technique . This method consists of a matching plot that allows the design space of the aircraft to be defined
according to performance requirements such as stall speed, maximum speed, takeoff run, and ceiling
altitude . The outcomes of this design stage represent the general characteristics of a preliminary aircraft
architecture and will be employed to size the other parts.
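
A minimal sketch of such a constraint analysis is given below: for a range of wing loadings it evaluates the power-to-weight ratio needed to meet stall-speed, maximum-speed and rate-of-climb requirements using standard textbook relations; all numerical requirements and aerodynamic coefficients are placeholder assumptions, not the actual design values of the BWB UAV.

```python
import numpy as np

# Illustrative constraint-analysis (matching plot) sketch for a small electric UAV.
# All requirement numbers below are placeholders, not the values used for the BWB design.
RHO = 0.90          # air density at the operating altitude [kg/m^3], assumed
CL_MAX = 1.2        # maximum lift coefficient, assumed
CD0 = 0.025         # zero-lift drag coefficient, assumed
K = 0.06            # induced-drag factor, assumed
V_STALL = 12.0      # required stall speed [m/s]
V_MAX = 30.0        # required maximum speed [m/s]
ROC = 3.0           # required rate of climb [m/s]

def power_to_weight_vmax(ws):
    """P/W [W/N] needed to sustain V_MAX at wing loading ws [N/m^2]."""
    q = 0.5 * RHO * V_MAX**2
    return V_MAX * (q * CD0 / ws + K * ws / q)

def power_to_weight_climb(ws):
    """P/W [W/N] needed for the required rate of climb near the minimum-power speed."""
    v = np.sqrt(2.0 * ws / (RHO * np.sqrt(3.0 * CD0 / K)))   # speed for minimum power
    q = 0.5 * RHO * v**2
    return ROC + v * (q * CD0 / ws + K * ws / q)

if __name__ == "__main__":
    ws_stall = 0.5 * RHO * V_STALL**2 * CL_MAX               # max admissible wing loading
    ws_grid = np.linspace(20.0, ws_stall, 50)                # candidate wing loadings [N/m^2]
    pw_req = np.maximum(power_to_weight_vmax(ws_grid), power_to_weight_climb(ws_grid))
    i = int(np.argmin(pw_req))
    print(f"stall-speed limit: W/S <= {ws_stall:.1f} N/m^2")
    print(f"design point: W/S = {ws_grid[i]:.1f} N/m^2, P/W = {pw_req[i]:.1f} W/N")
```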

Afterwards, the wing shape is outlined by defining its geometrical parameters and sectional airfoils . The
wing geometry was set according to technical and semiempirical correlations , and the obtained results
were then contrasted with the corresponding data of commercial UAVs with similar characteristics . The
aerodynamic assessment was accomplished through the employment of the open-source Athena Vortex
Lattice (AVL) software, which incorporates the vortex lattice method (VLM) . On the other hand, due to the
lack of suitable analytical methods to calculate the weight of small UAV configurations, the preliminary
weight was estimated as a function of the internal volume and the structural material's density of the
aircraft .

At the end of this stage, the external shape of a conceptual model is obtained. Thereafter, it is necessary to
define a proper propulsion system through the match of the thrust required and the thrust available. Finally,
the weight of the resulting architecture is assessed through a refined model that takes into account the
propulsion system and the power source weight. Figure depicts the road map of the methodology employed
to generate and characterize a conceptual BWB UAV model. It is worth to mention that all the symbols
utilized along the chapter are reported in the nomenclature section at the end.
Figure. General methodology of initial aircraft sizing.

[h]Propulsion modeling
The main function of an aircraft propulsion system is to generate enough thrust to overcome the drag and
maintain a steady flight. For this work, firstly, suitable propellers were selected based on the operating
conditions and performance requirements of the UAV model. Then, the rest of the propulsion elements (motor,
electronic speed control and battery) were outlined based on the propeller's characteristics. Finally, the
established propulsion set is evaluated to verify that both the power and the thrust available satisfy the
requirements for the cruise condition. It is important to mention that, at this conceptual stage, the distortion and
momentum drag reduction of the incoming flow to the propeller have not been considered and will be studied
in future work.

Propellers are commonly characterized by the thrust (CT) and power (CP) coefficients through
semiempirical models at early stages of design . For this study, the aforesaid parameters were obtained from
an experimental database of low-Reynolds-number propellers by using the advance ratio (J) as the key driver for the
selection routine. This latter parameter relates the freestream velocity, the propeller diameter (ϕprop), and its
rotational speed (ω). For this work, the freestream velocity was set according to the desired cruise speed. In
this way, the proposed method involves an iterative scheme that consists of estimating the thrust and power
generated by a preselected propeller through the variation of ϕprop and ω for the desired freestream velocity
(Vc). The iterative loop stops when the thrust and power required are met by a certain configuration. Finally,
these results were used to select appropriate electric motors and batteries which can adapt well to the design
requirements. This semiempirical scheme was preferred since most of the available techniques are focused
on large propeller assessment and, hence, present limitations for their implementation into small aerial
platforms.
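
The iterative propeller-matching loop described above can be sketched as follows; the polynomial thrust and power coefficient curves, as well as the candidate diameters and rotational speeds, are placeholder assumptions standing in for the experimental low-Reynolds propeller database mentioned in the text.

```python
import numpy as np

RHO = 0.90        # air density at the operating altitude [kg/m^3], assumed
V_CRUISE = 18.0   # desired cruise speed [m/s], assumed
T_REQ = 6.0       # thrust required per propeller [N], assumed
P_REQ = 180.0     # shaft power budget per propeller [W], assumed

def ct_cp(J):
    """Placeholder thrust/power coefficient curves versus advance ratio J.

    In the actual procedure these would be interpolated from the experimental
    low-Reynolds-number propeller database; the expressions here are illustrative.
    """
    ct = max(0.11 - 0.08 * J, 0.0)
    cp = max(0.045 - 0.015 * J, 1e-4)
    return ct, cp

def match_propeller(diameters_m, rpms):
    """Return the first (diameter, rpm) meeting the thrust requirement within the power budget."""
    for d in diameters_m:
        for rpm in rpms:
            n = rpm / 60.0                      # revolutions per second
            J = V_CRUISE / (n * d)              # advance ratio
            ct, cp = ct_cp(J)
            thrust = ct * RHO * n**2 * d**4     # T = CT * rho * n^2 * D^4
            power = cp * RHO * n**3 * d**5      # P = CP * rho * n^3 * D^5
            if thrust >= T_REQ and power <= P_REQ:
                return d, rpm, J, thrust, power
    return None

if __name__ == "__main__":
    print(match_propeller(np.arange(0.20, 0.46, 0.05), range(3000, 9001, 500)))
```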

Brushless electric motors are commonly employed for small UAVs considering their simple design,
potential for downsizing, little maintenance, and performance that is independent of the flight altitude. In addition,
their purely inductive nature and their outrunner configuration (a rotor with magnets that surrounds the fixed
coils of the stator) enable them to generate high torque at a low rotational speed, eliminating the need for a
gearbox and facilitating their integration and testing at early stages of UAV design . In this context, appropriate
motors were outlined based solely on basic parameters provided by manufacturers, such as rotational speed and
torque. The selected motor must be able to generate the torque required by the propeller for its adequate
functioning at a certain rotational speed . Once a motor has been selected, various operating parameters like the
no-load current, voltage constant, and internal resistance, together with the torque and rotational speed
taken from the datasheets, were employed to estimate the required voltage (Um) and current (Im) of the
motor .
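
This estimation can be sketched with the standard first-order brushless-motor model, in which the torque is proportional to the current above the no-load current and the back-EMF is proportional to the rotational speed; the datasheet values used below are illustrative placeholders.

```python
import math

def motor_operating_point(torque_req_nm, omega_rad_s, kv_rpm_per_v, i0_a, r_ohm):
    """Estimate motor current Im and voltage Um from the first-order BLDC model.

    torque = Kt * (Im - I0) with Kt = 1/Kv (SI units), and Um = Im * R + omega / Kv.
    """
    kv_si = kv_rpm_per_v * 2.0 * math.pi / 60.0   # speed constant in rad/s per volt
    kt = 1.0 / kv_si                              # torque constant [N*m/A]
    im = torque_req_nm / kt + i0_a                # required motor current [A]
    um = im * r_ohm + omega_rad_s / kv_si         # required motor voltage [V]
    return im, um

if __name__ == "__main__":
    # Placeholder datasheet values (Kv, no-load current, internal resistance) and an
    # operating point taken from the propeller matching step.
    im, um = motor_operating_point(torque_req_nm=0.18,
                                   omega_rad_s=7000 * 2 * math.pi / 60,
                                   kv_rpm_per_v=700, i0_a=0.6, r_ohm=0.12)
    print(f"Im = {im:.1f} A, Um = {um:.1f} V")
```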

The motor current (Im) is then employed to select a proper electronic speed control (ESC) device and a
lithium-polymer (LiPo) battery. It is important to highlight that batteries for small UAVs are almost
exclusively lithium-based because they offer high capacity, low weight, and high discharge rates . For the
battery selection, two different scenarios were explored in this work. The first consisted of defining a
nominal battery capacity based on commercial off-the-shelf devices to estimate the flight endurance. The
second scenario aims to determine a suitable battery by giving a target endurance. This latter approach was
employed to assess the maximum endurance that could be achieved by the UAV, without the constraints of
off-the-shelf electronic components. Figure illustrates the road map to establish the electric propulsion
system during the conceptual design stage.
Figure. Electric propulsion definition and performance assessment methodology.

Once the propulsion system and the aircraft external shape have been framed, it is necessary to estimate the
UAV total weight in a more refined and accurate way. This value is then contrasted with the admissible
weight stated in the design requirements. Since typical procedures are focused on civil aviation, their
application cannot be extended to small aerial platforms. Instead, this work proposes a method that
individually accounts for the airframe, propulsion system, battery, and payload weights and then adds each
contribution to obtain the total weight as stated in Eq. (1).

The structural weight of the airframe was calculated with respect to the fuselage internal volume and the
material's density. The former was estimated through the convex hull method , and high-density foam was
assumed as the major airframe material . The weights of the remaining components in Eq. (1) were readily
obtained from manufacturers' datasheets.
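
Eq. (1) is a simple component build-up; a sketch of that summation is given below, where the structural fraction of the hull volume and all component masses are placeholder assumptions.

```python
# Sketch of the weight build-up of Eq. (1):
# W_TO = W_airframe + W_propulsion + W_battery + W_payload (all placeholder values).

FOAM_DENSITY = 35.0          # high-density foam [kg/m^3], assumed
AIRFRAME_VOLUME = 0.020      # internal (convex-hull) volume [m^3], assumed
STRUCTURE_FRACTION = 0.30    # fraction of the hull volume that is structural foam, assumed

def takeoff_weight(propulsion_kg, battery_kg, payload_kg=1.0):
    """Total takeoff mass [kg] from the individual component contributions."""
    airframe_kg = FOAM_DENSITY * AIRFRAME_VOLUME * STRUCTURE_FRACTION
    return airframe_kg + propulsion_kg + battery_kg + payload_kg

if __name__ == "__main__":
    print(f"W_TO ~ {takeoff_weight(propulsion_kg=0.6, battery_kg=1.2):.2f} kg")
```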

[h]Performance evaluation
The performance analysis is an engineering discipline that relies on inputs from aerodynamic and propulsion
assessments. In this sense, the performance evaluation aims to verify if a propulsion set (battery, motor and
propeller) meets the mission requirements such as endurance and range. For this purpose, both the power
required (PR) and the power available (P A) are determined. The former depends of the weight and the
aerodynamic efficiency of an aircraft, while PA depends on the propulsion system and its power source .

The power required (PR) is calculated by means of Eq. (2) , where ρalt is the air density at the desired altitude, S
is the wing planform area, WTO is the takeoff gross weight, and CD and CL are the drag and lift aerodynamic
coefficients. Note that WTO and S were previously defined in the initial sizing phase through the constraint
analysis, and the aerodynamic coefficients were estimated through the employment of the AVL software and
parametric characterization . This term represents the power required for the steady cruise condition. However,
the target power available (PA) must be greater than PR to account for more demanding flight conditions such
as the takeoff phase. This excess of power is linked to the rate of climb (RC), as Eq. (3) shows:

The power available (PA), which depends on the propeller, motor, and battery characteristics, was estimated
through analytical relationships regarding the non-dimensional coefficients (CT, CP, and J) . The computed value
of PA was verified to be greater than or equal to PR in order to guarantee that the aircraft reaches the absolute
ceiling altitude at the desired rate of climb, as explained before. Note that PR will be lower for the cruise
condition because the airplane is no longer climbing and, thus, the excess of power is zero. Finally, for
the distributed propulsion case, the total power available is estimated by multiplying the number of
propellers by the power generated by each one.
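
A minimal sketch of this performance check is given below. It uses the standard steady-level-flight power expression, which is consistent with the variables listed for Eq. (2), the excess-power form of the rate of climb implied by Eq. (3), and the distributed-propulsion total power taken as the number of propellers times the power of each one; all numeric inputs are illustrative assumptions.

```python
import math

G = 9.81

def power_required(w_to_kg, s_m2, rho, cl, cd):
    """Steady-level-flight power required, P_R = sqrt(2 W^3 CD^2 / (rho S CL^3))."""
    w = w_to_kg * G
    return math.sqrt(2.0 * w**3 * cd**2 / (rho * s_m2 * cl**3))

def rate_of_climb(p_avail_w, p_req_w, w_to_kg):
    """RC = (P_A - P_R) / W, i.e. the excess power per unit weight."""
    return (p_avail_w - p_req_w) / (w_to_kg * G)

if __name__ == "__main__":
    # Placeholder values for a small BWB UAV at the wetland operating altitude.
    w_to, s, rho = 4.5, 0.55, 0.90          # mass [kg], wing area [m^2], density [kg/m^3]
    cl, cd = 0.55, 0.045                    # cruise aerodynamic coefficients, assumed
    n_props, p_per_prop = 4, 45.0           # distributed propulsion: 4 propellers of 45 W each
    p_req = power_required(w_to, s, rho, cl, cd)
    p_avail = n_props * p_per_prop
    print(f"P_R = {p_req:.0f} W, P_A = {p_avail:.0f} W, "
          f"RC = {rate_of_climb(p_avail, p_req, w_to):.2f} m/s")
```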

It is important to highlight that the battery must provide a greater power than PA to account for energy losses,
as shown in Eq. (4), where ηprop represents the propeller efficiency, ηe is the efficiency of the electric set
(motor, electronic speed driver and battery), and Pbat is the power supplied by the battery:

The endurance and range are the key performance parameters because they reflect the time and distance that
the aircraft is able to fly without recharging. Their estimation for electric-powered airplanes through
analytical models has not received special attention because the devices for the efficient energy storage are
still under development and research. Nevertheless, few authors have introduced distinct and elaborated
methods to predict the aforesaid parameters from aerodynamic characteristics and battery working
conditions .

For this case, a simplified but sufficiently accurate approach has been employed to estimate the endurance and range
. This method assumes that the voltage remains constant and that the battery capacity decreases linearly. In
this sense, Eq. (5) was used to calculate the endurance at the cruise condition, where Cmin represents the minimum
battery capacity that can be reached within a safety margin and Ib is the battery current. The former was
assumed to be 20% of the total battery capacity because lithium-based batteries can be damaged if
discharged by more than 80% . On the other hand, Ib is a function of the motor current, avionics current, and
internal resistance. Its calculation is further explained in Ref. , and, hence, it will not be addressed in this
work. The numeric value (0.06) in Eq. (5) represents a unit conversion factor because the capacity of
batteries is commonly given in milliampere-hours, Ib in amperes, and the computed time is given in minutes.
It is important to highlight that only a single battery device was considered and that its number of cells was
determined based on the voltage required by the motor. The range was calculated by using the cruise speed
and the endurance, through the assumption of a rectilinear displacement:
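
A sketch of this endurance and range estimate is given below, following the constant-voltage, linear-discharge assumption and the 0.06 unit-conversion factor of Eq. (5); the battery capacity, battery current and cruise speed are placeholders.

```python
def endurance_min(capacity_mah, i_b_a, usable_fraction=0.8):
    """Cruise endurance [min] under the linear-discharge assumption of Eq. (5).

    E = 0.06 * (C - Cmin) / Ib, with Cmin = 20% of the capacity kept as a safety margin
    and 0.06 converting mAh / A into minutes.
    """
    c_usable = usable_fraction * capacity_mah
    return 0.06 * c_usable / i_b_a

def range_km(endurance_minutes, v_cruise_ms):
    """Range assuming a rectilinear displacement at the cruise speed."""
    return v_cruise_ms * endurance_minutes * 60.0 / 1000.0

if __name__ == "__main__":
    # Placeholder battery capacity, battery current and cruise speed.
    e = endurance_min(capacity_mah=10_000, i_b_a=12.0)
    print(f"endurance ~ {e:.0f} min, range ~ {range_km(e, v_cruise_ms=18.0):.0f} km")
```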

Chapter 5: The Art of Integration: Sensor and Payload Technologies


[mh] Building Blocks of the Internet of Things: State of the Art and Beyond
ICT has simplified and automated many tasks in the industry and services sectors. Computers can monitor
and control physical devices from very small to very large scales: they are needed in order to produce
semiconductor wafers and can help operate ships, airplanes or manufacturing devices. Until some years
ago, though, these solutions were monolithic and thus application specific.

In the field of monitoring and control, the wide adoption of modular design patterns and standardization,
together with the improvements in communication technologies, paved the way to the diffusion of single
component products that could be integrated as building blocks for ever more complex applications. An
array of embedded devices and autoID technologies is now available, as well as off-the-shelf platforms (e.g.
Oracle, IBM, Arduino, Arch Rock, Sensinode) which can be used and customized for addressing specific
purposes.

One of the biggest paradigms behind this trend is the Internet of Things (IoT), which foresees a world
permeated with embedded smart devices, often called “smart objects”, inter-connected through the Internet
(a better definition of the phrase “Internet of Things” will be provided in the next Section). These devices
should help blend together the digital and the physical world by providing Things with “identities and
virtual personalities” (European Technology Platform on Smart Systems Integration, 2008) and by providing
pervasive sensing and actuation features.

This scenario is very challenging, as not all the building blocks of the IoT are yet in place. Standardization
efforts are essential and have only recently been made, and a reference architecture is still missing. Other
research on this topic nowadays focuses on hardware and software issues such as energy harvesting, efficient
cryptography, interoperability, communication protocols and semantics. The advent of the IoT will also raise
social, governance, privacy and security issues.

This work provides a historical and conceptual introduction to the IoT topic. In the second part of the
chapter, a wide perspective on the aforementioned issues is provided. The work also outlines key aspects in
the process of moving from the current state of the art of IoT, where objects have digital identities, towards a
network of objects having digital personalities and being able to interact with each other and with the
environment. In the last part, a selection of the possible impacts of the IoT is analyzed.

[h]Evolution of a vision
The concept of Internet of Things was originally coined by Kevin Ashton of the MIT Auto-ID Center to
describe the possibility of using RFID tags in supply chains as pointers to Internet databases which
contained information about the objects to which the tags were attached. The concepts heralded in the
presentation made by Ashton in 1998, were soon realized in practice with the birth of the EPCglobal, a joint
venture aiming to produce standards from the Auto-ID Center, which eventually created the EPC suite of
standards and the homonymous architecture framework .

The phrase maintained this meaning until 2004, when, for the first time, a world where “everyday objects
[had] the ability to connect to a data network” was conceived . Innovative concepts such as extreme
device heterogeneity and an IP-based, narrow-waist protocol stack were introduced for the first time for what
was also called Internet0.
In recent years the hype surrounding the IoT has grown in proportion. Quite a few definitions
have been given, and we will analyse them briefly in order to provide a better definition of the Internet of
Things phrase.

In the final report of the Coordination and Support Action (CSA) for Global RFID-related Activities and
Standardisation [CASAGRAS] project the reader can find a compiled list of definitions which capture
different aspects of and meanings given to the concept of Internet of Things:

Initial CASAGRAS definition: “A global network infrastructure, linking physical and virtual objects through
the exploitation of data capture and communication capabilities. This infrastructure includes existing and
evolving Internet and network developments. It will offer specific object-identification, sensor and
connection capability as the basis for the development of independent cooperative services and applications.
These will be characterised by a high degree of autonomous data capture, event transfer, network
connectivity and interoperability”, Anthony Furness, European Centre of Excellence for AIDC

The CASAGRAS definition was given in the first part of year 2009, and was then confirmed in the final
report of the project. In this definition the IoT is first and foremost a network infrastructure. This is coherent
with the semantic meaning of the phrase which assumes that the IoT builds upon the existing Internet
communication infrastructure. The definition is also focused on connection and automatic identification and
data collection technologies that will be leveraged for integrating the objects in the IoT.

SAP definition: “A world where physical objects are seamlessly integrated into the information network, and
where the physical objects can become active participants in business processes. Services are available to
interact with these 'smart objects' over the Internet, query and change their state and any information
associated with them, taking into account security and privacy issues.” Stephan Haller, SAP AG

We would like to note here the focus on the physical objects, which are at the center of attention as the main
participants of the IoT. They are described as active participants in business processes. Besides, the IoT
here is more a vision than a global network, as the word “world” would suggest. The idea of using
services as communication interfaces for the IoT is also made explicit. Services will soon become one of the most
popular tools to broaden the basis of communication interoperability in the IoT vision. Security and privacy,
though not related to the definition of IoT, are also highlighted as critical issues .

Future Internet Assembly/Real World Internet definition: The IoT concept was initially based around
enabling technologies such as Radio Frequency Identification (RFID) or wireless sensor and actuator
networks (WSAN), but nowadays spawns a wide variety of devices with different computing and
communication capabilities – generically termed networked embedded devices (NED). More recent ideas
have driven the IoT towards an all encompassing vision to integrate the real world into the Internet […].

More recent definitions seem to emphasize communication capabilities, and to assign a certain degree of
intelligence to the objects .

“a world-wide network of interconnected objects uniquely addressable, based on standard communication


protocols.”

“Things having identities and virtual personalities operating in smart spaces using intelligent interfaces to
connect and communicate within social, environmental, and user contexts.”

In conclusion, we can thus identify two different meanings (and thus definitions) of the phrase: the IoT
network and the IoT paradigm. First and foremost, the Internet of Things is a global network, an extension of
the current Internet to new types of devices – mainly constrained devices for WSANs and auto-ID readers –,
aiming at providing the communication infrastructure for the implementation of the Internet of Things
paradigm. The Internet of Things paradigm, on the other hand, refers to the vision of connecting the digital
and the physical world in a new worldwide augmented continuum where users, either humans or physical
objects (the things of the Internet of Things), could cooperate to fulfill their respective goals.

Figure. The paradigm of IoT: from the current situation where digital and physical environments are uncoupled (a), to
one where physical and digital worlds can interact (b) and finally to one where physical and digital worlds are merged
synergically into an augmented world (c).

In order to realize the IoT paradigm, the following features will be gradually developed and integrated in or
on top of the Internet of Things network infrastructure, slowly transforming it into an infrastructure for
providing global services for interacting with the physical world:

 object identification and presence detection


 autonomous data capture
 autoID-to-resource association
 interoperability between different communication technologies
 event transfer
 service-based interaction between objects
 semantic based communication between objects
 cooperation between autonomous objects.

[h]A model for the Internet of Things


The aim of this section is to provide insight on the actors and components of the Internet of Things and how
they will interact. We will provide our definition of the concepts we deem essential in the Internet of Things
as previously defined in Section 2. What is expressed in the following paragraphs has been heavily
influenced by the fruitful interaction with our partners in the IoT-A project.
The generic IoT scenario can be identified with that of a generic User that needs to interact with a (possibly
remote) Physical Entity of the physical world. In this short description we have already introduced the two
key actors of the IoT. The User is a human person or a software agent that has a goal, for the completion of
which the interaction with the physical environment has to be performed through the mediation of the IoT.
The ‘software agent’ can equally be one residing on a server, on an autonomous constrained device, or
running on a mobile phone. The Physical Entity is a discrete, identifiable part of the physical environment
that can be of interest to the User for the completion of his goal. Physical Entities can be almost any object
or environment, from humans or animals to cars, from store or logistic chain items to computers, from
electronic appliances to closed or open environments.

We prefer, wherever it is possible, not to introduce a distinction between the world of constrained devices
and the one of full-function devices. Some authors refer to the IoT as a concept related only to constrained
devices. We prefer to stick to the previously provided definition, where the IoT is conceived as an extension
of the Internet, thus including it and all the related concepts and components.

Figure. Basic abstraction of the IoT interaction

In the digital world, Digital Entities are software entities which can be agents that have autonomous goals,
services or simple coherent data entries. Some Digital Entities can also interact with other Digital
Entities or with Users in order to fulfill their goal. Indeed, Digital Entities can be viewed as Users in the IoT
context. A Physical Entity can be represented in the digital world by a Digital Entity which is in fact its
Digital Proxy. There are many kinds of digital representations of Physical Entities that we can imagine: 3D
models, avatars, objects (or instances of a class in an object-oriented programming language) and even a
social network account could be viewed as such. However, in the IoT context, Digital Proxies have two
fundamental properties:

 they are Digital Entities that are bi-univocally associated to the Physical Entity they represent. Each
Digital Proxy must have one and only one ID that identifies the represented object. The association
between the Digital Proxy and the Physical Entity must be established automatically;
 they are a synchronized representation of a given set of aspects (or properties) of the Physical Entity.
This means that relevant digital parameters representing the characteristics of the Physical Entity can
be updated upon any change of the Physical Entity itself. In the same way, changes that affect the
Digital Proxy could manifest on the Physical Entity in the physical world (a minimal sketch of this
notion is given below).

While there are different definitions of smart objects in the literature , we define a Smart Object as the extension
of a Physical Entity with its associated Digital Proxy. We have chosen this definition as, in our opinion, what
is important is the synergy between the Physical Entity and the Digital Proxy, and not the
specific technologies which enable it. Moreover, while the concept of “interest” is relevant in the IoT context
(you only interact with what you are interested in), the term “Entity of Interest” focuses too much attention
on this concept and doesn’t provide any insight on its role in the IoT domain. This term was an alternative to
Entity in , which in turn we view as an unnecessary abstraction that can also be misleading. For these
reasons we have preferred the term Smart Object, which, even if not perfect (a person might be a Smart
Object), is widely used in the literature.

Indeed, what we deem essential in our vision of the IoT is that any change in the properties of a Smart
Object has to be represented in both the physical and the digital world. This is what actually enables everyday
objects to become part of digital processes.
This is usually obtained by embedding into, attaching to or simply placing in close vicinity of the Physical
Entity one or more ICT devices which provide the technological interface for interacting with or gaining
information about the Physical Entity, actually enhancing it and allowing it to be part of the digital world.
These devices can be homogeneous as in the case of Body Area Network nodes or heterogeneous as in the
case of RFID Tag and Reader. A Device thus mediates the interactions between Physical Entities (that have
no projections in the digital world) and Digital Proxies (which have no projections in the physical world)
extending both.

From a functional point of view, a Device has three subtypes (a minimal code sketch follows the list):

 Sensors can provide information about the Physical Entity they monitor. Information in this context
ranges from the identity to measures of the physical state of the Physical Entity. The identity can be
inherently bound to that of the device, as in the case of embedded devices, or it can be derived from
observation of the object’s features or attached Tags. Embedded Sensors are attached or otherwise
embedded in the physical structure of the Physical Entity in order to enhance it and provide a direct
connection to other Smart Objects or to the network; thus they also identify the Physical Entity.
Sensors can also be external devices with onboard sensors and complex software which usually
observe a specific environment in which they can identify and monitor Physical Entities, through the
use of complex algorithms and software training techniques. The most common example of this
category are face recognition systems which use the optical spectrum. Sensors can also be readers.
 Tags are used by specialized Sensor devices usually called readers in order to support the
identification process. This process can be optical as in the case of barcodes and QRcode, or it can be
RF-based as in the case of microwave car plate recognition systems and RFID.
 Actuators can modify the physical state of the Physical Entity. Actuators can move simple Physical
Entities or activate/deactivate functionalities of more complex ones.

It is also interesting to note that, as everyday objects can be logically grouped together to form a composite
object and as complex objects can be divided into components, the same is also true for the Digital Entities
and Smart Objects, which can be logically grouped in a structured, often hierarchical way. As previously
said, Smart Objects have projections in both the digital and the physical world plane. Users that need to
interact with them must do so through the use of Resources. (In this work we depart from the original and
abstract meaning of the term, which we consider closer to the definition of Entity of Interest.) Resources are
digital, identifiable components that implement different capabilities, and are associated to Digital Entities,
specifically to Digital Proxies in the case of IoT. More than one Resource may be associated to one Digital
Proxy and thus to one Smart Object.
Figure. Conceptual model of a Smart Object

Five general classes of capabilities can be identified and provided through Resources:

 retrieval of physical properties of the associated Physical Entity captured through Sensors;
 modification of physical properties of associated Physical Entity through the use of Actuators;
 retrieval of digital properties of the associated Digital Proxy;
 modification of digital properties of the associated Digital Proxy;
 usage of complex hardware or software services provided by the associated Smart Object

The use of remote processing capabilities for computation-intensive operations (e.g. the resolution
and lookup processes) or the usage of specific hardware (e.g. printers or projectors) are good
examples of this kind of Resource.

In order to provide interoperability, since Resources can be heterogeneous and their implementations can be highly
dependent on the underlying hardware of the Device, actual access to Resources is provided through Services.
Figure. Proposed Internet of Things reference model

The associations between Smart Objects and Resources (i.e. their identity) and the locations (i.e. network
addresses) of the related Services are either recorded in the Smart Object itself or stored (along with a
small amount of auxiliary information) in what we call the Resolution Service, an infrastructural component of
the Internet of Things. The Resolution Service is conceived as a registry-based provider of the essential
resolution service. Its task is very similar to that of the current DNS or ONS services: it takes as input the ID of a
Smart Object or Resource and provides as output the network addresses of the Services associated to it.

In the same way, a semantic description of the Resources and the ID of the associated Digital Proxy is
recorded in what we define as the Lookup Service. This is similar to today's semantic search engines in that
it accepts an input query and provides a relevance-ordered set of IDs, identifying Resources that might be
useful to the User, according to the semantic query provided by the User.
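To make the two infrastructural components more concrete, the toy Python sketch below (our own simplification; the URNs and CoAP addresses are invented examples, and a real deployment would use distributed registries and proper semantic matching rather than keyword overlap) shows a registry-style Resolution Service next to a naive Lookup Service:

# Toy Resolution Service: maps Smart Object / Resource IDs to Service addresses.
RESOLUTION_REGISTRY = {
    "urn:example:room-42:temp": ["coap://[2001:db8::1]/sensors/temp"],
    "urn:example:room-42:hvac": ["coap://[2001:db8::1]/actuators/hvac"],
}

# Toy Lookup Service: maps IDs to a (very simplified) semantic description.
LOOKUP_INDEX = {
    "urn:example:room-42:temp": "temperature sensor reading room 42",
    "urn:example:room-42:hvac": "hvac actuator heating cooling room 42",
}


def resolve(resource_id):
    """Return the network addresses of the Services bound to an ID (DNS/ONS-like behaviour)."""
    return RESOLUTION_REGISTRY.get(resource_id, [])


def lookup(query):
    """Return IDs ranked by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(desc.split())), rid) for rid, desc in LOOKUP_INDEX.items()]
    return [rid for score, rid in sorted(scored, reverse=True) if score > 0]


# A User first looks up relevant Resources, then resolves them to Service addresses.
for rid in lookup("temperature of room 42"):
    print(rid, "->", resolve(rid))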

[h]Identification, data collection and communication


The IoT vision has its basis in automatic identification (autoID). For the first time, ICT systems could
assign an identity to common objects, and soon these were able to become a passive part of automated,
computer-managed processes. Such processes initially aimed at shadowing physical processes by monitoring
them through the use of autoID.
Figure. Representation of the resolution registry

In the beginning, barcodes provided the first means of identifying items through optical labels. Barcodes
eventually evolved, also thanks to the spread of camera-equipped mobile phones, into bi-dimensional optical
codes such as the QRcode (Denso Wave, n.d.). In the meanwhile, the well-known RFID technology allowed,
for the first time, real-world objects to be efficiently integrated into digital processes, taking in this way the
first step towards the convergence and integration of the digital and real worlds that the IoT paradigm proclaims. A
relatively small form factor and low price, together with the limited need for maintenance, made this technology
a good solution for specific supply chain and asset management solutions .

Unfortunately though, the RFID technology has its limits. Designed for identification, it can only provide
information about presence and it also brings along a set of privacy and security issues. Semi-passive RFID
tags can provide readings from battery powered sensors, but communication is still one-way and objects are
not connected.

Sometimes passive RFID is also erroneously thought of as an authentication technology. This is a common
misconception. The option of using RFID for authentication purposes should be thoroughly
investigated prior to adoption and could even prove dangerous if system designers believe that RFID
provides a secure way of identifying things .

When it comes to data collection, networks provide the most powerful solution. Bidirectional
communication, enabling constant monitoring as well as command actuation, an always-available
connection and higher data rates are definitely appealing. Wireless networks, in turn, prove to
be a good solution because they need no physical infrastructure to operate and the deployment process is
easier. And so Wireless Sensor Networks (WSN) were born, providing the new performance levels that were
needed in some fields of data collection.

WSNs are made of a number of network (usually WPAN or LR-WPAN) nodes that often have automatic
(re-)configuration capabilities and provide a wireless communication channel for the data gathered by
onboard sensors. A user or a central business logic can get the collected data through a special node, usually
the coordinator of the network, which acts as a gateway. A good knowledge base on WSNs can be found in .
Bidirectional communication is also useful for requesting real-time data and commanding actuators; hence
the phrase Wireless Sensor and Actuator Network (hereafter WSAN) was coined. While in the literature the
term ‘WSN’ is much more common, we prefer ‘WSAN’ because limiting the functional definition of such a
network to sensing does not fit the IoT scenario, where the interaction with the real world is bidirectional.
Bidirectional communication is also useful for reprogramming devices directly in the field .

These technologies paved the way to a whole new set of applications thanks to their ease of deployment.
With almost no need for a physical network infrastructure, WSANs attracted a lot of interest from
application designers aiming to employ them in fields ranging from home and industrial automation and smart
metering to precision agriculture, environmental monitoring and healthcare .

These applications, though, are just the tip of the iceberg when it comes to the possibilities provided
by embedding sensors and actuators in the environment. The real revolution will take place when embedded
devices are able to provide and access resources through the Internet. This, together with the use of
semantics, will also uncover the untapped potential of context-awareness and autonomous decision making.

The first steps towards this vision have already been taken. As the IP protocol is the cornerstone of the
Internet and, as the IoT will be an extension of the current Internet, many have proposed to use IP, and in
particular IPv6, as the shared narrow waist of IoT-capable protocol stacks . Indeed, the prospect of having
50 to 100 billion devices by 2020 can even be viewed as one of the drivers of the adoption of IPv6.

In this context, the work of the 6LoWPAN group in providing an adaptation layer between the IPv6 network
layer and the MAC layer of IEEE 802.15.4 is worth mentioning . The adaptation was needed because of the
different purposes of IPv6 and of the IEEE 802.15.4 standard for Low Rate WPANs . The former was
based on the existing features of IPv4 and was designed for the Internet, while, at design time, LR-WPANs
were required to optimize energy consumption. Thus the work had to deal with the typical limitations of
constrained devices.

One of the greatest issues was the LR-WPAN PHY layer packet length of 127 bytes. This forced the
workgroup to rely on compression of the 40-byte IPv6 header in order to achieve larger application-
level payloads and thus greater efficiency in communication, which led to RFC 4944 . The reasons behind
this choice can be understood considering that the MAC header has a maximum length of 25 bytes, that the
possible overhead due to MAC layer security can take up to 21 bytes, and that fragmentation support in
upper layers can reduce the actual application payload even more.
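A back-of-the-envelope calculation illustrates why header compression matters. The figures below are the ones cited above, while the 8-byte UDP header and the size of the compressed IPv6/UDP header are illustrative assumptions, not values mandated by the standards:

# Illustrative 6LoWPAN payload budget (bytes); worst-case MAC figures from the text above,
# UDP header and compressed-header size are assumptions for illustration only.
PHY_MAX = 127            # IEEE 802.15.4 maximum PHY payload
MAC_HEADER = 25          # maximum MAC header
MAC_SECURITY = 21        # worst-case MAC-layer security overhead
IPV6_HEADER = 40         # uncompressed IPv6 header
UDP_HEADER = 8           # uncompressed UDP header
COMPRESSED_IPV6_UDP = 6  # assumed best-case compressed IPv6+UDP header

budget = PHY_MAX - MAC_HEADER - MAC_SECURITY  # 81 bytes left after the MAC layer

print("app payload without compression:", budget - IPV6_HEADER - UDP_HEADER, "bytes")  # 33 bytes
print("app payload with compression   :", budget - COMPRESSED_IPV6_UDP, "bytes")       # 75 bytes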

The potential of connecting small – though constrained – devices to the Internet has been readily perceived by
the actors of the embedded devices market. For example, alongside the interest from the academic
community, it is relevant that all the embedded platforms previously cited already provide support for
6LoWPAN. Contiki and TinyOS, two of the major operating systems for embedded devices, also provide
modules for 6LoWPAN.

Communication capabilities are essential for achieving the other features that have been associated with the
Internet of Things; cooperation among Smart Objects and the desired context-awareness are the most
relevant. In order for devices to exchange meaningful data, proper support at the service and application
layers is essential.

[h]The missing building blocks


The IoT paradigm is a visionary one. Currently there are more questions than answers and many challenges
need to be taken into account. Some building blocks, such as autoID technologies, WSANs and basic IP-
based communication, are (almost) available, yet others are still needed and obstacles line the path to the
advent of the IoT. Nonetheless, this vision, unlike many others, is in the realm of possibility, and the sheer
momentum of the effort it has gathered might lead to its success.

This section lists and analyzes the most relevant technological and scientific missing building blocks. Many
of these topics have been discussed in the frame of the Internet of Things Architecture [IoT-A] project ,
which aims at bridging many of these gaps.

A governance framework is also considered to be necessary, yet missing, and the related issues will be
discussed in the corresponding sub-section.

[h]Interoperability
The paramount challenge at the moment seems to be interoperability. This issue has many facets, some of
which are tightly intertwined with technical aspects. Even though there are many other challenges for the IoT,
one of the most important requirements to keep in mind when addressing them is that they need to be solved
in a common way for interoperability's sake. We have identified the following topics on which efforts from
the research and stakeholder community in creating interoperable solutions and towards standardization are
most needed:

 reference architecture and protocol suites
 identification schemes
 routing and addressing
 resource resolution and lookup
 semantics

Though not strictly related to standardization, governance and intellectual property management also have to
be addressed jointly and in an international frame. In this case, though, it is not the research or stakeholder
community that has to make efforts and take decisions, but the international entities that will be responsible
for the management of the infrastructure of the IoT.
[h]An architecture and a reference conceptual framework
Despite the interest in the topic and the huge amount of scientific papers, books and workshops about the
Internet of Things, there is a manifest lack of consensus on some concepts and definitions related to the
IoT.

As seen in Section 2, there is a certain degree of misalignment even in the definition of the Internet of
Things, and this also extends to other concepts used in this context. This misalignment translates into the fact
that the set of expected capabilities of the IoT is not the same throughout the scientific community. For
example, it is not clear whether the ability of co-located objects to interact must necessarily be mediated by
central infrastructure services or could be realized by local service discovery processes.

Also, there is much uncertainty about the functional components of the IoT. Depending on the required features
of the IoT, new infrastructure services will be needed. In Section 2 we have proposed the definition of the
Lookup and Resolution Services, but many others may be needed, for example to cope with security and
privacy issues. Such services also raise the problem of scalability from three perspectives:

 number of devices requesting a service from the IoT infrastructure
 number of Resource entries in the registry of an infrastructure service on which to perform the search
 client device resources

In this context, the fact that there is no reference architecture for the IoT is almost a consequence. To the
best of our knowledge, there is still very limited literature on the topic . The IoT-A project , as the name suggests,
will address this issue thoroughly over its three-year course.

[h]Privacy and security


Privacy and security, or the lack thereof, also pose a significant challenge for the correct deployment of the IoT
concept. Clearly, the peripheral part of the IoT is the most vulnerable one. Here, networks of constrained
devices and data-collection systems, generally characterized by very limited resources, aim to collect and
transport sensitive and sometimes critical data.

More and more often, such systems rely on wireless communication, which has greatly improved the ease of
deployment of data-collection systems, overcoming the physical limitations related to laying the cables
needed for a wired communication infrastructure. From a security point of view, though, wireless systems have
an intrinsic downside: they use a shared physical medium for communication. Sharing the air as the physical
medium means that attackers can easily and anonymously gain access to packets sent over the air, from a
distance and at minimal cost. Access to the data is then a simple matter if it is not encrypted. Moreover, as
there is no physical authentication, malicious users can inject forged packets at the Link Layer, disrupting
the network and possibly compromising any functionality of the upper layers.

Though many solutions for improving passive RFID security have been proposed in the scientific
community, very few standards actually implement relevant security features . The general problem is that
passive RFID tags provide very limited and vulnerable memory storage as well as minimal processing
capabilities. These aspects in turn limit the flexibility of the security features, so that, to the best of our
knowledge to date, it is impossible to secure (provide at least authentication, confidentiality and freshness for)
the typical IoT scenarios where RFID tags move around and interact with different readers pertaining to
different security domains.

As far as peripheral networks are concerned, security frameworks can be used to provide confidentiality,
integrity and authentication features. These frameworks work at the Link Layer in order to protect
the functionalities of the higher layers. On the downside, though, they introduce a considerable
communication and processing overhead to achieve their goal. Authentication in particular is essential in
order to prevent packet forging and avoid replay attacks.

Even for these systems, there is another common issue: in such systems there is no trusted actor (i.e. device)
by default. The process of defining a trusted actor and sharing the "secret", or key, subsequently used for
authentication or encryption is a critical and vulnerable one. Because of its utter importance, it has to be
done in a safe environment, which generally means connecting to the devices physically (by cable) or
wirelessly in a controlled setting. Moreover, in such systems keys are usually network-wide because of
memory constraints, which means that compromising one node might compromise the whole
network/system. Also, such keys cannot be as long as the standard key lengths used on unconstrained devices
because of the limited computational power.

As pointed out in , smart objects with communication capabilities usually use a gateway in order to connect
to the Internet. This gateway usually sits at the edge between the domain of constrained and unconstrained
devices and usually has fewer constraints than peripheral devices. It is interesting to note that these devices
are also on the border of two domains characterized by different security capabilities. It is thus reasonable to
delegate to these devices the task of scaling the security features needed to provide end-to-end security.
In Figure we describe three possible scenarios concerning authentication scalability. We consider that
gateways can authenticate all traffic incoming from the Internet side with a standard-length key and that, in
the most demanding scenario, they at the same time authenticate all the traffic outgoing from the WSAN.
These scenarios can easily be adapted for the confidentiality and integrity features.
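In very simplified terms, the "active scaling" scenario could look like the following sketch, in which the gateway verifies a full-length HMAC tag on traffic arriving from the Internet and re-authenticates it towards the WSAN with a shorter, truncated tag suited to constrained links. This is our own illustrative Python sketch: the key lengths, the 4-byte truncation and the choice of HMAC-SHA-256 are assumptions, not prescriptions of any particular standard.

import hmac, hashlib, os

# Assumed keys: a long key shared with Internet peers, a short network-wide WSAN key.
INTERNET_KEY = os.urandom(32)   # 256-bit key for the unconstrained domain
WSAN_KEY = os.urandom(16)       # 128-bit network-wide key for the constrained domain
WSAN_TAG_LEN = 4                # truncated 32-bit MIC, as often used on constrained links


def tag(key: bytes, payload: bytes, length: int = 32) -> bytes:
    """Compute an (optionally truncated) HMAC-SHA-256 authentication tag."""
    return hmac.new(key, payload, hashlib.sha256).digest()[:length]


def gateway_forward(payload: bytes, internet_tag: bytes) -> bytes:
    """Verify Internet-side authentication, then re-authenticate towards the WSAN."""
    expected = tag(INTERNET_KEY, payload)
    if not hmac.compare_digest(expected, internet_tag):
        raise ValueError("packet rejected: Internet-side authentication failed")
    # Re-sign with the short network-wide key and truncated tag for the WSAN link.
    return payload + tag(WSAN_KEY, payload, WSAN_TAG_LEN)


# Example: a command sent from an Internet host towards a WSAN actuator.
cmd = b"hvac:set:21.5"
frame_for_wsan = gateway_forward(cmd, tag(INTERNET_KEY, cmd))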

A more complex scenario, though, arises in the case of nomadic nodes that use unfamiliar networks to
connect to the Internet and thus were not pre-configured. In this case, there is also the issue of having nodes
that do not trust the gateway by default. A Certification Authority could be used to provide mutual trust
between the gateway and the mobile node, but this might prove risky as access to the CA is provided by
the (un-trusted) gateway.
Figure. Security scalability scenarios: a) only the gateway (and thus the source network) can be authenticated, b)
tunnelling and c) active scaling of features

In conclusion, in current WSAN-based IoT-like systems: a) an Internet-wide security framework
implementation for granting Link Layer security features is not feasible; b) long keys cannot be used for
cryptography/signing on embedded devices; c) the security of mobile constrained devices is even more
problematic; and d) we suggest the use of gateways as tools for scaling the security features between the core
area of the IoT network and the peripheral part.

[h]Governance
A specific regulatory frame that takes into account private, business and public needs for the IoT is needed.
Also, governing bodies have to adopt a shared strategy for managing and maintaining a global, international
and possibly critical infrastructure such as the Internet of Things.

Defining and enforcing policies for such a network is also an important and delicate issue due to its
characteristics (physical pervasiveness, trans-national reach, transport of sensitive data,...) and the criticality
of the potential implications.

If today's Internet has stretched the previous definitions of intellectual property, the IoT scenario will likely
challenge the old definitions even further. For example, imagine a future environment, a private ground that is
publicly accessible, such as a supermarket. Many people move through this space and the devices they wear
continuously collect data in the open environment. This data might be made accessible to users all over the
world through the IoT. But whose property is this data? Does it belong to the environment's owner, to the
sensor's owner, or to the collector of the data? And what if the sensor simply traces other devices?
How will the user be informed that some data regarding him has been acquired? Will he actually be
informed? How will privacy rules be enforced in such an environment?

Many questions, but few answers. It is certain, though, that the Internet of Things will introduce a whole new
set of security and privacy issues and that users must be able to understand and manage the security and
privacy features of their devices in order to benefit from the deployment of the IoT. This is of the utmost
importance, and we believe that designing and supervising this process is one of the main priorities of
governance.

[h]Social implications
Pervasive technologies might have a considerable and positive impact on society, but they also have the power
to be very disruptive. For these reasons, the design and adoption of the Internet of Things shall be performed
taking into account the implications for society that we can foresee, and limiting, where possible, the
consequences we cannot predict.

For example, the digital divide is one of the issues which we can foretell will be accentuated by the
adoption of the IoT if it is not correctly designed. In the first instance, digital culture is not homogeneously
distributed across the territory: people living in densely populated areas are usually more used to technology
and adapt more easily to new IT developments. The IoT will be a great challenge for all users because the
interaction paradigm will be completely new to them. Just to realize how difficult it will be, one could think
of how users will manage the privacy and security features of the 3 to 8 display-less devices which have
been forecast to belong to their personal space.

In the second instance, while deploying stand-alone WSAN solutions in remote areas is relatively easy, taking
the Internet of Things to rural areas will be very difficult because a proper infrastructure and
maintenance will be needed. The advent of the IoT without a properly established and pervasive infrastructure
for connecting to the Internet could accentuate the division between less and more urbanized areas.

Another underestimated impact of the IoT is on education. The acceleration of the information flow and the
ready availability of information in what will be an augmented reality will drastically change the way
people learn. If young people manage to master the new interaction paradigms that will characterize the
IoT, their relation to the physical world will also be drastically changed, and we do not know how these
changes will manifest later on.

[h]Economic implications
The IoT will doubtlessly have an impact on the economy. While in the first phase we expect only a limited,
vertical impact, with the wider adoption of IoT-enabled solutions the benefit derived from adopting the IoT
paradigm will increase in a typically exponential way.

The first impact we foresee is the improvement of process efficiency in all economic sectors thanks to the
adoption of large-scale automation. Secondly, brand new services for private, public and business
users will be designed and developed.

There are though many open questions related to the economic implications of large-scale adoption of IoT.
Such questions involve all scales from enterprise-level to an international level:

 what will the underlying business models look like? When will the ROI rate be high enough to
sustain the spontaneous adoption of the IoT paradigm by mainstream enterprises in industry and
agriculture?
 will excessive automation change the economic model of countries? Will it have a negative impact
on society?
 as the adoption of the IoT by developed countries will come sooner than in developing
countries, will this accentuate the gap between them? Or could it even help the economy of such
countries?

While we strongly believe that it is important to keep these questions in mind while designing the Internet of
Things, we also believe that, due to the expected highly accelerated rate of development and adoption, making
accurate long-term forecasts in the IoT scenario is very difficult.

[h]Environmental impact
Having such a large number of "things" integrating electronic circuitry and components might have a
significant impact on the environment. First of all, the sheer amount of hardly recyclable or even hazardous
materials that will be introduced into the environment could represent a serious pollution danger. Moreover,
objects integrating electronic devices that are disposed of at the end of their life cycle will be difficult to
treat, let alone recycle, which again can increase pollution. Thus, new materials and recycling techniques
for objects incorporating electronic devices should be developed.

Chapter 6: From Blueprint to Reality: UAV Prototyping and Testing


[mh] The State of Augmented Reality in Aerospace Navigation and Engineering

Augmented Reality (AR) technology has had one of its most significant impacts in the aerospace sector. Caudell and
Mizell first coined the term "Augmented Reality" to describe an optical see-through head-mounted display that
superimposed and anchored computer-generated graphics in an aircraft manufacturing plant. The technology would
track the user's head pose and place Computer Aided Design (CAD) data or other relevant information, in a simplified
format, augmented over the user's visual field of the real world, hence the name "Augmented Reality." While the
name has only existed for about three decades, the concept of AR existed long before then; both the aircraft Head-Up
Display (HUD) and the Head-Worn Display (HWD) existed long before that. In this chapter, we attempt to discuss the
evolution of these technologies slightly differently than several existing works, and also provide information on how
the technology is evolving, particularly in the navigation, engineering, and design sectors.

[h]Flight navigation
Today, flying an aircraft depends on three factors: the machine, the controls and instruments, and the human
operator . The machine is what flies, and the human operator (i.e., the pilot) is the one who flies the machine.
But without the proper controls and instrumentation, pilots would have no clue how, or in which direction, to
fly the aircraft. Of course, the first powered aircraft by the Wright Brothers, the 1903 Wright Flyer that
flew in Kitty Hawk, North Carolina, did not have any instruments to provide the pilots with such information .
Instead, a person on the ground would use a stopwatch, an anemometer, and an engine revolutions counter
to calculate the distance flown, the speed, and the horsepower of the propeller engine. Following the Wright
Brothers' invention, many would continue to fly aircraft. However, without proper instrumentation, many pilots
lost their lives because of structural failure or stalling, leading to the implementation of the first
visual indicators in 1907. Pilots would be trained to fly the Wright aircraft using an incidence indicator
consisting of two limiting red marks on the scale to identify the relative pitch of the aircraft . Mechanical
displays would continue to evolve for the next five to six decades, followed by electromechanical displays
between the 1930s and 1970s, and then by the first and second generations of the Electronic Flight Instrument
System (EFIS) . Mechanical displays were pressure-based instruments and would often result in slower-than-required
indication of various flight parameters. Unlike them, the electromechanical instruments would be
electrically powered while the indications would still be driven pneumatically . After decades of research
within civil and military aviation, a standard arrangement of the various instruments was developed, which is
still found in older aircraft today. While such displays provided more stable and accurate data for pilots during
flight, presenting ever more information for better situational awareness would have required ever more eyes
on the flight deck, resulting in the development and evolution of the EFIS. EFIS is a purely digital
display system that receives its data through the onboard flight computer, which in turn receives its data from the
onboard sensors. The newer generation of EFIS, referred to as the glass cockpit, uses a standard set of
display units including a Primary Flight Display (PFD), a Navigational Display (ND), an Engine Indicating
and Crew Alerting System (EICAS) or an Electronic Centralized Aircraft Monitor (ECAM), a
Multifunctional Display (MFD), and a Flight Management Computer (FMC) .

Much like the evolution of Head-Down Displays (HDDs), the first recorded usage of a HUD dates to the 1920s,
when a reflector gunsight designed by Sir Howard Grubb was used in a fighter aircraft. His design was important
because the gunsight would project a distant virtual image of a back-illuminated aiming graticule such that the
graticule could be superimposed over the distant target. With a typical gunsight of the time, the gunner would
have to align the target with a backsight and a foresight. That said, it was not until the 1940s that a dynamic visual component
would be added to a reflector gunsight. Maurice Hancock designed this gyroscopic gunsight and used it on
the RAF Spitfire and Hurricanes. For his invention, he used two independent sights: one was a version of
Grubb’s sight, and the second was an aiming symbol that shifted across the line of sight by an angle that
changed based on aircraft speed, altitude, attitude, and turn rate . Following this important feat, military
aircraft in the 1950s and 1960s would begin displaying other flight-related details such as flight path vector
into the displays. In 1962, a British strike aircraft named the Blackburn Buccaneer would be the first aircraft
to have a fully operational HUD . By the 1970s, HUDs would start being used in commercial aircraft,
starting with Sextant Avionique in the Dassault Mercure aircraft in 1975, shortly followed by Sundstrand
and Douglas in their MD80 series aircraft. Once the technology hit the commercial market, HUDs were
prioritized for safe landing and low-visibility operations. By the early 2000s, HUD-equipped commercial
aircraft had logged over 6 million flight hours with 30 thousand low-visibility operations . In 2009, the Flight
Safety Foundation (FSF) released a report stating that Head-Up Guidance System Technology (HGST)
prevented about 38 percent of overall potential accidents and 69 percent of accidents occurring during
take-off or landing . Today, almost all airline and business jet aircraft are equipped with a HUD system. The
evolution of HUD and VR would later inspire the invention of the Sword of Damocles by Ivan Edward
Sutherland in 1968 , and the development of the Visually Coupled Airborne Systems Simulator (VCASS)
and the Super Cockpit Program , both led by Thomas A. Furness III between the 1960s and 1980s. Their work
would inspire the military to consider the use of Helmet Mounted Displays (HMDs) to keep minimum flight
and combat information always visible during flight.

[h]Head-up display (HUD)


A HUD consists of two components: a Pilot Display Unit (PDU) and a HUD computer . The PDU is
simply a semi-transparent visor that is situated in the glareshield or above the pilot’s head. The HUD
computer generates an image based on the flight information which is then reflected onto the PDU through a
projector connected to the computer. To ensure visibility throughout the various stages of flight, the
displayed contents are usually either monochrome green or a combination of monochrome green and
magenta. The combiner glass on the PDU is specially coated so that only the color of light projected from
the image source is visible to the pilot.

The main purpose of a HUD is to superimpose imagery over the pilot's forward Field of View (FOV)
outside the window . In doing so, it reduces the amount of time pilots have to focus on the HDD,
especially during landing or low-visibility conditions. The HUD contents are collimated on the visor, which
means the light rays travel parallel into the eye, so the image appears at optical infinity. Hence, the focus of the
eyes does not need to be readjusted when transitioning between the display and the Out-The-Window (OTW)
view. Lastly, the HUD's graphical contents are generated digitally. Hence, some components of the imagery can
be made conformal with what the visuals are trying to represent. For instance, on a taxiway, the HUD can be
adjusted to overlay a conformal representation of the horizon line as seen OTW from the pilot's point of view,
as seen in Figure. Or it could be used to display advanced symbologies such as the Tunnel-in-the-Sky (TS)
visual, as later shown for a conceptual Urban Air Mobility (UAM) simulation in Figure.
Figure. C-130J HUD .

Figure. The next generation of UAM AR-based cockpit: (a) HUD view from the UAM pilot's point of view on a transparent
AR screen, (b) HUD view using Microsoft HoloLens 2 to prove the concept of the UAM flight corridors .
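As a rough illustration of how a conformal element such as the horizon line can be positioned, the simplified Python sketch below (our own; it uses a small-angle pinhole approximation, ignores optical distortion and boresight offsets, and its sign conventions are purely illustrative) maps aircraft pitch and roll to a line on a HUD image plane:

import math

def horizon_line(pitch_deg, roll_deg, v_fov_deg, width_px, height_px):
    """Return two (x, y) pixel endpoints of a conformal horizon line.

    Assumes a simple pinhole model: pixels per degree is constant over the
    (narrow) HUD field of view, and the horizon is displaced from the image
    centre by the pitch angle, then rotated by the roll angle.
    """
    px_per_deg = height_px / v_fov_deg
    cx, cy = width_px / 2.0, height_px / 2.0
    offset = pitch_deg * px_per_deg          # nose up pushes the horizon down (larger y)
    roll = math.radians(roll_deg)
    dx, dy = math.cos(roll), math.sin(roll)  # direction along the horizon line
    ox, oy = -math.sin(roll), math.cos(roll) # perpendicular direction for the pitch offset
    half = width_px                          # extend the line well past the screen edges
    x0, y0 = cx + ox * offset - dx * half, cy + oy * offset - dy * half
    x1, y1 = cx + ox * offset + dx * half, cy + oy * offset + dy * half
    return (x0, y0), (x1, y1)

# Example: 5 degrees nose-up, 10 degrees bank on a 1280x720 HUD with a 22-degree vertical FOV.
print(horizon_line(5.0, 10.0, 22.0, 1280, 720))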

[h]Helmet mounted display (HMD)


When using the HUD, it is assumed that the pilot only needs to focus on his/her forward FOV. As shown in
Figure, the HUD's total FOV is much smaller than the HMD's. This is mainly because the HUD's total
FOV is often the same as its instantaneous FOV, as the pilot is assumed to be focusing only on the HUD.
On the contrary, HMDs are equipped with head tracking, allowing pilots to move their head around.
Hence, their total FOV is much larger than their instantaneous FOV .
Figure. HUD versus HMD: FOV .

Although HMDs tend to provide better SA around the aircraft during flight, they are often prone to causing
pilot discomfort. Imagine a pilot flying an aircraft with a HUD while only looking in one direction. Now
imagine the same pilot flying the same aircraft with an HMD while trying to look in the same direction.
Since the HMD is locked directly to the pilot's head, his/her head also needs to stay rigid, which is a difficult
task for any living being. As a result, HMDs are fitted only in military aircraft and not in any
commercial aircraft.
Figure. Thales TopOwl HMD .

That said, many aerospace organizations have begun to rely on modern AR Head-Worn Displays for research
and training purposes. The new generation of AR and XR headsets, such as the Varjo XR-3, Microsoft HoloLens
and Magic Leap 2, is not only capable of generating extremely high-resolution visuals but also capable of
generating spatially anchored data in the close-proximity environment . While these devices are not certified
for in-field navigation purposes, they have proven to be a great tool for pilot training , simulation , and HMI
testing purposes .
Figure. Magic leap 2 AR headset being used for a potential application of airport surface navigation.

Figure. Flight display testing using Microsoft HoloLens 1 .

[h]Degraded visual environment


In aviation, one of the most dominant factors in aircraft accidents is the Degraded Visual Environment
(DVE), similar to the one shown in Figure . A degraded visual condition refers to a state in which
pilots experience partial or complete loss of visual cues, often due to fog, time of day, brownouts, whiteouts,
or simply bad weather . Flights in such situations often result in reduced Situational Awareness
(SA). As will be discussed in the next couple of subsections, pilots rely heavily on visual cues to taxi, take
off, and land an aircraft or a rotorcraft. However, if they cannot see these cues, they need to rely on the
instruments. These rules are categorized as Visual Flight Rules (VFR) and Instrument Flight Rules (IFR).
One of the key problems with IFR during DVE conditions is that pilots can experience spatial disorientation
between the Out-The-Window (OTW) visuals and what they see on the Head-Down Displays (HDD). Even
for experienced pilots, operating an aircraft or a rotorcraft under IFR can be challenging. More often than not, a
small fault in the instruments can also lead to a disaster, as described in . One way to mitigate this problem is
to utilize AR technology to overlay the runway or taxiway information along with relevant terrain data to
increase the pilot's awareness. Moreover, a combination of these symbologies along with a properly crafted
SVS can help pilots operate in VFR-like conditions even in a DVE state .
Figure. DVE caused during helicopter landing on desert .

[h]Vision systems
As mentioned in the previous section, in a DVE condition, while protocol dictates that pilots follow IFR when
operating an aerial vehicle in relation to the ground, pilots can occasionally experience spatial disorientation
resulting in potential accidents. To prevent this, HUDs and HWDs are often equipped with various vision
system technologies. In aviation, the three most common systems are Enhanced Vision, Synthetic
Vision, and Combined Vision .

[h]Enhanced vision system (EVS)


EVS, or Enhanced Flight Vision System (EFVS), uses onboard sensors and light emitters to improve the
visibility of the OTW environment. These sensors or emitters can include a Forward-Looking Infrared
(FLIR) sensor, a millimeter-wave radar scanner, a millimeter-wave radiometer camera, or a set of Ultraviolet
(UV) sensors. Besides the typical flight information, EFVS data are presented on the HUD or HMD via an
analog or digital video feed as recorded from the front of the aircraft, with the visibility enhancements applied.
Since EFVS essentially uses standard physical equipment to improve visibility, it is not suitable for all
environmental conditions, which limits how helpful it can be when used with a HUD or an HWD .

[h]Synthetic vision system (SVS)


Unlike EFVS, SVS uses a 3D rendering tool to generate models of the surrounding terrain from databases,
using Global Navigation Satellite System (GNSS) data for position, heading, and elevation. Since the
data is generated separately, similar to developing scenes on 3D development platforms, any geolocated
features such as airport markers, obstacles, or runway features can be conformed onto the virtual terrain
architecture. Moreover, since the terrain model is generated from available data, it can be used in all
weather conditions .
[h]Combined vision system (CVS)
As the name suggests, CVS combines the details captured from the real-world view in the EFVS and
superimposes them onto the models generated for the SVS. It allows for a selective blending between the
two technologies while providing real-time synthetic data, resulting in potentially better situational
awareness than either of the previous systems .
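A very rough way to picture the selective blending performed by a CVS is a per-pixel weighted combination of an EFVS camera frame and an SVS rendering. The toy Python sketch below is our own illustration and uses a single global blend weight, whereas a real CVS would blend adaptively, region by region:

import numpy as np

def blend_cvs(efvs_frame: np.ndarray, svs_frame: np.ndarray, weight: float) -> np.ndarray:
    """Blend an enhanced-vision frame with a synthetic-vision frame.

    weight = 1.0 shows only the sensor-based EFVS image,
    weight = 0.0 shows only the database-driven SVS rendering.
    """
    if efvs_frame.shape != svs_frame.shape:
        raise ValueError("frames must have the same resolution")
    mixed = weight * efvs_frame.astype(np.float32) + (1.0 - weight) * svs_frame.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(np.uint8)

# Example with dummy 720p greyscale frames; in clear air one might favour the EFVS image,
# while in heavy fog the synthetic terrain rendering would dominate.
efvs = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
svs = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
combined = blend_cvs(efvs, svs, weight=0.3)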

[h]Surface navigation
One of the most challenging aspects of aircraft navigation is taxiing along the airport taxiways . Especially
in large aircraft, pilots must be able to steer the aircraft while following the taxiway centerlines precisely.
Traditionally, pilots rely on verbal communication with the Air Traffic Control Officers (ATCOs) and on taxi
charts. Airport taxiways and runways are often equipped with a collection of pavement markings and
designation signs. Both ATCOs and taxi charts make reference to these markings and signs, allowing pilots
to follow the taxiway and runway prior to take-off or after landing. Besides these two, most aircraft are also
equipped with Electronic Moving Maps (EMM) or an Onboard Aircraft Navigation System (OANS) to help
pilots taxi more efficiently . Despite the infrastructure built to enhance their capability to taxi the aircraft, a
single miscommunication between the pilots and ATCOs, or the pilots' misinterpretation of the taxi
charts or maps, can cause mild to fatal damage to the aircraft, its crew, and passengers, as reported in . One
way to minimize such incidents or accidents on airport taxiways and runways is to use AR technology.

In 1996, David C. Foyle et al. introduced a HUD symbology configuration consisting of scene-linked 3D
symbologies for taxiway centerlines and traffic edge cones, and 2D symbologies for additional textual
information such as Ground Speed (GS). These symbologies were designed to provide additional support to
the pilots while minimizing their need to divert their attention to other visual contexts for the task, and to
improve overall Situational Awareness (SA). Between 1996 and 2010, Foyle, Andre, and Hooey would lead
multiple improvements of the design, focused on different aspects such as the importance of
different types of information, or automated versus manual display of HUD components during a simulated
flight.

One of the biggest challenges with surface flight operations using a HUD is that implementing head tracking
is extremely complex, as the scene-linked visual markers need to remain conformal to what they are
representing. The simplest solution to this problem is to use a Head-Worn Display (HWD). Arthur et al. led
this area of research and implementation following the T-NASA study. Their concept for Beyond-RVR
operations would allow pilots to view the scene-linked symbologies within a certain distance while still being
provided with other flight information even if they moved their head around. An example of a similar concept
is provided in Figure.
Figure. T-NASA display on a HUD .

[h]Air traffic control (ATC)


Potential avenues to enhance airport operations through the use of mixed reality have been proposed for
decades, with a particular focus on air traffic control (ATC). This section will serve to highlight some of the
noteworthy progress made in establishing a framework for mixed reality integration into ATC operations. In
2006, Reisman and Brown published a paper detailing the design of a prototype for augmented reality tools
to be used in ATC towers. The Augmented Reality Tower Tool (ARTT) consisted of two phases: a prototype
development and evaluation phase followed by an engineering prototype that resulted in the creation of a
head-mounted display that superimposed simulated 3D images of runways, significant landmarks, and ATC
data for the user to view and utilize to make decisions . The system received mostly positive feedback from
the ATC operators regarding its usefulness in a variety of tasks, including instances where coordination with
aircraft under multiple low visibility scenarios was required. Another paper to note is Masotti’s work on
designing and developing a framework to prototype AR tools specifically for ATC tower operations . Using
augmented reality, Masotti proposed several benefits that included a reduction in the amount of visual
scanning required and an increase in situation awareness due to the relevant information being superimposed
on the real-world view for the ATC operators in an organized manner. AR tools implemented in low
visibility conditions aided in increasing situation awareness for operators and allowed for less time to be
spent analyzing head-down operations.

Safi and Chung provide a detailed exploration of AR applications and their uses in aerospace and aviation.
Their chapter discusses the benefits and drawbacks of integrating AR into ATC operations. They also
highlight the contrast between head-up and head-mounted displays (HUDs and HMDs, respectively). While
HUDs provide a larger field of view (FOV) and reduce processing lag through their direct
connection to the terminals in the ATC tower, the lack of motion tracking and the fact that information is
only accessible on these see-through displays limit the freedom of the ATC operator to move around and
reduce their immersion. By contrast, HMDs solve these drawbacks but suffer from a limited FOV and from
discomfort during extended wear due to their weight. Moruzzi et al. have proposed the
design and implementation of an eye-tracking application on a see-through display , with the goal of
realizing the concept of a Remote and Virtual Control Tower (RVT). Depending on the movement of each
eye, digital content would then be overlaid onto the display where it would be most appropriate and
convenient for the user to view. Using a Microsoft Kinect device, the location of the eyes on a human face
could be distinguished, and an algorithm tracked the motion of each user's eye to determine the
placement of digital information to be superimposed on the screen.

[h]Urban air mobility (UAM)


Recent developments, particularly regarding electric propulsion and battery storage, have led to flying
vehicle concepts for personal use . Urban Air Mobility (UAM) is a new air transport system that uses
low- to mid-level urban airspace below roughly 2000 ft. UAM is a subset of the Advanced Air Mobility (AAM)
concept under development by NASA, the FAA, and industry , and it focuses on urban and suburban environments .

Currently, the main challenges UAM faces are community acceptance, safety concerns, airspace
management, and the required advances in ATC and autonomy. Many companies, such as Airbus, Boeing
and Honeywell, and organizations such as NASA, EASA, and ICAO are involved in speeding up the
development and acceptance of UAM. Considering the growing interest in AR technology, the rapid growth
of the UAM industry calls for the incorporation of AR. Accordingly, instead of dealing
with physical controls, everything will be digital and virtual, and the flight mechanics and dynamics of
the aircraft will be shown on the AR screen using the concepts of Human-Machine Interfaces and
Interactions (HMI2) through symbology design, while projecting the orientation and position of the UAM
aircraft. Furthermore, using this new technology, the time required to train UAM pilots will drop
significantly compared to existing technologies, as the integrated AR system could be used from start to
finish to train the pilots.

UAM needs to be operated easily, safely, and semi-automatically in order to be accepted by the public.
Accordingly, three major areas can be targeted: flight monitoring, operation, and training. Among these,
the monitoring and operation of UAM aircraft in particular could be enhanced by AR, as discussed in the
following.

Monitoring mostly concerns the next generation of control tower concepts that would benefit from AR .
For this purpose, traffic visualization, predictable corridors, and automated mission management are the key
aspects of semi-autonomous operation of UAM aircraft. All information related to the flight, aircraft
specifications, weather data, and the safety of the airspace and of each air vehicle should be shown in real
time on the AR windows and presented to the tower operator . This will also be part of the tasks defined for
the Providers of Services for UAM (PSU), which are responsible for operations planning, flight intent
sharing, airspace management functions, off-nominal operations, operations optimization, and airspace
reservations . Figure presents the new AR-based ATC system using a transparent AR screen for UAM flight
monitoring in an urban environment .

Advances in airspace management and automation for UAM operations are a must due to safety concerns.
The operation of UAM aircraft can also benefit from AR-based cockpits to modernize flight corridors
according to safety concerns and risk evaluation methods, to augment the flight by auto-generating new
routes and approving a new route before execution, and to use optimized flight paths that autonomously
adapt the flight to weather conditions, other aircraft, terrain, and obstacles .
Figure. View from the next generation of UAM control tower concepts that will be using AR windows .

[h]3D reconstruction and visualization


3D reconstruction is the process of capturing the shape and appearance of real objects. It can be
accomplished by numerous methods. Single cameras are computationally efficient but require other
sensors to determine depth scale. Stereo cameras use images captured from two cameras set a defined
distance apart; an algorithm evaluates depth from the two images. Lastly, depth-sensing devices such as
RGB-D (structured-light) cameras and LiDAR can directly capture depth information .

In recent years, LiDAR technology has found more applications thanks to its capabilities for remote sensing
and data acquisition. LiDAR provides a unique advantage over traditional remote sensing methods
through high-resolution data acquisition with spatial and real-time capabilities. LiDAR sensors can construct
detailed point clouds representing the shape, structure, and surface characteristics of objects and landscapes.

LiDAR technology has shown promising results for the inspection of airframes and aerodynamic surfaces .
The ability to capture highly detailed 3D representations of aircraft components and structures facilitates
advanced inspection techniques. The traditional method of visual inspection, although common, has
limitations in detecting defects due to the lack of contrast and reflectance on most surfaces. Moreover,
the reliance on human inspections and specialized equipment increases inspection time and cost
significantly. By obtaining a 3D point cloud of an aircraft's parts and comparing it with a reference CAD
model, surface deformations can be detected. This comparison generates a disparity map that highlights the
differences between the reference CAD model and the inspection point cloud. With current LiDAR
technologies, reconstruction accuracies with errors of less than 1 millimeter can be obtained.
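The comparison step can be pictured as a nearest-neighbour distance computation between the inspection point cloud and points sampled from the reference CAD model. The following Python sketch is our own illustration using SciPy's KD-tree, with a hypothetical 1 mm threshold and synthetic data standing in for a real scan:

import numpy as np
from scipy.spatial import cKDTree

def disparity_map(inspection_pts: np.ndarray, reference_pts: np.ndarray, threshold_m: float = 0.001):
    """Distance from each inspection point to the closest reference (CAD-sampled) point.

    inspection_pts, reference_pts: (N, 3) and (M, 3) arrays of XYZ coordinates in metres.
    Returns the per-point distances and a boolean mask of points exceeding the threshold.
    """
    tree = cKDTree(reference_pts)
    distances, _ = tree.query(inspection_pts)   # nearest-neighbour distances
    return distances, distances > threshold_m

# Example with synthetic data: a flat reference surface patch and a scan with a small dent.
reference = np.random.rand(5000, 3) * [1.0, 1.0, 0.0]   # points on the z = 0 plane
scan = reference.copy()
scan[:50, 2] += 0.003                                    # simulate a 3 mm deformation
dist, deformed = disparity_map(scan, reference)
print(f"{deformed.sum()} points exceed the 1 mm threshold")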

3D reconstruction of the environment is another important use case in aerospace, primarily for analyzing
terrain. With the advance of drone technology, practical applications of 3D reconstruction include the
inspection and mapping of areas that are difficult for humans to access. For instance, the mapping of an
accident site can be accomplished using a high-resolution camera and 3D reconstruction tools . Using
LiDAR technology, this is achieved by relating each RGB frame to its depth frame. Simultaneous
Localization and Mapping, also known as SLAM, is one such method, building a map of the
environment while localizing the camera in that map at the same time . SLAM allows aircraft to map
unknown environments, which is extremely useful for tasks such as path planning and obstacle
avoidance.
Chapter 7: Mission Ready: Deployment Strategies and Tactics

[mh] Infantry tactics


Infantry tactics are the combination of military concepts and methods used by infantry to achieve tactical
objectives during combat. The role of the infantry on the battlefield is, typically, to close with and engage
the enemy, and hold territorial objectives; infantry tactics are the means by which this is achieved. Infantry
commonly makes up the largest proportion of an army's fighting strength, and consequently often suffers the
heaviest casualties. Throughout history, infantrymen have sought to minimise their losses in both attack and
defence through effective tactics.

Infantry tactics are the oldest method of warfare and span all eras. In different periods, the prevailing
technology of the day has had an important impact on infantry tactics. In the opposite direction, tactical
methods can encourage the development of particular technologies. Similarly, as weapons and tactics evolve,
so do the tactical formations employed, such as the Greek phalanx, the Spanish tercio, the Napoleonic
column, or the British 'thin red line'. In different periods the numbers of troops deployed as a single unit can
also vary widely, from thousands to a few dozen.

Modern infantry tactics vary with the type of infantry deployed. Armoured and mechanised infantry are
moved and supported in action by vehicles, while others may operate amphibiously from ships, or as
airborne troops inserted by helicopter, parachute or glider, whereas light infantry may operate mainly on
foot. In recent years, peacekeeping operations in support of humanitarian relief efforts have become
particularly important. Tactics also vary with terrain. Tactics in urban areas, jungles, mountains, deserts or
arctic areas are all markedly different.

[h]Ancient history
The infantry phalanx dates back to Sumerian armies of the third millennium BC. In its classical Greek form it
was a tightly knit group of hoplites, generally upper- and middle-class men, typically eight to twelve ranks
deep, armored in helmet, breastplate, and greaves, armed with two-to-three-metre (6 to 9 foot) pikes and overlapping
round shields. It was most effective in narrow areas, such as Thermopylae, or in large numbers. Although the
early Greeks focused on the chariot, because of local geography, the phalanx was well developed in Greece
and had superseded most cavalry tactics by the Greco-Persian Wars. In the fourth century BC Philip II of
Macedon reorganized his army, with emphasis on phalanges, and the first scientific military research.
Theban and Macedonian tactics were variations focused on a concentrated point to break through the enemy
phalanx, following the shock of cavalry. Carefully organized—into tetrarchia of 64 men, taxiarchiae of two
tetrarchiae, syntagmatae of two taxiarchiae, chilliarchiae of four syntagmatae, and phalanges of four
chilliarchiae, with two chilliarchiae of peltasts and one chilliarchia each of psiloi and epihipparchy (cavalry)
attached—and thoroughly trained, these proved exceedingly effective in the hands of Alexander III of
Macedon.

However, as effective as the Greek phalanx was, it was inflexible. Rome made their army into a complex
professional organization, with a developed leadership structure and a rank system. The Romans made it
possible for small-unit commanders to receive rewards and medals for valor and advancement in battle.
Another major advantage was a new tactical formation, the manipular legion (adopted around 300 BC),
which could operate independently to take advantage of gaps in an enemy line, as at the Battle of Pydna.
Perhaps the most important innovation was improving the quality of training to a level not seen before.
Although individual methods were used by earlier generations, the Romans were able to combine them into
an overwhelmingly successful army, able to defeat any enemy for more than two centuries.

[h]Early modern period

Figure: A tercio in "bastioned square" formation, in battle.

As firearms became cheaper and more effective, they grew to widespread use among infantry beginning in
the 16th century. Requiring little training, firearms soon began to make swords, maces, bows, and other
weapons obsolete. Pikes, as a part of pike and shot formation, survived a good deal longer. By the mid-16th
century, firearms had become the main weapons in many armies. The main firearm of that period was the
arquebus. Although less accurate than the bow, an arquebus could penetrate most armours of the period and
required little training. In response, armor thickened, making it very heavy and expensive. As a result, the
cuirass replaced the mail hauberk and full suits of armour, and only the most valuable cavalry wore more
than a padded shirt.

Soldiers armed with arquebuses were usually placed in three lines so one line would be able to fire, while the
other two could reload. This tactic enabled an almost constant flow of gunfire to be maintained and made up
for the inaccuracy of the weapon. In order to hold back cavalry, wooden palisades or pikemen would be in
front of arquebusiers. An example of this is the Battle of Nagashino.

Maurice of Nassau, leader of the 1580s Dutch Revolt, made a number of tactical innovations, one of which
was to break his infantry into smaller and more mobile units, rather than the traditional clumsy and slow-
moving squares. The introduction of volley fire helped compensate for the inaccuracy of musket fire and
was first used in European combat at Nieuwpoort in 1600. These changes required well-drilled troops who
could maintain formation while repeatedly loading and reloading, combined with better control and thus
leadership. The overall effect was to professionalise both officers and men; Maurice is sometimes claimed as
the creator of the modern officer corps.

His innovations were further adapted by Gustavus Adolphus who increased the effectiveness and speed of
volley fire by using the more reliable wheel-lock musket and paper cartridge, while improving mobility by
removing heavy armour. Perhaps the biggest change was to increase the number of musketeers and
eliminate the need for pikemen by using the plug bayonet. Its disadvantage was that the musket could not
be fired once fixed; the socket bayonet overcame this issue but the technical problem of keeping it attached
took time to perfect.

Figure: Prussian line infantry attack at the 1745 Battle of Hohenfriedberg.

Once this was resolved in the early 18th century, the accepted practice was for both sides to fire and then
charge with fixed bayonets; this required careful timing, since the closer the lines, the more effective
the first volleys. One of the most famous examples of this was at Fontenoy in 1745 when the British and
French troops allegedly invited each other to fire first.

The late 17th century emphasised the defence and assault of fortified places and the avoidance of battle unless on
extremely favourable terms. In the 18th century, changes in infantry tactics and weapons meant a greater
willingness to accept battle and so drill, discipline and retaining formation became more important. There
were many reasons for this, one being that until the invention of smokeless powder, retaining contact with
the men on either side of you was sometimes the only way of knowing which way to advance. Infantry in
line was extremely vulnerable to cavalry attack, leading to the development of the carré or square; while not
unknown, it was rare for cavalry to break a well-held square.

[h]Mobile infantry tactics


Figure: South Korean and American marines during an amphibious warfare exercise, supported by Assault Amphibious Vehicles and V-22 Ospreys.

As part of the development of armored warfare, typified by blitzkrieg, new infantry tactics were devised.
More than ever, battles consisted of infantry working together with tanks, aircraft, artillery as part of
combined arms. One example of this is how infantry would be sent ahead of tanks to search for anti-tank
teams, while tanks would provide cover for the infantry. Portable radios allowed field commanders to
communicate with their HQs, allowing new orders to be relayed instantly.

Another major development was the means of transportation; no longer did soldiers have to walk (or ride a
horse) from location to location. The prevalence of motor transport, however, has been overstated; Germany
used more horses for transport in World War II than in World War I, and British troops as late as June 1944
were still not fully motorized. Although there were trucks in World War I, their mobility could never be
fully exploited because of the trench warfare stalemate, as well as the terribly torn up terrain at the front and
the ineffectiveness of vehicles at the time. During World War II, infantry could be moved from one location
to another using half-tracks, trucks, and even aircraft, which left them better rested and able to fight once
they reached their objective.

A new type of infantry, the paratrooper, was deployed as well. These lightly armed soldiers would parachute
behind enemy lines, hoping to catch the enemy off-guard. First used by the Germans in 1940, they were to
seize key objectives and hold long enough for additional forces to arrive. They required prompt support from
regulars, however; the British 1st Airborne Division was decimated at Arnhem after being left essentially cut off.

To counter the tank threat, World War II infantry initially had few options other than the so-called "Molotov
cocktail" (first used by Chinese troops against Japanese tanks around Shanghai in 1937) and anti-tank rifle.
Neither was particularly effective, especially if armor was accompanied by supporting infantry. These, and
later anti-tank mines, some of which could be magnetically attached to the tank, required the user to get
closer than was prudent. Later developments, such as the Bazooka, PIAT, and Panzerfaust, allowed a more
effective attack against armor from a distance. Thus, especially in the ruined urban zones, tanks were forced
to enter accompanied by squads of infantry.

Marines became prominent during the Pacific War. These soldiers were capable of amphibious warfare on a
scale not previously known. As Naval Infantry, both Japanese and American Marines enjoyed the support of
naval craft such as battleships, cruisers, and the newly developed aircraft carriers. As with conventional
infantry, the Marines used radios to communicate with their supporting elements. They could call in sea and
air bombardment very quickly.

The widespread availability of helicopters following World War II allowed the emergence of air mobility
tactics such as aerial envelopment.

[h]Squad tactics
Small-unit tactics, squad tactics in particular, rested on basic principles of assault and support elements that
were generally adopted by all the major combatants, with differences in the exact size of units, the placement
of the elements, and specialized guidance.

[h]Offensive tactics
The main goal was to advance by means of fire and movement with minimal casualties while maintaining
unit effectiveness and control.

The German squad would win the Feuerkampf (fire fight), then occupy key positions. The rifle and machine
gun teams were not separate, but part of the Gruppe, though men were often firing at will. Victory went to
the side able to concentrate the most fire on target most quickly. Generally, soldiers were ordered to hold fire
until the enemy was 600 metres (660 yards) or closer, when troops opened fire on mainly large targets;
individuals were fired upon only from 400 metres (440 yards) or below.

The German squad had two main formations while moving on the battlefield. When advancing in the Reihe,
or single file, formation, the commander took the lead, followed by the machine gunner and his assistants,
then riflemen, with the assistant squad commander bringing up the rear. The Reihe moved mostly along tracks
and presented a small frontal target. In some cases, the machine gun could be deployed while the rest
of the squad held back. In most cases, the soldiers took advantage of the terrain, keeping behind contours
and cover, and running across open ground only where none was to be found.

A Reihe could easily be formed into Schützenkette, or skirmish line. The machine gun deployed on the spot,
while riflemen came up on the right, left or both sides. The result was a ragged line with men about five
paces apart, taking cover whenever available. In areas where resistance was serious, the squad executed "fire
and movement". This was used either with the entire squad, or the machine gun team down while riflemen
advanced. Commanders were often cautioned not to fire the machine gun until forced to do so by enemy fire.
The object of the firefight was to not necessarily to destroy the enemy, but Niederkämpfen - to beat down,
silence, or neutralize them.

The final phases of an offensive squad action were the firefight, advance, assault, and occupation of position:

The firefight was conducted by the section's fire element. The section commander usually ordered only the
light machine gunner (LMG) to open fire upon the enemy. If plenty of cover existed and good fire effect was
possible, riflemen took part early; most riflemen, however, had to be at the front later to prepare for the
assault. Usually, they fired individually unless their commander ordered them to concentrate on one target.

The advance saw the section work its way forward in a loose formation. Usually, the LMG formed the front
of the attack. The farther the riflemen followed behind the LMG, the more easily the machine guns to the
rear could shoot past them.

The assault was the main offensive phase of the squad action. The commander made an assault whenever he was
given the opportunity rather than being ordered to do so. The whole section was rushed into the assault while
the commander led the way. Throughout the assault, the enemy had to be engaged with the maximum rate of
fire. The LMG took part in the assault, firing on the move. Using hand grenades, machine pistols, rifles,
pistols, and entrenching tools, the squad tried to break the enemy resistance. The squad had to reorganize
quickly once the assault was over.

When occupying a position (the occupation of position), the riflemen grouped up in twos or threes around
the LMG so that they could hear the section commander.

The American squad's basic formations were very similar to those of the Germans. The U.S. squad column
had the men strung out with the squad leader and BAR man in front with riflemen in a line behind them
roughly 60 paces long. This formation was easily controlled and maneuvered and it was suitable for crossing
areas open to artillery fire, moving through narrow covered routes, and for fast movement in woods, fog,
smoke, and darkness.

The skirmish line was very similar to the Schützenkette formation. In it, the squad was deployed in a line
roughly 60 paces long. It was suitable for short rapid dashes but was not easy to control. The squad wedge
was an alternative to the skirmish line and was suitable for ready movement in any direction or for emerging
from cover. Wedges were often used when out of effective enemy rifle range, as the wedge was much more
vulnerable than the skirmish line.

In some instances, especially when a squad was working independently to seize an enemy position, the
commander ordered the squad to attack in sub-teams. "Team Able", made up of two riflemen scouts, would
locate the enemy; "Team Baker", composed of a BAR man and three riflemen, would open fire. "Team
Charlie", made up of the squad leader and the last five riflemen, would make the assault. The assault is given
whenever possible and without regard to the progress of the other squads. After the assault, the squad
advanced, dodging for cover, and the bayonets were fixed. They would move rapidly toward the enemy,
firing and advancing in areas occupied by hostile soldiers. Such fire would usually be delivered in a standing
position at a rapid rate. After taking the enemy's position, the commander would either order his squad to
defend or continue the advance.

The British choice of squad formation depended chiefly on the ground and the type of enemy fire that was
encountered. Five squad formations were primarily used: blobs, single file, loose file, irregular arrowhead,
and the extended line. The blob formation, first used in 1917, referred to ad hoc gatherings of 2 to 4 men,
hidden as well as possible. The regular single file formation was only used in certain circumstances, such as
when the squad was advancing behind a hedgerow. The loose file formation was a slightly more scattered
line suitable for rapid movement, but vulnerable to enemy fire. Arrowheads could deploy rapidly from either
flank and were hard to spot from the air. The Extended Line was perfect for the final assault, but it was
vulnerable if fired upon from the flank.

The British squad would commonly break up into two groups for the attack. The Bren group consisted of the
two-man Bren gun team and the second-in-command, forming one element, while the main body of the
riflemen with the squad commander formed another. The larger group that contained the commander was
responsible for closing in on the enemy and advancing promptly when under fire. When under effective fire,
riflemen went to fully fledged "fire and movement". The riflemen were ordered to fall to the ground as if
they had been shot, and then crawl to a good firing position. They took rapid aim and fired independently
until the squad commander called for cease fire. On some occasions the Bren group advanced by bounds, to
a position where it could effectively commence fire, preferably at 90 degrees to the main assault. In this case
both the groups would give each other cover fire. The final attack was made by the riflemen who were
ordered to fire at the hip as they went in.

[h]Defensive tactics
German defensive squad tactics stressed integration with larger defensive plans, with posts scattered in
depth. A Gruppe was expected to dig in over a frontage of 30 to 40 metres (33 to 44 yd), the maximum that a
squad leader could effectively oversee. Other cover such as single trees and crests was said to attract
too much enemy fire and was rarely used. While digging, one member of the squad was to stand sentry.
Gaps between dug-in squads could be left, provided they were covered by fire. The placement of the machine
gun was key to the German squad defence; it was given several alternative positions, usually about 50 metres
(55 yards) apart.

Pairs of soldiers were deployed in foxholes, trenches, or ditches. The pair stood close together in order to
communicate with each other. The small sub-sections would be slightly separated, thus decreasing the effect
of enemy fire. If the enemy did not immediately mobilize, the second stage of defense, entrenching, was
employed. These trenches were constructed behind the main line where soldiers could be kept back under
cover until they were needed.

The defensive firefight was conducted by the machine gun at an effective range while riflemen were
concealed in their foxholes until the enemy assault. Enemy grenades falling on the squad's position were
avoided by diving away from the blast or by simply throwing or kicking the grenade back. This tactic was
very dangerous and U.S. sources report American soldiers losing hands and feet this way.

In the latter part of the war, emphasis was put on defense against armored vehicles. Defensive positions were
built around a "tank-proof obstacle", covered by at least one anti-tank weapon as well as artillery support
directed by an observer. To intercept enemy tanks probing a defensive position, squads often patrolled with
an anti-tank weapon.

[h]Platoon tactics
A platoon is made up of squads and a command element. Usually four squads make up a platoon, but this can
vary by army and time period. The command element is small, often just one officer and one NCO.
Altogether a platoon numbers about 40 soldiers.

Tactically, a platoon can function independently, providing its own covering fire and an assaulting element.
To do this, the platoon leader assigns each squad a combat task: defense or offense and, within the offense,
assault, fire support, or reserve.

A platoon can also function as part of a company.

[h]Company level tactics


A company is made up of platoons. Usually some are "line" platoons, consisting of soldiers with standard
weapons, complemented by a support element (roughly the size of another platoon) with heavier weapons,
namely mortars and heavier machine guns. A company also has a larger supply unit of usually 3-10 soldiers,
a small medic unit, and a communication unit. Companies in the 20th century varied quite a bit by country of
origin, but for Germany, the USA, and the UK between 170 and 200 soldiers was about normal.

A dedicated supply section entered the military hierarchy at the company level.

Tactics start to become more complex at the company level, as more weapon systems are available at the
commander's disposal.

In World War II, some interesting variations were bicycle messengers in German infantry companies and
two snipers in a Soviet rifle company.

Generally, in all the armies of the 20th century, the company was the first unit designed to function
autonomously.

[h]Battalion level tactics


A battalion is made up of line companies, a larger headquarters, and heavier support weapons.

The ratio remains the same: usually three line companies and one support unit.

Battalions are led by a major or lieutenant colonel, with a staff of about 30-40 soldiers. The exception here is
Soviet units, which traditionally had smaller staffs than their American or German counterparts.

The battalion is the first level at which intelligence, combat engineers, air-defense, and anti-tank artillery
entered the unit.

Battalion tactics gave the commander considerably more flexibility. In the 20th century, a typical deployment
involved a certain number of units, each supported by specific weapon systems, creating a chess-like scenario
in which the side on the offensive would generally try to attack the least powerful elements, while the
defense would try to anticipate the actual threat and neutralize it with the appropriate weapon systems.

Jungle warfare was heavily shaped by the experiences of all the major powers in the Southeast Asian theatre
of operations during World War II. Jungle terrain tended to break up and isolate units. It tended to fragment
the battle. It called for greater independence and leadership among junior leaders, and all the major powers
increased the level of training and experience required for junior officers and NCOs. But fights in
which squad or platoon leaders found themselves fighting on their own also called for more firepower. All
the combatants, therefore, found ways to increase both the firepower of individual squads and platoons. The
intent was to ensure that they could fight on their own ... which often proved to be the case.

Japan, as one example, increased the number of heavy weapons in each squad. The "strengthened" squad
used from 1942 onwards was normally 15 men. The Japanese squad contained one squad automatic weapon
(a machine gun fed from a magazine and light enough to be carried by one gunner and an assistant
ammunition bearer). A designated sniper was also part of the team, as was a grenadier with a rifle-grenade
launcher.

The squad's weaponry also included a grenade-launcher team armed with what is often mistakenly called a
"knee mortar". This was in fact a light 50 mm mortar that threw high explosive,
illumination and smoke rounds out to as far as 400 metres. Set on the ground and fired with arm
outstretched, the operator varied the range by adjusting the height of the firing pin within the barrel
(allowing the mortar to be fired through small holes in the jungle canopy). The balance of the squad carried
bolt-action rifles.

The result was that each squad was now a self-sufficient combat unit. Each squad had an automatic weapons
capability. In a defensive role, the machine gun could be set to create a “beaten zone” of bullets through
which no enemy could advance and survive. In an attack, it could throw out a hail of bullets to keep the
opponent's head down while friendly troops advanced. The light mortar gave the squad leader an indirect
"hip-pocket artillery" capability. It could fire high-explosive and fragmentation rounds to flush enemy out of
dugouts and hides. It could fire smoke to conceal an advance, or illumination rounds to light up any enemy
target at night. The sniper gave the squad leader a long-range point-target-killing capability.

Four squads composed a platoon. There was no headquarters section, only the platoon leader and the platoon
sergeant. In effect, the platoon could fight as four independent, self-contained battle units (a concept very
similar to the U.S. Army Ranger "chalks").

The British Army did extensive fighting in the jungles and rubber plantations of Malaya during the
Emergency, and in Borneo against Indonesia during the Confrontation. As a result of these experiences, the
British increased the close-range firepower of their individual riflemen by replacing the pre-World War II
bolt-action Lee–Enfield with lighter, automatic weapons like the American M2 carbine and the Sterling
submachine gun.

However, the British Army was already blessed with a good squad automatic weapon (the Bren), which
remained apportioned one per squad. It provided the bulk of the squad's firepower,
even after the introduction of the self-loading rifle (a semi-automatic copy of the Belgian FN-FAL). The
British did not deploy a mortar on the squad level. However, there was one 2-inch mortar on the platoon
level.

The U.S. Army took a slightly different approach. They believed the experience in Vietnam showed the
value of smaller squads carrying a higher proportion of heavier weapons. The traditional 12-man squad
armed with semi-automatic rifles and an automatic rifle was knocked down to 9 men: The squad leader
carried the M16 and AN/PRC-6 radio. He commanded two fire teams of four men apiece (each containing
one team leader with M16, grenadier with M16/203, designated automatic rifleman with M16 and bipod, and
an anti-tank gunner with LAW and M16).

Three squads composed a platoon along with two three-man machine gun teams (team leader with M16,
gunner with M60 machine gun, and assistant gunner with M16). The addition of two M60 machine gun
teams created more firepower on the platoon level. The platoon leader could arrange these to give covering
fire, using his remaining three squads as his maneuver element. The M16/203 combination was a particular
American creation (along with its M79 parent). It did not have the range of the Japanese 50 mm mortar.
However, it was handier, and could still lay down indirect high-explosive fire, and provide support with both
smoke and illumination rounds. The US Army also had 60 mm mortars. This was a bigger, more capable
weapon than the Japanese 50 mm weapon. But it was too heavy for use on the squad or even the platoon
level. These were only deployed on the company level.

The deficiency of the US formation remained the automatic rifleman, a tradition that had gone back to the
Browning Automatic Rifle (BAR) gunner of World War II. The US Army discovered that an automatic rifle
was a poor substitute for a real machine gun. A rifle fired in the sustained automatic role easily overheated,
and its barrel could not be changed. After Vietnam, the US Army adopted the Belgian Minimi to replace
the automatic M16. With an interchangeable barrel and larger magazine, this weapon, known as the M249 in
U.S. inventory, provided the sustained automatic fire required.

The Republic of Singapore Army, whose experience is entirely in primary and secondary jungle as well as
rubber plantation terrain, took the trend one step further. Their squad contained only seven men, but fielded
two squad automatic gunners (with 5.56mm squad automatic weapons), two grenadiers with M16/203
underslung grenade launchers, and one anti-tank gunner with rocket launcher and assault rifle.

In short, jungle warfare increased the number of short, sharp engagements at the platoon or even squad
level. Platoon and squad leaders had to be more capable of independent action. To do this, each squad (or at
least platoon) needed a balanced allocation of weapons that would allow it to complete its mission unaided.

Preface
Unmanned Aerial Vehicles (UAVs) have emerged as a transformative force in modern aviation,
revolutionizing industries, military operations, and humanitarian efforts worldwide. As we embark on this
journey through the intricate realms of UAV design, development, and deployment, we delve into a dynamic
landscape where innovation meets necessity, and where the skies are no longer the limit.

In this comprehensive exploration, we navigate through the intricacies of UAV technology, from its humble
beginnings to its cutting-edge applications in today's rapidly evolving world. Each chapter offers insights
into the multifaceted aspects of UAVs, from their design principles and development methodologies to their
strategic deployment strategies and future trends.

UAVs have transcended their initial role as mere reconnaissance tools, evolving into versatile platforms with
capabilities ranging from surveillance and data collection to disaster response and delivery services. Their
impact spans across various sectors, including agriculture, transportation, environmental monitoring, and
beyond, shaping the way we interact with the world around us.

Through the lens of design, we uncover the engineering marvels behind UAVs, exploring aerodynamics,
propulsion systems, avionics, and sensor integration. We witness the convergence of cutting-edge
technologies and innovative concepts, driving the evolution of unmanned flight towards greater efficiency,
autonomy, and adaptability.

Development processes come to life as we delve into the iterative journey from concept to reality, navigating
through prototyping, testing, and validation stages. Challenges abound, yet with each obstacle overcome, we
inch closer to realizing the full potential of UAVs as transformative tools for societal advancement.

Deployment strategies unveil the tactical intricacies of UAV operations, from mission planning and
execution to collaborative swarm tactics and beyond-line-of-sight operations. We witness firsthand the
pivotal role of UAVs in enhancing situational awareness, enabling rapid response capabilities, and
optimizing resource utilization across diverse scenarios.

As we peer into the future, we envision a world where UAVs continue to push the boundaries of innovation,
ushering in a new era of connectivity, efficiency, and sustainability. With each chapter, we embark on a
journey of discovery, exploring the past, present, and future of UAVs, and unlocking the boundless potential
that lies within these soaring machines. Welcome aboard as we embark on this exhilarating expedition into
the world of UAVs: Design, Development, and Deployment.
About the book
Unmanned Aerial Vehicles (UAVs) have emerged as a transformative force in modern aviation, reshaping
industries, military operations, and humanitarian endeavors globally. This exploration navigates the complex
landscape of UAV design, development, and deployment, where innovation converges with necessity,
pushing boundaries beyond the sky.

Throughout this journey, we delve into UAV technology's evolution from its origins to its contemporary
applications. Each chapter offers insights into UAV intricacies, spanning design principles, development
methods, deployment strategies, and future trajectories.

UAVs have transcended their initial reconnaissance role, evolving into versatile platforms crucial in
agriculture, transportation, environmental monitoring, and more. The engineering marvels of UAVs, from
aerodynamics to sensor integration, are unveiled, showcasing the fusion of technology and innovation
propelling unmanned flight forward.

Development processes illuminate the iterative path from concept to realization, tackling challenges to
harness UAVs' transformative potential. Deployment strategies reveal tactical nuances, enhancing situational
awareness and response capabilities across diverse scenarios.

Looking ahead, UAVs promise continued innovation, driving connectivity, efficiency, and sustainability.
With each chapter, we embark on an expedition through UAV history, present, and future, unlocking the vast
potential within these soaring machines. Join us as we journey into the world of UAVs: Design,
Development, and Deployment.
