From Drones To Geospatial Analysis
Lebanese University
Kuban State University
Acknowledgements
Preface
Unmanned aerial vehicles were once the stuff of rumor and legend, identified as new and mysterious robots in the sky.
Apart from military applications, there are many civilian jobs to be performed, such as monitoring and rescue, and unmanned aerial vehicles are also used for photogrammetry tasks.
This new technology has found use across various disciplines such as mapping, remote sensing, civil engineering, geology, geomorphology, military engineering, land planning, and communications. It is encouraging that more literature is now available in this discipline. This book was written with the intention of adding to and benefiting this field of study.
I am pleased to present you with this first edition of the book. It introduces the classes and types of platforms available and examines their performance, structure, propulsion, and systems with respect to their suitability for photogrammetry, along with sensors and payloads, methods of launch and recovery, image processing, and geospatial analysis.
In its nine chapters, this book covers several topics: the fields in which drones are used, the classification of drones, the technical structure of drones, drone flight planning and software, drone photogrammetry methods, and data processing and geospatial analysis. The purpose of this book is to give researchers and students an idea of how to apply this new technology in their research and projects, and to introduce them to the new era of Unmanned Aerial Vehicles, leading to a treasure of information. My message to readers is: "Use new technology".
Table of Contents
Chapter 1 Introduction to Photogrammetry ............................................................................... 5
What are a UAV and UAS? ................................................................................................... 7
Description of a drone ............................................................................................................ 8
Drone Fields of Use ............................................................................... 9
Chapter 2 Advantages and Limitations of Drones ..................................................... 11
Advantages of drones ........................................................................................................... 11
Limitations in the use of drones ........................................................................................... 12
Chapter 3 Classification of drones ........................................................................................... 13
Fixed wings .......................................................................................................................... 13
Rotary systems ..................................................................................................................... 14
Chapter 4 Drones structures ..................................................................................................... 19
Unmanned Aerial Vehicles (UAV) ...................................................................................... 19
Unmanned Aircraft System (UAS) ...................................................................................... 21
Chapter 5 Drone Software ........................................................................ 26
Drones flight planning software ........................................................................................... 26
Drones mapping software..................................................................................................... 28
Processing workflow ............................................................................................................ 30
Chapter 6 Drones photogrammetry methods ........................................................................... 34
Type of photogrammetry ...................................................................................................... 34
Close range photogrammetry (CRP) .................................................................................... 34
Distortions and Camera calibration ...................................................................................... 38
Bundle adjustment ................................................................................................................ 41
Structure from Motion (SFM) .............................................................................................. 43
Aerotriangulation ................................................................................................................. 46
Chapter 7 Flight Planning Principles and image processing .................................................... 48
Ground sampling distance (GSD) ........................................................................................ 48
Photocontrol points .............................................................................................................. 49
The Global Positioning System (GPS) ................................................................................. 51
Flight planning ..................................................................................................................... 53
Image processing .................................................................................................................. 55
Drone mapping accuracy ...................................................................................................... 57
Ground Sampling Distance (GSD) accuracy ....................................................................... 58
DSM Accuracy Assessment ................................................................................................. 58
Chapter 8 Drones and Geospatial Analysis .............................................................................. 59
Multi-scale Digital Surface Models ..................................................................................... 61
Multi-scale landforms classification .................................................................................... 68
Terrain analysis for parcels evaluation................................................................................. 77
Multi-scale Terrain Roughness ............................................................................................ 83
Digital Surface Models to planar areas. ............................................................................... 89
Chapter 9 Drones regulations and buyers guide....................................................................... 98
Drones Regulations .............................................................................................................. 98
Flight safety .......................................................................................................................... 98
Drones buyers guide ........................................................................................................... 100
Conclusion .............................................................................................................................. 101
References .............................................................................................................................. 102
Chapter 1 Introduction to Photogrammetry
Fig.1.1: The logic system for understanding photogrammetry.
In 1855, Nadar (Gaspard Felix Tournachon) used a balloon at 80 meters to obtain the first aerial photograph. In 1859, Emperor Napoleon ordered Nadar to obtain reconnaissance photography in preparation for the Battle of Solferino (Gruner, 1977).
The English meteorologist E.D. Archibald was among the first to take
successful photographs from kites in 1882.
In France, M. Arthur Batut took aerial photographs using a kite, over
Labruguiere, France, in May 1888.
In 1893, Dr. Albrecht Meydenbauer (1834-1921) was the first person to
use the term "photogrammetry" (Meyer, 1987).
In 1903, Julius Neubranner, a photography enthusiast, designed and patented a breast-mounted aerial camera for carrier pigeons.
1903: The airplane is invented by the Wright brothers.
1909: The Wright brothers take the first photograph from a plane over
Centocelli, Italy.
Captain Cesare Tardivo (1870-1953) is thought to be the first to use aerial
photography from a plane for mapping purposes. He created a 1:4,000 mosaic of
Bengasi in Italy that was described in his paper to the 1913 International Society
of Photogrammetry meeting in Vienna.
The first drone investigations started at Carnegie Mellon University (Eisenbeiss, 2011).
Nagai et al. (2004) used a LiDAR system mounted on a drone; in 2008 they showed the latest results for multi-sensor integration on drones.
After 2010, the evolution and invention of new drone systems was very fast, putting a powerful multidisciplinary scientific engine into civilian hands.
The use of professional civilian drones is increasing rapidly around the
world and is expected to explode in the coming years.
In the following chapters of our book, we will use the term "drone" instead of UAV and UAS.
Description of a drone
A simple drone system has two basic parts: the airborne part, an aircraft frame carrying the payload of the system (frame, camera, battery, gimbal, etc.), and the ground-based part, which constitutes the operation base interface of the whole system (base station and a radio transmitter). We will discuss this in detail in chapter 6.
Some drones are capable of automated take-off and landing without any manual operator, but few drones can function without a link to a Remote Control (RC) transmitter or base station. This constraint is implemented mainly as a safety precaution, to ensure that manual control of the UAV can be resumed if needed at any given point during a flight.
Drones are categorized into fixed-wing aircraft and helicopters (rotary systems). These drones are powered by brushless electric motors or by combustion engines. Electric motors cause significantly fewer vibrations in the airframe than internal combustion engines and are preferred for photographic applications.
It is not very difficult to learn how to fly an RC helicopter or fixed-wing aircraft in a short time, but it needs practice and training. Some vehicles are very easy to control and require no flight experience or training, because they contain automated orientation sensors and the processing power to keep the platform aloft.
A basic flight control system for these drones contains very specific sensors linked to a processor that manages power distribution to the motors to stabilize flight. Most flight control systems include a magnetometer, a barometer, and Global Positioning System (GPS) receivers to support three-dimensional drone navigation.
In this book, we will give an overview of the existing classifications, structures, and compositions of these drones and their fields of application, especially in geography.
The unmanned aircraft system (UAS) is a fully designed system comprising:
a) A control station (CS), constituting the operation base interface of the whole system.
b) An aircraft frame carrying the payload of the system.
c) A communication system between the CS and the aircraft and back; this link is achieved by radio transmission (Austin, 2010).
Drone Fields of Use
Drone photogrammetry opens up various new applications at a very low cost compared to classical photogrammetry; it allows the capture of high-resolution imagery and the manual or automatic processing of data.
Drones are used in a large number of fields:
Agriculture, drones provide an efficient way to assess plant health: after a flight, drone images are integrated into special software to generate index maps showing clearly where plants are struggling, or for vegetation monitoring (Sugiura et al., 2005). LiDAR systems mounted on drones can measure the height of crops, multi-spectral instruments can count plants, infrared imagery can check the health of plants, etc.
Archeology, drones are used for the detection and mapping of archeological sites and monuments (Bendea et al., 2007; Patias et al., 2007).
Architecture, simultaneous as-built surveying and scanning of projects by drones can help in faster design and inspection, quality control, measurements, and modifications.
Civil engineering, this field benefits from inspecting infrastructure: bridges, cell phone towers, dams, powerlines, solar fields, and road planning and design.
Crime analysis, drones can help in crime detection and investigation.
Customs, drones can surveil coastlines, ports, bridges, and other access points for the import of illegal substances (surveillance for illegal imports).
Electricity companies, inspection of power lines, damage to structures, and deterioration of insulators.
Environmental impact assessment, the use of drones ranges from glacial modeling to animal movement studies, coastal erosion, etc.
Fire services, fire departments can use drones to track and map wildfires or for forest fire monitoring (Zhou et al., 2005).
Fisheries, the prevention of illegal fishing is aided by patrolling drones.
Forestry, drones can support forest management through tree counting, tree health assessment, and fire control.
Gas and oil companies, use drones for pipeline patrolling to look for disruption or leaks after accidents.
Geology, drones with specific electromagnetic sensors can be used to gather geological information to approximate the location and presence of minerals, oil, and natural gas.
Humanitarian aid, drones are being increasingly used by NGOs and governmental organizations to respond to and assess the impact of natural disasters. For rescue, drones with infrared sensors can be used to detect humans, which is helpful in search scenarios.
Information services and mass media, television companies and newspaper publishers could have the means of covering events, whether planned or accidental. Sports events could be covered in real time.
Land surveying, land surveyors use drones to survey inaccessible areas such as cliffs, rivers, and woods, to create geo-referenced maps, 3D models, Digital Surface Models (DSM), and other data products.
Land registry, drones have made it possible to count buildings and to estimate population growth.
Meteorological services, use drones to sample the atmosphere for forecasting by integrating special sensors.
Mining, drone data can be used for taking measurements; automated workflows create orthophotos as well as 3D models that can be used to quickly calculate quantities for materials such as aggregate stockpiles or cut and fill.
Photomontage, drones are used for poster making and cinema filming.
Rivers authorities, have used drones successfully in monitoring watercourse flow and water levels (Masahiko, 2007).
Traffic monitoring (Haarbrink and Koers, 2006; Puri, 2004), road following (Egbert and Beard, 2007), vehicle detection (Kaaniche et al., 2005), car accident and flare inspection of an industrial flue (Haarbrink and Koers, 2006).
Geo-informatics, drones are increasingly being used in place of satellite and aerial photogrammetry images for the creation of up-to-date, high-resolution base maps, and for geospatial data collection with high geometric and temporal resolution.
Below is a brief comparison between satellite images, photogrammetry, and drones.
Table 1.1: A brief comparison between satellite images, photogrammetry, and drones.
Chapter 2 Advantages and Limitations of Drones
Advantages of drones
The main advantages of drones compared to manned aircraft systems are that they can be used in high-risk situations without endangering human life, and in inaccessible areas. Such regions include natural disaster sites, floodplains, earthquake and desert areas, and scenes of accidents. Furthermore, data acquisition with drones is possible in cloudy and drizzly weather conditions that do not allow manned aircraft to fly.
Moreover, drones have real-time capability and the ability for fast data acquisition, transmitting image, video, and orientation data in real time to the ground control station.
Most of the (non-)commercially available UAV systems on the market focus on low-cost systems, and thus a major advantage of using UAVs is the cost factor, as UAVs are less expensive and have lower operating costs than manned aircraft. However, depending on the application, the cost can sometimes be similar to that of manned systems. Due to the low operating altitude, UAVs achieve a very high resolution in terms of ground sampling distance and can therefore compete with airborne large-format digital camera systems (Irschara et al., 2010).
In addition to these advantages, UAV images can also be used for high-resolution texture mapping on existing DSMs and 3D models, as well as for image rectification. The rectified images and their derivatives, like image mosaics, maps, and drawings, can be used for image interpretation.
The implementation of GNSS systems, as well as stabilization and navigation units, allows precise flights with sufficient image coverage and overlap, enabling the user to estimate the expected product accuracy before the flight.
Given the wide range of drone advantages, we list them here in summarized form:
1. Autonomous and stabilized: real-time capability.
2. Can fly at low altitude, close to objects, where manned systems cannot be flown (natural disaster sites, mountainous and volcanic areas, floodplains, earthquake and desert areas, etc.).
3. Data acquisition in cloudy and drizzly weather conditions.
4. Data acquisition with high temporal and spatial resolution.
5. More economical than human pilots.
6. Providing high-resolution texture mapping on existing DSMs and 3D models.
7. Real-time capability and the ability for fast data acquisition, while
transmitting the image.
8. Use in high-risk situations without endangering human life, and in inaccessible areas.
9. Flexibility, a drone can be launched on demand.
10. Timely, drones produce completely up-to-date imagery. This makes
drones suitable for monitoring projects.
11. Efficient, using a drone is fast and requires minimal staff.
12. Cost-effective, the project cost of a professional drone system is typically lower than that of a manned imaging aircraft.
13. Discreet, electric-powered drones make little noise and rarely disturb people on the ground, if they notice them at all.
Limitations in the use of drones
The limitations of drones compared to manned aircraft systems are, first of all, related to the sensor payload weight and dimensions, so that often low-weight sensors like small- or medium-format amateur cameras are mounted on drones. In comparison to large-format cameras, drones have to acquire a higher number of images in order to obtain the same image coverage and comparable image resolution.
The drone payload limitations require the use of low-weight navigation units, which implies less accurate results for the orientation of the sensors.
For general mapping purposes, payload weights may vary from 200 g for
a small digital camera to 3 kg for larger digital single lens reflex or multispectral
cameras. Apart from the weight of the actual sensor and battery, a stabilizing
gimbal mount may have to be added to the payload weight. The take-off weight
of "small" UASs for mapping purposes will typically vary from around 1 kg to
5 kg.
Existing commercial software packages for photogrammetric data processing are rarely set up to support drone images, as no standardized workflows and sensor models have yet been implemented (Eisenbeiss, 2009).
Based on the communication and steering unit of drones, we can state that the operating distance depends on the range of the radio link for rotary and fixed-wing systems, which is equivalent to the length of the rope for the kite and balloon systems used in the past.
Radio frequencies may be subject to interference caused by other systems (remote-controlled cars and model aircraft, as well as band radios) which use the same frequencies, or may suffer from signal jamming, depending on the local situation of the area of interest.
The limitations in drone use can be summarized as:
1. Payload limitations: the drone cannot carry heavy weights.
2. Regulations and insurance: it is very dangerous to fly above a crowd or to let a child pilot a drone.
3. Use of low-cost sensors.
4. Short flight distances (civilian use) and low battery life.
Chapter 3 Classification of drones
Fixed wings
A fixed-wing drone is generally composed of a central body, which houses all the drone's electronics, and two wings. The aerodynamic profile of the wings enables the drone, once in flight, to generate lift that compensates for the weight of the aircraft.
Like an airplane, fixed-wing drones also feature ailerons, which enable the aircraft to steer. Some drones also feature a rudder and elevators, sometimes even flaps.
Fixed-wing drones typically feature one engine with a propeller attached, either a forward-mounted (tractor) propeller or a backward-facing (pusher) propeller. Most propeller-powered drones include a system that folds these components, especially if the aircraft lands on its belly (http://planner.ardupilot.com).
Fig. 3.1: Fixed wing drone, Sky Cruiser from SOUTH Company.
Launch equipment. This will usually be required for aircraft without a vertical flight capability, when no runway of adequate surface and length is available. It usually takes the form of a ramp along which the aircraft is accelerated on a trolley, propelled by a system of rubber bungees, by compressed air, or by a rocket, until the aircraft has reached an airspeed at which it can sustain airborne flight.
Recovery equipment. This will also usually be required for aircraft without a vertical flight capability, unless they can be brought down onto terrain which allows a wheeled or skid-borne run-on landing. It usually takes the form of a parachute, installed within the aircraft (figure 3.2), which is deployed at a suitable altitude over the landing zone (Reg Austin, 2010).
Fig. 3.2: a) Catapult drone launching; b) parachute for drone landing.
Parachute deployment is the most usual method; it requires the drone to carry a parachute. One disadvantage of this method is that the parachute is at the mercy of the wind, and its precise point of touchdown may therefore be unpredictable (figure 3.2b).
Rotary systems
Rotary drones, also called multi-rotors, are more complex systems than fixed-wing drones.
A rotary system moves through the air by varying the power supplied to
the different propellers. This determines their revolutions per minute (RPM) and
therefore the thrust these generate.
Two key technical challenges must be met to optimize a rotary system's
performance.
First, to ensure a stable flight they need highly advanced autopilot
technology to continually choose the correct RPM for the different propellers;
making hundreds of tiny adjustments per second.
Secondly, rotary systems require very fine motor control to rapidly vary
the power sent to different motors, ensuring the propellers achieve the exact
RPM commanded by the autopilot.
In the case of a four-propeller system quadcopter model, diagonal pairs of
propellers spin in opposite directions.
A quadcopter climbs and descends by simultaneously varying the RPM, and therefore the thrust, of all four of its propellers; this creates a force allowing the drone to climb (ascend), hover, or descend.
Flying forwards or backward. To pitch forwards, the RPM of the front
two motors is decreased compared to that of the back pair. With the drone
pitched forwards, the combined thrust of its four motors is also pitched
forwards, pushing the drone in that direction.
Flying sideways, or rolling, is a case of increasing the RPM of the motors on one side; increasing the power to the rotors on the left, while decreasing it on the right, will 'roll' and move the aircraft to the right.
While different multi-rotors feature different numbers of propellers, these
basic flight concepts remain the same.
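The throttle, pitch, roll, and yaw commands described above are combined into individual motor powers by a mixer. The sketch below is a rough Python illustration of this mixing for an X-configuration quadcopter; the function name, sign conventions, and clamping range are our own assumptions, not those of any particular flight controller.

```python
def mix_quadcopter(throttle, pitch, roll, yaw):
    """Combine flight commands into per-motor power for an X-quadcopter.

    Motors: front-left (fl), front-right (fr), rear-left (rl), rear-right (rr).
    Diagonal pairs spin in opposite directions, so yawing is achieved by
    speeding up one diagonal pair while slowing the other.
    throttle is in [0, 1]; pitch, roll, and yaw commands are in [-1, 1].
    """
    def clamp(x):
        return max(0.0, min(1.0, x))

    # Pitching forwards: reduce the front pair, increase the rear pair.
    # Rolling right: increase the left pair, decrease the right pair.
    fl = clamp(throttle - pitch + roll + yaw)
    fr = clamp(throttle - pitch - roll - yaw)
    rl = clamp(throttle + pitch + roll - yaw)
    rr = clamp(throttle + pitch - roll + yaw)
    return fl, fr, rl, rr

# Hovering: equal power to all four motors.
print(mix_quadcopter(0.5, 0.0, 0.0, 0.0))  # (0.5, 0.5, 0.5, 0.5)
```

Commanding pitch alone, for example `mix_quadcopter(0.5, 0.2, 0.0, 0.0)`, lowers both front motors and raises both rear motors, tilting the combined thrust forwards exactly as described above.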
Table 3.1: Questionnaire for fixed wing and rotary system comparison.
The comparison between fixed-wing and rotary drones in table 3.1 gives the user an idea of which type to choose depending on project size: a fixed wing for big projects (global) and a rotary system for small projects (local).
Rotary system drones come in different types, such as the tricopter, quadcopter, and octocopter. In this section, we will discuss the description, advantages, and disadvantages of each type.
Tricopter. From its name, we can understand that it is constituted of three arms, each connected to one motor (figure 3.3). The front of the drone tends to be between two of the arms; the rear motor normally needs to be able to rotate (using a normal RC servomotor).
Fig. 3.3: Tricopter unmanned aerial vehicle.
Quadcopter. Four arms, each connected to one motor.
Advantages: most drones on the market are quadcopters; the design is very simple, and the arms/motors are symmetric. All flight controllers are suitable for this multirotor design.
Disadvantages: there is no redundancy, so if there is a failure anywhere in the system, especially a motor or propeller, the craft is likely going to crash.
Hexacopter. Six arms connected to six motors (figure 3.5); the front tends to be between two arms, or aligned with one arm in the (+) configuration.
Advantages: a hexacopter can lift more payload than other kinds of copters. If a motor fails, there is still a chance the copter can land rather than crash. All flight controllers support hexacopter configurations.
Fig. 3.5: Hexacopter with six arms and motors, Sky Walker X61 from SOUTH Company.
Fig. 3.7: Octocopter with eight arms and eight motors.
Chapter 4 Drones structures
Fig. 4.1: Main electronic elements of a quadcopter.
Drones should be powered with LiPo batteries, which outperform other types of batteries because they output power faster, store a large amount of power, and have a long life.
The battery life for a flight in good conditions (no wind or cold weather) is around 15 minutes.
Propellers are blades attached to the drone that spin to create lift and move the drone upward. Some drones come with special self-tightening propellers, like the DJI Phantom's, which do not need a tool to tighten.
These propellers are attached to motors, and an electronic speed controller (ESC) for each one controls the speed of each motor independently and ensures the stability of the drone by varying the speed; in this way the drone is able to hover in place, climb or descend, and move in all directions.
Flying with a remote control (RC) transmitter means that you will be limited to flying within line of sight, and you won't have the benefit of the advanced communication that comes with smart devices like phones and tablets. RC transmitters are capable of controlling a drone over long distances, but RC controllers do not communicate any position data or battery charge status. Radio transmissions are available almost everywhere in the world; most remote control drones use 900 MHz for transmission, and smartphone and tablet controllers don't have the range that an RC transmitter does (Lafay, 2015).
Flight controller, a flight controller is essentially a normal programmable microcontroller, but with specific sensors onboard; a very simple flight controller will include only a three-axis gyroscope to be able to auto-level the drone.
More sophisticated flight controllers include more specific sensors, such as:
Accelerometer, measures linear acceleration in up to three axes (X, Y, and Z). The units are normally "gravities" (g), where 1 g is 9.81 meters per second squared. Because accelerometers detect gravity, the flight controller can determine which direction is "down", allowing a multirotor aircraft to stay stable (www.robotshop.com).
Gyroscope, a gyroscope measures the rate of angular change in up to three angular axes (alpha, beta, and gamma); the units are often degrees per second.
Inertial Measurement Unit (IMU), IMUs are electronic devices capable of providing three-dimensional velocity and acceleration information for the vehicle they are installed on, at high sampling rates. They consist of three accelerometers and three gyroscopes mounted on a set of three orthogonal axes. The major problem encountered with such cheap IMUs is the need to perform a calibration process for the sensors used (Dissanayake et al., 2001).
Compass/Magnetometer, an electronic magnetic compass measures the earth's magnetic field and uses it to determine the drone's compass direction. This sensor is almost always present in systems with GPS input and is available in one to three axes.
Pressure/Barometer, since atmospheric pressure changes the farther you are from sea level, a pressure sensor can be used to give a fairly accurate reading of the drone's height. Most flight controllers combine input from the pressure sensor and GPS altitude to calculate a more accurate height above sea level (www.robotshop.com).
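The pressure-to-height conversion can be illustrated with the standard-atmosphere barometric formula. The Python sketch below is only a rough illustration (the function name is ours, and the constants are the usual International Standard Atmosphere values), not the exact algorithm of any flight controller.

```python
def baro_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """Approximate altitude above sea level (meters) from air pressure.

    Uses the International Standard Atmosphere barometric formula,
    reasonable for the lower troposphere where drones fly. Flight
    controllers typically blend such a value with GPS altitude.
    """
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** 0.1903)

print(round(baro_altitude(1013.25)))  # 0 (standard sea-level pressure)
```

For example, a reading of about 900 hPa corresponds to roughly 1,000 m above sea level under standard conditions.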
GPS, Global Positioning System (GPS) receivers use satellite signals to determine the drone's specific geographic location; the coordinates of the take-off point are registered so the drone can return to it in case of emergency or during automated flights. A flight controller can have either an onboard GPS or one connected to it via a cable. The accuracy of these GPS units is not very high.
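The gyroscope and accelerometer described above are usually fused, since the gyroscope is smooth but drifts over time while the accelerometer's gravity-derived angle is noisy but drift-free. A common low-cost fusion method is the complementary filter, sketched below in Python; the blend factor of 0.98 and the function name are illustrative assumptions, not values from any specific flight controller.

```python
import math

def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """One update step of a complementary filter for a single axis (pitch).

    angle     : previous pitch estimate, degrees
    gyro_rate : gyroscope angular rate, degrees/second
    ax, az    : accelerometer readings along the X and Z axes, in g
    dt        : time since the last update, seconds
    """
    # Integrate the gyro (smooth, but drifts over time).
    gyro_angle = angle + gyro_rate * dt
    # Absolute angle from the gravity vector (noisy, but no drift).
    accel_angle = math.degrees(math.atan2(ax, az))
    # Trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Level and stationary: the estimate stays at zero.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.0, ax=0.0, az=1.0, dt=0.01)
print(round(angle, 3))  # 0.0
```

Because the accelerometer term is always blended in, any drift accumulated from the gyro integration decays away instead of growing without bound.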
All the electronic systems listed above are standard for all professional UAVs.
The comparison of a drone with a human body is that the flight controller is the brain, the wires are the blood vessels and nerves, and the motors are the muscles, limbs, and hands (Issod, 2015).
Unmanned Aircraft System (UAS)
As we mentioned in the first chapter, a UAV is an Unmanned Aerial Vehicle, remote-controlled or autonomously guided. The UAS, or Unmanned Aircraft System, is very similar to the UAV but includes a ground control station.
UAS is a term more suitable for professional drones specialized in a specific field, such as military use or geomatics.
The structure of a UAS, compared to a UAV, is constituted from the above-listed electronic devices and sensors, plus:
Long distance radio transmitter (TX)/ receiver (RX) for controlling
the drone.
Long distance First Person View (FPV) for flight monitoring.
Telemetry and On-Screen Display (OSD), receiving information about the flight and displaying it on the screen.
High-resolution cameras to relay the real flight image.
Weatherproof design.
Accurate GPS (DJI Naza or RTK).
Flexible payload (body of the drone, motors, etc.…).
Autonomous flight capability, with route tracing for long-distance flights.
Besides the electronic elements, the UAS must be equipped with:
Field computer or tablet pc for automatic flight control.
Workstation computer for data processing.
Ground control points marks.
High accuracy professional GNSS receivers.
The main parts of a typical UAS are the autopilot, payload, communication system, and ground control station, discussed in detail below.
An autopilot is a system used to guide drones without assistance from human operators, consisting of both hardware and its supporting software. The autopilot is the basis for all the other functions of the UAS platform.
Autopilots allow the coordinates to be defined relative to a fixed home location or to the take-off position.
An autopilot can take control of different objectives such as:
a) Pitch attitude hold.
b) Altitude hold.
c) Speed hold.
d) Automatic take-off and landing.
e) Roll-Angle hold.
f) Turn coordination.
g) Heading hold.
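Objectives such as altitude hold are typically implemented as feedback control loops, most often PID controllers. The sketch below is a minimal proportional-derivative altitude hold in Python; the gains and names are illustrative assumptions rather than values from any real autopilot.

```python
def altitude_hold_step(target_alt, alt, climb_rate, kp=0.5, kd=0.3):
    """One control step of a simple PD altitude-hold loop.

    Returns a throttle correction: positive to climb, negative to descend.
    kp pushes the drone toward the target altitude, while kd damps the
    climb rate so the drone does not overshoot and oscillate.
    """
    error = target_alt - alt          # distance from the target (meters)
    return kp * error - kd * climb_rate

# 5 m below the target and not yet climbing: command more throttle.
print(altitude_hold_step(50.0, 45.0, 0.0))  # 2.5
```

Each of the other hold objectives (heading, speed, roll angle) follows the same pattern with a different measured quantity and error term.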
The autopilot also needs to communicate with the ground station for control-mode switching, to receive broadcasts from GPS satellites for position updates, and to send commands to the UAS motors.
New autopilot systems come with new functions, such as "follow me", in which your drone follows you at all times, or flying around a waypoint with an adjustable radius, etc.
GPS plays an indispensable role in the autonomous control of UAVs because it provides an absolute position measurement. A known, bounded error between the GPS measurement and the real position can be guaranteed as long as there is a valid 3-D lock. There are also differential GPS units, which can achieve centimeter-level accuracy. The disadvantages of GPS are its vulnerability to weather factors and its relatively low update rate of 4 Hz, which may not be enough for flight control applications.
Payload. The payload of a UAV can be a camera or another emitting device such as a lidar, mostly for intelligence, surveillance, and reconnaissance uses. The type and performance of the payload are driven by the needs of the operational task; its weight can vary from 200 g to 4 kg (Reg Austin, 2010).
A communications system provides the data links between the control station and the drone over radio frequencies. It handles:
Transmission of the flight path to be stored in the drone's flight control system.
Transmission of real-time flight control commands.
Transmission of control commands to the aircraft-mounted payloads (gimbal or mounted camera).
Transmission of updated positional information to the drone (Reg Austin, 2010).
Most UAVs support more than one wireless link, for example an RC link for the safety pilot and a Wi-Fi link for large data transfers.
Control Stations (CS). A CS is simply the control center of a UAS, within which the mission is pre-planned and executed. The launching and recovery of the aircraft may be controlled from the main CS.
It is necessary for the operators to know, on demand, where the aircraft is
at any moment in time. It may also be necessary for the aircraft to ‘know’ where
it is if autonomous flight is required of it at any time during the flight. This may
be either as part or all of a pre-programmed mission or as an emergency ‘return
to base’ capability after system degradation (Reg Austin, 2010).
From a laptop, the operator can easily input waypoints onto a map base (e.g. Google Maps), which provides easy access to the key and frequently used features (figure 4.2).
A control station can include:
Primary battery, used to power the LCD monitor and/or FPV glasses
and possibly the video receiver.
Secondary battery for the transmitter.
Mounting for the LCD monitor.
Mounting for the video receiver.
Space for storing the RC transmitter.
Mounting for the long-range antenna.
A laptop is not something everyone needs in the field, but it can give you some nice features like mapping, missions, telemetry, a spectrum analyzer, and a disk video recorder. You will not need anything too special in computer hardware, as long as it is capable of running Google Earth at a minimum and the battery lasts as long as you need it (Glover, 2014).
Moreover, control stations are equipped with:
On Screen Display (OSD) allows the pilot to see various sensor data sent back from the drone. One of the easier ways to include on-screen data is to use a camera with analog output and place an on-screen display board between the camera output and the video transmitter.
First person view (FPV). FPV describes photography where you see what the drone is seeing. The video is beamed back to a small monitor or to a set of special goggles worn by the operator. FPV gear for drones works differently from the purely digital cameras we are used to: these are often analog systems and therefore use a separate, second camera. Its output is coupled to a transmitter with its own antenna, often needing its own battery. The video signal is then transmitted from the drone to your ground station and displayed on a small monitor or on specially designed goggles. FPV does not require a high-resolution camera (Issod, 2015).
Smart devices. Smartphones and tablets can be used to display video in real time. The difficulty with using smart devices is that most are not made to receive a signal from a wireless video receiver. A smartphone currently works best with video sent via Wi-Fi (a Wi-Fi camera) and an application to run the camera (www.robotshop.com).
With the evolution of drones and smart devices, tablets or phones can nowadays replace control stations: an installed, fully featured application controls the UAS very easily and without any added equipment. These applications give you advanced positioning, first-person view (FPV) video controls, programmable flight routes, etc.
During flight, smart devices can display on screen the drone's GPS position, flight status, speed, battery life and flight time.
Sensor. Many of the most promising application areas for UAS relate to the
gathering of information that can be remotely sensed. This ranges from visual
range cameras gathering data for surveillance of various kinds to meteorological
instruments, to geologic surveying and crop analysis among a wide variety of
other existing and potential applications.
Sensors mounted on a UAS need gimbals. A gimbal is often used to stabilize a camera or a sensor; connecting a sensor directly to the UAS frame means it always points in the same direction as the frame itself.
Gimbals are high-end mounting systems that reduce shake by stabilizing the camera; some gimbals provide remote control for adjusting and rotating the camera to capture a different perspective. Gimbals tend to reduce flight time due to their weight (Lafay, 2015).
UAS remote sensing functions include electromagnetic spectrum sensors, gam-
ma ray sensors, biological sensors, and chemical sensors.
Cameras. In UAS applications, cameras are made useful and highly adaptable by the addition of gimbals for pointing and of stabilization software for removing distortions caused by aircraft vibration and atmospheric buffeting. These cameras are used as photogrammetry sensors to take aerial photographs; a professional camera or an action camera may be used, depending on the resolution needed.
Infrared detectors. A thermographic camera, or infrared camera, is a device that forms an image using infrared radiation, similar to a common camera that forms an image using visible light. Instead of the 450-750 nanometer range of the visible light camera, infrared cameras operate in wavelengths as long as 14,000 nm (14 µm).
These infrared detectors are used for Normalized Difference Vegetation Index (NDVI) extraction.
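NDVI is computed per pixel from the near-infrared and red bands as (NIR - Red) / (NIR + Red). A small sketch on synthetic reflectance arrays; the values are invented for illustration:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Guard against division by zero on dark pixels.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Synthetic 2x2 reflectance patches standing in for real NIR/red bands.
nir = np.array([[0.6, 0.5], [0.4, 0.1]])
red = np.array([[0.1, 0.1], [0.2, 0.1]])
index = ndvi(nir, red)  # healthy vegetation gives values near +1
```

Values near +1 indicate healthy vegetation; bare soil and water give values near zero or below.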
Multispectral and hyperspectral sensors. Recent advances in remote sensing and geographic information have led the way for the development of hyperspectral sensors. Hyperspectral remote sensing, also known as imaging spectroscopy, is a relatively new technology investigated by researchers and scientists for the detection and identification of minerals and land use (Hyperspectral Remote Sensing).
Radar. Many of the most promising applications of radar-based sensing to UAS utilize Synthetic Aperture Radar (SAR). SAR is a form of radar which uses relative motion between an antenna and a target region to provide distinctive long-term coherent-signal variations, which are exploited to obtain a finer spatial resolution for the production of Digital Elevation Models (DEM).
LIDAR (Light Detection and Ranging) is an optical remote sensing technology that can measure the distance to a target by illuminating the target with light, often using laser pulses. LIDAR technology has applications in geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, remote sensing, and atmospheric physics (Arko Lucieer et al., 2012).
This technology helps in 3D terrain modeling by producing 3D point
clouds.
Meteorological sensors. The use of a UAS enables the sensor to be deployed to a location in the atmosphere remote from the user of the sensed data. The National Weather Service and others have used radiosondes and operated aircraft to reach regions of the atmosphere remote from the ground observer. The use of radiosondes, however, is inefficient and costly.
Chapter 5 Drone Software
3. Locating the area of interest and drawing the flight path by defining the border of a polygon bounding the area.
4. Drawing a survey grid automatically inside the drawn polygon, taking into account the side overlap of the aerial photography, by dividing the area of interest into flight lines linked by waypoints (figure 5.1). In some older software this function is not available, so you must draw these flight lines manually by dropping pins on the software's background map.
Fig. 5.1: Tablet flight planner display of the Lichi company at the Zaarour region (Lebanon)
At any time, you can amend or cancel your planned operations. Past
flights will also be saved and can be re-used or modified for new upcoming ac-
tivity.
1. Defining the altitude of each waypoint and the angle of the mission by rotating the survey grid; in the case of fixed wings, the grid must be aligned with the direction of the wind.
2. Selecting the start point, which is the takeoff point and usually the location of the aircraft, and defining the landing point by a waypoint, or simply selecting the return-home command that returns the drone to the takeoff position.
3. Adjusting the flight speed and the gimbal orientation, together with the number of camera frames per second if the camera is on board the drone.
4. Adjusting the camera to take photos at regular intervals in the case of DSLR cameras (time lapse); if the camera is built in, the adjustment is made directly from the software application during mission planning.
The mission planning software displays the number of GPS satellites tracked, battery life and flight duration.
Figure 5.1 shows the waypoints dropped manually to form the flight path, and the position of the aircraft expressed in latitude and longitude, with the beginning and the end of the mission.
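The automatic survey grid of step 4 follows directly from the camera footprint geometry: the spacing between flight lines is the ground footprint width reduced by the requested side overlap. A sketch of the arithmetic; the sensor and lens figures below are illustrative, not tied to any particular drone:

```python
def footprint_m(sensor_size_mm, focal_length_mm, altitude_m):
    """Ground footprint of one image dimension at a given flying height."""
    return sensor_size_mm / focal_length_mm * altitude_m

def line_spacing_m(sensor_width_mm, focal_length_mm, altitude_m, side_overlap):
    """Distance between adjacent flight lines for the requested side overlap."""
    footprint = footprint_m(sensor_width_mm, focal_length_mm, altitude_m)
    return footprint * (1.0 - side_overlap)

# Illustrative camera: 13.2 mm sensor width, 8.8 mm lens, flown at 100 m
# with 60 % side overlap.
spacing = line_spacing_m(13.2, 8.8, 100.0, 0.60)  # metres between flight lines
```

The along-track photo interval is obtained the same way from the footprint height and the forward overlap.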
We define three scenarios in the flight planning module:
Documentation of a surface with flat or moderate terrain.
Exposure of rough/mountainous terrain, such as hazard areas.
3D modeling of buildings and other objects.
Some new tablet applications offer functions like flips and rolls (Abbeel et al., 2007), collision avoidance (Bellingham et al., 2003; Pettersson and Doherty, 2004), automated target tracking (Nordberg et al., 2002) and operations like the "follow me" mode.
Nowadays, waypoint navigation for UAVs is a standard tool (Niranjan et al., 2007). Thus, autonomous flight based on points defined in a global coordinate system is possible for most UAV systems. For autonomous flights, a start point and a home point have to be defined by their coordinates. Moreover, some packages allow, in addition to waypoints, the definition of lines, paths, boundaries and no-go areas (Gonzalez et al., 2006; Wzorek et al., 2006).
Since most of the autonomous systems are stabilized, it can be expected
that the flight trajectory of the autonomous systems is more stable. Thus, for the
automation of the workflow, the orientation values coming from the navigation
units can be used as an approximation for the image orientation and for the fast
production of overview images (Eisenbeiss, 2009).
Drone mapping software
After finishing the flight mission successfully, post-flight image processing is needed to extract data; this is the photogrammetric stage described in detail in chapter 6.
For data processing, a powerful PC workstation and professional software can do the job. We list below a series of open-source software packages found on the net that can help students and researchers do their work:
Airphoto SE offers the essential features needed for rectification of oblique aerial imagery with georeferencing. It can automatically correct radial lens distortion, and it is designed for beginners and experienced users alike, for combining aerial images with maps, orthophotos, and satellite images (http://www.uni-koeln.de/~al001/airphotose.html).
Fiji is an image-processing package based on Java3D, with many plugins organized into a coherent menu structure (http://fiji.sc/Fiji).
ImageJ is a public domain Java image-processing program inspired by NIH Image for the Macintosh. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images (http://imagej.nih.gov/ij/index.html).
MapKnitter is a free and open source tool for combining and positioning images in geographic space into a composite image map, a process known to geographers as "orthorectification" or "georectification" (http://mapknitter.org/).
VisualSFM is an application for 3D reconstruction using structure from motion (SfM). The reconstruction system integrates several other projects: bundle adjustment and linear-time incremental structure from motion (http://ccwu.me/vsfm/).
CMPMVS is a multi-view reconstruction software package, going from images to a textured mesh (http://flightriot.com/post-processing-software/cmpmvs/).
CloudCompare is a 3D point cloud (and triangular mesh) processing software package. It was designed to deal with huge point clouds (typically more than 10 million points, and up to 120 million with 2 GB of memory) (http://www.cloudcompare.org/).
MeshLab is an open source, portable, and extensible system for the processing and editing of unstructured 3D triangular meshes. The system is aimed at helping the processing of models arising in 3D scanning, providing a set of tools for editing, cleaning, healing, inspecting, rendering and converting meshes (http://meshlab.sourceforge.net/).
Drone image processing is based on a classical photogrammetric model,
which is enhanced in turn by a powerful computer vision algorithm. This ena-
bles the automatic extraction of numerous key points in the images and optimiz-
es camera parameters such as external orientation and camera model.
Other quality processing programs can alternatively be used, such as PhotoScan by Agisoft, Pix4D, SkyPhoto by the SOUTH company and many others.
All these software packages have a first processing phase that identifies and extracts matching key points in the overlapping sections of the aerial images acquired by the drone. These key points are entered into an equation that determines the precise position and orientation of each image, as well as the internal and external camera parameters. The next step of the process is point cloud densification, which is required to obtain a highly accurate 3D model. This is used to generate a digital surface model (DSM) and orthophotos.
Obviously, we won't tell you which software is "the best", but we can definitely present some of the most interesting solutions we have tested. As a good mapping software, we can recommend SkyPhoto from the SOUTH surveying company if you need to generate high-resolution georeferenced orthophotos or extraordinarily detailed DEMs; we will describe its structure in detail. This software creates a 3D model from multiple digital photos of the area to map. Moreover, drones use airborne GPS data to fulfill the georeferencing task. If you need better accuracy, Ground Control Points (GCPs) can be imported into the 3D model and matched, or a drone with Real Time Kinematic (RTK) positioning mounted on board can give you directly georeferenced data.
The 3D mapping program I would like to present in detail is SkyPhoto from the SOUTH surveying company, to whom we are grateful for the permission and material to introduce the structure of drone processing software. With this solution, you will be able to handle large volumes of data from drone orthophotos by bringing them into a virtual environment.
This software links drone data with traditional geodetic surveying; it gives you a chance to explore your project with your UAS with very high accuracy.
All UAV data processing software follows the same interface principle, and the workflow is very similar apart from some advanced functions; a basic workflow is as follows (figure 5.2).
After getting the photos from the camera, the first step in data processing software is picture insertion; a structure-from-motion method aligns the pictures to each other (see chapter 7), and the bundle adjustment of these photos allows the generation of 3D point clouds. A triangulated irregular network (TIN) mesh is interpolated from the point clouds for Digital Surface Model (DSM) extraction; some software offers the possibility to add Ground Control Points (GCPs) for georeferencing.
All the photos are joined to form a mosaic of the whole flight area, draped on the DSM to produce an orthophotoplan.
Some more advanced photogrammetric software packages, such as SkyPhoto, offer more functions than basic software, which are helpful for specialists in the field of geomatics.
Processing workflow
All professional drone data processing software is designed for transforming low-altitude aerial images into consistent and accurate point clouds, DEMs (Digital Elevation Models), DOM (Digital Orthophoto Map) mosaics, etc. The features of this software include not only one-key processing for workflow automation but also advanced settings and editable output options.
The special functions of this software range from indoor camera calibration, the dodging process, accuracy quality reports and measurement tools to 3D model generation and browsing, DLG (Digital Line Graph) production based on stereo image pairs, and so on; the sequential processing actions are summarized in figure 5.3.
The GPS and IMU data of each photo are obtained from the flight control software of the drone as a text file (figure 5.4).
This file contains the name of each image, its coordinates, the flight altitude and the IMU data: heading, pitch and roll.
These three rotations are the transformation between the image reference
system, and a flat, projected mapping plane, most often in Universal Transverse
Mercator (UTM).
Omega – Rotation about the X-axis.
Phi – Rotation about the Y-axis.
Kappa – Rotation about the Z-axis.
These angles produce the same result as the above-described transformations (based on heading, pitch and roll), mapping the raw image directly into the UTM mapping plane (figure 5.5).
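The three rotations can be assembled into a single rotation matrix. A NumPy sketch; note that the composition order used here (Rz·Ry·Rx) is one common convention and differs between software packages:

```python
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Rotation matrix from omega, phi, kappa (radians), i.e. rotations
    about the X, Y and Z axes, composed as Rz @ Ry @ Rx."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R = rotation_opk(0.0, 0.0, np.pi / 2)  # a pure kappa (heading) rotation
```

Applying R to an image-space direction expresses it in the mapping frame; the matrix is orthonormal, so its transpose is the inverse transformation.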
In SkyPhoto the aerotriangulation is based on GPS/IMU data and, as we said, the GPS accuracy is not very high (approximately 2 meters), so we must refer to Ground Control Points (GCPs) surveyed by accurate geodetic instruments.
When these previously surveyed points (GCPs) are added in the software, a re-aerotriangulation is needed to reprocess the data after matching the GCPs.
This SkyPhoto aerotriangulation editing function allows the user to control the accuracy and adjust it if possible, based on a plotted error report.
The intelligent aerial triangulation algorithm deals satisfactorily with tough cases like images from an unstable flight attitude (kappa or omega angle out of tolerance) and sparse textures, for example deserts or a large water area with just a little land. The overlap percentage and rotation angle of the images are only lightly restricted.
High-precision POS data from an airborne GNSS-RTK system lets you minimize the huge fieldwork effort of dealing with ground control points, and you may go straight to adjustment and then mapping without the GCP concern.
Like all other photogrammetry software, SkyPhoto can generate a mosaic, a DSM and orthophotos; beyond that, SkyPhoto offers DEM editing, a function that transforms the DSM into a DTM by subtracting the heights of trees and buildings.
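Per grid cell, this DSM-to-DTM relationship reduces to a subtraction: surface minus ground leaves the above-ground height of trees and buildings (the normalized DSM). A toy sketch on synthetic elevation grids:

```python
import numpy as np

# Tiny synthetic grids with elevations in metres.
dsm = np.array([[101.0, 105.0],    # surface model: ground + objects
                [100.5, 112.0]])
dtm = np.array([[100.0, 100.5],    # bare-earth terrain model
                [100.5, 101.0]])

ndsm = dsm - dtm                   # object heights above ground
```

Here the lower-right cell carries an 11 m object (e.g. a building), while a zero cell is bare ground.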
Millions of orientation points are attributed, reaching hundreds of millions after densification. Instead of a monochromatic point cloud, colored output is more convenient for users viewing and analyzing the shape and properties of surface features. In addition, you may browse the point cloud in the software just as you would with a 3D laser scanner.
Many terrain analysis software packages based on DEMs can extract contour lines, ridges and thalwegs; after aerotriangulation and DEM editing, SkyPhoto allows the user to quickly extract terrain features in DLG format without any additional software.
UAS use simple digital cameras, which are not metric. To calculate the camera's internal parameters, which make it possible to use it as a metric camera, SkyPhoto includes a specialized camera calibration program for image distortion correction, as similarly required by all other professional aerial photogrammetry software solutions. By manually inputting the camera parameters (e.g. principal point, principal distance, pixel size, etc.), you may easily finish the procedure indoors with a desktop LCD monitor. It can be done either before or after photography, but normally better before the flights (www.southinstrument.com).
As an example of photogrammetric processing, figure 5.6 shows the workflow of the SkyPhoto software by the SOUTH surveying instruments company (China).
This workflow is similar to that of an aerial mapping system, with some differences: it consists of defining the new project, importing the aerial images and their position file for processing and image stitching, inserting GCPs in case of inaccurate GPS image position files, and generating dense point clouds, the Digital Surface Model (DSM) and the Digital Ortho Model (DOM).
Chapter 6 Drone Photogrammetry Methods
such as light sources, properties of the surface of the object, the medium through
which the light travels, sensor and camera technology, image processing.
Close range photogrammetry has significant links with aspects of graphics
and photographic science, for example, computer graphics and computer vision,
digital image processing, computer-aided design (CAD), geographic information
systems (GIS) and cartography. Traditionally, there are also strong associations of
close-range photogrammetry with the techniques of surveying, particularly in
the areas of adjustment methods and engineering surveying.
The applications of CRP are very wide; we list below some specialist areas of use:
Architectural photogrammetry: architecture, heritage conservation, ar-
chaeology.
Engineering photogrammetry: general engineering (construction) ap-
plications.
Industrial photogrammetry: industrial (manufacturing) applications.
Forensic photogrammetry: applications to diverse legal problems.
Biostereometrics: medical applications.
Motography: recording moving target tracks.
Multi-media photogrammetry: recording through media of different
refractive indices.
Shape from stereo: stereo image processing (computer vision).
A very important question to ask is: how does automated CRP work?
1. Data collection: multiple overlapping photos from different locations.
2. Automated feature matching: over 2000 matches and nearly 1000 incorrect matches.
3. Derivation of interpolated surface geometry (mesh).
Figure 6.1 shows the close-range photogrammetry sequential processing workflow.
It begins with image preparation, which may include photo enhancement (contrast, illumination) and selection; keypoint detection, the selection of similar pixels between photos; keypoint matching, first between the first two images and then with the remaining ones; and loop closing, preparing the images for the bundle adjustment step, a least-squares calculation which leads to the transformation to absolute coordinates. After that comes image rectification based on the absolute coordinates, then dense depth estimation to build a 3D model by triangle mesh generation and model fusion.
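The keypoint-matching step can be illustrated with a toy nearest-neighbour matcher using Lowe's ratio test to reject ambiguous matches. Real software matches SIFT/ORB-style descriptors extracted from the images; the random vectors below merely stand in for them:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in A to its nearest neighbour in B,
    keeping a match only when it is clearly better than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:          # Lowe's ratio test
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(10, 32))                          # "image B" descriptors
desc_a = desc_b[:5] + rng.normal(scale=0.01, size=(5, 32))  # noisy copies
matches = match_descriptors(desc_a, desc_b)                 # i should match index i
```

The ratio test is what keeps the incorrect-match count manageable in the automated feature-matching step described above.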
Fig. 6.1: CRP processing workflow.
Table 6.1: The differences between aerial photogrammetry and close-range photogrammetry (Marshall, 1989).

Aerial photogrammetry | Close-range photogrammetry
Relief is small compared to the flying height. | Objects may have truly spatial characteristics (large depth).
Precision requirements differ for height and planimetry. | Precision in all three coordinates may be equally important.
The entire format is usable. | A restricted format is likely.
Vertical photography is used exclusively. | The spatial nature of the object necessitates photography with varying position and orientation.
Targets are used only for control points, if at all. | It may be possible to target all points.
A large block may consist of thousands of photographs. | The total number of photographs is usually small.
Auxiliary data have only limited accuracy. | It is possible to determine camera parameters accurately.
A fairly standardized approach for all applications. | A flexible approach is required due to differences from project to project.
Table 6.1 by A.R. Marshall (1989) is still valid and can be applied as a comparison between drone photogrammetry and aircraft photogrammetry, with some modifications such as digital camera precision, high-accuracy planimetric GPS, point cloud extraction, 3D models, etc.
In CRP, camera precision plays a very important role, especially for the spatial precision of the output data.
A few years ago metric cameras were used in CRP, "A metric camera is a
general term applied to a camera which has been designed for surveying and has
a well-defined inner orientation. That is a camera with a good lens of small dis-
tortion, in which the position of the principal point can be located in the image
plane by reference to fiducial marks. All Cameras not possessing these charac-
teristics can be defined as simple or non-metric Cameras" (Adams, 1980).
It is around this time (1980) that non-metric cameras were established as a
suitable tool for close-range photogrammetry, and that the accuracy of projects
using non-metric cameras could equal those using metric cameras (Karara, and
Faig, 1980).
Nowadays, non-metric cameras can be used in close-range photogrammetry after a camera calibration procedure that obtains the internal parameters and characteristics of the camera.
In his paper, Xie Feifei discusses the design of a four-combined wide-angle camera; a number of products of this type have been developed (Xie Feifei et al., 2014). The structure of the new four-combined wide-angle camera is shown in figure 6.2.
Fig. 6.2: Combined oblique cameras system.
This is due to a combination of decentering distortion and radial distortion: decentering distortion is caused by the camera lens elements not being perfectly centered in relation to each other, while radial distortion is a distortion within each lens. This type of error is easily corrected by the photogrammetry software (Dai and Lu, 2010).
2 – The second type of error is due to human factors. Most human errors are attributed to the imprecise marking of points in two different photos. This can be overcome by marking the points in three or more photos, which is why most programs require the points to be marked in at least three photos (Dai and Lu, 2010).
To reduce these distortions we need camera calibration: the determination of the principal distance (c), the principal point coordinates (Xp and Yp) and the lens distortion parameters (K1, K2, K3, P1 and P2).
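These parameters enter the widely used Brown-Conrady distortion model: K1-K3 describe radial distortion and P1, P2 the decentering (tangential) part. A sketch of the correction formulas; the coefficients in the example are illustrative, not from a real calibration:

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Map ideal image coordinates (relative to the principal point) to
    distorted coordinates using radial (K) and decentering (P) terms."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Illustrative coefficients: a negative K1 (barrel distortion) pulls
# points toward the principal point.
xd, yd = distort(0.2, 0.1, k1=-0.1, k2=0.01, k3=0.0, p1=1e-4, p2=1e-4)
```

Calibration estimates these coefficients; the software then inverts the mapping to undistort measured image points.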
Without calibration, high accuracy cannot be achieved; for sub-millimeter surveying accuracy, the camera calibration should be carried out in a laboratory.
Early vision calibration methods followed the photogrammetric approach, in which an object with known characteristics is used to infer the internal and external parameters from images. The advent of structure from motion and projective reconstruction without knowledge of camera parameters has made this unnecessary: scene reconstruction can be attempted where no calibration objects are present.
In brief, we discuss four categories of camera calibration approaches:
1. Classical calibration: using the geometry of known objects in the world (Leibowitz and College, 2001).
2. Auto-calibration assuming fixed internal parameters: assuming that the same or an identical camera is used for each view of a scene.
3. Auto-calibration with varying internal parameters: (partially) relaxing the assumption of identical cameras.
4. Special or constrained camera motion: solving the special cases where the motion between cameras is known to be of a particular form (Leibowitz and College, 2001).
Classical calibration using known objects originated with photogrammetry, aiming at extremely accurate camera calibration and making use of specially designed calibration devices. The Direct Linear Transformation (DLT) of Abdel-Aziz applies calibration from objects to common types of cameras in a computationally direct manner (Abdel-Aziz and Karara, 1971).
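The DLT can be sketched in a few lines: each world/image correspondence contributes two rows to a homogeneous linear system whose null space, found by SVD, is the flattened 3x4 projection matrix. The camera and points below are synthetic, chosen only to verify the recovery:

```python
import numpy as np

def dlt(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P from >= 6 correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 4)      # null-space vector = flattened P

# Synthetic check: project points with a known P, then recover it.
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 1.0]])
rng = np.random.default_rng(1)
world = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(8, 3))
homog = P_true @ np.vstack([world.T, np.ones(8)])
image = (homog[:2] / homog[2]).T
P_est = dlt(world, image)
P_est /= P_est[2, 3]                 # remove the overall scale ambiguity
```

Note that the correspondences must not all lie in one plane, otherwise the system becomes degenerate.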
The calibration grids of Tsai represent a move towards automated calibra-
tion from images. Tsai uses a planar calibration pattern (which has come to be
known as the Tsai grid) consisting of a grid of black and white squares. An im-
age of grid points with known world coordinates allows calibration from a single
view by computing the projection matrix from the images of points and their
known world coordinates. Tsai's calibration method is the most widely applied in photogrammetry software (Tsai, 1986).
Auto-calibration was first explored by Faugeras et al., using the Kruppa
equations. The Kruppa equations embody the constraint that the epipolar planes
tangent to the absolute conic project to corresponding epipolar lines tangent to
the Image of the Absolute Conic (IAC) in each view (Faugeras et al., 1992).
Auto-calibration with varying internal parameters. The constant internal parameters assumed by many auto-calibration techniques are impractical: the assumption that a single camera with identical parameters is used for the entire sequence is often invalid, and although some of the parameters may remain constant, the focal length and principal point change. This is the approach taken by Heyden and Astrom, allowing for a varying, unknown focal length and principal point of square-pixel cameras. An iterative minimization with the square-pixel parametrization of the camera is used to compute metrically rectified cameras (Heyden and Astrom, 1997).
Special or constrained camera motion auto-calibration. Certain camera motions reduce the number of ambiguities in a calibration problem or allow a simple solution. The cases mentioned here include pure translation or pure rotation of cameras.
When a camera is rotating about its optic center and not translating, all points in a pair of images are related by a homography. Furthermore, this homography is the infinite homography between the cameras (Leibowitz and College, 2001).
With the advance of software technologies, camera calibration is performed automatically inside the processing workflow. To meet the accuracy requirements of three-dimensional (3-D) photogrammetric mapping, a conventional photogrammetric aircraft flying mission is usually designed exactly for certain flying parameters, such as overlap, side-lap, flying height, direction of flight, velocity of flight, etc. For a UAV-based photogrammetric mapping system using a high-resolution non-metric CCD camera, camera calibration has to be performed to provide the interior orientation parameters. A listing of 91 articles on aspects of optical camera calibration for the period 1889 to 1951 is provided by Roelofs (1951), and a large number of camera calibration approaches have been proposed since.
Camera calibration is the main feature in photogrammetric 3D object restitution. Calibration parameters such as the principal distance, principal point, and lens distortion are usually determined by a self-calibrating bundle adjustment based on the collinearity equations and additional correction functions.
The self-calibrating bundle adjustment is a very flexible and powerful tool for camera calibration and systematic error compensation, and it provides accurate sensor orientation and object reconstruction while treating all the system unknowns as stochastic variables (Hongxia et al., 2007).
Bundle adjustment
Bundle adjustment refines a visual reconstruction to produce jointly optimal 3D structure and viewing parameters. The term ‘bundle’ refers to the bundle of light rays leaving each 3D feature and converging on each camera center (figure 6.3).
Bundle adjustment is a mathematical model which allows the determination of camera position, camera orientation, object point coordinates and camera calibration parameters (Smith, Park, 2000) (see figure 6.3 below).
With:
O = center of the projection,
f = calibrated focal length,
a = coefficients of the rotation matrix.
Many scientists have worked on bundle adjustment methods. All of these methods are based on the same principle and structure with some modifications: the Gradient Descent method, the Newton-Raphson method, the Gauss-Newton method, and the Levenberg-Marquardt method. These four are the most popular techniques for nonlinear least squares optimization; the last two are designed for nonlinear least squares in particular.
Gradient Descent Method:
Based on a first-order optimization algorithm.
To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient of the function at the current point.
It is robust when x is far from the optimum but has poor final convergence.
Newton-Raphson Method:
Based on a second-order optimization method.
Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root.
For a quadratic function, it converges in one iteration.
For other general functions, its asymptotic convergence is quadratic.
The disadvantage of this method is the high computational complexity of the Hessian matrix H.
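The one-iteration claim for quadratics can be illustrated with a small sketch (hypothetical code, not from the book) applying the Newton-Raphson update x ← x − f′(x)/f″(x):

```python
def newton_minimize(grad, hess, x0, iters=10):
    """Newton-Raphson update for 1-D minimization: x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(iters):
        x = x - grad(x) / hess(x)
    return x

# Quadratic f(x) = (x - 3)**2 + 1: gradient 2(x - 3), constant Hessian 2.
grad = lambda x: 2.0 * (x - 3.0)
hess = lambda x: 2.0

# One iteration from any starting point lands exactly on the minimizer x = 3.
x1 = 5.0 - grad(5.0) / hess(5.0)
print(x1)  # 3.0
```

For a general function the Hessian changes at every step, which is exactly the computational burden noted above.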
Gauss-Newton Method:
The Gauss-Newton algorithm is a method used to solve non-linear
least squares problems.
For well-parametrized bundle problems under an outlier-free least
squares cost model evaluated near the cost minimum, the Gauss-Newton ap-
proximation is usually very accurate.
Levenberg-Marquardt Algorithm:
This method interpolates between the Gauss-Newton algorithm and the method of gradient descent.
When far from the minimum it acts as steepest descent, and it performs Gauss-Newton iterations when near the solution.
It thus combines the best of both the gradient descent and Gauss-Newton methods.
General facts about optimization methods:
Second-order optimization methods like Gauss-Newton and LM require few but heavy iterations.
First-order optimization methods like gradient descent require many light iterations.
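A minimal sketch of a damped Gauss-Newton (i.e. Levenberg-Marquardt) iteration is given below. The one-parameter model y = exp(a·t) and the data are invented for illustration, not taken from the book; with lam = 0 the update is plain Gauss-Newton, and a larger lam pushes the step toward gradient descent, as described above:

```python
import math

def lm_fit(ts, ys, a0, lam=1e-3, iters=50):
    """Levenberg-Marquardt for the one-parameter model y = exp(a*t)."""
    a = a0
    for _ in range(iters):
        r = [math.exp(a * t) - y for t, y in zip(ts, ys)]  # residuals
        J = [t * math.exp(a * t) for t in ts]              # Jacobian entries
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        a -= Jtr / (JtJ + lam)                             # damped normal equation
    return a

# Synthetic, noise-free observations generated with the true parameter a = 0.5.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * t) for t in ts]
a_hat = lm_fit(ts, ys, a0=0.0)
print(round(a_hat, 6))  # 0.5
```

In a real bundle adjustment the unknown is not one scalar but a large sparse vector of camera poses, calibrations and 3D points, yet the iteration has the same shape.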
Bundle adjustment is really just a large sparse geometric parameter estimation problem, the parameters being the combined 3D feature coordinates, camera poses, and calibrations. Bundle adjustment and similar adjustment computations are formulated as nonlinear least squares problems (Brown, 1976; Cooper, Cross, 1991; Atkinson, 1996).
Bundle block adjustment
The name bundle block adjustment is based on the fact that the rays from
the projection center to the photo points are building a bundle of rays, the bundle
block adjustment is using the photo coordinates, based on these photo coordi-
nates the bundle block adjustment is leading to more accurate results than the
other methods. Of course, some additional corrections like self-calibration with
additional parameters are improving the results as well as additional observa-
tions like GPS- positions of the projection centers.
The Bundle block adjustment provides three primary functions:
1. It determines the position and orientation of each image as exterior ori-
entation parameters. In order to estimate the exterior orientation parameters, a
minimum of three/four GCPs is required for the entire block, regardless of how
many images are contained within the project.
2. It determines the ground coordinates of tie points in the overlap areas of multiple images. The highly precise ground point determination of tie points is useful for generating control points from imagery.
3. It minimizes and distributes the errors associated with the imagery, im-
age measurements, and GCPs. The bundle block adjustment processes use statis-
tical techniques to automatically identify, distribute, and remove error (Rüther et
al., 2012).
Structure from Motion (SFM)
Structure from Motion (SfM) was introduced as a technique that allows the extraction of the 3D structure of an object by analyzing motion signals over time (Dellaert et al., 2000). The SfM technique can be applied to large collections of overlapping
photographs to obtain sparse point clouds for a wide range of objects, such as
buildings and sculptures (Snavely et al 2007; Snavely et al 2006). The power of
this technique was demonstrated by (Snavely et al 2007) who developed the
Bundler software and used it to construct 3D models.
The term Structure-from-Motion evolved from the machine vision community, specifically from tracking points across sequences of images acquired from different positions (Spetsakis and Aloimonos, 1991; Boufama et al., 1993;
Szeliski, Kang, 1994) (figure 6.4).
Structure-from-Motion (SfM) operates under the same basic tenets as stereoscopic photogrammetry: the 3D structure can be resolved from a series of overlapping images. However, it differs from traditional photogrammetry in that camera positions and orientations are solved automatically, without the need to specify a network of targets with known X, Y, and Z coordinates.
Fig. 6.4: Structure-from-Motion (SfM) functional schema requires multiple, overlapping pho-
tographs of the object.
Many SfM software packages are available, both commercial and open source, especially for terrestrial and aerial unmanned aerial vehicle (UAV) uses. Open-source SfM packages have very limited post-processing functions, unlike commercial software, which offers a variety of tools and models at the cost of high prices and fast evolution.
In SfM the 3D location is determined through automatic identification of matching features in multiple images; these features, called tie points, are used to establish both the interior and exterior orientation parameters of the cameras.
In photogrammetry, 3D geometry is obtained by creating images of the
same object from different positions. This makes a single point on the object
visible as a pixel in multiple images. For each image, a straight line can be
drawn from the camera center through the pixel in the image. These lines will
intersect at one point (tie point), which is the 3D location of the object point.
These tie points are tracked from image to image, estimating camera positions and object coordinates, which are then refined iteratively using non-linear least squares (Snavely et al., 2008). The SfM method identifies features in each image that are invariant to scale, rotation, illumination changes, and 3D camera viewpoint (Lowe, 2004).
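The ray-intersection idea described above can be sketched numerically. The camera centers, ray directions and ground point below are invented for illustration, and the midpoint of the closest-approach segment of the two rays stands in for the least-squares intersection used in practice:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Intersect two viewing rays (camera center c, direction d) by finding
    the midpoint of their closest-approach segment."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    # Solve for ray parameters s, t minimizing |(c1 + s*d1) - (c2 + t*d2)|^2.
    w = sub(c1, c2)
    A, B, C = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    D, E = dot(d1, w), dot(d2, w)
    denom = A * C - B * B            # zero only for parallel rays
    s = (B * E - C * D) / denom
    t = (A * E - B * D) / denom
    p1 = [c + s * d for c, d in zip(c1, d1)]
    p2 = [c + t * d for c, d in zip(c2, d2)]
    return [(a + b) / 2.0 for a, b in zip(p1, p2)]

# Two cameras 10 m apart at 50 m height, both looking at ground point (3, 4, 0).
point = triangulate_midpoint([0, 0, 50], [3, 4, -50], [10, 0, 50], [-7, 4, -50])
print(point)  # [3.0, 4.0, 0.0]
```

With noisy pixel measurements the rays do not meet exactly, which is why the full pipeline refines all such points jointly in the bundle adjustment.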
(Micheletti et al., 2014) demonstrated that a large number of images produces very high spatial resolution data at good accuracy because of the large overlapping areas covered by the pictures. Transparent, reflective or homogeneous surfaces present difficulties because incorrect features can be linked during the automatic feature-matching process (Autodesk, 2014). The number of tie points depends on image texture and resolution; they are automatically identified over all scales and locations in each image.
Sufficient tie points allow for the reconstruction of the relative position of all images. Additionally, known points or ground control points (GCPs) with 3D world coordinates should be added to obtain scale and absolute coordinates.
The SfM part of the process generates a sparse point cloud comprising tie
points identified and matched across the input images. In order to construct the
sparse point cloud, several steps are involved such as feature extraction, feature
matching, and bundle adjustment. The SfM algorithm needs to estimate the inte-
rior and exterior orientations for each image by combining all the relative orien-
tations of the image pairs in the form of their fundamental matrices (Verhoeven
et al 2013). Once complete, a technique called image triangulation is used to
calculate the relative position and orientation for each image in every pair. The
overlapping pairs are then combined to form a single block, achieved by a bundle adjustment, which adjusts the bundles of rays between each camera’s projection center and the set of projected 3D points until there is minimal discrepancy between the positions of the observed and re-projected points (the image distance between the initially estimated position of a point and its ‘true’ or measured value) (Verhoeven et al., 2013).
A bundle adjustment step comes after the key point detection to find the 3D point positions and camera parameters that minimize the re-projection error. Bundle adjustment and similar adjustment computations are formulated as nonlinear least squares problems (Cooper, Cross, 1988; Granshaw, 1980; Atkinson, 1996; Karara, 1989; Wolf, Ghilani, 1997). The residuals are the differences between the observed feature locations and the projections of the corresponding 3D points on the image plane of the camera, and the problem can be cast as a re-weighted non-linear least squares problem (Hartley, Zisserman, 2003).
After bundle adjustment, processing continues with the generation of point clouds in a relative ‘image-space’ coordinate system, which must be georeferenced to the real world using a 3D transformation by adding a small number of known control points (Doumit, Kiselev, 2016).
The technique is based on identifying matching features in images that are
taken from different viewpoints. Image features are identified by the scale-invariant feature transform (SIFT) algorithm (Lowe, 2004), which is robust in terms
of its feature descriptors for image features at different viewing angles. Based on
these SIFT matches, the camera positions, orientations, radial lens distortion,
and finally the 3D coordinates of each SIFT feature are calculated using a bun-
dle block adjustment. The 3D positions of the SIFT feature essentially form a
3D point cloud that captures the structure of an object.
The point cloud is known as a sparse point cloud that can be densified
with a more recent technique called multi-view stereopsis (Furukawa, Ponce,
2009). The stereopsis algorithm takes the output from the Bundler algorithm,
camera positions, orientations, and radial undistorted images, and applies a
match, expand, and filter procedure. It starts with the sparse set of matched key
points, and repeatedly expands these points to neighboring pixel correspondenc-
es, and finally applies visibility constraints to filter out false matches (Furukawa
& Ponce, 2009). The algorithm is implemented in the Patch-based Multi-View Stereo (PMVS2) software tool. SIFT, Bundler, and PMVS2 work in sequence to generate an extremely dense 3D point cloud just from overlapping photographs. The PMVS2 point cloud contains the following information for each point: XYZ coordinates, point normal (i.e. the direction of the slope through the point), and point RGB color values (i.e. derived from the photographs). The dense point cloud can then be used as the basis of a triangulated irregular network (TIN) or mesh, onto which textures generated from the input images can be projected.
SfM requires the upload of multiple images of an object, and the results are produced fully automatically without user interaction. The principal disadvantages of SfM are that the mathematical process is not transparent, and much research has been published without confidence in the metric accuracy and reliability of its results.
Aerotriangulation
An important and critical phase in photogrammetric mapping is rectifying
the aerial images to their appropriate place on the surface of the earth. This is
accomplished by collecting horizontal and vertical data, to ascertain the spatial
location of a number of features that are visible and measurable on the aerial
images.
Geometric stability requires that a minimum of four points of known position, spaced in the corners of a full stereo model, be used to fully rectify it. On a project involving a few stereo models this may be a conventional ground surveying enterprise, but the expense and time required to collect the ground survey data in this manner may render the mapping project impractical (Falkner, Morgan, 2002).
Computer processing has played a major role in driving mapping scien-
tists to develop rigorous and efficient mathematical protocols that allow for the
densification of stereo model control from a minimal number of strategically
positioned ground survey points. This procedure is generally referred to as aero-
triangulation. Analytical software available today, with its built-in quality
checks, has made aerotriangulation the preferred method of image adjustment to
the earth for photogrammetric mapping.
This book will not discuss the theory of these processes, but rather give the reader the necessary guidance and explanation of procedures to plan and estimate the effort required to perform satisfactory aerotriangulation for a photogrammetric mapping project (Falkner, Morgan, 2002).
Aerotriangulation Computer Processing
After the field control points are collected, they are imported into the computer and processed through an aerotriangulation software module. The procedure includes the following steps:
First, each stereo model is processed through a relative orientation routine involving a least squares adjustment of collinearity equations. This solution produces individual model coordinates unrelated to any reference system.
Second, a strip formation procedure joins the independent stereo models through a three-dimensional transformation (X, Y, Z). A series of equations links successive models by common pass points. The coordinates, at this stage, remain in an arbitrary reference scheme.
Then each strip undergoes a polynomial adjustment which produces preliminary ground coordinates for all of the photo control points; note that the control points are now in the project coordinate system.
A simultaneous bundle adjustment provides a fully analytical aerotri-
angulation solution and the entire block of data passes through an iterative
weighted least squares adjustment until a convergent “best fit” solution is ob-
tained. An RMSE is noted so that the observer can judge how far the coordinates of each point were mathematically “stretched” out of position in order to resolve a solution (Falkner, Morgan, 2002).
Chapter 7 Flight Planning Principles and image processing
Before beginning a flight project, the operator should ask and answer a series of sequential questions about the project design, approach, and preparations.
What is the size of the area to be mapped?
Depending on the equipment you have, the larger the area to be mapped, the more the choice leans toward a fixed wing.
What is the size of the Ground Sample Distance (GSD)?
Small GSD values generally require lower flying heights and slower
ground speeds. For high-resolution projects (GSD 1 to 2cm) multi-rotors tend to
be the platform of choice.
What is the Nature of Terrain?
Multi-rotors are often referred to as Vertical Take-off and Landing (VTOL) platforms; they are employed in congested areas such as forests and urban environments.
What are the Flying Height and Ground Speed?
High ground speeds at low altitude require short exposure times to avoid
image blur. When GSD and focal length dictate a particularly low flying height,
the exposure distance intervals tend to be short, thus requiring the camera to ex-
pose at very short time intervals.
Ground sampling distance (GSD)
The spatial resolution of digital maps is commonly expressed as the
ground sampling distance (GSD). This is the dimension of a square on the
ground covered by one pixel (p) in the image and is a function of the resolution
of the camera sensor, the focal length (f) of the camera and the flying height (H
= the distance between camera and ground) (Barnes et al., 2014).
The formula is:

GSD = (H × p) / f
The pixel size (p) is found in the camera technical specifications, the di-
mensions of the image are specified in linear units (e.g. 17.3 x 13.0 mm) as well
as in a number of pixels (4000 x 3000 pixels). Pixel size is simply determined by
dividing the linear units by the number of pixels.
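Using the formula and the sensor dimensions quoted above, the GSD computation can be sketched as follows; the 15 mm lens and the 100 m flying height are hypothetical values, not taken from the text:

```python
def gsd(flying_height_m, pixel_size_mm, focal_length_mm):
    """Ground sampling distance in metres: GSD = (H * p) / f."""
    return flying_height_m * pixel_size_mm / focal_length_mm

# Pixel size from the sensor quoted above: 17.3 mm across 4000 pixels.
p = 17.3 / 4000                            # = 0.004325 mm per pixel
# Hypothetical mission: 15 mm lens flown at 100 m above ground.
print(round(gsd(100.0, p, 15.0), 4), "m")  # 0.0288 m, i.e. about 2.9 cm
```

Note that H and p may be kept in different units, as here, as long as p and f share the same unit; the result then carries the unit of H.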
In photogrammetry to generate 3D models image capturing should be in
stereo modes which express an overlap between captured images. Once a flying
height has been determined it is necessary to compute the distance between each
exposure position, the spacing between flight lines, and the overlap (figure 7.1).
Fig. 7.1: Image footprints and overlaps (Barnes et al., 2014).
The exposure distance interval (s) and the spacing between the flight lines (d) depend on the forward and lateral overlaps respectively. For drone mapping projects we have found that a forward overlap of 80% and a lateral overlap of 70% yield good results.
If the forward overlap is a%, then:

s = α × (100 − a) / 100 (7.1)

Similarly, if the lateral overlap is b%, then:

d = β × (100 − b) / 100 (7.2)

where α = GSD × (length of the sensor in pixels) and β = GSD × (width of the sensor in pixels), i.e. the image footprint dimensions along and across the flight direction.
Some drone cameras cannot trigger at given distance intervals. In such cases, the camera is programmed to expose at a fixed time interval. The time interval t between two successive exposures is then calculated as

t = s / v (7.3),

where s is the distance between exposures and v is the estimated ground speed of the drone (Barnes et al., 2014).
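The spacing and timing formulas above reduce to a few lines of arithmetic. In the sketch below the 3 cm GSD, the 4000 × 3000 px sensor and the 12 m/s ground speed are hypothetical mission values:

```python
def flight_spacing(gsd_m, px_along, px_across, forward_pct, lateral_pct):
    """Exposure spacing s (7.1) and flight-line spacing d (7.2):
    footprint dimension * (100 - overlap %) / 100."""
    alpha = gsd_m * px_along     # footprint length along the flight direction
    beta = gsd_m * px_across     # footprint width across the flight lines
    s = alpha * (100 - forward_pct) / 100
    d = beta * (100 - lateral_pct) / 100
    return s, d

# Hypothetical mission: 3 cm GSD, 4000 x 3000 px sensor, with the 80 %
# forward and 70 % lateral overlaps recommended above.
s, d = flight_spacing(0.03, 3000, 4000, 80, 70)
t = s / 12.0   # exposure time interval (7.3) at a 12 m/s ground speed
print(round(s, 2), round(d, 2), round(t, 2))  # 18.0 36.0 1.5
```

An 18 m exposure spacing every 1.5 s is comfortably within the trigger rate of most mapping cameras; very low flying heights shrink s and t and can become the limiting factor, as noted earlier.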
Photocontrol points
Prior to commencing mapping from aerial photos, ground survey infor-
mation is required on specific terrain features in order to relate the photogram-
metric spatial model to its true geographical location. These terrain features may
be portrayed in two ways: by identifiable photo image features or ground targets.
Acquisition of ground control data on photo image points is a necessary
requirement for photogrammetric mapping for two primary reasons:
To georeference the imagery.
To check the accuracy of the spatial data collected.
Technology is constantly changing and adding to the tools that can be
used to collect ground control. Recent advances in GPS are the underlying rea-
son for many of the advances in ground survey methods, as well as photogram-
metry in general (Falkner, Morgan, 2002).
A ground target is some kind of panel point that is placed on unobstructed ground prior to photography. Figure 7.2 illustrates the effective presentation of a ground target (+) on an aerial photo. Ground targets create a discrete image point and can perhaps lead to better map accuracy.
Equation 7.4 defines the width of the target legs for any photo scale:

W = 0.002 × Sp (7.4),

where:
W = width of the target legs (cm),
Sp = scale denominator (cm).
Equation 7.5 defines the length of the target legs of a typical ground target, as shown in figure 7.3, for any photo scale:

l = 10 × w (7.5),

where:
l = length of each cross arm (cm),
w = width of the target legs (cm).
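Equations (7.4) and (7.5) reduce to simple arithmetic. The sketch below assumes the forms W = 0.002 × Sp and l = 10 × w discussed above; the 1:5000 scale is a hypothetical example:

```python
def target_dimensions(scale_denominator):
    """Ground-target sizing from equations (7.4) and (7.5):
    leg width w = 0.002 * Sp, cross-arm length l = 10 * w (both in cm)."""
    w = 0.002 * scale_denominator
    l = 10 * w
    return w, l

# At a 1:5000 photo scale the legs are 10 cm wide and each arm 100 cm long.
print(target_dimensions(5000))  # (10.0, 100.0)
```

Sizing targets from the photo scale in this way keeps them a few pixels wide in the imagery: large enough to measure, small enough not to obscure the point.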
Fig 7.3: Typical ground target.
To produce mapping from stereo models, the aerial photo image must be scaled and leveled, that is, georeferenced to a true geographic ground location. In current mapping procedures, most points contain both horizontal and vertical information and are used for both scaling and leveling.
If the final mapping products are to be referenced to a defined coordinate
reference frame, then it is necessary to either geo-reference the aerial images or
set ground control points (GCPs). Ground control points are point features in the
object space that can be positively identified in the images (figure 7.2). The targets are surveyed to determine their precise coordinates in a defined spatial ref-
erence frame. In most cases, ground control points are surveyed by means of
differential GNSS.
The Global Positioning System (GPS)
In the global positioning system (GPS), an electronic receiver measures the distances between the ground point and a minimum of four satellites, and the intersection of the divergent rays establishes the spatial coordinates of the observing station.
The anticipated ephemerides (positional) information is broadcast by the
satellite. Several continuous tracking stations are scattered throughout the world,
meticulously charting the paths of the satellites. Both the tracking data and
broadcast information are available to the user, with the former providing more
accurate data for processing receiver information (Falkner, Morgan, 2002).
The determination of the coordinates of a ground station by GPS proce-
dures relies upon intersection geometry. The GPS receiver, a pseudo-range
measuring device, accepts carrier signals from multiple satellites. By measuring
the carrier waves from a ground station to several satellite positions simultane-
ously, the XYZ coordinate can be determined. For ascertaining three-
dimensional coordinates, the receiver must maintain a continuous lock on a min-
imum of four orbiting platforms simultaneously (Falkner, Morgan, 2002).
UAVs usually use a standalone GPS receiver enabling the drone to navigate safely; it achieves satisfactory accuracy for safe navigation, but the overall accuracy that can be achieved does not reach survey-grade accuracy.
Static mode
In static traverses, at least two receivers must be used; both must be
locked on to the same group of satellites. One receiver resides over a location
with known coordinates, which could be a previous point in the circuit. The oth-
er receiver is set over a point with unknown coordinates. Observation time spent
at each baseline point pair may span a significant period of time. After the ob-
servation is complete at the baseline pair, both receivers can be moved to new
positions so long as one occupies a point with known coordinates. In this fash-
ion, observations are made at all of the stations on the traverse, in turn (Falkner,
Morgan, 2002).
RTK mode
RTK requires a constant communication link between the GNSS/GPS base and the drone. If this communication link is lost, RTK no longer works; to solve this problem, the correction is made by post-processing, using correction data coming either from a local base or a CORS network.
In RTK mode the receiver of the base station depends on the link with the drone receiver through UHF or a mobile SIM card.
Flight planning
In all drone projects, before beginning the flight, the user should plan it following these steps:
1. Define the mission area, resolution and overlap.
Most professional drones employ waypoint navigation to define their flight path. Modern flight planning software handles this automatically, incorporating the three pillars of aerial mapping: mission area, overlap and ground resolution.
2. Generate a 3D flight plan.
The optimal flight planning approach is to adapt the altitude of each flight line to the terrain.
An ideal flight planning system will have a high-quality elevation dataset
built-in, as well as allowing the operator's own elevation data to be imported.
3. Define take-off and landing points.
When defining the drone’s take-off trajectory, the software allows you to define the exact direction the drone will follow after being launched. When landing, choose the landing point and position; the drone, with its ground proximity sensor, adapts its altitude to the terrain, employing reverse thrust at an altitude of five meters to reduce its speed and ensure a short, precise and soft landing.
4. Monitor and adapt during flight.
Even with intelligent, automated mapping drone systems, it is crucial to
carefully monitor every flight via line of sight and using the drone's ground sta-
tion software.
The drone's position and important parameters such as its battery's remaining charge, and therefore flight time, should always be visible on the on-screen display (OSD) (figure 7.4).
Fig. 7.4: Ground station on Screen Display of the telemetry, landing and post-flight tasks
(Barnes et al., 2014).
Typically, an operator will ensure the safety of an operation by keeping
the drone in visual line of sight. The drone supports this human monitoring by
constantly sending status information displayed on the screen.
8. Landing.
After a mapping mission is finished, choose the landing point and position; the drone, with its ground proximity sensor, adapts its altitude to the terrain, employing reverse thrust at an altitude of five meters to reduce its speed and ensure a short, precise and soft landing.
It is good practice to inspect the UAV immediately after the landing for
any signs of overheating of the motors or electronics. The aerial images should
also be inspected for quality after the first flight and any required adjustments
made to the camera settings before further flights.
Image processing
Drone non-metric cameras are lighter and less costly, so they are generally preferred over conventional metric cameras. However, non-metric cameras are not geometrically stable, which is a basic requirement for classic photogrammetric mapping. To overcome this problem, a combination of classic photogrammetry and computer vision techniques, known as Multi-View Stereopsis (MVS), is used by most drone image processing software.
Fig. 7.5: MVS workflow.
Digital surface models can be prepared either in a vector format as trian-
gulated irregular networks (TIN), used extensively in CAD software for engi-
neering applications, or in a raster format for GIS uses.
A suitable DSM must be obtained to provide a vertical datum for an orthophoto; a DSM for orthophoto rectification does not have to be as dense or as detailed as a terrain model for contour generation (Falkner, Morgan, 2002).
Today, many uses for geospatial mapping products require current plani-
metric feature data. Analysis and design from geospatial data sets generally re-
quire a known positional accuracy of features. The collection and updating of
planimetric features in a data set can be costly. Many end users are also not ac-
customed to viewing and analyzing vector-based mapping data sets. They prefer
to view planimetric features like a photo image.
The final phase of the orthophoto process is the merger of the digital im-
age and the DSM along with corrections in pixel intensity throughout the image.
Software, used to merge the digital raster image with the DSM, makes adjust-
ments in the horizontal location of pixels based upon their proximity to DSM
points. This process removes the errors due to displacement and produces an
image that is orthogonally accurate (Falkner, Morgan, 2002).
Suitable imagery and ground control are the basic elemental data that determine the final orthophoto reliability, which involves both the accuracy of distances and areas within the orthophoto and the relative accuracy of features with respect to their true location on the earth. Distance and area accuracy
is based on the pixel size. Relative feature precision is based on the accuracy of
the DSM used in the rectification process. The relative accuracy cannot be more
precise than the reliability of the DSM (Falkner, Morgan, 2002).
Drone mapping accuracy
The optimal conditions for high accuracy mapping:
Work in a soft lighting environment.
Avoid transparent or reflective surfaces.
Fly between around 11 am and 4 pm.
Avoid fisheye wide-angle cameras because of their high distortions and errors.
Optimal coverage: 80% overlap and 60% sidelap.
Terrestrial, nadir and oblique image captures must be processed in different projects and then merged or unified into one project.
Take GCPs on transition images.
Use manual stitching when needed.
Use drones with the highest-precision GPS to reduce the need for GCPs.
Use an image overlap of >85%.
Set GCPs using a zigzag pattern.
Add checkpoints to verify data quality.
Batteries are affected by the cold, so it is a good idea to pack spare, fully-charged drone batteries.
Remain aware of bird attacks.
Avoid flying in high winds and thermal winds (uplift), rain and snow (Reg Austin, 2010).
Ground Sampling Distance (GSD) accuracy
The quality of a project is linked to its accuracy, meaning the positional accuracy, which is not to be confused with camera resolution or a single image's Ground Sampling Distance (GSD).
The ground sampling distance achieved depends on an imaging sensor's
pixel size and lens focal length, and is directly proportional to the drone's flight
altitude.
However, data output quality depends on much more than a camera's reso-
lution alone. (In fact, the dataset produced by a 12 MP camera will often be hard
to differentiate from that produced by a 16 MP model.)
DSM Accuracy Assessment
Accuracy assessment of the Digital Surface Model (DSM) is an important task; horizontal and vertical accuracy are assessed by comparing the DSM with GCPs measured by precise surveying instruments (GNSS receivers or total stations) in terms of the Root Mean Square Error (RMSE). More specifically, the assessments in Easting (RMSEx), Northing (RMSEy), vertical (RMSEz), horizontal (RMSExy), and all components (RMSExyz) are computed, as suggested by Agüera-Vega, using the following equations:
RMSEx = SQRT( (1/n) Σ (X_GCPi − X_DSMi)² ) (7.6)
RMSEy = SQRT( (1/n) Σ (Y_GCPi − Y_DSMi)² ) (7.7)
RMSEz = SQRT( (1/n) Σ (Z_GCPi − Z_DSMi)² ) (7.8)
RMSExy = SQRT( (1/n) Σ [ (X_GCPi − X_DSMi)² + (Y_GCPi − Y_DSMi)² ] ) (7.9)
RMSExyz = SQRT( (1/n) Σ [ (X_GCPi − X_DSMi)² + (Y_GCPi − Y_DSMi)² + (Z_GCPi − Z_DSMi)² ] ) (7.10)

where X_GCPi and X_DSMi are the X-coordinate of the i-th GCP and the corresponding coordinate in the DSM, respectively; Y_GCPi and Y_DSMi are the corresponding Y-coordinates, and Z_GCPi and Z_DSMi the corresponding Z-coordinates (Agüera-Vega et al., 2016).
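Assuming the standard RMSE forms of equations (7.6)-(7.10), the assessment can be sketched as below; the two checkpoints and their errors are invented for illustration:

```python
import math

def rmse_components(gcps, dsm):
    """RMSE between surveyed GCPs and the DSM (equations 7.6-7.10).
    gcps, dsm: lists of (X, Y, Z) tuples in matching order."""
    n = len(gcps)
    sq = lambda vals: math.sqrt(sum(v * v for v in vals) / n)
    dx = [g[0] - d[0] for g, d in zip(gcps, dsm)]
    dy = [g[1] - d[1] for g, d in zip(gcps, dsm)]
    dz = [g[2] - d[2] for g, d in zip(gcps, dsm)]
    rmse_x, rmse_y, rmse_z = sq(dx), sq(dy), sq(dz)
    rmse_xy = math.sqrt(rmse_x**2 + rmse_y**2)
    rmse_xyz = math.sqrt(rmse_x**2 + rmse_y**2 + rmse_z**2)
    return rmse_x, rmse_y, rmse_z, rmse_xy, rmse_xyz

# Two checkpoints with 3 cm easting, 4 cm northing and 12 cm height errors.
gcps = [(100.00, 200.00, 50.00), (110.00, 210.00, 55.00)]
dsm  = [(100.03, 200.04, 50.12), (110.03, 210.04, 55.12)]
x, y, z, xy, xyz = rmse_components(gcps, dsm)
print(round(xy, 3), round(xyz, 3))  # 0.05 0.13
```

The vertical component usually dominates RMSExyz in drone surveys, as the invented numbers here illustrate.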
Chapter 8 Drones and Geospatial Analysis
There is a set of procedures used for geospatial scientific research methods, divided into case studies and field work. Drone field work is the collection and gathering of information applied to the case studies of multi-scale Digital Surface Models, multi-scale landform classification, multi-scale terrain roughness, terrain analysis for parcel evaluation, and Digital Surface Models of planar areas.
The geospatial analysis based on drone terrain datasets was carried out through experiments at an altitude of 1700 meters above sea level, in the Zaarour region situated on the western Lebanese mountain chain. We chose this bare, non-urbanized mountainous area because of its representative morphological terrain forms.
The study area has a slight natural slope and is represented by bare lands with elements of anthropogenic relief. The inclusion of anthropogenic micro-relief in the study area is due not only to the requirements of representativeness, but also to the presence of complicating microforms for the experimental modeling of concave and convex smoothed terrain areas.
An autopiloted DJI Phantom 3 unmanned aerial vehicle (UAV), carrying a 14-megapixel camera with a focal length of 3.61 mm, flew over the study area at different heights. The flight heights were measured from the takeoff point of the quadcopter; the experiment consisted of 6 missions: FA-20, FA-40, FA-60, FA-120, FA-240 and FA-360, at 20, 40, 60, 120, 240 and 360 meters height.
The flight path followed by the quadcopter was identical for all the flights and was designed in a mobile application called Litchi. The application shows the flight path and the flight parameters (coordinates, height, time, etc.).
Before starting the aerial surveying, 10 well-distinguishable control points were evenly distributed within the area of interest for scaling and georeferencing the resulting data. Ground control points (GCPs) were collected with the Global Positioning System (GPS) in the stereographic coordinate system.
The drone took aerial photography with 80% overlap and 70% sidelap; SfM-based 3D methods operate on the overlapping images. The drone flew in an autonomous way, defined by waypoints, to avoid image coverage gaps; every surface to be reconstructed needs to be covered by at least 2 images taken from different positions.
All photo datasets of the six missions at different flight heights were processed in Agisoft PhotoScan software to generate Digital Surface Models (DSM) and Digital Ortho Models (DOM). When the camera focal length and the flying height of the UAV are known, the image scale is determined by the formula:

1/m = f / H (8.1)

where m is the scale denominator, f is the camera focal length and H is the flight height.
The results of the scale calculation for each flight height are listed in table 8.1.
Table 8.1: The spatial resolution, calculated scale and approximated scale of each DSM type.
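As an illustrative check (a sketch, not the authors' processing chain), the scale denominator m = H / f follows directly from the 3.61 mm focal length and the six flight heights given above:

```python
# Sketch: scale denominator m = H / f for each mission (values from the text).
FOCAL_MM = 3.61                          # camera focal length, mm
HEIGHTS_M = [20, 40, 60, 120, 240, 360]  # flight heights of FA-20 .. FA-360

def scale_denominator(height_m, focal_mm=FOCAL_MM):
    """Scale 1:m, where m = H / f with both lengths in the same units."""
    return (height_m * 1000.0) / focal_mm

for h in HEIGHTS_M:
    print(f"FA-{h}: scale 1:{scale_denominator(h):,.0f}")
```

For FA-20 this gives a scale of roughly 1:5,540, which is why the first three missions fall into the plan category discussed below.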
Figure 8.1 shows six DSMs of the study area at different spatial resolutions: from FA-20, at 20 meters flight height, a very high resolution dataset highlighting all terrain details down to rock textures, through FA-60, to FA-360, at 360 meters flight height, with a very low spatial resolution and a strong generalization effect showing the «disappearance» and «growth» of some morphological terrain features.
These six DSMs can be classified visually from figure 8.1 as rough or smooth: FA-20, FA-40 and FA-60 as rough, and FA-120, FA-240 and FA-360 as smooth. Figure 8.1 also constitutes an interval of scales and smoothness, showing the generalization at different scales.
As per table 8.1, different flight heights lead to different spatial resolutions (pixel sizes); the finest spatial resolution is 0.37 m, a high resolution showing all terrain details, and the coarsest is 4.47 m, still a reasonable resolution for geomorphological analysis at a local scale.
After the scale calculation we divided our data into two categories, plans and maps: the first three DSMs, of flight heights 20, 40 and 60 m, belong to the category of plans, and the others to the category of geographical maps.
resolution obtained from the UAV surveys at different heights. The effect of scale (spatial resolution) on these surface models is analyzed by means of morphometric indices, calculating their direct indicators of spatial correlation. The experiment thus exploited the opportunities of UAV-based multi-scale measurement technology.
For multi-scale analysis we applied the local variance method of Woodcock and Strahler (1987), the texture method, which measures surface properties such as coarseness and smoothness, and, as a third index, the fractal dimension.
Many scientists have quantitatively discussed the issue of the optimal resolution of digital elevation models. Our study applied three methods expressing the relation between UAV flight height and spatial resolution: local variance, texture analysis and the fractal method. The first two methods are relatively simple and very useful in a practical sense, while the third, the fractal dimension method, has great potential for detecting resolution effects and is used in several geoscience research fields.
The local variance method proposed by Woodcock and Strahler (1987) was originally developed in image analysis and has potential for dealing with scale in DEM analysis (Li, 2008). Local variance is computed as the standard deviation within a moving 3-by-3 pixel window; the mean of all local standard deviation values over the entire image is then used as an indicator of the local variability contained in the image.
The local variance reflects the degree of similarity between the values of two points as a function of the spatial distance between them: the shorter the distance, the higher the degree of similarity and the smaller the variance.
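A minimal NumPy sketch of this indicator (not the authors' exact implementation): slide a 3-by-3 window over the grid and average the local standard deviations.

```python
import numpy as np

def local_variance(dem, window=3):
    """Mean of per-window standard deviations over the grid interior,
    the Woodcock & Strahler (1987) local-variance indicator."""
    r = window // 2
    rows, cols = dem.shape
    stds = []
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            block = dem[i - r:i + r + 1, j - r:j + r + 1]
            stds.append(block.std())
    return float(np.mean(stds))

# A smooth surface should score lower than the same surface plus noise.
smooth = np.fromfunction(lambda i, j: i + j, (20, 20))
rough = smooth + np.random.default_rng(0).normal(0, 5.0, smooth.shape)
```

Applied to the six DSMs, higher mean local variance indicates a rougher, less generalized surface.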
According to Schmidt and Andrew (2005), the land surface is hierarchically structured and can be represented differently across scales: for example, a convex hillslope embedded in a concave hillslope, which in turn is embedded in a valley. Such cases can be detected only on high spatial resolution DSMs and are homogeneous relative to their scale levels.
In figure 8.2, the variance map of the study area at FA-20 shows all morphological forms in detail, even small concave and convex forms; in FA-40 the same forms become bigger in size and dimensions. At the last stage of the plan category, FA-60, the ridges bordering the roads become more highlighted, with the disappearance of the small morphological forms. In the second category of scales, FA-120, FA-240 and FA-360, the degree of smoothness increases with the scale.
Fig. 8.2: Variance digital models of the six flight heights.
Fig. 8.3: Texture digital models of the six flight heights.
Fig. 8.4: Fractal dimension digital models of the six flight heights.
Table 8.2: Local variance, texture and fractal dimension statistical values over the six DSMs.
The local variance maximum values increase with flight height and scale, and the standard deviation and the mean increase proportionally at all scales. In contrast to the local variance values, the maximum, mean and standard deviation of texture and fractal dimension decrease with flight height and scale as a result of the smoothing effect.
To determine the spatial distribution of the analyzed indices at different
scales, we applied a simple and effective tool – spatial correlation.
The scatterplots of the last three levels show a very poor degree of similarity compared to FA-20; in fact, they show a lack of connection between the DSMs at various scale levels. This means that accuracy is lost and generalization is considerable in the map category built using the UAV. Similar results were obtained when comparing the estimated texture parameters.
Table 8.3: Correlation analysis values between different scales for the different indices.

Values of R² (%)
DSMs           DSM     Fractal   Local variance   Texture
FA-20/FA-40    78.71   63.9      71.79            78.68
FA-20/FA-60    65.39   24.95     25.09            65.28
FA-20/FA-120   40.03   27.04     26.07            39.49
FA-20/FA-240   14.01   20.72     18.39            13.15
FA-20/FA-360    0.78   13.55     11.89             0.79
Multi-scale landforms classification
The factor of scale plays a very important role in landform classification at the different levels of measurement (nominal, ordinal, interval and ratio). This study discusses terrain analysis with applications of the Topographic Position Index (TPI), the Iwahashi and Pike index and the morphometric features, and their effects on generalization and spatial resolution at different UAV flight altitudes.
Pike et al. (2009) remarked that no map derived from digital elevation models is definitive, as the generated parameters differ between algorithms and can vary with resolution and scale.
Landform classification must contend with terrain complexity, which necessitates specific methods to quantify the terrain's shape and subdivide it into more manageable components (Evans 1990); this constitutes a central research topic in geomorphometry (Pike 2002; Rasemann et al., 2004).
The Jenness ArcMap module for landform terrain computations was applied to three drone-based DSMs of different spatial resolutions to extract the Topographic Position Index (TPI), the Iwahashi and Pike landforms and the morphometric features at different scales.
Throughout the assessment, we used the UAV for the whole chain from aerial image acquisition to the generation and interpretation of Digital Surface Models (DSM) using new photogrammetry technologies.
The three DSMs have different spatial resolutions: FA-20, at 20 meters flight altitude, with a high resolution highlighting all terrain details down to rock texture; FA-120, where the terrain is smoothed with some concave and convex areas; and FA-360, at 360 meters flight altitude, with a very low spatial resolution and a very smoothed terrain.
Topographic Position Index (TPI). The analysis was performed on the DSMs to obtain the topographic position index (TPI). Formula (8.2) calculates the difference between the elevation at a specific cell and the average elevation of the surrounding neighborhood cells (Tagil & Jenness 2008), describing higher and lower areas for the classification of the terrain into different morphological forms (Jenness 2005).
The computation requires adjusting the neighborhood radius and its geometric shape, based on two different scales or sizes (Barka et al. 2011). In this study, radii between 5 m and 25 m were applied to determine the slope positions.
TPI = Z0 − (1/n) ∑ Zn (8.2)

Where:
Z0 = elevation of the model point under evaluation,
Zn = elevation of the n-th grid cell within the local window,
n = the total number of surrounding points employed in the evaluation.
The same neighborhood radius values were applied to all DSM spatial resolutions, so that the parameters were identical for the best comparative analysis.
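As a sketch of the TPI computation (a plain square window for illustration, not the annulus neighborhoods of the Jenness ArcMap tools):

```python
import numpy as np

def tpi(dem, radius=1):
    """Topographic Position Index: cell elevation minus the mean elevation
    of its neighbors in a (2*radius+1)^2 window, center cell excluded.
    Edge cells are skipped for simplicity."""
    rows, cols = dem.shape
    out = np.zeros_like(dem, dtype=float)
    for i in range(radius, rows - radius):
        for j in range(radius, cols - radius):
            block = dem[i - radius:i + radius + 1, j - radius:j + radius + 1]
            n = block.size - 1                     # surrounding points only
            neighbor_mean = (block.sum() - dem[i, j]) / n
            out[i, j] = dem[i, j] - neighbor_mean  # positive: ridge, negative: valley
    return out

# A single peak on a flat plain: strongly positive TPI at the peak cell.
peak = np.zeros((5, 5))
peak[2, 2] = 10.0
```

A larger `radius` corresponds to a larger neighborhood, analogous to the 5-25 m radii used in the study.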
Positive TPI values represent high locations, e.g. ridges; negative TPI values represent low terrain, e.g. valleys; flat areas have TPI values near zero. The highest positive values correspond to elevated geomorphological structures such as peaks and ridges (Jenness, 2010).
Flight altitude FA-20 has a maximum positive value of 1.03, FA-60 of 0.71 and the highest flight altitude, FA-360, of 0.48: the maximum and minimum values decrease as the flight altitude increases.
Iwahashi and Pike developed an unsupervised landform classification method based on only three terrain attributes: slope gradient, surface texture and local convexity (Iwahashi and Pike 2007). The method restricts the number of landform classes to 8, 12 or 16, with a physical meaning given by statistical landscape properties.
The unsupervised approach treats topography as a continuous random surface, especially at the three levels of detail FA-120, FA-240 and FA-360, independent of any spatial or morphological order imposed by fluvial activity and other geomorphic processes.
Morphometric elements. The standard method for the identification of morphological elements is to establish the mutual position of the central cell in relation to its neighbors (Peucker, Douglas 1974; Evans 1979). The classification algorithm can work by maintaining the continuity of linear elements, which gives advantages over selection based on logical comparison of neighboring cells (Peucker, Douglas, 1974; Jenson, Domingue 1988; Pogorelov, Doumit, 2009).
Morphological elements take the forms of planar, pit, channel (thalweg), pass, ridge (division line) and peak. The names of the morphological elements may vary between sources, but they can be uniquely explained in terms of changes in the three orthogonal components x, y and z (Wood, 1996; Pogorelov, Doumit 2009).
The landform classification delineated using the TPI method is shown in figure 8.6; TPI values present a powerful way to classify the landscape into morphological classes (Jenness, 2005). The classes "Canyons, Deeply Incised Streams", "Midslope Drainages, Shallow Valleys" and "Upland Drainages, Headwaters" all tended to have strongly negative curvature values of a concave shape, while "Local Ridges or Hills", "Midslope Ridges, Small Hills in Plains" and "Mountain Tops, High Ridges" all tended to have strongly positive curvature values of a convex shape.
The three maps of figure 8.6 show the landform classification of all the morphological forms listed above at different scale levels; a visual analysis of these maps highlights the cartographic generalization between them, with a very clear evolution of the morphological forms at each stage.
Fig. 8.6: Maps of landform elements of the three DSMs derived from the TPI classification analysis: a) FA-120, b) FA-240, c) FA-360.
Table 8.4 shows how the area percentages of some morphological elements increase at the expense of others as the scale varies. Streams, plains, open slopes and high ridges increase in area and geometrical form due to the variations in spatial resolution. Some morphological elements, such as the upland drainage type, are not found in any of the three maps, and others, like local ridges, disappear with the scale variation, constituting a basis for generalization processes.
Open slopes comprised between 6 and 11% of the total area at all flight altitudes, while midslope drainages increased with flight height from 9.75% to 11.46% of the total study area.
The landforms decrease in number: 647,886 pixels of different morphological elements are lost between flight height FA-120 and flight height FA-360. All ten morphological elements are affected by scale generalization.
Table 8.4: Percentages of morphological elements and pixel numbers of each morphological element in the three DSM levels, based on the TPI classification.
Fig. 8.7: Landform maps of the unsupervised classification (Iwahashi and Pike method): a) FA-120, b) FA-240, c) FA-360.
Table 8.5: Iwahashi and Pike landform percentage of areas at different scales.
Area (%)
Type FA-120 FA-240 FA-360
1) very steep slope, fine texture, high convexity 0.00987 ─ ─
2) very steep slope, coarse texture, high convexity 47.33649 48.5 49.9
3) very steep slope, fine texture, low convexity 0.00144 ─ ─
4) very steep slope, coarse texture, low convexity 51.13720 51.2 49.9
5) steep slope, fine texture, high convexity ─ ─ ─
6) steep slope, coarse texture, high convexity ─ ─ ─
7) steep slope, fine texture, low convexity ─ ─ ─
8) steep slope, coarse texture, low convexity ─ ─ ─
9) moderate slope, fine texture, high convexity ─ ─ ─
10) moderate slope, coarse texture, high convexity ─ ─ ─
11) moderate slope, fine texture, low convexity ─ ─ ─
12) moderate slope, coarse texture, low convexity ─ ─ ─
13) gentle slope, fine texture, high convexity ─ ─ ─
14) gentle slope, coarse texture, high convexity 0.67723 0.2 0.1
15) gentle slope, fine texture, low convexity ─ ─ ─
16) gentle slope, coarse texture, low convexity 0.83771 0.2 0.1
The high- and low-convexity very steep slopes with fine texture are found only in the highest spatial resolution model (FA-120), while the coarse texture of high convexity increases with the pixel size.
Steep and moderate slope classes are not detected in any of the three models; gentle slopes of coarse texture with high and low convexity decrease with the flight altitude. Varying the DSM spatial resolution can thus achieve a separation of elements at the appropriate scale without the need for generalization.
Table 8.6: Surface specific points area percentages of the study area at different scales.
Area (%)
Type FA-120 FA-240 FA-360
Planar 0.00001 ─ ─
Pit ─ ─ ─
Channel 49.72501 48.3 47.4
Pass (saddle) ─ ─ ─
Ridge 50.27497 51.7 52.6
Peak ─ ─ ─
As per table 8.6, some morphometric features, like pit, pass and peak, are not detected at any flight altitude. Planar areas are detected only in FA-120, the lowest flight altitude, at a very low area percentage, on the order of 0.00001%; we cannot draw conclusions from this result because the value of this single pixel could be a processing artifact. The area of channels decreases with the flight altitude while the ridge area increases at the expense of the channel area.
The dominant surface-specific landforms of the study area, channels and ridges, form the basis of a comparison between the models of each flight height and the TPI landforms. By splitting the channels and ridges of FA-120, FA-240 and FA-360 and examining which TPI landforms are included in each type, table 8.7 shows the area percentage of each landform.
Table 8.7: Percentage of area of TPI landforms within ridges and channels at the three flight heights.

TPI Landforms                            Ridge-120  Channel-120  Ridge-240  Channel-240  Ridge-360  Channel-360
Canyons, deeply incised streams             1.5       10.4          2.4       16.8          4.3       26.3
Midslope drainages, shallow valleys         3.3       16.3          3.9       17.8          4.9       18.6
Upland drainages, headwaters                5.7       19.7          5.4       18.0          5.6       14.4
U-shaped valleys                           11.9       23.1          9.3       17.1          7.0       11.2
Plains                                     13.7       12.4          9.6        9.6          6.3        6.5
Open slopes                                14.7        7.1         11.4        6.8          7.1        5.4
Upper slopes, mesas                        16.5        5.3         15.3        5.7         11.5        6.0
Local ridges, hills in valleys             12.9        3.0         14.9        3.7         13.8        4.9
Midslope ridges, small hills in plains     10.7        1.9         14.6        2.7         16.8        3.9
Mountain tops, high ridges                  9.1        0.9         13.1        1.6         22.7        2.9
Fig. 8.8: Diagram of area percentage of TPI landforms contained in ridges at flight altitudes of 120, 240 and 360 meters, with logarithmic trend lines.
The diagram of figure 8.8 shows the percentage of TPI landform areas within ridges at different scales. The logarithmic curves of 120, 240 and 360 have an intersection point at the upper slope class; this point marks a transition from low to higher area percentages.
The R² correlation value between landforms is 0.6 for FA-120 and 0.9 for FA-240, the latter well correlated because of the proportionality and the small interval of area percentages; for FA-360 the correlation value is 0.6, similar to FA-120.
Fig. 8.9: Diagram of area percentage of TPI landforms contained in channels at flight altitudes of 120, 240 and 360 meters.
Channels are usually concave areas; in figure 8.9 we can see the area of canyons dominating in FA-360. The correlation of area percentages between the landforms of FA-360 is very high, with R² = 0.97 and a concave logarithmic trend line.
FA-240, in contrast, has a less concave logarithmic trend line with R² = 0.75, due to the proportional area percentages between landforms. FA-120 has a low correlation between landforms, R² = 0.35, below the average.
We can conclude from these values that, due to cartographic generalization in the transition from one flight altitude to another, the degree of similarity of channel landform areas rises with the flight altitude. For the ridge types, the areas of canyons, midslope drainages, upper slopes, local ridges, midslope ridges and mountain tops increase with flight altitude, while the areas of upland drainages, U-shaped valleys, plains and open slopes decrease with the flight height.
Using the Topographic Position Index and the unsupervised classification of Iwahashi and Pike, the study area was classified into landform categories on DSMs of different scales. The results show that ridge and drainage forms are more affected by generalization than other forms.
The landform classes obtained at the three scales differentiate the dynamic terrain characteristics of the study area. Landform classifications extracted from drone DSMs and GIS support the presented results and discussion by integrating a geospatial multi-scale approach to terrain analysis.
The results show that TPI provides a powerful tool for describing the topographic attributes of a study area, and that there is a relationship between the landform map and the spatial resolution, giving a deeper understanding of the terrain characteristics and of the potential and specific constraints of cartographic generalization. The information and methods discussed in this study are valuable for cartographic multi-scale studies and analyses. Landforms dissolve into each other across scales, some of them gaining area and some disappearing. This study analyzed generalization at three different scales (flight altitudes); in future research we plan to examine and monitor changes of landforms at micro, local and global scales.
Fig. 8.10: a) Terrain representation DSM of the study area; b) true-color Digital Ortho Model.
The DSM generated from PhotoScan in figure 8.10a shows the terrain structure (ridges and channels), with elevations varying from 1689 to 1821 meters above sea level. The terrain texture and pattern in figure 8.10b highlight very smooth bare land and some man-made traces such as roads and ponds.
Real estate experts usually evaluate land by its terrain; nowadays GIS modules and algorithms are very important tools for evaluation and decision making.
Terrain analysis and land assessment based on the DSM resolved local relief, slope and terrain curvature for parcel real estate evaluation.
Local relief is defined as the difference between the highest and lowest elevations occurring within an area. Local relief was introduced by Partsch (1911); Evans (1972) compared values of local relief determined over more than one size of area and recommended the use of a fairly large sample area.
In our study we calculated the local relief inside each parcel, together with all elevation statistics (minimum, mean and maximum). The local relief of a big parcel should not act the same as the local relief of a small parcel; we therefore divided the local relief value by the parcel area in square meters and multiplied it by 100 to reduce the number of decimals, equation (8.3).
V = ((Emax − Emin) / A) × 100 (8.3)

Where:
Emax – maximum elevation,
Emin – minimum elevation,
A – parcel area.
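A minimal sketch of this area-normalized relief index (the sample parcels below are made-up values for illustration only):

```python
# Sketch of the per-parcel relief index: elevation range divided by
# parcel area, times 100.

def local_relief_index(e_max, e_min, area_m2):
    """V = (Emax - Emin) / A * 100."""
    return (e_max - e_min) / area_m2 * 100.0

# Hypothetical parcels: (max elevation, min elevation, area in m^2).
parcels = {"small_steep": (1750.0, 1730.0, 900.0),
           "large_gentle": (1745.0, 1740.0, 12000.0)}
for name, (emax, emin, area) in parcels.items():
    print(name, round(local_relief_index(emax, emin, area), 3))
```

A small parcel with a large elevation range scores much higher than a large parcel with a small range, which is the behavior the normalization is meant to produce.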
The values of V express the elevation difference relative to the parcel area; they range from 1 to 59. High values correspond to parcels with high local relief and small areas, while large parcels yield lower values.
Slope is one of the most fundamental assessments of parcel and landscape characteristics, and is reported as a driving variable in many construction studies. A parcel with an extreme slope is rated poorly on the real estate market: accessibility is very difficult and it is not useful for investment.
The mountainous location of the study area and its proximity to the Zaarour country club for winter sports give an idea of the high slopes of the region. A variation of slope from 0.8% (plain terrain) to 96% (rocky cliff structures) characterized the evaluated parcels. The variety of parcel slopes results in a variety of real estate prices.
Terrain curvature computation is complicated because, in general, a surface has different curvatures in different directions. With GIS technology and software, terrain curvature has become a very easy parameter to calculate. The parameter describing the concavity and convexity of the surface is called curvature, and it gives a proper indication of the nature of the land surface within the parcel boundary.
Based on Digital Elevation Models, the most popular algorithms for deriving terrain curvature are those of Evans (1972), Shary (1995), Zevenbergen and Thorne (1987), and the modified Evans-Young method (Shary et al., 2002); Burrough and McDonnell (1998) gave preference to the Zevenbergen-Thorne algorithm. For the extraction of the parcel terrain curvatures we used the Zevenbergen-Thorne method, which is based on the representation of a surface by a partial quartic equation (Zevenbergen, Thorne, 1987).
The Zevenbergen and Thorne module applied to the DSM generated from the photogrammetric processing shows that the study area is 14% concave, 53% convex and 33% flat. These results support the observation that most of the parcels are hilly, as seen in the DSM of figure 8.10a.
All the DSM-extracted maps listed above constitute criteria for the real estate land evaluation map. McHarg describes the process of conducting multi-criteria analyses by categorizing and ranking values from a variety of thematic datasets, creating a transparency for each dataset, and then overlaying the transparencies to create a composite image. This final composite image is then used to evaluate suitable land uses in the design scenario (McHarg, 1995).
The value of this weighted overlay approach was understood and adopted by many researchers, who saw great potential in the early GIS programs. The weighted overlay approach allows researchers to create composite maps: in our project, a parcel evaluation map for real estate assessments. These composite maps make it possible for decision makers to consider multiple attributes in a single map (Hopkins, 1977).
Multiple Criteria Decision Making (MCDM) methods are designed to help stakeholders (real estate experts) make well-informed decisions based on various attributes (Jankowski, 1995).
Each criterion evaluation of local relief, slope and curvature takes a value from 1 to 5, corresponding to the number of classes, with class 1 the worst and class 5 the best.
The local relief and curvature raster datasets were classified using the geometric interval classification method of ArcMap. This classification method is used for visualizing continuous data; its specific benefit is that it works reasonably well on data that are not normally distributed.
Local relief calculated by formula (8.3) gives a range of values from 0.04 to 2.52, small values corresponding to small elevation intervals and high values to big ones. The highest score went to the small values, going gradually down to 1 for the interval of big values.
Negative curvature values are concave terrain forms and positive values are convex terrain forms. The extreme concave values of the first interval, -110 to -12, are found in deep valleys and their score is 1; the highest score went to the interval of planar surfaces.
Slope is a very important index in parcel evaluation; real estate experts and contractors inspect the slope as a first stage in their studies and reports: the higher the slope, the less suitable the terrain and the lower the price.
Slope was calculated with the ArcMap algorithm based on the DSM and classified manually as in table 8.9. The slope of the study area is moderate: 83% of the area falls in the slope interval between 0 and 29 percent. The highest score went to the lowest interval, 0-14%, and the lowest score of 1 to the extreme slopes.
Table 8.9: Percent Slope Ranges of DSM Data Sets, Associated with Scores.
Slope
Score Interval % Area %
5 0-14 43
4 15-29 40
3 30-44 12
2 45-59 4
1 > 60 1
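The manual classification of table 8.9 can be sketched as a simple reclassification function (illustrative only, not the ArcMap tool itself):

```python
def slope_score(slope_percent):
    """Map a percent slope to the 1-5 score of table 8.9."""
    if slope_percent < 15:    # 0-14 %
        return 5
    if slope_percent < 30:    # 15-29 %
        return 4
    if slope_percent < 45:    # 30-44 %
        return 3
    if slope_percent < 60:    # 45-59 %
        return 2
    return 1                  # > 60 %
```

Applied per pixel, this yields the classified slope layer used in the weighted overlay below.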
Fig. 8.11: Criteria classified maps: a) local relief, b) slope, c) curvature, d) resulting real estate evaluation.
The resulting three classified dataset maps of local relief, slope and curvature in figure 8.11 show the high score values in red and the low score pixels in dark green.
The method used is weighted overlay analysis, performed by overlaying
classified datasets, assigning a weight to each dataset, summing the values of
each vertical cell stack, and then evaluating the resulting composite map (Col-
lins et al., 2001).
The developed composite map of parcel evaluation for real estate requires the analysis of the criteria illustrated in table 8.9; in our study, the criteria are not all of equal importance, so values must be prioritized within each criterion map. Values were reclassified in the input criteria maps; slope has the biggest influence on terrain evaluation and assessment, which is why it took a weight of 60%, with 20% each for local relief and curvature. The criteria maps were weighted and aggregated together to produce the parcel evaluation composite map of figure 8.11d.
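A minimal sketch of the weighted overlay with the 60/20/20 weights from the text (the 2x2 score grids are made-up examples, not study data):

```python
import numpy as np

# Weights from the text: slope 60%, local relief 20%, curvature 20%.
WEIGHTS = {"slope": 0.6, "relief": 0.2, "curvature": 0.2}

def weighted_overlay(score_layers, weights):
    """Per-cell weighted sum of classified (1-5 score) raster layers."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    out = np.zeros_like(next(iter(score_layers.values())), dtype=float)
    for name, layer in score_layers.items():
        out += weights[name] * layer
    return out

layers = {"slope":     np.array([[5, 1], [4, 2]], dtype=float),
          "relief":    np.array([[4, 2], [5, 1]], dtype=float),
          "curvature": np.array([[5, 1], [3, 2]], dtype=float)}
composite = weighted_overlay(layers, WEIGHTS)
```

Each cell of `composite` is the score that zonal statistics then summarize per parcel.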
High scores in the composite map of figure 8.11d are found in the smooth plain areas, while low scores are found in extreme valleys. The cadastral vector map of the study area was draped on the resulting raster composite map, and zonal statistics (maximum, minimum and mean) of the scores were computed inside the parcel boundaries; the real estate evaluation map was then classified manually into 5 classes using the mean value within each parcel.
The parcel evaluation real estate map of the study area constitutes a real estate ranking coefficient: parcels of the first class do not influence the square meter price, but class five ranks five times better than class one.
The results of this study are satisfactory; better outcomes would only require all the reliable and necessary data relating to the case study. These data are easily introduced into the system and can be updated at any time, and more criteria can be added, such as neighborhood analysis, accessibility analysis, etc. Using the developed system, the results should therefore be closer to reality, since all criteria and their relative importance are taken into account at the same time.
The main aim of this research was to determine whether UAV DSMs can offer suitable material for real estate parcel evaluation. To achieve this, a UAV flight was carried out to produce a DSM for terrain analysis. The terrain analysis results then provided the parcel evaluation criteria used for the creation of a parcel evaluation real estate map.
The photogrammetric processing results are acceptable in terms of precision, since the level of precision depends only on the pixel size.
The resulting values of figure 8.12 could serve as a coefficient for parcel pricing and real estate market arrangement; the parcel evaluation real estate map is a basis for decision makers and experts.
The issue addressed is land evaluation for real estate. The ideal solution would be to incorporate into a GIS a module including the important classification methods as well as appropriate analysis methods, independent of the data and of the study area. It would represent a spatial decision support system dedicated to developing land evaluation maps for real estate parcel assessments.
With the fast evolution of GIS and geoinformatics methods, many scientists have worked on the development of other methods for calculating terrain roughness, such as the application of Fourier analysis (Stone and Dugundji, 1965), geostatistics (Herzfeld et al., 2000) and the fractal dimension of a surface (Elliot, 1989; Doumit, Pogorelov, 2017).
Among the first recognized traditional methods for quantifying roughness was the land surface roughness index (LSRI) developed by Beasom et al. (1983); this index is a function of the total length of topographic contour lines in a given area.
Riley et al. (1999) developed a terrain roughness index (TRI) derived from digital elevation models (DEM) and implemented in a geographical information system (GIS). TRI uses the sum of elevation changes within an area as an index of terrain roughness.
Based on the method developed by Hobson (1972) for measuring surface roughness in geomorphology, the Vector Roughness Measure (VRM) quantifies terrain roughness by measuring the dispersion of vectors orthogonal to the terrain surface.
In this study we tested the regression between VRM and TRI values at the six different levels and provided a correlation analysis between the VRM and TRI raster datasets. To examine their distributions within each scale, we generated scatterplots and calculated descriptive statistics (min, max, SD, skewness, kurtosis and R²) to characterize terrain heterogeneity at the different levels.
Our study is independent of DSM accuracy and precision; it tests roughness at six different levels expressed by the drone flight heights of 20, 40, 60, 120, 240 and 360 meters. The flight datum was taken from the same takeoff point of the drone for the six flights.
As this study is restricted to evaluating array-based geomorphometric methods for calculating surface roughness, an input DSM is required for further analysis. DSM selection criteria were based on spatial resolution, with a high-spatial-resolution DSM required in order to test the heterogeneity across a range of resolutions and within a study area presenting multi-scale roughness features.
The Terrain Roughness Index (TRI) is based on an index described by Riley et al. (1999) that calculates the total elevation change between a grid cell and its eight neighboring grid cells (table 8.10) by squaring the eight elevation differences, summing the squared differences, and taking the square root of the sum. Valentine et al. (2004) instead calculated the average of the absolute values of the eight elevation differences, using the TRI equation given as:

TRI = [ |Z(0,0) − Z(−1,−1)| + |Z(0,0) − Z(0,−1)| + |Z(0,0) − Z(1,−1)| + |Z(0,0) − Z(−1,0)| + |Z(0,0) − Z(1,0)| + |Z(0,0) − Z(−1,1)| + |Z(0,0) − Z(0,1)| + |Z(0,0) − Z(1,1)| ] / 8 (8.4)

where Z(0,0) is the elevation of the central cell and Z(i,j) are the elevations of its eight neighbors.
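Both TRI variants described above can be sketched for a single cell as follows (an illustration, not the GIS implementation used in the study):

```python
import numpy as np

def tri_valentine(dem, i, j):
    """Mean absolute elevation difference between cell (i, j) and its
    eight neighbors (Valentine et al., 2004 variant of TRI)."""
    center = dem[i, j]
    diffs = [abs(center - dem[i + di, j + dj])
             for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if not (di == 0 and dj == 0)]
    return sum(diffs) / 8.0

def tri_riley(dem, i, j):
    """Riley et al. (1999) variant: square root of the sum of squared
    differences to the eight neighbors."""
    center = dem[i, j]
    sq = [(center - dem[i + di, j + dj]) ** 2
          for di in (-1, 0, 1) for dj in (-1, 0, 1)
          if not (di == 0 and dj == 0)]
    return float(np.sqrt(sum(sq)))

# A single bump of 1 m on a flat plain.
bump = np.array([[1., 1., 1.],
                 [1., 2., 1.],
                 [1., 1., 1.]])
```

Flat terrain gives zero in both variants; any elevation change raises the index.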
Table 8.10: 3×3 grid of the TRI equation values.
Fig. 8.13: TRI maps at the different flight altitudes of 20, 40, 60, 120, 240 and 360 meters above the datum.
The high TRI values at FH-20 show details in ridges and water erosion traces; at FH-60 the structures are very smoothed; FH-120 shows the pixel boundaries; and at FH-360 the map is totally pixelated. The disappearance of the small structures with the loss of spatial resolution is very clear in this map, running from coarse to smooth and then to pixelated surfaces.
Based on a method developed for measuring surface roughness in geomorphology (Hobson, 1972), the surface of elevation values can be divided into planar triangles, very similar to a Triangulated Irregular Network (TIN) model, and the normals to these planes are represented by unit vectors. Values of vector mean strength (R) and dispersion (k) can be calculated for each square cell. In smooth areas with similar elevations, the vector strength is expected to be high and the vector dispersion low, since the vectors become parallel (figure 8.14). In rough areas, the nonsystematic variation in elevation results in low vector strength and high vector dispersion. The inverse of k can be a better representation of roughness (Mark, 1975).
Based on the slope and aspect definitions, the normal unit vector of every grid cell of a digital elevation model (DEM) is decomposed into x, y and z components.
Fig. 8.14: Vector dispersion method used to calculate surface roughness at different scales for
a topographical surface. Graphic from (Grohmann et al. 2011).
DSM resolution depends on the flight height. In figure 8.14 the topographic surface profile shows the terrain variation: at high spatial resolution the vectors are very dense and oriented in several directions, whereas at low DSM spatial resolution (for example FH-360) the vectors are far from each other and perpendicular to segments expressing geometric terrain forms.
The translation from the traditional vector dispersion method applied on topographic maps to the Vector Roughness Measure (VRM) calculated by GIS algorithms was done by applying the method and formulas of Veitinger et al. (2016). Based on the slope and aspect definitions, the normal unit vector of every grid cell of a Digital Surface Model is decomposed into x, y, and z components.
A resultant vector R is then obtained for every pixel by summing up the
single components of the center pixel and its neighbors using a moving window
technique.
|R| = √( (Σx)² + (Σy)² + (Σz)² )      (8.5)
The results show decreasing terrain heterogeneity and a trend toward terrain homogeneity through a high degree of smoothness, especially in the last three DSMs, FH-120, FH-240, and FH-360. VRM measures the variation in terrain independent of its overall gradient, so VRM is able to differentiate among terrain types.
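The decomposition and resultant-vector calculation above can be sketched as follows (an illustrative NumPy implementation of the general VRM idea, not the exact GIS tool; it assumes slope and aspect rasters in radians and a fixed 3×3 window):

```python
import numpy as np

def window_sum(a: np.ndarray) -> np.ndarray:
    """Sum over a 3x3 moving window, edges padded by replication."""
    p = np.pad(a, 1, mode="edge")
    return sum(p[1 + di:p.shape[0] - 1 + di, 1 + dj:p.shape[1] - 1 + dj]
               for di in (-1, 0, 1) for dj in (-1, 0, 1))

def vrm(slope_rad: np.ndarray, aspect_rad: np.ndarray) -> np.ndarray:
    """Vector Roughness Measure: decompose each cell's unit normal
    into x, y, z components from slope and aspect, sum them over a
    3x3 window, and take 1 - |R|/n as the dispersion-based roughness."""
    x = np.sin(slope_rad) * np.sin(aspect_rad)
    y = np.sin(slope_rad) * np.cos(aspect_rad)
    z = np.cos(slope_rad)
    # resultant vector magnitude |R| (equation 8.5)
    r = np.sqrt(window_sum(x) ** 2 + window_sum(y) ** 2 + window_sum(z) ** 2)
    return 1.0 - r / 9.0
```

On flat terrain all normals are parallel, |R| equals n, and the roughness is 0; scattered normals shrink |R| and push the value toward 1.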
In this work, we have tested two widely used methods: the Terrain Roughness Index (TRI) and the Vector Roughness Measure (VRM). TRI calculates the total change in elevation between a grid cell and its neighborhood, according to the algorithm of Valentine et al. (2004).
Table 8.11: Terrain Ruggedness Index statistical values at each level. Std. – standard deviation; Skew – skewness; n – number of cells in a raster grid.
The statistics of the TRI values at each flight height are listed in table 8.11. The values of Min., Max., Mean and Std. show that the TRI values increased with the flight height, hence with the scale. The r² values show that there is no homogeneity of TRI values with their neighborhoods in each layer, which is normal, especially for the high-spatial-resolution layers TRI-20, TRI-40, and TRI-60 with high n values.
For TRI-20 the data distribution is not symmetric because of the high skewness value of 1.202. The negative skewness values at TRI-120, TRI-240 and TRI-360 indicate data skewed left, while the positive values for the high-spatial-resolution layers TRI-20, TRI-40, and TRI-60 indicate data skewed right.
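The descriptive statistics discussed here can be reproduced with a small helper (an illustrative sketch; the book's tables were produced with GIS statistics tools):

```python
import numpy as np

def describe(values: np.ndarray) -> dict:
    """Descriptive statistics of a roughness raster, as in table 8.11:
    Min, Max, Mean, Std, skewness and cell count, ignoring NoData."""
    v = values[np.isfinite(values)].ravel().astype(float)
    mean, std = v.mean(), v.std()
    # moment coefficient of skewness: > 0 right-skewed, < 0 left-skewed
    skew = np.mean(((v - mean) / std) ** 3)
    return {"min": v.min(), "max": v.max(), "mean": mean,
            "std": std, "skew": skew, "n": v.size}
```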
The distributions of roughness values (VRM) for five of the levels were highly skewed to the right, with the highest proportion of VRM values around the mean, whereas the FH-360 values were skewed to the left.
Our results showed that TRI and VRM measured terrain heterogeneity largely independently of scale, and both indices exhibited a pattern of bias in that the minimum value of roughness increased with increasing spatial resolution.
A correlation analysis was carried out to understand the similarity between TRI and VRM.
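Such a pairwise comparison of two co-registered rasters can be sketched as (an illustrative helper, assuming both rasters share the same grid):

```python
import numpy as np

def raster_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two co-registered rasters
    (e.g. TRI and VRM at one flight height), ignoring NoData cells."""
    mask = np.isfinite(a) & np.isfinite(b)
    return float(np.corrcoef(a[mask], b[mask])[0, 1])
```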
Fig. 8.16: Scatterplots of TRI and VRM ruggedness values at all levels of detail: a) FH-20, b) FH-40, c) FH-60, d) FH-120, e) FH-240, f) FH-360.
High correlation was recorded at all flight heights. The scatterplots of figure 8.16 show a high degree of similarity in the small values at FH-20, FH-40 and FH-60, expressed in the dark elongated areas of figures 8.16a, b and c. At high flight heights the concentration of correlated values moves from the small to the mean values with a trend to the right (figure 8.16e), while the correlation values of TRI and VRM in figure 8.16f become more scattered and less dense due to a dilution of similarity resulting from the change in spatial resolution (pixel size).
Figure 8.16 shows that the two roughness indices are very similar and highly correlated, and that the degree of terrain roughness varies with the spatial resolution. Differences in the distributions of roughness measured by VRM and TRI reflect the characteristic physiography of the terrain.
Surface roughness in Earth sciences is used as an explanatory index. It depends upon exogenic and endogenic geographical processes. Many methods for measuring surface roughness, such as the area ratio, vector dispersion, and the standard deviation of the first and second terrain derivatives (elevation, slope, and curvature), have been implemented in GIS based on digital models.
The possibility of producing digital models at different spatial resolutions, especially UAV-based ones, allows fast and inexpensive multiscale analysis of surface roughness. The two applied indices, the Terrain Roughness Index (TRI) and the Vector Roughness Measure (VRM), express at different scale levels a variety of terrain heterogeneity at UAV flight heights of 20, 40, 60, 120, 240 and 360 meters.
Both indices show a roughness variation with scale and a transition from coarse to smooth between FH-60 and FH-120; the cartographic generalization influenced by flight height is very clear in figures 8.14 and 8.16. Our statistical and correlation analysis of the roughness indices proves that multiscale and multilevel UAV flight datasets provide a visual cartographic generalization, a transition from one scale level to another, and a live roughness-monitoring apparatus that leads to the detection of fine-scale and regional relief and performs at a variety of scales.
Researchers must be aware of the potential biases that originate in multiscale DSMs (different spatial resolutions) when TRI and VRM values are interpreted. All DSMs contain inherent inaccuracies due to errors in the original source data. The elevation accuracy of a DSM is greatest in flat terrain and decreases in steep terrain where roughness increases (Koeln et al., 1996). Terrain roughness is a complicated geomorphometric parameter; it can be calculated in many ways, under many names: roughness, microrelief, and others.
Rugosity is an index of surface roughness that is widely used as a measure of landscape structural complexity.
Rugosity is traditionally evaluated in situ across a two-dimensional terrain profile by draping a chain over the surface and comparing the length of the chain with the linear length of the profile (figure 8.17).
Fig. 8.17: The rugosity of a surface (e.g. the yellow terrain profile, № 1) is the ratio between the contoured distance (dashed line, № 2) and the planar distance (or area for three-dimensional data).
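The chain analogy translates directly to sampled data; this sketch computes the rugosity of a 2-D profile (illustrative only, assuming distance and elevation arrays in the same units):

```python
import numpy as np

def profile_rugosity(distance: np.ndarray, elevation: np.ndarray) -> float:
    """Chain-method rugosity of a 2-D terrain profile: contoured
    (draped-chain) length divided by the straight planar length."""
    dx = np.diff(distance)
    dz = np.diff(elevation)
    contoured = np.sum(np.sqrt(dx ** 2 + dz ** 2))  # length along the surface
    planar = distance[-1] - distance[0]             # straight-line length
    return float(contoured / planar)
```

A flat profile gives exactly 1; any relief raises the ratio above 1.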
In the early 1970s the standard surface ratio (SR) method for measuring rugosity was introduced (Risk, 1972; Dahl, 1973). Rugosity was calculated by projecting the surface onto a horizontal plane (Lundblad et al., 2006; Wright, Heyman, 2008; Friedman et al., 2012), thereby coupling rugosity with the slope at the scale of the surface:
Rugosity = A(contoured) / A(planar)      (8.7)
Du Preez (2014) introduced the arc-chord ratio (ACR). The method replaces the horizontal plane with a plane of best fit (POBF), where the POBF is a function of boundary data interpolation (Du Preez, Tunnicliffe, 2012). The ACR method can be used in multi-scale analyses, an important attribute of a spatial analysis, as morphological processes act at a variety of spatial scales (Levin, 1992) and differ in effects and importance with scale (Wu, 2013).
Building on Du Preez (2014), Jeff Jenness developed a technique that operates on a 3×3 neighborhood, using the triangulated area of each adjacent cell and applying the Pythagorean theorem to compute the surface area. By default, the planar area of each grid cell is corrected by dividing the cell area by the cosine of the slope (Jenness, 2004).
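The cosine correction reads, as an illustrative one-liner (assuming slope in degrees):

```python
import numpy as np

def surface_area(cell_area: float, slope_deg: np.ndarray) -> np.ndarray:
    """Approximate the true surface area of each grid cell by dividing
    its planar area by the cosine of the slope (cf. Jenness, 2004)."""
    return cell_area / np.cos(np.radians(slope_deg))
```

A flat cell keeps its planar area, while a cell sloping at 60° doubles it.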
In our study ACR was calculated with a GIS tool for ArcGIS® software, available for download from Du Preez (2014); from the six generated DSMs we calculated the arc-chord ratio (ACR). The ACR rugosity index is a measure of three-dimensional structural complexity, defined as the contoured area of the surface divided by the area of the surface orthogonally projected onto a plane of best fit.
The arc-chord ratio (ACR) method calculates the planar distance by projecting the surface boundary data (figure 8.17, № 1) onto a plane of best fit (POBF; figure 8.17, dash-dotted line, № 3), effectively decoupling rugosity from the slope at the scale of the surface. ACR is calculated by creating two TINs: a contoured surface and a planar one representing the plane of best fit (POBF) (figure 8.18a). The POBF is a function (interpolation) of the boundary data only of the area of interest; here the area of interest is the boundary of the study area. The surface area of the first TIN is divided by that of the second TIN to obtain a single ACR value for the area of interest (Du Preez, 2014).
Fig. 8.18: a) ACR simultaneous surfaces leading to the horizontal planar one; b) an example of ACR surfaces of FH-20.
As a first step, the six DSMs were converted from raster to TIN to form contoured surfaces at different scales.
Fig. 8.19: TIN models of contour surface at the six flight heights.
All six TIN models expressed the terrain morphology, with some variations detected in the colored contour lines and a generalization in the number of triangles.
In step two, the contoured surface was translated to a plane of best fit (POBF), decoupling rugosity from the slope at the scale of the surface (Du Preez, Tunnicliffe, 2012; Friedman et al., 2012).
Figure 8.20 shows six similar planar surfaces sharing the same trend of values, obtained by simplifying the elevation values of the contoured surfaces.
The innovation of the ACR method lies in the analysis used to generate the POBF: identifying and isolating the boundary data (step three). Figure 8.18a shows an illustrated example, the boundary data of FH-20 in the triangulated irregular network data frame.
A linear polynomial interpolation of the boundary data at the six levels was then used to generate the surface datasets (step four).
Fig. 8.20: The POBF of the six flight heights.
Some software packages are unable to interpolate the actual planar area; an alternative is to interpolate the angle of the POBF and use the cosine equation (and the horizontal planar area) to extract the planar area (step five) (Du Preez, 2014).
Finally, the ACR rugosity index is solved by applying equation 8.7 with the contoured and planar areas (step six).
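The steps above can be condensed into a gridded sketch (an illustrative approximation of the ArcGIS tool: the surface area is taken from cellwise slopes rather than a TIN, and the POBF is fitted by least squares to x, y, z boundary samples supplied by the caller):

```python
import numpy as np

def acr(dsm: np.ndarray, cell: float, boundary: np.ndarray) -> float:
    """Arc-chord ratio: contoured surface area divided by the area of
    the plane of best fit (POBF) fitted to boundary points only.
    `boundary` is an (n, 3) array of x, y, z boundary samples."""
    # contoured surface area from cellwise gradients
    gy, gx = np.gradient(dsm, cell)
    contoured = np.sum(cell ** 2 * np.sqrt(1.0 + gx ** 2 + gy ** 2))
    # least-squares plane z = a*x + b*y + c through the boundary data
    A = np.c_[boundary[:, 0], boundary[:, 1], np.ones(len(boundary))]
    (a, b, c), *_ = np.linalg.lstsq(A, boundary[:, 2], rcond=None)
    # area of the POBF over the same footprint, via the plane's tilt
    footprint = dsm.size * cell ** 2
    planar = footprint / np.cos(np.arctan(np.hypot(a, b)))
    # the ratio (equation 8.7 with the POBF as reference plane)
    return float(contoured / planar)
```

For a surface that is itself an inclined plane, the POBF coincides with it and the ACR is 1, which is exactly the decoupling from slope that motivates the method.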
Following the arc-chord ratio (ACR) of Du Preez and Tunnicliffe (2012), we computed the ratio between the three-dimensional surface area and the planar area of the surface; this tool uses a novel methodology to develop a surface-area dataset. The output values represent ratios between the surface area and the planar area, typically ranging from 1 in flat areas to 4 in areas of high variation.
The first step of the Du Preez methodology is the conversion from raster to TIN (figure 8.19 shows the six similar TIN models). Because the same study area at different spatial resolutions leads to visual data similarity, a statistical comparison was made to test this degree of similarity (table 8.13).
The statistical values of table 8.13 show a decrease in the number of triangles from 321 to 41 due to the decreasing spatial resolution; the differences in average maximum and minimum elevations are due to the interpolated predicted values.
Table 8.13: Elevation TIN statistics: quantity of triangles, average minimum and maximum elevations and the average slope.
The number of triangles from the elevation area (table 8.13) to the planar area (table 8.14) is reduced more than twenty times for the high-spatial-resolution data of the 20, 40 and 60 meter flight heights, whereas the low-spatial-resolution data, with few triangles in the contoured areas, are reduced less than ten times in the planar-area TIN models.
The averages of the maximum and minimum elevations at all levels are reduced by about 2 meters between the surface and planar areas, and hence the average slope is reduced by about 2 percent.
Table 8.14: POBF (planar TIN) statistics: quantity of triangles, average minimum and maximum elevations and the average slope.
In the planar-area TIN, the average values of the minimum and maximum elevations in all six flights are reduced together with the number of triangles due to the transition from contoured to planar area.
The variation in values between table 8.13 and table 8.14 shows an unstable change in elevations and slope.
The first part of the transition, from the contoured-area TIN to the planar-area TIN (POBF), is very similar to a trend analysis, simplifying the complexity of the values while conserving the same datum. In contrast, the second part of the transition, from the surface area to the planar one, records a loss of the initial datum elevation down to zero (table 8.15).
Table 8.15: Surface area statistics at different flight heights.
The high-spatial-resolution data of 20, 40 and 60 meters have sub-meter surface-area values, approximately doubling between flight heights.
The surface area is designed to determine the amount of similarity between the tested area surface and the planar surface; it is hypothesized that the surface area increases with surface irregularity. There is, however, a definite interplay between the number and magnitude of terrain irregularities, such that similar surface-area estimates can arise from different combinations of these two variables.
The values of the planar area at different scales are practically the same, with small variations at low flight heights. Based on the results of table 8.16, especially the similar mean values, we can answer the question constituting the target of our study: yes, the transition of the multiscale Digital Surface Models to planar areas gives the same results.
Visual and statistical results prove the similarity of the multiscale planar data; a regression analysis was run to test this similarity of the planar surfaces across the scales.
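The r² values reported for such a regression can be computed with a small helper (an illustrative sketch, assuming two flattened datasets of equal length):

```python
import numpy as np

def r_squared(reference: np.ndarray, other: np.ndarray) -> float:
    """Coefficient of determination r² of a linear fit between a
    reference dataset (e.g. FH-20) and another flight height."""
    x, y = reference.ravel(), other.ravel()
    slope, intercept = np.polyfit(x, y, 1)
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```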
Fig. 8.21: Scatterplots of the planar area at different scales.
The correlation analysis used the highest-spatial-resolution data, FH-20, as a reference against the other datasets. Figure 8.21a is a test scatterplot with the same FH-20 dataset on the X and Y axes, giving one hundred percent similarity. The graph of FH-40 against FH-20 gives 39.77 % similarity (figure 8.21b), and the r² values of FH-60, FH-120, FH-240 and FH-360 against FH-20 are less than 11 %. The cores of the scatterplots for the high-spatial-resolution datasets lie in the lower-left corners, moving positively along the Y axis in FH-120 and then falling negatively for FH-360.
The correlation analysis, contrary to the visual and statistical ones, showed a difference in the multiscale planar areas.
The present study provides a multiscale DSM analysis by adapting and improving the ACR geoprocessing model tools and a step-by-step application of the Du Preez (2014) module. Improving standard methods for the detection and investigation of geomorphological patterns at different spatial resolutions will lead to better scientific information for generalization, terrain analysis, management and conservation initiatives.
We can conclude from this study that scale and resolution effects on terrain data form an important issue in geographic research, and that UAV-based DSMs from high flights should be tested before use. It is essential to have a good understanding of the effects of scale on the analysis results: each elevation dataset has its own surface-to-planar result, terrain rugosity depends on the spatial resolution, and visual analysis should be followed by a correlation analysis.
Chapter 9 Drone regulations and buyer's guide
Drones Regulations
When it comes to flying professional drones within the law, the rules around the world vary; to date there are no unified regulations for all countries. Aviation authorities around the world are integrating unmanned aircraft into civilian airspace, and each jurisdiction has its own rules and regulations. In the USA, for example, you must acquire a Certificate of Authorization (COA) through the Federal Aviation Administration (FAA). The requirements of a COA include (Reg Austin, 2010):
Flights below 300 meters.
Daytime operation under Visual Flight Rules (VFR).
Range limited to Visual Line of Sight (VLOS).
Operation greater than 4 km from an airport.
A flight altitude ceiling of 120 m above the take-off point is common.
Maximum UAV take-off weight/weight classes (a lighter weight increasingly equates to more flexible usage).
No-fly zones, such as within several km of an airport, dense urban areas and military sites.
An operator certificate/license of some description is often required.
A second drone observer is sometimes required.
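As an illustration only, limits of this kind can be collected into a simple pre-flight check (the thresholds are the example values quoted above, not legal advice; the function and parameter names are hypothetical):

```python
# Illustrative pre-flight check against COA-style limits.
MAX_ALTITUDE_M = 120           # common ceiling above the take-off point
MIN_AIRPORT_DISTANCE_KM = 4.0  # minimum distance from an airport

def preflight_ok(planned_altitude_m: float, airport_distance_km: float,
                 daytime: bool, within_vlos: bool) -> bool:
    """Return True only if the planned flight satisfies every limit."""
    return (planned_altitude_m <= MAX_ALTITUDE_M
            and airport_distance_km >= MIN_AIRPORT_DISTANCE_KM
            and daytime
            and within_vlos)
```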
Investigations reveal that UAV regulations are subject to national legislation and focus on three key issues:
1) regulating the use of airspace by UAVs, as they pose a serious danger to manned aircraft;
2) setting operational limitations in order to assure appropriate flights;
3) tackling administrative procedures for flight permissions, pilot licenses and data-collection authorization in order to address public safety and privacy issues.
Approximately half of all countries do not provide any information regarding the use of UAVs for civil applications; some countries have UAV regulations, while in other countries the use of UAVs is prohibited (Stöcker et al., 2017).
Flight safety
A drone operator is responsible for ensuring the safety of every operation, including the protection of nearby people, animals, property and the environment in general; the operator should evaluate weather conditions and choose suitable, safe take-off and landing locations (Reg Austin, 2010).
For a safe flight the drone operator must:
1. Fly within the line of sight (LOS).
2. Not interrupt the LOS.
3. Not fly in wind over 30 km/h.
4. Not fly in fog.
5. Not fly in rain.
6. Not fly for long periods (the ESC gets hot).
7. Stay away from birds.
The ICAO is an international actor that serves as a collaboration and communication platform for national civil aviation authorities. It is concerned with fundamental regulatory frameworks on a global scale and provides information material and Standards and Recommended Practices and Procedures for Air Navigation Services (ICAO, 2011). One important step, taken in Riga in 2015, was the publication of the Riga Declaration on Remotely Piloted Aircraft. The declaration highlights five main principles that should guide the regulatory framework in Europe:
1) Drones need to be treated as new types of aircraft with proportionate rules based on the risk of each operation;
2) EU rules for the safe provision of drone services need to be developed now;
3) Technologies and standards need to be developed for the full integration of drones into European airspace;
4) Public acceptance is key to the growth of drone services;
5) The operator of a drone is responsible for its use (Riga Declaration, 2015).
In some countries the required distance bounds avoid a vague interpretation and strictly determine the term visual line-of-sight (VLOS). Some cases further include extended visual line-of-sight (EVLOS) operations. Here the pilot uses an additional observer or remote pilots to keep visual contact with the UAV (figure 9.1). The US, UK, Italy and South Africa particularly mention the possibility of EVLOS operations within their UAV regulations. Furthermore, some countries allow beyond visual line-of-sight (BVLOS) flights.
BVLOS flights, which fall outside the general permission for the commercial utilization of UAVs, require either special flight conditions or exceptional approvals (Stöcker, 2017).
Fig. 9.1: Schematic distinction between UAV flight ranges.
Besides this, the majority of cases demand a pilot certification or license. A certificate is usually granted by intermediaries such as authorized training centers or UAV manufacturers and entails basic practical and theoretical training of the pilot. In contrast, the procedure and requirements to obtain a UAV pilot license usually involve sophisticated aeronautical background knowledge, and the license is issued by national aviation authorities (Stöcker, 2017).
Conclusion
Today, UAVs can be used as a precise, automated and computer-controlled data acquisition and measurement platform. This work deals with the challenging task of using UAV systems as a photogrammetric data acquisition platform.
The main goal of this book is the identification of a generic workflow using a UAV as a photogrammetric data acquisition platform, investigated in a real application and focusing on the precision and resolution of the generated photogrammetric products, such as elevation models, orthophotos and textured 3D models.
The main motivation of this book is to generate high-resolution information about inaccessible areas: maps, orthoimages, general topography, detailed elevation models, and the extraction of dangerous obstacles.
The extraction of terrain, orthoimages and textured 3D models from UAV images or other sensor data can be applied to all kinds of hazards and catastrophic or environmental disasters, including 3D documentation of the environment, cultural heritage sites, etc. Drone technologies, coupled with multispectral, hyperspectral and LiDAR systems, allow cartographers to produce better mapping.
The evaluation of low-cost UAV systems showed that these systems are easily controllable and usable for scientific research. Many challenges still remain as UAVs are adopted in the geospatial industry; this book showed that flying the photography did not present as many challenges as the post-processing of the resulting spatial data.
This book demonstrated that unmanned aerial vehicles (UAVs) can be used to produce geospatial data for geographical analysis. The modular nature of UAS and the ability to produce current spatial information (orthophotos, 3D models) mean that we no longer need to turn to large mapping contracts because of the economies of scale offered by the latter.
The key to using any photography or imagery, whether aerial or satellite-derived, is to use the data only in a sensible and meaningful way, understanding the error levels and their potential impact. Geographic correction can be a complicated process, full of pitfalls; it should transform a better image into useful GIS information.
The acquired data has great potential in GIS projects. Moreover, our study also showed its usability for teaching students to use new technologies. In future work UAVs will be integrated as a fixed part of the remote sensing and GIS courses.
References
1. Abbeel P., Coates, A., Quigley, M. and Ng, A. Y. (2007). An application of reinforce-
ment learning to aerobatic helicopter flight, In: NIPS, 19.
2. Abdel-Aziz Y. I. and Karara H.M. (1971). Direct linear transformation into object space
coordinates in close-range photogrammetry. In Symposium on Close-Range Photo-
grammetry.
3. Adams L.P. (1980). The Use of Non Metric Cameras in Short Range Photogrammetry.
14th Congress of the International Society for Photogrammetry, Commission V, Ham-
burg, Germany.
4. Agisoft, (2013). Agisoft PhotoScan User Manual: Professional Edition. Version 1.0.0
edn.
5. Agüera-Vega F., Carvajal-Ramírez F., Martínez-Carricondo P. (2016). Accuracy of digital surface models and orthophotos derived from unmanned aerial vehicle photogrammetry. J. Surveying Eng. 04016025.
7. Arrell K, Fisher P.F, Tate N.J, Bastin L (2007) A fuzzy c-means classification of eleva-
tion derivatives to extract the morphometric classification of landforms in Snowdonia,
Wales. Computers & Geosciences 33, pp.1366-1381.
8. Atkinson K. B. (1996). Close Range Photogrammetry and Machine Vision. Whittles
Publishing, Roseleigh House, Latheronwheel, Caithness, Scotland.
9. Austin Reg. (2010) Unmanned aircraft systems UAVs design development and deploy-
ment, John Wiley & Sons Ltd, p.332
10. Autodesk. (2014). 123D Catch.www.123dapp.com/catch.
11. Barka I., Vladovic J., Malis F. (2011) Landform classification and its application in
predictive mapping of soil and forest units. Proceedings: GIS Ostrava 2011.
12. Barnes G., Volkman W., Sherko R., Kelm K. (2014). Design and Testing of a UAV-
based Cadastral Surveying and Mapping Methodology in Albania. World bank conf.
Land and Poverty, The World Bank - Washington DC, March 24-27.
13. Beasom S.L., Wiggers E.P., Giordono R. J. (1983). A technique for assessing land sur-
face ruggedness. Journal of Wildlife Management. 47: pp. 1163–1166.
14. Bellingham J.S., Tillerson M., Alighanbari M. and How J. P. (2003). Cooperative Path
Planning for Multiple UAVs in Dynamic and Uncertain Environments, In: 42nd IEEE
Conference on Decision and Control, Maui, Hawaii (USA).
15. Bendea H. F., Chiabrando F., Tonolo F.G. and Marenchino D. (2007). Mapping of ar-
chaeological areas using a low-cost UAV the Augusta Bagiennorum Test site, In: XXI
International Symposium, Athens, Greece.
16. Boufama B., Mohr R., Veillon F. (1993) Euclidean constraints on uncalibrated recon-
struction. Proceeding of the Fourth International Conference on Computer Vision, Ber-
lin, Germany, 466-470.
17. Brown D.C. (1976). The bundle adjustment—progress and prospects. Int. Archives Pho-
togrammetry, 21(3), Paper number 3–03 (33 pages).
18. Burrough P. A., McDonell R.A. (1998). Principles of Geographic Information Systems.
Spatial Information and Geostatistics, Chapter 8: Spatial Analysis Using Continuous
Fields, ISBN 0-19-823365-5.
19. Burrough P.A. (1983). Multiscale sources of spatial variation in soil- the application of
fractal concepts to nested levels of soil variation, J. Soil Sci., 34, 577.
20. Chang K., Tsai B. (1991) The effect of DEM resolution on slope and aspect mapping. Cartography and Geographic Information Science 18, pp. 69–77.
21. Collins M.G., Steiner F.R., Rushman M.J. (2001). Land-Use Suitability Analysis in the United States: Historical Development and Promising Technological Achievements. Environmental Management, pp. 611-621.
22. Cooper M.A. R. and Cross P.A. (1991). Statistical concepts and their application in pho-
togrammetry and surveying. Photogrammetric Record, 13(77):645–678.
23. Dahl A.L. (1973). Surface area in ecological analysis: quantification of benthic coral-
reef algae. Mar Biol. 23, pp.239–249.
24. Dai F., & Lu M. (2010). Assessing the accuracy of applying photogrammetry to take
geometric measurements on building products. Journal of Construction Engineering and
Management, 136(2), 242–250. Reston, VA: ASCE.
25. Dellaert F., Seitz S.M., Thorpe C.E. & Thrun, S. (2000) Structure from motion without
correspondence. Proceedings IEEE Conference on Computer Vision and Pattern
Recognition. CVPR 2000 (Cat. No.PR00662). pp. 557-564. IEEE Comput. Soc, Hilton
Head Island, SC, USA.
26. Deng Y., Wilson J.P., Bauer B.O. (2007) DEM resolution dependencies of terrain at-
tributes across a landscape. International Journal of Geographical Information Science
21, pp. 187–213.
27. Dissanayake G., Sukarieh S., Nebto E. and Durrant Whyte H. (2001) “The Aiding of a
Low-Cost Strap Down Inertial Measurement Unit Using Vehicle Model Constraints for
Land Vehicle Applications”, IEEE transactions on robotics and automation, Vol. 17.
28. Doumit J.A., Kiselev E.N. (2016). Structure from motion technology for macro scale ob-
jects cartography// Breakthrough scientific research as the modern engine of sciences,
St. Petersburg.: Publishers “Cult Inform Press”. pp 42-47. ISBN 978-5-8392-0627-4.
29. Doumit J.A., Pogorelov A.V. (2017). Multi-scale Analysis of Digital Surface Models
Based on UAV Datasets. Modern Environmental Science and Engineering (ISSN 2333-
2581), Volume 3, No. 7, pp. 460-468.
30. Du Preez, C. (2014). "A new arc-chord ratio (ACR) rugosity index for quantifying
three-dimensional landscape structural complexity." Landscape Ecology. 30, pp. 181–
192.
31. Du Preez C., Tunnicliffe V. (2011). Shortspine thornyhead and rockfish (Scorpaenidae) distribution in response to substratum, biogenic structures and trawling. Mar. Ecol. Prog. Ser. 425, pp. 217-231. doi:10.3354/meps09005.
32. Eastman J.R., (1985). Single-Pass Measurement of the Fractional Dimensionality of
Digitized Cartographic Lines. paper presented to the Canadian Cartographic Associa-
tion, Annual Meeting, June 1985.
33. Egbert J. and Beard, R. W. (2007). Road following control constraints for low altitude
miniature air vehicles, In: American Control Conference, New York, U.S., 353-358.
34. Eisenbeiss H. (2009). UAV photogrammetry. Dissertation ETH No. 18515, Institute of Geodesy and Photogrammetry, ETH Zurich, Switzerland, doi:10.3929/ethz-a-005939264.
35. Eisenbeiss H. (2011). The Potential of Unmanned Aerial Vehicles for Mapping, Zurich.
Photogrammetric Week '11 Dieter Fritsch (Ed.) Wichmann/VDE Verlag, Belin & Of-
fenbach.pp135-144.
36. Elliot J. K. (1989). An investigation of the change in surface roughness through time on
the foreland of Austre Okstindbreen, North Norway,” Comput. Geosci., vol. 15, no. 2,
pp. 209–217.
37. Evans I. (2003) Scale-specific landforms and aspects of the land surface. In: Evans, I.S.,
Dikau R., Tokunaga E, Ohmori H, Hirano M. (Eds.), Concepts and Modelling in Geo-
morphology: International Perspectives. Terrapub, Tokyo, pp. 61–84.
38. Evans S. (1990) “General Geomorphometry”. In: Goudie, A.S., Anderson M., Burt T.,
Lewin J., Richards, Whalley K., Worsley B., Geomorphological Techniques. 2nd edi-
tion. Unwin Hyman, London, pp.44–56.
39. Evans I.S. (1972) General geomorphometry, derivatives of altitude, and descriptive sta-
tistics. In: Chorley, R.J. (Ed.), Spatial Analysis in Geomorphology. Methuen, London,
pp. 17–90.
40. Falkner E., Morgan D. (2002) Aerial Mapping: Methods and Applications, (Mapping
Science) 2nd Edition, by CRC Press LLC, pp.195. ISBN 1-56670-557-6.
41. Faugeras O. D., Luong Q., and Maybank S. 1992 Camera self-calibration: Theory and
experiments. In Proc. European Conference on Computer Vision, LNCS 588, pages
321–334.
42. Florinsky I.V., Kuryakova G.A. (2000) Determination of grid size for digital terrain
modelling in landscape investigations-exemplified by soil moisture distribution at a mi-
cro-scale. International Journal of Geographical Information Science 14, pp. 815–832.
43. Friedman A., Pizarro O., Williams S.B., Johnson-Roberson M. (2012). Multi-scale
measures of rugosity, slope and aspect from benthic stereo image reconstructions. PLoS
One 7, 12, pp. 1–14.
44. Furukawa Y. & Ponce J. (2009). Accurate, Dense, and Robust Multi-View Stereopsis.
IEEE Transactions on Pattern Analysis and Machine Intelligence. IEEE Computer
Society Press.
45. Galparsoro I., Borja A., Bald J., Liria P., Chust G. (2009). Predicting suitable habitat for
the European lobster (Homarus gammarus) on the Basque continental shelf (Bay of
Biscay), using Ecological-Niche Factor Analysis. Ecol Model 220, pp. 556–567.
46. Glover J.M. (2014). Drone University. USA, Middletown, DE, p. 134.
47. Gonzalez J.P., Nagy B. and Stentz A. (2006). The Geometric Path Planner for Navi-
gating Unmanned Vehicles in Dynamic Environments. In: Proceedings ANS 1st Joint
Emergency Preparedness and Response and Robotic and Remote Systems, Salt Lake
City (Utah), USA.
48. Gonzalez R.C. and Wintz P. (1987). Digital Image Processing. Addison-Wesley,
Menlo Park, CA, p. 414.
49. Goodchild M.F. and Mark D.M. (1987). The fractal nature of geographic phenomena.
Ann. Assoc. Am. Geogr., 77, p.265.
50. Granshaw S. (1980). Bundle adjustment methods in engineering photogrammetry. Pho-
togrammetric Record, 10(56), pp.181–207.
51. Grohmann C., Smith M., Riccomini C. (2011). Multiscale analysis of topographic sur-
face roughness in the Midland Valley, Scotland, IEEE T. Geosci. Remote, 49, pp.
1200–1213.
52. Gruner H. (1977). “Photogrammetry: 1776-1976”, Photogrammetric Engineering and
Remote Sensing, 43(5), pp.569-574.
53. Haarbrink, R. B. and Koers, E. (2006). Helicopter UAV for photogrammetry and rapid
response, In: International Archives of Photogrammetry, Remote Sensing and Spatial
Information Sciences, ISPRS Workshop of Inter-Commission WG I/V, Autonomous
Navigation, Antwerp, Belgium.
54. Hartley R.I., Zisserman, A. (2003): Multiple View Geometry in Computer Vision.
Cambridge University Press
55. Hengl T., Evans, I.S. (2009). Mathematical and digital models of the land surface. In:
Hengl T., Reuter, H.I. (Eds.), Geomorphometry — Concepts, Software, Applications.
Developments in Soil Science, vol. 33. Elsevier, Amsterdam, pp. 31–63.
56. Hengl T. (2006) Finding the right pixel size. Computers & Geosciences 32, pp. 1283–
1298.
57. Herzfeld U.C., Mayer H., Feller W., Mimler M. (2000) Geostatistical analysis of glaci-
er-roughness data, Ann. Glaciol., vol. 30, no. 1, pp. 235–242.
58. Heyden A. and Astrom K. (1997). Euclidean reconstruction from image sequences with
varying and unknown focal length and principal point. In Proc. IEEE Conference on
Computer Vision and Pattern Recognition.
59. Hobson R. D. (1972). Surface roughness in topography: quantitative approach. in R. J.
Chorley, editor. Spatial analysis in geomorphology. Harper and Row, New York, New
York, USA, pp. 221–245.
60. Hongxia C., Zongjian L., Guozhong S. (2007). Non-metric CCD Camera Calibration for
Low Altitude Photogrammetric Mapping. In: The Eighth International Conference on
Electronic Measurement and Instruments (ICEMI'2007).
61. Hopkins, L. (1977). Methods for Generating Land Suitability Maps: A Comparative
Evaluation. Journal of the American Institute of Planners, 380-400.
62. ICAO (2011). Unmanned Aircraft Systems (UAS). Published in separate Arabic, Chi-
nese, English, French, Russian and Spanish editions by the International Civil Aviation
Organization, University Street, Montréal, Quebec, Canada H3C 5H7, p. 38. ISBN 978-
92-9231-751-5.
63. Irschara A., Kaufmann V., Klopschitz M., Bischof H., Leberl F. (2010). Towards fully
automatic photogrammetric reconstruction using digital images taken from UAVs. In:
ISPRS TC VII Symposium – 100 Years ISPRS, Vienna, Austria, IAPRS, Vol. XXXVIII,
Part 7A, pp. 65-70.
64. Issod C.S. (2015). Getting Started with Hobby Quadcopters and Drones. USA, Mid-
dletown, DE, p. 97.
65. Iwahashi J. & Pike R.J. (2007). “Automated classifications of topography from DEMs by
an unsupervised nested-means algorithm and a three-zone geometric signature”. Geo-
morphology, vol. 86, no. 3/4, pp. 409–440.
66. Jankowski P. (1995). Integrating geographical information systems and multiple crite-
ria decision-making methods. Geographical Information Systems 9(3), pp. 251-273.
DOI:10.1080/02693799508902036.
67. Jenness J. (2005) Topographic Position Index. Extension for ArcView 3.x.
http://jennessent.com.
68. Jenness J. (2010) “Topographic Position Index (tpi_jen.avx) extension for ArcView
3.x”, v. 1.3a. Jenness Enterprises, http://www.jennessent.com/arcview/tpi.htm.
69. Jenness J. (2004). "Calculating landscape surface area from digital elevation models."
Wildlife Society Bulletin 32(3), pp. 829-839.
70. Jenson S.K., Domingue J.O. (1988). Extracting topographic structure from digital eleva-
tion model data for geographic information system analysis. Photogrammetric Engi-
neering and Remote Sensing, 54, pp. 1593–1600.
71. Kaaniche K., Champion, B., Pegard, C. and Vasseur, P. (2005). A vision algorithm for
dynamic detection of moving vehicles with a UAV, In: IEEE International Conference
on Robotics and Automation, pp. 1878-1883.
72. Karara H.M. (1989). Non-Topographic Photogrammetry. American Society for Photo-
grammetry and Remote Sensing.
73. Karara H.M. and Faig W. (1980). An Expose on Photographic Data Acquisition Sys-
tems in Close-Range Photogrammetry. In: 14th Congress of the International Society for
Photogrammetry, Commission V, Hamburg, Germany.
74. Koeln G.T., Cowardin L.M., Strong L.L. (1996). Geographical information systems. in
T.A. Bookhout, ed. Research and management techniques for wildlife and habitats.
Fifth ed., rev. The Wildlife Society, Bethesda MD., pp. 540-566.
75. Lafay M. (2015). Drones for Dummies. John Wiley and Sons, Inc., Hoboken, New Jersey,
p. 272.
76. Lam, N. and Quattrochi, D.A. (1992). On the issues of scale, resolution, and fractal
analysis in the mapping sciences, Prof. Geogr., pp.44-88.
77. Leibowitz D., College M. (2001). Camera Calibration and Reconstruction of Geometry
from Images. Robotics Research Group Department of Engineering Science University
of Oxford Trinity Term, p. 209.
78. Levin S.A. (1992). The problem of pattern and scale in ecology. Ecology. 73, pp. 1943–
1967.
79. Li Z., (1993). Mathematical models of the accuracy of digital terrain model surfaces
linearly constructed from gridded data, Photogrammetric Record, (82), pp. 661-674.
80. Lowe, D. (2004). Distinctive image features from scale-invariant key points. Interna-
tional Journal of Computer Vision 60, pp. 91–110.
81. Lucieer A., Watson C., Turner D. and Wallace L. (2012). “Development of a UAV-LiDAR
System with Application to Forest Inventory”. School of Geography and Environmental
Studies, University of Tasmania. http://www.mdpi.com/2072-4292/4/6/1519.
82. Luhmann, T, Robson, S, Kyle, S, and Harley, I. (2006). Close Range Photogrammetry:
Principles, Methods, and Applications. Caithness: Whittles Publishing.
83. Lundblad E.R., Wright D.J., Miller J., Larkin E.M., Rinehart R., Naar D.F., Donahue B.T.,
Anderson S.M., Battista T.A. (2006). A benthic terrain classification scheme for American
Samoa. Mar Geod., 29, pp. 89–111.
84. MacMillan R.A, Shary P.A. (2009). Landforms and landform elements in geomor-
phometry. In: Hengl, T., Reuter, H.I. (Eds.), Geomorphometry—Concepts, Software,
Applications. Developments in Soil Science, vol. 33. Elsevier, Amsterdam, pp. 227–
254.
85. Mark D.M. (1975). Geomorphometric parameters: A review and evaluation. Ge-
ografiska Annaler, Ser. A, Phys. Geography, vol. 57, no. 3/4, pp. 165–177.
86. Mark D.M. and Aronson P.B. (1984). Scale-dependent fractal dimensions of topo-
graphic surfaces: an empirical investigation, with applications in geomorphology and
computer mapping. Math. Geol., 11, p. 671.
87. Marshall A.R. (1989). Network design and optimization in close range photogramme-
try. UNISURV S-36, School of Surveying, University of New South Wales, Australia,
p. 249.
88. Masahiko N. (2007). UAV-borne mapping system for river environment. In: 28th Asian
Association of Remote Sensing Conference, Kuala Lumpur, Malaysia.
89. Matthews N. A. (2008). Aerial and Close-Range Photogrammetric Technology: Provid-
ing Resource Documentation, Interpretation, and Preservation. Technical Note 428.
U.S. Department of the Interior, Bureau of Land Management, National Operations
Center, Denver, Colorado. 42 pp.
90. McCormick M.I. (1994). Comparison of field methods for measuring surface topogra-
phy and their associations with a tropical reef fish assemblage. Marine Ecology Pro-
gress Series 112, pp. 87-96.
91. McGlone J.C. (1989). Analytic data-reduction schemes in non-topographic photogram-
metry. In: Karara H.M. (Ed.), Non-Topographic Photogrammetry, 2nd ed., ASPRS, Falls
Church, VA, Chap. 4, pp. 37-55.
92. McHarg I.L. (1995). Design with Nature. New York: J. Wiley. (Reprint edition.)
93. Meyer R. (1987). “100 Years of Architectural Photogrammetry”. Kompendium Photo-
grammetrie, Vol. XIX, Leipzig: Akademische Verlagsgesellschaft, pp. 183-200.
94. Micheletti N, Chandler JH, Lane SN. (2014). Investigating the geomorphological poten-
tial of freely available and accessible structure-from-motion photogrammetry using a
smartphone. Earth Surface Processes and Landforms, DOI: 10.1002/esp.3648.
95. Mikhail E., Bethel J. and McGlone J. (2001). Introduction to modern photogrammetry.
John Wiley & Sons, New York, p.479.
96. Moser K., Ahn, C., Noe, G. (2007) Characterization of micro topography and its influ-
ence on vegetation patterns in created wetlands. Wetlands 27, pp. 1081-1097.
97. Nagai M., Shibasaki R., Manandhar D. and Zhao H. (2004): Development of digital
surface and feature extraction by integrating laser scanner and CCD sensor with IMU,
In: International Archives of the Photogrammetry, Remote Sensing and Spatial Infor-
mation Sciences, XX ISPRS Congress, Istanbul, Turkey, XXXV-B5, pp.655-659.
98. Nellis M.D. and Briggs J.M. (1989). The effect of spatial scale on Konza landscape clas-
sification using textural analysis. Landscape Ecol., 2, p. 93.
99. Newsome S.R. Jr. (2016). "Discrepancy Analysis Between Close-Range Photo-
grammetry and Terrestrial LiDAR". Electronic Theses & Dissertations, Paper 1423.
100. Nordberg K., Farnebäck, G., Forssén, P.-E., Granlund, G., Moe, A., Wiklund, J. and
Doherty, P. (2002). Vision for a UAV helicopter, In: Workshop on aerial robotics, Lau-
sanne, Switzerland.
101. Partsch J. (1911). Schlesien, eine Landeskunde für das deutsche Volk. Bd. II, S. 586, Breslau.
102. Patias P., Saatsoglou-Paliadeli C., Georgoula O., Pateraki M., Stamnas A. and Kyr-
iakou N. (2007). Photogrammetric documentation and digital representation of the
Macedonian palace in Vergina-Aegeae. In: CIPA, XXI International CIPA Symposium,
Athens, Greece.
103. Pettersson, P.O. and Doherty, P. (2004). Probabilistic roadmap based path planning for
autonomous unmanned aerial vehicles, In: 14th Int’l Conf. on Automated Planning.
104. Peucker T.K., Douglas D.H. (1974). Detection of surface specific points by local parallel
processing of discrete terrain elevation data. Computer Graphics and Image Processing,
4, pp. 375-387.
105. Pike R. J. (2002). A bibliography of terrain modeling (geomorphometry), the quantita-
tive representation of topography - supplement 4.0., Open-File Rep. No. 02-465. U.S.
Geological Survey, Denver, p. 116.
106. Pogorelov A.V. & Doumit J.A. (2009). Relief of the Kuban river basin: Morphometric anal-
ysis. Moscow: Geoc, 208 p. (In Russian).
107. Puri A. (2004). A Survey of Unmanned Aerial Vehicles (UAV) for Traffic Surveil-
lance, Internal Report, Department of Computer Science and Engineering, University of
South Florida, Tampa, FL, USA, p. 29.
108. Rasemann S., Schmidt J., Schrott L., Dikau R. (2004). “Geomorphometry in mountain
terrain”. In: Bishop, M.P., Shroder, J.F. eds. GIS & Mountain Geomorphology. Spring-
er, Berlin, pp.101-145.
109. Redweik P. (2012). Photogrammetry. Springer. Retrieved from
http://link.springer.com/chapter/10.1007/978-3-642-28000-9_4/fulltext.html.
110. Reg Austin (2010). Unmanned Aircraft Systems: UAVS Design, Development and Deploy-
ment. John Wiley & Sons Ltd, p. 332.
111. Riga Declaration on Remotely Piloted Aircraft (Drones): "Framing the Future of Aviation".
Riga, 6 March 2015.
112. Riley S.J., De Gloria S.D., Elliot R.A. (1999). A terrain ruggedness index that quantifies
topographic heterogeneity. Intermountain Journal of Sciences, Vol. 5, No. 1-4.
113. Risk, M.J. (1972). Fish diversity on a coral reef in the Virgin Islands. Atoll Research
Bulletin 193, pp. 1-6.
114. Roelofs R. (1951). Distortion, principal point, point of symmetry and calibrated princi-
pal point. Photogrammetria, vol. 7, pp. 49-66.
115. Rüther H., Smit J., Kamamba D. (2012). A Comparison of Close-Range Photogramme-
try to Terrestrial Laser Scanning for Heritage Documentation. South African Journal of
Geomatics, Vol. 1, No. 2, pp. 149-169.
116. Shary P.A. (1995). Land surface in gravity points classification by a complete system of
curvatures. Mathematical Geology, 27(3), pp.373-390.
117. Shary P.A., Sharaya L.S., Mitusov A.V. (2002). Fundamental quantitative methods of
land surface analysis. Geoderma, 107(1-2), pp. 1-32.
118. Smith M.J. and Park D.W.G. (2000). Absolute and exterior orientation using linear
features. International Archives of Photogrammetry and Remote Sensing, 33(B3), pp. 850-
857.
119. Snavely N., Seitz, S.M. & Szeliski, R. (2006). Photo Tourism: Exploring image collec-
tions in 3D. ACM Transactions on Graphics, 25, p.835.
120. Snavely, N., Seitz, S.M. & Szeliski, R. (2007). Modeling the World from Internet Photo
Collections. International Journal of Computer Vision, 80, pp. 189-210.
121. Snavely, N., Seitz, S.N., Szeliski, R. (2008). Modeling the world from internet photo
collections. International Journal of Computer Vision 80, pp.189-210.
122. Spetsakis M.E., Aloimonos Y. (1991). A multi-frame approach to visual motion percep-
tion. International Journal of Computer Vision 6.pp. 245-255.
123. Stambaugh M.C., Guyette R.P. (2008). Predicting spatio-temporal variability in fire
return intervals using a topographic roughness index. For Ecol Manag, 254(3), pp. 463–
473.
124. Stöcker, C., Bennett, R., Nex, F., Gerke, M., Schmidt, C., Zein, T. (2017). Guide to reg-
ulatory practice on UAVs for land tenure recording. H2020 its4land 687828 D4.1 inno-
vation for Land Tenure.p.48.
125. Stone R.O., Dugundji J. (1965). A study of microrelief: its mapping, classification and
quantification by means of a Fourier analysis. Eng. Geol., vol. 1, no. 2, pp. 89-187.
126. Sugiura R., Noguchi, N. and Ishii, K. (2005). Remote-sensing Technology for Vegeta-
tion Monitoring using an Unmanned Helicopter, In: Biosystems Engineering, 90, 4,
369–379.
127. Szeliski R., Kang S.B. (1994). Recovering 3-D shape and motion from image streams
using nonlinear least squares. Journal of Visual Communication and Image Representa-
tion 5. pp.10-28.
128. Tagil S. & Jenness J. (2008). GIS-based automated landform classification and topo-
graphic, landcover, and geologic attributes of landforms around the Yazoren Polje,
Turkey. Journal of Applied Sciences 8(6), pp. 910-921.
129. Tsai R.Y. (1986). An efficient and accurate camera calibration technique for 3D ma-
chine vision. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition.
130. Valentine P.C., Scully L.A., Fuller S.J. (2004). Terrain Ruggedness Analysis and Dis-
tribution of Boulder Ridges and Bedrock Outcrops in the Stellwagen Bank National
Marine Sanctuary Region —Posters presented at the Fifth International Symposium of
the Geological and Biological Habitat Mapping Group, Galway, Ireland.
131. Veitinger J., Purves R. S., Sovilla B. (2016). Potential slab avalanche release area iden-
tification from estimated winter terrain: a multi-scale, fuzzy logic approach/Natural
Hazards and Earth System Sciences, 16, pp. 2211–2225.
132. Verhoeven, G, Sevara, C, Karel, W, Ressl, C, Doneus, M and Briese, C. (2013) ‘Un-
distorting the past: new techniques for orthorectification of archaeological aerial frame
imagery’, in Corsi, C et al (eds) Good Practice in Archaeological Diagnostics, Natural
Science in Archaeology. Switzerland: Springer International Publishing.
133. Von Blyenburg P. (1999). UAVs: Current Situation and Considerations for the Way For-
ward. RTO-AVT Course on Development and Operation of UAVs for Military and Civ-
il Applications.
134. Wedding L.M., Friedlander A.M., McGranaghan M., Yost R.S., Monaco M. E. (2008).
Using bathymetric LIDAR to define nearshore benthic habitat complexity: implications
for management of reef fish assemblages in Hawaii. Remote Sens Environ, 112(11), pp.
4159–4165.
135. Westoby M., Brasington J., Glasser N.F., Hambrey M.J., Reynolds J.M. (2012). Struc-
ture from Motion photogrammetry: a low-cost, effective tool for geoscience applica-
tions. Geomorphology 179, pp. 300-314.
136. Wolf P. R. and Ghilani C.D. (1997). Adjustment Computations: Statistics and Least
Squares in Surveying and GIS. John Wiley & Sons.
137. Wood J. (1996). The geomorphological characterization of digital elevation models,
PhD Thesis. University of Leicester.
138. Wood J. (2009). Geomorphometry in LandSerf. In: Hengl, T., Reuter, H.I. (Eds.), Geo-
morphometry — Concepts, Software, Applications. Developments in Soil Science, vol.
33. Elsevier, Amsterdam, pp. 333–349.
139. Woodby D., Carlile D., Hulbert L. (2009). Predictive modeling of coral distribution in
the Central Aleutian Islands, USA. Mar Ecol Prog Ser. 397, pp. 227–240.
140. Woodcock C. E., Strahler. A H. (1987). “The Factor of Scale in Remote Sensing,” Re-
mote Sensing of Environment, vol.21, pp. 311-332.
141. Wright D.J., Heyman W.D. (2008). Introduction to the special issue: marine and coastal
GIS for geomorphology, habitat mapping, and marine reserves. Mar Geod., 31, pp. 223–
230.
142. Wu J. (2004). Effects of changing scale on landscape pattern analysis: scaling relations.
Landscape Ecol. 19, pp. 125–138.
143. Wzorek M., Landén D. and Doherty P. (2006). GSM Technology as a Communication
Media for an Autonomous Unmanned Aerial Vehicle. In: 21st UAV Systems Confer-
ence, Bristol.
144. Xie F. et al. (2014). 1:500 Scale Aerial Triangulation Test with Unmanned Airship
in Hubei Province. IOP Conf. Ser.: Earth Environ. Sci., 17, 012177.
145. Zevenbergen L., Thorne C. (1987). Quantitative Analysis of Land Surface Topography,
Earth Surface Processes and Landforms, Vol. 12, pp. 47-56.
146. Zhou G., Li C. and Cheng P. (2005). Unmanned aerial vehicle (UAV) real-time video
registration for forest fire monitoring. In: IEEE International Geoscience and Remote
Sensing Symposium, pp. 1803-1806.
Scientific edition