
BEST PRACTICE
TUTORIAL
Technical handling of the UAV "DJI
Phantom 3 Professional" and processing
of the acquired data

Marius Röder
(marius.roeder@web.de)
Steven Hill
Hooman Latifi
Table of contents

Composition and preparation of the UAV system
Performance of UAV recordings with DJI GO
Performance of UAV data acquisition with Pix4Dcapture
   Planning
   Implementation
Evaluation of UAV recordings in Agisoft Photoscan
   Import of the images
   Relative orientation
   Import of GCPs and exterior orientation
   Optimization of the camera parameters
   Dense Point Cloud Creation
   Creation of the DSM
Further processing and applications of the DSM
Literature

Composition and preparation of the UAV system
See Quick Start Guide

Performance of UAV recordings with DJI GO


For classic applications of a low-budget UAV, that is, simply recording pictures and videos, the app DJI GO is recommended. With this company-owned app, the drone can be controlled manually via the remote control, and photos or videos can be captured by manual triggering. In addition, the sensors of the drone can be calibrated and further parameters, such as the exposure time of the camera, can be adjusted.
However, DJI GO does not allow a fixed flight path to be defined before take-off for the drone to follow. The app is therefore not suitable for capturing images that are later to be processed with photogrammetric software, and it is not covered further here. For details on handling DJI GO, please refer to the Quick Start Guide or the user manual.

Performance of UAV data acquisition with Pix4Dcapture


Last update: March 2017, app version 3.7.1 (Android)
Further information: Pix4Dcapture online manual, https://support.pix4d.com/hc/en-us/articles/203873435--Android-Pix4Dcapture-Manual#gsc.tab=0 (accessed March 21, 2017)
As noted above, the company-owned app DJI GO only supports manual control; a fixed flight path for the drone to follow cannot be entered before the flight. The free app Pix4Dcapture is a very good alternative: flights can be planned in advance along a predefined flight path, and many other flight parameters can be set by the user. The following describes how to use the app.
To connect Pix4Dcapture to a DJI drone, the app Ctrl+DJI has to be downloaded from the Google Play Store and installed on the smartphone alongside Pix4Dcapture. This app runs in the background and allows Pix4Dcapture to work with DJI drones.
Problems can occur when Pix4Dcapture and DJI GO are installed on the same smartphone/tablet at the same time. It is therefore recommended to install only one of the two flight control apps on the smartphone/tablet.
For a successful UAV flight, some settings can already be made in advance. This chapter is therefore divided into planning (office work) and implementation (fieldwork).

Planning
When the app Pix4Dcapture is opened after the installation, a window appears in which the user must first create a free account via Sign up for free (see Figure 1).

Figure 1: Creation of a free account

If the user is logged in, the actual start screen of the app appears (see Figure 2).

Figure 2: Start screen Pix4Dcapture

General settings can be made under Settings. In the General tab, the corresponding drone is selected (in this case the DJI Phantom 3 Professional). Another important setting is Sync automatically when mission ends. If this option is activated, the captured images are automatically transmitted to the smartphone via the wireless link after the flight has been completed. Since no further evaluations take place on the smartphone, it is recommended to deactivate this option in order to avoid unnecessary memory consumption on the smartphone; the images are then stored exclusively on the SD card. It is also important that the Save offline maps option is enabled in the Maps tab. This saves the background maps (satellite images or vector maps) on the smartphone when the project is created, so that they remain available in areas without mobile data reception. Furthermore, in the Advanced tab under Root directory path, you can specify where the metadata for the flights should be stored on the smartphone.
Once the general settings have been made, different flight missions can be selected on the start screen. The app offers four mission modes, which differ in the type of flight path. The Grid Mission is best suited to generating 2D maps from the images; here, the drone flies a simple grid over a defined area. The Circular Mission is designed to create 3D models of a single object (such as a house); the images are taken in a circular arrangement. In addition, the Free Flight Mission can be used to create a project in which the drone is controlled manually and the camera is triggered automatically at a specified interval. This mode is not to be confused with the completely manual control in DJI GO: in DJI GO, further settings such as the exposure time of the camera can be adjusted, whereas in the Free Flight Mission the drone can only be flown manually and the images are taken automatically at the specified time interval, not manually. To create 3D models of the Earth's surface, the Double Grid Mission is recommended. Here, the drone flies a path over a defined area that corresponds to two grids oriented perpendicular to each other (see Figure 3).

Figure 3: Double Grid flight path (black) with sample images (red)

Due to the high overlaps from different viewing angles, this flight pattern is best suited for the
creation of 3D models of the recording area by photogrammetric methods (Pix4D 2017). High
overlaps also lead to better accuracy of 3D point clouds (Haala, Cramer and Rothermel 2013).
The workflow with the Double Grid Mission is described below.
Clicking on Double Grid Mission will display a user interface with various settings (see Figure
4).

Figure 4: GUI Pix4Dcapture

Most of the GUI is taken up by the background map, which can be switched between vector and satellite data via the corresponding buttons. On the map there is a green polygon, which represents the flight path (double grid). The polygon can be dragged to the desired size of the area to be captured.
On the left side, Alt can be used to set the flight altitude. This depends on the objects to be recorded during the project. If a flight altitude of less than 30 m is selected, the app warns in the upper bar (message: Low!) that a probably too small altitude has been chosen. If a flight altitude above 100 m is selected, the app warns that it is too high (message: High!) and may violate local regulations and laws. This must be observed during flight planning.
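Because the flight altitude directly determines the ground sampling distance (GSD) of the images, it can help to estimate the GSD before choosing an altitude. The following minimal Python sketch uses the commonly cited camera specifications of the Phantom 3 Professional (focal length 3.61 mm, 1/2.3" sensor of 6.17 mm width, 4000 px image width); treat these values as assumptions and verify them against your own camera's data sheet.

    # Rough ground sampling distance (GSD) for a nadir image at a given altitude.
    # Camera values below are assumed Phantom 3 Professional specifications.
    FOCAL_LENGTH_MM = 3.61   # lens focal length
    SENSOR_WIDTH_MM = 6.17   # physical width of the 1/2.3" sensor
    IMAGE_WIDTH_PX = 4000    # image width in pixels

    def gsd_cm_per_px(altitude_m):
        """GSD in cm/pixel: sensor width projected onto the ground, per pixel."""
        return (SENSOR_WIDTH_MM * altitude_m * 100.0) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)

    for altitude in (30, 50, 100):
        print("%3d m -> %.2f cm/px" % (altitude, gsd_cm_per_px(altitude)))

Under these assumptions, a flight altitude of 50 m corresponds to roughly 2 cm per pixel.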

Further settings can be made via the settings button (see Figure 5).

Figure 5: Adjustment of speed, angle and overlap

The speed can be selected in small steps between slow and fast. The slowest speed is recommended here, as it gives the best possible image quality by avoiding distortion and motion blur in the images. The recording direction is set under Angle. For the calculation of 3D models, this option should be set to vertical, which corresponds to a camera angle of 80°. As a result, approximately nadir photographs, as are usual in aerial photography, are achieved. The overlap should be set as high as 90% to achieve optimal accuracy in the later orientation of the images and the subsequent point cloud calculation (Haala, Cramer and Rothermel 2013). It has also been shown that high image overlap minimizes height errors (Dandois, Olano and Ellis 2015).

The zoom button can be used to zoom to the polygon. At the bottom of the GUI, the size of the polygon and the duration of the flight with the specified settings are indicated. The duration of the flight depends on the size of the area to be captured, the flight speed and the flight altitude. One battery charge of the drone lasts about 23 minutes. So that the drone can be landed safely, it is recommended not to exceed a flight time of approximately 16 minutes; this leaves sufficient time for landing. In addition, the app shows a warning in the bar at the top of the screen (message: Flight time!) if the planned flight time is too long.
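If you want to sanity-check the planned duration before going into the field, the flight time can be roughly estimated from the polygon size, the line spacing implied by the overlap, and the flight speed. The sketch below is a deliberately simplified model under stated assumptions (the camera footprint from the specs assumed above, an illustrative 2 m/s for the slow setting, no turn or climb times); the value shown in the app remains authoritative.

    import math

    # Rough duration estimate for a double-grid mission (all values illustrative).
    ALTITUDE_M = 50.0
    AREA_W_M, AREA_L_M = 100.0, 100.0  # side lengths of the mission polygon
    SIDE_OVERLAP = 0.9                 # 90 % overlap as recommended above
    SPEED_MS = 2.0                     # assumed speed of the "slow" setting
    FOCAL_LENGTH_MM, SENSOR_WIDTH_MM = 3.61, 6.17  # assumed camera specs

    footprint_m = ALTITUDE_M * SENSOR_WIDTH_MM / FOCAL_LENGTH_MM  # across-track footprint
    spacing_m = footprint_m * (1.0 - SIDE_OVERLAP)                # distance between flight lines
    lines = math.ceil(AREA_W_M / spacing_m) + 1
    path_m = 2 * lines * AREA_L_M     # double grid: second grid flown perpendicular
    minutes = path_m / SPEED_MS / 60.0
    print("%d lines per grid, ~%.0f m path, ~%.1f min" % (lines, path_m, minutes))

For a 100 m x 100 m square these assumptions already give roughly 20 minutes, which illustrates why the app quickly raises the Flight time! warning at 90 % overlap.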

After all settings have been made, the project is saved via the save button. The individual projects created can be accessed on the start screen under Project List. When the project is saved, the background maps for the area are automatically downloaded and saved on the smartphone so that they are available offline.

Implementation
Once all settings of the flight planning have been made, the flight can be carried out in the
field.
First, the drone has to be prepared for the flight (see Quick Start Guide). This includes mounting the rotors, inserting the battery and checking that the camera lens is clean. Then the individual devices are switched on and connected to each other. According to the manufacturer, the remote control should be switched on first, then the drone, and finally the smartphone/tablet should be connected to the remote control via the USB cable (Pix4D 2017). The app is then started and the project created during planning is selected (start screen → Project List → Project xx). It is important that the GPS function of the smartphone is now active. The locate button allows the view to be centered on the current position of the smartphone.
That the remote control and the smartphone are connected to the drone can be verified by the Wi-Fi symbol turning green (see Figure 6). In addition, a drone symbol now appears on the GUI, showing the position of the drone (known from its integrated GNSS receiver) (see Figure 6).

Figure 6: GUI after connecting with drone

Once the devices are connected to each other, you can switch between map mode and camera mode before the flight via the corresponding button. In camera mode, a live view of the drone camera is shown (see Figure 7).

Figure 7: Camera mode

Since the required size of the grid could only be estimated roughly during mission planning, the size of the flight route can be readjusted in the field.
Clicking on Start will bring up a new window in which the app summarizes the most important
mission data and confirms that the smartphone or remote control is connected to the drone
(see Figure 8).

Figure 8: Control screen before the start of the drone (1)

Clicking on Next will bring up another window listing the requirements for starting (see Figure
9).

Figure 9: Control screen with checklist before the start of the drone (2)

For example, the app checks whether the individual components (smartphone, remote control, drone) are connected, whether sufficient GNSS satellites are available and whether there is sufficient space on the SD card. If this is not the case, the app issues warnings. While becoming familiar with the software, various warnings occurred. When using Pix4Dcapture, for example, the switch on the remote control has to be set to "F", otherwise a warning appears. In addition, the mission programmed in the app could not be loaded onto the drone because the remote control firmware and the drone firmware had different versions installed; please ensure that the same version is installed on both devices. If the drone is started indoors, errors occur because not enough GPS satellites are visible. As a result, for example, the so-called homepoint, from which the start takes place, is not known (see Figure 10).

Figure 10: Control screen with checklist before the start of the drone (3)

As soon as all prerequisites are met, the Take off button is held for three seconds. The drone then climbs vertically to the desired flight altitude. The relative height above ground is determined not by GNSS but by barometer (Pix4D 2017). Once the flight altitude is reached, the drone moves to the starting point of the double grid and flies the previously programmed flight path fully automatically. It is not necessary to intervene with the remote control during the flight. In camera mode, the live view of the camera can be followed during the flight (see Figure 11).

Figure 11: Live view during the flight

After the recordings are complete, the drone returns at the designated flight altitude to the homepoint that was determined at the start. Above the homepoint, the UAV descends fully automatically. From a flight altitude of approx. 10 m, it is recommended to land the drone manually with the remote control, taking care that the landing area is free of obstructing objects.

Back in the office, the images are then copied via USB cable from the internal memory card of the drone to an external hard disk.

Evaluation of UAV recordings in Agisoft Photoscan
Last update: March 2017, software version: Agisoft Photoscan Professional 64-bit, version 1.2.6

The software product Agisoft Photoscan is used to evaluate the recorded UAV images. It is a stand-alone software product that carries out photogrammetric processing of digital images and generates three-dimensional spatial data.
In the following sub-chapters, the workflow for the evaluation of UAV recordings is explained using an example project. The UAV flight took place as part of an M.Sc. thesis (Röder 2017) in the Bavarian Forest National Park. The settings, calculation times, etc. are based on experience gained during the master's thesis and are specifically adapted to this project. They are therefore not generally applicable to all UAV analyses with Agisoft Photoscan and must be chosen individually for each project.

Import of the images


After the program has been started, the newly opened project is saved via File → Save (see Figure 12).

Figure 12: Saving the project

Clicking on Workflow → Add Photos... opens a window in which all images taken with the drone are selected and imported into the project (see Figure 13).

Figure 13: Adding the photos

On the left side of the GUI is the workspace, in which the newly imported images are listed (see Figure 14). After the images are imported, a chunk is created automatically. Any number of chunks can be created per workspace. The division into chunks is useful because, for example, a chunk can be created for each processing step so that the information from previous steps is not lost. In addition, two UAV flights with overlapping areas can initially be oriented separately and subsequently processed together. The NA behind each image stands for Not Aligned and indicates that the images have not yet been oriented relative to each other.
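Agisoft Photoscan Professional also exposes these steps through its built-in Python console. As a minimal sketch (module and method names follow the 1.2.x API conventions; check the API reference of your version), the import could be scripted as follows, where the image folder and project path are placeholders:

    import glob
    import PhotoScan  # available inside Photoscan Professional's Python console

    doc = PhotoScan.app.document
    chunk = doc.addChunk()                               # new chunk in the workspace
    chunk.addPhotos(glob.glob("/path/to/flight/*.JPG"))  # import all drone images
    doc.save("project.psz")                              # save the project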
In the middle of the GUI, after the import of the images, the approximate positions of the cameras appear as blue dots in the model window (see Figure 14). The drone is equipped with a single-frequency GNSS receiver, which stores the position of the camera at the time of capture in the metadata of the images. The accuracy of these positions is several meters, which is why they are referred to as approximate positions. Agisoft Photoscan imports this information automatically from the EXIF data of the images during import.
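If you want to verify these approximate positions outside of Photoscan, the EXIF GPS tags can be read directly from the images, for example with the third-party Python package exifread (the image file name below is a placeholder):

    import exifread  # pip install exifread

    def to_decimal(dms_tag, ref_tag):
        """Convert EXIF degree/minute/second ratios to decimal degrees."""
        deg, minutes, seconds = [r.num / float(r.den) for r in dms_tag.values]
        value = deg + minutes / 60.0 + seconds / 3600.0
        return -value if ref_tag.values in ("S", "W") else value

    with open("DJI_0001.JPG", "rb") as f:  # placeholder image name
        tags = exifread.process_file(f, details=False)

    lat = to_decimal(tags["GPS GPSLatitude"], tags["GPS GPSLatitudeRef"])
    lon = to_decimal(tags["GPS GPSLongitude"], tags["GPS GPSLongitudeRef"])
    print("Approximate camera position: %.6f, %.6f" % (lat, lon))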

Figure 14: Workspace-tab after importing the images

If the user switches from the Workspace tab to the Reference tab (lower-left corner of the GUI), the approximate positions in the geodetic datum WGS84 (latitude, longitude, height) can be seen in the Cameras pane (see Figure 15, left). The Accuracy column has a value of 10 m (the accuracy assumed by default for the single-frequency GNSS receiver). In the lower third of the GUI, the images are shown as thumbnails. Double-clicking on one of the images opens a larger view in the middle of the GUI (see Figure 15, right).

Figure 15: Reference tab after importing the images (left) and single-image view (right)

The images of the example project are quite dark. With the Pix4Dcapture app, the exposure settings of the camera cannot be changed during recording; the exposure is always adjusted automatically. Agisoft Photoscan provides a feature for adjusting the brightness of the images. Subsequent evaluations of the point clouds or the orthomosaic that are carried out by a human observer are greatly facilitated by this radiometric adjustment. Clicking on Photo → Image Brightness opens a new window (see Figure 16).

Figure 16: Call of the function Image Brightness

Via Estimate the software calculates an optimal value for the image brightness (see Figure
17). The exposure adjustment in the images is then clearly visible (see Figure 18).

Figure 17: Image Brightness before (left) and after (right) estimation of a fit value

Figure 18: Sample image after adjusting the Image Brightness

Relative orientation
After importing the images and the brightness adjustment, the relative orientation of the images
to one another takes place. At the moment, only an approximation position of the images is
available via the GNSS receiver of the drone. The images do not yet "know" how they are
positioned opposite the other images. For this purpose, the relative orientation of the images
must be established. Using Workflow  Align Photos... opens a new window with different
settings for the relative orientation (see Figure 19).

Figure 19: Accessing the Align Photos... function (left) and its settings (right)

Under General, the parameters Accuracy and Pair preselection are set. It is recommended to always set Accuracy to Highest, which computes the camera positions with the highest accuracy. This setting results in a longer processing time for the relative orientation, but high accuracy is the prerequisite for precise derivation of subsequent products such as the DSM or the orthomosaic. With Highest, the original images are scaled up by a factor of four; each accuracy level below that scales the images down by a factor of four, which considerably reduces the processing time of the orientation. Under Pair preselection, Reference is selected. As a result, overlapping image pairs are determined in advance from their approximate positions (from the GNSS receiver of the drone), which facilitates the relative orientation and reduces the calculation time.
Additional parameters can be set under Advanced; the default settings are used here. The Key point limit specifies the upper limit of feature points that are considered per image during processing. The software first searches each image for these distinctive pixels, by means of which it can then orient the images relative to each other. The maximum number of matched feature points kept per image is set with the Tie point limit parameter; the most reliable and accurate feature points are selected by the software. The Adaptive camera model fitting parameter is always activated. It automatically selects which camera parameters are included in the adjustment based on their reliability estimates, which helps to avoid divergence of some parameters, particularly for aerial image data sets. The OK button starts the orientation. The progress can be followed in a new window (see Figure 20).

Figure 20: Work progress during the alignment
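The same alignment can also be triggered from the Python console. A sketch following the 1.2.x API naming, where the keypoint and tiepoint values are the assumed defaults shown in Figure 19 (treat the exact signatures as assumptions for your version):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.matchPhotos(accuracy=PhotoScan.HighestAccuracy,           # Accuracy: Highest
                      preselection=PhotoScan.ReferencePreselection, # Pair preselection: Reference
                      keypoint_limit=40000,                         # Key point limit
                      tiepoint_limit=4000)                          # Tie point limit
    chunk.alignCameras()                                            # relative orientation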

First, the feature points are detected; the overlapping image pairs are then selected and finally matched. When the relative orientation is completed, the so-called sparse point cloud appears in the middle of the GUI (see Figure 21).

Figure 21: Sparse Point Cloud after alignment

All tie points used to establish the relative orientation of the images are shown. Clicking the Show Cameras button displays the positions of the cameras, now taking the relative orientation into account (see Figure 22).

Figure 22: Displaying the camera positions (right) via the Show Cameras Button (left)

In the single-image view (double-click on one of the images), clicking on View Points shows the unused feature points in gray and the used tie points in blue (see Figure 23).
Figure 23: View the Feature Points (white) and the Tie Points (blue) (right) using the View Points Button (left)

In the Reference tab, more information is now available (see Figure 24).

Figure 24: Reference tab after alignment

In the column Error (m), the difference between the camera position from the approximate coordinates and the camera position according to the relative orientation is shown. Projections shows the number of tie points per image, and Error (pix) indicates an RMSE value for the reprojection error. Since no values for yaw, pitch and roll were imported, these columns are empty. Right-clicking on the chunk → Show Info... displays details of the chunk (see Figure 25). Under Alignment parameters you can see which settings were used for the relative orientation; the calculation time for the orientation is also given here.

Figure 25: Call up the chunk information

Import of GCPs and exterior orientation


The relative orientation of the images is now established; the next step is the exterior orientation using GCPs. In principle, the recordings are already georeferenced, but the accuracy of the single-frequency GNSS receiver of the drone is not sufficient. For this reason, GCPs are used, which are usually measured with an accuracy of better than 10 cm. The GCPs must be marked in the software by so-called markers. To do this, double-click an image and search for a GCP. If one of the GCPs is found, a marker is placed centrally on it via right-click → Create Marker (see Figure 26).

Figure 26: Create a marker on a GCP

The marker is automatically assigned the name point 1, which can be changed in the Workspace tab. In this case, the markers were named according to their IDs (see Figure 27).

Figure 27: GCP before (left) and after (right) rename

When the next image is opened, a line appears on which the marker must lie in this image: the epipolar line. This makes it easier to find the point. When the marker is found in the second image, it is placed via right-click → Place Marker (see Figure 28).

Figure 28: Place the newly created GCP in another image using the epipolar line

Once the marker has been set in two images, its position in all other images is known from the already existing relative orientation. To improve the orientation manually, the point should nevertheless be placed in the correct position in every image. If another image is opened, the marker appears as a point with a gray flag (see Figure 29, left): the proposed position for this marker. Use the left mouse button to move the marker exactly onto the GCP; the marker is then shown with a green flag (see Figure 29, right).

Figure 29: Activation of the proposed approximation position (left) by displacement (right)

To improve the accuracy of the orientation, it is recommended to place the marker in all images in which it is clearly visible. In images that are out of focus or in which the marker is difficult to recognize, it should not be set. This workflow must be carried out for all markers. Figure 30 shows the marker list in the Workspace tab after all markers in the sample project were set.

Figure 30: Marker list after setting all GCPs

Georeferencing is already possible with three GCPs; in the sample project, six GCPs were available. Once the 3D position of all markers is known, they can be displayed via the Show Markers button (see Figure 31).

Figure 31: Display the markers in the Sparse Point Cloud (below) by the Show Markers button (top)

To carry out the bundle adjustment of the aerial images by means of GCPs, their precisely measured coordinates have to be imported. In this example, the reference points are available as shapefiles. These were loaded in QGIS and the necessary attributes (longitude and latitude, accuracy, height, ID) were exported as a tab-delimited csv file. Afterwards, a text file was created for each plot in which the values of the individual attributes are listed (see Figure 32, left). Using the Import button in the Reference tab, the GCPs are imported into Agisoft Photoscan (see Figure 32, right).

Figure 32: Structure of the text file (left) and import button (right)
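Since the text file in Figure 32 may be hard to read, the layout of such a file looks roughly as follows (tab-separated; the IDs and coordinates here are invented for illustration, and the column order must match the assignment made in the import dialog):

    # ID   longitude    latitude     height   accuracy
    101    13.234567    48.987654    745.12   0.05
    102    13.235012    48.987321    744.87   0.05
    103    13.234789    48.986998    746.03   0.05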

A new window with import settings appears (see Figure 33). The values can be assigned to the individual tab-separated columns. By activating the Load accuracy checkbox, the accuracy determined during the GNSS measurements can also be imported. After the import, the software automatically performs a bundle block adjustment, and the images are then precisely georeferenced by the GCPs.

Figure 33: Settings for importing the GCPs

In the Marker section of the Reference tab, the markers are now listed with the imported coordinates and accuracies (see Figure 34). At the same time, the software shows in the column Error (m) the difference between the imported coordinates and the coordinates estimated by the adjustment. The values in the Projections column indicate in how many images each marker was set. Error (pix) gives the RMSE value of the marker's reprojection error, calculated over all images in which the marker is visible.

Figure 34: Marker section in the Reference tab after importing the GCPs

Optimization of the camera parameters


After the import of the GCPs is completed, the camera parameters are improved in the next step in order to optimize the accuracy of the model.
Agisoft Photoscan estimates the interior and exterior orientation parameters of the camera during the orientation of the images. The accuracy of the estimation depends on many factors, e.g. the overlap or the shape of the terrain, and estimation errors can lead to non-linear deformations in the model. During georeferencing with GCPs, the model is only linearly transformed by means of a 7-parameter similarity transformation (3 translations, 3 rotations, 1 scale). Linear errors can be compensated in this way, but non-linear components cannot, so residual errors remain after georeferencing. To eliminate the non-linear deformations, the sparse point cloud and the camera parameters are optimized on the basis of the known reference coordinates. In this step, Agisoft Photoscan adjusts the estimated point coordinates and camera parameters by minimizing the sum of the reprojection errors and the reference coordinate errors.
For the optimization, it is recommended to duplicate the chunk within the project. This way, the state after the relative orientation and the import of the GCPs can always be recovered if unexpected problems occur in subsequent processing steps. To do this, right-click on the chunk and select Duplicate (see Figure 35). For a better overview, the first chunk is renamed Alignment and the second chunk Optimization.

Figure 35: Duplicate the alignment chunk (left) and rename (right)

In the first step of the optimization, the tie points that are clearly recognizable as outliers are eliminated. To do this, the sparse point cloud is loaded and viewed from different perspectives. Clearly visible outliers are selected with the selection tools of the software and removed with the Delete key (see Figure 36).

Figure 36: Detection of outliers

Next, tie points are removed that have a high reprojection error, a high reconstruction uncertainty or a low projection accuracy. For this, Agisoft offers the function Edit → Gradual Selection, with which tie points are selected using a threshold value (see Figure 37).

Figure 37: Open the Gradual Selection Tool

The thresholds used are based on experience reported in the literature (Gatewing 2017; Mallison 2015). In the Gradual Selection window, Reprojection error is selected first as the criterion. Here, the value 1 was used as the threshold with few exceptions; that means all tie points with a reprojection error larger than 1 are selected after clicking OK. In the model, the selected points are marked in pink, and the Delete key removes them. In some plots the reprojection errors were so low that a threshold of 0.5 could be used. Next, the criterion Reconstruction uncertainty was selected and all points above the threshold 10 were removed. Points located at the edge of the image block generally have a higher reconstruction uncertainty than points in the middle, because they are only seen in images with forward overlap and the lateral overlap is missing. Finally, the criterion Projection accuracy was used to select and remove all points above the threshold 2. Figure 38 summarizes the settings made for the gradual selection.

Figure 38: Thresholds for the reprojection error (top left), the reconstruction uncertainty (top right) and the
projection accuracy (bottom)

As a result of these measures, approximately 80 % of the tie points are eliminated (see Figure 39). The remaining 20 % of the initially generated tie points are sufficient to link the images. However, care should be taken that no more than 90 % of the total number of tie points is removed, in order to maintain a good relative orientation. If necessary, the threshold values should be set higher.
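The three Gradual Selection passes can also be scripted. A sketch using the PointCloud.Filter class of the 1.2.x Python API (the thresholds are those used above; adapt them per project):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    Filter = PhotoScan.PointCloud.Filter

    # (criterion, threshold) pairs as applied above
    passes = [(Filter.ReprojectionError, 1.0),
              (Filter.ReconstructionUncertainty, 10.0),
              (Filter.ProjectionAccuracy, 2.0)]

    for criterion, threshold in passes:
        f = Filter()
        f.init(chunk, criterion=criterion)
        f.removePoints(threshold)  # removes all tie points above the threshold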

Figure 39: Sparse point cloud after filtering through gradual selection

Before the actual optimization is performed, the settings in the Reference tab must be adjusted. Click the Settings button to open the Reference Settings window. The values must be set according to the accuracy of the drone's GNSS receiver, the accuracy of the markers, etc. (see Figure 40).

Figure 40: Call the reference settings before performing the optimization

The best results are achieved when the orientation parameters are first optimized based on the camera coordinates and then based on the GCP coordinates. First, all cameras are activated and the GCPs are deactivated. Clicking the Optimize button triggers the optimization (see Figure 41).

Figure 41: Performing the optimization with the Optimize Cameras Button

Afterwards, the cameras are deactivated, the GCPs are activated and the optimization is started again using the Optimize button. The average reprojection errors and the residuals should have become significantly smaller after the optimization. Figure 42 shows the marker section of the sample project after the optimization of the camera parameters. Compared to Figure 34, the total error of the residuals has decreased from 18.6 cm to 7.6 cm and the total reprojection error from 0.65 pix to 0.22 pix.

Figure 42: Marker section in the Reference tab after optimization of camera parameters
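Scripted, the two optimization passes could look like this (a sketch; the reference.enabled flags and optimizeCameras call follow the 1.2.x API reference):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    # Pass 1: optimize against the camera coordinates only
    for marker in chunk.markers:
        marker.reference.enabled = False
    for camera in chunk.cameras:
        camera.reference.enabled = True
    chunk.optimizeCameras()

    # Pass 2: optimize against the GCP coordinates only
    for camera in chunk.cameras:
        camera.reference.enabled = False
    for marker in chunk.markers:
        marker.reference.enabled = True
    chunk.optimizeCameras()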

If the residuals (column Error (m)) are still high (> 10 cm) for some GCPs after the optimization, those GCPs may not have been clicked exactly in the images, or the GCP may have moved in the field. If necessary, remove the GCP from the adjustment (uncheck its checkbox).

Dense Point Cloud Creation


After the optimization of the camera parameters, a dense point cloud is created. For this, it is recommended to create a new chunk by duplicating the Optimization chunk (see Figure 43).

Figure 43: Structure of the workspace after duplication of another chunk

Before the point cloud generation is started, the area for which the Dense Point Cloud is to be calculated must be defined (area of interest). To do this, the two buttons Resize Region and Rotate Region are used (see Figure 44).

Figure 44: Definition of the Area of Interest by Resize Region (left) and Rotate Region (right)

The size and orientation of the bounding box can be changed; only the area enclosed by the bounding box is processed, which saves a large part of the processing time. The bounding box was chosen to enclose the plot marked with the GCPs (see Figure 45).

Figure 45: Definition of the Area of Interest

With Workflow → Build Dense Cloud..., a window opens in which the settings for the point cloud generation are made (see Figure 46).

Figure 46: Call up the function Build Dense Cloud... (left) and its settings (right)

Under Quality, the desired quality of the reconstruction is set. A higher quality means a more detailed and accurate geometry of the point cloud, but also a longer processing time. The highest quality level, Ultra High, uses the images at original size; each quality level below scales the images down by a factor of four. The quality level High is recommended as the best compromise: good geometries are created while the calculation time still remains economical. With Ultra High, calculation times of approx. one week resulted for the sample plot, which does not appear economically viable; with High, the point cloud of the plot is calculated in about 10 hours. In addition, under Depth Filtering, you can set whether and how the point clouds are filtered to eliminate outliers. If Disabled is selected, no filter is used; this is not recommended, as the point clouds are otherwise extremely noisy. If the area contains small details that should still be recognizable in the point cloud, the setting Mild is recommended. Otherwise, Aggressive applies a very strong filter that removes a large number of outliers.
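The corresponding console call, with the settings recommended above (1.2.x API names, as an assumption):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildDenseCloud(quality=PhotoScan.HighQuality,   # Quality: High
                          filter=PhotoScan.MildFiltering)  # Depth Filtering: Mild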
Figure 47 shows a very dense point cloud of the sample plot in which the individual spruce regeneration patches are clearly visible. The deadwood trunks are also clearly reconstructed.

Figure 47: 3D section of the resulting Dense Point Cloud (Quality: High, Depth Filtering: Mild)

Creation of the DSM


The last step of the analysis with Agisoft Photoscan is the creation of the DSM. For this
purpose, the chunk in which the point cloud was calculated is duplicated and renamed (see
Figure 48).

Figure 48: Structure of the Workspace after duplication of another chunk

Since the depth filter Mild does not completely remove all outliers, it is necessary to select and
eliminate the remaining outliers manually (see Figure 49).

Figure 49: Removing remaining noise from the point cloud (top) and side view of the resulting point cloud (bottom)

After this step, Workflow → Build DEM... opens the window with settings for the DSM creation (see Figure 50).

Figure 50: Call up the function Build DEM... (left) and its settings (right)

Agisoft Photoscan generally designates surface models as DEM (Digital Elevation Model), which in this case corresponds to the DSM. It is important to specify the dense cloud as the data source under Source data. Interpolation is activated so that gaps in the point cloud are filled by interpolated points; the rest is left at the default settings. The resolution or cell size of the DSM (Resolution (m/pix)) cannot be changed here; this setting is only made when the DSM is exported. OK starts the DSM calculation. In the display window in the middle of the GUI, a 2D image of the DSM is displayed (see Figure 51).

Figure 51: 2D-View of the resulting DSM

The DSM is exported in the Workspace tab by right-clicking on the DEM → Export DEM → Export TIFF/BIL/XYZ (see Figure 52; alternatively, the DSM can also be exported as a KMZ file).

Figure 52: Export function of the DSM (left) and its settings (right)

A window with the export settings opens. Here, the cell size of the output raster mentioned above can be entered in meters via Metres... All DSMs were exported with a resolution of 5 cm; the remaining settings were left at their default values. If necessary, the DSM can be divided into blocks, or only a certain region of it can be exported. Clicking on Export... opens another window in which the location and file format are specified (see Figure 53). The DSMs generated for the sample plots were exported in XYZ format, which is universal and readable by different software packages.

Figure 53: Save the DSM as a .xyz file
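DSM creation and export can likewise be scripted; a sketch following the 1.2.x API (the output path is a placeholder, and the dx/dy resolution parameters are assumptions to be checked against the API reference of your version):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildDem(source=PhotoScan.DenseCloudData,             # Source data: Dense cloud
                   interpolation=PhotoScan.EnabledInterpolation)
    chunk.exportDem("plot01_dsm.xyz", dx=0.05, dy=0.05)         # 5 cm cell size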

Further processing and applications of the DSM


To conclude the tutorial, we briefly discuss how the DSMs derived from UAV images can be processed further and which applications they enable.
If, for example, an externally provided DTM is available, the DSM can be normalized. A normalized surface model (nDSM) gives the height above ground, which can be used, for example, to derive tree heights or building heights.
The subtraction can be performed with raster functions in QGIS (Raster → Raster Calculator) or ArcGIS (ArcToolbox → 3D Analyst Tools → Raster Math → Minus). To do this, the .xyz file must first be converted to a raster format (ArcGIS: ArcToolbox → Conversion Tools → To Raster; QGIS: Raster → Conversion → Rasterize (Vector to Raster)).
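As an alternative to the GIS tools just mentioned, the subtraction can be scripted in Python. The following minimal sketch uses the third-party package rasterio and assumes that DSM and DTM have already been converted to GeoTIFFs that share exactly the same grid, extent and coordinate system (file names are placeholders):

    import rasterio  # pip install rasterio

    with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
        ndsm = dsm_src.read(1) - dtm_src.read(1)  # height above ground per cell
        profile = dsm_src.profile                 # reuse georeferencing and data type

    with rasterio.open("ndsm.tif", "w", **profile) as dst:
        dst.write(ndsm, 1)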
General applications of DSM and/or nDSM derived from UAV recordings are for example the
calculation of masses, the change detection of the earth surface’s topography or the inventory
of forest areas.
In contrast to Lidar data, only the surface is modeled in photogrammetric evaluations; the vertical structure is not captured. The active Lidar method, by contrast, uses first and last pulses to record this structure. This gives the Lidar method the advantage that, for example, DSM and DTM can be derived simultaneously in forest areas. However, manned Lidar flights are very expensive and only economical for large areas. Using a low-budget UAV, on the other hand, is cost-effective and allows very flexible recording of the objects to be captured. Due to the very low flight altitudes, UAV recordings can also achieve much higher spatial resolutions than Lidar flights.
In summary, this tutorial describes how high-quality three-dimensional remote sensing products can be produced with a commercially available UAV, an Android smartphone and the corresponding evaluation software.

Literature
Dandois, Jonathan, Mark Olano, and Erle Ellis. "Optimal Altitude, Overlap and Weather Conditions for Computer Vision UAV Estimates of Forest Structure." Remote Sensing, October 23, 2015.
Gatewing. "Software Workflow AgiSoft PhotoScan Pro 0.9.0 For Use with Gatewing X100 UAS." 2017.
Haala, Norbert, Michael Cramer, and Mathias Rothermel. "Quality of 3D Point Clouds from Highly Overlapping UAV Imagery." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, September 2013.
Mallison, Heinrich. "Photogrammetry Tutorial 11: How to Handle a Project in Agisoft Photoscan." Online tutorial, 2015. https://dinosaurpalaeo.wordpress.com/2015/10/11/photogrammetry-tutorial-11-how-to-handle-a-project-in-agisoft-photoscan/ (accessed March 22, 2017).
Pix4D. Pix4Dcapture: Android Manual. 2017. https://support.pix4d.com/hc/en-us/articles/203873435--Android-Pix4Dcapture-Manual#gsc.ta (accessed March 21, 2017).
Röder, Marius. "Eignungsprüfung einer UAV-basierten Forstinventur als Ersatz zu traditionellen Feldverfahren in Verjüngungsbeständen." Master's thesis, Hochschule für Technik Stuttgart, 2017.

