
MIDDLE EAST TECHNICAL UNIVERSITY

Airborne Coast Monitoring System for Sea Turtle Detection and Species Classification

Ahmed DARWISH
Furkan Gokturk OZTIRYAKI
Khaled ELDOWA
Mohamed BADAWY
Shaikh Saif Ul HASSAN
Zafer ATTAL

November 25, 2018

ABSTRACT
Automatically monitoring sea turtles over extensive coastlines is an important task for
environmental research and conservation. Some sea turtle species are endangered,
which is why further surveys of the different species, their track patterns,
and their nests are needed. Currently, different methods are used to monitor sea turtles, such as human
observation and recording videos using drones. Such approaches are manual, slow, labor
intensive, and expensive [53][42]. Using machine vision technologies that automatically analyze the
captured data for sea turtle detection and classification can save time and effort.
In our project, we are implementing an airborne coast monitoring system for sea turtle
detection and species classification. The images are transmitted in real-time, analyzed, and
sent to the researchers. The researchers can use the system to arrange missions with specific
attributes, and review the findings of previous missions.

CONTENTS

1 Introduction

2 Problem Definition and Group Responsibilities
   2.1 The Objective
   2.2 Functional User Requirements
   2.3 Non-Functional User Requirements
   2.4 Limitations
   2.5 Group Responsibilities

3 Literature Review
   3.1 Human Survey
   3.2 GPS Monitoring
   3.3 Manually Controlled Drone
   3.4 Computer Vision with UAV

4 UAV Platform
   4.1 Introduction
   4.2 Requirements and Constraints
   4.3 Competitor Study
      4.3.1 Introduction
      4.3.2 Comparison of Generic Parameters
      4.3.3 Comparison of Geometric Parameters
      4.3.4 Comparison of Performance Parameters
      4.3.5 Comparison of Equipment
      4.3.6 Conclusion
   4.4 Component Analysis
      4.4.1 Introduction
      4.4.2 Battery
      4.4.3 Frame
      4.4.4 Motor
      4.4.5 Propeller
      4.4.6 Flight Controller
      4.4.7 ESC
      4.4.8 Conclusion
   4.5 Cost & Weight Estimation
      4.5.1 Introduction
      4.5.2 Part by Part Cost & Weight Estimation
      4.5.3 Comparison with the Competitors
      4.5.4 Conclusion
   4.6 Conclusion
   4.7 Future Work

5 Computer Vision and Machine Learning
   5.1 Introduction
   5.2 Object Detection
      5.2.1 R-CNN
      5.2.2 Fast R-CNN
      5.2.3 Faster R-CNN
      5.2.4 YOLO
   5.3 Conclusion
   5.4 Future Work

6 IT Infrastructure
   6.1 Introduction
   6.2 Mission
   6.3 Communication System Components
   6.4 System Models
      6.4.1 Communication Model
      6.4.2 Drone Controlling Model
      6.4.3 Raspberry Pi Camera Controlling Model
   6.5 Use Cases
      6.5.1 Use Cases Diagram
      6.5.2 Use Cases' Course of Events
   6.6 Architecture Diagrams
      6.6.1 Context Diagram
      6.6.2 Level-1 Architecture Diagram
      6.6.3 Computations Subsystem Level-2 Diagram
   6.7 Mobile Application UI
   6.8 Progress Made
      6.8.1 Local Streaming
      6.8.2 Secure File Transmission
      6.8.3 Automatic System Control
   6.9 Future Work

7 Summary
1 INTRODUCTION
The relationship between humans and wildlife runs deep, and it has evolved alongside the
development of human societies. Humans hunted animals for food, used them for
transportation and protection, and studied them for science and medicine. To achieve that,
humans had to observe animals closely to understand their behaviour. In modern times,
however, human activity has negatively affected many wild animal populations, which has
increased the need for new and effective ways to study wildlife for the mutual benefit of
humans and animals. Wildlife conservation methods have developed from human surveying
to camera traps to using GPS and satellites to harvest all sorts of information about the
population in question [45][15][60]. In our project, an airborne coast monitoring system for
sea turtle detection and species classification, we will explore the effectiveness of combining
two wildlife research techniques: monitoring using a UAV-mounted camera, and using
computer vision for animal detection and classification.

There are two sources of motivation for this project. The first is the status of the sea turtle
populations, focusing on the two species found in Northern Cyprus: the Loggerhead sea
turtle, which is vulnerable, and the Green sea turtle, which is endangered [57]. The second is
avoiding the shortcomings of other research methods, such as the long time and effort spent
on camera trapping [15], the need to make direct contact with the animals to install tracking
devices [60], and the long time spent watching and analyzing the footage captured by a
drone-mounted camera [53].

This is a multidisciplinary project; the team comprises two students from Aerospace
Engineering, two students from Electrical Engineering, and two students from Computer
Engineering. In this report, we present the precise problem definition of our project, the user
requirements, the limitations, and the group responsibilities in section 2. In section 3, the
related work is covered. In sections 4, 5, and 6, each group states its mission, design
methodologies, progress made, and future plans.

2 PROBLEM DEFINITION AND GROUP RESPONSIBILITIES


2.1 THE OBJECTIVE
The aim of the project is to monitor sea turtles by means of a UAV and use image recognition
techniques to automatically recognize and classify the observed turtles. The project will rely
on an IT infrastructure consisting of a server to manage the computation and the storage of
the feeds, and a mobile application to act as an interface from which the concerned user can
view the collected information and manage various aspects of the operation.

2.2 FUNCTIONAL USER REQUIREMENTS


1. The drone should be able to fly above the target area where it can observe sea turtles.

2. The user should be able to define a path for the drone's trip, and the drone should be
able to follow that path autonomously.

3. The drone should have a mounted camera that is able to communicate in real-time with
the cloud server and stream the captured footage.

4. By analyzing the footage, the server should be able to recognize sea turtles and identify
their species.

5. The server should store the relevant footage along with any other accompanying
information on a permanent storage.

6. The user should have an interface (as in a mobile application) to access the stored
information, and to specify the drone's path.

2.3 NON-FUNCTIONAL USER REQUIREMENTS


1. The drone should not disturb the turtles (10m height is recommended).

2. The mounted camera should be able to capture high resolution footage to facilitate
recognizing the turtles.

3. The communication channel between the drone and the server should be able to
transmit information with a high data rate to ensure the smoothness of the operation.

4. The communication between all the system's components (specifically between the
drone and the server) should be implemented in an energy efficient manner.

2.4 LIMITATIONS
1. We will be focusing only on the Mediterranean species: the Loggerhead Sea Turtles and
the Green Sea Turtles.

2. We will be using a normal camera, not a thermal one.

3. We will be focusing on monitoring turtles during daylight.

4. We will be focusing on monitoring turtles on the shorelines.

5. We will be focusing on Alagadi beach.

6. We will have an on-site station to manage the communication between the drone and
the server.

2.5 GROUP RESPONSIBILITIES
ASE

• Design the drone (a multirotor UAV, such as a quadrotor) and the payload system.

• The design should consider the environmental conditions.

• The design constraint is to maximize flight time with minimum cost, given standard
flight conditions in turtle monitoring season.

• Develop the flight control system for autonomous operation of the drone.

• Develop the path planning algorithm to cover the monitoring area.

EEE

• Design a sea turtle detection system using machine vision technology.

• Prepare the turtle/not turtle data set.

• Sea turtles should be identified with a confidence score (e.g. 75% confidence).

• Develop a sea turtle species classification system using machine vision technology.

• Prepare the data sets for each species.

• Species classification algorithm should give a confidence score for each species.

CNG

• Develop effective real-time communication between the drone and cloud for detection
and classification purposes.

• Build on-site wireless communication scheme.

• Build user interfaces for the cloud system including a mobile application to allow
researchers to see an overview of the data collected and also the results/findings of the
classification system.

• Develop an automatic controlling scheme for the drone (path planning and various
mission attributes) on the cloud system, which will be accessible through the developed
user interface.

3 LITERATURE REVIEW
With the development of technology and the need to monitor wild animals, different
methodologies have been introduced. Each method of surveying animal populations has its own
advantages and disadvantages. Hence, the goal of the study, the budget, the duration, and the
accuracy required are the essential factors in determining which method to use over the others.
We will discuss four different methods used in the literature: human survey,
GPS tracking, manually controlled drones, and computer vision with UAVs.

3.1 HUMAN SURVEY


This is one of the basic methods of monitoring the behaviour of wild animals. Wild animals
are observed by tagging them and tracking their activities based on the observations. In
recent years, human surveying has been used alongside other integrated methods. However,
these methods have their drawbacks. For large populations, researchers tend to overlook
some animals [58]. They can also be expensive; in Sumatra, Indonesia, a three-year survey
cycle can cost up to $250,000, and surveys are not conducted at the required frequency.
Moreover, remote forested areas have never been surveyed. Although aerial surveys can
overcome some of these problems, they have their own limitations, such as the height
limitation, the area covered, and difficulty in landing [69]. These drawbacks led to the
development of other survey methodologies, such as GPS-based systems.

3.2 GPS MONITORING


This technology uses external devices attached to the animals themselves to track them. It is
very reliable and useful for studies of animal habitats and migration patterns. Although these
surveys are accurate, they are expensive and are done on a small portion of the population.
Off-the-shelf GPS loggers, developed for the human tracking market, cost approximately
$70 USD. Such devices require extended surface time to acquire satellite ephemerides and
almanac data. For diving mega-vertebrates, however, other GPS logging technologies are
required, such as FastLoc. Such tags are relatively expensive due to technology copyrighting;
individually testing tags for a specific taxon costs approximately $3,300 USD (pers. comm.),
with an additional cost per study animal of approximately $5,000 USD (pers. comm.). In
addition, there is the monthly cost of the Argos terminals. Moreover, the loss of some devices
due to battery failure or other external circumstances is possible. To conclude, GPS tags
provide the best overall solution in most tracking scenarios where sufficient funds are
available, as they provide both high-resolution data and information on the habitat [60].

3.3 MANUALLY CONTROLLED DRONE


With the improvements made to drones, their prices have kept decreasing, which has led to
drones being used more frequently in a variety of areas. Since drones provide an efficient way
of surveillance, it is only logical to see them being used in wildlife monitoring [33]. Many of
these monitoring applications are done by transmitting the video captured by the drone in
real time to a wildlife expert, who analyzes the footage and makes the necessary
differentiation while also controlling the drone. This method proves to be one of the most
efficient methods. Moreover, it saves time that would otherwise be spent trying to find the
animals on foot, and it reduces the chance of disturbing the wildlife [32]. Furthermore, using
this method decreases the cost of the monitoring process. However, there is still a lot of
manual work (like reviewing the footage manually) to be done by humans [14][53].

3.4 COMPUTER VISION WITH UAV


Monitoring the abundance of animal populations has become an essential task for
environmental and ecological studies. The approach presented here utilizes UAVs with a
single on-board camera to capture footage, which is then examined to detect, count, and map
the presence of animals. Some might argue that human counting is more accurate; however,
human accuracy is a controversial aspect, since human counts are widely believed to be
subjective and prone to errors. Human counting also requires labor and is quite tedious.
Automated counts, on the other hand, were within 95 to 98 percent of human estimates.
More specifically, at Saddle Island the automated model showed only a slight error, estimating
894 seals compared to an analyst count of 913 (about 98 percent), and at Hay Island it detected
2188 seals compared to an analyst count of 2311 (about 95 percent). Thus, this approach
appears to be accurate even though it still needs some human involvement [58].

To summarize, the traditional methods are slow, labor intensive, and expensive. Integrated
GPS technology gives more accurate results but requires substantial funding. The proposed
computer-vision approach, on the other hand, is time and effort efficient and relatively
inexpensive. In this project, we are proposing an automatic monitoring system for sea turtles.
The system includes an autonomous flight control system with a path-planning algorithm. It
detects and classifies the data using computer vision, and it includes a cloud server where the
data is collected and the image processing is done in real time. Through the developed mobile
application, the researcher can arrange a mission, access real-time analyses for the current
mission, and review previous missions.

4 UAV PLATFORM

4.1 INTRODUCTION
An unmanned aerial vehicle (UAV), more commonly referred to as a drone, is an aircraft
controlled through a remote control without the need for a pilot on board. It has a wide range
of applications, such as recreation, surveillance, and remote sensing. Drones can be classified
into two categories: fixed-wing drones and multirotor drones. The first advantage of using a
multirotor drone over a fixed-wing drone is VTOL (vertical takeoff and landing). VTOL is
advantageous especially since no runway is required for the aircraft to fly. This creates a
flexible takeoff and landing zone, ultimately saving the time and money that would otherwise
have been spent on constructing an appropriate runway. A multirotor also offers the ability to
hover, which plays a major role in this project when stopping the drone over a detected sea
turtle. The last, but one of the most important, advantages of the multirotor is its shape.

Figure 4.1: Similarity between a fixed wing aircraft and a bird

A fixed-wing aircraft has a shape similar to that of a natural aerial predator. Seagulls, vultures,
and frigate birds are all known to hunt hatchlings as they struggle through the sand and make
their way to the water [29]. Thus, it is better to use a multirotor system, reducing the
disturbance caused to the turtles by the drone.

After deciding to use a multirotor system, it is important to define how many rotors it will
have. Multirotors usually have 3 to 8 rotors, and as the number of rotors increases, the weight
and the size also increase. Increasing the number of rotors is beneficial since it increases the
lift force the multirotor can produce, thereby increasing its capacity to carry more payload.
However, more rotors also add more weight, so there is a trade-off in terms of cost and
endurance. More rotors are more expensive to manufacture or purchase off-the-shelf, and
using more rotors decreases the flight time, as more power is needed to keep them running.
From this perspective a tricopter seems to be the best option; on the other hand, a tricopter is
really problematic in terms of its control. To be more specific, yaw control is harder to
manage on a tricopter because it requires tilting the rear rotor and motor.

Figure 4.2: Basic Quadcopter Motions[8]

For a quadcopter, however, as seen in Figure 4.2, the two pairs of propellers produce
counter-rotating moments; therefore, to turn, one pair of propellers speeds up while the other
pair slows down, which produces the net moment that results in a yaw motion. This is a much
simpler and more elegant design, as it involves a smaller number of moving parts. Thus, for
this project the best option is a quadcopter, which is easy to control, small in size, and light in
weight.
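To make the yaw, pitch, and roll mechanism above concrete, the toy Python sketch below mixes normalized stick commands into four motor commands for an X-configuration quadcopter. The function name, gains, and sign conventions are our own illustrative choices, not those of any particular flight controller.

```python
def mix_quadcopter_x(throttle, roll, pitch, yaw):
    """Toy motor mixer for an X-configuration quadcopter.

    throttle is in [0, 1]; roll, pitch and yaw are normalized commands.
    Opposite motor pairs spin in opposite directions, so adding the yaw
    term to one diagonal pair and subtracting it from the other speeds
    one pair up and slows the other down, producing the yaw moment
    described above.
    """
    m_front_left = throttle + roll + pitch - yaw
    m_front_right = throttle - roll + pitch + yaw
    m_rear_left = throttle + roll - pitch + yaw
    m_rear_right = throttle - roll - pitch - yaw
    # Clamp to the valid normalized ESC command range.
    return [min(max(m, 0.0), 1.0)
            for m in (m_front_left, m_front_right, m_rear_left, m_rear_right)]

# A pure yaw command changes the two diagonal motor pairs in opposite directions.
print(mix_quadcopter_x(throttle=0.5, roll=0.0, pitch=0.0, yaw=0.2))
```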

4.2 REQUIREMENTS AND CONSTRAINTS


For this project, the following requirements and constraints were determined in order to create a
better solution and meet the user needs.

1. Endurance ≥ 25 minutes

2. Able to carry the payload, i.e. the Raspberry Pi and the camera

3. Flight altitude ≥ 10 meters

4. Able to fly autonomously

4.3 COMPETITOR STUDY

4.3.1 INTRODUCTION

In the design of an aircraft, it is important to survey the market and examine the existing
products. This gives the designer an idea of the competitors' specifications, allowing the
designer to focus on their imperfections and improve on them to create a better product.
Thus, the final product is more appealing to the customers.

4.3.2 COMPARISON OF GENERIC PARAMETERS

The following table shows a comparison of 6 different drones on the market in terms of their
generic parameters.

Table 4.1: General Comparison[24][44][4][20][12][27]

Parameter | DJI Phantom 3 | Veho Muvi Q-1 | AEE Technology AP11 | DJI Mavic Pro | X Star Premium | DJI Phantom 4 Pro
Number of Rotors | 4 | 4 | 4 | 4 | 4 | 4
Maximum Takeoff Weight (g) | 1236 | 1650 | 1650 | 743 | 1600 | 1375
Maximum Horizontal Speed (m/s) | 16 | 20 | 20 | 18 | 16 | 20
Maximum Ascent Speed (m/s) | 5 | 6 | 6 | 5 | 6 | 6
Maximum Descent Speed (m/s) | 3 | 4 | 4 | 3 | 3 | 3
Endurance (mins) | 25 | 20 | 20 | 27 | 25 | 30
Range (km) | 24 | 24 | 24 | 29 | 24 | 36
Maximum Service Ceiling (km) | 6 | unkn. | unkn. | 5 | 1 | 6
Cost ($) | 513 | 425 | 699 | 999 | 999 | 1499

4.3.3 COMPARISON OF GEOMETRIC PARAMETERS

The following table shows a comparison of 6 different drones on the market in terms of their
geometric parameters.

Table 4.2: Geometric Comparison[24][44][4][20][12][27][66][21][23][26]

Parameter | DJI Phantom 3 | Veho Muvi Q-1 | AEE Technology AP11 | DJI Mavic Pro | X Star Premium | DJI Phantom 4 Pro
Rotor Radius (mm) | 120 | 127 | 127 | 105 | 120 | 120
Propeller | DJI Phantom 3 9450 | VXD-A001 PR | Aee Technology Aj01 | Mavic Pro 8330 | X Star Premium Propellers | DJI 9450S
Height (mm) | 216 | 188 | 370 | 198 | 216 | 254
Diagonal Distance (mm) | 350 | 450 | 450 | 335 | 350 | 350

4.3.4 COMPARISON OF PERFORMANCE PARAMETERS

The following table shows a comparison of 6 different drones on the market in terms of their
performance parameters.

Table 4.3: Performance Comparison[24][44][4][20][12][27][65][25][19][3][22][10][11]

Parameter | DJI Phantom 3 | Veho Muvi Q-1 | AEE Technology AP11 | DJI Mavic Pro | X Star Premium | DJI Phantom 4 Pro
Battery (mAh) | 4480 | 6800 | 6800 | 3830 | 4900 | 5870
Battery Weight (g) | 365 | 311 (2) | 535 | 240 | 445 | 468
Motor Name | Phantom 3 Pro/Adv Mod. 2312 960KV Motor | unkn. | unkn. | DJI Spark 1504S Motor | 2212 Motor | DJI Phantom 4 2312S Motor

(2) Data taken from a counterpart.

4.3.5 COMPARISON OF EQUIPMENT

The following table shows a comparison of 6 different drones on the market in terms of their
equipment.

Table 4.4: Equipment Comparison[24][44][4][20][12][27][61]

Parameter | DJI Phantom 3 | Veho Muvi Q-1 | AEE Technology AP11 | DJI Mavic Pro | X Star Premium | DJI Phantom 4 Pro
Controller Range (km) | 4 | 0.7 | 0.7 | 7 | 2 | 7
Camera Resolution | 4K | 4K | 4K (3) | C4K | 4K | C4K
FOV (4) (°) | 94 | 100 | 147 | 79 | 108 | 84
Zoom | None | 12x Digital | 2x Digital | 2x Digital | 2x Digital | None
Velocity Range (5) | ≤8 m/s | unkn. | unkn. | ≤6 m/s | unkn. | ≤14 m/s
Live Video Transmitter | 720p | None | 1080p | 1080p | 1080p | 1080p
Video Transmitter Range (m) | 800 | None | 400 | 80 | 2000 | 2000-3500
Positioning System | GPS, ATTI, IOC | GPS | GPS | GPS, GLONASS | GPS, GLONASS | GPS, GLONASS
SAS (6) | 3-axis | 3-axis | 3-axis | 3-axis | 3-axis | 3-axis

4.3.6 CONCLUSION

In this chapter, the selected 6 drones were investigated and compared in terms of their
different specifications. The drones were selected with a focus on quadcopters equipped with
a camera. This comparison sheds light on the challenge we will face in trying to transmit 4K
video while keeping the cost at a minimum.

(3) Equipped with S71.
(4) Field of View.
(5) Shows the operation range of the detection system.
(6) Stability Augmentation System.

4.4 COMPONENT ANALYSIS

4.4.1 INTRODUCTION

Analysis of the drone's components plays an important role in the weight and cost
estimation. Thus, this chapter will focus on the battery, frame, ESC (electronic speed
controller), motor, propeller, and flight controller of the drone.

4.4.2 BATTERY

The battery is the main power source of the drone, and its capacity is a major factor in
determining the endurance. However, as the capacity of the battery increases, its weight also
increases (see sub-chapter 4.3.4). Moreover, as mentioned in [6], to increase the flight time by
50%, a battery of roughly double the capacity is needed. Thus, an optimization should be
done to find the right battery for the desired endurance.
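As a rough sanity check of this trade-off, hover endurance can be estimated from the usable battery energy and the power needed to lift the total weight. The sketch below is a simplified estimate under assumed values: the thrust efficiency (grams of thrust per watt) and the usable-capacity fraction are placeholder numbers, not measurements.

```python
def estimate_hover_time_min(capacity_mah, voltage_v, total_weight_g,
                            thrust_efficiency_g_per_w=8.0,
                            usable_fraction=0.8):
    """Very rough hover endurance estimate for a multirotor.

    At hover, total thrust equals total weight, so electrical power is
    roughly weight / efficiency; endurance is usable energy / power.
    """
    energy_wh = capacity_mah / 1000.0 * voltage_v * usable_fraction
    hover_power_w = total_weight_g / thrust_efficiency_g_per_w
    return 60.0 * energy_wh / hover_power_w

# Example: a 6800 mAh pack at ~14.8 V lifting about 1500 g in total
# (worst-case frame estimate plus camera and wiring) gives roughly 26 minutes.
print(round(estimate_hover_time_min(6800, 14.8, 1500), 1))
```

Because a bigger battery also adds weight, and therefore hover power, doubling the capacity in such a model does not double the endurance, which is consistent with the observation above.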

The following table shows the comparison of 5 different batteries for drones in terms of
their capacity, weight and cost.

Table 4.5: Battery Performance Analysis[24][65][20][10][27]

Parameter | DJI Phantom 3 | AEE Technology AP11 | DJI Mavic Pro | X Star Premium | DJI Phantom 4 Pro
Capacity (mAh) | 4480 | 6800 | 3830 | 4900 | 5870
Weight (g) | 365 | 535 | 240 | 445 | 468
Cost ($) | 149 | 129 | 89 | 99 | 169

4.4.3 FRAME

When building a drone, a frame is used to mount all the components on. Drone frames are
categorized as freestyle or racing [7]. Since our purpose is to build a drone with a camera on
it, we will focus on freestyle frames.

The following table shows a comparison of 5 different drone frames in terms of their
material, size, weight, and cost.
Table 4.6: Frame Performance Analysis[55][62][18][16][51]

Parameter | Rotor Riot CL1 | Strix Screech | DJI FlameWheel F450 | MQC Fusion | Impulse RC Alien RR5
Material | Carbon Fiber | Carbon Fiber | Glass Fiber | Carbon Fiber | Carbon Fiber
Weight (g) | 142 | 99 | 282 | 124 | 135
Diagonal Size (cm) | 25.4 | 23.4 | 45 | 21 | 22.5
Cost ($) | 34.99 | 39.99 | 32.99 | 99.99 | 129.99

Building the frame from scratch might also be possible. However, given the quality of the
already existing frames, building something better in performance would pose a challenge.

4.4.4 MOTOR

Drones generally use brushless motors, since they are "fast, powerful, agile, highly efficient
and extremely reliable" [17]. However, selecting a motor is not as easy as selecting the other
parts: motors vary greatly in power, size, and power consumption. Thus, in this sub-chapter
we will mostly consider motors that are already being used in commercial drones close to our
desired result.

The following table shows a comparison of 5 different drone motors in terms of their KV
rating, stator size, weight, and cost.

Table 4.7: Motor Performance Analysis[22][19][25][56][63]

Parameter | DJI Phantom 3 | DJI Spark | DJI Phantom 4 Pro | Rotor Riot Hypetrain | TBS Ethix Mr Steele
Motor KV (rpm/V) | 960 | 2100 | 800 | 2450 | 2345
Stator Size | 2312 | 1504 | 2312 | 2306 | 2306
Weight (g) | 60 | 18 | 60 | 30 | 28.5
Cost ($) | 23.94 | 29.99 | 21.75 | 25.99 | 26.99

4.4.5 PROPELLER

Propellers are key to the drone's flight: the rotational power of the motors is converted into a
thrust force, through which vertical and horizontal motion can be controlled.

The following table shows an analysis of 5 propellers and compares them in terms of material,
radius, number of blades, pitch, weight, and cost.

Table 4.8: Propeller Performance Analysis[26][21][36][38][37]

Propeller | DJI Phantom 9450 | DJI Mavic Air | DD-6X3-GFN | TP-6X4.5X3 GFN | DD7X4.5
Material | unkn. | unkn. | Glass Fiber Composite | Glass Fiber Composite | Glass Fiber Composite
Radius (mm) | 120 | 67 | 76 | 76 | 89
Number of blades | 2 | 2 | 2 | 3 | 2
Pitch (in) | unkn. | unkn. | 3 | 4.5 | 4.5
Weight (g) | 13 | 2.4 | 3.3 | 6.1 | 6.8
Cost ($/set) | 8 | 12 | 1.6 | 1.9 | 2.1

4.4.6 FLIGHT CONTROLLER

For the flight controller, selecting the avionics that will be used on the drone helps to
estimate the cost and the weight of the drone. Thus, this sub-chapter focuses on the avionics
along with their purpose, cost, and weight. A list of the required avionics is as follows [5]:

4.4.6.1 ACCELEROMETER AND GYROSCOPE

This unit is responsible for measuring the inertial forces caused by accelerations. Using this
information, the orientation, position and the speed of the drone can be calculated given the
initial conditions.

4.4.6.2 COMPASS (MAGNETOMETER)

This unit is responsible for providing a reference direction that does not rotate. This helps in
calculating the heading of the drone when used along with the gyroscope.

4.4.6.3 BAROMETER

This unit is responsible for measuring the pressure around the drone. This information can
be used to calculate the altitude of the drone using the standard atmosphere tables. It is also
possible to use this unit to assist stabilization, since the pressure is constant while hovering or
cruising at a fixed altitude.
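As an illustration of the pressure-to-altitude conversion, the sketch below uses the International Standard Atmosphere barometric formula; the sea-level pressure is an assumed default that would normally be calibrated on the ground before a flight.

```python
def pressure_to_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Convert static pressure to altitude with the ISA barometric formula
    (valid in the troposphere, far beyond this project's altitudes)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

# A reading about 1.2 hPa below the sea-level reference corresponds to
# roughly 10 m of altitude, the recommended monitoring height.
print(round(pressure_to_altitude_m(1012.05), 1))
```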

4.4.6.4 DATALOGGING (BLACK BOX)

This unit is responsible for saving flight data to be used as a debugging tool if something goes
wrong.

4.4.6.5 SENSOR FUSION

This unit is responsible for taking the measurements from the other sensors and combining
them to ultimately create an accurate reading of the aircraft's state. This enhances the
stability of the aircraft.
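A minimal example of such fusion is a complementary filter, which blends the gyroscope's fast but drifting angle estimate with the accelerometer's noisy but drift-free gravity reference. The sketch below is illustrative only; the blending factor is an assumed value, and this is not the filter of any specific flight controller.

```python
import math

def complementary_filter(prev_angle_deg, gyro_rate_dps, accel_x_g, accel_z_g,
                         dt_s, alpha=0.98):
    """One pitch-axis update step of a complementary filter.

    The gyro rate is integrated for short-term accuracy, while the
    accelerometer's gravity direction slowly corrects long-term drift.
    """
    gyro_angle = prev_angle_deg + gyro_rate_dps * dt_s
    accel_angle = math.degrees(math.atan2(accel_x_g, accel_z_g))
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# 100 Hz update with a small rotation seen by both sensors.
print(round(complementary_filter(0.0, 5.0, 0.02, 0.99, dt_s=0.01), 3))
```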

4.4.6.6 GPS

This unit is responsible for calculating the location of the drone using satellites. It can also
give an estimate of the altitude; however, its measurements are not very accurate.

4.4.6.7 SATELLITE NETWORKS

This unit is an improved version of GPS, e.g. GLONASS. It picks up more satellite signals
than regular GPS, resulting in higher accuracy.

4.4.6.8 TELEMETRY MODULES

This unit is responsible for sending and receiving the signals between the ground station
and the drone.

4.4.6.9 POWER MODULE

This unit is responsible for regulating the current that the flight controller receives. This also
enables the measurement of the battery capacity and the voltage.

4.4.6.10 R/C INPUTS AND OUTPUTS

This unit is responsible for taking the inputs from the remote control and giving the
corresponding outputs to the ESCs.

4.4.6.11 PPM ENCODER

This unit is responsible for taking multiple PWM outputs from the R/C receiver and
combining them into a single PPM output. This is needed for flight controllers that
require a single PPM input.

4.4.6.12 DISTANCE SENSORS

This unit is responsible for measuring the distance of the drone relative to a surface. This
device can be used to have a precise altitude measurement.

4.4.6.13 OPTIC FLOW

This unit is responsible for maintaining position in case it is not possible to obtain a reliable
GPS signal. This unit helps when flying indoors or in a thick tree cover.

4.4.6.14 MARKET ANALYSIS

The avionics mentioned above are connected to a processor on which the sensor fusion runs,
creating what is called a flight controller. It is possible to buy each avionic unit separately and
construct a flight controller, or to buy an already assembled, ready-to-use flight controller.

The following table shows an analysis of each scenario and compares them in terms of
weight and cost.

Table 4.9: Market Analysis[9][39][70][13][35][64][43][34][46][47]

Avionic | Model | Weight | Cost
Accelerometer + Gyroscope | GY-521 6 DOF MPU-6050 | 4.5 g | $3.99
Magnetometer | MAG3110 3-Axis Digital Magnetometer | 5.7 g | $8.20
Barometer | WINGONEER GY68 BMP180 HMC5983 Digital Barometer | 4 g | $7.99
GPS Module | AZDelivery GPS Module | 5 g | $8.99
Telemetry Module | Hobbypower 3DR Radio Wireless Telemetry | 5 g | $26.45
Power Module | Hobbypower APM Power Module V1.0 XT60 | 27 g | $11.70
Distance Sensor | MakerFocus Infrared Distance Sensor Module | 14 g | $44.99
Total | - | 64.2 g | $112.31
Alternative #1 (8) | HGLRC F4 V6 PRO Flight Controller + 5.8GHz VTX w/ OSD, PDB, BEC | 15.6 g | $63.99
Alternative #2 | Pixhawk 4 | 15.8 g | $211

(8) Alternatives #1 and #2 are already constructed, ready-to-use flight controllers.

From Table 4.9 it can be deduced that a fully assembled and integrated flight controller
provides many benefits: it is lighter, faster in response time, and more efficient in terms of
performance than the individual components assembled together. It also comes fully
assembled and ready to use, saving the time that would have been spent integrating and
assembling the components piece by piece onto a board. Depending on the selection, it can
even be cheaper, as some individual components are expensive because of their high
performance. However, it should be noted that individual components can be replaced in
case of a failure or upgraded as needed, which is either not possible or really challenging
when a fully integrated flight controller is used.

In this project, it is more advantageous to use an integrated flight controller instead of
trying to construct one. This is mainly due to our lack of experience in assembling a flight
controller, time constraints, the better efficiency of integrated boards, and their already
established multi-platform capabilities.

4.4.7 ESC

The ESC is the unit responsible for taking the inputs from the flight controller and giving the
necessary outputs to the motors. ESCs can be either separate (4 ESCs for 4 motors) or
combined into one (1 ESC for 4 motors). When selecting an ESC, it is important to select it in
accordance with the motor, propeller, and battery information, since all three of these
components affect how much load the ESC will carry. Also, the type of the drone, racing or
freestyle, plays a role in the ESC selection. For the market analysis, ESCs with a current rating
of around 30 A will be considered, since this gives a good baseline and covers most cases.

The following table shows an analysis of 5 different ESCs and compares them in terms of
current rating, dimensions, weight, and cost.

Table 4.10: ESC Performance Analysis[41][50][49][28][48]

Parameter | Wraith 32 | Racerstar RS30A V2 | Racerstar Star30 | DYS F30A | RaceFlight Bolt V2
Type | Single | Single | 4 in 1 | 4 in 1 | 4 in 1
Current Rating (Amps) | 35 | 30 | 30 | 30 | 30
Dimensions (mm) | 15x30 | 13x28 | 36x36 | 40x43 | 31x31
Weight (g) | 4.8 | 6.26 | 25 | 21 | 15.5
Cost ($) | 13.73 | 13.21 | 40.65 | 44.99 | 65.99

From the analysis in Table 4.10 it can be said that 4-in-1 ESCs are better to use, since they are
mostly lighter and cheaper than four separate ESCs. Also, using a 4-in-1 ESC means the ESC
will be mounted close to the flight controller, resulting in a much more compact design.

4.4.8 CONCLUSION

In this chapter, the various components of the quadcopter were investigated. The main
purpose of this investigation was to understand each component in depth in terms of its
performance, weight, and cost. Note that no final decisions on the components were made in
this chapter; the selection of the parts has only just started. However, some decisions have
been made: an integrated flight controller will be used instead of building one, and a 4-in-1
ESC will be used instead of 4 separate ESCs.

4.5 COST & WEIGHT ESTIMATION

4.5.1 INTRODUCTION

While designing a drone, it is important to estimate its weight and cost. By estimating the
cost, adjustments can be made to decrease it if necessary. By estimating the weight, the
propeller and the motor can be selected accordingly, which in turn defines the selection of
the ESC.

4.5.2 PART BY PART COST & WEIGHT ESTIMATION

For this analysis, the focus will be on the worst-case scenario in terms of weight.

The following table shows a rundown of the worst-case scenario and estimates the weight and
cost.

Table 4.11: Weight and Cost Estimation[65][18][25][26][46][47][49]

Component | Name/Spec | Weight (g) | Cost ($)
Battery | 6800 mAh | 535 | 129
Frame | FlameWheel | 282 | 32.99
Motor x4 | 800 KV 2312 | 240 | 87
Propeller x4 | 120 mm radius | 52 | 32
Flight Controller | Pixhawk 4 | 15.8 | 211
ESC | Star30 | 25 | 40.65
Total | - | 1149.8 | 532.64

Note that the weight estimation does not include the camera, wiring, and joints.

4.5.3 COMPARISON WITH THE COMPETITORS

After the weight and cost estimation, it is wise to compare the design with the competitors to
see its advantages and disadvantages.

The following table compares the design with the competitors in terms of battery, motor,
weight, and cost.

Table 4.12: Comparison of the Design with Competitors

Parameter | DJI Phantom 3 | Veho Muvi Q-1 | AEE Tech. AP11 | DJI Mavic Pro | X Star Premium | DJI Phantom 4 Pro | Design
Battery (mAh) | 4480 | 6800 | 6800 | 3830 | 4900 | 5870 | 6800
Motor | 960 KV 2312 | unkn. | unkn. | unkn. KV 1504 | unkn. KV 2212 | 800 KV 2312 | 800 KV 2312
Weight (g) | 1236 | 1650 | 1650 | 743 | 1600 | 1375 | 1149.8
Cost ($) | 513 | 425 | 699 | 999 | 999 | 1499 | 532.64

4.5.4 CONCLUSION

In this sub-chapter, an estimate of the weight and cost was made using the information
gathered in chapter 4.4. These results were then compared with the competitors from
chapter 4.3.

4.6 CONCLUSION

In this chapter, a competitor study, a component analysis, and a first cost and weight
estimation were carried out. Competitors were selected to be as close as possible to the final
desired product, and components were analysed the same way. Using this information, a first
estimate of the weight and the cost was made. Note that this estimate is not an exact or final
value and is open to change as the design progresses. However, for now, it gives an idea of
what the final product will be like compared to the already existing quadcopters.

4.7 FUTURE WORK


The list of the work that is left to be done for UAV Platform is as follows,

• Finalization of the component selection,

• Sizing the aircraft using the finalized component selection,

• 3-View drawings of the final design,

• Modelling the aircraft if possible,

• Acquiring the selected components and building the drone.

5 COMPUTER VISION AND MACHINE LEARNING

5.1 INTRODUCTION

Our main mission is detecting Mediterranean sea turtles and classifying them into two
main classes: Green sea turtles and Loggerhead sea turtles. Object detection has become a
very popular computer vision application in many different projects. In this chapter, we
analyze different proposed algorithms and detection methods, underscore the advantages
and drawbacks of each of them, and then present our approach and the related future work.

5.2 OBJECT DETECTION

Recently, object detection approaches have plateaued, with the best performing algorithms
mainly combining low-level image features with high-level context. In this section, we
introduce and review the current state of the art. There are four significant and recently
implemented object detection algorithms; chronologically, they are R-CNN, Fast R-CNN,
Faster R-CNN, and YOLO. Different approaches have made different compromises in their
implementations. Generally, the two main parameters are precision, represented by mAP
(mean average precision), and speed, indicated by FPS (frames per second). It is also worth
mentioning that the precision of these object detection algorithms is measured by testing
them on canonical standard data sets, one of which is the PASCAL VOC dataset.

5.2.1 R-CNN

The approach used in R-CNN combines two key insights: (1) one can apply high-capacity
convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and
segment objects, and (2) when labeled training data is scarce, supervised pre-training
followed by domain-specific fine-tuning boosts performance. Compared to OverFeat, a
previously proposed sliding-window detector based on a similar CNN architecture, R-CNN
wins by a large margin on the 200-class ILSVRC2013 detection data set [30].

The object detection system consists of three modules. The first generates
category-independent region proposals, which define the set of candidate detections
available to the detector. The second module is a large convolutional neural network that
extracts a fixed-length feature vector from each region. Finally, the third module comprises a
set of class-specific linear SVMs (support vector machines) [30].

Moving on to the module design, which is mainly "region proposals and feature extraction": a
4096-dimensional feature vector is extracted from each region proposal using the Caffe [30]
implementation of the CNN described by Krizhevsky et al. [30]. Features are computed by
forward-propagating a mean-subtracted 227 x 227 RGB image through five convolutional
layers and two fully connected layers.

Figure 5.1: R-CNN[30]

Another attribute worth discussing is the run-time analysis. Two properties make detection
efficient: first, all CNN parameters are shared across all categories, so the time spent
computing region proposals and features is amortized over all classes; second, the feature
vectors computed by the CNN are low-dimensional, which requires less memory to store the
linear predictors.

Figure 5.2: Convolution layers of RCNN[30]

Finally, at test time, a selective search is run on the test image to extract 2000 candidate
region proposals, which are then warped into a square and fed into a CNN that produces a
4096-dimensional feature vector as output. The CNN acts as a feature extractor, and the
extracted features are fed into an SVM to classify the presence of the object within each
candidate region proposal, as explained by Ross Girshick et al. [30]. In addition to predicting
the existence of an object within the region proposal, the algorithm also predicts four offset
values to increase the precision of the bounding box. For example, given a region proposal,
the algorithm may have predicted the presence of a person, but the face of that person within
that region proposal could have been cut in half; the offset values help adjust the bounding
box of the region proposal.
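To make the role of these four offset values concrete, the sketch below applies the standard R-CNN-style bounding-box regression to a proposal: the first two offsets shift the box centre as a fraction of its size, and the last two rescale the width and height in log-space. The box format and numbers are illustrative.

```python
import math

def apply_box_offsets(proposal, offsets):
    """Refine a region proposal (x, y, w, h) with offsets (tx, ty, tw, th)."""
    x, y, w, h = proposal
    tx, ty, tw, th = offsets
    cx, cy = x + w / 2.0, y + h / 2.0            # proposal centre
    cx, cy = cx + tx * w, cy + ty * h            # shifted centre
    w, h = w * math.exp(tw), h * math.exp(th)    # rescaled size
    return (cx - w / 2.0, cy - h / 2.0, w, h)

# A proposal that cut the object short is widened and nudged to the right.
print(apply_box_offsets((100, 50, 80, 60), (0.10, 0.0, 0.35, 0.05)))
```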

The drawbacks of R-CNN can be summarized as follows:

- It still takes a considerable amount of time to train the network, as 2000 region proposals
per image have to be classified.

- Real-time operation is impossible, as it takes around 47 seconds per test image. Thus it is
not suitable for our project.

- Bad candidate region proposals can be generated, because the selective search is a fixed
algorithm; no learning happens at that stage.

5.2.2 FAST R-CNN

This algorithm is the successor of R-CNN, in which several innovations are used to improve
training and test speed while also improving accuracy [31]. Fast R-CNN trains the VGG16
(Visual Geometry Group) network 9 times faster than R-CNN [31]. As shown in Figure 5.3, it
starts by taking the whole image and producing a set of object proposals using selective
search, which R-CNN uses as well, to find out whether there might be an object according to
the later stages of the process. Then, the VGG16 network processes the full image with several
convolutional and max pooling layers to produce a convolutional feature map. For each
proposed object, a region of interest (RoI) pooling layer extracts a fixed-size feature vector
from the feature map. In the next step, these features are fed into fully connected layers that
end in two output branches: one produces a softmax probability estimate over the classes in
that RoI, plus a catch-all background class, and the other produces four real numbers that
encode the coordinates of the bounding box for the detected object [31].

Figure 5.3: Fast R-CNN[31]

The main reason behind the speed of Fast R-CNN is that it doesn't feed 2000 region proposals
to the CNN every time. Instead, the convolution operation is done once per image, producing
a feature map from the original image onto which the RoIs are projected [31].
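The operation that makes this single convolution pass possible is RoI pooling, which turns an arbitrarily sized region of the shared feature map into a fixed-size output. The NumPy sketch below is a deliberately simplified, single-channel illustration (plain max pooling, no interpolation), not the library implementation.

```python
import numpy as np

def roi_max_pool(feature_map, roi, output_size=(2, 2)):
    """Pool one channel of a feature map over an RoI to a fixed grid.

    feature_map: 2-D array (H, W) of convolutional features.
    roi:         (x0, y0, x1, y1) in feature-map coordinates.
    """
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1]
    rows = np.array_split(np.arange(region.shape[0]), output_size[0])
    cols = np.array_split(np.arange(region.shape[1]), output_size[1])
    return np.array([[region[np.ix_(r, c)].max() for c in cols] for r in rows])

fm = np.arange(64, dtype=float).reshape(8, 8)   # fake 8x8 feature map
print(roi_max_pool(fm, roi=(1, 2, 7, 8)))       # 6x6 region -> 2x2 output
```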

5.2.3 FASTER R-CNN

In this algorithm, there are two main parts that form the base of the algorithm: the detector,
which consists of the Fast R-CNN algorithm, and the region proposal network (RPN), as
clearly shown in Figure 5.4. Unlike the two previous methods, this algorithm doesn't use an
external region proposal method; instead, it has a built-in region proposer that works on the
output of the CNN layers [54].

Figure 5.4: Faster R-CNN[54]

Starting with the CNN layer: in Faster R-CNN, two CNNs have been investigated, ZFNet and
VGG-16. ZFNet is an improvement on AlexNet, which was one of the most successful
convolutional neural networks for feature detection, built in 2012. In ZFNet, the filter size of
the first layer is changed from 11x11 with stride 4 to 7x7 with stride 2, and the network has 5
convolutional layers plus 3 fully connected layers, as shown in Figure 5.5. It also uses denser
convolutions in layers 3, 4, and 5 [2, 71].

Figure 5.5: ZFNet[2, 71]

VGG-16 is a convolutional neural network model proposed by a team from the University of
Oxford, and it achieves very high accuracy. It classifies images into 1000 classes, taking an
RGB image of size 224x224 as input. It consists of 16 layers, where the convolution layers use
3x3 filters and the max pooling layers use 2x2 windows, followed by fully connected layers at
the end. Figure 5.6 shows the structure of VGG-16 [59].

Figure 5.6: VGG-16[40]

The next part of the Faster R-CNN algorithm works just like Fast R-CNN: it takes the data
from the feature map, and the region proposal network outputs the proposals in a new layer,
which are passed on to the classifier that detects and specifies, according to the features, to
which class the image region belongs [54].
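One way to picture how the RPN proposes regions without an external method is through its grid of anchor boxes: at every feature-map location it scores and refines a small set of boxes with different scales and aspect ratios. The sketch below generates such a grid; the stride, scales, and ratios are typical default values used purely for illustration.

```python
import itertools

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """List centre-format anchor boxes (cx, cy, w, h) over a feature map."""
    anchors = []
    for y, x in itertools.product(range(feat_h), range(feat_w)):
        cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
        for scale, ratio in itertools.product(scales, ratios):
            w = scale * ratio ** 0.5     # keep the anchor area near scale^2
            h = scale / ratio ** 0.5
            anchors.append((cx, cy, w, h))
    return anchors

# A 38x50 feature map (roughly a 600x800 input at stride 16) gives
# 38 * 50 * 9 = 17100 candidate anchors for the RPN to score.
print(len(generate_anchors(38, 50)))
```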

5.2.4 YOLO

One of the most revolutionary object detection algorithms is YOLO (You Only Look Once). It
is a relatively new approach to object detection. Prior work on object detection mainly
repurposes classifiers to perform detection. Instead, YOLO frames object detection as a
regression problem: it spatially separates bounding boxes and associates class probabilities
with them. A single neural network predicts bounding boxes and class probabilities directly
from full images in one evaluation [52]. Because the whole detection pipeline is a single
network, it can be optimized end-to-end directly on detection performance.

YOLO is a unified architecture, which is why it is extremely fast. The base YOLO model
processes images in real time at 45 frames per second [52]. A smaller version of the network,
Fast YOLO, processes an astounding 155 frames per second while still attaining double the
mAP of other real-time detectors [52]. Although YOLO is extremely fast, one of its main
drawbacks is that it makes more localization errors compared to state-of-the-art detection
systems; however, it is less likely to predict false positives on background. Finally, it is worth
emphasizing that YOLO learns very general representations of objects, as it outperforms other
detection methods, including DPM and R-CNN, when generalizing from natural images to
other domains like artwork.

The reason why YOLO surpasses many other detection algorithms is that most previous
object detection algorithms use regions to localize the object, scanning many candidate
regions of the image. YOLO, instead, concentrates on the parts of the image that have high
probabilities of containing an object, which makes it a faster approach. In YOLO, a single
convolutional network predicts the bounding boxes and the class probabilities for these
boxes.

In more detail, YOLO takes an image and splits it into an SxS grid; within each grid cell, a
number of bounding boxes (m) are taken. For each bounding box, the network outputs class
probabilities and offset values for the box. The bounding boxes with a class probability above
a threshold value are then selected and used to locate the object within the image [52].
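As a concrete illustration of that last selection step, the sketch below keeps only the predicted boxes whose best class probability exceeds a threshold; the 0.5 threshold, the box format, and the two example classes are arbitrary placeholder choices, not values from the YOLO paper.

```python
def filter_detections(boxes, class_probs, threshold=0.5):
    """Keep boxes whose best class probability exceeds the threshold.

    boxes:       list of (x, y, w, h) predictions from the grid cells.
    class_probs: one list of per-class probabilities per box.
    Returns (box, class_index, score) tuples for the surviving boxes.
    """
    kept = []
    for box, probs in zip(boxes, class_probs):
        best = max(range(len(probs)), key=lambda i: probs[i])
        if probs[best] >= threshold:
            kept.append((box, best, probs[best]))
    return kept

# Two candidate boxes; only the confident detection of class 0 survives.
boxes = [(0.2, 0.3, 0.1, 0.1), (0.6, 0.5, 0.2, 0.3)]
probs = [[0.10, 0.05], [0.85, 0.10]]   # e.g. [green turtle, loggerhead]
print(filter_detections(boxes, probs))
```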

Although YOLO is orders of magnitude faster (45 frames per second) than other object
detection algorithms, another limitation of the YOLO algorithm is that it struggles with small
objects within the image, due to the spatial constraints of the algorithm.

In conclusion, Fast YOLO is the fastest general-purpose object detector in the literature at 155
frames per second, and YOLO pushes the state of the art in real-time object detection. YOLO
also generalizes well to new domains, making it ideal for applications that rely on fast, robust
object detection.

Figure 5.7: YOLO[52]

Figure 5.8: YOLO examples[52]

5.3 CONCLUSION

After discussing the four previous algorithms, we noted some main differences between them
regarding two main aspects: speed and precision. As shown in Figure 5.9, which presents
precision-versus-recall curves on the Picasso dataset, precision drops faster as recall increases
for R-CNN compared to the YOLO algorithm [52].
On the other hand, Figure 5.10 gives the big picture of the differences between the algorithms:
Faster R-CNN has the highest average precision and is the fastest in comparison to the other
R-CNN variants, and it is clear that R-CNN, introduced in 2013, is not efficient, as it has a
lower mAP than all the other algorithms [52].

Figure 5.9: Picasso[52]

Figure 5.10: Big Picture[52]

5.4 FUTURE WORK

In further work, we will focus on two main things. First, we will adapt the proposed
algorithms to get the best performance in terms of accuracy and time. Second, we will
propose and work on different ways of classifying between the two species of sea turtles:
Loggerhead sea turtles and Green sea turtles. We will also work on deploying deep learning
methods in our approach, and we will train many different classes on our machine.

Figure 5.11: Deep Learning Impact[67]

6 IT INFRASTRUCTURE

6.1 INTRODUCTION

So far, we have covered everything related to the UAV and the computer vision methodologies.
Now it is time to discuss how the system's components will communicate with one another,
how the operation of the system will flow, and how the harvested information will be
presented to the users of the system. In this chapter, we state the precise mission of the
computer engineering team and discuss our solution as presented in the models used, the use
cases, and the architecture of the system. Finally, we present the progress that we have made
so far and the plans for the near and the distant future.

6.2 MISSION
The mission of the Computer Engineering team is as follows:

1. Develop effective real-time communication between the drone and cloud for detection
and classification purposes.

2. Build on-site wireless communication scheme.

3. Build user interfaces for the cloud system including a mobile application to allow
researchers to see an overview of the data collected and also the results/findings of the
classification system.

4. Develop an automatic controlling scheme for the drone (path planning and various
mission attributes) on the cloud system, which will be accessible through the developed
user interface.

6.3 COMMUNICATION SYSTEM COMPONENTS


1. Raspberry Pi.

2. Camera.

3. On-site laptop/router.

4. WIFI repeaters.

5. Cloud server.

6. Mobile application.

6.4 SYSTEM MODELS

6.4.1 COMMUNICATION MODEL

In order for the drone to be able to communicate with the server, it should have a reliable
and fast internet connection. Due to regional constraints, a cellular connection with the
mentioned characteristics is not available. Thus, we will be using WiFi (IEEE 802.11) for the
drone. To achieve that, we will use a laptop/router to broadcast a wired internet connection,
and use WiFi repeaters to amplify the signal so that it is reachable by the drone throughout its
path.

Figure 6.1: Communication Model

6.4.2 DRONE CONTROLLING MODEL

In order to achieve the desired semi-autonomous operating mode for the drone, we need to
send the flight details to the drone. These details include the path, duration, altitude, speed,
and start and end points. The user will be able to specify these parameters when arranging a
flight through the mobile application; however, the user will not communicate directly with
the drone. Instead, the flight controls will be sent to the server, which will have access to the
drone controller's API (the suggested controller so far is the Pixhawk).
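A minimal sketch of how the server could push such a flight plan to a Pixhawk-based controller is shown below, assuming the DroneKit-Python library and a MAVLink connection; the connection string, coordinates, and altitude are placeholder values, not the final mission parameters.

```python
# Sketch only: assumes the dronekit and pymavlink packages are installed
# and the flight controller is reachable at the given connection string.
from dronekit import connect, Command
from pymavlink import mavutil

def upload_waypoint_mission(connection_string, waypoints, altitude_m):
    """Upload a simple waypoint mission (list of (lat, lon) pairs)."""
    vehicle = connect(connection_string, wait_ready=True)
    cmds = vehicle.commands
    cmds.clear()
    for lat, lon in waypoints:
        cmds.add(Command(
            0, 0, 0,
            mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
            mavutil.mavlink.MAV_CMD_NAV_WAYPOINT,
            0, 0, 0, 0, 0, 0,
            lat, lon, altitude_m))
    cmds.upload()          # send the mission to the flight controller
    vehicle.close()

# Placeholder path segment at the 10 m monitoring altitude.
upload_waypoint_mission("udp:127.0.0.1:14550",
                        [(35.33, 33.48), (35.33, 33.49)], 10)
```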

6.4.3 RASPBERRY PI CAMERA CONTROLLING MODEL

In order to control the camera, we are using a Raspberry Pi. The Raspberry Pi is essentially a
tiny computer that can run various operating systems, which makes it easier to program than
Arduino boards. Also, its huge popularity makes it easier to find support when we face
problems. The server will be able to send the Raspberry Pi various commands to control the
operation of the camera. When a flight starts, the server will command the Pi to send images
from the camera periodically. If the user, through the app, wants to see a live stream from the
camera, the server will send a command to the Pi to start a live stream while it keeps sending
the images normally. However, taking pictures while streaming will drop a number of frames
from the resulting video.
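A minimal sketch of this periodic-capture behaviour on the Raspberry Pi might look like the following, assuming the picamera and requests packages and an HTTP upload endpoint on the server; the endpoint URL, capture interval, and resolution are placeholder values.

```python
# Sketch only: the upload URL and interval are placeholders, and error
# handling (e.g. lost connectivity during flight) is omitted for brevity.
import io
import time

import requests
from picamera import PiCamera

UPLOAD_URL = "https://example-server/api/upload"   # placeholder endpoint
CAPTURE_INTERVAL_S = 5

def capture_and_upload_loop():
    camera = PiCamera(resolution=(1920, 1080))
    time.sleep(2)                       # let the camera sensor settle
    while True:
        frame = io.BytesIO()
        camera.capture(frame, format="jpeg")
        frame.seek(0)
        # Send the frame to the server for detection and classification.
        requests.post(UPLOAD_URL, files={"image": ("frame.jpg", frame)})
        time.sleep(CAPTURE_INTERVAL_S)

if __name__ == "__main__":
    capture_and_upload_loop()
```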

6.5 USE CASES

6.5.1 USE CASES DIAGRAM

The following diagram shows the primary use cases of the system. The primary actor is the
user, and the secondary actors are the Raspberry Pi and the database. Please note that the use
case diagram only includes functionalities that are initiated by the actors, not by the server.
Thus, cases like testing the viability of the mission and various other drone control cases are
not included, as the server starts them when the flight time comes. These cases will be
considered in the architecture diagrams.

Figure 6.2: Use Cases Diagram

To elaborate on the "request data" case, the user can request to see the findings from
previous sightings (i.e. the relevant pictures and the extracted information). The user can also,
while a flight is underway, see the findings in real time along with the status of the drone
(position, battery level, etc.). To arrange a mission, the user must specify the start time, the
path, the height, and the speed. When the flight time comes, the server will first make sure
that all the components are running and accessible; it will also check whether the current
status of the drone will enable it to carry out the mission.

6.5.2 USE CASES' COURSE OF EVENTS

The following tables describe the interaction between the actors and the system for each
use case.

1- REQUEST CURRENT MISSION'S DATA:

The following table shows the interaction between the researcher and the system to retrieve
the details of a mission currently underway. The details of the mission are updated
continuously.

Table 6.1: Request Real-time data for the current mission


Actor Intentions | System Responsibilities
1) The user chooses to check current mission details
2) Server receives the request
3) Server checks current mission status
4) Server retrieves data from Database
5) Server keeps updating the user application with the data
6) Application displays the data to the user

2- REQUEST DATA FROM PREVIOUS MISSIONS:

The following table shows the interaction between the researcher and the system to retrieve statistical data about selected missions.

Table 6.2: Request Data from the previous missions


Actor Intentions | System Responsibilities
1) User specifies the missions to be checked
2) Server receives the request
3) Server retrieves data from Database
4) Server sends the data to the application
5) Application displays the data to the user

3- REQUEST STREAMING:

The following table shows the interaction between the researcher and the system to access the live stream of a mission. This feature is provided only upon the researcher’s request; normally, the Raspberry Pi uploads the pictures to the server automatically and they are evaluated without streaming.

Table 6.3: Request Streaming
Actor Intentions | System Responsibilities
1) User requests streaming for the current mission
2) Server receives the request
3) Server checks current mission status
4) Server requests streaming from the Raspberry Pi
5) Server receives streaming
6) Server forwards the stream to the application
7) User has access to the stream

4- SEND IMAGES PERIODICALLY:

The following table shows the interaction of the Raspberry Pi and its camera with the server. The Raspberry Pi starts taking pictures and uploading them after permission from the server. Once the Raspberry Pi is powered on, it checks the connection with the server and waits until it receives a mission from the server.

Table 6.4: Send Images Periodically


Actor Intentions | System Responsibilities
1) Raspberry Pi checks server connection
2) Server receives a connection request
3) Server responds with the mission, if there is one
4) Raspberry Pi takes pictures periodically
5) Raspberry Pi uploads the pictures to the server
6) Server stores the received images
7) Server processes the received images
8) Server stores the results
9) Server sends an end-of-mission message to the Raspberry Pi
10) Raspberry Pi stops uploading images
11) Raspberry Pi stops taking pictures

5- ARRANGE NEW MISSION:

The following table shows the interaction between a researcher and the system to arrange a new mission. The system takes the details from the researcher and evaluates them. However, the server will not check whether the drone’s status allows it to carry out the mission until the mission’s starting time comes.

Table 6.5: Arrange a new mission


Actor Intentions | System Responsibilities
1) User chooses to arrange a new mission
2) User specifies the research station
3) User specifies the altitude
4) User specifies the path
5) User specifies the start time
6) User sends the request to the server
7) Server receives the request
8) Server checks that no other mission is scheduled at the same time
9) Server sends the result back to the user
10) User receives the server’s response

6.6 ARCHITECTURE DIAGRAMS
In this section, we will present the design of the system at different levels of detail.

6.6.1 CONTEXT DIAGRAM

The following diagram shows the interactions between the actors and the server.

Figure 6.3: Context Diagram

The concern of each of the shown interfaces is as follows:


I1 The Mobile Application Back-end
This interface is responsible for processing the commands that the user issues through the mobile application (an illustrative sketch of how it could be exposed is given after the interface descriptions below).

I2 The Camera Control Interface


This interface is responsible for communicating with the Raspberry Pi to send camera-related controls and receive images and videos.

I3 The Drone Control Interface


This interface is responsible for communicating with the drone’s controller to send
flight commands and receive the status of the drone.

I4 The Database Communication Interface
This interface handles store and retrieve operations with the database.
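As an illustration of how one of these interfaces could be exposed, the sketch below frames I1, the mobile application back-end, as a small set of HTTP endpoints using Flask; the routes and payload fields are assumptions, not the final API.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/missions", methods=["POST"])
def arrange_mission():
    """I1: receive a mission request (start time, path, altitude, speed) from the app."""
    details = request.get_json()
    # ... validate the details and store them through the database interface (I4)
    return jsonify({"status": "scheduled"}), 201

@app.route("/missions/<int:mission_id>/findings", methods=["GET"])
def mission_findings(mission_id):
    """I1: return the stored findings of a previous mission to the app."""
    # ... fetch the pictures and extracted information through I4
    return jsonify({"mission": mission_id, "findings": []})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)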

6.6.2 LEVEL-1 ARCHITECTURE DIAGRAM

The following diagram shows the internal structure of the server, presented in terms of its constituent subsystems and the communication between them.

Figure 6.4: Level-1 Architecture Diagram

We can see that the server consists of five subsystems. The computations subsystem is responsible for retrieving the raw images taken in flight, processing them using computer vision, and storing the harvested information in the database. It is also responsible for starting a scheduled flight when its time comes, but before starting, it performs all the preliminary checks to make sure that the flight is possible. We will look at it in more detail in the next diagram.
The remaining four subsystems are self-explanatory. Please note that each of them implements one of the interfaces (I1 to I4) that were discussed in the previous level for communicating with the external entities.
The concern of each of the shown interfaces (the new ones) is as follows:

I5 Camera Control Subsystem In-Bound


This interface (shown in two boxes) contains the functions to be called by other subsystems that wish to control the functionality of the camera.

I6 Drone Control Subsystem In-Bound
This interface contains the functions to be called by other subsystems that wish to communicate with the drone controller.

I7 Database Image Transfer Interface


This interface handles image store and retrieval requests done by other subsystems.

I8 Database Data Transfer Interface


This interface handles research findings store and retrieval requests done by other
subsystems.

I9 Database Mission Info Transfer Interface


This interface handles flight information store and retrieval requests done by other subsystems.

I10 Mobile Application Mission Arrangement Out-Bound

This interface contains the outgoing calls needed for the mobile application to schedule a flight and store it in the database.

I11 Mobile Application Data Requesting Out-Bound

This interface contains the outgoing calls needed for the mobile application to request the research findings from the database.

I12 Mobile Application Stream Requesting Out-Bound

This interface contains the outgoing calls needed for the mobile application to request a live stream from the drone mid-flight.

I13 Camera Control Image Sending Out-Bound

This interface contains the outgoing calls needed to send images from the Raspberry Pi to the database.

I14 Computations Subsystem Database Out-Bound

This interface contains the outgoing calls needed for the computations subsystem to store and retrieve images and data, and to get the details of the scheduled flight.

I15 Computations Subsystem Camera Control Out-Bound

This interface contains the outgoing calls needed for the computations subsystem to communicate with the camera control subsystem, for testing the connection and commanding the Raspberry Pi to start sending images when the flight begins.

I16 Computations Subsystem Drone Control Out-Bound

This interface contains the outgoing calls needed for the computations subsystem to communicate with the drone control subsystem, for sending the mission details and getting the status of the drone at any given time.

6.6.3 COMPUTATIONS SUBSYSTEM LEVEL-2 DIAGRAM

The following diagram shows the internal structure of the computations subsystem.

Figure 6.5: Computations Subsystem

As we can see, our subsystem is further divided into two units. The image processing unit is responsible for running the computer vision algorithm on the input images and outputting the results. The controller is responsible for feeding images into the image processing unit and receiving the results; it is also responsible for checking the viability of a scheduled flight and sending the required commands to all the subsystems in order to start the flight. Please note that the controller implements the interfaces (I14 to I16) that were presented in the previous level for communicating with other subsystems. A small sketch of this interaction is given after the interface descriptions below.
The concern of each of the shown interfaces (the new ones) is as follows:
I17 Controller Unit Image Processing Out-Bound
This interface contains the outgoing calls needed for the controller to feed images into the image processing unit and receive their output.

I18 Image Processing Unit Interface


This interface contains functions that are called to communicate with the image
processing unit.
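The sketch below makes the controller/image-processing split concrete; the class and function names are assumptions, and the processing step is a stub standing in for the computer vision algorithm.

class ImageProcessingUnit:
    """Wraps the computer-vision model behind the image processing interface (I18)."""
    def process(self, image_path):
        # ... run detection/classification on the image; a stub result for now
        return {"turtles": [], "tracks": [], "nests": []}

class ComputationsController:
    """Feeds raw images into the processing unit (I17) and stores the results."""
    def __init__(self, unit, store_result):
        self.unit = unit
        self.store_result = store_result    # placeholder database-out-bound callback

    def handle_new_image(self, image_path):
        result = self.unit.process(image_path)
        self.store_result(image_path, result)

controller = ComputationsController(ImageProcessingUnit(), store_result=print)
controller.handle_new_image("/data/missions/42/img_00017.jpg")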

6.7 MOBILE APPLICATION UI


The mobile application will be developed to allow researchers to see an overview of the data collected, as well as the results and findings of the classification system. The following figures show the tentative UI of the application. The researcher has the option to arrange a new mission, view statistics for selected missions, and access live missions.

Figure 6.6: Mobile application UI

Figure 6.7: Mobile application streaming UI

6.8 PROGRESS MADE
6.8.1 LOCAL STREAMING

Live streaming was established using a Raspberry Pi camera. A program was built on the Raspberry Pi to create a local server, which all devices connected to the same network can access. The program makes the stream available on this local server over HTTP; hence, all devices on the same network can access the stream as long as the program is running, as illustrated below. The implementation of the code followed online tutorials [68].
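The actual implementation followed the MJPEG-over-HTTP approach of [68]; as a shorter illustration of the same idea of pushing the camera feed onto the local network, the sketch below streams H.264 video from the Pi camera to a single client over a plain TCP socket (the port and duration are placeholders).

import socket
import picamera

with picamera.PiCamera(resolution=(1280, 720), framerate=24) as camera:
    server = socket.socket()
    server.bind(("0.0.0.0", 8000))           # assumed port on the local network
    server.listen(1)
    conn, _ = server.accept()                # wait for one viewer to connect
    stream = conn.makefile("wb")
    try:
        camera.start_recording(stream, format="h264")
        camera.wait_recording(60)            # keep streaming for 60 seconds
    finally:
        camera.stop_recording()
        stream.close()
        conn.close()
        server.close()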

Figure 6.8: Streaming Figure

6.8.2 SECURE FILE TRANSMISSION

Secure file transmission was tested between two local hosts using SCP (Secure Copy Protocol). SCP securely transfers files between a local host and a remote host. However, different protocols, such as HTTPS, will be used for faster transmission between the Raspberry Pi and the server.
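For reference, the example below shows how a capture could be copied to the server with SCP from a Python script; the remote host and paths are placeholders, and SSH keys are assumed to be set up in advance.

import subprocess

def send_capture(local_path, remote="researcher@server.local:/data/incoming/"):
    """Copy a file to the server over SCP (relies on pre-configured SSH keys)."""
    subprocess.run(["scp", local_path, remote], check=True)

send_capture("/home/pi/captures/img_00017.jpg")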

6.8.3 AUTOMATIC SYSTEM CONTROL

A program was implemented to run when the Raspberry Pi starts up. The program checks the system configuration and the server connection, automatically updates the system files from an online repository on GitHub, and then runs the system programs. It starts streaming automatically and checks for updates periodically [1].
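A condensed sketch of that start-up behaviour is shown below; the repository path, check interval, and the stream.py entry point are assumptions rather than the actual layout of the repository in [1].

import subprocess
import time

REPO_DIR = "/home/pi/Auto-Management-Raspberry-pi"   # assumed checkout location
CHECK_INTERVAL = 600                                  # seconds between update checks

def update_system_files():
    """Pull the latest system files from GitHub; return True if anything changed."""
    before = subprocess.check_output(["git", "-C", REPO_DIR, "rev-parse", "HEAD"])
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    after = subprocess.check_output(["git", "-C", REPO_DIR, "rev-parse", "HEAD"])
    return before != after

if __name__ == "__main__":
    streamer = subprocess.Popen(["python3", f"{REPO_DIR}/stream.py"])  # hypothetical entry point
    while True:
        time.sleep(CHECK_INTERVAL)
        if update_system_files():                     # restart the streamer on updates
            streamer.terminate()
            streamer = subprocess.Popen(["python3", f"{REPO_DIR}/stream.py"])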

6.9 FUTURE WORK


The following is a list of the project’s milestones:

• Automating image transfer.

• Communicating with the cloud server.

• Incorporating an object detection program into the system.

• A simple version of the app.

• On-site testing using repeaters.

• Streaming control system.

• Drone control system.

• Incorporating the full detection and classification algorithm.

• Fully functioning app.

7 SUMMARY
In this report, we covered different aspects of our project. First, we defined the aim of the project, the motivation behind it, the requirements, the agreed-upon limitations, and the responsibilities of each group. Then, we presented a compilation of related work in the field of wildlife conservation technologies. Finally, each team covered its design methodology, progress, and future work.

REFERENCES
[1] K. E. A. Darwish, “Auto-management-raspberry-pi,”
/https://github.com/aafdarweesh/Auto-Management-Raspberry-pi, 2018.

[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional


neural networks,” 2012. [Online]. Available: /https://papers.nips.cc/paper/4824-
imagenet-classification-with-deep-convolutional-neural-networks.pdf

[3] G. Ace, “Gens Ace 7.4V 6800mAh 50C 2S LiPo battery T plug for 1/8 1/10 RC car,” n.d.
[Online]. Available:
/https://www.banggood.com/tr/Gens-Ace-7_4V-6800mah-50C-2S-Lipo-Battery-T-
Plug-for-RC-Car-p-1237707.html?gmcCountry=TR&currency=TRY&createTmp=
1&utm_source=googleshopping&utm_medium=cpc_ods&utm_content=
heath&utm_campaign=pla-heli-tr-pc-tr&gclid=Cj0KCQjw08XeBRC0ARIsAP_
gaQDkAN7vtHcpCuDCixC8lwBv9bHcqUszQoyfwcY4ocacle4UOgqQdUIaArP4EALw_
wcB&cur_warehouse=CN

[4] AEE, “Toruk ap11 pro,” n.d. [Online]. Available:


/https://www.aeeusa.com/aee-drones/toruk-ap11-pro-6/toruk-ap11-pro.html

[5] ANON, “Beginners guide to drone autopilots (flight controllers) and how they work,” Nov
2015, [Online; posted 11-Nov-2016]. [Online]. Available: /https://www.dronetrest.com/t/
beginners-guide-to-drone-autopilots-flight-controllers-and-how-they-work/1380

[6] ——, “Quadcopter parts list | what you need to build a diy quadcopter,” n.d. [Online].
Available: /www.quadcoptergarage.com/quadcopter-parts-list-what-you-need-to-
build-a-diy-quadcopter/

[7] ——, “Ultimate fpv shopping list frame,” n.d. [Online]. Available:
/https://www.fpvknowitall.com/ultimate-fpv-shopping-list-frame/#freestyle-frames

[8] ——, “F13: Quadcopter,” Dec 2013, [Online; posted 08-Dec-2013]. [Online]. Available:
/http://socialledge.com/sjsu/index.php/F13:_Quadcopter

[9] Arduino, “Gy-521 6 dof mpu-6050 module 3 axis accelerometer gyroscope module for
arduino,” Nov 2016, [Online; posted 26-Nov-2016]. [Online]. Available:
/https://www.amazon.com/GY-521-MPU-6050-Accelerometer-Gyroscope-Arduino/
dp/B01M279JEU/ref=sr_1_6?s=electronics&ie=UTF8&qid=1541449200&sr=
1-6&keywords=MPU6050+6+dof

[10] AUTEL, “X-star premium battery,” n.d. [Online]. Available:


/https://shop.autelrobotics.com/products/battery

[11] ——, “X-star premium discussion,” n.d. [Online]. Available:


/https://autelpilots.com/threads/motor-specs.934/

[12] ——, “X-star premium,” n.d. [Online]. Available:
/https://www.autelrobotics.com/x-star-premium/

[13] AZDelivery, “Azdelivery gps module for arduino and raspberry pi,” July 2018, [Online;
posted 11-July-2018]. [Online]. Available:
/https://www.amazon.com/AZDelivery-âŋŘâŋŘâŋŘâŋŘâŋŘ-Module-Arduino-
Raspberry/dp/B07F8H539K/ref=sr_1_1_sspa?ie=UTF8&qid=1541449979&sr=
8-1-spons&keywords=gps+module&psc=1

[14] E. Bevan, T. Wibbels, B. M. Najera, M. A. Martinez, L. A. Martinez, F. I. Martinez, J. M.


Cuevas, T. Anderson, A. Bonka, M. H. Hernandez et al., “Unmanned aerial vehicles (uavs)
for monitoring sea turtles in near-shore waters,” Marine Turtle Newsletter, vol. 145, pp.
19–22, 2015.

[15] A. C. Burton, E. Neilson, D. Moreira, A. Ladle, R. Steenweg, J. T. Fisher, E. Bayne, and


S. Boutin, “Review: Wildlife camera trapping: a review and recommendations for linking
surveys to ecological processes,” Journal of Applied Ecology, vol. 52, no. 3, pp. 675–685, 6
2015. [Online]. Available: /https://doi.org/10.1111/1365-2664.12432

[16] M. Q. Club, “Mqc fusion,” n.d. [Online]. Available:


/https://store.rotorriot.com/mqc-fusion/

[17] DJI, “Inside a drone âĂŞ brushless motors,” May 2016, [Online; posted 11-May-2016].
[Online]. Available: /https://store.rotorriot.com/impulse-rc-alien-rr5-kit/

[18] ——, “Dji flamewheel f450 basic kit,” n.d. [Online]. Available:
/https://www.getfpv.com/dji-flamewheel-f450-basic-kit.html

[19] ——, “Dji spark 1504s motor,” n.d. [Online]. Available:


/https://urun.n11.com/multikopter/dji-spark-1504s-motor-P267745676?gclid=
Cj0KCQjw08XeBRC0ARIsAP_gaQC1cb9psH7QsCfWtlVrG0L8_
e8CwRtuhF6ttXuniSdytbcjifHlZAEaArUyEALw_wcB&gclsrc=aw.ds

[20] ——, “Mavic Pro specs,” n.d. [Online]. Available:


/https://www.dji.com/mavic/info#specs

[21] ——, “Dji mavic pro 8330 yedek pervane,” n.d. [Online]. Available:
/https://www.klasfoto.com.tr/urun/dji-mavic-pro-8330-yedek-pervane?gclid=
Cj0KCQjw08XeBRC0ARIsAP_gaQBkvgQZX3QcN_
ogXT32uMGYDS844DGtIPRZjnH7ZQoogHIzduFcHxcaAgsEEALw_wcB

[22] ——, “Phantom 3 pro/adv - 2312 960kv motor (cw),” n.d. [Online]. Available:
/https://www.pilottr.com/urun/dji-phantom3-2312-cw-motor.html?gclid=
Cj0KCQjw08XeBRC0ARIsAP_gaQA1JiL7_zTsPj8eF-r4beT7-PKJLi0BqAOHMYe0J9_
E8vLmepA8n94aAhe9EALw_wcB

[23] ——, “Phantom 3 - 9450 self-tightening propellers - (1cw - 1ccw),” n.d. [Online].
Available: /https://www.pilottr.com/urun/dji-phantom3-9450-pervane.html?gclid=

Cj0KCQjw08XeBRC0ARIsAP_gaQDd8pyRizDVFuP9KNJRCqrYKIxhQo3pj1-su-
b3LDb9P9JaxstnaRUaAnJ4EALw_wcB

[24] ——, “Phantom 3 SE specs,” n.d. [Online]. Available:


/https://www.dji.com/phantom-3-se/info?lang=cn#specs

[25] ——, “Dji phantom 4 2312s motor cw part24,” n.d. [Online]. Available:
/https://urun.n11.com/multikopter/dji-phantom-4-2312s-motor-cw-part24-
P264853503?gclid=Cj0KCQjw08XeBRC0ARIsAP_gaQD0W9wOujV8qvvkFc-
ZqWd6CSBt5mBntw999eI9oz5GP2iZ06wOtB0aAgM_EALw_wcB&gclsrc=aw.ds

[26] ——, “Phantom 4 - 9450s quick release propellers (1cw - 1ccw),” n.d. [Online]. Available:
/https://www.pilottr.com/urun/dji-p4-9450s-pervane.html?gclid=
Cj0KCQjw08XeBRC0ARIsAP_gaQBQwqABnCUNzjA0D6a4T7v_9EkC1_
jItyzBMKfnqkFQgBuOkPFbX80aApEREALw_wcB

[27] ——, “Phantom 4 Pro V2.0 specs,” n.d. [Online]. Available:


/https://www.dji.com/phantom-4-pro-v2/info#specs

[28] DYS, “Dys f30a 4-in-1 esc blheli dshot,” n.d. [Online]. Available:
/https://www.getfpv.com/electronics/electronic-speed-controllers-esc/dys-f30a-4-in-
1-esc-blheli-s-dshot.html

[29] K. Gardner, “List of predators of baby sea turtles,” n.d. [Online]. Available:
/https://animals.mom.me/list-predators-baby-sea-turtles-8011.html

[30] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object
detection and semantic segmentation,” 2014.

[31] R. Girshick, “Fast r-cnn,” 2015.

[32] L. F. Gonzalez, G. A. Montes, E. Puig, S. Johnson, K. Mengersen, and K. J. Gaston,


“Unmanned aerial vehicles (uavs) and artificial intelligence revolutionizing wildlife
monitoring and conservation,” Sensors, vol. 16, no. 1, 2016. [Online]. Available:
/http://www.mdpi.com/1424-8220/16/1/97

[33] Y. Ham, K. K. Han, J. J. Lin, and M. Golparvar-Fard, “Visual monitoring of civil


infrastructure systems via camera-equipped unmanned aerial vehicles (uavs): a review
of related works,” Visualization in Engineering, vol. 4, no. 1, p. 1, Jan 2016. [Online].
Available: /https://doi.org/10.1186/s40327-015-0029-z

[34] HGLRC, “Hglrc f4 v6 pro flight controller + 5.8ghz vtx w/ osd, pdb, bec,” n.d. [Online].
Available: /https://www.getfpv.com/electronics/electronic-flight-controllers/hglrc-f4-
v6-pro-flight-controller-5-8ghz-vtx-w-osd-pdb-bec.html

[35] Hobbypower, “Hobbypower 3dr radio wireless telemetry kit 915mhz module for apm2.6
apm2.7 pixhawk px4,” n.d. [Online]. Available: /https://www.amazon.com/Hobbypower-
Wireless-Telemetry-915Mhz-Pixhawk/dp/B00Q7VC7AC/ref=sr_1_2_sspa?ie=
UTF8&qid=1541450192&sr=8-2-spons&keywords=radio+telemetry&psc=1

[36] HQProp, “Dd-6xc-gfn,” n.d. [Online]. Available:
/http://www.hqprop.com/en/MultiRotorsd.php?tid=289&pid=933

[37] ——, “Dd7x4.5,” n.d. [Online]. Available:


/http://www.hqprop.com/en/MultiRotorsd.php?tid=290&pid=935

[38] ——, “Tp-6x4.5x3-gfn,” n.d. [Online]. Available:


/http://www.hqprop.com/en/MultiRotorsd.php?tid=289&pid=946

[39] KNACRO, “Knacro mag3110 3-axis digital magnetometer i2c interface development
board,” n.d. [Online]. Available: /https://www.amazon.com/KNACRO-MAG3110-
Magnetometer-Interface-Development/dp/B073WQC6R9/ref=sr_1_9?ie=UTF8&qid=
1541449470&sr=8-9&keywords=3+Axis+Magnetometer

[40] leonardblier, “A brief report of the heuritech deep learning meetup #5,” 2013. [Online].
Available: /https://blog.heuritech.com/2016/02/29/a-brief-report-of-the-heuritech-
deep-learning-meetup-5/

[41] LHI, “Lhi 4pcs wraith32 - 32bit blheli 32 esc 35a dshot1200 built in current sensor for fpv
quadcopter,” n.d. [Online]. Available: /https:
//www.amazon.com/LHI-4PCS-Wraith32-DSHOT1200-Quadcopter/dp/B0716QS6B4

[42] S.-L. Long and N. A. Azmi, “Using photographic identification to monitor sea turtle
populations at perhentian islands marine park in malaysia,” Herpetological Conservation
and Biology, vol. 12, no. 2, pp. 350–366, 2017.

[43] MakerFocus, “Makerfocus infrared distance sensor module with 850nm infrared led,
detection distance up to 10m for uav, balanced vehicle and smart home,” April 2018,
[Online; posted 20-April-2018]. [Online]. Available:
/https://www.amazon.com/MakerFocus-Infrared-Distance-Detection-Balanced/dp/
B07CJYGJRY/ref=sr_1_1_sspa?ie=UTF8&qid=1541450420&sr=8-1-spons&keywords=
infrared+distance+sensor&psc=1

[44] MUVI, “Q1 drone,” n.d. [Online]. Available:


/http://www.veho-muvi.com/muvi_product/q-drone/

[45] M. Norton-Griffiths and J. Grimsdell, Counting animals, ser. Handbook (Serengeti


Ecological Monitoring Programme). Serengeti Ecological Monitoring Programme,
African Wildlife Leadership Foundation, 1978. [Online]. Available:
/https://books.google.com.cy/books?id=P3VFAQAAIAAJ

[46] Pixhawk, “Pixhawk 4,” Nov 2018, [Online; posted 06-Nov-2018]. [Online]. Available:
/https://docs.px4.io/en/flight_controller/pixhawk4.html

[47] ——, “Pixhawk 4,” n.d. [Online]. Available:


/https://shop.holybro.com/pixhawk-4beta-launch_p1089.html

[48] RaceFlight, “Raceflight bolt v2 30a 4in1 esc,” n.d. [Online]. Available:
/https://www.getfpv.com/electronics/electronic-speed-controllers-esc/raceflight-
bolt-v2-30a-4in1-esc.html

[49] Racerstar, “Racerstar star30 30a blheli 2-5s 4 in 1 detachable esc dshot600 ready,” n.d.
[Online]. Available:
/https://www.banggood.com/Racerstar-Star30-30A-Blheli_S-2-5S-4-In-1-Detachable-
ESC-Support-Dshot600-Ready-for-Racing-Drone-p-1142340.html?p=
V5211124150930201808&cur_warehouse=CN

[50] ——, “Racerstar rs30a v2 30a blheli esc opto 2-4s support oneshot42 multishot 16.5
dshot600,” n.d. [Online]. Available:
/https://www.banggood.com/Racerstar-RS30A-V2-30A-Blheli_S-ESC-OPTO-2-4S-
Support-Oneshot42-Multishot-for-FPV-Racer-p-1072073.html?p=
V5211124150930201808&cur_warehouse=CN

[51] I. RC, “Impulse RC Alien RR5 (kit),” n.d. [Online]. Available:


/https://store.rotorriot.com/impulse-rc-alien-rr5-kit/

[52] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: unified, real-time object detection,”
2016.

[53] A. F. Rees, L. Avens, K. Ballorain, E. Bevan, A. C. Broderick, R. R. Carthy, M. J. Christianen,


G. Duclos, M. R. Heithaus, D. W. Johnston et al., “The potential of unmanned aerial
systems for sea turtle research and conservation: a review and future directions,”
Endangered Species Research, vol. 35, pp. 81–100, 2018.

[54] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: towards real-time object detection with region
proposal networks,” 2015.

[55] R. Riot, “Rotor riot cl1 frame,” n.d. [Online]. Available:


/https://store.rotorriot.com/rotor-riot-cl1-frame/?aff=3

[56] ——, “Rotor riot hypetrain freestyle 2306 2450kv motor v2,” n.d. [Online]. Available:
/https://www.getfpv.com/motors/rotor-riot-hypetrain-freestyle-2306-2450kv-motor-
v2.html

[57] Seaturtles.org, “Sea turtle threats,” n.d. [Online]. Available:


/https://www.seeturtles.org/sea-turtles-threats/

[58] A. Seymour, J. Dale, M. Hammill, P. Halpin, and D. Johnston, “Automated detection and
enumeration of marine wildlife using unmanned aircraft systems (uas) and thermal
imagery,” Scientific reports, vol. 7, p. 45127, 2017.

[59] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image
recognition,” 2015.

[60] R. T. Snape, P. J. Bradshaw, A. C. Broderick, W. J. Fuller, K. L. Stokes, and B. J. Godley,
“Off-the-shelf gps technology to inform marine protected areas for marine turtles,”
Biological Conservation, vol. 227, pp. 301–309, 2018.

[61] Stephen, “A review of the aee s71 outdoor edition sports video camera with 4k and hd
video,” May 2015, [Online; posted 9-May-2015]. [Online]. Available:
/http://4k.com/camera/a-review-of-the-aee-s71-outdoor-edition-sports-video-
camera-with-4k-and-hd-video/

[62] STRIX, “Strix screech frame kit - black tpu,” n.d. [Online]. Available:
/https://www.readymaderc.com/products/details/strix-screech-frame-kit-black

[63] TBS, “Tbs ethix mr steele 2306 2345kv silk v2 motor,” n.d. [Online]. Available: /https:
//www.getfpv.com/motors/tbs-ethix-mr-steele-2306-2345kv-silk-v2-motor.html

[64] P. Technology, “Ir remote power module for raspberry pi,” n.d. [Online]. Available:
/https://www.robotshop.com/en/ir-remote-power-module-raspberry-pi.html

[65] A. Technology, “Aee technology ad02 drone accessory intelligent ap11 replacement
lithium polymer battery, 6800 mah, white,” May 2016, [Online; posted 13-May-2016].
[Online]. Available: /https://www.amazon.com/AEE-Technology-AD02-Intelligent-
Replacement/dp/B01G728VMI

[66] ——, “Aee technology aj01 10-inch self-tightening propeller set for toruk ap10 video
drone quadcopter (white),” n.d. [Online]. Available: /https://www.amazon.com/AEE-
Technology-AD02-Intelligent-Replacement/dp/B01G728VMI

[67] towardsdatascience.com, “Deep learning impact,” n.d. [Online]. Available:


/https://towardsdatascience.com/tagged/deep-learning

[68] R. N. Tutorials, “Video streaming with raspberry pi camera.” [Online]. Available:


/https://randomnerdtutorials.com/video-streaming-with-raspberry-pi-camera/

[69] J. C. van Gemert, C. R. Verschoor, P. Mettes, K. Epema, L. P. Koh, and S. Wich, “Nature
conservation drones for automatic localization and counting of animals,” in Workshop at
the European Conference on Computer Vision. Springer, 2014, pp. 255–270.

[70] WINGONEER, “Wingoneer GY68 BMP180 HMC5983 digital barometer air-pressure (Luftdruck) sensor I2C for Arduino/Raspberry,” n.d. [Online]. Available: /https://www.amazon.com/
WINGONEER-Barometer-Ludftdruck-FrArduino-Raspberry/dp/B06XHJQMMJ/ref=
sr_1_3?ie=UTF8&qid=1541449612&sr=8-3&keywords=barometer+sensor

[71] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,”


2014. [Online]. Available: /https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf
