
UAV Detection and Tracking in

High-Resolution Images

PROJECT REPORT

Submitted by:
Aqsa Abu Bakar.
(UET-18F-BSCE-KICSIT-02)

Ayesha Tahir.
(UET-18F-BSCE-KICSIT-13)

Esha Baig.
(UET-18F-BSCE-KICSIT-18)

BACHELOR OF SCIENCE IN COMPUTER ENGINEERING

PROJECT SUPERVISOR

Mr. Mughees Sarwar Awan

DR. A. Q. KHAN INSTITUTE OF COMPUTER SCIENCES

AND INFORMATION TECHNOLOGY

KAHUTA

AFFILIATED WITH UET TAXILA

(2022)
In the name of ALLAH ALMIGHTY the most beneficent,
the most merciful

UAV Detection and Tracking in High Resolution Image i


“To Him belongs the dominion of Heaven and the
Earth, it is He who gives life and death and He has

power over all things”


(Al-Quran)



DEDICATION

Dedicated to Ourselves, Our Families, Respected

Teachers, Mentors and Friends, who always had

been a source of Inspiration and Motivation for us.



DECLARATION

We hereby declare that this project and work, neither as a whole nor in part, has been copied
from any source. It is further declared that we have conducted this project work and have
accomplished this thesis entirely on the basis of our personal efforts and under the sincere
guidance of our supervisor Mr. Mughees Sarwar Awan. If any part of this project is proved
to be copied from any source or found to be a reproduction of the same project, we shall stand
by the consequences. No portion of the work presented in our dissertation has been submitted
in support of any other degree or qualification of this or any other university or institute of
learning.

MEMBER’S SIGNATURES

______________________________
Aqsa Abu Bakar.
(UET-18F-BSCE-KICSIT-02)

_____________________________
Ayesha Tahir.
(UET-18F-BSCE-KICSIT-13)

_____________________________
Esha Baig.
(UET-18F-BSCE-KICSIT-18)

PROJECT SUPERVISOR’S SIGNATURE

___________________________
Mr. Mughees Sarwar Awan
Assistant Professor
Computer Engineering



FINAL APPROVAL

It is certified that we have examined the thesis titled UAV Detection and Tracking in High-
Resolution Images/Videos submitted by Aqsa Abu Bakar (UET-18F-BSCE-KICSIT-02),
Ayesha Tahir (UET-18F-BSCE-KICSIT-13), and Esha Baig (UET-18F-BSCE-KICSIT-18),
and found it to meet the required standard. We accept the work contained in the report as confirmation of the
required standard for the partial fulfillment of the degree of Bachelor of Science in Computer
Engineering.

Committee:

External Examiner:
_________________________
Name
Designation
Institute

Project Coordinator:
_________________________
Mr. Muhammad Waqas

Project Supervisor:
_________________________
Mr. Mughees Sarwar Awan

Head of Department (Computer Engg.):


_________________________
Dr. Waqar Farooq

Director KICSIT:
_________________________
Engr. Masood Khalid



COPYRIGHT STATEMENT

Copyright in the text of this thesis rests with the authors. Copies (by any process), either in full or
of extracts, may be made only in accordance with instructions given by the authors and lodged
in the Library of KICSIT. Details may be obtained from the Librarian. This page must form part
of any such copies made. Further copies (by any process) of copies made in accordance with
such instructions may not be made without the permission (in writing) of the authors.

The ownership of any intellectual property rights which may be described in this thesis is
vested in KICSIT, subject to any prior agreement to the contrary, and may not be made
available for use by third parties without the written permission of KICSIT, which will
prescribe the terms and conditions of any such agreement.

Further information on the conditions under which disclosures and exploitation may take place
is available from the Library of KICSIT, Kahuta.



ACKNOWLEDGEMENTS

First of all, we thank Allah, the Almighty, for giving us the strength to carry out this project and
for blessing us with many great people who have been the greatest support in our personal and
professional lives. We also thank our parents for advising, praying for, and supporting us in
everything, especially during our four years studying Engineering. We owe so much to our
whole families for their undying support and their unwavering belief that we can achieve so
much. We would like to thank our university, Dr. A Q Khan Institute of Computer Sciences &
Information Technology, for teaching us Computer Engineering, and all of our professors.
We would also especially like to thank the Swarm Robotics Lab, National Center of
Robotics and Automation, for their cooperation and contribution during the training process of
our dataset.

We deeply thank our Director I/C Dr. Salman Iqbal, whose help and advice were invaluable.

Last but not least, we are very fortunate and grateful to our project supervisor Mr. Mughees
Sarwar Awan for providing leadership and ideas, and for helping us solve various problems
along the way. Thank you for your support and helpful suggestions; we will be forever thankful to you.



ABSTRACT

Several organizations, including the military, government, and commercial sectors, are adopting
drone technology. Drones are also known as UAVs (Unmanned Aerial Vehicles). The
exponentially increasing public accessibility of drones poses a great threat to general
security and confidentiality. Therefore, detecting and eliminating drones before lethal
outcomes is of paramount interest. For this purpose, we have developed an autonomous drone
detection and tracking system. For detection we use YOLOv3, combined with the MOSSE
tracker to enhance the efficiency of the detector. The detection algorithm was trained on a
custom dataset of approximately 8,000 manually labeled images gathered from almost 120
videos. The ROI (Region of Interest) is passed from the detector to the tracker, which keeps
tracking the UAV until the target is lost, after which the system falls back to detection. The
rotating turret follows the targeted UAV as it moves away from the center of the frame. As
soon as a drone is detected, the system sounds a siren, sends an email message, and calls
the authorized admin to report the detection. The authorized admin can view the screen via
a mobile app connected to the system through Wi-Fi. Moreover, videos of the tracked drones
are saved at the backend, stamped with date and time, for future use.



Table of Contents
Declaration ........................................................................................................................................... iv

Final Approval ...................................................................................................................................... v

Copyright Statement ............................................................................................................................. vi

Acknowledgments............................................................................................................................... vii

Abstract .............................................................................................................................................. viii

Table of Contents ................................................................................................................................. ix

List of Figures .................................................................................................................................... xiii

List of Tables ....................................................................................................................................... xvi

Acronyms and Abbreviations ............................................................................................................ xvii

CHAPTER 1 INTRODUCTION ................................................................................................... 1

1.1 Project Background/Overview ..........................................................................................................2

1.2 Problem Description ......................................................................................................... 3

1.3 Problem Statement ............................................................................................................................ 3

1.4 Working Overview .......................................................................................................................... 4

1.4.1 Static Turret ………………………………………………………………..…………….4


1.4.2 Rotating Turret …………………………………………………………….…………….4
1.5 Technology Used ..............................................................................................................................4

1.5.1 YOLO Algorithm…. ……………………………………………………….…………....4


1.5.2 MOSSE Tracker ………………………………………………………….……………...5

1.6 Chapter 1 Summary .......................................................................................................................... 6

CHAPTER 2 LITERATURE REVIEW ...................................................................................... 7

2.1 Existing System ................................................................................................................................8

2.1.1 Video detection methods…...………………………………………………….………....8


2.1.2 Use camera for detection methods …………….………………………………….……...8



2.1.3 Radar Systems ………………………......………………………………………….…....8
2.1.4 Radio Frequency Based Methods …...……………………………...…………………....9
2.1.5 Real-Time Drone Detection and Tracking with Visible, Thermal, and Acoustic Sensor …............9

2.2 Related Work ....................................................................................................................................9

CHAPTER 3 REQUIREMENT ANALYSIS ........................................................................... 11

3.1 User Requirements ..........................................................................................................................12


3.2 Functional Requirements ................................................................................................................13

3.2.1 Hardware Prototype ……........………………………………...………………………. 13


3.2.2 Arduino Python Programming ……………….…...……………………………………13
3.2.3 Software Model ……………………......……………………………………………….13

3.3 Non-Functional Requirements ........................................................................................................13

3.4 Important key points Indicators for Successful Projects ..................................................................14

CHAPTER 4 SYSTEM DESIGN AND IMPLEMENTATION ........................................... 15

4.1 Battery ……………........................................................................................................................ 16

4.1.1 Battery Introduction ....……........…………...…………………………………….........16


4.1.2 Technical Specification …...……........…………...….……………...…………….........16

4.2 Motors..............................................................................................................................................17
4.2.1 Introduction ....……........…………...…………………………………….……….........17
4.2.2 Technical Specification …...……........…………...….……………...…………….........17

4.3 Arduino ...........................................................................................................................18

4.3.1 Technical Specification …...……........…………...….……………...……….………....18


4.3.2 Communication ………………………………………………………………………...19
4.3.3 Programming …………………………………………………………………………..19

4.4 L298N Motor Driver....................................................................................................................... 20

4.4.1 Working of L298N ….........……........…………...….………………...…….…….…... 21


4.4.2 Technical Specification ……………………………………………………….………. 22

4.5 4k resolution camera ......................................................................................................................23


4.5.1 Technical Specification …………….…………………………………………………. 24



4.6 PK-940HA FHD 1080p AF Webcam ..............................................................................................24

4.6.1 Technical Specification ……………….………………………………………………. 25

4.7 Turrets ............................................................................................................................................25

4.7.1 Labelled Pieces ………………………………………………………………………...26


4.7.2 Circuit Diagram ………………………………………………………………………..30

4.8 System Specification ..................................................................................................................... 31


4.8.1 RTX5000 GPU …………………………………………………….………………….. 31
4.8.2 Technical Specification ………………………………………………………………...31

CHAPTER 5 TESTING AND RESULTS ................................................................................. 33

5.1 Facial Detection using Haar cascade................................................................................................34

5.1.1 Detection and Tracking of the face in real-time using Haar cascade…..………………. 35

5.2 Drone Detection and Tracking using Haar cascade .........................................................................35

5.2.1 Dataset Collection for drone …………….……………………………………………. 35


5.2.2 Collecting dataset for Haar Cascade …………………….……………………………. 35
5.2.3 Trained and tested cascade classifier using training utility …………………………… 37
5.2.4 Haar Cascade classifier …………………………………………………………...…... 37
5.2.5 Block Diagram of Drone Detection using Haar Cascade custom Classifier…..………..37
5.2.6 Drone Detection and tracking using Haar cascade …………..………………….…….. 38
5.2.7 Testing and Results of Pre-Trained Haar Cascade Classifier ……..………………..…. 38
5.2.8 Testing and Results of Custom-Trained Haar Cascade Classifier …...…………………38

5.3 YOLO Implementation....................................................................................................................39

5.3.1 Creating Dataset in YOLO format ………………………...…………………..………..39


5.3.2 YOLO format …………………………………………………………………….…….41
5.3.3 Train yolo weight file ……………………………………………………………….….41

5.4 Tracker Implementation...................................................................................................................41

5.4.1 BOOSTING Tracker ……………………………………..…………………………….41


5.4.2 MIL Tracker ……………………………………..……………………………………..42
5.4.3 KCF Tracker ……………………………………..………………………….……........42
5.4.4 TLD Tracker ………………...……………………………..…………………………..43
5.4.5 MEDIANFLOW Tracker ……………………………………..…………...…………...44
5.4.6 MOSSE Tracker ……………………………………..…………………………………44
5.4.7 CSRT Tracker ………………….…………………………..…………………………..45



5.4.8 Mean Shift Tracker ……………………………………..………………….....………...46
5.4.9 Cam Shift Tracker ……………………………………..……………………..………...46

5.5 Detection with Tracker Implementation...........................................................................................47

5.5.1 Flow chart and block diagram of Detection based tracking………..……………………47


5.5.2 Haar cascade on drone with tracker …………………………………………………….49
5.5.3 YOLO v3 on drone with tracker ………………………………..………………………50
5.5.4 Compare Efficiency of the detector and YOLO v3 detector with MOSSE
tracker………………………………………………………...………………………………51
5.5.5 Demonstration …………………………………………………………………………51

5.6 Evaluation of Dataset…………………...........................................................................................56

5.6.1 Confusion Matrix ……………………………………………………………………...56


5.6.2 Terminologies ………………………………………………………………………….56

CHAPTER 6 CONCLUSION AND FUTURE WORK ......................................................... 63

6.1 Conclusion...................................................................................................................................... 64

6.2 Future Work................................................................................................................................... 64

REFERENCES ................................................................................................................................ 65



List of Figures
Figure 4.1: 1500mAh 11.1V 25C Lipo battery. .................................................................... 16

Figure 4.2: 24V DC Motors. ……………............................................................................ 17

Figure 4.3: Arduino UNO. ……………............................................................................... 18

Figure 4.4: L298N Motor Driver. .......................................................................................... 20

Figure 4.5: Pin Diagram of L298N Motor Driver. ................................................................ 21

Figure 4.6: Circuit Diagram of L298N Motor Driver with motors. ….................................. 22

Figure 4.7: 4K Resolution Camera. ...................................................................................... 23

Figure 4.8: PK-940HA FHD1080p AF Webcam. ................................................................ 24

Figure 4.9: Turrets (a) Static Turret, (b) Rotating turret. ....................................................... 26

Figure 4.10: Components. ...............................................................................................26-27

Figure 4.11: Base of Aluminum and Timber (a) Side view, (b) Top view............................. 27

Figure 4.12: Motors and Gears attached with the rod. …..................................................... 28

Figure 4.13: Bearings. ………….……………….................................................................. 28

Figure 4.14: Caster Wheels. ………….………………........................................................ 29

Figure 4.15: Static Turret. (a) Static Turret without the camera, (b) Static Turret with the
camera …………………..................................................................................................... 29

Figure 4.16: Rotating Turret. (a) Rotating Turret without the camera, (b) Rotating Turret with
the camera ………………………………….…………....................................................... 30

Figure 4.17: Circuit Diagram. ………….……………......................................................... 30

Figure 4.18: RTX 5000 GPU. ………….…………….......................................................... 32

Figure 5.1: Face and Eye Detection. .................................................................................... 34

Figure 5.2: Positive image Samples. ...................................................................................... 36



Figure 5.3: Negative image Samples. .................................................................................... 36

Figure 5.4: Amin Ahmadi Tool. ............................................................................................ 37

Figure 5.5: Block Diagram of DD using Cascade Classifier. ................................................ 37

Figure 5.6: Testing of drone detection using the pre-trained classifier. ................................. 38

Figure 5.7: Testing of drone detection using the custom-trained classifier. .......................... 39

Figure 5.8: Drone Videos. .......................................................................................................40

Figure 5.9: Annotation on sliced images. ............................................................................. 40

Figure 5.10: Images and its text file folder. ........................................................................... 41

Figure 5.11: BOOSTING Tracker Implementation. ............................................................ 42

Figure 5.12: MIL Tracker Implementation. .......................................................................... 42

Figure 5.13: KCF Tracker Implementation. .......................................................................... 43

Figure 5.14: TLD Tracker Implementation. .......................................................................... 43

Figure 5.15: Median flow Tracker Implementation. ............................................................. 44

Figure 5.16: MOSSE Tracker Implementation. ................................................................... 45

Figure 5.17: CSRT Tracker Implementation. ....................................................................... 45

Figure 5.18: Mean Shift Tracker Implementation. ................................................................ 46

Figure 5.19: Cam Shift Tracker Implementation. ................................................................. 47

Figure 5.20: Block diagram of detection-based tracking. ...................................................... 48

Figure 5.21: Flow chart of detection-based tracking.............................................................. 48

Figure 5.22: Detection. ....................................................................................................... 49

Figure 5.23: Tracking. ......................................................................................................... 49

Figure 5.24: Detection. ........................................................................................................ 50

Figure 5.25: Tracking. .......................................................................................................... 50



Figure 5.26: (a) FPS rate of Detector, (b) Combined YOLO V3 Detector with MOSSE tracker. ..... 51

Figure 5.27: (a) Demonstration (outdoor-i), (b) Demonstration (indoor-i), (c) Demonstration
(indoor-ii), (d) Demonstration (indoor-iii) …………………………………………..……51-53

Figure 5.28: (a) Alerts, (b) Message Statements ………………………………….……… 53-54

Figure 5.29: (a) Folder of tracked drones, (b) Images saved inside folder, (c) Zoomed images
along with the video of tracked drones ……………………………………………………54-55

Figure 5.30: Confusion Matrix. ............................................................................................. 56

Figure 5.31: True Positive. ................................................................................................... 57

Figure 5.32: True Negative. ................................................................................................... 57

Figure 5.33: Confusion Matrix of the custom dataset. .......................................................... 58

Figure 5.34: Heat map of Confusion Matrix in Python. ....................................................... 58

Figure 5.35: Evaluation Metrics of Custom Dataset Percentage. ........................................... 62



List of Tables

Table 1: Acronym and Abbreviation ..……………………………………………...…….xvii

Table 2: Rotation of motors based on pin configuration ...................................................... 21

Table 3: Pin Configuration of L298N Motor Driver ............................................................. 22



Table 1: Acronym and Abbreviation

Acronyms          Abbreviations

UAVs              Unmanned Aerial Vehicles
RF                Radio Frequency
RADAR             Radio Detection and Ranging
LIDAR             Light Detection and Ranging
YOLO              You Only Look Once
CNN               Convolutional Neural Network
YOLO V3           You Only Look Once, Version 3
MOSSE Tracker     Minimum Output Sum of Squared Error Tracker
YOLO V2           You Only Look Once, Version 2
R-CNN             Region-based Convolutional Neural Network
RF-based          Radio Frequency-based
DD                Drone Detection
MIL Tracker       Multiple Instance Learning Tracker
KCF Tracker       Kernelized Correlation Filter Tracker
TLD Tracker       Tracking, Learning, and Detection Tracker
CSRT              Channel and Spatial Reliability Tracking
ROI               Region of Interest



CHAPTER 1
INTRODUCTION
1.1 Project Background

UAVs (Unmanned Aerial Vehicles), also known as drones, are a modern and advanced technology used
in many fields. They are currently used by many countries for military intelligence, reconnaissance,
and espionage operations, and a number of organizations are adopting the technology, including the
government, commercial, and military sectors. Drones are now commonplace in daily life; for example,
they are used for courier services, security monitoring, and other purposes. On the other hand, drones
are a double-edged sword: they can be used to improve people's quality of life, but they can also be
used for criminal and harmful purposes, and terrorists have used smart drones to attack several
countries. Drone detection is therefore an active research topic that attracts the attention of many
scientists. Sophisticated drones are often difficult to detect and, in some cases, life-threatening.

Most modern detection technologies use video, audio, radar, temperature, radio frequency (RF), or
Wi-Fi technology. However, each detection method has the disadvantage that it is not suitable for
detecting drones in sensitive areas.

Our goal is to overcome the challenges faced by current methods of drone detection. In this project,
we have combined different strategies to create an efficient drone detection and tracking system. In
particular, we have evaluated different detection and tracking techniques based on risk assessment and
probability of success. Most of the proposed drone detection solutions are based on reliable data
transmission, which is continuously sent to the ground station for data processing. Such
implementations may fail if large amounts of data need to be transferred over long distances.

The popular techniques for UAV detection are RADAR, LIDAR, and acoustics. RADARs
cannot resolve the type of object being detected, LIDARs with a high-intensity laser beam can cause
damage to the human eye, and acoustic methods are inefficient in noisy environments such as airports.

It is well known that detection ability is high when the target is visible, and camera sensors have
this advantage. The popularity of vision-based systems combined with image- and video-processing
technologies is growing, in part due to the proliferation of systems based on deep learning. The
advantage of these automated systems is that they can be operated at scale at any time of the day or
night, without human intervention, with minimal maintenance, and with less susceptibility to
human bias.

1.2 Problem Description

The ultimate goals of developing a UAV detection and tracking system for high-resolution
images/videos are:

• To track and target drones.

• To resolve issues in the current detection systems.

• To avoid false alarms.

• To increase security at a lower cost.

• To develop a system that doesn't require much space and can be mounted in a small area.

Cameras have long been used to monitor security-sensitive areas such as streets, busy public
places, and borders. In recent years, the number of drone incidents involving both civilian and
military sites has increased dramatically, and identifying drones has typically required
exorbitantly expensive specialized equipment; RADAR installations in particular have cost a
fortune to set up. Our system provides the same functionality at a much lower cost and,
because of its compactness, can be mounted almost anywhere. What is needed, therefore, is a
drone detection system that can run on high-performance hardware without such costs.

1.3 Problem Statement

To improve detection accuracy for drones/UAVs using high-resolution multi-camera input, and
to detect the high-fidelity movement of aerial objects, i.e., drones.


1.4 Working Overview

This project is based on video coming from two different cameras, one with 4K resolution and the
other with 1080p, mounted on two different turrets. The two turrets used in this project are:

1. Static Turret.
2. Rotating Turret.

1.4.1 Static Turret

The wide-angle camera with 4K resolution is mounted on the static turret and connected to the
system. It detects the object, i.e., the drone, and is capable of detecting multiple drones at a
time.

1.4.2 Rotating Turret

The rotating turret is a revolving structure with two movements, pitch and yaw, each controlled
by its own motor. The low-angle camera with 1080p resolution is mounted on the rotating
turret and connected to the system to track the object; it also lets us closely monitor the drone
being tracked by displaying a zoomed-in view. An Arduino rotates the motors based on the
coordinates sent serially from the Python script; the motors are driven by an L298N motor
driver IC and powered by a battery.

1.5 Technology Used

1.5.1 YOLO Algorithm

YOLO is an algorithm that uses neural networks to provide real-time object detection. It is
popular because of its speed and accuracy and has been used in various applications to detect
traffic signals, people, parking meters, and animals.

YOLO is an abbreviation for the term 'You Only Look Once'. It is an algorithm that detects
and recognizes various objects in an image in real time. Object detection in YOLO is framed
as a regression problem, and the algorithm provides the class probabilities of the detected
objects.


The YOLO algorithm employs convolutional neural networks (CNNs) to detect objects in real
time. As the name suggests, the algorithm needs only a single forward propagation through the
neural network to detect objects; prediction over the entire image is done in a single run. The
CNN predicts the various class probabilities and bounding boxes simultaneously.

The YOLO algorithm works using the following three techniques:

• Residual blocks
• Bounding box regression
• Intersection over Union (IoU)
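The IoU measure in the list above scores how well a predicted box overlaps a ground-truth box: 1.0 for a perfect match, 0.0 for no overlap. A minimal sketch, with boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

During training and non-maximum suppression, YOLO keeps a predicted box only when its IoU with the best candidate exceeds a chosen threshold.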

YOLOv3 could be a period of time object detection algorithm that identifies specific objects in
videos, live feeds, or images. YOLO uses options learned by a deep convolutional neural network
to notice associate degree object. it's a State-of-the-art Algorithm. it's thus quick that it's become
virtually a customary approach of sleuthing objects within the field of laptop vision.
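
Of the three techniques listed above, Intersection over Union is easy to make concrete: it is the ratio of the overlap area of two boxes to the area of their union, and YOLO uses it both to score predicted boxes against ground truth and to suppress duplicate detections. A minimal sketch (our own helper, not project code), with boxes as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A score of 1.0 means the boxes coincide exactly, 0.0 means no overlap; non-maximum suppression keeps only the highest-confidence box among those whose mutual IoU exceeds a threshold.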

1.5.2 MOSSE Tracker

The algorithm has two core components: initialization and tracking. In both the simple and the more complex version of the tracker, the object is initialized from the first few frames. The object is cropped and centered before the filter is applied. The initialization filter is then applied to each tracking frame of the video to calculate the object's new position; the complex version of the tracker also updates the filter while tracking. The user selects an object by clicking at its center to initialize the filter. When an object is annotated, a bounding box is displayed around it; this bounding box represents both the tracking window and the initialized model. During tracking, the frame is cropped to this window in order to create and update the filter. Before filter initialization and object tracking, each frame of the video goes through a pre-processing stage.

In the preprocessing step, the model is converted into the Fourier domain. To initialize and update the filter, a synthetic desired output is created; this output is also transformed into the Fourier domain, where the filter is then computed. While tracking the following frames of the video, the correlation output is transformed back to the spatial domain to obtain the new position of the identified object. The correlation is carried out in the Fourier domain to accelerate the computation.
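
The Fourier-domain filter computation described above can be sketched in a few lines of NumPy. This is a simplified single-frame illustration of the MOSSE idea (no pre-processing window or running update is shown), with sizes and positions chosen only for the example:

```python
import numpy as np

def mosse_filter(template, response):
    """Compute the MOSSE filter H* = (G . conj(F)) / (F . conj(F) + eps)."""
    F = np.fft.fft2(template)
    G = np.fft.fft2(response)        # desired output (a Gaussian peak)
    return (G * np.conj(F)) / (F * np.conj(F) + 1e-5)

def correlate(frame, H_conj):
    """Apply the filter in the Fourier domain, return the spatial response."""
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * H_conj))

# Desired output: a Gaussian peak at the object's position (row 20, col 30).
rows, cols = np.mgrid[0:64, 0:64]
g = np.exp(-((rows - 20) ** 2 + (cols - 30) ** 2) / (2.0 * 2.0 ** 2))

rng = np.random.default_rng(0)
template = rng.random((64, 64))      # stand-in for the cropped object patch
H_conj = mosse_filter(template, g)

# Correlating the same patch peaks exactly at the target position.
response = correlate(template, H_conj)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(response), response.shape))
print(peak)  # (20, 30)
```

In the real tracker, the new peak location found in each frame gives the object's new position, and the filter numerator and denominator are updated with a running average.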


1.6 Chapter 1 Summary

This chapter discusses the idea behind the project, the background, and the basic need for this project. Furthermore, it gives a short introduction to the components and algorithms used in the project, details of which are provided in the following chapters.

CHAPTER 2
LITERATURE REVIEW

In this portion, we present a literature review of UAVs, and their prevention using anti-UAV
techniques. We first explain the existing systems such as cameras, videos for detection, RADAR
systems, etc. After understanding their operation, various techniques for monitoring and
preventing UAV attacks are described along with case studies.

2.1 Existing Systems:

The existing systems include video detection methods, cameras for detection, RADAR systems,
and RF-based methods.

2.1.1 Video detection methods


Real-time image detection of drones using deep learning has been implemented. A dataset has been
developed that is labeled semi-automatically by the KCF tracker rather than manually. Then they
improved the YOLOv2 model. [1]
The surveillance system has been equipped with a 4K video camera for detecting flying objects up
to 150 meters away, using the widest angle to capture a wide range of sky footage. [2]

2.1.2 Use camera for detection methods


A Wide-angle daylight camera allows the detection of flying intruders, as small as 20 pixels with
a very low false alarm rate. [3]

The surveillance system captures a wide area of the sky in high resolution with a 4K camera and
transmits the individual image to the server every second. [4]

A visual camera allows adjustment and provides each UAV with the ability to visually monitor
other members of the formation. Developing a robust solution for detecting and tracking
collaborative targets through a series of frames. [5]

2.1.3 Radar Systems:


RADAR stands for Radio Detection and Ranging. Radar systems have been used for many years to detect Unmanned Aerial Vehicles, especially at long distances and in poor visibility such as thick fog or at night. However, traditional radar is not designed to detect objects that are smaller, slower, and flying at lower altitudes than regular aircraft. Radar can easily find a drone in flight, but it has a high false-positive rate in crowded urban areas. [6]
2.1.4 Radio Frequency Based Methods:

Radio frequency-based approaches are ineffective if the drone is not communicating with its controller over RF control and video-transmission links. Detection is also difficult at high altitudes, as it depends on the transmission power and reception sensitivity. [7]
2.1.5 Real-Time Drone Detection and Tracking with Visible, Thermal, and Acoustic Sensors:

This paper describes the design process of a multi-sensor drone auto-detection system. Besides the usual video and audio sensors, the system also includes a thermal infrared camera, which proves to be a viable solution for drone detection tasks; even at a somewhat lower resolution, its performance is comparable to visible-range cameras. Detector performance as a function of sensor-to-target distance is also considered. In addition, through sensor fusion the system is designed to be more robust than the individual sensors, which helps reduce false positives. [8]

2.2 Related Work:

A convolutional neural network (CNN) marked an epoch in the deep learning field. It always includes one input layer and one output layer, and additionally consists of one or more hidden layers between the input and output layers. The convolutional layer applies a convolution kernel over the input tensor; the convolutional operation is essentially a cross-correlation that can reduce the large number of network parameters while achieving the same goals as a conventional fully connected network [9]. A deep convolutional neural network was initially introduced to classify the ImageNet dataset, going beyond traditional classification methods; after this successful work, a variety of new applications of CNNs emerged [10].

CNNs are most commonly employed in computer vision for image classification, object detection, segmentation, style transfer, and more. Object detection is a computer vision task that discovers the class and location of objects in a specific image. Detected objects are identified by a rectangular bounding box with a particular class identifier. Recent studies have used deep learning to perform object detection. Region-based convolutional neural networks (R-CNN) are typical of this kind of deep learning technique; they use the region-of-interest technique to separate object classification from bounding box prediction [11]. Fast R-CNN improves the speed of R-CNN by introducing region-of-interest (ROI) pooling that can share a convolutional layer [12].

CHAPTER 3
REQUIREMENT ANALYSIS
The requirement analysis covers the user requirements, functional requirements, non-functional
requirements, and important key points indicator for a successful project.

3.1 User Requirements

The UAV detection and tracking in high-resolution images/videos system is specially designed for indigenous organizations. The main objectives of this project are to detect and track drones, resolve issues in the current detection systems, avoid false alarms, and increase security at a lower cost, and to develop a system that does not require a large space and can be mounted in a small space.

Most of the work on image-based real-time detection and tracking has been developed for indigenous organizations, and the main goal of this plan is to detect and track drones. The use of visual information to find and classify drones is still in its infancy. Most of the work has been done using learned features and various deep learning models and techniques. Deep learning approaches, however, are data-driven and need large labeled datasets to build strong models, and the shortage of publicly available datasets is a major obstacle to research in this area. Rather than starting from scratch, some authors used transfer learning to reduce this difficulty. In other studies, special software was used [13] to generate composite images to increase the number of samples in the dataset. Data augmentation, and the use of generative models such as Generative Adversarial Networks (GANs) to create generated data comparable to the original real data, are two ways to increase the body of data that can be employed in the future. Most of the research on visual drone detection does not specify the type of detection device, drone type, detection range, or dataset used. These details are necessary to validate the work and compare it to similar investigations. Apart from these machine learning aspects, visual detection is limited in efficiency because it depends on the presence of a line of sight (LOS) between the drone and the camera system.


3.2 Functional Requirements

These are the following functional requirement for project development:

• Hardware component Arduino board for project interface.


• Arduino Python programming.
• Software Model.
• Efficient Detection and Tracking Algorithm. (YOLO v3, MOSSE).

3.2.1 Hardware Prototype

The hardware components include the onboard Arduino circuit to which all system components
are connected.
3.2.2 Arduino Python programming

The connected components are controlled by Arduino, which also performs the geared motor and
turret rotation functions.

3.2.3 Software model:


• Python.
• Pycharm.
• Open CV.
• Tensorflow, Keras.
• Linux.

3.3 Non-Functional Requirements

It includes safety, performance, availability, reliability, and others.

• Safety:

Appropriate testing and security measures have been taken to ensure that the system for
implementation and maintenance is functioning efficiently and properly.


• Performance:

The performance of our system is efficient and reliable.

• Availability:

The system should be accessible and work effectively at any time.

• Reliability:

It requires far fewer human resources, yet it provides more efficient results than the drone detection and tracking systems currently in use.

• Other non-functional requirements:

It includes usability, reliability, and easy maintenance.

3.4 Important key points indicators for Successful Project:

We have defined the key points for the successful project.

• The system should be able to detect a drone within 4 frames.
• The system should be able to track the path of the drone as it passes the target region of interest (ROI).
• All drone detection and tracking videos should be saved in memory.
• The system should generate an alarm when a drone is detected, and also send an email and a call to the dedicated authorities.

Our focus is on detecting and tracking drones in high-resolution images/videos and, in real time, generating an alarm and sending an email and a call to the dedicated authorities. Drone detection and tracking footage is also saved in memory.

CHAPTER 4
SYSTEM DESIGN AND IMPLEMENTATION
4.1 Battery

4.1.1 Battery Introduction


A LiPo battery is a rechargeable battery of lithium-ion technology that uses a polymer electrolyte instead of a liquid electrolyte. High-conductivity solid (gel) polymers form this electrolyte. These batteries offer higher specific energy. LiPo batteries are well known for their performance, dependability, and low cost. XF model batteries provide the full rated capacity at an affordable price, and discharge leads are included with them to reduce resistance and support high current loads.

Figure 4.1: 1500mAh 11.1V 25C lipo battery.

4.1.2 Technical Specifications

• Minimum Capacity: 1500mAh


• Configuration: 11.1 Volt / 3Cells
• Consistent Discharge: 25C
• Peak flow rate (10sec): 30C
• Package load: 125g
• Package dimensions: length 7.5cm, width 3.5cm, height 2.8cm
• Discharge Plug: T plug, Balance charger plug.


4.2 Motors

4.2.1 Introduction
There are two motors that are responsible for the clockwise and anticlockwise rotation of the turret.
These motors allow two movements, pitch, and yaw. One motor is responsible for yaw(left and
right) movement and the other is responsible for the pitch(up and down) movement. Turret
follows the detected drone.

Figure 4.2: 24V DC Motors

4.2.2 Technical Specifications

• Estimated force: 474 kg.cm


• Locked rotor force: 267.7 kg.cm
• Estimated speed: 182rpm.

• Estimated Voltage: 24V DC.

• Estimated current: 6.5A.

• Halt current: 22A.

• Maximum wattage: 100W/24VDC


• Spindle rotates 360 degrees
• Motor weight: 2.1 kg.


4.3 Arduino
The Arduino Uno is a microcontroller board based on the ATmega328 (datasheet). It has 14 digital input/output pins (6 of which can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connector, a power socket, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-DC adapter or battery, to get started. The Uno differs from all previous boards in that it does not use the FTDI USB-to-serial driver chip; instead, the ATmega8U2 is programmed as a USB-to-serial converter. "Uno" means one in Italian and was named to commemorate the release of Arduino 1.0. The Uno and version 1.0 were the reference versions of the evolving Arduino; with advanced features over the various earlier USB Arduino boards, the Uno is the reference model for the Arduino platform when compared to previous versions. [14]

Figure 4.3: Arduino UNO

4.3.1 Technical Specifications

• Microcontroller: ATmega328P


• Input Voltage: 7V - 20V.


• Working/ Control Voltage: 5V.
• Height: 68.6mm
• Breadth: 53.4mm
• Load: 25g
• Digitized Input Output Pins: 14 (of which 6 provide PWM output)
• Non-digitized Input Pins: 6
• DC current each I/O Pins: 40mA
• 3.3V pin DC current: 20mA
• Flash Memory: 32 KB of which 0.5KB is used by the bootloader.
• Static Random Access Memory: 2KB
• Electrically Erasable Programmable Read Only Memory: 1KB
• Clock Speed: 16MHz

4.3.2 Communication

In this project we have to connect the Arduino to the PC (Python script). We used the PySerial library, because PySerial is a Python library that allows a PC to communicate serially with an Arduino or other systems.

Python gets the bounding box coordinates of the detected object and sends a serial command to the Arduino accordingly. The rotation command (clockwise or counterclockwise) received from Python drives the motors via the motor driver IC described later in this chapter.
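
A sketch of this link (the function names and the single-character command protocol are our illustration, not the project's exact code): the Python side compares the detection's center against the frame center and produces one command byte per axis, which the Arduino sketch maps to motor directions. PySerial is needed only for the actual write.

```python
def steer_command(box, frame_w, frame_h, deadband=40):
    """Map a bounding box (x, y, w, h) to one command byte per axis.

    Returns (yaw, pitch): b'L'/b'R'/b'S' and b'U'/b'D'/b'S', where b'S'
    means the target is centered within the deadband and the motor stops.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    dx = cx - frame_w / 2            # +ve: target right of center
    dy = cy - frame_h / 2            # +ve: target below center
    yaw = b'S' if abs(dx) <= deadband else (b'R' if dx > 0 else b'L')
    pitch = b'S' if abs(dy) <= deadband else (b'D' if dy > 0 else b'U')
    return yaw, pitch

def send(commands, port="COM3", baud=9600):
    """Write the command bytes to the Arduino (requires pyserial)."""
    import serial  # imported here so the pure logic above has no dependency
    with serial.Serial(port, baud, timeout=1) as link:
        for c in commands:
            link.write(c)
```

The deadband keeps the turret from oscillating when the drone is already near the frame center; its width would be tuned on the real rig.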

4.3.3 Programming

The Arduino Uno can be programmed with the Arduino software (download). Select "Arduino Uno" from the Tools > Board menu (according to the microcontroller on your board). The ATmega328 on the Arduino Uno comes pre-installed with a bootloader that allows you to upload new code without using an external hardware programmer; it communicates using the original STK500 protocol (reference, C header files). You can also bypass the bootloader and program the microcontroller through the ICSP (In-Circuit Serial Programming) header; see these instructions for details. The source code of the ATmega8U2 firmware is available.

The ATmega8U2 is loaded with a DFU bootloader, which can be activated by connecting the solder jumper on the back of the board (near the map of Italy) and then resetting the 8U2. You can then use Atmel's FLIP software or the DFU programmer to load new firmware. Alternatively, you can use the ISP header with an external programmer (overwriting the DFU bootloader).

4.4 L298N Motor Driver


The L298N is a motor driver that can drive two DC motors. It is a 15-pin dual H-bridge motor driver integrated circuit that can drive two DC motors in either direction. It is used to drive DC motors and stepper motors, and is also used in robotics.

Figure 4.4: L298N Motor Driver.

It is based on the H-bridge concept: a circuit that permits current to flow in either direction, where the applied voltage determines whether the motor rotates clockwise or anticlockwise. The L298N chip consists of two H-bridge circuits that can independently drive two DC motors. The pin diagram of the L298N motor driver is shown below.


Figure 4.5: Pin Diagram of L298N Motor Driver.

The two enable pins lie on pins 6 and 11; both should be high to drive the motors. Pin 6 must be high to drive the motor on the left H-bridge, and pin 11 must be high for the right H-bridge. The motor in the corresponding segment stops functioning if either pin 6 or pin 11 goes low, so each enable pin acts like a switch for its bridge.

Table 2: Rotation of motors based on pin configuration

Enable | Inputs                  | Function
EN = H | Input1 = H, Input2 = L  | Clockwise
EN = H | Input1 = L, Input2 = H  | Anticlockwise
EN = H | Input1 = Input2         | Fast motor stop
EN = L | Input1 = X, Input2 = X  | Free-running motor stop

Important point: we can simply connect pin 4 (Vcc, 5V) to pins 6 and 11 to hold them high.

4.4.2 Working of L298N

The L298N has 4 input pins. Input 1 and Input 2 on pins 5 and 7 control the rotation of motor A, while Input 3 and Input 4 on pins 10 and 12 control the rotation of motor B. The direction of rotation is set by the LOGIC 0 or LOGIC 1 levels applied via the input pins.
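
The input logic above can be captured in a small helper. This is our own Python illustration of the truth table, with HIGH/LOW as booleans; it is not firmware code:

```python
def l298n_inputs(direction):
    """Return (IN1, IN2) logic levels for one H-bridge of the L298N."""
    levels = {
        "clockwise":     (True, False),   # Input1 = H, Input2 = L
        "anticlockwise": (False, True),   # Input1 = L, Input2 = H
        "stop":          (False, False),  # Input1 = Input2 -> fast motor stop
    }
    return levels[direction]
```

With the enable pin held high, writing these two levels to IN1/IN2 (or IN3/IN4 for motor B) selects the rotation; pulling the enable pin low gives the free-running stop regardless of the inputs.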


Figure 4.6: Circuit Diagram of L298N Motor Driver with motors.

Table 3: Pin Configuration of L298N Motor Driver

Pin Name Description


IN1 and IN2 Monitor the rotation of motor A.

IN3 and IN4 Monitor the rotation of motor B.

EN A Allow Pulse Width Modulation Signal for Motor A.

EN B Allow Pulse Width Modulation Signal for Motor B.

OUT1 and OUT2 Output pins of motor A.

OUT3 and OUT4 Output pins of motor B.

12V 12V input from DC power supply.

5V Powers the L298N switching logic circuit.

GND Ground pin.


4.4.3 Technical Specifications

• Driver Chip: L298N.


• Maximum Voltage: 46V
• Maximum Current: 2A.
• Logic Voltage: 5V.
• Driver Voltage: 5-35V.
• Driver Current: 2A
• Logic Current: 0-36mA.
• Maximum Power(W): 25W
• Current sense for each motor.
• Heat sink for better performance.
• Power-On LED indicator.

4.5 4k resolution camera:

The wide-angle camera that has 4K resolution is mounted on the static turret.

Figure 4.7: Specific 4K Resolution Camera.


4.5.1 Technical Specifications

• Quad camera: 64 MP (wide), 1/1.72", 0.8µm, PDAF; 8 MP, 120˚ (ultrawide); 2 MP, f/2.4 (depth); 2 MP, f/2.4 (monochrome).
• Characteristics: Quad-LED flash, HDR, panorama.
• Video capture resolution and frame rate: 4K@30fps, 1080p@30fps.

4.6 PK-940HA FHD 1080P AF Webcam

The PK-940HA 1080p camera is used in our project. The camera is mounted on the rotating turret and connected to the system through a USB port. The camera captures frames in the form of video/images in real time. The algorithms are developed in Python, using the OpenCV library for image processing and the DNN module to execute deep neural networks and detect the desired object as output.

Figure 4.8: PK-940HA FHD1080p AF Webcam.


4.6.2 Technical Specifications [17]

• Resolving power: Full HD 1080P, 1920×1080 Pixels.


• Video Recording Resolution: 1080p
• Type of lens: Full HD Autofocus Glass Lens.
• Viewing Angle: 75˚ Wide.
• Focus Range: 10cm and Beyond
• Built-in Mic: Single Digital Mic.
• Output Format: MJPEG.
• Frame Rate: 30fps.
• USB: USB 2.0.
• Compatibility: Windows 7 / 8 / 8.1 / 10 or later.

4.7 Turrets

This section introduces the turrets and the mechanical parts related to their movements.

There are two types of turrets used in our project:

• Static turret.
• Rotating turret.

The wide-angle camera with 4K resolution is mounted on the static turret and connected to the system; it detects the object and passes the targeted object's bounding box to the tracker. The rotating turret is a revolving structure with two movements, pitch and yaw, controlled by two motors. The low-angle camera with 1080p resolution is mounted on the rotating turret and connected to the system to track the object. The Arduino rotates the motors based on the coordinates sent serially from the Python script, the motors are driven by the L298N motor driver IC, and they are powered by a battery.


(a) (b)

Figure 4.9: Turrets (a) Static Turret, (b) Rotating Turret.

4.7.1 Labelled Pieces:

In this section, all of the components, which include the timber board, rods, gears, and so on, together with their dimensions, are shown with the help of images.


Figure 4.10: Components.

All of these (mechanical) components are required to precisely construct the turret model so that it may work efficiently.
The upper part of the model is made from timber and is positioned over a base of aluminum and timber to give a balanced model, as shown above.

(a) (b)
Figure 4.11: Base of Aluminum and Timber (a) Side View, (b) Top View.

The motor designated for pitch (up and down) is mounted in the timber arm of the upper part of the rotating turret, through a hole that allows the aluminum rod to rotate freely on bearings and gears, with a metal strip to keep the motor in place.


Figure 4.12: Motors and Gears attached with rod.

The turret's top part of the base has a central hole, into which we inserted a bearing that permits it to revolve through 360 degrees.

Figure 4.13: Bearings.


This is the mechanism by which the turret rotates in yaw (left to right); it is supported by three wheels that roll smoothly on the aluminum top.

Figure 4.14: Caster Wheels.

These are the three caster wheels deployed to distribute the weight of the turret's upper module so that the lower motor does not bear the entire load.

(a) (b)
Figure 4.15: Static turret (a) Static Turret without the camera, (b) Static Turret
with the camera.


(a) (b)

Figure 4.16: Rotating turret (a) Rotating turret without the camera, (b) Rotating
turret with the camera.

4.7.3 Circuit Diagram:

Figure 4.17: Circuit Diagram.


4.8 System Specification:

The system that we used in this project is Dell t7920 Workstation. Some of the technical
specifications of the system are as follows:

• Processor: 2nd Generation Intel® Xeon® Scalable, 3.9 GHz.
• Memory Installed: 32GB RAM.
• Graphics Card: NVIDIA Quadro RTX 5000 with 16GB GDDR6 with ECC.
• Storage: 512GB SSD.

4.8.1 RTX 5000 GPU:

The NVIDIA Quadro RTX 5000 16GB GPU provides a wide range of features to supercharge next-generation workflows, allowing professionals to push the boundaries of what is possible with the NVIDIA Turing architecture, packaged in a 12nm manufacturing process. The graphics card is equipped with 3072 CUDA cores, 16GB of GDDR6 memory, and 48 ray tracing cores. [18]

4.8.2 Technical Specifications [19]

• GPU Memory: 16GB GDDR6.


• Memory Configuration: 256-bit.
• Error Correcting Code: Yes.
• NVIDIA Cuda Cores: 3,072.
• NVIDIA Tensor Cores: 384
• NVIDIA RT Cores: 48
• Single-Precision Performance: 11.2 TFLOPS.
• Tensor Performance: 89.2 TFLOPS
• NVIDIA NV Link: Connects 2 Quadro RTX 5000 GPUs.
• NVIDIA NV Link Bandwidth: 50 GB /s (bidirectional)
• System Configuration: PCI Express 3.0 x 16
• Power Consumption: Total board power: 265 W
Total graphics power: 230 W


• Thermal Solution: Active.


• Form Factor: 4.4 “H x10.5” L, Dual Slot, Full Height
• Encode / Decode Engines: 1X Encode, 2X Decode
• VR Compatible: Yes
• Compute APIs: CUDA, DirectCompute, OpenCL.

The features of the RTX 5000 GPU include four DisplayPort 1.4 connectors, a VirtualLink connector, DisplayPort with audio, VGA support, 3D stereo support with stereo connector, NVIDIA GPUDirect™ support, Quadro Sync II compatibility, NVIDIA nView® desktop management software, HDCP 2.2 support, and NVIDIA Mosaic.

Figure 4.18: RTX 5000 GPU

CHAPTER 5
TESTING AND RESULTS

5.1 Facial detection using Haar cascade

We employed an .xml file with a pre-trained Haar cascade classifier for real-time face detection, and created a Python algorithm for this. Face detection was done primarily so that we could gain a fundamental understanding of the Haar cascade algorithm. Results are depicted in the provided image, and it is clear that the classifier accurately recognizes faces in real time.

Figure 5.1 Face and eye detection


5.1.1 Detection and Tracking of the face in real-time using Haar cascade

Following the development of the face detection algorithm, the bounding box's coordinates were extracted. As soon as we knew the face's coordinates, we sent a command to the Arduino to drive the motors in that direction so we could track the face in real time.

5.2 Drone Detection and Tracking using Haar cascade

5.2.1 Dataset Collection for drone

Many experts who study drone detection choose to keep their work (including their classifier and
dataset) private. Furthermore, there isn't a drone-specific bespoke dataset available in Pakistan
that can be trained using various methods. Because of this, it was necessary to gather our dataset
for that particular drone. We developed our dataset and gave it the title AUDATS-UAV.

5.2.2 Collecting dataset for Haar Cascade:

Nearly every machine learning algorithm has a specific dataset format, and Haar Cascade is no
exception. Positive and negative images are needed to train the Haar Cascade classifier. Negative
photos are those that contain images without any drones, and positive images solely contain drone
images. Which kind of negative photos you used is up to you. Depending on the type of special
thing you wish to identify, you can employ various situations, backdrops, and objects in negative
photos.

In our situation, we employ 2000 negative images gathered from various sources along with 1500
positive images (self-captured images). When training the Haar classifier, more negative images
are often used than positive ones.

Samples that were positive and negative were kept in separate folders with the letters "p" and "n,"
respectively.


Figure 5.2 Positive image samples

Figure 5.3 Negative image samples


5.2.3 Trained and tested cascade classifier using training utility

We utilized the GUI tool "Cascade Trainer GUI (a tool built by Amin Ahmadi)" to build our
classifier on a specific dataset.

Figure 5.4 Amin Ahmadi Tool

5.2.4 Haar Cascade Classifiers:

A machine-learning-based method in which a cascade function is trained using a large number of both positive and negative images; it is then used to detect objects in other images based on that training. The classifiers are large individual .xml files with a large number of feature sets, and each XML file corresponds to a particular type of use case.

5.2.5 Block Diagram of Drone detection using Haar Cascade custom Classifier

Training: drone images + non-drone images, together with the training parameters and Haar features, produce the memory model of the drone (.xml).

Detection: camera → capture frames → convert to gray → histogram equalization → Haar feature extraction → Haar cascade classifier → detected drones.
Figure 5.5 Block Diagram of DD using Cascade classifier

5.2.6 Drone detection and tracking using Haar cascade


Now we have a basic grasp of how the Haar cascade method functions and how to create an
algorithm to extract the coordinates of the detected object and send them to an Arduino utilizing
serial communication to track the target object in real-time. The Drone was tracked and detected
using Haar Cascade in the following phase.

5.2.7 Testing and Results of Pre-Trained Haar Cascade Classifier

The trained classifier was tested on drone video. Although it detects drones fairly well, it does not detect moving drones and does not detect drones very effectively indoors. In other words, it works great when the background is the sky or is clear, but it struggles when the background changes or the drones are moving.

Figure 5.6 Testing of drone detection using the pre-trained classifier.

5.2.8 Testing and Results of Custom-Trained Haar Cascade Classifier

The custom-trained classifier was also tested on drone video. Like the pre-trained classifier, it detects drones fairly well but does not detect moving drones and does not perform very effectively indoors: it works great when the background is the sky or is clear, but struggles when the background changes or the drones are moving.


Figure 5.7 Testing of drone detection using the custom-trained classifier.

5.3 YOLO Implementation

YOLO is an abbreviation of "You Only Look Once", an algorithm that detects and recognizes objects in an image in real time. YOLO treats object detection as a regression problem and returns the class probabilities of the detected objects. The algorithm uses a convolutional neural network (CNN) and, as the name implies, requires only a single forward pass through the network: the entire image is processed in one run, with the CNN predicting multiple class probabilities and bounding boxes simultaneously. The YOLO algorithm has several variants; Tiny YOLO and YOLOv3 are two popular examples.
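The post-processing of a YOLO forward pass can be sketched as follows. The row layout (centre coordinates, size, objectness, then class scores, all normalized to the frame) follows the YOLO output convention; the threshold value is illustrative:

```python
# Sketch of YOLO output post-processing. Each output row holds
# (x_centre, y_centre, width, height, objectness, class scores...),
# with coordinates normalized to the frame size.
def parse_yolo_output(rows, frame_w, frame_h, conf_thresh=0.5):
    """Return (class_id, confidence, [x, y, w, h]) for confident rows."""
    detections = []
    for row in rows:
        scores = row[5:]
        class_id = max(range(len(scores)), key=lambda i: scores[i])
        confidence = scores[class_id]
        if confidence > conf_thresh:
            w, h = row[2] * frame_w, row[3] * frame_h
            x = int(row[0] * frame_w - w / 2)  # convert centre to top-left
            y = int(row[1] * frame_h - h / 2)
            detections.append((class_id, confidence, [x, y, int(w), int(h)]))
    return detections
```

In the actual pipeline the rows come from the network's forward pass over the YOLOv3 weights (for example via OpenCV's `cv2.dnn` module), and non-maximum suppression is then applied to the surviving boxes.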

5.3.1 Creating Dataset in YOLO format:

In the beginning, we recorded videos from various angles and locations. About 120 videos were taken with three mobile cameras (Android and iPhone). These videos were converted into images using a Python script, and any extra or unusable frames were removed, leaving a final set of 12,500 drone images.


Figure 5.8 Drone Videos

Multiple cameras were used to record drone videos from various angles and locations (indoor and outdoor). A Python script converts the videos into frames: the videos were recorded at 30 frames per second and frames were extracted at three per second. Each image was then manually annotated with the LabelImg tool.
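The conversion step can be sketched like this. The file paths are placeholders, and the sampling helper assumes the stated rates of 30 fps recording and three extracted frames per second:

```python
# Sketch of the video-to-frames conversion: keep every tenth frame of a
# 30 fps video to extract roughly three frames per second.
def frames_to_keep(total_frames, video_fps=30, extract_fps=3):
    """Indices of the frames to save when sampling a video at extract_fps."""
    step = max(1, video_fps // extract_fps)
    return list(range(0, total_frames, step))

def extract_frames(video_path, out_dir="frames"):
    """Write the sampled frames to disk (requires opencv-python)."""
    import os
    import cv2
    cap = cv2.VideoCapture(video_path)
    keep = set(frames_to_keep(int(cap.get(cv2.CAP_PROP_FRAME_COUNT))))
    os.makedirs(out_dir, exist_ok=True)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index in keep:
            cv2.imwrite(f"{out_dir}/frame_{index:06d}.jpg", frame)
        index += 1
    cap.release()
```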

Figure 5.9 Annotation on sliced images.


5.3.2 YOLO Format

YOLO accepts annotation data in the format (object-class, x, y, width, height), where x and y are the centre of the bounding box and all four coordinate values are normalized to the image dimensions.
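For example, a pixel-coordinate box from the annotation tool converts to a YOLO annotation line as below; the sample numbers in the usage are illustrative:

```python
# Converting a pixel-coordinate bounding box (as drawn in the annotation
# tool) to a YOLO annotation line: the box centre, width and height are
# each normalized to the image dimensions.
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    xc = (x_min + x_max) / 2 / img_w   # normalized box centre
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w        # normalized box size
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

For instance, a box from (100, 100) to (300, 200) in a 400×400 image becomes `0 0.500000 0.375000 0.500000 0.250000`.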

Figure 5.10 image and its text file folder.

5.3.3 Training the YOLO weights file

We gathered our dataset of around 12,500 drone images and gave it the title AUDATS-UAV. We then trained the model on 10,000 of these images using a GPU in the Swarm Robotics Lab, National Center of Robotics and Automation.

5.4 Trackers Implementation

When tracking, our objective is to locate an item in the current frame if we have successfully
tracked it in all (or almost all) of the preceding frames. You can use OpenCV's different object
tracking implementations in your computer vision applications. Let's see how various tracking
algorithms work.
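The OpenCV trackers compared below (except Mean Shift and CamShift) share the same init/update API, so a name-to-constructor map makes them easy to swap. The constructor names are from opencv-contrib-python and are an assumption about the installed build; they moved to `cv2.legacy` in OpenCV 4.5+:

```python
# Name-to-constructor map for the OpenCV tracking API (opencv-contrib).
TRACKER_FACTORIES = {
    "boosting":   "TrackerBoosting_create",
    "mil":        "TrackerMIL_create",
    "kcf":        "TrackerKCF_create",
    "tld":        "TrackerTLD_create",
    "medianflow": "TrackerMedianFlow_create",
    "mosse":      "TrackerMOSSE_create",
    "csrt":       "TrackerCSRT_create",
}

def create_tracker(name):
    """Instantiate a tracker by name (requires opencv-contrib-python)."""
    import cv2
    namespace = getattr(cv2, "legacy", cv2)  # older builds expose them on cv2
    return getattr(namespace, TRACKER_FACTORIES[name.lower()])()

# Typical usage (sketch):
#   tracker = create_tracker("mosse")
#   tracker.init(first_frame, (x, y, w, h))  # box from the detector
#   ok, box = tracker.update(next_frame)     # ok is False on tracking failure
```

Mean Shift and CamShift do not use this interface; they are invoked through `cv2.meanShift` and `cv2.CamShift` on a histogram back-projection instead.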

5.4.1 BOOSTING Tracker

This algorithm is over ten years old and functions reasonably, but we could not find a compelling reason to use it given the availability of more sophisticated trackers (MIL, KCF) built on similar ideas. The tracking results are only fair, and it cannot reliably determine when tracking has failed.


Figure 5.11 Boosting Tracker Implementation

5.4.2 MIL Tracker

The performance of this tracker is acceptable. It performs admirably even under partial occlusion and does not drift as much as the BOOSTING tracker. If you are using OpenCV 3.0, this may be the best tracker available to you; on more recent versions, consider KCF.

Figure 5.12 MIL Tracker Implementation

5.4.3 KCF Tracker

This tracker reports tracking failure more effectively than BOOSTING and MIL, and it is both
more accurate and faster than MIL. However, it cannot recover from complete occlusion.


Figure 5.13 KCF Tracker Implementation

5.4.4 TLD Tracker

This tracker performs well over several frames of occlusion and also handles scale changes well. However, it produces numerous false positives, rendering it all but useless.

Figure 5.14 TLD Tracker Implementation


5.4.5 MEDIANFLOW Tracker

This tracker provides excellent reporting of tracking failure and works very well when motion is predictable and there is no occlusion. However, it fails in the case of strong motion.

Figure 5.15 Median flow Tracker Implementation

5.4.6 MOSSE tracker

The MOSSE tracker is robust to changes in position, scale, illumination, and other non-rigid deformations. It also detects occlusion using the peak-to-sidelobe ratio, which lets it pause and pick up where it left off when the object reappears. The MOSSE tracker also runs at a higher frame rate. In addition to these advantages, it is relatively simple to implement and is faster than, and comparably accurate to, other sophisticated trackers.


Figure 5.16 MOSSE Tracker Implementation

5.4.7 CSRT tracker

It performs object tracking with a higher level of accuracy while operating at a relatively low frame rate (about 25 fps).

Figure 5.17 CSRT Tracker Implementation


5.4.8 Mean Shift tracker

In contrast to deep neural networks and other recognition techniques, mean shift requires no training. If only one drone has to be tracked, there is no need to feed the computer hundreds or thousands of labeled drone photos; instead, the system analyzes the drone's initial color input and then tracks it automatically for the remainder of the session.

Figure 5.18 Mean shift Tracker Implementation

5.4.9 Cam Shift tracker

Tracking becomes more challenging if the object's color or texture varies widely. The tracked object may also be confused with a "busy" or "noisy" background that has a lot of color fluctuation.


Figure 5.19 Cam Shift Tracker Implementation

5.5 Detection with Trackers Implementation

We have implemented the detector together with the MOSSE tracker because of its sheer speed.

5.5.1 Flowchart and block diagram of Detection based tracking

For detection-based tracking, the input is first taken from a 4K wide-angle camera, on whose frames the detection algorithm runs. If a drone is detected, the ROI (region of interest) is passed from the detector to the tracker, and input switches to the 1080p camera mounted on the rotating turret, on which tracking begins. The output is then displayed on the screen as a zoomed image, a binary image, and the trajectory of the tracked drone. Videos and images of the tracked drone are saved at the backend in a folder named after the date and time at which the drone was tracked, and finally the turret rotates to follow the movement of the tracked drone.


Input: the 4K camera supplies frames to the detection algorithm, and the 1080p camera supplies frames to the tracking algorithm.
Main computational unit: the detection algorithm passes the ROI to the tracking algorithm.
Output: display, memory, and turret.

Figure 5.20 Block diagram of detection-based tracking

Initially, the 4K wide-angle camera monitors the environment at 30 fps (frames per second). If no drone is detected, monitoring continues. When a drone is detected, a bounding box is drawn on the target and a frame counter is incremented; monitoring continues until the counter reaches four. At that point the bounding box of the detected target is passed to the tracker as the ROI, and the 1080p camera begins tracking. If the target is lost, the system returns to detection and monitoring.

Start → the 4K wide-angle camera monitors the environment at 30 fps → drone detected? — No: keep monitoring; Yes: draw a bounding box on the detected target and increment the frame counter → counter == 4? — No: keep monitoring; Yes: pass the bounding box to the tracker as the ROI (region of interest) → the 1080p camera tracks the target → target lost? — No: keep tracking; Yes: reset the counter to zero and return to monitoring → End.

Figure 5.21 Flowchart of detection-based tracking
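The control logic of Figure 5.21 can be expressed as a small state machine: detection must succeed on four consecutive frames before the bounding box is handed over as the ROI, and a lost track returns the system to detection. This sketch shows the control logic only, with the cameras and algorithms abstracted away:

```python
# State machine for detection-based tracking: DETECT until four
# consecutive detections confirm the target, then TRACK until the
# tracker reports failure.
class DetectTrackController:
    CONFIRM_FRAMES = 4

    def __init__(self):
        self.mode = "DETECT"   # "DETECT" or "TRACK"
        self.hits = 0          # consecutive frames with a detection
        self.roi = None

    def on_detection(self, box):
        """Feed one frame's detector result; box is None when no drone."""
        if box is None:
            self.hits = 0      # the streak is broken
            return
        self.hits += 1
        if self.hits >= self.CONFIRM_FRAMES:
            self.mode, self.roi, self.hits = "TRACK", box, 0

    def on_track_lost(self):
        """Called when the tracker reports failure."""
        self.mode, self.roi = "DETECT", None
```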


5.5.2 Haar cascade on drone with tracker

The Haar Cascade detector detects the drone and passes the bounding box as ROI (Region of
Interest).

Figure 5.22 Detection

Figure 5.23 Tracking


5.5.3 YOLO v3 on drone with tracker

The YOLO v3 detector detects the drone and passes the bounding box as the ROI (Region of Interest).

Figure 5.24 Detection

Figure 5.25 Tracking


5.5.4 Comparing the efficiency of the detector alone and the YOLO v3 detector with the MOSSE tracker

With drone detection alone, the frame rate is 8 frames per second. Combining the YOLO v3 detector with the MOSSE tracker raises the frame rate to 32 frames per second, a 4x improvement in performance over the detector alone.
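FPS figures like these can be measured by dividing the number of frames processed by the elapsed wall-clock time; a minimal sketch, where `process_frame` stands in for the detector or the detector-plus-tracker pipeline:

```python
# Sketch of how frame rates are measured: frames processed divided by
# elapsed wall-clock time.
import time

def measure_fps(process_frame, frames):
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```

Plugging in the detector alone versus the combined detector-and-MOSSE pipeline is how a comparison like 8 fps versus 32 fps can be obtained.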

(a) (b)

Figure 5.26 (a)FPS rate of Detector (b) FPS of Combined YOLO V3 Detector with MOSSE
tracker.

5.5.5 Demonstration

We conducted several indoor and outdoor tests to confirm the working of our project. Here we show the final demonstration, in which the zoomed image, the binary image, the tracked drone, and its trajectory can be clearly seen in one frame, along with the turret that rotates in pitch and yaw to track the drone effectively.

Figure 5.27 (a) Demonstration (outdoor-i)


Figure 5.27 (b) Demonstration (indoor-i)

Figure 5.27 (c) Demonstration (indoor-ii)


Figure 5.27 (d) Demonstration (indoor-iii)

As soon as a drone is detected, the system alerts the admin by email, phone call, and SMS to report the detection of a hostile UAV in their territory. The message and email state that a drone has been detected, along with the date and time of detection.
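A sketch of how such an alert can be assembled and emailed. The addresses, SMTP host, and credentials are placeholders, and the SMS and phone-call channels, which depend on the gateway used, are omitted:

```python
# Sketch of the alert path: the message body carries the detection
# statement with the date and time. Account details are placeholders.
from datetime import datetime

def alert_text(detected_at=None):
    """Statement carried by the email, SMS and call notification."""
    detected_at = detected_at or datetime.now()
    return ("Alert: a drone has been detected in your territory on "
            + detected_at.strftime("%Y-%m-%d at %H:%M:%S") + ".")

def send_email_alert(to_addr="admin@example.com"):
    """Email the alert (placeholder account details)."""
    import smtplib
    from email.message import EmailMessage
    msg = EmailMessage()
    msg["Subject"] = "Drone detected"
    msg["From"] = "system@example.com"
    msg["To"] = to_addr
    msg.set_content(alert_text())
    with smtplib.SMTP_SSL("smtp.example.com") as smtp:
        smtp.login("system@example.com", "password")
        smtp.send_message(msg)
```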

Figure 5.28 (a) Alerts


Figure 5.28 (b) Message statement

Videos and images of tracked drones are saved at the backend in separate folders labeled with the date and time of tracking.
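This storage layout can be sketched with a small helper that creates one folder per tracking session; the root folder name is a placeholder:

```python
# Sketch of the backend storage layout: each tracking session gets its
# own folder named after the date and time of tracking.
import os
from datetime import datetime

def session_dir(root="tracked_drones", started_at=None):
    """Create (if needed) and return the folder for one tracking session."""
    started_at = started_at or datetime.now()
    path = os.path.join(root, started_at.strftime("%Y-%m-%d_%H-%M-%S"))
    os.makedirs(path, exist_ok=True)
    return path
```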

Figure 5.29 (a) Folders of tracked drones


Figure 5.29 (b) Images saved inside the folder

Figure 5.29 (c) Zoomed images along with the video of tracked drone


5.6 Evaluation of Dataset:

A predictive model's performance is quantified by an evaluation metric. This usually entails training a model on a dataset, using the model to generate predictions on a holdout dataset that was not used during training, and then comparing the predictions to the expected values in the holdout dataset.

5.6.1 Confusion Matrix:

A confusion matrix is a table used to describe the performance of a classification model on a dataset for which the true values are known. It is relatively simple to understand, although the related terminology can be confusing. To define the terminology, we first set a decision threshold (α = 0.6).

                         Predicted Class
                     Positive          Negative
Actual    Positive   True Positive     False Negative
Class     Negative   False Positive    True Negative

Figure 5.30 Confusion Matrix

5.6.2 Terminologies:

True positives (TP): The case where the classifier predicts yes and the real case is positive.


Figure 5.31 True Positive [11]

True negatives (TN): The case where the classifier predicts no and the actual case is also negative.

Figure 5.32 True Negative [12]

False positives (FP): Model incorrectly predicts the positive class.

False negatives (FN): Model incorrectly predicts the negative class.


                          Predicted Class
N = 2500             Positive    Negative    Total
Actual    Positive     1898         92        1990
Class     Negative        0        510         510
Total                  1898        602        2500

Figure 5.33 Confusion Matrix of Custom Dataset.


Heat Map of Confusion Matrix in Python:

Figure 5.34 Heat Map of Confusion Matrix in Python.


The following ratios are usually calculated from a classifier's confusion matrix:

• Accuracy.


• Precision.
• Recall.
• Error rate.
• Specificity.
• F1 Score.

Accuracy:

Overall, how often is the classifier correct?

We can find accuracy by using the formula:

Accuracy = (True Positive + True Negative) / (True Positive + True Negative + False Positive + False Negative)

Accuracy = (1898 + 510) / 2500

Accuracy = 0.9632

Accuracy in % = 0.9632 × 100

Accuracy in % = 96.32%

Precision:

It reflects the reliability with which the model classifies the sample as positive.

We can find Precision by using the formula:

Precision = ∑ True Positive / (∑ True Positive + ∑ False Positive)

Precision = 1898 / (1898 + 0)

Precision = 1.0

Precision in % = 1.0 × 100


Precision in % = 100%

Recall:

Recall, also known as sensitivity, is the fraction of relevant cases that are retrieved; it measures how well the model detects positive samples.

We can find Recall or Sensitivity by using the formula:

Recall = ∑ True Positive / (∑ True Positive + ∑ False Negative)

Recall = 1898 / (1898 + 92)

Recall = 0.9537

Recall in % = 0.9537 × 100

Recall in % = 95.37%

Error Rate:

The error rate indicates how often the model makes false predictions; it is also known as the misclassification rate.

We can find the Error rate by using the formula.

Error rate = 1 − Accuracy

Error rate = 1 − 0.9632

Error rate = 0.0368

Error rate in % = 0.0368 × 100

Error rate in % = 3.68%

Specificity:

Test specificity, also known as true negative rate (TNR), is the proportion of negative
samples that have a negative test result using the test in question.


We can find the Specificity by using the formula.

Specificity = ∑ True Negative / (∑ True Negative + ∑ False Positive)

Specificity = 510 / (510 + 0)

Specificity = 1.0

Specificity in % = 1.0 × 100

Specificity in % = 100%

F1 Score:

Precision and recall are the two components of the F1 score, which is defined as the harmonic mean of precision and recall.

• A model will receive a high F1 score if Precision and Recall are high.

• A model will receive a low F1 score if Precision and Recall are low.

• A model will receive an average F1 score if one of the Precision and Recall is low and the
other is high.

We can find the F1 Score by using the formula.

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

F1 Score = 2 × (1 × 0.9537) / (1 + 0.9537)

F1 Score = 0.976

F1 Score in % = 0.976 × 100

F1 Score in % = 97.6%
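All of the ratios above can be reproduced directly from the confusion-matrix counts of the custom dataset (TP = 1898, FN = 92, FP = 0, TN = 510):

```python
# Evaluation metrics computed from the confusion-matrix counts of the
# custom dataset.
def evaluate(tp, fn, fp, tn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "error_rate": 1 - accuracy,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }

metrics = evaluate(tp=1898, fn=92, fp=0, tn=510)
```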


Figure 5.35 Evaluation Metrics of Custom Dataset Percentage.



Chapter 6 Conclusion and Future Work

CHAPTER 6
CONCLUSION AND FUTURE WORK

6.1 Conclusion:
A drone detection system using the YOLO v3 algorithm with MOSSE tracking was implemented and tested on drone images from our custom dataset, from the internet, and from real-time drone images and videos. We used a sequence of image-processing algorithms to perform real-time object detection and tracking, which is useful for applications that require real-time response. We implemented a fully autonomous drone surveillance system based on RGB cameras and computer vision. With appropriate changes, the implemented scheme can be partially or fully reused for other video-surveillance tasks beyond countering drone activity. The focal point is that frames from multiple cameras, in the proper configuration, are used for detection and tracking. Compared to conventional object-detection methods, it exhibits higher performance in terms of accuracy and speed.

6.2 Future work:


In the future, if the system is equipped with thermal or night-vision cameras, it can be used for night-time surveillance, as the current cameras are not capable of night vision and have poor visibility at night. The current system can track only one drone at a time, so we look forward to constructing a multiple-drone tracking system by deploying more turrets (treating this one as a prototype) and allowing the admin to track drones using unique IDs. We also plan to introduce a firing mechanism attached to the rotating turret with the proper configuration, to make the system as compact as a PTZ (Pan-Tilt-Zoom) camera, and to deploy it on a larger scale.



References

REFERENCES


[1] Wu, M., Xie, W., Shi, X., Shao, P. and Shi, Z., 2018, July. Real-time drone detection using deep learning approach. In International Conference on Machine Learning and Intelligent Communications (pp. 22-32).

[2] Kashiyama, Takehiro, Hideaki Sobue, and Yoshihide Sekimoto. 2021. "Sky Monitoring System for Flying Object Detection Using 4K Resolution Camera."

[3] Unlu, E., Zenou, E., Riviere, N. and Dupouy, P.E., 2019. Autonomous drone surveillance and tracking architecture. Electronic Imaging, 2019(15), pp.35-1.

[4] Kashiyama, T., Sobue, H. and Sekimoto, Y., 2020. Sky Monitoring System for Flying Object Detection Using 4K Resolution Camera. Sensors, 20(24), p.7071.

[5] Opromolla, R., Inchingolo, G. and Fasano, G., 2019. Airborne visual detection and tracking of cooperative UAVs exploiting deep learning. Sensors, 19(19), p.4332.

[6] Unlu, E., Zenou, E., Riviere, N. and Dupouy, P.E., 2019. Deep learning-based strategies for the detection and tracking of drones using several cameras. IPSJ Transactions on Computer Vision and Applications, 11(1), pp.1-13.

[7] I. Bisio, C. Garibotto, F. Lavagetto, A. Sciarrone and S. Zappatore, "Unauthorized Amateur UAV Detection Based on WiFi Statistical Fingerprint Analysis," in IEEE Communications Magazine, vol. 56, no. 4, pp. 106-111, April 2018, DOI: 10.1109/MCOM.2018.1700340.


[8] Svanström, F., Englund, C. and Alonso-Fernandez, F., 2021, January. Real-time drone detection and tracking with visible, thermal and acoustic sensors. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 7265-7272). IEEE.

[9] Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25. Available online at: https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf [Accessed 15 January 2022].

[10] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M. and Adam, H., 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.

[11] Girshick, R., Donahue, J., Darrell, T. and Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580-587).

[12] Li, C., Sun, X. and Cai, J., 2019. Intelligent Mobile Drone System Based on Real-Time Object Detection. Journal on Artificial Intelligence, 1(1), pp.1-8.

[13] J. Peng, C. Zheng, P. Lv, T. Cui, Y. Cheng and S. Lingyu, "Using images rendered by PBRT to train faster R-CNN for UAV detection", Proc. Comput. Sci. Res. Notes, pp. 13-18, 2018.

[14] Datasheet.octopart.com. 2022. [online] Available at: https://datasheet.octopart.com/A000066-Arduino-datasheet-38879526.pdf [Accessed 1 June 2022].
[15] Singha, S. and Aydin, B., 2021. Automated Drone Detection Using YOLOv4. Drones, 5(3), p.95.

[16] Reader's Digest. 2022. What Happens When a Plane Collides with a Flock of Birds? [online] Available at: https://www.rd.com/article/plane-surrounded-by-birds/ [Accessed 6 July 2022].

[17] A4tech.com. 2022. FHD 1080P AF Webcam (PK-940HA). [online] Available at: https://www.a4tech.com/product.aspx?id=237 [Accessed 5 June 2022].

[18] Database, G. and Specs, A., 2022. NVIDIA A100 SXM4 40 GB Specs. [online] TechPowerUp. Available at: https://www.techpowerup.com/gpu-specs/a100-sxm4-40-gb.c3506 [Accessed 6 July 2022].

[19] Nvidia.com. 2022. [online] Available at: https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/quadro-rtx-5000-data-sheet-us-nvidia-704120-r4-web.pdf
