
Received April 6, 2022, accepted May 2, 2022, date of publication May 4, 2022, date of current version May 12, 2022.
Digital Object Identifier 10.1109/ACCESS.2022.3172787

RF-UAVNet: High-Performance Convolutional Network for RF-Based Drone Surveillance Systems
THIEN HUYNH-THE 1 , (Member, IEEE), QUOC-VIET PHAM 2 , (Member, IEEE),
TOAN-VAN NGUYEN 3 , (Member, IEEE),
DANIEL BENEVIDES DA COSTA 4,5 , (Senior Member, IEEE),
AND DONG-SEONG KIM 1 , (Senior Member, IEEE)
1 ICT Convergence Research Center, Kumoh National Institute of Technology, Gumi-si, Gyeongsangbuk-do 39177, Republic of Korea
2 Korean Southeast Center for the 4th Industrial Revolution Leader Education, Pusan National University, Busan 46241, Republic of Korea
3 Department of Electrical and Computer Engineering, Utah State University, Logan, UT 84321, USA
4 Digital Science Research Center, Technology Innovation Institute (TII), Abu Dhabi, United Arab Emirates
5 Future Technology Research Center, National Yunlin University of Science and Technology, Douliu 64002, Taiwan

Corresponding author: Thien Huynh-The (thienht@kumoh.ac.kr)


This work was supported in part by the National Research Foundation of Korea (NRF) through Creativity Challenge Research-Based
Project under Grant 2019R1I1A1A01063781; and in part by NRF, Institute for Information and Communications Technology
Planning and Evaluation (IITP) through the Ministry of Education, Science and Technology under
Grant 2018R1A6A1A03024003 and Grant IITP-2020-2020-0-01612.

ABSTRACT In recent years, the increasing popularity of unmanned aerial vehicles (UAVs) has arisen from the emergence of cutting-edge technologies deployed in small and low-cost devices. Being easy to pilot and useful for many purposes, amateur drones can effortlessly access almost any geographical area, which poses difficulties in monitoring and managing drones that may invade private or limited-access areas. In this paper, we propose a radio-frequency (RF)-based surveillance solution to effectively detect and classify drones and recognize their operations by leveraging a high-performance convolutional neural network. The proposed network, namely RF-UAVNet, is specified with grouped one-dimensional convolution to significantly reduce the network size and computational cost. Besides, a novel structure of multi-level skip-connection, for the preservation of gradient flow, incorporating multi-level pooling, for the collection of informative deep features, is proposed to achieve high accuracy via improved learning efficiency. In the experiments, RF-UAVNet yields an accuracy of 99.85% for drone detection, 98.53% for drone classification, and 95.33% for operation mode recognition, figures that outperform current state-of-the-art deep learning-based methods on DroneRF, a publicly available dataset for RF-based drone surveillance systems.

INDEX TERMS Convolutional neural network, deep learning, drone detection, drone classification, drone
surveillance.

The associate editor coordinating the review of this manuscript and approving it for publication was Joanna Kołodziej.

I. INTRODUCTION
Recently, small unmanned aerial vehicles (UAVs), also known as drones, have received unprecedented, explosive interest for numerous applications in diverse domains, ranging from aerial photography to disaster management, agriculture, and communications [1], [2]. However, the rapid development of amateur drones poses many critical threats to public security and personal privacy [3], where drones are controlled to enter restricted zones intentionally without authentication. Drone surveillance has become a potential solution to effectively cope with the above-mentioned problem [4], in which drone detection and classification play an important role in many advanced anti-drone systems. Several drone detection approaches based on radar, audio, video, and radio frequency (RF) [5] have been developed in surveillance systems; however, they suffer from different limitations: the ineffectiveness of radar-based approaches for small-drone detection, the high noise sensitivity and limited working range of audio-based approaches, and the weather constraints and object occlusion of video-based approaches [6].


Compared with the others, the RF-based approach is more favorable to drone detection and classification thanks to its substantial reliability and outstanding performance [7], besides easy implementation and a long operation range (approximately 1400 ft) [8].

Several RF-based drone detection methods have been introduced with feature extraction and machine learning (ML) algorithms. In [9], a reliable detection method was introduced to warn of the presence of drones by analyzing the fast Fourier transform (FFT) of drone-controller RF signals. Shoufan et al. [10] learned a random forest classifier with time-domain features to improve the accuracy of drone pilot identification. In [11], a regular ML framework with frequency-domain features and neural networks was studied to detect drones via WiFi interference. Deep learning (DL) [12], [13] with recurrent neural networks (RNNs) and convolutional neural networks (CNNs) [14]–[16] has been exploited to process RF signals in wireless communications and improve the performance of RF-based drone surveillance systems. The superiority of DL, with its great capacity for learning relevant features at multi-scale RF signal resolutions, was articulated in [17] when compared with various traditional ML algorithms for drone detection. However, the primitive architecture in [17] suffered from high implementation complexity and low accuracy. Recently, some RNN and CNN architectures have been designed with cascade structures to process time-series data in different classification tasks, such as wearable-based physical activity recognition [18] and motor fault diagnosis [19].

In this work, we present an efficient drone surveillance method using supervised learning with deep architectures to monitor and manage known drones. Especially, we propose RF-UAVNet, a novel deep CNN architecture for three tasks: drone detection, drone classification, and operation mode recognition. We design the deep network architecture with multiple convolutional layers, where each layer is specified by several one-dimensional (1D) asymmetric filters of sizes 1 × 3 and 1 × 5 to extract local features of the raw 1D signal at multi-scale resolutions (from coarse to fine). We leverage grouped convolution instead of standard convolution to significantly reduce the number of network parameters and the processing cost. Moreover, a multi-level skip-connection is designed in association with a multi-level pooling to regulate the gradient flow and collect more global underlying features, which in turn increases the overall accuracy of surveillance systems. We evaluate the performance of RF-UAVNet on the practical DroneRF dataset [20] and investigate various hyper-parameters and architecture configurations to corroborate the efficiency of RF-UAVNet for different drone surveillance tasks. The proposed method is compared with state-of-the-art approaches under the same conditions, showing that RF-UAVNet is superior to other deep networks. The primary contributions of this paper can be summarized as follows:
• We propose RF-UAVNet to separately learn three common tasks in drone surveillance systems, in which the network architecture is specified by asymmetric filters to calculate the local correlations of RF signals. We deploy grouped convolution to reduce the number of trainable parameters and the computational cost. We further design an advanced structure with multi-level skip-connection and multi-level pooling to improve the accuracy of the three tasks.
• The proposed network achieves an overall accuracy of 99.85% for drone detection, 98.53% for drone classification, and 95.33% for operation recognition on the DroneRF dataset while reducing the number of parameters by around 78% and the computational cost by around 82%. In the method comparison, RF-UAVNet outperforms several existing DL-based models in terms of accuracy for different drone surveillance tasks.

The remainder of this paper is organized as follows: Section II briefly surveys the available literature on drone detection methods. In Section III, we present the proposed RF-based method for drone detection, drone classification, and operation recognition, wherein a high-performance CNN is introduced to learn RF signals. The diversified experiments with numerical results are provided in Section IV. Finally, Section V concludes this paper.

II. STATE-OF-THE-ART TECHNIQUES
A. RADAR-BASED TECHNIQUE
The principle of radar-based methods is electromagnetic backscattering to identify an aerial object by measuring the radar cross-section (RCS) signature. Compared with aircraft, drones are more challenging for RCS-based detection because of their small size and low-conductivity materials that induce a low RCS. In [21], the micro-Doppler signature with time-domain analysis performed better than the Doppler-shift signature to increase the accuracy of clutter/target discrimination. The micro-Doppler signature was also adopted for drone classification in a non-cooperative drone surveillance system [22] with support vector machine (SVM) and decision tree (DT) classifiers. Recently, a passive radar with long-term evolution (LTE) downlink signals [23] was exploited to detect small UAVs. Geng et al. [24] utilized a frequency-modulated continuous waveform to specify an effective LTE downlink signal feature, which in turn increases the accuracy of drone detection.

B. AUDIO-BASED TECHNIQUE
This technique can detect, classify, and localize drones based on the sounds of the engine and the high-speed rotating propellers using acoustic sensors (e.g., microphones). In [25], a data-driven method was proposed to detect drones in a heavy-rain environment, in which a regular ML workflow was developed to extract FFT-based frequency features and learn the classification model with SVM. Anwar et al. [26] improved the performance of audio-based drone detection systems with linear predictive cepstral coefficients and Mel-frequency cepstral coefficients.


In [27], Uddin et al. calculated the power spectral density of acoustic signals with independent component analysis and learned a classification model with a k-nearest neighbor (k-NN) classifier to discriminate drones from other objects, such as birds, airplanes, thunderstorms, rain, and wind. Audio-based approaches are usually sensitive to ambient noise in crowded urban areas and require advanced microphone-array designs [8].

C. VIDEO-BASED TECHNIQUE
Video-based drone detection is typically treated as moving-object detection in computer vision, in which an object can be detected and localized by analyzing visual features (e.g., color, texture, and shape) in images and videos. With the great capability of learning representational features automatically, some recent methods have exploited DL with CNN architectures to detect and identify drones using color cameras [28] and depth cameras [29]. To overcome the limited field of view, Rozantsev et al. [28] deployed a surveillance system with moving cameras to track small fast-flying drones. In [30], a high-performance CNN architecture was designed to boost the accuracy of drone detection while satisfying the computational complexity of practical implementations. The network architecture has multiple spatial attention modules to highlight small and ambiguous drones in an image for better localization and detection. Besides high sensitivity to illumination, these approaches face many challenging issues, such as occlusion and multi-object interference.

D. RF-BASED TECHNIQUE
By exploiting the intercepted RF signals between drones and ground controllers, several RF-based drone detection methods have been proposed with the advantage of day-and-night operation under all weather conditions for various scenarios. In [31], a passive, cost-efficient RF sensing method was proposed with the discrete wavelet transform and the short-time fast Fourier transform to extract drone body shifting and vibration, which in turn improves the detection accuracy of surveillance systems. In [32], Ezuma et al. analyzed intercepted RF signals at multiple resolutions in the wavelet domain and then learned a binary classifier with naïve Bayes and Markov models. Several methods have classified drones by learning RF fingerprints of WiFi and Bluetooth interference signals with ML algorithms. For instance, Bisio et al. [33] estimated the number of packets and the packet inter-arrival time of WiFi fingerprints to discriminate between different drones. In [34], Alipour-Fanid et al. optimally selected L1-norm regularization-based descriptive statistics of traffic fingerprints to achieve comparable detection accuracy with low computation.

Lately, various DL models have been deployed to improve the detection and classification accuracy of drone surveillance systems. In [35], three deep neural networks (DNNs) were built with the same architecture for three separate tasks: drone detection, drone classification, and operation recognition, where each network was designed with three hidden layers to learn the discrimination model from FFT features of RF signals. Allahham et al. [36] designed a multi-channel convolutional network (MC-CNN) by alternately constructing 1D convolutional layers and max-pooling layers. In [37], a 5-layer DNN was deployed to estimate the direction of arrival (DoA) of single-channel RF signals for aerial drone localization, in which the number of neurons in the input layer was identical to the number of directional antenna elements. Most of the existing RF-based methods present low accuracy in drone detection and classification tasks because of the weak discrimination of ML algorithms [32]; meanwhile, DL with primitive architectures cannot optimize learning efficiency.

In general, the drone signatures, detection range, advantages, and disadvantages of the above-mentioned drone detection approaches are summarized in Table 1. It is worth noting that the detection ranges are typically derived from the literature; that is, they might vary in practice depending on the type of drones with different hardware specifications, the deployment algorithms, and the surveillance environments.

FIGURE 1. System setup for RF-based UAV detection, classification, and operation mode recognition.

III. METHODOLOGY
A. SYSTEM MODEL
The proposed RF-based UAV surveillance system is presented in Fig. 1. The system comprises several primary modules: drones, a remote controller (a.k.a., flight control unit), an RF sensing module, and a processing module with a database repository. Some drones for multi-purpose civil applications are specified by different technical specifications, such as maximum operation range and connectivity. The remote controller, included in the drone package, sends flight commands to the target drones and also receives the response of operating status. To intercept the drone-controller communication, an RF sensing module is configured with the assumption that the WiFi frequency is known (the operating frequency can be determined by some passive frequency scanning techniques). The intercepted RF signals can be captured by software-defined radio configurable devices and then stored in a local database repository for later processing.


TABLE 1. Summary of existing drone detection approaches.

TABLE 2. Overall specifications of the three drones under consideration.

TABLE 3. Specifications of the USRP-2943 RF receivers.

B. RF DATABASE DESCRIPTION
Regarding the aforementioned system, DroneRF [20], a large-scale RF dataset, is introduced for different tasks: drone detection, drone classification, and operation mode recognition. Four operation modes, denoted as modes 01 to 04, of three different drones (Parrot Bebop, Parrot AR Drone, and DJI Phantom 3, whose main specifications are listed in Table 2) are considered as follows:
• Powering on and connecting to the controller.
• Hovering automatically without physical intervention and user commands.
• Flying without video recording.
• Flying with video recording.

Two USRP-2943 RF receivers with the specifications summarized in Table 3 are synchronized to collect the lower (L) and upper (H) halves of the frequency band signals with a sampling rate of 40 MHz and a gain of 30 dB. The dataset has 227 RF signal segments, where each segment has two amplitude records (L and H bands), as shown in Fig. 2(a). It is noted that the Phantom drone is represented with only one mode, drone connection (the data of the other modes were corrupted in the collection procedure). Besides, the data of the no-drone scenario (or background activity) is recorded to serve the detection task, which identifies the presence of a drone. The detailed experiments for the RF signal acquisition are presented in the original work [20].

FIGURE 2. Example of the Bebop drone's signal: (a) the raw amplitude samples of the lower and upper bands and (b) the normalized samples with fixed-sized partitioning using the non-overlap windowing mechanism.

The RF data in DroneRF should be partitioned into multiple signal frames with a length of 10^4 samples (long enough to represent the radio characteristics) using the non-overlap windowing mechanism [35].


Accordingly, we segment 227K signal frames, where each frame is formed as a high-dimensional array S with the size of 1 × 10000 × 2. The dataset distribution is provided in Table 4.

TABLE 4. Dataset distribution of DroneRF.

Due to the critical bias caused by the two RF receivers with different RF band configurations and the three drones without identical technical specifications, the amplitude data needs to be normalized by scaling values to the range [0, 1] as

    \hat{x}_i^{(L)} = \frac{x_i^{(L)} - x_{\min}^{(L)}}{x_{\max}^{(L)} - x_{\min}^{(L)}}, \qquad \hat{x}_i^{(H)} = \frac{x_i^{(H)} - x_{\min}^{(H)}}{x_{\max}^{(H)} - x_{\min}^{(H)}},    (1)

where x_{\min}^{(L)}, x_{\max}^{(L)}, x_{\min}^{(H)}, and x_{\max}^{(H)} denote the minimum and maximum amplitude values of the lower and upper half bands over all drones. An example of partitioning an RF signal into fixed-sized frames is illustrated in Fig. 2(b).
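For illustration, the following is a minimal NumPy sketch of this preprocessing, that is, the per-band min-max scaling of (1) followed by non-overlapping framing. It is an assumed re-expression (the authors' released code is MATLAB), and for brevity the extrema here are computed per record rather than over the whole dataset as in (1):

```python
import numpy as np

def normalize_and_frame(x_low, x_high, frame_len=10_000):
    """Scale a lower/upper band record pair to [0, 1] and cut it into
    non-overlapping frames, returning shape (n_frames, frame_len, 2)."""
    def minmax(x):
        x = np.asarray(x, dtype=np.float32)
        return (x - x.min()) / (x.max() - x.min())
    xl, xh = minmax(x_low), minmax(x_high)
    n = min(xl.size, xh.size) // frame_len      # drop any incomplete tail
    lo = xl[:n * frame_len].reshape(n, frame_len)
    hi = xh[:n * frame_len].reshape(n, frame_len)
    return np.stack([lo, hi], axis=-1)
```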
C. RF-UAVNet: RF-BASED UAV DETECTION-CLASSIFICATION CONVOLUTIONAL NETWORK
This part presents an efficient CNN, denoted RF-UAVNet, for low-cost and high-accuracy drone detection, drone classification, and operation recognition based on RF signals, where the overall architecture is shown in Fig. 3(a). Regarding the detailed architecture, an input layer is first specified with the size of 1 × 10000 × 2 to accommodate the partitioned frames. To reduce the spatial size of the feature maps and thus significantly save computation at deeper layers, a 1D regular convolutional (conv) layer is specified by filters (so-called kernels) of size 1 × 5 with a stride of (1, 5). Notably, the number of channels for each filter in a regular conv layer is always equal to the number of channels (or the third dimension) of the input to the layer. For example, because the two-channel input connects with the conv layer directly, the number of channels for each filter is two. The conv layer is followed by a batch normalization (norm) layer and an exponential linear unit (eLU) activation layer, where this layer group is denoted r-unit in Fig. 3(b). Fundamentally, the convolution between a kernel with weights w_i and an input map u_i at any specific coordinate (x, y) in the spatial domain is formulated as

    z_{x,y} = \sum_{i} w_i u_i + b,    (2)

where b is the scalar bias. The volume convolution over K channels (a.k.a., the depth size of the input) is the sum of the K resulting convolution scalars z. The norm layer normalizes its input with the mean µ_B and variance σ_B^2 calculated for each mini-batch and input channel as

    \hat{z}_{x,y} = \frac{z_{x,y} - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}},    (3)

where ε = 10^{-5} is a constant scalar that prevents the numerical uncertainty caused by a very small variance. Because inputs with zero mean and non-optimal variance may not suit the layer that follows the norm layer, the norm layer then shifts and scales the normalized values by

    v_{x,y} = \kappa \hat{z}_{x,y} + \xi,    (4)

where κ and ξ denote, respectively, the scale factor and offset parameters that are updated during network training. Once the training process is finished, the norm layer calculates the mean and variance over the whole training set. In the testing process, to predict the class of a new input, the norm layer uses the trained mean and variance instead of the mean and variance of a mini-batch to normalize the activation. In a CNN for classification, batch normalization is typically used between conv and eLU layers to accelerate the training of networks and reduce the sensitivity to network initialization [38]. The eLU layer then introduces nonlinearity by outputting the identical value on positive inputs and the exponential operation result on negative inputs, i.e.,

    o_{x,y} = \mathrm{eLU}(v_{x,y}) = \begin{cases} v_{x,y}, & \text{if } v_{x,y} \ge 0, \\ \alpha (e^{v_{x,y}} - 1), & \text{if } v_{x,y} < 0, \end{cases}    (5)

where α = 1 is the nonlinearity parameter. Compared with the rectified linear unit commonly deployed in several convolutional networks, eLU can achieve learning convergence faster with higher accuracy in some classification tasks [39]. With 64 filters of size 1 × 5 and a stride of (1, 5) specified for the conv layer, the width dimension of the output maps is reduced five times while the depth dimension is identical to the number of filters. At this point, with an input signal frame S ∈ R^{1×10000×2}, we determine the output of the r-unit F_OUT^{r-unit} ∈ R^{1×2000×64}.
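As a hedged illustration of these dimensions, the r-unit can be sketched in PyTorch as follows (this is an assumed re-expression, not the authors' MATLAB implementation):

```python
import torch
import torch.nn as nn

# Sketch of the r-unit: 1D conv (64 filters, kernel 5, stride 5),
# then batch normalization per (3)-(4), then eLU per (5) with alpha = 1.
r_unit = nn.Sequential(
    nn.Conv1d(in_channels=2, out_channels=64, kernel_size=5, stride=5),
    nn.BatchNorm1d(64),
    nn.ELU(alpha=1.0),
)

frames = torch.randn(2, 2, 10_000)   # two frames: (batch, channels, width)
print(r_unit(frames).shape)          # torch.Size([2, 64, 2000])
```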


FIGURE 3. RF-UAVNet: (a) overall architecture, (b) skip-connection, and (c) network components: r-unit, g-unit, and multi-gap.

FIGURE 4. Example of the 1D convolution operation to extract local features at multi-scale resolutions from coarse to fine.

FIGURE 5. Example of the grouped 1D convolution with two filter groups.

As the primary modules of RF-UAVNet to extract the local correlations of amplitude samples at multi-scale feature representations (as illustrated in Fig. 4), four processing units (denoted g-unit) are arranged in cascade throughout the network architecture in Fig. 3(a). In each g-unit, a grouped convolutional (g-conv) layer with filters of size 1 × 3 and a stride of (1, 2) is followed by an eLU layer (see Fig. 3(c)). Regarding the grouped convolution, the filters are separated into different groups for processing, where each group serves as a standard convolution for a certain subset of the input maps. Compared with standard convolution, grouped convolution further reduces the number of parameters and the computational cost significantly as the number of non-overlapping filter groups increases, while the learning efficiency is mostly maintained thanks to the sparse relationship among filters [40]. With G groups, the complexity of g-conv is (D_in × D_out)/G, which is lower than that of conv by G times [41], where D_in and D_out, respectively, denote the depth sizes of the input and output. An example of grouped convolution with two groups is illustrated in Fig. 5, where the first group (yellow) convolves with the first half of the input and the second group (red) convolves with the second half. The number of filter groups must evenly divide the number of channels of the input layer. The number of channels in the output of a g-conv layer is numGroups × numFiltersPerGroup, where numGroups and numFiltersPerGroup, respectively, denote the number of groups and the number of filters per group. In this work, we specify the g-conv layers with eight groups, each having eight filters, to always produce 64 feature maps at the output of the g-conv layers. We denote by F_{g_i} ∈ R^{1×W_{g_i}×64}, i = 1, ..., 4, with W_{g_i} = {1000, 500, 250, 125}, the outputs of the g-units, where W_{g_i} is the horizontal size of the output maps resulting from the g-conv layer with the stride of (1, 2) in the i-th g-unit.
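A minimal PyTorch sketch of one g-unit under these settings follows (eight groups of eight filters, kernel 1 × 3, stride (1, 2)); the padding value is our assumption, chosen so that the reported map widths 1000/500/250/125 are reproduced:

```python
import torch
import torch.nn as nn

# Sketch of one g-unit: grouped 1D convolution with 8 groups of 8 filters
# (64 output maps), kernel 3, stride 2, followed by eLU. With groups=8,
# each filter sees only 64/8 = 8 input channels, cutting the conv
# parameters and FLOPs by roughly the group count G, as stated above.
g_unit = nn.Sequential(
    nn.Conv1d(64, 64, kernel_size=3, stride=2, padding=1, groups=8),
    nn.ELU(),
)

x = torch.randn(2, 64, 2000)     # r-unit output
print(g_unit(x).shape)           # torch.Size([2, 64, 1000])
```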


To increase the overall system accuracy via well-handled gradient flow in RF-UAVNet, we study a multi-level skip-connection. Inspired by [42], the skip-connection associates the outputs of different g-units by adopting an element-wise addition operation. Given the output of the r-unit F_OUT^{r-unit}, the procedure of the first skip-connection can be formulated with an element-wise addition (add) layer as

    F_1^{add} = \mathcal{F}(F_{OUT}^{r\text{-}unit}) = F_{p_1} + F_{g_1},    (6)

where F_{p_1} = \mathcal{P}_{max}(F_{OUT}^{r\text{-}unit}) and F_{g_1} = \mathcal{C}(F_{OUT}^{r\text{-}unit}); \mathcal{P}_{max} denotes the max-pooling operation with the pool size of (1, 2) and the stride of (1, 2); \mathcal{C} denotes the united operation of grouped convolution, batch normalization, and eLU activation. The max pooling (maxpool) layer is adopted to pick out the most salient features [43] and to align the output to the spatial size of F_{g_1}. The element-wise addition operation in (6) is done by an addition (add) layer, where all inputs to this layer must have the same dimension. For example, by downsampling F_OUT^{r-unit} in the horizontal dimension with the stride of (1, 2), the output of the maxpool layer F_{p_1} ∈ R^{1×1000×64} has the same dimension as F_{g_1} ∈ R^{1×W_{g_1}×64}. In neural networks, the vanishing gradient problem usually occurs when the gradient elements (the partial derivatives with respect to the network parameters) become exponentially small; hence, the update of the parameters with the gradient is almost negligible. Based on the principle of the residual block [42], the skip-connection allows the network to learn the residual \mathcal{C}(·) instead of learning the true output \mathcal{F}(·) to nearly maintain the gradient flow. To put it simply, the attenuated gradient caused by activation functions is consolidated with the identity information by skipping layers. The structure of the skip-connection deployed in RF-UAVNet is generally described in Fig. 3(b). For a multi-level mechanism, the n-th skip-connection with n ≥ 2 can, in general, be expressed as follows:

    F_n^{add} = \mathcal{F}(F_{n-1}^{add}) = F_{p_{n-1}} + F_{g_{n-1}} = \mathcal{P}_{max}(F_{n-1}^{add}) + \mathcal{C}(F_{n-1}^{add}).    (7)

To track the dimension of the feature maps along the backbone of RF-UAVNet in cooperation with the multi-level skip-connection, note that the width dimension of the feature maps is halved each time the input passes through a skip-connection. In particular, we obtain \mathcal{F}: F_1^{add} ∈ R^{1×W_{g_1}×64} ← F_OUT^{r-unit} for the first skip-connection and \mathcal{F}: F_n^{add} ∈ R^{1×W_{g_n}×64} ← F_{n-1}^{add} ∈ R^{1×W_{g_{n-1}}×64} for the n-th skip-connection with n ≥ 2, where W_{g_n} is the width of the output maps. It is worth noting that the element-wise addition layer does not change the feature dimension. The proposed multi-level skip-connection allows the gradient information to be mostly maintained at different feature levels. Consequently, the vanishing/exploding gradient problem can be prevented effectively when the network goes deeper.
prevented effectively when the network goes deeper. k mutually exclusive classes based on the output of softmax
A global average pooling (gap) layer is configured to function and computes the cross entropy loss for multi-classes
calculate the mean of each feature map, which is usually classification as follows:
located before fully connected (fc) layers for network size
N X
k
reduction and overfitting alleviation with no learnable param- X
νij ln υij ,
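A short sketch of the multi-gap computation in (8)-(9), using the stated g-unit output sizes (illustrative tensors only):

```python
import torch

# Sketch of the multi-gap module, per (8)-(9): global average pooling of
# each g-unit output (widths 1000/500/250/125, 64 channels each), then
# depth-wise concatenation into a 256-channel global descriptor.
g_outputs = [torch.randn(1, 64, w) for w in (1000, 500, 250, 125)]
gaps = [g.mean(dim=-1, keepdim=True) for g in g_outputs]  # each (1, 64, 1)
multi_gap = torch.cat(gaps, dim=1)                        # (1, 256, 1)
print(multi_gap.shape)
```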

At the end of the network, the output of the multi-gap F^{multi-gap} is combined with the output of the last skip-connection, F^{add-gap} = \mathcal{P}_{avg}(F_4^{add}), by a concat layer (see Fig. 3(a)) as

    F^{final} = \mathcal{D}(F^{multi\text{-}gap}, F^{add\text{-}gap}),    (10)

where F^{final} ∈ R^{1×1×320}, which contains the synthetic information of signal characteristics at multiple scales, is flattened into a single vector before being fed to the fc layer defined by k neurons and followed by a softmax layer. The number of neurons in the fc layer is identical to the number of classes in a given dataset for a particular task. Regarding the different drone surveillance tasks studied in this work, we use the same CNN architecture with a minor change in the fc layer. Particularly, RF-UAVNet is configured with k = 2 for drone detection, k = 4 for drone classification, and k = 10 for operation recognition (see Table 4). The detailed network configurations are further provided in Table 5. The network is finalized with a softmax layer that performs a normalized exponential function to transform the scores s = {s_j}_{j=1}^{k} deduced by the fc layer into the class probability distribution

    p_j = \frac{e^{s_j}}{\sum_{i=1}^{k} e^{s_i}},    (11)

where p_j is the j-th class probability of an input with 0 ≤ p_j ≤ 1 and \sum_{i=1}^{k} p_i = 1. The softmax function can be considered the multi-class generalization of the logistic sigmoid function. The network assigns each input to one of the k mutually exclusive classes based on the output of the softmax function and computes the cross-entropy loss for multi-class classification as follows:

    \mathcal{L} = -\sum_{i=1}^{N} \sum_{j=1}^{k} \nu_{ij} \ln \upsilon_{ij},    (12)

where N is the number of training signal frames, ν_{ij} denotes the ground truth of the i-th input signal associated with the j-th class, and υ_{ij} denotes the output produced by the network for class j of the input signal i.
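The head of the network, per (10)-(12), can be illustrated with the following hedged sketch (note that PyTorch's CrossEntropyLoss applies the softmax of (11) internally to the raw scores):

```python
import torch
import torch.nn as nn

# Sketch of the network head, per (10)-(12): the 256-channel multi-gap
# descriptor is concatenated with the pooled output of the last
# skip-connection (64 channels), giving 320 features for the fc layer.
k = 10                                    # operation recognition: 10 classes
fc = nn.Linear(320, k)

multi_gap = torch.randn(1, 256)           # flattened F^multi-gap
add_gap = torch.randn(1, 64)              # flattened P_avg(F_4^add)
f_final = torch.cat([multi_gap, add_gap], dim=1)       # (10): shape (1, 320)

scores = fc(f_final)                      # s_j scores of (11)
probs = torch.softmax(scores, dim=1)      # p_j of (11), sums to 1
target = torch.tensor([3])                # toy ground-truth class index
loss = nn.CrossEntropyLoss()(scores, target)  # (12); softmax applied inside
print(probs.sum().item(), loss.item())
```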


TABLE 5. RF-UAVNet configuration (Task 01: drone detection, Task 02: drone classification, and Task 03: operation recognition).

The configurations of the network training process are given as follows: the optimizer is stochastic gradient descent with momentum, the momentum factor is 0.95, the L2 regularization factor is 0.0001, the maximum number of epochs for training is 90, the initial learning rate is 0.01 (dropping to 0.001 after 45 epochs for better training convergence), and the mini-batch size is 128. The network is evaluated on a platform using 3.70-GHz CPUs, 32GB RAM, and a single NVIDIA GTX 1080Ti GPU.
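Expressed as a PyTorch sketch (the original training runs in MATLAB; the placeholder module stands in for RF-UAVNet), this configuration reads:

```python
import torch

# Hedged sketch of the stated training setup: SGD with momentum 0.95,
# L2 regularization (weight decay) 1e-4, 90 epochs, mini-batch size 128,
# and a learning-rate drop from 0.01 to 0.001 after epoch 45.
model = torch.nn.Linear(320, 10)          # placeholder for RF-UAVNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.95, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[45], gamma=0.1)
for epoch in range(90):
    # ... iterate over mini-batches of 128 frames: forward pass, loss (12),
    # backward pass, optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()                      # 0.01 -> 0.001 after epoch 45
```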
B. PARAMETER SENSITIVITY
IV. EXPERIMENTAL RESULTS AND DISCUSSIONS
This section evaluates the performance of RF-UAVNet for drone surveillance using the 10-fold cross-validation protocol [35]. Concretely, the DroneRF dataset is randomly partitioned into ten non-overlapping folds, where one fold is used as the validation set for testing, and the remainder is used to train the network. This process is then repeated ten times to evaluate the whole dataset, and the overall performance is reported by averaging the performance of all folds.
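A sketch of this protocol (assuming scikit-learn's KFold; the training and evaluation calls are placeholders):

```python
import numpy as np
from sklearn.model_selection import KFold

# Hedged sketch of the 10-fold protocol: the dataset is split into ten
# non-overlapping folds; each fold serves once as the validation set and
# the reported figure is the average over all ten runs.
X = np.random.rand(1000, 10_000, 2).astype(np.float32)  # toy stand-in frames
y = np.random.randint(0, 10, size=1000)                 # toy labels

scores = []
for train_idx, val_idx in KFold(n_splits=10, shuffle=True).split(X):
    # ... train RF-UAVNet on X[train_idx], y[train_idx] here ...
    acc = 0.0  # ... placeholder: evaluate on X[val_idx], y[val_idx]
    scores.append(acc)
print(f"mean accuracy over folds: {np.mean(scores):.4f}")
```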
Specifically, four principal experiments along with insightful discussions are delivered as follows:
• In the first experiment, we benchmark the performance of RF-UAVNet for three drone surveillance tasks (i.e., drone detection, drone classification, and operation recognition, denoted by tasks 01-03) on the DroneRF dataset.
• The second experiment evaluates the parameter sensitivity of RF-UAVNet to the overall system performance with different numbers of filter groups in g-conv layers.
• The third experiment is an ablation study, in which RF-UAVNet is compared with its variants to demonstrate the advantages of grouped convolution and skip-connection incorporated with multi-gap.
• The last one compares RF-UAVNet with other state-of-the-art DL-based approaches, which also exploit DNN and CNN models for drone surveillance.

The performance of drone detection and classification is measured using the accuracy and F1-score metrics. The MATLAB codes of our work can be freely accessed on the GitHub repository.¹

¹https://github.com/ThienHuynhThe/RF-based-Drone-Surveillance-with-DL

A. MODEL PERFORMANCE
In the first experiment, we evaluate the performance of RF-UAVNet on the DroneRF dataset for different tasks, where the numerical results of confusion matrices are shown in Fig. 6. For drone detection, with the result given in Fig. 6(a), RF-UAVNet reaches an overall correct identification rate of 99.85%. For drone classification, RF-UAVNet yields an overall accuracy of 98.55%, whereas Phantom presents the worst classification rate of 96.41%. Phantom and Bebop are confused with AR by around 2.80% and 2.04%, respectively. For operation recognition, which is more challenging than drone detection and classification with more classes for discrimination, the network achieves an overall accuracy of 95.33%. From the confusion matrix presented in Fig. 6(c), Bebop mode 04 (flying with video recording) presents the worst recognition rate of 88.37%, being mostly confused with Bebop mode 03 (flying without video recording) by approximately 8.63%. Interestingly, high confusion is found between different modes of the Bebop drone; for instance, mode 01 (powering on and connecting) and mode 02 (hovering) are misclassified as mode 03 with error rates of 4.31% and 5.74%, respectively.

B. PARAMETER SENSITIVITY
This experiment investigates the relationship between the overall performance and the network complexity for various numbers of filter groups (numGroups) defined in the g-conv layers. The complexity is evaluated by the network size, which is measured by the number of trainable parameters, and the computational cost, which is estimated by the number of floating-point operations (FLOPs). In Fig. 7, we report the results of the three tasks with numGroups = {2, 4, 6, 8, 16, 32}, where the number of filters in each group is automatically determined to produce the same number of feature maps.

FIGURE 7. Performance and complexity of RF-UAVNet with various numbers of filter groups (numGroups).

In general, the cost efficiency of RF-UAVNet improves along with the increment of numGroups, whereas the performance of the three drone surveillance tasks degrades. For instance, from Figs. 7(a) and 7(b), the F1-score of drone detection and classification reduces by around 0.01% and 0.14%, respectively, when we increase numGroups from 4 to 8 groups. However, when we increase numGroups from 8 to 16 groups, the F1-score decreases severely, by 0.62% for drone detection and 3.08% for drone classification. For operation recognition, the performance deterioration is nearly identical each time numGroups is doubled (approximately 2.41% in accuracy and 2.57% in F1-score). Concerning network complexity, the number of parameters and FLOPs reduce significantly for small numbers of filter groups. Because of the same architecture configuration of the r-unit and g-units to extract features, the size difference of the RF-UAVNets for different tasks derives from the numbers of weights and biases that densely associate all neurons in the concat layer with the k neurons in the fc layer. With k = 10, the size of RF-UAVNet for operation recognition is bigger than those of the networks for drone detection and drone classification, as shown in Fig. 7(c).


FIGURE 6. Confusion matrices: (a) drone detection, (b) drone classification, and (c) operation mode recognition, where classes 01-04 are of the AR drone, classes 05-08 are of the Bebop drone, class 09 denotes the Phantom mode 01, and class 10 denotes RF background activity without drone detection.

Notably, the computation of RF-UAVNet is mostly for feature calculation in the convolutional layers, while the computational cost consumed in the fc layer is insignificant. This explains the small gap in FLOPs between the three different tasks in Fig. 7(d). As a result, we specify numGroups = 8 to achieve a comfortable balance between performance and complexity.

We further investigate the performance of RF-UAVNet with different common activation functions: rectified linear unit (ReLU), leaky ReLU, hyperbolic tangent (Tanh), and eLU. Based on the accuracy and F1-score results in Table 6, eLU shows the best performance for drone detection and drone classification, while being worse than leaky ReLU for operation recognition by insignificant gaps. Despite being better than Tanh for drone classification, ReLU presents worse results in the drone detection and operation mode recognition experiments. It is noted that the activation layers do not have parameters to learn and take up a small fraction of the overall network computation.

TABLE 6. Performance of RF-UAVNet with different activation functions.

C. ABLATION STUDY
The third experiment evaluates the effectiveness of grouped convolution incorporated with the multi-level skip-connection and multi-gap mechanisms, which are exploited in RF-UAVNet. Specifically, we compare RF-UAVNet with three cut-off variants, denoted Net-A, Net-B, and Net-C, in terms of accuracy and complexity. The detailed architectures of these CNNs are given as follows and summarized in Table 7:
• Net-A: an alternative architecture of RF-UAVNet using standard conv layers without the incorporated structure of skip-connection and multi-gap.
• Net-B: an alternative architecture of RF-UAVNet using g-conv layers without skip-connection and multi-gap (or Net-A with g-conv).
• Net-C: an alternative architecture of RF-UAVNet using g-conv layers and skip-connection without multi-gap (or Net-B with skip-connection).

It is noted that all networks are evaluated with the same training configuration. Compared with Net-A, which uses standard conv layers, Net-B with g-conv layers remarkably reduces the network size (from 50K to 8.4K parameters) and the computational cost (from 24.5 MFLOPs to 4.4 MFLOPs, where MFLOPs denotes megaFLOPs), while it suffers only a trivial reduction of performance.


Net-C, which is upgraded from Net-B with the multi-level skip-connection, achieves a performance improvement in terms of accuracy and F1-score without increasing the number of parameters and FLOPs. By leveraging a sophisticatedly designed architecture, wherein g-conv layers associate with the multi-level skip-connection and multi-gap mechanisms, RF-UAVNet successfully obtains a twofold objective: low complexity and high accuracy. Concretely, RF-UAVNet is approximately 80% lower than Net-A in terms of complexity and performs drone classification and operation recognition better than Net-B by around 7.41 − 12.12%. Notably, RF-UAVNet is bigger than Net-B and Net-C in terms of network size because the multi-gap structure for multi-level feature-map aggregation increases the number of parameters in the fc layer.

TABLE 7. Performance comparison between RF-UAVNet and its cut-off variants. Note: complexity is reported with RF-UAVNet for operation recognition.

TABLE 8. Performance comparison between RF-UAVNet and other deep networks.

D. METHOD COMPARISON
In the last experiment, RF-UAVNet is compared with some recent DL models deployed for drone surveillance, including DNN [35], 1D-CNN [17], and MC-CNN [36], in terms of performance and complexity, where the numerical results, including accuracy, F1-score, number of trainable parameters, FLOPs, and processing speed (a.k.a., the average prediction time of a signal frame), are reported in Table 8. In general, the performance gap between the compared methods in drone detection is trivial (around 0.1 − 0.5%), whereas the gaps in drone classification and operation recognition are more significant. DNN, which is simply designed with three hidden layers, reports the worst performance, especially on the operation recognition task, with an overall accuracy of 46.8% and an F1-score of 43.0%. By leveraging convolutional architectures, 1D-CNN with multiple 1D conv layers associated with average pooling layers in a cascade structure performs better than DNN in drone classification and operation recognition. Compared with DNN, 1D-CNN improves accuracy and F1-score by up to 12.4% and 12.1%, respectively. With a depth-wise concatenation layer to selectively aggregate features in multiple channels, MC-CNN reaches a drone classification accuracy of 94.6% and an operation recognition accuracy of 87.4%, which significantly outperforms DNN by 9.1 − 28.2%. Despite being worse than MC-CNN by a tiny gap in drone detection (around 0.1 − 0.2%), RF-UAVNet, which effectively incorporates the multi-level skip-connection and multi-gap mechanisms, performs drone classification and operation recognition more precisely, with accuracy higher by 4.0% and 7.9% and F1-score greater by 7.5% and 18.1%, respectively. For complexity, DNN has the biggest size, with around 5.1M parameters, because it specifies large numbers of neurons in hidden layers for dense connections, but its speed is the fastest thanks to a very simple network structure. Compared with DNN, 1D-CNN and MC-CNN are much more lightweight but have higher computation costs and classify more slowly. RF-UAVNet presents the smallest network size, with 11K parameters, and the cheapest computation, with 4.4 MFLOPs, thanks to the deployment of grouped convolution (instead of the regular convolution in 1D-CNN and MC-CNN). With the sophisticated structure of multi-level skip-connection and multi-level pooling, our network performs a little slower than DNN. Accordingly, the proposed CNN has demonstrated superiority in terms of accuracy and complexity against other existing deep models for drone surveillance.

V. CONCLUSION AND DISCUSSIONS
In this paper, we presented an efficient RF-based surveillance approach to manage the flying activities of registered drones, in which we proposed a cost-efficient and high-accuracy CNN, namely RF-UAVNet, to effectively detect and classify drones and recognize their operation modes. RF-UAVNet is characterized by grouped convolutional layers to significantly reduce the network size and computational cost, where each g-conv layer is specified by asymmetric 1D kernels to capture temporal correlations as local features of RF signals. Remarkably, to increase the overall accuracy of the system, we designed the multi-level skip-connection and multi-gap mechanisms to, respectively, prevent the vanishing gradient effectively and collect useful global features at different signal resolutions. In the experiments, we analyzed RF-UAVNet exhaustively with different architecture configurations on the DroneRF dataset to demonstrate the effectiveness of grouped convolution in reducing complexity and of the multi-level skip-connection incorporated with multi-gap in improving accuracy.


RF-UAVNet achieved very high accuracy for different drone surveillance tasks (approximately 99.9% for drone detection, 98.6% for drone classification, and 95.3% for operation recognition) with low complexity (11K parameters and 4.4 MFLOPs), which could make it a favorable approach for onboard deployment in portable anti-drone systems. Furthermore, the proposed deep network considerably outperformed state-of-the-art DL models for drone surveillance in terms of accuracy and F1-score.

In the future, we intend to create our own practical dataset of RF signals for different drone surveillance tasks, in which each signal should be represented in a complex envelope form with in-phase and quadrature components. Besides, we will extend the dataset to more conditions and scenarios, including increasing the number of drones, recording RF signals under different channel impairments (e.g., fading and additive noise) in both indoor and outdoor environments, and varying the speed, altitude, and distance between the drones and the RF sensing module.

REFERENCES
[1] S. Hussain, S. A. Chaudhry, O. A. Alomari, M. H. Alsharif, M. K. Khan, and N. Kumar, "Amassing the security: An ECC-based authentication scheme for Internet of Drones," IEEE Syst. J., vol. 15, no. 3, pp. 4431–4438, Sep. 2021.
[2] Q.-V. Pham, T. Huynh-The, M. Alazab, J. Zhao, and W.-J. Hwang, "Sum-rate maximization for UAV-assisted visible light communications using NOMA: Swarm intelligence meets machine learning," IEEE Internet Things J., vol. 7, no. 10, pp. 10375–10387, Oct. 2020.
[3] I. Bisio, C. Garibotto, H. Haleem, F. Lavagetto, and A. Sciarrone, "On the localization of wireless targets: A drone surveillance perspective," IEEE Netw., vol. 35, no. 5, pp. 249–255, Sep./Oct. 2021.
[4] H. Fu, S. Abeywickrama, L. Zhang, and C. Yuen, "Low-complexity portable passive drone surveillance via SDR-based signal processing," IEEE Commun. Mag., vol. 56, no. 4, pp. 112–118, Apr. 2018.
[5] B. Taha and A. Shoufan, "Machine learning-based drone detection and classification: State-of-the-art in research," IEEE Access, vol. 7, pp. 138669–138682, 2019.
[6] M. M. Azari, H. Sallouha, A. Chiumento, S. Rajendran, E. Vinogradov, and S. Pollin, "Key technologies and system trade-offs for detection and localization of amateur drones," IEEE Commun. Mag., vol. 56, no. 1, pp. 51–57, Jan. 2018.
[7] G. Ding, Q. Wu, L. Zhang, Y. Lin, T. A. Tsiftsis, and Y.-D. Yao, "An amateur drone surveillance system based on the cognitive Internet of Things," IEEE Commun. Mag., vol. 56, no. 1, pp. 29–35, Jan. 2018.
[8] I. Bisio, C. Garibotto, F. Lavagetto, A. Sciarrone, and S. Zappatore, "Unauthorized amateur UAV detection based on WiFi statistical fingerprint analysis," IEEE Commun. Mag., vol. 56, no. 4, pp. 106–111, Apr. 2018.
[9] P. Nguyen, H. Truong, M. Ravindranathan, A. Nguyen, R. Han, and T. Vu, "Matthan: Drone presence detection by identifying physical signatures in the drone's RF communication," in Proc. 15th Annu. Int. Conf. Mobile Syst. Appl. Serv. (MobiSys), Niagara Falls, NY, USA, Jun. 2017, pp. 211–224.
[10] A. Shoufan, H. M. Al-Angari, M. F. A. Sheikh, and E. Damiani, "Drone pilot identification by classifying radio-control signals," IEEE Trans. Inf. Forensics Security, vol. 13, no. 10, pp. 2439–2447, Oct. 2018.
[11] H. Zhang, C. Cao, L. Xu, and T. A. Gulliver, "A UAV detection algorithm based on an artificial neural network," IEEE Access, vol. 6, pp. 24720–24728, 2018.
[12] T. Huynh-The, Q.-V. Pham, T.-V. Nguyen, T. T. Nguyen, R. Ruby, M. Zeng, and D.-S. Kim, "Automatic modulation classification: A deep architecture survey," IEEE Access, vol. 9, pp. 142950–142971, 2021.
[13] Q.-V. Pham, N. T. Nguyen, T. Huynh-The, L. B. Le, K. Lee, and W.-J. Hwang, "Intelligent radio signal processing: A survey," IEEE Access, vol. 9, pp. 83818–83850, 2021.
[14] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, Sep. 2015.
[15] T. Huynh-The, C. Hua, Q. Pham, and D. Kim, "MCNet: An efficient CNN architecture for robust automatic modulation classification," IEEE Commun. Lett., vol. 24, no. 4, pp. 811–815, Apr. 2020.
[16] G. B. Tunze, T. Huynh-The, J.-M. Lee, and D.-S. Kim, "Sparsely connected CNN for efficient automatic modulation recognition," IEEE Trans. Veh. Technol., vol. 69, no. 12, pp. 15557–15568, Dec. 2020.
[17] S. Al-Emadi and F. Al-Senaid, "Drone detection approach based on radio-frequency using convolutional neural network," in Proc. IEEE Int. Conf. Informat., IoT, Enabling Technol. (ICIoT), Doha, Qatar, Feb. 2020, pp. 29–34.
[18] C. Yang, W. Jiang, and Z. Guo, "Time series data classification based on dual path CNN-RNN cascade network," IEEE Access, vol. 7, pp. 155304–155312, 2019.
[19] F. Wang, R. Liu, Q. Hu, and X. Chen, "Cascade convolutional neural network with progressive optimization for motor fault diagnosis under nonstationary conditions," IEEE Trans. Ind. Informat., vol. 17, no. 4, pp. 2511–2521, Apr. 2021.
[20] M. S. Allahham, M. F. Al-Sa'd, A. Al-Ali, A. Mohamed, T. Khattab, and A. Erbad, "DroneRF dataset: A dataset of drones for RF-based detection, classification and identification," Data Brief, vol. 26, Oct. 2019, Art. no. 104313.
[21] F. Hoffmann, M. Ritchie, F. Fioranelli, A. Charlish, and H. Griffiths, "Micro-Doppler based detection and tracking of UAVs with multistatic radar," in Proc. IEEE Radar Conf. (RadarConf), Philadelphia, PA, USA, May 2016, pp. 1–6.
[22] M. Jahangir, B. I. Ahmad, and C. J. Baker, "Robust drone classification using two-stage decision trees and results from SESAR SAFIR trials," in Proc. IEEE Int. Radar Conf. (RADAR), Washington, DC, USA, Apr. 2020, pp. 636–641.
[23] Y. Dan, J. Yi, X. Wan, Y. Rao, and B. Wang, "LTE-based passive radar for drone detection and its experimental results," J. Eng., vol. 2019, no. 20, pp. 6910–6913, Oct. 2019.
[24] Z. Geng, R. Xu, and H. Deng, "LTE-based multistatic passive radar system for UAV detection," IET Radar, Sonar Navigat., vol. 14, no. 7, pp. 1088–1097, Jul. 2020.
[25] X. Yue, Y. Liu, J. Wang, H. Song, and H. Cao, "Software defined radio and wireless acoustic networking for amateur drone surveillance," IEEE Commun. Mag., vol. 56, no. 4, pp. 90–97, Apr. 2018.
[26] M. Z. Anwar, Z. Kaleem, and A. Jamalipour, "Machine learning inspired sound-based amateur drone detection for public safety applications," IEEE Trans. Veh. Technol., vol. 68, no. 3, pp. 2526–2534, Mar. 2019.
[27] Z. Uddin, M. Altaf, M. Bilal, L. Nkenyereye, and A. K. Bashir, "Amateur drones detection: A machine learning approach utilizing the acoustic signals in the presence of strong interference," Comput. Commun., vol. 154, pp. 236–245, Mar. 2020.
[28] A. Rozantsev, V. Lepetit, and P. Fua, "Detecting flying objects using a single moving camera," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 5, pp. 879–892, May 2017.
[29] A. Carrio, J. Tordesillas, S. Vemprala, S. Saripalli, P. Campoy, and J. P. How, "Onboard detection and localization of drones using depth maps," IEEE Access, vol. 8, pp. 30480–30490, 2020.
[30] H. Sun, J. Yang, J. Shen, D. Liang, L. Ning-Zhong, and H. Zhou, "TIB-Net: Drone detection network with tiny iterative backbone," IEEE Access, vol. 8, pp. 130697–130707, 2020.
[31] P. Nguyen, H. Truong, M. Ravindranathan, A. Nguyen, R. Han, and T. Vu, "Cost-effective and passive RF-based drone presence detection and characterization," GetMobile, Mobile Comput. Commun., vol. 21, no. 4, pp. 30–34, Feb. 2018.
[32] M. Ezuma, F. Erden, C. Kumar Anjinappa, O. Ozdemir, and I. Guvenc, "Detection and classification of UAVs using RF fingerprints in the presence of Wi-Fi and Bluetooth interference," IEEE Open J. Commun. Soc., vol. 1, pp. 60–76, 2020.
[33] I. Bisio, C. Garibotto, A. Sciarrone, and F. Lavagetto, "Blind detection: Advanced techniques for WiFi-based drone surveillance," IEEE Trans. Veh. Technol., vol. 68, no. 1, pp. 938–946, Jan. 2019.
[34] A. Alipour-Fanid, M. Dabaghchian, N. Wang, P. Wang, L. Zhao, and K. Zeng, "Machine learning-based delay-aware UAV detection and operation mode identification over encrypted Wi-Fi traffic," IEEE Trans. Inf. Forensics Security, vol. 15, pp. 2346–2360, 2020.
[35] M. F. Al-Sa'd, A. Al-Ali, A. Mohamed, T. Khattab, and A. Erbad, "RF-based drone detection and identification using deep learning approaches: An initiative towards a large open source drone database," Future Gener. Comput. Syst., vol. 100, pp. 86–97, Nov. 2019.
[36] M. S. Allahham, T. Khattab, and A. Mohamed, "Deep learning for RF-based drone detection and identification: A multi-channel 1-D convolutional neural networks approach," in Proc. IEEE Int. Conf. Informat., IoT, Enabling Technol. (ICIoT), Doha, Qatar, Feb. 2020, pp. 112–117.


[37] S. Abeywickrama, L. Jayasinghe, H. Fu, S. Nissanka, and C. Yuen, "RF-based direction finding of UAVs using DNN," in Proc. IEEE Int. Conf. Commun. Syst. (ICCS), Chengdu, China, Dec. 2018, pp. 157–161.
[38] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proc. 32nd Int. Conf. Mach. Learn., Lille, France, vol. 37, Jul. 2015, pp. 448–456.
[39] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, "Fast and accurate deep network learning by exponential linear units (ELUs)," 2015, arXiv:1511.07289.
[40] Y. Ioannou, D. Robertson, R. Cipolla, and A. Criminisi, "Deep roots: Improving CNN efficiency with hierarchical filter groups," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 5977–5986.
[41] Z. Zhang, J. Li, W. Shao, Z. Peng, R. Zhang, X. Wang, and P. Luo, "Differentiable learning-to-group channels via groupable convolutional neural networks," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct./Nov. 2019, pp. 3541–3550.
[42] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Aug. 2016, pp. 770–778.
[43] J. Nagi, F. Ducatelle, G. A. Di Caro, D. Cireşan, U. Meier, A. Giusti, F. Nagi, J. Schmidhuber, and L. M. Gambardella, "Max-pooling convolutional neural networks for vision-based hand gesture recognition," in Proc. IEEE Int. Conf. Signal Image Process. Appl. (ICSIPA), Kuala Lumpur, Malaysia, Nov. 2011, pp. 342–347.

THIEN HUYNH-THE (Member, IEEE) received the B.S. degree in electronics and telecommunication engineering from the Ho Chi Minh City University of Technology and Education, Vietnam, in 2011, and the Ph.D. degree in computer science and engineering from Kyung Hee University (KHU), South Korea, in 2018. From March to August 2018, he was a Postdoctoral Researcher with KHU. He is currently a Postdoctoral Research Fellow with the ICT Convergence Research Center, Kumoh National Institute of Technology, South Korea. His current research interests include radio signal processing, digital image processing, computer vision, wireless communications, and deep learning. He was awarded the Superior Thesis Prize by KHU, in 2018, and the Golden Globe Award 2022 for Vietnamese Young Scientists by the Ministry of Science and Technology of Vietnam, in 2020.

QUOC-VIET PHAM (Member, IEEE) received the B.S. degree in electronics and telecommunications engineering from the Hanoi University of Science and Technology, Vietnam, in 2013, and the Ph.D. degree in telecommunications engineering from Inje University, Republic of Korea, in 2017. Since January 2020, he has been a Research Professor with Pusan National University, Republic of Korea. He was granted the Korea NRF funding for outstanding young researchers, from 2019 to 2024. His research interests include applying convex optimization, game theory, and machine learning to analyze and optimize edge computing and future wireless communications. He was a recipient of the Best Ph.D. Dissertation Award in engineering from Inje University, in 2017, the Top Reviewer Award from the IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, in 2020, and the Golden Globe Award from the Ministry of Science and Technology, Vietnam, in 2021. He has been a TPC/TPC Chair of leading conferences, including IEEE ICC, IEEE VTC, and EAI GameNets. He is an Editor of Journal of Network and Computer Applications (Elsevier), Scientific Reports (Nature), and Frontiers in Communications and Networks, and a Lead Guest Editor of the IEEE INTERNET OF THINGS JOURNAL.

TOAN-VAN NGUYEN (Member, IEEE) received the B.S. degree in electronics and telecommunications engineering and the M.S. degree in electronics engineering from the HCMC University of Technology and Education, Vietnam, in 2011 and 2014, respectively, and the Ph.D. degree in electronics and computer engineering from Hongik University, South Korea, in 2021. He is currently a Postdoctoral Researcher with the Department of Electrical and Computer Engineering, Utah State University, Logan, UT, USA. His current research interests include mathematical modeling of 5G networks and machine learning for wireless communications.

DANIEL BENEVIDES DA COSTA (Senior Member, IEEE) was born in Fortaleza, Ceará, Brazil, in 1981. He received the B.Sc. degree in telecommunications from the Military Institute of Engineering (IME), Rio de Janeiro, Brazil, in 2003, and the M.Sc. and Ph.D. degrees in electrical engineering, area: telecommunications, from the University of Campinas, Brazil, in 2006 and 2008, respectively. From 2008 to 2009, he was a Postdoctoral Research Fellow with INRS-EMT, University of Quebec, Montreal, QC, Canada. From 2010 to 2022, he was with the Federal University of Ceará, Brazil. From January 2019 to April 2019, he was a Visiting Professor at the Lappeenranta University of Technology (LUT), Finland, with financial support from the Nokia Foundation. From May 2019 to August 2019, he was with the King Abdullah University of Science and Technology (KAUST), Saudi Arabia, as a Visiting Faculty Member, and from September 2019 to November 2019, he was a Visiting Researcher at Istanbul Medipol University, Turkey. From 2021 to 2022, he was a Full Professor at the National Yunlin University of Science and Technology (YunTech), Taiwan. He is currently with the Digital Science Research Center, Technology Innovation Institute (TII), Abu Dhabi, UAE. His Ph.D. thesis was awarded the Best Ph.D. Thesis in electrical engineering by the Brazilian Ministry of Education (CAPES) at the 2009 CAPES Thesis Contest. He was awarded the prestigious Nokia Visiting Professor Grant. He is an editor of several IEEE journals and has acted as a symposium/track co-chair in numerous IEEE flagship conferences. He is a world's top 2% scientist from Stanford University, in 2020, and has been ranked among the top 1% scientists in the world in the broad field of electronics and electrical engineering. He is also a Distinguished Speaker of the IEEE Vehicular Technology Society.

DONG-SEONG KIM (Senior Member, IEEE) received the Ph.D. degree in electrical and computer engineering from Seoul National University, Seoul, Republic of Korea, in 2003. From 1994 to 2003, he worked as a full-time Researcher at ERC-ACI, Seoul National University. From March 2003 to February 2005, he worked as a Postdoctoral Researcher with the Wireless Network Laboratory, School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA. From 2007 to 2009, he was a Visiting Professor with the Department of Computer Science, University of California at Davis, Davis, CA, USA. He is currently the Director of the KIT Convergence Research Institute and the ICT Convergence Research Center (ITRC Program) supported by the Korean Government, Kumoh National Institute of Technology. His current main research interests include real-time IoT, industrial wireless control networks, networked embedded systems, and fieldbus. He is an ACM member.
