
sensors

Article
A System Using Artificial Intelligence to Detect and Scare Bird
Flocks in the Protection of Ripening Fruit
Petr Marcoň * , Jiří Janoušek , Josef Pokorný, Josef Novotný, Eliška Vlachová Hutová, Anna Širůčková,
Martin Čáp, Jana Lázničková, Radim Kadlec, Petr Raichl, Přemysl Dohnal, Miloslav Steinbauer
and Eva Gescheidtová

Faculty of Electrical Engineering and Communication, Brno University of Technology,


61600 Brno, Czech Republic; xjanou09@vutbr.cz (J.J.); xpokor61@vutbr.cz (J.P.); xnovot0v@vutbr.cz (J.N.);
xhutov00@vutbr.cz (E.V.H.); xsiruc02@vutbr.cz (A.Š.); capm@vutbr.cz (M.Č.); xlazni09@vutbr.cz (J.L.);
kadlec@vutbr.cz (R.K.); xraich02@vut.cz (P.R.); dohnalp@vutbr.cz (P.D.); steinbau@feec.vutbr.cz (M.S.);
gescha@feec.vutbr.cz (E.G.)
* Correspondence: marcon@vut.cz

Abstract: Flocks of birds may cause major damage to fruit crops in the ripening phase. This problem is addressed by various methods for bird scaring; in many cases, however, the birds become accustomed to the distraction, and the applied scaring procedure loses its purpose. To help eliminate the difficulty, we present a system to detect flocks and to trigger an actuator that will scare the objects only when a flock passes through the monitored space. The actual detection is performed with artificial intelligence utilizing a convolutional neural network. Before teaching the network, we employed videocameras and a differential algorithm to detect all items moving in the vineyard. Such objects revealed in the images were labeled and then used in training, testing, and validating the network. The assessment of the detection algorithm required evaluating the parameters precision, recall, and F1 score. In terms of function, the algorithm is implemented in a module consisting of a microcomputer and a connected videocamera. When a flock is detected, the microcontroller will generate a signal to be wirelessly transmitted to the module, whose task is to trigger the scaring actuator.

Keywords: bird detection; convolutional neural network; deterrent system; flocks of birds; fruit; fruit crops; starlings

Citation: Marcoň, P.; Janoušek, J.; Pokorný, J.; Novotný, J.; Hutová, E.V.; Širůčková, A.; Čáp, M.; Lázničková, J.; Kadlec, R.; Raichl, P.; et al. A System Using Artificial Intelligence to Detect and Scare Bird Flocks in the Protection of Ripening Fruit. Sensors 2021, 21, 4244. https://doi.org/10.3390/s21124244

Academic Editor: Paolo Spagnolo
Received: 28 May 2021; Accepted: 18 June 2021; Published: 21 June 2021

1. Introduction
The protection of fruit crops from raiding flocks of birds constitutes a major problem for fruit farmers and winegrowers. One of the most prominent pests in this respect is the European starling (Sturnus vulgaris), whose immense flocks feeding on fruit in large orchards and vineyards are perfectly capable of ruining the entire harvest [1]. Diverse measures are being taken to prevent this beautiful passerine from attacking fruit fields and wine-growing areas, with the relevant efforts and applicable techniques usually conceived or designed to be physically harmless to the bird. The scaring methods involve mechanical, optical, and acoustic approaches, in addition to utilizing natural enemies.

1.1. Mechanical Techniques


The set of mechanical instruments, options, and procedures includes, first of all, nets stretched over the entire orchard or vineyard or wrapped around the individual plants. The main disadvantages of this option rest in the comparatively high purchase cost, rather short service life, and unfavorable ecological impact; moreover, the installation is very complicated, and the labor time to install and remove the nets amounts to approximately 60 h ha−1.
Another type of scaring within this group relies on various scarecrows and kites. The former embody a centuries-old, differently configurable tool and can be combined
in various ways with the latter, which imitate birds of prey such as the eagle, falcon,
and sparrowhawk.
The natural enemies of starlings, blackbirds, and magpies are, for example, larger and
more aggressive birds, including eagles, owls, eagle owls, buzzards, and falcons; respected
and feared by the pests, these animals have been traditionally used as models for realistic
kites. In terms of the actual shape and placement, such models have proved most effective
when delivered in 3D and mounted visibly in the vicinity of the tree(s).
Yet another–interesting and modern–alternative consists in using unmanned aerial
vehicles (UAVs, drones) to deter a flock that has appeared above the monitored area, whose
size must be selected appropriately to the parameters of the drone [2,3]. This category
comprises also robotic birds, or robirds [4]; their applicability, however, is markedly limited
by the maximum flight time, relevant legislation, particular control and battery charging
requirements, and other prominent factors.

1.2. Acoustic Methods


In sizeable orchards and vineyards, a hunter’s services are usually of great benefit.
The hunter shoots blank cartridges to scare the flock away, the aim being not to kill the
birds. The central disadvantage is that the armed guard can watch over only a part of the
total area and has to stay active and alert all day, resulting in major financial costs.
Another option consists in utilizing sonic bird repellers, which emit emulated or
authentic sounds of raptors or frightened starlings signaling their mates to avoid a danger-
ous location.
A closely related procedure exploits sounds at frequencies audible and unpleasant
to the birds [5]. This principle finds use in, for instance, narrowly directional ultrasonic
repellers. Such devices, although often very effective, nevertheless provide optimum
performance especially in small, fully or partially enclosed spaces.
By extension, it is also possible to employ gas cannons, such as the Zon Mark 4 Propane
Bird Scare Cannon, albeit only at the expense of seriously disturbing persons and animals
that live nearby: The guns shoot continuously during the ripening period, creating or
increasing the noise burden in the surroundings. The method is comparatively cheap but
may gradually become ineffective.

1.3. Optical Modes and Instruments


The less costly optical solutions are based on distributing glittering items in gardens
or larger growing areas and on external surfaces of houses. This approach deters all types
of birds, as the shiny objects disturb them, and the smaller birds are directly scared by the
reflections, which resemble raptors’ eyes. Starlings, however, range among intelligent birds
and, after a time, may notice that the reflections always occur at the same places and that
there is no danger. Another popular tool is the holographic strip [6], a special glossy foil
emulating a raptor’s eye; at locations protected in this manner, starlings usually behave
very cautiously.
Considering novel trends and progressive technologies, especially in view of their
practical applicability and performance, we can emphasize the agrilaser system [7]. In this
concept, laser generators emit a beam which is then perceived by the pests as a physical
risk or obstacle and makes the birds take off or change their flight paths.

1.4. Natural Enemies


In the case of large-scale growers, a viable option is to hire a falconer. As raptors
commonly hunt small birds for food, this natural procedure (if regular) embodies a very
effective means to defend an orchard or a vineyard against starlings; however, disadvanta-
geously, the method is very time-intensive and costly.

1.5. Aims and Objectives


Most of the above-presented methods are problematic in that the pests gradually
become accustomed to the disturbance. The issue is especially irritating in the acoustic
devices, where, despite the diminished scaring effect, the human and animal exposure to
the repelling noises remains at the same levels of intensity. With this difficulty in mind,
we designed a system that executes the scaring task only after optically detecting a flock;
the triggering signal is communicated to the actuator wirelessly. In the given context, our
paper characterizes the hardware and software components of a novel setup that utilizes
videocameras and artificial intelligence (AI) to detect flocks of starlings. The entire concept
incorporates a scaring element (such as a loudspeaker, a gas cannon, or a laser beam
generator) to be activated only when a flock has been detected; thus, the process is not
continuous, eliminating undesired sonic disturbance, and this factor constitutes the greatest
advantage of the system over regularly marketed options. In this manner, the actual
scaring becomes more effective and environmentally friendly thanks to the irregularity of
the actuating impulses.
Beyond the Introduction, the paper is organized as follows: Section 2 outlines the
state of the art in the field, comparing relevant studies; Section 3 describes the hardware
of the system; Section 4 analyzes the applied scaring methodology; Section 5 presents the
experiments and their results; Section 6 discusses the general outcomes of the research; and
Section 7 embodies the conclusion.

2. Previous Research
AI algorithms are currently employed in diverse branches of science and industry,
including but not limited to civil [8] and electrical [9] engineering, crude oil drilling or
mining [10], and manufacturing control [11].
In software terms, our concept of bird flock detection exploits AI algorithms, utilizing
the approaches outlined in previously published studies that focus on AI in the detection of
animals. The set of relevant sources comprises, for example, articles [12–17], which discuss
the monitoring, detection, and classification of birds to prevent their interaction with
wind turbines. Most of the monitoring projects described in these studies concern birds
in the air. A different technique is proposed by the authors of reference [18]. This study
eventually led to the designing of deep learning-based object-detection models using aerial
images collected by an unmanned aerial vehicle (UAV). In the bird detection, the authors
employ diverse models, including the Faster Region-Based Convolutional Neural Network
(R-CNN), Region-Based Fully Convolutional Network (R-FCN), Single Shot MultiBox
Detector (SSD), Retinanet, and You Only Look Once (YOLO). Such a model-supported
procedure is also applied in source [19]. Another variant of flying bird detection (FBD),
formulated for the purposes of aviation safety, namely, to forestall bird–aircraft collisions,
is characterized in papers [20,21]; the research utilizes a simplified bird skeleton descriptor
combined with an SVM (Support Vector Machine). A similar method appears in article [22],
which proposes a feature-based bird detector. By combining the Histogram of Oriented
Gradients (HOG) and the Center-Symmetric Local Binary Pattern (CS-LBP) into a feature
set, the authors successfully detected crows under various lighting conditions. A dataset
of crows, containing a wide range of movement phases, positions, and backgrounds, was
prepared and learned with a linear Support Vector Machine (SVM). Further, article [23]
introduces a bird detection and classification system exploiting Gaussian and Gabor filters,
the HOG, and convolutional neural networks (CNNs). Such networks are employed also
by other researchers in ornithology [24], agriculture [25], and air traffic safety centered on
collision avoidance [26,27].
Studies [28,29] discuss the behavior of birds in the sky and on water, in addition to
evaluating different types of classifiers.
The literature, by extension, includes comprehensive articles addressing both the
detection and the scaring of birds [30]. One of these sources is referenced in study [31],
which characterizes the prototype of a device that comprises PIR sensors as the detector
to track moving birds and uses a Colpitts-type LC oscillator with a piezo ultrasonic sensor as the repeller. An ultrasonic repeller is described also in source [32], whose authors nevertheless follow a markedly more sophisticated path, exploiting machine learning algorithms. The use of an ultrasonic sensor as the actuator in combination with a Haar cascade classifier-based detector is then outlined in article [33].
The method defined in our study, compared to those exposed in the above-mentioned research articles, generally does not rely on detecting individual birds but rather on optimizing learning algorithms to facilitate flock detection. Another significant difference rests in that the design presented herein can involve virtually any actuator, switchable by the wireless module integrated in the system. These aspects together form the actual novelty of the designed setup, as further indicated in Section 1.5 above and the Discussion section below.
3. Hardware Components and Functions

To scan the monitored space, evaluate the movement, and transfer the detection-related data, we designed an optical detector of moving objects (Figure 1). This detector utilizes a camera system and algorithms to track items in a predefined area, allowing the detection of not only birds and insects but also, if differential image processing is applied, fast processes or effects such as lightning. The basic element of the detector consists in an NVIDIA Jetson Nano single-board computer, to which two videocameras are connected: one a Raspberry Pi HQ equipped with a Sony IMX477R sensor and a 16 mm f/1.4 PT3611614M10MP lens, and the other an Arducam 8 Mpx USB webcam CCTV 5–50 mm. The combination of two videocameras is used intentionally to deliver fast processing and convenient properties of the images.

Figure 1. A block diagram of the sample setup for the detection of flying objects.

The number of detectors required for a 360° coverage is specified in Table 1. Both of the above-defined videocameras are preset such that a flock can be captured at a distance of 300 m or less; in the Sony IMX477R, the minimum zoom value has to be selected in advance. In addition to the distance, the parameters that define the coverable area include the horizontal field of view (FOV). Before the actual monitoring, the devices are inserted in 3D-printed visors (Figure 2).

Table 1. The technical specifications of the applied cameras.

Camera: Sony IMX477R + PT3611614M10MP
    Zoom Min: Horizontal FOV 44.6°, Number for 360° coverage 8.1
    Zoom Max: Horizontal FOV 21.8°, Number for 360° coverage 16.5
Camera: Arducam 8 MP IMX179AF
    Zoom Fixed: Horizontal FOV 64.5°, Number for 360° coverage 5.6
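The "Number for 360° coverage" values in Table 1 follow directly from the horizontal FOV; the minimal sketch below assumes the count is simply 360° divided by the FOV, which reproduces the tabulated figures.

```python
# Number of modules needed for full 360° coverage, as listed in Table 1:
# the value is 360° divided by the horizontal field of view of the camera.
def modules_for_full_coverage(horizontal_fov_deg: float) -> float:
    return 360.0 / horizontal_fov_deg

for name, fov_deg in [("Sony IMX477R + PT3611614M10MP, min zoom", 44.6),
                      ("Sony IMX477R + PT3611614M10MP, max zoom", 21.8),
                      ("Arducam 8 MP IMX179AF, fixed", 64.5)]:
    print(f"{name}: {modules_for_full_coverage(fov_deg):.1f}")
# -> 8.1, 16.5, and 5.6, matching the "Number for 360° coverage" column
```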

Considering the fact that the optical detector was to be tested in a vineyard with no
connection to the electricity grid, we had to assemble a small island network [34]. For this
purpose, we employed a photovoltaic panel exhibiting the following parameters: Max
power 100 W, open circuit voltage (Voc) 21.6 V, short circuit current (Isc) 6.11 A, max power
voltage (Vmp) 18 V, and max power current (Imp) 5.55 A. The electricity obtained from this
panel was transferred to the solar charge controller and then fed directly to the devices or
stored in the battery. In island systems, a key factor consists in low consumption; we
therefore implemented in the design a time delay switch to activate the entire setup an
hour before sunrise and to shut it off an hour after sunset. Thus, we enabled the installed
optical detector to operate over the complete grape harvest period in the Czech Republic.
The electricity to the microcomputer is supplied by a 5 V voltage converter with a
maximum current of 2.5 A; the videocameras receive power from the microcomputer.
Separate feeding is secured for the module measuring the temperature and humidity,
which, thanks to the attached SIM card, transmits the acquired data over an LTE network
every 30 min. The reason for this configuration is that we need to know the temperature
and humidity inside the sealed box, intending to evaluate the data before further use.
To keep the internal spaces of the optical detector cooled, we fabricated a heat sink and
mounted it to the rear wall of the module; the air circulation is guaranteed by a cooling fan
running on a voltage of 12 V. As vineyards are generally warm and sunny, and the setup is
exposed to intensive sunlight, the surfaces of the plastic box were covered with a special
reflective foil.
The two videocameras capture the position of the flock, allowing the actuator to be
wirelessly triggered to execute the scaring task at the planned location. The wireless signal
is transferred via WiFi, using an ESP8266 module. The actual receiver comprises only the
power supply, ESP8266 module, and a relay to switch on the feeding (up to 10 A) and
signal lines of the actuator (an ultrasonic repeller or a gas cannon).
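The paper does not specify the wire protocol between the detection unit and the ESP8266 receiver, so the following is only an illustrative sketch: it assumes the receiver runs a small HTTP server on the local WiFi and closes its relay when a request arrives at a hypothetical /trigger endpoint; the address, endpoint, and duration parameter are assumptions.

```python
import requests  # assumption: the detection module can reach the ESP8266 over plain HTTP

ESP8266_ADDRESS = "http://192.168.4.1"   # hypothetical address of the scaring module
TRIGGER_ENDPOINT = "/trigger"            # hypothetical endpoint that closes the relay

def fire_scaring_actuator(duration_s: float = 5.0) -> bool:
    """Ask the wireless receiver to switch on the actuator (ultrasonic
    repeller or gas cannon) for a given number of seconds."""
    try:
        resp = requests.get(ESP8266_ADDRESS + TRIGGER_ENDPOINT,
                            params={"duration": duration_s}, timeout=2)
        return resp.status_code == 200
    except requests.RequestException:
        return False  # receiver unreachable; the detector simply retries on the next detection
```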
The optical detection and early warning unit is detailed in Figure 2; the complete
assembly, erected at a vineyard in the village of Bořetice, Moravia, the Czech Republic, is
then visualized in Figure 3.
Figure 2. The optical detection and early warning unit to track a flock of birds.

Figure 3. The complete setup located at a vineyard in Bořetice, Moravia, the Czech Republic.
4. Methods for the Detection of Moving Targets

When investigating the possibilities of the visual detection, we employed diverse image processing techniques combined with deep learning. The algorithms were implemented in the module represented in Figures 1–3.
At the initial stage, we traced any and all visual activity by utilizing a differential algorithm in the detection of movement [35,36]. The captured set included all items that had changed their positions by at least one pixel between two instances of imaging. The images contained individual birds, bird flocks, insects, and other objects. In total, we executed approximately 300,000 detection shots, and these then allowed us to use in the research a corresponding number of images with detected items. From such a perspective, the differential method for the detection of moving objects appears to be wholly insufficient in monitoring only a single concrete item; in our case, however, the object is a flock of birds. The acquired images nevertheless embodied a good input to support the subsequent application of deep learning. To prepare for this procedure, we classified and annotated the images, eventually obtaining a set of training images. The result was a classification model implemented in a microcomputer to facilitate the visual detection of bird flocks.

4.1. Image Capturing

This section characterizes the image capturing procedures in greater detail. At the initial stage of the process, we formed a set of images capturing objects moving in the vineyard, utilizing an algorithm based on the differential method complemented with background image filtering (Figure 4). This technique exploits the principle of a movement detector capable of eliminating false detection caused by the quivering of grapevine leaves or the green vegetation in the vicinity. By simply subtracting the current image from the previous one, we yield a differential image, which is then converted to a binary image resulting from a predefined threshold. Simultaneously, an accumulated differential image of the background is being created to be later subtracted from the differential image; thus, we ensure the filtering of the background. Furthermore, object detection via the recognition of binary contours is in progress; during the operation, any image containing a positively detected item is stored and sent to a cloud storage room. A binary differential image and a highlighted region around a detected object are shown in Figure 5.

Figure 4. A flow chart characterizing the differential method (the input, actual, previous, and accumulated images); the differential image of the background; the differential image; the filtering of the background; object detection; image storage; transfer to a cloud storage room.
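The flow in Figure 4 maps naturally onto a few OpenCV operations; the sketch below is a simplified illustration of the same idea (frame differencing, an accumulated background estimate, thresholding, and contour extraction), with the threshold and minimum-area values chosen arbitrarily rather than taken from the paper.

```python
import cv2
import numpy as np

def detect_moving_objects(prev_gray: np.ndarray,
                          curr_gray: np.ndarray,
                          background: np.ndarray,
                          diff_threshold: int = 25,   # arbitrary illustration values
                          min_area_px: int = 4):
    """One step of the differential method: difference two consecutive frames,
    subtract the accumulated background, binarize, and return bounding boxes
    of the remaining moving regions plus the updated background estimate."""
    diff = cv2.absdiff(curr_gray, prev_gray)

    # slowly accumulate a running average of the difference image, so that
    # persistent motion such as waving grapevine leaves ends up in the background
    cv2.accumulateWeighted(diff.astype(np.float32), background, 0.05)
    foreground = cv2.subtract(diff, cv2.convertScaleAbs(background))

    _, binary = cv2.threshold(foreground, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area_px]
    return boxes, background

# usage: background = np.zeros(frame_shape, dtype=np.float32) before the first call,
# then feed consecutive grayscale frames from the videocameras.
```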
Figure 5. A binary differential image (left) and a highlighted region around a detected object (right).

The unit visualized in Figure 2 is capable of detecting bird-size objects in real time, up to a distance of approximately 300 m. If movement is recognized, the image is sent to a cloud storage room. The images collected during our experiment comprised multiple false detections due to insects, and this effect can be considered the greatest disadvantage of the applied differential algorithm. The technique, however, enabled us to gather a sufficient number of images (see Figure 6 for examples), which then found use at the next stage, in training the classifier via machine learning methods. The data collection took place at two wineries, one based in Bořetice and the other in Popice (both in Moravia, the Czech Republic). The altitude profile of the vineyard and the distribution of the optical detectors are represented in Figure 7, where each of the red spots represents three detecting modules that cover the areas highlighted in green.

Figure 6. Examples of images capturing the activity in a vineyard.

Figure 7. A special content 3D map: the placement of the optical detectors.
The bulk of data collected with the detection algorithm comprised approximately 300,000 images, from which 993 were selected for further processing. Subsequently, the individual items in each image were annotated and categorized, with the birds classified into more specific subsets to demonstrate their distinctive visual in-flight characteristics (Figure 8). During the machine learning phase, the images were assorted again, roughly at the ratio of 8:1:1, into the training, validating, and testing subgroups (see Table 2, in which the individual data match the counts of detected objects). For our purposes, the term flock denotes not less than five birds flying together.

Figure 8. Categorizing the collected images (Individual birds: spread/retracted wings; insects: bees and flies; helicopters; flocks).
Table 2. The categories and their counts in the training, validation, and testing subgroups.

Category                              Training    Validation    Testing
Insects        Flies                       442            41         43
               Bees                         36             7          4
Birds          Individuals                2308           445        228
               Retracted wings              55             6          6
               Soaring                     160             5          3
Flocks                                      79            29         26
Helicopters                                 11             2          4
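The roughly 8:1:1 split described above can be reproduced with a few lines of code; this is only an illustrative sketch, since the file layout and the fixed random seed are our assumptions, not details taken from the paper.

```python
import random

def split_dataset(items: list[str], ratios=(0.8, 0.1, 0.1), seed: int = 42):
    """Shuffle the annotated images and split them roughly 8:1:1 into
    training, validation, and testing subsets."""
    rng = random.Random(seed)            # fixed seed: assumption, for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return (shuffled[:n_train],                     # training
            shuffled[n_train:n_train + n_val],      # validation
            shuffled[n_train + n_val:])             # testing

# usage: 993 annotated images selected from the ~300,000 detection shots
images = [f"vineyard_{i:04d}.jpg" for i in range(993)]   # hypothetical file names
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))   # 794 99 100
```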
4.2. Machine Learning Algorithm
To facilitate the recognition of objects in the images, we employed the cloud-based instrument Google AutoML Vision [37–40], which delivers an algorithm for the training of neural networks. The AutoML service utilizes recurrent neural networks to search for an optimum neural network architecture (applying a procedure referred to as neural architecture search) [41]. The algorithm (Figure 9) suggests the basic set of hyperparameters as well as the counts of layers and nodes to be required from the convolutional neural network that is being searched for. In the next iterations, the individual parameters are specified more precisely by the feedback. This process then repeats until the algorithm has gradually identified the best-fitting optimum architecture for the concrete dataset. The desired model is then selected from the set of assembled neural networks in such a manner that it possesses optimum properties, including sufficient accuracy and detection sensitivity.

Figure 9. The principle of the machine learning algorithm to assemble a convolutional neural network by utilizing the cloud-based service Google AutoML Vision.
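The search loop in Figure 9 can be illustrated with a deliberately simplified sketch: a controller proposes hyperparameters, a candidate convolutional network is trained and scored on the validation set, and the score is fed back to guide the choice of the next candidate. This is only a conceptual stand-in (random proposals and a dummy scoring function instead of the recurrent-network controller and real training), not the actual Google AutoML Vision internals.

```python
import random

def propose_architecture(rng: random.Random) -> dict:
    """Stand-in for the predictive regulator: propose hyperparameters
    for a candidate convolutional network."""
    return {"layers": rng.choice([3, 5, 8]),
            "filters": rng.choice([16, 32, 64]),
            "learning_rate": rng.choice([1e-2, 1e-3, 1e-4])}

def train_and_validate(arch: dict) -> float:
    """Placeholder for the real training step: a dummy score is returned here
    so the loop runs; in practice this would train the candidate CNN and
    report its validation precision."""
    return 1.0 / (1.0 + abs(arch["layers"] - 5)) * (1.0 if arch["learning_rate"] == 1e-3 else 0.8)

def architecture_search(iterations: int = 20, seed: int = 0) -> dict:
    rng = random.Random(seed)
    best_arch, best_score = None, -1.0
    for _ in range(iterations):
        arch = propose_architecture(rng)
        score = train_and_validate(arch)   # feedback signal (validation precision)
        if score > best_score:             # keep the best-fitting architecture so far
            best_arch, best_score = arch, score
    return best_arch
```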

4.3. Parameters of the Classifier

The classifier was trained with the cloud-based tool Google AutoML Vision featuring separate validation and testing sets. To allow the implementation, we used an exported TensorFlow Lite model after the classifier had been trained. The results delivered by the classifier followed from a comparison of the detection algorithm with the annotated test images. The prerequisite for evaluating a correctly detected object (true positive) was an overlap of at least 50% with the ground-truth template (Figure 10).
At this point, we would like to define the Intersection over Union (IoU), also called the Jaccard index. It is a metric that evaluates the overlap between the ground-truth mask (GT) and the predicted mask (PD). We can use IoU to determine if a given detection is valid or not.
IoU is calculated as the area of overlap/intersection between GT and PD, divided by the area of the union between the two, that is:

IoU = Area of overlap / Area of union = (GT ∩ PD) / (GT ∪ PD). (1)
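Equation (1) translates directly into code; below is a minimal sketch for axis-aligned bounding boxes, where the (x1, y1, x2, y2) box convention is our assumption.

```python
def iou(box_a: tuple[float, float, float, float],
        box_b: tuple[float, float, float, float]) -> float:
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# a detection counts as a true positive when it overlaps the ground truth by at least 50%
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))   # 0.333..., below the 0.5 threshold
```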

Figure 10. The overlap between the predicted ground truth and a detected object.

Other parameters enabling correct evaluation of the images were as follows:


Confidence rate—virtually errorless detection is achievable but only at the expense of
low sensitivity, and vice versa. To carry out the testing, we set the confidence rate to 30%.
The detection precision rates, presented in Table 3, denote the proportion of true-
positive detections within the entire set of detected objects. We thus have the following:

Precision = True positive / (True positive + False positive). (2)

A further criterion consists in the detection sensitivity value (recall):

Recall = True positive / (True positive + False negative). (3)

The F1 score then evaluates the harmonic average of precision and sensitivity, via:

F1 = 2·True positive / (2·True positive + False positive + False negative). (4)

Finally, we evaluated the error rates in all classes of the trained classifier. This quantity
is given as the proportion between all misclassified objects and the total number of detected
objects in a concrete class, expressed through

Error rate = (False positive + False negative) / (False positive + False negative + True positive + True negative). (5)
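Equations (2)–(5) applied to the counts in Table 3 yield the per-class figures reported later in Table 5; the small sketch below shows that evaluation. The true-negative count needed by the error rate is not tabulated per class in the paper, so it is passed in explicitly here.

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int = 0) -> dict[str, float]:
    """Precision, recall, F1 score, and error rate as defined in Equations (2)-(5)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    error_rate = (fp + fn) / (fp + fn + tp + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "error_rate": error_rate}

# the "flock" row of Table 3: TP = 33, FP = 0, FN = 2 (tn taken as 0 here)
print(detection_metrics(tp=33, fp=0, fn=2))
# precision 1.000, recall 0.943, f1 0.971 - the flock values reported in Table 5
```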

Table 3. The assignment of the various types of objects to the GT, TP, FP, and FN sets.

Class          Ground Truth (GT)    True Positive (TP)    False Negative (FN)    False Positive (FP)
bird                         898                   749                    149                    145
bug                           71                    48                     23                     24
flock                         35                    33                      2                      0
helicopter                     4                     4                      0                      0

5. Results
The figure below demonstrates the object detection outcomes obtained in testing the
designed optical detector. Figure 11a,b,i show some of the birds that were tracked near the
movement-sensing videocameras, at a distance of about 10 m. The visualized procedures
also include the detection of flocks (Figure 11c–e,k) at various distances and the tracking
of objects correctly defined as insects (Figure 11j) or a helicopter (Figure 11l). The percentage
numbers in the captions to the green frames that mark the detected items express the
confidence scores; in terms of this parameter, the designed optical detector algorithm is set
such that if the optical unit detects a flock at a confidence score of at least 30%, the actuator
will be triggered to execute a scaring task.
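Put together, the decision logic described above can be illustrated end to end: load the exported TensorFlow Lite classifier, score a frame, and trigger the scaring module only when the flock class reaches the 30% confidence threshold. This is a sketch only; the model file name, label order, and preprocessing are assumptions rather than details given in the paper.

```python
import numpy as np
import tensorflow as tf

LABELS = ["bird", "bug", "flock", "helicopter"]   # assumed label order
CONFIDENCE_THRESHOLD = 0.30                        # confidence rate used during testing

interpreter = tf.lite.Interpreter(model_path="flock_classifier.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> dict[str, float]:
    """Score one frame (already resized/normalized to the model's input spec)."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...].astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return dict(zip(LABELS, map(float, scores)))

def process_frame(frame: np.ndarray) -> None:
    scores = classify(frame)
    if scores.get("flock", 0.0) >= CONFIDENCE_THRESHOLD:
        # here the wireless trigger described in Section 3 would be sent to the
        # ESP8266 module that switches on the ultrasonic repeller or gas cannon
        print("flock detected with confidence", scores["flock"])
```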
The correctness and error rates related to the results obtained during the testing of the
trained classifier are summarized in Table 3. The Table compares the performances of the
detection algorithm with the ground truth (GT) set, which represents objects annotated in
advance. Correctly evaluated items are introduced within the true positive (TP) set. Further,
the false negative (FN) detections group comprises all items that were not evaluated as
belonging to a given class but actually fit within it. The false negatives are caused by the
lower sensitivities accompanying a class. Lastly, objects identified as false positive (FP)
were detected under a class but, in reality, do not fit; such detections generate false alarms
and are undesirable in the application of a classifier.

Figure 11. The detection outcomes in individual objects.

To offer a more comprehensive idea of the behavior of the classifier, we assembled a relevant confusion matrix. This matrix is bound to the false positive (FP) detections; such detections, in turn, comprise false-positive misclassifications that match a different class on the one hand, and those that do not fit within any class on the other. As shown in Table 4, confusing a detected object with another class is a rare, nearly nonexistent scenario. In most cases, the detection type involved is FP, with the object being outside all of the GT subclasses. Thus, we can claim that the classifier is very resistant to being confused between the classes.

Table 4. The confusion matrix for multi-class classification.

                                      Predicted Class
                           Bird      Bug      Flock      Helicopter
Ground truth  Bird          749        1          0               0
              Bug             3       48          0               0
              Flock           1        0         33               0
              Helicopter      0        0          0               4
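Table 4 can be reproduced from the per-object ground-truth and predicted labels; the short sketch below shows the counting step, with placeholder label lists standing in for the actual annotation data.

```python
from collections import Counter

CLASSES = ["bird", "bug", "flock", "helicopter"]

def confusion_matrix(ground_truth: list[str], predicted: list[str]) -> dict:
    """Count (ground truth, predicted) label pairs for the evaluated detections."""
    counts = Counter(zip(ground_truth, predicted))
    return {gt: {pred: counts[(gt, pred)] for pred in CLASSES} for gt in CLASSES}

# placeholder labels for a handful of detections
gt =   ["bird", "bird", "flock", "bug", "bird"]
pred = ["bird", "bug",  "flock", "bug", "bird"]
print(confusion_matrix(gt, pred)["bird"])   # {'bird': 2, 'bug': 1, 'flock': 0, 'helicopter': 0}
```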

Table 5 describes the resulting detection parameters obtained from the individual
objects within the testing sets. The numbers are based on the data in Table 3, indicating the
detection precision, recall, F1 score, and error rate.

Table 5. Comparing the precision, recall, F1 score, and error rate between the individual classes.

Class            Precision    Recall    F1-Score    Error Rate
Bird                 83.4%      83.4%       83.4%         28.2%
Bug                  65.8%      67.6%       66.7%         49.5%
Flock               100.0%      94.3%       97.1%          5.7%
Helicopter          100.0%     100.0%      100.0%          0.0%

6. Discussion
To optimize the AI algorithms, we first acquired approximately 300,000 images to
choose items suitable for further processing. After this pre-selection, the testing dataset
comprised 134 images containing 1008 ground truth objects. Within the presented results,
the precision rates obtained in the birds, insects, and flocks amounted to 83.4%, 65.8%, and
100%, respectively. The set of bird flock images captured 35 objects; of these, 33 items were
detected correctly, and two exhibited falsely negative classification. At this point, it should
be emphasized that the resulting classifier evaluated the flocks correctly up to a distance
of 300 m.
As regards the recall parameter, the rates in birds and insects equaled 83.4% and 67.9%,
respectively. The lower sensitivity values were due in particular to the lower sensitivity
(confidence rate) of the classifier; this effect arose from the relatively small dimensions of
the objects. In bird flocks, however, the sensitivity reached 94.3%, mainly because such
formations are better definable with ground truth, to become detectable more effectively
(Figure 10). The F1 score indicator exhibited the value of 97.1% when applied to bird flocks;
the capturing of potentially dangerous flocks can thus be considered successful.
In bird flocks, the error rate remained at 5.7%; the other classes, however, produced
higher rates, owing to the greater uncertainties of the classifier. This evaluation was
influenced by a larger number of false negatives, which were caused mainly by the greater
distance of the object from the camera, resulting in a poorer detection performance in
smaller birds and insects.
The most significant obstacles to correct classification rested in the overly small
dimensions of the objects, caused by a long distance from the videocamera, and inferior
light conditions. Conversely, large flocks or even small formations of several individuals
are detectable with very good results. The reduced precision and sensitivity of detection
then arise especially from the remote position of the objects being detected. In practical
terms, it should be pointed out that the test involved randomly selected images. In
reality, a flock is captured in multiple images to be evaluated; such a process then ensures
sufficient sensitivity.
The number and placement of the cameras depend on not only the parameters of the
applied videocameras but also the area and altitude profile of the vineyard being observed.
Typically, vineyards in the region consist of long narrow stripes of land that are diversified
according to the owner and/or vine varietal. In the basic device placement scheme, a
videocamera becomes the center of a circle having the radius of 300 m. To cover the full

range of 360°, we need at least 8 such sensing modules. Alternatively, it is possible to plan
triangular or rhomboidal detection segments.
Most of the papers referred to in Section 2 discuss detecting individual birds; in this
procedure, the proportion of the bird’s size to its distance from the videocamera is vital.
The algorithms proposed herein, however, were designed to detect flocks rather
than individuals, and they provide high operational sensitivity for the given purpose. The
sensitivity rate could nevertheless be increased even further by reducing the distance
between the optical detector and the actual flock. Bird flock detection is discussed in the
relevant paper [2], where other algorithms are employed together with the UAS-based
process; thus, the detection parameters are then somewhat worse than those delivered
through our research.
Generally, neither the commercial manufacturers nor the literature have utilized the
above-described approach to date in dedicated scaring of pest bird flocks (as opposed to
individual birds), whose raids on ripening fruit significantly reduce the planned harvest.
The design that best resembles ours is the AVIX, a bird detection setup exploiting
AI. According to previous articles, this system is capable of distinguishing between birds,
humans, and other objects. In functional terms, the birds are scared away by a green laser
beam. As regards our design, it is appropriate to stress again at this point the fact that
the actuator will not be triggered when the system has detected an individual bird but
only after an entire flock has been identified. The innovative concept, moreover, allows the
user to incorporate in the system multiple optional scaring components, including acoustic
equipment (loudspeakers or gas cannons), lasers, and drones.
Interestingly, a comparison between the design proposed herein and commercial
repellers, such as gas cannons and acoustic devices, will show that our approach eliminates
a major portion of the disturbances generated by standardly marketed instruments, thus
bringing substantial benefit to both humans and animals living in the vicinity of the
monitored area. Such a capability stems from the adopted technology, where–unlike the
other, presently available options–the actuating impulse is sent exclusively when a flock
has been found. The main asset of the novel design rests in that the birds cannot become
accustomed to the scaring process, by contrast to the continuously repeated effects [3]
triggered regardless of whether a flock is within reach or not.
The limitations to our approach encompass, above all, the reduced FOV of the cameras;
thus, the overall number of applied detecting and scaring modules has to be suitably
increased in the future.

7. Conclusions
We designed and characterized an AI-based system to allow timely detection of
bird flocks that damage ripened fruit. To approach errorless detection, i.e., to prevent
false negativity, we set the confidence rate to 30%. At higher rates, the algorithm in the
microcomputer will wirelessly activate the actuator, which then will initiate the bird scaring
procedure. As regards the recall value (sensitivity), the level achieved in the procedure
is 94.3% and rises further when the flock moves towards the camera. At flock distances
exceeding approximately 300 m, falsely negative results begin to appear. In terms of the
structure of the actual system, the paper discussed both the software and the hardware,
as they together ensure two significant benefits. The first of these advantages consists in
that the scaring will materialize only when a flock arrives, meaning that no persons or
animals in the area are disturbed by continuous sounds emitted by conventional gas guns
or acoustic devices. The newly proposed technology also reduced the adaptability of a bird
flock to the scaring sounds, an effect that occurs in commercial acoustic devices. Another
benefit then rests in reducing the electricity costs, thanks to the scaring modules being
switched only after a flock has appeared. Thus, the battery charging requirements in the
scaring device are lessened; in the photovoltaic process, a smaller panel area is necessary to
ensure sufficient charging.

Author Contributions: Concept, P.M. and J.J.; methodology, P.M. and J.J.; software, J.N., P.R., E.V.H.,
J.L. and J.J.; hardware, J.J., J.P., M.Č. and R.K.; supervision E.G.; validation, J.J., J.N. and P.M.; writing—
original draft preparation, P.M., P.D., J.J., A.Š. and P.R.; project administration, E.V.H. and R.K.;
funding acquisition, E.V.H. and M.S. All authors have read and agreed to the published version of
the manuscript.
Funding: This research was funded by the Technology Agency of the Czech Republic, project number
TJ04000441.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors acknowledge financial support from the Technology Agency of the
Czech Republic (TJ04000441—A Starling Scaring System Using a Passive Optical Locator) and the
general student development project supervised by Brno University of Technology.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Homan, H.J.; Johnson, R.J.; Thiele, J.R.; Linz, G.M. European Starlings. Wildl. Damage Manag. Tech. Ser. 2017, 13. Available online:
http://digitalcommons.unl.edu/nwrcwdmts/13 (accessed on 20 June 2021).
2. Goel, S.; Bhusal, S.; Taylor, M.E.; Karkee, M. Detection and Localization of Birds for Bird Deterrence Using UAS. In Proceedings
of the 2017 ASABE Annual International Meeting, Spokane, WA, USA, 16–19 July 2017.
3. Wang, Z.; Griffin, A.S.; Lucas, A.; Wong, K.C. Psychological Warfare in Vineyard: Using Drones and Bird Psychology to Control
Bird Damage to Wine Grapes. Crop Prot. 2019, 120, 163–170. [CrossRef]
4. Folkertsma, G.A.; Straatman, W.; Nijenhuis, N.; Venner, C.H.; Stramigioli, S. Robird: A Robotic Bird of Prey. IEEE Robot. Autom.
Mag. 2017, 24, 22–29. [CrossRef]
5. Beason, R. What Can Birds Hear? USDA Natl. Wildl. Res. Cent. Staff Publ. 2004, 21. Available online: https://escholarship.org/
uc/item/1kp2r437 (accessed on 18 June 2021).
6. Blanche, P.-A.; Mahamat, A.H.; Buoye, E. Thermal Properties of Bayfol® HX200 Photopolymer. Materials 2020, 13, 5498. [CrossRef]
[PubMed]
7. Szentpeteri, C. Bird Control: Technology to Tackle Your Bird Troubles: Lasers and Drones Beat the Beak. Aust. N. Z. Grapegrow.
Winemak. 2018, 657, 31.
8. Moradi, M.J.; Hariri-Ardebili, M.A. Developing a Library of Shear Walls Database and the Neural Network Based Predictive
Meta-Model. Appl. Sci. 2019, 9, 2562. [CrossRef]
9. Ganguly, B.; Chaudhuri, S.; Biswas, S.; Dey, D.; Munshi, S.; Chatterjee, B.; Dalai, S.; Chakravorti, S. Wavelet Kernel-Based
Convolutional Neural Network for Localization of Partial Discharge Sources Within a Power Apparatus. IEEE Trans. Ind. Inform.
2021, 17, 1831–1841. [CrossRef]
10. Roshani, M.; Phan, G.T.T.; Jammal Muhammad Ali, P.; Hossein Roshani, G.; Hanus, R.; Duong, T.; Corniani, E.; Nazemi, E.;
Mostafa Kalmoun, E. Evaluation of Flow Pattern Recognition and Void Fraction Measurement in Two Phase Flow Independent of
Oil Pipeline’s Scale Layer Thickness. Alex. Eng. J. 2021, 60, 1955–1966. [CrossRef]
11. Fuqua, D.; Razzaghi, T. A Cost-Sensitive Convolution Neural Network Learning for Control Chart Pattern Recognition. Expert
Syst. Appl. 2020, 150, 113275. [CrossRef]
12. Yoshihashi, R.; Kawakami, R.; Iida, M.; Naemura, T. Evaluation of Bird Detection Using Time-Lapse Images around a Wind Farm.
Available online: https://nae-lab.org/~rei/publication/yoshihashi-ewea2015.pdf (accessed on 18 June 2021).
13. Yoshihashi, R.; Kawakami, R.; Iida, M.; Naemura, T. Construction of a Bird Image Dataset for Ecological Investigations. In
Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September
2015; pp. 4248–4252.
14. Yoshihashi, R.; Kawakami, R.; Iida, M.; Naemura, T. Bird Detection and Species Classification with Time-Lapse Images around a
Wind Farm: Dataset Construction and Evaluation. Wind Energy 2017, 20, 1983–1995. [CrossRef]
15. Gradolewski, D.; Dziak, D.; Martynow, M.; Kaniecki, D.; Szurlej-Kielanska, A.; Jaworski, A.; Kulesza, W.J. Comprehensive Bird
Preservation at Wind Farms. Sensors 2021, 21, 267. [CrossRef] [PubMed]
16. Aishwarya, K.; Kathryn, J.C.; Lakshmi, R.B. A Survey on Bird Activity Monitoring and Collision Avoidance Techniques in
Windmill Turbines. In Proceedings of the 2016 IEEE Technological Innovations in ICT for Agriculture and Rural Development
(TIAR), Chennai, India, 15–16 July 2016; pp. 188–193.
17. Takeki, A.; Trinh, T.T.; Yoshihashi, R.; Kawakami, R.; Iida, M.; Naemura, T. Detection of Small Birds in Large Images by Combining
a Deep Detector with Semantic Segmentation. In Proceedings of the 2016 IEEE International Conference on Image Processing
(ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3977–3981.
18. Hong, S.-J.; Han, Y.; Kim, S.-Y.; Lee, A.-Y.; Kim, G. Application of Deep-Learning Methods to Bird Detection Using Unmanned
Aerial Vehicle Imagery. Sensors 2019, 19, 1651. [CrossRef]
19. Janousek, J.; Marcon, P.; Pokorny, J.; Mikulka, J. Detection and Tracking of Moving UAVs. In Proceedings of the 2019 Photonics
Electromagnetics Research Symposium, Rome, Italy, 17–20 June 2019; pp. 2759–2763.
20. Xu, Q.; Shi, X. A Simplified Bird Skeleton Based Flying Bird Detection. In Proceedings of the 11th World Congress on Intelligent
Control and Automation, Shenyang, China, 29 June–4 July 2014; pp. 1075–1078.
21. Wu, T.; Luo, X.; Xu, Q. A New Skeleton Based Flying Bird Detection Method for Low-Altitude Air Traffic Management. Chin. J.
Aeronaut. 2018, 31, 2149–2164. [CrossRef]
22. Mihreteab, K.; Iwahashi, M.; Yamamoto, M. Crow Birds Detection Using HOG and CS-LBP. In Proceedings of the 2012 International
Symposium on Intelligent Signal Processing and Communications Systems, New Taipei City, Taiwan, 4–7 November 2012; pp. 406–409.
23. Ghosh, S.K.; Islam, M.R. Bird Species Detection and Classification Based on HOG Feature Using Convolutional Neural Network.
In Proceedings of the Recent Trends in Image Processing and Pattern Recognition; Santosh, K.C., Hegadi, R.S., Eds.; Springer: Singapore,
2019; pp. 363–373.
24. Akçay, H.G.; Kabasakal, B.; Aksu, D.; Demir, N.; Öz, M.; Erdoğan, A. Automated Bird Counting with Deep Learning for Regional
Bird Distribution Mapping. Animals 2020, 10, 1207. [CrossRef] [PubMed]
25. Lee, S.; Lee, M.; Jeon, H.; Smith, A. Bird Detection in Agriculture Environment Using Image Processing and Neural Network. In
Proceedings of the 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), Paris, France,
23–26 April 2019; pp. 1658–1663.
26. Tian, S.; Cao, X.; Zhang, B.; Ding, Y. Learning the State Space Based on Flying Pattern for Bird Detection. In Proceedings of
the 2017 Integrated Communications, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 18–20 April 2017;
pp. 5B3-1–5B3-9.
27. Jo, J.; Park, J.; Han, J.; Lee, M.; Smith, A.H. Dynamic Bird Detection Using Image Processing and Neural Network. In Proceedings
of the 2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA), Daejeon, Korea, 1–3 November
2019; pp. 210–214.
28. T’Jampens, R.; Hernandez, F.; Vandecasteele, F.; Verstockt, S. Automatic Detection, Tracking and Counting of Birds in Marine
Video Content. In Proceedings of the 2016 Sixth International Conference on Image Processing Theory, Tools and Applications
(IPTA), Oulu, Finland, 12–15 December 2016; pp. 1–6.
29. Boudaoud, L.B.; Maussang, F.; Garello, R.; Chevallier, A. Marine Bird Detection Based on Deep Learning Using High-Resolution
Aerial Images. In Proceedings of the OCEANS 2019, Marseille, France, 17–20 June 2019; pp. 1–7.
30. Roihan, A.; Hasanudin, M.; Sunandar, E. Evaluation Methods of Bird Repellent Devices in Optimizing Crop Production in
Agriculture. J. Phys. Conf. Ser. 2020, 1477, 032012. [CrossRef]
31. Siahaan, Y.; Wardijono, B.A.; Mukhlis, Y. Design of Birds Detector and Repellent Using Frequency Based Arduino Uno with
Android System. In Proceedings of the 2017 2nd International conferences on Information Technology, Information Systems and
Electrical Engineering (ICITISEE), Yogyakarta, Indonesia, 1–3 November 2017; pp. 239–243.
32. Palanisamy, S.; Selvaraj, R.; Ramesh, T.; Ponnusamy, J. Applications of Remote Sensing in Agriculture—A Review. Int. J. Curr.
Microbiol. Appl. Sci. 2019, 8, 2270–2283. [CrossRef]
33. Arowolo, O.; Adekunle, A.A.; Ade-Omowaye, J.A. A Real Time Image Processing Bird Repellent System Using Raspberry Pi.
FUOYE J. Eng. Technol. 2020, 5. [CrossRef]
34. Marcon, P.; Szabo, Z.; Vesely, I.; Zezulka, F.; Sajdl, O.; Roubal, Z.; Dohnal, P. A Real Model of a Micro-Grid to Improve Network
Stability. Appl. Sci. 2017, 7, 757. [CrossRef]
35. Lin, L.; Zhu, M. Efficient Tracking of Moving Target Based on an Improved Fast Differential Evolution Algorithm. IEEE Access
2018, 6, 6820–6828. [CrossRef]
36. Cafforio, C.; Rocca, F. The Differential Method for Image Motion Estimation. In Proceedings of the Image Sequence Processing and
Dynamic Scene Analysis; Huang, T.S., Ed.; Springer: Berlin/Heidelberg, Germany, 1983; pp. 104–124.
37. He, X.; Zhao, K.; Chu, X. AutoML: A Survey of the State-of-the-Art. Knowl. Based Syst. 2021, 212, 106622. [CrossRef]
38. Bisong, E. Google AutoML: Cloud Vision. In Building Machine Learning and Deep Learning Models on Google Cloud Platform: A
Comprehensive Guide for Beginners; Bisong, E., Ed.; Apress: Berkeley, CA, USA, 2019; pp. 581–598. ISBN 978-1-4842-4470-8.
39. Rivera, J.D.D.S. Object Detection with a Model Trained in Google Cloud AutoML. In Practical TensorFlow.js: Deep Learning in Web
App Development; Rivera, J.D.D.S., Ed.; Apress: Berkeley, CA, USA, 2020; pp. 163–184. ISBN 978-1-4842-6273-3.
40. AutoML Vision Beginner’s Guide. Available online: https://cloud.google.com/vision/automl/docs/beginners-guide?hl=cs
(accessed on 13 April 2021).
41. Connor, J.T.; Martin, R.D.; Atlas, L.E. Recurrent Neural Networks and Robust Time Series Prediction. IEEE Trans. Neural Netw.
1994, 5, 240–254. [CrossRef] [PubMed]