Progress in Artificial Intelligence
20th EPIA Conference on Artificial Intelligence, EPIA 2021
Virtual Event, September 7–9, 2021
Proceedings
Lecture Notes in Artificial Intelligence 12981
Series Editors
Randy Goebel
University of Alberta, Edmonton, Canada
Yuzuru Tanaka
Hokkaido University, Sapporo, Japan
Wolfgang Wahlster
DFKI and Saarland University, Saarbrücken, Germany
Founding Editor
Jörg Siekmann
DFKI and Saarland University, Saarbrücken, Germany
More information about this subseries at http://www.springer.com/series/1244
Goreti Marreiros · Francisco S. Melo · Nuno Lau · Henrique Lopes Cardoso · Luís Paulo Reis (Eds.)
Editors
Goreti Marreiros
ISEP/GECAD, Polytechnic Institute of Porto, Porto, Portugal
Francisco S. Melo
IST/INESC-ID, University of Lisbon, Porto Salvo, Portugal
Nuno Lau
DETI/IEETA, University of Aveiro, Aveiro, Portugal
Henrique Lopes Cardoso
FEUP/LIACC, University of Porto, Porto, Portugal
Luís Paulo Reis
FEUP/LIACC, University of Porto, Porto, Portugal
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This volume contains the papers presented at the 20th EPIA Conference on Artificial
Intelligence (EPIA 2021), held during September 7–9, 2021, in Portugal. The EPIA
Conference on Artificial Intelligence is a well-established European conference in the
field of Artificial Intelligence (AI). Due to the COVID-19 pandemic, the 20th edition of the conference took place online (conference website: http://www.appia.pt/epia2021/). As in previous editions, this international conference was hosted with the patronage of the Portuguese Association for Artificial Intelligence (APPIA, http://www.appia.pt). The purpose of this conference is to promote research in all areas of AI, covering both theoretical/foundational issues and applications, and the scientific exchange among researchers, engineers, and practitioners in related disciplines.
As in previous editions, the program was based on a set of thematic tracks proposed by the AI community, dedicated to specific themes of AI. EPIA 2021 encompassed 12 tracks.
Steering Committee
Ana Bazzan Universidade Federal do Rio Grande do Sul, Brazil
Ann Nowe Vrije Universiteit Brussel, Belgium
Catholijn Jonker Delft University of Technology, The Netherlands
Ernesto Costa University of Coimbra, Portugal
Eugénio Oliveira University of Porto, Portugal
Helder Coelho University of Lisbon, Portugal
João Pavão Martins University of Lisbon, Portugal
José Júlio Alferes NOVA University Lisbon, Portugal
Juan Pavón Universidad Complutense Madrid, Spain
Luís Paulo Reis University of Porto, Portugal
Paulo Novais University of Minho, Portugal
Pavel Brazdil University of Porto, Portugal
Virginia Dignum Umeå University, Sweden
Track Chairs
Artificial Intelligence and IoT in Agriculture
José Boaventura Cunha University of Trás-os-Montes and Alto Douro, Portugal
Josenalde Barbosa Universidade Federal do Rio Grande do Sul, Brazil
Paulo Moura Oliveira University of Trás-os-Montes and Alto Douro, Portugal
Raul Morais University of Trás-os-Montes and Alto Douro, Portugal
Intelligent Robotics
João Fabro Universidade Tecnológica Federal do Paraná, Brazil
Reinaldo Bianchi Centro Universitário da FEI, Brazil
Nuno Lau University of Aveiro, Portugal
Luís Paulo Reis University of Porto, Portugal
Program Committee
Intelligent Robotics
André Conceição Federal University of Bahia, Brazil
André Luís Marcato Federal University of Juiz de Fora, Brazil
António J. R. Neves University of Aveiro, Portugal
António Paulo Moreira University of Porto, Portugal
Armando Pinho University of Aveiro, Portugal
Armando Sousa University of Porto, Portugal
Axel Hessler DAI-Labor, TU Berlin, Germany
Brígida Mónica Faria Polytechnic Institute of Porto, Portugal
Carlos Carreto Polytechnic Institute of Guarda, Portugal
Cesar Analide University of Minho, Portugal
Eurico Pedrosa University of Aveiro, Portugal
Fei Chen Advanced Robotics Department, Italy
Fernando Osorio University of Sao Paulo, Brazil
Jorge Dias University of Coimbra, Portugal
Josemar Rodrigues de Souza Bahia State University, Brazil
Virginia Dignum
Abstract. Every day we see news about advances and the societal impact of AI. AI is changing the way we work, live and solve challenges, but concerns about fairness, transparency or privacy are also growing. Ensuring AI ethics is more than designing systems whose result can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. In order to develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools which provide concrete support to AI practitioners, as well as awareness and training to enable participation of all, to ensure the alignment of AI systems with our societies' principles and values.
1 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Trustworthy Human-Centric AI –
The European Approach
Fredrik Heintz
Abstract. Europe has taken a clear stand that we want AI, but we do not want
just any AI. We want AI that we can trust and that puts people at the center. This
talk presents the European approach to Trustworthy Human-Centric AI
including the main EU initiatives and the European AI ecosystem such as the
different major projects and organizations. The talk will also touch upon some
of the research challenges related to Trustworthy AI from the ICT-48 network
TAILOR which has the goal of developing the scientific foundations for
Trustworthy AI through integrating learning, optimisation, and reasoning.
To maximize the opportunities and minimize the risks, Europe has decided to focus on human-centered Trustworthy AI based on strong collaboration among key stakeholders. According to the High-Level Expert Group on AI, Trustworthy AI has three main aspects: it should be Lawful, ensuring respect for applicable laws and regulations; Ethical, ensuring adherence to ethical principles and values; and Robust, both from a technical and social perspective.
There are many technical research challenges related to these requirements,
including fairness, explainability, transparency, and safety. To achieve these, we will
most likely need to integrate learning and reasoning in a principled manner while
retaining the explanatory power of more structured, often logical, approaches together
with the adaptability, flexibility, and efficiency of data driven machine learning
approaches.
To achieve its grand vision, Europe is establishing a growing ecosystem of AI
initiatives. One key organization is CLAIRE, the pan-European Confederation of
Laboratories for Artificial Intelligence Research in Europe based on the vision of
European excellence across all of AI, for all of Europe, with a human-centred focus.
Others are ELLIS, the European Laboratory for Learning and Intelligent Systems, and
EurAI, the European AI Association. From the European Commission, some of the main initiatives are AI4EU, which is building a European AI On-Demand Platform; the four ICT-48 networks AI4Media, ELISE, HumaneAI NET, and TAILOR, plus the ICT-48 CSA VISION; and the public-private partnership (PPP) on AI, data and robotics between the EU and Adra (the AI, Data, and Robotics Association), a new organization formed by BDVA, CLAIRE, ELLIS, EurAI and euRobotics.
Europe is in a good position to take the lead on Trustworthy AI globally. We now need to consolidate and strengthen the European AI ecosystem so that we can accelerate towards a human-centric trustworthy future, together.
Multimodal Simultaneous Machine
Translation
Lucia Specia
Shimon Whiteson
University of Oxford, UK
shimon.whiteson@cs.ox.ac.uk
General AI
The DeepONets for Finance: An Approach to Calibrate the Heston Model . . . 351
Igor Michel Santos Leite, João Daniel Madureira Yamim, and Leonardo Goliatt da Fonseca
Intelligent Robotics
One Arm to Rule Them All: Online Learning with Multi-armed Bandits for Low-Resource Conversational Agents . . . 625
Vânia Mendonça, Luísa Coheur, and Alberto Sardinha
Helping People on the Fly: Ad Hoc Teamwork for Human-Robot Teams. . . . 635
João G. Ribeiro, Miguel Faria, Alberto Sardinha, and Francisco S. Melo
Autonomous Robot Visual-Only Guidance in Agriculture
J. Sarmento et al.

1 Introduction
Agricultural production differs from others in the sense that it is affected by
uncontrolled inputs, such as climate, which affect the productivity of the system
and constantly change the structures and characteristics of the environment.
Humans are easily capable of identifying and working around these changing
scenarios. Still, some of these jobs can be demanding and depending on the
terrain and weather, it may be unsuitable or unpleasant for humans to do this
work. The preceding factors lead to a decline in the agricultural workforce in
well-developed countries as low productivity and hard, specialized work cannot
justify the choice. Currently, there is a shift to more stable, well-paying jobs in
industry and services. Driven by rising labour costs, interest in automation has increased to find labour-saving solutions for crops and activities that are more difficult to automate [1]. An example is precision viticulture (PV), which aims to optimize vineyard management, reduce resource consumption and environmental impact, and maximize yield and production quality through automation, with robots playing a major role as monitoring tools, for example, as reviewed in [2]. This work aims to develop a vision-based guidance system for woody crops, more precisely for vineyards. This type of crop brings several challenges to autonomous robots. To automate them, the first step is the ability to navigate between vine rows autonomously, which requires taking into account the rough terrain that prevents the use of odometry for long periods of time, the dense vegetation and harsh landscape that can obstruct the GPS-GNSS signal, the outdoor lighting and the constantly changing appearance of the crops, and, depending on the season, the degree of noise added by loose vine whips and grass. In previous works, we approached the detection of vine trunks using Deep Learning models in an Edge-AI manner [3–5]. In this paper, we extend this work to estimate the Vanishing Point (VP) of the row of vines using a Neural Network (NN) model trained using transfer learning to detect the base of the trunks. The proposed system consists of a single camera and an edge Tensor Processing Unit (TPU) dev board (Coral) capable of deep-learning inference to detect the bases of the vine trunks and perform a linear regression based on the positions of each row of detected trunk bases. Then, by intersecting both row lines, our algorithm estimates the vanishing point in the image, which allows correcting the orientation of the robot inside the vineyard. The remainder of this paper is organized as follows. Section 2 contains a survey of related work. Section 3 details the hardware used and all the steps to obtain the vanishing point estimate. Section 4 presents the methodology used to evaluate the proposed solution and discusses the results. Finally, Sect. 5 concludes this work and discusses future work.
2 Related Work
To the best of our knowledge, vanishing point guidance has never been applied to vineyards. Nevertheless, there are many examples of autonomous guidance in vineyards, and when it comes to vanishing point navigation, most examples are applied to roads. One of the most common autonomous guidance techniques in vineyards is the use of a laser scanner, also called LiDAR. Laser-scanner-based guidance is proposed in [6] and [7]. The first tracks only one of the vine rows and approximates all the laser measurements to a line. Results show that it is able to navigate a real vineyard (including performing U-turns) with a mean error of less than 0.05 m. The second guides a robot in an orchard by tracking a center line obtained after acquiring the two rows from laser distance measurements. The system has a maximum error of 0.5 m in the transverse direction and 1% of the row length in the longitudinal direction.
Reiser et al. [8] propose a beacon localization system that estimates the position based on the received signal strength indication (RSSI). Results show an error of 0.6 m using RSSI trilateration.
3.1 Hardware
Considering the previously described requirements, the hardware chosen is a Google Coral Dev Board Mini, which has an Accelerator Module capable of performing fast machine learning inference at the edge (4 trillion operations per second), avoiding the need for cloud computing and allowing inference to be performed in real time and locally. This module is placed on a single-board computer with a quad-core CPU (MediaTek 8167s), 2 GB RAM, 8 GB flash memory, and a 24-pin dedicated connector for a MIPI-CSI2 camera, and runs Debian Linux. Thus, this device is able to run not only the inference algorithm but also the visual steering. To do so, it receives images from a camera and uses the ROS2 framework to easily exchange data between the distributed algorithms. In addition, everything previously described is possible on a small form factor device with low power consumption.
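The paper does not name the inference library it runs on the Coral; as a hedged sketch of what the detection loop might look like, Google's pycoral bindings expose the Edge TPU to Python (the model filename, input frame, and score threshold below are placeholders, not the authors' values):

import_note = """Sketch of on-device trunk detection with pycoral (assumed library)."""
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Hypothetical compiled model produced by the Edge TPU compiler (see Sect. 3).
interpreter = make_interpreter("trunk_detector_edgetpu.tflite")
interpreter.allocate_tensors()

image = Image.open("frame.jpg")
# Resize the camera frame to the model's expected input size.
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
interpreter.invoke()

# Keep detections above an arbitrary confidence threshold.
for obj in detect.get_objects(interpreter, 0.5, scale):
    box = obj.bbox
    print(f"trunk base at x={(box.xmin + box.xmax) / 2:.0f}, score={obj.score:.2f}")

The horizontal centers of the returned bounding boxes are what the later vanishing point stage consumes.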
(a) View from above of Qta. da Aveleda vines. (b) Desired result from vanishing point extraction on Qta. da Aveleda vines.
Trunk Detection: To obtain the vine lines, the position of the vine trunks is determined. For this purpose, various neural networks are trained and then compared for object detection. The selected networks were MobileNet-V1, MobileNet-V2, MobileDets and ResNet50. Since the Google Coral Edge TPU has certain limitations in terms of compatible operations, not all NN architectures are ideal. Therefore, the selection was conditioned by already available trained networks supplied by Google and TensorFlow, which the TPU supports.
To make the neural networks able to recognise the base of the trunks, a dataset must be created. The dataset created consists of the annotation of 604 vineyard images recorded on board our agricultural robot [18] in two different stages of the year (summer and winter), Fig. 5. The annotation resulted in 8708 base trunk annotations (3278 in summer, 5430 in winter). In the summer images, the vineyard has more vegetation, and thus the base of the vine trunks is sometimes occluded. For this reason, this set of images has fewer annotated trunks in comparison with the winter ones. To increase the size of the dataset, an augmentation procedure was performed (Fig. 2), which consisted in applying different transformations to the already annotated dataset, namely: a 15° and −15° rotation applied to the images to simulate the irregularity of the vineyards, a varying random multiplication to vary the brightness of the images, a random blur, a flip transformation, and a random combination of transformations selecting three out of seven transformations (scale, angle, translation, flip, multiplication, blur and noise) and applying them in random order. The final dataset consisted of 4221 labelled images.
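A minimal sketch of the random three-of-seven combination just described, using OpenCV and NumPy; the parameter ranges are our assumptions, not the authors' values, and the bookkeeping that applies the same transforms to the bounding-box annotations is omitted:

import random
import cv2
import numpy as np

def rotate(img, angle):
    # Rotate about the image centre, as used to simulate vineyard irregularity.
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h))

# The seven transformations named in the text; ranges are illustrative guesses.
TRANSFORMS = [
    lambda im: cv2.resize(im, None, fx=1.1, fy=1.1)[:im.shape[0], :im.shape[1]],  # scale
    lambda im: rotate(im, random.uniform(-15, 15)),                               # angle
    lambda im: np.roll(im, random.randint(-20, 20), axis=1),                      # translation (wrap-around simplification)
    lambda im: cv2.flip(im, 1),                                                   # flip
    lambda im: np.clip(im * random.uniform(0.7, 1.3), 0, 255).astype(np.uint8),   # brightness multiplication
    lambda im: cv2.GaussianBlur(im, (5, 5), 0),                                   # blur
    lambda im: np.clip(im + np.random.normal(0, 8, im.shape), 0, 255).astype(np.uint8),  # noise
]

def augment(img):
    # Pick three of the seven transformations and apply them in random order.
    for t in random.sample(TRANSFORMS, 3):
        img = t(img)
    return img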
The training method used was transfer learning, a technique that takes a previously trained model and re-trains either the final layers or the entire model with a new dataset, building on the layers and weights already learned from a previous dataset. In this case, the selected models were pre-trained on the COCO dataset, an object detection dataset covering 90 object classes. The entire model was re-trained with the new base trunk detection dataset.
During the training, an evaluation has to be performed for validation. For this purpose, the created dataset was divided as follows: 70% for training, 15% for validation, and 15% for benchmarking after the training; this last part is explained in Sect. 4.
To use the trained model on the Coral Dev Board Mini, it must be fully quantized to 8-bit precision and compiled into a file compatible with the Edge TPU. To do this, the dataset, which was in PASCAL VOC 1.1 format, was converted to TFRecord, TensorFlow's binary storage format. Binary data takes up less space on disk and requires less time to copy, and therefore makes the re-training process more efficient. Next, all models were trained using Google Colaboratory and the TensorFlow framework. Where possible, the model was quantized during training, which is often better for model accuracy. Finally, the model must be compiled into a file compatible with the Edge TPU, as described previously. For this, the Edge TPU compiler is used; since the compiler requires TensorFlow Lite, a version of TensorFlow optimized for mobile and edge devices, the model trained with TensorFlow is first converted to TensorFlow Lite and then compiled.
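The authors do not publish their conversion script; a hedged sketch of post-training full-integer quantization with the TensorFlow Lite converter looks roughly like this (load_calibration_images is a hypothetical helper, and the file names are placeholders), followed by the edgetpu_compiler command-line tool:

import tensorflow as tf

def representative_data():
    # Placeholder: yield preprocessed training images so the converter can
    # calibrate the 8-bit quantization ranges.
    for image in load_calibration_images():  # hypothetical helper
        yield [image[tf.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_saved_model("trunk_detector_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization so every op can be mapped to the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("trunk_detector.tflite", "wb") as f:
    f.write(converter.convert())

# The Edge TPU file is then produced on the command line:
#   edgetpu_compiler trunk_detector.tflite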
Vanishing Point Estimation: Given the bounding boxes of the base trunks detected by the inference algorithm, two lines must be determined, for the left and right rows. Since the detections are not assigned to a particular row, clustering has to be done for this purpose. The method used was to average the x-values of all the detection points, which gives an estimated mean vertical line separating the two clusters, Fig. 3a. This method works well when the numbers of detections on both sides are equal or nearly equal, which is the case for the trained models. Then, if a point is to the left of the middle line, it belongs to the left row, and vice versa. After assigning the points to the corresponding rows, a line must be drawn to approximate each row, Fig. 3b. Since the detections are not exactly on the center of the trunk base and the vine trunks are not perfectly aligned, an approximation must be performed. The method used was least-squares approximation, which minimizes the mean squared error of all the points to the line. This method is effective and non-iterative. After obtaining the two lines, their intersection gives the Vanishing Point, Fig. 3c.
Fig. 3. Vanishing point estimation process. (a) Clustering step: the green line is the clustering threshold; yellow and red circles represent the left and right rows, respectively. (b) Linear approximation step: the orange lines represent the best fit calculated for each row. (c) Intersection step: the blue line indicates the horizontal position of the estimated vanishing point. (Color figure online)
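The three steps illustrated in Fig. 3 can be sketched compactly; the function names below are ours, not the authors':

import numpy as np

def estimate_vanishing_point(points):
    """points: (N, 2) array of (x, y) trunk-base detections in image coordinates.
    Returns (x, y) of the estimated vanishing point, or None when a row lacks
    the two points needed for a line fit."""
    points = np.asarray(points, dtype=float)
    # 1) Clustering: detections left of the mean x belong to the left row.
    mid_x = points[:, 0].mean()
    left = points[points[:, 0] < mid_x]
    right = points[points[:, 0] >= mid_x]
    if len(left) < 2 or len(right) < 2:
        return None  # minimum of 2 points per row for a fit

    def fit_line(pts):
        # 2) Least-squares fit of x = a*y + b. Fitting x as a function of y
        # avoids infinite slopes, since the rows are roughly vertical in the image.
        a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
        return a, b

    a_l, b_l = fit_line(left)
    a_r, b_r = fit_line(right)
    if np.isclose(a_l, a_r):
        return None  # parallel lines, no intersection
    # 3) Intersection of x = a_l*y + b_l and x = a_r*y + b_r.
    y = (b_r - b_l) / (a_l - a_r)
    x = a_l * y + b_l
    return x, y

As the paper notes, this mean-x clustering assumes roughly balanced detections on both sides; a burst of false positives on one side shifts the threshold and corrupts the fits.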
4 Results
4.1 Methodology
To evaluate the base trunk detection, as described in Sect. 3.2, a portion of the proposed dataset was reserved to later perform an unbiased benchmark of the detections using the most commonly used metrics (Precision, Recall, Average Precision (AP) and F1 Score), as in [19]. To evaluate the vanishing point estimation, a ground truth was created by manually labeling images extracted from a video recording of the vineyards in the winter and then in the summer. Then, by comparing the horizontal positions of the ground truth and the obtained estimate, the error in pixels is obtained. Since the guidance system will deal with angles, it is useful to evaluate the error in angles. To do that, Eq. 1 is used to obtain the angular error relative to the center of the image for the ground truth and also for the obtained estimate; by subtracting both errors, the angular error between the two is obtained. Finally, the angle correction published by the visual steering algorithm and the error between the center frame and the vanishing point estimate were observed to evaluate the guidance system.
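Eq. 1 is not reproduced here; a plausible form, assuming a pinhole camera model with horizontal focal length f in pixels (an assumption on our part, the paper's exact expression may differ), maps a horizontal pixel position to an angle from the optical axis:

import math

def angle_from_center_deg(x_pixel, image_width, focal_px):
    # Hedged reconstruction of Eq. 1: angle between the optical axis and the
    # ray through a pixel column, for a pinhole camera with focal length
    # focal_px (in pixels).
    return math.degrees(math.atan((x_pixel - image_width / 2) / focal_px))

# The evaluation then subtracts the two angles:
# err = angle_from_center_deg(x_estimate, W, f) - angle_from_center_deg(x_truth, W, f)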
Fig. 5. Base trunk detections; in each image the left and right portions show the detections and the ground truth, respectively. Panels: (a) MobileNet-V1, summer images; (b) MobileNet-V1, winter images; (c) MobileNet-V2, summer images; (d) MobileNet-V2, winter images; (e) MobileDets, summer images; (f) MobileDets, winter images; (g) ResNet50, summer images; (h) ResNet50, winter images.
The average error was determined by summing all absolute error values and dividing by the total, to evaluate the magnitude of the error rather than its relative position. For the winter annotations, the recorded error is so small that the error associated with the labeling of the images becomes relevant, which means that it is impossible to compare the different models, although it also means that all models have good results and the vanishing point estimation algorithm is robust. In [15], vanishing point estimation is performed in the range ±5°. All previously mentioned models, combined with the vine line estimation and intersection, outperform the method proposed by [15], with the largest range recorded in the winter evaluation being ±2°, even though a considerably smaller dataset was used. It is also important to point out that there were a few outliers during the evaluation that are not currently filtered in the vanishing point estimation algorithm. These outliers are false positives that affect the line estimation algorithm, resulting in a false vanishing point estimate. For reference, in the winter evaluation there was only one instance of an outlier, for the MobileNet-V1 model. In contrast, the results for the summer annotations were not so satisfactory: although the
average error is not large, as seen in Table 2, the number of estimations with an error greater than 10 px (≈2°) is half or more of the annotated images for all models. The magnitude of this can be seen when comparing the dispersion of the vanishing point estimation error histograms from the winter (Figs. 6d to f) to the summer (Figs. 6g to i), where the ranges of values are [−20; 20] px and [−100; 100] px, respectively. It was also observed that in some cases it was not possible to estimate a vanishing point. The reason for such results is the noise at the base of the trunks and the loose vines, which sometimes cover the camera. This noise sometimes leads to false positives and false negatives. One of the causes of the outliers is the clustering algorithm: if a significant number of false positives are detected, this can throw the algorithm out of balance and lead to incorrect row assignment and subsequently incorrect line fitting and vanishing point estimation. The other main problem is that the model is sometimes unable to find the minimum number of points needed to produce a fit (2 points), making vanishing point estimation impossible. The algorithm as it stands will not provide as robust guidance in the summer as it does in the winter. A more complex clustering and filtering algorithm is needed to provide more reliable guidance and, more importantly, a larger dataset is also needed.
Table 2. Deep learning-based vanishing point detection: average absolute error and min/max range in degrees, by time of the year. ResNet50 was not implemented because it could not be quantized.
Fig. 6. Vanishing point estimation evaluation. (a) MobileNet-V1, 5 px vanishing point error; (b) MobileNet-V2, 8 px vanishing point error; (c) MobileDets, 6 px vanishing point error. In (a) to (c), the vertical green and red lines are the vanishing point ground truth and estimation, respectively. (Color figure online)
As can be seen in Fig. 7a, there is a displacement between the centerline and the vanishing point, so to correct the system in this case, a positive angle correction was needed. In Fig. 7b the displacement error is plotted against the angular correction. When the red line (displacement) changes sign, the blue line (angular correction) changes with the opposite sign, indicating that a negative displacement is corrected with a positive angular correction, as intended. The controller currently has only an uncalibrated proportional gain, as there has been no method for evaluating controller performance. Future work will present a method for determining the response of the controller and investigate a possible Proportional-Integral-Derivative (PID) controller implementation.
Fig. 7. (a) 66 px or 5.4° displacement between the vanishing point estimation and the center frame, blue and green lines respectively. (b) Relation between system displacement error and angle correction, blue and red lines respectively.
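A minimal sketch of the proportional steering law described above; the paper states only that an uncalibrated proportional gain is used, so the gain value and sign convention here are assumptions:

def steering_correction(vp_x, image_width, kp=0.01):
    # Proportional controller: the pixel displacement of the vanishing point
    # from the image center maps to an angular correction of opposite sign,
    # so a negative displacement yields a positive correction.
    displacement = vp_x - image_width / 2
    return -kp * displacement

Tuning kp (or replacing this law with a PID controller) is exactly the future work the authors identify.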
5 Conclusions
In this work, a visual-only guidance system was developed that uses the vanishing point to correct the steering angle. The vanishing point estimate is obtained by detecting the base of the trunks using Deep Learning. Four Single-Shot Detectors (MobileNet-V1, MobileNet-V2, MobileDets and ResNet50) were trained using transfer learning and built to be deployed on the Google Coral Dev Board Mini with a dataset built in-house. The results were evaluated in terms of Precision, Recall, Average Precision and F1 Score. For the vanishing point estimation, all trained models had similarly good performance in the winter, as the lowest average absolute error recorded was 0.52°. On the other hand, the performance in the summer showed that there is still progress to be made, as the lowest average absolute error recorded was 2.89°. In future work, the dataset will be extended to improve the performance in the summer. Also, a PID controller will be implemented for guidance and tested in a novel 3D simulation created to emulate a vineyard.
Acknowledgements. The research leading to these results has received funding from
the European Union’s Horizon 2020 - The EU Framework Programme for Research and
Innovation 2014–2020, under grant agreement No. 101004085.
References
1. Christiaensen, L., Rutledge, Z., Edward Taylor, J.: Viewpoint: the future of work in agri-food. Food Policy 99, 101963 (2021)
2. Ammoniaci, M., Paolo Kartsiotis, S., Perria, R., Storchi, P.: State of the art of monitoring technologies and data processing for precision viticulture. Agriculture (Switzerland) 11(3), 1–21 (2021)
3. Silva Aguiar, A., Neves Dos Santos, F., Jorge Miranda De Sousa, A., Moura Oliveira, P., Carlos Santos, L.: Visual trunk detection using transfer learning and a deep learning-based coprocessor. IEEE Access 8, 77308–77320 (2020)
4. Silva Pinto de Aguiar, A., Baptista Neves dos Santos, F., Carlos Feliz dos Santos, L., Manuel de Jesus Filipe, V., Jorge Miranda de Sousa, A.: Vineyard trunk detection using deep learning - an experimental device benchmark. Comput. Electron. Agric. 175, 105535 (2020)
5. Silva Aguiar, A., et al.: Bringing semantics to the vineyard: an approach on deep learning-based vine trunk detection. Agriculture (Switzerland) 11(2), 1–20 (2021)
6. Riggio, G., Fantuzzi, C., Secchi, C.: A low-cost navigation strategy for yield estimation in vineyards. In: Proceedings - IEEE International Conference on Robotics and Automation, pp. 2200–2205 (2018)
7. Bergerman, M., et al.: Robot farmers: autonomous orchard vehicles help tree fruit production. IEEE Robot. Autom. Mag. 22(1), 54–63 (2015)
8. Reiser, D., Paraforos, D.S., Khan, M.T., Griepentrog, H.W., Vázquez-Arellano, M.: Autonomous field navigation, data acquisition and node location in wireless sensor networks. Precis. Agric. 18(3), 279–292 (2017)
9. Rovira-Más, F., Millot, C., Sáiz-Rubio, V.: Navigation strategies for a vineyard robot. In: American Society of Agricultural and Biological Engineers Annual International Meeting, vol. 2015, no. 5, pp. 3936–3944 (2015)
10. Aghi, D., Mazzia, V., Chiaberge, M.: Local motion planner for autonomous navigation in vineyards with a RGB-D camera-based algorithm and deep learning synergy. arXiv, pp. 1–11 (2020)
11. Sharifi, M., Chen, X.: A novel vision based row guidance approach for navigation of agricultural mobile robots in orchards. In: ICARA 2015 - Proceedings of the 2015 6th International Conference on Automation, Robotics and Applications, pp. 251–255 (2015)
12. Kun Lyu, H., Ho Park, C., Hee Han, D., Woo Kwak, S., Choi, B.: Orchard free space and center line estimation using Naive Bayesian classifier for unmanned ground self-driving vehicle. Symmetry 10(9), 355 (2018)
13. García-Faura, Á., Fernández-Martínez, F., Kleinlein, R., San-Segundo, R., Díaz-de-María, F.: A multi-threshold approach and a realistic error measure for vanishing point detection in natural landscapes. Eng. Appl. Artif. Intell. 85, 713–726 (2019)
14. Zhou, Z., Farhat, F., Wang, J.Z.: Detecting dominant vanishing points in natural scenes with application to composition-sensitive image retrieval. IEEE Trans. Multimedia 19(12), 2651–2665 (2017)
15. Kai Chang, C., Zhao, J., Itti, L.: DeepVP: deep learning for vanishing point detection on 1 million street view images. In: Proceedings - IEEE International Conference on Robotics and Automation, pp. 4496–4503 (2018)
16. Bo Liu, Y., Zeng, M., Hao Meng, Q.: D-VPnet: a network for real-time dominant vanishing point detection in natural scenes. Neurocomputing 417, 432–440 (2020)
17. Han, S.-H., Kang, K.-M., Choi, C.-H., Lee, D.-H., et al.: Deep learning-based path detection in citrus orchard. In: 2020 ASABE Annual International Virtual Meeting, p. 1. American Society of Agricultural and Biological Engineers (2020)
18. Santos, L., et al.: Path planning aware of robot's center of mass for steep slope vineyards. Robotica 38(4), 684–698 (2020)
19. Padilla, R., Netto, S.L., Da Silva, E.A.B.: A survey on performance metrics for object-detection algorithms. In: International Conference on Systems, Signals, and Image Processing, pp. 237–242 (2020)
Terrace Vineyards Detection from UAV Imagery Using Machine Learning: A Preliminary Approach
Abstract. The Alto Douro Wine Region is located in the northeast of Portugal and is classified by UNESCO as a World Heritage Site. Snaked by the Douro River, the region has been producing wines for over 2000 years, with the world-famous Porto wine standing out. The vineyards in that region are built in a territory marked by steep slopes and an almost complete absence of flat land and water. The vineyards that cover the great slopes rise from the Douro River and form an immense terraced staircase. All these ingredients combined make the right key for exploring precision agriculture techniques. In this study, a preliminary approach to terrace vineyard identification is presented. This is a key enabling task towards the achievement of important goals such as production estimation and multi-temporal crop evaluation. The proposed methodology consists in the use of Convolutional Neural Networks (CNNs) to classify and segment the terrace vineyards, considering a high-resolution dataset acquired with remote sensing sensors mounted on unmanned aerial vehicles (UAVs).
1 Introduction
The majority of nations across the world are worried that the increasing population is bound to introduce more problems to the food supply. The predicted global scenario is critical, pointing to a considerable increase in the world population while water and arable land become scarcer due to climate change and global warming [1]. The production of local, seasonal and environmentally friendly products favors biodiversity and sustainable livelihoods and ensures healthy food. The region of Trás-os-Montes and Alto Douro stands out for having the largest area of vineyards, chestnut, almond, walnut and apple trees in Portugal, and for being the second region with the largest area of olive groves and cherry trees [2]. This region is bathed by the Douro River and has been producing wine for over 2000 years, including the world-famous Porto wine.