Seasonal Crops Disease Prediction and Classification Using Deep Convolutional Encoder Network
https://doi.org/10.1007/s00034-019-01041-0
Abstract
Agriculture plays a significant role in the growth and development of any nation's economy, but the emergence of several crop-related diseases reduces productivity in the agricultural sector. Crop disease diagnosis plays a significant role in coping with this issue: it makes farmers aware of diseases so that they can prevent their spread and implement effective management. Researchers have already applied many techniques for this purpose, but some vision-related techniques are yet to be explored. Commonly used techniques include support vector machines, k-means clustering, radial basis functions, genetic algorithms, image processing techniques such as filtering and segmentation, and deep structured learning techniques such as convolutional neural networks. We have designed a hybrid approach for the detection of crop leaf diseases using a combination of convolutional neural networks and autoencoders. This paper provides a novel technique to detect crop diseases from crop leaf images with the help of convolutional encoder networks. We obtained our results on a 900-image dataset, of which 600 images constitute the training set and 300 the test set. We considered 3 crops and 5 kinds of crop diseases. The proposed network was trained to distinguish crop diseases from leaf images. Different convolution filter sizes, 2 × 2 and 3 × 3, are used in the proposed work. The proposed architecture achieved varying accuracy for different numbers of epochs and different convolution filter sizes. We reached 97.50% accuracy with the 2 × 2 convolution filter size in 100 epochs, and 100% accuracy with the 3 × 3 filter size, which is better than other conventional methods.
B Deepak Gupta
deepakgupta@mait.ac.in
Extended author information available on the last page of the article
Circuits, Systems, and Signal Processing (2020) 39:818–836 819
1 Introduction
V4, ResNet with 50, 101 and 152 layers, and DenseNet with 121 layers. DenseNet attains a test accuracy of 99.75% at the 30th epoch and is thus considered the best among these architectures. Ji et al. [22] proposed a new three-dimensional (3D) convolutional neural network (CNN)-based process that automatically classifies crops from spatiotemporal remote sensing images. An active learning framework was presented for 3D CNN-based methods to raise the classification accuracy to the desired level. CNN-based techniques perform better than conventional methods; 3D CNN did better than 2D CNN, while among conventional methods SVM was most effective [10].
In 2017, Barre et al. [2] presented a CNN-based architecture named LeafNet,
which was tested on freely available datasets like LeafSnap, Foliage and Flavia. They achieved top-1 accuracies of 86.3% on the LeafSnap dataset, 95.8% on the Foliage dataset
and 97.9% on the Flavia dataset. Singh et al. [34] proposed an image segmentation and soft computing-based technique for the automatic detection and classification of plant leaf diseases with the help of a genetic algorithm. Classification was initially done using the minimum distance criterion with K-means clustering, giving an accuracy of 86.54%. This was improved to 93.63% by the proposed algorithm, and to 95.71% by using SVM with the proposed algorithm. The average accuracy for the proposed system was 97.6%. Lu et al. [25] presented an in-field
automatic disease identification system of wheat which was based on a supervised
deep learning framework and multiple instance learning (MIL). The main aim was to
build up an automatic wheat disease analysis system which helps to categorize dis-
ease types and find corresponding disease areas. VGG-FCN-VD16 and VGG-FCN-S,
which are proposed architectures, achieved the mean recognition accuracies of 97.95%
and 95.12%, respectively, over fivefold cross-validation on WDD2017 dataset. VGG-
CNN-VD16 and VGG-CNN-S, which are conventional CNN frameworks, achieved
accuracies of 93.27% and 73.00%. Hence, the proposed models beat conventional
CNN architectures in recognition accuracy under a similar set of parameters.
In 2016, Dyrmann et al. [4] proposed a method for plant species classification
using deep convolutional neural networks. The recognition accuracy of the network ranged from 33% to 98%, with an average accuracy of 86.2%, but it suffered from some misclassification due to the small size of the dataset. Shin et al. [33] explored and evaluated different CNN architectures for two computer-aided detection applications. Sladojevic et al. [35] proposed a model to recognize 13 different types of plant diseases using a deep CNN. The proposed model achieved per-class test accuracies between 91% and 98%, with an average precision of 96.3%.
This paper presents an approach to train a convolutional encoder neural network
to detect crop diseases. The idea is to build a set of self-learned features that makes
the proposed network unaffected by any variations such as shadows, illumination and
skewed images. The base of the designed network is CNN and autoencoders. We used CNN since it has gained significant attention in recent years and has performed strongly in various image recognition challenges [36]. We used the backpropagation learning algorithm for weight updates, which minimizes the error by propagating it back to the previous hidden layers.
We have proposed a hybridized deep learning neural network and named it the convolutional encoder network. It is a combination of CNN and autoencoders, but we have used only the encoding part of the autoencoders to obtain the useful features. In the encoding part, we take an input image and generate a high-dimensional feature vector; the features are then aggregated at multiple levels. First, a brief description of CNN: the convolutional neural network (CNN) is a notable deep learning architecture inspired by the visual perception system of living creatures. In 1959, Hubel and Wiesel found that cells in the animal visual cortex are responsible for detecting light in receptive fields [30]. Propelled by this discovery, Kunihiko Fukushima proposed the neocognitron in 1980, which can be viewed as the ancestor of CNN. A basic ConvNet is a sequence of layers, and each layer transforms one volume of activations to another through a differentiable function [16]. A ConvNet consists of three main layers: the convolutional layer, the pooling layer and the fully connected layer, along with some additional layers such as a normalization layer [17]. An autoencoder is a special type of artificial neural network (ANN) used for learning efficient encodings [23]. It comprises two phases, encoding and decoding, i.e. it reconstructs its own inputs; we have used only the encoding phase in our proposed architecture. There are different variants of autoencoder, such as the sparse autoencoder, denoising autoencoder, convolutional autoencoder, zero-biased autoencoder and contractive autoencoder [30].
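As a toy illustration of the two phases, the sketch below encodes a 64-dimensional input to an 8-dimensional code and decodes it back. The weights here are random placeholders purely to show the data flow (in practice they are learned by minimizing reconstruction error), and all dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random placeholder weights: 64-dim input -> 8-dim code -> 64-dim output.
W_enc = rng.normal(size=(64, 8))
W_dec = rng.normal(size=(8, 64))

def encode(x):
    """Encoding phase: project the input to a lower-dimensional code."""
    return np.maximum(x @ W_enc, 0.0)   # ReLU encoding

def decode(code):
    """Decoding phase: reconstruct the input from the code."""
    return code @ W_dec

x = rng.normal(size=(1, 64))
code = encode(x)
x_hat = decode(code)
print(code.shape, x_hat.shape)   # (1, 8) (1, 64)
```

The proposed architecture keeps only the encoding step, discarding the reconstruction path.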
The proposed architecture of convolutional encoder network as shown in Fig. 1
consists of convolution encoder layers, max-pooling layers and fully connected layers.
The activation function is applied internally in the convolution layer.
The building blocks of the network are explained below:
The convolutional layer is the core building block of a convolutional network and does most of the computationally heavy work. This layer's parameters consist of a set of learnable filters. In the forward pass, we slide (or convolve) each filter over the width and height of the input volume of the image and calculate the dot product of the kernel and the image pixels, while in the backward pass we compute the gradients of the loss with respect to the weights, input and bias. The convolution of a filter over an image can be represented mathematically as:
$$f(x, y) * g(x, y) = \sum_{n_1 = -\infty}^{\infty} \sum_{n_2 = -\infty}^{\infty} f(n_1, n_2) \cdot g(x - n_1, y - n_2) \qquad (1)$$
In the above expression, f(x, y) represents the image function, while g(x, y) represents a filter mask of size n1 × n2. The channel, or filter, is mainly composed of a weight matrix used for extracting useful information from the original image matrix. Thus, we can use different weight combinations to extract particular features: one for extracting edges, another for some colour, another to de-noise the image. We can use multiple convolutional layers in our network. The starting layers extract the most generic features from the image, but as we go deeper the features become more complex and can be employed to solve a particular problem. We also use some terminology such as stride and padding. Stride is a parameter that controls how the filter moves across the image matrix, while padding means adding zeros around the image matrix in order to preserve the important features.
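Eq. (1) can be sketched directly in NumPy over the 'valid' region of the image; the 2 × 2 filter below is a made-up example, and the explicit loops are for clarity rather than speed:

```python
import numpy as np

def conv2d(image, kernel):
    """Direct 2-D convolution per Eq. (1): the kernel is flipped
    (g(x - n1, y - n2)) and slid over the image; each output pixel is
    the sum of the element-wise product with the underlying patch."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]                 # kernel flip = true convolution
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))     # 'valid' output size
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

edge_filter = np.array([[1.0, 0.0], [0.0, -1.0]])  # toy 2 x 2 filter
img = np.arange(16, dtype=float).reshape(4, 4)
out = conv2d(img, edge_filter)
print(out.shape)   # (3, 3)
```

Without padding, a 2 × 2 filter over a 4 × 4 image yields a 3 × 3 output, which is why padding is used when the spatial size must be preserved.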
Batch normalization is used to increase the performance and stability of the network.
It ensures that the inputs to a layer fall in a similar range by normalizing them to a mean of zero and a standard deviation of one. The basic normalization Eq. (2) is as follows:
$$\hat{x} = \frac{x - E[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \qquad (2)$$
where E[x] represents the mean and Var[x] the variance of the current batch x. A small constant, ε, is used to avoid division by zero. Batch normalization reduces the training time and the demand for regularization [31].
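A minimal NumPy sketch of Eq. (2), taking statistics over the batch axis (the learnable scale and shift parameters used in practice are omitted here):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize a batch to zero mean and unit variance, per Eq. (2).
    x has shape (batch, features); E[x] and Var[x] are computed over
    the batch axis."""
    mean = x.mean(axis=0)          # E[x]
    var = x.var(axis=0)            # Var[x]
    return (x - mean) / np.sqrt(var + eps)

batch = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
x_hat = batch_norm(batch)
print(x_hat.mean(axis=0))   # ~[0, 0]
print(x_hat.std(axis=0))    # ~[1, 1]
```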
An activation function is added to introduce nonlinearity into the network. We have used the rectified linear unit (ReLU) as the activation function because it can be computed more easily and faster than the sigmoid function [29]. The ReLU function is defined as follows:
$$f(x) = \begin{cases} x, & \text{if } x > 0 \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$
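Eq. (3) is a one-liner in NumPy, which is part of why ReLU is so cheap to evaluate:

```python
import numpy as np

def relu(x):
    """Piecewise-linear activation of Eq. (3): passes positive values
    through unchanged and zeroes out the rest."""
    return np.maximum(x, 0.0)

y = relu(np.array([-2.0, -0.5, 0.0, 1.5]))
print(y)   # [0.  0.  0.  1.5]
```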
A fully connected layer is required to generate an output equal to the number of classes needed. Here, the neurons are flattened to obtain a vector of all neurons. In such a layer, neurons have full connections to all neurons in the previous layer, as in conventional neural networks.
We applied the softmax function at the output layer. This function estimates the probability distribution of an event over n different events; simply put, it computes the probability of each target class over all possible target classes. The softmax function for an event x_i is given as:
$$F(x_i) = \frac{\exp(x_i)}{\sum_{j=0}^{k} \exp(x_j)}, \quad \text{where } i = 0, 1, 2, \ldots, k \qquad (4)$$
The above equation calculates the exponential of the given input and the summation of the exponentials of all input values; the ratio of the two is the output of the softmax function. The main advantage of the softmax activation function is that the calculated probabilities lie in the range 0 to 1 and sum to 1. It returns the probability of each class in a multi-class classification model, and the target class should have the highest probability.
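Eq. (4) can be sketched as follows; subtracting the maximum before exponentiating is a standard numerical-stability trick that leaves the ratio, and hence the result, unchanged:

```python
import numpy as np

def softmax(x):
    """Eq. (4): exponentiate each score and normalize by the sum, with
    a max-shift so large scores do not overflow exp()."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # toy class scores
p = softmax(scores)
print(p.sum())      # 1.0
print(p.argmax())   # 0 -> the predicted class
```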
The PlantVillage dataset [29], which includes more than 50,000 images of healthy and infected crop leaves and is freely available online, is used for this study. We
have reduced the whole dataset into 900 images. The training set consists of 600
images, and the test set comprises 300 images. We have considered 6 classes out of
which 5 are diseases and 1 includes healthy leaves. Also, we considered 3 crops: potato,
tomato and maize, and 5 kinds of diseases whose description is shown in Table 1.
We have chosen two Rabi seasonal crops, viz. potato and tomato and one Kharif
seasonal crop, i.e. maize. In potato, we covered two major diseases, namely early blight
and late blight, while in the case of tomato, two diseases, leaf mould and yellow leaf
curl are considered. Rust disease of maize crop is also taken into account in the dataset.
One class named healthy comprises all the healthy leaves from all the diseased leaf
classes. Figure 2 shows a randomly selected sample of images from the dataset.
[Fig. 3 Workflow of the proposed convolutional encoder network: image database → image preprocessing → Conv2D_1 and Conv2D_2 (32 filters, 3 × 3) with Batch_Normalization_1/2 → Max_Pooling2D_1 → Conv2D_3 and Conv2D_4 (64 filters, 3 × 3) with Batch_Normalization_3/4 → Max_Pooling2D_2 → Conv2D_5 and Conv2D_6 (128 filters, 3 × 3) with Batch_Normalization_5/6 → Conv2D_7 and Conv2D_8 (256 filters, 3 × 3) with Batch_Normalization_7/8 → Flatten layer → Dense layer (ReLU) → Dense layer (Softmax).]
The image dataset is first preprocessed before undergoing further transformation. In preprocessing, we reduce the image dimensions. Data augmentation is done in order to make the images invariant to transformations. The workflow is explained with the help of Fig. 3 and described below:
• Firstly, the input image is passed through two convolution layers having 32 filters each with a filter size of 2 × 2. The ReLU activation function is applied internally in the convolution layer. In parallel, we apply batch normalization to the convolution output to reduce the training time.
• After that, we use a max-pooling layer of size 2 × 2 to reduce the size of the convo-
lution matrix further.
• Again, we used two convolution layers with 64 filters and size 2 × 2 along with
ReLU activation function and batch normalization.
• This is followed by one more max-pooling layer with size 2 × 2.
Table 2 Model summary (filter size: 2 × 2), with columns: layer (type), number of filters used, size of each filter, output shape and number of parameters. [Table body not recovered.]
Table 4 Training and testing accuracy for different epochs and filter sizes

Number of epochs | Convolution filter size | Batch size | Training accuracy (%) | Testing accuracy (%)
3                | 2 × 2                   | 32         | 95.21                 | 74.38
3                | 3 × 3                   | 32         | 93.75                 | 77.69
25               | 2 × 2                   | 32         | 98.54                 | 80.17
25               | 3 × 3                   | 32         | 100.00                | 85.12
50               | 2 × 2                   | 32         | 97.71                 | 70.25
50               | 3 × 3                   | 32         | 100.00                | 82.64
100              | 2 × 2                   | 32         | 97.50                 | 80.17
100              | 3 × 3                   | 32         | 100.00                | 86.78
• Then, we applied two more convolution layers with 128 filters and filter size 2 × 2, with ReLU and batch normalization.
• This is followed by two more convolution layers with 256 filters and filter size 2 × 2, with ReLU and batch normalization.
• After all this, a flattening layer is used to get a vector of neurons which uses ReLU
function.
• Then two dense layers are used: one uses ReLU, while the other uses the softmax
function and depicts the output class.
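The spatial sizes implied by the steps above can be traced with two small helper formulas. The 64 × 64 input resolution below is an assumption (the text does not state the input size), with 'valid' convolutions, stride 1 and non-overlapping 2 × 2 pooling:

```python
def conv_out(size, k, stride=1, pad=0):
    """Spatial size after a convolution: floor((size + 2*pad - k)/stride) + 1."""
    return (size + 2 * pad - k) // stride + 1

def pool_out(size, k=2):
    """Spatial size after non-overlapping k x k max pooling."""
    return size // k

s = 64                   # hypothetical input resolution
for _ in range(2):       # Conv2D_1, Conv2D_2 (32 filters, 2 x 2)
    s = conv_out(s, 2)
s = pool_out(s)          # Max_Pooling2D_1 (2 x 2)
for _ in range(2):       # Conv2D_3, Conv2D_4 (64 filters, 2 x 2)
    s = conv_out(s, 2)
s = pool_out(s)          # Max_Pooling2D_2 (2 x 2)
for _ in range(4):       # Conv2D_5..Conv2D_8 (128 then 256 filters, 2 x 2)
    s = conv_out(s, 2)
flat = s * s * 256       # neurons entering the flatten layer
print(s, flat)           # 10 25600
```

Each 2 × 2 'valid' convolution shaves one pixel off each spatial dimension, which is why deeper stacks rely on padding when the resolution must be preserved.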
Tables 2 and 3 present the generated convolutional encoder network model summary for convolutional filter sizes 2 × 2 and 3 × 3, respectively. In the case of the 2 × 2 model, the total number of parameters is 2,623,430, of which 2,621,510 are trainable; for the 3 × 3 filter size, there are 3,274,150 parameters in total, of which 3,272,230 are trainable.
Fig. 5 Model accuracy and loss curve for 100 epochs using 2 × 2 and 3 × 3 filter sizes
Fig. 6 Confusion matrix for 100 epochs using 2 × 2 and 3 × 3 filter sizes
which has 2 GB RAM; the total RAM of the system is 8 GB. Python (version 3.5), the Anaconda Spyder tool (64-bit), and supporting APIs and libraries were used in this experiment to build and train the convolutional encoder network. Figure 4 shows a code snippet from the implementation of the model in the Python language.
In the proposed work, the convolutional encoder network is employed for disease detection of crops such as potato, tomato and maize. The implementation is done in the Python language using the Anaconda Spyder tool. The training set consists of 600 images, of which 120 are used for validation and 480 for training, and 300 images are used for testing. The implemented code was executed for different numbers of epochs, and the accuracy and losses were recorded. We also changed the filter size to observe the difference in accuracy. The Adam optimizer was used to increase accuracy and reduce loss during training. Table 4 shows the training and testing accuracy for different numbers of epochs (3, 25, 50 and 100) and different convolutional filter sizes (2 × 2 and 3 × 3). For 3 epochs, the best training accuracy is 95.21 for the 2 × 2 filter size, and the best testing accuracy is 77.69 for the 3 × 3 filter size. For 25 epochs, the best training and testing accuracies are 100.00 and 85.12, both for the 3 × 3 filter size. For 50 epochs, the best training and testing accuracies are 100.00 and 82.64, again for the 3 × 3 filter size. Lastly, for 100 epochs, the best training and testing accuracies are 100.00 and 86.78, for the 3 × 3 filter size.
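The Adam update used during training can be sketched as follows. This is the standard Adam rule with its default hyper-parameters, not the authors' exact training code:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, then the parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)      # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0])
m = np.zeros(1)
v = np.zeros(1)
w, m, v = adam_step(w, np.array([2.0]), m, v, t=1)
print(w)   # the first step moves w by ~lr against the gradient sign
```

Because the first bias-corrected step is approximately lr × sign(gradient), Adam takes well-scaled steps from the very start of training.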
Figure 5 shows the accuracy and loss curves for the proposed model over 100 epochs for the two filter sizes, 2 × 2 and 3 × 3. It is clear from the plots that in the case of the 2 × 2 filter size the model performed very well, but after 70 epochs its accuracy starts to decrease while the loss begins to increase. The 3 × 3 filter size, by contrast, showed lower accuracy and higher loss initially, but as the number of epochs increases the accuracy rises gradually. Thus, from these experiments, we observe that the accuracy varies with the number of epochs as well as with the filter size. This also affects the prediction of the correct crop disease, which can be understood with the help of the confusion matrices shown in Fig. 6.
Tables 5 and 6 show the precision, recall, F1-score and support for the test set for
2 × 2 and 3 × 3 filter sizes. The precision average comes out to be 83% and 91% for
2 × 2 and 3 × 3 filter sizes, respectively. Figures 7 and 8 show correct and incorrect predictions of crop leaf diseases.
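Precision, recall and F1-score as reported in Tables 5 and 6 can be computed from a confusion matrix like those in Fig. 6. The 3-class matrix below is a toy example, not the paper's data:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall and F1 for each class from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                    # correctly classified per class
    precision = tp / cm.sum(axis=0)     # tp / (tp + false positives)
    recall = tp / cm.sum(axis=1)        # tp / (tp + false negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy 3-class confusion matrix (rows = true, columns = predicted)
cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]
p, r, f1 = per_class_metrics(cm)
print(np.round(p, 2), np.round(r, 2))
```

Averaging the per-class precision values is how a single figure such as the 83% and 91% reported above is obtained.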
In the proposed network model for crop disease detection and prediction, a hybrid convolutional encoder network is used to classify and predict crop leaf diseases using computer vision and deep learning models. This network has produced more efficient results than conventional techniques. To our knowledge, no such research had previously been done for crop disease detection, but much remains to be explored in deep learning. There are some challenges: training time was very long due to the lack of good GPUs, but this can be overcome by using high-speed GPUs. Also, due to hardware limitations we used only 6 classes and 3 seasonal crops, i.e. a small dataset, but this work can be extended to other crops and plants to cover a wide range of common plant diseases, which would be more beneficial in agriculture for enhancing productivity. There are further possibilities to fine-tune the proposed model using hyper-parameters such as dropout and regularization.
Acknowledgements VHCA received support from the Brazilian National Council for Research and Devel-
opment (CNPq, Grant # 304315/2017-6 and #430274/2018-1).
Conflict of interest The authors declare that they have no conflict of interest.
References
1. A.D. Almási, S. Woźniak, V. Cristea, Y. Leblebici, T. Engbersen, Review of advances in neural net-
works: neural design technology stack. Neurocomputing 174, 31–41 (2016)
2. P. Barré, B.C. Stöver, K.F. Müller, V. Steinhage, LeafNet: a computer vision system for automatic plant
species identification. Ecol. Inform. 40(May), 50–56 (2017)
3. Department of Agriculture, Cooperation & Farmers Welfare, Annual Report, 2016–17. http://agricoop.
nic.in/annual-report. Accessed July 2018
4. M. Dyrmann, H. Karstoft, H.S. Midtiby, Plant species classification using deep convolutional neural
network. Biosyst. Eng. 151(2005), 72–80 (2016)
5. K.P. Ferentinos, Deep learning models for plant disease detection and diagnosis. Comput. Electron.
Agric. 145(January), 311–318 (2018)
6. D. Gupta, A. Ahlawat, Taxonomy of GUM and usability prediction using GUM multistage fuzzy expert
system. Int. Arab J. Inf. Technol. 6, 14–25 (2018)
7. D. Gupta, A. Ahlawat, Usability determination using multistage fuzzy system. Procedia Comput. Sci.
5, 5–9 (2016). https://doi.org/10.1016/j.procs.2016.02.042
8. D. Gupta, A. Ahlawat, Usability evaluation of live auction portal. Int. J. Control Theory Appl. SCOPUS
9(40), 24–32 (2016)
9. D. Gupta, A. Ahlawat, Usability prediction of live auction using multistage fuzzy system. Int. J. Artif.
Intell. Appl. Smart Dev. 5(1), 11–20 (2017)
10. D. Gupta, A. Ahlawat, Usability feature selection via MBBAT: a novel approach. J. Comput. Sci. 23,
195–203 (2017)
11. D. Gupta, A. Ahlawat, K. Sagar, Usability prediction and ranking of SDLC models using fuzzy hier-
archical usability model. Open Eng. Central Eur. J. Eng. ESCI SCOPUS 7(1), 161–168 (2017)
12. D. Gupta, A. Ahlawat, K. Sagar, A critical analysis of a hierarchy based usability model, in 2014
International Conference on Contemporary Computing and Informatics (IC3I) (IEEE, 2014)
13. D. Gupta, A. Khanna, Software usability datasets. Int. J. Pure Appl. Math. SCOPUS 117(15),
1001–1014 (2017)
14. D. Gupta, J.J.P.C. Rodrigues, S. Sundaram, A. Khanna, V. Korotaev, V.H.C. Albuquerque, Usability
feature extraction using modified crow search algorithm: a novel approach. Neural Comput. Appl.
(2018). https://doi.org/10.1007/s00521-018-3688-6
15. D. Gupta, K. Sagar, Remote file synchronization single-round algorithm. Int. J. Comput. Appl. 4(1),
32–36 (2010)
16. J. Gu et al., Recent advances in convolutional neural networks. Pattern Recogn. 5(2), 43–49 (2017)
17. Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu, M.S. Lew, Deep learning for visual understanding: a
review. Neurocomputing 187, 27–48 (2016)
18. E. Hamuda, B. Mc Ginley, M. Glavin, E. Jones, Improved image processing-based crop detection using
Kalman filtering and the Hungarian algorithm. Comput. Electron. Agric. 148, 37–44 (2018)
19. T. Huang, R. Yang, W. Huang, Y. Huang, X. Qiao, Detecting sugarcane borer diseases using support
vector machine. Inf. Process. Agric. 5(1), 74–82 (2018)
20. S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal
Covariate Shift (Wiley Publication, 2015)
21. R. Jain, D. Gupta, A. Khanna, Usability feature optimization using MWOA, in International Conference
on Innovative Computing and Communication (ICICC), vol. 2 (2018)
22. S. Ji, C. Zhang, A. Xu, Y. Shi, Y. Duan, 3D convolutional neural networks for crop classification with
multi-temporal remote sensing images. Remote Sens. 10(1), 75–84 (2018)
23. C.Y. Liou, W.C. Cheng, J.W. Liou, D.R. Liou, Autoencoder for words. Neurocomputing 139, 84–96
(2014)
24. W. Liu, Z. Wang, X. Liu, N. Zeng, Y. Liu, F.E. Alsaadi, A survey of deep neural network architectures
and their applications. Neurocomputing 234, 11–26 (2017)
25. J. Lu, J. Hu, G. Zhao, F. Mei, C. Zhang, An in-field automatic wheat disease diagnosis system. Comput.
Electron. Agric. 142, 369–379 (2017)
26. M. Mahmud, M.S. Kaiser, A. Hussain, S. Vassanelli, Applications of deep learning and reinforcement
learning to biological data. IEEE Trans. Neural Netw. Learn. Syst. 29(6), 1–33 (2018). arXiv:1711.03985v2 [cs.LG]
27. A. Patnaik, D. Gupta, Unique identification system. Int. J. Comput. Appl. 7(5), 16–28 (2010)
28. A. Picon, A. Alvarez-Gila, M. Seitz, A. Ortiz-Barredo, J. Echazarra, A. Johannes, Deep convolutional
neural networks for mobile capture device-based crop disease classification in the wild. Comput.
Electron. Agric. 138, 200–209 (2018)
29. PlantVillage Disease Classification Challenge. https://www.crowdai.org/challenges/plantvillage-
disease-classification-challenge/dataset_files. Accessed June 2018
30. S. Qian, H. Liu, C. Liu, S. Wu, H.S. Wong, Adaptive activation functions in convolutional neural
networks. Neurocomputing 272, 204–212 (2018)
31. J. Schmidhuber, Deep Learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)
32. K. Shankar, S.K. Lakshmanaprabu, D. Gupta, A. Maseleno, V.H.C. de Albuquerque, Optimal features-
based multi-kernel SVM approach for thyroid disease classification. J. Supercomput. (2018). https://
doi.org/10.1007/s11227-018-2469-4
33. H.C. Shin et al., Deep convolutional neural networks for computer-aided detection: CNN architectures,
dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 35(5), 1285–1298 (2016)
34. V. Singh, A.K. Misra, Detection of plant leaf diseases using image segmentation and soft computing
techniques. Inf. Process. Agric. 4(1), 41–49 (2017)
35. S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk, D. Stefanovic, Deep neural networks based
recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci. 2016, 3289801
(2016)
36. E.C. Too, L. Yujian, S. Njuki, L. Yingchun, A comparative study of fine-tuning deep learning models
for plant disease identification. Comput. Electron. Agric. 22, 135–152 (2018)
Affiliations
Aditya Khamparia
aditya.khamparia88@gmail.com
Gurinder Saini
gurindersaini25@gmail.com
Ashish Khanna
ashishkhanna@mait.ac.in
Shrasti Tiwari
shrastitiwari@gmail.com
Victor Hugo C. de Albuquerque
victor.albuquerque@unifor.br
1 School of Computer Science and Engineering, Lovely Professional University, Jalandhar,
Punjab, India
2 Maharaja Agrasen Institute of Technology, Delhi, India
3 Division of Examinations, Lovely Professional University, Jalandhar, Punjab, India
4 Graduate Program in Applied Informatics, University of Fortaleza, Fortaleza, CE, Brazil