
Mango Leaf Disease Detection Using Deep Learning Techniques
Abstract:
India ranks first among the world's mango-producing countries, accounting for about
50% of the world's mango production. The mango fruit is popular among the masses because of
its wide adaptability, high nutritional value, abundant varieties, delicious taste and excellent
flavor. The fruit is eaten raw or ripe and is a rich source of vitamins A and C. The crop is
susceptible to diseases like powdery mildew, anthracnose, die back, blight, red rust, sooty mould,
etc. In the absence of effective care and control measures, disorders may also impact the plant;
these include malformation, biennial bearing, fruit drop, black tip, clustering, etc. The grower
must seek advice and professional support for the prevention and control of diseases and crop
disorders. New techniques for detecting mango diseases are required to enable better control and
avoid such losses. With this in view, this paper describes image recognition as a cost-effective
and scalable disease detection technology, and presents recent deep learning models that allow
easy deployment of this technology. A dataset of mango disease images was collected from the
Konkan region of India, and transfer learning was used to train a deep Convolutional Neural
Network (CNN), achieving 91% accuracy.

1. Introduction

Mango, the national fruit of India, is often called the King of Fruits. Amongst the fruit crops in
India, mango occupies a vital place and plays an essential role in the economy of the country.
About 30% to 40% of the crop yield gets infected by various diseases, and with unaided eye
perception mango leaf diseases often go undetected. Farmers cannot reliably recognize the
different diseases affecting mango leaves, which reduces mango fruit production. Different
diseases have different effects on the mango crop: some cause white patches and some black
patches over the surface of leaves or early-grown fruits, some produce a white fungal powder on
the leaves, and others attack the young leaves and shoots. These patches begin in a tiny form but
quickly cover the entire region of the fruit or leaf, resulting in rotten leaves or rotten fruits. Such
diseases are caused by pathogens like bacteria, viruses, fungi and parasites, and even by
unfavourable environmental conditions. Disease in the leaf affects photosynthesis, transpiration,
pollination, fertilization and germination; the symptoms and the affected leaf area determine the
type of disease.

All these different types of illnesses need to be discerned in the initial stages and managed before
they spread further and cause severe loss to the plant. To achieve detection at this early stage,
farmers and agricultural scientists must continuously keep an eye on the plant parts, which is a
slow process. Some technique is therefore needed for the early detection of plant diseases, as
prior recognition of a disease is the first step in tracking the onset and spread of mango diseases.
Conventional ways of identifying diseases are time-consuming and expensive, since they need
expertise, knowledge and continuous monitoring, and they still lack correct recognition of
disease because of the complex structure and pattern of the leaf. With the advent of
computational methods in the field of image recognition and classification, this problem can be
solved with greater accuracy, and diseases can be detected on a large scale. In the case of mango
leaves, various diseases are present, such as powdery mildew, anthracnose and red rust.

In the present work, a deep learning (DL) based model has been proposed for the classification of
various mango leaf diseases (powdery mildew, anthracnose, red rust) at the initial stages.
Accuracy, Recall, Precision and F-Score have been used to evaluate the model.

2. Literature review.
In recent times, various methods have been developed for plant leaf disease detection. These are
broadly categorized into disease detection and classification methods. Most of the techniques use
a deep convolutional network for segmentation, feature extraction, and classification,
considering images of mango, citrus, grape, cucumber, wheat, rice, tomato, and sugarcane; such
methods are likewise suitable for fruits and their leaf diseases, since some diseases can only be
correctly identified if they are precisely segmented and relevant features are extracted for
classification purposes. Machine learning techniques are generally used for disease symptom
enhancement and segmentation support, and to calculate texture, color, and geometric features;
these techniques are also helpful for classifying the segmented disease symptoms. As a result, all
of this can be applied to make an automated computer-based system work efficiently and
robustly.

Iqbal et al. [7], in their survey, discussed different detection and classification techniques and
concluded that all of these were at an infancy stage: not much work has been performed on
plants, especially the mango plant. In their survey paper they discussed almost all the methods,
considering their advantages, disadvantages, challenges, and the concepts of image processing
used to detect disease. Sharif et al. [8] proposed an automated system for the segmentation and
classification of citrus plant disease. In the first phase of their system, they adopted an optimized
weighted technique to segment the diseased part of the leaf symptom. In the second phase,
various descriptors, including color, texture, and geometric features, are combined. Finally, the
optimal features are selected using a hybrid feature selection method consisting of the PCA
approach, entropy, and a covariant vector based on skewness. As a result, the proposed system
achieved above 90% accuracy for all types of diseases. A few other prominent articles have
reported the classification of wheat grain [9], apple grading based on CNNs [10], crop
characterization by a CNN-based target-scattering model [11], plant leaf disease and
soil-moisture prediction using support vector machines (SVM) for feature classification [12],
and UAV-based smart farming for the classification of crops and weeds using geometric
features.

Safdar et al. [13] applied watershed segmentation to segment the diseased part. The suggested
technique was applied to a combination of three different citrus datasets and achieved an
accuracy of 95.5%, which is remarkable for segmenting the diseased part. Febrinanto et al. [14]
used the K-nearest-neighbour approach for segmentation, with a detection rate of 90.83%.
Further, the authors segmented the diseased part of the citrus leaf using a two- and nine-cluster
approach, incorporating an optimal minimum bond parameter of 3%, which led to a final
accuracy of 99.17%. Adeel et al. [15] achieved 90% and 92% accuracy for segmentation and
classification, respectively, on grape images from the Plant Village dataset. The results were
obtained by applying the local-contrast-haze-reduction (LCHR) enhancement technique and a
LAB color transformation to select the best channel for thresholding. After that, color, texture,
and geometric features were extracted and fused by canonical correlation analysis (CCA), and an
M-class SVM performed the classification of the finally reduced features. Zhang et al. [16]
presented GoogLeNet and Cifar10 networks to classify diseases on maize leaves. Their models
achieved the highest accuracy when compared with VGG and AlexNet in classifying nine
different types of maize leaves. Gandhi et al. [17] worked on identifying plant leaf diseases using
a mobile application. They used Generative Adversarial Networks (GANs) to augment the
images; classification was then performed by convolutional neural networks (CNNs) deployed in
the smartphone application. Durmuş et al. [18] classified diseases of tomato plant leaves taken
from the Plant Village database. Two deep-learning architectures, AlexNet and SqueezeNet,
were tested in turn, and for both networks training and validation were done on the Nvidia Jetson
TX1. Bhargava et al. [19] used different fruit images and grouped them into red, green, and blue
for grading purposes. They subtracted the background using the split-and-merge algorithm and
then implemented four different classifiers, viz. K-Nearest Neighbor (KNN), Support Vector
Machine (SVM), Sparse Representative Classifier (SRC), and Artificial Neural Network (ANN),
to classify fruit quality. The maximum accuracies achieved for fruit detection were 80% with
KNN (with K taken as 10), 85.51% with SRC, 91.03% with ANN, and 98.48% with SVM.
Among all, SVM proved to be the most effective in quality evaluation, and the results obtained
were encouraging and comparable with state-of-the-art techniques. Singh et al. [20] proposed a
classification method for mango plant leaves infected by the Anthracnose disease, using a
multilayer convolutional neural network (MCNN) implemented on 1070 self-collected images;
with this setup the classification accuracy reached 97.13%. Kestur et al. [6] proposed a deep
learning segmentation technique called MangoNet in 2019, which achieved an accuracy of
73.6%. Arivazhagan et al. [2] used a convolutional neural network (CNN) and achieved an
accuracy of 96.6% on mango plant leaf images; in that technique, only feature extraction, with
no preprocessing, was done.

Keeping in view this scanty literature, we propose a precise deep-learning approach for the
segmentation of diseased mango plant leaves, called the full-resolution convolutional network
(FrCNnet). We aim to extend the U-Net deep learning model, a well-known segmentation
architecture, so that the diseased part of the leaf is segmented directly after some preprocessing.
This paper thus proposes a novel, precise deep learning approach for segmenting and classifying
mango plant leaves. After segmenting the diseased part, three kinds of features (color, texture,
and geometric) are extracted for fusion and classification purposes. The fused feature vector is
then reduced using a PCA-based feature selection approach to obtain an optimal feature set and
enhanced classification accuracy. The dataset used for this work is a collection of self-collected
images captured using different types of image-capturing gadgets. The subsequent sections
describe this work in detail.

3. Methodology.
The primary step of this work was the preparation of a dataset. The dataset used for this work is
a collection of self-collected images captured using different types of image-capturing gadgets.
The images were collected as RGB images from different mango-growing regions in Pakistan,
including Multan, Lahore, and Faisalabad. Since the collected images had dissimilar sizes, they
were resized to 256 × 256 after being annotated by expert pathologists. Figure 1 presents the
workflow of the proposed technique, which comprises the following steps: (1) preprocessing of
the images, consisting of data augmentation followed by image resizing; (2) segmentation of the
resized images with the proposed model (codebook); (3) extraction of color and texture (LBP)
features. Finally, the images were classified using 10 different types of classifiers.

Fig. 1 Workflow of mango leaf disease detection.
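To make step (3) of this workflow concrete, the sketch below shows one way the color and LBP texture features could be extracted with OpenCV and scikit-image. The function name and the LBP settings (P = 8, R = 1) are illustrative assumptions, not the exact configuration used in this work.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def extract_features(image_path, P=8, R=1):
    """Illustrative color + LBP texture feature extractor (assumed settings)."""
    img = cv2.imread(image_path)                      # BGR leaf image
    img = cv2.resize(img, (256, 256))                 # standard size used in this work
    # Color features: per-channel mean and standard deviation
    color_feats = np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])
    # Texture features: histogram of uniform LBP codes on the grayscale image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return np.concatenate([color_feats, hist])
```

The resulting feature vector, one row per leaf image, is what a downstream classifier in the final step would consume.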

3.1 Dataset of Mango leaves.

Steps of Dataset Collection and Preparation

The main phases of our dataset preparation procedure are as follows:

1. Conducting a background study on prevalent diseases that affect mango trees.
2. Selecting the mango orchards for data collection in consultation with agricultural experts.
3. Physically capturing images of healthy and diseased mango leaves from the trees. We
   consider seven diseases in total.
4. Validating the images of the dataset. This step includes:
   a. Labelling the images manually by human experts.
   b. Resizing the images to a standard shape.
   c. Cleaning the images of background noise.

After collecting the images, we move on to image processing.

Fig 1 Dataset of Mango leaves.

3.2 Data Preprocessing.


It is essential to use high-quality, well-balanced image data when applying the CNN technique
to image classification. It is therefore essential to apply appropriate image-enhancement and
balancing mechanisms to improve the quality of the images fed to the model. This module
examines the different image-processing techniques applied before training the networks
considered here (LeNet-5, VGG-16, MobileNet-V2, and ResNeXt-101). Tile-based histogram
equalization, for instance, divides an image into tiles or contextual portions, computes a
histogram for each tile, and thereafter redistributes the output according to a specified
histogram-distribution parameter.

Fig. 2 Data preprocessing methodology.
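As one concrete instance of the tile-based histogram approach described above, the sketch below applies OpenCV's CLAHE (contrast-limited adaptive histogram equalization) to the lightness channel of a leaf image; the file name, tile size, and clip limit are assumed values rather than settings reported in this study.

```python
import cv2

# Read a leaf image and convert to LAB so only lightness is equalized
img = cv2.imread("mango_leaf.jpg")            # hypothetical file name
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

# CLAHE: split the L channel into 8x8 tiles, equalize each tile's histogram,
# and clip the histogram to limit noise amplification (assumed parameters)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("mango_leaf_enhanced.jpg", enhanced)
```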

3.3 Image Enhancement.


FCE-Net (fast image contrast enhancement)
In the enhancement of biomedical-style images, the resolution of the images should not be
degraded, and at the same time the global correlation information between different pixels in the
image should be preserved. The network therefore needs not only enough spatial information but
also a large receptive field in each convolutional layer. U-Net, for example, commonly includes
two convolution blocks in each sampling layer; these blocks contain many kernels of size 3 × 3
to obtain a large receptive field while ensuring enough spatial information, at the cost of an
increased number of parameters. FCE-Net differs from conventional networks that use a
single-path encoder-decoder structure. As shown in Fig. 4, FCE-Net has upper and lower paths:
the attention path has a large receptive field and is used to obtain the correlation information
between the pixels of the image, while the spatial path is in charge of preserving spatial
information in the different layers. The two paths are then combined and the image is decoded
back to its original size.
Fig. 3 FCE-Net enhanced image of mango leaves.

Fig. 4 FCE-Net architecture for mango leaves.
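To illustrate the two-path structure just described, here is a minimal PyTorch sketch in which an attention path (dilated convolutions for a large receptive field) and a spatial path (stride-1 convolutions that preserve resolution) are computed in parallel and fused. This is only an interpretation of the idea with assumed layer sizes, not the published FCE-Net implementation.

```python
import torch
import torch.nn as nn

class TwoPathBlock(nn.Module):
    """Minimal sketch of the FCE-Net-style two-path idea (assumed sizes)."""
    def __init__(self, in_ch=3, mid_ch=32):
        super().__init__()
        # Attention path: dilated convolutions enlarge the receptive field
        # so each output pixel sees a wide context of the leaf image.
        self.attention_path = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=4, dilation=4), nn.ReLU(),
        )
        # Spatial path: stride-1 3x3 convolutions keep full resolution,
        # preserving fine spatial detail in every layer.
        self.spatial_path = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(),
        )
        # Fuse the two paths and decode back to the input channel count.
        self.fuse = nn.Conv2d(2 * mid_ch, in_ch, 1)

    def forward(self, x):
        merged = torch.cat([self.attention_path(x), self.spatial_path(x)], dim=1)
        return self.fuse(merged)

out = TwoPathBlock()(torch.randn(1, 3, 256, 256))  # -> shape (1, 3, 256, 256)
```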

3.4 Data Classification.


Database Name                Disease Class       Number of Images
Leaf Disease Images (LDI)    Anthracnose         125
                             Bacterial Canker    110
                             Cutting Weevil       60
                             Die Back            150
                             Gall Midge          153
                             Healthy             207
                             Powdery Mildew      186
                             Sooty Mould         215
                             Total               1206

Table 1 Number of dataset samples per class before augmentation.

Database Name                Disease Class       Images Before    Images After Augmentation
Leaf Disease Images (LDI)    Anthracnose         125              1600
                             Bacterial Canker    110              1786
                             Cutting Weevil       60              2054
                             Die Back            150              2145
                             Gall Midge          153              2046
                             Healthy             207              1574
                             Powdery Mildew      186              1487
                             Sooty Mould         215              1695
                             Total               1206             14387

Table 2 Number of dataset samples per class after augmentation.
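The jump from 1,206 to 14,387 images in Table 2 is the result of offline augmentation. The sketch below shows a plausible torchvision pipeline for producing such variants; the particular transforms, parameters, and file names are assumptions, since the exact operations are not listed here.

```python
from PIL import Image
from torchvision import transforms

# Assumed augmentation pipeline; the actual operations used may differ.
augment = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=30),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

leaf = Image.open("anthracnose_001.jpg").convert("RGB")  # hypothetical file
# Save several randomized variants of each source image to enlarge the class.
for i in range(12):
    augment(leaf).save(f"anthracnose_001_aug{i}.jpg")
```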


Steps to train the images:
Step 1: Start with images whose classes are known in advance.
Step 2: Find the feature set for each image and label them.
Step 3: Take the next image as input and identify its features.
Step 4: Extend the binary SVM to a multi-class SVM method (see the sketch below).
Step 5: Train the SVM using a chosen kernel function.
Step 6: Find the class of the input image.
Step 7: Depending on the result, a label is given to the next image, and these features are added
to the database.
Step 8: Steps 3 to 7 are performed for all images that will be used in the database.
Step 9: The output is the class of the input image.
Step 10: To find the accuracy of the system (here, the LeNet-4 variant), random sets of inputs
are chosen from the database, producing two different sets for training and testing. The steps for
training and testing are the same; training is performed first, followed by testing.
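Steps 4 and 5 amount to wrapping binary SVMs into a multi-class scheme and training with a chosen kernel, which scikit-learn handles internally via one-vs-one voting. The sketch below illustrates this on stand-in features and labels; the RBF kernel and all data shapes are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in data: one feature vector per leaf image (e.g. the color + LBP
# features sketched earlier) and one of 8 class labels per image (assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(1206, 16))
y = rng.integers(0, 8, size=1206)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15)

# SVC extends binary SVMs to multi-class via internal one-vs-one voting
# (Step 4); the RBF kernel stands in for "a chosen kernel" (Step 5).
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

# Steps 6-10: predict classes for held-out images and measure accuracy.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```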

PROPOSED METHODOLOGY:
A Convolutional Neural Network (CNN) is a machine learning method from deep learning.
CNNs are trained using large collections of images, from which they can learn feature
representations covering the whole collection. As an easier way to perform the classification
without spending much time and effort, they are pre-trained as feature extractors. The steps
involved are as follows (a minimal sketch is given after the list):

Step 1: Start with the input image.
Step 2: Create feature maps by applying filters (convolution with mean, median, and average
filters).
Step 3: Apply a ReLU function to increase non-linearity.
Step 4: Apply a pooling layer to each of the feature maps.
Step 5: Flatten the pooled images into a single long vector.
Step 6: Feed the vector into a fully connected layer.
Step 7: Process the features; the final fully connected layer provides the votes for the classes we
require.
Step 8: Train for many epochs through forward propagation and back propagation, repeating
until a neural network with the trained features is obtained.
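A minimal PyTorch sketch of Steps 1-8 follows: convolutions build feature maps, ReLU adds non-linearity, pooling and flattening produce one long vector, and fully connected layers provide the class votes. The layer widths are assumptions, and this is not the exact network trained in this study.

```python
import torch
import torch.nn as nn

class LeafCNN(nn.Module):
    def __init__(self, num_classes=8):                 # 8 classes as in Table 1
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),  # Step 2: build feature maps
            nn.ReLU(),                       # Step 3: add non-linearity
            nn.MaxPool2d(2),                 # Step 4: pool each feature map
            nn.Conv2d(16, 32, 3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                    # Step 5: one long vector
            nn.Linear(32 * 64 * 64, 128),    # Step 6: fully connected layer
            nn.ReLU(),
            nn.Linear(128, num_classes),     # Step 7: class "votes" (logits)
        )

    def forward(self, x):                    # Step 1: 256x256 RGB input
        return self.classifier(self.features(x))

logits = LeafCNN()(torch.randn(1, 3, 256, 256))   # Step 8: train with backprop
```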

3.5 Result Evaluation.


MnasNet (Mobile Neural Architecture Search Network) is a neural network architecture designed for
mobile and edge devices with limited computational resources. It was introduced in the paper
"MnasNet: Platform-Aware Neural Architecture Search for Mobile" by Mingxing Tan and
colleagues, presented at the Conference on Computer Vision and Pattern Recognition (CVPR) in
2019.

The key idea behind MnasNet is to use neural architecture search (NAS) to automatically discover
efficient convolutional neural network (CNN) architectures specifically tailored for mobile devices.
NAS searches for the best neural network architectures automatically rather than relying on manual
design; in the case of MnasNet, the focus is on finding architectures that balance performance and
efficiency on mobile platforms.

Some of the notable features and contributions of MnasNet include:

• Platform-Aware Design: MnasNet takes into consideration the hardware constraints and
characteristics of mobile devices during the architecture search process. This results in models
that are better suited for deployment on mobile platforms.
• Efficient Building Blocks: The network is constructed using efficient building blocks, and the
architecture search explores combinations of these blocks to find optimal structures. This helps
in achieving a good trade-off between accuracy and computational efficiency.
• Training Strategy: MnasNet uses a multi-objective reward that combines model accuracy
with measured on-device latency, which discourages the discovery of overly complex
architectures that would run slowly on the target platform.
MnasNet has demonstrated competitive performance compared to manually designed architectures
while being more computationally efficient. The focus on mobile and edge devices makes it particularly
valuable for applications where computational resources are limited, such as on smartphones and IoT
devices.
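For reference, torchvision (v0.13+ weights API) ships pretrained MnasNet variants, so a transfer-learning setup like the sketch below is possible; adapting the classifier head to the eight leaf classes used here is our assumption about how it would be applied, not a setup from the cited paper.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained MnasNet (depth multiplier 1.0).
model = models.mnasnet1_0(weights=models.MNASNet1_0_Weights.DEFAULT)

# Replace the final classifier layer for the 8 mango-leaf classes
# (an assumed adaptation for this dataset, not part of the original paper).
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 8)
```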

Classify image using ResNext-101:


ResNeXt-101 is a convolutional neural network (CNN) architecture that belongs to the
ResNeXt family. The ResNeXt architecture was introduced in the paper "Aggregated
Residual Transformations for Deep Neural Networks" by Saining Xie, Ross Girshick, Piotr
Dollár, Zhuowen Tu, and Kaiming He, presented at the Conference on Computer Vision and
Pattern Recognition (CVPR) in 2017.
ResNeXt is designed to improve the scalability and efficiency of deep neural networks. The
key innovation in ResNeXt is the use of a "cardinality" parameter, which controls the
number of parallel paths in each ResNeXt block. Traditional ResNet blocks have a single
path (cardinality = 1), but ResNeXt introduces the idea of using multiple paths
(cardinality > 1) to capture diverse features.
ResNeXt-101, specifically, denotes a ResNeXt model with 101 layers. It is a deep and
powerful model that has shown impressive performance on various computer vision tasks,
including image classification and object detection.
The architecture of ResNeXt-101 consists of multiple building blocks, each containing
groups of convolutional layers with the same cardinality. The increased model depth allows
it to learn hierarchical features and representations, which is beneficial for complex tasks.
While the specific details of the ResNeXt-101 architecture are extensive, a general overview
is as follows:

• Input Layer: The network typically takes an input image, and the first layer
performs initial convolution and pooling operations.
• ResNeXt Blocks: The network is composed of several blocks, each containing
multiple groups of convolutional layers with the same cardinality.
• Global Average Pooling (GAP): Instead of fully connected layers, ResNeXt models
often use global average pooling to reduce the spatial dimensions of the feature
maps and produce a compact representation.
• Fully Connected Layer and Output: The final layers include fully connected layers
and the output layer, which produces the final predictions for the task at hand.
It is important to note that the specific details of ResNeXt-101, such as the number of blocks,
cardinality, and other hyperparameters, may vary depending on the implementation or
variant used; implementations are available in deep learning frameworks such as PyTorch
and TensorFlow. In this work, we start from the 101-layer ResNeXt and add further layers,
obtaining an extended model we refer to as ResNeXt-131, which achieves an accuracy of
92.40%.
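A hedged transfer-learning sketch with the standard torchvision ResNeXt-101 (32x8d variant) is given below. Note that the extended ResNeXt-131 described above is this work's modification and is not available off the shelf, so the sketch covers only the 101-layer base.

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNeXt-101 with cardinality 32 and bottleneck width 8d
model = models.resnext101_32x8d(weights=models.ResNeXt101_32X8D_Weights.DEFAULT)

# Freeze the pretrained feature extractor (transfer learning)
for p in model.parameters():
    p.requires_grad = False

# Replace the final fully connected layer for the 8 mango-leaf classes
model.fc = nn.Linear(model.fc.in_features, 8)
```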

The accuracy obtained for the multiclass ResNeXt-131 model is 92.40%. Its confusion matrix on
the validation images is shown in Table 3; rows are the true classes (with the number of images
taken) and columns are the predicted classes.

True class (images)     Anthracnose  Bacterial Canker  Cutting Weevil  Die Back  Gall Midge  Healthy  Powdery Mildew  Sooty Mould
Anthracnose (19)                 10                 2               0         0           0        5               0            0
Bacterial Canker (20)             1                 8               0         0           3        4               3            1
Cutting Weevil (7)                1                 1               5         0           0        0               0            0
Die Back (16)                     1                 5               0        10           0        0               0            0
Gall Midge (9)                    1                 1               0         5           1        0               1            0
Healthy (14)                      3                 0               4         0           4        0               3            0
Powdery Mildew (16)               0                 0               1         1           1        5               2            6
Sooty Mould (10)                  0                 2               1         0           0        0               6            1

Table 3 Confusion matrix for the multiclass ResNeXt-131 model.

5. Result Analysis:
For the augmented mango dataset, the overall accuracy in classifying a leaf with the
ResNeXt-101 CNN architecture is 91% with a validation set of 15%; in a second ResNeXt-101
configuration the accuracy is 88% with the same 15% validation set, and for ResNeXt-131 the
accuracy is 92.40% at the 4th epoch.

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
Accuracy = (TP + TN) / (TP + FN + TN + FP)
Specificity = TN / (TN + FP)

TPR = TP / (Actual Positive) = TP / (TP + FN)
FNR = FN / (Actual Positive) = FN / (TP + FN)
TNR = TN / (Actual Negative) = TN / (TN + FP)
FPR = FP / (Actual Negative) = FP / (TN + FP)
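All of these per-class quantities can be read off a confusion matrix in one-vs-rest fashion. The short NumPy sketch below computes them; the small 3-class matrix is illustrative, not the matrix from Table 3.

```python
import numpy as np

# Illustrative 3-class confusion matrix: rows = true class, cols = predicted
cm = np.array([[10, 2, 1],
               [ 3, 8, 0],
               [ 1, 1, 9]])

TP = np.diag(cm)                          # correct predictions per class
FP = cm.sum(axis=0) - TP                  # predicted as class c but wrong
FN = cm.sum(axis=1) - TP                  # true class c but missed
TN = cm.sum() - (TP + FP + FN)            # everything else

precision = TP / (TP + FP)
recall = TP / (TP + FN)                   # = TPR
f1 = 2 * precision * recall / (precision + recall)
accuracy = (TP + TN) / (TP + FN + TN + FP)
specificity = TN / (TN + FP)              # = TNR
print(precision, recall, f1, accuracy, specificity)
```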

Fig. 5 Bar graph for all used methodologies.

Fig. 6 Stacked graph of methodology percentages.

Fig. 7 Box-and-whisker plot for the confusion matrix.


• All models used here performed much better than random guessing, even with varied
backgrounds in the pictures such as human hands, soil, or other distracting objects. The results
also indicate that the models were not over-fitted to the datasets, because the training/validation
split had only a small impact on the overall reported accuracies.
Fig. 8 Confusion matrix for mango leaf disease detection.
• The confusion matrix for the mango dataset allows a more detailed analysis by showing how
model performance varies across the disease classes represented in the images. In the first
confusion matrix plot, for the 15% validation split, the rows show the true classes and the
columns show the predicted classes. The diagonal cells reflect the proportion of instances for
which the trained network correctly predicts the class, i.e. the proportion for which the true and
predicted classes match, while the off-diagonal cells reflect where the network made mistakes.
• The proportions in the on- and off-diagonal cells show that the highest reported prediction
accuracy is 0.95 for Gall Midge when a 20% validation set is used with the ResNeXt-101
architecture, and 0.95 for red rust when a 15% validation set is used with ResNeXt-131.
6. Conclusion:
The findings of this research indicate that image recognition with transfer learning from the
convolutional neural network ResNeXt-101 is a strong technique for high-precision automated
identification of mango diseases. This technique avoids complicated hand-crafted feature
extraction, and a model trained on a desktop can be deployed on mobile devices. In this study,
three CNN architectures were used for four different classes of mango diseases. This study
therefore shows that transfer learning applied to the ResNeXt deep learning architecture, using
convolutional neural networks with an object dataset, provides a promising avenue for in-field
disease detection and is a powerful technique for high-precision automatic mango disease
identification.

References:
1. R. N. Jogekar and N. Tiwari, “A review of deep learning techniques for identification and
diagnosis of plant leaf disease,” in Smart Trends in Computing and Communications: Proc.
of SmartCom, Siam Square, Bangkok, pp. 435–441, 2020.
2. S. Arivazhagan and S. V. Ligi, “Mango leaf diseases identification using convolutional
neural network,” International Journal of Pure and Applied Mathematics, vol. 120, no. 6,
pp. 11067–11079, 2018.
3. N. Rosman, N. Asli, S. Abdullah and M. Rusop, “Some common diseases in mango,” in
AIP Conf. Proc., Selangor, Malaysia, pp. 020019, 2019.
4. K. Srunitha and D. Bharathi, “Mango leaf unhealthy region detection and classification,”
in Computational Vision and Bio Inspired Computing, vol. 28, Cham: Springer, pp. 422–
436, 2018.
5. A. S. A. Mettleq, I. M. Dheir, A. A. Elsharif and S. S. Abu-Naser, “Mango classification
using deep learning,” International Journal of Academic Engineering Research, vol. 3, no.
12, pp. 22–29, 2020.
6. R. Kestur, A. Meduri and O. Narasipura, “Mangonet: A deep semantic segmentation
architecture for a method to detect and count mangoes in an open orchard,” Engineering
Applications of Artificial Intelligence, vol. 77, no.2, pp. 59–69, 2019.
7. Z. Iqbal, M. A. Khan, M. Sharif, J. H. Shah, M. H. ur Rehman et al., “An automated
detection and classification of citrus plant diseases using image processing techniques: A
review,” Computers and Electronics in Agriculture, vol. 153, pp. 12–32, 2018.
8. M. Sharif, M. A. Khan, Z. Iqbal, M. F. Azam, M. I. U. Lali et al., “Detection and
classification of citrus diseases in agriculture based on optimized weighted segmentation
and feature selection,” Computers and Electronics in Agriculture, vol. 150, pp. 220–234,
2018.
9. M. Charytanowicz, P. Kulczycki, P. A. Kowalski, S. Łukasik and R. Czabak-Garbacz, “An
evaluation of utilizing geometric features for wheat grain classification using X-ray
images,” Computers and Electronics in Agriculture, vol. 144, pp. 260–268, 2018.
10. P. Moallem, A. Serajoddin and H. Pourghassem, “Computer vision-based apple grading
for golden delicious apples based on surface features,” Information Processing in
Agriculture, vol. 4, no. 1, pp. 33–40, 2017.
11. X. Huang, J. Wang, J. Shang and J. Liu, “A simple target scattering model with geometric
features on crop characterization using polsar data,” in IEEE Int. Geoscience and Remote
Sensing Symp., Fort Worth, Texas, USA, pp. 1032–1035, 2017.
12. D. Sabareeswaran and R. G. Sundari, “A hybrid of plant leaf disease and soil moisture
prediction in agriculture using data mining techniques,” International Journal of Applied
Engineering Research, vol. 12, no. 18, pp. 7169–7175, 2017.
13. A. Safdar, M. A. Khan, J. H. Shah, M. Sharif, T. Saba et al., “Intelligent microscopic
approach for identification and recognition of citrus deformities,” Microscopy Research
and Technique, vol. 82, no. 9, pp. 1542–1556, 2019.
14. F. Febrinanto, C. Dewi and A. Triwiratno, “The implementation of k-means algorithm as
image segmenting method in identifying the citrus leaves disease,” in IOP Conf. Series:
Earth and Environmental Science, East Java, Indonesia, pp. 012024, 2019.
15. A. Adeel, M. A. Khan, M. Sharif, F. Azam, J. H. Shah et al., “Diagnosis and recognition
of grape leaf diseases: An automated system based on a novel saliency approach and
canonical correlation analysis based multiple features fusion,” Sustainable Computing:
Informatics and Systems, vol. 24, pp. 100349, 2019.
16. X. Zhang, Y. Qiao, F. Meng, C. Fan and M. Zhang, “Identification of maize leaf diseases
using improved deep convolutional neural networks,” IEEE Access, vol. 6, pp. 30370–
30377, 2018.
17. R. Gandhi, S. Nimbalkar, N. Yelamanchili and S. Ponkshe, “Plant disease detection using
CNNs and GANs as an augmentative approach,” in IEEE Int. Conf. on Innovative Research
and Development, Khlong Nueng, Thailand, pp. 1–5, 2018.
18. H. Durmuş, E. O. Güneş and M. Kırcı, “Disease detection on the leaves of the tomato plants
by using deep learning,” in 2017 6th Int. Conf. on Agro-Geoinformatics, Fairfax, VA,
USA, pp. 1–5, 2017.
19. A. Bhargava and A. Bansal, “Automatic detection and grading of multiple fruits by
machine learning,” Food Analytical Methods, vol. 13, pp. 751–761, 2020.
20. U. P. Singh, S. S. Chouhan, S. Jain and S. Jain, “Multilayer convolution neural network
for the classification of mango leaves infected by anthracnose disease,” IEEE Access, vol.
7, pp. 43721–43729, 2019.
21. Z. Gao, D. Wang, S. Wan, H. Zhang and Y. Wang, “Cognitive-inspired class-statistic
matching with triple-constrain for camera free 3D object retrieval,” Future Generation
Computer Systems, vol. 94, pp. 641–653, 2019.
22. D. Park, H. Park, D. K. Han and H. Ko, “Single image dehazing with image entropy and
information fidelity,” in IEEE Int. Conf. on Image Processing, Paris, France, pp. 4037–
4041, 2014.
23. M. Lin, Q. Chen and S. Yan, “Network in network,” arXiv preprint arXiv: 1312.4400,
2013.
24. J. Long, E. Shelhamer and T. Darrell, “Fully convolutional networks for semantic
segmentation,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition,
Boston, Massachusetts, USA, pp. 3431–3440, 2015.
25. L. Bi, J. Kim, E. Ahn, A. Kumar, M. Fulham et al., “Dermoscopic image segmentation via
multistage fully convolutional networks,” IEEE Transactions on Biomedical Engineering,
vol. 64, no. 9, pp. 2065–2074, 2017.
26. A. A. Pravitasari, N. Iriawan, M. Almuhayar, T. Azmi, K. Fithriasari et al., “UNet-VGG16
with transfer learning for MRI-based brain tumor segmentation,” Telkomnika, vol. 18, no.
3, pp. 1310–1318, 2020.
27. Q. Guo, F. Wang, J. Lei, D. Tu and G. Li, “Convolutional feature learning and hybrid
CNN-hMM for scene number recognition,” Neurocomputing, vol. 184, pp. 78–90, 2016.
28. J. Park, S. Park, M. Z. Uddin, M. Al-Antari, M. Al-Masni et al., “A single depth sensor
based human activity recognition via convolutional neural network,” in Int. Conf. on the
Development of Biomedical Engineering in Vietnam, Ho Chi Minh, Vietnam, pp. 541–
545, 2017.
29. A. L. Maas, A. Y. Hannun and A. Y. Ng, “Rectifier nonlinearities improve neural network
acoustic models,” in Proc. ICML, Atlanta, USA, pp. 1–6, 2013.
30. X. Glorot, A. Bordes and Y. Bengio, “Deep sparse rectifier neural networks,” in Proc. of
the Fourteenth Int. Conf. on Artificial Intelligence and Statistics, Fort Lauderdale, FL,
USA, pp. 315–323, 2011.
31. E. Shelhamer, J. Long and T. Darrell, “Fully convolutional networks for semantic
segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39,
pp. 640–651, 2017.
32. S. Min, B. Lee and S. Yoon, “Deep learning in bioinformatics,” Briefings in
Bioinformatics, vol. 18, no. 5, pp. 851–869, 2017.
33. J. Yosinski, J. Clune, Y. Bengio and H. Lipson, “How transferable are features in deep
neural networks?,” in Advances in Neural Information Processing Systems, Montreal,
Quebec, Canada, pp. 3320–3328, 2014.
34. H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu et al., “Deep convolutional neural networks
for computer-aided detection: CNN architectures, dataset characteristics and transfer
learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, 2016.
35. D. Sabareeswaran and R. G. Sundari, “A hybrid of plant leaf disease and soil moisture
prediction in agriculture using data mining techniques,” International Journal of Applied
Engineering Research, vol. 12, no. 18, pp. 7169–7175, 2017.
36. M. Sardogan, A. Tuncer and Y. Ozen, “Mango leaf disease detection and classification
based on CNN with the LVQ algorithm,” in 3rd Int. Conf. on Computer Science and
Engineering, pp. 382–385, 2018.
37. S. Wallelign, M. Polceanu and C. Buche, “Plant disease identification using a
convolutional neural network,” in Proc. 31st Int. Florida Artificial Intelligence Research
Society Conf. (FLAIRS), pp. 146–151, 2018.
38. S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk and D. Stefanovic, “Deep neural
networks based recognition of plant diseases by leaf image classification,” Computational
Intelligence and Neuroscience, 2016.
39. A. Fuentes, S. Yoon, S. C. Kim and D. S. Park, “A robust deep-learning-based detector
for real-time plant diseases and pests recognition,” Sensors (Switzerland), vol. 17, 2017.
40. S. Arivazhagan and S. V. Ligi, “Mango leaf diseases identification using convolutional
neural network,” International Journal of Pure and Applied Mathematics, vol. 120, pp.
11067–11079, 2018.
