Disease Detection of Various Leaf Using Image Processing Techniques
Project Report
Submitted in partial fulfilment for the award of degree of
BACHELOR OF TECHNOLOGY
in
INFORMATION TECHNOLOGY
Submitted By (Batch-5)
Department of Information Technology
CERTIFICATE
This is to certify that the Project report entitled “DISEASE DETECTION OF VARIOUS
LEAF USING IMAGE PROCESSING TECHNIQUE” is a bonafide record of project work
carried out under my supervision by K. Sri Sai Bhargavi bearing Reg.No.18L31A1245, D. Snigdha
Bhavani bearing Reg.No.18L31A1279, B. Pujitha bearing Reg.No.18L31A1266, S. Siddharth
bearing Reg.No.18L31A1290 in partial fulfilment of the degree of Bachelor of Technology in
Information Technology of Vignan’s Institute of Information Technology (A), affiliated to
Jawaharlal Nehru Technological University Kakinada, during the academic year 2018-2022.
Signature Signature
EXTERNAL EXAMINER
DECLARATION
We hereby declare that this project report entitled “Disease Detection of various Leaf using
Image Processing Technique” has been undertaken by us in fulfilment of the degree of Bachelor of
Technology in Information Technology. We declare that this project report has not been submitted
elsewhere, in part or in full, for the award of any degree of any other University.
PLACE: Visakhapatnam
DATE: K. Sri Sai Bhargavi (18L31A1245)
D. Snigdha Bhavani (18L31A1279)
B. Pujitha (18L31A1266)
S. Siddharth (18L31A1290)
ACKNOWLEDGEMENT
An endeavor over a long period can be successful only with the advice and support of
many well-wishers. We take this opportunity to express our gratitude and appreciation to all of them.
We are very grateful to our project coordinator, Mrs. A. Sirisha, Assistant Professor,
for her continuous monitoring of our project work. We truly appreciate her time and effort in helping us.
We would like to thank our Head of the Department, Dr. G. Rajendra Kumar, Professor,
and all other teaching and non-teaching staff of the department for their cooperation,
the required resources, and the knowledge shared during our project work.
We extend our grateful thanks to our honorable Chairman, Dr. L. Rathaiah, for giving us this opportunity.
INFORMATION TECHNOLOGY
VISION:
We envision being a recognized leader in technical education and shall aim at national excellence
by creating competent and socially conscious technical manpower for the current and future
industrial requirements and the development of the nation.

MISSION:
To facilitate faculty and students to carry out research work by providing the necessary latest
facilities and a conducive environment.
To mould the students into effective professionals with the necessary communication skills,
team spirit, leadership qualities, managerial skills, integrity, social and environmental
responsibility, and lifelong learning ability, with professional ethics and human values.
Mission of the Institute (VIIT):

PROGRAM EDUCATIONAL OBJECTIVES (PEOs):
PEO3: To demonstrate communication skills, team spirit, leadership qualities, managerial skills,
integrity, social and environmental responsibility, and lifelong learning ability, together with
professional ethics and human values, in one's profession/career.

PROGRAM SPECIFIC OUTCOMES (PSOs):
PSO1: Analyze and design the solutions for data storage and computing systems.
PSO2: Implement the solutions for network and communication problems of Information Technology.
PROGRAM OUTCOMES
Engineering Knowledge:
PO1: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering
specialization to the solution of complex engineering problems.

Problem analysis:
PO2: Identify, formulate, review research literature, and analyze complex engineering problems,
reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.

Design/development of solutions:
PO3: Design solutions for complex engineering problems and design system components or processes
that meet the specified needs with appropriate consideration for public health and safety, and
cultural, societal, and environmental considerations.

Conduct investigations of complex problems:
PO4: Use research-based knowledge and research methods, including design of experiments, analysis
and interpretation of data, and synthesis of information, to provide valid conclusions.

Modern tool usage:
PO5: Create, select, and apply appropriate techniques, resources, and modern engineering and IT
tools, including prediction and modeling, to complex engineering activities, with an understanding
of the limitations.
The engineer and society:
PO6: Apply reasoning informed by contextual knowledge to assess societal, health, safety, legal,
and cultural issues and the consequent responsibilities relevant to professional engineering
practice.

Environment and sustainability:
PO7: Understand the impact of professional engineering solutions in societal and environmental
contexts, and demonstrate the knowledge of, and need for, sustainable development.

Ethics:
PO8: Apply ethical principles and commit to professional ethics and responsibilities and the norms
of engineering practice.

Individual and team work:
PO9: Function effectively as an individual, and as a member or leader in diverse teams, and in
multidisciplinary settings.

Communication:
PO10: Communicate effectively on complex engineering activities with the engineering community
and with society at large, such as being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.

Life-long learning:
PO12: Recognize the need for, and have the preparation and ability to engage in, independent and
life-long learning in the broadest context of technological change.
ABSTRACT
Agriculture plays a vital role in the Indian economy, but owing to changing climatic
conditions, crops often get affected, as a result of which agricultural yield decreases
drastically. If the condition worsens, crops may become vulnerable to infections caused
by fungal, bacterial, viral, and other disease-causing agents. One method that can be adopted to
prevent plant loss is real-time identification of plant diseases. Our
proposed model provides an automatic method to determine leaf disease in a plant using a
trained dataset of pomegranate leaf images. The test set is used to check whether an image
entered into the system contains disease or not. If not, the leaf is considered healthy;
otherwise, the disease of that leaf is predicted and a prevention measure for the plant disease
is proposed automatically. Further, the disease-causing agent is also identified with image
analysis performed on images certified by biologists and scientists. The model reports the
accuracy of the results generated using different cluster sizes, optimized experimentally, with
image segmentation. Our model provides useful estimation and prediction of the disease-causing
agent along with the necessary precautions.
INDEX
S. No  Content                                   Page No
1      Introduction                              1-3
       1.1 Introduction to Project
       1.2 Purpose of the Project
2      Literature Survey                         4-6
3      System Analysis                           7-12
       3.1 Existing System
       3.2 Proposed System
       3.3 Feasibility Study
           3.3.1 Technical Feasibility
           3.3.2 Operational Feasibility
           3.3.3 Economic Feasibility
4      System Specifications                     13-21
       4.1 Functional Requirements
       4.2 Non-functional Requirements
       4.3 Hardware Requirements
       4.4 Software Requirements (SRS)
5      System Design                             22-45
       5.1 System Architecture
       5.2 UML Diagrams
       5.3 Data Flow Diagram
6      System Implementation                     46-67
       6.1 Project Modules
       6.2 Methodology (Algorithms)
       6.3 Source Code
7      System Testing                            68-71
       7.1 Testing Methods
       7.2 Test Cases
8      Experimental Results                      72-77
9      Conclusion & Future Scope                 79-80
10     Bibliography                              81-83
1. INTRODUCTION
1.1 INTRODUCTION TO PROJECT
The referenced research paper describes the various techniques involved in detecting
disease in a plant leaf through image processing, which involves image acquisition,
pre-processing of the image, and segmentation steps. The paper discussed methods to determine
the health status of each plant by considering the requirements of that plant. This method was
used so that chemicals are applied only to those plants which require the treatment, which
reduces the expense on plant health to a large extent. It aims at using real-time applications
to convert captured RGB images to greyscale images, since this increases the clarity of the
images and allows the disease to be detected more efficiently. The paper further discusses how
diseases in a pomegranate plant can be detected by applying proper segmentation methods to the
extracted images to identify the condition of the pomegranate plant.

Visual symptoms in the leaves are used to detect leaf diseases. There is a need for an
automatic system for leaf disease detection, because such diseases often cannot be detected by
the naked eye. Technology has grown to such an extent that a machine is capable of predicting
the disease by examining a high-definition image of the leaf at an early stage. The objective
of our research is to find the disease in a leaf. This detection can be performed using image
processing techniques, that is, a form of signal processing applied to an input image. We have
used the R programming language for implementing image processing of the diseased leaf and
predicting the disease. The dataset used in our experiment contains images of infected and
healthy pomegranate leaves.

Model Setup: To conduct our experimentation, the dataset of plant leaf diseases is taken from
an online repository composed of 5,358,420 images of single leaf diseases. All the photographs
on this site are taken by professional photographers to provide easy access for educational
applications.
The organization covers invasive species, forestry, and agriculture, and the images chosen for
this research are selected from an entire collection of 5,359,752 images, as shown in the figure.

Model Libraries: This section provides the list of R libraries that were used to implement the
proposed automated disease prediction model.
Fig. 1: Datasets
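As an illustration of the preprocessing step described above (converting a captured RGB image to greyscale), the following is a minimal sketch. Note the report's implementation was in R; this Python/NumPy version is an illustrative equivalent, and the helper name rgb_to_grey is ours, not the report's.

```python
import numpy as np

def rgb_to_grey(img):
    """Convert an H x W x 3 RGB array (values 0-255) to greyscale.

    Uses the common ITU-R BT.601 luminance weights; the report's
    R implementation may use a different convention.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return img[..., :3] @ weights

# A tiny 1 x 2 "image": a pure red pixel and a pure white pixel.
demo = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=float)
grey = rgb_to_grey(demo)  # red -> 255 * 0.299, white -> 255
```

The same weighted sum is what greyscale conversion functions in common image-processing libraries perform internally.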
2. LITERATURE SURVEY
1. Jayashri Patil, Sachin Naik et al.: In this paper the proposed model applies preprocessing,
segmentation, feature extraction, and classification methods to pomegranate fruit diseases.
During feature extraction, color, texture, and morphology features are used to identify and
classify pomegranate fruit diseases. SVM, ANN, KNN, and PNN classifiers are used to detect
fungal and viral diseases. For image segmentation, K-means clustering is used, while Fuzzy
C-means gives the highest accuracy. The future scope is to develop a fully automated system,
in collaboration with agricultural universities and research centers, that can be upgraded
with new diseases.
2. Shivaputra S. Panchal, Rutuja Sonar et al.: In this paper the proposed model performs disease
detection using image acquisition, image preprocessing, image segmentation, statistical
feature extraction, and classification. The K-means clustering algorithm is used for
segmentation, and a support vector machine is used for classification of the disease. Image
processing is a form of signal processing; the methodology of the proposed work contains
these five stages.
3. M. Bhange et al.: In this paper the proposed model provides an online tool that recognizes
fruit diseases from a fruit image uploaded to the system. Feature extraction is performed
using parameters such as color, morphology, and CCV (color coherence vector). Clustering is
performed using the k-means algorithm, and SVM is used to classify the fruit as infected or
non-infected.
4. J. D. Pujari et al.: In this paper the proposed model considers a number of crop types,
namely fruit, vegetable, cereal, and commercial crops, to detect fungal disease on plant
leaves; different strategies are adopted for each type of crop. For fruit crops, k-means
clustering is the segmentation technique used; texture features are focused on and classified
using ANN and nearest-neighbor algorithms.
For vegetable crops, the Chan-Vese method is used for segmentation, local binary patterns for
texture feature extraction, and SVM and k-nearest-neighbor algorithms for classification.
Commercial crops are segmented using the grab-cut algorithm; wavelet-based feature extraction
is adopted, with Mahalanobis distance and PNN as classifiers.
Cereal crops are segmented using k-means clustering and the Canny edge detector; color, shape,
texture, color-texture, and Radon transform features are extracted, and SVM and
nearest-neighbor classifiers are used.
5. Garima Shrestha et al.: In this paper the proposed model deploys a CNN to detect plant
disease. The authors successfully classified 12 plant diseases, and a dataset of 3,000
high-resolution RGB images was used for experimentation. The network has 3 blocks of
convolutional and pooling layers, which makes the network computationally expensive. Also,
the F1 score of the model is 0.12, which is very low because of the higher number of
false-negative predictions.
6. H. Ali et al.: In this paper the proposed work aims to apply the ΔE color-difference
algorithm to separate the disease-affected area, and uses color histogram and textural
features to classify diseases, achieving an overall accuracy of 99.9% [9]. A variety of
classifiers are used, such as fine KNN, cubic SVM, boosted-tree, and bagged-tree classifiers.
The bagged-tree classifier outperforms the other classifiers, achieving 99.5%, 100%, and 100%
accuracy on RGB, HSV, and LBP features respectively.
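For reference, the ΔE color difference mentioned in the item above, in its simplest CIE76 form, is just the Euclidean distance between two colors in CIELAB space. The sketch below is illustrative only: it assumes (L*, a*, b*) inputs (RGB-to-Lab conversion is a separate step, omitted here) and is not the cited paper's code.

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    # Component-wise difference, squared, summed, square-rooted.
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# Identical colours give dE = 0; a pure L* shift of 10 gives dE = 10.
d0 = delta_e_cie76([50.0, 10.0, -5.0], [50.0, 10.0, -5.0])
d1 = delta_e_cie76([50.0, 10.0, -5.0], [60.0, 10.0, -5.0])
```

Later ΔE formulas (CIE94, CIEDE2000) add perceptual weighting, but the CIE76 distance above is the basic metric the survey item refers to.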
7. D. A. Bashish et al.: In this paper the model opts for k-means segmentation, partitioning the
leaf image into four clusters using squared Euclidean distances. The method applied for
feature extraction is the color co-occurrence method, for both color and texture features.
Finally, classification is completed using a neural-network detection algorithm based on
back-propagation. The overall disease detection and classification accuracy of the system was
found to be around 93%.
3. SYSTEM ANALYSIS
3.1 EXISTING METHODOLOGY
1. Most plant diseases are caused by bacteria, fungi, and many harmful viruses, which we cannot
identify with the naked eye. For this we need technological approaches; several researchers
observe and recognize plant diseases using computational approaches such as computer vision,
artificial intelligence, and so on.
2. These diseases destroy living plants, causing a shortage of production and keeping the
farmers' livelihood in question, which is a societal issue as well.
3. In developing countries, relying on specialists for this challenging task is costly and
time-consuming.
This section provides the various steps involved in analyzing the image given by the user.
A. Basic Framework
• Image Acquisition: The images were collected and stored in a database of files. The
training and test sets were separated according to which images needed to be trained on and
which were to be tested. Then a web-based application was prepared to upload an image from
the test set, which was loaded into the R application.
• Image Segmentation: Input images are first converted into greyscale, their dimensions are
obtained, and noise is removed. Then the K-means algorithm is applied. Figure 2 depicts an
example of k-means applied to the dataset, giving the prediction for different numbers of
color clusters.
Fig. 3: Flowchart
• Image Classification: The SVM classifier is used to identify the classes, which are closely
connected to the known and trained classes. The support vector machine creates the optimal
separating hyperplane between the classes using the training data.
• Image Accuracy: Calculate the accuracy for each image of the dataset by varying the number
of clusters, and form a plot showing the contrast between the accuracies of the results
obtained for each cluster value.
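The segmentation step above can be sketched in code. The following is an illustrative Python/NumPy version using Lloyd's k-means on greyscale intensities; the report's R code would instead call kmeans(), whose default sub-algorithm is Hartigan-Wong, so this is a sketch of the idea rather than the report's exact method, and all names below are ours.

```python
import numpy as np

def kmeans_segment(pixels, k, iters=20, seed=0):
    """Cluster 1-D pixel intensities with Lloyd's k-means (illustrative)."""
    rng = np.random.default_rng(seed)
    # Initialise centres from k distinct pixel values.
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centre.
        labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers

# Two well-separated intensity groups: dark (~10) and bright (~200).
pix = np.array([8.0, 10.0, 12.0, 198.0, 200.0, 202.0])
labels, centers = kmeans_segment(pix, k=2)
```

Applied to a real greyscale image, the pixel array would be the flattened image, and each cluster label would mark one segment (e.g. background, healthy tissue, lesion).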
Algorithms: The K-means algorithm was applied for image segmentation. In this algorithm the
objects are broken into points and, depending on the frequency of these points, the number of
clusters is decided. These clusters are then partitioned according to the minimum distance each
point has to the cluster centers. The computation of minimum distances is then repeated:
centroids are re-formed from the assigned points, and the process stops when the assignments
repeat. In R, the function that performs k-means clustering offers several sub-algorithms for
finding the clusters, but the default used in R is Hartigan-Wong; it is the most preferred
algorithm because it is time-efficient. Figure 3 depicts the implementation of image
enhancement (a) and segmentation with clusters 5, 7, and 11 (b, c, d).

SVM is used for classification of the pomegranate leaf. This algorithm is used to analyze the
dataset and perform linear vector binding between the categories of the images, i.e., their
respective diseases. A hyperplane is formed from the training points that bind with each other
to form binary classifiers. These are then matched or mismatched, and the model provides a
predicted disease according to the features that can be extracted.
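As a hedged illustration of the SVM classification step, the following Python sketch trains a minimal linear SVM by subgradient descent on the hinge loss over toy two-dimensional features. The report itself relies on an R SVM implementation on image features; none of the names or data below come from it.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimal linear SVM via subgradient descent on the hinge loss.

    y must contain labels -1 or +1. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # Point misclassified or inside the margin: hinge subgradient.
                w = w - lr * (lam * w - y[i] * X[i])
                b = b + lr * y[i]
            else:
                # Only the regularisation term contributes.
                w = w - lr * lam * w
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)

# Toy 2-D "features": healthy (-1) near the origin, diseased (+1) shifted.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [2.0, 2.0], [2.2, 1.9], [1.8, 2.1]])
y = np.array([-1, -1, -1, 1, 1, 1])
w, b = train_linear_svm(X, y)
pred = predict(X, w, b)
```

The learned (w, b) define the separating hyperplane the text describes; a multi-class disease predictor would combine several such binary classifiers.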
3.3 FEASIBILITY STUDY:
Preliminary investigation examines project feasibility: the likelihood that the system will be
useful to the organization. The main objective of the feasibility study is to test the
technical, operational, and economical feasibility of adding new modules and debugging the old
running system. Any system is feasible if it has unlimited resources and infinite time. There
are three aspects in the feasibility study portion of the preliminary investigation:
Technical Feasibility
Operational Feasibility
Economical Feasibility
current equipment and existing software technology. Necessary bandwidth exists for
providing a fast feedback to the users irrespective of the number of users using the system.
4. SYSTEM SPECIFICATIONS
4.1 FUNCTIONAL REQUIREMENTS:
OUTPUT DESIGN
Outputs from computer systems are required primarily to communicate the results of
processing to users. They are also used to provide a permanent copy of the results for later
consultation. The various types of outputs in general are:
External outputs, whose destination is outside the organization.
Internal outputs, whose destination is within the organization and which form the user’s main
interface with the computer.
Operational outputs, whose use is purely within the computer department.
Interface outputs, which involve the user in communicating directly.
OUTPUT DEFINITION
The outputs should be defined in terms of the following points, for example:
Will decimal points need to be inserted?
Should leading zeros be suppressed?
Output Media:
In the next stage it is to be decided which medium is most appropriate for the output. The
main considerations when deciding on the output media are:
The suitability for the device to the particular application.
The need for a hard copy.
The response time required.
The location of the users
The software and hardware available.
Keeping in view the above description the project is to have outputs mainly coming under the
category of internal outputs. The main outputs desired according to the requirement
specification are:
The outputs need to be generated as hard copies as well as queries to be viewed on the screen.
Keeping these outputs in view, the format for the output is taken from the outputs which are
currently being obtained after manual processing. A standard printer is to be used as the
output medium for hard copies.
INPUT DESIGN
Input design is a part of overall system design. The main objective during the input design is
as given below:
INPUT STAGES:
The main input stages can be listed as below:
Data recording
Data transcription
Data conversion
Data verification
Data control
Data transmission
Data validation
Data correction
INPUT TYPES:
It is necessary to determine the various types of inputs. Inputs can be categorized as follows:
External inputs, which are prime inputs for the system.
Internal inputs, which are user communications with the system.
Operational, which are the computer department’s communications to the system.
Interactive, which are inputs entered during a dialogue.
INPUT MEDIA:
At this stage a choice has to be made about the input media. To decide on the input media,
consideration has to be given to:
Type of input
Flexibility of format
Speed
Accuracy
Verification methods
Rejection rates
Ease of correction
Storage and handling requirements
Security
Easy to use
Portability
Keeping in view the above description of the input types and input media, it can be said that
most of the inputs are of the internal and interactive form. As the input data is to be keyed
in directly by the user, the keyboard can be considered the most suitable input device.
ERROR AVOIDANCE
At this stage care is to be taken to ensure that the input data remains accurate from the
stage at which it is recorded up to the stage at which the data is accepted by the system.
This can be achieved only by means of careful control each time the data is handled.
ERROR DETECTION
Even though every effort is made to avoid the occurrence of errors, a small proportion of
errors is still likely to occur. These types of errors can be discovered by using validations
to check the input data.
DATA VALIDATION
Procedures are designed to detect errors in data at a lower level of detail. Data validations
have been included in the system in almost every area where there is a possibility for the
user to commit errors. The system will not accept invalid data. Whenever invalid data is keyed
in, the system immediately prompts the user, who has to key in the data again; the system will
accept the data only if it is correct. Validations have been included where necessary.
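The validate-and-re-prompt behaviour described above can be sketched generically. This Python snippet is illustrative only: the report's actual interface is forms-based, and read_validated is a made-up helper name.

```python
def read_validated(prompt, validate, input_fn=input):
    """Re-prompt until validate() accepts the entered value."""
    while True:
        value = input_fn(prompt)
        if validate(value):
            return value
        # Invalid data is rejected and the user is prompted again.
        print("Invalid input, please try again.")

# Example: accept only positive integers, with a simulated input sequence.
answers = iter(["abc", "-3", "7"])
result = read_validated("Cluster count: ",
                        lambda s: s.isdigit() and int(s) > 0,
                        input_fn=lambda p: next(answers))
```

Here the first two simulated entries are rejected and the loop returns only on the valid entry, mirroring the behaviour the paragraph describes.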
The system is designed to be user friendly; in other words, the system has been designed to
communicate effectively with the user, and it has been designed with pop-up menus.
In computer-initiated interfaces, the computer guides the progress of the user/computer
dialogue: information is displayed, and on the user’s response the computer takes action or
displays further information.
USER-INITIATED INTERFACES
User-initiated interfaces fall into two approximate classes:
1. Command-driven interfaces: In this type of interface the user inputs commands or queries
which are interpreted by the computer.
2. Forms-oriented interfaces: The user calls up an image of the form on his/her screen and
fills in the form. The forms-oriented interface is chosen here because it best suits this
application.
This application must be able to produce output at different modules for different
inputs.
The requirement specification for any system can be broadly stated as given below:
The system should be able to interface with the existing system
The system should be accurate
The system should be better than the existing system
The existing system is completely dependent on the user to perform all the duties.
HARDWARE REQUIREMENTS:
INTRODUCTION
Purpose: The main purpose of preparing this document is to give a general insight into the
analysis and requirements of the existing system or situation, and to determine the operating
characteristics of the system.
Scope: This document plays a vital role in the software development life cycle (SDLC), as it
describes the complete requirements of the system. It is meant for use by the developers and
will be the basis during the testing phase. Any changes made to the requirements in the future
will have to go through a formal change approval process.
Developing the system, which meets the SRS and solves all the requirements of the system.
Demonstrating the system and installing the system at the client's location after the
acceptance testing is successful.
Submitting the required user manual describing the system interfaces to work on it and
also the documents of the system.
Conducting any user training that might be needed for using the system.
Maintaining the system for a period of one year after installation.
SOFTWARE REQUIREMENTS:
5. SYSTEM DESIGN
INTRODUCTION
Software design sits at the technical kernel of the software engineering process and is
applied regardless of the development paradigm and area of application. Design is the first
step in the development phase for any engineered product or system. The designer’s goal is to
produce a model or representation of an entity that will later be built. Beginning, once system
requirement have been specified and analyzed, system design is the first of the three technical
activities -design, code and test that is required to build and verify software.
The importance of design can be stated with a single word: quality. Design is the place where
quality is fostered in software development. Design provides us with representations of
software that can be assessed for quality, and it is the only way that we can accurately
translate a customer’s view into a finished software product or system. Software design serves
as a foundation for all the software engineering steps that follow. Without a strong design we
risk building an unstable system, one that will be difficult to test and whose quality cannot
be assessed until the last stage.
What is UML
UML stands for Unified Modeling Language, a standardized general-purpose visual modeling
language in the field of software engineering. It is used for specifying, visualizing,
constructing, and documenting the primary artifacts of a software system. It helps in designing
and characterizing, in particular, those software systems that incorporate the concept of
object orientation. It describes the working of both software and hardware systems.
The Object Management Group (OMG) is an association of several companies that controls
the open standard UML. The OMG was established to build an open standard that mainly
supports the interoperability of object-oriented systems. UML is not restricted to software; it
can also be utilized for modeling non-software systems. The OMG is
best recognized for the Common Object Request Broker Architecture (CORBA) standards.
Goals of UML
Characteristics of UML
o It is a generalized modeling language.
Conceptual Modeling
Before moving ahead with the concept of UML, we should first understand the basics of the
conceptual model.
Object:
An entity that describes the behavior and the functions of a system. The notation of an object
is similar to that of a class.
Interface:
A set of operations that describes the functionality of a class, and which must be provided
whenever the interface is implemented.
Collaboration:
It represents the interaction between things that is carried out to meet a goal. It is
symbolized as a dotted ellipse with its name written inside it.
Use case:
Use case is the core concept of object-oriented modeling. It portrays a set of actions executed
by a system to achieve the goal.
Actor:
It comes under the use case diagrams. It is an object that interacts with the system, for
example, a user.
Component:
It represents the physical part of the system.
Node:
A physical element that exists at run time.
2. Behavioral Things
They are the verbs that encompass the dynamic parts of a model and depict the behavior of a
system. They involve the state machine, activity diagram, and interaction diagram.
State Machine:
It defines a sequence of states that an entity goes through during its lifetime. It keeps a
record of the several distinct states of a system component.
Activity Diagram:
It portrays all the activities accomplished by the different entities of a system. It is
represented in the same way as a state machine diagram. It consists of an initial state, a
final state, a decision box, and action notations.
Interaction Diagram:
It is used to envision the flow of messages between several components in a system.
3. Grouping Things
It is a mechanism that binds the elements of the UML model together. In UML, the package is
the only thing used for grouping.
Package:
Package is the only thing that is available for grouping behavioral and structural things.
4. Annotation Things
It is a mechanism that captures the remarks, descriptions, and comments of UML model
elements. In UML, a note is the only Annotational thing.
Note:
It is used to attach the constraints, comments, and rules to the elements of the model. It is a
kind of yellow sticky note.
Relationships
It illustrates the meaningful connections between things. It shows the association between
the entities and defines the functionality of an application. There are four types of
relationships given below:
Dependency:
Dependency is a kind of relationship in which a change in the target element affects the
source element. It depicts the dependency of one entity on another.
It is denoted by a dotted line followed by an arrow at one side, as shown below.
Association:
A set of links that associates the entities of the UML model. It tells how many elements are
actually taking part in forming the relationship.
It is denoted by a solid line, with arrowheads where needed to describe the relationship with
the element on each side.
Generalization:
It portrays the relationship between a general thing (a parent class or superclass) and a
specific kind of that thing (a child class or subclass). It is used to describe the concept of
inheritance.
It is denoted by a straight line followed by an empty arrowhead at one side.
Realization:
It is a semantic kind of relationship between two things, where one defines the behavior to
be carried out, and the other one implements the mentioned behavior. It exists in interfaces.
It is denoted by a dotted line with an empty arrowhead at one side.
Diagrams
The diagrams are the graphical representation of the models, incorporating symbols and text.
Each symbol has a different meaning in the context of a UML diagram. There are thirteen
different types of UML diagrams available in UML 2.0.
UML diagrams are classified into three categories that are given below:
1. Structural Diagram
2. Behavioral Diagram
3. Interaction Diagram
Structural Diagram: It represents the static view of a system by portraying the structure of a
system. It shows several objects residing in the system. Following are the structural diagrams
given below:
Class diagram
Object diagram
Package diagram
Component diagram
Deployment diagram
Behavioral Diagram: It depicts the behavioral features of a system. It deals with dynamic
parts of the system. It encompasses the following diagrams:
Activity diagram
State machine diagram
Use case diagram
Interaction diagram: It is a subset of behavioral diagrams. It depicts the interaction between
two objects and the data flow between them. Following are the several interaction diagrams
in UML:
Timing diagram
Sequence diagram
Collaboration diagram
Purpose of Class Diagrams:
The principal purpose of class diagrams is to build a static view of an application. It is the
only diagram that is widely used for construction, and it can be mapped with object-oriented
languages. It is one of the most popular UML diagrams. The purposes of class diagrams are
given below:
1. It analyzes and designs a static view of an application.
2. It describes the major responsibilities of a system.
3. It is a base for component and deployment diagrams.
4. It incorporates forward and reverse engineering.

Purpose of Use Case Diagrams:
1. Analyzing the requirements of a system.
2. High-level visual software design.
3. Capturing the functionalities of a system.
4. Modeling the basic idea behind the system.
5. Forward and reverse engineering of a system using various test cases.
View image and send the report to the user:
After processing the image, the lab sends the report to the user.
The user receives the image together with the detected disease.
CLASS DIAGRAM
The user starts the application.
After starting the application, the user selects an image from the trained data set.
The user then selects an image ID; everything the user accesses comes from the
trained data set.
The Image() class is used to access the image.
The image is then processed using the GetImage() class.
After selecting the image, the user uploads it to the KNN algorithm.
The algorithm accesses all the information and sends the result to the user.
Access is by image ID only; the IDs of the trained data set are visible to the user
for reference.
The lab or service centre processes the image using the KNN algorithm.
After applying the KNN algorithm, it processes the data sets.
The algorithm accesses the image through the GetDatasets() class method.
In our project we have two data sets: a trained data set and a test data set.
After checking, the algorithm sends the image view to the user.
The lab views the image and sends the report to the user.
After processing the image, the lab sends the report to the user.
The user receives the image together with the detected disease.
The class diagram gives detailed information about each class and its processing.
SEQUENCE DIAGRAM
A behavioral diagram that shows an interaction, emphasizing the time ordering of messages.
Whether a disease is detected or the leaf is healthy, the processed image is uploaded for the user.
The user accesses the image and can view it.
The user reads the image ID and whether the leaf is diseased or healthy.
The algorithm sends a message to the user stating whether the leaf is diseased or healthy.
COMPONENT DIAGRAM
A component diagram is used to break down a large object-oriented system into
smaller components, so as to make them more manageable. It models the physical view of a
system, for example the executables, files, libraries, and so on, that reside within a
node. The implementation details of a component are hidden, and an interface is required to
execute its functionality. It is like a black box whose behavior is explained by the provided
and required interfaces.
Why use a component diagram:
Component diagrams are of considerable importance. They are used to describe the
functionality and behavior of all the components present in the system, unlike
other diagrams that are used to represent the architecture of the system, the working of a
system, or simply the system itself.
In UML, the component diagram describes the behavior and organization of components at
any moment of time. The system cannot be visualized from any individual component, but
it can be from the collection of components.
Following are some reasons for requiring a component diagram:
1. It describes the components of a system at runtime.
2. It is useful when testing a system.
3. It visualizes the connections between several organizations.
Where to use component diagram?
1. To model the components of the system.
2. To model the schemas of a database.
3. To model the executables of an application.
4. To model the system's source code.
The user obtains all the details of the application.
After obtaining the details of the application, the user starts the application.
After the user checks the app details, a message is sent to the user to update the
app details.
After the application starts, the lab updates all the details given by the user.
After applying the algorithm, a new source is created for the user.
Once the algorithm has been applied, the user receives a message such as "source created
successfully".
The user requests the lab to enter the existing sources, i.e. all the data sets
(trained data set and test data set).
After the details of the existing resources are requested, the lab sends a message
to update the address from the algorithm.
The image is accessed from the lab, the reports are taken from the lab, all the
information is accessed from the user, and then the reports are sent to the user.
After the user's request to the lab, the reports are sent to the user along with a
message such as "view reports as per your request".
The user accesses everything through the algorithm only. Using the algorithm, all
the data is arranged in the data sets.
The data sets include both the trained data set and the test data set.
Using this, the lab can send a message to the user.
ACTIVITY DIAGRAM
In UML, the activity diagram is used to show the flow of control within the
system rather than the implementation. It models concurrent and sequential activities.
The activity diagram helps in visualizing the workflow from one activity to
the next. It puts emphasis on the conditions of flow and the order in which it occurs. The
flow can be sequential, branched, or concurrent, and to deal with such kinds of flows
the activity diagram provides constructs such as fork and join. It is also termed an
object-oriented flowchart. It includes activities composed of a set of actions or operations
that are applied to model the behavioral diagram.
When to use an activity diagram:
An activity diagram can be used to portray business processes and workflows. It is used
for modeling the business as well as the software. An activity diagram is utilized for the
following:
1. To graphically model the workflow in an easier and understandable way.
2. To model the execution flow among several activities.
3. To model comprehensive information of a function or an algorithm employed within the
system.
4. To model the business process and its workflow.
5. To envision the dynamic aspect of a system.
6. To generate the top-level flowcharts for representing the workflow of an application.
7. To represent a high-level view of a distributed or an object-oriented system
The user captures the image.
After capturing the image, it is given as the input image to the lab, which then
accesses the algorithms.
After the input image, all the algorithms are applied; if the leaf is healthy a
success message is given, and if the leaf is diseased a failure message is given.
On failure, control returns to the input image, the details of the leaf image are
set, and the image is re-captured.
On success, control moves to both the classifier and the report.
In the classifier, all the filtering, enhancement, segmentation, and feature
extraction are checked, and then the image processing stops.
In the report, all the filtering, enhancement, segmentation, and feature
extraction are checked, and then the image processing stops.
After checking the classifier and the report for whether the leaf is diseased or not,
the complete report of the image is checked.
STATECHART DIAGRAM
The state machine diagram is also called the statechart or state transition diagram; it
shows the sequence of states that an object passes through within the system. It captures the
behavior of the software system. It models the behavior of a class, a subsystem, a package,
and a complete system.
Following are the types of a state machine diagram that are given below:
1. Behavioural state machine:
The behavioral state machine diagram records the behavior of an object within the system.
It depicts an implementation of a particular entity. It models the behavior of the system.
2. Protocol state machine:
It captures the behavior of the protocol. The protocol state machine depicts the change in
the state of the protocol and parallel changes within the system. But it does not portray the
implementation of a particular component.
In this statechart diagram, all the leaves are inherited into the data sets
(trained data set and test data set).
The data sets comprise both the trained data set and the test data set.
The trained data set has Train.csv, which is converted into Train.record.
The test data set has Test.csv, which is converted into Test.record.
Both Train.record and Test.record are then combined when training the
different layer models.
Training the different layer models is followed by evaluating the
different layer models.
COLLABORATION DIAGRAM
5.3 DATAFLOW DIAGRAM
A data flow diagram is a graphical tool used to describe and analyze the movement of data
through a system. DFDs are the central tool and the basis from which the other components
are developed. The transformation of data from input to output, through processes, may be
described logically and independently of the physical components associated with the
system. These are known as logical data flow diagrams. Physical data flow diagrams
show the actual implementation and movement of data between people, departments, and
workstations. A full description of a system actually consists of a set of data flow diagrams.
The idea behind the explosion of a process into more processes is that understanding at
one level of detail is exploded into greater detail at the next level. This is done until no further
explosion is necessary and an adequate amount of detail is described for the analyst to understand
the process.
A DFD, also known as a "bubble chart", has the purpose of clarifying system
requirements and identifying the major transformations that will become programs in system
design. It is therefore the starting point of design, down to the lowest level of detail. A DFD consists
of a series of bubbles joined by data flows in the system.
DFD SYMBOLS:
In the DFD, there are four symbols
1. A square defines a source(originator) or destination of system data.
2. An arrow identifies data flow. It is the pipeline through which the information flows.
3. A circle or a bubble represents a process that transforms incoming data flow into outgoing
data flows.
4. An open rectangle is a data store, data at rest or a temporary repository of data.
CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFDs:
1. Process should be named and numbered for an easy reference. Each name should be
representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally flow
from the source to the destination, although they may flow back to the source. One way to
indicate this is to draw a long flow line back to the source. An alternative way is to repeat the
source symbol as a destination. Since it is used more than once in the DFD, it is marked with
a short diagonal.
3. When a process is exploded into lower-level details, they are numbered.
4. The names of data stores and destinations are written in capital letters. Process and data flow
names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of a data store. Each data store should
contain all the data elements that flow in and out.
Questionnaires should contain all the data elements that flow in and out. Missing
interfaces, redundancies, and the like are then accounted for, often through interviews.
CURRENT PHYSICAL:
In a current physical DFD, process labels include the names of people or their positions, or
the names of the computer systems that might provide some of the overall system processing;
the label includes an identification of the technology used to process the data. Similarly, data
flows and data stores are often labelled with the names of the actual physical media on which
data are stored, such as file folders, computer files, business forms, or computer tapes.
CURRENT LOGICAL:
The physical aspects of the system are removed as much as possible, so that the current
system is reduced to its essence: the data and the processes that transform them, regardless
of actual physical form.
NEW LOGICAL:
This is exactly like the current logical model if the user were completely happy with the
functionality of the current system but had problems with how it was implemented.
Typically, the new logical model will differ from the current logical model by having
additional functions, obsolete functions removed, and inefficient flows reorganized.
NEW PHYSICAL:
The new physical represents only the physical implementation of the new system.
RULES GOVERNING THE DFD’S
PROCESS
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a sink.
3) A process has a verb phrase label.
DATA STORE
1) Data cannot move directly from one data store to another data store, a process must move
data.
2) Data cannot move directly from an outside source to a data store; a process, which
receives the data from the source, must place the data into the data store.
3) A data store has a noun phrase label.
SOURCE OR SINK
The origin and /or destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.
DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both directions
between a process and a data store to show a read before an update. The latter is usually
indicated, however, by two separate arrows, since these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different
processes, data stores, or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one
other process that handles the data flow, produces some other data flow, and returns the original
data to the beginning process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.
6) A data flow has a noun phrase label; more than one data flow noun phrase can appear on a
single arrow as long as all of the flows on the same arrow move together as one package.
6. SYSTEM IMPLEMENTATION
6.1 PROJECT MODULES:
IMAGE ACQUISITION:
Different leaf images are collected and stored in a database. Images are classified into
training set and testing set. Training and testing sets were separated according to the images
which are needed to be trained and which are needed to be tested.
IMAGE ENHANCEMENT:
It is used to increase contrast of the images present in the training and test sets. This
enhancement is used to obtain dimensions of the image.
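The report does not name the exact enhancement method, so the sketch below assumes a simple linear contrast stretch as one illustration of how contrast can be increased:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly rescale pixel intensities to span the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

# Synthetic low-contrast grayscale image: values clustered between 100 and 130
img = np.full((4, 4), 100, dtype=np.uint8)
img[0, 0] = 130
out = stretch_contrast(img)
print(out.min(), out.max())  # 0 255
```

After stretching, the smallest pixel maps to 0 and the largest to 255, widening the visible contrast.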
IMAGE SEGMENTATION:
Input images are first converted into grayscale, obtaining the dimensions and eliminating
noise. Then the K-means algorithm is applied. Figure 2 depicts an illustration of K-means
applied to the dataset, giving the prediction for the various numbers of color clusters
used.
FEATURE EXTRACTION:
We calculated the average value of each image pixel by calculating their separate R G B
values and finding the mean for the entire image containing various pixels.
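This per-channel averaging can be sketched as follows, assuming the image is a NumPy array in OpenCV's BGR layout (the helper name is illustrative):

```python
import numpy as np

def mean_rgb_features(image):
    """Return the mean R, G, B values of a BGR image array."""
    # OpenCV loads images as BGR; reverse the channel axis to get RGB
    rgb = image[..., ::-1].astype(np.float64)
    # average over height and width, leaving one mean per channel
    return rgb.reshape(-1, 3).mean(axis=0)

# Example: a 2x2 image whose red channel (index 2 in BGR) is 200
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 2] = 200
print(mean_rgb_features(img))  # [200. 0. 0.]
```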
IMAGE CLASSIFICATION:
The SVM classifier is used to identify the classes, which are closely connected to the known
and trained classes. The support vector machine constructs the optimal separating hyperplane
between the classes using the training data.
IMAGE ACCURACY:
Calculate the accuracy of each image of the dataset by varying the number of clusters and
forming a plot showing the contrast between the accuracy of results obtained by each cluster
value.
6.2 METHODOLOGY (ALGORITHMS)
ALGORITHMS:
K-Means clustering is an unsupervised learning algorithm that is used to solve clustering
problems in machine learning and data science. Here we describe what the K-means
clustering algorithm is and how it works, along with the Python implementation of K-means
clustering. K-Means clustering groups an unlabeled dataset into different clusters. K defines
the number of pre-defined clusters that are to be created in the process: if K=2 there will be
two clusters, for K=3 there will be three clusters, and so on. It is an iterative algorithm that
divides the unlabeled dataset into K different clusters in such a way that each data point
belongs to only one group with similar properties. It allows us to cluster the data into
different groups and is a convenient way to discover the categories of groups in an unlabeled
dataset on its own, without the need for any training. It is a centroid-based algorithm, where
each cluster is associated with a centroid. The value of K should be predetermined in this
algorithm.
The K-means clustering algorithm mainly performs two tasks:
1. Determine the best value for the K center points, or centroids, by an iterative process.
2. Assign each data point to its closest K-center. The data points near a particular K-center
form a cluster.
The diagram below explains the working of the K-means clustering algorithm:
Fig. 6(a) Before and After k-means
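The clustering step can be sketched with scikit-learn; the "pixel" values and K=2 below are illustrative, not the project's dataset:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative "pixel" data: two well-separated colour groups
pixels = np.array([[10, 10], [12, 11], [11, 9],
                   [200, 198], [202, 205], [199, 201]], dtype=float)

# K defines the number of clusters; centroids are refined iteratively
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)

# Each point is assigned to its nearest centroid
print(kmeans.labels_)           # two groups of three points
print(kmeans.cluster_centers_)  # near [11, 10] and [200.3, 201.3]
```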
Random Forest is a popular machine learning algorithm that belongs to the supervised
learning technique. It can be used for both classification and regression problems in ML. It
is based on the concept of ensemble learning, which is the process of combining multiple
classifiers to solve a complex problem and to improve the performance of the model.
Random Forest is a classifier that contains a number of decision trees built on various subsets
of the given dataset and averages them to improve the predictive accuracy of that dataset.
Instead of relying on one decision tree, the random forest takes the prediction from
each tree and, based on the majority vote of predictions, predicts the final output.
A greater number of trees in the forest leads to higher accuracy and prevents the problem
of overfitting.
The below diagram explains the working of the Random Forest algorithm:
Random Forest works in two phases: the first is to create the random forest by combining N
decision trees, and the second is to make predictions with each tree created in the first phase.
The Working process can be explained in the below steps and diagram:
Step-1: Select random K data points from the training set.
Step-2: Build the decision trees associated with the selected data points (Subsets).
Step-3: Choose the number N for decision trees that you want to build.
Step-4: Repeat Step 1 & 2.
Step-5: For new data points, find the predictions of each decision tree, and assign the new
data points to the category that wins the majority votes.
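The steps above can be sketched with scikit-learn; the synthetic data here stands in for the leaf feature vectors, so the numbers are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for leaf feature vectors (not the project's data)
X, y = make_classification(n_samples=300, n_features=8, random_state=9)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=9)

# N decision trees are built on bootstrap subsets; the majority vote wins
forest = RandomForestClassifier(n_estimators=100, random_state=9)
forest.fit(X_train, y_train)
print(round(forest.score(X_test, y_test), 2))
```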
Fig. 6(d) SVM
Types of SVM
SVM can be of two types:
Linear SVM: Linear SVM is used for linearly separable data: if a dataset can be
classified into two classes by using a single straight line, then such data is termed
linearly separable data, and the classifier used is called a linear SVM classifier.
Non-linear SVM: Non-linear SVM is used for non-linearly separable data: if a
dataset cannot be classified by using a straight line, then such data is termed
non-linear data, and the classifier used is called a non-linear SVM classifier.
How does SVM work?
Linear SVM:
The working of the SVM algorithm can be understood by using an example. Suppose we
have a dataset that has two tags (green and blue), and the dataset has two features x1 and x2.
We want a classifier that can classify the pair(x1, x2) of coordinates in either green or blue.
Consider the below image:
Fig. 6(e) Coordinates
As this is a 2-D space, by just using a straight line we can easily separate these two classes.
But there can be multiple lines that separate these classes. Consider the below image:
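Such a linear separation can be sketched with scikit-learn; the points below are illustrative, not the project's data:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable "tags" in a 2-D feature space (x1, x2)
X = np.array([[1, 1], [2, 1], [1, 2],   # class 0 ("green")
              [6, 6], [7, 6], [6, 7]])  # class 1 ("blue")
y = np.array([0, 0, 0, 1, 1, 1])

# A linear kernel finds the maximum-margin separating line
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[1.5, 1.5], [6.5, 6.5]]))  # [0 1]
```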
Logistic regression is very similar to linear regression except in how the two are
used. Linear regression is used for solving regression problems, whereas logistic
regression is used for solving classification problems.
In logistic regression, instead of fitting a regression line, we fit an "S"-shaped logistic
function, which predicts two maximum values (0 or 1):
y = 1 / (1 + e^-(b0 + b1*x))
In logistic regression y can lie between 0 and 1 only, so we divide the above
equation by (1 - y):
y / (1 - y) = e^(b0 + b1*x)
But we need a range between -infinity and +infinity, so taking the logarithm of the
equation it becomes:
log(y / (1 - y)) = b0 + b1*x
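The relationship derived above (the log-odds, or logit, undoes the logistic function) can be checked numerically; b0 and b1 are omitted here for simplicity:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def logit(y):
    """Inverse: log-odds log(y / (1 - y)), ranging over (-inf, +inf)."""
    return np.log(y / (1.0 - y))

z = np.array([-2.0, 0.0, 2.0])
y = sigmoid(z)
print(np.allclose(logit(y), z))  # True: the logit undoes the sigmoid
```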
#-----------------------------------
#-----------------------------------
import numpy as np
import mahotas
import cv2
import os
import h5py
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
#--------------------
# tunable-parameters
#--------------------
images_per_class = 800
train_path = "dataset/train"
h5_train_data = 'output/train_data.h5'
h5_train_labels = 'output/train_labels.h5'
bins = 8
def rgb_bgr(image):
    # convert OpenCV's BGR image to RGB
    rgb_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    return rgb_img

def bgr_hsv(rgb_img):
    # convert the RGB image to HSV for colour thresholding
    hsv_img = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2HSV)
    return hsv_img
# image segmentation
def img_segmentation(rgb_img, hsv_img):
    # healthy (green) regions
    lower_green = np.array([25, 0, 20])
    upper_green = np.array([100, 255, 255])
    healthy_mask = cv2.inRange(hsv_img, lower_green, upper_green)
    # diseased (brown) regions
    lower_brown = np.array([10, 0, 10])
    upper_brown = np.array([30, 255, 255])
    disease_mask = cv2.inRange(hsv_img, lower_brown, upper_brown)
    # keep the leaf pixels (healthy + diseased) and drop the background
    final_mask = healthy_mask + disease_mask
    final_result = cv2.bitwise_and(rgb_img, rgb_img, mask=final_mask)
    return final_result
# feature-descriptor-1: Hu Moments
def fd_hu_moments(image):
feature = cv2.HuMoments(cv2.moments(image)).flatten()
return feature
# feature-descriptor-2: Haralick Texture
def fd_haralick(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    haralick = mahotas.features.haralick(gray).mean(axis=0)
    return haralick
# feature-descriptor-3: Color Histogram
def fd_histogram(image, mask=None):
    image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([image], [0, 1, 2], None, [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    cv2.normalize(hist, hist)
    return hist.flatten()
train_labels = os.listdir(train_path)
train_labels.sort()
print(train_labels)
global_features = []
labels = []
# join the training data path and each species training folder
for training_name in train_labels:
    img_dir = os.path.join(train_path, training_name)
    current_label = training_name

    # loop over the images in each sub-folder
    for x in range(1, images_per_class + 1):
        file = os.path.join(img_dir, str(x) + ".jpg")
        image = cv2.imread(file)
        RGB_BGR = rgb_bgr(image)
        BGR_HSV = bgr_hsv(RGB_BGR)
        IMG_SEGMENT = img_segmentation(RGB_BGR, BGR_HSV)
        fv_hu_moments = fd_hu_moments(IMG_SEGMENT)
        fv_haralick = fd_haralick(IMG_SEGMENT)
        fv_histogram = fd_histogram(IMG_SEGMENT)
        # concatenate the descriptors into a single global feature vector
        global_feature = np.hstack([fv_histogram, fv_haralick, fv_hu_moments])

        # update the list of labels and feature vectors
        labels.append(current_label)
        global_features.append(global_feature)
# encode the target labels
targetNames = np.unique(labels)
le = LabelEncoder()
target = le.fit_transform(labels)

# scale features into the range (0, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
rescaled_features = scaler.fit_transform(global_features)
# save the feature vectors and labels to HDF5
h5f_data = h5py.File(h5_train_data, 'w')
h5f_label = h5py.File(h5_train_labels, 'w')
h5f_data.create_dataset('dataset_1', data=np.array(rescaled_features))
h5f_label.create_dataset('dataset_1', data=np.array(target))
h5f_data.close()
h5f_label.close()
# training
#-----------------------------------
#-----------------------------------
import h5py
import numpy as np
import os
import glob
import cv2
import warnings
import seaborn as sns
from matplotlib import pyplot
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.svm import SVC
import joblib
warnings.filterwarnings('ignore')
#--------------------
# tunable-parameters
#--------------------
num_trees = 100
test_size = 0.20
seed = 9
train_path = "dataset/train"
test_path = "dataset/test"
h5_train_data = 'output/train_data.h5'
h5_train_labels = 'output/train_labels.h5'
scoring = "accuracy"
train_labels = os.listdir(train_path)
train_labels.sort()
if not os.path.exists(test_path):
os.makedirs(test_path)
# create all the machine learning models
models = []
models.append(('LR', LogisticRegression(random_state=seed)))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier(random_state=seed)))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC(random_state=seed)))
results = []
names = []
# import the saved feature vectors and trained labels
h5f_data = h5py.File(h5_train_data, 'r')
h5f_label = h5py.File(h5_train_labels, 'r')
global_features_string = h5f_data['dataset_1']
global_labels_string = h5f_label['dataset_1']
global_features = np.array(global_features_string)
global_labels = np.array(global_labels_string)
h5f_data.close()
h5f_label.close()
print("[STATUS] features shape: {}".format(global_features.shape))

# split the data into training and testing sets
(trainDataGlobal, testDataGlobal, trainLabelsGlobal, testLabelsGlobal) = train_test_split(
    np.array(global_features),
    np.array(global_labels),
    test_size=test_size,
    random_state=seed)

# 10-fold cross-validation of each model
for name, model in models:
    kfold = KFold(n_splits=10, random_state=seed, shuffle=True)
    cv_results = cross_val_score(model, trainDataGlobal, trainLabelsGlobal,
                                 cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)
# compare the models with a boxplot of cross-validation scores
fig = pyplot.figure()
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(names)
pyplot.show()

# fit the chosen classifier and evaluate it on the held-out test data
clf = SVC(random_state=seed)
clf.fit(trainDataGlobal, trainLabelsGlobal)
y_predict = clf.predict(testDataGlobal)
cm = confusion_matrix(testLabelsGlobal, y_predict)
sns.heatmap(cm, annot=True)
print(classification_report(testLabelsGlobal, y_predict))
print(accuracy_score(testLabelsGlobal, y_predict))
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('./image.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
hsv_img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
plt.imshow(hsv_img)
plt.show()
# healthy (green) regions
lower_green = np.array([25, 0, 20])
upper_green = np.array([100, 255, 255])
mask = cv2.inRange(hsv_img, lower_green, upper_green)
result = cv2.bitwise_and(img, img, mask=mask)
plt.subplot(1, 2, 1)
plt.imshow(mask, cmap="gray")
plt.subplot(1, 2, 2)
plt.imshow(result)
plt.show()
# diseased (brown) regions
lower_brown = np.array([10, 0, 10])
upper_brown = np.array([30, 255, 255])
disease_mask = cv2.inRange(hsv_img, lower_brown, upper_brown)
disease_result = cv2.bitwise_and(img, img, mask=disease_mask)
plt.subplot(1, 2, 1)
plt.imshow(disease_mask, cmap="gray")
plt.subplot(1, 2, 2)
plt.imshow(disease_result)
plt.show()
# combined leaf mask (healthy + diseased)
final_mask = mask + disease_mask
final_result = cv2.bitwise_and(img, img, mask=final_mask)
plt.figure(figsize=(15, 15))
plt.subplot(1, 2, 1)
plt.imshow(final_mask, cmap="gray")
plt.subplot(1, 2, 2)
plt.imshow(final_result)
plt.show()
# detect SIFT keypoints on the grayscale image and draw them
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
sift = cv2.SIFT_create()
kp = sift.detect(gray, None)
img = cv2.drawKeypoints(img, kp, img)
plt.imshow(img)
plt.show()
7. SYSTEM TESTING
7.1 TESTING METHODS:
System tests are designed to validate a fully developed system with the goal
of assuring that it meets its requirements. There are three types of system testing:
Alpha Testing:
Alpha testing refers to the system testing that is carried out by the client within the
organization along with the developer. The alpha tests are conducted in a controlled manner.
Beta Testing:
Beta testing is the system testing performed by a selected group of users; the developer is
not present at the site, and the users report the issues that are encountered during testing.
The software developer makes the necessary changes and submits them to the client.
Acceptance Testing:
Acceptance testing is the system testing performed by the client to decide whether to
accept the delivery of the system.
Unit Testing:
Unit testing focuses verification effort on the smallest unit of the software design: the module.
Using the detailed design description as a guide, important control paths are tested to
uncover errors within the boundary of each module. All the
statements in the module are executed at least once. From this we can ensure that
every independent path through the control structures is exercised.
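A hypothetical module-level unit test is sketched below; `mean_pixel` is an assumed stand-in for one of the project's smallest units, not the report's actual code:

```python
import unittest
import numpy as np

def mean_pixel(image):
    # Assumed helper: the per-image average used during feature extraction
    return float(np.mean(image))

class TestMeanPixel(unittest.TestCase):
    def test_uniform_image(self):
        img = np.full((4, 4, 3), 10, dtype=np.uint8)
        self.assertEqual(mean_pixel(img), 10.0)

    def test_black_image(self):
        img = np.zeros((4, 4, 3), dtype=np.uint8)
        self.assertEqual(mean_pixel(img), 0.0)

# run with: python -m unittest <this_file>
```

Each statement in `mean_pixel` is executed by at least one test, which matches the coverage goal described above.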
Integration Testing:
Integration testing is a systematic technique for building the program structure and
conducting tests to uncover errors associated with interfacing. In this system, top-down
integration is performed for the construction of program structures.
Validation Testing:
This testing is performed to ensure that the system functions in a manner that can be
reasonably expected by the users. Here data is validated first at the interface
(authentication), before sending it to the server.
An error occurs when a module is not found; as shown in the above figure, an error occurred
because the "sklearn" module was not found.
8. EXPERIMENTAL RESULTS
Machine learning algorithm comparison:
Selected Image:
Diseased or Healthy:
Image converted into HSV:
After conversion to HSV, the image is converted into grayscale:
Gray Scale Image:
Fig. 8(i) Diseased image
9. CONCLUSION & FUTURE SCOPE
Our research developed a computer-aided segmentation and classification method.
The main focus of our project is to recognize diseases on the pomegranate leaf. First,
preprocessing is done. In the second stage, the K-means algorithm is applied to all the
images contained in the dataset. The third stage involves feature extraction, which includes
color features and shape features. This is followed by classification of diseases with the
help of rich trained data repositories. The statistical parameters are used as features for
classification. The work can be used to identify the condition of the pomegranate leaf and
to determine whether a leaf of the pomegranate plant is diseased or healthy. This project
can be broadly expanded. The future scope of this project includes extending it to various
leaves and their respective diseases. This research is also useful in finding out whether the
accuracy of the result is sufficient to rely on the prediction made by the automated analysis
of leaf disease. A special application can be built for easy access, in which the researcher
may upload a real-time image of the diseased leaf and obtain the name and cure of the
disease. Many more improved classification algorithms can be applied to predict leaf
diseases.
10. BIBLIOGRAPHY
[1] Nikita Goel, Dhruv Jain and Adwitiya Sinha, “Prediction Model for Automated Leaf
Disease Detection & Analysis”, IEEE, 2018.
[2] Shivaputra S. Panchal, Rutuja Sonar, “Pomegranate Leaf Disease Detection Using Support
Vector Machine”, International Journal of Engineering and Computer Science, ISSN: 2319-7242,
Volume 5, Issue 6, June 2016, pp. 16815-16818.
[3] J. Shirahatti, R. Patil and P. Akulwar, “A Survey Paper on Plant Disease Identification Using
Machine Learning Approach”, IEEE, 2018.
[4] Jayashri Patil, Sachin Naik, “Pomegranate Fruit Diseases Detection Using Image
Processing Techniques: A Review”, IT in Industry, Vol. 9, No. 2, 2021.
[5] S. D. Khirade and A. B. Patil, “Plant Disease Detection Using Image Processing”, IEEE, 2015.
[6] Pujari, J. D., Yakkundimath, R., Byadgi, A. S., “Image Processing Based Detection of
Fungal Diseases in Plants”, International Conference on Information and Communication
Technologies, Volume 46, pp. 1802-1808, 2015.
[7] K. Sakthidasan Sankaran, N. Vasudevan and V. Nagarajan, “Plant Disease Detection and
Recognition Using K-means Clustering”, IEEE, 2020.
[8] G. Shrestha, Deepsikha, M. Das and N. Dey, “Plant Disease Detection Using CNN”, 2020
IEEE Applied Signal Processing Conference (ASPCON), 2020.
[9] Bijaya Kumar Hatuwal, Aman Shakya and Basanta Joshi, “Plant Leaf Disease
Recognition Using Random Forest, KNN, SVM and CNN”, Polibits, 2020.
[10] Bhange, M., Hingoliwala, H. A., “Smart Farming Disease Detection Using Image
Processing”, Second International Symposium on Computer Vision and the Internet, Volume
58, pp. 280-288, 2015.
[11] U. Shruthi, V. Nagaveni, B. K. Raghavendra, “A Review on Machine Learning
Classification Techniques for Plant Disease Detection”, International Conference on
Advanced Computing & Communication Systems (ICACCS), 2019.
[13] Jay Trivedi, Yash Shamnai, Ruchi Gajjar, “Plant Leaf Disease Detection Using Machine
Learning”, International Conference on Emerging Technology Trends in Electronics,
Communication and Networking, 2020.
[14] Adhao Asmita Sarangdhar, V. R. Pawar, “Machine Learning Regression Technique for
Cotton Leaf Disease Detection and Controlling Using IoT”, International Conference of
Electronics, Communication and Aerospace Technology (ICECA), 2017.
[15] Amrita S. Tulshan, Nataasha Raul, “Plant Leaf Disease Detection Using Machine
Learning”, International Conference on Computing, Communication and Networking
Technologies (ICCNT), 2019.