BACHELOR OF TECHNOLOGY
in
2021 - 2022
B. V. Raju Institute of Technology
(UGC Autonomous, Accredited By NBA & NAAC)
Vishnupur, Narsapur, Medak (Dist.),
Telangana State, India – 502313
_________________________________________________________
This is to certify that the Major Project entitled “Crop Yield Prediction
Using Machine Learning Algorithms”, being submitted by
This is to certify that the above statement made by the students is correct
to the best of my knowledge.
CANDIDATE’S DECLARATION
ACKNOWLEDGEMENT
The success and final outcome of this project required a great deal of
guidance and assistance from many people, and we are extremely fortunate to
have received it throughout the project's completion. Whatever we have
achieved is due to such guidance and assistance, and we would like to thank
everyone involved.
We thank Mrs. B. Usha Sri for guiding us and providing all the support
needed to complete this project. We are thankful to Mrs. T. Shilpa, our
section project coordinator, for supporting us in this work. The person who
has our utmost gratitude is Dr. Ch. Madhu Babu, Head of the CSE Department.
Crop Yield Prediction Using Machine Learning
Algorithms
ABSTRACT
Key Words: KNN, RF, DT, SVM, GNB, GB, XGBoost, and Voting ensemble
classifiers
CONTENTS
Candidate’s Declaration i
Acknowledgement ii
Abstract iii
Contents
1. INTRODUCTION
1.1 Motivation 1
1.2 Problem Definition 2
1.3 Objective of Project 2
1.4 Limitations of Project 2
1.5 Organization of Documentation 3
2. LITERATURE SURVEY
2.1 Introduction 4
2.2 Existing System 5
2.3 Disadvantages of Existing system 5
2.4 Proposed System 5
3. ANALYSIS
3.1 Introduction 7
3.2 Software Requirement Specification 7
3.2.1 User requirements 7
3.2.2 Software requirements 8
3.2.3 Hardware requirements 10
3.3 Content Diagrams of Project 10
3.4 Algorithms and Flowcharts 10
4. DESIGN
4.1 Introduction 16
4.2 DFD / ER / UML diagram (any other project diagrams) 17
4.3 Module design and organization 23
5. IMPLEMENTATION & RESULTS
5.1 Introduction 26
5.2 Explanation of Key functions 26
5.3 Method of Implementation 28
5.3.1 Forms 28
5.3.2 Output Screens 29
5.3.3 Result Analysis 38
6. TESTING & VALIDATION
6.1 Introduction 45
6.2 Design of test cases and scenarios 45
6.3 Validation 47
7. CONCLUSION & FUTURE WORK 48
8. REFERENCES 49
Chapter 1
INTRODUCTION
Achieving maximum yield rates with limited land resources is the goal of
agricultural planning in an agro-based country. Earlier, farming predictions
were made based on a farmer's past experience with a particular crop.
Nowadays, as conditions change, there is a need for advancement in farming
activities: farmers in rural areas are often not aware of new crops and the
benefits of farming them. The proposed system applies machine learning and
prediction algorithms to suggest the most suitable crops to farmers. The aim
of the system is to reduce losses due to drastic climatic changes and to
increase crop yield rates. The system integrates data obtained from past
predictions, current weather, and soil condition, giving farmers a list of
crops that can be cultivated. Machine learning methods such as SVM (Support
Vector Machine) and linear regression are widely used in such prediction
techniques, and they return the best crop for cultivation under the current
environmental conditions. The proposed system considers past, current, and
forecast rainfall amounts as well as the type of soil the farmer has. Based
on these parameters, the suitable crops for the given conditions are
predicted using machine learning algorithms, producing more accurate
prediction results.
1.1 Motivation
In recent years, Asian country has been agitated by economic and social
forces related to higher suicide rates amongst tiny and marginal farmers. Our
aim is to supply help and tools to assist such farmers and communities
and address these problems. Generally, they face challenges accessing and
trusting academic reach and coaching to higher perceive the way to
increase crop yields and improve money standing.
1
1.2 Problem Definition
1.5 Organization of Documentation
Chapter 1: This chapter discusses the project's motivation, problem
definition, objective, and limitations, as well as the organization of the
documentation.
Chapter 2: This chapter reviews the existing system, explains its
shortcomings, and describes how those disadvantages are removed in the
proposed system.
Chapter 6: This chapter covers the testing of test cases as well as project
validation.
Chapter 8: This chapter lists all of the references consulted during the
project's execution.
Chapter 2
LITERATURE SURVEY
2.1 Introduction
Monali Paul, Santosh K. Vishwakarma, Ashok Verma [1]
This paper predicts crop yield by analyzing crops and categorizing them
based on that analysis. The categorization is done with data mining
algorithms such as KNN and Naïve Bayes. This is helpful for developing our
project with the assistance of data mining.
Abdullah Na, William Isaac, Ekram Khan [2]
This paper presents a smartphone-based application which measures the pH
value, temperature, and moisture of soil in real time. It assists in the
remote analysis of soil through various techniques.
Pooja More, Sachi Nene [5]
This paper uses advanced artificial neural network technology in conjunction
with machine learning algorithms such as SVM and linear regression for the
prediction of the best suited crops.
Advantages:
Chapter 3
ANALYSIS
3.1 Introduction
As outlined in Chapter 1, the goal is to achieve maximum yield rates with
limited land resources in an agro-based country. The proposed system applies
machine learning and prediction algorithms to suggest the most suitable
crops to farmers, with the aim of reducing losses due to drastic climatic
changes and increasing crop yield rates. It integrates data from past
predictions, current weather, and soil condition, and considers past,
current, and forecast rainfall amounts as well as the farmer's soil type.
Based on these parameters, the suitable crops for the given conditions are
predicted using machine learning algorithms, producing more accurate
prediction results.
3.2.2 Software requirements
For demonstrating my proposed system, building a web-based
application that can use to demonstrate the proposed system. For
demonstrating I choose a web application for getting a better visual
experience. Major development goals of this system are, implement the
spam features results and classification analysis. To fulfill these
requirements, I choose python programming language because I can
get huge API support for developing Machine Learning and Deep
Learning algorithms. Many web framework tools are also developed
based on the python programming language. For better interface
experience I developed this SRD system in a web-based environment.
Based on these reasons, I choose the flask web framework because it
follows the MVT design pattern. For storing and handling the
application-generated data, we need to integrate with a database server.
For the integration of the database with the flask framework, I choose
the Mysql Database server. For developing the flask web application, I
set up the software which are mentioned in the following.
3.2.2.1 Python
Python is an open-source, multi-purpose language. Using Python we can
develop machine learning and artificial intelligence applications, with or
without a GUI, as well as operating-system-level, mobile, and web-based
applications. In this system, we had two major reasons to choose Python:
first, the need to implement classification algorithms, and second, the
need to implement the web-based application. Python has strong API support
for implementing classification algorithms, and it offers popular
third-party web frameworks such as Django and Flask. Django follows the
Model View Template (MVT) design pattern, which is similar to MVC (Model
View Controller), while Flask is a lightweight microframework. We installed
Python 3.8 to set up the Python environment on a Windows 10 operating
system. Python is an object-oriented language with easy syntax, so it can
be learned easily and quickly.
3.2.2.2 MySQL
All major web frameworks support connecting to a database server through
internal interfaces. Communication between the web server and the database
server happens with SQL queries, but writing raw SQL directly tends to make
the code dependent on the particular database server. Flask supports
Object-Relational Mapping (ORM) through extensions such as
Flask-SQLAlchemy. The ORM takes care of converting data between the RDBMS
and a programming language like Python, so developers need not write
low-level SQL queries; using object-oriented concepts such as classes,
methods, and objects, data is stored and manipulated in the database
server. Popular database servers such as MySQL 8.0 and PostgreSQL work well
with ORMs. From these, we chose MySQL 8.0 to integrate with the Flask
server for storing and manipulating the data.
3.2.2.3 PyCharm IDE
For a better programming experience, we need a full Integrated Development
Environment (IDE) that understands Python syntax. Many IDEs target
particular programming languages; PyCharm is built specifically for Python.
Depending on the language in use, plug-ins can be installed to get
additional services such as identification of compile errors, suggestions,
and execution. For this project we chose the PyCharm IDE because its
Community Edition is open source and relatively lightweight, and because we
needed to write both Python and HTML code.
3.2.3 Hardware requirements
Support Vector Machine is an extremely popular supervised machine learning
technique (having a pre-defined target variable) which can be used as a
classifier as well as a predictor. For classification, it finds a hyper-plane in the
feature space that differentiates between the classes. An SVM model
represents the training data points as points in the feature space, mapped in
such a way that points belonging to separate classes are segregated by a
margin as wide as possible. The test data points are then mapped into that
same space and are classified based on which side of the margin they fall.
Random Forest:
The performance of a random forest depends not only on the strength of the
individual trees but also on the correlation between different trees. The
stronger the individual trees and the lower the correlation between them,
the better the performance of the random forest. The variation between
trees comes from their randomness, which involves bootstrapped samples and
randomly selected subsets of data attributes.
Below is the step-by-step Python implementation. ...
Step 1 : Import and print the dataset.
Step 2 : Select all rows and column 1 from dataset to x and all rows and
column 2 as y.
Step 3 : Fit Random forest regressor to the dataset.
Step 4 : Predicting a new result.
Step 5 : Visualising the result.
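Since the step-by-step listing itself is not reproduced above, the five steps can be sketched as follows; the small Level/Salary data frame is a stand-in for the report's actual dataset, which is not included here.

```python
# A sketch of the five Random Forest regression steps above, using a small
# synthetic dataset in place of the report's CSV file.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Step 1: import (here, construct) and print the dataset.
data = pd.DataFrame({
    "Level":  [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "Salary": [45, 50, 60, 80, 110, 150, 200, 300, 500, 1000],
})
print(data)

# Step 2: select column 1 as x and column 2 as y.
x = data.iloc[:, 0:1].values   # 2-D feature array
y = data.iloc[:, 1].values     # 1-D target array

# Step 3: fit a Random Forest regressor to the dataset.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(x, y)

# Step 4: predict a new result.
prediction = model.predict([[6.5]])
print(prediction)

# Step 5: visualising the result would use matplotlib, e.g.
# plt.scatter(x, y) followed by plt.plot over a fine grid of levels.
```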
Decision Tree:
The Decision Tree algorithm belongs to the family of supervised learning
algorithms. Unlike some other supervised learning algorithms, the decision
tree algorithm can be used for solving both regression and classification
problems. The general motive of using a decision tree is to create a
training model which can be used to predict the class or value of target
variables by learning decision rules inferred from prior data (training
data).
K-Nearest Neighbour:
KNN is a lazy supervised learning algorithm: it spends little time in
training and more at classification time. Like other algorithms, its use is
divided into two steps, training from data and testing on new instances.
The K-Nearest Neighbour working principle is based on assigning a weight to
each data point, called a neighbour. In KNN, the distance from the query
point to each point in the training dataset is calculated, and
classification is done on the basis of a majority vote among the K nearest
data points. Three types of distance are commonly used in KNN: Euclidean,
Manhattan, and Minkowski, of which Euclidean is the most common. The
algorithm proceeds as follows:
1. D represents the samples used in training and k denotes the number of
nearest neighbours.
2. Create a super class for each sample class.
3. Compute the Euclidean distance to every training sample.
4. Classify the sample based on the majority class among its neighbours.
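The KNN procedure described above can be sketched with scikit-learn's KNeighborsClassifier; the two-cluster toy data below is illustrative only, not taken from the report.

```python
# A minimal KNN sketch: Euclidean distance, majority vote of k=3 neighbours.
from sklearn.neighbors import KNeighborsClassifier

# Toy training samples D: two features per point, two classes.
X_train = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y_train = [0, 0, 0, 1, 1, 1]

# k = 3 nearest neighbours with the Euclidean metric.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(X_train, y_train)

# Classify new instances by majority vote of their 3 nearest neighbours.
print(knn.predict([[2, 2], [8, 7]]))   # one query point near each cluster
```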
Logistic regression:
specific task. It also helps to eliminate some basic implementation bugs
regarding data set treatment.
Flowcharts
Chapter 4
DESIGN
4.1 Introduction
Design is concerned with identifying software components and defining their
interactions; in this phase the structure of the solution is defined and a
plan is provided. One of the desirable qualities of large systems is
modularity: the system is separated into multiple components, and the
interaction between the pieces is kept minimal. The design phase's goal is
to devise a strategy for resolving the problem identified in the
requirements document; it is the initial stage in transitioning from the
problem domain to the solution domain. The design of a system is the aspect
that most affects software quality, and it has a significant impact on the
later phases, especially testing and maintenance. The design document is
the result of this step; it serves as a blueprint or plan for the solution
and is used during implementation, testing, and maintenance.
The design process is frequently split into two phases: system design and
detail design. The goal of system design, also known as top-level design, is to
identify the modules that should be included in the system, their
specifications, and how they interact with one another to create the intended
results. All of the primary data structures, file formats, and output formats,
as well as the major modules in the system and their specifications, are
decided at the end of system design.
4.2 DFD/ER/UML diagram (any other project
diagrams)
UML DIAGRAMS
This chapter describes the system's functionalities and responsibilities
with a use case diagram. A use case diagram is a graphical representation
of the interactions of the actors with the system. The project's use case
diagram is described visually in the figure. In this system, two actors are
defined:
1. Admin
2. User
The actors, their responsibilities, and their operations are discussed
here.
Actor: Admin
Name: Admin of the system
Actor: Admin
Description: Admin is a super user of the system. After a successful login,
the admin can perform various operations in the admin portal: mainly,
uploading the dataset, feature calculations, implementation of
classification algorithms on the feature dataset, and evaluation. Based on
the dataset uploaded by the admin, users can see the product details.

Actor: User
Name: User of the system
Actor: User
Description: The end user will create an account and log in to the
application using his/her login credentials to predict the crop based on
the trained information.

Table 4.2: Description of User actor
Data Flow Diagram:
1. One of the most essential modelling tools is the data flow diagram
(DFD). It is used to represent the various components of the system: the
system processes, the data those processes use, the external entities
that interact with the system, and the information flows within the
system.
2. The DFD depicts the flow of data through the system and how it is
altered by a series of transformations. It is a graphical representation
of data flow and the transformations that occur as data goes from input
to output.
Figure: Data flow diagram, showing login and verification (valid/invalid),
the Admin flow (upload dataset, view dataset, model evaluations), and the
User flow (crop prediction).
Class diagram:
The class diagram is used to further enhance the use case diagram and
establish the system's detailed design. The class diagram divides the players
in the use case diagram into a group of related classes. The relationship
between the classes might be either "is-a" or "has-a." Each class in the class
diagram may be able to perform specific functions. The class's functions are
referred to as "methods." Aside from that, each class may have certain
"attributes" that help to distinguish it from others.
Figure-6 Class Diagram
Collaboration diagram:
4.3 Module design and organization
Dataset Collection:
In this system, we use a crop type prediction dataset imported from
internet resources. The dataset contains four independent attributes,
temperature, humidity, pH, and rainfall, and one dependent (target)
attribute, the crop label, with values such as rice, papaya, coconut, and
groundnut. Figure 3.2 depicts the crop dataset, which contains 5 columns
and 3100 records; the target class (crop type) has 31 labels.
8  Cotton        21  Papaya
9  Grapes        22  Peas
10 Groundnut     23  Pigeon Peas
11 Jute          24  Pomegranate
12 Kidney Beans  25  Rice
13 Lentil        26  Rubber
Data Preprocessing:
The crop dataset arrives as raw data that ML algorithms cannot use
directly. In the data preprocessing stage, the system reads the data from
the .csv file and converts it into a data frame. It then checks whether the
dataset contains any missing values such as question marks, special
characters, or nulls. The selected dataset does not contain any missing
values.
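The preprocessing step above can be sketched with pandas; the tiny in-memory CSV and its column names stand in for the report's actual file, whose name is not given.

```python
# A sketch of the preprocessing stage: read the raw CSV into a data frame
# and check for missing values. The CSV text here is illustrative only.
import io
import pandas as pd

csv_text = """temperature,humidity,ph,rainfall,label
25.5,80.3,6.5,200.1,rice
30.2,60.1,7.0,90.4,papaya
27.8,75.5,6.2,150.7,coconut
"""

# Treat question marks and empty fields as missing values while reading.
df = pd.read_csv(io.StringIO(csv_text), na_values=["?", ""])

# Count missing values per column (all zero for this clean sample).
print(df.isnull().sum())

# Drop any rows with missing values (a no-op for the report's dataset).
df = df.dropna()
print(df.shape)
```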
Train and Test Split:
After the preprocessing stage, the crop dataset is split in an 80:20 ratio.
Using the train_test_split() method, the system produces an 80% training
dataset with 2480 records and a 20% testing dataset with 620 records.
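The split described above can be sketched with scikit-learn; a 100-row synthetic array stands in for the 3100-record crop dataset.

```python
# An 80:20 train/test split sketch with scikit-learn's train_test_split.
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the crop dataset's four feature columns.
rng = np.random.default_rng(0)
X = rng.random((100, 4))            # temperature, humidity, ph, rainfall
y = rng.integers(0, 3, size=100)    # crop label indices

# 80% training / 20% testing split, seeded for reproducibility.
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(x_train.shape, x_test.shape)
```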
Training the models:
After splitting the dataset into training and testing sets, the system
trains the ML models by invoking the fit() method with the independent
variables x_train and the target column values y_train of the training
dataset as input parameters.
Predicting the model:
The trained model predicts the crop name via the predict() method, which
takes the testing data x_test as input and returns the list of predicted
crop names.
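The fit-then-predict flow of the two steps above can be sketched as follows; a DecisionTreeClassifier on synthetic data stands in for any of the report's models.

```python
# Train a model with fit(x_train, y_train), then call predict(x_test).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: 100 samples, 4 features, binary labels.
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(x_train, y_train)          # train on features + target column
predicted = model.predict(x_test)    # list of predicted labels
print(len(predicted))
```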
ML Evaluations:
In the ML evaluations, the system compares performance across all of the ML
techniques. Given the predicted values and the actual values as inputs, it
calculates all performance metrics: accuracy score, precision, recall,
F1-score, MCC, and Kappa score.
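All six metrics named above are available in scikit-learn; the example labels below are illustrative, and macro averaging is one reasonable choice for the multi-class crop labels (the report does not state which averaging it used).

```python
# Computing the report's evaluation metrics with sklearn.metrics.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, cohen_kappa_score)

# Example actual vs. predicted crop labels (one misclassification).
actual    = ["rice", "papaya", "rice", "coconut", "papaya", "rice"]
predicted = ["rice", "papaya", "rice", "papaya",  "papaya", "rice"]

print("accuracy :", accuracy_score(actual, predicted))
print("precision:", precision_score(actual, predicted,
                                    average="macro", zero_division=0))
print("recall   :", recall_score(actual, predicted,
                                 average="macro", zero_division=0))
print("f1       :", f1_score(actual, predicted,
                             average="macro", zero_division=0))
print("MCC      :", matthews_corrcoef(actual, predicted))
print("kappa    :", cohen_kappa_score(actual, predicted))
```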
Crop Prediction:
From the ML evaluations, the system selects the best classification model
based on the performance metrics. According to the experimental results,
the voting classifier was selected as the best model, predicting with
94.67% accuracy. When the user enters test data such as temperature,
humidity, pH, and rainfall, the system predicts crop names such as rice,
papaya, and watermelon for the various test data points.
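A voting ensemble of the kind selected above can be sketched with scikit-learn's VotingClassifier; hard (majority) voting and this particular trio of base learners are assumptions, since the report does not list its exact ensemble configuration.

```python
# A voting-ensemble sketch combining three of the report's base learners.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the four-feature crop dataset.
X, y = make_classification(n_samples=200, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)

# Hard voting: each base model votes, the majority class wins.
voting = VotingClassifier(estimators=[
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=1)),
    ("gnb", GaussianNB()),
], voting="hard")

voting.fit(X, y)
print(voting.score(X, y))   # training accuracy of the ensemble
```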
Chapter 5
IMPLEMENTATION & RESULTS
5.1 Introduction
Our system primarily aims to predict the most suitable crop for a given set
of conditions. During the building of a prediction model, machine learning
algorithms play a critical role in tackling complicated and non-linear
problems. Crop prediction models require variables that can be selected
from diverse data sets and that describe the growing conditions as
precisely as feasible. The data is divided into two categories: test data
and train data. We read the files, import the modules, clean the data,
visualize it, and run machine learning algorithms on it to make the
appropriate predictions.
Pandas
Pandas is a free Python library that uses powerful data structures to
provide high-performance data manipulation and analysis. Before it, Python
was used mainly for data preprocessing and munging and contributed little
to the data analysis itself; pandas solved this problem. Regardless of the
source of the data, we can use pandas to accomplish five common phases in
data processing and analysis: load, prepare, manipulate, model, and
analyse. Python and pandas are utilized in a variety of sectors, including
academic and business domains such as finance, economics, statistics, and
analytics.
Matplotlib
Scikit – learn
Scikit-learn is a Python library that offers a uniform interface to a
variety of supervised and unsupervised learning techniques. It is packaged
in many Linux distributions and is licensed under the permissive simplified
BSD license, making it suitable for academic and commercial use.
fit(x, y): Trains a scikit-learn model on the features x and labels y.
print(): Prints the given values to the console.
5.3 Method of Implementation
5.3.1 Forms
Python
Python is an interpreted high-level programming language for general-
purpose programming. Created by Guido van Rossum and first released
in 1991, Python has a design philosophy that emphasizes code readability,
notably using significant whitespace.
Python features a dynamic type system and automatic memory
management. It supports multiple programming paradigms, including
object-oriented, imperative, functional and procedural, and has a large
and comprehensive standard library.
Python is Interpreted − Python is processed at runtime by the interpreter.
You do not need to compile your program before executing it. This is
similar to PERL and PHP.
Python is Interactive − you can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable
and terse code is part of this, and so is access to powerful constructs
that avoid tedious repetition of code. Maintainability also ties into this:
lines of code may be an all but useless metric, but it does say something
about how much code you have to scan, read and/or understand to
troubleshoot problems or tweak behaviors. This speed of development, the
ease with which a programmer of other languages can pick up basic Python
skills, and the huge standard library are key to another area where Python
excels.
Python is extensively used and popular, therefore you'll get more community
support. All its tools have been quick to implement and have saved a lot of
time, and several of them have later been patched and updated by people
with no Python background, without breaking.
3. Python is a Language for Everyone
Python code may execute on any platform, including Linux, Mac OS X,
and Windows. Programmers must learn many languages for various roles,
but Python allows you to create sophisticated web apps, perform data
analysis and machine learning, automate tasks, scrape the web, and
create games and amazing visualizations. It is a programming language
that can be used in a variety of situations.
Preprocessing Dataset:
Once the dataset is uploaded, we preprocess it so that all null and missing
values are removed.
After preprocessing, we started training the dataset with different
algorithms.
SVM Algorithm
Steps:
SVM.py
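The SVM.py listing is not reproduced in the text, so the following is a minimal stand-in using scikit-learn's SVC on synthetic data, not the report's exact code; the RBF kernel choice is an assumption.

```python
# A stand-in sketch for SVM.py: train an SVC and report test accuracy.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data for the four-feature crop dataset.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

svm = SVC(kernel="rbf")            # kernel choice is an assumption
svm.fit(x_train, y_train)
acc = accuracy_score(y_test, svm.predict(x_test))
print(acc)
```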
KNN.py
Step-3: Divide S into subsets that contain possible values for the best
attributes.
Step-4: Generate the decision tree node which contains the best attribute.
Step-5: Recursively make new decision trees using the subsets of the
dataset created in Step-3. Continue this process until a stage is reached
where the nodes cannot be classified further; the final nodes are called
leaf nodes.
Database Connection Code Snippet:
DBConfig.py
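The DBConfig.py listing is not reproduced in the text; the following sketch shows one plausible shape for it. The host, user, password, and database name are placeholders, not values taken from the report.

```python
# A stand-in sketch for DBConfig.py: MySQL connection settings for the
# Flask application. All credential values below are placeholders.
DB_CONFIG = {
    "host": "localhost",
    "user": "root",
    "password": "password",   # placeholder credential
    "database": "crop_db",    # placeholder schema name
}

# A Flask-SQLAlchemy style connection URI built from the same settings.
SQLALCHEMY_DATABASE_URI = (
    "mysql://{user}:{password}@{host}/{database}".format(**DB_CONFIG)
)
print(SQLALCHEMY_DATABASE_URI)
```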
Code Snippet of GaussianNB:
GNB.py
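The GNB.py listing is likewise not reproduced, so here is a minimal stand-in that trains scikit-learn's GaussianNB on synthetic data rather than the report's dataset.

```python
# A stand-in sketch for GNB.py: fit a Gaussian Naive Bayes classifier.
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in data: 150 samples with 4 features.
X, y = make_classification(n_samples=150, n_features=4, random_state=2)

gnb = GaussianNB()
gnb.fit(X, y)
print(gnb.score(X, y))   # training accuracy
```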
Code Snippet of Random Forest Classifier:
RFC.py
Figure 2: Admin Login Screen of the application
Figure 4: Enter Input Attributes Screen in the application
Figure 6: Result Screen
From Table 5.3.3.1, the Voting ensemble classifier gave the best accuracy,
94.67 percent, compared to the other ML classifiers.
Figure 7: Accuracy of all models
From Figure 7, the voting classifier gave the highest accuracy, 94.67
percent, compared to the other ML models.
Figure 8: Precision of all models
From Figure 8, the voting classifier gave the highest precision, 94.17
percent, compared to the other ML models.
Figure 9: Recall of all models
From Figure 9, the voting classifier gave the highest recall score, 94.26
percent, compared to the other ML models.
Figure 10: F1-Score of all models
From Figure 10, the voting classifier gave the highest F1-score, 93.98
percent, compared to the other ML models.
Figure 11: MCC Score of all models
From Figure 11, the voting classifier gave the highest MCC score, 94.50
percent, compared to the other ML models.
Figure 12: Kappa Score of all models
From Figure 12, the voting classifier gave the highest Kappa score, 94.49
percent, compared to the other ML models.
Chapter 6
6.1 Introduction
The goal of testing is to find mistakes. Testing is the practise of attempting
to find all possible flaws or weaknesses in a work product. It allows you to
test the functionality of individual components, subassemblies, assemblies,
and/or a finished product. It is the process of testing software to ensure
that it meets its requirements and meets user expectations, and that it does
not fail in an unacceptable way. There are many different types of tests.
Each test type is designed to fulfil a distinct testing need.
Output : identified classes of application outputs must be exercised.
White Box Testing
White Box Testing is a type of software testing in which the tester is
familiar with the software's inner workings, structure, and language, or at
the very least its purpose. It is used to test areas that cannot be reached
at the black box level.
Integration testing
Integration tests are used to see whether two or more software components
work together as a single application. Testing is event-driven and focuses
mainly on the outcome of screens or fields. Integration tests demonstrate
that although the components were individually satisfactory, as shown by
successful unit testing, their combination is also correct and consistent.
Integration testing specifically aims at uncovering issues that arise from
the integration of components.
6.3 Validation
Test strategy and approach
Field testing will be performed manually, and functional tests will be written
in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
Integration Testing:
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
Acceptance Testing
User acceptance testing is an important aspect of any project, and it
requires active engagement from the end user. It also guarantees that the
system satisfies the functional specifications.
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
Chapter 7
CONCLUSION & FUTURE WORK
CONCLUSION
This project focused on crop prediction with several machine learning
algorithms: K-NN, RF, DTC, GNB, SVM, GB, XGBoost, and a Voting classifier.
Based on the crop dataset, we calculated the accuracy of all models. The
experimental results show that the Voting classifier gave the highest
accuracy, 94.67 percent, compared to the other algorithms. This system is
therefore useful for users such as farmers to predict which crop to
cultivate in various agricultural fields. As future scope, we will apply
further machine learning algorithms to the prediction of crop yields.
FUTURE WORK
In the coming years, we will try to build a data-format-independent system:
whatever the format, our system should work with the same accuracy.
Integrating soil details into the system would be a plus, as information on
soil is also a parameter in the choice of crops. Proper irrigation is
another required feature of crop cultivation; in relation to rainfall, it
can indicate whether additional water availability is needed or not. This
work can be raised to a higher level by making it available across the
whole of India.
Chapter 8
REFERENCES
[1] Arun Kumar, Naveen Kumar, Vishal Vats, “Efficient Crop Yield Prediction
Using Machine Learning Algorithms”, International Research Journal of
Engineering and Technology (IRJET), e-ISSN: 2395-0056, p-ISSN: 2395-0072,
Volume 05, Issue 06, June 2018.
[2] Aakash Parmar & Mithila Sompura, "Rainfall Prediction using Machine Learning", 2017
International Conference on (ICIIECS) at Coimbatore Volume: 3, March 2017.
[3] Vinita Shah & Prachi Shah, "Groundnut Prediction Using Machine Learning Techniques “,
published in IJSRCSEIT. UGC Journal No: 64718, March-2020.
[4] Prof. D. S. Zingade, Omkar Buchade, Nilesh Mehta, Shubham Ghodekar,
Chandan Mehta, “Machine Learning-based Crop Prediction System Using
Multi-Linear Regression,” International Journal of Emerging Technology and
Computer Science (IJETCS), Vol. 3, Issue 2, April 2018.
[5] Priya, P., Muthaiah, U., Balamurugan, M.” Predicting Yield of the Crop Using Machine
Learning Algorithm”,2015.
[6] Mishra, S., Mishra, D., Santra, G. H., “Applications of machine learning techniques in
agricultural crop production”,2016.
[7] Ramesh Medar & Anand M. Ambekar, “Sugarcane Crop Prediction Using Supervised
Machine Learning" published in International Journal of Intelligent Systems and Applications
Volume: 3 | August 2019.
[8] Nithin Singh & Saurabh Chaturvedi, “Weather Forecasting Using Machine Learning”, 2019
International Conference on Signal Processing and Communication (ICSC) Volume: 05 | DEC-
2019.