Automation of Farming Using Deep Learning: Final Presentation


Final Presentation

AUTOMATION OF FARMING
USING DEEP LEARNING

By
Rahul Sharma
Roll No : N269
Sap Id: 70471118076
Course: MBA-Tech CS

Under the guidance of


Prof. Sachin Chavan

Department of Computer Engineering MPSTME, Shirpur Campus 2020-2021


6/28/2021 1
Outline
• Introduction
• Problem Definition
• Objective
• Methodology
• Data Set
• Model Building
• Model Tuning
• Model Evaluation
• Work Description and Results
• Progress Table
• References

2
Introduction
• Automation is essential for boosting the growth of the agricultural sector.

• Deep-learning-based methods have emerged as a promising way to perform this automation.

• A system that plucks vegetables from the field just as a human does would greatly improve the efficiency of farmers.

3
Problem Statement
• Farmers face severe labour shortages and inefficient harvesting cycles because human labour has physical limits. Workers can put in only a limited number of hours a day, so farmers struggle with productivity: even after working all day, labour stops at night because workers must rest for the next day.

• A system that plucks vegetables from the field just as a human does would greatly improve the efficiency of farmers. Working with the embedded-systems team, a robotic arm is to be designed that can pluck raw and ripe tomatoes from tomato plants without any human intervention. The goal of my team is to write computer-vision code that enables the robotic arm to pluck the tomatoes in the field.

4
Objective
• The objective of this project is to develop algorithms and programs capable of performing the following tasks:

– Identify raw and ripe tomatoes in an image
– Identify raw and ripe tomatoes in a video
– Count the number of raw and ripe tomatoes in a frame
– Determine the percentage colour density of a tomato in a video as well as in an image

5
Methodology

Model          Pascal VOC 2007 mAP   Speed
R-CNN          66.0                  0.05 FPS (20 s/image)
Fast R-CNN     70.0                  0.5 FPS (2 s/image)
Faster R-CNN   73.2                  7 FPS (140 ms/image)
YOLO           63.4                  45 FPS (22 ms/image)

6
Methodology

7
Methodology

8
Methodology
Fast R-CNN

9
Methodology
Faster R-CNN

10
Methodology

11
YOLO Framework

12
Dataset
• The dataset (images) was made available by the company. Since it consisted of raw images only and no pre-trained model was available, the images had to be labeled before they could be used to train a custom YOLOv3 model.
• Around 900 images of raw and ripe tomatoes were available, which included:
– 450 images labeled Raw
– 450 images labeled Ripe
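
For reference, YOLOv3 (Darknet) expects each labeled image to have a companion .txt file with one line per object: the class index followed by the box centre, width, and height, all normalised to 0-1. The values below are made-up examples, assuming class 0 = Raw and 1 = Ripe:

0 0.481 0.632 0.118 0.097
1 0.205 0.300 0.140 0.150

(The first line marks a raw tomato whose box is centred at 48.1% of the image width and 63.2% of its height; the second marks a ripe one.)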

13
Model Building
YOLO Framework

● The detection speed and accuracy of YOLOv3 meet the real-time requirements of the detection process.
● YOLOv3 provides the best precision on small objects.
● YOLOv3 uses a multilabel approach, which allows an individual bounding box to carry multiple, more specific class labels. This is helpful when differentiating between raw and ripe tomatoes.
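
The slides do not include the inference code itself; the sketch below shows one common way to run a custom-trained YOLOv3 model with OpenCV's DNN module. The file names (yolov3_tomato.cfg, yolov3_tomato.weights, tomato.jpg) and the 0.5 confidence threshold are placeholders, not the project's actual values.

import cv2
import numpy as np

# Load the custom-trained network (placeholder file names).
net = cv2.dnn.readNetFromDarknet("yolov3_tomato.cfg", "yolov3_tomato.weights")
out_names = net.getUnconnectedOutLayersNames()

# YOLOv3 expects a square input blob, commonly 416x416, scaled to 0-1.
img = cv2.imread("tomato.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(out_names)

# Each detection row is [cx, cy, w, h, objectness, class scores...].
for detection in np.vstack(outputs):
    scores = detection[5:]
    class_id = int(np.argmax(scores))
    if scores[class_id] > 0.5:
        print("class", class_id, "score", float(scores[class_id]))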

14
YOLO Framework

We split the image into a 7×7 grid
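
As a tiny illustrative sketch (the coordinates are example values), the grid cell that contains an object's centre is the one responsible for predicting that object:

S = 7                               # grid size used on this slide
x_center, y_center = 0.62, 0.31     # normalised centre of an object (example values)
col, row = int(x_center * S), int(y_center * S)
print(f"grid cell (row={row}, col={col}) predicts this object")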


15
YOLO Framework

16
YOLO Framework

Each box predicts:

P(Object): the probability that the box contains an object

C = P(Object) × IoU

17
Intersection Over Union (IoU):
• IoU is used to evaluate the object detection algorithm
• It is the overlap between the ground truth and the predicted bounding box
• Usually, the IoU threshold is set to a value greater than 0.5.
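
A minimal sketch of IoU, together with the confidence score C = P(Object) × IoU from the previous slide. The corner-coordinate box format and all numbers are assumptions for illustration only:

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

predicted = (50, 40, 150, 140)      # example predicted box
ground_truth = (60, 50, 160, 150)   # example ground-truth box
p_object = 0.9                      # example objectness score
print(iou(predicted, ground_truth), p_object * iou(predicted, ground_truth))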

18
YOLO Framework

19
YOLO Framework

20
YOLO Framework

21
YOLO Framework
Non-Max Suppression
• Non-max suppression is a technique that ensures the algorithm detects each object only once, by keeping the highest-confidence box and suppressing overlapping duplicates.
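
The slide does not show the implementation; one common way, assuming an OpenCV-based pipeline with boxes in (x, y, w, h) pixel format, is OpenCV's built-in NMS. All numbers below are illustrative:

import cv2

# Two overlapping detections of the same tomato plus one separate detection.
boxes = [[100, 80, 60, 60], [105, 85, 58, 62], [300, 200, 50, 55]]
confidences = [0.90, 0.75, 0.80]

# Keep only the highest-scoring box among heavily overlapping ones.
kept = cv2.dnn.NMSBoxes(boxes, confidences, score_threshold=0.5, nms_threshold=0.4)
print(kept)  # indices of the surviving boxes, e.g. 0 and 2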

22
YOLO Framework

Example class scores for a detected box: Raw = 0.8, Ripe = 0

23
Work Description
• Any machine learning project moves from a well-formed problem statement through data collection and preparation, model training and improvement, and finally inference.

The machine learning workflow followed in the project

24
Model Tuning
• The training parameters for this project are stated as follows:

1. Batch hyper-parameter in YOLOv3

2. Subdivisions configuration parameter in YOLOv3

3. Momentum and Decay

25
Model Tuning
• The training parameters for this project (continued):

4. Learning Rate

5. Number of iterations
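
The slides do not state the values used; for illustration, these parameters live in the [net] section of the Darknet yolov3.cfg file, and the values below are common starting points for a two-class (Raw/Ripe) model rather than the project's actual settings:

[net]
# images processed per training step
batch=64
# split of the batch so it fits in GPU memory
subdivisions=16
momentum=0.9
decay=0.0005
learning_rate=0.001
# number of training iterations; a common rule of thumb is classes x 2000
max_batches=4000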

26
Model Evaluation

27
Results

Raw and Ripe Tomato Detection

28
Work Description
Image 1

29
Work Description
Image 2

30
Work Description
Image 2

31
Work Description
Image 3

32
Work Description
Image 4

33
Results

Percentage Colour Detection
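
The slides show only the output images; a minimal sketch of one common approach to colour-density estimation is to convert the detected tomato crop to HSV and report the share of red (ripe-coloured) pixels. The HSV thresholds and file name are assumptions, not the project's values:

import cv2
import numpy as np

crop = cv2.imread("tomato_crop.jpg")            # a cropped tomato region (placeholder file)
hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two ranges (illustrative thresholds).
red1 = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
red2 = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
red_mask = cv2.bitwise_or(red1, red2)

percent_red = 100.0 * np.count_nonzero(red_mask) / red_mask.size
print(f"{percent_red:.1f}% of the crop is red (ripe-coloured)")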

34
Work Description
Image 1

35
Work Description
Image 2

36
Work Description
Image 3

37
Results

Counting total raw and ripe tomatoes
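
Once detection and non-max suppression have produced the surviving boxes, counting reduces to tallying their class IDs; the class order below assumes 0 = Raw and 1 = Ripe as in the earlier label-format example:

from collections import Counter

class_names = ["Raw", "Ripe"]        # assumed class order
kept_class_ids = [0, 0, 1, 0, 1]     # example class IDs of boxes that survived NMS
print(Counter(class_names[i] for i in kept_class_ids))  # Counter({'Raw': 3, 'Ripe': 2})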

38
Work Description
Video

39
Results

Detection of Raw and Ripe Tomatoes from a Video

40
Work Description
Video

41
Work Description
Laptop’s Web Camera
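
The slides show the webcam demo output only; below is a minimal sketch of the capture loop, reusing the placeholder model files and detection parsing from the Model Building sketch. Camera index 0 is the built-in webcam on most laptops:

import cv2

net = cv2.dnn.readNetFromDarknet("yolov3_tomato.cfg", "yolov3_tomato.weights")  # placeholders
out_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)                       # 0 = default (built-in) webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_names)
    # ...parse boxes, apply NMS, and draw them on 'frame' as in the earlier sketches...
    cv2.imshow("Tomato detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()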

42
Progress Table
Work Schedule | Expected Date of Completion | Status
1. Allotment of the project | 3rd May 2021 | Completed
2. Research on the project topic | 4th May 2021 | Completed
3. Data Collection | 14th May 2021 | Completed
4. Data Cleaning | 25th May 2021 | Completed
5. Training the model on the custom dataset | 28th May 2021 | Completed
6. Coding for detecting tomatoes from a live camera feed | 10th June 2021 | Completed
7. Coding for detecting tomatoes from a video | 13th June 2021 | Completed
8. Coding for detecting the % colour of tomatoes | 15th June 2021 | Completed
9. Coding to determine the count of raw and ripe tomatoes | 20th June 2021 | Completed
10. Presenting the generated model to ORA rental authorities | 24th-25th June 2021 | Completed
11. Model testing on real-time data | 26th-28th June 2021 | Completed
12. Project completion, issue of relieving letter and no dues certificate | 1st July 2021 | Pending
Thank You

44
