Flower Recognition System


A

PROJECT REPORT

ON

FLOWER RECOGNITION SYSTEM

BY

TUSHAR SUDHIR GAIKWAD

MCA – II, SEM – III, DIV-D

ROLL.NO - 417

SEAT.NO - 20338

2021-2022

TO
SAVITRIBAI PHULE PUNE UNIVERSITY
PUNE- 411041

IN PARTIAL FULFILLMENT OF
MASTER OF COMPUTER APPLICATION (M. C. A.)

UNDER THE GUIDANCE OF


Dr. SUNIL KHILARI

Sinhgad Technical Education Society’s


SINHGAD INSTITUTE OF MANAGEMENT,
Vadgaon Bk, Pune
(Affiliated to Savitribai Phule Pune University, Approved by AICTE
& Accredited by National Board of Accreditation, New Delhi)

Date:-

CERTIFICATE

This is to certify that Mr. TUSHAR SUDHIR GAIKWAD has successfully /

partially completed his project work entitled “FLOWER RECOGNITION SYSTEM”

in partial fulfillment of the MCA-II SEM-III Mini Project for the year 2021-2022. He has

worked under our guidance and direction.

Dr. Sunil Khilari Dr. Chandrani Singh


Project Guide Director, SIOM-MCA

Examiner 1 Examiner 2

Date:

Place:

DECLARATION
We certify that the work contained in this report is original and has been done by us
under the guidance of our supervisor(s).

 The work has not been submitted to any other Institute for any degree or diploma.

 We have followed the guidelines provided by the Institute in preparing the report.

 We have conformed to the norms and guidelines given in the Ethical Code of Conduct
of the Institute.

 Whenever we have used materials (data, theoretical analysis, figures, and text) from
other sources, we have given due credit to them by citing them in the text of the report
and giving their details in the references.

Name and Signature of Project Team Members:

Sr. No. Seat No. Name of students Signature of students

1 20338 Tushar Sudhir Gaikwad

ACKNOWLEDGEMENT

We take immense pleasure in expressing our sincerest and deepest
sense of gratitude towards our project guide for helping us complete this
project successfully. We have developed this project with the help of the
faculty members of our institute, and we are extremely grateful to all of
them. We also take this opportunity to thank the Director-MCA, Dr.
Chandrani Singh, for providing the facilities required to complete this
project. We are greatly thankful to our parents, friends and faculty members
for their motivation, guidance and help whenever needed.

Thank You,
Tushar Sudhir Gaikwad

Index

CHAPTER 1 : INTRODUCTION
  1.1 Existing System
  1.2 Need for System
  1.3 Operating Environment Hardware and Software
CHAPTER 2 : PROPOSED SYSTEM
  2.1 Proposed System (Introduction of System)
  2.2 Project Scope
  2.3 Objective of System
CHAPTER 3 : ANALYSIS MODEL (SDLC APPLIED)
  3.1 System Implementation Plan
CHAPTER 4 : SYSTEM DESIGN
  4.1 System Architecture Diagram
  4.2 Mathematical Model
  4.3 Data Flow Diagram
  4.4 Activity Diagram
  4.5 Use Case Diagram
  4.6 Sequence Diagram
  4.7 ER Diagram
  4.8 Class Diagram
  ALGORITHM DETAILS
  Proposed Algorithm
CHAPTER 5 : USER MANUAL
  5.1 Results Screenshots (Input & Output)
OTHER SPECIFICATIONS
References

CHAPTER : 1
INTRODUCTION

With the increase in research and invention, new species of flowers are
discovered frequently. Not everyone has knowledge of these flowers;
identifying them requires an expert's experience and practical knowledge,
which is quite difficult to obtain.

With only an image, there is no way to obtain further details about a
flower without consulting a botanist. To search for information on the
internet, at least one keyword related to that flower must be known.
Although there are methods of searching by input image (such as Google
Image Search), the derived results are often irrelevant to what we want.
Recently, Android applications have been developed and research has been
done on the use of machine learning for recognition of objects, including
flowers. As machine learning technology advances, sophisticated models
have been proposed for automatic plant identification; one such approach
used a probabilistic neural network as a classifier to identify plant leaf
images. Similarly, our application, "FLOWER RECOGNITION SYSTEM", helps
recognize a flower image in order to retrieve further information such as
its common name, scientific name, kingdom and uses. In the proposed
software, color, shape and texture are used to extract the features that
feed the models for comparing images to find the exact flower. The basis
of the software is a dataset containing various images of flowers, which
is split into training and test sets that keep all the information related
to each flower image. Our application uses a CONVOLUTIONAL NEURAL NETWORK
(CNN) to train on the datasets. For comparison, an image has to be
uploaded. The user will then be able to derive important information
related to the input flower image, such as the flower's scientific name,
botanical information and so on. The information provided can then be used
for further information-gathering activities.

The "Flower Recognition System" helps to identify flower species from
images, through image recognition. Along with identification, the
application also provides additional information about the particular
flower (common name, scientific name and higher classifications) and its
uses. Such an application will be useful not only for regular users who
would like to identify a plant of an unknown species, but also for
professionals in botany and related fields. It can also raise interest in
nature among non-expert users.

In this proposed system we develop an efficient model for a Flower
Recognition System using a Convolutional Neural Network (CNN).

1.1 Existing System

The previously collected images of several flowers and their
corresponding labels will be used to train the model. Once trained, the
model takes the image of a flower as input and predicts the name of the
flower, its family name, the colors in which the flower is found, and the
countries in which it is found.

Displaying additional features such as location and colors thus increases
the functionality of the system.

1.2 Need for System

Our country has many flower and plant researchers, and they want correct
and simple results. The system needs to be developed for researchers and
their investigations. Many similar applications already exist, but our
application provides simplicity and correctness of results; to ensure
correct results, we apply some mathematical operations in the system.

1.3 Operating Environment Hardware and Software

A] Hardware Specifications:
Processor : Core i3
RAM : 4 GB
Hard Disk : 1 TB

B] Software Specifications:
Operating System : Windows 10 (64-bit)
Front End Tools : HTML, CSS, Bootstrap
Back End Tools : Python, Datasets

C] Tools Used:
IDE : PyCharm, Anaconda

CHAPTER : 2
Proposed System

2.1 Proposed System

The main purpose of the project is to help botanical students recognise
flowers and research them. Previously, students did not know a new type
of flower and could not recognise it properly, but our application helps
them learn a flower's botanical and scientific names, its color, texture,
size, etc.

With the increase in research and invention, new species of flowers are
discovered frequently. Not everyone has knowledge of these flowers, and
the expert experience and practical knowledge required are quite
difficult to obtain. In this proposed system we develop an efficient
model for a Flower Recognition System using a Convolutional Neural
Network (CNN).

2.2 PROJECT SCOPE

Similar projects have recently been developed for identifying flowers, as
well as plants through their leaves. The key challenge faced by the
developers is finding proper feature-extraction factors for plants and
flowers, since there are many variations in the shape, color and texture
of flowers. During the development of these projects, it was observed
that most systems focused on the computational logic involved in image
representation. Thus the main challenge identified was the semantic gap,
which occurs because of the difference between the representation of the
digital image and human perception.

The objectives of this project are:
1. To identify a certain flower from its image by training the
application with datasets.
2. To provide additional information and usage about the identified
flower.
3. To greatly speed up the process of flower species identification,
collection and monitoring.

2.3 Objective Of System

Functional requirements define the internal workings of the software:

the technical details, data manipulation and processing, and other
specific functionality that show how the use cases are to be satisfied.
They are supported by non-functional requirements, which impose
constraints on the design and implementation.

CHAPTER : 3
Analysis Model

In this Waterfall model, typically, the outcome of one phase acts as the
input for the next phase sequentially.
The following illustration is a representation of the different phases of the
Waterfall Model.

Figure 3.6: Waterfall Model

The sequential phases in Waterfall model are −


Requirement Gathering and analysis − All possible requirements of the
system to be developed are captured in this phase and documented in a
requirement specification document.
System Design − The requirement specifications from the first phase are
studied in this phase and the system design is prepared. This system
design helps in specifying hardware and system requirements and in
defining the overall system architecture.
Implementation − With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is
referred to as Unit Testing.
Integration and Testing − All the units developed in the implementation
phase are integrated into a system after testing of each unit. Post
integration the entire system is tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is
done, the product is deployed in the customer environment or released
into the market.
Maintenance − Some issues come up in the client environment; patches are
released to fix them, and better versions are released to enhance the
product. Maintenance is done to deliver these changes in the customer
environment.

All these phases are cascaded to each other in which progress is seen as
flowing steadily downwards (like a waterfall) through the phases. The
next phase is started only after the defined set of goals are achieved for
previous phase and it is signed off, so the name "Waterfall Model". In
this model, phases do not overlap.
Most of the preprocessing is done automatically which is one of the major
advantages of CNN. In addition to that input images were resized.
Augmentation is also applied which increases the size of the dataset by
applying operations such as rotation, shear etc.

To increase the user friendliness of the system, the model was deployed
into a web application

SYSTEM IMPLEMENTATION PLAN

The proposed system was implemented as follows:


Step 1: Image acquisition: This step involves collecting images that can
be used to train the model so that later when it comes across an unknown
image, it can identify the flower based on the knowledge acquired during
the training phase.
Step 2: Image Preprocessing: Here the images collected in the previous
step were resized and augmented to increase the efficiency of the model.
During augmentation, the size of the dataset would be increased by
performing operations such as rotation, shear etc. Then the image will be
split into 75% training and 25% testing sets.
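Step 2 could be sketched as follows with NumPy. The 75/25 split matches the text above, while rotation and horizontal flips stand in for the rotation and shear augmentation; the function names and the exact augmentation operations are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def augment(images):
    """Enlarge the dataset with simple rotations and flips
    (a stand-in for the rotation/shear augmentation described above)."""
    out = []
    for img in images:
        out.append(img)
        out.append(np.rot90(img))   # 90-degree rotation
        out.append(np.fliplr(img))  # horizontal flip
    return out

def train_test_split(images, labels, train_frac=0.75, seed=0):
    """Shuffle and split into 75% training / 25% testing sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    cut = int(len(images) * train_frac)
    train_idx, test_idx = idx[:cut], idx[cut:]
    return ([images[i] for i in train_idx], [labels[i] for i in train_idx],
            [images[i] for i in test_idx], [labels[i] for i in test_idx])
```

With 100 input images, the split yields 75 training and 25 testing samples, and each pass of `augment` triples the dataset size.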
Step 3: Training Phase: This is the step where the actual training of the
model takes place. In this phase the model extracts features such as color
and shape of the flower used for training. Each of the training images will
be passed through a stack of layers which includes convolutional layer,
Relu layer, pooling layer and fully connected layer.
Step 4: Validation phase: Once the model completes its training from the
training set it tries to improve itself by tuning its weight values. The loss
function used is categorical cross entropy and the optimizer used is
stochastic gradient descent.
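The loss and optimizer named in Step 4 can be written out in a few lines of NumPy. This is a hedged sketch of the mathematics only, not the training loop actually used in the project.

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over a batch of one-hot labels."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

def sgd_step(weights, grads, lr=0.01):
    """One stochastic-gradient-descent update on a weight array."""
    return weights - lr * grads
```

A confident, correct prediction gives a lower loss than a uniform one, which is exactly the signal the optimizer uses to tune the weight values.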
Step 5: Output prediction: Once the validation phase is over, the model
is ready to take an unknown image of a flower and predict its name from
the knowledge gained during the training and validation phases. Once the
classification is done, the model displays the common name as well as the
family name of that flower.
Step 6: Display Features: Once the identity of the flower is found out, a
previously created CSV file is imported and the benefits of the
corresponding flower will be found out and displayed to the user.
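Step 6's CSV lookup might look like the following sketch. The report does not give the file's actual column names, so the ones below (`name`, `family`, `benefits`) and the sample rows are hypothetical.

```python
import csv
import io

# Hypothetical CSV of flower details, as described in Step 6; the real
# file's columns and contents may differ.
FLOWER_CSV = """name,family,benefits
rose,Rosaceae,perfume and rose water
sunflower,Asteraceae,edible oil from seeds
"""

def lookup_benefits(flower_name, csv_text=FLOWER_CSV):
    """Return the stored details for a predicted flower name, if any."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        if row["name"] == flower_name.lower():
            return row  # dict with name, family, benefits
    return None
```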
Step 7: Web Application: Finally the developed model was deployed
into a web application which further makes the system more user friendly.

DIAGRAMS

CHAPTER : 4
SYSTEM DESIGN

4.1 SYSTEM ARCHITECTURE

Figure 4.1: System Architecture Diagram

During the training process, the model discovers features and patterns and
learns them. This knowledge is then used to later find the name of a
flower when a new flower image is given as input. Categorical cross
entropy is used as loss function. Initially the loss values would be very
high but as the process advances the loss function is reduced by adjusting
the weight values. Once the classification is done, the CSV file is
imported and the major uses of that plant would be displayed.

4.2 MATHEMATICAL MODEL

The convolutional layer is the core building block of a CNN. The layer's
parameters consist of a set of learnable filters. The convolution of an
input image f with a kernel h is:

G[m, n] = (f * h)[m, n] = Σj Σk h[j, k] · f[m − j, n − k]

Equation 4.2.1

f = input image
h = kernel
m, j = index of rows
n, k = index of columns

A POOL layer is inserted between successive convolutional layers,
applying a down-sampling operation along the spatial dimensions (width
and height). It uses the MAX operation to optimize the spatial size of
the representation as well as to reduce the number of parameters.

(nh − f + 1)/s × (nw − f + 1)/s × nc

Equation 4.2.2

nh = height of feature map
nw = width of feature map
nc = number of channels in the feature map
f = size of filter
s = stride length
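To make the two equations above concrete, here is a minimal NumPy sketch (not the project's code) of a "valid" convolution per Equation 4.2.1; with stride s = 1, Equation 4.2.2 gives an output of size (nh − f + 1) × (nw − f + 1) per channel.

```python
import numpy as np

def conv2d_valid(f_img, h_kernel):
    """'Valid' 2D convolution of input image f with kernel h
    (Equation 4.2.1); the output size follows Equation 4.2.2 with
    stride 1: (nh - f + 1) x (nw - f + 1)."""
    nh, nw = f_img.shape
    k = h_kernel.shape[0]
    # Flip the kernel for true convolution (cross-correlation omits this).
    hk = np.flipud(np.fliplr(h_kernel))
    out = np.zeros((nh - k + 1, nw - k + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(f_img[m:m + k, n:n + k] * hk)
    return out
```

For a 4×4 input and a 3×3 filter, the output is (4 − 3 + 1) × (4 − 3 + 1) = 2 × 2, matching Equation 4.2.2.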

4.3 DATA FLOW DIAGRAM

Figure 4.3: Data Flow Diagram

This diagram depicts how the process is carried out from the user's end
as well as from the system side. Initially the user inputs the image of a
flower; then pre-processing of the image is done internally. The model,
trained using the CNN algorithm, then classifies the image based on
localized features. At the end of the process, the user sees the result
generated by the model.

4.4 ACTIVITY DIAGRAM

Figure 4.4: Activity Diagram

In this activity diagram, activities are shown in the order in which they
occur. Initially the screen is displayed. Then an image is uploaded to
the system. The system decides the next task after recognizing the image.
If the image is detected, the recognition results are declared; if not,
the process is repeated.

4.5 USE CASE DIAGRAM

Figure 4.5: Use Case Diagram

The above diagram represents the actors: User and Flower Recognition
System. The use cases which drive the process to completion are Start
Application, Select Image, Upload Image, Receive Image, Process the
Image, Upload Flower Name and its Features, and View Flower Name and its
Features.

Use Case Description :

Section      Purpose

Name         Use Case System

Description  This use case describes the roles of the User and the
             System. The User first starts the application, selects an
             image from the internet or the local system, and then
             uploads it. The System sends the image to the model for
             training; after successful training, the image is sent for
             testing.

Actors       User, System

4.6 SEQUENCE DIAGRAM

Figure 4.6: Sequence Diagram

The above diagram represents the interaction between various components.

The application installed on the user's phone is used to upload the
image of a flower. The system receives the image, processes it, and
uploads the result message. Using the system, the user identifies the
name of the flower and its different features.

4.7 ENTITY RELATIONSHIP DIAGRAM

The Flower Recognition System requires a trained dataset containing a
collection of images on which training is done so that each flower can
be labeled. Training is done on a limited number of images, so
recognition can also be done only on these images. Features extracted
from the captured image are labeled according to this dataset. The ERD
of the system is given below.

Figure 4.7 : ER Diagram

Entities & Attributes

1. User
2. Image
   I) Resolution (must be within the range of 2-4 megapixels)
   II) Clutter (must be less cluttered)
   III) Background (must have a homogeneous background)
   IV) Type (gif, jpeg, etc.)
3. Flower Recognition System
4. Flower Recognition Application
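The constraints on the Image entity could be checked with a small validator. The function below mirrors the attributes listed above; its name and signature are purely illustrative.

```python
def validate_image(width, height, file_type):
    """Check the Image-entity constraints listed above:
    resolution within 2-4 megapixels and a supported file type."""
    megapixels = width * height / 1_000_000
    if not (2 <= megapixels <= 4):
        return False
    return file_type.lower() in {"gif", "jpeg", "jpg"}
```

For example, a 2048×1536 JPEG (about 3.1 megapixels) passes, while a 640×480 image is rejected as too small.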

4.8 CLASS DIAGRAM

Figure 4.8: Class Diagram

1. User
This class contains the functions directly related to the UI.
Pressing_CaptureImage_Button() is called when an external user presses
the Capture Image button; it then invokes the functions in the
CaptureImage class, which processes the request.
Pressing_RecognizeImage_Button() is called when an external user presses
the Recognize Image button; it then invokes the functions in the
RecognizeImage class, which processes the request.
Pressing_UsingApplication_Button() is called when an external user
presses the Using Application button; it then invokes the functions in
the UseApplication class, which processes the request.
2. CaptureImage
This class contains OpenningCamera(), takingPicture() and
adjustingCamera(). It is solely responsible for capturing the image.
3. ProcessImage
The main role of this class is to pre-process the image. Pre-processing
is done by binarize(), blurrinessReduction(), noiseRemoval() and
brightnessAdjustment(), so that the image meets the criteria for further
processing. It takes the image from the CaptureImage class.
4. LocalizeImage
This class localizes the flower regions from the image using its
functions. These functions take the output of the previous stage as
input. This class uses the pre-processed image from the ProcessImage
class for processing.
5. RecognizeImage
This class matches the features taken from the LocalizeImage class
against the trained dataset from the TrainDataSet class and labels them
accordingly. This is done in the Classification() function. The final
recognition result is then sent to the user by the RecognizeImage()
function, and the recognized image is used by the UseApplication class.
6. UseApplication
This class contains all the processing functionality related to the
application of the flower recognition system. Its apiProcessing()
function handles all the processing on the recognition result retrieved
from the RecognizeImage class; the result is then passed to the user.
7. TrainDataSet
This class contains the training dataset and the processing on it, so
that extracted features can be labelled according to the trained
dataset.
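The class responsibilities above can be summarised as a Python skeleton. The method bodies are placeholders, since the report documents only what each class should do; the snake_case method names are an adaptation to Python conventions, not the report's identifiers.

```python
class CaptureImage:
    """Solely responsible for capturing the image."""
    def opening_camera(self): ...
    def adjusting_camera(self): ...
    def taking_picture(self):
        return "raw image"  # placeholder for a captured frame

class ProcessImage:
    """Pre-processes the image so it meets the criteria for
    further processing."""
    def preprocess(self, image):
        # binarize, reduce blurriness, remove noise, adjust brightness
        return f"preprocessed {image}"

class RecognizeImage:
    """Matches localized features against the trained dataset."""
    def classification(self, image, dataset):
        return "flower label"  # placeholder for the predicted label
```

A call chain mirroring the diagram would pass a captured image through `ProcessImage.preprocess` and then into `RecognizeImage.classification`.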

ALGORITHM DETAILS

IMAGE PROCESSING
This process is accessed by the administrator. This module performs
certain activities such as scanning images and recognizing features in
images to transfer them into an identifiable format. The module
supports the following services:-
 Scanning images.
 Storing the images.
 Processing those images.
 Recognizing the features in images.
SYSTEM TRAINING
This module can be accessed by both the administrator and the end-user.
Before converting the images into editable and searchable images, the
first and mandatory step is training the system. Training here means
that the image should be identified by the user; the user then enters
all the features required for recognition from the image.
This image file should be provided as input during the training
process. The user then clicks the Train button provided in the
recognition module, and the training completes, making the system
familiar with the new image. This module supports:-
 Training the system with images of flowers.
 Training the system with new flower images that are not present in
the system and cannot yet be identified by it.
IMAGE RECOGNITION
This process can be accessed by both the administrator and the end-user.
Once the images are trained, the system can recognize the features
present in an image. This means the system can recognize the features of
any flower image the user chooses, which makes flower recognition more
flexible. This is the module where the main functionality of the Flower
Recognition System is tested.
In scanned-image recognition, the system is first trained with the
features in the image in the training module itself. In the recognition
module, the system takes the scanned image as an input file, first crops
it, then extracts/recognizes the features from it, and makes the image
editable and searchable. Hence the image recognition module as a whole
supports the following services:-
 Converts the image into a specific format
 Recognizes the features
 Heterogeneous flower recognition

IMAGE SEARCHING
This process can be accessed by both the administrator and the end-user
when searching for the image on which the flower recognition process is
to be run. The user requests the system to search for a particular
image; the system then finds images using the CNN methodology and
returns the search results to the user.

PROPOSED ALGORITHM

1. Line up the feature and the image patch.
2. Multiply each image pixel by the corresponding feature pixel.
3. Add them up.
4. Divide by the total number of pixels in the feature.
5. Repeat the process until all the pixel values of the image are
covered.
6. Remove negative values from the filtered images.
7. Replace them with zero.
8. Declare the window size as 3.
9. Filter the image according to the window size.
10. Take the maximum value from the given values in the window.
11. Repeat this process for all the images.
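The eleven steps above (convolution by averaging over the feature, ReLU, then max pooling with a window of 3) can be sketched in NumPy. This is an illustrative reading of the steps, not the project's code; in particular, the pooling stride is assumed to equal the window size, which the steps leave unstated.

```python
import numpy as np

def filter_image(image, feature):
    """Steps 1-5: slide the feature over the image, multiply the
    overlapping pixels, add them up, and divide by the number of
    pixels in the feature."""
    k = feature.shape[0]
    nh, nw = image.shape
    out = np.zeros((nh - k + 1, nw - k + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            patch = image[m:m + k, n:n + k]
            out[m, n] = np.sum(patch * feature) / feature.size
    return out

def relu(filtered):
    """Steps 6-7: replace negative values with zero."""
    return np.maximum(filtered, 0)

def max_pool(filtered, window=3):
    """Steps 8-11: take the maximum in each window-sized block
    (stride assumed equal to the window size)."""
    nh, nw = filtered.shape
    oh, ow = nh // window, nw // window
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            block = filtered[i * window:(i + 1) * window,
                             j * window:(j + 1) * window]
            out[i, j] = block.max()
    return out
```

Running a 10×10 image through a 2×2 feature gives a 9×9 filtered map; after ReLU, 3×3 max pooling reduces it to 3×3.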

Table Name : ProcessImage
Description : The main role of this class is to pre-process the image.
Pre-processing is done by binarize(), blurrinessReduction(),
noiseRemoval() and brightnessAdjustment(), so that the image meets the
criteria for further processing. It takes the image from the
CaptureImage class.

Field Name      Field Type  Size  Constraint    Description
P_ID            int         10    Primary Key   Process image ID
Image           Varchar     20    NOT NULL      Uploaded image
ThresholdValue  Varchar     20    NOT NULL      Changes the pixel size of the image
MaskFilter      Varchar     30    NOT NULL      Applies the mask effect

Table Name : CaptureImage
Description : This class contains OpenningCamera(), takingPicture() and
adjustingCamera(). It is solely responsible for capturing the image.

Field Name        Field Type  Size  Constraint  Description
ScreenResolution  Varchar     10    NOT NULL    Screen resolution used for the image
CameraResolution  Varchar     20    NOT NULL    Camera resolution used for the image
DisplayHeight     Varchar     20    NOT NULL    Display height of the image
DisplayWidth      Varchar     20    NOT NULL    Display width of the image
Table Name : TrainDataset
Description : This class contains the training dataset and the
processing on it, so that extracted features can be labelled according
to the trained dataset.

Field Name    Field Type  Size  Constraint  Description
InputLayers   Varchar     10    NOT NULL    Brings the data from the system for processing
OutputLayers  Varchar     30    NOT NULL    Produces the given output for the program
OutputValue   Varchar     30    NOT NULL    Displays the value of the image
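The tables above can be expressed as SQL DDL. The sketch below creates the ProcessImage table in an in-memory SQLite database; the column names and sizes follow the table, but whether the project actually uses SQLite is an assumption.

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ProcessImage (
        P_ID           INTEGER PRIMARY KEY,   -- process image ID
        Image          VARCHAR(20) NOT NULL,  -- uploaded image
        ThresholdValue VARCHAR(20) NOT NULL,  -- pixel threshold
        MaskFilter     VARCHAR(30) NOT NULL   -- mask effect applied
    )
""")
conn.execute(
    "INSERT INTO ProcessImage VALUES (1, 'rose.jpg', '128', 'gaussian')"
)
row = conn.execute(
    "SELECT Image FROM ProcessImage WHERE P_ID = 1"
).fetchone()
```

SQLite ignores the VARCHAR length hints, so the sizes are documentation only; a production schema would enforce them in the application layer or a stricter database.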

CHAPTER : 5
USER MANUAL

5.1 Screen Shots

Fig.5.1.1 Index Page


5.2 Screen Shots

Fig.5.1.2 Selecting the file

The above image Fig. 5.1.2 shows the selection of a file for uploading an image.

5.3 Screen Shots

Fig.5.1.3 Image Uploaded

The above image Fig. 5.1.3 shows the uploaded image, and the message
"image uploaded successfully" is displayed.

5.4 Screen Shots

Fig.5.1.4 Final Output

The above Fig. 5.1.4 shows the final output.

OTHER SPECIFICATIONS

CONCLUSION

The flower, being the most attractive part, is the best way to identify a
plant; identifying the flower can thus help in learning more about that
plant. The proposed system takes an image of a flower as input and
displays the name of the flower, its family name, its colors, as well as
the countries in which the flower can be found. Since the model is based
on a convolutional neural network, which has proven to be one of the most
efficient image classification methods, the proposed system is highly
reliable.
The reason for doing any project is to tackle problems or to gain
knowledge. The main aim of this project was to learn machine learning
models and apply them. Upon completing this project, we were able to
create an application using machine learning models and Python functions
for image processing to extract features. This project has been a
platform for us to learn about machine learning and its uses, and about
image processing, through teamwork.

Future enhancement

Possible future developments of our program include:


1) Increasing the accuracy rate.
2) Validating whether the uploaded image is a flower or not.
3) Increasing the number of flower species that can be identified.
4) Converting the program into an Android-compatible app.
5) Adding a recommendation system.
6) Saving datasets in the cloud rather than on a local disk.

APPLICATIONS

a) Machine learning algorithms such as Convolutional Neural Networks for
identifying flower species would be of great help for industries like
pharmaceuticals and cosmetics.
b) There are a multitude of medicines and drugs made from plants; some
are made from the actual flowers, not just the leaves or roots.
c) Flowers are widely used for regular purposes like decoration, worship
and flower bouquets for greetings. Their attractiveness, fragrance and
color give them a prominent place in human daily life.

REFERENCES

[1] Hoo-Chang Shin, Holger R. Roth, Mingchen Gao, Le Lu, Ziyue Xu,
Isabella Nogues, Jianhua Yao, Daniel Mollura, Ronald M. Summers, "Deep
Convolutional Neural Networks for Computer-Aided Detection: CNN
Architectures, Dataset Characteristics and Transfer Learning", IEEE, 2016.
[2] Büşra Rümeysa Mete, Tolga Ensari, "Flower Classification with Deep
CNN and Machine Learning Algorithms", IEEE, 2019.
[3] Sathiesh Kumar, Gogul, "Flower species recognition system using
convolution neural networks and transfer learning", IEEE, 2017.
[4] Pavan Kumar Mishra, Sanjay Kumar Maurya, Ravindra Kumar Singh,
Arun Kumar Misra, "A semi-automatic plant identification based on
digital leaf and flower images", IEEE, 2012.
[5] Tanakorn Tiay, Pipimphorn Benyaphaichit, and Panomkhawn
Riyamongkol, "Flower Recognition System Based on Image Processing",
IEEE, 2014.
[6] Fadzilah Siraj, Muhammad Ashraq Salahuddin and Shahrul Azmi Mohd
Yusof, "Digital Image Classification for Malaysian Blooming Flower",
IEEE, 2010.
[7] Prof. Suvarna Nandyal, Miss. Supriya Bagewadi, "Automated
Identification of Plant Species from Images of Leaves and Flowers used
in the Diagnosis of Arthritis", IJREAT, Volume 1, Issue 5, Oct-Nov 2013.
[8] Yuita Arum Sari and Nanik Suciati, "Flower classification using
combined a*b* color and Fractal-based Texture feature", International
Journal of Hybrid Information Technology, Vol. 7, No. 2, 2014.
[9] Mari Partio, Bogdan Cramariuc, Moncef Gabbouj, and Ari Visa, "Rock
Texture Retrieval Using Gray Level Co-occurrence Matrix", (ITS)
Surabaya, Indonesia.
[10] Dr. S. M. Mukane and Ms. J. A. Kendule, "Flower Classification
Using Neural Network Based Image Processing", IOSR-JECE, Volume 7,
Issue 3, Sep-Oct 2013.
