
SHAPE RECOGNITION SYSTEM

REVIEW REPORT
Submitted by

Naga Sai Hemanth V-18MIS0009


Sannapaneni Nagarjuna Naidu-18MIS0231
B D V Krishna Gangadhar-18MIS0278
Dokku Akash-18MIS0281
Arja Lasya Sree-18MIS0352
Gattappagari Lokesh-18MIS0376
Paidi Shiva Rakshith-18MIS0402
S Kowshik-18MIS0414
Komal Bhagat-19MIS0380

for

SOFTWARE METRICS (SWE2020)


PROJECT COMPONENT

Submitted To
Dr. Nancy Victor

School of Information Technology and Engineering

Abstract
The human eye has a natural tendency to recognize shapes based on prior knowledge; vision therefore plays an important role in human cognition. We can apply the same idea in computers to help software recognize shapes. There are many existing systems that recognize shapes based on their color and size, but since different shapes may possess identical color and size values, these parameters are not sufficient to identify and recognize shapes.
In this project, a new system is proposed that recognizes shapes based on their edges in order to increase accuracy. We use the Canny edge detection method, which finds the edges of a shape by looking for local maxima of the image gradient. Shape recognition finds application in fingerprint analysis, robotics, handwriting mapping, remote sensing and many other areas. Recognizing and identifying shapes is one of the significant research areas in pattern recognition, whose main focus is the classification of objects. In this project, we focus on developing a shape recognition system and on improving the efficiency of our application using various software metrics.

Table of Contents

Abstract
1. Introduction
   1.1 Overview
   1.2 Work Breakdown
   1.3 Gantt Chart
2. Project Resource Requirements
   2.1 Software Requirements
   2.2 Hardware Requirements
3. Literature Survey
4. System Architecture
5. Use Case Diagram
6. Module Description
7. Software Metrics (used by Project Manager, Team Manager, Developer and Tester)
8. Work Done by Each Member and the Software Metrics Used
9. Output & Screenshots
10. Conclusion and Future Work
References

1. Introduction:
1.1 Overview
The human eye has a natural tendency to recognize shapes based on prior knowledge; vision therefore plays an important role in human cognition. We can apply the same idea in computers to help software recognize shapes. There are many existing systems that recognize shapes based on their colour and size, but since different shapes may possess identical colour and size values, these parameters are not sufficient to identify and recognize shapes. In this project, a new system is proposed that recognizes shapes based on their edges in order to increase accuracy. We use the Canny edge detection method, which finds the edges of a shape by looking for local maxima of the image gradient. Shape recognition finds application in fingerprint analysis, robotics, handwriting mapping, remote sensing and many other areas. Recognizing and identifying shapes is one of the significant research areas in pattern recognition, whose main focus is the classification of objects.
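The gradient idea described above can be sketched in a few lines of pure Python. This is an illustrative fragment only; the project itself is implemented in MATLAB, and the tiny image and helper below are made up for the sketch.

```python
# Minimal sketch of the idea behind Canny-style edge detection: edges sit
# where the image gradient magnitude reaches a local maximum.

def gradient_magnitude(img):
    """Central-difference gradient magnitude for the inner pixels of a grid."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (img[i][j + 1] - img[i][j - 1]) / 2.0
            gy = (img[i + 1][j] - img[i - 1][j]) / 2.0
            mag[i][j] = (gx * gx + gy * gy) ** 0.5
    return mag

# A 5x5 image with a vertical step edge between the left and right halves.
img = [[0, 0, 128, 255, 255]] * 5
mag = gradient_magnitude(img)
print(mag[2])  # [0.0, 64.0, 127.5, 63.5, 0.0]
```

The magnitude peaks at the step (127.5 in the centre column) and falls to zero in flat regions; those local maxima are exactly what the edge detector keeps.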

1.2 Work Breakdown

Hierarchy:

Project Manager : Komal Bhagat – 19MIS0380


TEAM 1 Manager : V N S Hemanth – 18MIS0009
Developer 1 : S Nagarjuna Naidu – 18MIS0231
Developer 2 : G Lokesh – 18MIS0376
Tester : B D V K Gangadhar – 18MIS0278
TEAM 2 Manager : Arja Lasya Sree – 18MIS0352
Developer 1 : Paidi Shiva Rakshith – 18MIS0402
Developer 2 : S Kowshik – 18MIS0414
Tester : D Akash – 18MIS0281

1.3 Gantt Chart

2. Project Resource Requirements:
2.1 Software Requirements
➢ IDE : MATLAB
➢ Project Management : INSTAGANTT

2.2 Hardware Requirements


➢ Processor : Intel Core i5
➢ Hard Disk Drive (HDD) : 2-4 GB
➢ RAM : 2-4 GB

3. Literature Survey:

1. Krystian Mikolajczyk, Andrew Zisserman, Cordelia Schmid
   Title: Shape recognition with edge-based features
   Review: Introduces a new edge-based feature detector that is invariant to similarity transformations. The features are localized on edges and a neighborhood is estimated in a scale-invariant manner.
   Methodology used: Lowe's SIFT method

2. Mohd Firdaus Zakaria, Hoo Seng Choon, and Shahrel Azmin Suandi
   Title: Object Shape Recognition in Image for Machine Vision Application
   Review: Proposes a shape recognition method for machine vision in which circle, square and triangle objects in an image are recognized by the algorithm. The method uses the intensity values of the input image, which are then thresholded by Otsu's method to obtain a binary image.
   Methodology used: Parameter estimation algorithm

3. Jelmer Philip de Vries
   Title: Object Recognition: A Shape-Based Approach using Artificial Neural Networks
   Review: The approach is shape-based and works towards recognition under a broad range of circumstances, from varied lighting conditions to affine transformations. The main emphasis is on the neural elements that allow the system to learn to recognize objects about which it has no prior information.
   Methodology used: SUSAN algorithm, insertion sort algorithm

4. Ohtani, Kozo & Baba, Mitsuru & Konishi, Tadataka
   Title: Position and posture measurements and shape recognition of columnar objects using an ultrasonic sensor array and neural networks
   Review: In most past methods of this kind, the characteristic quantities have been based on either time-of-flight methods or acoustic holographic methods, in which measuring and recognizing the width and depth directions simultaneously at high resolution is difficult in principle.
   Methodology used: Time-of-flight methods, acoustic holographic methods

5. Das, Manas Ranjan and Barla, Sunil
   Title: Object Shape Recognition
   Review: The approach is to classify some of the common objects around us and decide whether they belong to any geometric shape or not. The shape of an object can be represented by a feature space which may be used for recognizing it.
   Methodology used: Corner detection method, signature method and chain code method

6. El Abbadi, Nidhal & Saadi, Lamis
   Title: Automatic Detection and Recognize Different Shapes in an Image
   Review: Introduces a new approach for recognizing two-dimensional shapes in an image that also identifies the type of each shape.
   Methodology used: Statistical method and structural method; contrast-limited adaptive histogram equalization (CLAHE) is used.

7. Pedro F. Felzenszwalb
   Title: Representation and Detection of Shapes in Images
   Review: The methods revolve around a particular shape representation based on describing objects using triangulated polygons. This representation is similar to the medial axis transform and has important properties from a computational perspective.
   Methodology used: Segmentation algorithm, generic optimization methods

8. Gulce Bal, Julia Diebold, Erin Wolf Chambers, Ellen Gasparovic, Ruizhen Hu, Kathryn Leonard, Matineh Shaker, and Carola Wenk
   Title: Research in shape modeling
   Review: Presents a novel image recognition method based on the Blum medial axis that identifies shape information present in unsegmented input images.
   Methodology used: Geometric algorithm, SAT-EDF algorithms

9. Jaruwan Toontham and Chaiyapon Thongchaisuratkrul
   Title: Object Recognition and Identification System Using the Hough Transform Method
   Review: Presents an object recognition and identification system using the Hough Transform. The process starts from images imported into the system by webcam; image edges are detected by fuzzy logic, the object is recognized by the Hough Transform, and the objects are separated by a robot arm.
   Methodology used: Hough Transform method and Sobel edge detection algorithm

10. A. Ashbrook and N. A. Thacker
    Title: Algorithms For Two-Dimensional Object Recognition
    Review: Representing arbitrary shape for the purposes of visual recognition is an unsolved problem; the task of representation is intimately constrained by the recognition process, and one cannot be solved without some solution for the other. The authors have already done work on an associative neural network system for hierarchical pattern recognition of the sort that may ultimately be useful for generic object recognition.
    Methodology used: Stereo matching algorithm and thinning algorithm

4. System Architecture:

5. Use case diagram:

6. Module Description:

Image Pre-processing module:


The pre-processing performed well, as expected, on black-and-white images. For real colour images performance is of course less exact, especially for noisy or complex images. Looking at the shapes abstracted from different images of the same object, the pre-processing component performed rather well in terms of consistency. The quality of the edges is increased by applying differentiation.

Image Acquisition module:


This module captures an image and adds it to the dataset used to identify the shape in an image.

Morphological Processing module:


This module performs morphological dilation and erosion. Morphology is a broad set of image processing operations that process images based on shape. Morphological operations apply a structuring element to an input image and create an output image of the same size. The goal is to remove imperfections while accounting for the form and structure of the objects.
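The dilation and erosion just described can be sketched in pure Python with a 3x3 square structuring element. This is a minimal illustration, not MATLAB's imdilate/imerode, and the sample grid is invented.

```python
# Binary morphology sketch: dilation sets a pixel if ANY neighbour under
# the 3x3 element is set; erosion keeps it only if ALL neighbours are set.

def dilate(img):
    h, w = len(img), len(img[0])
    return [[int(any(img[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if 0 <= i + di < h and 0 <= j + dj < w))
             for j in range(w)] for i in range(h)]

def erode(img):
    h, w = len(img), len(img[0])
    return [[int(all(img[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if 0 <= i + di < h and 0 <= j + dj < w))
             for j in range(w)] for i in range(h)]

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(dilate(img))  # the 2x2 block grows to fill the whole 4x4 grid
print(erode(img))   # the 2x2 block is too small to survive erosion
```

Dilation followed by erosion (closing), or the reverse (opening), is what removes small imperfections while preserving the overall structure of the shapes.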

Feature extraction module:


Feature extraction reduces the large input data to a smaller representation so that it takes less time to process, while the extracted features must still retain the important information. We use the Canny edge detector to extract the edges. Feature extraction can be based on morphology, colour, edges, texture, etc.

Edge Detection module:


This module classifies the shape in an image based on measures such as area and perimeter. A software environment was designed to test and use the proposed method, and to evaluate its speed and accuracy.
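One common way to classify a shape from its area and perimeter, as this module does, is the circularity measure 4*pi*A/P^2 (1 for a circle, about 0.785 for a square, about 0.605 for an equilateral triangle). The thresholds below are assumptions chosen for this sketch, not values taken from the project.

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: how close a shape is to a circle (1.0 = perfect)."""
    return 4 * math.pi * area / perimeter ** 2

def classify(area, perimeter):
    c = circularity(area, perimeter)
    if c > 0.90:
        return "circle"
    if c > 0.70:
        return "square"
    return "triangle"

r, s = 5.0, 4.0
print(classify(math.pi * r * r, 2 * math.pi * r))   # circle
print(classify(s * s, 4 * s))                        # square
print(classify(math.sqrt(3) / 4 * s * s, 3 * s))     # triangle
```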

7. Software Metrics:

Metrics used by Project and Team managers


1.Productivity Metrics
These metrics can be used to track and measure how efficient our team is in getting their tasks
done. These metrics are used to manage and improve performance of our project as well as
highlight where we need to improve. Some productivity metrics used in our project are as
followed:
Planned-to-done ratio: The planned-to-done ratio calculates what percentage of the assigned
tasks were completed adequately. It also helps in tracking whether the project is getting done
in the way we planned them.
Effort per Team member: This metrics helps us to assess the effort of each team member.
This will help us to determine the overall productivity of the project and team members.
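Both metrics reduce to simple ratios; a sketch with sample figures (4 activities planned with 3 completed, and 12 activities over 13 weeks):

```python
# Sketch of the two productivity metrics above; the figures are samples.

def planned_to_done(planned, done):
    """Percentage of planned tasks that were actually completed."""
    return 100.0 * done / planned

def effort_per_member(activities_performed, weeks_worked):
    """Average activities performed per week by one member."""
    return activities_performed / weeks_worked

print(planned_to_done(4, 3))                # 75.0
print(round(effort_per_member(12, 13), 2))  # 0.92
```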

2. Cost Effectiveness metrics:


Cost effectiveness is a category of metrics used to measure the results of strategies, programs, projects and operations where the benefits are non-financial. These metrics help the project manager to find open-source tools by comparing the costs of tools and strategies.

3.Maintainability Index:
Calculates an index value between 0 and 100 that represents the relative ease of maintaining
the code. A high value means better maintainability. Color coded ratings can be used to quickly
identify trouble spots in your code. A green rating is between 20 and 100 and indicates that the
code has good maintainability. A yellow rating is between 10 and 19 and indicates that the code
is moderately maintainable. A red rating is a rating between 0 and 9 and indicates low
maintainability.
Maintainability Index = 171 - 5.2 * ln(Halstead Volume) - 0.23 * (Cyclomatic Complexity) -
16.2 * ln(Lines of Code)
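The formula above can be evaluated directly; the inputs used here are this project's measured values (Halstead volume 6480.31, cyclomatic complexity 13, 110 lines of code):

```python
import math

# Maintainability Index as defined above; math.log is the natural log.
def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

mi = maintainability_index(6480.31, 13, 110)
print(round(mi, 2))  # ~46.22: a green rating (20-100), good maintainability
```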

4. Quality metrics:
A major challenge in software maintenance is understanding the existing code, and this is where code quality metrics can have a big impact. These metrics also help to improve project efficiency.

5. Defect detection efficiency:
This metric will help us to assess the performance and productivity of the testers in order to ensure the quality of the product.
Defect detection efficiency = (Number of defects detected / Total number of defects) * 100
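A sketch of the formula, expressed as a percentage; the sample figures (2 of 3 defects detected) are invented:

```python
# Defect detection efficiency as a percentage of all known defects.
def defect_detection_efficiency(defects_detected, total_defects):
    return 100.0 * defects_detected / total_defects

print(round(defect_detection_efficiency(2, 3), 2))  # 66.67
```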

Metrics used by Developers


1.Lines of Code (LOC):
Lines of code can be used by the developers to measure the size of code developed. This in
turn helps the developer to assess few other parameters such as number of lines of code
developed per day and effort to develop it.

2.Cyclomatic Complexity:
This metric used to indicate the complexity of a program. It is a quantitative measure of the
number of linearly independent paths through a program's source code. This gives the
developer an idea about the complexity of code needed to develop.
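A rough sketch of the metric: cyclomatic complexity equals the number of predicate (decision) nodes plus one. Counting decision keywords textually, as below, is a simplification of what real tools do with the control-flow graph; the MATLAB-style snippet is invented sample input.

```python
# Textual approximation of cyclomatic complexity: decision points + 1.
def cyclomatic_complexity(source):
    decisions = 0
    for line in source.splitlines():
        stripped = line.strip()
        first = stripped.split(" ")[0] if stripped else ""
        if first in ("if", "elseif", "for", "while"):
            decisions += 1
        # Short-circuit operators add extra branch points.
        decisions += line.count("&&") + line.count("||")
    return decisions + 1

snippet = """
for i = 1:n
    if x(i) > 0 && x(i) < t
        y = y + 1;
    elseif x(i) < 0
        y = y - 1;
    end
end
"""
print(cyclomatic_complexity(snippet))  # 5: for, if, &&, elseif, plus 1
```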

3. Function point metrics:


This provides a standardized method for measuring the various functions of a software
application. It measures the functionality from the user's point of view, that is, on the basis of
what the user requests and receives in return. This would be helpful for the developer to develop
the code efficiently.
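A hedged sketch of unadjusted function-point counting with the standard IFPUG average weights; the counts are invented sample data, not measurements from this project:

```python
# Average IFPUG weights per function type (simple/complex variants omitted).
AVG_WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

def unadjusted_fp(counts):
    """Sum of (count * weight) over the five function types."""
    return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

counts = {"EI": 3, "EO": 2, "EQ": 1, "ILF": 1, "EIF": 0}
print(unadjusted_fp(counts))  # 3*4 + 2*5 + 1*4 + 1*10 = 36
```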

4.Efficiency:
This metric will help the developer to assess the efficiency of the code developed. This will
give a clear picture of whether the code provide accurate results according to the user needs
and also evaluates the code performance.

5.Time complexity:
Time complexity is the computational complexity that describes the amount of
computer time it takes to run an algorithm. Thus, the amount of time taken and the number of
elementary operations performed by the algorithm are taken to differ by at most a constant
factor.

Metrics Used by Testers
1.Code Coverage:
Code coverage is a metric that helps testers understand how much of the source code is tested. This in turn helps to assess the quality of the test suite and to find bugs.
Code Coverage Percentage = (Number of lines of code executed by a testing algorithm/Total
number of lines of code in a system component) * 100.
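A sketch of the formula above; 110 of 110 lines executed matches the figure reported later for this project's unit tests:

```python
# Code coverage as a percentage of lines exercised by the tests.
def code_coverage(lines_executed, total_lines):
    return 100.0 * lines_executed / total_lines

print(code_coverage(110, 110))  # 100.0
```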

2.Defect Density:
Defect density is the number of defects confirmed in a software module during a specific period of operation. This metric helps the testers to find the density of bugs in the developed code.

3. Portability:
Portability measures how usable the same software is in different environments. It relates to
platform independency. There isn’t a specific measure of portability. But there are several ways
you can ensure portable code. It’s important to regularly test code on different platforms, rather
than waiting until the end of development.

4.Bug Find Rate:


One of the most important test effort metrics is the bug find rate. It measures the number of defects/bugs found by the team during the testing process.

5.Accuracy:
This metric helps the tester to check and assess the accuracy of the developed code by giving
various sample data. This helps to calculate the accuracy rate of the software. In our project,
we take different shapes of different parameters and calculate the efficiency of the output.

6. Severity of the defects:


By measuring the severity of the defects, the developers get a clear picture of the impact of each defect on the quality of the application.

Metrics Calculation:

1. Planned-to-done ratio:

Month: March
Team A (planned-to-done ratio 4:3)
  Planned: 1. Literature survey (4 papers); 2. Module description and analysis; 3. Requirements gathering and analysis; 4. Algorithm analysis
  Done: 1. Literature survey (4 papers); 2. Module description and analysis; 3. Requirements gathering and analysis
Team B (4:4)
  Planned: 1. Literature survey (4 papers); 2. Use case diagram; 3. Requirements gathering and analysis; 4. Architecture diagram
  Done: all four planned activities
Testers (2:2)
  Planned: 1. Literature survey (2 papers); 2. Analysis of literature survey
  Done: both planned activities

Month: April
Team A (4:4)
  Planned: 1. Development of image acquisition module; 2. Development of morphological processing module; 3. Algorithm analysis; 4. Gather software metrics for developers
  Done: 1. Development of image acquisition module; 2. Development of morphological processing module; 3. Algorithm analysis; 4. Find metrics for developers
Team B (5:4)
  Planned: 1. Development of image pre-processing module; 2. Development of direction detection module; 3. Algorithm analysis; 4. Find metrics for managers; 5. Gather software metrics for testers
  Done: 1. Development of image pre-processing module; 2. Development of direction detection module; 3. Algorithm analysis; 4. Find metrics for managers
Testers (3:2)
  Planned: 1. Find errors in developed modules; 2. Report errors; 3. Gather software metrics for testers
  Done: 1. Find errors in developed modules; 2. Report errors

Month: May
Team A (3:2)
  Planned: 1. Development of edge detection module; 2. Evaluate metrics of manager; 3. Evaluate metrics of developers
  Done: 1. Development of edge detection module; 2. Evaluate metrics of manager
Team B (3:2)
  Planned: 1. Development of edge detection module; 2. Evaluate metrics of manager; 3. Evaluate metrics of developers
  Done: 1. Development of edge detection module; 2. Evaluate metrics of manager
Testers (3:2)
  Planned: 1. Gather software metrics for testers; 2. Evaluate metrics of developers; 3. Test the source code
  Done: 1. Gather software metrics for testers; 2. Test the source code

Month: June
Team A (3:3)
  Planned and done: 1. Evaluate metrics of developers; 2. Documentation; 3. Evaluate accuracy metrics
Team B (3:3)
  Planned and done: 1. Evaluate metrics of developers; 2. Documentation; 3. Evaluate accuracy metrics
Testers (3:3)
  Planned and done: 1. Evaluate test metrics; 2. Documentation; 3. Evaluate accuracy metrics

2.Effort per Team member:

Team Member    Activities assigned    Activities performed    Weeks worked

Komal Bhagat (Project manager) 10 10 13 weeks

Naga Sai Hemanth (Team Manager) 12 12 13 weeks

Arja Lasya Sree (Team Manager) 12 12 13 weeks

S Nagarjuna Naidu (Developer) 8 8 13 weeks

G. Lokesh (Developer) 7 7 13 weeks

Paidi Shiva Rakshith (Developer) 8 8 13 weeks

S Kowshik (Developer ) 7 7 13 weeks

B D V K Gangadhar 6 6 10 weeks

D Akash 6 6 10 weeks

3. Quality Metrics:

4. Maintainability Index:
 171 - 5.2 * ln(Halstead Volume) - 0.23 * (Cyclomatic Complexity) - 16.2 * ln(Lines of Code)
 = 171 - 5.2*ln(6480.31) - 0.23*13 - 16.2*ln(110)
 = 171 - (5.2*8.7765) - 2.99 - (16.2*4.7005)
 = 171 - 45.6378 - 2.99 - 76.1478
 = 46.22 (good maintainability)

5.Defect detection efficiency:
This metric will help us to assess the performance and productivity of the tester in order to
ensure the quality of the product.
Defect detection efficiency = (Number of defects detected / Total Number of defects) *100

Module    Defects detected    Total defects    Defect detection efficiency
Image acquisition 1 1 100%
module
Morphological 2 3 66.67%
Processing module
Image preprocessing 1 2 50%
module
direction detection 1 1 100%
module
edge detection 0 0 100%
module.

6.Lines of code:
Manual approach: 110 lines
Halstead approach:
Operators Number of occurrences Operators Number of occurrences
clear all 1 size 2
clc 1 for 8
; 57 if 4
= 45 < 11
imread 1 + 14
() 120 end 12
‘’ 4 zeros 3
: 9 >= 9
\ 5 > 9
. 1 && 9
, 115 || 12
figure 4 elseif 8
imshow 3 colorbar 1
rgb2gray 1 .^ 2
double 1 sqrt 1
[] 7 - 16
/ 3 == 8
.* 3 max 8
conv2 3 uint8 1
atan2 1 % 15
* 3
imagesc 1

µ1 (number of distinct operators in the program) = 42
N1 (total number of occurrences of operators in the program) = 542

Operand    Occurrences    Operand    Occurrences
img 7 arah 24
strings 72 180 1
T_Low 4 pi 1
T_High 12 pan 8
0.075 1 leb 8
0.175 1 i 64
B 4 j 64
2 11 360 1
4 8 arah2 10
5 4 22.5 2
9 4 157.5 2
12 4 202.5 2
15 1 337.5 2
1 28 360 1
159 1 67.5 2
A 5 247.5 2
KGx 2 45 3
KGy 2 112.5 2
-1 4 295.5 2
0 12 90 3
-2 2 8 1
Filtered_X 3 135 3
Filtered_Y 3 255 1
magnitude 2 magnitude2 18
BW 18 T_res 5
edge_final 2

µ2 (number of distinct operands in the program) = 51
N2 (total number of occurrences of operands in the program) = 449

➢ Program vocabulary: µ = µ1 + µ2 = 42 + 51 = 93
➢ Program length: N = N1 + N2 = 542 + 449 = 991
➢ Program volume: V = N * log2(µ) = 6480.31
➢ Program level: L = 2*µ2 / (µ1*N2) = 0.0054
➢ Program difficulty: D = 1/L = 184.88
➢ Effort: E = V * D = 1198094.29
➢ Programming time: T = E/β = 66560.79 seconds (with β = 18)
➢ Program bugs: B = E^(2/3) / 3000 = 3.76
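These values can be re-derived from the counts above (µ1 = 42, N1 = 542, µ2 = 51, N2 = 449); β = 18, Halstead's Stroud number, is assumed for the programming-time estimate.

```python
import math

# Halstead measures recomputed from the operator/operand counts.
u1, N1 = 42, 542
u2, N2 = 51, 449

vocabulary = u1 + u2                     # 93
length = N1 + N2                         # 991
volume = length * math.log2(vocabulary)  # ~6480.31
level = 2 * u2 / (u1 * N2)               # ~0.0054
difficulty = 1 / level                   # ~184.88
effort = volume * difficulty             # ~1198094
time_seconds = effort / 18               # ~66561 (assumed beta = 18)
bugs = effort ** (2 / 3) / 3000          # ~3.76

print(round(volume, 2), round(difficulty, 2), round(bugs, 2))
```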

7. Cyclomatic complexity:
 d (number of predicate nodes) = 12
 Cyclomatic complexity = d + 1 = 12 + 1 = 13

8. Efficiency:
Module    Images given as input    Images where expected output = actual output    Efficiency
Image acquisition 3 1 33.33%
module
Morphological 2 2 100%
Processing module
Image preprocessing 2 2 100%
module
direction detection 2 2 100%
module
edge detection 1 1 100%
module.

9.Code coverage:
➢ Testing approach used: Unit testing
➢ Code Coverage Percentage = (Number of lines of code executed by a testing
algorithm/Total number of lines of code in a system component) * 100.
➢ (110/110) *100= 100%

10.Defect Density:
Modules No.of defects detected

Image acquisition module 1

Morphological Processing module 3

Image preprocessing module 4

direction detection module 4

edge detection module. 3

11. Bug Find Rate (per week):
Week 1st week 2nd week 3rd week 4th week
Bugs Found 3 5 4 3

Average bug find rate: (3+5+4+3)/4 = 15/4 = 3.75 ≈ 4 bugs found per week

12.Accuracy:
Accuracy = (correctly predicted images / total testing images) * 100%

13.Fixed Defects percentage:


Modules No.of defects detected No.of defects fixed Percentage

Image acquisition 1 0 0
module
Morphological 2 2 100%
Processing module
Image preprocessing 2 2 100%
module
direction detection 1 1 100%
module
edge detection module. 0 0 0

14. Number of Testcase passed:


 (Passed testcases) / (total number of testcases) * 100
 8/10 *100 = 80%

15. Severity of the defects:

Category    Number of defects found

Critical    1
High    2
Medium    5
Low    7
Total defects identified    15

Category    Critical    High    Medium    Low
Impact    90%-100%    50%-75%    10%-50%    0-10%
Probability of occurrence    0-20%    20%-40%    40%-60%    60%-100%

16.Test Analysis:

S.No    Test metric    Value retrieved during development and testing
1    Number of requirements for the project    6
2    Total number of test cases written for all requirements    10
3    Total number of test cases executed    10
4    Total number of test cases passed    8
5    Total number of test cases failed    2
6    Total number of test cases not executed    0

17.Requirement Creep
 (Total number of requirements added / Number of initial requirements) X 100
 (2/6) X 100 = 33.33%

18.Number of defects per test hour


 Total number of defects/Total number of test hours
 15/5 = 3

19.Cost of finding a defect in testing


 Total effort spent on testing / defects found in testing
 300 minutes / 15 defects = 20 minutes per defect

20.Accepted Defects Percentage


 (Defects Accepted as Valid by Dev Team /Total Defects Reported) X 100
 (15/15) * 100 = 100

21.Number of tests run per time period


 Number of tests run/Total time
 10/5 = 2 tests run per hour

22.Test design efficiency


 Number of tests designed /Total time
 10 / 2 = 5 test cases designed per hour
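The calculations in items 17-22 can be re-checked in one place; the raw counts below (300 minutes of testing effort, 15 defects, 5 test hours, and so on) are taken from the items above:

```python
# Re-checking test metrics 17-22 from the report's raw numbers.
added_reqs, initial_reqs = 2, 6
defects_found, test_hours = 15, 5
testing_effort_minutes = 300
tests_run = 10
tests_designed, design_time_hours = 10, 2

requirement_creep = 100.0 * added_reqs / initial_reqs         # ~33.33 %
defects_per_test_hour = defects_found / test_hours            # 3.0
cost_per_defect = testing_effort_minutes / defects_found      # 20.0 minutes
accepted_defects_pct = 100.0 * defects_found / defects_found  # 100.0
tests_per_hour = tests_run / test_hours                       # 2.0
design_efficiency = tests_designed / design_time_hours        # 5.0

print(round(requirement_creep, 2), cost_per_defect)  # 33.33 20.0
```

Note that the requirement creep formula includes the x100 factor, so 2 added requirements out of 6 gives 33.33%.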

8. Work done by each member and the software metrics used:

Project Manager: Komal Bhagat (19MIS0380)

Work done:
➢ As the Project Manager, I divided the work among the team members based on their positions.
➢ Wrote the abstract with the help of references.
➢ Created the use case diagram for shape recognition.
➢ Prepared a schedule of deadlines and activities.
➢ Evaluated various productivity and quality metrics.
➢ Coordinated the team and resolved miscommunication.
➢ Assessed the performance of each team member.
➢ Evaluated some cost metrics.

Software Metrics used:


➢ Quality metrics
▪ Proper Functioning of team members
▪ Overcoming communication pitfalls
▪ Proper Scheduling
▪ Estimating project outcomes
▪ Customer Satisfaction
➢ Productivity metrics
▪ Plan to done ratio
▪ Effort per Team member
➢ Maintainability Index
➢ Cost Effectiveness metrics

Team Manager: Naga Sai Hemanth V (18MIS0009)

Work done:
➢ Prepared Work breakdown structure and Gantt chart and made a detailed schedule of
deadlines and activities for the developers.
➢ Referred some websites and designed the use case diagram depicting the functionality of
our project.
➢ Identified a few metrics for developers to enhance the productivity of our project.
➢ Constantly monitored the scheduled deadlines and updated the activities.
➢ Worked on part of the implementation and contributed to the documentation of the project.
➢ Evaluated some productivity and quality metrics.
➢ Coordinated my team members.
➢ Assessed the progress of our work at the end of each month.

Software Metrics used:


➢ Productivity metrics
▪ Plan to done ratio
▪ Effort per Team member
➢ Quality metrics
▪ Estimation of efficiency of research study
▪ Proper Documentation
▪ Efficiency of Implementation
➢ Maintainability Index
➢ Defect detection efficiency
➢ Effort per Team member
➢ Accuracy
➢ Test design efficiency

Team Manager: Lasya Sree (18MIS0352)

Work done:
➢ Made sure the work went as scheduled.
➢ Scheduled the dates for the work to be done.
➢ Maintained the progress in Gantt charts
➢ Referred a few papers and made the literature survey.
➢ Helped in the code implementation
➢ Assessed the progress of work after each month.
➢ Evaluated various productivity and maintainability metrics.
➢ Coordinated the work of our team.

Software Metrics used:


➢ Time efficiency
➢ Productivity metrics
▪ Plan to done ratio
▪ Effort per Team member
➢ Quality metrics
▪ Proper requirements specification
▪ Formatting
▪ Efficiency of testing
➢ Maintainability Index
➢ Defect detection efficiency
➢ Effort per Team member

Developer: G. Lokesh (18MIS0376)

Work done:
➢ As a developer, I have measured the sensitivity of shape recognition and detection.
➢ Helped in designing system architecture diagram.
➢ Helped in Literature Survey.
➢ Suggested some modules for our project.
➢ Evaluated various code metrics.
➢ Assessed the efficiency of the algorithm used in our implementation.

Software Metrics used:


➢ Lines of code
▪ Program Difficulty
▪ Effort
➢ Time complexity
➢ Cyclomatic Complexity
➢ Function point metrics
➢ Accuracy
➢ Accepted defects percentage

Developer: S.Nagarjuna Naidu (18MIS0231)

Work done:
➢ As a developer, I developed part of the code using the Canny edge detection algorithm.
➢ We take an image as input and filter it in the horizontal and vertical directions to identify the shape in the image.
➢ Assessed various different algorithms for shape recognition.
➢ Evaluated various code metrics to produce error free code.
➢ Helped in literature survey.

Software Metrics used:
➢ Lines of code
▪ Program Volume
▪ Program Level
➢ Time complexity
➢ Cyclomatic Complexity
➢ Function point metrics
➢ Efficiency
➢ Requirement’s creep

Developer: P. Shiva Rakshith (18MIS0402)

Work done:
➢ As a developer, I have worked on the code.
➢ Suggested few necessary modules for our project.
➢ Assessed various different algorithms for shape recognition.
➢ Worked on algorithms for edge detection.
➢ Evaluated various code metrics to make sure there are no irregularities.
➢ Helped in designing system architecture diagram.

Software Metrics used:


➢ Lines of code
▪ Program vocabulary
▪ Program length
➢ Time complexity
➢ Cyclomatic Complexity
➢ Function point metrics
➢ Accuracy
➢ Accepted defect density

Developer: S. Kowshik (18MIS0414)

Work done:
➢ As a developer, I worked on my part of the code.
➢ Tried to write the code so that shapes can be recognized even in complex images.
➢ Evaluated metrics to improve the efficiency of code.
➢ Helped in documentation.
➢ Helped in requirements gathering.

Software Metrics used:


➢ Lines of code
▪ Program Time
▪ Bugs
➢ Time complexity
➢ Cyclomatic Complexity
➢ Function point metrics
➢ Efficiency
➢ Requirement’s creep

Tester: B D V Krishna Gangadhar (18MIS0278)

Work Done:
➢ We test the code in MATLAB by taking an image in .png format as input and checking whether the edges are detected accurately and whether the image is in a format accepted by the code.
➢ Performed testcases as per the schedule.
➢ Helped in the literature survey in finding journals related to our title.
➢ Evaluated various test metrics.
➢ Suggested some improvements in the code.
➢ Helped in the documentation part.

Software Metrics used:
➢ Test case execution productivity metrics
➢ Code Coverage
➢ Defect Density
➢ Accuracy
➢ Bug Find Rate
➢ Fixed defects percentage
➢ Number of Test cases passed
➢ Number of defects per test hour

Tester: D.Akash (18MIS0281)

Work done:
➢ As a tester, I took various images as input and tested whether the image format check (only .jpeg and .png are accepted) is implemented correctly.
➢ Performed testcases according to the schedule.
➢ Evaluated various test metrics.
➢ Suggested some improvements in the code.
➢ Helped in the documentation part.

Software Metrics used:


➢ Test case execution productivity metrics
➢ Code Coverage
➢ Defect Density
➢ Accuracy
➢ Bug Find Rate
➢ Fixed defects percentage
➢ Number of Test cases passed
➢ Number of tests run per time

9. Output & screenshots:

Source Code:
clear all;
clc;

%Input image
img = imread ('C:\Users\dokkuakash\Downloads\Canny\House.jpg');
%Show input image
figure, imshow(img);
img = rgb2gray(img);
img = double (img);

%Value for Thresholding


T_Low = 0.075;
T_High = 0.175;

%Gaussian Filter Coefficient


B = [2, 4, 5, 4, 2; 4, 9, 12, 9, 4;5, 12, 15, 12, 5;4, 9, 12, 9, 4;2, 4, 5, 4, 2 ];
B = 1/159.* B;

%Convolution of image by Gaussian Coefficient


A=conv2(img, B, 'same');

%Filter for horizontal and vertical direction


KGx = [-1, 0, 1; -2, 0, 2; -1, 0, 1];
KGy = [1, 2, 1; 0, 0, 0; -1, -2, -1];

%Convolution by image by horizontal and vertical filter


Filtered_X = conv2(A, KGx, 'same');
Filtered_Y = conv2(A, KGy, 'same');

%Calculate directions/orientations

35
arah = atan2 (Filtered_Y, Filtered_X);
arah = arah*180/pi;

pan=size(A,1);
leb=size(A,2);

%Adjustment for negative directions, making all directions positive


for i=1:pan
for j=1:leb
if (arah(i,j)<0)
arah(i,j)=360+arah(i,j);
end;
end;
end;

arah2=zeros(pan, leb);

%Adjusting directions to nearest 0, 45, 90, or 135 degree


for i = 1 : pan
for j = 1 : leb
if ((arah(i, j) >= 0 ) && (arah(i, j) < 22.5) || (arah(i, j) >= 157.5) && (arah(i, j) < 202.5)
|| (arah(i, j) >= 337.5) && (arah(i, j) <= 360))
arah2(i, j) = 0;
elseif ((arah(i, j) >= 22.5) && (arah(i, j) < 67.5) || (arah(i, j) >= 202.5) && (arah(i, j) <
247.5))
arah2(i, j) = 45;
elseif ((arah(i, j) >= 67.5 && arah(i, j) < 112.5) || (arah(i, j) >= 247.5 && arah(i, j) <
292.5))
arah2(i, j) = 90;
elseif ((arah(i, j) >= 112.5 && arah(i, j) < 157.5) || (arah(i, j) >= 292.5 && arah(i, j) <
337.5))
arah2(i, j) = 135;
end;
end;

36
end;

figure, imagesc(arah2); colorbar;

%Calculate magnitude
magnitude = (Filtered_X.^2) + (Filtered_Y.^2);
magnitude2 = sqrt(magnitude);

BW = zeros(pan, leb);

%Non-Maximum Suppression: keep a pixel only if it is the local
%maximum along its gradient direction
for i = 2:pan-1
    for j = 2:leb-1
        if (arah2(i,j) == 0)
            BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i,j+1), magnitude2(i,j-1)]));
        elseif (arah2(i,j) == 45)
            BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i+1,j-1), magnitude2(i-1,j+1)]));
        elseif (arah2(i,j) == 90)
            BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i+1,j), magnitude2(i-1,j)]));
        elseif (arah2(i,j) == 135)
            BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i+1,j+1), magnitude2(i-1,j-1)]));
        end
    end
end

BW = BW.*magnitude2;
figure, imshow(BW);
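The suppression logic can be sketched in NumPy (illustrative re-implementation; the `NEIGHBOURS` table and function name are ours, the neighbour offsets are taken from the four MATLAB cases above):

```python
import numpy as np

# Neighbour offsets compared for each quantized gradient direction
NEIGHBOURS = {
    0:   [(0, 1), (0, -1)],    # edge is vertical: compare left/right
    45:  [(1, -1), (-1, 1)],   # compare one diagonal
    90:  [(1, 0), (-1, 0)],    # edge is horizontal: compare up/down
    135: [(1, 1), (-1, -1)],   # compare the other diagonal
}

def non_max_suppress(mag, direction):
    """Keep a pixel's magnitude only if it is the maximum among itself and
    its two neighbours along the gradient direction; borders stay zero."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            (di1, dj1), (di2, dj2) = NEIGHBOURS[direction[i, j]]
            if mag[i, j] >= mag[i + di1, j + dj1] and mag[i, j] >= mag[i + di2, j + dj2]:
                out[i, j] = mag[i, j]
    return out
```

A ridge of magnitudes across a direction-0 pixel keeps only the peak; weaker neighbours along the row are zeroed.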

%Hysteresis Thresholding
T_Low = T_Low * max(max(BW));
T_High = T_High * max(max(BW));

T_res = zeros(pan, leb);

%Loop over interior pixels only, so the 8-neighbour checks below never
%index outside the image (the original 1:pan, 1:leb loop could)
for i = 2:pan-1
    for j = 2:leb-1
        if (BW(i,j) < T_Low)
            T_res(i,j) = 0;
        elseif (BW(i,j) > T_High)
            T_res(i,j) = 1;
        %Using 8-connected components: keep a weak pixel only if at
        %least one neighbour is a strong edge
        elseif (BW(i+1,j) > T_High || BW(i-1,j) > T_High || ...
                BW(i,j+1) > T_High || BW(i,j-1) > T_High || ...
                BW(i-1,j-1) > T_High || BW(i-1,j+1) > T_High || ...
                BW(i+1,j+1) > T_High || BW(i+1,j-1) > T_High)
            T_res(i,j) = 1;
        end
    end
end

edge_final = uint8(T_res.*255);
%Show final edge detection result
figure, imshow(edge_final);
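The double-threshold step can be sketched in NumPy (illustrative; the `hysteresis` function is our own single-pass version mirroring the MATLAB loop, not a full connected-component edge tracker):

```python
import numpy as np

def hysteresis(mag, t_low, t_high):
    """Keep strong pixels (> t_high) and weak pixels (>= t_low) that have
    at least one strong pixel among their 8 neighbours; borders stay zero."""
    h, w = mag.shape
    out = np.zeros((h, w), dtype=np.uint8)
    strong = mag > t_high
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if mag[i, j] < t_low:
                continue  # below the low threshold: never an edge
            if strong[i, j] or strong[i-1:i+2, j-1:j+2].any():
                out[i, j] = 1
    return out
```

Like the MATLAB loop, this is a single pass, so a weak pixel survives only if it is directly adjacent to a strong one; the classic Canny formulation instead tracks whole weak chains connected to strong pixels, which a flood-fill pass would provide.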

Screenshots:

Implementation:

Input Image-1:

Outputs:

Input Image-2:

Outputs:

Input Image-3:

Outputs:

10. Conclusion and future work:
We proposed an algorithm that detects shapes in an input image and recognizes their edges. Applying the algorithm to test images showed that it gives good results even when one image contains many shapes, because classification depends on the shape-factor value proposed in this project. Compared with related work, most existing systems detect and recognize only a few specific shapes, whereas our system handles all kinds of shapes. Further, the project managers, team managers, developers, and testers used several software metrics to improve the productivity and quality of the project; these metrics also helped improve the efficiency and effectiveness of the application.

Future Work:
We are interested in a detailed study of real-world applications of the Canny edge detection algorithm, and in a brief survey of its utility for building applications in fields such as biometrics and medicine.
In the future, we would like to further improve the efficiency of our application by improving edge detection for complex images with high noise and background complexity. We would also like to extend the applicability of the project as new and innovative ideas arise.

