Shape Recognition System: School of Information Technology and Engineering
REVIEW REPORT
Submitted by
for
Submitted To
Dr. Nancy Victor
Abstract
The human eye has a natural tendency to recognize shapes based on prior knowledge; vision therefore plays an important role in human cognition. We can correlate this and apply the same operation in computers so that software can recognize shapes. Many existing systems recognize shapes based on their color and size, but since different shapes may possess identical color and size values, these parameters are not sufficient to identify and recognize shapes.
In this project, a new system is proposed that recognizes shapes based on their edges to increase accuracy. We use the Canny edge detection method, which finds the edges of a shape by looking for local maxima of the image gradient. Shape recognition finds application in fingerprint analysis, robotics, handwriting mapping, remote sensing, and many other areas. Recognizing and identifying shapes is one of the significant research areas in pattern recognition, whose main focus is the classification of objects. In this project, we focus on developing a shape recognition system and on improving the efficiency of our application using various software metrics.
Table of Contents

Abstract
1. Introduction
   1.1 Overview
   1.2 Work Breakdown
   1.3 Gantt Chart
2. Project Resource Requirements
   2.1 Software Requirements
   2.2 Hardware Requirements
3. Literature Survey
4. System Architecture
5. Use Case Diagram
6. Module Description
7. Software Metrics (used by Project Manager, Team Manager, Developer and Tester)
8. Work done by each member and the software metrics used
9. Output & Screenshots
10. Conclusion and Future Work
References
1. Introduction:
1.1 Overview
The human eye has a natural tendency to recognize shapes based on prior knowledge; vision therefore plays an important role in human cognition. We can correlate this and apply the same operation in computers so that software can recognize shapes. Many existing systems recognize shapes based on their colour and size, but since different shapes may possess identical colour and size values, these parameters are not sufficient to identify and recognize shapes. In this project, a new system is proposed that recognizes shapes based on their edges to increase accuracy. We use the Canny edge detection method, which finds the edges of a shape by looking for local maxima of the image gradient. Shape recognition finds application in fingerprint analysis, robotics, handwriting mapping, remote sensing, and many other areas. Recognizing and identifying shapes is one of the significant research areas in pattern recognition, whose main focus is the classification of objects.
1.2 Work Breakdown
Hierarchy:
1.3 Gantt Chart
2. Project Resource Requirements:
2.1 Software Requirements
➢ IDE : MATLAB
➢ Project Management : INSTAGANTT
3. Literature Survey:
3. Jelmer Philip de Vries, "Object Recognition: A Shape-Based Approach using Artificial Neural Networks". The approach is shape-based and works towards recognition under a broad range of circumstances, from varied lighting conditions to affine transformations. The main emphasis is on its neural elements, which allow the system to learn to recognize objects about which it has no prior information. Techniques: SUSAN algorithm, insertion sort algorithm.

4. Ohtani, Kozo; Baba, Mitsuru; Konishi, Tadataka, "Position and posture measurements and shape recognition of columnar objects using an ultrasonic sensor array and neural networks". In most past methods of this kind, the characteristic quantities have been based on either time-of-flight methods or acoustic holographic methods; in these methods, measuring and recognizing the width and depth directions simultaneously with high resolution has been difficult in principle. Techniques: time-of-flight methods, acoustic holographic methods.

5. Das, Manas Ranjan and Barla, Sunil, "Object Shape Recognition". The approach is to classify some of the common objects around us and decide whether they belong to any geometric shape or not. The shape of the objects can be represented by a feature space, which may be used for recognizing the shape of the objects. Techniques: corner detection method, signature method, chain code method.

6. El Abbadi, Nidhal and Saadi, Lamis, "Automatic Detection and Recognize Different Shapes in an Image". This is analogous to machine vision applications such as shape recognition, which is an important field nowadays. The paper introduces a new approach for recognizing two-dimensional shapes in an image that also recognizes the shape type. Techniques: statistical method, structural method; contrast-limited adaptive histogram equalization (CLAHE) is used.

7. Pedro F. Felzenszwalb, "Representation and Detection of Shapes in Images". The methods revolve around a particular shape representation based on the description of objects using triangulated polygons. This representation is similar to the medial axis transform and has important properties from a computational perspective. Techniques: segmentation algorithm, generic optimization methods.

8. Gulce Bal, Julia Diebold, Erin Wolf Chambers, Ellen Gasparovic, Ruizhen Hu, Kathryn Leonard, Matineh Shaker, and Carola Wenk, "Research in Shape Modeling". Presents a novel image recognition method based on the Blum medial axis that identifies shape information present in unsegmented input images. Techniques: geometric algorithm, SAT-EDF algorithms.

9. Jaruwan Toontham and Chaiyapon Thongchaisuratkrul, "Object Recognition and Identification System Using the Hough Transform Method". This paper presents an object recognition and identification system using the Hough Transform method. The process starts by importing images into the system via webcam, detecting image edges with a fuzzy method, recognizing the object with the Hough Transform, and separating the objects with a robot arm. Techniques: Hough Transform method, Sobel edge detection algorithm.

10. A. Ashbrook and N. A. Thacker, "Algorithms for Two-Dimensional Object Recognition". Representation of arbitrary shape for purposes of visual recognition is an unsolved problem; the task of representation is intimately constrained by the recognition process, and one cannot be solved without some solution for the other. The authors have already done some work on the use of an associative neural network system for hierarchical pattern recognition of the sort that may ultimately be useful for generic object recognition. Techniques: stereo matching algorithm, thinning algorithm.
4. System Architecture:
5. Use case diagram:
6. Module Description:
7. Software Metrics:
3. Maintainability Index:
Calculates an index value between 0 and 100 that represents the relative ease of maintaining
the code. A high value means better maintainability. Color coded ratings can be used to quickly
identify trouble spots in your code. A green rating is between 20 and 100 and indicates that the
code has good maintainability. A yellow rating is between 10 and 19 and indicates that the code
is moderately maintainable. A red rating is a rating between 0 and 9 and indicates low
maintainability.
Maintainability Index = 171 - 5.2 * ln(Halstead Volume) - 0.23 * (Cyclomatic Complexity) -
16.2 * ln(Lines of Code)
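As an illustrative sketch (not part of the original report), the formula above can be computed directly. The example values used below are the Halstead volume, cyclomatic complexity, and lines of code measured for this project's code elsewhere in this report.

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, lines_of_code):
    """Classic (un-normalized) Maintainability Index."""
    return (171
            - 5.2 * math.log(halstead_volume)   # natural log of Halstead Volume
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(lines_of_code))   # natural log of LOC

# Example with the values measured for this project's code:
mi = maintainability_index(6480.31, 13, 110)
print(round(mi, 2))  # ≈ 46.22, i.e. "good maintainability" (green, 20-100)
```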
4. Quality metrics:
A major challenge in software maintenance is understanding the existing code, and this is where code quality metrics can have a big impact. These metrics also help to improve project efficiency.
5. Defect detection efficiency:
This metric helps us assess the performance and productivity of the tester in order to ensure the quality of the product.
Defect detection efficiency = (Number of defects detected / Total number of defects) * 100
2. Cyclomatic Complexity:
This metric indicates the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. This gives the developer an idea of how complex the code to be developed is.
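For a single-entry, single-exit routine, this can be estimated as V(G) = P + 1, where P is the number of decision points. The keyword-counting sketch below is our own rough approximation for MATLAB-style source; a real tool would build the control-flow graph instead.

```python
import re

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: 1 + number of decision points.

    Counts branching keywords and short-circuit operators in the source
    text; this is a heuristic, not a full control-flow analysis."""
    decision_patterns = [r'\bif\b', r'\belseif\b', r'\bfor\b', r'\bwhile\b',
                         r'&&', r'\|\|']
    predicates = sum(len(re.findall(p, source)) for p in decision_patterns)
    return 1 + predicates

snippet = """
for i = 1:n
    if x(i) > 0 && y(i) > 0
        s = s + 1;
    elseif x(i) < 0
        s = s - 1;
    end
end
"""
print(cyclomatic_complexity(snippet))  # 4 decision points -> 5
```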
4. Efficiency:
This metric helps the developer assess the efficiency of the developed code. It gives a clear picture of whether the code provides accurate results according to the user's needs and also evaluates the code's performance.
5. Time complexity:
Time complexity is the computational complexity that describes the amount of
computer time it takes to run an algorithm. Thus, the amount of time taken and the number of
elementary operations performed by the algorithm are taken to differ by at most a constant
factor.
Metrics Used by Testers
1. Code Coverage:
Code coverage is a metric that helps testers understand how much of the source code is exercised by tests. This in turn helps to assess the quality of the test suite and to find bugs.
Code coverage percentage = (Number of lines of code executed by a testing algorithm / Total number of lines of code in a system component) * 100
2. Defect Density:
Defect density is the number of defects confirmed in a software module during a specific period of operation. This metric helps the testers find the density of bugs in the developed code.
3. Portability:
Portability measures how usable the same software is in different environments; it relates to platform independence. There is no single specific measure of portability, but there are several ways to ensure portable code: it is important to regularly test code on different platforms rather than waiting until the end of development.
5. Accuracy:
This metric helps the tester check and assess the accuracy of the developed code by supplying various sample data, from which the accuracy rate of the software is calculated. In our project, we take different shapes with different parameters and calculate the accuracy of the output.
Metrics Calculation:
Month: May

Team A
  Planned activities: 1. Development of edge detection module; 2. Evaluate metrics of manager; 3. Evaluate metrics of developers
  Done activities: 1. Development of edge detection module; 2. Evaluate metrics of manager
  Planned-to-done ratio: 3:2

Team B
  Planned activities: 1. Development of edge detection module; 2. Evaluate metrics of manager; 3. Evaluate metrics of developers
  Done activities: 1. Development of edge detection module; 2. Evaluate metrics of manager
  Planned-to-done ratio: 3:2
2. Effort per team member:
B D V K Gangadhar — 6, 6, 10 weeks
D Akash — 6, 6, 10 weeks
3. Quality Metrics:
4. Maintainability Index:
MI = 171 - 5.2 * ln(Halstead Volume) - 0.23 * (Cyclomatic Complexity) - 16.2 * ln(Lines of Code)
   = 171 - 5.2 * ln(6480.31) - 0.23 * 13 - 16.2 * ln(110)
   = 171 - 45.64 - 2.99 - 76.14
   = 46.23 (good maintainability)
5. Defect detection efficiency:
This metric will help us to assess the performance and productivity of the tester in order to
ensure the quality of the product.
Defect detection efficiency = (Number of defects detected / Total Number of defects) *100
6. Lines of code:
Manual approach: 110 lines
Halstead approach:
Operators Number of occurrences Operators Number of occurrences
clear all 1 size 2
clc 1 for 8
; 57 if 4
= 45 < 11
imread 1 + 14
() 120 end 12
‘’ 4 zeros 3
: 9 >= 9
\ 5 > 9
. 1 && 9
, 115 || 12
figure 4 elseif 8
imshow 3 colorbar 1
rgb2gray 1 .^ 2
double 1 sqrt 1
[] 7 - 16
/ 3 == 8
.* 3 max 8
conv2 3 uint8 1
atan2 1 % 15
* 3
imagesc 1
U1 (number of distinct operators in the program) = 42
N1 (total number of occurrences of operators in the program) = 542
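Halstead's Program Volume, which feeds the Maintainability Index above, is V = N * log2(n), where n = n1 + n2 (distinct operators plus distinct operands) and N = N1 + N2 (their total occurrences). Only the operator table is reproduced here, so the operand counts in the sketch below are hypothetical placeholders, not measured values.

```python
import math

def halstead_volume(n1, n2, N1, N2):
    """V = (N1 + N2) * log2(n1 + n2).

    n1, n2: distinct operators / operands; N1, N2: their total occurrences."""
    vocabulary = n1 + n2
    length = N1 + N2
    return length * math.log2(vocabulary)

# n1 and N1 come from the operator table above; the operand counts
# (n2, N2) are illustrative placeholders, not measured values.
volume = halstead_volume(n1=42, n2=30, N1=542, N2=500)
```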
7. Cyclomatic complexity:
d (number of predicate nodes) = 12
Cyclomatic complexity = 1 + d = 1 + 12 = 13
8. Efficiency:

Module                            Images given as input   Images with expected output = actual output   Efficiency
Image acquisition module          3                       1                                             33.33%
Morphological processing module   2                       2                                             100%
Image preprocessing module        2                       2                                             100%
Direction detection module        2                       2                                             100%
Edge detection module             1                       1                                             100%
9. Code coverage:
➢ Testing approach used: unit testing
➢ Code coverage percentage = (Number of lines of code executed by a testing algorithm / Total number of lines of code in a system component) * 100
➢ (110/110) * 100 = 100%
10. Defect Density:
Modules No.of defects detected
11. Bug Find Rate (per week):
Week 1st week 2nd week 3rd week 4th week
Bugs Found 3 5 4 3
Average bug find rate: (3 + 5 + 4 + 3) / 4 = 15/4 = 3.75 (about 4) bugs found per week
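The weekly average above, with the parentheses made explicit, can be checked with a short script:

```python
bugs_per_week = [3, 5, 4, 3]  # weeks 1-4 from the table above
average = sum(bugs_per_week) / len(bugs_per_week)  # (3 + 5 + 4 + 3) / 4
print(average)  # 3.75, i.e. about 4 bugs found per week
```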
12. Accuracy:
Accuracy = (correctly predicted images / total testing images) * 100 %

Module                            Total testing images   Correctly predicted   Accuracy
Image acquisition module          1                      0                     0%
Morphological processing module   2                      2                     100%
Image preprocessing module        2                      2                     100%
Direction detection module        1                      1                     100%
Edge detection module             0                      0                     0
15. Severity of the defects:
Critical defects 1
High 2
Medium 5
Low 7
16. Test Analysis:
17. Requirement Creep:
Requirement creep = (Total number of requirements added / Number of initial requirements) * 100
(2/6) * 100 ≈ 33.3%
8. Work done by each member and the software metrics used:
Work done:
➢ As the Project Manager, I divided the work among the team members based on their roles.
➢ Wrote the abstract with the help of references.
➢ Prepared the use case diagram for the shape recognition system.
➢ Prepared a schedule of deadlines and activities.
➢ Evaluated various productivity and quality metrics.
➢ Coordinated the team and resolved miscommunication.
➢ Assessed the performance of each team member.
➢ Evaluated some cost metrics.
Team Manager: Naga Sai Hemanth V (18MIS0009)
Work done:
➢ Prepared Work breakdown structure and Gantt chart and made a detailed schedule of
deadlines and activities for the developers.
➢ Referred some websites and designed the use case diagram depicting the functionality of
our project.
➢ Introduced a few metrics for developers to enhance the productivity of our project.
➢ Constantly monitored the scheduled deadlines and updated the activities.
➢ Worked on part of the implementation and contributed to the documentation of the project.
➢ Evaluated some productivity and quality metrics.
➢ Coordinated my team members.
➢ Assessed the progress of our work at the end of each month.
Team Manager: Lasya Sree (18MIS0352)
Work done:
➢ Made sure the work went as scheduled.
➢ Scheduled dates for the work to be done.
➢ Maintained the progress in Gantt charts
➢ Referred a few papers and made the literature survey.
➢ Helped in the code implementation
➢ Assessed the progress of work after each month.
➢ Evaluated various productivity and maintainability metrics.
➢ Coordinated the work of our team.
Developer: G. Lokesh (18MIS0376)
Work done:
➢ As a developer, I have measured the sensitivity of shape recognition and detection.
➢ Helped in designing system architecture diagram.
➢ Helped in Literature Survey.
➢ Suggested some modules for our project.
➢ Evaluated various code metrics.
➢ Assessed the efficiency of the algorithm used in our implementation.
Work done:
➢ As a developer, I developed part of the code using the Canny edge detection algorithm.
➢ We take an image as input and filter it in the horizontal and vertical directions to identify the shape in the image.
➢ Assessed various different algorithms for shape recognition.
➢ Evaluated various code metrics to produce error free code.
➢ Helped in literature survey.
Software Metrics used:
➢ Lines of code
▪ Program Volume
▪ Program Level
➢ Time complexity
➢ Cyclomatic Complexity
➢ Function point metrics
➢ Efficiency
➢ Requirements creep
Work done:
➢ As a developer, I have worked on the code.
➢ Suggested few necessary modules for our project.
➢ Assessed various different algorithms for shape recognition.
➢ Worked on algorithms for edge detection.
➢ Evaluated various code metrics to make sure there are no irregularities.
➢ Helped in designing system architecture diagram.
Developer: S. Kowshik (18MIS0414)
Work done:
➢ As a developer, I have worked on the code of my part.
➢ Tried to code in a way to recognize shape from complex images.
➢ Evaluated metrics to improve the efficiency of code.
➢ Helped in documentation.
➢ Helped in requirements gathering.
Work Done:
➢ We test the code in MATLAB by taking a ".PNG" image as input and check whether the edges are detected accurately and whether the image is in a format accepted by the code.
➢ Performed testcases as per the schedule.
➢ Helped in the literature survey in finding journals related to our title.
➢ Evaluated various test metrics.
➢ Suggested some improvements in the code.
➢ Helped in the documentation part.
Software Metrics used:
➢ Test case execution productivity metrics
➢ Code Coverage
➢ Defect Density
➢ Accuracy
➢ Bug Find Rate
➢ Fixed defects percentage
➢ Number of Test cases passed
➢ Number of defects per test hour
Work done:
➢ As a tester, I took various images as input and tested whether each image is in a supported format (only .jpeg or .png).
➢ Performed testcases according to the schedule.
➢ Evaluated various test metrics.
➢ Suggested some improvements in the code.
➢ Helped in the documentation part.
9. Output & screenshots:
Source Code:
clear all;
clc;
%Input image
img = imread ('C:\Users\dokkuakash\Downloads\Canny\House.jpg');
%Show input image
figure, imshow(img);
img = rgb2gray(img);
img = double (img);
%Calculate directions/orientations
%(code computing Filtered_X and Filtered_Y is not reproduced in this copy)
arah = atan2 (Filtered_Y, Filtered_X);
arah = arah*180/pi;
pan=size(A,1);
leb=size(A,2);
arah2=zeros(pan, leb);
%(direction quantization loop filling arah2 is not reproduced in this copy)
end;
%Calculate magnitude
magnitude = (Filtered_X.^2) + (Filtered_Y.^2);
magnitude2 = sqrt(magnitude);
%Non-Maximum Supression
for i=2:pan-1
for j=2:leb-1
if (arah2(i,j)==0)
    BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i,j+1), magnitude2(i,j-1)]));
elseif (arah2(i,j)==45)
    BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i+1,j-1), magnitude2(i-1,j+1)]));
elseif (arah2(i,j)==90)
    BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i+1,j), magnitude2(i-1,j)]));
elseif (arah2(i,j)==135)
    BW(i,j) = (magnitude2(i,j) == max([magnitude2(i,j), magnitude2(i+1,j+1), magnitude2(i-1,j-1)]));
end;
end;
end;
BW = BW.*magnitude2;
figure, imshow(BW);
%Hysteresis Thresholding
T_Low = T_Low * max(max(BW));
T_High = T_High * max(max(BW));
for i = 1 : pan
for j = 1 : leb
if (BW(i, j) < T_Low)
    T_res(i, j) = 0;
elseif (BW(i, j) > T_High)
    T_res(i, j) = 1;
%Using 8-connected components
elseif (BW(i+1,j)>T_High || BW(i-1,j)>T_High || BW(i,j+1)>T_High || BW(i,j-1)>T_High || BW(i-1,j-1)>T_High || BW(i-1,j+1)>T_High || BW(i+1,j+1)>T_High || BW(i+1,j-1)>T_High)
    T_res(i,j) = 1;
end;
end;
end;
edge_final = uint8(T_res.*255);
%Show final edge detection result
figure, imshow(edge_final);
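The MATLAB listing above is missing a few fragments lost in this copy (the filtering that produces Filtered_X and Filtered_Y, and the initial values of T_Low and T_High). As a hedged, end-to-end sketch of the same pipeline in NumPy, the version below uses a 3x3 Gaussian, Sobel kernels, and threshold ratios of our own choosing; it illustrates the technique and is not the report's exact code.

```python
import numpy as np

def conv2(img, k):
    """'same'-size 2-D convolution with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    kf = k[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kf)
    return out

def canny(img, t_low=0.075, t_high=0.175):
    """Canny edge detection: smooth, gradient, NMS, hysteresis."""
    # 1. Smooth with a small Gaussian (kernel values are our assumption).
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    smoothed = conv2(img.astype(float), g)
    # 2. Gradients via Sobel kernels (the report's filters are not shown).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx, gy = conv2(smoothed, kx), conv2(smoothed, kx.T)
    mag = np.hypot(gx, gy)
    # 3. Quantize gradient direction to 0 / 45 / 90 / 135 degrees.
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    direction = np.zeros_like(ang)
    direction[(ang >= 22.5) & (ang < 67.5)] = 45
    direction[(ang >= 67.5) & (ang < 112.5)] = 90
    direction[(ang >= 112.5) & (ang < 157.5)] = 135
    # 4. Non-maximum suppression along the quantized direction.
    offsets = {0: (0, 1), 45: (-1, 1), 90: (1, 0), 135: (1, 1)}
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            di, dj = offsets[int(direction[i, j])]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                nms[i, j] = mag[i, j]
    # 5. Hysteresis: keep strong edges, and weak edges 8-connected to a strong one.
    hi, lo = t_high * nms.max(), t_low * nms.max()
    strong = nms >= hi
    weak = (nms >= lo) & ~strong
    edges = strong.copy()
    for i in range(1, nms.shape[0] - 1):
        for j in range(1, nms.shape[1] - 1):
            if weak[i, j] and strong[i - 1:i + 2, j - 1:j + 2].any():
                edges[i, j] = True
    return edges.astype(np.uint8) * 255

# Synthetic example: a white square on a black background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 255
edges = canny(img)
```

Running this on the synthetic square produces edge responses along the square's border and nothing in the flat interior or background, mirroring what the MATLAB code does for the house image.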
Screenshots:
Implementation:
Input Image-1:
Outputs:
Input Image-2:
Outputs:
Input Image-3:
Outputs:
10. Conclusion and future work:
We proposed an algorithm that detects a shape in any input image and recognizes the edges of that shape. After applying the proposed algorithm to images, we observed that it gives very good results even when there are many shapes in one photo, depending on the value of the shape factor proposed in our project. Compared with other works, most of which focus on detecting and recognizing specific shapes, our work detects all kinds of shapes. Furthermore, many software metrics have been used by the project managers, team managers, developers, and testers to improve the productivity and quality of our project. These metrics also helped to improve the efficiency and effectiveness of our application.
Future Work:
We are interested in a detailed study of the applications of the Canny edge detection algorithm in the real world, and in a brief study of its utility in developing applications in fields such as biometrics and medicine.
In the future, we would like to further improve the efficiency of our application by improving edge detection for complex images with high noise and background complexity. Apart from this, we would like to extend the applicability of our project as new and innovative ideas arise.
References:
[1] Mikolajczyk, K., Zisserman, A., & Schmid, C. Shape recognition with edge-based
features. In British Machine Vision Conference (BMVC'03) (Vol. 2, pp. 779-788). The British
Machine Vision Association, September 2003.
[2] Mohd Firdaus Zakaria, Hoo Seng Choon, and Shahrel Azmin Suandi, Object Shape
Recognition in Image for Machine Vision Application, International Journal of Computer
Theory and Engineering, Vol. 4, No. 1, February 2012.
[3] Jelmer de Vries, Object Recognition: A Shape-Based Approach using Artificial Neural
Networks, Marco Wiering.
[4] Kozo Ohtani and Mitsuru Baba, Shape Recognition and Position Measurement of an Object
Using an Ultrasonic Sensor Array, Hiroshima Institute of Technology Ibaraki University Japan.
[5] Das, M. R., & Barla, S. (2012). Object Shape Recognition (Doctoral dissertation).
[6] Nidhal El Abbadi and Lamis Al Saadi, Automatic Detection and Recognize Different
Shapes in an Image, Computer Science Department University of Kufa, Najaf, Iraq.
[8] Gulce Bal, Julia Diebold, Erin Wolf Chambers, Ellen Gasparovic, Ruizhen Hu, Kathryn
Leonard, Matineh Shaker, and Carola Wenk, Skeleton-Based Recognition of Shapes in Images
via Longest Path Matching, K. Leonard, S. Tari (eds.), Research in Shape Modeling,
Association for Women in Mathematics Series 1, DOI 10.1007/978- 3319-16348-2_6, Springer
International Publishing Switzerland & The Association for Women in Mathematics 2015.
[10] Jaruwan Toontham and Chaiyapon Thongchaisuratkrul, An Object Recognition and
Identification System Using the Hough Transform Method, International Journal of
Information and Electronics Engineering, Vol. 3, No. 1, January 2013.