Virtual Marker Using Python

A Mini Project Report Submitted in partial fulfillment of the requirement for the award of
the degree of

BACHELOR OF TECHNOLOGY
In
ELECTRONICS & COMMUNICATIONS ENGINEERING
By

SOMAROUTHU SAI VENKATA MANI KRISHNA (20B95A0420)
TENTU ANU SARANYA (19B91A04M3)
DONGA SATYA PAVAN KARTHIK (19B91A04K3)
SANDRALA MINISHA (19B91A04J9)

Under the esteemed guidance of


Dr. G. NAGA RAJU, M.E., Ph.D.

DEPARTMENT OF ELECTRONICS AND COMMUNICATIONS ENGINEERING


S.R.K.R. ENGINEERING COLLEGE (AUTONOMOUS)
(Affiliated to JNTU, KAKINADA)
(Recognized by A.I.C.T.E., Accredited by N.B.A., & Accredited by N.A.A.C. with ‘A’ Grade, New Delhi)

CHINNAMIRAM, BHIMAVARAM-534204
(2019-2023)

DEPARTMENT OF ELECTRONICS AND COMMUNICATIONS ENGINEERING

CERTIFICATE
This is to certify that the project work entitled
VIRTUAL MARKER USING PYTHON
is the bonafide work of
SOMAROUTHU SAI VENKATA MANI KRISHNA (20B95A0420), TENTU ANU SARANYA (19B91A04M3), DONGA SATYA PAVAN KARTHIK (19B91A04K3), and SANDRALA MINISHA (19B91A04J9), submitted in partial fulfillment of the requirement for the award of the degree of BACHELOR OF TECHNOLOGY in ELECTRONICS AND COMMUNICATIONS ENGINEERING during the year 2019-2023.

Guide:
Dr. G. NAGA RAJU, M.E., Ph.D.
Asst. Professor, Department of E.C.E.

Head of the Department:
Dr. N. UDAYA KUMAR, M.Tech., Ph.D., M.I.S.T.E., S.M.I.E.E.E., F.I.E.T.E., F.I.E.
Professor and Head of the E.C.E. Department

ACKNOWLEDGMENTS

Our most sincere and grateful acknowledgment to our alma mater SAGI RAMA
KRISHNAM RAJU ENGINEERING COLLEGE for allowing us to fulfill our aspirations
and for the successful completion of the project.
We are grateful to our principal Dr. M. JAGAPATHI RAJU, for providing us
with the necessary facilities to carry out our project.
We convey our sincere thanks to Dr. N.UDAYA KUMAR, Head of the
Department of Electronics and Communication Engineering, for his kind cooperation in the
successful completion of the project work.
We express our sincere thanks to our esteemed guide Dr. G. NAGA RAJU, Asst.
Professor, Department of Electronics and Communication Engineering, for giving valuable
and timely suggestions for the project work, constant encouragement, and support in times of
trouble throughout the project work.
We extend our sense of gratitude to all our teaching and non-teaching staff and all
our friends, who indirectly helped us in this endeavor.
-Project Associates

CONTENTS
TITLE OF THE PROJECT..................................................................................1
CERTIFICATE.....................................................................................................2
ACKNOWLEDGEMENT...................................................................................3
CONTENTS........................................................................................................4
ABSTRACT........................................................................................................5
CHAPTER 1: INTRODUCTION
1.1 Introduction about skin........................................................................6
● Epidermis
● Dermis
● Hypodermis
1.2 The conditions that affect the skin layer.......................................................7
1.3 Introduction about segmentation.............................................................8
1.4 Classes of Segmentation Technique…...............................................9
CHAPTER 2: PROPOSED METHODOLOGY
2.1 The block diagram of the operation system...............................................11
2.2 Contrast enhancement...............................................................................12
2.3 Conversion of color image into gray image.............................................13
● Normalized Co-Occurrence Matrix & Trace
2.4 Segmentation process..............................................................................14
● 2D Otsu’s Method & Build new 2D histogram
2.5 Feature extraction....................................................................................18
● GLCM & IQA
2.6 Decision Tree....................................................................................19
CHAPTER 3: RESULTS
3.1 Case 1: Malignant - Melanoma is detected....................................21
3.2 Case 2: Malignant - Basal Cell Carcinoma is detected...................22
3.3 Case 3: Malignant - Squamous Cell Carcinoma is detected...................23
3.4 Case 4: Benign - Melanocytic Nevi is detected............................. 24
3.5 Case 5: Benign - Seborrheic Keratoses is detected.........................25
3.6 Case 6: Benign - Acrochordon is detected.......................................... 26
CHAPTER 4 : CONCLUSION
REFERENCES................................................................................................. 28

ABSTRACT

Skin diseases are more common than other diseases. They may be caused by bacteria trapped in skin pores and hair follicles; fungal parasites or microorganisms living on the skin; viruses; a weakened immune system; or contact with allergens, irritants, or another person's infected skin. The advancement of laser- and photonics-based medical technology has made it possible to diagnose skin diseases much more quickly and accurately, but access to such diagnosis is still limited and very expensive. Image processing techniques therefore help to build an automated screening system for dermatology at an initial stage. The extraction of features plays a key role in classifying skin diseases, and computer vision offers a variety of techniques for their detection. This work contributes to the research on skin disease detection: we propose an image processing-based method to detect skin diseases.
This method takes a digital image of the diseased skin area, then uses image analysis to identify the type of disease. Our proposed approach is simple, fast, and does not require expensive equipment other than a camera and a computer. The approach takes a color image as input, pre-processes the image, applies threshold-based segmentation, and extracts features from the result. Finally, the results are shown to the user, including the type of disease, Asymmetric Index, Compactness Index, Diameter, Mean, Standard Deviation, and PSNR. The system successfully detects 6 different types of skin diseases.

CHAPTER-1

INTRODUCTION

1.1 Introduction about skin:

Skin is the body's largest organ, made of water, protein, fats, and minerals. Your skin protects your body from germs and regulates body temperature. Nerves in the skin help you feel sensations like hot and cold. Your skin, along with your hair, nails, oil glands, and sweat glands, is part of the integumentary system. "Integumentary" means a body's outer covering.

Three layers of tissue make up the skin:

● Epidermis, the top layer.
● Dermis, the middle layer.
● Hypodermis, the bottom or fatty layer.

1.1.1 Epidermis:

Your epidermis is the top layer of the skin, the part you can see and touch. Keratin, a protein inside skin cells, makes up the skin cells and, along with other proteins, sticks together to form this layer. The epidermis:

● Makes new skin: The epidermis continually makes new skin cells. These new cells
replace the approximately 40,000 old skin cells that your body sheds every day. You
have new skin every 30 days.
● Protects your body: Langerhans cells in the epidermis are part of the body’s immune
system. They help fight off germs and infections.

● Provides skin color: The epidermis contains melanin, the pigment that gives skin its
color. The amount of melanin you have determines the color of your skin, hair, and
eyes. People who make more melanin have darker skin and tan more quickly.

1.1.2 Dermis:

The dermis makes up 90% of the skin’s thickness. This middle layer of skin:

● Has collagen and elastin: Collagen is a protein that makes skin cells strong and
resilient. Another protein found in the dermis, elastin, keeps skin flexible. It also helps
stretched skin regain its shape.
● Makes oil: Oil glands in the dermis help keep the skin soft and smooth. Oil also prevents your skin from absorbing too much water when you swim or get caught in a rainstorm.
● Supplies blood: Blood vessels in the dermis provide nutrients to the epidermis,
keeping the skin layers healthy.

1.1.3 Hypodermis:

The bottom layer of skin, or hypodermis, is the fatty layer. The hypodermis:

● Cushions muscles and bones: Fat in the hypodermis protects muscles and bones from injuries when you fall or are in an accident.
● Has connective tissue: This tissue connects layers of skin to muscles and bones.
● Helps the nerves and blood vessels: Nerves and blood vessels in the dermis (middle layer) get larger in the hypodermis. These nerves and blood vessels branch out to connect the hypodermis to the rest of the body.
● Regulates body temperature: Fat in the hypodermis keeps you from getting too cold or hot.

1.2 The conditions that affect the skin layer:

As the body’s external protection system, your skin is at risk for various problems.
These include:

● Allergies like contact dermatitis and poison ivy rashes.


● Bug bites, such as spider bites, tick bites, and mosquito bites.
● Skin cancer, including melanoma.
● Skin infections like cellulitis.
● Skin rashes and dry skin.
● Skin disorders like acne, eczema, psoriasis, and vitiligo.
● Skin lesions, such as moles, freckles, and skin tags.
● Wounds, burns (including sunburns), and scars.

Technology-assisted skin cancer detection dates back to the 19th century. Before that, it was very difficult for doctors to identify a disease and treat patients, and diagnosis took many days. The most common skin diseases in 2013 were acne vulgaris, dermatitis, urticaria, and psoriasis, and skin illnesses have become common around the world. In 2016, the authors built a mobile application to detect and classify acne. The main objective of that research was to find a proper solution for identifying and classifying acne severity from photos taken with a cellphone. Three different segmentation methods were used, of which two-level k-means clustering outperformed the others. For classification, two machine learning methods were used, and the fuzzy c-means (FCM) method outperformed the Support Vector Machine (SVM). The authors noted that the texture method they used is insufficient and needs further improvement.

Recent research done in 2018 focused on identifying acne using smartphone images.
However, they mainly classify Acne into only two subtypes namely papules and pustules.
They have used a facial recognition algorithm to separately identify features in the face and
classify acne accordingly. One of the main limitations the authors have presented is that the
presence of more than one face and also bad lighting will affect the classification. However,
the main disadvantage seen in this study is even though the accuracy is high they have
classified acne only into two types but dermatologists and also websites have specifically
stated that Acne can be of mainly six types as mentioned above.

In research done by Sunyani Technical University in 2019, a web-based approach was used to diagnose skin diseases. The researchers concluded that a CNN is sufficient to extract features from the images. They also managed to reduce the computational time (to 0.0001 seconds) while increasing accuracy. However, the study did not specifically focus on acne subtype classification, and since it is mainly a web-based application, the issues of portability and subtype classification remain to be addressed.

To overcome those problems, we now use an image segmentation method to identify the six types of skin disease. The thresholding method (8-bit) is used for this.

1.3 What is image segmentation?

Image segmentation is the process of partitioning a digital image into multiple image
segments, also known as image regions or image objects (sets of pixels). The goal of
segmentation is to simplify and/or change the representation of an image into something
more meaningful and easier to analyze.
Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in
images. More precisely, image segmentation is the process of assigning a label to every pixel
in an image such that pixels with the same label share certain characteristics.
1.4 Classes of Segmentation Technique

There are two classes of segmentation techniques.

● Classical computer vision approaches


● AI-based techniques

The simplest method of image segmentation is called the thresholding method. This method
is based on a clip-level (or a threshold value) to turn a gray-scale image into a binary image.

The key to this method is selecting the threshold value (or values when multiple levels are selected). Several popular methods are used in industry, including the maximum entropy method, balanced histogram thresholding, Otsu's method (maximum variance), and k-means clustering.
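As an illustrative sketch (in Python, rather than the MATLAB this report uses later), Otsu's method picks the clip level that maximizes the between-class variance of the histogram. The eight-pixel list below is a made-up example:

```python
def otsu_threshold(pixels, levels=256):
    # Pick the clip level that maximizes between-class variance
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

pixels = [10, 12, 11, 10, 200, 210, 205, 198]    # two intensity clusters
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]     # the resulting binary image
```

The threshold lands between the two clusters, so the dark pixels map to 0 and the bright pixels to 1.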

Recently, methods have been developed for thresholding computed tomography (CT) images.
The key idea is that, unlike Otsu's method, the thresholds are derived from the radiographs
instead of the (reconstructed) image.

New methods suggest the use of multi-dimensional fuzzy rule-based non-linear thresholds. In these works, the decision over each pixel's membership to a segment is based on multi-dimensional rules derived from fuzzy logic and evolutionary algorithms, based on the image lighting environment and application.

CHAPTER-2

PROPOSED METHODOLOGY
2.1 The block diagram of the operation system:

Fig:1.1 The block diagram of the operating system of skin disease detection

In this methodology, we used thresholding segmentation. Fig. 1.1 shows the sequence of operations performed. Before segmentation, the image is converted to grayscale, which reduces the three-channel (RGB) image to a single two-dimensional channel and makes further processing easier.

Our methodology starts with an input image. The image is processed by preprocessing and segmentation, followed by the decision tree stage, which classifies the type of disease with the help of a given data set.

The preprocessing includes :

● Contrast enhancement
● RGB to grayscale conversion

2.2 CONTRAST ENHANCEMENT:

The first stage after reading the input image is contrast enhancement. Three functions are particularly suitable for contrast enhancement: imadjust, histeq, and adapthisteq. Using the default settings, we compare the effectiveness of the three techniques:
● imadjust increases the contrast of the image by mapping the values of the input
intensity image to new values such that, by default, 1% of the data is saturated at
low and high intensities of the input data.
● histeq performs histogram equalization. It enhances the contrast of images by
transforming the values in an intensity image so that the histogram of the output
image approximately matches a specified histogram (uniform distribution by
default).
● adapthisteq performs contrast-limited adaptive histogram equalization. Unlike
histeq, it operates on small data regions (tiles) rather than the entire image. Each
tile's contrast is enhanced so that the histogram of each output region
approximately matches the specified histogram (uniform distribution by default).
The contrast enhancement can be limited to avoid amplifying the noise which
might be present in the image
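As a minimal sketch of the idea behind histeq, plain histogram equalization maps each gray level through the normalized cumulative distribution of the image. The code below is an illustrative Python version (not the MATLAB functions named above), and the low-contrast pixel list is a made-up example:

```python
def equalize(pixels, levels=256):
    # Histogram equalization via the normalized cumulative distribution
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)  # assumes the image is not a single constant value
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

low_contrast = [100, 100, 101, 101, 102, 102, 103, 103]
stretched = equalize(low_contrast)   # levels spread across the full 0-255 range
```

The four gray levels crammed into 100-103 are stretched to 0, 85, 170, and 255, which is exactly the contrast gain the report relies on.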

2.3 CONVERSION OF COLOR IMAGE OF RGB INTO GRAY :

To extend the concept of co-occurrence matrices to n-dimensional Euclidean space, a mathematical model for the above concepts is required. We treat our universal set as Z^n. Here Z^n = Z × Z × … × Z, the Cartesian product of Z taken n times with itself, where Z is the set of all integers. A point (or pixel) X in Z^n is an n-tuple of the form X = (x1, x2, …, xn), where xi ∈ Z for all i = 1, 2, …, n. An image I is a function from a subset of Z^n to Z; that is, f: I → Z where I ⊂ Z^n. If X ∈ I, then X is assigned an integer Y such that Y = f(X). Y is called the intensity of the pixel X, and the image is called a grayscale image in the n-dimensional space Z^n. Volumetric data [11] can be treated as three-dimensional images, i.e., images in Z^3.

Generalized co-occurrence matrices: Consider a grayscale image I defined in Z^n. The grey level co-occurrence matrix is defined to be a square matrix Gd of size N, where N is the total number of grey levels in the image. The (i, j)th entry of Gd represents the number of times a pixel X with intensity value i is separated from a pixel Y with intensity value j at a particular distance k in a particular direction d, where the distance k is a non-negative integer and the direction d = (d1, d2, …, dn) with di ∈ {0, k, −k} for all i = 1, 2, …, n.

As an illustration, consider the grayscale image in Z^3 with the four intensity values 0, 1, 2, and 3, represented as a three-dimensional matrix of size 3 × 3 × 3 whose three slices are:

0 0 1    1 2 3    1 3 0
0 1 2    0 2 3    0 3 1
0 2 3    0 1 2    3 2 1

The three-dimensional co-occurrence matrix Gd for this image in the direction d = (1, 0, 0) is the 4 × 4 matrix

     [ 1 3 2 1 ]
Gd = [ 0 0 3 1 ]
     [ 0 1 0 3 ]
     [ 1 1 1 0 ]

Note that

      [ 1 0 0 1 ]
G−d = [ 3 0 1 1 ] = Gd'
      [ 2 3 0 1 ]
      [ 1 1 3 0 ]

It can be seen that X + d = Y, so that G−d = Gd', where Gd' is the transpose of Gd. Hence Gd + G−d is a symmetric matrix. Since G−d = Gd', we say that Gd and G−d are dependent (not independent); therefore the directions d and −d are called dependent.

Theorem: If X ∈ Z^n, the number of independent directions from X in Z^n is (3^n − 1)/2.

2.3.1 Normalized Co-Occurrence Matrix :

Consider N = Σi Σj Gd(i, j), the total number of co-occurrence pairs in Gd. Let GNd(i, j) = Gd(i, j) / N. GNd is called the normalized co-occurrence matrix, where the (i, j)th entry of GNd is the joint probability of co-occurrence of pixels with intensity i and pixels with intensity j, separated by a distance k, in a particular direction d.
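The definitions above can be sketched for the ordinary two-dimensional case in Python (illustrative only; the 3×3 image and the offsets are made-up examples). The sketch also checks the transpose relation between Gd and G−d noted above:

```python
def glcm(img, d, levels):
    # Count intensity pairs (i, j) separated by the offset d = (dr, dc)
    rows, cols = len(img), len(img[0])
    dr, dc = d
    G = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                G[img[r][c]][img[r2][c2]] += 1
    return G

img = [[0, 0, 1],
       [0, 1, 2],
       [0, 2, 3]]
Gd = glcm(img, (0, 1), 4)       # direction d
Gmd = glcm(img, (0, -1), 4)     # opposite direction -d
N = sum(sum(row) for row in Gd)                            # total pairs
GN = [[Gd[i][j] / N for j in range(4)] for i in range(4)]  # normalized matrix
```

Counting the same pairs in the opposite direction simply swaps the roles of i and j, which is why G−d equals the transpose of Gd, and the entries of GN sum to 1 as a joint probability must.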

2.3.2 TRACE:

In addition to the well-known Haralick features such as Angular Second Moment, Contrast,
Correlation, etc. listed in, we define a new feature from the normalized co-occurrence matrix,
which can be used to identify constant regions in an image. For convenience, we consider
n=2 so that the image is a two-dimensional grayscale image and the normalized co-
occurrence matrix becomes the traditional Gray Level Co-occurrence Matrix.

Fig. 1.2.1: the color (RGB) image. Fig. 1.2.2: the grayscale image.

Figs. 1.2.1 and 1.2.2 illustrate the conversion of the color (RGB) image into grayscale.
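For reference, the per-pixel conversion can be sketched in Python using the ITU-R BT.601 luminosity weights, which are the weights MATLAB's rgb2gray applies:

```python
def rgb_to_gray(r, g, b):
    # ITU-R BT.601 luminosity weights, as used by MATLAB's rgb2gray
    return round(0.2989 * r + 0.5870 * g + 0.1140 * b)
```

Green is weighted most heavily because the eye is most sensitive to it; pure white maps to 255, pure black to 0, and pure red to a mid-dark gray.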

2.4 Segmentation process:

Segmentation is the main process of this methodology. The segmentation threshold is chosen from the histogram according to our required output parameters.

2.4.1 2D Otsu's Method:

Segmentation is performed after the preprocessing of the image. An image f(x, y) of size M × N is represented in L gray levels. Its corresponding averaged image g(x, y) over a k × k neighborhood is defined by

g(x, y) = (1/k²) Σ_{a=−(k−1)/2}^{(k−1)/2} Σ_{b=−(k−1)/2}^{(k−1)/2} f(x + a, y + b).   (1)

Denoting the gray level at pixel (x, y) in images f(x, y) and g(x, y) as i and j respectively, we obtain a gray-level pair (i, j) for each pixel. Let Fij be the frequency of the pair (i, j); its joint probability is given by

pij = Fij / (M × N),   (2)

where i, j = 0, 1, …, L − 1 and Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} pij = 1.

For one image we can build a 2D histogram with i and j as the two dimensions. For example, Figure 1(b) shows the 2D histogram of the image in Figure 1(a), and the projection of the 2D histogram is shown in Figure 1(c). Using an arbitrary threshold vector (s, t), the 2D histogram can be divided into four areas as illustrated in Figure 1(d). Pixels belonging to the background or the foreground contribute mainly to the near-diagonal elements, since those areas are relatively smooth and there is little difference between the original gray level and the smoothed one. On the contrary, most pixels of noise and edges lie away from the diagonal. Therefore, pixels in regions 1 and 4 can be considered foreground (or background) and background (or foreground) respectively, whereas those in regions 2 and 3 can be regarded as noise and edges. In this way, we obtain a segmentation of the image.
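The construction of the 2D histogram from a gray image and its k × k averaged image can be sketched in Python as follows (illustrative only; edge pixels here simply use the neighbors that exist, which is one of several possible border policies):

```python
def averaged_image(img, k=3):
    # k x k neighborhood mean; edge pixels use only the neighbors that exist
    rows, cols = len(img), len(img[0])
    half = (k - 1) // 2
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            vals = [img[r2][c2]
                    for r2 in range(max(0, r - half), min(rows, r + half + 1))
                    for c2 in range(max(0, c - half), min(cols, c + half + 1))]
            row.append(round(sum(vals) / len(vals)))
        out.append(row)
    return out

def hist2d(img, avg, levels):
    # Joint frequency H[i][j] of (gray level i, averaged gray level j)
    H = [[0] * levels for _ in range(levels)]
    for r in range(len(img)):
        for c in range(len(img[0])):
            H[img[r][c]][avg[r][c]] += 1
    return H

# Smooth regions put their mass near the diagonal i == j
img = [[1, 1, 1],
       [1, 1, 1],
       [6, 6, 6]]
H = hist2d(img, averaged_image(img), 8)
```

In the smooth interior of each region the averaged value is close to the original gray level, which is why foreground and background pixels cluster near the diagonal of H.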

For the required output, the following characteristics should be specified properly in the program for better efficiency:

● Intensity characteristics of the objects
● Size of the objects

Figure 1.3: Illustration of the 2D histogram. (a) The original image, (b) 2D histogram, (c) projection of the 2D histogram, and (d) regional division.

From Fig. 1.3 we can clearly observe image segmentation using the 2D histogram and how it differs from the traditional approach.

The thresholds obtained with the traditional 2D Otsu method differ widely. This causes a large number of near-diagonal pixels belonging to the foreground or background to be located in region 3 (pointed to by the red arrow) of the 2D histogram. From (2) we know that these pixels will be incorrectly regarded as noise and edges. To illustrate this problem more evidently, we mark the pixels in regions 1 and 4 as dark pixels and those in regions 2 and 3 as white pixels. It is evident that a large number of foreground pixels are located in regions 2 and 3 and are regarded as noise and edges by the traditional 2D Otsu's method. To accomplish image segmentation, we can label the pixels in regions 2 and 3 as belonging to the foreground and obtain one segmentation result; conversely, we obtain another segmentation if the pixels in regions 2 and 3 are labeled as belonging to the background. While one of these segmentations can be regarded as similar to the correct one, this implies that, to obtain satisfactory segmentation results, we must determine the appropriate assignment of the pixels in regions 2 and 3, which is usually not a trivial task. The reason for this problem is that the traditional 2D Otsu's method yields an incorrect optimal threshold vector in the presence of Salt & Pepper noise; in other words, the traditional 2D Otsu method is sensitive to Salt & Pepper noise. In the example, if we label the pixels in regions 2 and 3 as foreground, the segmentation results are still acceptable.

2.4.2 Building a new 2D histogram:

In the traditional 2D Otsu method, we use the gray level and average gray level of pixels to build the 2D histogram. Since average filtering is suitable for removing Gaussian noise but not good at removing Salt & Pepper noise, the traditional 2D Otsu's method is not robust against Salt & Pepper noise. On the other hand, median filtering can remove Salt & Pepper noise effectively. This observation motivates us to use both average filtering and median filtering in 2D Otsu's method, to obtain robustness against both Gaussian and Salt & Pepper noise. We therefore call this method the Median-Average 2D Otsu's method (MAOTSU 2D). The method is detailed as follows. First, we calculate the median of pixels within a k × k neighborhood of an image f(x, y) and obtain the median image m(x, y) as

m(x, y) = med{ f(x + a, y + b) | a, b = −(k − 1)/2, −(k − 3)/2, …, (k − 1)/2 }.

In the next step we calculate the average image G(x, y) of the median image m(x, y) as

G(x, y) = (1/k²) Σ_{c=−(k−1)/2}^{(k−1)/2} Σ_{d=−(k−1)/2}^{(k−1)/2} m(x + c, y + d).

Since we smooth the image with median filtering followed by average filtering, we call the combination of these filters a median-average filter and the smoothed image a median-average image. With the median-average image G(x, y) of the image f(x, y), we can build the 2D histogram.

Histograms are a type of bar plot for numeric data that group the data into bins. After you create a histogram object, you can modify aspects of the histogram by changing its property values. This is particularly useful for quickly modifying the properties of the bins or changing the display. The built-in MATLAB function imhist is used for this.
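A one-dimensional sketch of the median-average filter in Python (illustrative; the actual method filters a 2D image, and the spike values below are made up) shows how the median stage removes Salt & Pepper spikes before averaging:

```python
from statistics import median

def median_then_average(signal, k=3):
    # Median filter first (kills isolated spikes), then a mean filter
    half = (k - 1) // 2
    n = len(signal)
    med = [median(signal[max(0, i - half):i + half + 1]) for i in range(n)]
    out = []
    for i in range(n):
        window = med[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

noisy = [10, 10, 255, 10, 10, 10, 0, 10]   # 255 and 0 are Salt & Pepper spikes
smooth = median_then_average(noisy)
```

A pure averaging filter would smear the 255 spike across its neighbors; taking the median first discards it entirely, which is exactly the robustness MAOTSU 2D is after.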

The following graph shows the underlying distribution for the normally distributed data. You can, however, use the 'pdf' histogram plot to determine the underlying probability distribution of the data by comparing it against a known probability density function. The probability density function for a normal distribution with mean μ and standard deviation σ is

f(x | μ, σ) = (1 / (σ√(2π))) exp[−(x − μ)² / (2σ²)].

Overlay a plot of the probability density function for a normal distribution with the required mean and standard deviation for the desired result.

Fig. 1.4: Histogram of the grayscale image I; by default the histogram has 256 bins.

2.5 FEATURE EXTRACTION:

Feature extraction is a technique used to reduce a large input data set to its relevant features. This is done with dimensionality reduction to transform large input data into smaller, meaningful groups for processing. We use two methods for feature extraction: 1. GLCM and 2. IQA.

2.5.1 GLCM (Gray Level Co-occurrence Matrix):

In this method, the texture of the image is analyzed. The gray level co-occurrence matrix is created by counting the number of times each pair of specific values in a specified spatial relationship occurs in the image. A statistical method of examining texture that considers the spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values in a specified spatial relationship occur in the image, creating a GLCM, and then extracting statistical measures from this matrix. The texture filter functions, described in calculating statistical texture, cannot provide information about shape, that is, the spatial relationships of pixels in an image. After creating the GLCMs using graycomatrix, you can derive several statistics from them using graycoprops. These statistics provide information about the texture of an image, such as the contrast, correlation, energy, and homogeneity of the image.
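A few of these GLCM statistics (contrast, energy, homogeneity) follow directly from their standard definitions and can be computed with a short Python sketch (illustrative, not MATLAB's graycoprops; the 2×2 matrix is a made-up example):

```python
def glcm_stats(G):
    # Contrast, energy, and homogeneity from a co-occurrence matrix
    n = len(G)
    total = sum(sum(row) for row in G)
    P = [[G[i][j] / total for j in range(n)] for i in range(n)]  # normalize
    contrast = sum((i - j) ** 2 * P[i][j] for i in range(n) for j in range(n))
    energy = sum(P[i][j] ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(P[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return contrast, energy, homogeneity
```

A purely diagonal matrix (all co-occurring pairs have equal intensities) gives zero contrast and perfect homogeneity, matching the intuition that a uniform texture has no gray-level transitions.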

2.5.2 IQA(Image Quality Assessment):

IQA (image quality assessment) is mainly focused on the accuracy of images under different capture conditions, for compression and signal processing, and also on how pleasant the image is for a human viewer. The image quality assessment features MSE (mean squared error) and PSNR (peak signal-to-noise ratio) are extracted from the segmented images. The MSE is computed by averaging the squared intensity differences of the distorted and reference image pixels; the related quantity is the peak signal-to-noise ratio (PSNR). The assessments are shown in the output with a table of information including PSNR.
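MSE and PSNR follow directly from their definitions; a small Python sketch (illustrative, with made-up 1×2 images and the usual 8-bit peak value of 255) is:

```python
import math

def mse(ref, dist):
    # Mean squared error: average squared pixel difference
    diffs = [(a - b) ** 2 for ra, rb in zip(ref, dist) for a, b in zip(ra, rb)]
    return sum(diffs) / len(diffs)

def psnr(ref, dist, peak=255):
    # Peak signal-to-noise ratio in dB; identical images give infinity
    e = mse(ref, dist)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)
```

Higher PSNR means the segmented image is closer to the reference; identical images have zero MSE and an infinite PSNR.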

2.6 DECISION TREE:

Decision trees are a popular tool for classification and prediction; here, a decision tree is used for the classification of the images. A decision tree is a flowchart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node (terminal node) holds a class label. The decision tree assigns certain probabilities to the output. Decision trees classify instances by sorting them down the tree from the root to some leaf node, which provides the classification of the instance. An instance is classified by starting at the root node of the tree, testing the attribute specified by this node, then moving down the tree branch corresponding to the value of the attribute. This process is then repeated for the subtree rooted at the new node, and we obtain an output based on the dataset.

ID3 is an algorithm used to construct a decision tree for a given dataset. In the beginning, the set S is the root node. At each level of the tree, it iterates through every unused attribute of the set S and calculates the entropy or the information gain IG(S) of that attribute. It then selects the attribute which has the smallest entropy or largest information gain. The set S is then split or partitioned by the selected attribute to produce subsets of the data.
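The entropy and information-gain computations at the heart of ID3 can be sketched in Python (illustrative; the four-row toy dataset in the test is made up):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of the class-label distribution
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # Entropy of S minus the weighted entropy of the subsets split on `attr`
    n = len(labels)
    parts = {}
    for row, lab in zip(rows, labels):
        parts.setdefault(row[attr], []).append(lab)
    remainder = sum(len(p) / n * entropy(p) for p in parts.values())
    return entropy(labels) - remainder
```

An attribute that splits the data into pure subsets (each subset contains a single class) has zero remainder, so its information gain equals the full entropy of S, and ID3 would pick it first.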

After the completion of the above process, the output is obtained using MATLAB 2018b software. Through the GUI, the input image is converted to gray, pre-processed, and finally the disease is detected and classified against the database.

CHAPTER-3

RESULTS AND DISCUSSIONS

The system is implemented in MATLAB 2018b. The research work was tested under
different conditions to find strengths and weaknesses in different components. The
implementation results are shown in the following figures. Initially, the input color image is
pre-processed, segmented using thresholding, then features are extracted using GLCM and IQA, and the image is classified using a decision tree. Our model can detect 6 skin diseases. It also
provides additional information like the Asymmetric Index, Compactness Index, Diameter,
Mean, Standard deviation, and PSNR.

3.1 Case 1: Malignant - Melanoma is detected.

FIG 1.5: Figure indicates detection of Malignant-Melanoma

3.2 Case 2: Malignant - Basal Cell Carcinoma is detected.

FIG 1.6: Figure indicates detection of Malignant - Basal Cell Carcinoma

3.3 Case 3: Malignant - Squamous Cell Carcinoma is detected.

FIG 1.7: Figure indicates detection of Malignant- Squamous Cell Carcinoma

3.4 Case 4: Benign - Melanocytic Nevi is detected.

FIG 1.8: Figure indicates detection of Benign - Melanocytic Nevi

3.5 Case 5: Benign - Seborrheic Keratoses is detected.

FIG 1.9: Figure indicates detection of Benign - Seborrheic Keratoses

3.6 Case 6: Benign - Acrochordon is detected.

FIG 1.10: Figure indicates detection of Benign - Acrochordon

CHAPTER 4

CONCLUSION
Detection of skin diseases is a very important step to reduce death rates, disease
transmission, and the development of skin disease. Clinical procedures to detect skin diseases
are very expensive and time-consuming. Image processing techniques help to build
automated screening systems for dermatology at an initial stage. The extraction of features
plays a key role in helping to classify skin diseases.

This system uses images of skin captured with a camera to detect whether the skin is healthy or not; if not, it is classified as Melanoma, Basal Cell Carcinoma, Squamous Cell Carcinoma, Melanocytic Nevi, Seborrheic Keratoses, or Acrochordon. The proposed system uses image processing and machine learning techniques. The process begins with pre-processing an input image using contrast enhancement and grayscale conversion. The global-value thresholding technique is used to segment the pre-processed image, through which the actual affected region is obtained. Features like PSNR, Asymmetric Index, Compactness Index, Diameter, Mean, and Standard Deviation are extracted. These features are used to classify the image into one of these 6 categories.

This system can be used by dermatologists to give a better diagnosis and treatment to the
patients. The system can be used to diagnose skin diseases at a lower cost. In the future, this
system can be improved to detect and classify more diseases as well as their severity.

REFERENCES

[1] Manerkar Mugdha S, Harsh Shashwata, Saxena Juhi, Sarma Simanta P, Dr. U.
Snekhalatha, Dr. M. Anburajan. (2016). Classification of Skin Disease Using multi SVM
Classifier. 3rd International Conference on Electrical, Electronics, Engineering Trends,
communication, Optimization and Sciences (EEECOS), 363-368.

[2] Alam, Md & Munia, Tamanna Tabassum Khan & Tavakolian, Kouhyar & Vasefi, Fartash
& MacKinnon, Nicholas & Fazel-Rezai, Reza. (2016). Automatic and severity Measurement
of Eczema Using Image Processing. Engineering in Medicine and Biology Society
Conference - August 2016. 1365-1368

[3] Sumithra R, Mahamad Suhil, D. S. Guru. (2015). Segmentation and Classification of Skin
Lesions for Disease Diagnosis. International Conference on Advanced Computing
Technologies and Applications (ICACTA2015), 45, 76 – 85.

[4] Sheha Mariam A., Mabrouk Mai S, Sharawy Amr. (2012). Automatic Detection of
Melanoma Skin Cancer. International Journal of Computer Applications ( 0975 – 8887),
42(20), 22-26

[5] https://en.wikipedia.org/wiki/Histogram_equalization

[6] https://www.tutorialspoint.com/dip/grayscale_to_rgb_conversion.htm

[7] https://en.wikipedia.org/wiki/Thresholding_(image_processing)

[8] https://www.ucalgary.ca/Whalley/glcm1

[9] https://en.wikipedia.org/wiki/Image_quality#Image_quality_assessment_methods

[10] https://en.wikipedia.org/wiki/Decision_tree

[11] Zeljkovic, V., Druzgalski, C., Bojic-Minic, S., Tameze, C., & Mayorga, P. (2015) “
Supplemental Melanoma Diagnosis for Darker Skin Complexion Gradients.” Pan American
Health Care Exchanges
