
A PROJECT REPORT ON

BRAIN TUMOR EXTRACTION USING MRI SCANS


Submitted in partial fulfillment for award of
Bachelor of Technology
Degree
In
Electronics Engineering

Under the guidance of


Dr. Krishna Raj
Professor,
Department of Electronics Engineering,
HBTU, Kanpur

By
Shreya Vaish (211/14)
Kirtivardhan Singh (193/14)
Nishant Kumar (197/14)
Rishabh Gupta (204/14)

CERTIFICATE

It is certified that Ms. SHREYA VAISH, Mr. KIRTIVARDHAN SINGH, Mr. NISHANT
KUMAR and Mr. RISHABH GUPTA, students of final-year B.Tech. Electronics Engineering,
H.B.T.U. Kanpur, have been working on the project titled "BRAIN TUMOR EXTRACTION
USING MRI SCANS" under my guidance and supervision. They have shown sincere effort and
keen interest during the preparation of this project report, and this work has been submitted as a
project report for the award of the Bachelor of Technology degree in Electronics Engineering.

(Dr. Krishna Raj)

Professor,
Department of Electronics Engineering
HBTU KANPUR

ACKNOWLEDGEMENT

I would like to take this opportunity to express my gratitude to all those who have helped
in various ways in making my project on "BRAIN TUMOR EXTRACTION USING MRI
SCANS" successful. I would especially like to thank my project supervisor, Dr. KRISHNA
RAJ, Professor, Electronics Engineering Department, H.B.T.U. Kanpur, for his valuable
guidance, advice and positive gestures during the preparation of the project. I convey my sincere
thanks to all the faculty members of the department and to my classmates for their valuable support.

(SHREYA VAISH)

Sr. No. : 211/14

(KIRTIVARDHAN SINGH)

Sr. No. : 193/14

(NISHANT KUMAR)

Sr. No. : 197/14

(RISHABH GUPTA)

Sr. No. : 204/14

Date: …………. B.Tech. (Final Year)

Place: ………… Dept. of Electronics Engineering

CONTENTS

Abstract

1. Historical Background

2. Overview of the Project

3. Process Steps for Tumor Extraction

3.1 Grayscale Conversion

3.2 Wiener Filter

3.3 High Pass Filter

3.4 Median Filter

3.5 Threshold Segmentation

3.6 Watershed Segmentation

3.7 Area Calculation of Tumor

4. MRI

5. Flowchart of the Brain Tumor Extraction on a Binary Image

6. Mathematical Model of the Project

7. List of Functions Used

8. Implementation of the Above Process Steps

9. GUI Application

10. Some Other Instances of the Brain Tumor Extraction

11. Conclusion

References

APPENDIX A: Program for Brain Tumor Extraction

Abstract

Brain tumor extraction forms a crucial part of the detection of cancer. One of the best
ways to learn the primary characteristics of a tumor is through MRI scans. MRI scans are very
expensive and require a radiologist, whose diagnosis fees are steep. Still, a radiologist is a human
being, and human error is inevitable; wrong reports might be passed on to the respective doctors,
which can result in blunders. To reduce this effect, image processing can be used to identify
some details of the tumor and enhance it relative to the rest of the brain, so that a clearer view of
the image can be obtained. In our project we have developed, with the help of image processing
in MATLAB, a model which extracts the tumor portion of the brain from the rest of the skull on
a binary image for better diagnosis. Along with this, the tumor area in square pixels is also
shown, which can easily be converted to standard measuring units once the scale of the MRI
machine is known. A graphical user interface of the same model has been prepared to make the
whole workflow handy for operation by the required personnel.

CHAPTER 1:

HISTORICAL BACKGROUND

Advancements in technology directly affected the ability of neurosurgeons to remove tumors
and, at the same time, decreased operative risk. William Macewen is considered to have
performed the first successful brain tumor removal in 1879, in a young woman. In the late 19th
century, accounts began to be published on attempts to remove brain tumors (meningiomas).
Macewen, Victor Horsley, and William W. Keen began to perform aggressive maneuvers in
attempting brain tumor removal. Nevertheless, they described only limited systematic diagnostic
processes beyond localization by clinical examination. [1]

Other practitioners were rapidly adopting new technology for use in neurosurgery. At least by
the first decade of the 20th century, only a few years after X-rays were introduced to the world
in late 1895, Fedor Krause was using X-rays routinely for assistance in localizing
intracranial tumors. [1]

In 1911 Krause had written an entire chapter devoted to “Radiography,” in which he promoted
the benefits of X-Rays for diagnosis of masses and tumors that had changed the contours of the
skull or that had left osseous deposits. [1]

As seen in another article published in 1928 in Radiology, cranial nerve VIII schwannomas were
diagnosed by demonstrating expanded internal auditory canals on oblique skull radiographs,
optic nerve gliomas could be inferred by enlargement of the optic canals, and cranial nerve
tumors were suggested by expansion of their corresponding outlet foramina. [2]

In 1954, the authors of an article in Radiology reported the use of nuclear scanning in 200
patients and concluded that accurate localization of brain tumors was possible in 46% (the rate of
localization of nontumoral lesions was about the same). By this time, the use of nuclear scanning
as the first noninvasive method to localize brain tumors was already fairly routine. [2]

In the mid-1960s, Kuhl et al. reported in Radiology the development of the first practical
transverse, or cross-sectional, isotopic imaging method for brain lesions, which resulted in
improved visualization of tumors located in the posterior fossa. [2]

One of the most famous names in medical imaging is that of Sir Godfrey N. Hounsfield, FRS, an
engineer who, while working at EMI in England, created the first CT scanner. In 1971, a
head-only scanner was installed in London, England, and was soon thereafter installed in the United
States at the Mayo Clinic. In November 1978, the first article to show how imaging findings,
specifically contrast enhancement, correlated with astrocytoma grade was published in
Radiology. [2]

In 1984, the first two articles dealing with MR imaging of brain tumors appeared in Radiology.
In the first report, T1 measurements of brain masses were performed, and the authors found
that astrocytomas had the longest T1 and lipomas the shortest. The second article was a
comparison between the then well-established CT and MR imaging. [2]

The first 16-section scanner was introduced in 2001, and a 64-section scanner became available in
2004. With respect to brain tumor imaging, the greatest advantage of this technique is speed,
which allows even the creation of CT angiograms and the acquisition of time-dependent blood
perfusion measurements. [2]

CHAPTER 2:

OVERVIEW OF THE PROJECT

Tumor is defined as an abnormal growth of tissue. A brain tumor is an abnormal mass of
tissue in which cells grow and multiply uncontrollably, seemingly unchecked by the mechanisms
that control normal cells. Brain tumors can be either malignant (cancerous) or benign
(non-cancerous).

Magnetic Resonance Imaging (MRI) is an advanced medical imaging technique used to produce
high quality images of the parts contained in the human body. MRI is often used when treating
brain tumors and when imaging the ankle and foot. From these high-resolution images, we can
derive detailed anatomical information to examine human brain development and discover
abnormalities.

Brain tumour diagnosis is quite difficult because of the diverse shapes, sizes, locations and
appearances of tumours in the brain. Detection is very hard in the early stages because an
accurate measurement of the tumour cannot be obtained. Radiologists find it difficult to give an
accurate description of the tumour in the form of reports in a short span of time. Image
processing based extraction is a faster and more accurate way of determining the tumour in the
patient's brain, free from human error or wrong judgment of the type of the tumour, which could
otherwise risk the patient's life.

Pre-processing of MRI images is the primary step in image analysis. It performs image
enhancement and noise reduction to improve the image quality, after which some morphological
operations are applied to detect the tumor in the image. The morphological operations are
basically applied on some assumptions about the size and shape of the tumour, and in the end
the tumour is mapped onto the original grayscale image with intensity 255 to make the tumour
visible in the image.

The algorithm has two stages: first the pre-processing of the given MRI image, and after that
segmentation, followed by morphological operations. The steps of the algorithm are as follows: [3]

1) Give MRI image of brain as input.

2) Convert it to gray scale image.

Input [MRI Image]

Preprocessing

Feature Extraction

Segmentation

Classification (Output)

Fig 1: Block diagram representation [3]

3) Apply high pass filter for sharpening.

4) Apply median filter to enhance the quality of image.

5) Compute threshold segmentation.

6) Compute watershed segmentation.

7) Compute morphological operation.

8) Finally output will be a tumor region.
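For orientation, the sketch below condenses these steps into the MATLAB calls used later in this report; the complete program is given in Appendix A, the input file name is only a placeholder, and the cluster index holding the tumor varies from scan to scan.

% Condensed sketch of the pipeline (full program in Appendix A).
img = rgb2gray(imread('mri_scan.png'));   % steps 1-2: read and grayscale
w   = wiener2(img, [12 12]);              % Wiener denoising
k   = [0 -1.25/4 0; -1.25/4 1.25 -1.25/4; 0 -1.25/4 0];
hpf = conv2(double(w), k, 'same');        % step 3: high pass sharpening
med = medfilt2(mat2gray(hpf), [3 3]);     % step 4: median filtering
idx = kmeans(med(:), 4);                  % step 5: threshold segmentation
bw  = reshape(idx, size(med)) == 3;       % cluster assumed to hold the tumor
D   = -bwdist(~bw);  D(~bw) = -Inf;
L   = watershed(D);                       % step 6: watershed segmentation
bw(L == 0) = 0;                           % steps 7-8: extracted tumor region
imshow(bw);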

CHAPTER 3:

PROCESS STEPS FOR TUMOR EXTRACTION

3.1 Grayscale conversion:

Grayscale conversion is the first basic step of almost every image processing model. It separates
the luminance plane from chrominance (plane containing color) plane. Grayscale is a range of
shades of gray without apparent color. The darkest possible shade is black, which is the total
absence of transmitted or reflected light. The lightest possible shade is white, the total
transmission or reflection of light at all visible wavelengths. The illusion of gray shading in a
halftone image is obtained by rendering the image as a grid of black dots on a white background
(or vice versa), with the sizes of the individual dots determining the apparent lightness of the
gray in their vicinity. [4]

The brightness levels of the red (R), green (G) and blue (B) components are each represented as
a number from decimal 0 to 255, or binary 00000000 to 11111111. For every pixel in a
red-green-blue (RGB) grayscale image, R = G = B. The lightness of the gray is directly proportional
to the number representing the brightness levels of the primary colors. Black is represented by R
= G = B = 0 or R = G = B = 00000000, and white is represented by R = G = B = 255 or R = G =
B = 11111111.

• It increases the signal to noise ratio (SNR) of the image.

In brain scans, the tumor is basically the illuminated region, i.e. every other detail can be
considered unwanted (noise). Thus, grayscale conversion increases the signal content of the
scan.

• Decreases the complexity.

Considering RGB values would unnecessarily increase the complexity and cost of
the designed model.

• Algorithms used are compatible with it.

The extraction involves the analysis of contour formation and values of the pixels of the
scans. This information is readily available in the grayscale format of the image.
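As a minimal sketch (the file name is a placeholder), the conversion reduces to a single call to rgb2gray; the manual luminosity form is shown for comparison, using the 0.2989/0.5870/0.1140 weights that rgb2gray applies.

% Grayscale conversion of an MRI scan (path is illustrative).
rgb  = imread('mri_scan.png');      % M-by-N-by-3 RGB matrix
gray = rgb2gray(rgb);               % luminance plane only
% Manual luminosity method with rgb2gray's actual weights:
rgbd = im2double(rgb);
lum  = 0.2989*rgbd(:,:,1) + 0.5870*rgbd(:,:,2) + 0.1140*rgbd(:,:,3);
imshow(gray);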

3.2 Wiener Filter:

In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target
random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming
known stationary signal and noise spectra, and additive noise. The Wiener filter minimizes the
mean square error between the estimated random process and the desired process.

The goal of the Wiener filter is to compute a statistical estimate of an unknown signal using a
related signal as an input and filtering that known signal to produce the estimate as an output.
The Wiener filter is based on a statistical approach, and a more statistical account of the theory is
given in the minimum mean square error (MMSE) estimator article. [8]

One is assumed to have knowledge of the spectral properties of the original signal and the noise,
and one seeks the linear time-invariant filter whose output would come as close to the original
signal as possible. Wiener filters are characterized by the following: [8]

 Assumption: signal and (additive) noise are stationary linear stochastic processes with
known spectral characteristics or known autocorrelation and cross-correlation
 Requirement: the filter must be physically realizable/causal (this requirement can be
dropped, resulting in a non-causal solution)
 Performance criterion: minimum mean-square error (MMSE)

 Rw 0 Rw 1  Rw N   a 0   Rws 0 


 R 1 R 0  Rw N  1 a1   Rws 1 
 w w

       
    
 Rw N  Rw N  1  Rw 0  a N   Rws N 

v is the last matrix and a is the second matrix. The equations formed are known as the Wiener–
Hopf equations. The matrix T is a symmetric Toeplitz matrix (First matrix). Under suitable
conditions on R, these matrices are known to be positive definite and therefore non-singular
yielding a unique solution to the determination of the Wiener filter coefficient vector a= T-1v.
Furthermore, there exists an efficient algorithm to solve such Wiener–Hopf equations known as
the Levinson-Durbin algorithm so an explicit inversion of T is not required.
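In practice the report does not solve the Wiener–Hopf system directly: as in Appendix A, MATLAB's wiener2 applies a pixelwise adaptive Wiener filter estimated from a local neighbourhood. A minimal sketch, continuing from the grayscale image gray above:

% Adaptive Wiener denoising of the grayscale scan.
w = wiener2(gray, [12 12]);   % local 12-by-12 statistics per pixel
imshow(w);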

3.3 High Pass Filter:

A high-pass filter (HPF) is an electronic filter that passes signals with a frequency higher than a
certain cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency.
The amount of attenuation for each frequency depends on the filter design. A high-pass filter is
usually modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-
cut filter. High-pass filters have many uses, such as blocking DC from circuitry sensitive to non-
zero average voltages or radio frequency devices. They can also be used in conjunction with a
low-pass filter to produce a bandpass filter.

The image is then given as input to a high pass filter. A high pass filter is the basis for most
sharpening methods. An image is sharpened when contrast is enhanced between adjoining areas
with little variation in brightness or darkness.

A high pass filter tends to retain the high frequency information within an image while reducing
the low frequency information. The kernel of the high pass filter is designed to increase the
brightness of the center pixel relative to neighboring pixels. The kernel array usually contains a
single positive value at its center, which is completely surrounded by negative values.

The kernel used for high pass filtering must have entries that sum to zero, with its center value
amplified with respect to the neighboring values.

Kernel used here is:

0 -1.25/4 0

-1.25/4 1.25 -1.25/4

0 -1.25/4 0
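Applied by 2-D convolution, this gives the sketch below, continuing from the Wiener-filtered image w (Appendix A writes the same convolution out as an explicit loop):

% Sharpening with the zero-sum high pass kernel above.
kernel = [0 -1.25/4 0; -1.25/4 1.25 -1.25/4; 0 -1.25/4 0];
hpf = conv2(double(w), kernel, 'same');   % 'same' keeps the image size
imshow(hpf, []);                          % [] rescales for display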

3.4 Median Filter:

In signal processing, it is often desirable to be able to perform some kind of noise reduction on
an image or signal. The median filter is a nonlinear digital filtering technique, often used to
remove noise. Such noise reduction is a typical pre-processing step to improve the results of
later processing (for example, edge detection on an image). Median filtering is very widely used
in digital image processing because, under certain conditions, it preserves edges while removing
noise.

The main idea of the median filter is to run through the signal entry by entry, replacing each
entry with the median of neighboring entries. The pattern of neighbors is called the "window",
which slides, entry by entry, over the entire signal. For 2D (or higher-dimensional) signals such
as images, more complex window patterns are possible (such as "box" or "cross" patterns). Note
that if the window has an odd number of entries, then the median is simple to define: it is just the
middle value after all the entries in the window are sorted numerically. For an even number of
entries, there is more than one possible median, see median for more details. [9]

As there is no entry preceding the first value, the first value is repeated, as with the last value, to
obtain enough entries to fill the window. This is one way of handling missing window entries at
the boundaries of the signal, but there are other schemes that have different properties that might
be preferred in particular circumstances: [9]

 Avoid processing the boundaries, with or without cropping the signal or image boundary
afterwards,
 Fetching entries from other places in the signal. With images for example, entries from
the far horizontal or vertical boundary might be selected,
 Shrinking the window near the boundaries, so that every window is full.

The operating principle is based on the median values of pixels in a particular window.

 Efficient for salt and pepper noise

Salt and pepper noise is an impulse noise: a set of random pixels with very high contrast
compared to the surrounding pixels. It appears as a sprinkle of bright and dark spots on the
image, and is caused by malfunctioning camera sensors, faulty memory locations in hardware,
or transmission of images over a noisy channel.
 Boundaries and edges are preserved.
 Complex and time consuming procedure.

Various median filtering techniques:


 Standard median filter
 Weighted median filter
 Recursive median filter
 Iterative median filter
 Directional Median Filter

Fig 2: Application of median filter on the image
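A minimal sketch of the standard median filter using the toolbox function medfilt2, continuing from the sharpened image hpf; note that medfilt2 zero-pads the boundaries by default, whereas the scheme described above repeats the border values.

% 3-by-3 median filtering of the sharpened image.
med = medfilt2(mat2gray(hpf), [3 3]);
imshow(med);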

3.5 Threshold Segmentation:

It is the simplest method of image segmentation. From a grayscale image, thresholding can be
used to create a binary image.

The simplest thresholding methods replace each pixel in an image with a black pixel if the image
intensity is less than some fixed constant T, or a white pixel if the image intensity is greater than
that constant. Here, we have tried the values 0.4, 0.5, 0.55, 0.58 and 0.6; T = 0.58 and 0.6 gave
the best results.

To make thresholding completely automated, various methods can be used: histogram-based,
clustering-based, spatial, and local methods.

The graythresh() function can be used for this purpose: it computes a global threshold between 0
and 1 from the image passed as an argument, using Otsu's method.

Although intended for automation, it did not give suitable results here, so manual thresholding
proved better. We therefore switched to k-means clustering.
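A short sketch contrasting the two approaches, continuing from the median-filtered image med; mat2gray (an assumption for normalization, not part of the original program) rescales the intensities to [0, 1] so that the fixed thresholds above apply:

% Automatic (Otsu) versus manual thresholding.
g     = mat2gray(med);         % normalize to [0, 1]
Tauto = graythresh(g);         % Otsu threshold, between 0 and 1
bw    = imbinarize(g, 0.58);   % fixed threshold that worked best here
imshow(bw);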

3.5.1 K Means Clustering:

K-means clustering is used here for the sake of threshold segmentation. Here, k means the
number of clusters which need to be done for the clustering purpose on the basis of the
intensities of the pixels in a particular cluster. These clusters are saved in an array of images with
different clusters in different images so that we can access the tumor by directly accessing the
image of the cluster containing the tumor. The clusters are made upon various factors like the
intensity of light. k-means clustering is a method of vector quantization, originally from signal
processing, that is popular for cluster analysis in data mining.

Clusters formed are divided and kept in separate accessible variables so that signal processing
can work upon it. This results in a partitioning of the data space into Voronoi cells.

For cluster formation, various spatially separated points on a graph can be considered, where
each point represents some feature of a pixel; here, the intensity of each pixel is generally taken
into account. The number of clusters decides the number of center points to be taken: if k is the
number of clusters, then k center points are assumed on the graph, all as far as possible from one
another, which allows more accurate clustering. The distance between these center points and
the other points is then calculated, and k groups are formed with the center points as references
and the points closest to each respective center. Once these groups are formed, a new center
point is calculated from each group's points, and the process iterates until the center points no
longer move. Each group of points then forms one cluster in an array of k clusters. [6]

Fig 3: Application of k-means clustering on a graph of data points, with k = 3

Steps taken in grouping of data points: [6]

I. Randomly select ‘c’ cluster centers.

II. Calculate the distance between each data point and each cluster center.

III. Assign each data point to its nearest cluster center.

$$J(V) = \sum_{i=1}^{c} \sum_{j=1}^{c_i} \left( \lvert x_i - v_j \rvert \right)^2 \qquad \ldots (3.1)$$

where c is the number of cluster centers, $c_i$ is the number of points in the i-th cluster, and
$\lvert x_i - v_j \rvert$ is the Euclidean distance.

Recalculate the new cluster center using:

$$v_i = \frac{1}{c_i} \sum_{j=1}^{c_i} x_i$$

Recalculate the distance between the data points and the new cluster centers.

If no reassigning has been done, stop; else, continue the process.

Three key features of k-means which make it efficient are often regarded as its biggest
drawbacks:

 Euclidean distance is used as a metric and variance is used as a measure of cluster scatter.
 The number of clusters k is an input parameter: an inappropriate choice of k may yield
poor results. That is why, when performing k-means, it is important to run diagnostic
checks for determining the number of clusters in the data set.
 Convergence to a local minimum may produce counterintuitive ("wrong") results

k-means clustering forms the best alternative for grouping data points with respect to pixel
intensity. Since the tumour region is distinguished mainly by its intensity, it is the best method here.
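A minimal sketch of this step with k = 4, matching the clusters used later in this report (skull, brain, tumour, background); kmeans comes from the Statistics and Machine Learning Toolbox, and which cluster index holds the tumour varies from run to run:

% Intensity-based k-means segmentation into k = 4 clusters.
pixels = double(med(:));              % one intensity value per row
[idx, centers] = kmeans(pixels, 4);   % cluster assignments and centers
clusterMap = reshape(idx, size(med)); % back to image coordinates
imshow(clusterMap == 3, []);          % inspect one cluster (index varies)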

3.6 Watershed Segmentation: [10]

The term watershed refers to a ridge that divides areas drained by different river systems. A
catchment basin is the geographical area draining into a river or reservoir. Computer analysis of
image objects starts with finding them: deciding which pixels belong to each object. This is
called image segmentation, the process of separating objects from the background, as well as
from each other.

To do this we'll use another new tool in the Image Processing Toolbox: bwdist, which computes
the distance transform. The distance transform of a binary image is the distance from every pixel
to the nearest nonzero-valued pixel, as this example shows.

A small binary image (left) and its distance transform (right).

If you imagine that bright areas are "high" and dark areas are "low," then it might look like the
surface (left). With surfaces, it is natural to think in terms of catchment basins and watershed
lines. The Image Processing Toolbox function watershed can find the catchment basins and
watershed lines for any grayscale image.
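A sketch of this distance-transform watershed on a binary mask bw; the same construction appears in Appendix A:

% Watershed segmentation of a binary mask via the distance transform.
D = -bwdist(~bw);        % depth: distance to the nearest background pixel
D(~bw) = -Inf;           % sink the background into its own basin
L = watershed(D);        % L == 0 marks the watershed ridge lines
imshow(label2rgb(L));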

Edge detection is an image processing technique for finding the boundaries of objects within
images. It works by detecting discontinuities in brightness. Edge detection is used for image
segmentation and data extraction in areas such as image processing, computer vision, and
machine vision.

It can be shown that under rather general assumptions for an image formation model,
discontinuities in image brightness are likely to correspond to discontinuities in depth,
discontinuities in surface orientation, changes in material properties and variations in scene
illumination.

In the ideal case, the result of applying an edge detector to an image may lead to a set of
connected curves that indicate the boundaries of objects, the boundaries of surface markings as
well as curves that correspond to discontinuities in surface orientation. Thus, applying an edge
detection algorithm to an image may significantly reduce the amount of data to be processed and

may therefore filter out information that may be regarded as less relevant, while preserving the
important structural properties of an image.

The Hough transform is a feature extraction technique used in image analysis, computer vision,
and digital image processing. The purpose of the technique is to find imperfect instances of
objects within a certain class of shapes by a voting procedure. This voting procedure is carried
out in a parameter space, from which object candidates are obtained as local maxima in a so-
called accumulator space that is explicitly constructed by the algorithm for computing the Hough
transform.

The classical Hough transform was concerned with the identification of lines in the image, but
later the Hough transform has been extended to identifying positions of arbitrary shapes, most
commonly circles or ellipses.

In automated analysis of digital images, a subproblem often arises of detecting simple shapes,
such as straight lines, circles or ellipses. In many cases an edge detector can be used as a pre-
processing stage to obtain image points or image pixels that are on the desired curve in the image
space. Due to imperfections in either the image data or the edge detector, however, there may be
missing points or pixels on the desired curves as well as spatial deviations between the ideal
line/circle/ellipse and the noisy edge points as they are obtained from the edge detector. For these
reasons, it is often non-trivial to group the extracted edge features to an appropriate set of lines,
circles or ellipses. The purpose of the Hough transform is to address this problem by making it
possible to perform groupings of edge points into object candidates by performing an explicit
voting procedure over a set of parameterized image objects.

The simplest case of the Hough transform is detecting straight lines. In general, the straight line
y = mx + b can be represented as a point (b, m) in the parameter space. Thus, a set of lines with
similar parameters can easily be noticed as curves intersecting at similar points in the parameter
space. Similarly, a particular point in the image can be depicted by a line in parameter space, so
the number of lines in parameter space indicates the number of image points with similar
features, and we can connect those points to form a line. If the points are in close vicinity, then
it is local area edge detection; otherwise it is global edge detection.
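For illustration only, MATLAB's hough, houghpeaks and houghlines implement exactly this voting scheme (the report's own pipeline relies on watershed rather than an explicit Hough step):

% Straight-line detection with the classical Hough transform.
edges = edge(mat2gray(gray), 'canny');    % edge map as the voting input
[H, theta, rho] = hough(edges);           % accumulator over (rho, theta)
peaks = houghpeaks(H, 5);                 % five strongest line candidates
lines = houghlines(edges, theta, rho, peaks);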

Fig 5: Here, points on a line in the x-y plane intersect in parameter space on a single point. (‘n’ is
the intercept of the line)

Fig 6: A line in the x-y plane is depicted by a point in the parameter space.

3.7 Area Calculation of the tumor:

Since the output is a binary image containing 1s and 0s, we can easily find out whether a
particular pixel belongs to the tumor portion or not. The black pixels are the ones which are not
part of any tumor, while the white pixels are part of the extracted tumor.

1. Calculate number of pixels containing the tumor region.


2. Divide it by the number of pixels contained in the image.

3. This gives the ratio, which can further be used to determine the area of the tumor from the
area of the whole image.

1 pixel = 0.26458333333333 mm (this corresponds to a 96 pixels-per-inch scale).
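In MATLAB these three steps reduce to a few lines, where im is the binary output image and the per-pixel scale is the figure quoted above:

% Tumor area from the binary output image im (1 = tumor pixel).
tumorPixels = nnz(im);                % step 1: white-pixel count
ratio       = tumorPixels/numel(im);  % step 2: fraction of all pixels
px          = 0.26458333333333;       % mm per pixel (scale assumed above)
tumorArea   = tumorPixels*px^2;       % step 3: tumor area in square mm
imageArea   = numel(im)*px^2;         % whole-image area in square mm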

CHAPTER 4:

MRI: (MAGNETIC RESONANCE IMAGING)

Magnetic resonance imaging is a medical imaging technique used in radiology to form pictures
of the anatomy and physiological processes of the body, in both health and disease. MRI
scanners use strong magnetic fields, magnetic field gradients, and radio waves to generate
images of the organs in the body. An MRI image is formed by applying radio frequency energy
and acquiring the resulting data. Radio frequency pulses from a source are used to stimulate the
tissues of the body; this energy excites the tissues, which give the energy back once the source is
removed. This happens due to the relaxation of the respective proton spins of the tissues. [7]

1. Give energy to the brain tissues through radio frequencies.

2. Remove energy source.

3. Observe energy we get back due to relaxation of proton.

Contrast seen in an MRI = difference in the rates of relaxation of different tissues.

This implies that similar tissues, having similar rates of relaxation, give out the same brightness
pattern in an MRI scan. These differences in contrast reveal the different body tissues, which
can be separated and analyzed for as long as there is some level of difference in contrast.

There are various schemes which follow different conventions of contrast formation with
respect to body tissues or fluids. These schemes are:

1. T1
2. T2 or FLAIR (Fluid-Attenuated Inversion Recovery)

T1: Here fat, subacute hemorrhage, melanin, protein-rich fluid and slowly flowing blood appear
bright, while bone, urine, CSF, air, high water content (as in edema, tumor, infarction,
inflammation, infection, and hyperacute or chronic hemorrhage) and low proton density (as in
calcification) appear dark.

T2: It is the total opposite of the T1 scheme.

For the detection purpose we have used the highly contrasted T1 scheme.

Fig 7: T1-weighted (left) and T2-weighted (right) MRI scans. [7]

Fig 8: Post-contrast MRI, used for the purpose of extraction. [7]

CHAPTER 5: FLOWCHART OF THE TUMOR EXTRACTION ON A BINARY IMAGE

Start → Input of image → Grayscaling → Wiener filtering → High pass filtering for sharpening →
Salt and pepper noise removal by median filter → k-means clustering (threshold segmentation) →
Watershed segmentation → Output image with tumor enhanced → End.
Program for the GUI Application:

Start → Open the app.m file → Browse the image from the computer → Load the image from the
computer into the MATLAB program → Select the Tumor extraction button → Control is
transferred to Stage1screen2.m → Grayscaling → Wiener filtering → High pass filtering for
sharpening → Salt and pepper noise removal by median filter → k-means clustering (threshold
segmentation), where all the clusters are shown → The cluster number containing the tumor is
inputted through the edit box → This cluster is transferred into the lastscreen.m file → Watershed
segmentation → The area of the image is calculated and the final output is shown and displayed
on the GUI → End.

CHAPTER 6:

MATHEMATICAL MODEL OF THE PROJECT

 Step 1-

For grayscale conversion:

lightness = (max(R, G, B) + min(R, G, B))/2 ……(6.1)

average = (R + G + B)/3 ……(6.2)

luminosity = 0.21R + 0.72G + 0.07B ……(6.3)

These are the standard lightness, average and luminosity methods of grayscale conversion.
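As a worked instance of (6.3), a pixel with (R, G, B) = (100, 150, 200) has luminosity 0.21(100) + 0.72(150) + 0.07(200) = 21 + 108 + 14 = 143, i.e. a mid-gray value.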

 Step 2-

Wiener Filter:

We choose the transfer function of the Wiener filter so as to minimize the mean square error:

$e^2 = E\{(f - \hat{f})^2\}$ ……(6.4)

 Step 3-

High Pass filtering used for sharpening of the image.

kernel:

 0        −0.3125    0
−0.3125    1.250    −0.3125
 0        −0.3125    0

The kernel sharpens the intensity of the pixel with respect to the other neighboring pixels.

 Step 4-

Median Filtering:

Intensity of a pixel is determined by the median of the all the neighboring pixels.

 Step 5-
k- means clustering:

I. Randomly select ‘c’ cluster centers.

II. Calculate the distance between each data point and each cluster center.

III. Assign each data point to its nearest cluster center.

$$J(V) = \sum_{i=1}^{c} \sum_{j=1}^{c_i} \left( \lvert x_i - v_j \rvert \right)^2 \qquad \ldots (6.5)$$

where c is the number of cluster centers, $c_i$ is the number of points in the i-th cluster, and
$\lvert x_i - v_j \rvert$ is the Euclidean distance.

Recalculate the new cluster center using:

$$v_i = \frac{1}{c_i} \sum_{j=1}^{c_i} x_i$$

Recalculate the distance between the data points and the new cluster centers; if no reassigning
has been done, stop, else continue the process.

 Step 6- Watershed Segmentation


Calculated through the Hough Transform of the image.
 Step 7- Convert it to a binary image.
 Step 8- Computation of area of the tumor. (Explained earlier.)

CHAPTER 7:

LIST OF FUNCTIONS BEING USED

 imread(path): It reads the image specified by the path and returns the image in a matrix
format which contains the intensity of the particular pixel given by the dimensions of the
matrix.

 rgb2gray(image): It reads the image given as parameter and returns an image which
contains only its luminance plane. It is used for the conversion of the rgb to greyscale.

 imshow(image): Displays the image given as a parameter.

 ceil(real_no.): Gives the ceil value of the given decimal real no.

 fspecial(type, parameters): Creates a predefined 2D filter. It returns a correlation kernel
which is used with image filtering.

 conv2(A,B, ‘same’): It is used for the 2D convolution of matrices A and B. ‘same’ here,
returns the central part of the convolution of the same size as A.

 wiener2(image, [m,n]): Wiener lowpass-filters a grayscale image that has been degraded
by constant power additive noise. wiener2 uses a pixelwise adaptive Wiener method
based on statistics estimated from a local neighborhood of each pixel, using
neighborhoods of size m-by-n to estimate the local image mean.

 size(image): Returns the dimensions of the image matrix.


 zeros()
 uint8()
 max()
 min()

 median(matrix): Returns the median value of the values of the matrix passed as parameter.

 reshape(): Reshapes the given matrix by the size determined by as the parameter.
 double()
 kmeans(image, k): Segregates the image in k clusters according to the intensity of the
pixels.

 ones()
 bwareaopen(BW, P): Removes all connected components (objects) that have fewer
than P pixels from the binary image BW, producing another binary image.

 imbinarize()
 bwdist(image): Computes the Euclidean distance transform of the binary image. For each
pixel, the distance transform assigns a number that is the distance between that pixel and the
nearest nonzero pixel. bwdist uses the Euclidean distance metric by default. The input can have
any dimension, and the returned image is the same size as the input.

 watershed(A): Returns a label matrix L that identifies the watershed regions of the input
matrix A, which can have any dimension. The watershed transform finds "catchment
basins" and "watershed ridge lines" in an image by treating it as a surface where light
pixels represent high elevations and dark pixels represent low elevations. The elements
of L are integer values greater than or equal to 0: elements labeled 0 do not belong to
a unique watershed region, elements labeled 1 belong to the first watershed region, and so on.
 label2rgb()
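As an illustration of how several of these functions combine (mirroring Appendix A, with cluster assumed to hold the binary tumour cluster produced by kmeans):

% From a binary cluster image to labeled watershed regions.
cluster = bwareaopen(cluster, 400);  % drop objects smaller than 400 pixels
D = -bwdist(~cluster);               % Euclidean distance transform, negated
D(~cluster) = -Inf;                  % background becomes its own basin
L = watershed(D);                    % label matrix of catchment basins
imshow(label2rgb(L, 'hot', 'w'));    % color the regions for inspection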

CHAPTER 8:

IMPLEMENTATION OF ABOVE PROCESS STEPS

This chapter includes all the screenshots of the output taken from the program written for the
extraction of brain tumor. The program is attached in Appendix A.

Original Image:

Fig 9: Original image of the tumor

Fig. 9 shows the original MRI scan of the tumor, to which we apply our algorithm to extract the
brain tumor on a binary image.
Grayscaled Image:

Fig 10: Grayscaled image

Grayscale conversion forms the first and most important step of any image processing pipeline.
It extracts the luminance plane, which is of core importance for detecting the pixel intensities
used in brain tumor extraction.

For Low Pass Filtering:

Wiener Filtered Image:

Fig 13: Wiener filtered image

In Fig 13, we can see that the image has been blurred a little, but at the same time it is enhanced
with respect to the original image of the tumor. The blurring is caused by the removal of high
frequency components from the image; at the same time, the contrast of the image has been
increased. Our main focus is to increase the contrast at almost every step, so that it is easier for
the model to separate the tumor portion from the rest of the MRI.

High Pass Filter:

Fig 14: High pass filtering for image sharpening

The high pass filter here is used for sharpening the image. One can see a sharper image than the
one obtained from the Wiener filter. Every pixel is enhanced or suppressed according to the
neighboring pixel values and the kernel.

Median Filter:

Fig 15: Median Filtered image

Fig 15 looks smoother than the high pass filtered image, since the median filter has removed any
salt and pepper noise that may have occurred in the image; such noise appears as random, abrupt
or infeasible pixel intensity values.

After k means clustering (Thresholding): (k=4)

Fig 16.

Here,

1- skull
2- brain or the cerebral portion
3- tumor
4- background

Cluster containing the tumor in binary image format:

Fig 17: Cluster containing tumor

This figure contains the tumor, which is stored in the form of an array in the program and can be
accessed from there. One can see that the portion containing the tumor is symbolized by 1
(white pixels), while all other pixels are symbolized by 0 (black pixels).

After Watershed Segmentation:

Fig 18: Watershed segmented

From the figure we can make out that the deeper the contour, the darker the color; thus the tumor
containing area is lighter than the rest of the brain, which is treated as a single quantity due to
the binarization of the image by k-means clustering.

Output Image of the tumor in binary format:

Fig 19: Final Tumor

Fig 19 contains the final output of the extracted tumor after watershed segmentation.

Area calculation of the tumor extracted:

area of the tumor in square mm: 231.2943

area of the image in square mm: 2.6268 x 10^3

Thus, the ratio of the tumor area to the whole image area is 231.2943/2626.8 ≈ 8.8%.

CHAPTER 9:

GUI APPLICATION

Fig 20: Front page of the GUI application

This is the front page of the GUI, where we browse files to load the MRI image to be used.

Fig 21: File browsing through the application

Fig 22: File showing through the application

Fig 23: k-means clusters and the median filtered image shown in the application

Fig 24: Final output of the application

CHAPTER 10:

SOME MORE INSTANCES OF THE TUMOR EXTRACTED ON A BINARY IMAGE

Fig 25: Instance 1

Fig 26: In this extraction, the extracted brain tumor also includes the eyes of the patient and
labels them as tumor. This happens because the MRI scan is taken from the superior side of the
brain. This is why it is necessary to examine the orientation of the brain in the MRI scan.

CHAPTER 11:

CONCLUSION

This project, when backed by a radiologist, can prove very helpful to the medical world. It eases
the task of image enhancement, which previously relied only on the radiologist's visual
judgment, and thus gives a new and better perspective for tumor analysis. The radiologist can
look into other reports of the tumor, such as 3D ultrasounds of the brain, to obtain other
parameters like the volume of the tumor and its location in 3D space. This, when combined with
medical knowledge, can be a boon to the industry. The other parameters by which a tumor can
be judged cancerous or non-cancerous, i.e. malignant or benign, are its texture and surface
characteristics. Judging a tumor only by its size or area is not an optimal solution, since many
benign tumors are quite big in size. Thus, the judgment depends on the location of the tumor and
the cells of which it is made up.

REFERENCES

[1] MARK C. PREUL, M.D., “History of brain tumor surgery”, Neurosurg. Focus / Volume
18 / April, 2005, pp. 1-3.

[2] Mauricio Castillo, MD, “History and Evolution of Brain Tumor Imaging: Insights
through Radiology”, RSNA Radiology, November 2014 Volume 273, Issue 2S, pp. 1-2

[3] Y. K. Lai, P. L. Rosin, "Efficient Circular Thresholding", IEEE Trans. on Image Processing,
23(3), pp. 992-1001 (2014).
[4] Rajesh C. Patil, Dr. A. S. Bhalchandra, "Brain Tumour Extraction from MRI Images
Using MATLAB", International Journal of Electronics, Communication & Soft
Computing Science and Engineering, ISSN: 2277-9477, Volume 2, Issue 1.
[5] Vipin Y. Borole, Sunil S. Nimbhore, Dr. Seema S. Kawthekar, "Image Processing
Techniques for Brain Tumor Detection: A Review", International Journal of Emerging
Trends & Technology in Computer Science, ISSN: 2278-6856, Volume 4, Issue 5(2).
[6] https://sites.google.com/site/dataclusteringalgorithms/k-means-clustering-algorithm
An Efficient k-means Clustering Algorithm: Analysis and Implementation by
Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth
Silverman and Angela Y. Wu.

[7] http://casemed.case.edu/clerkships/neurology/Web%20Neurorad/MRI%20Basics.htm

[8] https://en.wikipedia.org/wiki/Wiener_filter

[9] http://fourier.eng.hmc.edu/e161/lectures/smooth_sharpen/node2.html

[10] https://www.mathworks.com/company/newsletters/articles/the-watershed-transform-
strategies-for-image-segmentation.html

[11] Krishna Raj, Amrish Kumar, and Ashish Chaturvedi. “A Comparative Analysis of LMS
and NLMS Algorithms for Adaptive Filtration of compressed ECG Signal”. Power Control and
Embedded Systems (ICPCES), 2012 2nd International Conference on pp. 1-6. Print ISBN: 978-1-
4673-1047-5. DOI 10.1109/ICPCS.2012.6508051

APPENDIX A

PROGRAM FOR BRAIN TUMOR EXTRACTION:


%read the image of the mri scan
fig1=imread('C:\Users\Admin\Desktop\final project\Screenshots\0.4\Screenshot(171).png');
figure(1),imshow(fig1);

%conversion of image to grayscale


fig2=rgb2gray(fig1);
figure(2), imshow(fig2);

%declaration of gaussian filter


%sigma=3;
%sigma=2;
%sigma=7;
%sigma=1;
sigma=1.5;
cutoff=ceil(3*sigma);
h=fspecial('gaussian',2*cutoff+1,sigma);
out=conv2(fig2,h,'same');
figure(3), imshow(out/256);

%declaration of weiner filter


%w=wiener2(fig2,[6 6]);
%w=wiener2(fig2,[10 10]);
%w=wiener2(fig2,[18 18]);
%w=wiener2(fig2,[15 15]);
w=wiener2(fig2,[12 12]);
figure(4), imshow(w);

%high pass filtering for sharpening


%kernel=[0 -1 0; -1 4 -1; 0 -1 0];
%kernel=[0 -1/4 0; -1/4 1 -1/4; 0 -1/4 0];
%kernel=[-1/8 -1/8 -1/8; -1/8 1 -1/8; -1/8 -1/8 -1/8];
%kernel=[0 -1/2 0; -1/2 2 -1/2; 0 -1/2 0];
%kernel=[0 -3 0; -3 12 -3; 0 -3 0];
kernel=[0 -1.25/4 0; -1.25/4 1.25 -1.25/4; 0 -1.25/4 0];
[row col]=size(w);

for x=2:1:row-1
for y=2:1:col-1

        hpf(x,y)= kernel(1)*w(x-1,y-1)+kernel(2)*w(x-1,y)+kernel(3)*w(x-1,y+1)+...
            kernel(4)*w(x,y-1)+kernel(5)*w(x,y)+kernel(6)*w(x,y+1)+...
            kernel(7)*w(x+1,y-1)+kernel(8)*w(x+1,y)+kernel(9)*w(x+1,y+1);
end
end
figure(5),imshow(hpf);

%median filtering

[row,col]=size(hpf);
med=zeros(row, col);
med=uint8(med);

for i=1:row
for j=1:col %intensity at (i,j)
xmin=max(1,i-1);
xmax=min(row,i+1); %symmetric 3-by-3 window, clamped at the image border
ymin=max(1,j-1);
ymax=min(col,j+1);

temp=hpf(xmin:xmax , ymin:ymax);

med(i,j)=median(temp(:));

end
end

figure(6), imshow(med);

%threshold segmentation by kmeans clustering


onedimage= reshape(med, [], 1);
onedimage= double(onedimage);
[index nn]=kmeans(onedimage, 4);
imindex=reshape(index, size(med));

figure(8),
subplot(3,2,1), imshow(imindex==1,[]);
subplot(3,2,2), imshow(imindex==2,[]);

subplot(3,2,3), imshow(imindex==3, []);


subplot(3,2,4), imshow(imindex==4, []);
%%

cluster= (imindex==3);
se=ones(5);
cluster=bwareaopen(cluster, 400);
figure(9),imshow(cluster);

%medbw=imbinarize(med, 0.9);
%figure, imshow(medbw);
%[row col]=size(med);
%maxcount=0;
%max=1;
%for a=imindex(:)
% count=0;
% for b=1:row-1
% for c=1:col-1
% if(med(b,c)==1 && imindex[a](b,c)==1)
% count=count+1;
% end

% end
% end
% if(count>maxcount)
% maxcount=count;
% max=a;
% end
%end

%cluster= (imindex==a);
%se=ones(5);
%cluster=bwareaopen(cluster, 400);
%figure(9),imshow(cluster);

%watershed segmentation

negatecluster=~cluster;
%figure(10), imshow(negatecluster);
dist=-bwdist(negatecluster);
dist(negatecluster)=-Inf;
L=watershed(dist);
%figure(11), imshow(L);
wi=label2rgb(L, 'hot', 'w');
figure(12),imshow(wi);
im=cluster;
im(L==0)=0;
figure(13), imshow(im);

%% working on area calculation

[row, col]= size(im);


area=0;

for x=1:row
for y=1:col
if(im(x,y)==1)
area=area+1;
end
end
end

elements= row*col; %no. of pixels in the image


arearatio=area/elements;
%1 pixel= 0.26458333333333 mm
px=0.26458333333333;
areaimage=row*col*px*px;
areafinal= arearatio*areaimage;
areaimage
areafinal

app.m: It is the front page of the GUI application where we load the MRI image of the brain
through the file exploring.

function varargout = app(varargin)
% APP MATLAB code for app.fig
% APP, by itself, creates a new APP or raises the existing
% singleton*.
%
% H = APP returns the handle to a new APP or the handle to
% the existing singleton*.
%
% APP('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in APP.M with the given input arguments.
%
% APP('Property','Value',...) creates a new APP or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before app_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to app_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help app

% Last Modified by GUIDE v2.5 15-Apr-2018 15:09:27

% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @app_OpeningFcn, ...
'gui_OutputFcn', @app_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before app is made visible.


function app_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to app (see VARARGIN)

% Choose default command line output for app
handles.output = hObject;

% Update handles structure


guidata(hObject, handles);

% UIWAIT makes app wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = app_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

% --- Executes during object creation, after setting all properties.


function axes2_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes2


imshow ('C:\Users\Admin\Desktop\final project\college logo.png');

% --- Executes during object creation, after setting all properties.


function axes3_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes3

% --- Executes on button press in loadmri.


function loadmri_Callback(hObject, eventdata, handles)
% hObject handle to loadmri (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
[a b]=uigetfile('*.*','All Files');
img=imread([b a]);
imshow(img, 'Parent', handles.axes3);
setappdata(0,'loadmri', img);

% --- Executes on button press in pushbutton2.


function pushbutton2_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
img=getappdata(0,'loadmri');
%save('img.png','img');
imwrite(img, 'img.png');
%exportToFile(img, 'img.png');
Stage1screen2();

Stage1screen2.m: It is the second screen of the GUI application where the clustered image of
the brain tumor is shown separately and then the control is transferred to the last screen.

function varargout = Stage1screen2(varargin)


% STAGE1SCREEN2 MATLAB code for Stage1screen2.fig
%      STAGE1SCREEN2, by itself, creates a new STAGE1SCREEN2 or raises the existing
%      singleton*.
%
%      H = STAGE1SCREEN2 returns the handle to a new STAGE1SCREEN2 or the handle to
%      the existing singleton*.
%
%      STAGE1SCREEN2('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in STAGE1SCREEN2.M with the given input arguments.
%
%      STAGE1SCREEN2('Property','Value',...) creates a new STAGE1SCREEN2 or raises the
%      existing singleton*. Starting from the left, property value pairs are
%      applied to the GUI before Stage1screen2_OpeningFcn gets called. An
%      unrecognized property name or invalid value makes property application
%      stop. All inputs are passed to Stage1screen2_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help Stage1screen2

% Last Modified by GUIDE v2.5 15-Apr-2018 21:27:30

% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @Stage1screen2_OpeningFcn, ...
'gui_OutputFcn', @Stage1screen2_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before Stage1screen2 is made visible.


function Stage1screen2_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to Stage1screen2 (see VARARGIN)

% Choose default command line output for Stage1screen2


handles.output = hObject;

% Update handles structure


guidata(hObject, handles);

% UIWAIT makes Stage1screen2 wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = Stage1screen2_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

% --- Executes during object creation, after setting all properties.


function axes1_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes1


img=imread('C:\Users\Admin\Desktop\final project\img.png');
%imshow(img);
%conversion of image to grayscale
gray=rgb2gray(img);

%declaration of weiner filter


%w=wiener2(gray,[6 6]);
%w=wiener2(gray,[10 10]);
%w=wiener2(gray,[18 18]);
%w=wiener2(gray,[15 15]);

w=wiener2(gray,[12 12]);

%high pass filtering for sharpening


%kernel=[0 -1 0; -1 4 -1; 0 -1 0];
%kernel=[0 -1/4 0; -1/4 1 -1/4; 0 -1/4 0];
%kernel=[-1/8 -1/8 -1/8; -1/8 1 -1/8; -1/8 -1/8 -1/8];
%kernel=[-1/16 -1/16 -1/16; -1/16 1/2 -1/16; -1/16 -1/16 -1/16];
%kernel=[0 -1/2 0; -1/2 2 -1/2; 0 -1/2 0];
%kernel=[0 -3 0; -3 12 -3; 0 -3 0];
%kernel=[0 -1.25/4 0; -1.25/4 1.25 -1.25/4; 0 -1.25/4 0];
%kernel=[0 -1.01/4 0; -1.01/4 1.01 -1.01/4; 0 -1.01/4 0]
%kernel=[0 -1.25/2 0; -1.25/2 2.50 -1.25/2; 0 -1.25/2 0]
kernel=[0 -1.25/4 0; -1.25/4 1.25 -1.25/4; 0 -1.25/4 0];
[row col]=size(w);

for x=2:1:row-1
for y=2:1:col-1

        hpf(x,y)= kernel(1)*w(x-1,y-1)+kernel(2)*w(x-1,y)+kernel(3)*w(x-1,y+1)+...
            kernel(4)*w(x,y-1)+kernel(5)*w(x,y)+kernel(6)*w(x,y+1)+...
            kernel(7)*w(x+1,y-1)+kernel(8)*w(x+1,y)+kernel(9)*w(x+1,y+1);
end
end

%median filtering
[row,col]=size(hpf);
med=zeros(row, col);
med=uint8(med);

for i=1:row
for j=1:col %intensity at (i,j)
xmin=max(1,i-1);
xmax=min(row,i+1); %symmetric 3-by-3 window, clamped at the image border
ymin=max(1,j-1);
ymax=min(col,j+1);

temp=hpf(xmin:xmax , ymin:ymax);

med(i,j)=median(temp(:));

end
end

imshow(med);
onedimage= reshape(med, [], 1);
onedimage= double(onedimage);

[index nn]=kmeans(onedimage, 4);


imindex=reshape(index, size(med));
img1=(imindex==1);
imwrite(img1,'cluster1.png');
img2=(imindex==2);
imwrite(img2,'cluster2.png');
img3=(imindex==3);
imwrite(img3,'cluster3.png');
img4=(imindex==4);
imwrite(img4,'cluster4.png');
%setappdata(0,'axes1','imindex');

function edit1_Callback(hObject, eventdata, handles)


% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text


%        str2double(get(hObject,'String')) returns contents of edit1 as a double
cluster=str2num(get(handles.edit1,'String'));
if(cluster == 1)
tumor=imread('C:\Users\Admin\Desktop\final project\cluster1.png');
imwrite(tumor,'tumor.png');

elseif(cluster == 2)
tumor=imread('C:\Users\Admin\Desktop\final project\cluster2.png');
imwrite(tumor,'tumor.png');

elseif(cluster == 3)
tumor=imread('C:\Users\Admin\Desktop\final project\cluster3.png');
imwrite(tumor,'tumor.png');

elseif(cluster == 4)
tumor=imread('C:\Users\Admin\Desktop\final project\cluster4.png');
imwrite(tumor,'tumor.png');
end

% --- Executes during object creation, after setting all properties.


function edit1_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), ...
        get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.


function pushbutton1_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
lastscreen;

% --- Executes during object creation, after setting all properties.


function axes2_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes2

imshow('C:\Users\Admin\Desktop\final project\cluster1.png');

% --- Executes during object creation, after setting all properties.


function axes3_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes3


imshow('C:\Users\Admin\Desktop\final project\cluster2.png');

% --- Executes during object creation, after setting all properties.


function axes4_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes4 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes4


imshow('C:\Users\Admin\Desktop\final project\cluster3.png');

% --- Executes during object creation, after setting all properties.


function axes5_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes5 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes5


imshow('C:\Users\Admin\Desktop\final project\cluster4.png');

% --- Executes during object creation, after setting all properties.


function axes6_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes6 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes6


imshow ('C:\Users\Admin\Desktop\final project\college logo.png');

lastscreen.m: It is the final screen, where the output of the extracted tumor is shown and the area
in pixel ratio is given.

function varargout = lastscreen(varargin)


% LASTSCREEN MATLAB code for lastscreen.fig
% LASTSCREEN, by itself, creates a new LASTSCREEN or raises the existing
% singleton*.
%
% H = LASTSCREEN returns the handle to a new LASTSCREEN or the handle to
% the existing singleton*.
%
%      LASTSCREEN('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in LASTSCREEN.M with the given input arguments.
%
%      LASTSCREEN('Property','Value',...) creates a new LASTSCREEN or raises the
%      existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before lastscreen_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to lastscreen_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help lastscreen

% Last Modified by GUIDE v2.5 15-Apr-2018 14:56:30

% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @lastscreen_OpeningFcn, ...
'gui_OutputFcn', @lastscreen_OutputFcn, ...
'gui_LayoutFcn', [] , ...

'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before lastscreen is made visible.


function lastscreen_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to lastscreen (see VARARGIN)

% Choose default command line output for lastscreen


handles.output = hObject;

% Update handles structure


guidata(hObject, handles);

% UIWAIT makes lastscreen wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = lastscreen_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

% --- Executes during object creation, after setting all properties.


function axes3_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes3


imshow ('C:\Users\Admin\Desktop\final project\college logo.png');

function edit1_Callback(hObject, eventdata, handles)

% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text


%        str2double(get(hObject,'String')) returns contents of edit1 as a double
cluster=imread('C:\Users\Admin\Desktop\final project\tumor.png');
[row, col]= size(cluster);
area=0;

for x=1:row
for y=1:col
if(cluster(x,y)==1)
area=area+1;
end
end
end

elements= row*col; %no. of pixels in the image


arearatio=area/elements;
%1 pixel= 0.26458333333333 mm
px=0.26458333333333;
areaimage=row*col*px*px;
areafinal= arearatio*areaimage;
set(handles.edit1, 'String',num2str(arearatio) );

% --- Executes during object creation, after setting all properties.


function edit1_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), ...
        get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.


function pushbutton1_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% --- Executes during object creation, after setting all properties.


function axes1_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes1
cluster=imread('C:\Users\Admin\Desktop\final project\tumor.png');
negatecluster=~cluster;

dist=-bwdist(negatecluster);
dist(negatecluster)=-Inf;
L=watershed(dist);

wi=label2rgb(L, 'hot', 'w');


imshow(wi);

% --- Executes during object creation, after setting all properties.


function axes2_CreateFcn(hObject, eventdata, handles)
% hObject handle to axes2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: place code in OpeningFcn to populate axes2


cluster=imread('C:\Users\Admin\Desktop\final project\tumor.png');
imshow(cluster);
