BRAIN TUMOR EXTRACTION USING MRI SCANS
By
Shreya Vaish (211/14)
Kirtivardhan Singh (193/14)
Nishant Kumar (197/14)
Rishabh Gupta (204/14)
CERTIFICATE
It is certified that Ms. SHREYA VAISH, Mr. KIRTIVARDHAN SINGH, Mr. NISHANT
KUMAR and Mr. RISHABH GUPTA, final-year B.Tech. Electronics Engineering students of
H.B.T.U. Kanpur, have been working on the project titled “BRAIN TUMOR EXTRACTION
USING MRI SCANS” under my guidance and supervision. They have shown sincere effort and
keen interest during the preparation of this project report, and this work has been submitted as a
project report for the award of the Bachelor of Technology degree in Electronics Engineering.
Professor,
Department of Electronics Engineering
HBTU KANPUR
ACKNOWLEDGEMENT
I would like to take this opportunity to express my gratitude to all those who have helped in
various ways in making my project on “BRAIN TUMOR EXTRACTION USING MRI
SCANS” successful. I would especially like to thank my project supervisor, Dr. KRISHNA
RAJ, Professor, Electronics Engineering Department at H.B.T.U., Kanpur, for his valuable
guidance, advice and encouragement during the preparation of the project. I convey my sincere
thanks to all the faculty members of the department and my classmates for their valuable support.
(SHREYA VAISH)
(KIRTIVARDHAN SINGH)
(NISHANT KUMAR)
(RISHABH GUPTA)
CONTENTS
Abstract
1. Historical Background
4. MRI
9. GUI Application
11. Conclusion
References
Abstract
Brain tumor extraction forms a crucial part of the detection of cancer. One of the best ways to
learn the primary characteristics of a tumor is through MRI scans. MRI scans are expensive, and
their diagnosis requires a radiologist whose fees can be high. Moreover, a radiologist is a human
being, and human error is inevitable; wrong reports might therefore be passed on to the
respective doctors, which can result in serious mistakes. To reduce this risk, image processing
can be used to classify some details of the tumor and enhance it relative to the rest of the brain,
so that a clearer perspective of the image can be obtained. In our project, with the help of image
processing in MATLAB, we have developed a model which extracts the tumor portion of the
brain from the rest of the skull as a binary image for better diagnosis. Along with this, the tumor
area in square pixels is also shown, which can easily be converted to standard measuring units
once the specifications of the MRI machine, i.e. the scale, are known. A graphical user interface
of the same model has been prepared to make the whole workflow handy for the required
personnel.
CHAPTER 1:
HISTORICAL BACKGROUND
Other practitioners were rapidly adopting new technology for use in neurosurgery. At least by
the first decade of the 20th century, only a few years after X-rays were introduced to the world
in late 1895, Fedor Krause was using X-rays routinely for assistance in localizing intracranial
tumors. [1]
In 1911 Krause had written an entire chapter devoted to “Radiography,” in which he promoted
the benefits of X-Rays for diagnosis of masses and tumors that had changed the contours of the
skull or that had left osseous deposits. [1]
As seen in another article published in 1928 in Radiology, cranial nerve VIII schwannomas were
diagnosed by demonstrating expanded internal auditory canals on oblique skull radiographs,
optic nerve gliomas could be inferred by enlargement of the optic canals, and cranial nerve
tumors were suggested by expansion of their corresponding outlet foramina. [2]
In 1954, the authors of an article in Radiology reported the use of nuclear scanning in 200
patients and concluded that accurate localization of brain tumors was possible in 46% (the rate of
localization of nontumoral lesions was about the same). By this time, the use of nuclear scanning
as the first noninvasive method to localize brain tumors was already fairly routine. [2]
In the mid-1960s, Kuhl et al reported in Radiology the development of the first practical
transverse, or cross-sectional, isotopic imaging method for brain lesions, which resulted in
improved visualization of tumors located in the posterior fossa. [2]
One of the most famous names in medical imaging is that of Sir Godfrey N. Hounsfield, FRS, an
engineer who, while working at EMI in England, created the first CT scanner. In 1971, a head-
only scanner was installed in London, England, and was soon thereafter installed in the United
States at the Mayo Clinic. In November 1978, the first article to show how imaging findings,
specifically contrast enhancement, correlated with astrocytoma grade was published in
Radiology. [2]
In 1984, the first two articles dealing with MR imaging of brain tumors appeared in Radiology.
In the first report (48), T1 measurements of brain masses were performed, and the authors found
that astrocytomas had the longest T1 and lipomas had the shortest. The second article was a
comparison between the then well-established CT and MR imaging. [2]
The first 16-section scanner was introduced in 2001, and a 64-section scanner became available in
2004. With respect to brain tumor imaging, the greatest advantage of this technique is speed,
which allows creation even of CT angiograms and acquisition of time-dependent blood perfusion
measurements. [2]
CHAPTER 2:
A tumor is defined as an abnormal growth of tissue. A brain tumor is an abnormal mass of
tissue in which cells grow and multiply uncontrollably, seemingly unchecked by the mechanisms
that control normal cells. Brain tumors can be either malignant or benign, benign being non-
cancerous and malignant being cancerous.
Magnetic Resonance Imaging (MRI) is an advanced medical imaging technique used to produce
high-quality images of the parts of the human body. MRI is often used when examining brain
tumors, the ankle, and the foot. From these high-resolution images, we can derive detailed
anatomical information to examine human brain development and discover abnormalities.
Brain tumour diagnosis is quite difficult because of the diverse shape, size, location and
appearance of tumours in the brain. Brain tumour detection is very hard at an early stage because
an accurate measurement of the tumour cannot be obtained. Radiologists find it difficult to give
an accurate description of the tumour in the form of reports in a short span of time. Image-
processing-based extraction is a faster and more accurate way of characterizing the tumour in
the patient's brain, free from the human error or wrong judgment of the tumour type that could
put the patient's life at risk.
Pre-processing of MRI images is the primary step in image analysis. It applies image
enhancement and noise reduction techniques to improve image quality, after which
morphological operations are applied to detect the tumour in the image. The morphological
operations rest on some assumptions about the size and shape of the tumour, and in the end the
tumour is mapped onto the original grayscale image with intensity 255 to make the tumour
visible in the image.
The algorithm has two stages: first, pre-processing of the given MRI image, followed by
segmentation and then morphological operations. The steps of the algorithm are as follows: [3]
1. Input (MRI image)
2. Pre-processing
3. Feature extraction
4. Segmentation
5. Classification (output)
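The five stages above can be sketched as a pipeline of functions. The following is a minimal illustration in Python (the report itself uses MATLAB); every helper name and the toy "scan" are hypothetical stand-ins, not the report's actual implementation.

```python
# Hypothetical sketch of the five-stage pipeline on a tiny synthetic
# "scan" (a 2D list of intensities in 0..255). Each stage is a toy
# placeholder standing in for the real operation described in the report.

def preprocess(img):
    # Stand-in for filtering: clamp intensities to [0, 255].
    return [[min(max(p, 0), 255) for p in row] for row in img]

def extract_features(img):
    # A single toy feature: the maximum intensity in the scan.
    return {"max_intensity": max(p for row in img for p in row)}

def segment(img, threshold=128):
    # Binary segmentation: 1 where intensity exceeds the threshold.
    return [[1 if p > threshold else 0 for p in row] for row in img]

def classify(mask):
    # Toy classification: "tumor present" if any pixel survived.
    return any(p for row in mask for p in row)

def pipeline(img):
    img = preprocess(img)
    feats = extract_features(img)
    mask = segment(img)
    return feats, mask, classify(mask)

scan = [[10, 20, 30],
        [15, 200, 25],
        [12, 18, 22]]
feats, mask, present = pipeline(scan)
print(feats["max_intensity"], present)  # 200 True
```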
CHAPTER 3:
3.1 Grayscale Conversion:
Grayscale conversion is the first basic step of almost every image processing model. It separates
the luminance plane from the chrominance (color-containing) plane. Grayscale is a range of
shades of gray without apparent color. The darkest possible shade is black, which is the total
absence of transmitted or reflected light. The lightest possible shade is white, the total
transmission or reflection of light at all visible wavelengths. The illusion of gray shading in a
halftone image is obtained by rendering the image as a grid of black dots on a white background
(or vice versa), with the sizes of the individual dots determining the apparent lightness of the
gray in their vicinity. [4]
The brightness levels of the red (R), green (G) and blue (B) components are each represented as
a number from decimal 0 to 255, or binary 00000000 to 11111111. For every pixel in a red-
green-blue (RGB) grayscale image, R = G = B. The lightness of the gray is directly proportional
to the number representing the brightness levels of the primary colors. Black is represented by R
= G = B = 0 or R = G = B = 00000000, and white is represented by R = G = B = 255 or R = G =
B = 11111111.
In brain scans, the tumor essentially corresponds to the illuminated regions, i.e. every other
detail can be considered unwanted (noise). Thus, grayscale conversion increases the useful
signal content of the scan.
- Considering full RGB values would unnecessarily increase the complexity and cost of the
model designed.
- The extraction involves analysing the contour formation and pixel values of the scans; this
information is readily available in the grayscale format of the image.
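The R = G = B relation above comes from weighting the color channels. A minimal sketch of the conversion, assuming the common ITU-R BT.601 luma weights (which MATLAB's rgb2gray also uses), written in Python for illustration:

```python
# Convert an RGB pixel grid to grayscale with the standard luma weights
# 0.299 R + 0.587 G + 0.114 B (an assumption about the exact weights;
# the document only says the luminance plane is separated).

def rgb_to_gray(img):
    # img is a 2D list of (R, G, B) tuples with channels in 0..255.
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row] for row in img]

pixels = [[(255, 0, 0), (0, 255, 0)],
          [(0, 0, 255), (255, 255, 255)]]
gray = rgb_to_gray(pixels)
print(gray)  # [[76, 150], [29, 255]]
```

Note that a pure gray input (R = G = B) maps to itself, since the weights sum to 1.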
3.2 Wiener Filter:
In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target
random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming
known stationary signal and noise spectra, and additive noise. The Wiener filter minimizes the
mean square error between the estimated random process and the desired process.
The goal of the Wiener filter is to compute a statistical estimate of an unknown signal using a
related signal as an input and filtering that known signal to produce the estimate as an output.
The Wiener filter is based on a statistical approach, and a more statistical account of the theory is
given in the minimum mean square error (MMSE) estimator article. [8]
One is assumed to have knowledge of the spectral properties of the original signal and the noise,
and one seeks the linear time-invariant filter whose output would come as close to the original
signal as possible. Wiener filters are characterized by the following: [8]
- Assumption: signal and (additive) noise are stationary linear stochastic processes with
known spectral characteristics or known autocorrelation and cross-correlation.
- Requirement: the filter must be physically realizable/causal (this requirement can be
dropped, resulting in a non-causal solution).
- Performance criterion: minimum mean-square error (MMSE).
The filter coefficients a satisfy the linear system Ta = v, known as the Wiener–Hopf equations,
where T is a symmetric Toeplitz matrix built from the autocorrelation of the input and v is the
cross-correlation vector between the input and the desired signal. Under suitable conditions on
R, these matrices are known to be positive definite and therefore non-singular, yielding a unique
solution for the Wiener filter coefficient vector a = T^(-1)v. Furthermore, there exists an
efficient algorithm to solve such Wiener–Hopf equations, known as the Levinson–Durbin
algorithm, so an explicit inversion of T is not required.
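The Levinson–Durbin recursion mentioned above can be sketched as follows (a Python illustration under the standard prediction form of the system, T a = -[r1, ..., rn], which it solves in O(n²) operations instead of the O(n³) of explicit inversion):

```python
# Levinson-Durbin recursion: solves the Toeplitz system T a = -[r1..rn]
# built from the autocorrelation sequence r[0..n], without inverting T.

def levinson_durbin(r, order):
    a = []          # prediction coefficients so far
    err = r[0]      # prediction error energy E_0 = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient k_i from the current residual.
        acc = r[i] + sum(a[j] * r[i - 1 - j] for j in range(i - 1))
        k = -acc / err
        # Update all coefficients and append the new one.
        a = [a[j] + k * a[i - 2 - j] for j in range(i - 1)] + [k]
        err *= 1.0 - k * k      # updated error energy E_i
    return a, err

coeffs, e = levinson_durbin([1.0, 0.5, 0.4], 2)
print([round(c, 6) for c in coeffs], round(e, 6))  # [-0.4, -0.2] 0.72
```

Checking against a direct solve of the 2-by-2 Toeplitz system confirms the same coefficients, which is the point of the recursion.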
3.3 High Pass Filter:
A high-pass filter (HPF) is an electronic filter that passes signals with a frequency higher than a
certain cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency.
The amount of attenuation for each frequency depends on the filter design. A high-pass filter is
usually modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-
cut filter. High-pass filters have many uses, such as blocking DC from circuitry sensitive to non-
zero average voltages or radio frequency devices. They can also be used in conjunction with a
low-pass filter to produce a bandpass filter.
After that, the image is given as input to a high-pass filter. A high-pass filter is the basis for most
sharpening methods. An image is sharpened when contrast is enhanced between adjoining areas
with little variation in brightness or darkness.
A high pass filter tends to retain the high frequency information within an image while reducing
the low frequency information. The kernel of the high pass filter is designed to increase the
brightness of the center pixel relative to neighboring pixels. The kernel array usually contains a
single positive value at its center, which is completely surrounded by negative values.
The kernel used for high-pass filtering must have a sum equal to zero, with its center value
amplified with respect to the neighboring values:
 0        -0.3125   0
-0.3125    1.25    -0.3125
 0        -0.3125   0
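Applying this kernel to the interior pixels of an image can be sketched as follows (a Python illustration; borders are simply left untouched here, one common simplification, whereas the report's MATLAB code uses conv2):

```python
# 3x3 high-pass convolution with the zero-sum kernel from the text.
KERNEL = [[0.0, -0.3125, 0.0],
          [-0.3125, 1.25, -0.3125],
          [0.0, -0.3125, 0.0]]

def high_pass(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]       # borders copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = acc
    return out

flat = [[10] * 3 for _ in range(3)]
print(high_pass(flat)[1][1])  # 0.0 on a constant region (kernel sums to zero)
```

On a flat region the response is zero, while a pixel brighter than its neighbors is amplified, which is exactly the sharpening behaviour described above.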
3.4 Median Filter:
In signal processing, it is often desirable to be able to perform some kind of noise reduction on
an image or signal. The median filter is a nonlinear digital filtering technique, often used to
remove noise. Such noise reduction is a typical pre-processing step to improve the results of
later processing (for example, edge detection on an image). Median filtering is very widely used
in digital image processing because, under certain conditions, it preserves edges while removing
noise.
The main idea of the median filter is to run through the signal entry by entry, replacing each
entry with the median of neighboring entries. The pattern of neighbors is called the "window",
which slides, entry by entry, over the entire signal. For 2D (or higher-dimensional) signals such
as images, more complex window patterns are possible (such as "box" or "cross" patterns). Note
that if the window has an odd number of entries, then the median is simple to define: it is just the
middle value after all the entries in the window are sorted numerically. For an even number of
entries, there is more than one possible median; see the definition of the median for more details. [9]
As there is no entry preceding the first value, the first value is repeated, as with the last value, to
obtain enough entries to fill the window. This is one way of handling missing window entries at
the boundaries of the signal, but there are other schemes that have different properties that might
be preferred in particular circumstances: [9]
- Avoid processing the boundaries, with or without cropping the signal or image boundary
afterwards.
- Fetch entries from other places in the signal; with images, for example, entries from
the far horizontal or vertical boundary might be selected.
- Shrink the window near the boundaries, so that every window is full.
The operating principle is based on the median values of pixels in a particular window.
The median filter is particularly effective against salt-and-pepper noise, which appears as a
sprinkle of bright and dark spots on the image. It is caused by malfunctioning camera sensors,
faulty memory locations in hardware, or transmission of images over a noisy channel.
Advantage: boundaries and edges are preserved.
Disadvantage: it is a complex and time-consuming procedure.
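The sliding-window idea above can be sketched directly (a Python illustration of a 3x3 median filter; the edge handling described earlier, repeating border values, is omitted here for brevity):

```python
# 3x3 median filter over the interior pixels of a small grayscale grid.

def median3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]   # middle of the 9 sorted values
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # a lone "salt" spike
         [10, 10, 10]]
print(median3x3(noisy)[1][1])  # 10 (the spike is removed)
```

The lone outlier lands at the end of the sorted window, so the middle value ignores it, which is why the filter removes salt-and-pepper noise while leaving edges largely intact.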
3.5 Threshold Segmentation:
It is the simplest method of image segmentation. From a grayscale image, thresholding can be
used to create a binary image.
The simplest thresholding methods replace each pixel in the image with a black pixel if the
image intensity is less than some fixed constant T, or a white pixel if the intensity is greater than
that constant. Here, we have tried the values 0.4, 0.5, 0.55, 0.58 and 0.6, and the results for these
values are provided; T = 0.58 and 0.6 gave the best results.
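The fixed-threshold step can be sketched in a few lines (a Python illustration; the intensities are assumed normalized to [0, 1], which is consistent with the threshold values 0.4–0.6 quoted above and with MATLAB's double-image convention):

```python
# Fixed-value thresholding: pixels above T become white (1), the rest
# black (0). T = 0.58 is one of the values the report found best.

def binarize(img, t=0.58):
    return [[1 if p > t else 0 for p in row] for row in img]

scan = [[0.10, 0.62],
        [0.57, 0.90]]
print(binarize(scan))  # [[0, 1], [0, 1]]
```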
For making it completely automated, various methods can be used:
1. histogram-based
2. clustering-based
3. spatial
4. local
The graythresh() function can be used for thresholding: it computes a global threshold between
0 and 1 from the image passed as an argument, using Otsu's method.
This automated threshold did not give suitable results here, so manual thresholding worked
better. We therefore switched to k-means clustering.
K-means clustering is used here for threshold segmentation. Here, k is the number of clusters to
be formed on the basis of the intensities of the pixels in each cluster. These clusters are saved in
an array of images, with different clusters in different images, so that we can access the tumor
by directly accessing the image of the cluster containing it. The clusters are formed upon various
factors, such as light intensity. k-means clustering is a method of vector quantization, originally
from signal processing, that is popular for cluster analysis in data mining.
The clusters formed are divided and kept in separate accessible variables so that further
processing can work on them. This results in a partitioning of the data space into Voronoi cells.
For cluster formation, various spatially separated points on a graph can be assumed. These
points can depict any feature of a pixel; here, the intensity of the various pixel points of the
image is generally taken into account. The number of clusters to be made decides the number of
center points to be taken. If k is the number of clusters, k center points are assumed on the
graph, all as far as possible from one another, which allows for more accurate clustering. The
distance between these center points and the other points is then calculated, and k groups are
made with the center points as references and the points closest to the respective center assigned
to its group. When these groups are formed, a new center point is calculated from each group's
points, and the process iterates until the center points no longer move. These group points then
form a cluster in an array of k clusters. [6]
The algorithm can be summarized as follows:
I. Choose k initial cluster centers.
II. Calculate the distance between each data point and each cluster center, and assign each
point to its nearest center.
III. Recalculate each cluster center from its assigned points, and repeat from step II until the
centers no longer change.
Three key features of k-means, which make it efficient, are often regarded as its biggest
drawbacks:
- Euclidean distance is used as a metric and variance is used as a measure of cluster scatter.
- The number of clusters k is an input parameter: an inappropriate choice of k may yield
poor results. That is why, when performing k-means, it is important to run diagnostic
checks for determining the number of clusters in the data set.
- Convergence to a local minimum may produce counterintuitive ("wrong") results.
k-means clustering thus forms a good alternative for grouping data points with respect to pixel
intensity. Since the tumor is distinguished mainly by its intensity, the method is well suited here.
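The assign-then-recenter loop described above can be sketched on one-dimensional intensity values (a Python illustration; the report uses MATLAB's kmeans, and the initial centers here are hypothetical):

```python
# 1D k-means on pixel intensities: alternate between assigning each
# value to its nearest center and moving each center to its group mean.

def kmeans_1d(values, centers, iters=20):
    groups = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        # Update step: each center moves to the mean of its group.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

intensities = [12, 15, 14, 200, 210, 205, 90, 95]
centers, groups = kmeans_1d(intensities, centers=[0, 100, 255])
print(sorted(round(c) for c in centers))  # [14, 92, 205]
```

The bright-intensity cluster (around 205 here) is the one that would be kept as the tumor candidate in the segmentation described above.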
3.6 Watershed Segmentation:
The term watershed refers to a ridge that divides areas drained by different river systems. A
catchment basin is the geographical area draining into a river or reservoir. Computer analysis of
image objects starts with finding them, that is, deciding which pixels belong to each object. This
is called image segmentation: the process of separating objects from the background, as well as
from each other.
To do this we'll use another new tool in the Image Processing Toolbox: bwdist, which computes
the distance transform. The distance transform of a binary image is the distance from every pixel
to the nearest nonzero-valued pixel, as this example shows.
If you imagine that bright areas are "high" and dark areas are "low," then the image might look
like a surface. With surfaces, it is natural to think in terms of catchment basins and watershed
lines. The Image Processing Toolbox function watershed can find the catchment basins and
watershed lines for any grayscale image.
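The distance transform that bwdist computes can be sketched directly from its definition (a brute-force Python illustration for clarity; this is not how bwdist is actually implemented, which uses much faster algorithms):

```python
# Euclidean distance transform: for every pixel, the distance to the
# nearest nonzero pixel of the binary image. O(pixels^2) brute force.

def distance_transform(bw):
    h, w = len(bw), len(bw[0])
    ones = [(y, x) for y in range(h) for x in range(w) if bw[y][x]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(((y - j) ** 2 + (x - i) ** 2) ** 0.5
                            for j, i in ones)
    return out

bw = [[0, 0, 0],
      [0, 1, 0],
      [0, 0, 0]]
d = distance_transform(bw)
print(d[1][1], round(d[0][0], 4))  # 0.0 1.4142
```

Treating this distance map as the "surface" described above is what lets the watershed transform find catchment basins between objects.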
3.7 Edge Detection:
Edge detection is an image processing technique for finding the boundaries of objects within
images. It works by detecting discontinuities in brightness. Edge detection is used for image
segmentation and data extraction in areas such as image processing, computer vision, and
machine vision.
It can be shown that under rather general assumptions for an image formation model,
discontinuities in image brightness are likely to correspond to discontinuities in depth,
discontinuities in surface orientation, changes in material properties and variations in scene
illumination.
In the ideal case, the result of applying an edge detector to an image may lead to a set of
connected curves that indicate the boundaries of objects, the boundaries of surface markings as
well as curves that correspond to discontinuities in surface orientation. Thus, applying an edge
detection algorithm to an image may significantly reduce the amount of data to be processed and
may therefore filter out information that may be regarded as less relevant, while preserving the
important structural properties of an image.
3.8 Hough Transform:
The Hough transform is a feature extraction technique used in image analysis, computer vision,
and digital image processing. The purpose of the technique is to find imperfect instances of
objects within a certain class of shapes by a voting procedure. This voting procedure is carried
out in a parameter space, from which object candidates are obtained as local maxima in a so-
called accumulator space that is explicitly constructed by the algorithm for computing the Hough
transform.
The classical Hough transform was concerned with the identification of lines in the image, but
later the Hough transform has been extended to identifying positions of arbitrary shapes, most
commonly circles or ellipses.
In automated analysis of digital images, a subproblem often arises of detecting simple shapes,
such as straight lines, circles or ellipses. In many cases an edge detector can be used as a pre-
processing stage to obtain image points or image pixels that are on the desired curve in the image
space. Due to imperfections in either the image data or the edge detector, however, there may be
missing points or pixels on the desired curves as well as spatial deviations between the ideal
line/circle/ellipse and the noisy edge points as they are obtained from the edge detector. For these
reasons, it is often non-trivial to group the extracted edge features to an appropriate set of lines,
circles or ellipses. The purpose of the Hough transform is to address this problem by making it
possible to perform groupings of edge points into object candidates by performing an explicit
voting procedure over a set of parameterized image objects.
The simplest case of the Hough transform is detecting straight lines. In general, the straight line
y = mx + b can be represented as a point (b, m) in the parameter space. Thus, a set of lines with
similar parameters can easily be noticed by the number of curves intersecting at similar points in
the parameter space. Similarly, a particular image point is depicted by a line in parameter space,
and the number of lines in parameter space intersecting at a point indicates the number of image
points with similar features. Thus, we can connect these points to form a line. If the points are in
close vicinity, this amounts to local edge detection; otherwise, to global edge detection.
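The voting procedure above can be illustrated with a toy accumulator (a Python sketch using the slope-intercept (m, b) parameterization just described; real implementations prefer the rho-theta form, which also handles vertical lines, and the grids here are hypothetical):

```python
# Hough voting in (m, b) space: each edge point (x, y) votes for every
# line y = m*x + b passing through it; collinear points pile their votes
# onto one (m, b) cell, which emerges as the accumulator maximum.

def hough_lines(points, m_values, b_values):
    votes = {}
    for (x, y) in points:
        for m in m_values:
            b = y - m * x          # intercept implied by this (point, slope)
            if b in b_values:      # only count cells on our coarse grid
                votes[(m, b)] = votes.get((m, b), 0) + 1
    return max(votes, key=votes.get)

# Three collinear points on y = 2x + 1, plus one outlier.
pts = [(0, 1), (1, 3), (2, 5), (3, 0)]
best = hough_lines(pts, range(-3, 4), set(range(-10, 11)))
print(best)  # (2, 1), i.e. the line y = 2x + 1
```

The outlier contributes only scattered single votes, so the accumulator maximum still recovers the dominant line, which is why the transform tolerates missing or noisy edge points.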
Fig 5: Here, points on a line in the x-y plane intersect in parameter space on a single point. (‘n’ is
the intercept of the line)
Fig 6: A line in the x-y plane is depicted by a point in the parameter space.
Since the image is a binary image containing 1s and 0s, we can easily find out whether a
particular pixel belongs to the tumor portion or not. The black pixels are the ones that are not
part of any tumor, while the white pixels are part of the extracted tumor.
This gives the ratio, which can be further used to determine the area of the tumor with the
help of the area of the whole image. Here,
1 pixel = 0.26458333333333 mm
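Assuming the per-pixel scale quoted above, the pixel count of the tumor mask converts to physical area as follows (a Python sketch; in practice this scale would come from the MRI machine's specifications, as the abstract notes):

```python
# Convert a tumor pixel count to area in mm^2: each pixel is a square
# MM_PER_PIXEL on a side (value taken from the line above).

MM_PER_PIXEL = 0.26458333333333

def tumor_area_mm2(mask):
    # mask: binary image, 1 = tumor pixel.
    pixels = sum(p for row in mask for p in row)
    return pixels * MM_PER_PIXEL ** 2

mask = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
print(round(tumor_area_mm2(mask), 4))  # 0.28 (4 pixels at ~0.07 mm^2 each)
```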
CHAPTER 4:
Magnetic resonance imaging is a medical imaging technique used in radiology to form pictures
of the anatomy and the physiological processes of the body in both health and disease. MRI
scanners use strong magnetic fields, magnetic field gradients, and radio waves to generate
images of the organs in the body. An MRI image is created by applying radio frequencies and
acquiring the resulting data: radio-frequency energy from a source stimulates the tissues of the
body, the tissues become excited, and they emit the energy back when the source is removed.
This happens due to the relaxation of the respective proton spins of the tissues. [7]
This implies that similar tissues, having similar rates of relaxation, give out the same brightness
pattern in an MRI scan. This difference in contrast distinguishes different body tissues, which
can be separated and analyzed as long as there is a sufficient level of difference in contrast.
There are various schemes with different conventions for contrast formation with respect to
body tissues or fluids. These schemes are:
1. T1
2. T2 or FLAIR (Fluid-Attenuated Inversion Recovery)
T1: Here, fat, subacute hemorrhage, melanin, protein-rich fluid and slowly flowing blood appear
bright, while bone, urine, CSF, air, regions of higher water content (as in edema, tumor,
infarction, inflammation, infection, and hyperacute or chronic hemorrhage) and regions of low
proton density (as in calcification) appear dark.
Fig 7: T1 and T2 weighted images. [7]
Fig 8: MRI post contrast is used for the purpose of extraction. [7]
CHAPTER 5: FLOWCHART OF THE TUMOR EXTRACTION ON A BINARY IMAGE
Start → Input of image → Grayscaling → Wiener filtering → Threshold segmentation
Program for the GUI Application:
Start → Browse the image from the computer → End
CHAPTER 6:
Step 1- Grayscale Conversion
Step 2-
Wiener Filter:
We choose the transfer function of the Wiener filter so as to minimize the mean square error.
Step 3-
0 −0.3125 0
kernel: −0.3125 1.250 −0.3125
0 −0.3125 0
The kernel sharpens the intensity of the pixel with respect to the other neighboring pixels.
Step 4-
Median Filtering:
The intensity of each pixel is replaced by the median of all the neighboring pixels.
Step 5-
k-means clustering:
I. Choose k initial cluster centers.
II. Calculate the distance between each data point and each cluster center, and assign each
point to its nearest center.
III. Recalculate each cluster center from its assigned points, and repeat until the centers no
longer change.
CHAPTER 7:
- imread(path): Reads the image specified by path and returns it in matrix format, each
entry containing the intensity of the pixel at the corresponding position.
- rgb2gray(image): Converts the RGB image given as parameter to grayscale, returning
only its luminance plane.
- ceil(x): Returns the ceiling of the given real number.
- conv2(A, B, 'same'): Performs the 2D convolution of matrices A and B; 'same' returns
the central part of the convolution, of the same size as A.
- wiener2(image, [m, n]): Lowpass-filters a grayscale image that has been degraded by
constant-power additive noise. wiener2 uses a pixelwise adaptive Wiener method based
on statistics estimated from a local neighborhood of each pixel, using neighborhoods of
size m-by-n to estimate the local image mean.
- median(matrix): Returns the median of the values of the matrix passed as parameter.
- reshape(): Reshapes the given matrix to the size given as parameter.
- double(): Converts the input to double-precision values.
- kmeans(image, k): Segregates the image into k clusters according to the intensity of the
pixels.
- ones(): Creates a matrix of all ones of the given size.
- bwareaopen(BW, P): Removes all connected components (objects) that have fewer
than P pixels from the binary image BW, producing another binary image.
- imbinarize(): Converts a grayscale image to a binary image by thresholding.
- bwdist(image): Computes the Euclidean distance transform of the binary image. For
each pixel, the distance transform assigns a number that is the distance between that
pixel and the nearest nonzero pixel. bwdist uses the Euclidean distance metric by
default; the input can have any dimension, and the returned image is the same size as
the input.
- watershed(A): Returns a label matrix L that identifies the watershed regions of the
input matrix A, which can have any dimension. The watershed transform finds
"catchment basins" or "watershed ridge lines" in an image by treating it as a surface
where light pixels represent high elevations and dark pixels represent low elevations.
The elements of L are integer values greater than or equal to 0: elements labeled 0 do
not belong to a unique watershed region, elements labeled 1 belong to the first
watershed region, and so on.
- label2rgb(L): Converts a label matrix, such as the one returned by watershed, into an
RGB color image for visualization.
CHAPTER 8:
This chapter includes all the screenshots of the output taken from the program written for the
extraction of brain tumor. The program is attached in Appendix A.
Original Image:
Fig. 9 shows the original image of the tumor of the MRI scan on which we would apply our
algorithm for the extraction of brain tumor on a binary image.
Grayscaled Image:
Grayscale conversion forms the first and most important step for any image processing to take
place. It extracts the luminance plane, which is of core importance for detecting the pixel
intensities used in the brain tumor extraction.
Low Pass Filtering:
In Fig 13, we can see that the image has become slightly blurred, but at the same time is
enhanced with respect to the original image of the tumor. The blurring is caused by the removal
of high-frequency components from the image. At the same time, we have also increased the
contrast of the image. Our main focus is to increase the contrast at almost every step so that it is
easier for the model to separate the tumor portion from the rest of the MRI.
High Pass Filter:
The high-pass filter here is used for sharpening the image. One can see a more sharpened image
than the one we got from the Wiener filter. Every pixel is enhanced or suppressed according to
the neighboring values of the image and the kernel.
Median Filter:
Fig 15 appears more polished than the high-pass filtered image, since the median filter has
removed any salt-and-pepper noise that may have occurred in the image. Salt-and-pepper noise
is any random, abrupt or infeasible value of pixel intensity in the image.
After k means clustering (Thresholding): (k=4)
Fig 16.
Here,
1- skull
2- brain or the cerebral portion
3- tumor
4- background
Cluster containing the tumor in binary image format:
This figure contains the tumor, which is stored in the form of an array in the program and can
be accessed from there. One can see that the portion containing the tumor is symbolized as 1
(white pixels), while all other pixels are symbolized as 0 (black pixels).
After Watershed Segmentation:
From the figure we can see that the deeper the contour, the darker the color; thus, the tumor-
containing area is lighter than the rest of the brain, which is treated as a single quantity due to
the binarization of the image by k-means clustering.
Output Image of the tumor in binary format:
Fig 19 contains the final output of the extracted tumor after watershed segmentation.
Area calculation of the tumor extracted:
Thus, the ratio of the tumor-containing area to the rest of the brain is 231.2943/2626.8 ≈ 8.8%.
CHAPTER 9: GUI APPLICATION
The front page of the GUI is where we browse files to select the MRI image to be used.
Fig 21: File browsing through the application
Fig 22: File showing through the application
Fig 23: k-means output and median filtered image shown through the application
Fig 24: Final output of the application
CHAPTER 10:
Fig 26: In this extraction, the extracted brain tumor also includes the eyes of the patient and
labels them as tumor. This happens because the MRI scan is taken from the superior side of the
brain. This is why it is necessary to examine the orientation of the brain in the MRI scan.
CHAPTER 11:
CONCLUSION
This project, when backed up by a radiologist, can prove very helpful for the medical world. It
eases the task of image enhancement, which was previously accessible only through a person's
visual judgment, and thus gives a new and better perspective for tumor analysis. The radiologist
can look into other reports of the tumor, such as 3D ultrasounds of the brain, to obtain other
parameters like the volume of the tumor and its location in 3D space. This, when combined
with medical knowledge, can be a boon to the industry. The other parameters by which we can
judge whether a tumor is cancerous or non-cancerous, i.e. malignant or benign, are its texture
and surface parameters. Judging a tumor only by its size or area is not an optimal approach,
since many benign tumors are quite big in size. Thus, the judgment depends on the location of
the tumor and the cells of which it is made.
REFERENCES
[1] Mark C. Preul, M.D., "History of Brain Tumor Surgery", Neurosurgical Focus, Volume 18, April 2005, pp. 1-3.
[2] Mauricio Castillo, MD, "History and Evolution of Brain Tumor Imaging: Insights through Radiology", RSNA Radiology, November 2014, Volume 273, Issue 2S, pp. 1-2.
[3] Y. K. Lai, P. L. Rosin, "Efficient Circular Thresholding", IEEE Transactions on Image Processing, 23(3), pp. 992-1001, 2014.
[4] Rajesh C. Patil, Dr. A. S. Bhalchandra, "Brain Tumour Extraction from MRI Images Using MATLAB", International Journal of Electronics, Communication & Soft Computing Science and Engineering, ISSN: 2277-9477, Volume 2, Issue 1.
[5] Vipin Y. Borole, Sunil S. Nimbhore, Dr. Seema S. Kawthekar, "Image Processing Techniques for Brain Tumor Detection: A Review", International Journal of Emerging Trends & Technology in Computer Science, ISSN: 2278-6856, Volume 4, Issue 5(2).
[6] https://sites.google.com/site/dataclusteringalgorithms/k-means-clustering-algorithm
An Efficient k-means Clustering Algorithm: Analysis and Implementation by
Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth
Silverman and Angela Y. Wu.
[7] http://casemed.case.edu/clerkships/neurology/Web%20Neurorad/MRI%20Basics.htm
[8] https://en.wikipedia.org/wiki/Wiener_filter
[9] http://fourier.eng.hmc.edu/e161/lectures/smooth_sharpen/node2.html
[10] https://www.mathworks.com/company/newsletters/articles/the-watershed-transform-strategies-for-image-segmentation.html
[11] Krishna Raj, Amrish Kumar, and Ashish Chaturvedi. “A Comparative Analysis of LMS
and NLMS Algorithms for Adaptive Filtration of compressed ECG Signal”. Power Control and
Embedded Systems (ICPCES), 2012 2nd International Conference on pp. 1-6. Print ISBN: 978-1-
4673-1047-5. DOI 10.1109/ICPCS.2012.6508051
APPENDIX A
for x=2:1:row-1
    for y=2:1:col-1
        hpf(x,y)= kernel(1)*w(x-1,y-1)+kernel(2)*w(x-1,y)+kernel(3)*w(x-1,y+1)+...
                  kernel(4)*w(x,y-1)+kernel(5)*w(x,y)+kernel(6)*w(x,y+1)+...
                  kernel(7)*w(x+1,y-1)+kernel(8)*w(x+1,y)+kernel(9)*w(x+1,y+1);
    end
end
figure(5),imshow(hpf);
%median filtering
[row,col]=size(hpf);
med=zeros(row, col);
med=uint8(med);
for i=1:row
for j=1:col %intensity at (i,j)
xmin=max(1,i-1);
xmax=min(row,i+1); % include the row below so the 3x3 window is symmetric
ymin=max(1,j-1);
ymax=min(col,j+1);
temp=hpf(xmin:xmax , ymin:ymax);
med(i,j)=median(temp(:));
end
end
figure(6), imshow(med);
figure(8),
subplot(3,2,1), imshow(imindex==1,[]);
subplot(3,2,2), imshow(imindex==2,[]);
cluster= (imindex==3);
se=ones(5);
cluster=bwareaopen(cluster, 400);
figure(9),imshow(cluster);
%medbw=imbinarize(med, 0.9);
%figure, imshow(medbw);
%[row col]=size(med);
%maxcount=0;
%max=1;
%for a=imindex(:)
% count=0;
% for b=1:row-1
% for c=1:col-1
% if(med(b,c)==1 && imindex[a](b,c)==1)
% count=count+1;
% end
% end
% end
% if(count>maxcount)
% maxcount=count;
% max=a;
% end
%end
%cluster= (imindex==a);
%se=ones(5);
%cluster=bwareaopen(cluster, 400);
%figure(9),imshow(cluster);
%watershed segmentation
negatecluster=~cluster;
%figure(10), imshow(negatecluster);
dist=-bwdist(negatecluster);
dist(negatecluster)=-Inf;
L=watershed(dist);
%figure(11), imshow(L);
wi=label2rgb(L, 'hot', 'w');
figure(12),imshow(wi);
im=cluster;
im(L==0)=0;
figure(13), imshow(im);
area=0; % pixel count of the extracted tumor
for x=1:row
    for y=1:col
        if(im(x,y)==1)
            area=area+1;
        end
    end
end
app.m: This is the front page of the GUI application, where the MRI image of the brain is loaded through the file explorer.
function varargout = app(varargin)
% APP MATLAB code for app.fig
% APP, by itself, creates a new APP or raises the existing
% singleton*.
%
% H = APP returns the handle to a new APP or the handle to
% the existing singleton*.
%
% APP('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in APP.M with the given input arguments.
%
% APP('Property','Value',...) creates a new APP or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before app_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to app_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% Choose default command line output for app
handles.output = hObject;
% --- Outputs from this function are returned to the command line.
function varargout = app_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% hObject handle to pushbutton2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
img=getappdata(0,'loadmri');
%save('img.png','img');
imwrite(img, 'img.png');
%exportToFile(img, 'img.png');
Stage1screen2();
Stage1screen2.m: This is the second screen of the GUI application, where the clustered image of the brain tumor is shown separately; control is then transferred to the last screen.
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Outputs from this function are returned to the command line.
function varargout = Stage1screen2_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
w=wiener2(gray,[12 12]);
for x=2:1:row-1
    for y=2:1:col-1
        hpf(x,y)= kernel(1)*w(x-1,y-1)+kernel(2)*w(x-1,y)+kernel(3)*w(x-1,y+1)+...
                  kernel(4)*w(x,y-1)+kernel(5)*w(x,y)+kernel(6)*w(x,y+1)+...
                  kernel(7)*w(x+1,y-1)+kernel(8)*w(x+1,y)+kernel(9)*w(x+1,y+1);
    end
end
%median filtering
[row,col]=size(hpf);
med=zeros(row, col);
med=uint8(med);
for i=1:row
for j=1:col %intensity at (i,j)
xmin=max(1,i-1);
xmax=min(row,i+1); % include the row below so the 3x3 window is symmetric
ymin=max(1,j-1);
ymax=min(col,j+1);
temp=hpf(xmin:xmax , ymin:ymax);
med(i,j)=median(temp(:));
end
end
imshow(med);
onedimage= reshape(med, [], 1);
onedimage= double(onedimage);
img3=(imindex==3);
imwrite(img3,'cluster3.png');
img4=(imindex==4);
imwrite(img4,'cluster4.png');
%setappdata(0,'axes1','imindex');
elseif(cluster == 2)
tumor=imread('C:\Users\Admin\Desktop\final project\cluster2.png');
imwrite(tumor,'tumor.png');
elseif(cluster == 3)
tumor=imread('C:\Users\Admin\Desktop\final project\cluster3.png');
imwrite(tumor,'tumor.png');
elseif(cluster == 4)
tumor=imread('C:\Users\Admin\Desktop\final project\cluster4.png');
imwrite(tumor,'tumor.png');
end
end
imshow('C:\Users\Admin\Desktop\final project\cluster1.png');
% handles empty - handles not created until after all CreateFcns called
lastscreen.m: This is the final screen, where the output of the extracted tumor is shown and its area is given as a pixel ratio.
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Outputs from this function are returned to the command line.
function varargout = lastscreen_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
area=0; % pixel count of the tumor cluster
for x=1:row
    for y=1:col
        if(cluster(x,y)==1)
            area=area+1;
        end
    end
end
% Hint: place code in OpeningFcn to populate axes1
cluster=imread('C:\Users\Admin\Desktop\final project\tumor.png');
negatecluster=~cluster;
dist=-bwdist(negatecluster);
dist(negatecluster)=-Inf;
L=watershed(dist);