
CHAPTER 4

PROPOSED CLASSIFICATION AND SEGMENTATION FOR STROKE

4.1. Image Enhancement

Image acquisition is the process of obtaining a digitized image from a real-world source. Each step in the acquisition process may introduce random changes into the values of pixels in the image; these changes are called noise. The aim of image enhancement is to improve the interpretability or perception of information in images for human viewers, or to provide better input for other automated image processing techniques [10]. Early detection and correct treatment based on accurate diagnosis are important steps in improving disease outcome. The idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image.

4.1.1. Median filter

The median filter is one of the most popular nonlinear filters for removing salt-and-pepper noise. The noise is removed by replacing the value at the center of the window with the median value of the center's neighborhood.
The median filter is often applied to gray-value images because of its edge-preserving smoothing property. It is a nonlinear operator that sorts the pixels in a local window by their intensity values and replaces the value of the pixel in the result image with the middle value in this ordering. The median is a more robust average than the mean, so a single, highly unrepresentative pixel in a neighborhood does not affect the median value significantly. Since the median value must actually be the value of one of the pixels in the neighborhood, the median filter does not create new, unrealistic pixel values when the filter straddles an edge. For this reason the median filter is much better at preserving sharp edges than the mean filter [37].
Some of the properties of the median filter are:
 It is a nonlinear filter.
 It is useful in removing isolated lines or pixels while preserving spatial resolution. The median filter works well on binary noise but not so well when the noise is Gaussian.
The steps to perform median filtering are as follows:
1) Assume a 3x3 empty mask.
2) Place the mask at the top left-hand corner of the image.
3) Arrange the 9 pixels under the mask in ascending or descending order.
4) Choose the median from these nine values.
5) Place this median at the centre.
6) Move the mask throughout the image.
Thus, in median filtering, the grey level of the centre pixel is replaced by the median value of its neighborhood. Image enhancement in the medical field is a broad problem because of the noise that occurs in captured images. A captured brain image can contain a high amount of noise or distortion, so this noise must be removed before the image is used for diagnostic purposes. A median filter is more effective than convolution with a mean mask when the goal is to simultaneously reduce noise and preserve edges.
Median filters do an excellent job of rejecting certain types of noise, in particular "shot" or impulse noise, in which some individual pixels have extreme values. This impulse noise is also called salt-and-pepper noise because of its appearance as white and black dots superimposed on an image. The median is usually taken from a template centered on the point of interest.
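The six filtering steps above can be sketched as follows. This is an illustrative NumPy implementation, not code from the proposed system; the function name and the reflected-border handling at image edges are assumptions made for the sketch.

```python
import numpy as np

def median_filter_3x3(image):
    """Replace each pixel with the median of its 3x3 neighborhood.

    Border pixels are handled by reflecting the image edges, one
    common convention (an assumption of this sketch).
    """
    padded = np.pad(image, 1, mode="reflect")
    out = np.empty_like(image)
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            window = padded[y:y + 3, x:x + 3]   # the 3x3 mask
            out[y, x] = np.median(window)       # median of the 9 values
    return out
```

Applied to a constant image corrupted by a single impulse, the extreme pixel is replaced by the surrounding value, while no new pixel values are created.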

4.2. Features

Mean returns the average value of the elements of an array. The image mean is the average pixel value of an image [63]. For a grayscale image this is equal to the average brightness or intensity. The mean may be calculated by

E[f] = \frac{1}{YX} \sum_{y=0}^{Y-1} \sum_{x=0}^{X-1} f(x, y)    (4.1)

The image variance gives an estimate of the spread of pixel values around the image mean [02Tor]. The standard deviation is simply \sqrt{Var[f]}.

Var[f] = E[(f - E[f])^2]
       = \frac{1}{YX} \sum_{y=0}^{Y-1} \sum_{x=0}^{X-1} (f(x, y) - E[f])^2    (4.2)
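Equations (4.1) and (4.2) translate directly into NumPy; the following sketch (function names are illustrative) computes both statistics for a 2D array f:

```python
import numpy as np

def image_mean(f):
    # E[f] = (1 / (Y * X)) * sum over all pixels of f(x, y)   -- Eq. (4.1)
    return f.mean()

def image_variance(f):
    # Var[f] = E[(f - E[f])^2]                                -- Eq. (4.2)
    return ((f - f.mean()) ** 2).mean()
```

The standard deviation is then simply the square root of the variance.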

The four features derived from the GLCM are contrast, correlation, energy and homogeneity. Contrast measures the intensity contrast between a pixel and its neighbor over the whole image [11Pad]; it can be computed by Equation (4.3). Correlation is a statistical measure of how correlated a pixel is to its neighbor over the whole image; its formula is shown in Equation (4.4). Energy is the summation of squared elements in the GLCM and can be calculated by Equation (4.5). Homogeneity is the closeness of the distribution of elements in the GLCM to the GLCM diagonal [56]; Equation (4.6) is the formula for homogeneity.

Contrast = \sum_{i,j} |i - j|^2 \, p(i, j)    (4.3)

Correlation = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j) \, p(i, j)}{\sigma_i \sigma_j}    (4.4)

Energy = \sum_{i,j} p(i, j)^2    (4.5)

Homogeneity = \sum_{i,j} \frac{p(i, j)}{1 + |i - j|}    (4.6)
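Given a normalized GLCM p (whose entries sum to 1), Equations (4.3) through (4.6) can be evaluated as follows. This is an illustrative sketch; the function name is an assumption:

```python
import numpy as np

def glcm_features(p):
    """Contrast, correlation, energy and homogeneity of a normalized
    GLCM p, following Eqs. (4.3)-(4.6)."""
    n = p.shape[0]
    i, j = np.indices((n, n))
    contrast = np.sum(np.abs(i - j) ** 2 * p)                 # Eq. (4.3)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sigma_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sigma_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sigma_i * sigma_j)  # Eq. (4.4)
    energy = np.sum(p ** 2)                                   # Eq. (4.5)
    homogeneity = np.sum(p / (1 + np.abs(i - j)))             # Eq. (4.6)
    return contrast, correlation, energy, homogeneity
```

For a GLCM concentrated on the diagonal, contrast is zero and homogeneity reaches its maximum of one, consistent with the definitions above.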

The following table shows the features and their related definitions.

Table 4.1. Features and their meaning

Feature               Explanation
Mean                  The average of pixel intensities in the region
Standard deviation    Variation from the mean
Area                  Number of pixels in the region
Contrast              Measures the local variations in the gray-level co-occurrence matrix
Correlation           Measures how correlated a pixel is to its neighbour over the whole image
Energy                Provides the sum of squared elements in the GLCM
Homogeneity           Measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal

In the proposed system, the features shown in Table 4.1 are extracted from an image. The extracted features are used to construct the rule-based system: the features are expressed as If-Then rules to determine the class of abnormality based on the feature values.

4.3. Feature Extraction

Feature extraction generally refers to the extraction of discontinuities such as points, lines and edges, and of pixels forming homogeneous regions. Such features differ in gray level from the background area. In statistical approaches, texture statistics such as the moments of the gray-level histogram, or statistics based on the gray-level co-occurrence matrix, are computed to discriminate different textures.

4.3.1. Grey Level Co-occurrence Matrix (GLCM)

Texture analysis refers to the characterization of regions in an image by their texture content. All image processing operations generally aim at a better recognition of objects of interest, i.e. at finding suitable local features that distinguish them from other objects and from the background [21]. Texture analysis attempts to quantify intuitive qualities described by terms such as rough, silky, or bumpy in the context of an image. In this case, the roughness or bumpiness refers to variations in the brightness values or gray levels.
The GLCM is a tabulation of how often different combinations of pixel brightness values (gray levels) occur in a pixel pair in an image. The GLCM estimates image properties in terms of second-order statistics. These second-order statistics correspond to the likelihood of observing a pair of voxels v1 and v2 separated by a distance vector dxy in 2D space (x, y). A statistical method of examining texture that considers the spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values and in a specified spatial relationship occur in an image, creating a GLCM, and then extracting statistical measures from this matrix.
To create a GLCM, the graycomatrix function (from MATLAB's Image Processing Toolbox) is used. The graycomatrix function creates a gray-level co-occurrence matrix by calculating how often a pixel with the intensity (gray-level) value i occurs in a specific spatial relationship to a pixel with the value j. By default, the spatial relationship is defined as the pixel of interest and the pixel to its immediate right (horizontally adjacent), but other spatial relationships between the two pixels can be specified. Each element (i, j) in the resultant GLCM is simply the number of times that a pixel with value i occurred in the specified spatial relationship to a pixel with value j in the input image. The number of gray levels in the image determines the size of the GLCM. By default, graycomatrix uses scaling to reduce the number of intensity values in an image to eight, but the NumLevels and GrayLimits parameters can be used to control this scaling of gray levels.
The gray-level co-occurrence matrix can reveal certain properties about the spatial distribution of the gray levels in the texture image. For example, if most of the entries in the GLCM are concentrated along the diagonal, the texture is coarse with respect to the specified offset. After the GLCMs are created, several statistics can be derived from them using the graycoprops function; these statistics provide information about the texture of an image, and some of the most commonly used texture measures are obtained this way. The texture analysis support also includes several functions that filter using standard statistical measures, such as range, standard deviation, and entropy. Feature extraction based on the grey-level co-occurrence matrix (GLCM) is a second-order statistical method that can be used to analyze an image as a texture: the GLCM is a tabulation of the frequencies with which combinations of pixel brightness values occur in the image.
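Since graycomatrix is a MATLAB function, the counting it performs can be illustrated with a minimal NumPy equivalent for the default offset only (pixel of interest and the pixel to its immediate right), with no gray-level rescaling; the function name below is an assumption of the sketch:

```python
import numpy as np

def glcm(image, levels):
    """Count co-occurrences of gray levels (i, j) for the default
    horizontal offset: each pixel and its immediate right neighbor."""
    g = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols - 1):
            i, j = image[y, x], image[y, x + 1]
            g[i, j] += 1    # element (i, j) accumulates the pair count
    return g
```

Normalizing g by its total count turns it into the probability matrix p(i, j) used in Equations (4.3) through (4.6).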

4.4. Histogram-based Thresholding

A histogram plots the relative frequency of each pixel value that occurs in a
grayscale image. The histogram provides a convenient summary of the intensities in
an image, but is unable to convey any information regarding spatial relationships
between pixels.
If the histogram of an image includes several peaks, we can separate it into a number of modes. Each mode is expected to correspond to a region, and there exists a threshold at the valley between any two adjacent modes. The midpoint method finds an appropriate threshold value in an iterative fashion (Arifin & Asano 2006). The algorithm is outlined below:
1. Apply a reasonable initial threshold value.
2. Compute the means of the pixel values below and above this threshold, respectively.
3. Compute the mean of these two means and use this value as the new threshold value.
4. Continue until the difference between two consecutive threshold values is smaller than a preset minimum.
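The iterative scheme above can be sketched as follows (an illustrative implementation; the function name, the initial threshold and the stopping tolerance are assumptions, and both classes are assumed nonempty at every iteration):

```python
import numpy as np

def iterative_threshold(image, t0=128.0, eps=0.5):
    """Midpoint threshold selection: repeatedly set the threshold to
    the mean of the two class means until it stops changing."""
    t = float(t0)
    while True:
        below = image[image < t]             # step 2: pixels below t
        above = image[image >= t]            # step 2: pixels above t
        t_new = 0.5 * (below.mean() + above.mean())   # step 3
        if abs(t_new - t) < eps:             # step 4: convergence check
            return t_new
        t = t_new
```

On a cleanly bimodal image the threshold settles at the midpoint between the two mode means.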

4.4.1. Histogram-based Thresholding for ‘Hemorrhage’

An image histogram is defined as a plot of the occurrence of each gray level represented in the image. Based on the shape of the histogram, i.e. its valleys and peaks, a certain gray-scale value can be found and set as a threshold. Here, the threshold is derived from the difference of the histograms of the two hemispheres: by computing this difference, the pixel values of the abnormal region can be extracted.

Figure 4.1. Differences of two histograms (normal and hemorrhage)

4.4.2. Histogram-based Thresholding for ‘Infarct’

1. Compute the histogram and set the initial index k = 80.
2. Compute the two class probabilities p1(k) and p2(k) of the pixels below and above k.
3. Compute the two class means m1(k) and m2(k).
4. Compute the global mean value mg = p1(k)*m1(k) + p2(k)*m2(k).
5. Compute the between-class variance Var(k) = ( p1(k)*(m1(k)-mg)^2 ) + ( p2(k)*(m2(k)-mg)^2 ).
6. Find the maximum of Var over k to obtain 'IndexofMaxnum'.
'IndexofMaxnum' is set as the threshold to segment the infarct region.
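The between-class-variance maximization in steps 1-6 (essentially Otsu's method) might be sketched as follows; the function name and the 256-level histogram range are assumptions of this sketch:

```python
import numpy as np

def max_variance_threshold(image, levels=256):
    """Find the gray level k that maximizes the between-class variance
    Var(k) = p1*(m1-mg)^2 + p2*(m2-mg)^2, as in steps 1-6."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_k, best_v = 0, -1.0
    for k in range(1, levels):
        p1, p2 = p[:k].sum(), p[k:].sum()        # class probabilities
        if p1 == 0 or p2 == 0:
            continue                              # skip empty classes
        m1 = (np.arange(k) * p[:k]).sum() / p1   # class means
        m2 = (np.arange(k, levels) * p[k:]).sum() / p2
        mg = p1 * m1 + p2 * m2                   # global mean
        v = p1 * (m1 - mg) ** 2 + p2 * (m2 - mg) ** 2   # step 5
        if v > best_v:
            best_k, best_v = k, v                # step 6: track maximum
    return best_k
```

The returned index separates the two dominant intensity classes and would serve as the segmentation threshold.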

4.5. Region Growing

Region growing is based on the fact that the grey levels of the hot seeds are lower than those of the pixels near the edge region in the hot object, and the grey levels of the cold seeds are higher than those of the pixels near the edge region of the cold background. Thus the seeds grow into their respective regions to give a segmented binary image, which is the final output image. All region growing algorithms apply a criterion of pixel similarity, but the mechanism of region growing is closer to the watershed algorithm. Instead of controlling region growing by tuning homogeneity parameters, it is controlled by choosing a usually small number of pixels, known as seeds. Initially a single seed is chosen. Then its neighboring pixels are compared one by one with the seed chosen initially. If the homogeneity criterion is met, similar pixels are grouped together and the region is thereby grown [8].

4.5.1. Seeded Region Growing

Seeded region growing starts with an initial seed pixel and compares its neighborhood pixels with the seed according to some attribute, such as intensity or texture, merging them if they are similar enough. The first factor is the choice of the initial seed pixel from which seeded region growing starts. The second factor is the threshold value for measuring the difference between a pixel and its neighbors.
The region is iteratively grown by comparing all unallocated neighboring pixels to the region. The difference between a pixel's intensity value and the region's mean is used as a measure of similarity. The pixel with the smallest difference measured this way is allocated to the respective region. This process stops when the intensity difference between the region mean and the new pixel becomes larger than a certain threshold (t).
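The growth process can be sketched as follows. Note this is a simplified breadth-first variant, an assumption of the sketch: the classical algorithm allocates the unallocated neighbor with the smallest difference first (a priority queue), whereas here any 4-neighbor within threshold t of the running region mean is accepted.

```python
from collections import deque

import numpy as np

def seeded_region_grow(image, seed, t):
    """Grow a region from one seed: a 4-neighbor joins when the
    difference between its intensity and the region mean is below t."""
    rows, cols = image.shape
    region = np.zeros((rows, cols), dtype=bool)
    region[seed] = True
    total, count = float(image[seed]), 1      # running sum for the mean
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols and not region[ny, nx]:
                mean = total / count
                if abs(float(image[ny, nx]) - mean) < t:
                    region[ny, nx] = True     # allocate pixel to region
                    total += float(image[ny, nx])
                    count += 1
                    frontier.append((ny, nx))
    return region
```

Growth stops automatically once every remaining neighbor differs from the region mean by at least t, which corresponds to the stopping condition described above.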

4.6. Summary

To sum up, this chapter first describes the median filter used in the proposed system, and then discusses the GLCM and the features extracted from it. Histogram-based thresholding and seeded region growing are also used to detect and segment the abnormal region in brain images.
