Prostate Cancer


Abstract

Prostate cancer detection and analysis are constrained by human visual ability, since diagnosis depends entirely on microscopic examination. Computer-based image recognition schemes are therefore employed for accurate identification and classification of prostate cancer. In this work, detection is performed by applying k-means clustering to captured real-time prostate lesion images. Once a lesion has been detected, its features are extracted using Gray Level Co-occurrence Matrix (GLCM) texture features, Discrete Wavelet Transform (DWT) based low-level features, and statistical color features. Classification is commonly performed with SVM-based approaches, but these show low accuracy on texture features. To perform feature-based matching, an artificial-intelligence-based Probabilistic Neural Network (PNN) approach is adopted for classification instead. The proposed approach is implemented in the MATLAB environment, and its accuracy is considerably better than that of conventional approaches.

CHAPTER 1

INTRODUCTION

Human cancer has long been one of the most complex diseases; it is driven by the accumulation of multiple molecular alterations and genetic instability. Current diagnostic and prognostic classifications do not fully reflect tumor behaviour and are insufficient to predict which treatments will succeed. Most of the anti-cancer agents presently in use do not meaningfully differentiate between normal and cancerous cells [1]. In addition, cancer is often diagnosed and treated too late, when the cancer cells have already invaded and metastasized to other parts of the body. Among the many types of cancer, prostate cancer is one of the most common forms in humans. Two major groups are distinguished, non-melanoma and melanoma (Merkel cell carcinoma, squamous cell carcinoma, basal cell carcinoma, and so on) [2]. Melanoma is one of the most dangerous forms and can be fatal if not treated; when detected at an early stage it is highly curable, yet advanced melanoma is deadly. It is therefore well known that early detection and treatment of prostate cancer minimize morbidity. Digital image processing methods are widely considered and accepted in the medical field [3]. An automatic image processing approach typically has several stages: initial analysis of the given image, segmentation, feature extraction and selection of the required features, and finally lesion recognition. The segmentation step is especially significant because it determines the precision of the subsequent steps [4]. Supervised segmentation must cope with widely varying parameters such as lesion colors, sizes and shapes, along with diverse prostate textures and types, whereas unsupervised segmentation is a distinct task with its own properties. Although significant research effort has gone into computerized procedures for processing prostate images, drawbacks still remain.

In recent years, prostate cancer has become one of the most prevalent of all cancer types, and lesions are divided into benign and malignant. Of these, melanoma is recognized as the deadliest when compared with non-melanoma prostate cancers [1]. It is a known fact that melanoma affects more people every year, and early treatment is critical to patient survival. Inspection of malignant melanoma requires well-experienced dermatologists, who use computer-assisted systems for early detection of melanoma [2]. Many deep learning models have been applied to the diagnosis of prostate cancer, but achieving a high accuracy rate remains a challenge: to do so, a model must overcome the drawbacks of the conventional approaches.

This paper proposes a novel prostate cancer detection approach. Many studies have used image pre-processing to identify melanoma at an early stage, which leads to more effective treatment, so it is important to broaden access to such essential diagnostic care by building efficient frameworks for prostate disease classification. Experienced dermatologists have established the ABCDEs [3-4] (Asymmetrical shape, Border irregularities, Color, Diameter, and Evolution) as standardized descriptors to help visualize the characteristic features of severe melanoma cases. One of the main challenges in classifying harmful prostate lesions is the sheer range of variation in prostate tones across people of different ethnic backgrounds. Recently, advances in artificial neural networks (ANNs) have allowed computers to outperform dermatologists on prostate cancer classification tasks; the next step is to improve the accuracy of melanoma localization further. Our strategy for early diagnosis of prostate lesions incorporates deep learning, which helps enhance the accuracy of the automated framework compared with earlier methods. In this work, we propose a custom network for lesion classification.

1.1 Prostate Cancer

Prostate cancer is among the most common diseases, and its incidence has risen by around 40 percent. Suspicious prostate lesions are often biopsied, a process that is unpleasant for the patient and slow to yield diagnostic results [5]; in addition, the rate of unnecessary biopsies is very high. An image processing system for prostate cancer diagnosis is a procedure for recognizing prostate tissue or abnormalities from symptoms, signs, and the results of various diagnostic tests. Prostate cancer is a malignant tumor that grows in the prostate cells and accounts for more than 50 percent of all cancers. Fortunately, prostate cancers (squamous cell, basal cell, melanoma and other malignant forms) are rare in children. When melanomas occur, they normally arise from pigmented nevi (moles) that are asymmetric (diameter < 6 mm), with irregular coloration and borders. A mass, itching, or bleeding under the prostate are other signs of cancerous change. The prostate cancer detection system is the basic process for recognizing and identifying the symptoms of prostate cancer in its early stages, so that the disease can be diagnosed at the correct stage. The prostate is the largest body organ; it covers the whole body, and its thickness varies from about 0.5 mm on the eyelids to 4 mm or more on the palms of the hands and the feet. Its key role is to maintain the internal systems and protect the body; its other functions include the production of vitamin D, sensation, temperature regulation, and insulation. The prostate has three layers: the epidermis (the top layer), the dermis (the middle layer), and the hypodermis or subcutaneous tissue, as shown in Fig. 1.1.

Figure 1.1 Prostate Structure

1.2 Objective

The objective is to develop a prototype prostate cancer detection system for identifying melanoma that can distinguish various kinds of prostate cancer symptoms, helping patients catch melanoma at an early stage. The procedure aims at efficient extraction and classification of pigmented prostate lesions such as melanoma, thereby reducing possible biases introduced by medical specialists, particularly dermatologists. The approach is intended to ease the diagnosis and treatment of prostate cancer patients through automation and image processing, to offer a cost-effective form of treatment, and to speed up the identification of pigmented prostate lesions such as melanoma. It may also help physicians in rural areas identify melanoma at an early stage. The main goals are low cost and low time consumption.

CHAPTER 2

IMAGE SEGMENTATION

2.1 Segmentation:

In computer vision, segmentation refers to the process of partitioning a digital image


into multiple segments (sets of pixels, also known as super pixels). The goal of segmentation
is to simplify and/or change the representation of an image into something that is more
meaningful and easier to analyze. Image segmentation is typically used to locate objects and
boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process
of assigning a label to every pixel in an image such that pixels with the same label share
certain visual characteristics.

The result of image segmentation is a set of segments that collectively cover the entire image,
or a set of contours extracted from the image (see edge detection). Each of the pixels in a
region is similar with respect to some characteristic or computed property, such as color,
intensity, or texture. Adjacent regions are significantly different with respect to the same
characteristics.

 Thresholding

 Edge finding

 Binary mathematical morphology

 Gray-value mathematical morphology

In the analysis of the objects in images it is essential that we can distinguish between the objects of interest and "the rest." This latter group is also referred to as the background. The techniques that are used to find the objects of interest are usually referred to as segmentation techniques - segmenting the foreground from the background. In this section we will discuss two of the most common techniques - thresholding and edge finding - and we will present techniques for improving the quality of the segmentation result. It is important to understand that:

1. There is no universally applicable segmentation technique that will work for all images, and,

2. No segmentation technique is perfect.

2.1.1 THRESHOLDING:

This technique is based upon a simple concept. A parameter θ called the brightness threshold is chosen and applied to the image a[m, n] as follows:

If a[m, n] >= θ, then a[m, n] = object (1); else a[m, n] = background (0).

This version of the algorithm assumes that we are interested in light objects on a dark background. For dark objects on a light background we would use the reverse test:

If a[m, n] < θ, then a[m, n] = object (1); else a[m, n] = background (0).

The output is the label "object" or "background" which, due to its dichotomous nature, can be represented as a Boolean variable "1" or "0". In principle, the test condition could be based upon some other property than simple brightness (for example, If (Redness{a[m, n]} >= θred)), but the concept is clear.

The central question in thresholding then becomes: how do we choose the threshold θ? While
there is no universal procedure for threshold selection that is guaranteed to work on all
images, there are a variety of alternatives.

Fixed threshold - One alternative is to use a threshold that is chosen independently of the
image data. If it is known that one is dealing with very high-contrast images where the
objects are very dark and the background is homogeneous and very light, then a constant
threshold of 128 on a scale of 0 to 255 might be sufficiently accurate. By accuracy we mean
that the number of falsely-classified pixels should be kept to a minimum.
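As a minimal sketch, the fixed-threshold rule can be written directly in MATLAB; the image array a and the value 128 are taken from the discussion above:

theta = 128;                         % fixed threshold on a 0..255 scale
object = a >= theta;                 % "1" = object, "0" = background (light objects, dark background)
% For dark objects on a light background the test is simply reversed:
% object = a < theta;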

Histogram-derived thresholds -In most cases the threshold is chosen from the brightness
histogram of the region or image that we wish to segment. An image and its associated
brightness histogram are shown in Figure 2.

(a) Image to be thresholded (b) Brightness histogram of the image

Figure 2: Pixels below the threshold (a [m, n] <θ) will be labeled as object pixels; those
above the threshold will be labeled as background pixels.

A variety of techniques have been devised to automatically choose a threshold starting from the gray-value histogram, {h[b] | b = 0, 1, ..., 2^B - 1}. Some of the most common ones are presented below. Many of these algorithms can benefit from a smoothing of the raw histogram data to remove small fluctuations, but the smoothing algorithm must not shift the peak positions. This translates into a zero-phase smoothing algorithm, where typical values for the window width W are 3 or 5.

Isodata algorithm: This iterative technique for choosing a threshold was developed by Ridler and Calvard. The histogram is initially segmented into two parts using a starting threshold value such as θ_0 = 2^(B-1), half the maximum dynamic range. The sample mean m_{f,0} of the gray values associated with the foreground pixels and the sample mean m_{b,0} of the gray values associated with the background pixels are computed. A new threshold value θ_1 is then computed as the average of these two sample means. The process is repeated, based upon the new threshold, until the threshold value does not change any more; in formula, θ_{k+1} = (m_{f,k} + m_{b,k}) / 2, iterated until θ_{k+1} = θ_k.
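A minimal MATLAB sketch of the Ridler-Calvard iteration, assuming a grayscale image a with values on an ordinary 0..255 scale:

g = double(a(:));                    % gray values as a column vector
theta = (max(g) + min(g)) / 2;       % start at half the dynamic range
prev = -Inf;
while abs(theta - prev) > 0.5        % stop once the threshold no longer changes
    prev = theta;
    mf = mean(g(g >= theta));        % sample mean of the foreground (light) pixels
    mb = mean(g(g <  theta));        % sample mean of the background pixels
    theta = (mf + mb) / 2;           % new threshold: average of the two sample means
end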

Background-symmetry algorithm - This technique assumes a distinct and dominant peak for the background that is symmetric about its maximum. The technique can benefit from smoothing as described above. The maximum peak (maxp) is found by searching for the maximum value in the histogram. The algorithm then searches on the non-object pixel side of that maximum to find a p% point as in eq. (39).

In Figure 2b, where the object pixels are located to the left of the background peak at brightness 183, this means searching to the right of that peak to locate, as an example, the 95% value. At this brightness value, 5% of the pixels lie to the right of (are above) that value. This occurs at brightness 216 in Figure 2b. Because of the assumed symmetry, we use as a threshold a displacement to the left of the maximum that is equal to the displacement to the right where the p% point is found. For Figure 2b this means a threshold value given by 183 - (216 - 183) = 150; in formula, θ = maxp - (b_p% - maxp).

This technique can be adapted easily to the case where we have light objects on a dark,
dominant background. Further, it can be used if the object peak dominates and we have
reason to assume that the brightness distribution around the object peak is symmetric. An
additional variation on this symmetry theme is to use an estimate of the sample standard
deviation (s in eq. (37)) based on one side of the dominant peak and then use a threshold
based on θ= maxp +/- 1.96s (at the 5% level) or θ= maxp +/- 2.57s (at the 1% level). The
choice of "+" or "-" depends on which direction from maxp is being defined as the
object/background threshold. Should the distributions be approximately Gaussian around
maxp, then the values 1.96 and 2.57 will, in fact, correspond to the 5% and 1 % level.
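A sketch of the background-symmetry computation for the situation of Figure 2b (dark objects, dominant light background, p = 95%); the 256-bin uint8 histogram is an assumption:

h = imhist(uint8(a));                    % 256-bin brightness histogram
[~, ipeak] = max(h);                     % bin index of the dominant background peak
maxp = ipeak - 1;                        % peak brightness (histogram bins are 1-based)
c = cumsum(h) / sum(h);                  % cumulative fraction of pixels
b95 = find(c >= 0.95, 1, 'first') - 1;   % brightness below which 95% of the pixels lie
theta = maxp - (b95 - maxp);             % mirror the displacement to the object side of the peak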

Triangle algorithm - This technique, due to Zack, is illustrated in Figure 3. A line is constructed between the maximum of the histogram at brightness bmax and the lowest value bmin = (p = 0)% in the image. The distance d between the line and the histogram h[b] is computed for all values of b from b = bmin to b = bmax. The brightness value Bo where the distance between h[Bo] and the line is maximal is the threshold value, that is, θ = Bo. This technique is particularly effective when the object pixels produce a weak peak in the histogram.
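A sketch of the triangle computation on a 256-bin histogram; it assumes bmin lies below bmax, as in Figure 3:

h = imhist(uint8(a));                % brightness histogram
[hmax, bmax] = max(h);               % histogram peak
bmin = find(h > 0, 1, 'first');      % lowest occupied brightness bin
b = (bmin:bmax)';                    % candidate bins between bmin and bmax
d = abs(hmax*(b - bmin) - (bmax - bmin)*h(b)) / hypot(hmax, bmax - bmin);
[~, k] = max(d);                     % bin with the largest distance d to the line
theta = b(k) - 1;                    % threshold, converted back to a 0-based brightness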

Figure 3: The triangle algorithm is based on finding the value of b that gives the maximum
distance d.

The three procedures described above give the values θ= 139 for the Isodata algorithm, θ=
150 for the background symmetry algorithm at the 5% level, and θ= 152 for the triangle
algorithm for the image.

Thresholding does not have to be applied to entire images but can be used on a region by
region basis. Chow and Kaneko developed a variation in which the MxN image is divided
into non-overlapping regions. In each region a threshold is calculated and the resulting
threshold values are put together (interpolated) to form a thresholding surface for the entire
image. The regions should be of "reasonable" size so that there are a sufficient number of
pixels in each region to make an estimate of the histogram and the threshold. The utility of
this procedure, like so many others, depends on the application at hand.

2.1.2 EDGE FINDING:

Thresholding produces a segmentation that yields all the pixels that, in principle, belong to the object or objects of interest in an image. An alternative to this is to find those pixels that belong to the borders of the objects. Techniques that are directed to this goal are termed edge finding techniques. From our discussion on mathematical morphology, we see that there is an intimate relationship between edges and regions.

Gradient-based procedure- The central challenge to edge finding techniques is to find


procedures that produce closed contours around the objects of interest. For objects of
particularly high SNR, this can be achieved by calculating the gradient and then using a
suitable threshold. This is illustrated in Figure 4.

(a)SNR = 30 dB (b)SNR = 20 dB

Figure 4: Edge finding based on the Sobel gradient combined with the Isodata thresholding algorithm.

While the technique works well for the 30 dB image in Figure 4a, it fails to provide an accurate determination of those pixels associated with the object edges for the 20 dB image in Figure 4b. A variety of smoothing techniques can be used to reduce the noise effects before the gradient operator is applied.
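A sketch of the gradient-based procedure using Image Processing Toolbox functions; the threshold can be chosen with the isodata iteration sketched earlier, or the convenience call edge can pick one automatically:

I  = im2double(a);
sv = fspecial('sobel');              % Sobel kernel (responds to horizontal edges)
gx = imfilter(I, sv', 'replicate');  % horizontal gradient component
gy = imfilter(I, sv,  'replicate');  % vertical gradient component
gmag = hypot(gx, gy);                % gradient magnitude image, to be thresholded
BW = edge(I, 'sobel');               % Sobel gradient plus an automatically chosen threshold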

Zero-crossing based procedure- A more modern view to handling the problem of edges in
noisy images is to use the zero crossings generated in the Laplacian of an image. The
rationale starts from the model of an ideal edge, a step function, that has been blurred by an
OTF such as Table 4 T.3 (out-of-focus), T.5 (diffraction-limited), or T.6 (general model) to
produce the result shown in Figure 5.

Figure 5: Edge finding based on the zero crossing as determined by the second derivative,
the Laplacian. The curves are not to scale.

The edge location is, according to the model, at that place in the image where the Laplacian
changes sign, the zero crossing. As the Laplacian operation involves a second derivative, this

means a potential enhancement of noise in the image at high spatial frequencies. To prevent
enhanced noise from dominating the search for zero crossings, a smoothing is necessary.

The appropriate smoothing filter, from among the many possibilities, should according to Canny have the following properties:

 In the frequency domain, (u,v) or (Ω,ψ), the filter should be as narrow as possible to
provide suppression of high frequency noise, and;
 In the spatial domain, (x, y) or [m, n], the filter should be as narrow as possible to
provide good localization of the edge. A too wide filter generates uncertainty as to
precisely where, within the filter width, the edge is located.

The smoothing filter that simultaneously satisfies both these properties - minimum bandwidth and minimum spatial width - is the Gaussian filter described earlier. This means that the image should be smoothed with a Gaussian of appropriate width σ, followed by application of the Laplacian, where g2D(x, y) denotes the two-dimensional Gaussian. Because the derivative operation is linear and shift-invariant, the order of the operators can be exchanged, or the two can be combined into one single filter. This second approach leads to the Marr-Hildreth formulation of the "Laplacian-of-Gaussians" (LoG) filter which, given the circular symmetry, can also be written as a function of the radius r alone.

This two-dimensional convolution kernel, which is sometimes referred to as a "Mexican hat


filter", is illustrated in Figure 6.

(a) -LoG(x, y) (b) LoG(r)

Figure 6: LoG filter with σ = 1.0.
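A small MATLAB sketch of the LoG kernel for σ = 1.0; the kernel size (about ±3σ) is an assumption:

sigma = 1.0;
hsize = 2*ceil(3*sigma) + 1;                     % support of roughly +/- 3 sigma
lg = fspecial('log', hsize, sigma);              % Laplacian-of-Gaussian ("Mexican hat") kernel
r  = imfilter(im2double(a), lg, 'replicate');    % LoG response; edges lie at its zero crossings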

PLUS-based procedure - Among the zero crossing procedures for edge detection, perhaps the most accurate is the PLUS filter as developed by Verbeek and Van Vliet. The filter combines the Laplacian with the second derivative in the gradient direction (SDGD).

Neither the derivation of the PLUS's properties nor an evaluation of its accuracy is within the scope of this section. Suffice it to say that, for positively curved edges in gray value images, the Laplacian-based zero crossing procedure overestimates the position of the edge and the SDGD-based procedure underestimates the position. This is true in both two-dimensional and three-dimensional images, with an error on the order of (σ/R)^2, where R is the radius of curvature of the edge.

The PLUS operator has an error on the order of (σ/R)^4 if the image is sampled at, at least, 3x the usual Nyquist sampling frequency or if we choose σ >= 2.7 and sample at the usual Nyquist frequency.

All of the methods based on zero crossings in the Laplacian must be able to distinguish between zero crossings and zero values. While the former represent edge positions, the latter can be generated by regions that are no more complex than bilinear surfaces, that is, a(x,y) = a0 + a1*x + a2*y + a3*x*y. To distinguish between these two situations, we first find the zero crossing positions and label them as "1" and all other pixels as "0". We then multiply the resulting image by a measure of the edge strength at each pixel. There are various measures for the edge strength that are all based on the gradient. The last possibility, use of a morphological gradient as an edge strength measure, was first described by Lee, Haralick, and Shapiro and is particularly effective. After multiplication the image is then thresholded (as above) to produce the final result. The procedure is thus as follows:

Figure 7: General strategy for edges based on zero crossings.

The results of these two edge finding techniques based on zero crossings, LoG filtering and
PLUS filtering, are shown in Figure 7 for images with a 20 dB SNR.

a) Image SNR = 20 dB b)LoG filter c)PLUS filter

Figure 7: Edge finding using zero crossing algorithms LoG and PLUS.
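In MATLAB the LoG zero-crossing procedure is available directly through the edge function; the value of sigma below is an assumption:

BW = edge(im2double(a), 'log', [], 2.0);   % LoG filtering followed by zero-crossing detection
% edge(I, 'zerocross', [], h) applies the same zero-crossing logic to a user-supplied filter h.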

Both algorithms provide, as the name edge finding suggests, an image that contains a collection of edge pixels. Should the edge pixels correspond to objects, as opposed to, say, simple lines in the image, then a region-filling technique may be required to recover the complete objects.

2.1.3 BINARY MATHEMATICAL MORPHOLOGY

The various algorithms that we have described for mathematical morphology in Section 9.6
can be put together to form powerful techniques for the processing of binary images and gray
level images. As binary images frequently result from segmentation processes on gray level
images, the morphological processing of the binary result permits the improvement of the
segmentation result.

Salt-or-pepper filtering - Segmentation procedures frequently result in isolated "1" pixels in a "0" neighborhood (salt) or isolated "0" pixels in a "1" neighborhood (pepper). The appropriate neighborhood definition (4-connected or 8-connected) must be chosen first. Using the lookup table formulation for Boolean operations in a 3 x 3 neighborhood, salt filtering and pepper filtering are straightforward to implement. Each of the nine positions in the 3 x 3 neighborhood is weighted by a distinct power of two, with the center pixel given weight 2^0 = 1.

For a 3 x 3 window in a[m, n] with values "0" or "1" we then compute the weighted sum over the window.

The result, sum, is a number bounded by 0 <= sum <= 511.

Salt Filter - The 4-connected and 8-connected versions of this filter are the same and are given by the following procedure:

i) Compute sum
ii) If (sum == 1) then c[m, n] = 0, else c[m, n] = a[m, n]

Pepper Filter - The 4-connected and 8-connected versions of this filter are given by the following procedures:

4-connected:
i) Compute sum
ii) If (sum == 170) then c[m, n] = 1, else c[m, n] = a[m, n]

8-connected:
i) Compute sum
ii) If (sum == 510) then c[m, n] = 1, else c[m, n] = a[m, n]
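Rather than building the lookup table by hand, essentially the same effect can be obtained with toolbox morphology (a sketch; bwmorph operates on the 8-connected neighborhood):

c = bwmorph(BW, 'clean');            % salt filter: remove isolated "1" pixels
c = bwmorph(c,  'fill');             % pepper filter: set isolated "0" pixels to "1"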

Isolate objects with holes - To find objects with holes we can use the following procedure
which is illustrated in Figure 8.

i) Segment image to produce binary mask representation

ii) Compute skeleton without end pixels

iii) Use salt filter to remove single skeleton pixels

iv) Propagate remaining skeleton pixels into original binary mask.

(a)Binary image b) Skeleton after salt filter c) Objects with holes

Figure 8: Isolation of objects with holes using morphological operations.

The binary objects are shown in gray and the skeletons, after application of the salt filter, are shown as a black overlay on the binary objects. Note that this procedure uses no parameters other than the fundamental choice of connectivity; it is free from "magic numbers." In the example shown in Figure 8, the 8-connected definition was used as well as the structuring element B = N8.

Filling holes in objects- To fill holes in objects we use the following procedure which is
illustrated in Figure 9.

i) Segment image to produce binary representation of objects

ii) Compute complement of binary image as a mask image

iii) Generate a seed image as the border of the image

iv) Propagate the seed into the mask

v) Complement result of propagation to produce final result

a) Mask and Seed images b) Objects with holes filled

Figure 9: Filling holes in objects.

The mask image is illustrated in gray in Figure 9a and the seed image is shown in black in that same illustration. When the object pixels are specified with a connectivity of C = 8, then the propagation into the mask (background) image should be performed with a connectivity of C = 4, that is, dilations with the structuring element B = N4. This procedure is also free of "magic numbers."

Removing border-touching objects- Objects that are connected to the image border are not
suitable for analysis. To eliminate them we can use a series of morphological operations that
are illustrated in Figure 10.

i) Segment image to produce binary mask image of objects

ii) Generate a seed image as the border of the image

iii) Propagate the seed into the mask

iv) Compute XOR of the propagation result and the mask image as final result

a) Mask and Seed images b) Remaining objects

Figure 10: Removing objects touching borders.

The mask image is illustrated in gray in Figure 10a and the seed image is shown in black in that same illustration. If the structuring element used in the propagation is B = N4, then objects are removed that are 4-connected with the image boundary. If B = N8 is used, then objects that are 8-connected with the boundary are removed.
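A sketch of the same procedure with morphological reconstruction; imclearborder is the toolbox shortcut:

seed = false(size(BW));                  % seed image: the border of the object mask
seed([1 end], :) = BW([1 end], :);
seed(:, [1 end]) = BW(:, [1 end]);
touching = imreconstruct(seed, BW, 4);   % objects connected to the image border
kept = xor(BW, touching);                % XOR with the mask keeps only interior objects
% kept = imclearborder(BW, 4);           % equivalent convenience call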

Exo-skeleton- The exo-skeleton of a set of objects is the skeleton of the background that
contains the objects. The exo-skeleton produces a partition of the image into regions each of
which contains one object. The actual skeletonization is performed without the preservation
of end pixels and with the border set to "0." The procedure is described below and the result
is illustrated in Figure 11.

i. Segment image to produce binary image.


ii. Compute complement of binary image.
iii. Compute the skeleton of the complemented image, with the border set to "0".

Figure 11: Exo-skeleton.

Touching objects - Segmentation procedures frequently have difficulty separating slightly touching, yet distinct, objects. The following procedure provides a mechanism to separate these objects and makes minimal use of "magic numbers." It relies on the exo-skeleton described above, again computed without preservation of end pixels and with the border set to "0." The procedure is illustrated in Figure 12.

i) Segment image to produce binary image.

ii) Compute a "small number" of erosions with B = N4

iii) Compute exo-skeleton of eroded result

iv) Complement exo-skeleton result.

v) Compute AND of original binary image and the complemented exo-skeleton

a) Eroded and exo-skeleton images b) Objects separated (detail)

Figure 12: Separation of touching objects.

The eroded binary image is illustrated in gray in Figure 12a and the exo-skeleton image is shown in black in that same illustration. An enlarged section of the final result is shown in Figure 12b and the separation is easily seen. This procedure involves choosing a small, minimum number of erosions, but the number is not critical as long as it initiates a coarse separation of the desired objects. The actual separation is performed by the exo-skeleton which, itself, is free of "magic numbers." If the exo-skeleton is 8-connected then the background separating the objects will be 8-connected. The objects, themselves, will be disconnected according to the 4-connected criterion.
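A sketch of the erosion / exo-skeleton separation; the two erosions stand in for the "small number" mentioned above and are an assumption:

er = BW;
for k = 1:2
    er = imerode(er, strel('diamond', 1));   % a few erosions with B = N4
end
exo = bwmorph(~er, 'skel', Inf);             % exo-skeleton: skeleton of the background
sep = BW & ~exo;                             % AND of the original image and the complemented exo-skeleton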

2.1.4 GRAY-VALUE MATHEMATICAL MORPHOLOGY

Gray-value morphological processing techniques can be used for practical problems such as
shading correction. In this section several other techniques will be presented.

Top-hat transform - The isolation of gray-value objects that are convex can be accomplished with the top-hat transform as developed by Meyer. Depending upon whether we are dealing with light objects on a dark background or dark objects on a light background, the transform is defined as:

Light objects - TopHat(A, B) = A - open(A, B), the image minus its gray-value opening by B.

Dark objects - TopHat(A, B) = close(A, B) - A, the gray-value closing of the image by B minus the image.

Where the structuring element B is chosen to be bigger than the objects in question and, if possible, to have a convex shape. Because of these definitions, TopHat(A, B) >= 0. An example of this technique is shown in Figure 13.

The original image, including shading, is processed by a 15 x 1 structuring element as described above to produce the desired result. Note that the transform for dark objects has been defined in such a way as to yield "positive" objects as opposed to "negative" objects. Other definitions are, of course, possible.
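A sketch with the toolbox top-hat operators and the 15 x 1 structuring element mentioned above:

se = strel('line', 15, 0);           % 15 x 1 structuring element
lightTH = imtophat(A, se);           % light objects on a dark background: A minus its opening by B
darkTH  = imbothat(A, se);           % dark objects on a light background: closing of A by B minus A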

Thresholding - A simple estimate of a locally-varying threshold surface can be derived from morphological processing as follows:

Threshold surface - θ[m, n] = (max(A) + min(A)) / 2

Once again, we suppress the notation for the structuring element B under the max and min operations to keep the notation simple. Its use, however, is understood.
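A sketch of the threshold surface as the midpoint of the local max and min over B; the disk-shaped B and its size are assumptions:

A  = im2double(A);
se = strel('disk', 15);
T  = (imdilate(A, se) + imerode(A, se)) / 2;   % local (max + min) / 2 threshold surface
BW = A > T;                                    % threshold each pixel against the local surface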

(A) Original

(a) Light object transform (b) Dark object transform

Figure 13: Top-hat transforms.

Local contrast stretching - Using morphological operations we can implement a technique for local contrast stretching. That is, the amount of stretching that will be applied in a neighborhood will be controlled by the original contrast in that neighborhood. The morphological gradient may also be seen as related to a measure of the local contrast in the window defined by the structuring element B.

The procedure for local contrast stretching is given by:

c[m, n] = scale * (a[m, n] - min(A)) / (max(A) - min(A))

where the max and min operations are taken over the structuring element B centered at [m, n]. The effect of this procedure is illustrated in Figure 14. It is clear that this local operation is an extended version of the point operation for contrast stretching.
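A sketch of the local contrast stretch, scaling each pixel between the local minimum (erosion) and maximum (dilation); the neighborhood size is an assumption:

A  = im2double(A);
se = strel('disk', 7);               % structuring element B
lo = imerode(A, se);                 % local minimum over B
hi = imdilate(A, se);                % local maximum over B
C  = (A - lo) ./ max(hi - lo, eps);  % output scaled by the local contrast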


Figure 14: Local contrast stretching.

Using standard test images (as we have seen in so many examples) illustrates the power of
this local morphological filtering approach.

CHAPTER 3

DIGITAL IMAGE PROCESSING

What is DIP?

An image may be defined as a two-dimensional function f(x, y), where x & y are spatial
coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or
gray level of the image at that point. When x, y & the amplitude values of f are all finite
discrete quantities, we call the image a digital image. The field of DIP refers to processing
digital image by means of digital computer. Digital image is composed of a finite number of
elements, each of which has a particular location & value. The elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images. There is no general agreement among authors regarding where image processing stops & other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing & computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, & high-level processes. A low-level process involves primitive operations such as image preprocessing to reduce noise, contrast enhancement & image sharpening; it is characterized by the fact that both its inputs & outputs are images. A mid-level process on images involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, & classification of individual objects; it is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision. Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social & economic value.

What is an image?

An image is represented as a two-dimensional function f(x, y) where x and y are spatial


co-ordinates and the amplitude of ‘f’ at any pair of coordinates (x, y) is called the intensity of
the image at that point.

Gray scale image:

A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane.

I(x, y) is the intensity of the image at the point (x, y) on the image plane.

I(x, y) takes non-negative values; assume the image is bounded by a rectangle [0, a] x [0, b], so that I: [0, a] x [0, b] -> [0, inf).

Color image

It can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.

Coordinate convention:

The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns. We say that the image is of size M x N. The values of the coordinates (x, y) are discrete quantities.

For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0). The next coordinate value along the first row of the image is (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row. It does not mean that these are the actual values of physical coordinates when the image was sampled.

Following figure shows the coordinate convention. Note that x ranges from 0 to M-1
and y from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays is different from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns.

Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these as pixel coordinates.

Less frequently the toolbox also employs another coordinate convention called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.

Image as Matrices:

The preceding discussion leads to the following representation for a digitized image function:

f(x, y) = [ f(0, 0)      f(0, 1)      ...   f(0, N-1)
            f(1, 0)      f(1, 1)      ...   f(1, N-1)
            ...          ...          ...   ...
            f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel or pel. The terms image and pixel are used throughout the rest of our discussions to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1, 1)    f(1, 2)    ...   f(1, N)
      f(2, 1)    f(2, 2)    ...   f(2, N)
      ...        ...        ...   ...
      f(M, 1)    f(M, 2)    ...   f(M, N) ]

Where f(1, 1) = f(0, 0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q. For example, f(6, 2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N respectively to denote the number of rows and columns in a matrix. A 1xN matrix is called a row vector, whereas an Mx1 matrix is called a column vector. A 1x1 matrix is a scalar.

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and so on. Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman italic notation, such as f(x, y), for mathematical expressions.

Reading Images

Images are read into the MATLAB environment using function imread whose syntax is

imread(‘filename’)

Format name   Description                        Recognized extensions
TIFF          Tagged Image File Format           .tif, .tiff
JPEG          Joint Photographic Experts Group   .jpg, .jpeg
GIF           Graphics Interchange Format        .gif
BMP           Windows Bitmap                     .bmp
PNG           Portable Network Graphics          .png
XWD           X Window Dump                      .xwd

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image chestxray (see the table above) into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB to suppress output. If a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB command window.

When, as in the preceding command line, no path is included in filename, imread reads the file from the current directory and, if that fails, it tries to find the file in the MATLAB search path. The simplest way to read an image from a specified directory is to include a full or relative path to that directory in filename.

For example,

>> f = imread('D:\myimages\chestxray.jpg');

reads the image from a folder called myimages on the D: drive, whereas

>> f = imread('.\myimages\chestxray.jpg');

reads the image from the myimages subdirectory of the current working directory. The current directory window on the MATLAB desktop toolbar displays MATLAB's current working directory and provides a simple, manual way to change it. The table above lists some of the most popular image/graphics formats supported by imread and imwrite.

Function size gives the row and column dimensions of an image:

>> size(f)

ans = 1024  1024

This function is particularly useful in programming when used in the following form to determine automatically the size of an image:

>> [M, N] = size(f);

This syntax returns the number of rows (M) and columns (N) in the image.

The whos function displays additional information about an array. For instance, the statement

>> whos f

gives

Name    Size         Bytes      Class
f       1024x1024    1048576    uint8 array

Grand total is 1048576 elements using 1048576 bytes

The uint8 entry shown refers to one of several MATLAB data classes. A semicolon at the end of a whos line has no effect, so normally one is not used.

Displaying Images

Images are displayed on the MATLAB desktop using function imshow, which has the basic syntax:

imshow(f, G)

where f is an image array, and G is the number of intensity levels used to display it. If G is omitted, it defaults to 256 levels. Using the syntax

imshow(f, [low high])

displays as black all values less than or equal to low and as white all values greater than or equal to high. The values in between are displayed as intermediate intensity values using the default number of levels. Finally, the syntax

imshow(f, [ ])

sets variable low to the minimum value of array f and high to its maximum value. This form of imshow is useful for displaying images that have a low dynamic range or that have positive and negative values.

Function pixval is used frequently to display the intensity values of individual pixels
interactively. This function displays a cursor overlaid on an image. As the cursor is moved
over the image with the mouse the coordinates of the cursor position and the corresponding
intensity values are shown on a display that appears below the figure window .When working
with color images, the coordinates as well as the red, green and blue components are
displayed. If the left button on the mouse is clicked and then held pressed, pixval displays the
Euclidean distance between the initial and current cursor locations.

The syntax form of interest here is pixval, which shows the cursor on the last image displayed. Clicking the X button on the cursor window turns it off.

The following statements read from disk an image called rose_512.tif, extract basic information about the image, and display it using imshow:

>> f = imread('rose_512.tif');

>> whos f

Name    Size       Bytes     Class
f       512x512    262144    uint8 array

Grand total is 262144 elements using 262144 bytes

>>imshow(f)

A semicolon at the end of an imshow line has no effect, so normally one is not used. If another image, g, is displayed using imshow, MATLAB replaces the image on the screen with the new image. To keep the first image and output a second image, we use function figure as follows:

>> figure, imshow(g)

Using the statement

>> imshow(f), figure, imshow(g)

displays both images.

Note that more than one command can be written on a line, as long as the different commands are properly delimited by commas or semicolons. As mentioned earlier, a semicolon is used whenever it is desired to suppress screen outputs from a command line.

Suppose that we have just read an image h and find that displaying it with imshow produces a dull, low-contrast result. It is clear that this image has a low dynamic range, which can be remedied for display purposes by using the statement

>> imshow(h, [ ])

WRITING IMAGES

Images are written to disk using function imwrite, which has the following basic syntax:

imwrite(f, 'filename')

With this syntax, the string contained in filename must include a recognized file format extension. For example, the following command writes f to a TIFF file named patient10_run1:

>> imwrite(f, 'patient10_run1.tif')

Alternatively, the desired format can be specified explicitly with a third input argument:

>> imwrite(f, 'patient10_run1', 'tif')

If filename contains no path information, then imwrite saves the file in the current working directory.

The imwrite function can have other parameters, depending on the file format selected. Most of the work in the following deals either with JPEG or TIFF images, so we focus attention here on these two formats.

A more general imwrite syntax applicable only to JPEG images is

imwrite(f, 'filename.jpg', 'quality', q)

where q is an integer between 0 and 100 (the lower the number, the higher the degradation due to JPEG compression).

For example, for q = 25 the applicable syntax is

>> imwrite(f, 'bubbles25.jpg', 'quality', 25)

The image for q = 15 has false contouring that is barely visible, but this effect becomes quite pronounced for q = 5 and q = 0. Thus, an acceptable solution with some margin for error is to compress the images with q = 25. In order to get an idea of the compression achieved and to obtain other image file details, we can use function imfinfo, which has the syntax

imfinfo filename

Here filename is the complete file name of the image stored on disk.

For example,

>>imfinfo bubbles25.jpg

outputs the following information(note that some fields contain no information in this
case):

Filename: ‘bubbles25.jpg’

FileModDate: ’04-jan-2003 12:31:26’

FileSize: 13849

Format: ‘jpg’

Format Version: ‘‘

Width: 714

Height: 682

Bit Depth: 8

Color Depth: ‘grayscale’

Format Signature: ‘‘

Comment: {}

where the file size is in bytes. The number of bytes in the original image is computed simply by multiplying width by height by bit depth and dividing the result by 8. The result is 486948. Dividing this by the file size gives the compression ratio: (486948/13849) = 35.16. This compression ratio was achieved while maintaining image quality consistent with the requirements of the application. In addition to the obvious advantages in storage space, this reduction allows the transmission of approximately 35 times the amount of uncompressed data per unit time.

The information fields displayed by imfinfo can be captured into a so-called structure variable that can be used for subsequent computations. Using the preceding image as an example and assigning the name K to the structure variable, we use the syntax

>> K = imfinfo('bubbles25.jpg');

to store into variable K all the information generated by command imfinfo. The information generated by imfinfo is appended to the structure variable by means of fields, separated from K by a dot. For example, the image height and width are now stored in structure fields K.Height and K.Width.

As an illustration, consider the following use of structure variable K to compute the compression ratio for bubbles25.jpg:

>> K = imfinfo('bubbles25.jpg');

>> image_bytes = K.Width * K.Height * K.BitDepth / 8;

>> compressed_bytes = K.FileSize;

>> compression_ratio = image_bytes / compressed_bytes

compression_ratio = 35.162

Note that imfinfo was used in two different ways. The first was to type imfinfo bubbles25.jpg at the prompt, which resulted in the information being displayed on the screen. The second was to type K = imfinfo('bubbles25.jpg'), which resulted in the information generated by imfinfo being stored in K. These two different ways of calling imfinfo are an example of command-function duality, an important concept that is explained in more detail in the MATLAB online documentation.

A more general imwrite syntax applicable only to TIFF images has the form

imwrite(g, 'filename.tif', 'compression', 'parameter', 'resolution', [colres rowres])

where 'parameter' can have one of the following principal values: 'none' indicates no compression, 'packbits' indicates packbits compression (the default for nonbinary images) and 'ccitt' indicates ccitt compression (the default for binary images).

The 1x2 array [colres rowres] contains two integers that give the column resolution and row resolution in dots per unit (the default unit is inches).

For example, if the image dimensions are in inches, colres is the number of dots (pixels) per inch (dpi) in the vertical direction, and similarly for rowres in the horizontal direction. Specifying the resolution by a single scalar, res, is equivalent to writing [res res].

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', [300 300])

The values of the vector [colres rowres] were determined by multiplying 200 dpi by the ratio 2.25/1.5, which gives 300 dpi. Rather than do the computation manually, we could write

>> res = round(200*2.25/1.5);

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', res)

where function round rounds its argument to the nearest integer. It is important to note that the number of pixels was not changed by these commands; only the scale of the image changed. The original 450x450 image at 200 dpi is of size 2.25x2.25 inches. The new 300-dpi image is identical, except that its 450x450 pixels are distributed over a 1.5x1.5-inch area. Processes such as this are useful for controlling the size of an image in a printed document without sacrificing resolution.

Often it is necessary to export images to disk the way they appear on the MATLAB desktop. This is especially true with plots. The contents of a figure window can be exported to disk in two ways. The first is to use the file pull-down menu in the figure window and then choose Export; with this option the user can select a location, filename, and format. More control over export parameters is obtained by using the print command:

print -fno -dfileformat -rresno filename

where no refers to the figure number of the figure window of interest, fileformat refers to one of the file formats in the table above, resno is the resolution in dpi, and filename is the name we wish to assign the file.

If we simply type print at the prompt, MATLAB prints (to the default printer) the contents of the last figure window displayed. It is also possible to specify other options with print, such as a specific printing device.

CHAPTER 4

LITERATURE WORK

The author handles pre-processing, segmentation and classification in the analysis of prostate images. The technique uses the CLAHE algorithm for pre-processing of the image, followed by bilateral filtering. A primary segmentation is obtained with the Fuzzy C-Means algorithm, and a limited-area recursive algorithm is then executed to refine the final segmentation. An elliptical descriptor is used to obtain region ellipticity and local model features to differentiate the melanocytes from the candidate nucleus area [3].

In D.C. Li, C.W. Liu, et al. (2011) [5] a three-step algorithm was suggested. First, non-linear transformation techniques based on fuzzy logic are used to enlarge the classification-related information in a small data set. Next, principal component analysis (PCA) is applied to the transformed data set to extract the best-separating characteristics. Lastly, the transformed data with these most favourable characteristics are used as input to a learning tool, a support vector machine. The maximum classification accuracy of the technique was 96.35% on WBCDD.

Techniques of image segmentation based on numerous methods have been developed to help the analysis of prostate lesions in images (Oliveira et al., 2016) [6]. Among these, threshold-based algorithms have been used extensively, mostly due to their simplicity. Thresholding techniques such as Otsu's method (Schaefer, 2013) [7], type-2 fuzzy logic (Yuksel & Borlu, 2009) [8] and the Renyi entropy technique (Beuren et al., 2012) [9] aim to find the threshold values that separate the regions of interest (ROIs) in the input images. However, these methods can exhibit various difficulties; for example, the segmented lesions tend to be smaller than their actual size, and the segmentation can lead to highly irregular lesion boundaries.

Barata [10] presented an alternative method for melanoma recognition from MRI images, based on texture and color feature extraction. Texture and color were compared in their study, with the latter proving more effective for separation. It must be acknowledged that the technique nevertheless achieved good results in terms of classification precision. The disadvantages of these schemes were the inability to run in a real-time analysis application or to exploit any available image acquisition operations.

Sadeghi et al. [11] proposed a graph-based technique to categorize and visualize pigment networks. They validated the technique by measuring its ability to classify and visualize real MRI images. The accuracy of the scheme was 92.6% in dividing images into the two classes of absent and present.

Glaister et al. [12] proposed a novel multistep illumination modelling technique to correct illumination variation in the images. The technique first obtains a non-parametric estimate of the illumination using Monte Carlo sampling. Then, a statistical polynomial surface model is used to derive the final illumination estimate. Lastly, the brightness-corrected image is obtained using the correction factor calculated from the final estimated illumination. An ANN classifier is simulated in MATLAB software for prostate tumor recognition. The first step in the tumor recognition scheme is the input image: a digital MRI image is given as input to the system. The next step is noise elimination. The image contains several kinds of noise, which create inaccuracy in classification; the noise is removed by filtering, and the filtering technique used here is median filtering [13].

This paper [14] proposes a simple yet efficient, integrated computer vision algorithm for identifying and analysing the initial stage of melanoma. The system is developed on the basis of three stages in an integrated mixed-feature technique: segmentation, filtering, and localization. In the first stage, the user can choose among several color spaces and apply learning and non-learning techniques to segment the object. In the filtering stage, a morphological filter is applied for image noise elimination. In the localization stage, connected component labelling and the K-means method are used to classify the material. The type of melanoma tumor is derived from a score computed from the ABCD features. The research was validated with prostate tumor images obtained from the internet. The results justify that the improved system can be used to support the early recognition of tumors. In general, this study contributes to computer science, particularly in the domain of computer vision.

CHAPTER 5

PROPOSED METHODOLOGY

The proposed research work focuses mainly on detecting the following classes of prostate cancer: malignant and benign. The detailed operation of the prostate cancer detection and classification approach is presented in Fig. 1.

Database training and testing

The database is built from images collected from the "International Prostate Imaging Collaboration (I2CVB)" archive, one of the largest available collections of quality-controlled MRI images. The dataset consists of 15 benign and 15 malignant images. All the images are used to train the PNN network model with GLCM, statistical and texture features, and a random unknown test sample is then applied to the system for detection and classification.

Fig. 1. Prostate cancer detection and classification.

Preprocessing

The query image obtained from the image acquisition step includes background information and noise. Pre-processing is therefore required to remove these unwanted portions. The pre-processing stage is mainly used to eliminate irrelevant information, such as the unwanted background, which includes noise, labels, tape, artifacts and the pectoral muscle, from the prostate image. The types of noise that occur in the images are salt-and-pepper, Gaussian, speckle and Poisson noise. When noise occurs in an image, the affected pixels show intensity values different from their true values.

Choosing a suitable method in this first stage therefore makes the noise-removal operation effective. Sharpening and smoothing filters reduce the noise to a great extent while avoiding the introduction of visual artifacts, because the pixels are analysed at various scales and the image characteristics are preserved regardless of the pixel content. These filters detect and remove thin noise from the image; a top-hat transform is then applied to remove the thicker artifacts. Contrast limited adaptive histogram equalization (CLAHE) is also performed on the prostate lesion to obtain an enhanced image in the spatial domain. Ordinary histogram equalization works on the whole image and enhances its global contrast, whereas adaptive histogram equalization divides the image into small regions called tiles (typically arranged in an 8×8 grid) and equalizes the histogram within each tile, thus enhancing the edges of the lesion. Contrast limiting is applied to keep the contrast below a specified limit and thereby suppress noise amplification.
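As a minimal illustration of this preprocessing chain, the following MATLAB sketch (assuming a grayscale prostate MRI slice stored in a hypothetical file lesion.png; the filter sizes and the CLAHE clip limit are illustrative choices, not values fixed by this work) shows one possible realisation:

% Read the query image and convert it to grayscale double precision.
I = imread('lesion.png');              % hypothetical input file
if size(I, 3) == 3
    I = rgb2gray(I);
end
I = im2double(I);

% Median filtering suppresses salt-and-pepper (thin) noise.
Iden = medfilt2(I, [3 3]);

% Top-hat transform with a disk structuring element keeps bright structures
% smaller than the disk and suppresses the slowly varying background.
Itop = imtophat(Iden, strel('disk', 15));

% Contrast limited adaptive histogram equalization (CLAHE) on an 8-by-8 tile grid.
Ienh = adapthisteq(Itop, 'NumTiles', [8 8], 'ClipLimit', 0.01);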

Image Segmentation

After the pre-processing stage, segmentation of the lesion is performed to isolate the affected area of the prostate. K-means clustering is applied to the pre-processed image to segment the prostate lesion area based on thresholding. In the K-means clustering algorithm, segmentation is driven by a cost function that must be minimized at the cluster centres and that varies with the cluster memberships. Image segmentation divides the image into multiple clusters based on the region of interest so that prostate cancer can be detected. Regions of interest are the portions of the prostate image used by radiologists to detect abnormalities (benign and malignant lesions).

K-means clustering is used for segmentation in the proposed procedure rather than the active contour approach because of its speed of operation while maintaining high accuracy. The procedure combines probability-based membership assignment with K-means clustering, as shown in figure 2: the membership functions are generated in a probability-based manner to obtain better detection. Because automatic extraction of the ROI is difficult, the ROIs are obtained through probability-guided cropping based on the location of the abnormality in the original test image, and among the detected cancer regions the most accurately detected one is taken as the ROI.

Fig. 2. K-means clustering.
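As a sketch of this step, a simple intensity-based K-means segmentation in MATLAB (assuming the enhanced image Ienh from the preprocessing sketch above; the number of clusters and the rule for picking the lesion cluster are illustrative assumptions) could be written as:

% Reshape the image into a column of pixel intensities for clustering.
pixels = double(Ienh(:));

% K-means clustering of pixel intensities (k = 3 is an illustrative choice).
k = 3;
rng(1);                                    % repeatable cluster assignments
idx = kmeans(pixels, k, 'Replicates', 3);
labels = reshape(idx, size(Ienh));

% Assume the cluster with the highest mean intensity corresponds to the lesion;
% in practice this rule may need to be adapted to the appearance of the lesion.
clusterMeans = accumarray(idx, pixels, [k 1], @mean);
[~, lesionCluster] = max(clusterMeans);
roiMask = (labels == lesionCluster);
segmented = Ienh .* roiMask;               % segmented ROI image used below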

Feature extraction

Several features can be extracted from the prostate lesion to classify it. We extract some of the prominent features that help to distinguish the prostate lesions: GLCM-based texture features, DWT-based low-level features and statistical color features.

GLCM is a texture-analysis technique that examines texture by considering the spatial relationship of image pixels. The texture of an image is characterized by the GLCM functions, which compute how often pairs of pixels with specific values occur in a particular spatial relationship in the image. Once the GLCM matrix has been created, statistical texture features are extracted from it. The GLCM shows how frequently different combinations of pixel brightness values (grey levels) occur in the image; it defines the probability of a particular grey level occurring in the neighbourhood of another grey level. In this work, the GLCM is first extracted from the image for the three color spaces RGB, CIE L*u*v and YCbCr. The GLCM matrix is then calculated in four directions, 0°, 45°, 90° and 135°, as shown in figure 3. In the following formulas, let a and b be the row and column indices of the matrix, S(a, b) be the probability value recorded for cell (a, b), and N be the number of gray levels in the image. Several textural features can then be extracted from these matrices, as shown in the following equations:

Fig. 3. Orientations and distance to compute GLCM.

OVERVIEW OF TEXTURE

Everyone has an intuitive notion of texture, but it is hard to define formally. Two different textures can be distinguished by recognizing their similarities and differences. Textures are commonly used in three ways: an image can be segmented on the basis of texture, already segmented regions can be distinguished or classified by their texture, and textures can be reproduced by generating texture descriptions. Texture can be analysed in three different ways.

They are Spectral, Structural and Statistical:

Statistical:

Textures are often random but have consistent statistical properties, so such textures can be described by those properties. The moments of the intensity distribution play a major role in describing the texture of a region: if the histogram of intensities within a region is constructed, the moments of this one-dimensional histogram can be computed.

• The mean intensity is the first moment.
• The variance, which describes how the intensities spread within the region, is the second central moment.
• The skew, which describes the symmetry of the intensity distribution about the mean, is the third central moment.

The following figure shows how graycomatrix calculates the first three values in a GLCM. In the output GLCM, element (1,1) contains the value 1 because there is only one instance in the input image where two horizontally adjacent pixels have the values 1 and 1. GLCM(1,2) contains the value 2 because there are two instances where two horizontally adjacent pixels have the values 1 and 2. Element (1,3) has the value 0 because there are no instances of two horizontally adjacent pixels with the values 1 and 3. graycomatrix continues processing the input image, scanning it for other pixel pairs (i, j) and recording the sums in the corresponding elements of the GLCM.

Fig. 4. Example of GLCM

Process used to create the GLCM

Specifying the offsets: By default, the graycomatrix function creates a single GLCM with the spatial relationship, or offset, defined as two horizontally adjacent pixels. Because a single horizontal offset might not be sensitive to texture with a vertical orientation, graycomatrix can create multiple GLCMs for a single image.

To create multiple GLCMs, an array of offsets is passed to the graycomatrix function. These offsets define pixel relationships of varying distance and direction. For example, an array of offsets can specify four directions (horizontal, vertical, and the two diagonals) and four distances; in this case the input image is represented by 16 GLCMs, and the statistics calculated from them can be averaged.

The offsets are specified as a p-by-2 array of integers. Each row of the array is a two-element vector of the form [row_offset, column_offset], where row_offset is the number of rows between the pixel of interest and its neighbour and column_offset is the number of columns between the pixel of interest and its neighbour.

offsets = [ 0 1; 0 2; 0 3; 0 4;...

-1 1; -2 2; -3 3; -4 4;...

-1 0; -2 0; -3 0; -4 0;...

-1 -1; -2 -2; -3 -3; -4 -4];

Deriving Statistics from a GLCM

After the GLCMs have been created, the graycoprops function can be used to derive several statistics from them. These derived statistics give information about the texture of an image; the statistics of interest are specified when calling graycoprops. The following table describes the statistics that can be derived:

Statistic      Description
Homogeneity    Measures the closeness of the distribution of GLCM elements to the GLCM diagonal.
Energy         Also known as uniformity or the angular second moment; the sum of the squared elements in the GLCM.
Correlation    Measures the joint-probability occurrence of the specified pixel pairs.
Contrast       Measures the local variations in the gray-level co-occurrence matrix.
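As an illustrative MATLAB sketch of this step (carrying over the segmented image from the K-means sketch above; the quantization to 8 gray levels and the unit distance are assumptions, not values prescribed by this work):

% Offsets for the 0, 45, 90 and 135 degree directions at distance 1.
offsets = [0 1; -1 1; -1 0; -1 -1];

% Symmetric GLCMs for the four directions, quantized to 8 gray levels.
glcms = graycomatrix(segmented, 'Offset', offsets, 'NumLevels', 8, 'Symmetric', true);

% Derive contrast, correlation, energy and homogeneity from each GLCM
% and average them over the four directions.
stats = graycoprops(glcms, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
glcmFeatures = [mean(stats.Contrast), mean(stats.Correlation), ...
                mean(stats.Energy),   mean(stats.Homogeneity)];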

Texture feature based on GLCM

Texture analysis refers to the characterization of regions in an image by their texture content.
Texture analysis attempts to quantify intuitive qualities described by terms such as rough,
smooth, silky, or bumpy as a function of the spatial variation in pixel intensities. In this sense,
the roughness or bumpiness refers to variations in the intensity values, or gray levels.

Texture analysis is used in a variety of applications, including remote sensing, automated inspection, and medical image processing. It can be used to find texture boundaries, a task called texture segmentation. Texture analysis is helpful when the objects in an image are characterized more by their texture than by their intensity and traditional thresholding techniques cannot be used effectively.

Using the texture filter functions, these statistics can characterize the texture of an image because they provide information about the local variability of the pixel intensity values. For example, in areas with smooth texture the range of values in the neighbourhood around a pixel is small, whereas in areas of rough texture the range is larger. Similarly, calculating the standard deviation of the pixels in a neighbourhood indicates the degree of variability of the pixel values in that region. These statistics are explained in the tabular column.

The GLCM is a matrix built for given directions and distances between pixels, from which meaningful statistics are extracted as texture features. The commonly used GLCM texture features are described in the following.

The GLCM is composed of probability values p(x, y | d, θ), each of which expresses the probability that a pair of pixels with gray levels x and y occurs at direction θ and distance d. When θ and d are fixed, the matrix is written simply as p(x, y). The GLCM is a symmetric matrix whose size is determined by the number of gray levels in the image, and its elements are computed by normalizing the pixel-pair counts over the image.

The GLCM thus expresses the texture according to the joint gray-level statistics of pixel pairs at different relative positions, and it describes the texture features quantitatively. Four features are mainly considered here: energy, contrast, entropy and inverse difference.

Energy:

Energy is a measure of gray-scale texture homogeneity; it reflects how uniformly the gray levels and the texture are distributed over the image.

Contrast:

Contrast is the moment of inertia about the main diagonal of the GLCM. It measures how the matrix values are distributed and the amount of local variation in the image, reflecting the image clarity and the depth of the texture grooves; a large contrast corresponds to deeper texture.

Entropy:

Entropy measures the randomness of the image texture. It reaches its maximum when all values of the co-occurrence matrix are equal and is small when the matrix values are very uneven; maximum entropy therefore indicates that the gray-level distribution of the image is essentially random.

Inverse difference:

Inverse difference measures the local homogeneity of the image texture. A large value indicates that the texture varies little between different regions and is locally very uniform.

Here p(x, y) denotes the (normalized) gray-level co-occurrence value at position (x, y) of the GLCM.
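For reference, these four features can be written in their standard GLCM form (the usual textbook expressions, assuming p(x, y) is the normalized co-occurrence value so that the entries sum to one):

Energy = Σ_x Σ_y p(x, y)²

Contrast = Σ_x Σ_y (x - y)² p(x, y)

Entropy = - Σ_x Σ_y p(x, y) log p(x, y)

Inverse difference = Σ_x Σ_y p(x, y) / (1 + |x - y|)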

A 2-level DWT is also used to extract the low-level features. The DWT is first applied to the segmented output, producing the LL1, LH1, HL1 and HH1 bands; entropy, energy and correlation features are calculated on the LL1 band. The DWT is then applied again to the LL1 band, producing the LL2, LH2, HL2 and HH2 bands, and entropy, energy and correlation features are calculated on the LL2 band, as shown in figure 4.

Fig. 4. 2-level DWT coefficients.
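A minimal MATLAB sketch of this two-level decomposition (assuming the segmented image from the earlier sketches and the Haar wavelet as an illustrative choice; only the energy and entropy of the approximation bands are shown here) might be:

% Level-1 decomposition: approximation (LL1) and detail bands.
[LL1, LH1, HL1, HH1] = dwt2(double(segmented), 'haar');

% Level-2 decomposition applied to the LL1 approximation band.
[LL2, LH2, HL2, HH2] = dwt2(LL1, 'haar');

% Simple statistics of the approximation bands used as low-level features.
e1 = sum(LL1(:).^2);                        % energy of LL1
e2 = sum(LL2(:).^2);                        % energy of LL2
p1 = histcounts(LL1(:), 64, 'Normalization', 'probability');
p2 = histcounts(LL2(:), 64, 'Normalization', 'probability');
h1 = -sum(p1(p1 > 0) .* log2(p1(p1 > 0)));  % entropy of LL1
h2 = -sum(p2(p2 > 0) .* log2(p2(p2 > 0)));  % entropy of LL2
dwtFeatures = [e1, h1, e2, h2];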

Finally, mean and standard deviation based statistical color features are extracted from the segmented image. They are defined as:

Mean (μ) = (1/N²) Σ_{i,j=1}^{N} I(i, j)                                         (6)

Standard Deviation (σ) = sqrt( (1/N²) Σ_{i,j=1}^{N} ( I(i, j) - μ )² )          (7)

All these features are then combined by array concatenation, and the result is a hybrid feature vector for each image.
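Continuing the sketch, the color statistics and the concatenation into a single hybrid feature vector could look as follows (the variable names glcmFeatures and dwtFeatures are carried over from the illustrative snippets above):

% Statistical color features of the segmented region.
mu    = mean(segmented(:));                 % mean intensity, equation (6)
sigma = std(segmented(:));                  % standard deviation, equation (7)

% Hybrid feature vector: GLCM texture, DWT low-level and color statistics.
hybridFeatures = [glcmFeatures, dwtFeatures, mu, sigma];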

Classification

Neural networks have been applied effectively across a range of problem domains such as finance, medicine, engineering, geology, physics and biology. From a statistical viewpoint, neural networks are interesting because of their potential use in prediction and classification problems. The PNN is a method developed by emulating biological neural systems. Its neurons are connected in a predefined architecture so that the classification operation can be performed effectively. The weights of the neurons are created from the hybrid features, and the relationships between the weights are identified from the characteristic hybrid features. The number of weights decides the number of layers in the proposed network. Figure 5 shows the architecture of the network. The PNN consists of two stages, training and testing, and training is performed according to the layered architecture: the input layer maps the input dataset, and the hybrid features of this dataset are categorized into weight distributions.

Fig. 5. Layered architecture of PNN model.

The PNN architecture has four hidden layers with weights. The first 2-D convolutional hidden layer of the network takes 224×224×3 pixel prostate lesion images and applies 96 filters of size 11×11 at a stride of 4 pixels, followed by a class-node activation layer and a decision normalization layer. The classification operation is then carried out in two levels of class-node hidden layers, which hold the characteristic information of normal and abnormal prostate tissue respectively. Based on the segmentation criteria, the image is first categorized as normal or abnormal, and these two levels are mapped to labels in the output layer. The hidden layers also hold the abnormal cancer types separately: the benign and malignant cancer weights are stored in the second stage of the hidden layers, and these benign and malignant weights are likewise mapped to labels in the output layer. When a test image is applied, its hybrid features are used in the testing phase of the classification stage, which operates on a maximum feature-matching criterion using the Euclidean distance. If the features match the labels of hidden layer 1, the image is classified as a normal prostate image. If the features match the labels of hidden layer C1 with the maximum weight distribution, the image is classified as a benign cancer image. If the features match the labels of hidden layer C2 with the minimum weight distribution, the image is classified as a malignant cancer image.
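A compact sketch of such a PNN classifier using MATLAB's newpnn function is given below (trainFeatures, trainLabels and the spread value are illustrative assumptions: one hybrid feature vector per column, class indices 1 = benign and 2 = malignant):

% Train a probabilistic neural network on the hybrid feature vectors.
% trainFeatures: numFeatures-by-numTrainImages, trainLabels: 1-by-numTrainImages.
spread = 0.1;                               % illustrative smoothing parameter
net = newpnn(trainFeatures, ind2vec(trainLabels), spread);

% Classify an unknown test image from its hybrid feature vector.
scores    = sim(net, hybridFeatures(:));    % feature vector as a column
predicted = vec2ind(scores);                % 1 = benign, 2 = malignant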

CHAPTER 6

RESULTS AND DISCUSSION

Dataset

The experiments are carried out in MATLAB R2018a. I2CVB is one of the largest publicly available collections of quality-controlled MRI images. For the implementation of the proposed method, 30 MRI prostate lesion images (15 benign and 15 malignant) were used in both the spatial and the frequency domain, augmented by applying rotations at different angles. The training images of each label are used to train the PNN architecture for fifty epochs, and the remaining twenty percent of the images are used for testing. The features extracted by the GLCM and DWT feature network are used to train the PNN classifier to classify the images into their respective classes. The efficiency of the model is computed using various performance metrics.

From figure 6.1 it is observed that the proposed method effectively detects the prostate cancer regions, indicating that the segmentation is performed much more effectively than with the active contour approach. Here the TEST-1 and TEST-2 images are benign and the TEST-3 and TEST-4 images are malignant. For the malignant images the segmentation accuracy is higher.

Fig. 6.1: Segmented output images of various methods.

Performance metrics

To evaluate performance, the proposed method is implemented with two segmentation methods, active contour (AC) and K-means clustering. For this comparison the accuracy, sensitivity, F-measure, precision, MCC, Dice, Jaccard and specificity parameters are calculated.
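For reference, the basic metrics can be computed in MATLAB from a confusion matrix of true and predicted labels; the sketch below assumes vectors yTrue and yPred with 1 = benign and 2 = malignant and takes the malignant class as positive:

% 2-by-2 confusion matrix: rows = true class, columns = predicted class.
C  = confusionmat(yTrue, yPred);
TP = C(2, 2);  TN = C(1, 1);  FP = C(1, 2);  FN = C(2, 1);

accuracy    = (TP + TN) / (TP + TN + FP + FN);
sensitivity = TP / (TP + FN);               % recall
specificity = TN / (TN + FP);
precision   = TP / (TP + FP);
f_measure   = 2 * precision * sensitivity / (precision + sensitivity);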

From Table 1 and Figure 6.1 it is observed that the proposed K-means clustering method combined with the PNN gives the highest performance on all metrics compared with the active contour method.

Table 2: Accuracy comparison.

Method                              Test 1    Test 2    Test 3    Test 4
SVM-Linear kernel [14]              0.4       0.40      0.7       0.7
SVM-RBF kernel [14]                 0.4       0.45      0.55      0.6
SVM-Polynomial kernel [14]          0.4       0.3667    0.50      0.5667
SVM-5 fold cross validation [14]    0.6       0.55      0.60      0.45
Proposed PNN-AC                     0.9157    0.78099   0.85796   0.47765
Proposed PNN-K-means                0.99985   0.99715   0.99999   0.99999

From Table 2 it is observed that the proposed method gives the highest accuracy for both benign and malignant cases compared with the various SVM configurations of [14]: the linear, RBF and polynomial kernels and the 5-fold cross-validation variant.

Fig. 6.2: Accuracy comparison of the SVM variants of [14] with the proposed PNN-AC and PNN-K-means methods for Test 1 to Test 4.

CHAPTER 7

CONCLUSION

This project presented a computational methodology for the detection and classification of prostate cancer from MRI images using a PNN-based deep-learning approach. Gaussian filters are used for pre-processing, which removes unwanted noise and artifacts introduced during image acquisition. K-means clustering segmentation is then employed for ROI extraction and detection of cancerous cells, after which a GLCM- and DWT-based method extracts statistical, color and texture features from the segmented image. Finally, the PNN is used to classify the cancer as either benign or malignant using the trained network model. Comparing with state-of-the-art works, we conclude that the PNN outperforms the conventional SVM method. In future, this work can be extended by adding more network layers to the PNN and by applying the approach to other types of benign and malignant cancers.

Appendix A

MATLAB

A.1 Introduction

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. MATLAB stands for matrix laboratory and was originally written to provide easy access to the matrix software developed by the LINPACK (linear system package) and EISPACK (eigensystem package) projects. MATLAB is therefore built on a foundation of sophisticated matrix software in which the basic element is an array that does not require pre-dimensioning, which allows many technical computing problems, especially those with matrix and vector formulations, to be solved in a fraction of the time.

MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow specialized technology to be learned and applied. They are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation and many others.

Typical uses of MATLAB include: math and computation; algorithm development; data acquisition; modeling, simulation and prototyping; data analysis, exploration and visualization; scientific and engineering graphics; and application development, including graphical user interface building.

A.2 Basic Building Blocks of MATLAB

The basic building block of MATLAB is the matrix. The fundamental data type is the array; vectors, scalars, real matrices and complex matrices are handled as specific classes of this basic data type. The built-in functions are optimized for vector operations, and no dimension statements are required for vectors or arrays.

A.2.1 MATLAB Window

MATLAB works with the following windows: the command window, workspace window, current directory window, command history window, editor window, graphics window and online-help window.

A.2.1.1 Command Window

The command window is where the user types MATLAB commands and expressions
at the prompt (>>) and where the output of those commands is displayed. It is opened when
the application program is launched. All commands including user-written programs are
typed in this window at MATLAB prompt for execution.

A.2.1.2 Work Space Window

MATLAB defines the workspace as the set of variables that the user creates in a work
session. The workspace browser shows these variables and some information about them.
Double clicking on a variable in the workspace browser launches the Array Editor, which can
be used to obtain information.

A.2.1.3 Current Directory Window

The current Directory tab shows the contents of the current directory, whose path is
shown in the current directory window. For example, in the windows operating system the
path might be as follows: C:\MATLAB\Work, indicating that directory “work” is a
subdirectory of the main directory “MATLAB”; which is installed in drive C. Clicking on the
arrow in the current directory window shows a list of recently used paths. MATLAB uses a
search path to find M-files and other MATLAB related files. Any file run in MATLAB must
reside in the current directory or in a directory that is on search path.

A.2.1.4 Command History Window

The Command History Window contains a record of the commands a user has entered
in the command window, including both current and previous MATLAB sessions. Previously
entered MATLAB commands can be selected and re-executed from the command history
window by right clicking on a command or sequence of commands. This is useful to select
various options in addition to executing the commands and is useful feature when
experimenting with various commands in a work session.

A.2.1.5 Editor Window

The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub-window in the desktop. In this window one can write, edit, create and save programs in files called M-files.

MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses color to
differentiate between various elements of code, this text editor is recommended as the tool of
choice for writing and editing M-functions.

A.2.1.6 Graphics or Figure Window

The output of all graphic commands typed in the command window is seen in this
window.

A.2.1.7 Online Help Window

MATLAB provides online help for all its built-in functions and programming language constructs. The principal way to get help online is to use the MATLAB help browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar or by typing help browser at the prompt in the command window. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. It consists of two panes: the help navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs other than the navigator pane are used to perform a search.

A.3 MATLAB Files

MATLAB has two types of files for storing information: M-files and MAT-files.

A.3.1 M-Files

These are standard ASCII text files with the .m extension. Users can create their own matrices and programs in M-files, which are text files containing MATLAB code. The MATLAB editor or another text editor is used to create a file containing the same statements that would be typed at the MATLAB command line, and the file is saved under a name ending in .m. There are two types of M-files:

1. Script Files

A script file is an M-file containing a set of MATLAB commands; it is executed by typing the name of the file on the command line. Script files work on the global variables currently present in the environment.

2. Function Files

A function file is also an M-file, except that the variables in a function file are all local. This type of file begins with a function definition line.
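As a small illustration (the file names and contents below are hypothetical):

% circleArea.m : a function file; the variables r and A are local to the function.
function A = circleArea(r)
    A = pi * r.^2;
end

% areaDemo.m : a script file; it works on variables in the base workspace.
radii = 1:5;
areas = circleArea(radii);
disp(areas)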

A.3.2 MAT-Files

These are binary data files with the .mat extension that are created by MATLAB when data is saved. The data is written in a special format that only MATLAB can read, and the files are loaded back into MATLAB with the load command.

A.4 The MATLAB System:

The MATLAB system consists of five main parts:

A.4.1 Development Environment:

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

A.4.2 The MATLAB Mathematical Function:

This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

A.4.3 The MATLAB Language:

This is a high-level matrix/array language with control flow statements, functions, data
structures, input/output, and object-oriented programming features. It allows both
"programming in the small" to rapidly create quick and dirty throw-away programs, and
"programming in the large" to create complete large and complex application programs.

A.4.4 Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as
well as annotating and printing these graphs. It includes high-level functions for two-
dimensional and three-dimensional data visualization, image processing, animation, and
presentation graphics. It also includes low-level functions that allow you to fully customize
the appearance of graphics as well as to build complete graphical user interfaces on your
MATLAB applications.

A.4.5 The MATLAB Application Program Interface (API):

This is a library that allows you to write C and FORTRAN programs that interact with
MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking),
calling MATLAB as a computational engine, and for reading and writing MAT-files.

A.5 SOME BASIC COMMANDS:

pwd prints the working directory

demo demonstrates what is possible in MATLAB

who lists all of the variables in your MATLAB workspace

whos lists the variables and describes their matrix sizes

clear erases variables and functions from memory

clear x erases the matrix 'x' from your workspace

close by itself, closes the current figure window

figure creates an empty figure window

hold on holds the current plot and all axis properties so that subsequent graphing

commands add to the existing graph

hold off sets the next plot property of the current axes to "replace"

find finds indices of nonzero elements, e.g.:

d = find(x>100) returns the indices of the vector x that are greater than 100

break terminate execution of m-file or WHILE or FOR loop

for repeat statements a specific number of times, the general form of a FOR

statement is:

FOR variable = expr, statement, ..., statement END

for n=1:cc/c;

magn(n,1) = nanmean(a((n-1)*c+1:n*c, 1));

end

diff difference and approximate derivative e.g.:

DIFF(X) for a vector X, is [X(2)-X(1) X(3)-X(2) ... X(n)-X(n-1)].

NaN the arithmetic representation for Not-a-Number, a NaN is obtained as a

result of mathematically undefined operations like 0.0/0.0

Inf the arithmetic representation for positive infinity; an infinity is also produced by operations like dividing by zero, e.g. 1.0/0.0, or from overflow, e.g. exp(1000).

save saves all the matrices defined in the current session into the file,

matlab.mat, located in the current working directory

load loads contents of matlab.mat into current workspace

save filename x y z saves the matrices x, y and z into the file titled filename.mat

save filename x y z /ascii save the matrices x, y and z into the file titled filename.dat

load filename loads the contents of filename into current workspace; the file can

be a binary (.mat) file

load filename.dat loads the contents of filename.dat into the variable filename

xlabel(' ') : allows you to label the x-axis

ylabel(' ') : allows you to label the y-axis

title(' ') : allows you to give a title to the plot

subplot() : allows you to create multiple plots in the same window

A.6 SOME BASIC PLOT COMMANDS:

Kinds of plots:

plot(x,y) creates a Cartesian plot of the vectors x & y

plot(y) creates a plot of y vs. the numerical values of the elements in the y-vector

semilogx(x,y) plots log(x) vs y

semilogy(x,y) plots x vs log(y)

loglog(x,y) plots log(x) vs log(y)

polar(theta,r) creates a polar plot of the vectors r & theta where theta is in radians

bar(x) creates a bar graph of the vector x. (Note also the command stairs(x))

bar(x, y) creates a bar-graph of the elements of the vector y, locating the bars

according to the vector elements of 'x'

Plot description:

grid creates a grid on the graphics plot

title('text') places a title at top of graphics plot

xlabel('text') writes 'text' beneath the x-axis of a plot

ylabel('text') writes 'text' beside the y-axis of a plot

text(x,y,'text') writes 'text' at the location (x,y)

text(x,y,'text','sc') writes 'text' at point x,y assuming lower left corner is (0,0)

and upper right corner is (1,1)

axis([xmin xmax ymin ymax]) sets scaling for the x- and y-axes on the current plot

A.7 ALGEBRIC OPERATIONS IN MATLAB:

Scalar Calculations:

+ Addition

- Subtraction

* Multiplication

/ Right division (a/b means a ÷ b)

\ left division (a\b means b ÷ a)

^ Exponentiation

For example 3*4 executed in 'matlab' gives ans=12

4/5 gives ans=0.8

Array products: Recall that addition and subtraction of matrices involve addition or subtraction of the individual elements of the matrices. Sometimes it is desired simply to multiply or divide each element of a matrix by the corresponding element of another matrix; these are called array operations.

Array or element-by-element operations are executed when the operator is preceded by a '.' (period):

a .* b multiplies each element of a by the respective element of b

a ./ b divides each element of a by the respective element of b

a .\ b divides each element of b by the respective element of a

a .^ b raises each element of a to the power given by the respective element of b
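For instance:

a = [1 2; 3 4];
b = [10 20; 30 40];

a .* b     % element-by-element product:        [10 40; 90 160]
a ./ b     % element-by-element right division: [0.1 0.1; 0.1 0.1]
a .^ 2     % each element of a squared:         [1 4; 9 16]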

A.8 MATLAB WORKING ENVIRONMENT:

A.8.1 MATLAB DESKTOP

The MATLAB Desktop is the main MATLAB application window. The desktop contains five sub-windows: the command window, the workspace browser, the current directory window, the command history window, and one or more figure windows, which are shown only when the user displays a graphic.

The command window is where the user types MATLAB commands and expressions
at the prompt (>>) and where the output of those commands is displayed. MATLAB defines
the workspace as the set of variables that the user creates in a work session.

The workspace browser shows these variables and some information about them. Double clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information and, in some instances, edit certain properties of the variable.

The Current Directory tab above the workspace tab shows the contents of the current directory, whose path is shown in the current directory window. For example, in the Windows operating system the path might be C:\MATLAB\Work, indicating that directory “work” is a subdirectory of the main directory “MATLAB”, which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths, and clicking on the button to the right of the window allows the user to change the current directory.

MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu on the desktop and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.

The Command History Window contains a record of the commands a user has entered
in the command window, including both current and previous MATLAB sessions. Previously

entered MATLAB commands can be selected and re-executed from the command history
window by right clicking on a command or sequence of commands.

This action launches a menu from which various options can be selected in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

A.8.2 Using the MATLAB Editor to create M-Files:

The MATLAB editor is both a text editor specialized for creating M-files and a
graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub
window in the desktop. M-files are denoted by the extension .m, as in pixelup.m.

The MATLAB editor window has numerous pull-down menus for tasks such as
saving, viewing, and debugging files. Because it performs some simple checks and also uses
color to differentiate between various elements of code, this text editor is recommended as
the tool of choice for writing and editing M-functions.

To open the editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory or in a directory on the search path.

A.8.3 Getting Help:

The principal way to get help online is to use the MATLAB help browser, opened as a
separate window either by clicking on the question mark symbol (?) on the desktop toolbar,
or by typing help browser at the prompt in the command window. The help Browser is a web
browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to find information, and the display pane, used to view the information.

Self-explanatory tabs other than navigator pane are used to perform a search.

REFERENCES

[1] Nasiri, Sara, et al. "DePicT Malignant Deep-CLASS: a deep convolutional neural
networks approach to classify prostate lesion images." BMC bioinformatics 21.2
(2020): 1-13.

[2] Munir, Khushboo, et al. "Cancer diagnosis using deep learning: a bibliographic
review." Cancers 11.9 (2019): 1235

[3] Kadampur, Mohammad Ali, and Sulaiman Al Riyaee. "Prostate cancer detection:
applying a deep learning-based model driven architecture in the cloud for
classifying dermal cell images." Informatics in Medicine Unlocked 18 (2020):
100282.
[4] Akram, Tallha, et al. "A multilevel features selection framework for prostate
lesion classification." Human-centric Computing and Information Sciences 10
(2020): 1-26.
[5] Marka, Arthur, et al. "Automated detection of nonMalignant prostate cancer using
digital images: a systematic review." BMC medical imaging 19.1 (2019): 21.
[6] Gaonkar, Rohan, et al. "Lesion analysis towards Malignant detection using soft
computing techniques." Clinical Epidemiology and Global Health (2019).
[7] Hekler, Achim, et al. "Superior prostate cancer classification by the combination
of human and artificial intelligence." European Journal of Cancer 120 (2019):
114-121.
[8] Rajasekhar, K. S., and T. Ranga Babu. "Prostate Lesion Classification Using
Convolution Neural Networks." Indian Journal of Public Health Research &
Development 10.12 (2019): 118-123.
[9] Iyer, Vijayasri, et al. "Hybrid quantum computing based early detection of
prostate cancer." Journal of Interdisciplinary Mathematics 23.2 (2020): 347-355.
[10] Roslin, S. Emalda. "Classification of Malignant from MRI data using machine
learning techniques." Multimedia Tools and Applications (2018): 1-16.
[11] Moqadam, SepidehMohammadi, et al. "Cancer detection based on electrical
impedance spectroscopy: A clinical study." Journal of Electrical
Bioimpedance 9.1 (2018): 17-23.

[12] Hosny, Khalid M., Mohamed A. Kassem, and Mohamed M. Foaud. "Prostate
cancer classification using deep learning and transfer learning." 2018 9th Cairo
International Biomedical Engineering Conference (CIBEC). IEEE, 2018.
[13] Dascalu, A., and E. O. David. "Prostate cancer detection by deep learning and
sound analysis algorithms: A prospective clinical study of an elementary
dermoscope." EBioMedicine 43 (2019): 107-113.
[14] M. Vidya and M. V. Karki, "Prostate Cancer Detection using Machine Learning
Techniques," 2020 IEEE International Conference on Electronics, Computing and
Communication Technologies (CONECCT), Bangalore, India, 2020, pp. 1-5, doi:
10.1109/CONECCT50063.2020.9198489.

