Abstract: Proper Evaluation of The Performance of Artificial Intelligence Techniques in The Analysis of
INTRODUCTION
Prostate cancer is a serious disease worldwide. In fact, men of all ages can be affected by this
deadly disease. Day to day, technology is spreading its branches to every sector, including the
medical industry. Recently, the use of Computer-Aided Diagnosis (CAD) has increased
to help doctors make correct decisions. Early detection and rapid recognition play a vital role
in the diagnosis and prognosis of prostate cancer. Biomedical imaging is crucial for
efficient cancer identification and treatment. It is quite challenging for pathologists to detect
anomalies in biopsy reports quickly and efficiently. Manual processing takes a considerable
amount of time, delays treatment, and is not cost-efficient. Deep learning can deliver improved Gleason grades while reducing
human errors by increasing accuracy regardless of location. Deep learning techniques in
medical imaging have already shown promising results. From the early days, back from
1980, Computer-Aided Diagnosis (CAD) has been used in different medical fields. In CAD
applications that use medical imaging, machine learning methods are commonly used to
detect cancer. In the last decade, machine learning and deep learning technology have
improved significantly. Furthermore, this improvement also contributes to CAD applications.
Deep learning can learn high-level features from the images. With the introduction of deep
learning methods, it may be possible to achieve high detection accuracy without using hand-
crafted features, as features may be extracted during training. Moreover, with the help of
massively parallel computing (GPUs) in recent years, deep learning techniques have achieved
immense popularity in prostate cancer detection and Gleason grading.
The paper aims to present some conventional deep learning techniques and a complete
overview of prostate cancer detection applications and Gleason grading.
● A closer look into different histopathology image datasets and their sources
Prostate cancer is the most common cancer among men in the United States and the second
most deadly. To identify different kinds of prostate tumours, pathologists use different screening
methods. Male hormones such as testosterone cause prostate cancer to grow and survive.
Like all cancers, prostate cancer begins when a mass of cells has grown out of control and
invades other tissues. Cells become cancerous due to the accumulation of defects, or
mutations, in their DNA. Mutations in the abnormal cells' DNA cause the cells to grow and
divide more rapidly than normal cells do. The abnormal cells continue living when other cells
would die. Acinar adenocarcinoma of the prostate comprises 90–95% of prostate cancers
diagnosed. Ductal carcinoma and neuroendocrine carcinoma account for the majority of
additional cases. The prostate is a small walnut-shaped gland in the male reproductive
system that surrounds the urethra below the bladder. It produces the seminal fluid that
nourishes and transports sperm. As shown in Fig. 1, healthy prostate tissue consists of non-glandular
stroma (fibromuscular tissue) and glands surrounded by stroma. These different
tissues are tightly fused and surrounded by a joint capsule. Each gland unit consists of the
lumen and epithelial cells. Carcinomas of the prostate arise most commonly in the outer,
peripheral zone of the gland. Cells develop in and out of the gland in cancerous tissues,
interrupting prostate glands' general structure and organization. Cancerous tissue has
uncontrolled replication of epithelial cells that interrupt the regular arrangement of gland
units. Epithelial cells usually substitute both stroma and lumen in high-grade cancer.
Fig. 1. Prostate cancer tissue (left) vs. normal prostate tissue (right). Arrows indicate
infiltrating lymphocytes.
The Gleason grading system, developed in 1967 and updated in 2014, is one of the most
reliable methods for evaluating prostate cancer aggressiveness. Gleason grades are used for
describing prostate adenocarcinoma growth patterns, and they are related to disease severity.
According to this system, prostate cancers are scaled into five grades based on glandular
patterns of differentiation. It varies from 1 (excellent prognosis) to 5 (poor prognosis). Deep
learning technology can contribute significantly to the automatic detection of cancer in
prostate tissues and predict the severity of the cancer stage.
The distinction between GP 3 patches and GP 4 patches is a difficult task. Researchers find
differentiating GP 3 from GP 4 more problematic than other Gleason patterns. Mainly fused or
small lumen-free glands may be classified into either GP 3 or GP 4.
CONVOLUTIONAL LAYER
The convolution layer is the basic building block of a CNN and contains many kernels;
each neuron acts as a kernel. Different kernels/filters can perform operations on images such
as edge detection, blurring, and sharpening. During convolution, the image is split into small
regions and features are extracted from each region. The kernel applies a specific set of
weights to the image by multiplying its elements with the corresponding elements of the
receptive region [10]. Furthermore, convolution can take various forms depending on stride,
filter size, and padding [11].
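This sliding-window multiply-and-sum can be sketched in NumPy. The 4 × 4 image and the edge-detection kernel below are hypothetical, chosen only to illustrate the mechanism:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid cross-correlation, as computed in CNN convolution layers."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply kernel weights with the corresponding receptive region.
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])   # crude vertical-edge filter
feature_map = conv2d(image, edge_kernel)
print(feature_map.shape)  # (3, 3)
```

With stride 1 and no padding, a k × k kernel over an n × n image yields an (n − k + 1) × (n − k + 1) feature map, which is why padding is often added to preserve spatial size.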
POOLING LAYER
The pooling layer reduces the number of parameters when the image is too large and limits
the risk of overfitting. It also minimizes computational load and memory usage. Spatial
pooling is often referred to as sub-sampling or down-sampling. The dimension of each map is
reduced, but essential details are preserved. CNNs use different pooling formulations,
including max pooling, sum pooling, average pooling, L2-norm pooling, overlapping pooling,
and spatial pyramid pooling.
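The most common variant, max pooling, can be sketched in NumPy (the 4 × 4 input is hypothetical, purely for illustration):

```python
import numpy as np

def max_pool(x, size=2, stride=2):
    """Max pooling: keep only the largest value in each window."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

x = np.array([[1., 3., 2., 1.],
              [4., 6., 5., 0.],
              [7., 2., 9., 8.],
              [1., 0., 3., 4.]])
print(max_pool(x))  # [[6. 5.]
                    #  [7. 9.]]
```

The 4 × 4 map shrinks to 2 × 2, quartering the downstream parameter count while keeping the strongest responses.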
ACTIVATION FUNCTION
In a neural network, an activation function is the node that decides whether and how strongly
a neuron fires, allowing the network to learn intricate patterns. The choice of an effective
activation function can accelerate learning. In recent decades, the Sigmoid and TanH
functions were the standard activation functions. ReLU is currently the most widely used
activation function and appears in nearly all CNN architectures. ReLU and its modified
versions help solve the vanishing gradient problem. ReLU is also computationally efficient,
as not all neurons activate simultaneously, and in practice it converges about six times faster
than TanH and Sigmoid.
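These common activation functions can be written directly in NumPy (a minimal illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)        # zero for negative inputs, identity otherwise

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes any input into (0, 1)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))        # [0.  0.  0.  1.5]
print(sigmoid(0.0))   # 0.5
print(np.tanh(0.0))   # 0.0
```

Because ReLU zeroes out negative pre-activations, many neurons are inactive at any time, which is the source of its computational efficiency noted above.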
BATCH NORMALIZATION
Batch normalization speeds up deep network training by reducing the shift in the distribution
of hidden-unit values between layers (internal covariate shift). It also allows each network
layer to learn more independently of the other layers. In batch normalization, a layer's
features are independently normalized to zero mean and unit variance.
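A per-feature sketch in NumPy, using batch statistics only; the learnable scale gamma and shift beta default to 1 and 0 here:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature (column) over the batch to zero mean, unit variance."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # eps avoids division by zero
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
y = batch_norm(x)
print(y.mean(axis=0))  # ~[0. 0.]
print(y.var(axis=0))   # ~[1. 1.]
```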
FULLY CONNECTED LAYER
The fully connected (FC) layer within CNN utilizes high-level features from convolution or
pooling layers. The fully connected layer classifies the input image into various groups based
on the dataset. In a fully connected layer, softmax is mainly used as an activation function for
classification, and the number of layers included in the network model is not controlled
strictly.
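Since softmax is the usual activation at the classification head, a minimal NumPy sketch (the three logits are hypothetical, e.g. outputs of the last fully connected layer for three classes):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()       # exponentiate and normalize to probabilities

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.sum())        # 1.0
print(probs.argmax())     # 0  (the predicted class)
```

The outputs sum to one, so they can be read directly as class probabilities.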
UNPOOLING LAYER
During pooling, a matrix records the location of each maximum value; the unpooling
operation then inserts each pooled value back in its original place, with the remaining
elements set to zero. Unpooling captures example-specific structures by tracing the
original locations with strong activations back to image space. As a result, it effectively
reconstructs the detailed structure.
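This record-then-restore mechanism can be sketched in NumPy (a simplified 2 × 2 case on a hypothetical input, not taken from any cited system):

```python
import numpy as np

def max_pool_with_indices(x, size=2):
    """2x2 max pooling that also records where each maximum came from."""
    oh, ow = x.shape[0] // size, x.shape[1] // size
    out = np.zeros((oh, ow))
    idx = np.zeros((oh, ow, 2), dtype=int)
    for i in range(oh):
        for j in range(ow):
            patch = x[i*size:(i+1)*size, j*size:(j+1)*size]
            r, c = np.unravel_index(patch.argmax(), patch.shape)
            out[i, j] = patch[r, c]
            idx[i, j] = (i*size + r, j*size + c)   # remember the location
    return out, idx

def unpool(pooled, idx, shape):
    """Place each pooled value back at its recorded location; the rest are zero."""
    out = np.zeros(shape)
    for i in range(pooled.shape[0]):
        for j in range(pooled.shape[1]):
            r, c = idx[i, j]
            out[r, c] = pooled[i, j]
    return out

x = np.array([[1., 3., 2., 1.],
              [4., 6., 5., 0.],
              [7., 2., 9., 8.],
              [1., 0., 3., 4.]])
pooled, idx = max_pool_with_indices(x)
restored = unpool(pooled, idx, x.shape)
```

Only the strongest activations survive, but they reappear at their exact original positions, which is what lets unpooling reconstruct detailed structure.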
Transfer learning is a machine learning approach in which a pre-trained model is reused for a
new problem. It not only speeds up training considerably but also requires substantially less
training data. Transfer learning is a powerful tool when a neural network must handle limited
data in a new domain and a sizeable pre-existing data pool can be transferred to the task.
Labeled data sets are limited in medical imaging, so transfer learning is a good option for
managing minimal medical data. Transfer learning strategies fall into two categories.
Use the pre-trained model as a feature extractor: this technique uses a network pre-trained
on a large dataset (such as ImageNet) as a fixed feature extractor. The last fully connected
layer (the classifier layer) is removed, and the remaining layers are reused for the new task.
Instead of the entire network, this approach trains only a new classifier, which significantly
speeds up training. Fine-tuning: the fine-tuning technique removes the final layer and also
selectively retrains several earlier layers via backpropagation. All CNN layers can be
fine-tuned.
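The feature-extractor strategy can be sketched in NumPy. Here a frozen random projection stands in for a real pre-trained network, and the toy data and layer sizes are hypothetical; a practical system would extract features from a network pre-trained on ImageNet and train only the new head:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained network: these weights are never updated.
W_frozen = rng.normal(size=(4, 8))
def extract_features(x):
    return np.maximum(0.0, x @ W_frozen)       # frozen ReLU feature extractor

# Small labeled set for the new task (toy data for illustration).
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)

# Only the new classifier head (w, b) is trained.
w, b = np.zeros(8), 0.0
feats = extract_features(X)                    # computed once; backbone stays fixed
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b))) # logistic classifier head
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)         # gradient step on the head only
    b -= 0.1 * grad.mean()

train_acc = (((feats @ w + b) > 0) == (y == 1)).mean()
```

Because the backbone features are computed once and never updated, training reduces to fitting a small linear model, which is why this strategy is so much faster than training the whole network.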
END-TO-END LEARNING
Most deep-learning systems have several implementation stages. An end-to-end learning
system integrates all these stages into a single neural network and trains all parameters
jointly rather than step by step. The only difference between an end-to-end learning process
and a deep learning process in general is that end-to-end learning must learn all of the
parameters jointly (at the same time), while a deep learning process can learn the parameters
either jointly or step by step. Therefore, every end-to-end learning process is a deep learning
process, but not every deep learning process is an end-to-end learning process.
MULTI-TASK LEARNING
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks
are solved at the same time while exploiting commonalities and differences across tasks. This
can improve learning efficiency and prediction accuracy for the task-specific models
compared to training the models separately. According to Rich Caruana et al., MTL
improves generalization by leveraging the domain-specific information contained in the
training signals of related tasks. It does this by training tasks in parallel while using a shared
representation. In effect, the training signals for the extra tasks serve as an inductive bias. The
MTL net uses a shared hidden layer trained in parallel on all the tasks; what is learned for
each task can help other tasks be learned better.
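The shared-representation idea can be sketched in NumPy (forward pass only; the layer sizes and the two task heads are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)

# One shared hidden layer, trained in parallel on all tasks.
W_shared = rng.normal(size=(8, 5))
h = np.maximum(0.0, W_shared @ x)        # shared representation

# Separate output heads, one per task.
W_task_a = rng.normal(size=(3, 8))       # e.g. a 3-class classification task
W_task_b = rng.normal(size=(1, 8))       # e.g. a related regression target
out_a, out_b = W_task_a @ h, W_task_b @ h
print(out_a.shape, out_b.shape)  # (3,) (1,)
```

During training, gradients from both heads flow back into W_shared, so each task's training signal shapes the representation the other task uses.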
AlexNet and LeNet have very similar architectures. However, AlexNet is deeper, with more
convolutional layers and more filters per layer. AlexNet has eight layers: five convolutional
and three fully connected. To add non-linearity, ReLU is applied after every convolutional
and fully connected layer instead of TanH. AlexNet also uses dropout instead of
regularization to deal with overfitting, along with data augmentation and SGD with
momentum. Oscar et al. used AlexNet to optimize their algorithm.
VGG-16
Simonyan et al. introduced VGG-16, which has 13 convolutional and three fully connected
layers and, like AlexNet, uses ReLU activations. VGG-19 is the deeper version of VGG-16.
Wang et al. used VGG-16 with a Graph Convolutional Network (GCN).
GOOGLENET (INCEPTION-V1)
INCEPTION-V3
RESNET
ResNet is one of the pioneers of batch normalization. ResNet introduced the skip connection
concept, which allows the model to learn an identity function. The identity function ensures
that a higher layer performs at least as well as a lower layer, not worse. With ResNet, even
deeper CNNs (up to 152 layers) can be designed without compromising the model's
generalization power. Kwak et al. used ResNet for feature extraction.
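A minimal sketch of the skip connection, using fully connected stand-ins for the convolutional layers (illustrative only):

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = F(x) + x : the skip connection adds the input back to the output."""
    h = np.maximum(0.0, x @ W1)          # stand-in for conv + ReLU
    return np.maximum(0.0, h @ W2 + x)   # residual F(x) plus the identity path

# With zero weights, F(x) = 0 and the block reduces to the identity function,
# which is why a deeper layer can do no worse than its input.
x = np.array([1.0, 2.0, 3.0])
W_zero = np.zeros((3, 3))
print(residual_block(x, W_zero, W_zero))  # [1. 2. 3.]
```

The identity path also gives gradients an unobstructed route backwards, which is what makes very deep stacks trainable.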
UNET
UNet [30] was first designed and implemented in 2015 to process biomedical images. The
architecture consists of three parts: contraction, bottleneck, and expansion. The contraction
section contains several contraction blocks, each applying two 3 × 3 convolution layers
followed by a 2 × 2 max pooling. The contracting path acts like an encoder, capturing
context in a compact feature map. UNet reuses the feature maps from the contraction path
when expanding the compressed representation back into a segmented image. The bottleneck
layer lies between the contraction and expansion parts. The expansion path acts like a
decoder that enables precise localization. Many researchers have used UNet or modified
versions of UNet for nuclei segmentation tasks.
MOBILENET
MobileNet is a lightweight but robust architecture for extracting features. It offers small
networks, low latency, and low computational cost with high accuracy. The architecture is
built on depthwise separable convolutions, in which a depthwise convolution is followed by
a pointwise (1 × 1) convolution. Arvaniti et al. used the MobileNet model to extract features
from the images.
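The computational saving of depthwise separable convolutions can be illustrated by counting weights; the sizes below (3 × 3 kernels, 64 input and 128 output channels) are hypothetical:

```python
def conv_params(k, in_ch, out_ch):
    """Weights in a standard k x k convolution."""
    return k * k * in_ch * out_ch

def depthwise_separable_params(k, in_ch, out_ch):
    """Weights in a depthwise + pointwise (1 x 1) pair."""
    depthwise = k * k * in_ch     # one k x k filter per input channel
    pointwise = in_ch * out_ch    # 1 x 1 convolution mixes the channels
    return depthwise + pointwise

k, in_ch, out_ch = 3, 64, 128
std = conv_params(k, in_ch, out_ch)                 # 73728
sep = depthwise_separable_params(k, in_ch, out_ch)  # 8768
print(std, sep, round(std / sep, 1))                # ~8.4x fewer parameters
```

For 3 × 3 kernels the factorization cuts parameters (and multiply-adds) by roughly 8-9x, which is where MobileNet's efficiency comes from.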
OBJECTIVES
METHODOLOGY
Our data comprised 7 tissue microarray slides that contained tissue cores sampled from
radical prostatectomy specimens. Sections of the blocks were stained in hematoxylin-eosin
and digitized as virtual slides at ×40 magnification using a SCN400 Slide Scanner (Leica
Microsystems). This study was approved by the Clinical Research Ethics Board of the
University of British Columbia. The patient data were deidentified. Patients consented to the
use of their data in research projects, including our own. This study followed the Standards
for Quality Improvement Reporting Excellence reporting guideline. A subset of 333 tissue
cores were sampled from 231 patients who underwent radical prostatectomy at the
Vancouver General Hospital between June 27, 1997, and June 7, 2011. The cores were
independently graded in detail (4 classes: benign and cancer Gleason grade 3, grade 4, and
grade 5) between December 12, 2016, and October 5, 2017 by 6 pathologists (L.F., B.F.S.,
P.T., D.T., C.F.V., and G.W.) who included a research-based genitourinary pathologist, 4
clinical genitourinary pathologists, and a clinical general pathologist, ranging from
midcareer to veteran, with 1 to 27 years of experience (median, 16 years) in prostate cancer
grading. Four of the pathologists annotated all 333 cores; the other 2 pathologists annotated
191 and 92 cores, respectively.
EXISTING SYSTEM:
PROPOSED SYSTEM:
Deep learning-based methods are not easy to compare. Moreover, very few studies have
compared deep learning with traditional machine learning methods for automatic Gleason
grading. A recent deep learning method achieved an average accuracy of 70% in
classifying patches into four groups of benign and Gleason grades 3-5.
METHODOLOGY
DEEP LEARNING ALGORITHM:
LSTMs are a type of Recurrent Neural Network (RNN) that can learn and memorize long-
term dependencies. Recalling past information for long periods is the default behavior.
LSTMs retain information over time. They are useful in time-series prediction because they
remember previous inputs. LSTMs have a chain-like structure where four interacting layers
communicate in a unique way. Besides time-series predictions, LSTMs are typically used for
speech recognition, music composition, and pharmaceutical development.
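A single LSTM step, with its four interacting gates, can be sketched in NumPy (the weights here are random stand-ins; a real model would learn them):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: the four interacting gate layers computed together.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate: how much new information enters
    f = sigmoid(z[H:2*H])        # forget gate: how much old memory is kept
    o = sigmoid(z[2*H:3*H])      # output gate: how much memory is exposed
    g = np.tanh(z[3*H:4*H])      # candidate values for the cell state
    c_new = f * c + i * g        # the cell state carries long-term information
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4
W = rng.normal(size=(4*H, D))
U = rng.normal(size=(4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):               # unroll over a short random input sequence
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (4,)
```

The forget gate is what lets the cell state carry information across many steps instead of the gradient decaying at every layer.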
· Random forest is an ensemble classifier (a method that generates many classifiers and
aggregates their results) consisting of many decision trees; it outputs the class that is the
mode of the classes output by the individual trees.
· The term comes from "random decision forests", first proposed by Tin Kam Ho of Bell
Labs in 1995.
· The method combines Breiman's "bagging" idea (randomly draw datasets with
replacement from the training data, each sample the same size as the original training set)
with random selection of features.
· It is very user friendly, as it has only two parameters: the number of variables and the
number of trees.
* Let the number of training cases be N, and the number of variables in the classifier be M.
* We are told the number m of input variables to be used to determine the decision at a
node of the tree; m should be much less than M.
* Choose a training set for this tree by choosing n times with replacement from all N
available training cases (i.e., take a bootstrap sample).
* For each node of the tree, randomly choose m variables on which to base the decision at
that node. Calculate the best split based on these m variables in the training set.
* Each tree is fully grown and not pruned (as may be done in constructing a normal tree
classifier). For prediction, a new sample is pushed down the tree and assigned the label of
the training sample in the terminal node it ends up in.
This procedure is iterated over all trees in the ensemble, and the majority vote of all trees is
reported as the random forest prediction.
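The procedure above can be sketched in NumPy. For brevity each "tree" is reduced to a depth-1 stump, so this is an illustration of bagging plus random feature selection on hypothetical toy data, not a full implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_stump(X, y, feat_ids):
    """Best single-feature threshold split among a random subset of m features."""
    best = None
    for f in feat_ids:
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            for cand, flipped in ((pred, False), (1 - pred, True)):
                acc = (cand == y).mean()
                if best is None or acc > best[0]:
                    best = (acc, f, t, flipped)
    return best[1:]                         # (feature, threshold, flipped)

def predict_stump(stump, X):
    f, t, flipped = stump
    p = (X[:, f] > t).astype(int)
    return 1 - p if flipped else p

# Toy data: the class depends only on feature 0.
X = rng.normal(size=(80, 5))
y = (X[:, 0] > 0).astype(int)

# Grow each "tree" on a bootstrap sample, restricted to m random features.
m, forest = 2, []
for _ in range(25):
    rows = rng.integers(0, len(X), len(X))            # bootstrap sample
    feats = rng.choice(X.shape[1], size=m, replace=False)
    forest.append(fit_stump(X[rows], y[rows], feats))

# Majority vote over all trees in the ensemble.
votes = np.mean([predict_stump(s, X) for s in forest], axis=0)
acc = ((votes > 0.5).astype(int) == y).mean()
```

Trees whose random feature subset includes the informative feature vote consistently, while the others vote close to randomly, so the majority recovers the signal.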
Research on deep learning-based prostate cancer detection and Gleason grading is growing
day by day. Researchers all over the world are working hard to detect prostate cancer and
improve the Gleason grading method. Nevertheless, a variety of drawbacks remain, and
researchers have to deal with these limitations to improve their systems.
As a consequence of privacy issues, limited data sets are available on prostate cancer. To
ensure data authenticity, several experts must annotate all images. Data shortage complicates
training and can lead to overfitting.
Various whole slide images display various facets of prostate cancer and require an
appropriate way to combine independent information. A T2-weighted sequence is suitable
for describing the zonal prostate anatomy and may be used to examine the prostate fossa
and seminal vesicles in depth. The apparent diffusion coefficient (ADC) and the prostate
carcinoma Gleason score are negatively correlated.
• LIMITATIONS OF TRANSFER LEARNING
The transfer learning approach can be used to resolve the lack of data sets. However, when
the initial and target problems lack similarity, transfer learning can lead to negative transfer.
The conventional transfer learning model considers each image separately without
exchanging details on the intra-category correlation. Because the early layers capture
low-level features, it is hard to remove layers with confidence to reduce the number of
parameters. Densely connected layers and deep convolutional layers can be good candidates
for reduction, but it is difficult to determine how many layers and neurons to remove while
avoiding overfitting.
• INTER-OBSERVER VARIABILITY
Inter-observer variation is the difference between the results obtained by two or more
observers studying the same thing. In PCa diagnosis, it occurs when two or more
pathologists look at the same biopsy and reach different conclusions. One problem
researchers face is that proper and accurate manual grading is not always possible, and
misinterpretation can impact the reproducibility of the system. Computer-aided systems and
artificial intelligence help identify Gleason scores faster and make decisions easier.
• DIFFERENT LEVELS OF MAGNIFICATION RESULT IN
DIFFERENT LEVELS OF INFORMATION
Of the two types of information, cell structure is clearly visualized in high-power field (HPF)
microscopic images, while glandular structure is clearly visualized in low-power field (LPF)
microscopic images. Cancerous tissue exhibits both cellular and structural atypia; therefore,
images at multiple magnifications are essential for research work. Sometimes inputting both
low- and high-magnification images to the model simultaneously gives better accuracy.
While slicing and placing pathology specimens on a slide glass with hematoxylin and eosin,
some undesirable effects can be introduced, such as tissue deformation and wrinkling, and
dust can mix with the slides. As these artifacts can change the actual output, algorithms for
detecting artifacts such as blur and tissue folds have been introduced. According to Daisuke
Komura et al., color variation is another serious artifact. This variation arises from different
manufacturers of staining reagents, staining conditions, the thickness of the tissue section,
scanner models, etc. Accounting for color variation can help achieve better accuracy; for
this, we need enough data on every stained tissue from every scanner.
Wenyuan et al. noticed that their 5-fold validation was not patient-wise, as they did not
have patient-level information. This might result in a positive bias, since the model can
exploit similarities between tiles from the same patient, especially closely related tiles.
According to this paper, the ROI-Align layer extracts a small feature map from the
corresponding feature pyramid layer for each ROI right before the network head, losing
some crucial histopathological information. This type of information is critical for Gleason
grading, as the sizes of glands differ.
• EXCESSIVE PRE-PROCESSING
Öztürk et al. examined their results and found that pre-processing methods contribute to
learning to a certain extent. This contribution varies depending on how noise is cleaned and
features are refined. If pre-processing is excessive, success cannot reach the desired level,
so it is crucial to pre-process images appropriately.
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
SOFTWARE REQUIREMENTS:
LITERATURE SURVEY:
HISTOLOGIC GRADING OF PROSTATE CANCER: A PERSPECTIVE.
The wide-ranging biologic malignancy of prostate cancer is strongly correlated with its
extensive and diverse morphologic appearances. Histologic grading is a valuable research
tool that could and should be used more extensively and systematically in patient care. It can
improve clinical staging, as outlined by Oesterling et al (J Urol 138: 92-98, 1987), during
selection of patients for possible prostatectomy by helping to identify the optimal treatment.
Some of the recurrent practical problems with grading (reproducibility, "undergrading" of
biopsies, and "lumping" of grades) are discussed and recommendations are made. The newer
technologically sophisticated but single-parameter tumor measurements are compared with
one important advantage of histologic grading: the ability to encompass the entire low to high
range of malignancy. The predictive success of grading suggests that prostate cancers have
more or less fixed degrees of malignancy and growth rates (a hypothesis of "biologic
determinism") rather than a steady increase in malignancy with time. Most of the observed
facts can be interpreted on that basis, including the interrelations of tumor size, grade, and
malignancy. The increasing age-adjusted incidence of diagnosed prostate cancer is attributed
to new diagnostic tools and increased diagnostic zeal.
CONVOLUTIONAL DEEP BELIEF NETWORKS FOR SCALABLE UNSUPERVISED LEARNING OF HIERARCHICAL REPRESENTATIONS
There has been much interest in unsupervised learning of hierarchical generative models
such as deep belief networks. Scaling such models to full-sized, high-dimensional images
remains a difficult problem. To address this problem, we present the convolutional deep
belief network, a hierarchical generative model which scales to realistic image sizes. This
model is translation-invariant and supports efficient bottom-up and top-down probabilistic
inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks
the representations of higher layers in a probabilistically sound way. Our experiments show
that the algorithm learns useful high-level visual features, such as object parts, from
unlabeled images of objects and natural scenes. We demonstrate excellent performance on
several visual recognition tasks and show that our model can perform hierarchical (bottom-up
and top-down) inference over full-sized images.
CONVOLUTIONAL NETWORKS AND APPLICATIONS IN VISION
Intelligent tasks, such as visual perception, auditory perception, and language understanding,
require the construction of good internal representations of the world (or "features"), which
must be invariant to irrelevant variations of the input while preserving relevant information.
A major question for Machine Learning is how to learn such good features automatically.
Convolutional Networks (ConvNets) are a biologically-inspired trainable architecture that
can learn invariant features. Each stage in a ConvNet is composed of a filter bank, some
nonlinearities, and feature pooling layers. With multiple stages, a ConvNet can learn multi-
level hierarchies of features. While ConvNets have been successfully deployed in many
commercial applications from OCR to video surveillance, they require large amounts of
labeled training samples. We describe new unsupervised learning algorithms, and new non-
linear stages that allow ConvNets to be trained with very few labeled samples. Applications
to visual object recognition and vision navigation for off-road mobile robots are described.
SPATIAL PYRAMID POOLING IN DEEP CONVOLUTIONAL NETWORKS FOR VISUAL RECOGNITION
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224x224)
input image. This requirement is "artificial" and may reduce the recognition accuracy for the
images or sub-images of an arbitrary size/scale. In this work, we equip the networks with
another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The
new network structure, called SPP-net, can generate a fixed-length representation regardless
of image size/scale. Pyramid pooling is also robust to object deformations. With these
advantages, SPP-net should in general improve all CNN-based image classification methods.
On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety
of CNN architectures despite their different designs. On the Pascal VOC 2007 and
Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single
full-image representation and no fine-tuning.
The power of SPP-net is also significant in object detection. Using SPP-net, we compute the
feature maps from the entire image only once, and then pool features in arbitrary regions
(sub-images) to generate fixed-length representations for training the detectors. This method
avoids repeatedly computing the convolutional features. In processing test images, our
method is 24-102x faster than the R-CNN method, while achieving better or comparable
accuracy on Pascal VOC 2007. In the ImageNet Large Scale Visual Recognition Challenge
(ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification
among all 38 teams. This manuscript also introduces the improvement made for this
competition.
Recurrent nets are in principle capable of storing past inputs to produce the currently desired
output. Because of this property recurrent nets are used in time series prediction and process
control. Practical applications involve temporal dependencies spanning many time steps, e.g.
between relevant inputs and desired outputs. In this case, however, gradient based learning
methods take too much time. The extremely increased learning time arises because the error
vanishes as it gets propagated back. In this article, the decaying error flow is theoretically
analyzed. Then methods trying to overcome vanishing gradients are briefly discussed.
Finally, experiments comparing conventional algorithms and alternative methods are
presented. With advanced methods long time lag problems can be solved in reasonable time.
Deep neural networks have been successfully used in diverse emerging domains to solve
real-world complex problems, with many more deep learning (DL) architectures being developed to
date. To achieve these state-of-the-art performances, the DL architectures use activation
functions (AFs), to perform diverse computations between the hidden layers and the output
layers of any given DL architecture. This paper presents a survey on the existing AFs used in
deep learning applications and highlights the recent trends in the use of the activation
functions for deep learning applications. The novelty of this paper is that it compiles majority
of the AFs used in DL and outlines the current trends in the applications and usage of these
functions in practical deep learning deployments against the state-of-the-art research results.
This compilation will aid in making effective decisions in the choice of the most suitable and
appropriate activation function for any given application, ready for deployment. This paper is
timely because most research papers on AFs highlight similar works and results, while this
paper will be the first, to compile the trends in AF applications in practice against the
research results from literature, found in deep learning research to date.
SYSTEM TESTING
System testing verifies that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of
tests; each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application and is done after the
completion of an individual unit, before integration. This is structural testing
that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at the component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path
of a business process performs accurately to the documented specifications and
contains clearly defined inputs and expected results.
Integration testing
Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results.
An example of system testing is the configuration oriented system integration
test. System testing is based on process descriptions and flows, emphasizing
pre-driven process links and integration points.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two
or more integrated software components on a single platform to produce failures
caused by interface defects.
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
Acceptance Testing
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
SYSTEM STUDY
FEASIBILITY STUDY
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system
will have on the organization. The amount of funds that the company can pour
into the research and development of the system is limited, and the expenditures
must be justified. The developed system is well within the budget; this was
achieved because most of the technologies used are freely available. Only
the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not place a
high demand on the available technical resources, as this would lead to high
demands being placed on the client. The developed system must have modest
requirements, as only minimal or no changes are required for implementing this
system.
SOCIAL FEASIBILITY
SOFTWARE ENVIRONMENT:
Python is essential for students and working professionals who want to become
good software engineers, especially when they are working in the web
development domain. Some of the key advantages of learning Python are listed below:
Characteristics of Python
It provides very high-level dynamic data types and supports dynamic type checking.
It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.
A small, conventional "Hello World" program is the traditional starting point
for trying out Python.
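That conventional Hello World program can be written as a minimal runnable sketch:

```python
# The conventional first program: print a greeting to standard output
message = "Hello, World!"
print(message)
```

Running it prints the greeting, confirming the interpreter is set up correctly.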
Features of Python
As mentioned before, Python is one of the most widely used languages for the
web. A few of its notable features are listed here:
Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
Easy-to-read − Python code is more clearly defined and visible to the eyes.
A broad standard library − Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.
Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
Portable − Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.
Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more efficient.
GUI Programming − Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.
Scalable − Python provides a better structure and support for large programs than
shell scripting.
History of Python
Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.
Python is copyrighted, and like Perl, its source code is available under an
open-source license (the GPL-compatible Python Software Foundation License).
Python is now maintained by a core development team, although Guido van
Rossum still holds a vital role in directing its progress.
Variables are nothing but reserved memory locations to store values. This means that
when you create a variable you reserve some space in memory.
Based on the data type of a variable, the interpreter allocates memory and decides
what can be stored in the reserved memory. Therefore, by assigning different data
types to variables, you can store integers, decimals or characters in these variables.
Python variables do not need explicit declaration to reserve memory space. The
declaration happens automatically when you assign a value to a variable. The equal
sign (=) is used to assign values to variables.
The operand to the left of the = operator is the name of the variable and the operand to
the right of the = operator is the value stored in the variable. For example −
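As an illustration of assignment (the variable names and values below are arbitrary), assigning a value both declares the variable and reserves the memory:

```python
# Assignment alone creates each variable; no explicit declaration is needed
counter = 100        # an integer
miles = 1000.0       # a floating-point number
name = "John"        # a string

print(counter, miles, name)
```

Re-assigning a different type to the same name is also allowed, since the type lives with the value, not the variable.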
A module allows you to logically organize your Python code. Grouping related code
into a module makes the code easier to understand and use. A module is a Python
object with arbitrarily named attributes that you can bind and reference.
Simply, a module is a file consisting of Python code. A module can define functions,
classes and variables. A module can also include runnable code.
Example
The Python code for a module named aname normally resides in a file named
aname.py. Here's an example of a simple module, support.py
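The contents of support.py are not reproduced here, so the following is a minimal sketch of what such a module might contain; the function name print_func is illustrative:

```python
# support.py -- an illustrative module defining a single function

def print_func(par):
    """Print and return a greeting for the given name."""
    message = "Hello : " + par
    print(message)
    return message
```

After saving this as support.py, another script could use it with `import support` and then call `support.print_func("Zara")`.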
Python has been an object-oriented language since its inception. Because of this,
creating and using classes and objects is straightforward. This chapter helps you
become an expert in using Python's object-oriented programming support.
If you do not have any previous experience with object-oriented (OO) programming, you
may want to consult an introductory course on it or at least a tutorial of some sort so that you
have a grasp of the basic concepts.
Class − A user-defined prototype for an object that defines a set of attributes that
characterize any object of the class. The attributes are data members (class variables
and instance variables) and methods, accessed via dot notation.
Class variable − A variable that is shared by all instances of a class. Class variables
are defined within a class but outside any of the class's methods. Class variables are
not used as frequently as instance variables are.
Data member − A class variable or instance variable that holds data associated with a
class and its objects.
Function overloading − The assignment of more than one behavior to a particular
function. The operation performed varies by the types of objects or arguments
involved.
Instance variable − A variable that is defined inside a method and belongs only to
the current instance of a class.
Inheritance − The transfer of the characteristics of a class to other classes that are
derived from it.
Object − A unique instance of a data structure that's defined by its class. An object
comprises both data members (class variables and instance variables) and methods.
Creating Classes
The class statement creates a new class definition. The name of the class immediately follows
the keyword class followed by a colon as follows −
class ClassName:
'Optional class documentation string'
class_suite
The class has a documentation string, which can be accessed via ClassName.__doc__.
The class_suite consists of all the component statements defining class members, data
attributes and functions.
Example
Following is the example of a simple Python class −
class Employee:
    'Common base class for all employees'
    empCount = 0

    def __init__(self, name, salary):
        self.name = name
        self.salary = salary
        Employee.empCount += 1

    def displayCount(self):
        print("Total Employee %d" % Employee.empCount)

    def displayEmployee(self):
        print("Name : ", self.name, ", Salary: ", self.salary)
The variable empCount is a class variable whose value is shared among all instances
of this class. It can be accessed as Employee.empCount from inside the class or
outside the class.
The first method __init__() is a special method, called the class constructor or
initialization method, that Python calls when you create a new instance of this class.
You declare other class methods like normal functions, with the exception that the first
argument to each method is self. Python adds the self argument to the list for you; you
do not need to include it when you call the methods.
To create instances of a class, you call the class using the class name and pass in
whatever arguments its __init__() method accepts.
You access the object's attributes using the dot operator with the object. The class
variable is accessed using the class name as follows −
# Create two instances (example values)
emp1 = Employee("Zara", 2000)
emp2 = Employee("Manni", 5000)
emp1.displayEmployee()
emp2.displayEmployee()
print("Total Employee %d" % Employee.empCount)
The Python standard for database interfaces is the Python DB-API. Most Python database
interfaces adhere to this standard.
You can choose the right database for your application. Python Database API supports a wide
range of database servers such as −
GadFly
mSQL
MySQL
PostgreSQL
Informix
Interbase
Oracle
Sybase
Here is the list of available Python database interfaces: Python Database Interfaces and APIs.
You must download a separate DB API module for each database you need to access. For
example, if you need to access an Oracle database as well as a MySQL database, you must
download both the Oracle and the MySQL database modules.
The DB API provides a minimal standard for working with databases using Python structures
and syntax wherever possible. This API includes the following −
Importing the API module.
Acquiring a connection with the database.
Issuing SQL statements and stored procedures.
Closing the connection.
We will learn all the concepts using MySQL, so let us talk about the MySQLdb module.
What is MySQLdb?
Before proceeding, make sure you have MySQLdb installed on your machine. Just type
the following in your Python script and execute it −
#!/usr/bin/python
import MySQLdb
If it produces an error such as "ImportError: No module named MySQLdb", then the
MySQLdb module is not installed.
Note − Make sure you have root privilege to install above module.
Database Connection
Before connecting to a MySQL database, make sure a table is available to work with;
the examples that follow assume a table (named EMPLOYEE, for illustration) with fields
FIRST_NAME, LAST_NAME, AGE, SEX and INCOME.
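The MySQLdb module requires a running MySQL server, so as a self-contained sketch of the same DB-API pattern, the standard-library sqlite3 module can be used instead; the table name EMPLOYEE and the row values below are illustrative:

```python
import sqlite3

# Acquire a connection; an in-memory database keeps the sketch self-contained
db = sqlite3.connect(":memory:")
cursor = db.cursor()

# Create a table with the fields described above
cursor.execute("""CREATE TABLE EMPLOYEE
                  (FIRST_NAME TEXT, LAST_NAME TEXT,
                   AGE INTEGER, SEX TEXT, INCOME REAL)""")

# Issue SQL statements through the cursor
cursor.execute("INSERT INTO EMPLOYEE VALUES ('Mac', 'Mohan', 20, 'M', 2000.0)")
db.commit()

cursor.execute("SELECT FIRST_NAME, AGE FROM EMPLOYEE")
row = cursor.fetchone()
print(row)

# Close the connection when finished
db.close()
```

The same import/connect/execute/close sequence applies to MySQLdb, with `sqlite3.connect(...)` replaced by `MySQLdb.connect(host, user, password, database)`.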
Python provides various options for developing graphical user interfaces (GUIs). Most
important are listed below.
Tkinter − Tkinter is the Python interface to the Tk GUI toolkit shipped with Python.
We will look at this option in this chapter.
wxPython − This is an open-source Python interface for wxWindows
http://wxpython.org.
JPython − JPython is a Python port for Java, which gives Python scripts seamless
access to Java class libraries on the local machine http://www.jython.org.
There are many other interfaces available, which you can find on the net.
Tkinter Programming
Tkinter is the standard GUI library for Python. Python when combined with Tkinter provides
a fast and easy way to create GUI applications. Tkinter provides a powerful object-oriented
interface to the Tk GUI toolkit.
Creating a GUI application using Tkinter is an easy task. All you need to do is perform the
following steps −
Import the Tkinter module.
Create the GUI application's main window.
Add one or more widgets to the GUI application.
Enter the main event loop to take action against each event triggered by the user.
Example
#!/usr/bin/python
# In Python 3 the Tkinter module is named tkinter
import tkinter
top = tkinter.Tk()
# Code to add widgets will go here...
top.mainloop()
Tkinter Widgets
Tkinter provides various controls, such as buttons, labels and text boxes used in a GUI
application. These controls are commonly called widgets.
Tkinter provides the following widgets; a brief description of each is given
below −
1. Button − The Button widget is used to display buttons in your application.
2. Canvas − The Canvas widget is used to draw shapes, such as lines, ovals,
polygons and rectangles, in your application.
3. Checkbutton − The Checkbutton widget is used to display a number of options
as checkboxes. The user can select multiple options at a time.
4. Entry − The Entry widget is used to display a single-line text field for
accepting values from a user.
5. Frame − The Frame widget is used as a container widget to organize other
widgets.
6. Label − The Label widget is used to provide a single-line caption for other
widgets. It can also contain images.
7. Listbox − The Listbox widget is used to provide a list of options to a user.
8. Menubutton − The Menubutton widget is used to display menus in your
application.
9. Menu − The Menu widget is used to provide various commands to a user. These
commands are contained inside Menubutton.
10. Message − The Message widget is used to display multiline text to a user.
11. Radiobutton − The Radiobutton widget is used to display a number of options
as radio buttons. The user can select only one option at a time.
12. Scale − The Scale widget is used to provide a slider widget.
13. Scrollbar − The Scrollbar widget is used to add scrolling capability to
various widgets, such as list boxes.
14. Text − The Text widget is used to display text in multiple lines.
15. Toplevel − The Toplevel widget is used to provide a separate window
container.
16. Spinbox − The Spinbox widget is a variant of the standard Tkinter Entry
widget, which can be used to select from a fixed number of values.
17. PanedWindow − A PanedWindow is a container widget that may contain any
number of panes, arranged horizontally or vertically.
18. LabelFrame − A LabelFrame is a simple container widget. Its primary purpose
is to act as a spacer or container for complex window layouts.
19. tkMessageBox − This module is used to display message boxes in your
applications.
Standard attributes
Let us take a look at how some of their common attributes, such as sizes, colors
and fonts, are specified.
Dimensions
Colors
Fonts
Anchors
Relief styles
Bitmaps
Cursors
Geometry Management
All Tkinter widgets have access to specific geometry management methods, which have the
purpose of organizing widgets throughout the parent widget area. Tkinter exposes the
following geometry manager classes: pack, grid, and place.
The pack() Method − This geometry manager organizes widgets in blocks before
placing them in the parent widget.
The grid() Method − This geometry manager organizes widgets in a table-like
structure in the parent widget.
The place() Method − This geometry manager organizes widgets by placing them in a
specific position in the parent widget.
CONCLUSION
Detection of prostate cancer using whole slide images is a landmark in the area of medical
pathology. Throughout this paper, we analysed numerous articles on the use of deep learning
to identify prostate cancer from histopathological images. Machine learning and deep learning
have opened the door to medical image studies, and there are still many undiscovered fields
that need to be explored. In this paper, we discussed basic concepts regarding prostate cancer.
Convolutional architectures have seen increasing use over the past decade for processing
complex images. We provided information about several state-of-the-art techniques surveyed
from the most recent works. We have also discussed Gleason grading methods, histopathological
image pre-processing, and post-processing techniques. Data insufficiency, super-pixel images,
and the differences between inter-observer and intra-observer variability have made working
with histopathological images difficult for researchers. In most of the research papers we
studied, we have seen limitations and a lack of reproducible data in this research field. We
have highlighted the evaluation criteria and metrics for loss
calculation in different types of model architecture. This paper shows the pathway of how to
use deep learning for prostate cancer detection from histopathological images.
REFERENCES
[1] R.L. Siegel, K.D. Miller, A. Jemal, "Cancer statistics, 2019," CA: A Cancer
Journal for Clinicians, vol. 69, pp. 7-34, 2019. doi:10.3322/caac.21551
[5] Y. LeCun, Y. Bengio, "Convolutional networks for images, speech, and time
series," in The Handbook of Brain Theory and Neural Networks, MIT Press,
Cambridge, MA, USA, 1998, pp. 255-258.
[11] Y. LeCun, Y. Bengio, G. Hinton, "Deep learning," Nature, vol. 521, pp.
436-444, 2015. doi:10.1038/nature14539
[13] T. Wang, D.J. Wu, A. Coates, A.Y. Ng, "End-to-end text recognition with
convolutional neural networks," in Proceedings of the 21st International
Conference on Pattern Recognition (ICPR 2012), Tsukuba, 2012, pp. 3304-3308.
[14] K. He, X. Zhang, S. Ren, J. Sun, "Spatial pyramid pooling in deep
convolutional networks for visual recognition," IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, Sept. 2015.
doi:10.1109/TPAMI.2015.2389824
[15] S. Hochreiter, "The vanishing gradient problem during learning recurrent
neural nets and problem solutions," International Journal of Uncertainty,
Fuzziness and Knowledge-Based Systems, vol. 6, no. 2, pp. 107-116, 1998.
[19] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, R.R.
Salakhutdinov, "Improving neural networks by preventing co-adaptation of
feature detectors," arXiv:1207.0580, 2012.
[21] W. Rawat, Z. Wang, "Deep convolutional neural networks for image
classification: a comprehensive review," Neural Computation, vol. 29, no. 9,
pp. 2352-2449, 2017.
[23] R. Raina, A. Battle, H. Lee, B. Packer, A.Y. Ng, "Self-taught learning:
transfer learning from unlabeled data," in Proceedings of the 24th
International Conference on Machine Learning (ICML '07), ACM, New York, NY,
USA, 2007, pp. 759-766. doi:10.1145/1273496.1273592
[26] C. Szegedy et al., "Going deeper with convolutions," in 2015 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA,
2015, pp. 1-9. doi:10.1109/CVPR.2015.7298594
[28] K. Simonyan, A. Zisserman, "Very deep convolutional networks for
large-scale image recognition," arXiv:1409.1556, 2015.