
DIAGNOSIS AND PREDICTION OF ALZHEIMER'S

DISEASE USING SOFT COMPUTING TECHNIQUES

MINOR PROJECT REPORT

SUBMITTED BY

Devesh Krishnani (13118015)

Anupriya Sahu (13118012)

V Vinod Kumar (13118084)

Pritesh Ahirwar (12118055)

UNDER THE GUIDANCE OF

Dr. Rekh Ram Janghel

Prof. B. Ramchandra Reddy

Department of Information Technology

NATIONAL INSTITUTE OF TECHNOLOGY,

RAIPUR (492010)
Department of Information Technology

CERTIFICATE
We hereby certify that the work which is being presented in the B.Tech Minor Project
entitled “Diagnosis and Prediction of Alzheimer's Disease using Soft Computing Techniques”,
which is submitted as a Minor Project Report to the Department of Information
Technology of National Institute of Technology Raipur (C.G.), is an authentic record of
our own work carried out during the period from July 2016 to December 2016 under the
supervision of Dr. Rekh Ram Janghel, Department of Information Technology.

The matter presented in this report has not been submitted by us for the award of any other
degree elsewhere.

Submitted By:
Devesh Krishnani
Anupriya Sahu
V Vinod Kumar
Pritesh Ahirwar

This is to certify that the above statement made by the candidates is correct to the best of our
knowledge.

Signature of Supervisor
Date: 09-05-2016 Dr. Rekh Ram Janghel

Signature of Co-Supervisor
Prof. B Ramchandra Reddy


ACKNOWLEDGEMENT
The pleasure, the achievement, the glory, the satisfaction, the reward, the appreciation and the completion of our project cannot be thought of without the few who, apart from their regular schedule, spared their valuable time. A number of persons contributed either directly or indirectly to shaping and achieving the desired outcome. We owe a debt of gratitude to Dr. Sudhakar Pandey (HOD), Department of Information Technology, National Institute of Technology, Raipur (C.G.) for providing us with the opportunity to develop this project. Through his timely advice, constructive criticism and supervision he was a real source of inspiration for us.

We express our sincere thanks to our guides, Dr. Rekh Ram Janghel, Assistant Professor, and Prof. B Ramchandra Reddy, Assistant Professor, Department of Information Technology, National Institute of Technology, Raipur (C.G.), for their valuable guidance, suggestions and the help required for executing the project work from time to time. Without their direction and motivation, it would have been nearly impossible for us to achieve the initial level of the target planned.

Last but not the least, we are really thankful to our parents for always encouraging us in our studies, and also to our friends who directly or indirectly helped us in this work.

Devesh Krishnani (13118015)


Anupriya Sahu (13118012)
V Vinod Kumar (13118084)
Pritesh Ahirwar (12118055)


ABSTRACT

Recently, machine learning techniques, especially predictive modelling and pattern recognition in the biomedical sciences, from drug delivery systems to medical imaging, have become important methods assisting researchers in gaining a deeper understanding of entire issues and in solving complex medical problems. Deep learning is a powerful machine learning approach for classification while extracting low- to high-level features. In this report, we have used various deep learning methods to classify Alzheimer's disease. The importance of classifying this kind of medical data is to potentially develop a predictive model or system that can recognize the disease in contrast to normal subjects or estimate the stage of the disease. Classification of clinical data such as Alzheimer's disease has always been challenging, and the most problematic part has always been selecting the most discriminative features. Using convolution neural networks and spiking neural networks, accuracies of 97.84% and 94.46% were achieved. This approach enables us to expand our methodology to predict more complicated systems.


Contents

CHAPTER 1. ............................................................................................................................. 7

INTRODUCTION ..................................................................................................................... 7

1.1 Alzheimer’s Disease ........................................................................................................ 8

1.2 Deep Learning ............................................................................................................... 11

1.3 Data Acquisition ............................................................................................................. 11

CHAPTER 2. ........................................................................................................................... 15

LITERATURE REVIEW ........................................................................................................ 15

CHAPTER 3. ........................................................................................................................... 17

METHODOLOGY .................................................................................................................. 17

3.1 Structural MRI Pre-processing ....................................................................................... 18

3.2 Conversion ..................................................................................................................... 24

3.3 Convolution Neural Network……………………………………………………….....24

3.4 Spiking Neural Network…………………………………………………………….....27

3.5 Cooperative Coevolutionary Neural Network………………………………...…….....29

CHAPTER 4. ........................................................................................................................... 31

RESULTS AND CONCLUSION .......................................................................................... 31

References ................................................................................................................................ 36


List of Figures

Figure 1.1 Alzheimer's effect over age groups in India ........................................................... 10


Figure 3.1 FlowChart. .............................................................................................................. 19
Figure 3.2 Raw Structural MRI Scan ....................................................................................... 20
Figure 3.3 Image after applying BET ...................................................................................... 20
Figure 3.4 Image after Grey Matter Extraction ....................................................................... 21
Figure 3.5 Image after White Matter extraction ...................................................................... 22
Figure 3.6 Image after White Matter extraction ...................................................................... 22
Figure 3.7 Image after Figure 5 and Figure 6 are combined together ..................................... 23
Figure 3.8 Image after Segmentation steps have been applied. ............................................... 24
Figure 3.9 Grey Matter Segmented Image ............................................................................... 24
Figure 3.10 Image after applying affine non linear registration .............................................. 25
Figure 3.11 Final Gray Matter Segmented Image ................................................................... 26
Figure 3.12 LeNet Architecture ............................................................................................... 28
Figure 3.13 Spiking Neural Network Architecture [24] ............................................................... 29
Figure 3.14 Step 1 of Neuro COCO Algorithm ....................................................................... 35
Figure 3.15 Step 2 of Neuro COCO Algorithm ....................................................................... 35
Figure 4.1 AD vs NL for CNN with learning rate = 0.01 ........................................................ 40
Figure 4.2 AD vs NL for CNN with learning rate = 0.5 .......................................................... 40
Figure 4.3 AD vs NL for SNN with learning rate = 0.5 .......................................................... 41
Figure 4.4 AD vs NL for SNN with learning rate = 0.01 ........................................................ 41
Figure 4.5 The filters of the first convolution layer of Lenet architecture .............................. 42
Figure 4.6 Features of First Convolution Layer....................................................................... 43


List of abbreviations
Alzheimer’s Disease……….………………………………………………………………..AD
Normal Human……………………………………………………………………………....NL
Convolution Neural Network……………………………………………...........................CNN
Spiking Neural Network…………………….......................................................................SNN
Neuro Cooperative Coevolutionary Network...................................................................COCO
Alzheimer’s Disease Neuroimaging Initiative....................................................................ADNI
Mini Mental State Examination……………………………………………………..…..MMSE
Convolution Layer………………………………………………………………………CONV
Rectified Linear Unit Layer………………………………………………………………RELU
Fully Connected Layer……………………………………………………………………..FC
Pooling Layer……………………………………………………………………………POOL
Adaptation of Modules in Cooperative Coevolution of Feedforward Networks …..…AMCC-FNN


List of Tables
Table 1.1 Demographic Information for both subsets, including mental state examination
(MMSE) score .......................................................................................................................... 14
Table 4.1 The accuracy of testing is shown below. As shown, a very high level of accuracy
is achieved for both the Convolution Neural Network and the Spiking Neural Network. The Convolution
Neural Network is slightly better than the Spiking Neural Network. ............................................ 39


CHAPTER 1

INTRODUCTION


Chapter 1
Introduction
Deep learning is a powerful machine learning approach for classification while extracting low- to high-level features. In this report, we have used various deep learning methods to classify Alzheimer's disease. The importance of classifying this kind of medical data is to potentially develop a predictive model or system that can recognize the disease in contrast to normal subjects or estimate the stage of the disease. Classification of clinical data such as Alzheimer's disease has always been challenging, and the most problematic part has always been selecting the most discriminative features.

1.1 Alzheimer’s Disease

Alzheimer's Disease (AD) is the most common type of dementia in people 65 years and older, in which the mental ability of persons gradually declines and reaches a stage where it becomes difficult for them to lead a normal life. With the disease progressing gradually, patients find themselves more dependent on their immediate family members for survival. It is expected that 1 in 85 people will be affected by 2050, and the number of affected people will double in the next 20 years [1]. Alzheimer's disease was named after the German psychiatrist and pathologist Alois Alzheimer after he examined a female patient (post mortem) in 1906 who had died at age 51 after having severe memory problems, confusion, and difficulty understanding questions. Alzheimer reported two common abnormalities in the brain of this patient: "1. Dense layers of protein deposited outside and between the nerve cells. 2. Areas of damaged nerve fibers, inside the nerve cells, which instead of being straight had become tangled." Moreover, these plaques and tangles have been used to help diagnose AD.

Figure 1.1 Alzheimer's effect over age groups in India [2]

There are 3 phases of AD: the normal case, mild cognitive impairment (MCI), and dementia. MCI involves mild changes in memory, while dementia refers to the severe stage of the disease. The symptoms of AD differ between patients. The following are common symptoms of Alzheimer's:

• Memory loss that disrupts daily life.
• Challenges in planning or solving problems.
• Problems understanding visual images and spatial relationships.
• Decreased or poor judgment.
• Withdrawal from work or social activities.

The current state-of-the-art clinical diagnosis of AD requires a specialty clinic and includes a medical examination, neuropsychological testing, neuroimaging, cerebrospinal fluid (CSF) analysis and blood examination. This process is neither time- nor cost-effective. Additionally, given the quickly aging global population with an expected striking increase of AD cases, there are insufficient numbers of specialty clinics to meet the growing needs [1]. While CSF and neuroimaging markers are gold standards for the in vivo assessment of patients, they are invasive and expensive and, therefore, have limited utility as frontline screening and diagnostic tools. In addition, prior work has shown that non-specialist clinicians are inaccurate at identifying early AD and mild cognitive impairment (MCI), which is a major impetus to the search for clinically useful screening and diagnostic tools [2].

Dementia affects every person in a different way. Its impact can depend on what the person
was like before the disease: his/her personality, lifestyle, significant relationships and physical health. The problems linked to dementia can be best understood in three stages. The duration of each stage is given as a guideline; sometimes people deteriorate more quickly, and at other times more slowly.

It is estimated that over 3.7 million people are affected by dementia in India. This is expected
to double by 2030. It is estimated that the cost of taking care of a person with dementia is
about 43,000 annually, much of which is met by the families. The financial burden will only
increase in the coming years [3]. The challenge posed by dementia as a health and social
issue is of a scale we can no longer ignore. Despite the magnitude, there is gross ignorance,
neglect and scarce services for people with dementia and their families. We know that
dementia is not part of aging and is caused by a variety of diseases. We now have a range of
options to treat the symptoms of dementia and offer practical help to those affected.
Alzheimer's and Related Disorders Society of India (ARDSI), the national voluntary organization dedicated to the care, support and research of dementia, has been at the forefront of improving the situation since 1992. ARDSI is committed to developing a society which is
dementia friendly and literate. This could only happen if we have the political commitment at
all levels to provide a range of solutions that deliver a life with dignity and honour for people
with dementia.

Diagnosing Alzheimer's disease requires very careful medical assessment, including patient history, a mini mental state examination (MMSE), and physical and neurobiological exams [4]. In addition to these evaluations, structural magnetic resonance imaging and resting-state functional magnetic resonance imaging (rs-fMRI) offer non-invasive methods of studying the structure of the brain, functional brain activity, and changes in the brain. During scanning using both structural (anatomical) and rs-fMRI techniques, patients remain prone on the MRI table and do not perform any tasks. This allows data acquisition to occur without any effects from a particular task on functional activity in the brain [5]. Alzheimer's disease causes
shrinkage of the hippocampus and cerebral cortex and enlargement of ventricles in the brain.
The level of these effects is dependent upon the stage of disease progression. In the advanced
stage of AD, severe shrinkage of the hippocampus and cerebral cortex, as well as
significantly enlarged ventricles, can easily be recognized in MR images. This damage affects
those brain regions and networks related to thinking, remembering (especially short-term
memory), planning and judgment. Since brain cells in the damaged regions have degenerated,
MR image (or signal) intensities are low in both MRI and rs-fMRI techniques [8-10].
However, some of the signs found in AD imaging data are also identified in normal-aging imaging data. Identifying the visual distinction between AD data and images of older subjects with normal aging effects requires extensive knowledge and experience, which must then be combined with additional clinical results (e.g., MMSE) in order to accurately classify the data [1]. The development of an assistive tool or algorithm to classify MR-based imaging data, such as structural MRI and rs-fMRI data, and, more importantly, to distinguish brain disorder data from that of healthy subjects, has always been of interest to clinicians [10]. A robust machine learning algorithm such as deep learning, which is able to classify Alzheimer's disease, will assist scientists and clinicians in diagnosing this brain disorder and will also aid in the accurate and timely diagnosis of Alzheimer's patients [11].

1.2 Deep Learning

Deep learning is a branch of machine learning based on a set of algorithms that attempt to
model high level abstractions in data. In a simple case, you could have two sets of neurons:
ones that receive an input signal and ones that send an output signal. When the input layer
receives an input it passes on a modified version of the input to the next layer. In a deep
network, there are many layers between the input and output, allowing the algorithm to use
multiple processing layers, composed of multiple linear and non-linear transformations.


Deep learning is part of a broader family of machine learning methods based on learning
representations of data. An observation (e.g., an image) can be represented in many ways
such as a vector of intensity values per pixel, or in a more abstract way as a set of edges,
regions of particular shape, etc. Some representations are better than others at simplifying the
learning task. One of the promises of deep learning is replacing handcrafted features with
efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical
feature extraction.

Various deep learning architectures such as deep neural networks, convolutional deep neural
networks, deep belief networks and recurrent neural networks have been applied to fields like
computer vision, automatic speech recognition, natural language processing, audio
recognition and bioinformatics where they have been shown to produce state-of-the-art
results on various tasks.

1.3 Data Acquisition

For this study, data was acquired from the Alzheimer's Disease Neuroimaging Initiative (ADNI).
ADNI is a global research effort that actively supports the investigation and development of
treatments that slow or stop the progression of AD. This multisite, longitudinal study assesses
clinical, imaging, genetic and bio specimen biomarkers through the process of normal aging
to early mild cognitive impairment (EMCI), to late mild cognitive impairment (LMCI), to
dementia or AD. With established, standardized methods for imaging and biomarker
collection and analysis, ADNI facilitates a way for scientists to conduct cohesive research
and share compatible data with other researchers around the world.

The dataset consists of 54 images in NIfTI format, acquired through the standard protocol available on the ADNI site. It includes 27 males, of whom 18 are classified as suffering from Alzheimer's Disease with an average MMSE score of 42.26. The rest are females, of whom 9 are suffering from Alzheimer's Disease with an average MMSE score of 41.2. The images acquired are T1-weighted structural MRI scans with a slice thickness of 1 mm. Scanning was performed on three different Tesla scanners from General Electric (GE) Healthcare, Philips Medical Systems, and Siemens Medical Solutions, and was based on identical scanning parameters. Anatomical scans were acquired with a 3D MPRAGE sequence (TR = 2 s, TE = 2.63 ms, FOV = 25.6 cm, 256×256 matrix, 160 slices of 1 mm thickness). Table 1.1 presents a summary of the data acquired.


Table 1.1 presents the demographic information for both subsets, including mini mental state examination (MMSE) scores.

Table 1.1 Demographic Information for both subsets, including mental state examination (MMSE) score

Modality: MRI, total subjects: 52

Group     | Subjects | Female | Mean Age (F) | SD Age (F) | Male | Mean Age (M) | SD Age (M) | MMSE
Alzheimer | 27       | 9      | 79.42        | 15.16      | 18   | 80.54        | 15.98      | 27.90
Normal    | 25       | 16     | 80.15        | 12.36      | 9    | 81.75        | 27.43      | 28.20


Chapter 2

Literature Review


Chapter 2
Literature Review
Deep learning is one of the emerging fields in machine learning. It has various applications which extend over fields such as medical imaging, network classification, sentiment analysis, game playing, weather prediction, etc. Some of the common deep learning networks are the convolution network, the deep spiking neural network and stacked autoencoders. In recent times, different algorithms for the classification of Alzheimer's Disease have emerged. S. Sarraf and G. Tofighi devised an algorithm to classify Alzheimer's Disease using deep learning [11]. Initially, structural MRI scans and functional MRI scans were acquired from the Alzheimer's Disease Neuroimaging Initiative using a standard protocol [12]. After acquisition, digital image processing techniques were applied to the raw MRI scans. First, the brain was extracted using the brain extraction tool present in the FSL library provided by Oxford. Then the images were segmented into three parts, grey matter, white matter and cerebrospinal fluid, using the FSL library. After segmentation, images were smoothed using Gaussian kernels with sigma values equal to 2, 3 and 4 and then linearly registered. Next, a convolution neural network was applied to the image dataset to classify Alzheimer's Disease patients from normal subjects. The accuracy of classification for the LeNet architecture was 99.42% and for the GoogLeNet architecture 99.49%.

S. Sarraf and G. Tofighi also devised an algorithm to compare classification results for smoothed and unsmoothed datasets. Using the LeNet architecture, the classification accuracy achieved was 98.789% for the unsmoothed dataset and 99.21% for the smoothed dataset, and using the GoogLeNet architecture the classification accuracy achieved was 98.824% for the unsmoothed dataset and 99.46% for the smoothed dataset. The methodology applied was the same as mentioned in the previous literature [13]. Another framework, which used a stacked autoencoder, was proposed by S. Liu; it improved upon the accuracy given by the support vector machine, being 97% accurate in classification whereas classification using the SVM was 74% accurate [14].

Detection of Alzheimer's disease from MRI hippocampal texture using a logistic regression model was achieved by L. Sørensen by extracting hippocampal texture features and applying a logistic regression model to those features. The accuracy achieved was 74% for AD vs NL classification. In addition to AD vs NL classification, the accuracy for AD vs MCI vs NL classification was 71% [15]. Akhila D and Shobna S used an Elman back propagation network to classify Alzheimer's using the features extracted by applying a Wiener filter and the GLCM matrix. The accuracy achieved by them was 93.1% over 50 epochs [16].

Li, Feng and Tran, Loc developed a robust algorithm for the classification of Alzheimer's Disease and improved the accuracy by 6.3% over a classical neural network. They utilized the dropout technique to improve classical deep learning by preventing weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, they incorporated stability selection, an adaptive learning factor and a multi-task learning strategy into the deep learning framework [17].


Chapter 3

Methodology


Chapter 3
Methodology
Classification of Alzheimer's disease images and normal, healthy images required several steps, from pre-processing to recognition, which resulted in the development of an end-to-end pipeline. Three major modules formed this recognition pipeline: a) pre-processing, b) data conversion and c) classification. Two different approaches were used in the pre-processing module, as pre-processing of 3D structural MRI data required different methodologies, which will be explained later in this report. After the pre-processing steps, the data were converted from medical imaging to a Portable Network Graphics (PNG) format to input into the deep learning-based classifier. Finally, the CNN-based architecture receiving images in its input layer was trained and tested (validated) using 75% and 25% of the dataset, respectively.

Figure 3.1 End-to-end recognition based on deep learning architectures is composed of pre-processing, image conversion and classification. The structural MRI images are pre-processed using the FSL library and are then converted into 2D images along the Z axis.

3.1 Structural MRI Data Pre-processing

MRI data was pre-processed using the FSL library developed by Oxford, which is a widely used library for MRI image pre-processing. Initially, BET extraction was applied to all the images; it removed all the unrequired regions from the image, such as the neck, eyes, skull and nose. Only the part of the image which contained the brain was the output of the first step.


Figure 3.2 Raw Structural MRI Scan

Figure 3.3 Image after the brain extraction tool is applied. The neck, eyes, skull, etc., which are not part of the brain, are removed.


Next, a study-specific grey matter template was created using the FSL-VBM library and the relevant protocol [19]. In this step, all brain-extracted images were segmented into grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF). GM images were selected and registered to the GM ICBM-152 standard template using linear affine transformation. The registered images were concatenated and averaged and were then flipped along the x-axis, and the two mirror images were then re-averaged to obtain a first-pass, study-specific affine GM template.

Figure 3.4 Image after Grey Matter Extraction


Figure 3.5 Image after White Matter extraction

Figure 3.6 Image after White Matter extraction


Figure 3.7 Image after Figure 3.5 and Figure 3.6 are combined together


Figure 3.8 Image after Segmentation steps have been applied.

Figure 3.9 Grey Matter Segmented Image


Second, the GM images were re-registered to this affine GM template using non-linear
registration, concatenated into a 4D image which was then averaged and flipped along the x-
axis. Both mirror images were then averaged to create the final symmetric, study-specific
non-linear GM template at 2×2×2 mm3 resolution in standard space. Following this, all
concatenated and averaged 3D GM images (one 3D image per subject) were concatenated
into a stack (4D image = 3D images across subjects). Additionally, the FSL-VBM protocol
introduced a compensation or modulation for the contraction/enlargement due to the non-
linear component of the transformation, where each voxel of each registered grey matter
image was multiplied by the Jacobian of the warp field. The modulated 4D image was then
smoothed by a range of Gaussian kernels, sigma = 2, 3, 4 mm (standard sigma values in the
field of MRI data analysis), which approximately resulted in full width at half maximums
(FWHM) of 4.6, 7 and 9.3 mm. The various spatial smoothing kernels enabled us to explore
whether classification accuracy would improve by varying the spatial smoothing kernels.
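As a rough illustration of the smoothing step, the sketch below applies the same range of Gaussian kernels with nibabel and SciPy instead of the FSL-VBM tools used in this work; the file name GM_mod_merg.nii.gz and the assumption of 2 mm isotropic voxels (so that a sigma in millimetres is divided by 2 to obtain a sigma in voxels) are illustrative.

```python
import nibabel as nib
import numpy as np
from scipy.ndimage import gaussian_filter

VOXEL_SIZE_MM = 2.0           # assumed isotropic voxel size of the template space
SIGMAS_MM = (2.0, 3.0, 4.0)   # sigma values from the text (FWHM roughly 4.6, 7, 9.3 mm)

img = nib.load("GM_mod_merg.nii.gz")   # assumed 4D modulated GM image: x, y, z, subject
data = img.get_fdata()

for sigma_mm in SIGMAS_MM:
    sigma_vox = sigma_mm / VOXEL_SIZE_MM
    # Smooth only along the three spatial axes, never across subjects.
    smoothed = gaussian_filter(data, sigma=(sigma_vox, sigma_vox, sigma_vox, 0))
    out = nib.Nifti1Image(smoothed.astype(np.float32), img.affine, img.header)
    nib.save(out, f"GM_mod_merg_s{int(sigma_mm)}.nii.gz")
```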

Figure 3.10 Image after applying affine non-linear registration


Figure 3.11 Final Grey Matter Segmented Image

3.2 Conversion

All the grey matter images were converted from 3D images to 2D images using Nibabel and OpenCV, which are libraries available in Python [21-22]. A total of 4765 2D images were obtained after conversion. The first 10 and last 10 slices were then removed from each 3D image, as they showed no significant information and their mean voxel intensity was equal to zero. Therefore, a total of 3682 images were obtained for the classification of Alzheimer's; 1982 of these images belonged to the Alzheimer's class and the rest belonged to the normal class.
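A minimal sketch of this conversion is given below; the input folder, output folder and per-slice intensity scaling are illustrative assumptions, while dropping the first and last 10 axial slices follows the text above.

```python
import glob
import os

import cv2
import nibabel as nib
import numpy as np

OUT_DIR = "slices_png"
os.makedirs(OUT_DIR, exist_ok=True)

for path in glob.glob("gm_subjects/*.nii.gz"):        # assumed folder of 3D GM images
    vol = nib.load(path).get_fdata()
    subject = os.path.basename(path).split(".")[0]
    n_slices = vol.shape[2]                            # slicing along the Z axis
    for z in range(10, n_slices - 10):                 # drop first/last 10 slices
        sl = vol[:, :, z]
        if sl.max() <= 0:                              # skip empty slices
            continue
        # Rescale each slice to 0-255 and save it as an 8-bit PNG.
        img8 = np.uint8(255 * (sl - sl.min()) / (sl.max() - sl.min() + 1e-8))
        cv2.imwrite(os.path.join(OUT_DIR, f"{subject}_z{z:03d}.png"), img8)
```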

3.3 Convolution Neural Networks

Convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network in which the connectivity pattern between its neurons is inspired by the organization
of the animal visual cortex. Individual cortical neurons respond to stimuli in a restricted
region of space known as the receptive field. The receptive fields of different neurons
partially overlap such that they tile the visual field. The response of an individual neuron to
stimuli within its receptive field can be approximated mathematically by a convolution
operation. Convolutional networks were inspired by biological processes and are variations of multilayer perceptrons designed to use minimal amounts of preprocessing. They have wide
applications in image and video recognition, recommender systems and natural language
processing [23].

Convolutional neural networks (CNNs) that are inspired by the human visual system are
similar to classic neural networks. This architecture has been specifically designed based on
the explicit assumption that raw data are comprised of two- dimensional images that enable
certain properties to be encoded while also reducing the amount of hyper parameters. The
topology of CNNs utilizes spatial relationships to reduce the number of parameters that must
be learned, thus improving upon general feed-forward backpropagation training [24].
Equation 1 demonstrates how the gradient component for a given weight is calculated in the backpropagation step, where E is the error function, y is the neuron output, x is the input, l represents the layer number, ω is the filter weight with indices a and b, N is the number of neurons in a given layer, and m is the filter size.

$$\frac{\partial E}{\partial \omega_{ab}} = \sum_{i=0}^{N-m}\sum_{j=0}^{N-m} \frac{\partial E}{\partial x_{ij}^{l}}\,\frac{\partial x_{ij}^{l}}{\partial \omega_{ab}} = \sum_{i=0}^{N-m}\sum_{j=0}^{N-m} \frac{\partial E}{\partial x_{ij}^{l}}\; y_{(i+a)(j+b)}^{l-1} \qquad (1)$$

In CNNs, small portions of the image (called local receptive fields) are treated as inputs to the
lowest layer of the hierarchical structure. One of the most important features of CNNs is that
their complex architecture provides a level of invariance to shift, scale and rotation, as the
local receptive field allows the neurons or processing units access to elementary features,
such as oriented edges or corners. This network is primarily comprised of neurons having
learnable weights and biases, forming the convolutional layer. It also includes other network
structures, such as a pooling layer, a normalization layer and a fully connected layer. As
briefly mentioned above, the convolutional layer, or conv layer, computes the output of
neurons that are connected to local regions in the input, each computing a dot product
between its weight and the region it is connected to in the input volume. The pooling layer,
also known as the pool layer, performs a downsampling operation along the spatial
dimensions. The normalization layer, also known as the rectified linear units (ReLU) layer,
applies an elementwise activation function, such as max (0, x) thresholding at zero. This layer
does not change the size of the image volume [23]. The fully connected (FC) layer computes
the class scores, resulting in the volume of the number of classes. As with ordinary neural
networks, and as the name implies, each neuron in this layer is connected to all of the
numbers in the previous volume. The convolutional layer plays an important role in CNN
architecture and is the core building block in this network. The conv layer's parameters consist of a set of learnable filters. Every filter is spatially small but extends through the full
depth of the input volume. During the forward pass, each filter is convolved across the width
and height of the input volume, producing a 2D activation map of that filter. During this
convolving, the network learns filters that activate when they see some specific type of
feature at some spatial position in the input. Next, these activation maps are stacked for all
filters along the depth dimension, which forms the full output volume. Every entry in the
output volume can thus also be interpreted as an output from a neuron that only examines a
small region in the input and shares parameters with neurons in the same activation map. A
pooling layer is usually inserted between successive conv layers in CNN architecture. Its
function is to reduce (down sample) the spatial size of the representation in order to minimize
network hyper parameters, and hence also to control overfitting. The pooling layer operates
independently on every depth slice of the input and resizes it spatially using the max
operation. In convolutional neural network architecture, the conv layer can accept any image
(volume) of size W1×H1×D1 that also requires four hyper parameters, which are K, number
of filters; F, their spatial extent; S, the size of stride; and P, the amount of zero padding. The
conv layer outputs the new image, whose dimensions are W2 × H2 × D2.
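For reference, the standard relationship between these hyper parameters and the output volume is W2 = (W1 − F + 2P)/S + 1, H2 = (H1 − F + 2P)/S + 1 and D2 = K. The small helper below simply evaluates that formula; the 28×28 input and twenty 5×5 filters are an illustrative example, not the exact configuration of this project.

```python
def conv_output_shape(w1, h1, k, f, s=1, p=0):
    """Output volume W2 x H2 x D2 of a conv layer with K filters of spatial extent F."""
    assert (w1 - f + 2 * p) % s == 0, "the filter does not tile the input evenly"
    w2 = (w1 - f + 2 * p) // s + 1
    h2 = (h1 - f + 2 * p) // s + 1
    return w2, h2, k

# Example: a 28x28 input convolved with twenty 5x5 filters, stride 1, no padding.
print(conv_output_shape(28, 28, k=20, f=5))   # -> (24, 24, 20)
```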

LeNet-5 was first designed by Y. LeCun. This architecture successfully classified digits and
was applied to hand-written check numbers. The application of this fundamental but deep
network architecture expanded into more complicated problems by adjusting the network
hyper parameters. LeNet-5 architecture, which extracts low-to mid-level features, includes
two conv layers, two pooling layers, and two fully connected layers.
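A minimal LeNet-style sketch in Keras is shown below. It approximates the architecture just described rather than reproducing the exact network and training toolchain used in this project; the 28×28 single-channel input size and the filter counts are assumptions, and the SGD settings (learning rate 0.01, momentum 0.9) mirror the values reported in Chapter 4.

```python
import tensorflow as tf

def build_lenet(input_shape=(28, 28, 1), num_classes=2):
    """LeNet-5-style model: two conv layers, two pooling layers, two FC layers."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(20, kernel_size=5, activation="relu",
                               input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(50, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(500, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_lenet()
model.summary()
```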

Figure 3.12 LeNet Architecture [23]


3.4 Spiking Neural Network

In biological neural networks, a neuron generates a spike, the action potential, at the kernel of the soma. This spike, in the form of a 1-2 ms pulse, travels through the axon, which is linked to the dendrite of another neuron via synapses. The incoming spike influences the receiving neuron's membrane potential, which may cause the neuron to fire a spike. A spike can have either a positive or a negative effect on the receiving neuron, also called the postsynaptic neuron. The positive one is called an excitatory postsynaptic potential (EPSP) and the negative one an inhibitory postsynaptic potential (IPSP) [25].

In SNNs the neurons rely on the timing of the spikes in order to communicate with each other. Since the basic principle of SNNs is different from that of traditional neural networks, it is necessary to adapt the learning rules. Spiking neural networks (SNNs) (also pulse-coupled or integrate-and-fire networks) are more detailed models and use this neural code of precisely timed spikes. The input and output of a spiking neuron are described by a series of firing times, called a spike train. One firing time thus describes the time at which a neuron has sent out a pulse. Further details of the pulse, such as its shape, are neglected, because all pulses of one neuron type look alike.

The potential of a spiking neuron is modelled by a dynamic variable and works as a leaky integrator of the incoming spikes: newer spikes contribute more to the potential than older spikes. If this sum is higher than a predefined threshold, the neuron fires a spike. The refractory period and synaptic delay are also modelled. This makes an SNN a dynamic system, in contrast with sigmoidal neuron networks, which are static, and enables it to perform computation on temporal patterns in a very natural way.

Figure 3.13 Spiking Neural Network Architecture [24]


The framework consists of the development of a Spiking SOM, and it is divided into 4 main phases. Phase 1 is the data pre-processing phase, which involves generating the training sample for SOM learning. In this phase, the spiking method of neuron communication is embedded in the SOM learning algorithm. Different types of neural coding schemes are implemented to represent the input data as spike times and generate the training sample. After the training data is fed into the network, the model executes the training process and potential weights are generated in Phase 2. In Phase 3, outputs from the Spiking SOM classifier are identified and labelled according to the features and characteristics of the data. Finally, in Phase 4, the proposed Spiking SOM model is validated using classification accuracy and an error quantization method [27].
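As a concrete illustration of the Phase 1 encoding step in the outline that follows, the sketch below converts normalized input intensities into spike times using simple latency coding (stronger inputs fire earlier). The 10 ms encoding window and the example values are assumptions, since the report does not fix a particular coding scheme here.

```python
import numpy as np

def latency_encode(values, t_max=10.0):
    """Map inputs in [0, 1] to spike times in [0, t_max] ms.

    Latency coding: larger values fire earlier; a value of 0 never fires
    and is marked with infinity (no spike).
    """
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    times = (1.0 - values) * t_max
    times[values == 0.0] = np.inf
    return times

# Example: three pixel intensities become three presynaptic firing times.
print(latency_encode([1.0, 0.5, 0.0]))   # -> [ 0.  5. inf]
```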

PHASE 1: DATA PREPROCESSING


Conversion of input data into spike times by neural coding scheme.

PHASE 2: LEARNING PHASE


Training the spiking neural network.

PHASE 3: SPIKING CLASSIFIER


Identification of output to represent relationships of data.

PHASE 4: RESULT AND ANALYSIS

Classification validation:
1. Classification accuracy.
2. Error quantization.


3.4.1 Parameter Setting

Before training a network we have to set the parameters used by our learning procedure to sensible values. For some of these we can just choose a value that seems appropriate, without tuning them to improve the network's performance [28]. For other parameters it is difficult to come up with a good value, so we have to make a rough estimation using some preliminary tests. We will discuss the parameters one by one and give empirical formulas where possible.

• The number of delays per connection. There are a number of synapses with different delays d_k, with k ∈ {1, 2, ..., l}, between every input and output neuron. The delay interval (d_l − d_1) should strongly depend on the duration of the input pattern and the desired output spikes [5]. The smallest delay should be the difference between the time of the last input spike and the earliest output spike, and the largest delay should be the difference between the time of the earliest input spike and the last output spike. By doing this, every input spike can be delayed to such a degree that it can influence the desired early and late output spikes. In all the experiments we chose to distribute the delays 1 ms apart from each other, as in [5, 43], so that d_k = k ms with k ∈ {1, 2, ..., l} [27].

• Weight initialization. Before training begins the weights should be initialized to some random value. If the weights are too low the output neurons will not fire and there is no way to calculate the error with respect to the weights [25]. So it is important to make a good estimate. We chose to pick these initial weight values randomly from a uniform distribution. The average of this distribution depends on a number of things, including the fan-in of the output neurons, which is the number of synapses leading to one output neuron. In our architecture this is the product of the number of input neurons I and the number of delay lines per connection.

• The learning rate α. It is difficult to pick a suitable value for the learning rate α. If α is too high the algorithm will most likely not converge, but will change the weights with such big steps that it will overshoot the minimum and start oscillating around it [6]. If it is too low the algorithm will change the weights too slowly and thus it will take a long time before the network error reaches a minimum.

• The stopping criteria. When dealing with real-world data there is always noise present. So it would be very unlikely that the algorithm finds weight values that reduce the network error to zero. This would also be undesirable, because it would indicate that we have overfitted the data. It is better to stop training when the error drops beneath a certain threshold. The order of the error, and thus the height of the threshold, depends on a number of things: the number of training patterns, the number of output neurons and the coding scheme of the output.

Parameter          | Value
V(rest)            | −60 mV
V(threshold)       | −40 mV
V(peak)            | 35 mV
Δt                 | 0.05, 0.1, 0.2, 0.5, 1
α (learning rate)  | 0.1
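A toy leaky integrate-and-fire simulation using the resting, threshold and peak values from the table above is sketched below; the membrane time constant, the constant input current and the interpretation of Δt as milliseconds are illustrative assumptions rather than settings taken from this project.

```python
V_REST, V_THRESHOLD, V_PEAK = -60.0, -40.0, 35.0   # mV, from the parameter table
TAU_M = 10.0                                       # ms, assumed membrane time constant
DT = 0.1                                           # ms, one of the listed step sizes

def simulate_lif(input_current, duration=100.0):
    """Integrate dV/dt = (-(V - V_rest) + I) / tau and collect spike times."""
    v = V_REST
    spikes = []
    for step in range(int(duration / DT)):
        v += DT * (-(v - V_REST) + input_current) / TAU_M
        if v >= V_THRESHOLD:            # threshold crossed: the neuron fires
            spikes.append(step * DT)    # (the emitted spike peaks at V_PEAK)
            v = V_REST                  # reset the membrane potential
    return spikes

print(simulate_lif(input_current=25.0))  # constant suprathreshold drive -> regular spikes
```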

3.5 Neural Cooperative Coevolutionary Algorithm

The basic hypothesis underlying this algorithm is that, to apply a coevolutionary algorithm (CEA) effectively to detect Alzheimer's Disease, cooperative coevolution (CoCo) is used, which divides a large problem into small subcomponents that are denoted using subpopulations, where each subcomponent uses a separate evolutionary algorithm. The subcomponents evolve in a round-robin method and are inherently isolated. The only cooperation that takes place is during the fitness evaluation.

The original CoCo framework used a separate subcomponent for each variable and was only effective for problems which are separable [29]. In separable problems there is no interdependency between the decision variables, but in non-separable problems interdependencies exist. In most cases, groups of interacting and non-interacting variables exist, which determines the degree of non-separability, i.e. the nature of the problem in terms of the interacting variables. The performance of evolutionary algorithms deteriorates when the problem becomes significantly complex and large. Just like other evolutionary algorithms, CoCo also faces the problem of scalability, which has been observed for non-separable problems. As CoCo neural networks have been used for training feedforward [30] and recurrent neural networks [29], the attention has not been on the issue of separability and interaction between the variables.

We use the adaptive modular architecture to solve this particular problem. Other architectures in use are COVNET [29] and MOBNET [30].

Types of Coevolution

• By evaluation

– Competitive

– Cooperative

• By population organization

– Inter-population

– Intra-population

Of the above types of coevolution, we use coevolution by evaluation with the cooperative approach, as it is a better approach than the competitive one.

Although there are many subcomponent design methodologies, only two major ones produce efficient solutions. These are subcomponent design at the neuron level and at the synapse level.

Neuron Level: It uses each neuron in the hidden layer as the main reference point for the
respective subcomponent.

• Each subcomponent consists of the incoming and outgoing connections.

• Cooperative coevolutionary (Coco) model for evolving artificial neural networks


(COVNET [29]) and multi-objective cooperative networks (MOBNET [30]) build the
given subcomponents by encoding input and output connections to the respective
hidden neuron.

• They have been used for training feedforward network architectures. This encoding scheme is similar to that of enforced subpopulations (ESP) for training recurrent neural networks, which has been applied to pole balancing problems.


Synapse Level: In the synapse level encoding, each weight or link in the network forms a
subcomponent of the CoCo framework.

• The cooperatively coevolved synapse neuro evolution (CoSyNE) algorithm was used
for training feedforward and recurrent networks on pole balancing problems.

• In this encoding scheme, a subcomponent represents a single interconnection which


is either the weight or bias in the network. Therefore, the number of subpopulations
depends on the number of weights and biases.

3.5.1 Adaptation of Modules in Cooperative Coevolution of Feedforward Networks

This section introduces the framework for the adaptation of modules in cooperative coevolution (AMCC) for training feedforward neural networks (FNNs). The AMCC framework changes its level of modularity during evolution. It employs the level of modularity which provides a greater level of flexibility (allowing evolution in a separable search space) during the initial stage and decreases the level of modularity during the later stages of evolution.

The modularity of the AMCC-FNN framework can transform from synapse level encoding (CoSyNE) to neuron level encoding (NSP) and finally to network level encoding (EA). The general AMCC framework employs all three levels of encoding for the adaptation of modularity. The respective levels of encoding are further described as follows, with a small module-counting sketch after the list:

• Synapse level encoding: Decomposes the network into its lowest level to form a
single module [8]. The number of connections in the network determines the number
of modules.
• Neuron level encoding: Decomposes the network into the neuron level. The number
of neurons in the hidden and output layers determines the number of modules [31].
• Network level encoding: The standard neuro evolutionary encoding scheme where
only one population represents the entire network. There is no decomposition present
in this level of encoding.
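To make the three levels concrete, the small helper below counts how many modules (subpopulations) each encoding produces for a fully connected feedforward network; the 4-3-2 topology in the example is purely illustrative.

```python
def module_counts(n_in, n_hidden, n_out):
    """Number of CoCo modules under synapse, neuron and network level encodings."""
    weights = n_in * n_hidden + n_hidden * n_out   # one module per connection weight
    biases = n_hidden + n_out                      # plus one per bias
    synapse_level = weights + biases
    neuron_level = n_hidden + n_out                # one module per hidden/output neuron
    network_level = 1                              # a single population for the whole net
    return synapse_level, neuron_level, network_level

# Example: a 4-3-2 feedforward network.
print(module_counts(4, 3, 2))   # -> (23, 5, 1)
```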

3.5.2 Cooperative Coevolutionary (Coco) Architecture

The generalized architecture for evolving interacting co-adapted subcomponents is the Cooperative Coevolutionary (CoCo) architecture, which models an ecosystem consisting of two or more species. As in nature, the species are genetically isolated, meaning that individuals only mate with other members of their own species. Restrictions are imposed simply by evolving the species in separate populations. The species interact with one another within a shared domain model and have a cooperative relationship.

Figure 3.14 Step 1 of Neuro COCO Algorithm

Figure 3.15 Step 2 of Neuro COCO Algorithm


3.5.3 Function Optimization (Coco)

A general implementation of the algorithm uses the function optimization technique. For a problem that consists of specifying the values of N parameters (variables), a natural decomposition is to maintain N subpopulations (species), each of which contains competing values for a particular parameter. One can then assign fitness to a particular value (member) of a particular subpopulation by assembling it along with selected members of the other subpopulations to form one or more N-dimensional vectors whose fitness can be calculated in the normal fashion, and using these results we assign fitness to the individual components being evaluated. That is, the fitness of a particular member of a particular species is computed by estimating how well it "cooperates" with the other species (subpopulations) to produce good solutions.
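The sketch below illustrates this scheme on a toy separable problem (minimizing the sphere function): one subpopulation per variable, the fitness of a candidate value obtained by combining it with the current best members of the other subpopulations, and simple mutation-based evolution. The population size, mutation scale and generation count are illustrative assumptions, not parameters used in this project.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective to minimize: sum of squares, optimum at the origin."""
    return float(np.sum(np.square(x)))

N_VARS, POP_SIZE, GENERATIONS = 4, 20, 200
# One subpopulation of candidate values per decision variable.
subpops = [rng.uniform(-5.0, 5.0, POP_SIZE) for _ in range(N_VARS)]
best = np.array([pop[0] for pop in subpops])       # current collaborators

for _ in range(GENERATIONS):
    for i, pop in enumerate(subpops):
        # Fitness of each member: plug it into the vector of current collaborators.
        fitness = []
        for value in pop:
            trial = best.copy()
            trial[i] = value
            fitness.append(sphere(trial))
        order = np.argsort(fitness)                # lower is better
        best[i] = pop[order[0]]                    # update the collaborator for variable i
        # Next generation: keep the better half, refill with mutated copies.
        survivors = pop[order[: POP_SIZE // 2]]
        children = survivors + rng.normal(0.0, 0.1, survivors.shape)
        subpops[i] = np.concatenate([survivors, children])

print("best solution:", np.round(best, 3), "fitness:", round(sphere(best), 6))
```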

3.5.4 Analysis of Decomposition Capability

In the following empirical analysis we explore whether the algorithm outlined above is
capable of producing good problem decompositions purely as a result of evolutionary
pressure to increase the overall fitness of the ecosystem. We will describe four studies each
designed to answer one of the following questions concerning the ability of this algorithm to
decompose problems:

• Will species trace and cover multiple environmental niches?


• Will species evolve to an appropriate level of generality?
• Will adaptation occur as the number and role of species change?
• Will an appropriate number of species emerge?

Major Issues

• Problem decomposition

• Interdependency of subcomponents

• Maintain diversity during search

• Credit assignment

• Maintaining performance of EA as the problem becomes significantly large.


The training set is defined as a set of examples used for learning, that is, to fit the parameters [i.e., weights] of the classifier. The validation set is defined as a set of examples used to tune the parameters [i.e., architecture, not weights] of a classifier, for example to choose the number of hidden units in a neural network. The test set is defined as a set of examples used only to assess the performance [generalization] of a fully specified classifier. Accuracy refers to the percentage of correctly classified examples in the dataset after the training and testing phases.
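A minimal sketch of the 75%/25% split and the accuracy computation used in this chapter is given below, using scikit-learn; the random arrays merely stand in for the real PNG slices and their AD/NL labels.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data: 3682 flattened slice images with binary AD/NL labels.
X = np.random.rand(3682, 28 * 28)
y = np.random.randint(0, 2, size=3682)

# 75% of the data for training, 25% for testing, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

# `predictions` would come from the trained classifier; a majority-class dummy is used here.
majority = int(round(y_train.mean()))
predictions = np.full_like(y_test, majority)
print("test accuracy:", accuracy_score(y_test, predictions))
```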


Chapter 4

Result and Conclusion


Chapter 4
Result and Conclusion

The different neural network models were initially set for 10 epochs and trained with stochastic gradient descent with gamma = 0.1, momentum = 0.9, a base learning rate = 0.01, a weight decay = 0.0005, and a step learning rate policy dropping the learning rate in steps by a factor of gamma every step-size iterations. Next, the models were trained and tested on 75% and 25% of the data, respectively. The comparative results are shown in the table below along with the architectures used for the neural network models.

Table 4.1 The accuracy of testing is shown below. As shown, a very high level of accuracy is achieved for both the Convolution Neural Network and the Spiking Neural Network. The Convolution Neural Network is slightly better than the Spiking Neural Network and the Cooperative Coevolutionary Neural Network.

Neural Network                           | Architecture               | Learning Rate | Epoch | Test Accuracy (AD vs NL)
Convolution Neural Network               | LeNet-5                    | 0.01          | 5     | 93.15%
Convolution Neural Network               | LeNet-5                    | 0.01          | 10    | 97.26%
Convolution Neural Network               | LeNet-5                    | 0.01          | 15    | 97.84%
Convolution Neural Network               | LeNet-5                    | 0.01          | 20    | 97.84%
Convolution Neural Network               | LeNet-5                    | 0.5           | 5     | 89.26%
Convolution Neural Network               | LeNet-5                    | 0.5           | 10    | 92.12%
Convolution Neural Network               | LeNet-5                    | 0.5           | 15    | 94.73%
Convolution Neural Network               | LeNet-5                    | 0.5           | 20    | 96.39%
Spiking Neural Network                   | SpikeProp                  | 0.01          | 5     | 92.62%
Spiking Neural Network                   | SpikeProp                  | 0.01          | 10    | 94.49%
Spiking Neural Network                   | SpikeProp                  | 0.01          | 15    | 94.65%
Spiking Neural Network                   | SpikeProp                  | 0.01          | 20    | 94.87%
Spiking Neural Network                   | SpikeProp                  | 0.5           | 5     | 84.63%
Spiking Neural Network                   | SpikeProp                  | 0.5           | 10    | 89.74%
Spiking Neural Network                   | SpikeProp                  | 0.5           | 15    | 91.45%
Spiking Neural Network                   | SpikeProp                  | 0.5           | 20    | 93.65%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.01          | 5     | 91.32%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.01          | 10    | 92.53%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.01          | 15    | 91.89%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.01          | 20    | 93.69%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.5           | 5     | 83.65%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.5           | 10    | 89.63%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.5           | 15    | 91.35%
Cooperative Coevolutionary Neural Network | AMC-FNN [11] (Modularity) | 0.5           | 20    | 91.75%


The best accuracy in the case of the Convolution Neural Network was 97.84%, for a number of epochs equal to 15 and a learning rate equal to 0.01. For the Spiking Neural Network, the best accuracy was achieved for epochs equal to 20 and a learning rate equal to 0.01, the accuracy being 94.87%. As shown in the table, for a learning rate equal to 0.5 the neural networks do not perform as well as their counterparts. The accuracy of testing for epochs equal to 15 is represented in the graphs given below for both neural networks and learning rates.

[Chart: Accuracy Test vs Loss Test for the CNN with learning rate = 0.01, plotted over 15 epochs; legend: Accuracy, Loss]

Figure 4.1 AD vs NL for CNN with learning rate = 0.01

[Chart: Accuracy Test vs Loss Test for the CNN with learning rate = 0.5, plotted over 15 epochs; legend: Accuracy, Loss]

Figure 4.2 AD vs NL for CNN with learning rate = 0.5


[Chart: Accuracy Test vs Loss Test for the SNN with learning rate = 0.5, plotted over 15 epochs; legend: Accuracy, Loss]

Figure 4.3 AD vs NL for SNN with learning rate = 0.5

[Chart: Accuracy Test vs Loss Test for the SNN with learning rate = 0.01, plotted over 15 epochs; legend: Accuracy, Loss]

Figure 4.4 AD vs NL for SNN with learning rate = 0.01

In order to distinguish brains affected by Alzheimer's disease from normal healthy brains in older adults, this study presented a pipeline, including extensive pre-processing modules and deep learning-based classifiers, using structural MRI data. Scale- and shift-invariant low- to high-level features were extracted from a massive volume of whole-brain data using a convolutional neural network architecture, resulting in a highly accurate and reproducible predictive model.

In this study, we find that Convolution neural network performs better than Spiking neural
network by a slight margin for learning rate equal to 0.01.


This cutting-edge deep learning-based framework points to a number of applications in classifying brain disorders in both clinical trials and large-scale research studies. This study also demonstrated that the developed pipelines serve as fruitful algorithms for characterizing multimodal MRI biomarkers. In conclusion, the proposed methods demonstrate strong potential for predicting the stages of the progression of Alzheimer's disease and classifying the effects of aging in the normal brain.

The proposed methodology could be extended to include algorithms such as evolutionary algorithms, hybrid algorithms, and nature-inspired algorithms. A comparison of various different algorithms will provide a clear perception of the performance of neural networks for Alzheimer's classification and will lead to the development of an algorithm which could be commercialized and used widely.

The figure shown below demonstrates the twenty 5×5 filters of the first convolution layer for the MRI models.

Figure 4.5 The filters of the first convolution layer of the LeNet architecture


Figure 4.6 Features of First Convolution Layer


REFERENCES

1. Mareeswari, S., and Dr G. Wiselin Jiji. "A survey: Early detection of Alzheimer's
disease using different techniques." International Journal on Computational Sciences
& Applications (IJCSA), Vol. 5.
2. Laske, Christoph, Hamid R. Sohrabi, and Shaun M. Frost. Review article: Innovative
diagnostic tools for early detection of Alzheimer's disease.
3. Vemuri, Prashanthi, David T. Jones, and Clifford R. Jack. "Resting state functional
MRI in Alzheimer's Disease." Alzheimer's research & therapy 4.1 (2012): 1.
4. Warsi, Mohammed A. "The Fractal Nature and Functional Connectivity of Brain
Function as Measured by BOLD MRI in Alzheimer’s Disease." (2012).
5. Cheryl L Grady, Anthony R McIntosh, SaniaBeig, Michelle L Keightley, Hana
Burian, and Sandra E Black. Evidence from functional neuroimaging of a
compensatory prefrontal network in alzheimer’s disease.
6. Cheryl L Grady, Maura L Furey, Pietro Pietrini, Barry Horwitz, and Stanley I
Rapoport. Altered brain functional connectivity and impaired short-term memory in
alzheimer’s disease. Brain, 124(4):739–756, 2001
7. Evanthia E. Tripoliti, Dimitrios I. Fotiadis, and Maria Argyropoulou. A supervised
method to assist the diagnosis and classification of the status of Alzheimer's disease
using data from an fMRI experiment.
8. In 2008 30th Annual International Conference of the IEEE Engineering in Medicine
and Biology Society, pages 4419–4422. IEEE, 2008.
9. Allan Raventós and Moosa Zaidi. Automating neurological disease diagnosis using
structural MR brain scan features.
10. Sarraf, S., & Tofighi, G. (2016). Classification of Alzheimer's Disease Structural MRI
Data by Deep Learning Convolutional Neural Networks. arXiv, 1–14. Retrieved from
http://arxiv.org/abs/1603.08631
11. Wyman, B. T., Harvey, D. J., Crawford, K., Bernstein, M. A., Carmichael, O., Cole,
P. E., … Jack, C. R. (2013). Standardization of analysis sets for reporting results from
ADNI MRI data. Alzheimer’s and Dementia. http://doi.org/10.1016/j.jalz.2012.06.004
12. Sarraf, S., Tofighi, G., & the Alzheimer's Disease Neuroimaging Initiative (2016).
DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks
using MRI and fMRI. http://dx.doi.org/10.1101/070441


13. Liu, S., Liu, S., Member, S., Cai, W., Pujol, S., Kikinis, R., & Feng, D. (2014).
[poster] Early Diagnosis of Alzheimer’S Disease With Deep Learning, (Md), 1–4.
14. Sørensen, L., Igel, C., Liv Hansen, N., Osler, M., Lauritzen, M., Rostrup, E., &
Nielsen, M. (2015). Early detection of Alzheimer’s disease using MRI hippocampal
texture. Human Brain Mapping, 0(August), n/a–n/a.
http://doi.org/10.1002/hbm.23091
15. Scheidegger, M. (2016). Multimodal Neuroimaging & Depression, (March), 2–6.
16. Li, F., Tran, L., Thung, K. H., Ji, S., Shen, D., & Li, J. (2015). A Robust Deep Model
for Improved Classification of AD/MCI Patients. IEEE Journal of Biomedical and
Health Informatics, 19(5), 1610–1616. http://doi.org/10.1109/JBHI.2015.2429556
17. Lecun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(1), 436–444.
http://doi.org/10.1038/nature14539
18. Douaud, G., Smith, S., Jenkinson, M., Behrens, T., Johansen-Berg, H., Vickers, J., …
James, A. (2007). Anatomically related grey and white matter abnormalities in
adolescent-onset schizophrenia. Brain, 130(9), 2375–2386.
http://doi.org/10.1093/brain/awm184
19. "FSL - Fslwiki". Fsl.fmrib.ox.ac.uk. N.p., 2016. Web. 11 Nov. 2016.
20. "Neuroimaging In Python — Nibabel 2.1.1Dev Documentation". Nipy.org. N.p.,
2016. Web. 11 Nov. 2016.
21. "Opencv | Opencv". Opencv.org. N.p., 2016. Web. 11 Nov. 2016.
22. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning
applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2323.
http://doi.org/10.1109/5.726791
23. Erhan, D., Courville, A., & Vincent, P. (2010). Why Does Unsupervised Pre-training
Help Deep Learning ? Journal of Machine Learning Research, 11, 625–660.
http://doi.org/10.1145/1756006.1756025
24. Grüning, André, and Sander M. Bohte. "Spiking Neural Networks: Principles and
Challenges." European Symposium on Artificial Neural Networks, pages 14–35.
Booij, Olaf. "Temporal pattern classification using spiking neural networks." Unpublished
master's thesis, University of Amsterdam (August 2004).
25. Basegmez, Erdem, and Jörg Conradt. "The Next Generation Neural Networks: Deep
Learning and Spiking Neural Networks" .
26. Yusob, Bariah, Siti Mariyam Hj Shamsuddin, and Haza Nuzly Abdull Hamed.
"Spiking Self-Organizing Maps for Classification Problem."


27. Brette, Romain, et al. "Simulation of networks of spiking neurons: a review of tools
and strategies." Journal of computational neuroscience 23.3 (2007): 349-398.
28. Ghosh-Dastidar, Samanwoy, and Hojjat Adeli. "Spiking neural
networks." International journal of neural systems 19.04 (2009): 295-308.
29. Ghosh-Dastidar, Samanwoy, and Hojjat Adeli. "A new supervised learning algorithm
for multiple spiking neural networks with application in epilepsy and seizure
detection." Neural networks 22.10 (2009): 1419-1431.
30. Rossello, Josep L., et al. "Chaos-based mixed signal implementation of spiking
neurons." International Journal of Neural Systems 19.06 (2009): 465-471.
31. Iglesias, Javier, and Alessandro EP Villa. "Emergence of preferred firing sequences in
large spiking neural networks during simulated neuronal development." International
Journal of Neural Systems 18.04 (2008): 267-277.
32. García-Pedrajas, Nicolás, César Hervás-Martínez, and José Muñoz-Pérez. "COVNET:
a cooperative coevolutionary model for evolving artificial neural networks." IEEE
Transactions on neural networks 14.3 (2003): 575-596.
33. García-Pedrajas, Nicolás, César Hervás-Martínez, and José Muñoz-Pérez. "Multi-
objective cooperative coevolution of artificial neural networks (multi-objective
cooperative networks)." Neural networks: the official journal of the International
Neural Network Society 15.10 (2002): 1259-1278
34. Chandra, Rohitash, Marcus Frean, and Mengjie Zhang. "An encoding scheme for
cooperative coevolutionary feedforward neural networks." Australasian Joint
Conference on Artificial Intelligence. Springer Berlin Heidelberg, 2010.
