Neural Network For Cancer Diagnosis
DECLARATION OF ORIGINALITY
UNIVERSITY OF PRETORIA
The University of Pretoria places great emphasis upon integrity and ethical conduct in the
preparation of all written work submitted for academic evaluation.
While academic staff teach you about referencing techniques and how to avoid plagiarism,
you too have a responsibility in this regard. If you are at any stage uncertain as to what is
required, you should speak to your lecturer before any written work is submitted.
You are guilty of plagiarism if you copy something from another author's work (e.g. a
book, an article or a website) without acknowledging the source and pass it off as your own.
In effect you are stealing something that belongs to someone else. This is not only the case
when you copy work word-for-word (verbatim), but also when you submit someone else's work
in a slightly altered form (paraphrase) or use a line of argument without acknowledging it. You
are not allowed to use work previously produced by another student. You are also not allowed
to let anybody copy your work with the intention of passing it off as his/her work.
Students who commit plagiarism will not be given any credit for plagiarised work. The
matter may also be referred to the Disciplinary Committee (Students) for a ruling. Plagiarism
is regarded as a serious contravention of the University's rules and can lead to expulsion from
the University.
The declaration which follows must accompany all written work submitted while you are a
student of the University of Pretoria. No written work will be accepted unless the declaration
has been completed and attached.
Declaration
1. I understand what plagiarism is and am aware of the University's policy in this regard.
2. I declare that this assignment report is my own original work. Where other people's work
has been used (either from a printed source, the Internet or any other source), this has been
properly acknowledged and referenced in accordance with departmental requirements.
3. I have not used work previously produced by another student or any other person to
hand in as my own.
4. I have not allowed, and will not allow, anyone to copy my work with the intention of
passing it off as his or her own work.
SIGNATURE:
DATE:
Contents
1.1 Introduction
1.2 Problem Definition and Methodology
    1.2.1 Hidden Layer Size and Amount
    1.2.2 Learning Rate
1.3 Results and Graphs
    1.3.1 Instability of Using a Single-Neuron Hidden Layer
    1.3.2 Results for Different Hidden Layer Sizes
    1.3.3 The Effect of Learning Rate
1.4 Discussion and Conclusion
    1.4.1 Effects of Hidden Layer Size and the Dataset
    1.4.2 The Effect of Learning Rate
    1.4.3 Final Properties and Stop Condition Notes
1.5 Appendix A: Hidden Neurons
1.6 Appendix B: Learning Rate Results
1.1 Introduction
1.2 Problem Definition and Methodology
1.2.1 Hidden Layer Size and Amount
1.2.2 Learning Rate
For the initial observations, a relatively slow but still visible learning rate (η) of
about 0.01 [4] will be used with limited stop conditions to determine the optimal
network size; further learning rates will then be simulated in neural networks of
the best size(s) found.
1.3 Results and Graphs
1.3.1 Instability of Using a Single-Neuron Hidden Layer
1.3.2 Results for Different Hidden Layer Sizes
1.3.3 The Effect of Learning Rate
The following plot is based on the raw data from Appendix B.
1.4 Discussion and Conclusion
1.4.1 Effects of Hidden Layer Size and the Dataset
Advantage               Disadvantage
Fast Train Time         Poor Performance
Good Performance        Increased Train Times
Good Performance        Very Long Train Times

Figure 1.4: A table of trade-offs for small and large hidden layers
We can conclude, then, that the best compromise is a hidden layer of 6 to 7
neurons.
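As a concrete illustration of the concluded configuration, a network with a hidden layer in the recommended range might be laid out as below. This is a sketch only: the input width of 9 and the sigmoid activation are assumptions, while the 7 hidden neurons follow the 6-to-7 recommendation above.

```python
import numpy as np

rng = np.random.default_rng(1)

# 9 inputs (assumed), 7 hidden neurons (within the recommended 6-7), 1 output.
layer_sizes = (9, 7, 1)
weights = [rng.normal(scale=0.5, size=(a, b))
           for a, b in zip(layer_sizes, layer_sizes[1:])]

def forward(x, weights):
    """Feed-forward pass through a fully connected sigmoid network."""
    for W in weights:
        x = 1.0 / (1.0 + np.exp(-(x @ W)))
    return x

out = forward(rng.random((1, 9)), weights)
```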
1.4.2 The Effect of Learning Rate
1.4.3 Final Properties and Stop Condition Notes
Bibliography
[1] R. Collobert, "Deep learning for efficient discriminative parsing," in International Conference on Artificial Intelligence and Statistics, no. EPFL-CONF-192374, 2011.
[2] J. Heaton, Introduction to Neural Networks with Java. Heaton Research,
Inc., 2008.
[3] P. McCullagh and J. A. Nelder, Generalized Linear Models,
vol. 2. Chapman and Hall, London, 1989.
[4] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach.
Pearson, third ed., 2010.
1.5 Appendix A: Hidden Neurons
1.6 Appendix B: Learning Rate Results