In Collaboration With
University of Westminster, UK
AUTOSKREENR
AUTOMATED SKIN CANCER SCREENING USING
OPTIMIZED CNN
A dissertation by
Supervised by
W1654542 | 2016131
May 2020
© The copyright for this project and all its associated products resides with Informatics Institute of
Technology
AutoSkreenr – Automated Skin Cancer Screening using Optimized CNN
Declaration
I hereby certify that this project report and all the artefacts associated with it is my own
work and it has not been submitted before nor is currently being submitted for any degree
programme.
Signature:……………………. Date:…………………….
Abstract
Cancer is one of the deadliest diseases known to mankind, and skin cancer is a common form
of cancer that mainly occurs in people with lighter skin due to a lack of melanin
pigmentation. It is less fatal in its benign form but deadly if malignant. The survival rate for
malignant skin cancer is high if diagnosed at an early stage but drops significantly when
diagnosed at a later stage. Statistics show that the number of skin cancer cases is on the
rise in many western countries while the number of experienced dermatologists has stayed
constant, resulting in a shortage.
Due to this, there is an increased need for an automated skin cancer screening tool that can aid
a dermatologist in the screening process. In this project, the various aspects involved in
building an automated tool for skin cancer screening are researched. A segmentation
model was built to improve the segmentation accuracy on the ISIC 2018 dataset by using
the newer SOTA architecture, EfficientNet. Experiments were carried out to identify how
preprocessing and segmentation affect the prediction capabilities of the classification
model. Two experiments were carried out: one involved the use of preprocessed and
segmented images while the other involved the use of non-preprocessed images. The
classifier was optimized via the use of bio-inspired optimization algorithms. COA and GA
were compared for this purpose, and the results showed that GA tends to perform better.
The proposed GA has a new variation, called population variance, which is intended to
improve the diversity of the solutions produced by the algorithm. The VGG-16 architecture
was optimized using the GA and trained on both sets of data to identify the better approach.
The segmentation model shows promising results, while further research needs to be
conducted to investigate the classification model.
Finally, a web application was built that allows dermatologists to upload dermoscopic
images, obtain predictions, and receive visual reasoning for each prediction made.
Keywords
Acknowledgement
Good research requires a great deal of effort and will. Research is challenging, and the
many challenges it presents need to be faced with confidence. That confidence comes not
only from within but also from those around us. I owe a great many thanks to the many
people who helped me and put up with me throughout this project. It would not
have been possible to complete this research without their support and guidance.
I thank Mr. Achala Chathuranga Aponso for providing me with all the guidance I needed
throughout the research to make this project a success. Without his guidance, advice and
support this project would not have been possible.
Publication
A Critical Analysis of Computer Aided Approaches for Skin Cancer Screening
Content: A review of the steps involved in building an automated skin cancer screening
tool and of the existing approaches that have been explored.
Table of Contents
Declaration .......................................................................................................................... 2
Abstract ............................................................................................................................... 3
Acknowledgement .............................................................................................................. 4
Publication .......................................................................................................................... 5
Table of Contents ............................................................................................................... 6
List of Figures ................................................................................................................... 12
List of Tables .................................................................................................................... 13
List of Abbreviations ........................................................................................................ 14
1. Chapter 1: Introduction ............................................................................................. 15
1.1. Chapter Overview .............................................................................................. 15
1.2. Background ........................................................................................................ 15
1.3. Problem Statement ............................................................................................. 17
1.4. Research Question .............................................................................................. 18
1.5. Research Aim ..................................................................................................... 19
1.6. Research Motivation .......................................................................................... 19
1.7. Research Objectives ........................................................................................... 19
1.8. Related Work...................................................................................................... 20
1.9. Project Scope ...................................................................................................... 21
1.10. Rich Picture ........................................................................................................ 22
1.11. Resource Requirements .................................................................................. 23
1.11.1. Software Requirements ........................................................................... 23
1.11.2. Hardware Requirements .......................................................................... 23
1.11.3. Data Requirements .................................................................................. 23
1.12. Chapter Summary ........................................................................................... 23
2. Chapter 2: Literature Review ........................................................................................ 24
2.1. Chapter Overview .................................................................................................. 24
2.2. Conceptual Graph .................................................................................................. 24
2.3. Domain Justification .............................................................................................. 24
2.4. Literature Review of the Domain .......................................................................... 25
2.4.1. Skin Cancer Screening Techniques ................................................................ 25
2.4.2. Computer Aided Skin Cancer Screening ........................................................ 27
List of Figures
Figure 1 Rich Picture ........................................................................................................ 22
Figure 2 High level use case diagram. .............................................................................. 61
Figure 3 High Level Architecture diagram of the proposed system. ................................ 64
Figure 4 Class Diagram for the proposed solution. .......................................................... 65
Figure 5 Sequence Diagram for Classify Image Use Case ............................................... 66
Figure 6 Activity diagram of the proposed system. .......................................................... 67
Figure 7 Vanilla Genetic Algorithm ................................................................................. 68
Figure 8 Coyote Optimization Algorithm ......................................................................... 69
Figure 9 ER Diagram for the Web Application Database ................................................ 70
Figure 10 Convolution Dense Block Representation........................................................ 73
Figure 11 Single-Point Crossover ..................................................................................... 74
Figure 12 Original Image of Skin Lesion ......................................................................... 78
Figure 13 Contrast Enhancement of Image ...................................................................... 78
Figure 14 Original Image - Not Preprocessed .................................................................. 78
Figure 15 Preprocessed Image - Post Artifact Removal ................................................... 78
Figure 16 Preprocessing of a Single Image Function ....................................................... 79
Figure 17 UNet Architecture described ............................................................................ 80
Figure 18 Hyperparameters of the UNet - EfficientNetB3 Model ................................... 81
Figure 19 UNet Model Define with EfficientNetB3 Feature Extractor ............................ 81
Figure 20 UNet Model Compile with Optimizer, Loss Function and Metrics ................. 81
Figure 21 UNet Model Training with Callback to save weights if any improvement ...... 81
Figure 22 Semantic Segmentation Results – ISIC 2019 ................................................... 82
Figure 23 Enhance the Mask and Apply on original image.............................................. 82
Figure 24 Preprocessed Image – ISIC 2019 ..................................................................... 83
Figure 25 Generated Mask – ISIC 2019 ........................................................................... 83
Figure 26 Mask applied on preprocessed image. .............................................................. 83
Figure 27 Visual Reasoning – ISIC 2019 ......................................................................... 83
Figure 28 Statistical test results for the Segmentation Model. ......................................... 93
Figure 29 Segmentation model result on an image from ISIC 2018 Dataset. .................. 93
Figure 30 Segmentation model result on an image from ISIC 2019 Dataset. .................. 93
Figure 31 Statistical test results for Classification Model. ............................................... 93
Figure 32 Model Accuracy - Train vs Validation ............................................................. 94
Figure 33 Model Loss - Train vs Validation ..................................................................... 94
Figure 34 Model F1-Score - Train vs Validation.............................................................. 94
Figure 35 Model Recall - Train vs Validation .................................................................. 94
Figure 36 Model Evaluation - Train Data ......................................................................... 94
Figure 37 Model Evaluation - Test Data........................................................................... 94
Figure 38 Evaluation Criteria to evaluate the project. ....................................................... 97
Figure 39 Self Evaluation of the Project ......................................................................... 105
Figure 40 Completion Status of the Functional Requirements of the Project ................ 105
Figure 41 Utilizing knowledge from the degree. ............................................................ 108
Figure 42 The equation to calculate Intersection Over Union ........................................ 131
List of Tables
Table 1 In Scope and Out of Scope .................................................................................. 22
Table 2 Software Requirements of Project. ...................................................................... 23
Table 3 Hardware Requirements of Project. ..................................................................... 23
Table 4 Summary of findings based on existing work – Image Classification. ............... 33
Table 5 Summary of findings based on existing work - Image Segmentation. ................ 38
Table 6 Risk assessment and Mitigation ........................................................................... 49
Table 7 Compliance with BCS Code of Conduct. ............................................................ 50
Table 8 SLEP Analysis ..................................................................................................... 51
Table 9 Stakeholders, their roles and how they benefit from the proposed tool. ............. 53
Table 10 Analysis of requirements elicitation by Observing existing systems. ............... 54
Table 11 Analysis of requirements elicitation by distributing Questionnaires. ................ 54
Table 12 Analysis of requirements elicitation by Brainstorming. .................................... 55
Table 13 Analysis of requirements elicitation by Literature review. ................................ 55
Table 14 Findings of the Interviews. ................................................................................ 58
Table 15 Functional Requirements of the Proposed System and Algorithm. ................... 59
Table 16 Non-Functional Requirements of the Proposed System and Algorithm............ 60
Table 17 Use case for description for Classify Image. ..................................................... 62
Table 18 Design Goal intended to achieve via this research. ........................................... 63
Table 19 ISIC 2018 and 2019 Dataset Description .......................................................... 72
Table 20 Hyperparameters to Tune the Architecture. ....................................................... 73
Table 21 Parameters that manipulated the Genetic Algorithm optimization. ................... 75
Table 22 Results of carrying out the hyperparameter tuning on the MNIST dataset. ....... 75
Table 23 Parameters that manipulated the Coyote Optimization Algorithm. ................... 76
Table 24 Results of carrying out the hyperparameter tuning on the MNIST dataset. ....... 76
Table 25 Summary of steps involved in Experiment 1 and 2. .......................................... 83
Table 26 Performance of all the models tried out. ............................................................ 84
Table 27 Hyperparameters optimized and the search space. ............................................ 84
Table 28 Parameters and Values of Genetic Algorithm ................................................... 85
Table 29 Values obtained for parameters upon hyperparameter tuning completion. ........ 85
Table 30 Web App code to add a new patient to the database. ........................................ 85
Table 31 Web App code to view a single patient with all the predictions........................ 86
Table 32 Web App code to validate a prediction made via the application. .................... 86
Table 33 Black Box Test Cases, Expected and Actual Outputs and the status of each test
case. ................................................................................................................................... 89
Table 34 White Box Test Cases, Expected and Actual Outputs and the status of each test
case. ................................................................................................................................... 91
Table 35 Feedback on the overall concept and project idea. ............................................ 99
Table 36 Objectives Completeness ................................................................................. 107
List of Abbreviations
Abbreviation Definition
USA United States of America
UV Ultra-Violet
ANN Artificial Neural Network
CNN Convolutional Neural Network
GA Genetic Algorithm
COA Coyote Optimization Algorithm
PSO Particle Swarm Optimization
TL Transfer Learning
WHO World Health Organization
SOTA State of the Art
ABCD Asymmetry, Border, Color and Diameter
ROI Region of Interest
1. Chapter 1: Introduction
1.1. Chapter Overview
The introduction chapter provides an overview of the project being undertaken. The project
background and the problem background are both defined. The importance of the project
is addressed along with the research challenges. Related work is then discussed. The
objectives that need to be achieved to successfully complete the project are defined,
followed by the project scope. Finally, the resource requirements are discussed.
1.2. Background
Cancer is a disease that has no known cure and is considered to be one of the
deadliest diseases in the world. One of the common forms of cancer is skin cancer. People
with lighter skin, those exposed to UV light for prolonged periods and people with a higher
count of moles on the skin are known to be at higher risk of skin cancer. For lighter-skinned
people, this is due to the lack of melanin pigmentation in their skin: melanin is known to
protect the skin from UV light, which is a direct cause of skin cancer (Oliveira et al., 2016),
(Reliant Medical Group, 2019). Not all cancers of the skin are harmful. Skin cancer can be
either benign or malignant, where benign skin lesions are less harmful or not harmful at all,
whereas malignant skin lesions are cancerous and therefore harmful. Melanoma of the skin
is the most common form of malignant skin cancer according to the AAD but is less
common when compared to benign skin lesions (American Association of Dermatology,
2019). Although benign skin lesions are not harmful, they do come with cosmetic
disadvantages for the patient. In the USA, the deadliest form of skin cancer is melanoma
of the skin, with a mortality rate of 1.62% (American Cancer Society, 2020). The survival
rate for a patient with malignant skin cancer is high when diagnosed at an early stage but
drops significantly, to as low as 23%, as time passes (The Skin Cancer Foundation, 2020).
One of the common causes of skin cancer is exposure to UV rays for prolonged periods of
time (Penta, Somashekar and Meeran, 2017). Most cases of malignant skin cancer can be
cured by surgical excision if detected at an early stage (Herath et al., 2018). Benign skin
lesions can be surgically removed if required as well. Benign skin lesions account for the
vast majority of the reported skin lesion cases, and only a small portion of the reported
cases are cancerous melanoma. Therefore, distinguishing between benign skin lesions and
melanoma is very important. Cases of skin cancer have seen a sharp rise since the 1970s,
especially in the UK, where cases of malignant melanoma have increased by 50% in the
last decade (Robertson and Fitzgerald, 2017). In Brazil, skin cancer accounts for up to 30%
of all reported malignancies (Minango et al., 2019). According to the WHO, the number
of reported skin cancer cases globally has been on the rise for the past few decades (WHO,
2020).
A study by Herath et al. found that very few doctors in Sri Lanka have received formal
training in total body examination and that the majority of doctors have never performed
a total body examination in their career (Herath et al., 2018). All of this indicates the need
for an automated approach to skin lesion screening due to the lack of experts in the domain
(Codella et al., 2018), (Mishra and Celebi, 2016).
With the advent of large datasets and greater computational power, a surge in computer
vision research is being experienced. Deep Learning, a technique commonly used to solve
computer vision problems, shows breakthrough performance in several areas (Hongtao and
Qinchuan, 2019). CNNs, a variation of Neural Networks, are a Deep Learning algorithm
used to solve computer vision problems and are considered to be the SOTA. They are known
to have surpassed human experts in several benchmarking tests (Sermanet and LeCun,
2011). Common approaches to obtaining a network include manual architecture/network
design (Sermanet and LeCun, 2011), (Liu et al., 2019), using existing architectures
(Minango et al., 2019), and using Transfer Learning (Shin et al., 2016). Transfer Learning
is a technique where a pretrained network, usually trained on a larger generalized dataset,
is retrained on a more specific dataset. Although popular architectures such as ResNet and
VGGNet have emerged, they have their own defects and it takes time to fix them (Weng et
al., 2019). Tuning approaches have been shown to improve accuracy, as they allow the
optimal network architecture for the dataset to be found. A tuning approach uses a search
strategy to tune networks; commonly used strategies are heuristic search and reinforcement
learning (Liu et al., 2019).
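The heuristic-search idea can be sketched with a minimal genetic algorithm over a toy hyperparameter space. Everything in this sketch — the search space, the surrogate fitness, the selection scheme — is an illustrative assumption, not the project's actual configuration; in a real run, the fitness of a candidate would come from training and validating the CNN it encodes.

```python
import random

# Hypothetical search space for CNN hyperparameters (illustrative values only).
SEARCH_SPACE = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "num_filters": [16, 32, 64, 128],
    "dropout": [0.2, 0.3, 0.5],
}

def fitness(cfg):
    # Surrogate for validation accuracy: a real run would train the CNN
    # described by `cfg` and return its validation score instead.
    return (SEARCH_SPACE["learning_rate"].index(cfg["learning_rate"])
            + cfg["num_filters"] / 128
            + cfg["dropout"])

def random_config(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b, rng):
    # Single-point crossover over the ordered list of hyperparameters.
    keys = list(SEARCH_SPACE)
    point = rng.randrange(1, len(keys))
    return {k: (a if i < point else b)[k] for i, k in enumerate(keys)}

def mutate(cfg, rng, rate=0.2):
    # Resample each gene with a small probability to maintain diversity.
    return {k: (rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v)
            for k, v in cfg.items()}

def genetic_search(generations=10, pop_size=8, seed=0):
    rng = random.Random(seed)
    pop = [random_config(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection keeps the elite
        children = [mutate(crossover(rng.choice(parents),
                                     rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
```

Because the parents are carried over each generation, the best configuration found can never get worse than the best member of the initial population.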
Popular machine learning algorithms such as SVM, kNN, Random Forest, Neural
Networks (Rubegni et al., 2002), (Codella et al., 2018), (Al-masni et al., 2018), (Yap,
Yolland and Tschandl, 2018) and Logistic Regression (Kawahara, BenTaieb and
Hamarneh, 2016) have been explored. These approaches require minimal human
intervention. Pretrained Neural Networks have been explored by several researchers for the
classification of dermoscopic images of skin cancer/lesions (Bassi and Gomekar, 2019),
(Hosny, Kassem and Foaud, 2019), (Menegola et al., 2017), (Romero-Lopez et al., 2017).
Transfer learning shows better results when compared to training a network from scratch
(Romero-Lopez et al., 2017), (Menegola et al., 2017). Menegola et al. were able to achieve
AUC scores of 80.7% and 84.5% on two different datasets. Romero-Lopez et al. explored
the use of transfer learning and compared it to training a network from scratch; the results
showed that transfer learning with fine-tuning of weights performs better than freezing the
weights or training from scratch. Hosny, Kassem and Foaud explored the use of transfer
learning across different datasets with a focus on data augmentation, achieving accuracies
of 91.8%, 88.24% and 87.31% on MED-NODE, DermQuest and ISIC 2017 respectively.
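The freeze-and-retrain idea behind transfer learning can be illustrated with a deliberately simplified NumPy sketch, in which a fixed random projection stands in for pretrained convolutional features and only a new classification head is trained on the task-specific data. This is a toy illustration of the concept, not the dissertation's actual VGG-16 setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: a fixed random projection standing in for
# convolutional features learned on a large, generalized dataset.
W_backbone = rng.normal(size=(8, 4))

def features(x):
    # Frozen feature extractor: the backbone weights are never updated.
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Tiny synthetic "specific" dataset for the new task.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

# New classification head, trained from scratch on top of frozen features.
w_head = np.zeros(4)
b_head = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss():
    p = sigmoid(features(X) @ w_head + b_head)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

initial_loss = log_loss()
for _ in range(500):  # gradient descent on the head parameters only
    F = features(X)
    p = sigmoid(F @ w_head + b_head)
    grad = p - y
    w_head -= 0.1 * F.T @ grad / len(y)
    b_head -= 0.1 * grad.mean()
final_loss = log_loss()
```

Fine-tuning, by contrast, would also update `W_backbone` with a small learning rate, which is the variant Romero-Lopez et al. found to perform best.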
A lack of data and a lack of hyperparameter tuning are some of the identified drawbacks
(Romero-Lopez et al., 2017). Menegola et al. suggest further exploration of transfer
learning from a dataset related to the domain rather than a generalized dataset (Menegola
et al., 2017). The largest publicly available dataset for skin lesions is the ISIC dataset
(2019), which contains images sourced from multiple sources. The images exhibit
distortions and artefacts (Majtner, Yildirim-Yayilgan and Hardeberg, 2016); therefore, a
robust preprocessing pipeline is required. The dataset also lacks variation in terms of the
diseases, age groups and ethnicities represented (Codella et al., 2018). Another drawback
of the existing tools is the lack of reasoning provided, which could be of use to less
experienced dermatologists and general physicians.
Does the use of preprocessing for classification provide an accuracy improvement over not
using preprocessing, with hyperparameter tuning and image segmentation involved in both
approaches?
How can the results of the model be interpreted to make medical decisions based on the
results provided by the model?
Two experiments will be carried out to find out whether certain preprocessing steps,
coupled with segmentation, help improve the prediction accuracy of skin cancer
classification. A robust set of preprocessing steps will be identified to preprocess the
images. Feature extraction and segmentation will be carried out to further improve the
images for classification. The classification model will then be trained on the preprocessed
data and compared against another classification model trained on the non-preprocessed
data. Both experiments involve hyperparameter tuning and image segmentation.
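The two experimental arms can be summarized in a small sketch. The stage functions below are hypothetical stubs that only record the order of operations; the real stages are the preprocessing routines, segmentation model and classification model described in this report.

```python
def preprocess(image, trace):
    trace.append("preprocess")  # e.g. artifact removal, contrast enhancement
    return image

def segment(image, trace):
    trace.append("segment")     # generate lesion mask and apply it
    return image

def classify(image, trace):
    trace.append("classify")    # CNN prediction
    return "prediction"

def experiment_1(image):
    """Arm 1: preprocessed and segmented images."""
    trace = []
    classify(segment(preprocess(image, trace), trace), trace)
    return trace

def experiment_2(image):
    """Arm 2: non-preprocessed images; segmentation is still applied."""
    trace = []
    classify(segment(image, trace), trace)
    return trace
```

The only difference between the two pipelines is the presence of the preprocessing stage, which isolates its contribution to the final prediction accuracy.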
The use of clinical algorithms such as the 7-point checklist, the Menzies method and the
ABCD rule allows the clinical features to be visualized better (Jain, Jagtap and Pise, 2015),
(Mehta and Shah, 2016). Such algorithms have also been used as the feature extraction step
for more complex classification algorithms. Machine Learning algorithms such as SVM and
kNN have been explored for dermoscopic image classification (Majtner, Yildirim-
Yayilgan and Hardeberg, 2016), (Codella et al., 2018). These algorithms require a separate
feature extraction process. Clinical, dictionary-based, hand-crafted and deep-learning-based
features have been explored for the classification of skin cancer images (Barata, Celebi
and Marques, 2019).
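As an illustration of how a clinical rule can serve as a feature extractor, the sketch below computes rough Asymmetry and Diameter scores from a binary lesion mask. The function and its scoring are my own simplification for illustration; a clinical ABCD assessment also grades border irregularity and color variation.

```python
import numpy as np

def abcd_like_features(mask):
    # Toy asymmetry (A) and diameter (D) scores from a binary lesion mask.
    ys, xs = np.nonzero(mask)
    cy = ys.mean()
    # Asymmetry: fraction of lesion pixels without a mirror partner when the
    # lesion is reflected about the horizontal axis through its centroid.
    mirror_ys = np.rint(2 * cy - ys).astype(int)
    inside = (mirror_ys >= 0) & (mirror_ys < mask.shape[0])
    matched = np.zeros(len(ys), dtype=bool)
    matched[inside] = mask[mirror_ys[inside], xs[inside]] > 0
    asymmetry = 1.0 - matched.mean()
    # Diameter: diagonal of the lesion's bounding box, in pixels.
    diameter = float(np.hypot(ys.max() - ys.min(), xs.max() - xs.min()))
    return asymmetry, diameter
```

Scores like these can either be thresholded directly, as in the clinical rule, or fed as input features to a classifier such as SVM or kNN.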
CNNs are considered the SOTA for computer vision problems. They have been explored
for skin cancer classification and have displayed a tremendous improvement in terms of
accuracy when compared against physicians and other classes of algorithms (Yap, Yolland
and Tschandl, 2018). Different approaches to training CNNs have been explored
(Romero-Lopez et al., 2017), (Menegola et al., 2017). The advantages of transfer learning
and fine-tuning have been explored by several authors (Bisla et al., 2019). Model ensembling
is another approach explored for classification (Harangi, 2018).
The most common and important steps carried out in preprocessing skin cancer
images for classification have been presented by researchers (Mehta and Shah, 2016),
(Hoshyar, Al-Jumaily and Hoshyar, 2014), (Bisla et al., 2019). Segmentation has been
identified as an important step as it allows the classifier to focus only on the ROI (Mehta
and Shah, 2016). CNNs display good performance in image segmentation (Al-masni et al.,
2018, p221-231), (Jafari et al., 2016), (Hoshyar, Al-Jumaily and Hoshyar, 2014). Class
imbalances and lack of data can be addressed via augmentation, which has been shown to
improve the classification accuracy for dermoscopic images (Bisla et al., 2019), (Perez et
al., 2018). Data augmentation has shown promising results for computer vision tasks in
general (Perez and Wang, 2017).
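A minimal augmentation sketch is given below; the function and the choice of transforms are illustrative rather than the project's actual pipeline. Flips and 90-degree rotations are label-preserving here because skin lesions have no canonical orientation.

```python
import numpy as np

def augment(image):
    # Generate label-preserving variants of a dermoscopic image using
    # simple geometric transforms, multiplying the effective dataset size.
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants
```

Applying this to each training image yields six samples per original, which helps counter both class imbalance and the overall scarcity of labelled dermoscopic data.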
Based on the objectives and the review of existing products, the scope is defined. The main
goal of the project is to improve the accuracy of skin lesion classification using a deep
learning approach and to provide a way for doctors to interpret the results.
Identifying and carrying out image segmentation on the images.
Providing verbose reasoning for the predictions obtained after classification.
The Rich Picture shows the final web application. The input image is classified using both
models: one model where only segmentation is applied, and the other where both
preprocessing and segmentation are applied. The generated prediction is displayed along
with the visual reasoning for that prediction.
Skin cancer can be either benign or malignant; benign skin cancer is not fatal, while
malignant skin cancer is fatal, and if it is diagnosed at a later stage the survival rate is very
low. Malignant skin cancers, especially melanoma, make up only 1% of all reported cases
of skin cancer but are responsible for over 75% of skin cancer induced deaths (American
Cancer Society, 2020).
The survival rate for melanoma when diagnosed early is around 92% but drops
significantly when diagnosed later. According to the WHO, the number of reported cases
of skin cancer has been on the rise for the past few decades (World Health Organization,
2020). In the UK, the number of reported cases has seen a sharp rise since the 1970s, with
a 50% increase in just the past decade. According to Cancer Research UK, 14,509 cases of
melanoma were found in 2013, and it was predicted that the rate would increase further in
the upcoming years. Skin cancer is becoming very common among younger people, which
poses a serious issue for the future of the country (Robertson and Fitzgerald, 2017). In
Brazil, skin cancer accounts for up to 30% of all reported malignancies (Minango et al.,
2019). The number of dermatologists per capita in the USA has drastically decreased in
the past few years, while demand for them has increased due to the rise in the number of
skin cancer cases (Codella et al., 2018). It has been estimated that in 2020 there will be
108,420 cases of skin cancer, out of which 100,350 will be melanoma of the skin; 10% of
these cases may also result in death (American Cancer Society, 2020). A study carried out
by Herath et al. shows that the number of doctors with knowledge of skin cancer
examination is very low in Sri Lanka. Out of the 123 respondents to their survey, only 10
had received formal training on how to perform a total body examination, and the majority
of the doctors have never performed a total body examination in their career (Herath et al.,
2018). This shortage of well-trained dermatologists, in combination with the rising number
of skin cancer cases, calls for an automated tool for skin cancer examination that
can aid dermatologists in the screening process.
Skin examinations are carried out regularly by dermatologists on patients. Patients may
also carry out self-examinations that may help them identify any lesion on their body. Skin
cancers, especially malignant ones, are known to change shape, size and color over time.
Therefore, tracking the cancer is very important. Overdiagnosis is a problem faced by
patients when dermatologists identify a false positive. As further tests may involve invasive
techniques, it is important to have a solid screening technique with very few or no false
positives.
invasive technique that uses a hand-held microscope and incident light to view the
subsurface image of the skin at an increased magnification. Dermoscopy is mainly used for
the early detection of melanoma. The tool is of two types, immersion contact dermoscopy
and non-contact dermoscopy. It is the most common tool used by clinicians to get an
accurate assessment of skin cancer (Kittler et al., 2002, p159-165), (Dinnes et al., 2015),
(Barata, Celebi and Marques, 2019). The tool allows clinicians to obtain a clear, zoomed-in
image of the skin lesion, which they can assess to make further decisions (Barata, Celebi and
Marques, 2019).
Macroscopic images are also a possibility. These images are obtained from standard
cameras and therefore do not visualize the deeper details like a dermoscopy would. Several
issues exist with macroscopic images such as an inconsistent distance between the camera
and skin, and poor resolution and bad lighting (Oliveira et al., 2016). Dermoscopy shows
a significant improvement in accuracy when compared with unaided visual inspection,
raising accuracy from 60% to 90%, and is therefore a tool used around the world (Kittler
et al., 2002), (Dinnes et al., 2015), (Barata, Celebi and Marques, 2019). However, the
accuracy depends highly on the expertise of the examiner.
The accuracy is no better than that of unaided visual inspection when used by a non-
experienced examiner (Dinnes et al., 2015). The main shortcoming identified with this tool
is the need for an experienced examiner. A study carried out by Kittler et al. (2002)
recommends the involvement of two or more dermatologists to yield the highest possible
accuracy. But with the undersupply of dermatologists, that can be very difficult (Glazer,
Rigel, Winkelmann and Farberg, 2017). Biopsies are then carried out to confirm the
diagnosis.
The use of Support Vector Machine (SVM) has been explored by Majtner et al. in their
research paper where manual feature extraction techniques have been used to extract the
features required to perform the classification. The effectiveness of the manual feature
extraction technique has been compared against automated feature extraction with
Convolutional Neural Networks. Results show that automated feature extraction
approaches work better than manual feature extraction (Majtner, Yildirim-Yayilgan and
Hardeberg, 2016), (Mahbod et al., 2019). A Logistic Regression classifier has been used to
test its effectiveness in image classification based on features extracted via automated
feature extraction using pretrained CNNs (Kawahara, BenTaieb and Hamarneh, 2016).
Linear texture information has also been used for the identification of malignant
melanomas using SVMs. The proposed system was capable of achieving an accuracy of
70% (Yuan, Yang, Zouridakis and Mullani, 2006). Researchers have also utilized ABCD
rules for feature extraction before using Machine Learning algorithms such as SVM for
classification (Farooq, Azhar and Raza, 2016). Murugan, Nair and Kumar have explored
the use of algorithms such as Random Forest, kNN and SVM for skin cancer classification.
The approach involves segmentation and the use of manual feature extraction techniques
such as ABCD and GLCM to extract the required features to carry out the classification.
The SVM outperforms all the other classifiers by a large margin with an accuracy of
85.72% (Murugan, Nair and Kumar, 2019). The use of Random Forest, Naïve Bayes, K*
instance-based classifier and Attributional Calculus for the automatic diagnosis of
melanoma has been explored by Grzesiak-Kopeć, Nowak and Ogorzałek. ABCD has been
the chosen set of features to be extracted from the dermoscopic images. In this study,
Random Forest emerged as the best performing classifier with an accuracy of 86.53%
(Grzesiak-Kopeć, Nowak and Ogorzałek, 2015). Almaraz-Damian, Ponomaryov and
Rendon-Gonzalez have explored the use of SVM for classification with ABCD rule for
feature extraction and Morphological masking for the removal of artefacts from the image
and Segmentation. The accuracy of their system was 75.1% (Almaraz-Damian,
Ponomaryov and Rendon-Gonzalez, 2016).
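To make the ABCD-style features discussed above concrete, the sketch below estimates a crude asymmetry score (the "A" in the ABCD rule) for a binary lesion mask using NumPy. The function name and the simple centroid-centering heuristic are our own illustrative choices, not taken from any of the cited papers; real implementations typically align the lesion to its principal axes before mirroring.

```python
import numpy as np

def asymmetry_score(mask):
    # Crude proxy for the "A" in the ABCD rule: the fraction of lesion
    # area that does not overlap its own left-right mirror image after
    # the lesion centroid is shifted to the image's center column.
    ys, xs = np.nonzero(mask)
    shift = mask.shape[1] // 2 - int(round(xs.mean()))
    centered = np.roll(mask, shift, axis=1)
    return np.logical_xor(centered, centered[:, ::-1]).sum() / mask.sum()

# A mirror-symmetric blob scores 0; an L-shaped blob scores higher.
symmetric = np.zeros((21, 21), dtype=bool)
symmetric[8:13, 7:14] = True
lshape = np.zeros((21, 21), dtype=bool)
lshape[5:16, 8] = True
lshape[5, 9:14] = True
```

An odd-width grid is used so that mirroring maps the center column onto itself; border, color and diameter features would be computed analogously from the mask and the underlying image.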
The use of Delaunay triangulation for the segmentation and classification has resulted in a
sensitivity of 93.5% and specificity of 85.2% with an Adaboost classifier (Pennisi et al.,
2016, p89-103). Ferris et al. has explored the use of Decision Trees for classification using
ABCD features (Ferris et al., 2015). Although the performance is good, like all the previous
research, the classifier is only trained to distinguish between melanoma and benign skin
lesions, excluding other skin cancers. Comparing results from existing
research show that the use of segmentation has resulted in a higher accuracy than with no
image segmentation (Pennisi et al., 2016, p89-103). It is evident that SVM is the most
commonly used classifier.
An article published in the year 2017 in Nature has shown that Deep Convolutional Neural
Networks (DCNN) based approaches provide better accuracy when compared to well-
trained dermatologists by a difference of almost 6% (Esteva et al., 2017). Premaladha and
Ravichandran have compared neural networks and other machine learning algorithms for
the classification of dermoscopic images. The results of the research show that the SVM-
Adaboost hybrid algorithm performs better than the proposed ANN (Premaladha and Ravichandran).
Pomponiu, Nejati and Cheung proposed the use of CNNs with Transfer Learning for
the classification of dermoscopic images. The transfer learning approach used
for training the chosen architecture allows the network's weights to be initialized with
weights learnt from a much larger dataset such as ImageNet (Deng et al., 2009). The
approach involves augmentation of data to compensate for the lack of large datasets and
automated feature extraction via CNN. An accuracy of 83.95% was achieved. The
automated feature extraction approach has resulted in an accuracy higher than the use of
hand-crafted features (Pomponiu, Nejati and Cheung, 2016). Hosny, Kassem and Foaud
have proposed a method that uses a pretrained CNN architecture known as Alex-Net which
resulted in an accuracy of 87.31% on the original ISIC 2017 dataset and 95.91% accuracy
on the augmented ISIC 2017 dataset (Hosny, Kassem and Foaud, 2019).
Menegola et al. have explored different approaches such as no TL, TL from a related dataset
and TL from a generalized dataset, also exploring how further fine-tuning affects the
accuracy. Results show that TL without fine-tuning and training from scratch show the
worst performance. TL from a more generalized dataset is the best approach. The authors
suggest further exploration of the application of TL. Romero-Lopez et al. have explored a
TL approach for the segmentation and classification of dermoscopic images. The TL approach had
been compared with training from scratch and further tuning of the CNN. The results of
the research on the test dataset show that the third approach, fine-tuning a pre-trained
model, has resulted in a higher accuracy of 81.33%. Although the used dataset is large, the
risk of overfitting is inevitable as the dataset is not large enough for Deep Learning. The
authors have noted the lack of hyperparameter tuning as another limitation (Romero-Lopez
et al., 2017).
Transfer learning for skin cancer classification has also been explored in the works of Bassi
and Gomekar. Their experiment involves preprocessing and segmentation of the image
using thresholding methods. Features such as the age and gender of the patient have also
been used in their experiment via a parallel network. Augmentation techniques were also
used due to the presence of class imbalances. Multiple CNN architectures had been trained
and the VGG16 architecture had emerged as the best with an accuracy of 82.8% on the
non-segmented data. The accuracy with the segmented dataset had dropped significantly
to 65.8% suggesting that the thresholding technique used for segmentation is not ideal and
recommending further research into supervised segmentation techniques (Bassi and
Gomekar, 2019). A hybrid approach by Mahbod et al. shows that CNNs can be used as the
feature extractor with algorithms such as SVM. The authors have used an ensemble of
pretrained models for the feature extraction, showing that ensembles perform better than a
single model. The authors have also suggested exploring more advanced pretrained models
such as DenseNet (Mahbod et al., 2019).
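The hybrid idea of pairing a fixed feature extractor with a classical classifier can be sketched in a few lines. The snippet below is a toy stand-in: random Gaussian clusters play the role of the penultimate-layer features a frozen pretrained CNN would emit, and a plain logistic-regression head (rather than the SVM used by Mahbod et al.) keeps the example dependency-free. All data and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the feature vectors a frozen pretrained CNN would emit
# for two lesion classes (hypothetical synthetic data).
X = np.vstack([rng.normal(0.0, 1.0, (100, 16)),
               rng.normal(1.5, 1.0, (100, 16))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Train a logistic-regression head on the frozen features.
w, b = np.zeros(16), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
    w -= 0.5 * X.T @ (p - y) / len(y)        # gradient of the log loss
    b -= 0.5 * (p - y).mean()

accuracy = (((X @ w + b) > 0) == y.astype(bool)).mean()
```

In a real pipeline the classifier would be trained on held-out features and evaluated on a separate test split.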
Advanced pretrained models such as DenseNet and SqueezeNet have been explored by
Kadampur and Al Riyaee. The best performing model in their research is SqueezeNet, with
an AUC score of 99.77%. Although the dataset used in the research is large, it is not the
largest dataset available (Kadampur and Al Riyaee, 2020). Deep architectures such as
ResNet-101 and Inception-v3 have also been applied for the classification of skin cancers
into benign and malignant by Demir, Yilmaz and Kose. The results are promising with an
accuracy of 90% for the Inception-v3 model which marginally outperformed ResNet-101
by 1% (Demir, Yilmaz and Kose, 2019). A limitation of this research is the small dataset.
Complex architectures could result in model overfitting and regularization can be used to
address this. CNNs have been trained from scratch using the proposed regularizer to
achieve an accuracy of 97.49% placing it very close to the SOTA in the domain (Albahar,
2019). One of the limitations is the need to specify the lambda constant.
Dermatologists follow a hierarchical method to carry out the diagnosis for any form of skin
cancer. Barata and Marques imitate this method in their proposed solution where TL is
used. The first stage is to identify whether the dermoscopic image contains a melanoma
or a non-melanoma lesion, after which the final diagnosis is carried out. The authors highlight the
importance of the image segmentation (Barata and Marques, 2019). The results show that
the hierarchical approach does offer improvements in terms of performance. The results
also show that fine tuning is superior to not fine tuning the pretrained model. A limitation
in this research is the size of the dataset (Barata and Marques, 2019). Hyperparameter
tuning is a suggestion made by some researchers as a potential area that requires more
exploring. Tan, Zhang and Lim have explored the effectiveness of hyperparameter tuning
of CNNs using PSO (Tan, Zhang and Lim, 2019). The limitation identified is similar to the
research by Barata and Marques, the use of a larger dataset should be explored.
The important preprocessing steps for the detection of skin cancers in images have been
discussed by Hoshyar, Al-Jumaily and Hoshyar. The preprocessing steps involved in skin
cancer detection are mainly carried out to enhance the images (Hoshyar, Al-Jumaily and
Hoshyar, 2014), (Mehta and Shah, 2016). Full resolution images require more
computational resources to train. Resizing an image reduces the need for vast computation
resources (Hoshyar, Al-Jumaily and Hoshyar, 2014), (Mehta and Shah, 2016). The images
are known to contain artefacts such as hair, gel and rulers that are left behind as a result of
the use of dermoscopy (Majtner et al., 2016), (Aziz, 2015), (Hameed, Ruskin, Abu Hassan
and Hossain, 2016), (Mehta and Shah, 2016). Therefore, artefact removal steps are used to
remove these artefacts from the images.
Hameed, Ruskin, Abu Hassan and Hossain suggest color space transformation as the RGB
color space consists of a range of colors. Transforming to a color space with less colors
may allow for better and more accurate preprocessing (Hameed, Ruskin, Abu Hassan and
Hossain, 2016), (Hoshyar, Al-Jumaily and Hoshyar, 2014). LAB color space has been
identified as the optimal color space that results in very low error rate. Artefact removal
for skin cancer detection has been extensively explored by several researchers for skin
cancer classification and segmentation (Menegola et al., 2017), (Al-masni et al., 2018,
p221-231). Dullrazor is an artefact removal method invented in 1997 which uses
morphological edge operations (Lee et al., 1997) but does not work well for thin hair. “E-
Shaver” is an improvement proposed by Kiani et al. Jaworek-Korjakowska and
Tadeusiewicz (2013) have proposed an artefact removal approach that involves the removal
of the artefacts from an image by first converting the image to grayscale and then applying
unsharp masking followed by a black top-hat transform. The next step was to perform
image inpainting. This approach was able to achieve an accuracy of 88.7%.
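To make the black top-hat step concrete, here is a minimal NumPy sketch using naive sliding-window grey-scale morphology; the function names and the window size are illustrative assumptions, and a real pipeline would use an optimized library routine instead of the Python loops below.

```python
import numpy as np

def _window_filter(img, size, op):
    # Naive sliding-window grey-scale morphology (edge-padded);
    # op=np.max gives dilation, op=np.min gives erosion.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(padded[i:i + size, j:j + size])
    return out

def black_tophat(gray, size=5):
    # Morphological closing (dilation then erosion) minus the original:
    # highlights thin dark structures, such as hairs, on a brighter background.
    closed = _window_filter(_window_filter(gray, size, np.max), size, np.min)
    return closed - gray
```

Thresholding the top-hat response yields a hair mask that can then be fed to an inpainting step, as in the approaches described above.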
Generating a binary mask and then applying inpainting is another artefact removal approach
but results in low accuracy (Aziz, 2015). As explored by Hoshyar, Al-Jumaily and
Hoshyar, Contrast Limited Adaptive Histogram Equalization (CLAHE) is considered to be
one of the best contrast enhancement approaches. CLAHE is designed for medical image
enhancement (Hoshyar, Al-Jumaily and Hoshyar, 2014).
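The contrast-limiting idea behind CLAHE can be illustrated with a simplified global variant: clip the normalized histogram, redistribute the clipped excess uniformly, and map intensities through the resulting CDF. Real CLAHE applies this per tile with bilinear interpolation between tiles; the function below and its parameter values are our own illustrative assumptions, operating on images scaled to [0, 1].

```python
import numpy as np

def clipped_hist_eq(img, clip_limit=0.02, n_bins=256):
    # Simplified, global variant of contrast-limited histogram equalization.
    hist, _ = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    excess = np.clip(p - clip_limit, 0.0, None).sum()   # mass above the clip
    p = np.minimum(p, clip_limit) + excess / n_bins     # redistribute excess
    cdf = np.cumsum(p)
    idx = np.clip((img * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return cdf[idx]                                     # map through the CDF
```

On a low-contrast image the mapping stretches the occupied intensity range while the clip limit bounds how aggressively any single histogram peak is amplified.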
Segmentation steps can be both supervised and unsupervised. Both types of approaches
have been extensively explored by researchers for skin cancer detection. A comparison
between multiple segmentation approaches has been carried out by Hameed, Ruskin, Abu
Hassan and Hossain in their review on the approaches for skin cancer detection. Ganster et
al. have suggested the use of a fusion of three types of segmentation methods: dynamic
thresholding, global thresholding, and color clustering (Ganster et al., 2001). Jain, Jagtap
and Pise, in their research on skin cancer detection, have proposed the use of automatic
thresholding and masking operations in each of the R, G and B planes. The 3-plane masking
had been proposed to improve the accuracy over single-plane masking. The
segmentation technique requires artefact removal as it focuses on the largest blob in the
image (Jain, Jagtap and Pise, 2015). The watershed algorithm has also been proposed
for skin lesion segmentation of dermoscopy images by Wang et al. The algorithm requires
robust preprocessing steps as it does not work well in the presence of artefacts such as hair
and black borders in the image (Wang et al., 2010). The watershed algorithm has been
compared against another segmentation approach known as the active contour method.
Watershed algorithm is comparatively a better region-based segmentation technique which
is also less sensitive to noise. Farooq, Azhar and Raza have explored the use of watershed
algorithm for segmenting dermoscopic images in the PH2 dataset. It is compared with
active contour and a combination of both. The combined approach is shown to work better
for segmentation of images. (Farooq, Azhar and Raza, 2016).
The use of clustering algorithms for skin cancer image segmentation has been explored by
Nasr-Esfahani et al. The masks generated via this method were further enhanced via the
use of morphological operations. The generated masks are then applied on the images used
for classification, with Gaussian filtering to smoothen the areas surrounding the skin
cancer (Nasr-Esfahani et al., 2016). The effectiveness of an Ant Colony Optimization based segmentation
algorithm has been explored for the segmentation of skin cancer images. The algorithm is
applied on each plane of three color spaces, RGB, HSV and LAB, and on the grayscale
image. The plane that yields the best result was used. The effectiveness of the ACO
based algorithm is measured using the XOR metric. The average XOR error of 172 images
used to test the algorithm was 57.6% (Dalila, Zohra, Reda and Hocine, 2017). The proposed
approach was compared against manually crafted masks by using the images for
classification. The results show that the classification of images using the ACO based
segmentation is better with an accuracy of 93.60% than the classification of images using
the manual mask segmentation with an accuracy of 86.60%. A comparison between
Chan-Vese segmentation algorithm and the Expectation Maximization algorithm has shown that
the Expectation Maximization algorithm performs better, with a Jaccard index of 71.2%. The
approaches have also been compared with the artefact removal approaches proposed by the
researchers. The results show that artefact removal improves segmentation performance compared to not performing it.
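The overlap metrics referenced in this section, the Jaccard index and the XOR error, are straightforward to compute from binary masks; a minimal NumPy sketch (the exact normalization of the XOR error varies between papers, so the ground-truth-area denominator below is one common choice):

```python
import numpy as np

def jaccard_index(pred, gt):
    # Intersection over union of two boolean masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (pred & gt).sum() / (pred | gt).sum()

def xor_error(pred, gt):
    # Mislabelled area relative to the ground-truth lesion area.
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (pred ^ gt).sum() / gt.sum()
```

Identical masks give a Jaccard index of 1.0 and an XOR error of 0.0; the two metrics move in opposite directions as the predicted mask degrades.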
Deep learning approaches that involve the use of CNNs have been explored with the advent
of larger datasets that contain dermoscopic images thus supporting supervised learning.
Jafari et al. have applied CNNs for the patch extraction or segmentation of dermoscopic
images of skin cancer. The approach uses a local and a global window, where the local
window allows the local texture around a pixel to be identified and the global window allows
the lesion to be identified. The dataset being used for the research consists of a smaller number
of images and therefore very little variation. The approach is shown to perform better than
other approaches with an accuracy of 98.5% and recall of 95.0% (Jafari et al., 2016).
Mishra and Daescu have compared the Otsu thresholding method against the use of CNNs
for segmentation of dermoscopic images. Otsu thresholding is a clustering-based image
segmentation method. The comparison between the two approaches show that the CNN
based approach performs better than Otsu thresholding with a Jaccard index of 84.2% and
71.1% respectively (Mishra and Daescu, 2017). Fully Convolutional Networks for multi-class
segmentation of dermoscopic images have been shown to improve over existing approaches that
do not involve CNNs but do not improve over an architecture such as the U-Net (Goyal,
Hoon Yap and Hassanpour, 2017). Youssef et al. have explored the use of Deep
Convolutional networks for skin cancer image segmentation. Encoder-decoder technique
for semantic pixel-wise segmentation has been used, resembling SegNet, a commonly used
Segmentation Architecture. The results show that the approach performs very well with an
F1-Score of 87.0% (Youssef et al., 2018). Results from the research by Murugan, Nair and
Kumar also show that the use of segmentation before performing classification indeed
helps improve the accuracy as the classifier can focus more on the ROI but comparison
between the two approaches have not been provided (Murugan, Nair and Kumar, 2019).
The use of FrCN for skin cancer segmentation has shown tremendous performance and is
shown to perform better than SegNet, U-Net and FCN (Al-masni et al., 2018, p221-231).
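Otsu's method, used above as the clustering-based baseline against the CNN, picks the threshold that maximizes the between-class variance of the foreground/background split. A NumPy sketch for images scaled to [0, 1] follows; the implementation details are our own.

```python
import numpy as np

def otsu_threshold(gray, n_bins=256):
    # Choose the threshold maximizing between-class variance.
    hist, edges = np.histogram(gray, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # background class weight
    w1 = 1.0 - w0                     # foreground class weight
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]                     # global mean
    valid = (w0 > 0) & (w1 > 0)
    var_b = np.zeros(n_bins)
    var_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(var_b)]
```

On a bimodal image the returned threshold falls between the two modes, so `gray > t` separates lesion from background; its weakness on images with artefacts is exactly why the CNN-based approaches above outperform it.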
algorithm is known to have a very slow convergence with higher chances of being stuck in
a local minimum. Another notable limitation of the two previously explored
algorithms is the need for additional feature extraction steps.
proving that the proposed approach performs significantly better with an average difference
in recall of 15.9% (Abraham and Mefraz Khan, 2018).
2.5.1.5. Conclusion
It is clear from the above discussion that CNNs are the stronger algorithms for image
segmentation. This has also been shown by extensive research carried out by many
researchers. Therefore, CNNs were chosen to carry out the image segmentation as part of
this project. It has also been identified that the U-Net architecture is the most suitable for
this task as it is an architecture introduced for medical imaging tasks.
The most common candidate algorithm for automatic feature extraction in the presence of
sufficient data and ground truth labels is the Convolutional Neural Network. CNNs are
capable of extremely efficient feature extraction due to the presence of Convolutional
Filters which are capable of extracting low level features and learning the image
representations without human intervention (Tschandl et al., 2019). Transfer Learning is
also commonly used to address the lack of data. Transfer Learning is when CNNs are
initialized with weights learnt from much larger and general datasets and these weights are
either directly used or further fine-tuned for the problem at hand.
When CNNs were compared with manual feature extraction methods such as RSurf
features and Local Binary Patterns (LBP), pretrained CNNs were shown to perform better
(Majtner, Yildirim-Yayilgan and Hardeberg, 2016). The use of CNNs as feature extractors
has been shown to result in better classification accuracy when compared with human
diagnostic results (Yap, Yolland and Tschandl, 2018). The use of CNNs as feature extractors
with transfer learning is shown to perform very well for classification of images
(Kawahara, BenTaieb and Hamarneh, 2016). A comparison between hand-crafted features,
dictionary-based features, clinically inspired features and deep learning features shows that
deep learning features work well but only if there are large amounts of data while the other
approaches work well with even very little data (Barata, Celebi and Marques, 2019). With
the advent of data augmentation techniques, smaller dataset sizes can be artificially
increased to allow for the use of CNNs. A study carried out by Wu et al. on facial skin
disease classification has shown that the use of pretrained CNNs does not require any
additional feature extraction steps, and this laborious task can therefore be automated (Wu
et al., 2019).
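Label-preserving geometric augmentation of the kind described above can be sketched with NumPy flips and 90-degree rotations; the function is an illustrative minimal version (real pipelines add random crops, color jitter and similar transforms).

```python
import numpy as np

def augment(img, rng):
    # Random horizontal/vertical flips plus a random 90-degree rotation.
    # All of these transforms preserve the class label of a dermoscopic image.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    if rng.random() < 0.5:
        img = img[::-1, :]
    return np.rot90(img, k=int(rng.integers(4)))

rng = np.random.default_rng(42)
image = np.arange(64.0).reshape(8, 8)
augmented = augment(image, rng)
```

Because only flips and right-angle rotations are applied, the augmented image contains exactly the same pixel values as the original; for H x W x C color images the same code works unchanged, since the flips and `np.rot90` operate on the first two axes.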
2.5.2.1. Conclusion
It can be concluded that CNN as a feature extractor is far superior and less laborious
compared to other feature extraction methods. In the presence of large datasets, CNNs are
shown to work well. In the presence of moderately large datasets, pretrained CNNs can be
used to learn features and deliver performance significantly better than the other
feature extraction methods.
A comparison between Random Forest (RF) and the k-Nearest Neighbor algorithm for skin cancer image classification has shown that RF performs better
than kNN by approx. 10% in terms of accuracy (Murugan, Nair and Kumar, 2019). In the
study by Grzesiak-Kopeć, Nowak and Ogorzałek for the classification of skin cancer
images, RF was compared with Naïve Bayes classifier and K* instance-based classifier.
RF was once again the better performing algorithm (Grzesiak-Kopeć, Nowak and
Ogorzałek, 2015).
But SVMs were able to outperform RF in the study by Murugan, Nair and Kumar where
SVM trained on ABCD features had an accuracy of 89.43% while the RF had an accuracy
of 76.87% (Murugan, Nair and Kumar, 2019). SVM has also been used by Majtner,
Yildirim-Yayilgan and Hardeberg for classification of skin cancer using dermoscopic
images. The results show that the SVM is capable of achieving higher accuracies compared
to the other machine learning algorithms explored (Majtner, Yildirim-Yayilgan and
Hardeberg, 2016). A classification accuracy of 92.1% in classifying skin cancers into benign
and malignant has been achieved using SVM (Alquran et al., 2017). A comparison between
SVM and CNN for polyp classification shows that CNNs work better in terms of
classification and feature extraction (Shin and Balasingham, 2017).
Extensive research shows that CNNs are the SOTA for computer vision tasks. Variations
of CNNs exist that aid in different types of tasks. For the purpose of this research, we focus
mainly on the vanilla CNN. Romero-Lopez et al. have explored the use of CNNs for
dermoscopic image classification and the results show that the network is capable of
producing highly accurate predictions (Romero-Lopez et al., 2017). Menegola et al. and
Romero-Lopez et al. have both explored the use of CNNs and Transfer Learning for
dermoscopic image classification. Bassi and Gomekar have explored pretrained CNN
models for dermoscopic image classification. The results show that the VGG-16
architecture has the highest F1-Score compared to other architectures: VGG-19,
Inceptionv3, and MobileNet (Bassi and Gomekar, 2019). ResNet101 has been compared
against Inception-v3 for the classification of dermoscopic images. Inception-v3 is shallower
compared to ResNet101, and in this study Inception-v3 emerges as the more accurate model with
an accuracy of around 90% (Demir, Yilmaz and Kose, 2019). A comparison between the
the accuracy of a deep learning approach and human physicians has shown that CNNs
outperform human physicians for the classification of dermoscopic images, with accuracies
of 81.59% and 42.94% respectively. This shows that using a computer-aided approach,
especially deep learning, improves the accuracy of the prediction by a huge margin (Hekler
et al., 2019). The authors also suggest a combined approach that performs better but only
marginally when compared with the CNN approach. Ensemble of pretrained CNNs have
shown to be capable of achieving SOTA performance with SVM classification (Mahbod
et al., 2019).
2.5.4.3. Conclusion
As CNNs are being used as feature extractors, they can also be further extended to perform
the classification. Research also shows that CNNs work well for classification.
Therefore, CNNs have been chosen. Transfer Learning was also chosen as the training
approach as it allows a higher accuracy to be achieved with a smaller dataset. Research shows
that transfer learning approaches with further fine tuning the network works better than no
fine-tuning or training the network from scratch (Romero-Lopez et al., 2017), (Menegola
et al., 2017).
An architecture is the arrangement of the different types of layers in a network. Choosing an
appropriate architecture aids in achieving good performance. For the purpose of this
research a complex architecture, as suggested by Mahbod et al., and a shallow architecture
that has been well researched and shown to perform well was chosen (Romero-Lopez et
al., 2017). Therefore, pretrained VGG16 and DenseNet101 were chosen for the
implementation. Performance of both architectures was compared before finalizing on one
architecture.
The most commonly used approach for tuning CNNs is Reinforcement Learning (RL). RL
is shown to be capable of designing architectures that are on par with the SOTA
architectures (Baker, Gupta, Naik and Raskar, 2016). Asynchronous RL, a variation of RL,
is shown to be capable of designing architectures on the MNIST dataset with accuracies
over 95.8% without human intervention (Neary, 2018). Evolutionary
algorithms are a class of algorithms that are quite often explored in the area of
hyperparameter tuning of CNNs. The algorithm is designed around the evolutionary theory
proposed by Darwin (Vikhar, 2016). Genetic Algorithm is the most commonly used type
of Evolutionary Algorithm. RL is more resource intensive and therefore, this has sparked
an interest in the use of evolutionary algorithms to search for architectures (Sun et al.,
2019). Sun et al. have proposed a GA implementation where the results show that the
proposed algorithm is capable of finding architectures comparable with SOTA
architectures. Petroski Such et al. did a comparison between GA and RL for training neural
networks. The results show that GA is equally capable as RL while requiring less time
(Petroski Such et al., 2018).
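A minimal genetic algorithm over a discrete hyperparameter space looks like the following. The search space, the analytic surrogate fitness function (which stands in for validation accuracy, normally obtained by training and evaluating a CNN), and all constants are illustrative assumptions, not taken from Sun et al.

```python
import random

SEARCH_SPACE = {
    "n_filters": [16, 32, 64, 128],   # filters in a conv block
    "kernel": [3, 5, 7],              # kernel size
    "lr_exp": [-4, -3, -2],           # log10 learning rate
}

def fitness(ind):
    # Hypothetical analytic surrogate for validation accuracy; a real
    # run would train and evaluate a CNN with these hyperparameters.
    return (-abs(ind["n_filters"] - 64) / 128
            - abs(ind["kernel"] - 5) / 7
            - abs(ind["lr_exp"] + 3))

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene inherited from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def genetic_search(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
```

Keeping the top half of each generation unchanged gives elitism, so the best fitness never regresses; the expensive part in practice is that every fitness evaluation is a full CNN training run.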
Tan, Zhang and Lim have explored using PSO for hyperparameter tuning of CNNs for
skin cancer classification. The authors have proposed a variation of PSO for
hyperparameter tuning called Random Coefficient PSO (RCPSO). The algorithm is
capable of searching over deep CNNs built from a fixed set of layer types (Tan, Zhang
and Lim, 2019). The proposed algorithm is compared against other bio-inspired algorithms
and is shown to perform the best out of the lot. COA is a rather new meta-heuristic global
optimization algorithm that mimics the social organization of Coyotes and their adaptation
to the environment. The algorithm focuses on the social structure and the experiences
exchanged by the Coyotes (Pierezan and Dos Santos Coelho, 2018). Standard Benchmark
functions are used to evaluate the performance of the proposed COA compared to other
bio-inspired algorithms such as PSO, and it is shown to perform better than PSO.
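The velocity-and-position update at the heart of PSO (and of variants such as RCPSO) can be shown on a toy continuous objective. The values of w, c1 and c2 below are common textbook defaults, and the sphere function is only a stand-in: in Tan, Zhang and Lim's setting the objective would be CNN validation performance over a hyperparameter encoding.

```python
import random

def pso(f, dim=2, n_particles=15, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    random.seed(seed)
    pos = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    gbest = min(pbest, key=f)[:]                # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]                       # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)        # toy objective to minimize
best = pso(sphere)
```

The repeated objective evaluations in the inner loop are harmless here but dominate the cost when each evaluation trains a CNN, which is why population sizes and iteration counts in the cited work are kept small.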
2.5.5.1. Conclusion
Two of the bio-inspired algorithms chosen are GA and COA. Although GA has been
extensively applied in this area, it has not been applied in the domain of skin cancer and as
seen in (Sun et al., 2019), it shows a lot of promise in terms of hyperparameter tuning of
CNNs. COA is a rather new algorithm that has not been applied in the area of architecture
search. Therefore, it was decided to compare the two algorithms in the area
of CNN hyperparameter tuning/architecture search, and both algorithms were
chosen for the optimization of the CNN.
Chapter 3: Methodology
3.1. Chapter Overview
This chapter covers the research methodologies that will be used to carry out this research.
The various methodologies are explored and the methodology most suitable for this
research is identified and justifications are provided for the choices.
A mixed-mode approach was chosen for this research as the data gathered for this research
was via questionnaires, and the tool built at the end of this research is measured using statistical
approaches and evaluated by experts.
3.3.1. Top-Down
This approach is also known as deductive. The researcher initially develops a hypothesis.
Data related to this hypothesis is then gathered. The researcher then studies the data and
tests the hypothesis that was previously developed to check if the hypothesis is valid or
not.
3.3.2. Bottom-Up
This approach is also known as inductive. The researcher begins by collecting data that is
relevant to the research being carried out and tries to make sense of the data via
analysis. In this approach the researcher develops a theory based on the data that has been
gathered. An example of the deductive approach, by contrast, is the research carried out by
Herath et al., where the hypothesis was that the knowledge and skill of doctors in Sri Lanka
related to melanoma is low.
Social: Privacy is very important. None of the survey respondents' details were added to
the thesis, only the data collected from them. It is important that unauthorized parties do
not get access to confidential data, as this is a medical-related research.
Legal: No violations should be attempted against laws set up for privacy, data and IT
protection. All of the data used in the research are freely available datasets that are public
to use with citations. All patient data are anonymized.
Ethical: All feedback and responses were taken with consent, orally or written.
Professional: Medical practitioners did not divulge any patient-related information. The
application does not require the user to input any information that reveals their identity.
Table 8 SLEP Analysis
Advantage: The core features and the limitations of a skin cancer screening tool can be
identified.
Disadvantage: The time taken to analyse all the systems is very high.
Table 10 Analysis of requirements elicitation by observing existing systems.
Result: The existing commercial tools do not make use of deep learning and are mainly
based on clinical algorithms. Although the reported accuracy is good, deep learning
approaches can improve the accuracy further. Existing studies have explored the use of
deep learning but have dataset limitations and lack hyperparameter tuning.
5.3.2. Questionnaire
Questionnaires were prepared and were emailed to Dermatologists and Doctors. The
chosen individuals are a very small group as such domain experts are hard to come by.
Advantages: Minimal effort and time when compared with other elicitation methods. Easy
to obtain insights from the data. No geographical barriers.
Disadvantages: The honesty of the respondent cannot be guaranteed. Smaller audience
reach if the audience is a very specific group.
Table 11 Analysis of requirements elicitation by distributing Questionnaires.
Result: Dermatologists considered that the system, if made a reality, can indeed aid in the
skin cancer screening process. This step was used to elicit requirements that turned out to
be very useful for this research.
5.3.3. Brainstorming
Brainstorming sessions were carried out in the project initiation phase to define the problem
and propose a solution to the identified problem. This method goes along with observing
existing systems to identify the problem. Brainstorming helps to identify a variety of
requirements at each session, which can also be considered a disadvantage at times.
Advantage: Ability to identify novel requirements that may not have been considered by
other researchers and existing systems.
Disadvantage: Contradicting requirements across brainstorming sessions.
Table 12 Analysis of requirements elicitation by Brainstorming.
Result: A lot of the decisions made in this research were based on brainstorming sessions.
This turned out to be very useful, as several decisions had to be made during the research
and quick brainstorming sessions made this possible.
Advantage: Provides the opportunity to identify well-documented, critically evaluated
advantages and limitations of existing skin cancer screening tools.
Disadvantage: Very time consuming and needs to be performed throughout the project.
Table 13 Analysis of requirements elicitation by Literature review.
Result: Another main contribution to the requirements identified in this research is the
Literature Review. The several decisions that were taken and requirements identified have
been explored in the Literature Review chapter.
Question 1: Is the examination and identification of skin cancer from a dermoscopy image
time consuming?
Result: Based on the results, it was concluded that the examination process is not very time
consuming, as the majority of the respondents have mentioned.

Question 2: Does the examination and identification process require a certain level of
experience for accurate identification?
Result: It can be concluded that the examination and identification process does indeed
require a certain level of experience to be carried out successfully.

Question 3: Will an automated tool that can examine and identify the type of skin cancer
from a dermoscopy image be useful in the screening process?
Result: All the interviewees consider such a tool to be useful as it can ease the
identification process. Objective(s): 2, 3, 4

Question 4: Is it important to be able to interpret the results obtained from the tool?
Result: The interpretation was considered important, but when used by expert
dermatologists this will not be of much necessity due to their experience. Objective(s): 6

Question 5: What are the most important types of skin lesions out of the list of skin
lesions?
Result: Melanoma and Squamous Cell Carcinoma were considered the most important as
they are malignant. Objective(s): 1

Question 6: How likely is it for such a tool to attract the interest of dermatologists for use
in the screening process?
Result: The interviewees consider this tool to be very useful in the prognosis and are
highly likely to use it. Objective(s): 5

Question 7: What is your opinion on this tool that utilizes the advances in Artificial
Intelligence for medical imaging?
Result: The results for this question can be found in the Appendix. The respondents feel
that such a tool can aid in the prognosis. One respondent also claims that such a tool can
help avert unnecessary invasive biopsies. Objective(s): 4, 5

Table 14 Findings of the Interviews.
Although questionnaires were distributed, only a small number of domain experts were
reached. This is because the questionnaires were distributed only to domain experts,
dermatologists in the case of this research. As dermatologists are hard to come by,
requirements were gathered from just a few of them.
Segment the image so that the ROI is extracted from the image. The image can then be fed
to the classifier, which can focus on the ROI.
04 Classify Image (High): Classify the image into one of the several skin diseases that the
tool is capable of identifying.
05 Reason the Prediction (High): Based on the image uploaded, identify the regions that
correspond to the classification and display this information.
06 Create and Manage Patients (Medium): Create patients so that images can be uploaded
under each patient. This way, keeping records of the images and classifications is easier.
07 Provide Technical Information (Low): Allow the user to access details about the steps
involved in the model building and the results, e.g. segmentation accuracy, classification
accuracy.
08 Results of Training (Medium): Provide extensive details of the training process,
including visualizations. Make the final model accessible to be downloaded and used later.
Table 15 Functional Requirements of the Proposed System and Algorithm.
5.5.2.2. Usability
The usability of the application is very important. The dermatologists should be able to use
the tool with ease. A lot of the complexities involved in the manual screening process are
negated by using an automated approach. The UI of the tool is also required to be very
simple, to make it easier for a dermatologist to use it in the screening process.
5.5.2.3. Interpretability
Interpretability of the results is an important aspect. It allows the dermatologists to
understand how and why the predictions made by the tool were made. Being able to
interpret the results allows dermatologists to place more trust in such an automated system.
Basic interpretation of the predictions is part of the research.
5.5.2.4. Performance
The performance of the application should be optimized so that the time taken to load
models is minimized. Predictions need to be obtained as quickly as possible. The efficiency
of the optimization algorithm is also important, so that it provides results quickly.
Figure 2 shows a high-level overview of the proposed system. The users of the system,
dermatologists/doctors, may create a new account or access their existing account. Patient
records can be maintained without their personal information. The images uploaded are
patient specific. The uploaded images are preprocessed, segmented, and then classified.
The predictions and visual reasoning for the predictions are saved for each patient.
Use Case Description for Process Image has been added to Appendix G.
Chapter 6: Design
6.1. Chapter Overview
This chapter discusses the design decisions that were made in developing the proposed
system. The important diagrams that provide an idea of the application have been explored,
and the decisions behind them are also discussed in this chapter.
Correctness: It is important for the system to be able to predict the skin disease in the
image very accurately. The system should therefore have a high accuracy for it to be
usable.
Performance: The performance of the tool is very important. The time taken to produce one
result should be as little as possible, as the tool will potentially be used in labs to aid
dermatologists in carrying out the screening process at a faster pace.
Reusability: The code should be written in a reusable manner such that, based on newer
requirements, the application can be easily modified and extended. The components in the
application should be reusable.
Adaptability: The application should have the ability to adapt to any new changes made to
the core models, such as the segmentation and classification models.
Table 18 Design Goals intended to be achieved via this research.
The components of the application are separated to allow for modularity, maintainability,
and extensibility. The contribution of the research mainly lies in the logic tier in a specific
module.
Figure 5 shows the Sequence diagram for the Classify Image Use Case where an image is
input by the dermatologist and is classified into one of the skin cancers that the
AutoSkreenr tool is capable of classifying. The sequence begins with the dermatologist
uploading a dermoscopic image. The file is validated to ensure that it is in the correct
format; it is then preprocessed, segmented, and classified. Based on the predictions
obtained, the reasoning for the predictions is determined. The predictions and the
reasoning are then visualized for the dermatologist to make better-informed decisions.
The Sequence diagram for the Preprocess Image Use Case can be found in Appendix H. An
image is uploaded, resized, and enhanced using techniques such as contrast enhancement
and removal of artefacts in the image such as hair and rulers. Upon completion of
preprocessing, the images are saved to be accessed later when required.
The above activity diagram begins with the dermatologist logging into the system to begin
the classification of dermoscopic images. The system uses the already trained models to
carry out tasks such as segmentation and classification; these models need to be loaded
onto the system to successfully complete the respective tasks. After predictions are
obtained on the image fed by the dermatologist, reasoning is produced for the prediction
and visualized on the front-end.
6.6. Flowcharts
The two most important algorithms explored in this research are the Genetic Algorithm
and the Coyote Optimization Algorithm. The algorithms are used for architecture search.
The flowcharts below show the theoretical implementation as pseudocode.
Figure 7 represents a flowchart of the Genetic Algorithm used in this research. The Genetic
Algorithm (GA) belongs to a class of algorithms known as Evolutionary Algorithms (EA).
It is a heuristic, nature-inspired algorithm that follows the concept of evolution. The
algorithm has two important phases that aid in finding the optimal values of the parameters
being optimized: crossover and mutation, where mutation is a random change in the
parameter values. The algorithm is therefore known as a random-based classical
evolutionary algorithm.
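The flow described above can be condensed into a minimal, self-contained sketch. This is an illustration only, not the research's implementation: the gene encoding and the fitness function are placeholders, whereas the actual project evaluates fitness by training a CNN and measuring its F1-score.

```python
import random

# Hypothetical gene encoding: each solution is a list of integers such as
# [n_conv_layers, n_dense_layers, n_filters, filter_size].
GENE_BOUNDS = [(1, 2), (1, 5), (32, 256), (1, 8)]

def random_solution():
    return [random.randint(lo, hi) for lo, hi in GENE_BOUNDS]

def fitness(sol):
    # Placeholder objective (prefers mid-range values). The real fitness
    # would be the validation F1-score of a trained CNN.
    return -sum((v - (lo + hi) / 2) ** 2 for v, (lo, hi) in zip(sol, GENE_BOUNDS))

def crossover(a, b):
    # Single-point crossover: swap the tails of the two parents.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(sol, prob):
    # Mutation: with probability `prob`, re-randomize a gene within bounds.
    return [random.randint(lo, hi) if random.random() < prob else v
            for v, (lo, hi) in zip(sol, GENE_BOUNDS)]

def genetic_algorithm(pop_size=10, generations=10, elite_percent=0.2,
                      mutation_prob=0.2):
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        n_elite = max(1, int(elite_percent * pop_size))
        next_gen = population[:n_elite]        # elitism: keep the best as-is
        while len(next_gen) < pop_size:
            p1, p2 = random.sample(population[:pop_size // 2], 2)
            c1, c2 = crossover(p1, p2)
            next_gen += [mutate(c1, mutation_prob), mutate(c2, mutation_prob)]
        population = next_gen[:pop_size]
    return max(population, key=fitness)

best = genetic_algorithm()
```

The elite fraction and mutation probability mirror the roles of the "Elite Percent" and "Mutation Probability" parameters listed later in Table 21.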
Figure 8 represents a flowchart of the Coyote Optimization Algorithm (COA) used in this
research. It is a meta-heuristic algorithm inspired by the social organization of coyotes and
their adaptation to the environment.
The ER Diagram, Figure 9, is used to describe the tables in the application. The Web
Application stores some data about the patient and the images uploaded, and all the
predictions made by the application are also stored in the database. This was decided to
allow the dermatologists to retrieve all the predictions that were made previously. The
application also allows the dermatologists to annotate the anatomic site for each image,
which can be useful for a data collection process. The dermatologist is also allowed to
mark whether a prediction is valid or not, allowing for data collection for future research.
Chapter 7: Implementation
7.1. Chapter Overview
This chapter discusses the implementation of the proposed system for automatic skin
cancer screening. The decisions that were made for the different parts of the
implementation are described, and code snippets of the important parts of the system are
presented along with explanations.
7.2.1. Libraries
Deep Learning – Tensorflow, Keras, Segmentation Models.
Preprocessing – Scikit-Image, OpenCV
Augmentation – Augmentor
Other libraries – Pandas, Numpy, Scikit-Learn, Matplotlib
7.2.2. Dataset
The dataset chosen is the ISIC (International Skin Imaging Collaboration) 2019.
Hyperparameter: Values/Range
Number of Convolutional Layers: 1–2
Number of Dense Layers: 1–5
Number of Convolution Filters (each conv layer): 32–256
Size of Convolution Filter (each conv layer): 1–8
Activation Function (each conv layer): Sigmoid, Tanh, ReLU, Swish
Max Pooling Layer (after a conv layer): 1 (True) or 0 (False)
Pool Size (each max pooling layer): 2 (not tuned)
Some of the design choices made for the algorithm are as follows:
Figure 11 shows Single Point crossover technique where two parents cross at the crossover
point to produce two offspring by splitting the parents into two at one point.
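The single-point crossover shown in Figure 11 can be sketched as follows; the two parent chromosomes here are hypothetical hyperparameter lists, used only for illustration.

```python
import random

def single_point_crossover(parent_a, parent_b, rng=random):
    """Split both parents at one point and swap tails to form two offspring."""
    assert len(parent_a) == len(parent_b) >= 2
    # The crossover point is never 0 or len, so both parents contribute genes.
    point = rng.randint(1, len(parent_a) - 1)
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

# Example with two hypothetical hyperparameter chromosomes:
c1, c2 = single_point_crossover([1, 64, 3, 0], [2, 128, 5, 1])
```

Because the tails are swapped rather than copied, the combined gene pool of the two offspring is exactly that of the two parents.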
7.3.4.4. Results
Table 21 presents the list of values for the parameters of the Genetic Algorithm that were
used to tune the hyperparameters of the model built on the MNIST dataset.
Parameter Value
Population Size 10
Number of Generations 10
Population Variance 2
Elite Percent 20%
Competitiveness 40%
Mutation Probability 20%
Table 21 Parameters that manipulated the Genetic Algorithm optimization.
7.3.5.3. Results
Table 23 presents the list of values for the parameters of the COA that were used to tune
the hyperparameters of the model built on the MNIST dataset.
Parameter Value
Number of Packs 3
Size of Pack 5
Number of Evaluations 10
Number of Experiments 1
Table 23 Parameters that manipulated the Coyote Optimization Algorithm.
Based on the results of the two optimization algorithms, GA and COA, on the MNIST
dataset, GA was chosen to carry out hyperparameter optimization to build the classifier for
skin cancer classification. The choice was made based on the highest fitness value
produced and the consistency of the results produced by the algorithm.
For classification, two experiments were carried out to compare the effectiveness of
preprocessing. Experiment 1 involves all the preprocessing steps for the classification
dataset, with segmentation applied. Experiment 2 does not involve all the preprocessing
steps; only image scaling was performed on the classification dataset, with segmentation
applied. Augmentation was used in both experiments.
7.3.1. Preprocessing
Images are preprocessed both during the model building phase and when an image is
uploaded to the app. Experiment 1 involves all of the below-mentioned preprocessing
steps, while Experiment 2 only involves image scaling.
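As an illustration of one such enhancement step, a minimal min-max contrast stretch is sketched below. This is a self-contained sketch only, assuming a grayscale image as a list of rows; the project's actual pipeline also covers scaling and artefact (hair/ruler) removal.

```python
def contrast_stretch(image, new_min=0, new_max=255):
    """Min-max contrast stretching for a grayscale image (list of rows).

    Maps the darkest pixel to new_min and the brightest to new_max,
    spreading the intermediate values linearly.
    """
    flat = [p for row in image for p in row]
    old_min, old_max = min(flat), max(flat)
    if old_max == old_min:            # flat image: nothing to stretch
        return [[new_min for _ in row] for row in image]
    scale = (new_max - new_min) / (old_max - old_min)
    return [[round(new_min + (p - old_min) * scale) for p in row]
            for row in image]

# A low-contrast 2x2 patch stretched to the full 0-255 range:
stretched = contrast_stretch([[100, 120], [140, 160]])
# -> [[0, 85], [170, 255]]
```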
7.3.2. Augmentation
Both ISIC 2018 and 2019 datasets were augmented as seen in Table 19. Augmentation
applied on the ISIC 2019 dataset was to address the class imbalances. Traditional
Augmentation techniques were applied using the Augmentor library for Python. The
following Augmentation techniques were applied: Rotation, Flipping, Random Zoom,
Gaussian Distortion and Random Distortion, Random Brightness, Contrast and Color.
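The project applies these transformations through the Augmentor library; purely as a self-contained illustration of what two of the listed techniques (flipping and rotation) do, here they are on a nested-list image:

```python
def flip_left_right(image):
    """Mirror each row: the list-of-rows analogue of a horizontal flip."""
    return [row[::-1] for row in image]

def rotate_90(image):
    """Rotate the image 90 degrees clockwise."""
    # Reverse the rows, then transpose: columns of the reversed image
    # become rows of the rotated one.
    return [list(row) for row in zip(*image[::-1])]

img = [[1, 2, 3],
       [4, 5, 6]]
flipped = flip_left_right(img)   # [[3, 2, 1], [6, 5, 4]]
rotated = rotate_90(img)         # [[4, 1], [5, 2], [6, 3]]
```

In the real pipeline each transformation is applied with a probability, so the library can generate many distinct augmented samples from a single source image.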
The figure below shows all the steps involved in preprocessing an image.
Parameter: Value
Epochs: 60
Batch Size: 8
Validation Split: 0.2
Optimizer: Adam
Learning Rate: 0.001 (default)
Figure 18 Hyperparameters of the UNet - EfficientNetB3 Model
7.3.3.2. Metrics
Dice Coefficient and Jaccard Index/Intersection Over Union (IoU) are commonly used
metrics for evaluation of segmentation models. Therefore, these metrics were chosen for
this research.
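For binary masks, both metrics can be sketched in a few lines. This assumes flattened 0/1 masks and is for illustration only; the actual training uses the Keras/segmentation_models implementations.

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A intersect B| / (|A| + |B|) over flat binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0

def iou(pred, truth):
    """Jaccard/IoU = |A intersect B| / |A union B| over flat binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    union = sum(pred) + sum(truth) - intersection
    return intersection / union if union else 1.0

# Masks agreeing on one of two foreground pixels each:
d = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])  # -> 0.5
j = iou([1, 1, 0, 0], [1, 0, 1, 0])               # -> 1/3
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the two scores reported later differ in magnitude but rank models the same way.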
7.3.3.3. Implementation
The segmentation_models library was used. The library provides a UNet implementation
using Keras with Tensorflow as the backend. The library allows picking the backbone
feature extractor and, as mentioned earlier, the EfficientNetB3 backbone was chosen.
Figure 20 UNet Model Compile with Optimizer, Loss Function and Metrics
Figure 21 UNet Model Training with Callback to save weights if any improvement
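The code in Figures 20 and 21 is not reproduced in this text; the following is a hedged sketch of what the model definition, compile step, and checkpoint callback might look like with the segmentation_models library. The input shape, loss choice, and weight file name are assumptions, not taken from the original; the epoch count and batch size follow Figure 18.

```python
import segmentation_models as sm
from tensorflow.keras.callbacks import ModelCheckpoint

# U-Net with an EfficientNetB3 encoder pretrained on ImageNet
# (input shape assumed; binary lesion mask output).
model = sm.Unet('efficientnetb3', encoder_weights='imagenet',
                input_shape=(256, 256, 3), classes=1, activation='sigmoid')

# Dice loss, with the IoU and F-score metrics named in the text.
model.compile(optimizer='adam',
              loss=sm.losses.DiceLoss(),
              metrics=[sm.metrics.IOUScore(), sm.metrics.FScore()])

# Save weights only when the validation IoU improves,
# matching the "callback to save weights if any improvement" in Figure 21.
checkpoint = ModelCheckpoint('unet_effb3.h5', monitor='val_iou_score',
                             mode='max', save_best_only=True,
                             save_weights_only=True)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=60, batch_size=8, callbacks=[checkpoint])
```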
The dataset for segmentation was split using the holdout method. The dataset was split
into train, test, and validation sets using the train_test_split() method provided by the
Scikit-Learn library. The split is as follows: Train 70%, Validation 20%, and Hold-Out
10%. The training and validation sets contain augmented data. The segmentation model
was used to create masks for the images in the ISIC 2019 dataset; the masks were then
applied to those images.
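The same 70/20/10 holdout split done here with scikit-learn's train_test_split() can be sketched in a self-contained way; the seed value is an arbitrary choice for reproducibility.

```python
import random

def holdout_split(items, train=0.7, val=0.2, seed=42):
    """Shuffle and split items into train/validation/hold-out partitions.

    With the defaults this yields the 70/20/10 split used for the
    segmentation dataset; the remainder after train and validation
    becomes the hold-out set.
    """
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)   # seeded for reproducibility
    n_train = int(train * len(shuffled))
    n_val = int(val * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, holdout_set = holdout_split(range(100))
```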
7.3.4. Classification
Classification was carried out as two experiments with different steps explored.
Experiment 1 explores the use of segmentation and preprocessing whereas Experiment 2
explores not using the two steps. Both experiments involve hyperparameter optimization.
The performance on the Train and Test set was measured and used to identify the best
model. The architectures were trained for 5 epochs and evaluated on the test set using the
F1-Score. VGG16 was chosen as it had a high F1-Score on the Hold-Out set.
Hyperparameter: Values/Range
Activation Function: Tanh, Softmax, Sigmoid, ReLU
Optimizer: SGD, Adam, Adagrad, Adadelta
Learning Rate: 0.1, 0.01, 0.001, 0.0005, 0.0001, 0.00005, 0.00001, 0.000001
Number of Layers (Dense): 1 to 3
Number of Neurons per Layer: 32–256
Table 27 Hyperparameters optimized and the search space.
Hyperparameter tuning was performed on top of the pretrained models to add further dense
layers to improve classification. The optimizer and learning rate were also optimized. GA
parameters:
Population Size: 5
Population Variance: 2
Number of Generations: 5
Fitness Metric: F1 Score
Elite Percentage: 0.2
Mutation Probability: 0.2
Competitiveness: 0.4
Epochs: 1
Batch Size: 8
Table 28 Parameters and Values of the Genetic Algorithm
Parameter: Value
Number of Dense Layers: 2
Number of Neurons for Each Layer: 64, 8 (Output Layer)
Batch Normalization for Each Layer: Yes, No
Activation Function of Each Layer: ReLU, ReLU, Softmax
Optimizer: SGD
Learning Rate: 0.00005
F1-Score: 55.63%
Table 29 Values obtained for the parameters upon completion of hyperparameter tuning.
Table 29 shows the list of hyperparameters obtained as the best set of hyperparameters to
train the model. VGG16 was then trained for 30 epochs with a batch size of 32.
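As a hedged sketch of how the tuned head from Table 29 could be attached to a pretrained VGG16 in Keras: the input shape, pooling mode, and loss are assumptions, and the exact layer ordering in the original may differ.

```python
from tensorflow.keras import Model, layers, optimizers
from tensorflow.keras.applications import VGG16

# Pretrained VGG16 base; the tuned head follows Table 29:
# Dense(64) with batch normalization and ReLU, then a Dense(8)
# softmax output for the skin lesion classes.
base = VGG16(include_top=False, weights='imagenet',
             input_shape=(224, 224, 3), pooling='avg')

x = layers.Dense(64)(base.output)
x = layers.BatchNormalization()(x)       # "Yes" for the first dense layer
x = layers.Activation('relu')(x)
outputs = layers.Dense(8, activation='softmax')(x)

model = Model(base.input, outputs)

# SGD with the tuned learning rate of 0.00005.
model.compile(optimizer=optimizers.SGD(learning_rate=0.00005),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(train_images, train_labels, epochs=30, batch_size=32)
```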
Table 31 Web App code to view a single patient with all the predictions.
Table 32 Web App code to validate a prediction made via the application.
Chapter 8. Testing
8.1. Chapter Overview
Testing is carried out to verify if the application works in the intended manner and to
identify any bugs in the application. This chapter focuses on testing all the functional and
non-functional requirements of the implementation. All the implemented algorithms and
functionalities have been tested to ensure that they perform as intended to accomplish the
requirements successfully.
Patient Create: Provide details of the patient and create a new patient in the application.
Expected and actual output: Patient added to the db and shown in the list of patients.
Status: Pass.
Patient View: List all patients created by the logged-in user. Expected and actual output:
All patients created by the user are listed. Status: Pass.
Single Patient View: List all images uploaded, with the predictions for each image.
Expected and actual output: All images uploaded for the patient and the predictions made
are shown. Status: Pass.
Validate Prediction: Confirm whether the prediction made by the tool is valid or not and
save to the database. Expected and actual output: The prediction validity is updated in the
database. Status: Pass.
Information: Provide the user with information related to the training process. Expected
and actual output: Information related to model building is displayed. Status: Pass.
Table 33 Black Box Test Cases, Expected and Actual Outputs and the status of each test case.
Save new patient: Details of the patient are input and saved to the database. Expected and
actual output: Input details saved to the database. Status: Pass.
List all patients: The list of all patients added by the user is displayed when View All
Patients is clicked. Expected and actual output: All the patients added by the user are
listed. Status: Pass.
Single Patient View: Patient details and predictions made on the patient are displayed
when the patient id is clicked on. Expected and actual output: Patient details and all
predictions for the patient are displayed. Status: Pass.
Prediction as Valid: User clicks on the Valid button to approve a prediction. Expected and
actual output: Prediction is set to valid. Status: Pass.
Prediction as Invalid: User clicks on the Invalid button to disapprove a prediction.
Expected and actual output: Prediction is set to invalid. Status: Pass.
Obtain Information: Clicking on Info retrieves all details about the implementation.
Expected and actual output: Implementation details displayed. Status: Pass.
Table 34 White Box Test Cases, Expected and Actual Outputs and the status of each test case.
8.4.1. Segmentation
The performance of the segmentation model was measured using the IoU and Dice
Coefficient.
Figure 29 shows a mask image obtained from the segmentation model on the ISIC 2018
dataset and the ground truth. Figure 30 shows a mask image predicted for an image in
the ISIC 2019 dataset and its enhanced (sharpened) version.
Figure 29 Segmentation model result on an image from the ISIC 2018 Dataset.
Figure 30 Segmentation model result on an image from the ISIC 2019 Dataset.
8.4.2. Classification
The performance of the classifier was measured using the F1-Score and Accuracy.
Figure 32 Model Accuracy - Train vs Validation
Figure 33 Model Loss - Train vs Validation
Overfitting was experienced with the Experiment 1 model, where heavy preprocessing was
carried out. The Train vs Test accuracy, loss, and F1-score graphs were therefore compared
to confirm this. The extreme fluctuations in the graphs indicate that the model is indeed
overfitting.
In Figures 36 and 37, the values shown are, in order: Loss, Accuracy, Recall, Precision,
and F1-Score.
8.5. Benchmarking
The performance of the segmentation and classification models on the hold-out set was
compared with some of the existing work identified in the Literature Review of this
research.
8.5.1. Segmentation
Works were compared using the Jaccard Index/Intersection over Union (IoU). The
following table presents the comparison.
8.5.2. Classification
The metric used for comparison is the Categorical Accuracy. The following table presents
the comparison.
Study Accuracy
Proposed Approach (Experiment 2) 85.99%
Eddine Guissous, 2019 (ISIC 2019) 91%
Romero-Lopez et al., 2017 (ISIC 2018) 81.33%
Having been trained for only 30 epochs, the results on the ISIC 2019 dataset are good, but
they need further improvement as they are not as good as the other researchers' works.
Chapter 9. Evaluation
9.1. Chapter Overview
This chapter focuses on what the technical experts and domain experts think about the
various aspects of the implemented system such as the architecture, functionalities,
usability, and performance.
Overall Concept and Project Idea: To get comments and insights from technical and
domain experts.
Scope and Depth of the Project: The scope and depth of the project should be evaluated by
technical experts with knowledge in Deep Learning.
System Design and Architecture: Evaluate if the system design and architecture are valid
and up to the latest standards.
Accuracy of the Implementation: Evaluate how well the application is performing.
User Friendliness of the Interfaces: The usability of the application based on the user
interfaces provided is evaluated by the domain experts, dermatologists.
Further Enhancements: Identify what potential improvements can be made to improve the
application further.
Usefulness of Visual Reasoning: Whether the provided visual reasoning is usable.
Usability of GUI: User-friendliness of the GUI was evaluated by the domain experts,
dermatologists.
Usability and Opinions: The usability of the application in skin cancer screening and the
domain expert opinion.
Figure 38 Evaluation criteria used to evaluate the project.
The Technical and Domain Evaluation forms can be found in Appendix J. The Emails sent
to evaluators to get feedback can be found in Appendix K.
9.5.1.1. Summary
Positive feedback was received on the project concept. Technical experts considered the
exploration carried out, along with the visual reasoning provided, to be a strength of the
project. The domain expert also considers the project to be good, but suggested that the
target users be reconsidered, as the tool can be used for remote screening.
9.5.2.1. Summary
The technical expert(s) considered the scope and depth of the project to be more than
sufficient. The exploration of different architectures to carry out the segmentation and
classification was welcomed.
the initial screening, it saves a lot of time and money for the patient, and the doctor will be
able to provide an initial diagnosis without physically seeing the patient. High patient
compliance. Disadvantages are that wrong diagnoses can happen due to image quality and
the competence of the person who takes the picture. High-quality images are very
important, and 3D images would be more helpful for a proper diagnosis. If these images
are combined with the images coming from a dermoscope, diagnosis can be more accurate.
Dr. Dantha Hewage (Dermatologist): The patient's whole body has to be examined in
order to conclude a disease, not just the part with the lesion.
Dr. Thushani Dabrera (Doctor): Good initiative. However, the target audience/recipient
should be a family/general practitioner.
9.5.5.1. Summary
Overall, positive feedback was received, with suggestions to explore the possible areas in
which the application can be used. Suggestions to use better augmentation techniques,
such as GANs, and a more suitable optimization algorithm were given. The target audience
should be reconsidered.
9.5.6.1. Summary
It can be concluded that although the visual reasoning was not the center of the research, it
is still considered useful, especially when the practitioner lacks expertise.
9.5.7.1. Summary
It can be deduced that the application is very usable and therefore, usability has been
successfully achieved.
9.5.8. Usability of the Tool in the Screening Process and Opinion on the System
Dr. Asoka Kamaladasa (Consultant Dermatologist): This could be helpful in the initial
screening of skin cancers.
Dr. Dantha Hewage (Dermatologist): Could be useful.
Dr. Thushani Dabrera (Doctor): If it is intended for a consultant dermatologist who is an
expert in the diagnosis of skin conditions and can differentiate and diagnose skin cancer at
a glance, it is not very useful for them. It will be very useful for a general or family
practitioner/OPD doctor.
Criteria: Self-Evaluation
Overall Concept and Idea: The idea of the research was to identify the effectiveness of the
chosen preprocessing steps for classification and also the effectiveness of hyperparameter
optimization. This has opened newer paths for future research. The findings of the two
experiments carried out can also be extended in future research.
Scope and Depth of the Project: The initial project scope was defined based on the project
timeline. A combination of several approaches had to be tested within the period of time
provided for this project and with the resources that were accessible to the author.
Therefore, the scope had to
The aim of this research was achieved within the given period of time, to an extent, as a
GA was implemented to design and train CNN architectures automatically. Both
experiments were conducted to compare and identify which performs best. It was realized
that the use of heavy preprocessing does not improve the accuracy but rather adversely
affects the performance of the classification.
Complete Chapter 7
To compare the use of preprocessing with not using
preprocessing to build a classification model.
• The author gained knowledge related to Data Science and Machine Learning during
the Placement Year, as the author interned as a Data Scientist, gaining hands-on
experience on Data Science and Machine Learning projects.
• During the placement year, knowledge of the Python programming language was
acquired. This knowledge vastly helped in initiating the project, allowing work on the
project to start much sooner.
A GA was used for hyperparameter optimization in this research. The vanilla
GA was modified to support population variance. Under this concept, in addition
to the fixed population size, a variable number of completely new solutions is
injected into the population at each generation. This variable size remains
constant throughout all generations. Population variance was
introduced to improve the diversity of the solutions produced by the algorithm.
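The population-variance mechanism described above can be sketched as follows. This is a minimal, illustrative example only: the toy fitness function (sum of genes), the gene encoding, and all parameter values are assumptions for demonstration, not the project's actual CNN-hyperparameter encoding or validation-accuracy fitness.

```python
import random

def fitness(ind):
    # Toy fitness: maximize the sum of genes (a stand-in for validation accuracy).
    return sum(ind)

def random_individual(n_genes=8):
    # A candidate solution, e.g. an encoded set of hyperparameters.
    return [random.randint(0, 9) for _ in range(n_genes)]

def evolve(pop_size=20, variance_size=5, generations=30, seed=42):
    """Vanilla GA modified with 'population variance': at every generation,
    `variance_size` completely new random individuals are injected before
    selection, to preserve diversity. `variance_size` stays constant."""
    random.seed(seed)
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Population-variance step: add fresh random solutions.
        population += [random_individual() for _ in range(variance_size)]
        # Selection: keep the fittest pop_size individuals.
        population.sort(key=fitness, reverse=True)
        population = population[:pop_size]
        # Crossover and mutation to produce the next generation.
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(population[:10], 2)  # pick two elite parents
            cut = random.randrange(1, len(a))         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                 # mutation
                child[random.randrange(len(child))] = random.randint(0, 9)
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

best = evolve()
```

Injecting fresh random solutions each generation counteracts premature convergence: even after the elite dominates selection, the variance step keeps exploring regions of the search space the current population has abandoned.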
The application of EfficientNet within the U-Net architecture has not been explored before
for skin cancer image segmentation. The results from this research show that the approach
is promising. Therefore, further research can be carried out to experiment
with larger EfficientNet variants such as EfficientNetB7.
This is a review paper published by the author. It analyses the different computer-aided
approaches that have been proposed by researchers to date and compares
them with each other. The limitations of the compared systems have also been
identified and listed.
10.8. Limitations
1. The lack of variation in the datasets used for both segmentation and classification.
It is important to introduce variations such as ethnicity and skin colour; such
variations would allow models trained on the dataset to generalize better to the general public.
References
Abraham, N. and Mefraz Khan, N., 2018. A Novel Focal Tversky loss function with
improved Attention U-Net for lesion segmentation. [online] Available at:
<https://arxiv.org/abs/1810.07842> [Accessed 23 January 2020].
Albahar, M., 2019. Skin Lesion Classification Using Convolutional Neural Network With
Novel Regularizer. IEEE Access, 7, pp.38306-38313.
Almaraz-Damian, J., Ponomaryov, V. and Rendon-Gonzalez, E., 2016. Melanoma CADe
based on ABCD Rule and Haralick Texture Features. 2016 9th International Kharkiv
Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter
Waves (MSMW).
Al-masni, M., Al-antari, M., Choi, M., Han, S. and Kim, T. (2018). Skin lesion
segmentation in dermoscopy images via deep full resolution convolutional
networks. Computer Methods and Programs in Biomedicine, 162, pp.221-231.
Alom, M., Taha, T., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M., Hasan, M., Van
Essen, B., Awwal, A. and Asari, V., 2019. A State-of-the-Art Survey on Deep Learning
Theory and Architectures. Electronics, 8(3), p.292.
Alquran, H., Qasmieh, I., Alqudah, A., Alhammouri, S., Alawneh, E., Abughazaleh, A.
and Hasayen, F., 2017. The melanoma skin cancer detection and classification using
support vector machine. 2017 IEEE Jordan Conference on Applied Electrical Engineering
and Computing Technologies (AEECT).
American Cancer Society, 2020. Cancer Facts & Figures 2020 | American Cancer Society.
[online] Cancer.org. Available at: <https://www.cancer.org/research/cancer-facts-
statistics/all-cancer-facts-figures/cancer-facts-figures-2020.html> [Accessed 1 December
2019].
Aziz, H., 2015. Artifact Removal from Skin Dermoscopy Images to Support Automated
Melanoma Diagnosis. Al-Rafidain Engineering, [online] 23, pp.22-30. Available
at:
<https://www.researchgate.net/publication/331960732_Artifact_Removal_from_Skin_De
rmoscopy_Images_to_Support_to_support_automated_melanoma_diagnosis> [Accessed
15 January 2020].
Baker, B., Gupta, O., Naik, N. and Raskar, R., 2016. Designing Neural Network
Architectures using Reinforcement Learning. ICLR 2017, [online] Available at:
<https://arxiv.org/abs/1611.02167> [Accessed 15 January 2020].
Barata, C. and Marques, J., 2019. Deep Learning For Skin Cancer Diagnosis With
Hierarchical Architectures. 2019 IEEE 16th International Symposium on Biomedical
Imaging (ISBI 2019).
Barata, C., Celebi, M. and Marques, J., 2019. A Survey of Feature Extraction in
Dermoscopy Image Analysis of Skin Cancer. IEEE Journal of Biomedical and Health
Informatics, 23(3), pp.1096-1109.
Bassi, S. and Gomekar, A., 2019. Deep Learning Diagnosis of Pigmented Skin Lesions.
2019 10th International Conference on Computing, Communication and Networking
Technologies (ICCCNT).
Bisla, D., Choromanska, A., Berman, R., Stein, J. and Polsky, D., 2019. Towards
Automated Melanoma Detection With Deep Learning: Data Purification and
Augmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW).
Carli, P., Quercioli, E., Sestini, S., Stante, M., Ricci, L., Brunasso, G. and DE Giorgi, V.
(2003). Pattern analysis, not simplified algorithms, is the most reliable method for teaching
dermoscopy for melanoma diagnosis to residents in dermatology. British Journal of
Dermatology, [online] 148(5), pp.981-984. Available at:
https://www.ncbi.nlm.nih.gov/pubmed/12786829 [Accessed 9 Oct. 2019].
Cochrane, 2020. How Accurate Is Visual Inspection Of Skin Lesions With The Naked Eye
For Diagnosis Of Melanoma In Adults?. [online] Cochrane. Available at:
<https://www.cochrane.org/CD013194/SKIN_how-accurate-visual-inspection-skin-
lesions-naked-eye-diagnosis-melanoma-adults> [Accessed 1 December 2019].
Codella, N., Nguyen, Q., Pankanti, S., Gutman, D., Helba, B., Halpern, A. and Smith,
J.R., 2016. Deep Learning Ensembles for Melanoma Recognition in Dermoscopy Images.
[online] Available at: <https://arxiv.org/abs/1610.04662> [Accessed 7 May 2020].
Dalila, F., Zohra, A., Reda, K. and Hocine, C., 2017. Segmentation and classification of
melanoma and benign skin lesions. Optik, 140, pp.749-761.
Demir, A., Yilmaz, F. and Kose, O., 2019. Early detection of skin cancer using deep
learning architectures: resnet-101 and inception-v3. 2019 Medical Technologies Congress
(TIPTEKNO).
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K. and Fei-Fei, L., 2009. ImageNet: A large-
scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern
Recognition.
Dinnes, J., Matin, R., Moreau, J., Patel, L., Chan, S., Chuchu, N., Bayliss, S., Grainge, M.,
Takwoingi, Y., Davenport, C., Walter, F., Fleming, C., Schofield, J., Shroff, N., Godfrey,
K., O'Sullivan, C., Deeks, J. and Williams, H., 2015. Tests to assist in the diagnosis of
cutaneous melanoma in adults: a generic protocol. Cochrane Database of Systematic
Reviews.
Esteva, A., Kuprel, B., Novoa, R., Ko, J., Swetter, S., Blau, H. and Thrun, S., 2017.
Dermatologist-level classification of skin cancer with deep neural networks. Nature,
542(7639), pp.115-118.
Farooq, M., Azhar, M. and Raza, R., 2016. Automatic Lesion Detection System (ALDS)
for Skin Cancer Classification Using SVM and Neural Classifiers. 2016 IEEE 16th
International Conference on Bioinformatics and Bioengineering (BIBE).
Ferris, L., Harkes, J., Gilbert, B., Winger, D., Golubets, K., Akilov, O. and Satyanarayanan,
M., 2015. Computer-aided classification of melanocytic lesions using dermoscopic images.
Journal of the American Academy of Dermatology, 73(5), pp.769-776.
Gachon, J., Beaulieu, P., Sei, J., Gouvernet, J., Claudel, J., Lemaitre, M., Richard, M. and
Grob, J. (2005). First Prospective Study of the Recognition Process of Melanoma in
Dermatological Practice. Archives of Dermatology, [online] 141(4). Available at:
https://www.ncbi.nlm.nih.gov/pubmed/15837860 [Accessed 9 Oct. 2019].
Ganster, H., Pinz, P., Rohrer, R., Wildling, E., Binder, M. and Kittler, H., 2001. Automated
melanoma recognition. IEEE Transactions on Medical Imaging, 20(3), pp.233-239.
Glazer, A., Rigel, D., Winkelmann, R. and Farberg, A., 2017. Clinical Diagnosis of Skin
Cancer. Dermatologic Clinics, 35(4), pp.409-416.
Goyal, M., Hoon Yap, M. and Hassanpour, S., 2017. Multi-class Semantic Segmentation
of Skin Lesions via Fully Convolutional Networks. VSI: AI in Breast Cancer Care, [online]
Available at: <https://arxiv.org/abs/1711.10449> [Accessed 18 January 2020].
Grzesiak-Kopeć, K., Nowak, L. and Ogorzałek, M., 2015. Automatic Diagnosis of
Melanoid Skin Lesions Using Machine Learning Methods. Artificial Intelligence and Soft
Computing, pp.577-585.
Hameed, N., Ruskin, A., Abu Hassan, K. and Hossain, M., 2016. A comprehensive survey
on image-based computer aided diagnosis systems for skin cancer. 2016 10th International
Conference on Software, Knowledge, Information Management & Applications (SKIMA).
Hekler, A., Utikal, J., Enk, A., Hauschild, A., Weichenthal, M., Maron, R., Berking, C.,
Haferkamp, S., Klode, J., Schadendorf, D., Schilling, B., Holland-Letz, T., Izar, B., von
Kalle, C., Fröhling, S., Brinker, T., Schmitt, L., Peitsch, W., Hoffmann, F., Becker, J.,
Drusio, C., Jansen, P., Klode, J., Lodde, G., Sammet, S., Schadendorf, D., Sondermann,
W., Ugurel, S., Zader, J., Enk, A., Salzmann, M., Schäfer, S., Schäkel, K., Winkler, J.,
Wölbing, P., Asper, H., Bohne, A., Brown, V., Burba, B., Deffaa, S., Dietrich, C., Dietrich,
M., Drerup, K., Egberts, F., Erkens, A., Greven, S., Harde, V., Jost, M., Kaeding, M.,
Kosova, K., Lischner, S., Maagk, M., Messinger, A., Metzner, M., Motamedi, R.,
Rosenthal, A., Seidl, U., Stemmermann, J., Torz, K., Velez, J., Haiduk, J., Alter, M., Bär,
C., Bergenthal, P., Gerlach, A., Holtorf, C., Karoglan, A., Kindermann, S., Kraas, L.,
Felcht, M., Gaiser, M., Klemke, C., Kurzen, H., Leibing, T., Müller, V., Reinhard, R.,
Utikal, J., Winter, F., Berking, C., Eicher, L., Hartmann, D., Heppt, M., Kilian, K.,
Krammer, S., Lill, D., Niesert, A., Oppel, E., Sattler, E., Senner, S., Wallmichrath, J.,
Wolff, H., Gesierich, A., Giner, T., Glutsch, V., Kerstan, A., Presser, D., Schrüfer, P.,
Schummer, P., Stolze, I., Weber, J., Drexler, K., Haferkamp, S., Mickler, M., Stauner, C.
and Thiem, A., 2019. Superior skin cancer classification by the combination of human and
artificial intelligence. European Journal of Cancer, 120, pp.114-121.
Herath, H., Keragala, B., Udeshika, W., Samarawickrama, S., Pahalagamage, S.,
Kulatunga, A. and Rodrigo, C., 2018. Knowledge, attitudes and skills in melanoma
diagnosis among doctors: a cross sectional study from Sri Lanka. BMC Research Notes,
11(1).
Hosny, K., Kassem, M. and Foaud, M., 2019. Classification of skin lesions using transfer
learning and augmentation with Alex-net. PLOS ONE, 14(5), p.e0217293.
Jafari, M., Karimi, N., Nasr-Esfahani, E., Samavi, S., Soroushmehr, S., Ward, K. and
Najarian, K., 2016. Skin lesion segmentation in clinical images using deep learning. 2016
23rd International Conference on Pattern Recognition (ICPR).
Jain, S., jagtap, V. and Pise, N. (2015). Computer Aided Melanoma Skin Cancer Detection
Using Image Processing. Procedia Computer Science, [online] 48, pp.735-740. Available
at: https://www.sciencedirect.com/science/article/pii/S1877050915007188 [Accessed 13
Nov. 2019].
Jaworek-Korjakowska, J. and Tadeusiewicz, R., 2013. Hair removal from dermoscopic
color images. Bio-Algorithms and Med-Systems, 9(2).
Kadampur, M. and Al Riyaee, S., 2020. Skin cancer detection: Applying a deep learning
based model driven architecture in the cloud for classifying dermal cell images. Informatics
in Medicine Unlocked, 18, p.100282.
Kawahara, J., BenTaieb, A. and Hamarneh, G. (2016). Deep features to classify skin
lesions. 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). [online]
Available at: https://ieeexplore.ieee.org/document/7493528 [Accessed 7 Nov. 2019].
Kittler, H., Pehamberger, H., Wolff, K. and Binder, M. (2002). Diagnostic accuracy of
dermoscopy. The Lancet Oncology, [online] 3(3), pp.159-165. Available at:
https://www.sciencedirect.com/science/article/abs/pii/S1470204502006794 [Accessed 9
Oct. 2019].
Kumar, N., 2010. Gradient Based Techniques for the Avoidance of Oversegmentation.
BEATS 2010.
Lee, T., Ng, V., Gallagher, R., Coldman, A. and McLean, D., 1997. Dullrazor®: A software
approach to hair removal from images. Computers in Biology and Medicine, 27(6), pp.533-
543.
Liu, C., Zoph, B., Neumann, M., Shlens, J., Hua, W., Li, L., Fei-Fei, L., Yuille, A., Huang,
J. and Murphy, K. (2019). Progressive Neural Architecture Search. [online]
Openaccess.thecvf.com. Available at:
http://openaccess.thecvf.com/content_ECCV_2018/html/Chenxi_Liu_Progressive_Neura
l_Architecture_ECCV_2018_paper.html [Accessed 10 Oct. 2019].
Mahbod, A., Schaefer, G., Ellinger, I., Ecker, R., Pitiot, A. and Wang, C., 2019. Fusing
fine-tuned deep features for skin lesion classification. Computerized Medical Imaging and
Graphics, 71, pp.19-29.
Majtner, T., Lidayova, K., Yildirim-Yayilgan, S. and Hardeberg, J., 2016. Improving skin
lesion segmentation in dermoscopic images by thin artefacts removal methods. 2016 6th
European Workshop on Visual Information Processing (EUVIP).
Majtner, T., Yildirim-Yayilgan, S. and Hardeberg, J., 2016. Combining deep learning and
hand-crafted features for skin lesion classification. 2016 Sixth International Conference on
Image Processing Theory, Tools and Applications (IPTA).
Manuel, P. and AlGhamdi, J., 2003. A data-centric design for n-tier architecture.
Information Sciences, 150(3-4), pp.195-206.
Masood, A. and Ali Al-Jumaily, A., 2013. Computer Aided Diagnostic Support System for
Skin Cancer: A Review of Techniques and Algorithms. International Journal of
Biomedical Imaging, 2013, pp.1-22.
Masood, A., Al-Jumaily, A. and Anam, K., 2015. Self-supervised learning model for skin
cancer diagnosis. 2015 7th International IEEE/EMBS Conference on Neural Engineering
(NER).
Mehta, P. and Shah, B., 2016. Review on Techniques and Steps of Computer Aided Skin
Cancer Diagnosis. Procedia Computer Science, 85, pp.309-316.
Menegola, A., Fornaciali, M., Pires, R., Bittencourt, F., Avila, S. and Valle, E. (2017).
Knowledge transfer for melanoma screening with deep learning. 2017 IEEE 14th
International Symposium on Biomedical Imaging (ISBI 2017). [online] Available at:
https://ieeexplore.ieee.org/abstract/document/7950523 [Accessed 10 Oct. 2019].
Meskini, E., Helfroush, M., Kazemi, K. and Sepaskhah, M., 2018. A New Algorithm for
Skin Lesion Border Detection in Dermoscopy Images. J Biomed Phys Eng, [online] 8(1),
pp.117–126. Available at: <https://www.ncbi.nlm.nih.gov/pubmed/29732346> [Accessed
20 January 2020].
Mikkilineni, R., Weinstock, M., Goldstein, M., Dube, C. and Rossi, J., 2001. Impact of the
basic skin cancer triage curriculum on providers’ skin cancer control practices. Journal of
General Internal Medicine, 16(5), pp.302-307.
Minango, P., Iano, Y., Borges Monteiro, A., Padilha França, R. and Gomes de Oliveira,
G., 2019. Classification of Automatic Skin Lesions from Dermoscopic Images Utilizing
Deep Learning. SET INTERNATIONAL JOURNAL OF BROADCAST ENGINEERING,
2019(1), pp.107-114.
Mishra, R. and Daescu, O., 2017. Deep learning for skin lesion segmentation. 2017 IEEE
International Conference on Bioinformatics and Biomedicine (BIBM).
Murugan, A., Nair, S. and Kumar, K., 2019. Detection of Skin Cancer Using SVM,
Random Forest and kNN Classifiers. Journal of Medical Systems, 43(8).
Nasr-Esfahani, E., Samavi, S., Karimi, N., Soroushmehr, S., Jafari, M., Ward, K. and
Najarian, K., 2016. Melanoma detection by analysis of clinical images using convolutional
neural network. 2016 38th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC).
National Cancer Institute. (2019). Cancer Statistics. [online] Available at:
https://www.cancer.gov/about-cancer/understanding/statistics [Accessed 1 Nov. 2019].
Premaladha, J. and Ravichandran, K., 2016. Novel Approaches for Diagnosing Melanoma
Skin Lesions Through Supervised and Deep Learning Algorithms. Journal of Medical
Systems, 40(4).
Radiology Business, 2020. Is This the End? Machine Learning and 2 Other Threats to
Radiology's Future. [online] Radiology Business. Available at:
<https://www.radiologybusiness.com/topics/technology-management/end-machine-
learning-and-2-other-threats-radiologys-future> [Accessed 1 December 2019].
Reliant Medical Group. (2019). Three Most Common Skin Cancers - Reliant Medical
Group. [online] Available at: https://reliantmedicalgroup.org/medical-
services/dermatology/three-most-common-skin-cancers/ [Accessed 9 Oct. 2019].
Robertson, F. and Fitzgerald, L., 2017. Skin cancer in the youth population of the United
Kingdom. Journal of Cancer Policy, 12, pp.67-71.
Romero-Lopez, A., Giro-i-Nieto, X., Burdick, J. and Marques, O. (2017). Skin Lesion
Classification from Dermoscopic Images Using Deep Learning Techniques. Biomedical
Engineering. [online] Available at: https://ieeexplore.ieee.org/abstract/document/7893267
[Accessed 11 Oct. 2019].
Rubegni, P., Cevenini, G., Burroni, M., Perotti, R., Dell'Eva, G., Sbano, P., Miracco, C.,
Luzi, P., Tosi, P., Barbini, P. and Andreassi, L. (2002). Automated diagnosis of pigmented
skin lesions. International Journal of Cancer, [online] 101(6), pp.576-580. Available at:
https://doi.org/10.1002/ijc.10620 [Accessed 9 Oct. 2019].
Shin, H., Roth, H., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D. and Summers,
R. (2016). Deep Convolutional Neural Networks for Computer-Aided Detection: CNN
Architectures, Dataset Characteristics and Transfer Learning. IEEE Transactions on
Medical Imaging, [online] 35(5), pp.1285-1298. Available at:
https://ieeexplore.ieee.org/abstract/document/7404017 [Accessed 10 Oct. 2019].
Shin, Y. and Balasingham, I., 2017. Comparison of hand-craft feature based SVM and
CNN based deep learning framework for automatic polyp classification. 2017 39th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society
(EMBC).
Silveira, M., Nascimento, J., Marques, J., Marcal, A., Mendonca, T., Yamauchi, S., Maeda,
J. and Rozeira, J., 2009. Comparison of Segmentation Methods for Melanoma Diagnosis
in Dermoscopy Images. IEEE Journal of Selected Topics in Signal Processing, 3(1), pp.35-
45.
Sreelatha, T., Subramanyam, M. and Prasad, M., 2019. Early Detection of Skin Cancer
Using Melanoma Segmentation technique. Journal of Medical Systems, 43(7).
Sun, Y., Xue, B., Zhang, M. and Yen, G. (2019). Evolving Deep Convolutional Neural
Networks for Image Classification. IEEE Transactions on Evolutionary Computation,
[online] pp.1-1. Available at: https://arxiv.org/abs/1710.10741 [Accessed 7 Nov. 2019].
Tan, T., Zhang, L. and Lim, C., 2019. Intelligent skin cancer diagnosis using improved
particle swarm optimization and deep learning models. Applied Soft Computing, 84,
p.105725.
The Skin Cancer Foundation, 2020. Skin Cancer Facts & Statistics - The Skin Cancer
Foundation. [online] The Skin Cancer Foundation. Available at:
<https://www.skincancer.org/skin-cancer-information/skin-cancer-facts/> [Accessed 1
December 2019].
Tschandl, P., Rosendahl, C., Akay, B., Argenziano, G., Blum, A., Braun, R., Cabo, H.,
Gourhant, J., Kreusch, J., Lallas, A., Lapins, J., Marghoob, A., Menzies, S., Neuber, N.,
Paoli, J., Rabinovitz, H., Rinner, C., Scope, A., Soyer, H., Sinz, C., Thomas, L., Zalaudek,
I. and Kittler, H., 2019. Expert-Level Diagnosis of Nonpigmented Skin Cancer by
Combined Convolutional Neural Networks. JAMA Dermatology, 155(1), p.58.
Vikhar, P., 2016. Evolutionary algorithms: A critical review and its future prospects. 2016
International Conference on Global Trends in Signal Processing, Information Computing
and Communication (ICGTSPICC).
Vincent, L. and Soille, P., 1991. Watersheds in digital spaces: an efficient algorithm based
on immersion simulations. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 13(6), pp.583-598.
Wang, H., Chen, X., Moss, R., Stanley, R., Stoecker, W., Celebi, M., Szalapski, T.,
Malters, J., Grichnik, J., Marghoob, A., Rabinovitz, H. and Menzies, S., 2010. Watershed
segmentation of dermoscopy images using a watershed technique. Skin Research and
Technology.
Weng, Y., Zhou, T., Liu, L. and Xia, C. (2019). Automatic Convolutional Neural
Architecture Search for Image Classification Under Different Scenes. IEEE Access,
[online] 7, pp.38495-38506. Available at: https://ieeexplore.ieee.org/document/8676019
[Accessed 10 Oct. 2019].
World Health Organization, 2020. Skin Cancers. [online] World Health Organization.
Available at: <https://www.who.int/uv/faq/skincancer/en/index1.html> [Accessed 01 May
2020].
Wu, Z., Zhao, S., Peng, Y., He, X., Zhao, X., Huang, K., Wu, X., Fan, W., Li, F., Chen,
M., Li, J., Huang, W., Chen, X. and Li, Y., 2019. Studies on Different CNN Algorithms
for Face Skin Disease Classification Based on Clinical Images. IEEE Access, 7, pp.66505-
66511.
Yap, J., Yolland, W. and Tschandl, P., 2018. Multimodal skin lesion classification using
deep learning. Experimental Dermatology, 27(11), pp.1261-1267.
Youssef, A., Bloisi, D., Muscio, M., Pennisi, A., Nardi, D. and Facchiano, A., 2018. Deep
Convolutional Pixel-wise Labeling for Skin Lesion Image Segmentation. 2018 IEEE
International Symposium on Medical Measurements and Applications (MeMeA).
Yuan, X., Yang, Z., Zouridakis, G. and Mullani, N., 2006. SVM-based Texture
Classification and Application to Early Melanoma Detection. 2006 International
Conference of the IEEE Engineering in Medicine and Biology Society.
Zahangir Alom, M., Aspiras, T., Taha, T.M. and Asari, V.K., 2020. Skin Cancer
Segmentation and Classification with NABLA-N and Inception Recurrent Residual
Convolutional Networks. [online] Available at:
https://www.researchgate.net/publication/332669351_Skin_Cancer_Segmentation_and_C
lassification_with_NABLA-
N_and_Inception_Recurrent_Residual_Convolutional_Networks
Eddine Guissous, A., 2019. Skin Lesion Classification Using Deep Neural Network.
Appendices
Appendix A – Conceptual Graph
segmentation model.
Intersection over Union (IoU): the Jaccard coefficient; used to evaluate the accuracy of the segmentation model.
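The IoU metric mentioned above can be computed directly from a ground-truth mask and a predicted mask. The helper below is an illustrative sketch over flattened binary masks (plain Python lists), not the project's actual evaluation code.

```python
def iou(mask_a, mask_b):
    """Intersection over Union (Jaccard coefficient) for two binary
    masks given as flat lists of 0/1 pixels of equal length."""
    assert len(mask_a) == len(mask_b)
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 or b == 1)
    return intersection / union if union else 1.0  # two empty masks agree perfectly

# Ground-truth lesion mask vs. a predicted mask (illustrative 3x3 images,
# flattened row-major).
truth = [0, 1, 1,
         0, 1, 1,
         0, 0, 0]
pred  = [0, 1, 1,
         0, 1, 0,
         0, 0, 0]
score = iou(truth, pred)  # intersection = 3, union = 4 -> 0.75
```

Because IoU penalises both missed lesion pixels and false positives through the union term, it is a stricter measure of segmentation quality than pixel accuracy, which an all-background prediction can inflate.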