
Received November 1, 2020, accepted November 4, 2020, date of publication November 11, 2020,

date of current version December 1, 2020.


Digital Object Identifier 10.1109/ACCESS.2020.3037258

Deep Learning in Skin Disease Image Recognition: A Review

LING-FANG LI1, XU WANG1, WEI-JIAN HU1, NEAL N. XIONG2, (Senior Member, IEEE), YONG-XING DU1, AND BAO-SHAN LI1
1 School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
2 Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK 74464, USA
Corresponding authors: Wei-Jian Hu (hwj_llf@hotmail.com) and Yong-Xing Du (dyxql@imust.edu.cn)
This work was supported in part by the Inner Mongolia Autonomous Region's major science and technology project on artificial intelligence application technology and product research and development, and on application research and demonstration in modern pastures, under Fund 2019ZD025; in part by the Inner Mongolia Natural Science Foundation project on livestock big data, livestock grazing trajectory mining, and optimized production decision-making research, under Fund 2019MS06021; in part by Inner Mongolia's special project on the transformation of scientific and technological achievements, Xiaowei Yang's entire-industry-chain quality traceability big data service platform; and in part by the Innovation Fund of Inner Mongolia University of Science and Technology, the construction of a chronic disease knowledge map based on natural language processing, under Fund 2019QDL-S10.

ABSTRACT The application of deep learning methods to diagnose diseases has become a new research topic in the medical field. Within medicine, skin disease is one of the most common diseases, and its visual presentation is more prominent than that of other types of diseases. Accordingly, the use of deep learning methods for skin disease image recognition is of great significance and has attracted the attention of researchers. In this study, we review 45 research efforts, published since 2016, on the identification of skin disease using deep learning technology. We analyze these studies in terms of disease type, dataset, data processing technology, data augmentation technology, model for skin disease image recognition, deep learning framework, evaluation indicators, and model performance. Moreover, we summarize traditional and machine-learning-based skin disease diagnosis and treatment methods. We also analyze the current progress in this field and predict four directions that may become future research topics. Our results show that skin disease image recognition methods based on deep learning outperform dermatologists and other computer-aided treatment methods in skin disease diagnosis, and that the fusion of multiple deep learning models achieves the best recognition effect.

INDEX TERMS Deep learning, image recognition, review, skin disease.


I. INTRODUCTION
The skin, the largest organ of the human body, is an important barrier. Its main function is to protect the human body from harmful external substances and to prevent the loss of various nutrients from the body [1]. In everyday life, skin health is affected by many factors, such as solar radiation, smoking, drinking, sports activities, viruses, and the working environment [2]. These factors not only affect the integrity of skin function but can also damage the skin, adversely affect human health, and even threaten human life in severe cases. Skin disease has therefore become one of the most common human diseases; it covers all cultural regions and occurs at all ages. Approximately 30% to 70% of people are in high-risk groups [3]. According to the British Skin Foundation report in 2018, approximately 60% of British people suffer from skin disease [4]; 5.4 million new cases of skin cancer are recorded in the United States every year; and one in five Americans will be diagnosed with a cutaneous malignancy in their lifetime [43]. Skin disease brings not only significant harm to human beings, such as impaired daily activities, loss of interpersonal relationships, and internal organ damage, but also death. It can also cause mental illness, leading to isolation, depression, and even suicide [5]. Accordingly, skin disease has become one of the major topics in the field of medicine. Figure 1 shows the global situation of skin disease.
In the treatment of skin disease, early detection is the critical condition to cure the disease, effectively reduce its impact, and improve the survival rate. Take melanoma in skin disease

The associate editor coordinating the review of this manuscript and approving it for publication was Yudong Zhang.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
208264 VOLUME 8, 2020
L.-F. Li et al.: Deep Learning in Skin Disease Image Recognition: A Review

FIGURE 1. Global skin disease heat map. Panel (a) is a trend chart of worldwide search interest in skin disease; the horizontal axis represents the year and the vertical axis the search interest. The chart shows that global search interest in skin disease has risen year by year since 2011, reaching its peak by 2020. Panel (b) is a global heat map of skin disease search interest by region, starting from 2011. The map shows skin disease concerns in all parts of the world except Greenland in the Arctic Circle, with search interest concentrated near the equator. The highest search interest comes from France, followed by Monaco and Japan.

as an example. In recent years, malignant neoplasms among skin diseases have increased significantly. Malignant melanoma (the deadliest type) is responsible for 10,000 deaths annually in the United States alone [6]. Melanoma is highly lethal but not incurable. If the abnormal proliferation of skin melanocytes is detected in the early stage, the survival rate is 96%; if it is detected only in the late stage, the survival rate drops to 5% [7]. Therefore, early diagnosis and treatment of skin disease can minimize the damage it causes. However, skin disease recognition accuracy is not ideal, owing to the similarity between different skin diseases and the limited number of dermatologists with professional knowledge. The identification of skin disease has thus become a serious scientific challenge.

To address the issue of skin disease diagnosis and treatment, computer-aided diagnosis was used early on for automatic skin disease recognition based on skin disease images [26]. With the rapid development of artificial intelligence technology, deep learning has advanced quickly in computer vision. The medical image processing of skin disease has become an essential component and has received great attention at the intersection of image processing, machine learning, and intelligent medicine. Many experts and scholars have been engaged in the image recognition of skin disease. The recent article published by Dick et al. is a good starting point; it lists in detail the relevant articles on the diagnosis of melanoma with deep learning [8].

This study investigates the research status of skin disease recognition in recent years, summarizes the datasets used by researchers, and analyzes the literature from the aspects of image preprocessing, data augmentation, deep learning models, frameworks, and performance indicators. On the one hand, this study provides dermatologists with a reference for deep learning methods. On the other hand, it helps researchers quickly and accurately retrieve the literature related to dermatological image recognition. This survey's foundation is the rapidly developing artificial-intelligence-based diagnosis technology in the medical field, which has become increasingly popular among researchers. The application of artificial intelligence in other fields has shown its great potential. The fact that at least 45 studies have used deep learning to address skin disease identification and have achieved promising results encouraged the authors to prepare this survey.

This study mainly summarizes the research and application progress of skin disease image recognition based on deep learning. Section II briefly introduces the methods and status of skin disease recognition. Section III introduces the development history of skin disease detection. Section IV focuses on the research progress of skin disease image recognition based on deep learning. Section V summarizes the full text and discusses future research trends of skin disease image recognition based on deep learning.

II. METHOD
In recent years, deep learning has received great attention in skin disease recognition, and research achievements have increased. This study summarizes the relevant literature in the field of skin disease identification from 2016 to 2020. The distribution of the selected papers is shown in Figure 3. The three main steps in analyzing the literature in this field are as follows: (a) use hierarchical search strategies to retrieve and collect relevant literature from each database, (b) conduct a detailed review and analysis of the collected literature, and (c) statistically analyze the relevant data.

The first step is to search for conference papers or journal articles in CNKI and the scientific databases IEEE Xplore, ScienceDirect, Web of Science, Google Scholar, PubMed, arXiv, and Medline by using hierarchical search strategies. The strategy is as follows: first, the year limit is set, and the search covers 2016 to the present; then, the first-level keywords are used for the subject search in each database. The first-level keywords are:
["Artificial intelligence" and "skin disease recognition"] or ["deep learning" and "skin disease recognition"] or ["deep learning" and "skin lesion"].
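As a concrete illustration, the two-level keyword strategy can be expressed as boolean query strings. The helper function and query format below are ours, offered only as a sketch; they are not part of the original search protocol, and each database has its own query syntax.

```python
# Sketch of the hierarchical (two-level) keyword search described above.
# build_queries is an illustrative helper, not from the original study.

def build_queries(pairs):
    """Combine (term_a, term_b) pairs into one boolean subject query."""
    return " OR ".join(f'("{a}" AND "{b}")' for a, b in pairs)

FIRST_LEVEL = [
    ("Artificial intelligence", "skin disease recognition"),
    ("deep learning", "skin disease recognition"),
    ("deep learning", "skin lesion"),
]

# The second level narrows the results with CNN-specific terms.
SECOND_LEVEL = [
    ("Convolutional Neural Network", "Melanoma Recognition"),
    ("Convolutional Neural Network", "Acne Classification"),
    ("Convolutional Neural Network", "Pigmented Skin Disease Classification"),
]

first_query = build_queries(FIRST_LEVEL)
second_query = build_queries(SECOND_LEVEL)
```

In practice, the first-level query would be issued per database with the 2016-onward year filter, and the second-level query would then be applied to the returned set.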




Approximately 2431 documents were retrieved through the first-level keyword search; then, the second-level keywords were used to filter the documents obtained at the upper level. The function of the second-level search is to obtain finer granularity (such as melanoma, acne, and pigmented lesions) and to fill in what the first level missed. In the field of deep learning image recognition, the mainstream method is the convolutional neural network (CNN). Accordingly, the second level uses CNNs as keywords to narrow the search range. The second-level keywords are:
["Convolutional Neural Network" and "Melanoma Recognition"] or ["Convolutional Neural Network" and "Acne Classification"] or ["Convolutional Neural Network" and "Pigmented Skin Disease Classification"].
In this way, 312 papers related to deep learning and suitable for the field of skin disease recognition were screened out. These papers were then sorted by "relevance" and "time". We quickly browsed the "Abstract", "Introduction", and "Conclusions" of the downloaded and cited documents to exclude non-deep-learning and non-dermatological identification documents. We also checked the literature cited in each paper for studies that meet the criteria of skin disease recognition and, where such studies exist, retrieved them. During the browsing, if an expert or scholar was found to have unique insights in the field, we directly searched the author's name. Finally, 45 papers were retained through this hierarchical search strategy. The search strategy process is shown in Figure 2.

FIGURE 2. Search strategy flowchart. The search is mainly divided into two levels. First, the year is restricted. Second, the first-level search and second-level search are conducted.

FIGURE 3. Distribution of selected papers. The horizontal axis represents the year and the vertical axis the number of papers. The number of deep learning studies on skin disease recognition has rapidly increased since 2016. Twenty-one studies were published in 2018 and 2020, accounting for more than 86.6%. In terms of content, 91% (41 articles), 6% (3 articles), and less than 2% (1 article) involved skin disease identification, data generation, and interpretability studies of skin disease identification, respectively.

In the second step, the 45 papers selected in the previous step are analyzed one by one. The following research questions are considered: (1) Which categories of skin disease have been identified in the study? (2) What are the data sources and types used? (3) Are the datasets public, and are they dermoscopic images? Is the data preprocessed or augmented? (4) What are the models used in deep learning, and what frameworks are chosen to build them? (5) What are the innovations of the document and the main improvements of the model? Is model fusion used? (6) What are the performance indicators used, and how is the overall performance? Did the researchers test the performance of the models on different datasets?

The third step is to sort out the analyzed information, summarize the collected datasets, and classify the standard preprocessing techniques and deep learning neural network models in the literature. The network models are divided into single and multiple models, and the performance indicators used in each article are summarized. The main findings are discussed in detail in Section IV.

III. DEVELOPMENT OF SKIN DISEASE DIAGNOSIS TECHNOLOGY
A. TRADITIONAL MEDICAL DIAGNOSTIC PROCESS OF SKIN DISEASE
In the traditional medical diagnosis of skin disease, the doctor distinguishes the status of the patient's skin lesions according to his knowledge and experience or the characteristics and rules presented by dermoscopic images. The diagnostic process can be summarized as follows: first, the doctor's visual observation, namely, visual diagnosis, locates the necessary information about the patient; dermoscopy and histopathological examination then follow. Dermoscopy is a noninvasive, high-definition skin imaging technology that can observe the skin structure at the junction of the lower epidermis and the superficial dermis [23]. Doctors analyze the nature, distribution, arrangement, color, edge and boundary, shape, and appearance of pigmented skin lesions according to dermoscopy detection methods such as the seven-point checklist [12], the ABCD rule [13], chaos and clues [16], the three-point checklist [14], and CASH (color, architecture, symmetry, and homogeneity) [15]. However, only experienced dermatologists can accurately identify the pathological features of pigmented skin diseases, owing to the similarity of skin lesions in color, texture, edge contour, and other features, and to the differences in pathological tissues between patients. Moreover, this method of relying




FIGURE 4. Flow chart of skin disease image recognition based on machine learning. Image processing is divided into image acquisition, image preprocessing, and dataset division; image preprocessing includes image size adjustment, normalization, and noise removal. Image recognition mainly consists of image feature extraction and classification models that classify the extracted features and then output the results.
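The feature-extraction-plus-classifier flow of Figure 4 can be sketched in a few lines. This is a toy illustration only: the color-histogram feature and nearest-centroid classifier below stand in for the hand-crafted features and SVM/Random-Forest/k-NN classifiers cited in this section, and all data and names are ours.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel histograms of an RGB image with values in [0, 1]."""
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0.0, 1.0), density=True)[0]
        for c in range(img.shape[-1])
    ])

def fit_centroids(features, labels):
    """Mean feature vector per class (a stand-in for a real classifier)."""
    return {y: features[labels == y].mean(axis=0) for y in np.unique(labels)}

def predict(centroids, feature):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda y: np.linalg.norm(centroids[y] - feature))

# Toy data: synthetic 'dark' lesions (class 0) vs 'light' lesions (class 1).
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.4, size=(10, 16, 16, 3))
light = rng.uniform(0.6, 1.0, size=(10, 16, 16, 3))
X = np.array([color_histogram(im) for im in np.concatenate([dark, light])])
y = np.array([0] * 10 + [1] * 10)

centroids = fit_centroids(X, y)
query = color_histogram(rng.uniform(0.0, 0.4, size=(16, 16, 3)))
pred = predict(centroids, query)  # a dark query image lands in class 0
```

The point of the sketch is the division of labor: the feature extractor is designed by hand, and only the classifier is learned, which is exactly the limitation the deep learning methods reviewed later remove.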

on experience for diagnosis is far from meeting patients' needs for medical resources. The process from sampling test to doctor's diagnosis, to histopathological examination, and then to the patient's report generally takes 4 to 5 days. This process requires a large amount of time and delays the patient's cure.

In summary, the traditional dermatological diagnosis has the following shortcomings. First, medical resources are lacking: dermatologists with professional skin knowledge are limited, and the mismatch between the growth rate of dermatologists and the incidence rate of skin disease has left many patients with few professional dermatologists. Second, diagnostic accuracy is low: dermatologists with professional knowledge have different work experiences and may give varying diagnoses for the same patient under subjective influences; even the same doctor, affected by light, fatigue, and other factors, may give different diagnostic results for the same skin disease picture. Third, skin disease images are complex owing to the characteristics of skin disease, with small gaps between categories and large gaps within categories. Accordingly, diagnosis is prone to misdiagnosis or missed diagnosis, reducing diagnostic accuracy.

B. SKIN DISEASE IMAGE RECOGNITION BASED ON MACHINE LEARNING
With the development of machine learning, image recognition technology for skin disease based on machine learning was established to address the shortcomings of the traditional diagnostic process. Image recognition based on machine learning is an interdisciplinary field integrating medical skin disease imaging, mathematical modeling, and computer technology; it completes the recognition and diagnosis of skin disease through feature engineering and machine learning classification algorithms. The flow chart is shown in Figure 4.

The earliest study on the automatic classification of skin disease dates back to 1987 [9]. In 2007, Stanley et al. extracted melanomas' color characteristics, established color histograms, and classified them [10], [11]. In 2012, Rahil et al. used wavelet decomposition to derive texture features, modeled and analyzed lesion boundary sequences to derive boundary features, and used shape indicators to derive geometric features; finally, four classifiers, namely, Support Vector Machine, Random Forest, Logistic Model Tree, and Hidden Naive Bayes, were used for classification [17]. In 2013, Ballerini et al. proposed a hierarchical K-NN classifier algorithm for melanoma skin disease based on color and texture, which combines three classifiers hierarchically and uses feature selection to adjust each classifier's feature set to suit its task; the recognition accuracy exceeds 70% [18]. In the same year, Ning et al. used three different machine learning algorithms, namely, ID3 [19], classification and regression trees [20], and AdaBoost, for feature extraction and compared their performance [21]. AdaBoost performed best among these algorithms [22]: on 400 skin images collected with laser confocal scanning microscopy, the recognition accuracy was 94.75%, the specificity was 93%, and the sensitivity was 96.5%.

The image recognition method for skin disease based on machine learning extracts the features of skin disease through manually designed extractors and classifies them with traditional machine learning methods. This approach requires considerable professional medical knowledge to conduct deep exploratory data analysis and dimensionality reduction, and the results can be output only after complex parameter adjustment, which costs a large amount of time and energy. The method also has low portability, as the feature engineering is effective only within the same field. These shortcomings limit the development of machine learning in skin disease recognition. Deep learning, as it has advanced, has achieved good results in image recognition. Compared with traditional image recognition methods, deep learning can automatically mine deep-seated nonlinear relationships in medical images without feature engineering, and the feature extraction is efficient. Deep learning is also adaptable and easy to transfer, so the technology can be more readily applied to different fields and applications.
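The convolution and pooling operations through which CNNs learn such features directly from pixels can be sketched in NumPy. This is an illustrative single-channel toy, not code from any surveyed paper; in a real CNN the kernel weights are learned during training rather than set by hand as they are here.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of an (H, W) image with a (kH, kW) kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2d(fmap, size=2):
    """Non-overlapping max pooling, giving a degree of translation tolerance."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

edge_kernel = np.array([[1.0, -1.0]])  # responds to vertical intensity edges
image = np.zeros((6, 6))
image[:, 3:] = 1.0                     # a vertical boundary, as in a lesion edge
features = maxpool2d(np.abs(conv2d(image, edge_kernel)))
```

Stacking many such learned filters with nonlinearities is what lets the deep models in Section IV replace manual feature engineering.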




TABLE 1. Application categories of skin disease identification.

IV. SKIN DISEASE IMAGE RECOGNITION BASED ON DEEP LEARNING
A. APPLIED SKIN DISEASE FIELD
The main application of deep learning in skin disease recognition is skin disease classification, that is, quantitative feature extraction from lesion tissues through skin disease images, followed by analysis and judgment of the classification. This is the mainstream application direction of skin disease recognition, and the main types of skin disease involved are benign neoplasms and malignant neoplasms. Benign neoplasms are a type of skin disease with a gradually increasing incidence; the gaps between lesions are small, and recognition accuracy is low. The benign neoplasms commonly used for research include nevus (23 articles) and seborrheic keratosis (17 articles).

Malignant neoplasms are another type of skin disease widely used in research. They are cellular dysplasia diseases that occur in the skin and are life-threatening through constant proliferation and metastasis. Identifying malignant tumors is exceptionally significant in skin disease identification because of their high mortality rate. The malignant neoplasms commonly used in research are basal cell carcinoma (13 articles), squamous cell carcinoma (seven articles), and malignant melanoma; among the retrieved literature, melanoma recognition had the largest number of studies, 34. In contrast, studies of non-neoplastic skin diseases are scarce: only three articles on deep learning for eczema and psoriasis identification [46], [50], [73] are collected in this study.

B. DATA SOURCES
Deep learning requires a large amount of data to extract features during training. However, large-scale image data of skin disease are difficult to obtain for several reasons: skin disease images involve patients' privacy, the variety of skin diseases is large, and some diseases are rare. Skin disease images also need to be labeled by experts with appropriate medical knowledge because of the similarity of lesion manifestations among various skin diseases, which limits the size of the skin disease datasets publicly available in academia. Currently, skin disease datasets are mainly divided into self-collected and public datasets. Self-collected datasets are currently less publicly available. Most published dermatological datasets consist of images obtained by dermoscopic imaging and collected from dermatological image databases [30], [32]–[34], [40], [96]. Some datasets are also collected by universities in collaboration with renowned hospitals [25], [26], [29], [37], [61]. The pathological sections of basal cell carcinoma and seborrheic keratosis studied by Meifeng et al. were obtained from the Second Affiliated Hospital of Xi'an Jiaotong University [61]. The HAM10000 dataset consists of dermoscopic images collected from the Department of Dermatology of the Medical University of Vienna in Austria and the dermatology practice of Cliff Rosendahl in Queensland, Australia [29].

In this study, 18 datasets were collected, including 14 public datasets and 4 self-collected datasets [31], [38], [71], [74]; see Table 2 for details. The 18 datasets cover 17 types of skin disease, among which melanoma occurs most frequently: 14 datasets contain relevant data. Eczema and psoriasis appear only in the DermIS, DermQuest, DermNZ, and Dermnet datasets, all of which are nondermoscopic images. The ISIC competition datasets have become the most widely used skin disease datasets owing to their wide influence, massive data, rich types, and other characteristics. ISIC 2018 is the most widely used skin disease dataset: 12 papers used it [31], [42], [52], [53], [55], [56], [65], [70], [75], [78], [83], [84]. Seven papers used the ISIC 2016 dataset [47], [51], [52], [54], [63], [64], [84], and 11 papers utilized the ISIC 2017 dataset [44], [45], [47]–[49], [51], [52], [54], [60], [67]. Researchers also use multiple datasets to conduct experiments because of the limited number of images in a single dataset; for example, researchers at Stanford University organized four skin disease databases into one. One-third of the collected literature uses a mixture of multiple datasets [31], [38], [42], [43], [46], [47], [50]–[52], [57], [72], [75], [79], [84].

C. IMAGE PREPROCESSING
In deep learning image recognition, the model has high requirements for image quality because good image quality can improve the model's generalization ability [107]. Image preprocessing is therefore carried out before model training. Its primary purpose is to eliminate irrelevant information in the image, enhance the detectability of useful and related information, and simplify the data as much as possible to improve the model's feature extraction ability and recognition reliability.
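The typical preprocessing steps surveyed in this section, bilinear resizing to a fixed input size such as 224 × 224, normalization to [0, 1], and grayscale conversion, can be sketched as follows. The helper names are ours, and production code would normally use a library such as OpenCV or Pillow; this NumPy version only illustrates the arithmetic.

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Resize an (H, W, C) image with bilinear interpolation."""
    h, w, _ = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy = (ys - y0)[:, None, None]
    dx = (xs - x0)[None, :, None]
    a = img[y0][:, x0]          # top-left neighbours
    b = img[y0][:, x0 + 1]      # top-right
    c = img[y0 + 1][:, x0]      # bottom-left
    d = img[y0 + 1][:, x0 + 1]  # bottom-right
    return (a * (1 - dy) * (1 - dx) + b * (1 - dy) * dx
            + c * dy * (1 - dx) + d * dy * dx)

def normalize01(img):
    """Map 8-bit pixel values to the [0, 1] interval."""
    return img.astype(np.float32) / 255.0

def to_gray(img):
    """Luminance-weighted grayscale conversion of an RGB image."""
    return img @ np.array([0.299, 0.587, 0.114])

raw = np.random.default_rng(0).integers(0, 256, size=(300, 400, 3))
x = resize_bilinear(normalize01(raw), 224, 224)  # fixed-size network input
gray = to_gray(x)
```

Mapping to [−1, 1] instead is a one-line change (`2 * x - 1`); which interval is used depends on the network being fed.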




TABLE 2. Skin disease dataset.

TABLE 3. Data preprocessing.

This work collected 28 studies on image preprocessing, which are divided into data cleaning and data conversion; Table 3 shows the details.

1) DATA CLEANING
Data cleaning mainly ensures the integrity of the data features of skin disease images. In skin disease image recognition, the commonly used data cleaning approach is noise removal, which reduces the influence of hair and shadow on skin disease recognition. Image quality is affected by the nature of the skin, the environment, the equipment, and the lighting conditions; low-quality images degrade the recognition effect, resulting in a loss of accuracy and additional calculation cost. The commonly used denoising algorithms include spatial domain filtering, transform domain filtering, and partial differential equations. In the selected literature, four papers carried out noise removal on the selected datasets. Hameed et al. and Hagerty et al. removed hair from the skin disease images [31], [46]. Singhal et al. used filters to reduce the influence of noise [56]. Rahul et al. used a nonlocal means denoising method to remove noise [57]. Xiaoyu added noise to skin disease images to study the influence of image noise on skin disease recognition [38].

2) DATA CONVERSION
The purpose of data conversion is to transform data from one format or structure to another according to the requirements of a deep learning model. In the collected literature, the commonly used data conversion technologies are size adjustment (18 articles), normalization (10 articles), and grayscale conversion (three articles). Table 3 shows the details.

a: SIZE ADJUSTMENT
The network model of deep learning is limited by its input size, and the size of its input image is mostly fixed. If the input size of the network model is large, then the abstraction level of the




information cannot meet the needs of the network, and the amount of calculation will increase; if the input size is small, then picture information will be lost. Hence, the network models in deep learning mostly use 224 × 224 or 227 × 227 fixed-size inputs. Seventeen papers carried out size adjustment; four of them did not introduce the size adjustment method used, while the others mainly utilized scaling (seven articles) and cropping (eight articles). Mahbod used zooming and cropping [52], while Mahbod and Zhang used the bilinear interpolation method for scaling [47], [49].

b: NORMALIZATION
In the selected literature, 11 papers normalized the images. Normalization converts the samples' eigenvalues to the same scale by mapping the data to the interval [0,1] or [−1,1], so that the range is determined only by the extreme values of the variables. This method limits the preprocessed data to a specific range to eliminate the adverse effects of particular sample data.

c: GRAYSCALE CONVERSION
Grayscale conversion operates on the single pixels of an image; its primary purpose is to improve image contrast and support threshold processing. In the collected literature, Mahbod et al. [47], Singhal et al. [56], and Hasan et al. [69] used grayscale transformation to process skin disease pictures.

D. IMAGE DATA AUGMENTATION
Skin disease data are difficult to collect because of the personal privacy issues and professional equipment involved in the collection of medical skin disease datasets. Accordingly, less skin disease data has been collected, and the rarity of some diseases makes the data in those categories even scarcer, resulting in an uneven distribution in the collected datasets. In deep learning, small-scale datasets easily lead to insufficient model learning and overfitting. To solve the problem of small skin disease datasets and improve the network model's generalization ability, researchers use data augmentation technology to expand the amount of training data. Data augmentation uses existing data to create new data under the guidance of the task objectives. Traditional image data augmentation expands the dataset by applying geometric transformations and image operations to the original data without changing the data labels; the leading technologies are rotation, mirroring, adding noise, and dimension reduction. A newer data augmentation technology produces simulated data on the basis of the original data by means of generative adversarial networks (GANs) [62]. By learning the internal distribution of the pictures, a GAN is not limited to within-class information but also uses information between categories to synthesize pictures.

Among the papers discussed in this study, 17 used data augmentation, 14 of which used traditional data augmentation technology to expand the dataset. Eleven papers used the rotation method, and the commonly used rotation angles were 90°, 180°, and 270°. Given that the generative adversarial network is a new technology, only three papers (Bissoto et al. [65], Albarqouni et al. [67], and Xianzhen et al. [68]) were collected on the generation of simulated skin disease image data. Among these papers, Bissoto et al. and Albarqouni et al. used GANs to generate data from the ISIC 2017 dataset. See Table 4 for details.

TABLE 4. Data augmentation.

Although traditional data augmentation technology increases the number of images, it does not improve the diversity of the data, because it does not learn the image's characteristics and amounts to a simple copy transformation. Generative models, by contrast, produce images by learning the features of images. Compared with traditional image augmentation technology, a generative model has strong feature learning and expression abilities; the generated image differs from the original image, and the image quality is also higher. This method can effectively address the unbalanced data amounts across skin disease categories, and the diversity of the generated images is dramatically improved.

E. MODEL AND FRAMEWORK
The advantage of deep learning over traditional machine learning is that significant feature representations are automatically learned from raw data, such as image pixels, so that traditional feature engineering is no longer needed. Currently, the primary method of skin disease image recognition is the convolutional neural network (CNN): the convolution and pooling operations of a convolutional network give image recognition translation, rotation, and scale invariance, and the CNN has excellent superiority in feature representation. All the literature collected in this study is based on the CNN model.

From the type of CNN used, the research collected in this study adopts popular CNN architectures, such as AlexNet, VGG, Inception, ResNet, and DenseNet [85]–[89]. Eight articles used AlexNet ([41], [42], [47], [51], [58], [59], [73], and [79]), seven articles used VGG ([47], [51], [53], [54], [59], [60], [71], and [72]), and seven articles utilized Inception ([38], [43], [53], [56], [59], [77], and [84]). Among the collected literature, the largest number, 18 papers, focused

208270 VOLUME 8, 2020


L.-F. Li et al.: Deep Learning in Skin Disease Image Recognition: A Review

on skin disease identification by ResNet ([31], [38], [42], [47], [49], [51], [52], [56], [59], [63], [64], [74]–[76], [78], [81], [83], and [84]), whereas the fewest, four papers, used DenseNet ([55], [56], [74], and [84]). Certain papers used relatively new networks, such as Zhuang et al. with SENet ([42] and [90]) and Hameed et al. with SqueezeNet ([50] and [91]).

In terms of the number of network models used, the collected methods can be classified into two types: single-model methods based on deep learning and multimodel methods based on deep learning. Twenty-seven papers focused on a single-model method and 10 papers on a multimodel method.

1) SKIN DISEASE IMAGE RECOGNITION BASED ON A SINGLE MODEL
Twenty-seven pieces of literature used a single model, among which 10 papers ([38], [39], [43], [56], [58], [63], [72], [74], [75], and [76]) made no model improvement, and the other 15 articles optimized the model. The optimization methods mainly include modifying the fully connected layer (five articles: [41], [44], [60], [73], and [79]), using a machine learning classifier (two articles: [64] and [80]), applying the model to a mobile application (one article: [50]), introducing an attention mechanism (two articles: [49] and [54]), building a custom network model (two articles: [57] and [69]), and multilevel classification (three articles: [46], [55], and [77]). Kawahara et al. [41] and Yuexiang et al. [44] formed fully convolutional networks based on CNNs by removing the fully connected layers. Xueying et al. set weights on the Softmax loss function to improve recognition accuracy [60]. Yan et al. [54] and Zhang et al. [49] introduced the attention mechanism into CNNs. Zhang et al. proposed a CNN based on attention residual learning (ARL-CNN). The network embeds the residual and attention learning mechanisms into each ARL module, builds an ARL-CNN model of arbitrary depth by stacking multiple ARL blocks, and trains it in an end-to-end manner. See Table 5 for details.

2) SKIN DISEASE IMAGE RECOGNITION BASED ON MULTIMODEL FUSION
Multimodel fusion mainly learns through multiple learners and integrates the learners' results by using certain rules. In conventional multimodel fusion, the average method, weighting method, linear regression method, simple majority voting (SMV) rule, and maximum probability rule are used to fuse and recompute the probability results that each model outputs for a certain class, and the fused result is the final output. The other way is to fuse the features learned from multiple models and then output the final result. The flow charts of the above-mentioned two fusion methods are shown in Figure 5.

Fourteen pieces of literature collected in this study concern multimodel fusion, and the details are shown in Table 6. The main methods are multi-input model fusion (two articles), multimodel feature extraction with SVM classification (two articles), a fusion strategy combined with clinical information (four articles), the combination of human and artificial intelligence (two articles), and conventional multiple-model fusion (four articles). AlexNet, VGG, Inception, and ResNet are often used for fusion.

A large number of experiments have proved that the multimodel fusion method is better than the single-model method in the field of skin disease recognition [59], [61]. Given that objective factors limit a single model, a generalization bottleneck is commonly encountered when dealing with problems. Multimodel fusion can combine excellent models and integrate each model's advantages to break through the bottleneck of the generalization ability of a single model.

3) DEEP LEARNING FRAMEWORK
Among the collected literature, only 16 papers reported the deep learning framework used, including Keras [92], TensorFlow [93], MATLAB [94], and PyTorch [95]. Keras has become a popular framework owing to its consistent and concise API, which can significantly reduce the workload of users; six articles used this framework. TensorFlow is often used together with Keras; it can deploy trained models on various servers and mobile devices without executing a separate model decoder or loading a Python interpreter. PyTorch, a deep learning framework released by Facebook AI Research in 2017, has the advantages of flexibility, ease of use, and speed; although it is a newcomer among deep learning frameworks, five of the collected articles use it. MATLAB has a long history of development and has advantages in accurate value annotation for image, video, and audio data; five papers use MATLAB. The details are shown in Table 7.

F. EVALUATING INDICATORS
To evaluate the performance of deep learning in skin disease image recognition, several performance indicators are used. Accuracy (ACC) represents the percentage of correct prediction results in the total sample [99]. Mean average precision (mAP) represents the average precision over all categories [100]. True positive rate (TPR), also known as sensitivity and recall (R) [101], represents the probability of being predicted as positive among the actually positive samples [102]. False positive rate (FPR) refers to the percentage of actually disease-free samples that are incorrectly judged to be diseased. True negative rate (TNR) [103], also known as specificity, indicates the proportion of actually disease-free samples that are correctly judged to be disease-free. The area under the receiver operating characteristic (ROC) curve (AUC) is the probability that, for a randomly chosen positive and negative sample, the classifier scores the positive sample higher than the negative one. The ROC curve shows the performance of the classification model under all classification thresholds [104]. The specific performance indicators are shown in Table 8.
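As an illustrative sketch (not taken from any of the surveyed papers), the indicators above can be computed for a binary (diseased vs. disease-free) task as follows; the labels and classifier scores are toy values, and the AUC is computed directly from its probabilistic interpretation, i.e., the chance that a randomly chosen positive sample is scored above a randomly chosen negative one.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (TPR), and specificity (TNR) from hard predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    tpr = tp / (tp + fn)   # sensitivity / recall
    tnr = tn / (tn + fp)   # specificity
    return acc, tpr, tnr

def auc_by_ranking(y_true, scores):
    """AUC as the probability that a positive sample outranks a negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

# Toy example: three diseased (1) and three disease-free (0) samples.
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]        # hypothetical classifier outputs
y_pred = [int(s >= 0.5) for s in scores]        # threshold at 0.5
acc, tpr, tnr = binary_metrics(y_true, y_pred)
auc = auc_by_ranking(y_true, scores)
```

Note that the thresholded metrics (ACC, TPR, TNR) depend on the chosen cutoff, whereas the ranking-based AUC summarizes performance over all thresholds, which is why the literature reports them separately.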


TABLE 5. Single model.

G. MODEL PERFORMANCE AND ANALYSIS
The papers are difficult, or even impossible, to compare because the datasets used by researchers in the various tasks, the types of skin disease identified, the models, the parameters, and the performance indicators differ. Therefore, readers should consider the comments in this section carefully. Among the 29 papers with accuracy indicators, four papers have an accuracy rate of less than 70%, 15 papers have an accuracy rate of 80% to 89%, and the other nine papers have an accuracy rate of more than 90%. This finding shows the good generalization ability of deep learning; the recognition accuracy of the model built by Sarkar et al. is as high as 99.5% [57]. Among the 24 papers that use the AUC as a measure of performance, nine papers have an AUC of more than 90%, and the highest area under the ROC curve reaches 97.5% [51]. Eleven articles have a range of 80% to 89%,


FIGURE 5. Two fusion methods of multimodel fusion. Panel A shows the conventional multimodel fusion method: after the image is input, features are extracted by multiple CNNs and then categorized by the classifiers to obtain the probability value of each sample image category; finally, the classification probability values are fused by the fusion module, which outputs the category with the maximum probability value. Panel B shows multimodel fusion based on feature fusion, which mainly includes the following steps: after feature extraction, the features extracted by the CNNs are fused, and then the probability value of each category is output by the classifier; the output with the maximum probability value is selected.
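The probability-level fusion of Panel A can be sketched in a few lines; the per-model probability vectors below are made up for illustration, and averaging is only one of the fusion rules named in the text (weighting and simple majority voting are alternatives, the latter also sketched here).

```python
import numpy as np

def fuse_probabilities(prob_list):
    """Average the class-probability vectors from several CNNs and pick the top class."""
    stacked = np.stack(prob_list)        # shape: (n_models, n_classes)
    fused = stacked.mean(axis=0)         # the "average method" fusion rule
    return fused, int(np.argmax(fused))  # fused distribution and predicted class

def majority_vote(prob_list):
    """Simple majority voting (SMV): each model votes for its own top class."""
    votes = [int(np.argmax(p)) for p in prob_list]
    return max(set(votes), key=votes.count)

# Hypothetical softmax outputs of three CNNs for one lesion image (3 classes).
probs = [np.array([0.2, 0.5, 0.3]),
         np.array([0.1, 0.7, 0.2]),
         np.array([0.4, 0.35, 0.25])]
fused, predicted = fuse_probabilities(probs)   # class 1 wins after averaging
```

Here the third model alone would predict class 0, but both the averaged probabilities and the majority vote select class 1, illustrating how fusion can override an individual model's error.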

and only four articles are below 80%. In terms of sensitivity, 28 papers used it as an evaluation indicator: 10 papers have a sensitivity of more than 90%, six papers have a sensitivity of 80% to 89%, and 12 papers have a sensitivity lower than 80%. Twenty-four papers have specificity as an indicator: 11 papers have a specificity higher than 90%, five papers are between 80% and 89%, and the other eight papers are less than 80%. See Table 9 for details.

This paper analyses the collected literature and concludes that deep learning provides excellent performance in most related work. The deep learning-based methods used in the 45 papers are effectively and correctly compared. The papers are difficult to summarize because the datasets, preprocessing techniques, indicators, models, and parameters involved in each article differ. Accordingly, the comparison of this study is strictly limited to the techniques


TABLE 6. Multimodel fusion.

TABLE 7. Deep learning framework.

used in each paper. Based on these constraints, we observed that deep learning performance is better than that of traditional methods, and the automatic feature extraction of deep learning models is more efficient than traditional handcrafted features (such as color [105], histogram [106], statistics, and texture [107]). Hang et al. used deep learning to extract the feature vectors of skin disease and categorized them by using SVM as a classifier [64]; the AUC of this model is 6.17% higher than that of traditional manual feature extraction methods. The performance of multimodel fusion is better than that of a single convolutional network in skin disease image recognition. Harangi et al. and Xu Meifeng et al. compared the performance of multimodel fusion with that of a single model in their respective experiments [59], [61]. The results show that the performance of multimodel fusion is better than that of a single network model; the multimodel approach combines the advantages of each individual model to obtain a better solution. Another way of combining multiple models is to fuse the patient's clinical information (e.g., gender, age, and location of lesions). Hagerty et al. used this method to incorporate the patient's information [31] in addition to the image recognition of skin disease; the ROC area increased from 83% (deep learning only) to 94%.
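This kind of fusion with clinical information can be sketched as a late fusion by concatenation; the feature dimensions, the clinical encoding, and the random classifier weights below are illustrative stand-ins, not the actual pipeline of Hagerty et al. [31].

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs for one patient: a CNN image-feature vector plus
# encoded clinical information (scaled age, sex, lesion-location one-hot).
image_features = rng.normal(size=128)                   # e.g., pooled CNN activations
clinical = np.array([65 / 100.0, 1.0, 0.0, 1.0, 0.0])   # age, sex, location encoding

# Simple late fusion: concatenate both modalities into one vector.
combined = np.concatenate([image_features, clinical])

# A linear classifier head over the combined vector; in practice these
# weights would be learned during training, here they are random stand-ins.
W = rng.normal(size=(2, combined.size))
logits = W @ combined
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                    # softmax over {benign, malignant}
```

The design choice is that the clinical features enter only at the classifier head, so the CNN backbone can be trained or fine-tuned on images alone while the metadata still influences the final decision.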


TABLE 8. Performance index.

V. SUMMARY AND PROSPECT

A. SUMMARY
In this article, we outline the development of skin disease diagnostic technology and the process of traditional medical diagnosis and machine learning-based image recognition of skin disease, and we investigate the results of deep learning-based research in the field of skin disease recognition. Forty-five relevant papers have been identified, and their areas of concern and the skin disease types they address have been obtained. These papers are used as the basis to study the data sources, the preprocessing and data expansion techniques, the technical details of the models, and the overall performance on the evaluation indicators. The deep learning models AlexNet, VGG, GoogleNet, and ResNet are widely used in skin disease recognition, and researchers often use multimodel fusion technology to improve the performance of models. In future work, we plan to apply the concepts and best practices of deep learning described in this survey to other medical fields that have not fully utilized this technology. This survey aims to encourage many researchers to conduct deep learning experiments and apply deep learning models in the field of computer vision involving medicine, thereby achieving smart and convenient development for the medical industry.

B. PROSPECT
Deep learning, a new field of machine learning, has good application prospects in computer vision and provides a new direction for medical image recognition. Currently, deep learning has achieved empirical progress in the field of medical image recognition [97]. However, deep learning is still in its infancy in medical image recognition, and additional developments and comprehensive attempts should be carried out in the field of skin disease recognition in the future. The following summarizes four possible directions for research development:

1) SIMULATION DATA GENERATION OF SKIN DISEASE IMAGE
Deep learning has good generalization ability in skin disease recognition; however, it requires a large amount of data in the learning process. Too little data may cause insufficient feature extraction, which will affect the diagnosis and recognition of lesions. One of the significant problems in skin disease image recognition is the difficulty of obtaining large datasets, owing to the complications of medical image collection and the rarity of individual diseases. Generating simulated pictures of skin disease by using generative adversarial networks is an


TABLE 9. Performance in the literature.

excellent method to solve this problem. Such an approach also improves the performance of the network. This technology will have good development prospects in the field of skin disease recognition in the future.

2) CLINICAL IDENTIFICATION OF SKIN DISEASE
At present, the types of pictures recognized in the field of skin disease recognition are relatively homogeneous. Most images are generated by dermoscopic imaging technology, and few studies have been conducted on pictures taken in clinical settings or in real life. In clinical or real-scene pictures, the location of the lesion is difficult to identify due to several problems, such as low resolution and the presence of shadows, and the lesion area often occupies only a small part of the image or is incomplete. This aspect needs further study. In clinical recognition models, attention mechanisms or target detection methods can be introduced to focus on the lesion sites of skin disease, facilitating the feature extraction of lesion sites to improve the accuracy of clinical recognition models.

3) INTERPRETABILITY STUDY ON SKIN DISEASE IDENTIFICATION
At present, researchers focus on the performance indicators of models. However, various factors in the environment result in shifts in the concepts a model has learned and in its performance, so the factors that drive the model should be understood before certain decisions are made. Currently, interpretability research on skin disease identification models is a big gap in this field and must be explored. In the skin disease recognition field, deep learning models can be applied to identify skin disease and guide clinical automated medical diagnosis. However, these algorithms are still a ''black box'' that generates predictions on the basis of input data. There is no specific interpretation of which skin disease features the deep


learning model uses as a basis for judgment. The development of deep learning should be trustworthy and explainable. Making the algorithm public and transparent in decision-making will give users reliability and security. Interpretability research on skin disease identification can address the biases and auditing problems brought about by artificial intelligence [108]. Interpretability makes artificial intelligence open and transparent in legal, moral, and philosophical aspects.

4) AI DIAGNOSIS AND TREATMENT OF SKIN DISEASE
Mobile devices, such as smartphones, PDAs, and tablets, are becoming an essential part of human life [108]. Embedding AI diagnosis and treatment of skin disease in smart devices will be a significant trend in the future. However, most skin diseases are currently diagnosed with the support of high-performance graphics processors. The computational complexity of the algorithms should be minimized while improving their recognition capability to ensure that they can easily be used on mobile phones and wearable intelligent devices [109]. Such work is of great significance for AI diagnosis and treatment of skin diseases. In intelligent question answering on skin disease, AI assistants can replace dermatologists to inquire about patients and communicate with them about repetitive content, such as diagnosis, prescription, and health promotion. At present, some issues related to philosophy, law, and ethics persist in addition to the technical bottlenecks in AI diagnosis and treatment. Examples of such questions are as follows: On the legal level, is the subject of AI diagnosis and treatment a human or a medical device? What is the legal standard in clinical application? What are the criteria for judging a medical defect or medical negligence? Who is legally responsible for medical accidents in AI diagnosis and treatment?

REFERENCES
[1] N. Hameed, A. Ruskin, K. A. Hassan, and M. Hossain, ''A comprehensive survey on image-based computer aided diagnosis systems for skin cancer,'' in Proc. 10th Int. Conf. Softw., Knowl., Inf. Manage. Appl. (SKIMA), China, 2016, doi: 10.1109/SKIMA.2016.7916221.
[2] N. Hameed, A. Shabut, and M. A. Hossain, ''A computer-aided diagnosis system for classifying prominent skin lesions using machine learning,'' in Proc. 10th Comput. Sci. Electron. Eng. (CEEC), Sep. 2018, pp. 186–191.
[3] M. T. Johnson and J. Roberts, ''Skin conditions and related need for medical care among persons 1–74 years. United States, 1971–1974,'' Vital Health Stat., vol. 11, no. 212, pp. i–v and 1–72, Nov. 1978.
[4] British Skin Foundation. Accessed: May 2, 2018. [Online]. Available: http://www.britishskinfoundation.org.uk
[5] A. Picardi, I. Lega, and E. Tarolla, ''Suicide risk in skin disorders,'' Clinics Dermatol., vol. 31, no. 1, pp. 47–56, Jan. 2013, doi: 10.1016/j.clindermatol.2011.11.006.
[6] H. W. Rogers, M. A. Weinstock, S. R. Feldman, and B. M. Coldiron, ''Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the U.S. population, 2012,'' JAMA Dermatol., vol. 151, no. 10, p. 1081, Oct. 2015.
[7] K. Freedberg, A. Geller, D. Miller, R. Lew, and H. Koh, ''Screening for malignant melanoma: A cost-effectiveness analysis,'' J. Amer. Acad. Dermatol., vol. 41, no. 5, pp. 738–745, 1999, doi: 10.1016/S0190-9622(99)70010-1.
[8] V. Dick, C. Sinz, M. Mittlböck, H. Kittler, and P. Tschandl, ''Accuracy of computer-aided diagnosis of melanoma: A meta-analysis,'' JAMA Dermatol., vol. 155, no. 11, p. 1291, Nov. 2019, doi: 10.1001/jamadermatol.2019.1375.
[9] N. Cascinelli, M. Ferrario, T. Tonelli, and E. Leo, ''A possible new tool for clinical diagnosis of melanoma: The computer,'' J. Amer. Acad. Dermatol., vol. 16, no. 2, pp. 361–367, 1987, doi: 10.1016/S0190-9622(87)70050-4.
[10] R. J. Stanley, W. V. Stoecker, and R. H. Moss, ''A relative color approach to color discrimination for malignant melanoma detection in dermoscopy images,'' Skin Res. Technol., vol. 13, no. 1, pp. 62–72, Feb. 2007, doi: 10.1111/j.1600-0846.2007.00192.x.
[11] N. N. Xiong, Y. Shen, K. Yang, C. Lee, and C. Wu, ''Color sensors and their applications based on real-time color image segmentation for cyber physical systems,'' EURASIP J. Image Video Process., vol. 2018, no. 1, p. 23, Dec. 2018, doi: 10.1186/s13640-018-0258-x.
[12] W. Stolz, A. Riemann, A. B. Cognetta, L. Pillet, W. Abmayr, D. Holzel, P. Bilek, F. Nachbar, M. Landthaler, and O. Braun-Falco, ''ABCD rule of dermatoscopy: A new practical method for early recognition of malignant melanoma,'' Eur. J. Dermatol., vol. 4, no. 7, pp. 521–527, 1994.
[13] G. Argenziano, G. Fabbrocini, P. Carli, V. De Giorgi, E. Sammarco, and M. Delfino, ''Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions,'' Arch. Dermatol., vol. 134, no. 12, pp. 1563–1570, Dec. 1998, doi: 10.1001/archderm.134.12.1563.
[14] H. P. Soyer, G. Argenziano, I. Zalaudek, R. Corona, F. Sera, R. Talamini, F. Barbato, A. Baroni, L. Cicale, A. Di Stefani, P. Farro, L. Rossiello, E. Ruocco, and S. Chimenti, ''Three-point checklist of dermoscopy,'' Dermatology, vol. 208, no. 1, pp. 27–31, 2004, doi: 10.1159/000075042.
[15] J. S. Henning, S. W. Dusza, S. Q. Wang, A. A. Marghoob, H. S. Rabinovitz, D. Polsky, and A. W. Kopf, ''The CASH (color, architecture, symmetry, and homogeneity) algorithm for dermoscopy,'' J. Amer. Acad. Dermatol., vol. 56, no. 1, pp. 45–52, Jan. 2007, doi: 10.1016/j.jaad.2006.09.003.
[16] C. Rosendahl, A. Cameron, I. Mccoll, and D. Wilkinson, ''Dermatoscopy in routine practice: 'Chaos and clues,''' Aust. Fam. Physician, vol. 41, no. 7, pp. 482–487, 2012.
[17] R. Garnavi, M. Aldeen, and J. Bailey, ''Computer-aided diagnosis of melanoma using border- and wavelet-based texture analysis,'' IEEE Trans. Inf. Technol. Biomed., vol. 16, no. 6, pp. 1239–1252, Nov. 2012, doi: 10.1109/TITB.2012.2212282.
[18] L. Ballerini, R. B. Fisher, B. Aldridge, and J. Rees, ''A color and texture based hierarchical K-NN approach to the classification of non-melanoma skin lesions,'' in Color Medical Image Analysis (Lecture Notes in Computational Vision and Biomechanics), M. E. Celebi and G. Schaefer, Eds. Amsterdam, The Netherlands: Springer, 2013, pp. 63–86.
[19] J. Vasquez and B. E. Comendador, ''Competency discovery system: Integrating the enhanced ID3 decision tree algorithm to predict the assessment competency of senior high school students,'' Int. J. Adv. Sci., Eng. Inf. Technol., vol. 9, no. 1, p. 60, Jan. 2019, doi: 10.18517/ijaseit.9.1.7763.
[20] P. L. Gutierrez and S. Siva, Classification and Regression Tree (CART). Amsterdam, The Netherlands: Springer, 2008.
[21] B. He, C. Huang, G. Sharp, S. Zhou, Q. Hu, C. Fang, Y. Fan, and F. Jia, ''Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model,'' Med. Phys., vol. 43, no. 5, pp. 2421–2434, Apr. 2016.
[22] T. Wang, N. Zhang, G. Hou, and X. Yu, ''Performance comparison of several machine learning methods for computer-aided diagnosis of melanoma,'' Appl. Res. Comput., vol. 30, no. 6, pp. 1731–1733, 2013.
[23] M. A. Marchetti, N. C. Codella, S. W. Dusza, D. A. Gutman, B. Helba, A. Kalloo, N. Mishra, C. Carrera, M. E. Celebi, J. L. De Fazio, and N. Jaimes, ''Results of the 2016 international skin imaging collaboration international symposium on biomedical imaging challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images,'' J. Amer. Acad. Dermatol., vol. 78, no. 2, pp. 270–277, 2018, doi: 10.1016/j.jaad.2017.08.016.
[24] G. Argenziano, P. Soyer, V. De Giorgi, D. Piccolo, P. Carli, M. Delfino, A. Ferrari, R. Hofmann-Wellenhof, D. Massi, G. Mazzocchetti, M. Scalvenzi, and I. Wolf, Interactive Atlas of Dermoscopy. Dermoscopy: A Tutorial (Book) and CD-ROM. 2000.
[25] T. Mendonca, P. M. Ferreira, J. S. Marques, A. R. S. Marcal, and J. Rozeira, ''PH2—A dermoscopic image database for research and benchmarking,'' in Proc. 35th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Jul. 2013, pp. 5437–5440.
[26] Z. F. Mohammed and A. A. Abdulla, ''An efficient CAD system for ALL cell identification from microscopic blood images,'' Multimedia Tools Appl., pp. 1–14.
[27] M. Combalia, N. C. F. Codella, V. Rotemberg, B. Helba, V. Vilaplana, O. Reiter, C. Carrera, A. Barreiro, A. C. Halpern, S. Puig, and J. Malvehy, ''BCN20000: Dermoscopic lesions in the wild,'' 2019, arXiv:1908.02288. [Online]. Available: https://arxiv.org/abs/1908.02288


[28] N. C. F. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler, and A. Halpern, ''Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC),'' in Proc. IEEE 15th Int. Symp. Biomed. Imag. (ISBI), Apr. 2018, pp. 168–172.
[29] P. Tschandl, C. Rosendahl, and H. Kittler, ''The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,'' Sci. Data, vol. 5, no. 1, Dec. 2018, Art. no. 180161, doi: 10.1038/sdata.2018.161.
[30] ISIC Project-ISIC Archive. Accessed: May 23, 2019. [Online]. Available: https://www.isic-archive.com
[31] J. R. Hagerty, R. J. Stanley, H. A. Almubarak, N. Lama, R. Kasmi, P. Guo, R. J. Drugge, H. S. Rabinovitz, M. Oliviero, and W. V. Stoecker, ''Deep learning and handcrafted method fusion: Higher diagnostic accuracy for melanoma dermoscopy images,'' IEEE J. Biomed. Health, vol. 23, no. 4, pp. 1385–1391, Jul. 2019, doi: 10.1109/JBHI.2019.2891049.
[32] DermIS. Accessed: Jun. 9, 2017. [Online]. Available: http://www.dermis.net/dermisroot/en/home/index.htm
[33] Derm101 Image Library. Accessed: Jan. 12, 2018. [Online]. Available: https://www.derm101.com/image-library/
[34] DermNet NZ Image Library. Accessed: Jan. 13, 2018. [Online]. Available: https://www.dermnetnz.org/image-library/
[35] D. Gutman, N. C. F. Codella, E. Celebi, B. Helba, M. Marchetti, N. Mishra, and A. Halpern, ''Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC),'' 2016, arXiv:1605.01397. [Online]. Available: http://arxiv.org/abs/1605.01397
[36] N. C. F. Codella et al., ''Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC),'' in Proc. IEEE 15th Int. Symp. Biomed. Imag. (ISBI), Washington, DC, USA, 2018, pp. 168–172, doi: 10.1109/ISBI.2018.8363547.
[37] MED-NODE Dataset. Accessed: Jan. 13, 2018. [Online]. Available: http://www.cs.rug.nl/~imaging/databases/melanoma_naevi/
[38] X. Fan, M. Dai, C. Liu, F. Wu, X. Yan, Y. Feng, Y. Feng, and B. Su, ''Effect of image noise on the classification of skin lesions using deep convolutional neural networks,'' Tsinghua Sci. Technol., vol. 25, no. 3, pp. 425–434, Jun. 2020, doi: 10.26599/TST.2019.9010029.
[39] S. S. Han, M. S. Kim, W. Lim, G. H. Park, I. Park, and S. E. Chang, ''Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm,'' J. Investigative Dermatol., vol. 138, no. 7, pp. 1529–1538, Jul. 2018, doi: 10.1016/j.jid.2018.01.028.
[40] American Cancer Society-Melanoma Skin Cancer. Accessed: Oct. 23, 2015. [Online]. Available: http://www.cancer.org/acs/groups/cid/documents/webcontent/003 120-pdf
[41] J. Kawahara, A. BenTaieb, and G. Hamarneh, ''Deep features to classify skin lesions,'' in Proc. IEEE 13th Int. Symp. Biomed. Imag. (ISBI), Apr. 2016, pp. 1397–1400.
[42] D. Zhuang, K. Chen, and J. Chang, ''CS-AF: A cost-sensitive multi-classifier active fusion framework for skin lesion classification,'' 2020, arXiv:2004.12064. [Online]. Available: https://arxiv.org/abs/2004.12064
[43] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, ''Dermatologist-level classification of skin cancer with deep neural networks,'' Nature, vol. 542, no. 7639, pp. 115–118, Feb. 2017, doi: 10.1038/nature21056.
[44] Y. Li and L. Shen, ''Skin lesion analysis towards melanoma detection using deep learning network,'' Sensors, vol. 18, no. 2, p. 556, Feb. 2018, doi: 10.3390/s18020556.
[45] S. Serte and H. Demirel, ''Gabor wavelet-based deep learning for skin lesion classification,'' Comput. Biol. Med., vol. 113, Oct. 2019, Art. no. 103423, doi: 10.1016/j.compbiomed.2019.103423.
[46] N. Hameed, A. M. Shabut, M. K. Ghosh, and M. A. Hossain, ''Multi-class multi-level classification algorithm for skin lesions classification using machine learning techniques,'' Expert Syst. Appl., vol. 141, Mar. 2020, Art. no. 112961, doi: 10.1016/j.eswa.2019.112961.
[47] A. Mahbod, G. Schaefer, I. Ellinger, R. Ecker, A. Pitiot, and C. Wang, ''Fusing fine-tuned deep features for skin lesion classification,'' Computerized Med. Imag. Graph., vol. 71, pp. 19–29, Jan. 2019, doi: 10.1016/j.compmedimag.2018.10.007.
[48] K. Matsunaga, A. Hamada, A. Minagawa, and H. Koga, ''Image classification of melanoma, nevus and seborrheic keratosis by deep neural network ensemble,'' 2017, arXiv:1703.03108. [Online]. Available: https://arxiv.org/abs/1703.03108
[49] J. Zhang, Y. Xie, Y. Xia, and C. Shen, ''Attention residual learning for skin lesion classification,'' IEEE Trans. Med. Imag., vol. 38, no. 9, pp. 2092–2103, Sep. 2019, doi: 10.1109/TMI.2019.2893944.
[50] N. Hameed, A. Shabut, F. Hameed, S. Cirstea, S. Harriet, and A. Hossain, ''Mobile-based skin lesions classification using convolution neural network,'' Ann. Emerg. Technol. Comput., vol. 4, no. 2, pp. 26–37, Apr. 2020, doi: 10.33166/AETiC.2020.02.003.
[51] A. Mahbod, R. Ecker, and I. Ellinger, ''Skin lesion classification using hybrid deep neural networks,'' in Proc. 44th Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 2019. [Online]. Available: https://arxiv.org/abs/1702.08434
[52] A. Mahbod, G. Schaefer, C. Wang, G. Dorffner, R. Ecker, and I. Ellinger, ''Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification,'' Comput. Methods Programs Biomed., vol. 193, Sep. 2020, Art. no. 105475, doi: 10.1016/j.cmpb.2020.105475.
[53] A. Aldwgeri and N. F. Abubacker, ''Ensemble of deep convolutional neural network for skin lesion classification in dermoscopy images,'' in Advances in Visual Informatics. IVIC (Lecture Notes in Computer Science), vol. 11870, H. B. Zaman et al., Eds. Cham, Switzerland: Springer, 2019, doi: 10.1007/978-3-030-34032-2_20.
[54] Y. Yan, J. Kawahara, and G. Hamarneh, ''Melanoma recognition via visual attention,'' in Proc. Int. Conf. Inf. Process. Med. Imag., 2019, pp. 793–804.
[55] D. Moldovan, ''Transfer learning based method for two-step skin cancer images classification,'' in Proc. 7th IEEE Int. Conf. E-Health Bioeng. (EHB), 2019.
[56] A. Singhal, R. Shukla, P. K. Kankar, S. Dubey, S. Singh, and R. B. Pachori, ''Comparing the capabilities of transfer learning models to detect skin lesion in humans,'' Proc. Inst. Mech. Eng., Part H, J. Eng. Med., vol. 234, no. 10, pp. 1083–1093, Oct. 2020, doi: 10.1177/0954411920939829.
[57] R. Sarkar, C. C. Chatterjee, and A. Hazra, ''Diagnosis of melanoma from dermoscopic images using a deep depthwise separable residual convolutional network,'' IET Image Process., vol. 13, no. 12, pp. 2130–2142, Oct. 2019, doi: 10.1049/iet-ipr.2018.6669.
[58] U. Aishwarya, I. J. Daniel, and R. Raghul, ''Convolutional neural network based skin lesion classification and identification,'' in Proc. Int. Conf. Inventive Comput. Technol. (ICICT), Feb. 2020, pp. 264–270.
[59] B. Harangi, ''Skin lesion classification with ensembles of deep convolutional neural networks,'' J. Biomed. Informat., vol. 86, pp. 25–32, Oct. 2018.
[60] H. E. Xueying, H. Zhongyi, and W. Benzheng, ''Pigmented skin lesion recognition and classification based on deep convolutional neural network,'' J. Comput. Appl., 2018.
[61] Xu Meifeng et al., ''Skin disease recognition method based on multimodel fusion of convolutional neural network,'' J. Xi'an Jiaotong Univ., 2019.
[62] K. Wang, C. Gou, Y. J. Duan, L. Yilun, and X. H. Zheng, ''Generative adversarial networks: The state of the art and beyond,'' Zidonghua Xuebao/Acta Automatica Sinica, vol. 43, pp. 321–332, Mar. 2017, doi: 10.16383/j.aas.2017.y000003.
[63] K. F. Wang et al., ''Generative adversarial networks: The state of the art and beyond,'' Acta Automatica Sinica, vol. 43, no. 3, pp. 321–332, 2017.
[64] L. Hang, Y. Zhen, N. Dong, B. Lei, and W. Tianfu, ''Melanoma recognition in dermoscopy images via deep residual network,'' Chin. J. Biomed. Eng., vol. 37, pp. 274–282, 2018, doi: 10.3969/j.issn.0258-8021.2018.03.003.
[65] A. Bissoto, F. Perez, E. Valle, and S. Avila, ''Skin lesion synthesis with generative adversarial networks,'' in OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis. CARE 2018, CLIP 2018, OR 2.0 2018, ISIC 2018 (Lecture Notes in Computer Science), vol. 11041, D. Stoyanov et al., Eds. Cham, Switzerland: Springer, 2018, doi: 10.1007/978-3-030-01201-4_32.
[66] F. Long, N. Xiong, A. V. Vasilakos, L. T. Yang, and F. Sun, ''A sustainable heuristic QoS routing algorithm for pervasive multi-layered satellite wireless networks,'' Wireless Netw., vol. 16, no. 6, pp. 1657–1673, Aug. 2010, doi: 10.1007/s11276-009-0220-z.
[67] C. Baur, S. Albarqouni, and N. Navab, ''MelanoGANs: High resolution skin lesion synthesis with GANs,'' 2018, arXiv:1804.04338. [Online]. Available: https://arxiv.org/abs/1804.04338
[68] X. Shang, M. Han, Y. Sun, S. Y. Sun, X. Chen, M. Hu, and Y. Mei, ''Skin diseases diagnosis method based on generative adversarial networks and Naive Bayes,'' J. Frontiers Comput. Sci. Technol., vol. 13, no. 6, pp. 1005–1015, 2019, doi: 10.3778/j.issn.1673-9418.1806019.

208278 VOLUME 8, 2020


L.-F. Li et al.: Deep Learning in Skin Disease Image Recognition: A Review

LING-FANG LI received the B.S. degree in computer software and the M.S. degree in computer science and technology from Southwest Jiaotong University, Chengdu, China, in 2013 and 2016, respectively. From 2016 to 2018, she was a Software Research and Development Engineer at ThoughtWorks. Since 2019, she has been a Lecturer with the School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China. Her research interests are computer vision and data security.

XU WANG received the B.S. degree in computer science and technology from Xinyang University, Xinyang, China, in 2018. He is currently pursuing the M.S. degree in computer science and technology with the Inner Mongolia University of Science and Technology, Baotou, China. His research interests include pattern recognition and computer vision.

WEI-JIAN HU received the B.S. degree in computer software and the M.S. degree in computer science and technology from Southwest Jiaotong University, Chengdu, China, in 2013 and 2016, respectively. From 2015 to 2016, he was a Software Research and Development Engineer at General Electric. From 2016 to 2019, he was a Software Research and Development Engineer at JD.COM. Since 2019, he has been a Lecturer with the School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China. His research interests are computer vision and deep learning.

NEAL N. XIONG (Senior Member, IEEE) received both Ph.D. degrees from Wuhan University, in 2007, with a focus on sensor system engineering, and from the Japan Advanced Institute of Science and Technology, in 2008, with a focus on dependable communication networks, respectively.

He is currently an Associate Professor (5th year) with the Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK, USA. Before he attended Northeastern State University, he worked at Georgia State University, Wentworth Technology Institution, and Colorado Technical University (Full Professor for about five years) for about ten years. His research interests include cloud computing, security and dependability, parallel and distributed computing, networks, and optimization theory. He has published over 200 international journal papers and over 100 international conference papers. Some of his works were published in the IEEE JSAC, IEEE or ACM transactions, ACM SIGCOMM workshop, IEEE INFOCOM, ICDCS, and IPDPS.

Dr. Xiong has been a Senior Member of the IEEE Computer Society since 2012. He received the Best Paper Award at the 10th IEEE International Conference on High Performance Computing and Communications (HPCC-08) and the Best Student Paper Award at the 28th North American Fuzzy Information Processing Society Annual Conference (NAFIPS 2009). He is the Chair of the ''Trusted Cloud Computing'' Task Force, IEEE Computational Intelligence Society (CIS), and the Industry System Applications Technical Committee. He has served as the General Chair, Program Chair, Publicity Chair, Program Committee member, and Organizing Committee member for over 100 international conferences, and as a Reviewer for about 100 international journals, including IEEE JSAC, IEEE SMC (Part: A/B/C), IEEE TRANSACTIONS ON COMMUNICATIONS, IEEE TRANSACTIONS ON MOBILE COMPUTING, and IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS. He serves as an Editor-in-Chief, Associate Editor, or Editorial Board Member for over ten international journals, including as an Associate Editor for the IEEE TRANSACTIONS ON SYSTEMS, MAN & CYBERNETICS: SYSTEMS, an Associate Editor for the IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, an Associate Editor for Information Science, the Editor-in-Chief of the Journal of Internet Technology (JIT), the Editor-in-Chief of the Journal of Parallel & Cloud Computing (PCC), and a Guest Editor for over ten international journals, including Sensor Journal, WINET, and MONET.

YONG-XING DU received the B.S. degree in communication engineering from the Chengdu University of Information Technology, Chengdu, China, in 2001, and the M.S. degree in electronic circuit and system and the Ph.D. degree in microelectronics and solid electronics from the Xi'an University of Technology, Xi'an, China, in 2007 and 2014, respectively. From 2001 to 2007, he was a Research Assistant with the Inner Mongolia University of Science and Technology, Baotou, where he was also a Lecturer with the School of Information Engineering from 2007 to 2012, and has been a Professor since 2018. His main research fields are the numerical calculation of electromagnetic fields, new microwave radiators, attenuation of black barrier effects, and beam synthesis. He is a Senior Member of the Chinese Institute of Electronics. He is an Editor of the American Journal of Traffic and Transportation Engineering and the Journal of Electronic Research and Application, and a Reviewer for IEEE ACCESS, IEEE MWCL, Optical and Quantum Electronics, International Journal of Electronics and Communications, Asia-Pacific Microwave Conference, Measurement and Control Technology, Journal of Optics, Laser Journal, and Optical Communication Technology.

BAO-SHAN LI was born in Tangshan, China, in 1965. He received the B.S. degree in radio technology from Northeastern University, Shenyang, China, in 1986, and the M.S. degree in communication engineering from the Inner Mongolia University of Technology, Hohhot, China, in 2007. He was a Lecturer (1986–1999) and an Associate Professor (1999–2008) with the School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, where he has been a Professor since 2008. He has authored over 30 articles. His current research interest is the application research of RFID and the Internet of Things technology.
