
Available online at www.sciencedirect.com

ScienceDirect

Procedia Manufacturing 39 (2019) 349–356

25th International Conference on Production Research Manufacturing Innovation: Cyber Physical Manufacturing
August 9-14, 2019 | Chicago, Illinois (USA)
3D Medical Image Classification with Depthwise Separable
Networks

Haifeng Wang, Qianqian Zhang, Hongya Lu, Daehan Won, and Sang Won Yoon∗

Department of Systems Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY 13905, U.S.A.

Abstract

This research studies a dilated depthwise separable convolution neural network (DSCN) model to identify human tissue types from 3D medical images. 3D medical image classification is a challenging task due to the unpredictable noise and indistinct tissue behaviors of the image content. The objective of this research is to improve typical supervised deep learning model accuracy by using dilated convolution and depthwise separable network approaches on 3D medical image classification tasks. A depthwise separable architecture is used to improve parameter utilization efficiency. Dilated convolutions are applied to systematically aggregate multiscale contextual information and provide a large receptive field with a small number of trainable weights. The performance of the constructed model is tested on a multi-class human tissue classification task with 3D Optical Coherence Tomography (OCT) images. Experimental results are compared with typical deep learning classification models. The results show that the DSCN model outperforms the other models for all the tissue classification tasks. The proposed DSCN model can be a potential approach for 3D image-based diagnostic tasks in both the healthcare and manufacturing fields.

© 2019 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Selection and peer review under the responsibility of ICPR25 International Scientific & Advisory and Organizing committee members.
Keywords: Convolutional neural network, optical coherence tomography image classification, depthwise separable networks

1. Introduction

With the development of information technologies, a large amount of biomedical image data has been collected in recent decades, which promotes the application of machine learning techniques for computer-aided diagnosis of human diseases [1, 2, 3]. One of the critical tasks of computer-aided diagnosis is to perform image classification to identify different tissues. Effective tissue identification through medical image analysis can also be used for further medical-related machine learning model development and practical applications to improve patient service levels, increase process efficiency, and reduce healthcare costs.

∗ Corresponding author. Tel.: +1-607-777-5935; fax: +1-607-777-4094.
E-mail address: yoons@binghamton.edu


By using different imaging techniques, such as ultrasound, high-resolution computed tomography (CT), magnetic resonance imaging (MRI), mammography, and optical coherence tomography (OCT), doctors can visualize tissues inside a patient's body that are otherwise barely visible. Compared to other diagnostic technologies, such as gene detection and blood tests, medical imaging is one of the most effective and direct methods to perform disease diagnosis, especially cancer screening. Typically, radiologists need to be trained to provide professional and accurate advice during the imaging diagnosis process. For example, Sommerey et al. studied a test that uses OCT for fat, thyroid, parathyroid, and lymph identification [4]. The method they used was manual matching, which aims at matching the image findings to the corresponding histology behavior. However, human detection is quite time-consuming, and manual detection results can vary from radiologist to radiologist because of different expertise levels. With the development and proliferation of artificial intelligence (AI) techniques, the use of machine learning methods to achieve automatic detection of human diseases has attracted considerable attention in the recent literature.
In this research, a deep learning-based human tissue classification model is developed using recent advanced deep learning algorithms. The structure of this paper is organized as follows. Section 2 provides a general overview of the background of 3D OCT image scans and current state-of-the-art methods for medical image analysis. Section 3 describes the dilated depthwise separable convolution neural network (DSCN) model structure. Experimental results and related performance of the proposed model are presented in Section 4. The research findings are concluded in Section 5.

2. Background

Resolution and image dimension are two direct factors that can influence diagnosis results. As one of the high-resolution 3D scan techniques, OCT has grown rapidly in many non-invasive treatments and swift diagnostic procedures in recent years, such as retinal pathology examination [5], breast cancer diagnosis [6], and thyroid tissue detection [4].
By applying near-infrared light interferometry, the OCT technique can provide cross-sectional images to visualize the tissues' microstructure. Indeed, the cross-sectional images can be assembled into 3D scans and depict more detailed tissue information of a scanned organ. Compared to many other imaging methods, OCT can obtain higher-resolution images; for instance, OCT images offer roughly a 10-fold resolution increase compared with ultrasound. Figure 1(a) shows the OCT machine scan process, and Figure 1(b) shows an example of a human thyroid OCT scan. 3D images are constructed through many cross-sectional C-Scans in this case.

Figure 1. Tissue OCT scan process and the 3D data: (a) OCT machine scan process; (b) 3D adipose tissue.

Most of the practical diagnostic processes are based on manually matching OCT findings to the corresponding histology [4]. For instance, thyroid tissue has a thick and highly scattering capsule, which is followed by the follicle-bearing tissue with low signal intensity, and parathyroid tissue is homogeneous with bright scattering. Figure 2 shows some examples of different human tissues, i.e., thyroid, parathyroid, adipose, and muscle, extracted from the 681st layer along the C-Scan direction. Typically, radiologists need to be well trained to provide professional and accurate advice during image diagnosis processes.

Figure 2. Human tissue OCT scan images (extracted from the 681st layer): (a) thyroid tissue; (b) adipose tissue; (c) parathyroid tissue; (d) muscle tissue.

For machine learning-based medical image analysis, one of the critical milestones is the research from Mangasarian et al. [7], who applied a linear programming approach for breast cancer detection in mammography images. Traditional medical image classification models generally apply morphological operations directly at the image pixel level, such as edge extraction for breast cancer cell detection [8] and geographic-level image detection [9]. Indeed, images can be considered as data matrices and converted to a feature space for more general machine learning algorithms. Various medical imaging tools and different machine learning detection techniques have been used in the medical image detection domain, such as the expectation maximization (EM) algorithm, support vector machines, artificial neural networks, the Naive Bayes classifier, logistic regression, and Bayesian networks [10]. Due to the complexity of human tissue structures, tissue (even cell-level) classification of different medical images becomes a challenging problem. Studies have identified various good models in the literature, but the limitation is that these machine learning models are usually problem-specific [11].
To increase the generalization performance of image classification models, deep learning methods have been widely used in recent years. By training a deep structured model with a huge amount of labeled data, a deep learning model can obtain high performance on a wide range of tasks [12]. Deep learning is a powerful tool that can capture general aspects of images, such as edges and corners, through its hierarchical feature learning. One example is the use of deep learning for 3D image knee cartilage detection. In that model, three planes that pass through the voxel and are parallel to the xy, yz, and zx planes are taken, respectively. A 2D patch centered around that voxel is extracted from each of the three planes. Then, one convolutional neural network (CNN) model is applied to each plane, and the three CNNs are fused in the final layer to feed a softmax classifier [13]. Another recent example is the use of deep learning in a brain tumor study for gliomas and glioblastomas detection [14]. The challenge of gliomas and glioblastomas detection is that these tumors can appear anywhere in the human brain, and they are often diffuse and poorly contrasted. Besides, those tumors have no exact size and shape, which requires a highly flexible, high-capacity model. Therefore, a deep neural network, e.g., a CNN, is used to learn those abstract features and perform detection across various tumor modalities [14]. Due to the vague object types in medical images, many deep learning models can only be applied to specific tasks, such as brain tumor analysis or breast cancer analysis. Applying deep learning models to 3D OCT image analysis is still rarely seen in the literature.
Medical imaging technologies have become more and more advanced, and high-resolution 3D medical images are now available. However, most practical tests are still limited to manual inspection, even though many data mining and machine learning algorithms have been used in medical image analysis tasks. Most models can only work for specific tasks and face a significant challenge with images collected by different scanning methods. In this research, tissue classification through 3D OCT images is studied by applying several modifications of deep learning approaches, such as depthwise separable convolution (DSC) and dilated networks.

3. Methodology

Typical algorithms for medical image classification are based on adaptations of CNNs that were originally designed for natural image classification. However, 3D medical images are structurally different from natural images due to the vague image content and high image volume. In this research, to increase the performance of typical CNNs on 3D medical image classification tasks, DSC and dilated convolution are combined to improve CNN classification performance on 3D medical images.

Figure 3. Architecture of the dilated depthwise separable convolutional neural network.

3.1. Convolutional Neural Network Architecture

CNNs typically involve many layers that are used to extract successive representations from the input data, and each layer is connected by weights and biases, which are trained to store the learned information. By applying DSC and dilated convolution, the constructed DSCN is designed to perform 3D medical image classification in this research. Figure 3 shows the architecture of the DSCN model. A dilated convolution (Dilated Conv.) is added in the structure, as shown in the dashed boxes in Figure 3.
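To make the block structure concrete, the following is a minimal Keras sketch of one dilated depthwise separable block, following the paper's convention of treating the C-Scan depth as the channel dimension of a 2D convolution; the filter counts, kernel sizes, dilation rate, and classification head are hypothetical placeholders rather than the exact configuration of Figure 3.

import tensorflow as tf

def dscn_block(x, filters, kernel_size=3, dilation_rate=2):
    # Dilated convolution: enlarges the receptive field without extra weights.
    x = tf.keras.layers.Conv2D(filters, kernel_size,
                               dilation_rate=dilation_rate, padding="same")(x)
    # Depthwise separable convolution: per-channel spatial filtering followed
    # by a 1x1 pointwise convolution, reducing the parameter count.
    x = tf.keras.layers.SeparableConv2D(filters, kernel_size, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation("relu")(x)
    return tf.keras.layers.MaxPooling2D(pool_size=2)(x)

inputs = tf.keras.Input(shape=(200, 200, 10))  # one 200 x 200 x 10 input volume
x = dscn_block(inputs, filters=32)
x = dscn_block(x, filters=64)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)  # four tissue classes
model = tf.keras.Model(inputs, outputs)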
Convolution is a formal operation defined as the integral of the product of two functions after one is reversed and shifted. In a CNN, the convolutional layer involves several special terminologies, i.e., filter, stride, and padding. The filter, also called a convolution kernel, is a matrix of values. A filter has smaller dimensions than the input image and is applied across the entirety of the image. The values of a filter are determined during the CNN training process.
Most medical scans are simply grayscale image data. By removing the channel dimension in typical 3D convolution, a 2D convolution-based operation can be used for 3D medical image data analysis.
v^l_{ijq} = \sum_{a=0}^{m-1} \sum_{b=0}^{m-1} \sum_{c=1}^{n} w^l_{abq} u^{l-1}_{(i+a)(j+b)c}, \quad \forall q    (1)

Given a filter $w^l$ with size $m \times m$ and an input volume $u^{l-1}$ at position $(a, b, c)$, the value of each pixel ($v^l_{ijq}$) at the $q$-th output channel can be expressed as Equation (1) during the 2D convolution process.
u^l_{ijq} = \max\{ u^{l-1}_{(i+x)(j+y)c} : x, y = 0, \ldots, m-1;\ c = 1, \ldots, n \}    (2)

The notation $l$ indicates the $l$-th layer, and $n$ is the number of input channels. The max pooling layer is expressed in Equation (2).
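As a concrete illustration, here is a minimal NumPy sketch of Equations (1) and (2), assuming a valid (unpadded) convolution with stride 1; the explicit loops mirror the index notation rather than an efficient implementation.

import numpy as np

def conv2d(u, w):
    # Equation (1): one m x m filter w is applied at every position, and the
    # products are summed over the window and all n input channels.
    H, W, n = u.shape
    m = w.shape[0]
    v = np.zeros((H - m + 1, W - m + 1))
    for i in range(H - m + 1):
        for j in range(W - m + 1):
            v[i, j] = np.sum(w[:, :, None] * u[i:i + m, j:j + m, :])
    return v

def max_pool(u, m):
    # Equation (2): the maximum over an m x m window and all n channels,
    # exactly as written; practical pooling usually also applies a stride.
    H, W, n = u.shape
    out = np.zeros((H - m + 1, W - m + 1))
    for i in range(H - m + 1):
        for j in range(W - m + 1):
            out[i, j] = u[i:i + m, j:j + m, :].max()
    return out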
s^l_{ijc} = \sum_{a=0}^{m-1} \sum_{b=0}^{m-1} w^l_{abc} u^{l-1}_{(i+a)(j+b)c}, \quad \forall c    (3)

v^l_{ijq} = \sum_{c=1}^{n} w^l_{cq} s^l_{ijc}, \quad \forall q    (4)

By applying the 2D convolution, the depthwise and pointwise convolutions are used to work on 3D data, as shown in Equations (3) and (4), where Equation (3) is the process of 2D depthwise convolution, and Equation (4) represents the 2D pointwise convolution operation. The dilated convolution operation is given in Equation (5).
v^l_{ijq} = \sum_{a=0}^{m-1} \sum_{b=0}^{m-1} \sum_{c=1}^{n} w^l_{abq} u^{l-1}_{(i+\alpha a)(j+\alpha b)c}, \quad \forall q    (5)
u^l_{ijk} = \sigma(v^l_{ijk})    (6)

\sigma(x) = \max(0, x)    (7)

Activation functions are used to add nonlinearity to the learning model, as shown in Equation (6). As a widely used activation function, the Rectified Linear Unit (ReLU) is applied, as shown in Equation (7).
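The following NumPy sketch, under the same valid-convolution and stride-1 assumptions, spells out the depthwise, pointwise, and dilated operations of Equations (3)-(5) and the ReLU of Equation (7).

import numpy as np

def depthwise_conv(u, w):
    # Equation (3): each input channel c has its own m x m filter w[:, :, c];
    # channels are filtered independently, so no cross-channel mixing occurs.
    H, W, n = u.shape
    m = w.shape[0]
    s = np.zeros((H - m + 1, W - m + 1, n))
    for c in range(n):
        for i in range(H - m + 1):
            for j in range(W - m + 1):
                s[i, j, c] = np.sum(w[:, :, c] * u[i:i + m, j:j + m, c])
    return s

def pointwise_conv(s, w):
    # Equation (4): a 1 x 1 convolution; w has shape (n, q) and mixes the n
    # depthwise outputs into q output channels at every pixel.
    return np.tensordot(s, w, axes=([2], [0]))

def dilated_conv(u, w, alpha):
    # Equation (5): filter taps are spaced alpha pixels apart, so the same
    # m x m weights cover a receptive field of alpha * (m - 1) + 1 pixels.
    H, W, n = u.shape
    m = w.shape[0]
    span = alpha * (m - 1) + 1
    v = np.zeros((H - span + 1, W - span + 1))
    for i in range(H - span + 1):
        for j in range(W - span + 1):
            v[i, j] = sum(w[a, b] * u[i + alpha * a, j + alpha * b, c]
                          for a in range(m) for b in range(m) for c in range(n))
    return v

def relu(x):
    # Equation (7).
    return np.maximum(0.0, x)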

3.2. Model Training Process

A categorical cross entropy loss function is applied as the objective function, as shown in Equation (8), where $y_i$ is the true probability for class $i$, and $\hat{y}_i$ is the predicted probability for that class.

H(y, \hat{y}) = -\sum_{i} y_i \log \hat{y}_i    (8)

f(w) = H(y, \hat{y})    (9)
To minimize $f(w)$ in Equation (9), the gradient descent (GD) approach is used to adjust the network parameters ($w$) in the opposite direction of $\nabla_w f(w)$. By defining a step size $\lambda$, a typical GD process is expressed as Equation (10).

w_t = w_{t-1} - \lambda \nabla_w f(w_{t-1})    (10)
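As a small sketch of Equations (8) and (10), assuming y and y_hat are probability vectors over the four tissue classes and grad_f is any function returning the gradient of the loss with respect to the weights:

import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    # Equation (8); eps guards against log(0) for overconfident predictions.
    return -np.sum(y * np.log(y_hat + eps))

def gd_step(w, grad_f, lam=0.001):
    # Equation (10): step against the gradient with step size lambda.
    return w - lam * grad_f(w)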
Adam is a modified version of GD, which computes adaptive learning rates based on estimates of the first and second moments of the gradients. For iteration $t$, the Adam process is shown in Equations (11)-(16),

\delta_t = \nabla_w f(w_{t-1}),    (11)
p_t = \beta_1 p_{t-1} + (1 - \beta_1)\delta_t,    (12)
q_t = \beta_2 q_{t-1} + (1 - \beta_2)\delta_t^2,    (13)
\hat{p}_t = p_t / (1 - \beta_1^t),    (14)
\hat{q}_t = q_t / (1 - \beta_2^t),    (15)
w_t = w_{t-1} - \lambda \hat{p}_t / (\sqrt{\hat{q}_t} + \epsilon),    (16)

where $p_t$ and $q_t$ are the first and second moment vectors, respectively, $p_0 = q_0 = 0$, and $\beta_1, \beta_2 \in [0, 1)$. As suggested in the literature [15], $\lambda = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$.
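A minimal NumPy sketch of the full Adam loop in Equations (11)-(16), using the hyperparameters suggested in [15]; grad_f stands for any function returning the gradient of f(w).

import numpy as np

def adam(w, grad_f, steps, lam=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    p = np.zeros_like(w)  # first moment vector, p_0 = 0
    q = np.zeros_like(w)  # second moment vector, q_0 = 0
    for t in range(1, steps + 1):
        delta = grad_f(w)                              # Equation (11)
        p = beta1 * p + (1 - beta1) * delta            # Equation (12)
        q = beta2 * q + (1 - beta2) * delta ** 2       # Equation (13)
        p_hat = p / (1 - beta1 ** t)                   # Equation (14)
        q_hat = q / (1 - beta2 ** t)                   # Equation (15)
        w = w - lam * p_hat / (np.sqrt(q_hat) + eps)   # Equation (16)
    return w

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w_min = adam(np.array([1.0, -2.0]), lambda w: 2.0 * w, steps=2000)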
During the CNN training process, the distribution of each layer's inputs changes, called an internal covariate shift, because of the parameter updates of previous layers. As a result, a lower learning rate and careful weight initialization are required, and the model training efficiency becomes quite low [16]. As an alternative approach to overcome the internal covariate shift, batch normalization (BN) is used. Scale and shift parameters are used to avoid changing the original representation of the previous layer during normalization. Algorithm 1 illustrates the BN process with a batch input $\{x_1, \ldots, x_B\}$. As a note, the parameter $\epsilon$, which is typically selected as 0.001, is added to the batch variance for numerical stability. The scale parameter $\gamma$ and shift parameter $\beta$ are trained along with the other network parameters during the training process.

Algorithm 1 Batch Normalization.

1: Inputs: $\{x_1, \ldots, x_B\}$: a batch of inputs; $\gamma$, $\beta$: scale and shift parameters
2: procedure BatchNormalization($\{x_1, \ldots, x_B\}$, $\gamma$, $\beta$)
3:     $\mu = \frac{1}{B} \sum_{i=1}^{B} x_i$
4:     $\sigma^2 = \frac{1}{B} \sum_{i=1}^{B} (x_i - \mu)^2$
5:     for $i = 1, \ldots, B$ do
6:         $\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}} \gamma + \beta$
7:     return $\{\hat{x}_1, \ldots, \hat{x}_B\}$
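A NumPy sketch of Algorithm 1 for a batch arranged row-wise; gamma and beta would be trained alongside the other network parameters, and eps follows the 0.001 default noted above.

import numpy as np

def batch_norm(x, gamma, beta, eps=0.001):
    # x has shape (B, features); statistics are computed over the batch axis.
    mu = x.mean(axis=0)                    # line 3: batch mean
    var = ((x - mu) ** 2).mean(axis=0)     # line 4: batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # line 6: normalize ...
    return gamma * x_hat + beta            # ... then scale and shift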

4. Experimental Results and Analysis

In this experiment, four tissue types, which include thyroid, parathyroid, adipose, and muscle, are used. The background of each 3D whole sample is excluded, as shown by the yellow dashed cuboid in Figure 4. To increase the dataset size for model training, each single 3D volume of interest (VOI) sample is partitioned with a 200 × 200 × 10 sliding cuboid with a stride size of 100 in the A-Scan and B-Scan directions and 5 in the C-Scan direction, as shown in Figure 4; a sketch of this partitioning follows below. In total, 6,660 3D samples are collected. To validate the performance of the DSCN model, a state-of-the-art deep learning method, i.e., Xception, is modified to a 3D input version in this research. In addition, a variation of DSCN (varDSCN) is also tested to compare with the DSCN model by moving the dilation layer after the convolution layer while separating the nets based on the model architecture in Figure 3.

Figure 4. VOI collections on 3D OCT samples.
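The VOI partitioning described above can be sketched as a sliding-cuboid extraction; this assumes the background-free sample is stored as a NumPy array indexed (A-Scan, B-Scan, C-Scan).

import numpy as np

def extract_vois(volume, win=(200, 200, 10), stride=(100, 100, 5)):
    # Slide a 200 x 200 x 10 cuboid with strides 100, 100, and 5 along the
    # A-Scan, B-Scan, and C-Scan directions, collecting every full window.
    A, B, C = volume.shape
    vois = []
    for i in range(0, A - win[0] + 1, stride[0]):
        for j in range(0, B - win[1] + 1, stride[1]):
            for k in range(0, C - win[2] + 1, stride[2]):
                vois.append(volume[i:i + win[0], j:j + win[1], k:k + win[2]])
    return np.stack(vois)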
80% of the whole samples is used as training data, and the remaining samples are used to construct the testing dataset. During the training process, the training dataset is separated 70%/30% into training and validation datasets, respectively. In sum, the numbers of 3D samples in the training, validation, and testing datasets are 3,729, 1,599, and 1,332, respectively. 500 epochs are set for the training. All experiments are implemented in TensorFlow r1.5 under the computational specification of 64-bit Windows 10, with an Intel i7 processor (4.00 GHz), 16 GB random-access memory (RAM), and an NVIDIA GeForce GTX 1080.
Table 1 shows the test classification performance results measured by precision, recall, F1-score, accuracy, and running time for each model and tissue type. In terms of overall classification accuracy, DSCN obtains the highest accuracy compared to the varDSCN and 3D Xception models, which indicates the effectiveness of applying dilated convolution in depthwise separable networks. Interestingly, the varDSCN model only achieves 0.69 accuracy, which supports the importance of performing dilated convolution before typical convolution. Comparing the classification performance for each tissue type, DSCN obtains relatively high performance for all the tissue types on all the performance measures. For parathyroid, however, varDSCN obtains relatively lower precision due to falsely classifying some other tissues as parathyroid. Considering classification results for different tissue types, parathyroid and muscle tend to obtain lower precision, while thyroid and adipose tend to get lower recall and F1-score. In terms of computational efficiency, DSCN is slower than 3D Xception due to the extra computational cost of the dilation operation.

Table 1. Classification performance results for each model and tissue type.

Model        Class         Precision   Recall   F1-Score   Accuracy   Running time (minutes)
DSCN         Thyroid       1.00        0.96     0.98       0.97       325.16
             Parathyroid   0.92        1.00     0.96
             Adipose       0.98        0.99     0.99
             Muscle        0.99        0.95     0.97
varDSCN      Thyroid       1.00        0.03     0.06       0.69       310.22
             Parathyroid   0.51        0.96     0.67
             Adipose       0.99        0.73     0.84
             Muscle        0.72        0.88     0.79
3D Xception  Thyroid       0.85        0.84     0.84       0.88       278.71
             Parathyroid   0.83        0.93     0.88
             Adipose       1.00        0.81     0.89
             Muscle        0.85        0.93     0.89

5. Conclusion

Deep learning models have shown high generalization performance in many image classification tasks. Focusing on 3D medical image analysis, this research constructs a DSCN model for OCT image classification. Comparing the DSCN model with 3D Xception and a DSCN variation on a four-class classification task, DSCN obtains the highest performance and shows the advantage of applying dilated convolution in depthwise separable networks. The highest accuracy obtained in classifying the four tissue types is 0.97 using 3D OCT images in this research. Based on the test results, the properties of the parathyroid and thyroid samples can be further studied to improve the model's classification performance. The proposed model can also be applied to 3D image analysis in the manufacturing field. Given a 3D manufacturing image dataset, diagnostic tasks can be tested using the proposed DSCN model.

References

[1] Haifeng Wang and Sang Won Yoon. A machine learning model for medical image recognition using texture-based features. In IIE Annual Conference. Proceedings, pages 1655–1660. Institute of Industrial and Systems Engineers (IISE), 2017.
[2] Haifeng Wang, Bichen Zheng, Sang Won Yoon, and Hoo Sang Ko. A support vector machine-based ensemble algorithm for breast cancer diagnosis. European Journal of Operational Research, 267(2):687–699, 2018.
[3] Hongya Lu, Haifeng Wang, and Sang Won Yoon. A dynamic gradient boosting machine using genetic optimizer for practical breast cancer prognosis. Expert Systems with Applications, 116:340–350, 2019.
[4] Sandra Sommerey, Norah Al Arabi, Roland Ladurner, Constanza Chiapponi, Herbert Stepp, Klaus KJ Hallfeldt, and Julia KS Gallwas. Intraoperative optical coherence tomography imaging to identify parathyroid glands. Surgical Endoscopy, 29(9):2698–2704, 2015.
[5] Rukhsana G Mirza, Mark W Johnson, and Lee M Jampol. Optical coherence tomography use in evaluation of the vitreoretinal interface: a review. Survey of Ophthalmology, 52(4):397–421, 2007.
[6] Stephen A Boppart, Wei Luo, Daniel L Marks, and Keith W Singletary. Optical coherence tomography: feasibility for basic research and image-guided surgery of breast cancer. Breast Cancer Research and Treatment, 84(2):85–97, 2004.
[7] Olvi L Mangasarian, W Nick Street, and William H Wolberg. Breast cancer diagnosis and prognosis via linear programming. Operations Research, 43(4):570–577, 1995.
[8] Salim Ben Chaabane and Farhat Fnaiech. Color edges extraction using statistical features and automatic threshold technique: application to the breast cancer cells. BioMedical Engineering OnLine, 13(1):1, 2014.
[9] Yuliang Wang, Zaicheng Zhang, Huimin Wang, and Shusheng Bi. Segmentation of the clustered cells with optimized boundary detection in negative phase contrast images. PLoS ONE, 10(6):e0130178, 2015.
[10] Shweta Kharya. Using data mining techniques for diagnosis and prognosis of cancer disease. arXiv preprint arXiv:1205.1923, 2012.
[11] Daniel Schmitter, Paulina Wachowicz, Daniel Sage, Anastasia Chasapi, Ioannis Xenarios, Viesturs Simanis, and Michael Unser. A 2D/3D image analysis system to track fluorescently labeled structures in rod-shaped cells: application to measure spindle pole asymmetry during mitosis. Cell Division, 8(1):1, 2013.
[12] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen AWM van der Laak, Bram van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. arXiv preprint arXiv:1702.05747, 2017.
[13] Adhish Prasoon, Kersten Petersen, Christian Igel, François Lauze, Erik Dam, and Mads Nielsen. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 246–253. Springer, 2013.
[14] Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, and Hugo Larochelle. Brain tumor segmentation with deep neural networks. Medical Image Analysis, 2016.
[15] Diederik P Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Proc. 3rd Int. Conf. Learn. Representations, 2014.
[16] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
