
2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)
Scottish Event Campus, Glasgow, UK, July 11-15, 2022
DOI: 10.1109/EMBC48229.2022.9871109

nnUNet-based Multi-modality Breast MRI Segmentation and Tissue-Delineating Phantom for Robotic Tumor Surgery Planning

Motaz Alqaoud, John Plemmons, Eric Feliberti, Siqin Dong, Krishnanand Kaipa, Gabor Fichtinger,
Yiming Xiao, and Michel A. Audette*, Senior Member, IEEE

Abstract— Segmentation of the thoracic region and breast tissues is crucial for analyzing and diagnosing the presence of breast masses. This paper introduces a medical image segmentation architecture that aggregates two neural networks based on the state-of-the-art nnU-Net. Additionally, this study proposes a polyvinyl alcohol cryogel (PVA-C) breast phantom, based on its automated segmentation approach, to enable planning and navigation experiments for robotic breast surgery. The dataset consists of multimodality breast MRI of T2W and STIR images obtained from 10 patients. A statistical analysis of the segmentation tasks emphasizes the Dice Similarity Coefficient (DSC), segmentation accuracy, sensitivity, and specificity. We first use single-class labeling to segment the breast region and then exploit it as an input for three-class labeling to segment fatty, fibroglandular (FGT), and tumorous tissues. The first network achieves a DSC of 0.95, while the second network achieves 0.95, 0.83, and 0.41 for the fat, FGT, and tumor classes, respectively.

Clinical Relevance— This research is relevant to the breast surgery community as it establishes a deep learning-based (DL) algorithmic and phantomic foundation for surgical planning and navigation that will exploit preoperative multimodal MRI and intraoperative ultrasound to achieve highly cosmetic breast surgery. In addition, the planning and navigation will guide a robot that can cut, resect, bag, and grasp a tissue mass that encapsulates breast tumors and positive tissue margins. This image-guided robotic approach promises to potentiate the accuracy of breast surgeons and improve patient outcomes.

I. INTRODUCTION

Breast cancer is the second most common cancer among women, after skin cancer [1]. Thus, early diagnosis and treatment play a proven role in decreasing the mortality rate [2]. Meanwhile, magnetic resonance imaging (MRI) is gaining acceptance for mass detection, diagnosis, and follow-up in breast cancers. Its high soft-tissue contrast makes tissue discrimination feasible in multimodal imaging and 3D visualization [3]. However, routine MR imaging of the breast also includes the lung, heart, and pectoral muscles. As a result, separating the other organs from the breast region is essential. Segmentation is indispensable in various clinical applications [4,5], such as therapy planning and intra-operative surgical navigation [6]. However, manual segmentation of MRI volumes is time-intensive and prone to inter- and intra-rater variability [7]. Various techniques have been utilized to assist radiologists in detecting and diagnosing breast lesions and to improve clinical analysis efficacy [8]. Semi-automated methods [9] require less time than manual methods while still needing user involvement and producing results that vary with the observer. Even with computer-aided diagnosis (CAD) systems, fully automated segmentation of breast tissue and lesions remains challenging [10].

Some recent studies have attempted to address this problem. Wu et al. [11] applied an edge technique to separate the breast region from other organs in 3D sagittal T1-weighted MRI scans and attained a DSC of 0.95. In the last few years, deep neural networks (DNNs) have been broadly utilized in medical image segmentation challenges, such as the Fully Convolutional Network (FCN) [12], SegNet [13], U-Net [14], and V-Net [15]. These DNNs can extract highly detailed features and achieve end-to-end training and segmentation. Zhang et al. [16] built a DL approach using U-Net to segment the breast region and the FGT, accomplishing DSCs of 0.86 and 0.83, respectively.

The majority of these traditional and DL segmentation models require modification to work on particular datasets. Thus, the parameters of these DNN approaches are often fine-tuned for specific MRI scanner characteristics and protocols. Moreover, breast MRI scans differ across modalities and scanning protocols. As a result, although existing approaches have demonstrated adequate performance on certain task optimization problems, they may not cope with MRI data variability.

Accordingly, we propose using two cascaded nnU-Net architectures to segment the breast region and the inner breast tissues and tumor masses in multimodality breast tumor images, to overcome MRI data variability and obviate manual intervention. nnU-Net has emerged as a state-of-the-art biomedical segmentation architecture [17]. It is a network architecture method that configures itself for a particular dataset. Without manual tuning, nnU-Net specifies all the segmentation task stages, resulting in task-specific optimization. This architecture demonstrated superior performance over task-specialized DL pipelines in 33 international public segmentation competitions [17]. Moreover, it has been broadly applied to areas spanning MRI brain tumor segmentation [18], liver tumor CT segmentation, and prostate MRI segmentation [17]. However, nnU-Net has not been applied to multimodality breast cancer MRI datasets to date. To the best of our knowledge, this study is the first to investigate and test nnU-Net for breast region segmentation as well as 3-class breast tissue segmentation for fat, FGT, and tumor mass in routine thoracic MRI datasets.

*Research supported by Old Dominion University (ODU) and Eastern Virginia Medical School (EVMS) funding.
M.A. and M.A.A. are with Biomedical Engineering, ODU, Norfolk, VA, 23529, USA (email: malqa004@odu.edu; corresponding author email: maudette@odu.edu).
J.P. and E.F. are with EVMS (Radiology and Surgery), Norfolk, VA, 23507, USA (email: jkplemmons@gmail.com, felibeec@evms.edu).
K.K. and S.D. are with Mechanical & Aerospace Engineering, ODU, Norfolk, VA (email: kkaipa@odu.edu, sdong002@odu.edu).
G.F. is with the School of Computing, Queen's University, Kingston, ON, Canada (email: fichting@queensu.ca).
Y.X. is with Computer Science & Software Engineering, Concordia University, Montreal, QC, Canada (email: yiming.xiao@concordia.ca).
The segmentation pipeline was trained and validated on a dataset of 10 T2-weighted and STIR fat-suppressed multimodality images obtained from an open-access breast MRI database.

Beyond this software algorithm, we also present a patient-specific breast-mimicking phantom based on an automated segmentation result. The achievement of high-fidelity patient-specific breast phantoms will enable us to develop and validate a calibration-assisted technique for nonrigid registration of preoperative MRI to intraoperative 3D ultrasound (US), based on the open-source PLUS toolkit [19]. This registration will later subsume optical tracking of a breast surgery grasping robot in the calibration, thereby anchoring its MRI-US-based intraoperative navigation. Validation will exploit surgical fiducials implanted in phantom breast tissue.
II. MATERIALS AND METHODS

A. Datasets
The study's datasets include 10 patients (median age, 40 years; range, 40-70 years). All subjects were obtained from The Cancer Imaging Archive (www.cancerimagingarchive.net) under the breast-diagnosis study. Five subjects have confirmed invasive ductal carcinoma (IDC), and the other five have a confirmed benign tumor. Each subject's dataset consists of two distinct MR modalities: T2-weighted and STIR fat-suppressed images. Each breast image volume contains 82 to 95 axial slices with a slice thickness of 2 mm. Variations in pixel resolution and image size are observed in the selected dataset. Therefore, we resample the volumes during a co-registration step in the preprocessing stage to a pixel resolution of 0.65 mm along x and y and an image size of 512 × 512. Patients were scanned in the prone position with their breasts hanging in the two holes of the 1.5 T Philips MRI scanner coil. We randomly split the dataset 80:20, assigning 8 image volumes for training/validation and 2 image volumes for testing.
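To make the resampling step concrete, a minimal sketch is given below. SimpleITK and the file name are assumptions for illustration, not the authors' stated tooling.

```python
# Hedged sketch: resample a breast MRI volume to 0.65 mm in-plane spacing
# and a 512 x 512 in-plane matrix while keeping the 2 mm slices.
import SimpleITK as sitk

def resample_inplane(img: sitk.Image) -> sitk.Image:
    new_spacing = (0.65, 0.65, img.GetSpacing()[2])  # keep slice thickness
    new_size = (512, 512, img.GetSize()[2])          # keep slice count
    return sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                         img.GetOrigin(), new_spacing, img.GetDirection(),
                         0.0, img.GetPixelID())

volume = sitk.ReadImage("patient01_T2W.nii.gz")      # hypothetical file name
volume = resample_inplane(volume)
```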
B. Preprocessing and ground truth segmentation
It is necessary to manually produce the ground truth (GT) segmentation to generate a reference for the breast region, fat, FGT, and tumor, since no suitable reference analysis is available in online repositories. In addition, among breast MRI scans, every breast has a unique shape and size. Thus, a trained biomedical engineer performed this manual GT segmentation in conjunction with a radiologist from Eastern Virginia Medical School (EVMS) in Norfolk, VA, who revised and validated it. The manual segmentation tools of 3D Slicer 4.11 [20], running on a workstation, were used for this purpose.

As shown in Fig. 1, preprocessing of the dataset and GT segmentation proceed as follows: 1) apply background noise removal; 2) apply a gradient-preserving anisotropic diffusion filter (T2W/STIR); 3) apply N4 bias-field correction; and 4) perform co-registration, which is necessary given that images may have been affected by subject motion and breathing during data acquisition. T2W images are taken as reference images for the mutual-information nonrigid registration [21]. To this end, a series of rigid, affine, and B-spline-based elastic transformations are computed to minimize the loss of image coherence due to misalignment. Hence, we overlay the registered images onto the reference images to validate our adjustment method.
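These preprocessing steps could be prototyped as in the following sketch; SimpleITK and every numeric parameter here are illustrative assumptions, not the authors' published settings.

```python
# Hedged sketch of preprocessing steps 1-4: denoising with an
# edge-preserving anisotropic diffusion filter, N4 bias-field correction,
# and mutual-information-driven registration of STIR onto the T2W reference.
import SimpleITK as sitk

def denoise(img: sitk.Image) -> sitk.Image:
    f = sitk.GradientAnisotropicDiffusionImageFilter()
    f.SetNumberOfIterations(5)
    f.SetTimeStep(0.03)            # must satisfy the 3D stability bound
    f.SetConductanceParameter(2.0)
    return f.Execute(img)

def bias_correct(img: sitk.Image) -> sitk.Image:
    mask = sitk.OtsuThreshold(img, 0, 1)        # rough foreground mask
    return sitk.N4BiasFieldCorrection(img, mask)

def register(fixed: sitk.Image, moving: sitk.Image) -> sitk.Image:
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    # Rigid stage (an affine stage would follow the same pattern).
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform()))
    rigid = reg.Execute(fixed, moving)
    moving = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)
    # B-spline elastic stage on the rigidly aligned volume.
    reg.SetInitialTransform(sitk.BSplineTransformInitializer(
        fixed, transformDomainMeshSize=[8, 8, 4]))
    bspline = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, bspline, sitk.sitkLinear, 0.0)

t2w = denoise(sitk.ReadImage("patient01_T2W.nii.gz", sitk.sitkFloat32))
stir = denoise(sitk.ReadImage("patient01_STIR.nii.gz", sitk.sitkFloat32))
t2w, stir = bias_correct(t2w), bias_correct(stir)
stir_on_t2w = register(fixed=t2w, moving=stir)  # T2W taken as the reference
```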
The manual segmentation is then concluded with these steps: 1) breast region cropping to exclude the rest of the chest, namely the lung and the heart, by choosing landmarks on fat tissue to anchor the exclusion of all non-breast areas; the landmarks are selected based on step 4 of the automatic breast region segmentation method of Singh et al. [22]; 2) applying a foreground filter to obtain the whole breast region mask used for the first nnU-Net model; 3) manually selecting a threshold value and applying Otsu thresholding within the entire breast to separate FGT voxels, fat voxels, and tumors; 4) applying morphological opening (kernel size = 3); 5) segmenting the tumor out of the FGT by region growing; and 6) applying multi-label smoothing to smooth multiple segment labels simultaneously while preserving their watertight interfaces.
Figure 1. Flowchart of the main steps of manual GT segmentation. The preprocessing steps (orange in the original figure) are background noise reduction, gradient anisotropic diffusion, and co-registration of T2W to STIR; the breast region and tissue segmentation steps (blue) are breast region cropping, foreground filter, morphological opening, global Otsu thresholding, grow-from-seeds, and joint smoothing, ending with the completed GT segmentation.

C. Segmentation based on nnU-Net
We build our pipeline on nnU-Net: two cascaded nnU-Net architectures with multimodal inputs of the T2-weighted and STIR fat-suppressed MRI modalities. Our cascaded approach first binarizes the whole breast region and then uses these masks as input to the second network, which performs three-class segmentation of fat tissue, FGT tissue, and tumor mass within the breast region, as shown in Fig. 2, which illustrates the proposed segmentation framework.
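Schematically, the cascade can be expressed as below. This is a minimal sketch assuming NumPy arrays and two already-trained stand-in predictors, not the authors' actual nnU-Net invocation.

```python
# Hedged sketch of the two-stage cascade: stage 1 produces a binary breast
# mask from the multimodal input; stage 2 labels fat, FGT, and tumor only
# within that mask. `predict_stage1`/`predict_stage2` are stand-ins for
# trained nnU-Net models.
import numpy as np

def cascade(t2w: np.ndarray, stir: np.ndarray,
            predict_stage1, predict_stage2) -> np.ndarray:
    volume = np.stack([t2w, stir])         # 2-channel multimodal input (C, D, H, W)
    breast_mask = predict_stage1(volume)   # binary breast-region mask (D, H, W)
    masked = volume * breast_mask[None]    # suppress non-breast anatomy
    tissue = predict_stage2(masked)        # 0=background, 1=fat, 2=FGT, 3=tumor
    return tissue * breast_mask            # keep labels inside the breast only
```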

The nnU-Net architecture is adapted from the 3D U-Net [14, 17, 23]. It consists of a contracting network cascaded with an expansive network, which confers a U-shaped structure. The contracting pathway, coinciding with the downslope of the U, embeds the repeated application of a convolution, a Leaky Rectified Linear Unit (Leaky ReLU), and a max-pooling operation, whereby spatial information is decreased and feature-based representation is increased. The expansive pathway, which marks the upslope, combines features and spatial data through a series of deconvolutions that recover high-resolution features from the contracting path over successive layers [14]. The skip connections between corresponding contracting and expansive layers retain the accurate feature information that is vital for the up-sampled output image. At the final layer of the expansive pathway, a convolution with a 1 × 1 × 1 kernel is performed so that the segmentation result corresponds voxel-wise to the input image [17]. Fig. 3 illustrates the network parameters and their datasets. Rather than a regular cross-entropy loss function, nnU-Net uses a combination of cross-entropy and Dice loss functions to train the one-class and three-class labelings of the first and second networks, respectively, which improves segmentation accuracy and training stability [17]. In addition, eight data augmentation operations, such as scaling, rotation, Gaussian noise, Gaussian blur, and mirroring, are implemented by nnU-Net to cope with our limited training data [17].
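As a concrete illustration of this combined objective, a simplified soft-Dice plus cross-entropy loss could be written as follows in PyTorch (the framework used in this study); nnU-Net's actual implementation differs in details such as batch-Dice handling, smoothing terms, and deep supervision.

```python
# Hedged sketch of a combined soft-Dice + cross-entropy segmentation loss.
import torch
import torch.nn.functional as F

def dice_ce_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """logits: (B, C, D, H, W) raw scores; target: (B, D, H, W) int labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1])  # (B, D, H, W, C)
    onehot = onehot.permute(0, 4, 1, 2, 3).float()           # (B, C, D, H, W)
    dims = (0, 2, 3, 4)                       # reduce over all but the class axis
    intersection = (probs * onehot).sum(dims)
    denominator = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * intersection + 1e-5) / (denominator + 1e-5)
    return ce + (1 - dice.mean())             # lower is better for both terms
```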
It is worth mentioning that nnU-Net embeds some refinements to the U-Net architecture baseline [14,17], namely: (1) convolution padding to maintain the exact image size between inputs and outputs; (2) instance normalization (IN) as a substitute for batch normalization; and (3) Leaky ReLU instead of ReLU, to address the dying-neuron issue. Also, nnU-Net is a self-configuring algorithm in which cropping, resampling, and normalization are adapted to dataset parameters such as slice thickness and resolution as part of the nnU-Net preprocessing step [17].

The nnU-Net model utilizes stochastic gradient descent with an initial learning rate of 0.01 and Nesterov momentum of 0.9 to optimize the loss function [17]. The patch sizes of the two nnU-Net networks are 40 × 192 × 256 and 48 × 160 × 256 for the breast region and breast tissue segmentations, respectively. In this study, the minimum batch size is 2, the minimum feature map size is 4 × 4 × 4, and the maximum number of feature maps is 320; therefore, the number of down-sampling steps is 6. All training runs are set for 1000 epochs, and each epoch consists of 250 batches. We apply 5-fold cross-validation (CV) for training and validation to ensure the reliability of the results and the training. The segmentation model is trained on ODU's High-Performance Computing cluster. A virtual environment based on Python 3.8.5 was created on the cluster using PyTorch 1.6.0 [24] as the framework. In addition, batchgenerators 0.21 [25] and all other necessary Python libraries were installed in the virtual environment. The nnU-Net code is publicly accessible at github.com/MIC-DKFZ/nnUNet.
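The three refinements and the stated optimizer setting translate into code along these lines; the channel widths are illustrative, not the plan nnU-Net actually generates for this dataset.

```python
# Hedged sketch of the U-Net refinements listed above: padded 3D
# convolutions (1), instance normalization instead of batch norm (2), and
# Leaky ReLU instead of ReLU (3), plus the stated SGD configuration.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),  # (1) size-preserving
        nn.InstanceNorm3d(out_ch),                           # (2) IN, not BN
        nn.LeakyReLU(0.01, inplace=True),                    # (3) Leaky ReLU
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(0.01, inplace=True),
    )

model = conv_block(2, 32)   # 2 input channels: T2W + STIR; 32 is illustrative
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)
```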

Figure 2. The pipeline of the two cascaded nnU-Net architectures, where the upper part shows the breast region segmentation (input image modalities, first nnU-Net architecture, breast region mask) and the lower part shows the inner breast tissue segmentation (input image modalities, second nnU-Net architecture, 3-class inner breast region mask with fat in cyan, FGT in yellow, and tumor in red, and the 3-class segmentation output).

Figure 3. The two cascaded nnU-Net architectures, where the upper architecture is the breast region segmentation network and the lower architecture is the inner breast tissue segmentation network.

D. Evaluation
The performance of the cascaded segmentation architecture is evaluated with standard statistical segmentation metrics: the DSC, together with assessments of accuracy, sensitivity, and specificity [26,27].

DSC = 2TP / (2TP + FP + FN)   (1)
Accuracy = (TP + TN) / (TP + TN + FP + FN)   (2)
Sensitivity = TP / (TP + FN)   (3)
Specificity = TN / (TN + FP)   (4)

where TP, TN, FP, and FN represent true positives, true negatives, false positives, and false negatives, respectively.
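Computed over binary masks, metrics (1)-(4) reduce to a few lines; this NumPy sketch is an illustration, not the authors' evaluation script.

```python
# Hedged sketch of metrics (1)-(4) computed voxel-wise from binary masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = int(np.sum(pred & gt))    # true positives
    tn = int(np.sum(~pred & ~gt))  # true negatives
    fp = int(np.sum(pred & ~gt))   # false positives
    fn = int(np.sum(~pred & gt))   # false negatives
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "Accuracy": (tp + tn) / (tp + tn + fp + fn),
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
    }
```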

E. Segmentation-guided elastic breast phantom application
As proof of concept for our automated breast segmentation approach, and to promote surgery planning and navigation experiments for our robotic surgery application, we have developed polyvinyl alcohol cryogel (PVA-C) breast phantoms founded on our segmentation results. We opt for PVA-C because it demonstrates elastic fidelity, mimics soft-tissue deformations, and offers realistic imaging properties [28]. PVA-C is a solution of PVA powder in distilled water that is first heated to promote complete dissolution, then frozen and thawed one or more times, at which point it becomes an elastic solid. Moreover, we can control its elastic and imaging properties through the number of freeze-thaw cycles (FTCs) and the PVA concentration [28]. It is important to emphasize that while 3D printing is an important component of the PVA-C molds, the planning and navigation experiments require a highly compliant soft-tissue phantom, which precludes most 3D-printed options for our phantom implementation.

Polyvinyl alcohol (PVA) powder with a degree of hydrolysis over 99% and an average molecular weight (MW) of 130,000 (Sigma-Aldrich, SKU 563900) is used to prepare the aqueous solutions. Our approach is based on the published methods of Surry et al. [28] and Kharine et al. [29]. First, 10 wt.% PVA powder and 90 wt.% de-ionized water are combined in an Erlenmeyer flask with a magnetic stir bar, and the weight of the flask and its contents is recorded. Next, we stir the mixture on a magnetic stir plate for 30 minutes to break down any masses. The solution is then brought to a 95 °C temperature bath for 2 h, after which we gently stir the PVA for 30 minutes to promote dissolution and ensure the homogeneity of the solution. The resulting solution is stirred for approximately 30 minutes more, then cooled to room temperature. When cooling is completed, the flask is weighed, and any weight loss is restored with de-ionized water so that it remains a 10 wt.% PVA solution.
For the PVA liquid to form an anthropomorphic elastic phantom, it must then be poured into the appropriate molds, which are 3D-printed from surfaces extracted from our segmentation results. Thus, we create 3D-printed molds for the PVA-C tissue mimics of fat, FGT, and tumor. Accordingly, the segmented tissue volumes are processed to extract surfaces using the open-source 3D Slicer 4.11 [20].
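The label-to-surface step can also be reproduced outside of 3D Slicer; the sketch below assumes scikit-image for marching cubes and trimesh for STL export, neither of which is named by the authors, and a hypothetical file layout and voxel spacing.

```python
# Hedged sketch of extracting a printable surface from a binary tissue
# label, mirroring what 3D Slicer does interactively.
import numpy as np
import trimesh
from skimage import measure

fgt_mask = np.load("fgt_mask.npy")  # hypothetical binary FGT label volume (D, H, W)
verts, faces, _, _ = measure.marching_cubes(
    fgt_mask.astype(np.uint8), level=0.5,
    spacing=(2.0, 0.65, 0.65))      # slice thickness and in-plane spacing in mm
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("fgt_surface.stl")      # surface for designing the 3D-printed mold
```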
Before pouring any PVA liquid into a mold, 30 ml of liquid PVA is colored with red enamel paint (Testor's 1105tt, red metallic paint) and 320 ml with yellow paint (Testor's 1115TT, amber metallic paint), so that the colored PVA-C in the sample visually indicates the tumor- and FGT-mimicking tissue boundaries, respectively. We then pour the red-colored liquid PVA into the tumor-mimicking mold and allow it to rest for 12 hours so that air bubbles can rise and dissipate. Liquid PVA is kept well sealed while stored at room temperature. Soon after, the freezing phase is initiated in a standard chest freezer, beginning at room temperature. The phantoms are cooled to −20 °C and kept at this temperature for a total freezing time of 12 hours. Subsequently, the freezer is turned off to begin the thawing phase, and the phantom is allowed to return gradually to room temperature over 12 hours. In this manner, one FTC is completed. Then, a second FTC takes place for the tumor-mimicking volume while we pour the yellow-colored liquid PVA into the FGT-mimicking volume mold for its first FTC.

Subsequently, the tumor and FGT components are removed from their molds and inserted in the fat-mimicking mold to finalize the complete breast phantom. To ensure the spatial coherence of the three components, we sew a nylon thread over the tumor- and FGT-mimicking volumes so that they are suspended in the fat-mimicking volume mold. With these components in place, the container for the fat-mimicking volume is filled with PVA. We then place the lid on top, with the tumor and FGT tissue-mimicking volumes fixed to each other and hanging down at the same distance, and place the assembly in the freezer for a last FTC. Once it is completed, we cut the threads from the outside and remove the phantom from the mold, yielding a breast-mimicking phantom with 1 FTC for the fat-mimicking tissue, 2 FTCs for the FGT component, and 3 FTCs for the tumor. The three-component PVA-C phantom is then complete, and we store it in de-ionized water at 5 °C for safekeeping [28].

III. RESULTS

We evaluated our pipeline on the segmentation evaluation metrics described in section D. All results are shown in Table 1 as mean ± SD. The reported results are from the highest-performing model out of the 5-fold CV. The DSC values for breast region, fat, FGT, and tumor segmentation are 0.95 ± 0.07, 0.95 ± 0.00, 0.83 ± 0.04, and 0.41 ± 0.58, respectively.

Fig. 4 shows an image prediction case of our DL method compared to the ground truth, and Fig. 5 shows a phantom made from the corresponding segmentation results.

Our breast segmentation task is implemented through two cascaded networks, with 5-fold CV models for each network. One model of the breast region network and one of the 3-class inner tissue network took an average of 145 s and 170 s per epoch, respectively, during training. In addition, every model took less than two days (40 h and 47 h, respectively) for the complete training process.

IV. DISCUSSION

Traditional methods of breast MRI segmentation do not perform well in delineating the breast region and labeling breast tissues. Therefore, we proposed a DL approach in this study to address this clinical need and establish a foundation for image navigation. Our segmentation pipeline thus consists of two cascaded networks based on nnU-Net. In addition, a breast phantom was developed from its results to further support the navigation. We used an open dataset of T2W/STIR images of 10 patients.

As shown in Table 1, our pipeline confirmed high accuracy, high sensitivity, and high specificity, preventing over-segmentation while preserving segmentation sensitivity. We compared our results to other deep-learning methods, as shown in Table 2 [30,31]. Our framework achieved advanced performance with fewer MRI scans, particularly for breast region and fat segmentation, with DSC values of 0.95 and 0.95, respectively. Although the DSC values of the literature [30,31] in Table 2 were close to our results, their number of training cases was much larger than ours,
PVA into the FGT- mimicking volume mold for its first FTC.

and their segmentation challenge had two tasks.

Table 1. Performance evaluation of the two networks*.

Network / Seg. task | DSC | Accuracy | Sensitivity | Specificity
1st nnU-Net / Breast region | 0.95±0.00 (0.95±0.00) | 0.98±0.00 (0.99±0.00) | 0.95±0.03 (0.96±0.02) | 0.99±0.00 (0.99±0.00)
2nd nnU-Net / Fat | 0.95±0.00 (0.96±0.00) | 0.99±0.00 (0.99±0.00) | 0.98±0.02 (0.96±0.04) | 0.99±0.00 (0.99±0.00)
2nd nnU-Net / FGT | 0.83±0.04 (0.84±0.03) | 0.99±0.00 (0.99±0.00) | 0.77±0.01 (0.86±0.18) | 0.99±0.00 (0.99±0.00)
2nd nnU-Net / Tumor | 0.41±0.58 (0.35±0.48) | 0.99±0.00 (0.99±0.00) | 0.45±0.64 (0.33±0.45) | 0.99±0.00 (0.99±0.00)

*Results shown are from the fold 2 model of the 5-fold CV; values in parentheses are on the test dataset.

The DSC and sensitivity of tumor segmentation were 0.41 and 0.45, respectively, which indicates that a substantial portion of the tumor mass was falsely unsegmented (i.e., a high false-negative rate) because of the small dataset. However, increasing the training dataset would likely increase the segmentation performance and overcome this challenge.

Additionally, we introduce an elastic breast phantom that can serve in experiments to evaluate and validate research on navigation systems for image-guided breast cancer surgery, since no access to clinical trials can be obtained during the development process.

Figure 4. (a) A prediction case from the test dataset of the breast region mask, in white, overlaid on its ground truth. (b) GT of the 3-class segmentation, showing fat tissue in light blue, FGT in yellow, and tumor in red. (c) The prediction from the test dataset of the 3-class segmentation for the same case as (b).

Our study has some limitations. First, our small dataset of 10 patients will be expanded in future work to improve performance and reliability. Another limitation is that we used MRI scans from an online repository with no control over the MRI scanning protocols. Additionally, the breast phantom has not been characterized for US and MRI imaging, as we rely on the characterization method of Kharine et al. [29]. However, even with these limitations, our framework enabled us to present impressive results and show the potential capabilities of cascaded nnU-Net architectures. Finally, in future work we will investigate surgery planning and navigation supporting the intraoperative US based on our patient-specific elastic breast phantom.

Figure 5. The developed PVA-C breast phantom based on the segmentation results: (a) the yellow represents the segmented volume of FGT, and the red represents the tumor volume; (b) the PVA-C breast phantom mimicking fat, FGT, and tumor.

V. CONCLUSION

We present a DL segmentation pipeline built on nnU-Net for breast region, fat, FGT, and tumor segmentation. In addition, we present a tissue-mimicking breast phantom created from these segmentation results. We used multimodality breast MRI datasets obtained from a public archive. The nnU-Net-based pipeline showed high segmentation accuracy across routine breast MR images without fine-tuning or post-processing steps. Thus, this study can benefit future automated breast segmentation studies and establish a foundation for developing and validating robotic experiments in intra-operative and artificial-intelligence studies.

ACKNOWLEDGMENT

This work was supported by Old Dominion University (Biomedical Engineering) and by Eastern Virginia Medical School.

Table 2. DSC values comparison of our method and other literature*.

Author | Method, number of training datasets | Breast region | Fat | FGT | Tumor sensitivity
Dalmış et al. [30] | 2-D U-Net, 66 | 0.94 ± 0.00 | - | 0.81 ± 0.01 | -
Jiao et al. [31] | U-Net++, 75 | 0.95 ± 0.00 | - | - | 0.87
Ours | 3-D nnU-Net, 8 | 0.95 ± 0.00 | 0.95 ± 0.00 | 0.83 ± 0.04 | 0.45

*For comparison purposes, tumor sensitivity results are shown.
REFERENCES

[1] R. L. Siegel, K. D. Miller, and A. Jemal, "Cancer statistics, 2016," (in eng), CA Cancer J Clin, vol. 66, no. 1, pp. 7-30, Jan-Feb 2016, doi: 10.3322/caac.21332.
[2] A. Jemal, F. Bray, M. M. Center, J. Ferlay, E. Ward, and D. Forman, "Global cancer statistics," (in eng), CA Cancer J Clin, vol. 61, no. 2, pp. 69-90, Mar-Apr 2011, doi: 10.3322/caac.20107.
[3] C. S. Giess, E. D. Yeh, S. Raza, and R. L. Birdwell, "Background Parenchymal Enhancement at Breast MR Imaging: Normal Patterns, Diagnostic Challenges, and Potential for False-Positive and False-Negative Interpretation," RadioGraphics, vol. 34, no. 1, pp. 234-247, 2014, doi: 10.1148/rg.341135034.
[4] H. J. Aerts et al., "Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach," (in eng), Nat Commun, vol. 5, p. 4006, Jun 3 2014, doi: 10.1038/ncomms5006.
[5] U. Nestle et al., "Comparison of different methods for delineation of 18F-FDG PET-positive tissue for target volume definition in radiotherapy of patients with non-small cell lung cancer," (in eng), J Nucl Med, vol. 46, no. 8, pp. 1342-8, Aug 2005.
[6] O. Bernard et al., "Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved?," IEEE Transactions on Medical Imaging, vol. 37, no. 11, pp. 2514-2525, 2018, doi: 10.1109/TMI.2018.2837502.
[7] R. W. Y. Granzier et al., "MRI-based radiomics in breast cancer: feature robustness with respect to inter-observer segmentation variability," (in eng), Sci Rep, vol. 10, no. 1, p. 14163, Aug 25 2020, doi: 10.1038/s41598-020-70940-z.
[8] Z. Pang, D. Zhu, D. Chen, L. Li, and Y. Shao, "A computer-aided diagnosis system for dynamic contrast-enhanced MR images based on level set segmentation and ReliefF feature selection," (in eng), Comput Math Methods Med, vol. 2015, p. 450531, 2015, doi: 10.1155/2015/450531.
[9] W. Chen, M. L. Giger, and U. Bick, "A fuzzy c-means (FCM)-based approach for computerized segmentation of breast lesions in dynamic contrast-enhanced MR images," (in eng), Acad Radiol, vol. 13, no. 1, pp. 63-72, Jan 2006, doi: 10.1016/j.acra.2005.08.035.
[10] A. Gubern-Mérida et al., "Automated localization of breast cancer in DCE-MRI," (in eng), Med Image Anal, vol. 20, no. 1, pp. 265-74, Feb 2015, doi: 10.1016/j.media.2014.12.001.
[11] S. Wu, S. P. Weinstein, E. F. Conant, M. D. Schnall, and D. Kontos, "Automated chest wall line detection for whole-breast segmentation in sagittal breast MR images," (in eng), Med Phys, vol. 40, no. 4, p. 042301, Apr 2013, doi: 10.1118/1.4793255.
[12] E. Shelhamer, J. Long, and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation," (in eng), IEEE Trans Pattern Anal Mach Intell, vol. 39, no. 4, pp. 640-651, Apr 2017, doi: 10.1109/tpami.2016.2572683.
[13] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481-2495, 2017, doi: 10.1109/TPAMI.2016.2644615.
[14] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Cham, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, Eds., 2015: Springer International Publishing, pp. 234-241.
[15] F. Milletari, N. Navab, and S. Ahmadi, "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation," in 2016 Fourth International Conference on 3D Vision (3DV), 25-28 Oct. 2016, pp. 565-571, doi: 10.1109/3DV.2016.79.
[16] Y. Zhang et al., "Automatic Breast and Fibroglandular Tissue Segmentation in Breast MRI Using Deep Learning by a Fully-Convolutional Residual Neural Network U-Net," (in eng), Acad Radiol, vol. 26, no. 11, pp. 1526-1535, Nov 2019, doi: 10.1016/j.acra.2019.01.012.
[17] F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. H. Maier-Hein, "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation," Nature Methods, vol. 18, no. 2, pp. 203-211, 2021, doi: 10.1038/s41592-020-01008-z.
[18] F. Isensee, P. F. Jäger, P. M. Full, P. Vollmuth, and K. H. Maier-Hein, "nnU-Net for Brain Tumor Segmentation," in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Cham, A. Crimi and S. Bakas, Eds., 2021: Springer International Publishing, pp. 118-132.
[19] A. Lasso, T. Heffter, A. Rankin, C. Pinter, T. Ungi, and G. Fichtinger, "PLUS: open-source toolkit for ultrasound-guided intervention systems," (in eng), IEEE Trans Biomed Eng, vol. 61, no. 10, pp. 2527-37, Oct 2014, doi: 10.1109/tbme.2014.2322864.
[20] A. Fedorov et al., "3D Slicer as an image computing platform for the Quantitative Imaging Network," (in eng), Magn Reson Imaging, vol. 30, no. 9, pp. 1323-41, Nov 2012, doi: 10.1016/j.mri.2012.05.001. [Online]. Available: http://www.slicer.org.
[21] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellyn, and W. Eubank, "Nonrigid multimodality image registration," in Medical Imaging 2001: Image Processing, 2001, vol. 4322: International Society for Optics and Photonics, pp. 1609-1620.
[22] S. Thakran, S. Chatterjee, M. Singhal, R. K. Gupta, and A. Singh, "Automatic outer and inner breast tissue segmentation using multi-parametric MRI images of breast tumor patients," (in eng), PLoS One, vol. 13, no. 1, p. e0190348, 2018, doi: 10.1371/journal.pone.0190348.
[23] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Cham, S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, Eds., 2016: Springer International Publishing, pp. 424-432.
[24] A. Paszke et al., "PyTorch: An Imperative Style, High-Performance Deep Learning Library," in NeurIPS, 2019.
[25] F. Isensee, P. Jäger, J. Wasserthal, D. Zimmerer, J. Petersen, S. Kohl, J. Schock, A. Klein, T. Roß, S. Wirkert, P. Neher, S. Dinkelacker, G. Köhler, and K. Maier-Hein, "batchgenerators - a python framework for data augmentation," Zenodo, 2020, doi: 10.5281/ZENODO.3632567.
[26] A. Tharwat, "Classification assessment methods," Applied Computing and Informatics, vol. 17, no. 1, pp. 168-192, 2021, doi: 10.1016/j.aci.2018.08.003.
[27] A. Popovic, M. de la Fuente, M. Engelhardt, and K. Radermacher, "Statistical validation metric for accuracy assessment in medical image segmentation," International Journal of Computer Assisted Radiology and Surgery, vol. 2, no. 3, pp. 169-181, 2007, doi: 10.1007/s11548-007-0125-1.
[28] K. J. Surry, H. J. Austin, A. Fenster, and T. M. Peters, "Poly(vinyl alcohol) cryogel phantoms for use in ultrasound and MR imaging," (in eng), Phys Med Biol, vol. 49, no. 24, pp. 5529-46, Dec 21 2004, doi: 10.1088/0031-9155/49/24/009.
[29] A. Kharine et al., "Poly(vinyl alcohol) gels for use as tissue phantoms in photoacoustic mammography," (in eng), Phys Med Biol, vol. 48, no. 3, pp. 357-70, Feb 7 2003, doi: 10.1088/0031-9155/48/3/306.
[30] M. U. Dalmış et al., "Using deep learning to segment breast and fibroglandular tissue in MRI volumes," (in eng), Med Phys, vol. 44, no. 2, pp. 533-546, Feb 2017, doi: 10.1002/mp.12079.
[31] H. Jiao, X. Jiang, Z. Pang, X. Lin, Y. Huang, and L. Li, "Deep Convolutional Neural Networks-Based Automatic Breast Segmentation and Mass Detection in DCE-MRI," Computational and Mathematical Methods in Medicine, vol. 2020, p. 2413706, 2020, doi: 10.1155/2020/2413706.
