
bioengineering

Article
Rapid Segmentation and Diagnosis of Breast Tumor Ultrasound
Images at the Sonographer Level Using Deep Learning
Lei Yang 1,† , Baichuan Zhang 2,† , Fei Ren 3,† , Jianwen Gu 1, *, Jiao Gao 1, *, Jihua Wu 1 , Dan Li 1 , Huaping Jia 1 ,
Guangling Li 4 , Jing Zong 1 , Jing Zhang 1 , Xiaoman Yang 1 , Xueyuan Zhang 2 , Baolin Du 2 , Xiaowen Wang 2
and Na Li 2

1 Strategic Support Force Medical Center, Beijing 100024, China; leiyang1105@163.com (L.Y.);
jh_02821@126.com (J.W.); 13388715773@163.com (Jing Zhang)
2 Chongqing Zhijian Life Technology Co., Ltd., Chongqing 400039, China; 15755082328@163.com (B.Z.)
3 State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences,
Beijing 100049, China; renfei@ict.ac.cn
4 Central Medical District of Chinese PLA General Hospital, Beijing 100080, China
* Correspondence: gujianwen5000@sina.com (J.G.); tsyxzxky@126.com (J.G.)
† These authors contributed equally to this work.

Abstract: Background: Breast cancer is one of the most common malignant tumors in women. A noninvasive ultrasound examination can identify mammary-gland-related diseases and is well tolerated by dense breasts, making it a preferred method for breast cancer screening and of significant clinical value. However, the diagnosis of breast nodules or masses via ultrasound is performed by a doctor in real time, which is time-consuming and subjective. Junior doctors are prone to missed diagnoses, especially in remote areas or grass-roots hospitals, due to limited medical resources and other factors, which bring great risks to a patient's health. Therefore, there is an urgent need to develop fast and accurate ultrasound image analysis algorithms to assist diagnoses. Methods: We propose a breast ultrasound image-based assisted-diagnosis method based on convolutional neural networks, which can effectively improve the diagnostic speed and the early screening rate of breast cancer. Our method consists of two stages: tumor recognition and tumor classification. (1) Attention-based semantic segmentation is used to identify the location and size of the tumor; (2) the identified nodules are cropped to construct a training dataset. Then, a convolutional neural network for the diagnosis of benign and malignant breast nodules is trained on this dataset. We collected 2057 images from 1131 patients as the training and validation dataset, and 100 images of patients with accurate pathological criteria were used as the test dataset. Results: The experimental results based on this dataset show that the MIoU of tumor location recognition is 0.89 and the average accuracy of benign and malignant diagnoses is 97%. The diagnostic performance of the developed system is basically consistent with that of senior doctors and is superior to that of junior doctors. In addition, the system can provide the doctor with a preliminary diagnosis so that a final diagnosis can be made quickly. Conclusion: Our proposed method can effectively improve diagnostic speed and the early screening rate of breast cancer. The system provides a valuable aid for the ultrasonic diagnosis of breast cancer.

Keywords: convolutional neural network; ultrasound; breast cancer; tumor identification; auxiliary diagnosis

1. Introduction
The American Cancer Society's journal, CA Cancer J Clin, published cancer statistics for the United States in 2023 [1], and the report states that breast cancer alone accounts for 31 percent of cancers in women. According to the IARC statistics [2], there are 420,000 new cases of female breast cancer in China, which is one of the countries with the fastest-growing breast cancer incidence, exceeding the global rate of 2% per year. The incidence rate
of breast cancer in rural and remote areas has increased significantly, with an annual growth
rate of 6.3%. Breast cancer has become the most common malignant tumor threatening
women's health. The analysis of many prospective randomized controlled trials (RCTs) [3–5] shows that strong and effective breast cancer screening can increase the rate of early diagnosis [6], increase the chance of a cure and reduce mortality. Breast cancer screening [7] includes breast X-ray imaging, molybdenum-target mammography, ultrasound and other modalities. Ultrasound has good resolution when used to examine human soft tissue [8–10]; in clinical medicine, it performs well in identifying tiny pathological changes in tissue.
Experienced doctors can determine pathological changes via ultrasound images, which can
distinguish between benign and malignant lesions [11], gaining valuable treatment time
through the early screening of patients.
At present, ultrasound diagnosis has certain limitations [12]. Different hospitals
have different ultrasonic machines, and the quality of images collected by using different
machines differs, resulting in large differences in the characteristics expressed by the images,
which will cause difficulties for doctors in diagnoses. Due to the different methods of image
acquisition used by doctors with different levels of experience, a full picture of the lesion
area may not be captured from multiple angles, resulting in deformations in the mass. The
interpretation of ultrasound depends greatly on the personal experience of the attending physician, and less experienced doctors may fail to recognize the lesion area in an ultrasound image. This can result in missed diagnoses and misdiagnoses, delaying the optimal window for treatment. In order to avoid the above limitations as much
as possible, the use of computers to learn from ultrasound images [13–16], simulate the
diagnosis of doctors with high experience and identify the region and category of breast
tumors can greatly shorten the diagnosis time and provide doctors with an auxiliary basis
for diagnoses.
In recent years, big data has helped solve the problem of feature acquisition in many
image tasks, and its advantages can be increased by combining artificial intelligence tech-
nology in medical image tasks [17–19]. Some related work on the detection of breast tumors
has used relevant methods. For example, Chen et al. [20] proposed a machine learning
method that can automatically segment ultrasound tumors using the threshold features of
local regions and morphological algorithms to achieve tumor localization. However, the
ultrasonic image has an uneven gray scale, heavy artifacts and strong noise, and the recogni-
tion effect of tumors against a complex background is not good. Wang et al. [21] proposed a
multi-view convolutional neural network based on Inception-V3 and trained small datasets
using transfer learning. Du et al. [22] used an edge-enhanced anisotropic diffusion (EEAD)
model to achieve data preprocessing, establish shape sample constraint terms and improve
the recognition accuracy of difficult samples, with a final accuracy of 92.58%. However, the
actual collection of breast ultrasound images is complicated and changeable, and there are
certain limitations. Yang et al. [23] used dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and proposed a hybrid integrated model, MEI-CNN, at the diagnostic
stage, which classified benign and malignant breast tumors through modular classification.
This has certain advantages compared with a single classifier. Yu et al. [24] studied a deep
learning automatic classification algorithm, TDS-Net, based on breast ultrasound images,
designed a double-branch structure to enhance its ability to extract features and conducted
experiments with data provided by Yunnan Cancer Hospital. The results showed that the
accuracy rate was 94.67%, but the generalization performance needed to be improved.
Previous studies mostly focused on a single task and did not integrate the identification and classification of breast tumors [25,26]. The simple task of ultrasound image segmentation and classification often cannot help clinicians make a rapid diagnosis, and the actual use process will be restricted by certain conditions. Therefore, according to the needs of clinical diagnosis and treatment and the advantages of the deep learning algorithm, we proposed combining the two tasks, which not only outline the shape of the tumor, but can also be used to provide preliminary diagnostic results to form a systematic diagnosis and treatment plan [27,28] and provide a practical and effective means of auxiliary diagnostics for clinicians.
Therefore, the main contributions of our work are as follows:
- We introduce a two-stage convolutional neural network framework that integrates deep-learning-based image segmentation and classification tasks, thereby reducing the time to diagnosis and improving recognition accuracy.
- With the integration of the attention mechanism in the tumor recognition method, the breast tumor region can be located and information regarding the size and shape of its main region can be obtained. The influence of the ultrasonic background on the classification can be reduced, and the diagnostic effect of senior ultrasound doctors can be achieved.
- We validate the efficiency of the proposed method on different datasets.
- We develop an intelligent auxiliary diagnosis system for breast ultrasound images, which can realize end-to-end recognition, provide auxiliary diagnosis schemes for ultrasound doctors and improve diagnosis efficiency.
2. Materials and Methods
2.1. Breast Ultrasound Image Datasets Are Used for the Development of Multiple Convolutional Neural Networks
In this study, we collected 2057 images from 1131 patients who underwent breast ultrasonography at the Strategic Support Force Medical Center in 2022. In order to avoid the influence of racial differences, the images we retained were all from female patients in China and contained at least one tumor area. Multiple images of the same patient were assigned to the same training set or validation set. The training set and the validation set were divided according to an 8:2 ratio. All the data were used for the localization and identification of breast tumors after desensitization and for the training and verification of the classification of benign and malignant tumors. The collected data are shown in Figure 1.

Figure 1. Ultrasound images of breast tumors. Among them, (a,b) are benign tumors, while (c,d) are malignant tumors.
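As an illustration of the patient-level 8:2 split described above, in which all images of one patient stay in the same subset, a minimal sketch is given below; the record layout and the patient_id field are assumptions made for illustration and are not part of the original study.

```python
# Illustrative patient-level 8:2 split (assumed record layout, not the authors' code).
from collections import defaultdict
import random

def split_by_patient(records, train_ratio=0.8, seed=42):
    """records: list of dicts such as {"image": "patient_0001_img2.png", "patient_id": "patient_0001"}."""
    by_patient = defaultdict(list)
    for record in records:
        by_patient[record["patient_id"]].append(record)

    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n_train = int(len(patients) * train_ratio)

    train = [r for p in patients[:n_train] for r in by_patient[p]]
    val = [r for p in patients[n_train:] for r in by_patient[p]]
    return train, val
```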

After the pathological diagnosis, the images of axillary lymph node metastasis were
excluded, and 264 ultrasound images of the other 100 patients were used to test the
network performance, of which 56 were benign and 44 were malignant. The specific dataset
distribution is shown in Table 1.

Table 1. The number of breast tumor images for the training, validation and test datasets.

Diagnosis       Training and Validation Datasets             Test Datasets
                Patients (n)        Images (n)               Patients (n)        Images (n)
benign          649                 1288                     56                  132
malignancy      482                 769                      44                  132
2.2. Data Processing
In order to train the breast tumor recognition module, we invited an experienced ultrasound physician to mark the location of the tumor and used labelme software (Release 3.0) to fine-tune the location of the nodule. The markings cover all the outer edges of the tumor. The classification of benign and malignant tumors was also provided by a senior ultrasound doctor who had more than 20 years of experience in ultrasound diagnosis.
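To make the annotation step concrete, the sketch below rasterizes a labelme polygon annotation into a binary training mask. The JSON keys follow the standard labelme export format, but the tumor-label convention and file handling are assumptions for illustration rather than the study's actual pipeline.

```python
# Sketch: convert a labelme polygon annotation into a binary tumor mask (assumed conventions).
import json
import numpy as np
from PIL import Image, ImageDraw

def labelme_to_mask(json_path):
    with open(json_path, "r", encoding="utf-8") as f:
        annotation = json.load(f)

    height, width = annotation["imageHeight"], annotation["imageWidth"]
    mask = Image.new("L", (width, height), 0)          # background = 0
    draw = ImageDraw.Draw(mask)
    for shape in annotation["shapes"]:                 # one polygon per marked nodule
        points = [tuple(point) for point in shape["points"]]
        draw.polygon(points, outline=1, fill=1)        # tumor = 1
    return np.array(mask, dtype=np.uint8)
```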
2.3. Method Overview
At present, CNNs [29] and Transformers [30] are the most widely used deep learning techniques in the field of image processing. However, transformer-based methods require a large amount of data for training, and our data volume is small, so we use a CNN, which is less prone to overfitting on small datasets. The main structure of the CNN consists of a convolutional layer, a pooling layer and a fully connected layer. By fusing the above network layers, we build a fully convolutional neural network model, Attention-Unet [31], to identify the location of breast tumors from end to end, and then use D-CNN, another deep-learning-based network classifier, to classify the tumors. The overall flow chart is shown in Figure 2.

Figure 2. A schematic of our method. We trained a multi-convolutional neural network model, including an Attention-Unet network for region-of-interest identification and a D-CNN network for benign and malignant classification. The primary goal of the first-stage network is to accurately locate the breast tumor. In order to mitigate the impact of the background noise of ultrasound images, the output of the first-stage network is used as the input of the second-stage network after a post-processing step, which involves extracting image blocks from the central mask region. After that, the second-stage network, D-CNN, is used as the classification network to distinguish the tumor categories.
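To make the two-stage flow of Figure 2 concrete, the following inference sketch chains a segmentation model and a classification model, crops a 512 × 512 patch centered on the predicted mask and zero-pads the image so that tumors near the border are not deformed. The model objects, threshold and normalization are placeholders; this is a schematic under our own assumptions, not the released implementation.

```python
# Schematic two-stage inference: segment the tumor, crop a mask-centered patch, classify it.
# seg_model and cls_model stand in for the Attention-Unet and D-CNN described in the text.
import torch
import torch.nn.functional as F

@torch.no_grad()
def two_stage_predict(image, seg_model, cls_model, patch=512):
    """image: normalized tensor of shape (1, 1, H, W)."""
    mask = (seg_model(image).sigmoid() > 0.5).float()        # stage 1: tumor mask
    ys, xs = torch.nonzero(mask[0, 0], as_tuple=True)
    if ys.numel() == 0:
        return mask, None                                    # no tumor detected

    cy, cx = int(ys.float().mean()), int(xs.float().mean())  # centre of the predicted mask
    half = patch // 2
    padded = F.pad(image, (half, half, half, half))          # zero padding keeps edge tumors intact
    roi = padded[..., cy:cy + patch, cx:cx + patch]          # patch centred on the tumor
    return mask, cls_model(roi).softmax(dim=1)               # stage 2: benign/malignant probabilities
```

At training time, the same mask-centered cropping can be applied around the annotated tumor to build the patch dataset for the second-stage classifier.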

In the first stage of the network, all the input images are fed, together with their masks, into the Attention-Unet network, which contains both encoding and decoding modules. First of all, the input image enters the coding module; the coding part uses the convolutional layer as the feature extraction layer, and then the pooling layer is used to reduce the size of the feature map to obtain high-level semantic features. The decoding part integrates the output feature map after encoding, combined with a series of upsampling operations, and finally outputs a feature map of the original size. In feature maps of different sizes, the location information of breast tumors differs, and it is easy to lose the location information of tumors during the network learning process, resulting in the misidentification of non-tumor regions and strongly similar boundaries. Therefore, we added the attention module to the network layer to selectively learn the input features, and we added fusion operations to the upsampling to fuse the feature map information of different scales so that the network could learn the features of tumor location and edge. The experimental results show that this method can distinguish the edge and texture of breast tumors from the background, providing better edge and shape information for the second-stage classification network. The Attention-Unet model structure is shown in Figure 3.

Figure 3. Attention-Unet network structure. We used the Unet+Attention module as the feature extractor of the first-stage network, the main purpose of which was to improve the weight of position information by fusing multi-layer feature maps and to perform end-to-end output through an encoding layer and a decoding layer.

The attention module integrated in the decoding layer can provide the neural network with the ability to focus on its output. Its core function is to make the network pay more attention to the regions that require attention, which is generally reflected by weights. The main implementation method is that the inputs are the current layer of the encoder and the decoder layer of the corresponding size, respectively. Firstly, the input is a C × H × W feature map, and a 1 × 1 × N feature map is obtained using the activation function F1. This feature map matches the number of feature channels of the input and represents the global distribution of the feature response. Then, each feature channel is weighted by the activation function F2, which is used to represent the correlation and weight between feature channels. Finally, the activation function F3 is used to restore the feature map, and the previous features are weighted by channel to output a feature map with different weight information. In this way, the attention coefficient can be used to weight the feature map so as to obtain the final feature map. The attention module structure diagram is shown in Figure 4.
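One common way to realize such a gate, in the spirit of the attention modules used in Attention-Unet-style decoders [31], is sketched below. The 1 × 1 convolutions and the ReLU/sigmoid activations play the roles of F1 to F3 above, but the channel sizes and exact layer arrangement are illustrative assumptions rather than the published configuration.

```python
# Sketch of an additive attention gate for the decoding path (illustrative, not the exact model).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, enc_channels, dec_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)  # encoder features
        self.phi = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)    # decoder (gating) features
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)               # per-pixel attention score
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, enc_feat, dec_feat):
        # enc_feat and dec_feat are assumed to share spatial size (upsample beforehand if needed).
        attn = self.relu(self.theta(enc_feat) + self.phi(dec_feat))
        attn = self.sigmoid(self.psi(attn))      # attention coefficients in [0, 1]
        return enc_feat * attn                   # re-weight encoder features before fusion
```

The gated encoder features are then concatenated with the upsampled decoder features, which corresponds to the fusion operations described above.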

Figure 4. Attention module structure diagram. The feature graph is weighted by a 1 × 1 convolution.

Then, the mask result identified by the first network is superimposed on the original image to obtain the tumor coordinates, and a 512 × 512 patch centered on the tumor region is generated according to its position. For tumors in the edge region, zero padding is added to avoid deformations of the tumor region, and these patches are used as the training set of the second-stage network classifier, D-CNN. The classification model is then trained with this deep convolutional neural network. The D-CNN network is built from multi-layer convolution, pooling and fully connected layers. ResNet18 is used as the basic feature extraction layer, and multi-scale feature fusion is applied in the middle layers to obtain the feature maps of the high-level network and the low-level network, respectively. The high-dimensional and low-dimensional features are then fused through a concatenation operation that superimposes multiple feature maps. Finally, the multi-layer feature maps are integrated through the fully connected layer to predict the category with the highest score. The D-CNN model structure diagram is shown in Figure 5.

Figure 5. The D-CNN model consists of a feature extraction module and a fully connected layer for feature fusion. The model can extract 1 × 64 global feature maps from the complete original image. The lower model layers can reduce the dimension of the original feature map through the pooling layer. At the end of the CNN model, the global feature map and the local feature map of the fully connected layer are fused to output a two-dimensional vector and predict the optimal classification label of the input image.
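The sketch below shows one way to realize the described classifier: a ResNet18 backbone whose low-level and high-level feature maps are globally pooled, concatenated and passed to a fully connected layer. The choice of layers, the channel sizes and the torchvision API (version 0.13 or later assumed) are our assumptions, so the published D-CNN may differ in detail.

```python
# Illustrative D-CNN-style classifier: ResNet18 backbone with multi-scale fusion by concatenation.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TumorClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet18(weights=None)  # expects 3-channel input; repeat grayscale channels if needed
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2   # low-level features
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4   # high-level features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128 + 512, num_classes)                   # fuse low- and high-level features

    def forward(self, x):
        x = self.stem(x)
        low = self.layer2(self.layer1(x))                  # (N, 128, H/8, W/8)
        high = self.layer4(self.layer3(low))               # (N, 512, H/32, W/32)
        fused = torch.cat([self.pool(low).flatten(1),      # multi-scale fusion by concatenation
                           self.pool(high).flatten(1)], dim=1)
        return self.fc(fused)                              # highest-scoring class is the prediction
```

Concatenating pooled features from two depths is a simple form of the multi-scale fusion described above.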
2.4. Evaluation Indicators
(1) MIoU and Dice are used to evaluate the first deep convolutional neural network.
MIoU: The MIoU is the mean intersection-over-union, computed as the area of overlap divided by the area of union, which measures the agreement between the real and predicted regions.
Dice: The Dice coefficient is a common binary-segmentation index used to measure the degree of overlap between the predicted segmentation and the true segmentation.
(2) The most commonly used indicators in a medical evaluation for the second-stage convolutional neural network include accuracy (ACC), sensitivity, specificity and the area under the curve. The sensitivity is also called the true positive rate (TPR), which is the probability that a malignant case is correctly classified as malignant, and the specificity is also called the true negative rate (TNR), which is the probability that a patient who does not actually have the disease is correctly classified as benign. The AUC is the area under the receiver operating characteristic (ROC) curve.
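For reference, the metrics above can be computed as in the following sketch, assuming binary segmentation masks and binary benign (0) / malignant (1) labels.

```python
# Sketch of the evaluation metrics (IoU/Dice for masks; accuracy, sensitivity, specificity for labels).
import numpy as np

def iou_and_dice(pred_mask, true_mask):
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = inter / union if union else 1.0                         # area of overlap / area of union
    total = pred.sum() + true.sum()
    dice = 2 * inter / total if total else 1.0
    return iou, dice

def classification_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0            # true positive rate (TPR)
    specificity = tn / (tn + fp) if (tn + fp) else 0.0            # true negative rate (TNR)
    return accuracy, sensitivity, specificity
```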

3. Result
3.1. Model Performance Evaluation
We tested the validity of multiple models on breast ultrasound image datasets, and
as a comparison, we also used a U-net network with no added attention mechanism to
demonstrate the validity of the model’s added attention mechanism. The MIoU and Dice
coefficients of the breast tumor localization module were about 0.89 and 0.92, respectively.
As shown in Table 2, we compared the results of different network models in the verification
of tumor region recognition on breast ultrasound images. A comparison of the final results
shows that our model is superior to other methods in terms of the evaluation indexes. The
comparison of model recognition results is shown in Figure 6.

Figure 6. Segmentation model identification results show: (a) original image; (b) ground truth, the mask confirmed by the doctor; (c) prediction image, the mask identified by the model; (d) heat map, probabilistic heat map of the model output. The closer the color is to red, the deeper the model's focus.

The experimental results clearly demonstrate that the Attention-Unet model we used plays a powerful role in the recognition of breast tumors. By comparison with the label map, the recognition results of the model show that it can distinguish the tumor from the background region, and the details of the edge region are also well preserved. We also output the intermediate process of the model, and the probability heat map also proves that the focus position of the model is correct.

Table 2. Performance evaluation of segmentation network model.

Model          MIoU     MDice
U-net          0.82     0.84
Fast-RCNN      0.83     0.85
Deeplab V3     0.85     0.87
Ours           0.89     0.92

For the classification network, D-CNN, we evaluate its final performance using accuracy, sensitivity and specificity. The accuracy of the model is 97%, the sensitivity is 97.7%, and the specificity is 96.4%. The results show that our model can distinguish between benign and malignant breast tumors. The AUC and ROC curves are shown in Figure 7.

Figure 7. Confusion matrix and ROC curve. When tested using a dataset with pathological results, the confusion matrix of the diagnostic procedure is shown in the left figure, with a sensitivity of 97.7% and a specificity of 96.4% for diagnosing benign and malignant tumors. The ROC curve of the diagnosis is shown on the right, with an AUC of 0.96.

3.2. External Data Validation
To verify the generalization ability of our proposed algorithm, we collected data from 100 patients from Ejin Banner Hospital in Inner Mongolia as an external validation set. Each patient had a clear image of the tumor, and all of them were female patients from China. We used our model to test these data on various metrics, including accuracy, sensitivity and specificity. The results show that the accuracy of the model is 94%, the sensitivity is 91.7% and the specificity is 95.3%, showing the same excellent performance. Compared with the original dataset, the test results show that the model has potential for clinical application, which also proves that our algorithm can achieve the same performance using data from different hospitals and machines. The confusion matrix of the validation dataset is shown in Figure 8.

Figure 8. Confusion matrix. We tested the externally validated data and showed an accuracy of 94%, sensitivity of 91.7% and specificity of 95.3%.

3.3. Ablation Experiment
In order to verify the validity of the two-stage network we used, we designed an ablation experiment to demonstrate that the segmented and unsegmented data have a certain impact on the classification results. Specifically, we designed the experiment using the same dataset and distribution as the unsegmented method and, in order to ensure the fairness of the experiment, we used the same hyperparameters to ensure optimal performance and a fair evaluation. The confusion matrix without the segmentation algorithm is shown in Figure 9. The results of the different methods on the two datasets are shown in Table 3.

Table 3. Comparison of classification performance before and after segmentation.

Model            Accuracy    Sensitivity    Specificity    AUC
U-net + D-CNN    97%         97.7%          96.4%          0.96
D-CNN            89%         86.4%          91.5%          0.87

Figure 9. Confusion matrix. We tested the externally validated data and showed an accuracy of 89%, sensitivity of 86.4% and specificity of 91.5%.

The experimental results show that the model index trained by the unsegmented
tumor data is lower than that trained after data segmentation. Due to the complexity
of ultrasound images, the background region has a great impact on the judgment of the
model, and the two-stage model we propose can avoid this problem, which also verifies
the rationality of our design algorithm.

3.4. Design of Intelligent Auxiliary Diagnosis System for Breast Ultrasound Imaging
In order to make the artificial intelligence algorithm more interactive, we designed
and developed an intelligent assisted-diagnosis system for breast ultrasound. The main
object of this system is the clinician. Through end-to-end recognition, the sonographer
only needs to input the acquired images into our system and can immediately obtain the
location of the tumor and the benign and malignant results.
The system design is mainly divided into several steps: 1. Upload pictures: the
ultrasound images taken by the doctor, including JPG, PNG and DICOM formats, are
uploaded directly to the system. 2. Through AI analysis, the location and type of breast
tumor nodules, benign or malignant, are identified by using an algorithm, and the results
are displayed on the software. 3. Doctor discrimination, where the accuracy of the system output is verified against the doctor's judgment of the recognition results. The software diagnosis logic and interface are shown in Figure 10.
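As a rough sketch of steps 1 and 2, the snippet below reads an uploaded JPG, PNG or DICOM file and runs the two-stage analysis; segment_fn, classify_fn and the report fields are hypothetical names used only for illustration and do not describe the system's actual interface.

```python
# Sketch of the upload-and-analyze step (file handling only; assumed function names and report fields).
from pathlib import Path
import numpy as np
import pydicom
from PIL import Image

def load_ultrasound(path):
    """Read a JPG/PNG/DICOM file into a 2-D grayscale array."""
    path = Path(path)
    if path.suffix.lower() in {".dcm", ".dicom"}:
        return pydicom.dcmread(path).pixel_array.astype(np.float32)
    return np.array(Image.open(path).convert("L"), dtype=np.float32)

def analyze(path, segment_fn, classify_fn):
    """segment_fn and classify_fn wrap the two networks described in Section 2.3."""
    image = load_ultrasound(path)
    mask = segment_fn(image)                   # tumor outline shown in the image area
    label, score = classify_fn(image, mask)    # "benign" / "malignant" with a confidence score
    return {"tumor_pixels": int(mask.sum()), "diagnosis": label, "confidence": float(score)}
```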

Figure 10. (a) Diagnostic system logical structure diagram. The diagnostic system consists of 4 steps: (1) image upload, in which the doctor uploads the image that needs to be detected to the diagnostic system; (2) AI analysis, using the algorithm model to identify the location and category of breast tumors; (3) output results and visualization, displayed in the front-end interface; and (4) the doctor evaluates the results and judges the correctness of the output. (b) Diagnostic system interface. The interface of the diagnostic system includes the information area and the image area, and the information area includes the number of breast tumors, tumor size and whether the tumor is benign or malignant. The image area contains the results and position outline of the recognition image.

3.5. Clinical Trial Design
Ultrasound doctors with different years of experience may not make the same judgment regarding tumor images. In order to verify the effectiveness of our algorithm in clinical diagnosis for sonographers, we designed a multi-viewer, multi-data-sample validation test. We invited three sonographers of different seniorities as the study group, with more than 20 years, 3 years and 1 year of ultrasonography experience, respectively. We divided the three sonographers into three independent groups, and they acquired completely independent test results. The test process was divided into two stages.
In the first stage, we first let the senior and two junior readers interpret the ultrasonic images directly, without using the algorithm results. In the second stage, after a washout period, our algorithm gave the result as a reference, and then the viewer carried out a new round of interpretation.
In this experiment, the film readers used the algorithm-assisted interpretation results as
the experimental group, the interpretation results without algorithm-assisted diagnosis as
the control group and the pathological results as the gold standard. Based on tumor location
accuracy, benign and malignant accuracy and the time required to diagnose 100 patients,
we finally came to the following conclusions. The results of the clinical trial are shown in
Tables 4 and 5:

Table 4. Results for control group.

Medical Seniority    Accuracy of Benign and Malignant Diagnosis    Time
1 year               71%                                            60 min
3 years              83%                                            45 min
20 years             98%                                            42 min

Table 5. Results for trial group.

Medical Seniority    Accuracy of Benign and Malignant Diagnosis    Time
1 year               85%                                            40 min
3 years              92%                                            34 min
20 years             98%                                            16 min

As can be seen from the above table, the benign and malignant diagnosis of breast
tumors by senior doctors in the control group and the experimental group was not affected,
but the time to diagnosis was reduced by more than half, mainly because we could mark the tumor
location, reducing the interpretation of tumor location by senior doctors and improving
the diagnostic efficiency. For junior doctors, the results of the trial group show that our
approach can significantly reduce the time it takes to locate tumors in complex ultrasound
images, and the labeled locations and results can also assist junior doctors and improve
their judgement accuracy.

4. Conclusions
In this paper, we introduce a multi-task framework combining tumor segmentation
and classification tasks in breast ultrasound images. Our proposed method mainly con-
sists of two modules: a U-net segmentation module based on attention mechanism and
a D-CNN classification model. In order to enhance the extraction of spatial position in-
formation in the image, we used the attention mechanism to increase the position weight
of features to improve the segmentation accuracy. Moreover, the classification of benign
and malignant tumors by using segmentation images also confirmed that the preservation
of local tumor regions was superior to the classification of the whole image. The exper-
imental results on multi-center datasets show that our method has good generalization
performance. Compared with other state-of-the-art methods, our method is capable of
end-to-end identification, taking into account both performance and timeliness, and can
help sonographers improve diagnostic efficiency in clinical practice.

5. Discussion
Although our experimental results have demonstrated the validity of our proposed
model, there are some limitations. First of all, most ultrasound images in clinical practice
are continuous DICOM images, and benign and malignant tumors with different frame
numbers and angles are not necessarily the same. However, we used 2D datasets, ignoring
the correlation information between different layers. Secondly, the patients in the dataset
we used were all from the same region, and whether there are differences between different

ethnic groups remains to be studied. Finally, although our model shows some results in
simulated scenarios, its performance in real medical settings needs to be further explored.
To address these limitations, we plan to develop better models, for example using data in DICOM format, that incorporate continuity information between the different layers of
the image. In addition, we would like to experiment with more center data and doctors
using our auxiliary diagnostic software, which will further validate the practicability of our
method in clinical medical diagnosis and contribute to more efficient and accurate breast
tumor identification in the future.

Author Contributions: L.Y. designed research. B.Z. designed and completed the experiment and
wrote the manuscript. F.R., J.G. (Jianwen Gu), J.G. (Jiao Gao), H.J. and J.W. reviewed and edited the
manuscript. X.Z. polished the manuscript. D.L., G.L., J.Z. (Jing Zong) and J.Z. (Jing Zhang) collected
data. X.Y., B.D., X.W. and N.L. annotated data. All authors have read and agreed to the published
version of the manuscript.
Funding: This research was supported by grants from the research on Artificial Intelligence-based
Breast Tumor Localization Diagnosis System (22XK0105), the Informatization Plan of Chinese
Academy of Sciences (CAS-WX2021SF-0101), the National Key Research and Development Pro-
gram of China (2021YFF1201005) and the National Natural Science Foundation of China (82072939).
Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki and approved by the Medical Ethics Committee of the Chinese People's Liberation Army Strategic Support Specialty Medical Center (code: LL-XKZT-2022(2)).
Informed Consent Statement: Informed consent was obtained from all subjects involved in this
study. Written informed consent was obtained from the patient(s) to publish this paper if applicable.
Data Availability Statement: Data are not available at present because there are further research
plans. If necessary, you can contact the author to obtain the data.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Siegel, R.L.; Miller, K.D.; Wagle, N.S.; Jemal, A. Cancer statistics, 2023. CA A Cancer J. Clin. 2023, 73, 17–48. [CrossRef] [PubMed]
2. World Health Organization. World Cancer Report, 2022; IARC: International Agency for Research on Cancer: Lyon, France, 2022.
3. Kim, M.S.; Yun, L.B.; Jang, M. Clinical Applications of Automated Breast Ultrasound: Screening for Breast Cancer. J. Korean Soc.
Radiol. 2019, 80, 32–46. [CrossRef]
4. Kriaucioniene, V.; Petkeviciene, J. Predictors and Trend in Attendance for Breast Cancer Screening in Lithuania. Int. J. Environ.
Res. Public Health 2019, 16, 4535. [CrossRef] [PubMed]
5. Das, M. Supplemental ultrasonography for breast cancer screening. Lancet Oncol. 2019, 20, e244. [CrossRef] [PubMed]
6. Hooley, R.J.; Butler, R. Modern Challenges in Assessing Breast Cancer Screening Strategies: A Call for Added Resources. Radiology
2023, 306, e230145. [CrossRef]
7. Xing, B.; Chen, X.; Wang, Y.; Li, S.; Liang, Y.K.; Wang, D. Evaluating breast ultrasound S-detect image analysis for small focal
breast lesions. Front. Oncol. 2022, 12, 1030624. [CrossRef]
8. Marini, T.J.; Castaneda, B.; Iyer, R.; Baran, T.M.; Nemer, O.; Dozier, A.M.; Parker, K.J.; Zhao, Y.; Serratelli, W.; Matos, G.; et al.
Breast Ultrasound Volume Sweep Imaging: A New Horizon in Expanding Imaging Access for Breast Cancer Detection. J.
Ultrasound Med. Off. J. Am. Inst. Ultrasound Med. 2022, 42, 817–832. [CrossRef]
9. Nicosia, L.; Pesapane, F.; Bozzini, A.C.; Latronico, A.; Rotili, A.; Ferrari, F.; Signorelli, G.; Raimondi, S.; Vignati, S.; Gaeta, A.; et al.
Prediction of the Malignancy of a Breast Lesion Detected on Breast Ultrasound: Radiomics Applied to Clinical Practice. Cancers
2023, 15, 964. [CrossRef]
10. Choi, E.J.; Lee, E.H.; Kim, Y.M.; Chang, Y.W.; Lee, J.H.; Park, Y.M.; Kim, K.W.; Kim, Y.J.; Jun, J.K.; Hong, S. Interobserver agreement
in breast ultrasound categorization in the Mammography and Ultrasonography Study for Breast Cancer Screening Effectiveness
(MUST-BE) trial: Results of a preliminary study. Ultrasonography 2019, 38, 172. [CrossRef]
11. Apantaku, L. Breast cancer diagnosis and screening. Am. Fam. Physician 2000, 62, 605–606.
12. Romeo, V.; Cuocolo, R.; Apolito, R.; Stanzione, A.; Ventimiglia, A.; Vitale, A.; Verde, F.; Accurso, A.; Amitrano, M.; Insabato, L.;
et al. Clinical value of radiomics and machine learning in breast ultrasound: A multicenter study for differential diagnosis of
benign and malignant lesions. Eur. Radiol. 2021, 31, 9511–9519. [CrossRef] [PubMed]
13. Zhang, S.; Liao, M.; Wang, J.; Zhu, Y.; Zhang, Y.; Zhang, J.; Zheng, R.; Lv, L.; Zhu, D.; Chen, H.; et al. Fully automatic tumor
segmentation of breast ultrasound images with deep learning. J. Appl. Clin. Med. Phys. 2022, 24, e13863. [CrossRef] [PubMed]

14. Mahant, S.S.; Varma, A.R.; Varma, A. Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine.
Cureus 2022, 14, e28945. [CrossRef]
15. Podda, A.S.; Balia, R.; Barra, S.; Carta, S.; Fenu, G.; Piano, L. Fully-automated deep learning pipeline for segmentation and
classification of breast ultrasound images. J. Comput. Sci. 2022, 63, 101816. [CrossRef]
16. Qu, X.; Lu, H.; Tang, W.; Wang, S.; Zheng, D.; Hou, Y.; Jiang, J. A VGG attention vision transformer network for benign and
malignant classification of breast ultrasound images. Med. Phys. 2022, 49, 5787–5798. [CrossRef] [PubMed]
17. Mishra, A.K.; Roy, P.; Bandyopadhyay, S.; Das, S.K. CR-SSL: A closely related self-supervised learning based approach for
improving breast ultrasound tumor segmentation. Int. J. Imaging Syst. Technol. 2021, 32, 1209–1220. [CrossRef]
18. Mishra, A.K.; Roy, P.; Bandyopadhyay, S.; Das, S.K. Achieving highly efficient breast ultrasound tumor classification with deep
convolutional neural networks. Int. J. Inf. Technol. 2022, 14, 3311–3320. [CrossRef]
19. Huang, Y.; Han, L.; Dou, H.; Luo, H.; Yuan, Z.; Liu, Q.; Zhang, J.; Yin, G. Two-stage CNNs for computerized BI-RADS
categorization in breast ultrasound images. Biomed. Eng. Online 2019, 18, 8. [CrossRef]
20. Chen, D.; Liu, Y.; Tao, Y.; Liu, W. A level set segmentation algorithm for breast B-ultrasound images combined with morphological
automatic initialization. Electron. Compon. Inf. Technol. 2022, 6, 22–24.
21. Wu, Y.; Lou, L.; Xu, B.; Huang, J.; Zhao, L. Intelligent classification diagnosis of ultrasound images of breast tumors based on
transfer learning. Chin. Med. Imaging Technol. 2019, 35, 357–360.
22. Du, Z.; Gong, X.; Luo, J.; Zhang, J.; Yang, F. Classification of confusing and difficult samples in breast ultrasound images. Chin. J.
Image Graph. 2020, 25, 7.
23. Yang, Z.; Wang, J.; Xin, C. Automatic segmentation and classification of MR Images using DCE-MRI combined with improved
convolutional neural network. J. Chongqing Univ. Technol. 2020, 34, 147–157.
24. Yu, F.; Yi, L.; Luo, X.; Li, L.; Yi, S. A deep learn-based BI-RADS classification method for breast ultrasound images. J. Yunnan Univ.
2023, 17, 815–824.
25. Ren, P.; Sun, W.; Luo, C.; Hussain, A. Clustering-Oriented Multiple Convolutional Neural Networks for Single Image Super-
Resolution. Cogn. Comput. 2018, 10, 165–178. [CrossRef]
26. Hussain, S.; Xi, X.; Ullah, I.; Inam, S.A.; Naz, F.; Shaheed, K.; Ali, S.A.; Tian, C. A Discriminative Level Set Method with Deep
Supervision for Breast Tumor Segmentation. Comput. Biol. Med. 2022, 149, 105995. [CrossRef] [PubMed]
27. Kabir, S.M.; Bhuiyan, M.I. Correlated-Weighted Statistically Modeled Contourlet and Curvelet Coefficient Image-Based Breast
Tumor Classification Using Deep Learning. Diagnostics 2022, 13, 69. [CrossRef] [PubMed]
28. Cheng, X.; Tan, L.; Ming, F. Feature Fusion Based on Convolutional Neural Network for Breast Cancer Auxiliary Diagnosis. Math.
Probl. Eng. 2021, 2021, 7010438. [CrossRef]
29. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [CrossRef] [PubMed]
30. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need.
Comput. Sci. 2017, 30, 7.
31. Zhong, Y. Attention Mechanisms with UNet3+ on Brain Tumor MRI Segmentation. In Proceedings of the 2021 International
Conference on Biological Engineering and Medical Science (ICBioMed 2021), Online, 9–15 December 2021; AIP Publishing: Long
Island, NY, USA, 2021; pp. 209–215.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
