Liu 2019
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2893357, IEEE Access
ABSTRACT Objective: A novel fuzzy reasoning model using temporal grayscale change and texture
information of acetic acid test cervical images was developed to classify patients at risk for Cervical
Intraepithelial Neoplasia (CIN). Methods: Pre- and post- acetic acid test images were obtained from 505
patients (383 CIN negative, 122 CIN positive). An automatic image segmentation algorithm was
implemented to extract the acetowhite region, followed by extraction of three features: (1) the temporal grayscale
change from pre- to post-test images, (2) the texture coarseness and (3) the texture complexity of the post-test images. An
index of CIN was derived based on fuzzy reasoning incorporating expert knowledge. Logistic regression
models were built with cross-validation to compare the classification performance between different combinations
of features. Results: The averaged sensitivity was 80.8%, 80.9%, 82.8% and specificity was 82.0%, 87.4%,
86.2% for three individual features, respectively. The combination of all three features significantly
improved the sensitivity (84.7%) without a significant loss of specificity (86.5%). The fuzzy reasoning
model further improved the sensitivity significantly (sensitivity 85.9%, specificity 86.6%). Conclusion:
Image texture features outperformed temporal grayscale change in specificity. Combining grayscale-based
and texture-based features could improve the overall classification performance, creating a synergistic
effect. The proposed fuzzy reasoning model could further increase the sensitivity without requiring
additional features. Significance: The proposed automatic CIN classification algorithm could provide a
useful screening tool in the prevention of cervical cancers.
INDEX TERMS Cervical cancer screening, acetic acid test, fuzzy reasoning, texture feature.
2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Jun Liu et al.: A Fuzzy Reasoning Model for Cervical Intraepithelial Neoplasia Classification using Temporal Grayscale Change and
Textures of Cervical Images during Acetic Acid Tests
FIGURE 2. Representative pre- and post-acetic acid test images from (a) a healthy patient (CIN negative), (b) a patient with LSIL and (c) a patient with HSIL. Pre-test images are shown on the left and post-test images on the right.
D1 = λ1 · (Ar(x,y) + Ag(x,y) + Ab(x,y)) / (256 × 3)
D2 = λ2 · (2·Ar(x,y) − Ag(x,y) − Ab(x,y)) / 256
D3 = λ3 · (Ba(x,y) − 128) / 256
D4 = λ4 · [(x − m/2)² + (y − n/2)²] / [(m/2)² + (n/2)²]        (2)

where Ar(x,y), Ag(x,y) and Ab(x,y) represent the RGB components of the pixel (x,y), Ba(x,y) represents the component of channel A in the LAB color space, and D1, D2, D3 and D4 are user-defined functions describing the overall grayscale intensity, the red-channel intensity in the RGB color space, the A-channel intensity in the LAB color space and the central location tendency (with m and n the image dimensions), with λ1, λ2, λ3 and λ4 being the weighting coefficients, which were solved by the k-means clustering algorithm.

The segmentation of the cervix region was performed in both the pre- and post-acetic acid test images. Only the regions overlapping in both images were considered as the region of interest in which the subsequent feature extraction was performed.

Specular reflection is often encountered in colposcopy images and interferes with the true change of grayscale intensities [21, 27-29]. A threshold method was therefore used: a pixel was labeled as a specular reflection if its G and B components in the RGB color space were both greater than 200 (for grayscale values within 0-255). All specular reflection regions were excluded from the region of interest. An illustration of this step is provided in Fig. 3.

where Gray(x,y) denotes the grayscale intensity at pixel (x,y), ranging from 0 to 255. Next, a ratio image was defined using (4):

ratio_image(x,y) = [post_image(x,y) / avg_post_image] / [pre_image(x,y) / avg_pre_image]        (4)

where pre_image(x,y) and post_image(x,y) represent the grayscale intensity at pixel (x,y) in the pre- and post-acetic acid images, respectively, avg_pre_image and avg_post_image represent the averaged grayscale intensity of the pre- and post-acetic acid images, respectively, and ratio_image(x,y) indicates the temporal grayscale intensity change normalized by the global intensity level.

Based on the ratio image, the first feature was defined as the percentage of the area in the region of interest showing a ratio value greater than 1.2 (greater value in the CIN region), using (5):

Feature_1 = area_AW / area_ROI        (5)

where area_AW represents the area of the region showing a ratio greater than 1.2, and area_ROI represents the area of the total region of interest.

FIGURE 4. Conversion of the scale of input and output values to a standard continuous domain of [1, 10].
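The specular-reflection masking and the computation of (4) and (5) can be illustrated with a minimal NumPy sketch. The function name `feature_1`, the simple channel-mean stand-in for Gray(x,y) (the paper's (3) is not reproduced here), and taking the global averages over the ROI are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def feature_1(pre_rgb, post_rgb, roi_mask, thresh=1.2):
    """Hypothetical helper: temporal grayscale change feature of eqs. (4)-(5)."""
    # Label specular reflections: G and B components both > 200 (either image).
    spec = ((pre_rgb[..., 1] > 200) & (pre_rgb[..., 2] > 200)) | \
           ((post_rgb[..., 1] > 200) & (post_rgb[..., 2] > 200))
    roi = roi_mask & ~spec

    # Simple channel-mean grayscale as a stand-in for Gray(x, y).
    pre = pre_rgb.mean(axis=-1)
    post = post_rgb.mean(axis=-1)

    # Eq. (4): normalize each image by its global (ROI) mean, then take the ratio.
    pre_n = pre / pre[roi].mean()
    post_n = post / post[roi].mean()
    ratio = post_n / np.maximum(pre_n, 1e-6)

    # Eq. (5): fraction of ROI pixels whose temporal ratio exceeds the threshold.
    return float((ratio[roi] > thresh).mean())
```

With a synthetic pair in which half of the post-test ROI brightens from 100 to 160 while the pre-test image stays flat, the returned fraction is 0.5, matching the eq. (5) definition.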
Jun Liu et al.: A Fuzzy Reasoning Model for Cervical Intraepithelial Neoplasia Classification using Temporal Grayscale Change and
Textures of Cervical Images during Acetic Acid Tests
grayscale intensities of the two pixels are w and v, with a spatial distance of d and an intersection angle of θ, respectively. In our experiment, w and v were in the range of [0, 255], and d and θ were set to a 2-pixel distance and 0 degrees, respectively. As the most prominent feature of the acetowhite region lay within the region with an increased grayscale level (ratio_image(x,y) > 1), the co-occurrence matrix p(w,v) was calculated only within that region, in order to minimize the influence of the non-acetowhite region on the extraction of the subsequent texture features.

The energy and entropy of the gray-level co-occurrence matrix were calculated from the images according to (7) and (8), in which L denotes the highest gray level of the image. The energy of the co-occurrence matrix reflects the image texture coarseness (greater energy, less coarse) or, equivalently, the texture smoothness (greater energy, higher smoothness), while the entropy of the co-occurrence matrix reflects the image texture complexity (greater entropy, more complex) [30, 31]. Consequently, the second feature was defined as the inverse of the energy of the co-occurrence matrix, representing image texture coarseness (greater value in CIN positive patients), and the third feature was defined directly as the entropy of the co-occurrence matrix, representing image texture complexity (greater value in CIN positive patients):

Feature_2 = 1 / (1000 × energy) = 1 / (1000 × Σ_{w=1}^{L} Σ_{v=1}^{L} p(w,v)²)        (7)

Feature_3 = entropy = −Σ_{w=1}^{L} Σ_{v=1}^{L} p(w,v) · log[p(w,v)]        (8)

F. INDEX OF CIN – FEATURE FROM FUZZY REASONING MODEL
A fuzzy reasoning model addresses the vague aspects of information; it is less restrictive and well suited to handling information provided by humans. It extends classic binary logic reasoning to a continuous domain, using a gradual transition between the pertinence and non-pertinence of an element within a set. The three features described above were used as inputs to the fuzzy reasoning model. The output was termed the "index of CIN", a final feature assessing the severity of intraepithelial neoplasia. To do this, the input and output values were projected onto a continuous universe of [1, 10] according to the relation shown in Fig. 4. Next, the domain values were categorized into three sets ('Small', 'Medium' and 'Big') according to the membership functions shown in Fig. 5, for each input feature and for the index of CIN. Lastly, fuzzy reasoning rules were developed based on the expert knowledge of an experienced gynecologic oncologist, as shown in Table I.

TABLE I
FUZZY REASONING RULES

Rule #   Feature_1   Feature_2   Feature_3   Index of CIN
1        Big         Big         Big         Big
2        Big         Big         Medium      Big
3        Big         Big         Small       Medium
4        Big         Medium      Big         Big
5        Big         Medium      Medium      Medium
6        Big         Medium      Small       Medium
7        Big         Small       Big         Medium
8        Big         Small       Medium      Medium
9        Big         Small       Small       Medium
10       Medium      Big         Big         Medium
11       Medium      Big         Medium      Medium
12       Medium      Big         Small       Medium
13       Medium      Medium      Big         Medium
14       Medium      Medium      Medium      Medium
15       Medium      Medium      Small       Medium
16       Medium      Small       Big         Medium
17       Medium      Small       Medium      Small
18       Medium      Small       Small       Small
19       Small       Big         Big         Medium
20       Small       Big         Medium      Medium
21       Small       Big         Small       Small
22       Small       Medium      Big         Small
23       Small       Medium      Medium      Small
24       Small       Medium      Small       Small
25       Small       Small       Big         Small
26       Small       Small       Medium      Small
27       Small       Small       Small       Small

During implementation, the continuous domain of [1, 10] for each feature and for the output was discretized into the set [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. The reasoning process of the fuzzy system was determined using (9) and (10), by combining the input subsets and the corresponding output subsets according to Table I. For our fuzzy model with three features and three subsets per feature, 27 rules existed in the database:

R = ∪_{i=1}^{27} [F1(i) ∧ F2(i) ∧ F3(i) ∧ P(i)]        (9)

where '∧' represents the intersection operation and '∪' represents the union operation, R is the rule database, F1(i), F2(i) and F3(i) are the subsets of the three inputs for the i-th rule, and P(i) is the subset of the output for the i-th rule.

P*_j = (F*_1j ∧ F*_2j ∧ F*_3j) ∘ R = (p_j1, p_j2, …, p_j10),   j = 1, …, 1000        (10)

where '∘' represents the max-min composition of fuzzy relations [32], F*_1j, F*_2j and F*_3j are the subsets of each input for the j-th combination (10 × 10 × 10 combinations in total), and P*_j is the output for the j-th combination, consisting of ten components (p_j1, p_j2, …, p_j10). P*_j can be defuzzified to the universe of [1, 10] according to (11) [33] and then mapped to the CIN index range of [0, 100] according to Fig. 4 and (12). The input-output matrix (10 × 10 × 10) was calculated offline and saved locally to facilitate the calculation for any new input combination based on linear interpolation.
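The texture features of (7) and (8) can be sketched as follows. The horizontal pair counting follows the stated d = 2, θ = 0° settings; the 256-level quantization and the function name `texture_features` are illustrative assumptions, not the authors' exact code.

```python
import numpy as np

def texture_features(gray, mask, d=2):
    """Features 2 and 3 (eqs. 7-8) from a gray-level co-occurrence matrix
    at distance d and angle 0; a sketch of the described approach."""
    L = 256
    p = np.zeros((L, L))
    # Count horizontal pixel pairs (angle 0, distance d) lying inside the mask.
    for y in range(gray.shape[0]):
        for x in range(gray.shape[1] - d):
            if mask[y, x] and mask[y, x + d]:
                p[gray[y, x], gray[y, x + d]] += 1
    p /= max(p.sum(), 1)                       # normalize counts to probabilities

    energy = (p ** 2).sum()                    # eq. (7): Feature_2 = 1/(1000*energy)
    nz = p[p > 0]
    entropy = float(-(nz * np.log(nz)).sum())  # eq. (8): Feature_3 = entropy
    return 1.0 / (1000.0 * energy), entropy
```

For a perfectly uniform region the co-occurrence mass collapses onto a single cell, so the energy is 1 (Feature_2 = 0.001, minimal coarseness) and the entropy is 0 (minimal complexity), matching the interpretation given above.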
FIGURE 5. Relation of the membership functions (subsets) and the corresponding standardized input and output values. The red, green and blue lines denote the fuzzy sets 'Small', 'Medium' and 'Big', respectively.
index(j) = ( Σ_{i=1}^{10} i · p_ji ) / ( Σ_{i=1}^{10} p_ji )        (11)

CIN_index(j) = [index(j) − 1] × 100/9,   j = 1, …, 1000        (12)

The algorithm was implemented in Matlab R2014a on a laptop with a 2.53 GHz CPU and 4 GB of RAM.

G. DATA ANALYSIS
A total of 505 pairs of pre- and post-acetic acid test images were obtained (383 CIN negative and 122 CIN positive). To compare the performance of the index of CIN (derived from fuzzy reasoning) against the individual features or their combinations, a logistic regression model was built separately using the following independent variables: (1) feature 1 (temporal grayscale change), (2) feature 2 (image texture coarseness), (3) feature 3 (image texture complexity), (4) features 2 and 3 (image texture), (5) features 1, 2 and 3 (temporal grayscale change and texture features, without fuzzy reasoning) and (6) the index of CIN (fuzzy reasoning based on temporal grayscale change and textures). A five-fold cross-validation was repeated 1000 times, randomly selecting 80% of the negative and positive samples as the training dataset and the remaining 20% as the testing dataset in each repetition. The parameters of the logistic regression fitted on the training dataset were used to predict the classifications in the testing dataset.

III. RESULTS

A. IMAGE REGISTRATION, SEGMENTATION AND FEATURE EXTRACTION
For all patients, pre-to-post acetic acid test image registration and segmentation were successfully implemented, with exemplary pre- and post-test image registration from three subjects shown in Fig. 6 and exemplary automatic segmentation of the acetowhite region from nine subjects shown in Fig. 7. Based on the image registration and segmentation results, the three proposed features were extracted and the index of CIN was derived using the proposed fuzzy reasoning model. A comparison of each individual feature and the index of CIN between CIN negative and CIN positive patients is shown in Fig. 8.

In CIN negative and CIN positive patients respectively, feature 1 (temporal grayscale change) was 0.112±0.104 and 0.206±0.178, feature 2 (image texture coarseness) was 0.975±0.746 and 1.734±1.685, feature 3 (image texture complexity) was 7.220±0.784 and 7.756±0.770, and the index of CIN after fuzzy reasoning was 24.057±20.299 and 62.360±48.793. All comparisons reached statistical significance on independent two-sample t tests (p<0.05).
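The full inference chain of (9)-(12) can be sketched in NumPy. The 27 rules are taken from Table I; the shoulder/triangular membership shapes, the names `SETS`, `RULES` and `cin_index`, and the use of crisp singleton inputs on the discretized universe are assumptions standing in for Fig. 5 and the authors' actual implementation.

```python
import numpy as np

U = np.arange(1, 11, dtype=float)  # discretized universe [1..10]

# Assumed 'Small'/'Medium'/'Big' membership vectors on U (Fig. 5 stand-ins).
S = np.clip((5.5 - U) / 4.5, 0, 1)
M = np.clip(1 - np.abs(U - 5.5) / 4.5, 0, 1)
B = np.clip((U - 5.5) / 4.5, 0, 1)
SETS = {"S": S, "M": M, "B": B}

# Table I, rules 1-27: (Feature_1, Feature_2, Feature_3) -> Index of CIN.
RULES = [("B","B","B","B"), ("B","B","M","B"), ("B","B","S","M"),
         ("B","M","B","B"), ("B","M","M","M"), ("B","M","S","M"),
         ("B","S","B","M"), ("B","S","M","M"), ("B","S","S","M"),
         ("M","B","B","M"), ("M","B","M","M"), ("M","B","S","M"),
         ("M","M","B","M"), ("M","M","M","M"), ("M","M","S","M"),
         ("M","S","B","M"), ("M","S","M","S"), ("M","S","S","S"),
         ("S","B","B","M"), ("S","B","M","M"), ("S","B","S","S"),
         ("S","M","B","S"), ("S","M","M","S"), ("S","M","S","S"),
         ("S","S","B","S"), ("S","S","M","S"), ("S","S","S","S")]

# Eq. (9): rule relation R(w,x,y,z) = union over rules of the min-intersection.
R = np.zeros((10, 10, 10, 10))
for f1, f2, f3, out in RULES:
    a = SETS[f1][:, None, None, None]
    b = SETS[f2][None, :, None, None]
    c = SETS[f3][None, None, :, None]
    p = SETS[out][None, None, None, :]
    R = np.maximum(R, np.minimum(np.minimum(a, b), np.minimum(c, p)))

def cin_index(x1, x2, x3):
    """Eqs. (10)-(12) for crisp inputs x1, x2, x3 in {1,...,10}: max-min
    composition, centroid defuzzification, then mapping [1,10] -> [0,100]."""
    p = R[int(x1) - 1, int(x2) - 1, int(x3) - 1, :]  # eq. (10), singleton inputs
    idx = (U * p).sum() / p.sum()                     # eq. (11)
    return (idx - 1) * 100 / 9                        # eq. (12)
```

Under these assumed memberships, all-'Big' inputs defuzzify toward the top of the scale and all-'Small' inputs toward the bottom, which is the monotone behavior Table I encodes.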
The averaged sensitivity and specificity in CIN classification
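The evaluation scheme of Section G (repeated stratified 80/20 splits, averaging sensitivity and specificity over repetitions) can be sketched as below. The plain-gradient-descent logistic regression and the names `fit_logreg` and `repeated_split_cv` are stand-ins for the paper's Matlab implementation, not the authors' code.

```python
import numpy as np

def fit_logreg(X, y, lr=0.5, steps=3000):
    """Logistic regression via plain gradient descent (illustrative stand-in)."""
    Xb = np.c_[np.ones(len(X)), X]            # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

def repeated_split_cv(X, y, reps=50, seed=0):
    """Repeated stratified 80/20 splits; returns mean sensitivity/specificity."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    sens, spec = [], []
    for _ in range(reps):
        # Sample 80% of each class for training; the rest is the test set.
        tr = np.concatenate([rng.permutation(pos)[:int(0.8 * len(pos))],
                             rng.permutation(neg)[:int(0.8 * len(neg))]])
        te = np.setdiff1d(np.arange(len(y)), tr)
        w = fit_logreg(X[tr], y[tr])
        score = np.c_[np.ones(len(te)), X[te]] @ w
        pred = score > 0                      # probability 0.5 threshold
        yp = y[te] == 1
        sens.append((pred & yp).sum() / yp.sum())
        spec.append((~pred & ~yp).sum() / (~yp).sum())
    return float(np.mean(sens)), float(np.mean(spec))
```

On well-separated synthetic data this recovers near-perfect averaged sensitivity and specificity, confirming that the split-and-average bookkeeping behaves as described.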
Figure 6. Representative image registration results for three patients, with pre-test images shown in the upper panel, post-test images in the middle panel and registered images in the lower panel. The '+' marker indicates reference pixels for comparison.

Figure 7. Representative images of cervix region and specular reflection region segmentation from nine patients. The green curves indicate the cervix region segmented by the presented algorithm, the red curves are the cervix region drawn by the expert, and the blue curves are the specular reflection regions segmented by the reported method.
B. CLASSIFICATION PERFORMANCE USING INDIVIDUAL FEATURES
A summary of the sensitivity and specificity for different feature inputs is shown in Fig. 9. Comparing the classification performance of the individual features provides useful insights into the separate roles of image color and image texture features. The sensitivity and specificity reached 80.8% and 82.0% with feature 1 (temporal grayscale change), 80.9% and 87.4% with feature 2 (image texture coarseness), and 82.8% and 86.2% with feature 3 (image texture complexity). Compared to the temporal grayscale change, both image texture features significantly improved the specificity (p<0.001), while the image texture complexity additionally improved the sensitivity significantly (p<0.001). These results suggest that texture features can effectively enhance the classification performance in detecting false positives, compared to temporal grayscale change alone. One possible explanation is that there may still be residual artifacts (non-cervical tissues, such as the colposcopy device or specular reflections) that also exhibit a large increase in grayscale intensity from pre- to post-test conditions, despite the automatic segmentation and specular reflection algorithms implemented to mitigate this influence. Such artificial increases in image color could result in false positive predictions. In contrast, image texture features are insensitive to the temporal change between pre- and post-test conditions; rather, they are based on the coarseness and complexity of the overall image. Such image texture features can more reliably rule out the interference from artifacts, which lack the identifiable textures of cervical tissues (e.g., vascular patterns, lesion margins). As the majority of existing VIA-based algorithms currently rely on color-based features of the acetowhite region, our findings provide evidence supporting the use of image texture features in CIN classification.

C. CLASSIFICATION PERFORMANCE USING COMBINATIONAL FEATURES
When both texture features were used (F2 + F3, Fig. 9), the sensitivity and specificity were 82.5% and 85.1%. Compared to individual F2 or F3, their combination maintained the relatively high sensitivity offered by F3 alone, with a slightly reduced specificity. Such a lack of synergistic effect may result from the fact that these two features, originating from the same co-occurrence matrix, were both based on texture information and, as a result, shared much common information. However, when the temporal grayscale change feature (F1) was additionally included, a significant increase was observed in both sensitivity and specificity (p<0.001), suggesting a synergistic effect. Therefore, our findings provide evidence of a synergistic effect between temporal grayscale change and image texture features, further supporting the use of texture features in addition to traditionally used image color-based features.

D. CLASSIFICATION PERFORMANCE USING THE INDEX OF CIN FROM FUZZY REASONING MODEL
Lastly, we evaluated the performance of the index of CIN derived from the proposed fuzzy reasoning model. Compared with the combination of all three features, the index of CIN reached a sensitivity of 85.9% and a specificity of 86.6%, with the sensitivity significantly improved and the highest among all models. As the CIN positive group in our study included patients with LSIL, who are usually difficult to distinguish from CIN negative patients, a high sensitivity is critical: these patients are not without risk of cervical cancer and should be carefully examined. Considering that the fuzzy reasoning model was essentially based on the same feature inputs, the increase in sensitivity without requiring additional features is a particularly useful advantage, largely owing to the model's capability of incorporating expert knowledge. This advantage may be particularly valuable in clinical practice, as it does not place a demanding requirement on the training dataset size.
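Sensitivity and specificity are the metrics reported throughout these subsections; a tiny helper (hypothetical name `sens_spec`) shows how they follow from the confusion counts:

```python
def sens_spec(pred, truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    return tp / (tp + fn), tn / (tn + fp)
```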
Figure 8. Box-plot distributions of the three features and the index of CIN after fuzzy reasoning in each patient group.
Figure 9. A summary of sensitivity and specificity for different feature inputs used in the classification.
acetowhite region segmentation and temporal grayscale change characterization. According to a recent state-of-the-art review [16], the majority of existing studies relied heavily on the post-acetic acid images for CIN grading (without using the temporal acetowhite patterns intrinsic to the color changes). As multiple factors can affect the image acquisition process (e.g., camera position, overall intensity, use of exposure, distance of the camera from the cervix), variations across subjects add additional difficulty to CIN classification based solely on post-test images. In contrast, our study employed an automatic cervical region segmentation algorithm to extract and align the cervical region in both pre- and post-test images, and then extracted the temporal grayscale change by referencing the post-test image to the pre-test image, effectively utilizing patient-specific temporal information during the acetic acid test and eliminating variations in the baseline images and the image acquisition process.

Second, although the color-based feature of the acetowhite region represents the signature of suspicious CIN, texture features of the acetowhite region provide equally important information regarding the health of cervical tissues. For example, in the REID index system, a commonly used tool for colposcopic grading of CIN levels by gynecologic oncologists [20], the final combined colposcopic index is based on the assessment of not only the color (e.g., an oyster-white dull color) but also features related to the image texture (e.g., a rolled tissue margin) or vascular patterns (e.g., mosaic or definite punctation) [44]. However, there is a lack of automatic CIN classification algorithms combining both color-based and texture-based features, with only a few exceptions [35, 36]. In this study, the energy and entropy of the gray-level co-occurrence matrix were included as features related to image texture. These features are known to reflect texture properties such as coarseness and complexity, which are directly related to the lesion margin and vascular patterns. As a result, a significant improvement in classification performance was observed when both color and texture features were used over individual features (Fig. 9).

Third, we introduced the fuzzy reasoning method to mimic expert experience and deal with uncertainty within the data. These characteristics suit the uncertainty in acetowhite region CIN scoring, which arises from the fuzziness of manual scoring caused by the different categories to be examined and the multiple levels within each category. For example, the fuzzy reasoning process can effectively mimic the human scoring process used in the REID index system [20], a frequently used tool in the clinical diagnosis of cervical cancers comprising four categories (color, lesion margin, vascular patterns and iodine patterns) with four levels in each category. Compared to typical supervised learning algorithms, the fuzzy reasoning process can serve as a form of "pre-training" that facilitates the classification process and reduces the amount of training data needed. This can be particularly useful in the many clinics where only a moderately sized dataset is available.

There are several limitations of our research, which provide directions for future work. First, we only implemented the classification between CIN negative and CIN positive cases. Though this classification boundary is important, it would be equally important to further distinguish cancer or HSIL cases from LSIL cases. Second, the current sample size is relatively small, limiting the total number of features we could use (because of the risk of overfitting) and the comparison to other existing studies that used different features. With a continuing effort to recruit more subjects, we aim to compare our results with existing studies, including some benchmark databases, using the same features once a much larger sample size has been reached. Similarly, more features will be implemented to further strengthen the proposed algorithm.

V. CONCLUSION
In this paper, we presented a novel fuzzy reasoning model using temporal grayscale and texture features of the acetowhite region during an acetic acid test for CIN classification. The results showed that image texture features provide useful information in adjunct to color-based features by effectively ruling out false positives. Compared to non-fuzzy-reasoning-based methods, the proposed fuzzy reasoning model showed a significantly increased sensitivity without losing specificity. The proposed model has the potential to serve as a useful screening tool in the prevention of cervical cancer in women.

REFERENCES
[1] L. Peng, X. Yuan, B. Jiang, Z. Tang, and G. C. Li, "LncRNAs: key players and novel insights into cervical cancer," Tumour Biology, vol. 37, no. 3, pp. 2779-2788, 2016.
[2] A. Jemal, F. Bray, M. M. Center, J. Ferlay, E. Ward, and D. Forman, "Global cancer statistics," CA: A Cancer Journal for Clinicians, vol. 61, no. 2, pp. 69-90, 2011.
[3] K. Ochs, G. Meili, J. Diebold, V. Arndt, and A. Günthert, "Incidence Trends of Cervical Cancer and Its Precancerous Lesions in Women of Central Switzerland from 2000 until 2014," Frontiers in Medicine, vol. 5, pp. 58, 2018.
[4] J. Ferlay, I. Soerjomataram, R. Dikshit, S. Eser, C. Mathers, M. Rebelo, D. M. Parkin, D. Forman, and F. Bray, "Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012," International Journal of Cancer, vol. 136, no. 5, pp. E359-E386, 2015.
[5] B. Baldur-Felskov, C. Munk, T. S. S. Nielsen, C. Dehlendorff, B. Kirschner, J. Junge, and S. K. Kjaer, "Trends in the incidence of cervical cancer and severe precancerous lesions in Denmark, 1997–2012," Cancer Causes & Control, vol. 26, no. 8, pp. 1105-1116, 2015.
[6] A. Jemal, M. M. Center, C. DeSantis, and E. M. Ward, "Global patterns of cancer incidence and mortality rates and trends," Cancer Epidemiology and Prevention Biomarkers, vol. 19, no. 8, pp. 1893-1907, 2010.
[7] W. Chen, R. Zheng, P. D. Baade, S. Zhang, H. Zeng, F. Bray, A. Jemal, X. Q. Yu, and J. He, "Cancer statistics in China, 2015," CA: A Cancer Journal for Clinicians, vol. 66, no. 2, pp. 115-132, 2016.
[8] L. A. Chocontá-Piraquive, N. Alvis-Guzman, and F. De la Hoz-Restrepo, "How protective is cervical cancer screening against cervical cancer mortality in developing countries? The Colombian case," BMC Health Services Research, vol. 10, no. 1, pp. 270, 2010.
[9] R. Landy, F. Pesola, A. Castañón, and P. Sasieni, "Impact of cervical screening on cervical cancer mortality: estimation using
stage-specific results from a nested case–control study," British Journal of Cancer, vol. 115, no. 9, pp. 1140, 2016.
[10] L. A. Torre, R. L. Siegel, E. M. Ward, and A. Jemal, "Global cancer incidence and mortality rates and trends—an update," Cancer Epidemiology and Prevention Biomarkers, vol. 25, no. 1, pp. 16-27, 2016.
[11] R. T. Grenko, C. S. Abendroth, E. E. Frauenhoffer, F. M. Ruggiero, and R. J. Zaino, "Variance in the Interpretation of Cervical Biopsy Specimens Obtained for Atypical Squamous Cells of Undetermined Significance," American Journal of Clinical Pathology, vol. 114, no. 5, pp. 735-740, 2000.
[12] B. A. Crothers, "Cytologic-histologic correlation: Where are we now, and where are we going?," Cancer Cytopathology, vol. 126, no. 5, pp. 301-308, 2018.
[13] K. A. A. Mohamad, A. S. Saad, A. W. A. Murad, and A. Altraigy, "Visual inspection after acetic acid (VIA) as an alternative screening tool for cancer cervix," Apollo Medicine, vol. 13, no. 4, pp. 204-207, 2016.
[14] P. Adsul, N. Manjunath, V. Srinivas, A. Arun, and P. Madhivanan, "Implementing community-based cervical cancer screening programs using visual inspection with acetic acid in India: A systematic review," Cancer Epidemiology, vol. 49, pp. 161-174, 2017.
[15] E. J. Suba, and S. S. Raab, "Cervical cancer screening by simple visual inspection after acetic acid," Obstetrics & Gynecology, vol. 98, no. 3, pp. 441-444, 2001.
[16] V. Kudva, and K. Prasad, "Pattern Classification of Images from Acetic Acid–Based Cervical Cancer Screening: A Review," Critical Reviews in Biomedical Engineering, vol. 46, no. 2, pp. 117-133, 2018.
[17] V. H. De, P. Claeys, S. Njiru, L. Muchiri, S. Steyaert, S. P. De, M. E. Van, J. Bwayo, and M. Temmerman, "Comparison of pap smear, visual inspection with acetic acid, human papillomavirus DNA-PCR testing and cervicography," International Journal of Gynaecology & Obstetrics, vol. 89, no. 2, pp. 120-126, 2005.
[18] A. O. Raifu, M. El-Zein, G. Sangwa-Lugoma, A. Ramanakumar, S. D. Walter, E. L. Franco, and C. S. S. Group, "Determinants of Cervical Cancer Screening Accuracy for Visual Inspection with Acetic Acid (VIA) and Lugol's Iodine (VILI) Performed by Nurse and Physician," PLoS One, vol. 12, no. 1, pp. e0170631, 2017.
[19] J. Jeronimo, P. Bansil, J. Lim, R. Peck, P. Paul, J. J. Amador, F. Mirembe, J. Byamugisha, U. R. Poli, and L. Satyanarayana, "A multicountry evaluation of careHPV testing, visual inspection with acetic acid, and papanicolaou testing for the detection of cervical cancer," International Journal of Gynecological Cancer, vol. 24, no. 3, pp. 576-585, 2014.
[20] R. Reid, and P. Scalzi, "Genital warts and cervical cancer: VII. An
[26] J. Kuruvilla, and K. Gunavathi, "Lung cancer classification using fuzzy logic for CT images," International Journal of Medical Engineering & Informatics, vol. 7, no. 3, pp. 233, 2015.
[27] H. Greenspan, S. Gordon, G. Zimmerman, S. Lotenberg, J. Jeronimo, S. Antani, and R. Long, "Automatic detection of anatomical landmarks in uterine cervix images," IEEE Transactions on Medical Imaging, vol. 28, no. 3, pp. 454-468, 2009.
[28] Z. Xue, S. Antani, L. R. Long, J. Jeronimo, and G. R. Thoma, "Comparative performance analysis of cervix ROI extraction and specular reflection removal algorithms for uterine cervix image analysis," Proceedings of SPIE, vol. 6512, no. 1, pp. 187-189, 2007.
[29] J. Liu, L. Li, and L. Wang, "Acetowhite region segmentation in uterine cervix images using a registered ratio image," Computers in Biology & Medicine, vol. 93, no. 2, pp. 47-55, 2018.
[30] I. Champion, C. Germain, J. P. D. Costa, A. Alborini, and P. Dubois-Fernandez, "Retrieval of Forest Stand Age From SAR Image Texture for Varying Distance and Orientation Values of the Gray Level Co-Occurrence Matrix," IEEE Geoscience & Remote Sensing Letters, vol. 11, no. 1, pp. 5-9, 2013.
[31] S. C. Tai, Z. S. Chen, and W. T. Tsai, "An Automatic Mass Detection System in Mammograms Based on Complex Texture Features," IEEE Journal of Biomedical & Health Informatics, vol. 18, no. 2, pp. 618-627, 2014.
[32] E. Sanchez, "Resolution of Eigen fuzzy sets equations," Fuzzy Sets & Systems, vol. 1, no. 1, pp. 69-74, 1978.
[33] S. Opricovic, and G.-H. Tzeng, "Defuzzification within a Multicriteria Decision Model," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 11, no. 5, pp. 635-652, 2003.
[34] N. Thekkek, and R. Richards-Kortum, "Optical Imaging for Cervical Cancer Detection: Solutions for a Continuing Global Problem," Nature Reviews Cancer, vol. 8, no. 9, pp. 725-731, 2008.
[35] W. Li, S. Venkataraman, U. Gustafsson, J. C. Oyama, D. G. Ferris, and R. W. Lieberman, "Using acetowhite opacity index for detecting cervical intraepithelial neoplasia," Journal of Biomedical Optics, vol. 14, no. 1, pp. 014020, 2009.
[36] T. Xu, H. Zhang, C. Xin, E. Kim, L. R. Long, Z. Xue, S. Antani, and X. Huang, "Multi-feature based Benchmark for Cervical Dysplasia Classification Evaluation," Pattern Recognition, vol. 63, pp. 468, 2017.
[37] A. Alush, H. Greenspan, and J. Goldberger, "Automated and Interactive Lesion Detection and Segmentation in Uterine Cervix Images," IEEE Transactions on Medical Imaging, vol. 29, no. 2, pp. 488-501, 2010.
[38] E. D. Dickman, T. J. Doll, C. K. Chiu, and D. G. Ferris, "Identification of cervical neoplasia using a simulation of human vision," Journal of Lower Genital Tract Disease, vol. 5, no. 3, pp.
improved colposcopic index for differentiating benign 144-152, 2001.
papillomaviral infections from high-grade cervical intraepithelial [39] D. Song, E. Kim, X. Huang, J. Patruno, H. Munozavila, J. Heflin, R.
neoplasia,” American Journal of Obstetrics & Gynecology, vol. 153, Long, and S. Antani, “Multi-modal Entity Coreference for Cervical
no. 6, pp. 611-618, 1985. Dysplasia Diagnosis,” IEEE Transactions on Medical Imaging, vol.
[21] V. Kudva, K. Prasad, and S. Guruvare, “Detection of Specular 34, no. 1, pp. 229-245, 2014.
Reflection and Segmentation of Cervix Region in Uterine Cervix [40] Y. P. Sun, D. Sargent, R. Lieberman, and U. Gustafsson, “Domain-
Images for Cervical Cancer Screening,” IRBM, vol. 38, no. 5, pp. Specific Image Analysis for Cervical Neoplasia Detection Based on
281-291, 2017. Conditional Random Fields,” IEEE Transactions on Medical
[22] K. Fernandes, J. S. Cardoso, and J. Fernandes, “Automated Methods Imaging, vol. 30, no. 3, pp. 867-78, 2011.
for the Decision Support of Cervical Cancer Screening using Digital [41] Y. Jusman, S. C. Ng, and N. A. Abu Osman, “Intelligent screening
Colposcopies,” IEEE Access, vol. 6, pp. 33910-33927, 2018. systems for cervical cancer,” TheScientificWorldJournal, vol. 2014,
[23] M. Nilashi, O. Ibrahim, H. Ahmadi, and L. Shahmoradi, “A no. 2, pp. 810368, 2014.
knowledge-based system for breast cancer classification using fuzzy [42] J. D. Garcí a-Arteaga, J. Kybic, and W. Li, “Automatic colposcopy
logic method,” Telematics & Informatics, vol. 34, no. 4, pp. 133- video tissue classification using higher order entropy-based image
144, 2017. registration,” Computers in Biology and Medicine, vol. 41, no. 10,
[24] G. H. Miranda, and J. C. Felipe, “Computer-aided diagnosis system pp. 960-970, 2011.
based on fuzzy logic for breast cancer categorization,” Comput Biol [43] V. Kudva, K. Prasad, and S. Guruvare, “Automation of Detection of
Med., vol. 64, no. 9, pp. 334-346, 2015. Cervical Cancer Using Convolutional Neural Networks,” Critical
[25] O. Obajemu, M. Mahfouf, and J. W. F. Catto, “A New Fuzzy ReviewsTM in Biomedical Engineering, vol. 46, no. 2, pp. 135-145,
Modelling Framework for Integrated Risk Prognosis and Therapy of 2018.
Bladder Cancer Patients,” IEEE Transactions on Fuzzy Systems, vol. [44] Q. Ji, J. Engel, and E. Craine, “Texture analysis for classification of
26, no. 3, pp. 1565-1577, 2017. cervix lesions,” IEEE Trans Med Imaging, vol. 19, no. 11, pp. 1144-
9, 2000.
2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.