

Application of Support Vector Machine (SVM) Classifier in Mammograms for the Detection of Structural Similarity

M. Sukanya¹ and S. Ragunathan²
¹AP, Adhiyamaan College of Engineering, Hosur
²Professor, AVS Engineering College, Salem

ABSTRACT- Breast cancer is one of the greatest threats prevailing among women. According to the National Cancer Institute, 22% of new cases occur every year, and it is considered to be the second most frequent type of cancer worldwide. A few years after initial diagnosis, the survival rate in women is less than 61%. The mammogram is a radiology tool which gives better accuracy than clinical breast examination: it not only identifies abnormalities but also identifies the normal breast. In the proposed method, threshold segmentation is used to partition the image in order to extract information from it, and features are then extracted using the Local Binary Pattern (LBP). Finally, Support Vector Machines (SVMs, also called support vector networks), supervised learning models with associated learning algorithms that analyze data and recognize patterns for classification and regression analysis, are applied. The SVM classifier classifies the image as malignant or benign. The maximum classification accuracy of this method is 88.8%.
Keywords- Mammograms, Bilateral Asymmetry, Local Binary Pattern, Support Vector Machines

INTRODUCTION:
A mammogram is a routine part of a breast cancer screening program. Breast palpation programs (physically checking for lumps) are generally agreed to be insufficient. Breast self-examination programs are also unreliable, as a lesion can develop for years before it becomes palpable. Of course, when a 'bump' or 'lump' of some kind has been found on a clinical exam by a family physician, the patient will immediately be referred for a mammogram.
Normally, the X-ray component of a mammogram is all that is required for breast cancer screening
purposes. An ultrasound is typically a ‘second look’ type of application. It is not a good idea to have an
ultrasound and not a mammogram, and it is probably best to follow the advice of the screening physicians.
Breast cancer is cancer that develops from breast tissue. Signs of breast cancer may include a lump in the breast,
a change in breast shape, dimpling of the skin, fluid coming from the nipple, or a red scaly patch of skin. In
those with distant spread of the disease, there may be bone pain, swollen lymph nodes, shortness of breath, or
yellow skin.
Risk factors for developing breast cancer include: female sex, obesity, lack of physical exercise,
drinking alcohol, hormone replacement therapy during menopause, ionizing radiation, early age at first
menstruation, having children late or not at all, older age, and family history. About 5–10% of cases are due to
genes inherited from a person's parents, including BRCA1 and BRCA2 among others. Breast cancer most
commonly develops in cells from the lining of milk ducts and the lobules that supply the ducts with milk.
Cancers developing from the ducts are known as ductal carcinomas, while those developing from lobules are
known as lobular carcinomas. In addition, there are more than 18 other sub-types of breast cancer. Some cancers
develop from pre-invasive lesions such as ductal carcinoma in situ. The diagnosis of breast cancer is confirmed
by taking a biopsy of the concerning lump. Once the diagnosis is made, further tests are done to determine if the
cancer has spread beyond the breast and which treatments it may respond to.
The medications tamoxifen or raloxifene may be used in an effort to prevent breast cancer in those
who are at high risk of developing it. Surgical removal of both breasts is another useful preventative measure in
some high risk women. In those who have been diagnosed with cancer, a number of treatments may be used,
including surgery, radiation therapy, chemotherapy, hormonal therapy and targeted therapy. Types of surgery
vary from breast-conserving surgery to mastectomy. Breast reconstruction may take place at the time of surgery
or at a later date. In those in whom the cancer has spread to other parts of the body, treatments are mostly aimed
at improving quality of life and comfort.
Outcomes for breast cancer vary depending on the cancer type, extent of disease, and person's age.
Survival rates in the developed world are high, with between 80% and 90% of those in England and the United
States alive for at least 5 years. In developing countries survival rates are poorer. Worldwide, breast cancer is the
leading type of cancer in women, accounting for 25% of all cases.
The sensitivity of mammography depends mainly on breast density and the radiation dose. The diagnosis of the structure of a mammogram depends mainly on the shape, texture, or distribution of the clusters. Miller et al. [1] proposed an automatic method for the detection of asymmetrical features using texture analysis. The accuracy of this method is affected by fat and non-fat tissue and by variation between left- and right-side breast features. The technique provides an accuracy of 80%; a technique that combines texture and brightness features is needed to improve the accuracy.
A directional analysis procedure with the KL transform for the detection of mammogram features is reported in [2]. However, the selection of the number of scales and the orientation angle depends on an optimality constraint. The separation of symmetrical and non-symmetrical mammograms is obtained using Gabor wavelets. The accuracy of this technique depends mainly on texture differences.
D. Tzikopoulos et al. [4] present a fully automated segmentation scheme based on breast density estimation and a breast boundary extraction algorithm. A support vector machine is employed for classification and provides an accuracy of 85%. An automatic segmentation of breast tissue density using a Genetic Algorithm and an Artificial Neural Network (ANN) was presented in [5]; it works well only for symmetric images.
The detection of asymmetry in the breast using landmark matching and Gabor filters was presented in [26]. Using angular symmetry, the images were classified as normal or abnormal. In this paper, an automatic segmentation method to determine bilateral asymmetry and classify mammograms is presented. The flow diagram of the proposed method is shown in Figure 1. A set of 100 images was selected from the digital mammographic database.
[Flow diagram: Mammographic Images → Mean Filter → Median Filter → Unsharp Mask Filter → Bilateral Filter → Micro-calcification Detected Images]

Figure 1: Flow Diagram of the Proposed Method


PREPROCESSING
Image preprocessing typically denotes a processing step transforming a source image into a new image which is fundamentally similar to the source image, but differs in certain aspects, e.g. improved contrast. (By contrast, an image-analysis operation may create an output with no visual similarity to the source image; instead it shows certain characteristics of the source image, for example the spectral properties derived by a Fourier transform.) According to the above definition, preprocessing results in changing the brightness of individual image pixels. The preprocessing functions used for mammogram images can be divided into two basic groups, depending on what the resulting brightness value of a pixel in the output image is derived from:
1. Pixel operations compute the brightness of a pixel in the output image exclusively from the brightness of the
corresponding pixel in the source image. This group also encompasses image arithmetic functions which
combine several images, because these functions only use a single pixel on a fixed position from each of the
source images. Pixel operations can be further divided into homogeneous and inhomogeneous pixel operations.
Homogeneous operations use the same transformation function for each pixel, for inhomogeneous ones the
transformation function depends on the location of the pixel in the image. Pixel operations are discussed in Gray
scale transformations and image arithmetic algorithms. Mean filtering is a simple image-arithmetic algorithm for smoothing images: it replaces each pixel with the average value of its window. The mean value of a 3×3 or 5×5 window can be computed as

MeanFilter(x_1, x_2, ..., x_N) = (1/N) Σ_{i=1}^{N} x_i   (1)

where N is the number of pixels in the window. The performance of a mean filter is very poor at preserving edges.
2. Local operations take a certain neighborhood of the current pixel into account when computing the brightness
of the corresponding output image pixel. An example is the mean value filter, which sets the brightness of the
output image pixel to the average brightness of a small neighborhood of the corresponding point in the source
image. Local operations are normally done with linear filters and Median filter.


MedianFilter(x_1, x_2, ..., x_N) = MEDIAN{x_1, x_2, ..., x_N}   (2)

For rough segmentation a mean filter is used, and a median filter is used for fine segmentation. The performance comparison of the two filters is shown in Table 1.

Table 1: MSE and PSNR comparison of different filters

Filtering Technique    Mean Square Error (MSE)    Peak Signal-to-Noise Ratio (PSNR)
Median Filter          13.25                      32.41
Mean Filter            26.97                      38.54
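The mean filter of Eq. (1), the median filter of Eq. (2), and the MSE/PSNR figures of merit used in Table 1 can be sketched in NumPy as follows (the paper's experiments were run in MATLAB; this Python sketch is only illustrative, and the edge-replication padding at the borders is our assumption):

```python
import numpy as np

def mean_filter(img, k=3):
    """Replace each pixel by the average of its k x k window (Eq. 1)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):                      # sum the k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def median_filter(img, k=3):
    """Replace each pixel by the median of its k x k window (Eq. 2)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)]
    return np.median(np.stack(stack), axis=0)

def mse_psnr(reference, filtered, peak=255.0):
    """Mean square error and peak signal-to-noise ratio (dB)."""
    mse = np.mean((reference.astype(float) - filtered) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr
```

As the text notes, the median filter removes an isolated impulse (a single bright pixel) completely, while the mean filter only spreads it out.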

IMAGE SEGMENTATION
Segmentation partitions an image into distinct regions containing each pixel with similar attributes. To
be meaningful and useful for image analysis and interpretation, the regions should strongly relate to depicted
objects or features of interest. Meaningful segmentation is the first step from low-level image processing
transforming a grayscale or colour image into one or more other images to high-level image description in terms
of features, objects, and scenes. The success of image analysis depends on reliability of segmentation, but an
accurate partitioning of an image is generally a very challenging problem.
Segmentation techniques are either contextual or non-contextual. The latter take no account of spatial
relationships between features in an image and group pixels together on the basis of some global attribute, e.g.
grey level or colour. Contextual techniques additionally exploit these relationships, e.g. group together pixels
with similar grey levels and close spatial locations. In computer vision, image segmentation is the process of
partitioning a digital image into multiple segments (sets of pixels, also known as super pixels). The goal of
segmentation is to simplify and/or change the representation of an image into something that is more meaningful
and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.)
in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image
such that pixels with the same label share certain characteristics.
The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are
significantly different with respect to the same characteristic(s). When applied to a stack of images, typical in
medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with
the help of interpolation algorithms like marching cubes.

Simple thresholding
The most common image property to threshold is the pixel grey level: g(x,y) = 0 if f(x,y) < T and g(x,y) = 1 if f(x,y) ≥ T, where T is the threshold. Using two thresholds, T1 < T2, a range of grey levels related to region 1 can be defined:

g(x,y) = 1 if T1 ≤ f(x,y) ≤ T2, and g(x,y) = 0 otherwise, i.e. if f(x,y) < T1 or f(x,y) > T2.

The main problems are whether it is possible and, if so, how to choose an adequate threshold or a number of thresholds to separate one or more desired objects from their background. In many practical cases simple thresholding is unable to segment the objects of interest.
A general approach to thresholding is based on assumption that images are multimodal, that is,
different objects of interest relate to distinct peaks of the 1D signal histogram. The thresholds have to optimally
separate these peaks in spite of typical overlaps between the signal ranges corresponding to individual peaks. A
threshold in the valley between two overlapping peaks separates their main bodies but inevitably detects or
rejects falsely some pixels with intermediate signals. The optimal threshold that minimizes the expected numbers of false detections and rejections may not coincide with the lowest point in the valley between two overlapping peaks.
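The single- and two-threshold rules above translate directly into array comparisons; a minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def threshold(f, T):
    """Simple thresholding: g(x,y) = 1 where f(x,y) >= T, else 0."""
    return (f >= T).astype(np.uint8)

def band_threshold(f, T1, T2):
    """Two-threshold rule: g(x,y) = 1 where T1 <= f(x,y) <= T2, else 0."""
    return ((f >= T1) & (f <= T2)).astype(np.uint8)
```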

Adaptive thresholding
Since the threshold separates the background from the object, the adaptive separation may take account
of empirical probability distributions of object (e.g. dark) and background (bright) pixels. Such a threshold has
to equalize two kinds of expected errors: of assigning a background pixel to the object and of assigning an
object pixel to the background. More complex adaptive thresholding techniques use a spatially varying threshold
to compensate for local spatial context effects.
A simple iterative adaptation of the threshold is based on successive refinement of the estimated peak positions. It assumes that (i) each peak coincides with the mean grey level of all pixels that relate to that peak, and (ii) the pixel probability decreases monotonically with the absolute difference between the pixel and peak values, for both the object and the background peak. The classification of object and background pixels is done at each iteration j by using the threshold T_j found at the previous iteration.
Thus, at iteration j, each grey level f(x,y) is assigned to the background or object class (region) if f(x,y) ≤ T_j or f(x,y) > T_j, respectively. Then the new threshold is T_{j+1} = 0.5(μ_{j,ob} + μ_{j,bg}), where μ_{j,ob} and μ_{j,bg} denote the mean grey levels at iteration j for the found object and background pixels, respectively.
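The iterative refinement T_{j+1} = 0.5(μ_{j,ob} + μ_{j,bg}) can be sketched as follows (the initial threshold and the stopping tolerance are our assumptions, since the text does not fix them):

```python
import numpy as np

def iterative_threshold(f, T0=None, tol=0.5, max_iter=100):
    """Refine T_{j+1} = 0.5 * (mu_obj + mu_bg) until the threshold stabilizes.

    Object pixels are those with f > T, background those with f <= T.
    """
    f = f.astype(float)
    T = f.mean() if T0 is None else float(T0)   # assumed starting point
    for _ in range(max_iter):
        obj = f[f > T]
        bg = f[f <= T]
        if obj.size == 0 or bg.size == 0:       # degenerate split: stop
            break
        T_new = 0.5 * (obj.mean() + bg.mean())
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T
```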

Contextual segmentation: Region growing


Non-contextual thresholding groups pixels with no account of their relative locations in the image
plane. Contextual segmentation can be more successful in separating individual objects because it accounts for
closeness of pixels that belong to an individual object. Two basic approaches to contextual segmentation are
based on signal discontinuity or similarity. Discontinuity-based techniques attempt to find complete boundaries
enclosing relatively uniform regions assuming abrupt signal changes across each boundary. Similarity-based
techniques attempt to directly create these uniform regions by grouping together connected pixels that satisfy
certain similarity criteria. Both the approaches mirror each other, in the sense that a complete boundary splits
one region into two.
Pixel connectivity is defined in terms of a pixel neighbourhood, which is shown in Figure 2. A normal rectangular
sampling pattern producing a finite arithmetic lattice {(x,y): x = 0, 1, ..., X−1; y = 0, 1, ..., Y−1} supporting
digital images allows us to define two types of neighborhoods surrounding a pixel. A 4-neighbourhood {(x−1,y),
(x,y+1), (x+1,y), (x,y−1)} contains only the pixels above, below, to the left and to the right of the central pixel
(x,y). An 8-neighbourhood adds to the 4-neighbourhood four diagonal neighbours: {(x−1,y−1),(x−1,y),
(x−1,y+1), (x,y+1), (x+1,y+1), (x+1,y), (x+1,y−1), (x,y−1)}.
Figure 2: Pixel Connectivity
A 4-connected path from a pixel p1 to another pixel pn is defined as the sequence of pixels {p1, p2, ..., pn} such
that pi+1 is a 4-neighbour of pi for all i = 1, ..., n−1. The path is 8-connected if pi+1 is an 8-neighbour of pi. A
set of pixels is a 4-connected region if there exists at least one 4-connected path between any pair of pixels from
that set. The 8-connected region has at least one 8-connected path between any pair of pixels from that set.
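A similarity-based contextual technique built on these neighbourhoods is region growing; a minimal sketch using breadth-first search over the 4- or 8-neighbourhood (the seed-relative similarity criterion used here is one common choice, not necessarily the paper's):

```python
from collections import deque
import numpy as np

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]             # 4-neighbourhood offsets
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]      # 8-neighbourhood offsets

def region_grow(f, seed, delta, neighbours=N4):
    """Grow a connected region from `seed`, accepting pixels whose grey
    level differs from the seed value by at most `delta`."""
    h, w = f.shape
    seed_val = float(f[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in neighbours:
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(float(f[ny, nx]) - seed_val) <= delta):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```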
Region similarity
The uniformity or non-uniformity of pixels forming a connected region is represented by a uniformity predicate, i.e. a logical statement, or condition, that is true if the pixels in the region are similar with respect to some property. A predicate that compares only neighbouring pixels does not restrict the grey-level variation within a region, because small changes in signal values can accumulate over the region. Intra-region signal variations can instead be restricted with a predicate of the form: P(R) = TRUE if |f(x,y) − μ_R| ≤ Δ and FALSE otherwise, where (x,y) is a pixel from the region R and μ_R is the mean value of the signal f(x,y) over the entire region R.
In the case of global thresholding, the chosen threshold value remains the same for the entire image and acts as a 'cutoff' value. In the case of local thresholding, the image is subdivided into subimages and a threshold is chosen depending on the properties of the local pixels in each subimage.
The threshold value can be modified, and the variants are categorized as band thresholding, multi-thresholding, and semi-thresholding. Either global or local thresholding yields a result depending on the threshold value chosen; hence the choice of threshold is crucial and complicated. Several methods are employed for the detection of the threshold value, to name a few: the mean method, p-tile thresholding, the bimodal histogram, optimal thresholding, multispectral thresholding, and the edge maximization method. Among the available techniques for threshold-based segmentation, the histogram-based threshold selection suggested by Nobuyuki Otsu in 1979 is the most used, with minor modifications. Otsu's method is optimal for thresholding large objects from the background: it provides an optimal threshold selected by the discriminant criterion, maximizing the discriminant measure η. Hence Otsu's method is chosen for thresholding.
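Otsu's method picks the grey level that maximizes the between-class variance (the discriminant measure) of the resulting two-class split; a compact NumPy sketch, assuming 8-bit grey levels:

```python
import numpy as np

def otsu_threshold(f, levels=256):
    """Return the grey level maximizing the between-class variance
    sigma_b^2(k) = (mu_T * omega(k) - mu(k))^2 / (omega(k) * (1 - omega(k)))."""
    hist, _ = np.histogram(f.ravel(), bins=levels, range=(0, levels))
    p = hist / hist.sum()                         # grey-level probabilities
    omega = np.cumsum(p)                          # class-0 probability
    mu = np.cumsum(p * np.arange(levels))         # class-0 cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)            # degenerate splits -> 0
    return int(np.argmax(sigma_b2))
```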
Local Binary Pattern
A local binary pattern (LBP) is a type of feature used for classification in computer vision, mainly for texture classification. The LBP feature vector, in its simplest form, is created by dividing the image into cells; a 3×3 window is normally preferred, so for each pixel its 8 neighbours are considered. For a centre pixel (x_c, y_c) the LBP can be given in decimal form as

LBP(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p   (3)

where g_c is the grey value of the centre pixel, g_p are the grey values of its P neighbours, and s(z) = 1 if z ≥ 0 and 0 otherwise.

A normalized histogram of the LBP codes is computed over each cell. The output of the LBP is the feature vector, which can then be processed by the support vector machine classifier.
A useful extension to the original operator is the so-called uniform pattern [8], which can be used to
reduce the length of the feature vector and implement a simple rotation invariant descriptor. This idea is
motivated by the fact that some binary patterns occur more commonly in texture images than others. A local
binary pattern is called uniform if the binary pattern contains at most two 0-1 or 1-0 transitions. For example, 00010000 (2 transitions) is a uniform pattern, while 01010100 (6 transitions) is not. In the computation of the LBP histogram, the histogram has a separate bin for every uniform pattern, and all non-uniform patterns are assigned to a single bin. Using uniform patterns, the length of the feature vector for a 3×3 window reduces from 256 to 59.
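Equation (3) and the 59-bin uniform-pattern histogram can be sketched as follows (the clockwise neighbour ordering is one common convention; the paper does not specify it):

```python
import numpy as np

# 8-neighbour offsets, ordered clockwise from the top-left corner (assumed)
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(window):
    """LBP code of the centre pixel of a 3x3 window (Eq. 3):
    sum over p of s(g_p - g_c) * 2**p, with s(z) = 1 if z >= 0."""
    gc = window[1, 1]
    code = 0
    for p, (dy, dx) in enumerate(OFFSETS):
        if window[1 + dy, 1 + dx] >= gc:
            code |= 1 << p
    return code

def is_uniform(code):
    """Uniform pattern: at most two 0-1 or 1-0 transitions in the
    circular 8-bit string."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

def lbp_histogram(img):
    """59-bin uniform-LBP histogram: one bin per uniform pattern (58 of
    them for 8 bits) plus one shared bin for all non-uniform patterns."""
    uniform_codes = sorted(c for c in range(256) if is_uniform(c))
    index = {c: i for i, c in enumerate(uniform_codes)}
    hist = np.zeros(len(uniform_codes) + 1)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            c = lbp_code(img[y - 1:y + 2, x - 1:x + 2])
            hist[index.get(c, len(uniform_codes))] += 1
    return hist / max(hist.sum(), 1)              # normalized histogram
```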

SVM Classifiers
In machine learning, support vector machines [1] are supervised learning models with associated
learning algorithms that analyze data and recognize patterns, used for classification and regression analysis.
Given a set of training examples, each marked for belonging to one of two categories, an SVM training
algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic
binary linear classifier.
An SVM model is a representation of the examples as points in space, mapped so that the examples of
the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped
into that same space and predicted to belong to a category based on which side of the gap they fall on. In
addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what
is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
More formally, a support vector machine constructs a hyper-plane or set of hyper-planes in a high- or
infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good
separation is achieved by the hyper-plane that has the largest distance to the nearest training-data point of any
class (so-called functional margin), since in general the larger the margin the lower the generalization error of
the classifier. Whereas the original problem may be stated in a finite dimensional space, it often happens that the
sets to discriminate are not linearly separable in that space. For this reason, it was proposed that the original
finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation
easier in that space.
To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure
that dot products may be computed easily in terms of the variables in the original space, by defining them in
terms of a kernel function k(x,y) selected to suit the problem. The hyper planes in the higher-dimensional space
are defined as the set of points whose dot product with a vector in that space is constant. The vectors defining the hyperplanes can be chosen to be linear combinations, with parameters α_i, of images of feature vectors x_i that occur in the data base. With this choice of hyperplane, the points x in the feature space that are mapped into the hyperplane are defined by the relation

Σ_i α_i k(x_i, x) = constant
In this way, the sum of kernels above can be used to measure the relative nearness of each test point to
the data points originating in one or the other of the sets to be discriminated. Note the fact that the set of points x
mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination
between sets which are not convex at all in the original space.
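The decision rule Σ_i α_i k(x_i, x) can be illustrated with a toy RBF kernel (the support vectors, coefficients α_i, and kernel width below are invented for illustration; they are not the model trained in the paper):

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Gaussian (RBF) kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def decision(x, support_vectors, alphas, bias=0.0, gamma=0.5):
    """Kernel decision function f(x) = sum_i alpha_i * k(x_i, x) + bias;
    the sign of f(x) assigns x to one of the two classes."""
    return sum(a * rbf_kernel(sv, x, gamma)
               for a, sv in zip(alphas, support_vectors)) + bias

# Toy support set: the signs of the alphas carry the class labels
svs = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
alphas = [1.0, -1.0]
```

A test point near the first support vector gets a positive score and one near the second a negative score, so the kernel sum measures the relative nearness of the point to each class, as described above.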

Figure 3 (a) Input image (b) Filtered Image (c)Threshold Binarization


Figure 3(a) shows an input image, and the filtered and thresholded images are shown in Figures 3(b) and 3(c). The filtered images are clustered and classified using the SVM classifier, and the output shows that it is a normal mammogram image.

Figure 4: Mammogram output after Clustering

Figure 5(a) Input Image (b) Filtered Image (c) Threshold Binarization (d) Mammogram output after Clustering

The filtered and thresholded images are shown in Figures 5(b) and 5(c). The filtered images are clustered and classified using the SVM classifier, and the output shows that the mammogram is mildly affected by cancer cells.

Figure 6(a) Filtered Image (b) Threshold Binarization

(c) Mammogram output after Clustering


The filtered and thresholded images are shown in Figures 6(b) and 6(c). The filtered images are clustered and classified using the SVM classifier, and the output shows that the mammogram is severely affected by cancer cells.
Table 2 shows the difference in completeness (%) and correctness (%) of mammogram images using histogram-based equalization and LBP. The completeness and correctness of an image can be calculated as

Completeness = (Reference Image / Feature-Extracted Image) × 100

Correctness = (Correctly Extracted Image / Extracted Image) × 100
Table 2: Completeness and Correctness Comparison

                               Completeness (%)    Correctness (%)
Histogram Based Equalization   93.4                91.4
LBP                            96                  95
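The completeness and correctness ratios above are stated loosely in the text; one common reading, treating them as pixel-wise true-positive ratios between a reference mask and the extracted mask (our assumption, not a definition from the paper), is:

```python
import numpy as np

def completeness(reference, extracted):
    """Percentage of reference pixels recovered by the extraction
    (true positives / reference pixels)."""
    ref = reference.astype(bool)
    ext = extracted.astype(bool)
    return 100.0 * np.logical_and(ref, ext).sum() / ref.sum()

def correctness(reference, extracted):
    """Percentage of extracted pixels that are actually correct
    (true positives / extracted pixels)."""
    ref = reference.astype(bool)
    ext = extracted.astype(bool)
    return 100.0 * np.logical_and(ref, ext).sum() / ext.sum()
```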

A listing of classification methods close to the current study, together with an accuracy comparison, is given in Figure 7. The SVM classifier combined with Local Binary Pattern (LBP) features provides good accuracy compared with the other methods.

Figure 7: Accuracy comparison with related methods

Authors                 Method used                                                  Accuracy (%)
Petrosian et al. [6]    Spatial gray-level dependence and textural features          76-89
                        with a decision-tree classifier
Wei et al. [7]          Statistical features in a multiple-view mammogram            85
                        with SVM and KFD
Mudigonda et al. [8]    Gray-level co-occurrence matrices, polygonal modeling        83
                        with jack-knife classification
Alolfe et al. [9]       Forward stepwise linear regression method with a             82.5-90
                        combined classifier of SVM and LDA
Present study           LBP features and SVM                                         88.80

Conclusion:
In this paper, Local Binary Pattern (LBP) features and a Support Vector Machine (SVM) classifier are used to determine the affected stage of a mammogram: the clustering process gathers the features, which are then analysed by the SVM model. The simulation results from MATLAB also show good accuracy. This categorized detection approach can be implemented in the medical field to detect how the breast is affected by cancer. However, this model needs a larger number of database images for training in order to achieve good accuracy.

REFERENCES
[1] P. Miller and S. Astley, "Automated detection of mammographic asymmetry using anatomical features," Int. J. Pattern Recogn. Artif. Intell., vol. 7, no. 6, pp. 1461–1476, 1993.
[2] R. J. Ferrari, R. M. Rangayyan, J. E. L. Desautels, and A. F. Frère, "Analysis of asymmetry in mammograms via directional filtering with Gabor wavelets," IEEE Trans. Med. Imag., vol. 20, no. 9, pp. 953–964, Sep. 2001.
[3] R. M. Rangayyan, R. J. Ferrari, and A. F. Frère, "Analysis of bilateral asymmetry in mammograms using directional, morphological, and density features," J. Electron. Imag., vol. 16, no. 1, p. 12, 2007.
[4] S. D. Tzikopoulos, M. E. Mavroforakis, H. V. Georgiou, N. Dimitropoulos, and S. Theodoridis, "A fully automated scheme for mammographic segmentation and classification based on breast density and asymmetry," Comput. Meth. Prog. Bio., vol. 102, no. 1, pp. 47–63, 2011.
[5] X. Wang, D. Lederman, J. Tan, X. H. Wang, and B. Zheng, "Computerized detection of breast tissue asymmetry depicted on bilateral mammograms: A preliminary study of breast risk stratification," Acad. Radiol., vol. 17, no. 10, pp. 1234–1241, 2010.
[6] A. Petrosian, H. P. Chan, M. A. Helvie, M. M. Goodsitt, and D. D. Adler, "Computer-aided diagnosis in mammography: Classification of mass and normal tissue by texture analysis," Phys. Med. Biol., vol. 39, pp. 2273–2288, 1994.
[7] L. Wei, Y. Yang, R. M. Nishikawa, and Y. Jiang, "A study on several machine-learning methods for classification of malignant and benign clustered microcalcifications," IEEE Trans. Med. Imag., vol. 24, no. 3, pp. 371–380, 2005.
[8] N. R. Mudigonda, R. M. Rangayyan, and J. E. L. Desautels, "Gradient and texture analysis for the classification of mammographic masses," IEEE Trans. Med. Imag., vol. 19, no. 10, pp. 1032–1043, 2000.
[9] M. A. Alolfe, W. A. Mohamed, A. B. M. Youssef, A. S. Mohamed, and Y. M. Kadah, "Computer aided diagnosis in digital mammography using support vector machine and linear discriminant analysis classification," in Proc. 16th IEEE Int. Conf. Image Processing (ICIP), 2009, pp. 2609–2612.
[10] X. Wang, D. Lederman, J. Tan, X. H. Wang, and B. Zheng, "Computerized prediction of risk for developing breast cancer based on bilateral mammographic breast tissue asymmetry," Med. Eng. Phys., vol. 33, no. 8, pp. 934–942, 2011.
[11] J. Suckling, J. Parker, D. R. Dance, S. Astley, I. Hutt, C. R. M. Boggis, I. Ricketts, E. Stamakis, N. Cerneaz, S. L. Kok, P. Taylor, D. Betal, and J. Savage, The Mammographic Image Analysis Society Digital Mammogram Database. Bridgewater, NJ: Excerpta Medica, 1994, vol. 1069, Int. Congr., pp. 242–248.
[12] M. Tan, B. Zheng, P. Ramalingam, and D. Gur, "Prediction of near-term breast cancer risk based on bilateral mammographic feature asymmetry," Acad. Radiol., vol. 20, no. 12, pp. 1542–1550, 2013.
[13] R. J. Ferrari, R. M. Rangayyan, J. E. L. Desautels, R. A. Borges, and A. F. Frère, "Automatic identification of the pectoral muscle in mammograms," IEEE Trans. Med. Imag., vol. 23, no. 2, pp. 232–245, Feb. 2004.
[14] P. Casti, A. Mencattini, M. Salmeri, A. Ancona, F. F. Mangieri, M. L. Pepe, and R. M. Rangayyan, "Automatic detection of the nipple in screen-film and full-field digital mammograms using a novel Hessian-based method," J. Digit. Imag., vol. 26, no. 5, pp. 948–957, 2013.
[15] P. Casti, A. Mencattini, M. Salmeri, A. Ancona, F. F. Mangieri, M. L. Pepe, and R. M. Rangayyan, "Estimation of the breast skin-line in mammograms using multidirectional Gabor filters," Comput. Biol. Med., vol. 43, no. 11, pp. 1870–1881, 2013.
[16] L. Tabár, T. Tot, and P. B. Dean, Breast Cancer, The Art and Science of Early Detection with Mammography: Perception, Interpretation, Histopathologic Correlation. New York: George Thieme Verlag, 2005.
[17] H. Zhang, P. Heffernan, and L. Tabár, "User interface and viewing workflow for mammography workstation," U.S. Patent 2009/0185732 A1, 2009.
[18] P. Casti, A. Mencattini, and M. Salmeri, "Characterization of the breast region for computer assisted Tabár masking of paired mammographic images," in Proc. 25th Int. Symp. IEEE Comput.-Based Med. Syst., 2012, pp. 1–6.
[19] M. A. Oliver, "Determining the spatial scale of variation in environmental properties using the variogram," in Modelling Scale in Geographical Information Science, N. Tate and P. M. Atkinson, Eds. New York: Wiley, 2001, pp. 193–219.
[20] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[21] M. P. Sampat, Z. Wang, S. Gupta, A. C. Bovik, and M. K. Markey, "Complex wavelet structural similarity: A new image similarity index," IEEE Trans. Image Process., vol. 18, no. 11, pp. 2385–2401, Nov. 2009.
[22] J. Portilla and E. P. Simoncelli, "A parametric texture model based on joint statistics of complex wavelet coefficients," Int. J. Comput. Vis., vol. 40, no. 1, pp. 49–71, 2000.
[23] F. J. Ayres and R. M. Rangayyan, "Design and performance analysis of oriented feature detectors," J. Electron. Imag., vol. 16, no. 2, p. 12, Apr. 2007.
[24] S. Banik, R. M. Rangayyan, and J. E. L. Desautels, "Detection of architectural distortion in prior mammograms," IEEE Trans. Med. Imag., vol. 30, no. 2, pp. 279–294, Feb. 2011.
[25] M. Heath, K. Bowyer, D. Kopans, R. Moore, and W. P. Kegelmeyer, "The digital database for screening mammography," in Proc. 5th Int. Workshop Digital Mammogr., 2001, pp. 212–218.
[26] P. Casti, A. Mencattini, M. Salmeri, A. Ancona, F. Mangieri, and R. M. Rangayyan, "Masking procedures and measures of angular similarity for detection of bilateral asymmetry in mammograms," in Proc. IEEE E-Health and Bioeng. Conf., Iasi, Romania, Nov. 21–23, 2013.
