
Shading Correction in Human Skin Color Images

Pablo G. Cavalcanti and Jacob Scharcanski


Instituto de Informatica
UFRGS, Brazil

Introduction

In the context of computer vision, there are several applications that require human skin image processing and
analysis. However, the color of human skin structures can be distorted by shading effects. The occurrence of
shading effects depends on the color of the object and the lighting conditions. Nevertheless, other factors also can
influence the surface color captured by a camera, like the surface roughness, the relative position of the reflective
skin area with respect to the light sources and the camera (Shapiro & Stockman, 2001). Specifically, human
skin images are affected by these factors and the analysis of these images can become even more difficult if the
illumination is uneven, unless it is correctly modeled and corrected, as we discuss next.
Man-machine interaction is an important human skin color imaging application. Often, color images
containing human skin are used in head pose estimation or in face recognition systems, and shading effects
may distort some important facial features in the captured images (e.g., eyes, nose, head geometry). Several
efforts have been directed towards the development of hand gesture recognition systems. In hand gesture
recognition applications, it is often not feasible to control the illumination during image acquisition due to
the complex hand motion, and shading may occlude parts of the hand and make the scene analysis more difficult.
In these cases, an automatic preprocessing step that mitigates the impact of the shading effects may help increase
the efficiency of these systems, as we will illustrate later.
Nowadays, computer vision is an important ally in medicine. For example, computer-aided systems have been developed in recent years to enable remote surgeries or to help physicians in image-based
diagnosis. An area that has been receiving much attention is dermatology, probably because it is the most visual
specialty in medicine. In teledermatology, a color image of a skin lesion acquired with a standard camera
is often transmitted to a specialist (or to a pre-screening system) without special attention to the illumination
conditions (Whited, 2006)(Massone et al., 2008). Nevertheless, the illumination may significantly influence the
quality of the lesion visualization, impact the physician's diagnosis, and limit the efficiency
of the pre-screening system. If the illumination is inadequate, the diagnosis accuracy for pigmented skin lesions generally is low. These lesions usually are darker than healthy skin, and automatic approaches
to segment such lesions tend to confuse shading areas with lesion areas. As a consequence, the early detection
of malignant cases is more difficult without removing shading effects from the input images. Considering that
melanoma is the most dangerous type of pigmented skin lesion, and that this disease accounts for about 132,000 cases
globally each year (World Health Organization, 2011), any contribution to improve the quality of these images
can be an important step to increase the efficiency of teledermatology and pre-screening systems, and to help
detect cases in their early stages.

In the following sections, we discuss the automatic shading effect attenuation in human skin images. In
Section 2, we describe a method for skin image shading attenuation, and discuss the experimental results in
Section 3. We emphasize the benefits of shading attenuation for the color image analysis of face, hand gesture
and pigmented skin lesions images. We compare this methodology with other techniques available in the literature
in Section 4. Finally, in Section 5 we present our conclusions.

Shading Attenuation in Human Skin Color Images

The approach for shading attenuation discussed in this section improves on the method proposed by Soille (Soille,
1999). The method in (Soille, 1999) corrects uneven illumination in monochromatic images with a simple operation:

R(x, y) = I(x, y) / M(x, y),

(1)

where R is the resultant image, I is the original image, M = I • s is the morphological closing of I by the
structuring element s, and (x, y) represents a pixel in these images. The main idea behind Soille's method is to use
the closing operator to estimate the local illumination, and then correct the illumination variation by normalizing
the original image I by the local illumination estimate M. The division in Eq. 1 relights unevenly illuminated
areas without affecting the original image characteristics. Unfortunately, it is often difficult to determine an
efficient structuring element for a given image, especially for human skin images, which have many distinct
features, such as hair, freckles, and facial structures. As a result, the method tends to produce unsatisfactory results for this
type of image, as can be seen in Figs. 1(b)-(c).
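The operation of Eq. 1 can be sketched as follows. This is a minimal illustration, not the exact implementation evaluated in the chapter: it assumes a grayscale image normalized to [0, 1], and it uses SciPy's `grey_closing` with a square structuring element as an approximation of the disk used in Fig. 1(b).

```python
# Sketch of the Soille-style correction of Eq. 1: divide the image by its
# morphological closing, which acts as a local illumination estimate.
import numpy as np
from scipy.ndimage import grey_closing

def soille_correction(I, size=31):
    """I: grayscale array in [0, 1] -> relit image R of Eq. 1."""
    M = grey_closing(I, size=(size, size))  # local illumination estimate M
    return I / np.maximum(M, 1e-6)          # guard against division by zero

# Example on a synthetic gradient-shaded image:
I = np.fromfunction(lambda y, x: 0.4 + 0.5 * x / 99, (100, 100))
R = soille_correction(I)
```

Because the closing is extensive (M >= I pointwise), the relit image R stays in [0, 1]; how well it flattens the shading depends entirely on the structuring element size, which is the weakness discussed above.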


Figure 1: Shading attenuation in a pigmented skin lesion image : (a) Input image; (b) Morphological
closing of Value channel by a disk (radius = 30 pixels); (c) Unsatisfactory shading attenuation after
replacing the Value channel by R(x, y), as suggested by Soille (Soille, 1999); (d) Local illumination
based on the obtained quadric function; (e) 3D plot of the obtained quadric function; (f) Shading
attenuation by using our approach.

The presented method modifies the Soille approach by providing a better local illumination estimate M.
To obtain this estimate, it starts by converting the input image from the original RGB
color space to the HSV color space, and then retains the Value channel V. This channel presents higher
visibility of the shading effects, as observed originally by Soille.
This approach is inspired by the computation of shape from shading (Shapiro & Stockman, 2001). The
human body is assumed to be composed of curved surfaces (e.g. arms, back, faces), and, as in human vision, a
digital image of a curved surface darkens smoothly as the surface turns away from the view
direction. However, instead of using this illumination variation to model the surface shape, this information is
used to relight the image itself.
Let S be a set of known skin pixels (more details in Section 3). This pixel set is used to fit the following
quadric function z(x, y):

z(x, y) = P1 x^2 + P2 y^2 + P3 xy + P4 x + P5 y + P6,

(2)

where the six quadric function parameters Pi (i = 1, ..., 6) are chosen to minimize the error ε:

ε = Σ_{j=1}^{N} [V(Sj,x, Sj,y) − z(Sj,x, Sj,y)]^2,

(3)

where N is the number of pixels in the set S, and Sj,x and Sj,y are the x and y coordinates of the jth element of the
set S, respectively.
Calculating the quadric function z(x, y) for each image spatial location (x, y), an estimate of the local
illumination intensity in the image V is generated. Replacing M(x, y) by z(x, y), and I(x, y) by V(x, y) in Eq. 1,
we obtain the image R(x, y) normalized with respect to the local illumination estimate z(x, y). The final step is to
replace the original Value channel by this new Value channel, and convert the image from the HSV color space
back to the original RGB color space. As a consequence of this image relighting, the shading effects are significantly
attenuated in the color image. Figs. 1(d)-(f) illustrate the results obtained with this shading attenuation method.
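The quadric fit of Eqs. 2-3 is an ordinary linear least-squares problem, since z(x, y) is linear in the parameters Pi. A minimal sketch, under our own naming conventions (V is the Value channel in [0, 1]; S is an (N, 2) array of (x, y) skin-pixel coordinates):

```python
# Fit z(x, y) = P1*x^2 + P2*y^2 + P3*xy + P4*x + P5*y + P6 to the Value
# channel at the known skin pixels (Eqs. 2-3), then relight as in Eq. 1.
import numpy as np

def fit_quadric(V, S):
    x, y = S[:, 0].astype(float), S[:, 1].astype(float)
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    P, *_ = np.linalg.lstsq(A, V[S[:, 1], S[:, 0]], rcond=None)
    return P

def relight(V, P):
    ys, xs = np.mgrid[0:V.shape[0], 0:V.shape[1]].astype(float)
    z = P[0]*xs**2 + P[1]*ys**2 + P[2]*xs*ys + P[3]*xs + P[4]*ys + P[5]
    return V / np.maximum(z, 1e-6)   # Eq. 1 with M(x, y) replaced by z(x, y)

# Synthetic check: a purely quadric illumination field is flattened to ~1.
ys, xs = np.mgrid[0:64, 0:64].astype(float)
V = 0.9 - 0.0001 * (xs - 32)**2 - 0.0001 * (ys - 32)**2
rng = np.random.default_rng(0)
S = np.column_stack([rng.integers(0, 64, 200), rng.integers(0, 64, 200)])
R = relight(V, fit_quadric(V, S))
```

On real images the fit is only approximate, but since the six-parameter surface varies slowly it captures the global shading while leaving skin texture and lesions intact.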

Shading Attenuation: Experimental Results and Discussion

As mentioned in Section 2, the shading attenuation approach discussed in this chapter is initialized by a set of
pixels S known to be associated with healthy skin areas. In this section, we discuss how to select this set of
pixels S in three typical applications of human skin color image analysis (i.e. image segmentation), namely, the
segmentation of faces, hands and pigmented skin lesions in color images. Our goal is to show that our shading
attenuation approach helps in the image analysis in these applications, making the processing steps simpler.
3.1 Face Segmentation in Color Images

Face segmentation is a common need in man-machine interaction applications. In fact, face segmentation can
be used in many applications, like face recognition, speech understanding, head pose/motion detection (like
head shaking, eye sight direction, etc.), or active speaker detection. Usually such applications require accurate
face segmentation for an adequate system usability. Also, the detection of face expression involves extracting
sensitive features from facial landmarks such as the regions surrounding the mouth, nose, and eyes of a normalized
image (Mitra & Acharya, 2007). So, faces usually are localized and segmented as the initial step to approach any
of the above mentioned applications, but that may not be an easy task under uneven illumination conditions.
In addition to the shading effects, in this case we must be aware that a face may be located at virtually
any image location. In this discussion, face segmentation will use color information only, i.e. we will search for

image locations with colors similar to human skin tones. According to previous research (Vassili et al., 2003), a
pixel can be associated with a skin region if:

R > 95 ∧ G > 40 ∧ B > 20 ∧ max(R, G, B) − min(R, G, B) > 15 ∧ |R − G| > 15 ∧ R > G ∧ R > B,

(4)

where ∧ denotes the logical AND operator. However, as can be seen in Fig. 2, this method is not able to identify all
the skin pixels correctly. Although this criterion for determining skin-colored pixels is used often (Vassili
et al., 2003), it can be very imprecise in practical situations, especially where there is image shading. On the other
hand, this method can locate some skin regions in images with reasonable accuracy, and these pixels can be used as
the set S of known skin pixels. We adopt this method to find the set S, since all we need is a set of adjacent image
pixels with skin color (i.e. likely to be located in skin regions) to initialize our error minimization (see
Eqs. 2 and 3), and erroneously located pixels should not influence the final result significantly.
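The rule of Eq. 4 maps directly onto NumPy boolean masks. A hedged sketch (the function name is ours):

```python
# Per-pixel RGB skin rule of Eq. 4, vectorized with boolean masks.
import numpy as np

def skin_mask_rgb(img):
    """img: (H, W, 3) uint8 RGB array -> boolean skin mask (Eq. 4)."""
    R = img[..., 0].astype(int)
    G = img[..., 1].astype(int)
    B = img[..., 2].astype(int)
    return ((R > 95) & (G > 40) & (B > 20)
            & (img.max(axis=-1).astype(int) - img.min(axis=-1) > 15)
            & (np.abs(R - G) > 15) & (R > G) & (R > B))

bright_skin = np.array([[[180, 120, 90]]], dtype=np.uint8)  # passes Eq. 4
shaded_skin = np.array([[[60, 40, 30]]], dtype=np.uint8)    # fails (R <= 95)
```

The second pixel illustrates the limitation discussed above: a skin pixel darkened by shading falls below the fixed R > 95 bound and is missed, which is why Eq. 4 is used only to seed the set S rather than as the final segmentation.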


Figure 2: Illustration of skin pixel localization using Eq. 4: (a) input image; (b) binary mask; and
(c) adjacent pixels identified as human skin.

Given the set S obtained using Eq. 4, the shading effects in the face image can be attenuated. To demonstrate the
efficacy of our method in this application, we show the face segmentations with, and without, shading attenuation
using a known Bayes Classifier applied to the pixels based on their corrected colors (Vassili et al., 2003). A pixel is
considered skin if:

P(c|skin) / P(c|¬skin) > Θ,

(5)

where Θ = (1 − P(skin)) / P(skin).

(6)

In Eq. 5, the a priori probability P(skin) is set to 0.5, since we use the same number of samples for each
class (i.e. 12800 skin pixels and 12800 non-skin pixels). The constant Θ also is set to 0.5, increasing the chance
of a pixel being classified as skin, and P(c|skin) and P(c|¬skin) are modeled by Gaussian joint probability density
functions, defined as:

P(c) = exp(−(1/2)(c − μ)^T Σ^{−1} (c − μ)) / ((2π)^{3/2} |Σ|^{1/2}),

(7)

where c is the color vector of the tested pixel, and μ and Σ are the distribution parameters (i.e., the mean vector
and covariance matrix, respectively) estimated based on the training set for each class (skin and non-skin).
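The classifier of Eqs. 5-7 can be sketched as follows. The training data below is synthetic and purely illustrative (the chapter trains on 12800 pixels per class); function names and the sample color means are our own assumptions.

```python
# Bayesian skin classifier of Eqs. 5-7: fit one multivariate Gaussian per
# class and compare the likelihood ratio against the constant theta.
import numpy as np

def fit_gaussian(X):
    """X: (N, 3) color samples -> (mean vector mu, covariance matrix Sigma)."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def gauss_pdf(c, mu, sigma):
    """Multivariate Gaussian density of Eq. 7."""
    d = c - mu
    norm = np.sqrt(((2 * np.pi) ** len(mu)) * np.linalg.det(sigma))
    return np.exp(-0.5 * d @ np.linalg.inv(sigma) @ d) / norm

def is_skin(c, skin_params, nonskin_params, theta=0.5):
    """Likelihood-ratio test of Eq. 5 with the threshold theta."""
    return gauss_pdf(c, *skin_params) > theta * gauss_pdf(c, *nonskin_params)

rng = np.random.default_rng(0)
skin = rng.normal([180, 120, 100], 15, size=(500, 3))     # synthetic skin colors
nonskin = rng.normal([60, 90, 140], 30, size=(500, 3))    # synthetic background
skin_p, nonskin_p = fit_gaussian(skin), fit_gaussian(nonskin)
```

With theta = 0.5 rather than the prior-derived value of 1, the decision boundary is shifted in favor of the skin class, as the text notes.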

Figure 3: Face segmentation examples. In the first and second columns the original images are shown,
as well as their respective segmentation results. In the third and fourth columns, the images after the
application of our shading attenuation method are shown, and their respective segmentation results.

Figs. 3 and 4 illustrate some face segmentation examples. These face images are publicly available in the
Pointing04 dataset (Gourier et al., 2004). The images in Fig. 3 show four different persons, with different physical characteristics and different poses (i.e. angles between their view direction and the light source), resulting
in different shading effects. Clearly, the skin pixels, and consequently the faces, are better segmented after we
apply our shading attenuation method in all these different situations. In Fig. 4, we present four examples of the
same person but with different head poses (the angle between her view direction and the light source). It shall
be observed that even when the face is evenly illuminated, the face is better segmented after using our shading
attenuation method. However, inaccuracies may occur near the facial features partially occluded by cast shadows
(e.g. near the nose and the chin). Based on these results, it should be expected that algorithms that extract facial features (e.g., eyes, mouth and nose) would perform their tasks more effectively, which helps man-machine
interaction systems.

Figure 4: Face segmentation examples for the same person with different head poses. In the first and
second columns are shown the original images, and their respective segmentation results. In the third
and fourth columns, are shown these images after applying our shading attenuation method, and the
respective segmentation results.

3.2 Hand Segmentation in Color Images

In the same way that face segmentation is important in man-machine interaction applications, sometimes interaction requires a precise hand segmentation (e.g., when hand gestures are used in the man-machine interaction).
Hand gestures can be used to facilitate the access for people with difficulty in using traditional input devices (e.g.,
keyboards), or make the man-machine interaction easier and more natural.
Usually, hand gesture interpretation requires dynamic information, and the hand and/or arm motion
must be segmented over time. The automatic detection of the gesture start and end points, in time and in space,
is important in this application. Sometimes, gesture detection is affected by the
context (e.g. preceding and subsequent gestures) (Mitra & Acharya, 2007), and gesture recognition often is
approached using statistical models like particle filtering and Hidden Markov Models. However, before using
such techniques for tracking hands and/or recognizing gestures, a frame-by-frame hand segmentation is needed.
So, the image area corresponding to a hand is localized first, and then features are extracted for tracking and
recognition.


Figure 5: Illustration of skin pixels localization in typical hand gestures images using Eq. 8 : in the
first row, the input images; in the second row, the adjacent pixels identified as human skin.

Hand segmentation is a problem very similar to face segmentation, since both are based on skin-pixel
localization, and different classification techniques, such as thresholding, Gaussian classifiers, and multilayer perceptrons, could be used for this task. Recently, Dardas and Georganas (Dardas & Georganas, 2011) suggested
using the HSV color space; in this approach, a pixel can be associated with a hand skin region if:

0 < H < 20 ∧ 75 < S < 190,

(8)

where ∧ denotes the logical AND operator. However, as can be observed in Figure 5, this thresholding technique
has problems similar to those of the RGB-based thresholding technique presented in Section 3.1 (see Eq. 4).
Although it localizes the skin area with some accuracy, it is very susceptible to noise and may fail especially on
pixels affected by shading effects.
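For reference, Eq. 8 is a one-liner over the H and S channels. The sketch below assumes OpenCV-style ranges (H in 0-179, S in 0-255), which match the thresholds 20 and 75-190; the function name is ours.

```python
# HSV hand-skin rule of Eq. 8, vectorized with boolean masks.
import numpy as np

def hand_mask_hsv(hsv):
    """hsv: (H, W, 3) array with channels H, S, V -> boolean mask (Eq. 8)."""
    H = hsv[..., 0].astype(int)
    S = hsv[..., 1].astype(int)
    return (0 < H) & (H < 20) & (75 < S) & (S < 190)

hand_pixel = np.array([[[10, 120, 200]]])   # low hue, mid saturation: passes
blue_pixel = np.array([[[90, 120, 200]]])   # hue out of range: fails
```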
Nevertheless, the Bayes Classifier can be used to improve the segmentation results as follows. The hand
images can be processed with an algorithm similar to that presented in Section 3.1. In other words, the set S is
obtained with a thresholding technique, the shading effects are modeled and attenuated, two Gaussian joint probability
density functions P(c|skin) and P(c|¬skin) are modeled, and a pixel is classified as skin or background
according to Eq. 5.
We present some illustrative examples of shading attenuation in Fig. 6, based on the hand images of the
Sebastien Marcel database (Marcel, 1999), which often is used as a benchmark in hand gesture recognition. The
reader can observe that the areas affected by shading are relighted. As can be seen in Fig. 7, the Bayes Classifier
achieves more accurate results than the thresholding method of Dardas and Georganas described above.
Moreover, a better segmentation result is obtained if shading attenuation is used as a pre-processing stage. An
improved hand segmentation tends to increase the chance of correct hand gesture recognition, since the extracted
hand features are more reliable in this case.


Figure 6: Illustration of the application of our shading attenuation approach to hand gestures images:
the input images are in the first row; after shading attenuation, the images are shown in the second row.

3.3 Pigmented Skin Lesion Segmentation in Color Images

Several methods have been proposed for analyzing pigmented skin lesions in dermoscopy images (Maglogiannis
& Doukas, 2009). Dermoscopy is a non-invasive technique that magnifies submacroscopic structures with the
help of an optical lens (a dermoscope) and liquid immersion. Usually, the first step of a computer-aided diagnosis
system for pigmented skin lesions is to segment the lesion areas, discriminating lesion and healthy skin pixels.
Many segmentation methods have been proposed for this task (Celebi et al., 2009). Since the dermoscope
usually has its own light source, the captured images are not affected by illumination artifacts. However, dermoscopes
are tools used by experts to help in the diagnosis, and there are practical situations where a non-specialist wishes
to have a qualified opinion about a suspect pigmented skin lesion, but only standard camera imaging is available
on site (i.e., telemedicine applications).
Our discussion in this section is based on standard camera images, i.e. standard photographs of pigmented
skin lesions. It is not trivial to acquire reliable standard photographs of these lesions: pigmented skin lesions
usually are only a few millimeters wide, the camera must be placed near the skin, and the acquired image
usually is affected by shading effects. Considering the telemedicine context, the physician who receives such
an image probably would have difficulties in screening it. In the same way, pre-screening systems also could
have difficulties to automatically pre-screen such a lesion image. Next, we discuss how shading attenuation
could be helpful in this situation, justifying its use in telemedicine and in standard camera imaging. To illustrate
the effectiveness of our shading attenuation approach, we compare the segmentation results for pigmented skin
lesions with and without the application of our method.
Let us focus on the image skin area containing the lesion. During image acquisition, the lesion is captured
as the central portion of the image, and is surrounded by healthy skin. Therefore, we assume the four image
corners to contain healthy skin. This assumption is common in dermatological imaging, and has been adopted
by researchers in this field (Celebi et al., 2008) (Melli et al., 2006). Accordingly, we use a 20 × 20 pixel set around
each image corner, and determine S as the union of these 1600 pixels (i.e. the four pixel sets).
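This corner-sampling step can be sketched in a few lines (variable and function names are ours):

```python
# Collect 20x20 patches at the four image corners as the known-skin set S
# (4 * 20 * 20 = 1600 pixel coordinates), as described above.
import numpy as np

def corner_skin_set(shape, k=20):
    """shape: (H, W[, 3]) image shape -> (4*k*k, 2) array of (x, y) coords."""
    h, w = shape[:2]
    coords = []
    for y0, x0 in [(0, 0), (0, w - k), (h - k, 0), (h - k, w - k)]:
        ys, xs = np.mgrid[y0:y0 + k, x0:x0 + k]
        coords.append(np.column_stack([xs.ravel(), ys.ravel()]))
    return np.vstack(coords)

S = corner_skin_set((480, 640))   # e.g. a VGA-sized lesion photograph
```

The resulting S feeds the least-squares fit of Eqs. 2-3 directly.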
Unfortunately, segmentation methods for pigmented skin lesions in standard camera images have not received as much attention in the literature as segmentation methods for pigmented skin lesions in dermoscopy
images. Usually, thresholding-based techniques are used in the segmentation, like Otsu's thresholding

Figure 7: Hand segmentation examples. In the first and second columns are shown the original images,
and their respective segmentation results. In the third and fourth columns, are shown images after the
application of our shading attenuation method and their segmentation results, respectively.

method (Otsu, 1979), which has been widely used on grayscale images (Manousaki et al., 2006; Ruiz et al.,
2008; Tabatabaie et al., 2009). Furthermore, Cavalcanti et al. (Cavalcanti et al., 2010) also applied this thresholding scheme to the Red channel (R of the RGB color space), to take advantage of the fact that healthy
skin usually has a reddish tone. This method assumes two pixel classes, namely healthy and unhealthy skin pixels,
and searches exhaustively for the threshold th that minimizes the total intra-class variance σw^2(th), defined as the
weighted sum of the variances of the two classes:

σw^2(th) = ω1(th) σ1^2(th) + ω2(th) σ2^2(th),

(9)

where ωi are the a priori probabilities of the two classes separated by the threshold th, and σi^2 are their intra-class
variances. Minimizing the intra-class variance is equivalent to maximizing the inter-class variance σb^2(th):

σb^2(th) = σ^2 − σw^2(th) = ω1(th) ω2(th) [μ1(th) − μ2(th)]^2,

(10)

where σ^2 is the variance of the image pixels, and μi are the class means. Once the threshold th is computed, the lesion pixels
correspond to the pixels with values lower than th.
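A compact sketch of this exhaustive search, maximizing the inter-class variance of Eq. 10 over a 256-bin histogram:

```python
# Otsu's method (Eqs. 9-10): pick the threshold that maximizes the
# inter-class variance sigma_b^2(th), computed from cumulative histograms.
import numpy as np

def otsu_threshold(gray):
    """gray: uint8 array -> threshold th maximizing sigma_b^2(th)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # omega_1(th), class-1 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative first moment
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))

# Bimodal test image: a dark "lesion" patch on a bright "skin" background.
img = np.full((100, 100), 200, dtype=np.uint8)
img[40:60, 40:60] = 50
th = otsu_threshold(img)
```

On this synthetic image any threshold between the two modes separates the classes; the lesion then consists of the pixels below th, as stated above.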
Following Otsu's method, a post-processing step usually is employed, often consisting of successive morphological operations, to eliminate regions associated with artifacts that may be thresholded besides the
skin lesion. A thorough discussion of this topic is out of the scope of this work, and we suggest additional
literature (Manousaki et al., 2006; Ruiz et al., 2008; Tabatabaie et al., 2009; Cavalcanti et al., 2010)
for more details.
Exploring a different line of work, Alcon et al. suggested that Otsu's method may over-segment the lesion
area. So, they proposed a different thresholding method, specific for pigmented skin lesions acquired with standard
cameras. They observed that, although the distribution fl(x) of lesion intensities is unknown, the distribution
fs(x) of the skin corresponds to a Gaussian-like distribution:

fs(x) = A e^{−(x − μs)^2 / (2 σs^2)},

(11)

where μs is the mean value of the healthy skin pixel intensities. Let fl+s be the distribution of grayscale intensities of
the whole image; μs is determined by the intensity value of the highest peak of fl+s. Since fl+s =
fl + fs, and μl (the mean value of the lesion pixels) always is lower than μs, this distribution can be approximated as:

fl+s(x) = { fs(x),          x ≥ μs
          { fl(x) + fs(x),  x < μs.

(12)

Therefore, based on this assumption, the skin pixel distribution can be estimated as:

f̃s(x) = { fl+s(2μs − x),  x < μs
        { fl+s(x),        x ≥ μs,

(13)

and, consequently, the lesion pixel distribution can be estimated as:

f̃l(x) = fl+s(x) − f̃s(x).

(14)

Finally, the means E(X̃s) and E(X̃l) of the distributions f̃s(x) and f̃l(x), respectively, are used for the
computation of the threshold T as follows:

T = (E(X̃s) + E(X̃l)) / 2,

(15)

and, as in Otsu's method, the pixels with values lower than the computed threshold are segmented as lesion
pixels.
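A hedged sketch of this procedure (Eqs. 11-15): mirror the histogram around its main peak to estimate the skin distribution, subtract it to estimate the lesion distribution, and threshold at the midpoint of the two estimated means. This is our own reading of the steps above, not Alcon et al.'s reference implementation.

```python
# Alcon et al.-style threshold: reflect the histogram about the skin peak
# mu_s (Eq. 13), subtract to get the lesion distribution (Eq. 14), and
# average the two estimated means (Eq. 15).
import numpy as np

def alcon_threshold(gray):
    f = np.bincount(gray.ravel(), minlength=256).astype(float)  # f_{l+s}
    mu_s = int(np.argmax(f))                # highest peak = skin mode
    x = np.arange(256)
    fs = f.copy()
    left = x < mu_s
    fs[left] = f[np.clip(2 * mu_s - x[left], 0, 255)]  # mirrored skin estimate
    fl = np.maximum(f - fs, 0)              # lesion estimate, clipped at zero
    e_s = (x * fs).sum() / max(fs.sum(), 1e-9)   # E(X_s)
    e_l = (x * fl).sum() / max(fl.sum(), 1e-9)   # E(X_l)
    return 0.5 * (e_s + e_l)                # Eq. 15

img = np.full((100, 100), 200, dtype=np.uint8)   # bright "skin" background
img[40:60, 40:60] = 50                           # dark "lesion" patch
T = alcon_threshold(img)                         # midpoint of the two modes
```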


Figure 8: Pigmented skin lesion image segmentation using thresholding-based methods. (a) The input
image converted to grayscale (after preprocessing by the shading attenuation method). (b) The
histogram of image (a), with the Otsu and Alcon thresholds obtained.

Figure 8 presents the difference between the Otsu and Alcon et al. thresholds. The reader may observe that
each technique determines a different threshold value to separate the lesion pixels (intensities below the
threshold value) from the healthy skin pixels (intensities above the threshold value).
The segmentation results are presented in Figure 9. Malignant pigmented skin lesions (melanomas) from
the Dermnet Skin Disease Image Atlas (Dermnet Skin Disease Image Atlas, 2010) were processed with the shading attenuation method, and then segmented by the three segmentation techniques described previously (i.e.,
Otsu's thresholding method applied to grayscale images, Otsu's thresholding method applied to the Red channel, and the Alcon et al. thresholding method). The results obtained without the application of the shading
attenuation step are also presented for the sake of comparison. Similarly, Figure 10 presents segmentation results for benign lesions (dysplastic nevi).
As can be seen, the obtained segmentation results are better if shading attenuation is used as a preprocessing step. Considering that the lesion shape and boundary provide crucial information for discriminating malignant and benign lesions, these improved segmentation results could potentially be very helpful for pre-screening skin lesions.
It is important to observe that our shading attenuation approach may fail in some situations, as illustrated
in Fig. 11. The typical situations illustrated in Fig. 11 are: (a) our method is adequate to model and attenuate the
global illumination variation (which changes slowly), but tends to have limited effect on local cast shadows; and
(b) our approach tends to fail on surface shapes that are not locally smooth, since the quadric function is not able
to capture the local illumination variation in this case. In such cases, the segmentation method may cause healthy
and unhealthy skin areas to be confused. Possibly, better results could be achieved in such cases by acquiring the
images in a way that surface shapes are smoother and illumination varies slowly across the scene.

Figure 9: Malignant pigmented skin lesion segmentation examples. From the first to the fourth row,
results without the application of the shading attenuation method. From the fifth to the eighth row,
results after using the shading attenuation method as a color image preprocessing step. In the first
column, the original color images are shown; in the second column, results of Otsu's thresholding
method applied to grayscale images are shown; in the third column, results of Otsu's thresholding method applied to the Red channel are shown; in the fourth column, results of the Alcon et al.
thresholding method are shown.

Figure 10: Benign pigmented skin lesion segmentation examples. From the first to the fourth row, results without the application of the shading attenuation method. From the fifth to the eighth row, results
after using the shading attenuation method as a color image preprocessing step. In the first column,
the original color images are shown; in the second column, results of Otsu's thresholding method
applied to grayscale images are shown; in the third column, results of Otsu's thresholding method
applied to the Red channel are shown; in the fourth column, results of the Alcon et al. thresholding
method are shown.

Figure 11: Illustrations of cases where our shading attenuation method tends to fail, such as cast
shadows (first row) and surface shapes not well modeled by quadric functions (second row). The first
and second columns show the original images and their respective segmentation results (using Otsu's
thresholding method). The third and fourth columns show the resulting images after the application
of our shading attenuation method, and the respective segmentation results (using Otsu's thresholding
method).

Comparison With Other Shading Attenuation Methods

It is important to compare the results of the proposed shading attenuation method with other techniques frequently
used in the literature, such as homomorphic filtering (Petrou & Petrou, 2010) and the Retinex model (Jobson
et al., 1997). We already presented in Section 2 the method proposed by Soille (Soille, 1999), which inspired the
methodology presented in this chapter, and compared results.
Homomorphic filtering relies on a non-linear transform (i.e., the logarithm) of the image intensities, after which the
illumination component can be removed by suppressing the low-frequency image components. The remaining
higher-frequency components are associated with the reflectance of the surfaces shown in the image (Petrou & Petrou,
2010). However, selecting the appropriate high-pass filter can be challenging, given the different characteristics of surfaces (e.g. in human skin images). Moreover, the elimination of the low-frequency components may
alter the image characteristics and make the image segmentation more difficult, since edges and details are sharpened (Petrou & Petrou, 2010). In Fig. 12 we present examples of applying homomorphic filtering to the Value
channel of our tested input images.
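For completeness, homomorphic filtering can be sketched as follows. The Gaussian high-emphasis filter and its parameters (gl, gh, d0) are illustrative assumptions of ours, not the settings used for Fig. 12.

```python
# Homomorphic filtering sketch: log-transform the Value channel, attenuate
# low frequencies in the Fourier domain, and exponentiate back.
import numpy as np

def homomorphic(V, gl=0.5, gh=1.5, d0=15.0):
    """V: (H, W) array in (0, 1] -> image with low-frequency shading reduced."""
    logV = np.log(np.maximum(V, 1e-6))
    F = np.fft.fftshift(np.fft.fft2(logV))
    h, w = V.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    H = gl + (gh - gl) * (1 - np.exp(-d2 / (2 * d0 ** 2)))  # high-emphasis filter
    out = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.exp(out)

V = 0.3 + 0.5 * np.fromfunction(lambda y, x: x / 63, (64, 64))  # shaded ramp
R = homomorphic(V)
```

Note that the low frequencies are only attenuated (by the factor gl), not removed, and edges are boosted by gh > 1, which is the sharpening effect criticized above.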
The Retinex model has been proposed to achieve lightness-color constancy, i.e. to keep the perceived
color of objects relatively constant under varying illumination conditions (Jobson et al., 1997). Although the
Retinex may correct the illumination condition, it requires parameter specification and may negatively affect
the image colors and contrast. In Fig. 12, we present examples of applying the Retinex model implementation
proposed by Jobson et al. (Jobson et al., 1997).
Besides the results obtained with the two methods discussed above, we show in Fig. 12 the results obtained with the shading attenuation method presented in this chapter. As can be seen, the proposed method tends
to attenuate shading better than the other methods used in our comparison, while enhancing the skin-background
contrast in human skin images. We also superimposed the boundaries of the segmentation results obtained with
the methods presented in Section 3, showing that our shading attenuation method tends to help in image segmentation. Our shading attenuation method has been designed specifically for this type of image, and it also has
the advantage of being fully automatic, since it does not require parameter tuning and is adaptive to the image
data.

Figure 12: Comparison with other methods. In the first column, the original images. In the second
column, after homomorphic filtering. In the third column, after applying the Retinex model. In the
fourth column, after applying the shading attenuation method presented in Section 2. The red curves
indicate the region borders after segmenting these images.

Conclusions

We discussed an approach for attenuating the shading effects in human skin images. According to the proposed
approach, given a set of pixels known to be skin, a quadric function is adjusted to this pixel set to derive a model
to relight all the skin area in the image.
A set of experiments was used to illustrate how the proposed approach can be applied to some typical
color image analysis problems where human skin imaging is of central importance. It has been demonstrated
how to automatically determine the set of known skin pixels, and how the application of the proposed method
may increase the segmentation accuracy and potentially contribute to improving the overall system efficiency.
Considering the man-machine interaction problem, our experiments suggest that the shading attenuation method
can improve the robustness of face and hand gesture recognition. We also showed that in the case of pigmented
skin lesion segmentation, the shading attenuation method helps improve the lesion detection and, hopefully, can
contribute to the early identification of skin cancer cases.
We shall also observe that our approach may not improve the image quality in situations where we have
cast shadows or surface shapes that are not locally smooth. Possibly, the use of more complex surface models
than the quadric function could help solve these difficulties with our approach.

References
Cavalcanti, P., Yari, Y., & Scharcanski, J. (2010). Pigmented skin lesion segmentation on macroscopic images. In Proceedings of
the 25th International Conference on Image and Vision Computing New Zealand.
Celebi, M., Iyatomi, H., Schaefer, G., & Stoecker, W. V. (2009). Lesion border detection in dermoscopy images. Computerized
Medical Imaging and Graphics, 33(2), 148–153.
Celebi, M. E., Kingravi, H. A., Iyatomi, H., Aslandogan, Y. A., Stoecker, W. V., Moss, R. H., Malters, J. M., Grichnik, J. M.,
Marghoob, A. A., Rabinovitz, H. S., & Menzies, S. W. (2008). Border detection in dermoscopy images using statistical region
merging. Skin Res. Technol., 14(3), 347–353.
Dardas, N. & Georganas, N. (2011). Real-time hand gesture detection and recognition using bag-of-features and support vector
machine techniques. IEEE Transactions on Instrumentation and Measurement, 60(11), 3592–3607.
Dermnet Skin Disease Image Atlas (2010). http://www.dermnet.com.
Gourier, N., Hall, D., & Crowley, J. L. (2004). Estimating face orientation from robust detection of salient facial features. In
Proceedings of Pointing 2004, ICPR, International Workshop on Visual Observation of Deictic Gestures, Cambridge, UK.
Jobson, D., Rahman, Z., & Woodell, G. (1997). Properties and performance of a center/surround retinex. IEEE Transactions on
Image Processing, 6(3), 451–462.
Maglogiannis, I. & Doukas, C. (2009). Overview of advanced computer vision systems for skin lesions characterization. IEEE
Transactions on Information Technology in Biomedicine, 13(5), 721–733.
Manousaki, A. G., Manios, A. G., Tsompanaki, E. I., Panayiotides, J. G., Tsiftsis, D. D., Kostaki, A. K., & Tosca, A. D. (2006). A
simple digital image processing system to aid in melanoma diagnosis in an everyday melanocytic skin lesion unit: a preliminary
report. International Journal of Dermatology, 45(4), 402–410.
Marcel, S. (1999). Hand posture recognition in a body-face centered space. In CHI '99 Extended Abstracts on Human Factors in
Computing Systems (pp. 302–303). New York, NY, USA: ACM.
Massone, C., Wurm, E. M. T., Hofmann-Wellenhof, R., & Soyer, H. P. (2008). Teledermatology: an update. Semin. Cutan. Med.
Surg., 27(1), 101–105.
Melli, R., Grana, C., & Cucchiara, R. (2006). Comparison of color clustering algorithms for segmentation of dermatological images.
In J. M. Reinhardt & J. P. W. Pluim (Eds.), Medical Imaging 2006: Image Processing, volume 6144 (pp. 61443S). SPIE.
Mitra, S. & Acharya, T. (2007). Gesture recognition: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C:
Applications and Reviews, 37(3), 311–324.
Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics,
9(1), 62–66.
Petrou, M. & Petrou, C. (2010). Image Processing: The Fundamentals. John Wiley & Sons, 2nd edition.
Ruiz, D., Berenguer, V. J., Soriano, A., & Martin, J. (2008). A cooperative approach for the diagnosis of the melanoma. In
Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
(pp. 5144–5147).
Shapiro, L. & Stockman, G. (2001). Computer Vision. Prentice Hall.
Soille, P. (1999). Morphological operators. In B. Jähne, H. Haußecker, & P. Geißler (Eds.), Handbook of Computer Vision and
Applications, volume 2, chapter 21 (pp. 627–682). San Diego: Academic Press.
Tabatabaie, K., Esteki, A., & Toossi, P. (2009). Extraction of skin lesion texture features based on independent component analysis.
Skin Research and Technology, 15(4), 433–439.
Vassili, V. V., Sazonov, V., & Andreeva, A. (2003). A survey on pixel-based skin color detection techniques. In Proc. Graphicon-2003 (pp. 85–92).
Whited, J. D. (2006). Teledermatology research review. Int. J. Dermatol., 45(3), 220–229.
World Health Organization (2011). How common is skin cancer? http://www.who.int/uv/faq/skincancer/en/index1.html.
