

IEEE SIGNAL PROCESSING LETTERS, VOL. 18, NO. 8, AUGUST 2011

Illumination Normalization Based on Weber's Law With Application to Face Recognition


Biao Wang, Weifeng Li, Wenming Yang, and Qingmin Liao
Abstract: Weber's law suggests that, for a stimulus, the ratio between the smallest perceptual change and the background is a constant, which implies that stimuli are perceived not in absolute terms but in relative terms. Inspired by this, we exploit and analyze a novel illumination insensitive representation of face images under varying illumination via a ratio image, called Weber-face, in which the ratio between the local intensity variation and the background is computed. Experimental results on both the CMU-PIE and Yale B face databases show that Weber-face performs better than existing representative approaches.

Index Terms: Face recognition, illumination insensitive representation, Weber's law.

I. INTRODUCTION

Automatic face recognition has become a very active topic during the last several decades. However, robust face recognition under uncontrolled illumination conditions is still challenging [1], [2]. Most existing methods are highly sensitive to illumination variations, because face appearance variations due to varying illumination conditions are typically more significant than those due to different personal identities [1]. In recent years, a variety of approaches have been proposed to solve this problem. They can be roughly divided into three categories.

The first category takes advantage of illumination samples to learn a face model of the possible illumination variations. For example, in [3] the authors show that the set of images of a face in fixed pose but under varying illumination is a convex cone in the space of images, called the illumination cone. In [4] the authors show that the images of the same face under varying illumination or expression span a low-dimensional linear subspace, and the spherical harmonic model is proposed to represent this face subspace. These methods can model the illumination variations quite well. However, their main disadvantage is that many training samples are required, which makes them impractical for real applications.

The second category attempts to compensate for uneven illumination with traditional image processing methods, including histogram equalization [5] and the logarithmic transform [6]. These methods merely adjust the gray-level distribution and ignore the physical illumination model.

The third category tries to find an illumination invariant representation of face images under varying illumination conditions. Most of these methods rely on the Lambertian reflectance model and assume that the albedo of salient facial parts (eyes, tips, etc.), which are considered illumination invariant, corresponds to relatively high spatial frequencies, while the illumination component corresponds to low spatial frequencies. However, [7] shows that there are no illumination invariant discriminative functions for an object with Lambertian reflectance, and thus the existing approaches can merely extract an illumination insensitive representation. For example, in [8] the authors propose the Multiscale Retinex (MSR) method, which reduces the effect of illumination by dividing the facial image by a smoothed version of itself. The self-quotient image (SQI) model [9] takes a similar idea but uses a weighted Gaussian filter to obtain the smoothed version. The logarithmic total variation (LTV) model [10] proposed by Chen et al. improves SQI by utilizing the edge-preserving capability of the total variation model. Tan and Triggs [11] proposed a simple and efficient method based on a pipeline of image preprocessing (PP) operations such as gamma correction, bandpass filtering, and contrast equalization. In [12], the authors use relative gradients (RG) to restore an illumination-compensated image by solving a Poisson equation. Zhang et al. [13] show that the gradient direction of face images, which they call Gradientface (GradFace), is an illumination insensitive measure.

In this letter, we propose an illumination normalization method based on Weber's law, which states that stimuli are perceived not in absolute terms but in relative terms. Given a face image, for each pixel we compute the ratio between two terms: one is the relative intensity difference of the current pixel against its neighbors; the other is the intensity of the current pixel. The obtained ratio image, which we name Weber-face, extracts the local salient patterns very well from the input image. Our theoretical analysis based on the Lambertian reflectance model further implies that Weber-face is an illumination insensitive representation. The experimental results show that the proposed method significantly improves recognition performance under varying illumination.

II. PROPOSED ILLUMINATION NORMALIZATION METHOD

A. Weber's Law

Manuscript received March 29, 2011; revised May 25, 2011; accepted May 27, 2011. Date of publication June 09, 2011; date of current version June 17, 2011. This work was supported by the Shenzhen-Hongkong Innovation Circle Project under Grant ZYB200907070030A. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Alexander Loui. The authors are with the Department of Electronic Engineering/Graduate School at Shenzhen, Tsinghua University, Beijing 100084, China (e-mail: wangbiao08@mails.thu.edu.cn; li.weifeng@sz.tsinghua.edu.cn; yang.wenming@sz.tsinghua.edu.cn; liaoqm@tsinghua.edu.cn). Digital Object Identifier 10.1109/LSP.2011.2158998

Most of us can easily catch a whispered voice in a quiet room, but in a noisy environment we may not notice someone shouting in our ear. This is the essence of Weber's law, proposed by the German physiologist Ernst Weber in 1834. He hypothesized that the ratio between the smallest perceptual change in a stimulus and the background level of the stimulus is a constant [14]:

$$\frac{\Delta I}{I} = k, \qquad (1)$$

where $\Delta I$ denotes the smallest perceptual change in the stimulus, $I$ denotes the background level of the stimulus, and $k$, called the Weber fraction, remains a constant despite variations in $I$.
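As a quick numerical illustration (the numbers here are ours, not taken from the letter): if an increase of 2 intensity units is just noticeable against a background of 100 units, Weber's law predicts that a background of 200 units requires an increase of about 4 units before it is perceived, since

$$k = \frac{\Delta I}{I} = \frac{2}{100} = \frac{4}{200} = 0.02.$$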



Weber's law holds approximately for the perception of a variety of sensations, including weight, sound intensity, and light intensity. It suggests that stimuli are perceived not in absolute terms but in relative terms: by their fold change in magnitude relative to the background level of the stimulus. Similar ideas have been successfully applied to texture classification [15], illumination compensation [12], [16], and adaptive signal sampling [17].

For a 2D image $f(x,y)$, the gradients, or rather the first-order derivatives, are sensitive to illumination. However, by normalizing them locally, we obtain

$$\frac{\nabla f(x,y)}{f(x,y)+c}, \qquad (2)$$

where $\nabla$ is the gradient operator and $c$ is a constant to avoid dividing by zero. The quantity in (2), which is called the relative gradient [12] or the normalized gradient [16], has been shown to be illumination insensitive.

In [15], Chen et al. proposed a local descriptor, WLD (Weber local descriptor), which consists of two components, differential excitation and orientation, to capture the magnitude and the direction of local intensity variations, respectively. WLD has proved suitable for texture classification and face detection [15].

B. Weber-Face

We follow the definition of the differential excitation of WLD to compute the ratio image, as illustrated in Fig. 1.

Fig. 1. Illustration of the computation of WLD.

The response of the current pixel in the output image can be expressed as

$$\xi(x_c) = \arctan\!\left(\alpha \sum_{i=0}^{p-1} \frac{x_c - x_i}{x_c}\right), \qquad (3)$$

where the arctangent function is used to prevent the output from being too large and thus partially suppresses the side effect of noise, $x_c$ denotes the center pixel, $x_i$ ($i = 0, \ldots, p-1$) denote the neighboring pixels, $p$ is the number of neighbors (e.g., $p = 8$ as shown in Fig. 1), and $\alpha$ is a parameter for adjusting (magnifying or shrinking) the intensity difference between neighboring pixels.

Based on the Lambertian reflectance model, a face image can be expressed by

$$f(x,y) = \rho(x,y)\, s(x,y), \qquad (4)$$

in which $f(x,y)$ is the image pixel value, $\rho(x,y)$ is the reflectance, and $s(x,y)$ is the illuminance at pixel $(x,y)$. $s(x,y)$ depends on the lighting source, while $\rho(x,y)$ depends only on the characteristics of the facial surface, which include both the albedo (surface texture) and the surface normal (3D shape), and thus can be regarded as the illumination insensitive part. Many methods, such as MSR [8], SQI [9], and TVQI [10], attempt to obtain $\rho$ by separating $s$ from $f$ as a smoothed version of $f$, which is basically an ill-posed problem for real images. Since WLD is computed as a ratio, it is robust to multiplicative noise, which closely resembles the uneven illumination in face images. In the following, we prove theoretically that by applying WLD to a face image $f$, we obtain an illumination insensitive representation of $f$, which we name Weber-face (WF):

$$\mathrm{WF}(x,y) = \arctan\!\left(\alpha \sum_{i=0}^{p-1} \frac{f(x,y) - f(x_i,y_i)}{f(x,y)}\right), \qquad (5)$$

in which $(x_i, y_i)$, $i = 0, \ldots, p-1$, denote the neighbors of pixel $(x,y)$.
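With (5) written in this form, the connection to the Laplace operator invoked below becomes explicit, since the numerator sum can be expanded as

$$\sum_{i=0}^{p-1}\bigl(f(x,y)-f(x_i,y_i)\bigr) = p\,f(x,y) - \sum_{i=0}^{p-1} f(x_i,y_i),$$

which, for $p = 8$ neighbors on a $3 \times 3$ grid, is (up to sign) the response of the standard discrete Laplacian mask.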

From (4), we have

$$f(x,y) - f(x_i,y_i) = \rho(x,y)\,s(x,y) - \rho(x_i,y_i)\,s(x_i,y_i). \qquad (6)$$

Typically, the illumination component $s(x,y)$ varies very slowly except at shadow boundaries, which is a commonly used assumption, so we can take

$$s(x_i,y_i) \approx s(x,y). \qquad (7)$$

By substituting (4), (6), and (7) into (5), the ratio inside the sum becomes

$$\frac{f(x,y)-f(x_i,y_i)}{f(x,y)} \approx \frac{\rho(x,y)\,s(x,y)-\rho(x_i,y_i)\,s(x,y)}{\rho(x,y)\,s(x,y)} = \frac{\rho(x,y)-\rho(x_i,y_i)}{\rho(x,y)},$$

so that

$$\mathrm{WF}(x,y) \approx \arctan\!\left(\alpha \sum_{i=0}^{p-1} \frac{\rho(x,y) - \rho(x_i,y_i)}{\rho(x,y)}\right). \qquad (8)$$

From the above equation, we can see that $\mathrm{WF}(x,y)$ can be regarded as an illumination insensitive representation of the original face image $f(x,y)$, since it depends only on the reflectance component $\rho$ and has nothing to do with the illumination component $s$. Therefore, our proposed method does not need to explicitly estimate the illumination part $s$ from $f$, thus avoiding the problems mentioned above. Moreover, as can be seen from (3), we use the Laplace operator to measure the local intensity variation. Compared with the gradients used in [12], [16], the Laplace operator, which is in fact a second-order derivative, has better edge detection ability, especially for ramp edges [18], and thus can emphasize the salient facial parts that are useful for face recognition.

C. Implementation

The Laplace operator itself, however, is very sensitive to noisy pixels. Therefore, instead of applying the WLD defined in (3) directly to a face image, we first smooth the image with a Gaussian filter. As mentioned in [13], the Gaussian filter can also reduce the side effect of shadow boundaries. Our Weber-face algorithm thus consists of Gaussian filtering followed by WLD, as summarized in Table I. Fig. 2 shows original face images and the corresponding Weber-faces, from which we can see that Weber-face reduces the effect of illumination while maintaining the most salient details of the facial features.
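To make the two-step pipeline concrete, the following Python/NumPy sketch applies Gaussian smoothing followed by the WLD-style differential excitation of (3) over a 3 x 3 neighborhood, as summarized in Table I. It is our own illustration rather than the authors' Matlab implementation; the parameter names sigma, alpha, and eps, the periodic border handling via np.roll, and the small eps added to the denominator are simplifying assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def weber_face(image, sigma=1.0, alpha=2.0, eps=1e-6):
    # Step 1: Gaussian smoothing to suppress noise and soften shadow boundaries.
    f = gaussian_filter(image.astype(np.float64), sigma)
    # Step 2: differential excitation of Eq. (3): sum of (centre - neighbour)
    # differences over the 8-neighbourhood, normalized by the centre intensity.
    diff_sum = np.zeros_like(f)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            # np.roll wraps at the borders; this is a simplification, not the letter's choice.
            diff_sum += f - np.roll(np.roll(f, dx, axis=0), dy, axis=1)
    # arctan bounds the response; eps (our addition) avoids division by zero.
    return np.arctan(alpha * diff_sum / (f + eps))

For a cropped 120 x 120 grayscale face image img, as used in Section III, the normalized representation is obtained with wf = weber_face(img).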


TABLE I IMPLEMENTATION OF WEBER-FACE

Fig. 3. Average recognition rate versus α.

Fig. 2. Sample face images and corresponding Weber-faces.

III. EXPERIMENTAL RESULTS

A. Experiment Setting

In this section, experiments are conducted on two publicly available face databases with large illumination variations, namely CMU-PIE [19] and the Yale Face Database B [3], to illustrate the effectiveness of our Weber-face. We also compare our method with several state-of-the-art methods: HE [5], LTV [10], Gradientface (GradFace) [13], and RG [12]. All of these algorithms are implemented in Matlab with the parameters set as their authors recommend. All face images from the two databases are properly aligned, cropped, and resized to 120 × 120 pixels. We use the one-nearest-neighbor rule as the classifier. We also give the result for the original images without any preprocessing (ORI) as the baseline.

B. Results on CMU-PIE

The CMU-PIE face database consists of 68 subjects with 41,368 images under variations in illumination, pose, and expression. The illumination subset (C27, 1425 images) of the 68 subjects under illumination from 21 directions was chosen for our experiments. Each time, we chose one image per subject as the gallery and used the others as the probes. In our method, α is designed for adjusting (magnifying or shrinking) the intensity difference between neighboring pixels. If this value is too small, salient changes that are helpful for face recognition but have relatively small intensity differences under illumination variation may not be emphasized; if it is too large, smooth regions with small intensity differences may be exaggerated. Fig. 3 shows the relation between α and the average recognition rate. As can be seen, a moderate value of α yields the highest recognition rate. Fig. 4 shows the faces of the same person under illumination from 21 directions and the corresponding Weber-faces, from which we can see that our method gives good illumination normalization results. The recognition rates of the different methods for each kind of gallery image are illustrated in Fig. 5. As can be seen, compared to the other methods, Weber-face improves the recognition performance even when the gallery image has bad illumination conditions. The average recognition rates are summarized in Table II, which illustrates the advantage of our method.
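For completeness, the following Python sketch shows the recognition protocol described above: one gallery image per subject and a one-nearest-neighbor decision over the preprocessed (e.g., Weber-face) images. The Euclidean (L2) distance and the names gallery, probes, and nearest_neighbor_identify are our illustrative assumptions rather than details taken from the letter.

import numpy as np

def nearest_neighbor_identify(gallery, probes):
    # gallery: dict mapping subject id -> preprocessed image (2D array)
    # probes:  list of (true_id, preprocessed image) pairs
    ids = list(gallery.keys())
    g = np.stack([gallery[i].ravel() for i in ids]).astype(np.float64)
    correct = 0
    for true_id, img in probes:
        # Distance from the probe to every gallery image; pick the closest subject.
        d = np.linalg.norm(g - img.ravel().astype(np.float64), axis=1)
        if ids[int(np.argmin(d))] == true_id:
            correct += 1
    return correct / len(probes)  # recognition rate

In the CMU-PIE experiment this would be run once per illumination condition serving as the gallery, and the resulting rates averaged.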

Fig. 4. (a) Samples from PIE. (b) Corresponding Weber-faces.

Fig. 5. Recognition rates versus gallery images. The horizontal axis gives the sample gallery images with their illumination conditions.

TABLE II AVERAGE RECOGNITION RATES (%) ON CMU-PIE

C. Results on Yale B

The Yale B face database includes ten subjects under nine poses and 64 illumination conditions and is more challenging than CMU-PIE. Only the frontal images were chosen for our experiments, giving 640 frontal images of the ten subjects under 64 illumination conditions. They are divided into five subsets according to the angle between the light source direction and the central camera axis: Subset 1 (0° to 12°, 70 images), Subset 2 (13° to 25°, 120 images), Subset 3 (26° to 50°, 120 images), Subset 4 (51° to 77°, 140 images), and Subset 5 (above 78°, 190 images). In our experiments, the images with the most neutral light condition (A+00E+00) were used as the gallery, and the images from Subsets 1–5 were used as the probes. The results for several sample images from Yale B processed by the different approaches are shown in Fig. 6, and the corresponding recognition rates of each method on the five subsets are listed in Table III. Our Weber-faces improve the overall average recognition rate from 43.4% to 98.3%. For Subsets 4 and 5, which are really challenging due to the harsh illumination and shadows, Weber-faces improve the average recognition rates from 17.9% and 10.5% to 96.4% and 96.8%, respectively, significantly better than HE, RG, and LTV and as good as the recently proposed Gradientface approach.

Fig. 6. Illumination preprocessing with different approaches on face images from the Yale B database. The images in each column are (from left to right): (a) original images, (b) results of HE, (c) results of RG, (d) results of LTV, (e) results of GradFace, (f) results of our Weber-face.

TABLE III RECOGNITION RATES (%) ON YALE B

D. Efficiency

Efficiency is important for real-time face recognition applications. As illustrated in Table I, our proposed method mainly requires two image convolutions and one arctangent operation. We compare the efficiency of Weber-face with the other competing algorithms experimentally by measuring the CPU time on the CMU-PIE database. The results are listed in Table IV, from which we can conclude that Weber-face is more computationally efficient than all of these algorithms except HE.

TABLE IV AVERAGE CPU TIME (IN ms) PER IMAGE ON CMU-PIE


IV. CONCLUSION

The proposed illumination normalization method is inspired by Weber's law and is defined as a ratio image, called Weber-face, between two terms: one is the local intensity variation, measured by convolving with a Laplace operator; the other is the intensity of the current pixel. Based on the Lambertian reflectance model, we conclude that Weber-face is an illumination insensitive representation of face images under varying illumination conditions. Experimental results on the CMU-PIE and Yale B face databases demonstrate that our approach is more computationally efficient and performs better than several state-of-the-art approaches.

REFERENCES
[1] Y. Adini, Y. Moses, and S. Ullman, "Face recognition: The problem of compensating for changes in illumination direction," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 721–732, Jul. 1997.
[2] X. Zou, J. Kittler, and K. Messer, "Illumination invariant face recognition: A survey," in Proc. IEEE Int. Conf. Biometrics: Theory, Applications, and Systems, 2007, pp. 1–8.
[3] P. Belhumeur, A. Georghiades, and D. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 643–660, 2001.
[4] R. Basri and D. Jacobs, "Lambertian reflectance and linear subspaces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 2, pp. 218–233, 2003.
[5] S. M. Pizer and E. P. Amburn, "Adaptive histogram equalization and its variations," Comput. Vis. Graph. Image Process., vol. 39, no. 3, pp. 355–368, Jul. 1987.
[6] M. Savvides and V. Kumar, "Illumination normalization using logarithm transforms for face authentication," in Proc. IAPR AVBPA, 2003, pp. 549–556.
[7] H. F. Chen, P. N. Belhumeur, and D. W. Jacobs, "In search of illumination invariants," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2000, pp. 254–261.
[8] D. Jobson, Z. Rahman, and G. Woodell, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Trans. Image Process., vol. 6, no. 7, pp. 965–976, 1997.
[9] H. Wang, S. Z. Li, Y. Wang, and J. Zhang, "Face recognition under varying lighting conditions using self quotient image," in Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition, 2004, pp. 819–824.
[10] T. Chen, X. S. Zhou, D. Comaniciu, and T. S. Huang, "Total variation models for variable lighting face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 9, pp. 1519–1524, 2006.
[11] X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Trans. Image Process., vol. 19, no. 6, pp. 1635–1650, 2010.
[12] Z. Hou and W. Yau, "Relative gradients for image lighting correction," in Proc. ICASSP, 2010, pp. 549–556.
[13] T. Zhang, Y. Tang, B. Fang, Z. Shang, and X. Liu, "Face recognition under varying illumination using gradient faces," IEEE Trans. Image Process., vol. 18, no. 11, pp. 2599–2606, 2009.
[14] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[15] J. Chen, S. Shan, C. He, G. Zhao, M. Pietikäinen, X. Chen, and W. Gao, "WLD: A robust local image descriptor," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1705–1720, 2010.
[16] G. Zhu, S. Zhang, X. Chen, and C. Wang, "Efficient illumination insensitive object tracking by normalized gradient matching," IEEE Signal Process. Lett., vol. 14, no. 12, pp. 944–947, 2007.
[17] O. Dabeer and S. Chaudhuri, "Analysis of an adaptive sampler based on Weber's law," IEEE Trans. Signal Process., vol. 59, no. 4, pp. 1868–1878, 2011.
[18] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2002.
[19] T. Sim, S. Baker, and M. Bsat, "The CMU pose, illumination, and expression database," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 12, pp. 1615–1618, 2003.


