
2006 International Joint Conference on Neural Networks

Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada
July 16-21, 2006

Neural-Network-Based Photometric Stereo for 3D Surface Reconstruction
Wen-Chang Cheng
Department of Information Network Technology
Hsiuping Institute of Technology
11 Gungye Road, Dali City, Taiwan
E-mail: wccheng@mail.hit.edu.tw

Abstract—This paper proposes a novel neural-network-based photometric stereo approach for 3D surface reconstruction. The neural network inputs are the pixel values of the 2D images to be reconstructed. The normal vectors of the surface can then be obtained from the weights of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors are applied to enforce integrability when reconstructing 3D objects. The experimental results demonstrate that the proposed neural-network-based photometric stereo approach can be applied to objects generally and performs 3D surface reconstruction better than some existing approaches.

Keywords: Lambertian model, shape from shading, enforcing integrability, surface normal, neural network.

1. Introduction

The photometric stereo approach estimates local surface orientation by using several images of the same surface taken from the same viewpoint but illuminated from different directions. It was first introduced, based on the Lambertian reflectance model, by R. J. Woodham [1]. It has received wide attention, and several efforts have been made to improve the performance of recovery [2]-[23]. The main limitation of the classical photometric stereo approach is that the light source positions must be accurately known, which necessitates a fixed, calibrated lighting rig. Hence, an improved photometric stereo method for estimating the surface normal and the surface reflectance of objects without a priori knowledge of the light source direction or intensity was proposed by Hayakawa [19]. The method uses singular-value decomposition (SVD) to factorize an image data matrix of three different illuminations into a surface reflectance matrix and a light source matrix based on the Lambertian model. However, it still requires one of two added constraints (i.e., at least 6 pixels in which the relative value of the surface reflectance is constant or known, or at least 6 frames in which the relative value of the light-source intensity is constant or known) for finding the linear transformation between the surface reflectance matrix and the light source matrix.

Belhumeur et al. [10] showed that a generalized bas-relief transformation is a transformation of both the surface shape and the surface albedo for an arbitrary Lambertian surface. The set of images of an object in fixed pose but under all possible illumination conditions is a convex cone (the illumination cone) in the space of images. When the surface reflectance can be approximated as Lambertian, this illumination cone can be constructed from a handful of images acquired under variable lighting. They used as few as seven images of a face seen in a fixed pose, illuminated by point light sources at varying, unknown positions, to estimate its surface geometry and albedo map up to a generalized bas-relief transformation. Although they reported success under unknown light source directions, these surface estimation methods still need to be assisted by added constraints or more images.

Multi-layer neural networks have also been adopted to handle the photometric stereo problem [20]-[22]. However, these approaches are still restricted by the Lambertian model, which requires estimating the direction of the light source. Obviously, this restriction makes such algorithms impractical for many applications in which illumination information is not available. In this paper, we propose a novel neural-network-based photometric stereo method to solve the Lambertian model for 3D surface reconstruction. A supervised learning algorithm is applied to tune the normal vectors of the surface for reconstruction and the light source vectors automatically, based on image intensities. The 3D surface can then be reconstructed from these normal vectors using existing approaches such as enforcing integrability [24].

Additionally, the proposed approach considers the albedo and the reflecting characteristic of each surface point individually. According to the experimental results presented in Section 6, the shape recovery algorithm is robust even for surfaces with varying albedo and complex reflecting characteristics.

The remainder of this study is organized as follows. Section 2 describes the Lambertian reflectance model. The details of the neural-network-based photometric stereo approach and its learning rule derivations are presented in Sections 3 and 4. Section 5 introduces the 3D surface reconstruction from the surface normals by enforcing integrability. Section 6 presents the results of experiments performed to evaluate the performance of the proposed approach. Conclusions are drawn in the last section.

2. The Lambertian reflectance model

The Lambertian model, which represents a surface illuminated by a single point light source, is given by

R_d(n(x, y), α(x, y)) = L α(x, y) n(x, y)^T s, (1)

where α(x, y) denotes the surface's albedo at point (x, y) over the image plane domain; n(x, y) denotes the surface normal at position (x, y); s ∈ R^3 denotes the direction of the point light at infinity; and L denotes the light strength. Assuming that the Lambertian surface of a convex object is given by the depth function z(x, y), the surface normal n(x, y) at position (x, y) can be represented as

n(x, y) = [−z_x(x, y), −z_y(x, y), 1]^T / √(z_x^2(x, y) + z_y^2(x, y) + 1), (2)

where z_x(x, y) and z_y(x, y) denote the x- and y-partial derivatives of z(x, y), respectively. Therefore, the 3D surface can also be reconstructed from these normal vectors using the enforcing integrability approach (the details are given in Section 5).
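To make Eqs. (1) and (2) concrete, the following is a minimal NumPy sketch of Lambertian rendering from a depth map; the function names and the shadow clamping are illustrative assumptions, not part of the paper:

```python
import numpy as np

def surface_normals(z):
    """Unit surface normals from a depth map z(x, y), following Eq. (2)."""
    zy, zx = np.gradient(z)                     # discrete z_y, z_x (rows = y, cols = x)
    n = np.stack([-zx, -zy, np.ones_like(z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def lambertian_image(z, albedo, s, L=1.0):
    """Image intensities R_d = L * albedo * n^T s, following Eq. (1)."""
    s = np.asarray(s, dtype=float)
    s = s / np.linalg.norm(s)                   # unit light direction
    shading = surface_normals(z) @ s
    return L * albedo * np.clip(shading, 0.0, None)  # clamp attached shadows to zero
```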
3. Our Proposed Method to Solve the Lambertian Reflectance Model

Figure 1 shows the framework of the proposed symmetric neural network, which simulates the Lambertian reflection model. The input/output pairs of the network are arranged like a mirror about the center layer, where the number of input nodes equals the number of output nodes, making it a symmetric neural network. The light source direction and the normal vectors are separated from the input 2D images in the left side of the symmetric neural network and then combined inversely to generate the reflectance map in the right side of the network. The function of each layer is discussed in detail below.

Figure 1. Framework of the symmetric neural network for the diffuse reflection model.

Assuming that an input image has m pixels in total, the symmetric neural network has m input variables. The 2D image is rearranged to form an m×1 column vector represented as I = (I_1, I_2, ..., I_m)^T and fed into the symmetric neural network. Through the symmetric neural network, the reflectance map R = (R_1, R_2, ..., R_m)^T is obtained at the output.

Each node outputs an activation value as a function of its net input,

node-output = a^(l)(net-input) = a^(l)(f), (3)

where a^(l)(·) denotes the activation function and the superscript l denotes the layer number. The proposed symmetric neural network contains six layers. The functions of the nodes in each layer are described as follows.

Layer 1: This layer gathers the intensity values of the input images as the network inputs. Node I_i denotes the ith pixel of the 2D image, and m denotes the total number of pixels in the image. That is,

f_i = I_i / max_{1≤j≤m}(I_j), i = 1, ..., m, (4)
a_i^(1) = f_i, i = 1, ..., m.

The following equations also use this notation.

Layer 2: This layer adjusts the intensity of the input 2D image with the corresponding albedo value. Each node in this layer, corresponding to one input variable,
divides the input intensity by the corresponding albedo and transmits the result to the next layer. That is,

f_i = (I_i / max_{1≤j≤m}(I_j)) · (1/α_i), i = 1, ..., m, (5)
Î_i = a_i^(2) = f_i, i = 1, ..., m.

The output of this layer is the adjusted intensity value of the original normalized 2D image. The nodes of this layer are labeled Î_1, Î_2, ..., Î_m. The term α_i denotes the ith albedo value, corresponding to the ith pixel of the 2D image, and 1/α_i denotes the weight between I_i and Î_i.

Layer 3: The purpose of layer 3 is to separate the light source direction from the 2D image. The light source directions of this layer are not normalized and are labeled s′_1, s′_2, and s′_3. The link weight in layer 3 is represented as w_ij for the connection between node i of layer 2 and node j of layer 3:

f_j = Σ_{i=1}^{m} Î_i w_ij, j = 1, 2, 3, (6)
s′_j = a_j^(3) = f_j, j = 1, 2, 3.

Layer 4: The nodes of this layer represent the unit light source. Equation (7) normalizes the non-normalized light source direction obtained in layer 3. The nodes in layer 4 are labeled s_1, s_2, and s_3, respectively, and the light source direction is represented as s = (s_1, s_2, s_3)^T. The output s_j is calculated from:

f_j = 1 / √(s′_1^2 + s′_2^2 + s′_3^2), j = 1, 2, 3, (7)
s_j = a_j^(4) = f_j · s′_j = s′_j / √(s′_1^2 + s′_2^2 + s′_3^2), j = 1, 2, 3.

Layer 5: This layer combines the light source direction s and the normal vectors of the surface to generate the reflectance map. The link weight connecting node j of layer 4 and node k of layer 5 is denoted v_jk and represents the normal vectors of the surface. That is, (v_1k, v_2k, v_3k)^T denotes the normal vector of the surface at point k, where k = 1, ..., m. The outputs of the nodes in this layer are denoted R̂_k and are calculated as:

f_k = Σ_{j=1}^{3} s_j v_jk, k = 1, ..., m, (8)
R̂_k = a_k^(5) = f_k, k = 1, ..., m.

Significantly, R̂_k denotes the non-normalized reflectance map, which is therefore normalized in Layer 6.

Layer 6: This layer transfers the non-normalized reflectance map obtained in Layer 5 into the interval [0, 1]. These nodes, R_1, R_2, ..., R_m, denote the normalized reflectance map, and their output is calculated as:

f_k = R̂_k, k = 1, ..., m,
R_k = a_k^(6) = (f_k − min(R̂)) / (max(R̂) − min(R̂)) = (R̂_k − min(R̂)) / (max(R̂) − min(R̂)), k = 1, ..., m, (9)

where R̂ = (R̂_1, R̂_2, ..., R̂_m)^T, and the link weights between layers 5 and 6 are unity.

Through the supervised learning algorithm derived in the following section, the surface normal vectors can be obtained automatically.
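As a concrete illustration, a single forward pass through the six layers can be sketched as follows (a minimal NumPy sketch under the paper's notation; the function name and array layout are assumptions):

```python
import numpy as np

def forward_pass(I, alpha, w, v):
    """One forward pass through the six-layer symmetric network.

    I: (m,) raw pixel intensities      alpha: (m,) albedo estimates
    w: (m, 3) layer-2 -> layer-3 weights (W in the paper)
    v: (3, m) layer-4 -> layer-5 weights, whose columns are the normals n_k
    """
    a1 = I / I.max()                   # Layer 1, Eq. (4): normalize intensities
    a2 = a1 / alpha                    # Layer 2, Eq. (5): divide by albedo
    s_raw = a2 @ w                     # Layer 3, Eq. (6): raw light direction s'
    s = s_raw / np.linalg.norm(s_raw)  # Layer 4, Eq. (7): unit light direction
    R_hat = s @ v                      # Layer 5, Eq. (8): raw reflectance map
    R = (R_hat - R_hat.min()) / (R_hat.max() - R_hat.min())  # Layer 6, Eq. (9)
    return R, s
```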
4. Training Algorithm of the Proposed Model

Back-propagation learning is employed for supervised training of the proposed model to minimize the error function defined as

E_T = Σ_{i=1}^{m} (R_i − D_i)^2, (10)

where m denotes the total number of pixels in the 2D image; R_i denotes the ith output of the neural network; and D_i denotes the ith desired output, equal to the ith intensity of the original normalized 2D image.

For each 2D image, starting at the input nodes, a forward pass is used to calculate the activity levels of all the nodes in the network to obtain the output. Then, starting at the output nodes, a backward pass is used to calculate ∂E_T/∂ω, where ω denotes the adjustable parameters in the network. The general parameter update rule is given by

ω(t+1) = ω(t) + Δω(t) = ω(t) + η(−∂E_T/∂ω(t)), (11)

where η denotes the learning rate.

The normal vector calculated from the network is denoted n_k = (v_1k, v_2k, v_3k)^T for the kth point on the surface. The normal vectors n_k are updated
iteratively using the gradient method as:

v_jk(t+1) = v_jk(t) + 2η s_j(t) [D_k(t) − R_k(t)], j = 1, 2, 3, (12)

where s_j(t) denotes the jth element of the illuminant direction s. (Equation (12) follows from applying the update rule (11) to Eqs. (8) and (10), treating the Layer 6 normalization as constant during differentiation.) The updated v_jk should then be normalized as follows:

v_jk(t+1) = v_jk(t+1) / ‖n_k(t+1)‖, j = 1, 2, 3. (13)

Since the structure of the proposed neural network is mirrored about the center layer, the update rule for the weights between layer 2 and layer 3 of the network, denoted W (see Fig. 1), can be obtained by the least-squares method. Hence, W at time t+1 can be calculated by

W(t+1) = (V(t+1)^T V(t+1))^{−1} V(t+1)^T, (14)

where V(t+1) denotes the weights between the output and central layers of the mirror network. Finally, the norms ‖n_k(t+1)‖ are taken as the albedo values α_k, where k = 1, 2, ..., m.
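A minimal sketch of one training step, combining Eqs. (12)-(14) with the forward pass sketched in Section 3 (the array layout and the explicit pseudo-inverse form are assumptions):

```python
import numpy as np

def training_step(I, D, alpha, w, v, eta=0.1):
    """One supervised update of the network weights, per Eqs. (12)-(14)."""
    R, s = forward_pass(I, alpha, w, v)     # forward pass from the Section 3 sketch
    v = v + 2.0 * eta * np.outer(s, D - R)  # Eq. (12): gradient step on each v_jk
    norms = np.linalg.norm(v, axis=0)       # ||n_k(t+1)||, taken as albedo alpha_k
    v = v / norms                           # Eq. (13): renormalize each normal
    N = v.T                                 # (m, 3): rows are the normals n_k
    w = N @ np.linalg.inv(N.T @ N)          # Eq. (14): least-squares (pseudo-inverse)
    return w, v, norms                      # norms are the updated albedo estimates
```

Here D is the desired output, i.e., the normalized input image itself, so no extra labeled data is required.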
5. 3D Surface Reconstruction from the Surface Normal by Enforcing Integrability

The enforcing integrability approach was first proposed by R. T. Frankot and R. Chellappa [24]. Suppose that we represent the surface z(x, y) by a finite set of integrable basis functions φ(x, y, ω) so that

z(x, y) = Σ_{ω∈Ω} c(ω) φ(x, y, ω), (15)

where ω = (u, v) is a two-dimensional index, Ω is a finite set of indexes, and {φ(x, y, ω)} is a finite set of integrable basis functions that are not necessarily mutually orthogonal. We choose the discrete cosine basis, so that {c(ω)} is exactly the full set of discrete cosine transform (DCT) coefficients of z(x, y). Note that the partial derivatives of z(x, y) can also be expressed in terms of this expansion, giving

z_x(x, y) = Σ_{ω∈Ω} c(ω) φ_x(x, y, ω), (16)
z_y(x, y) = Σ_{ω∈Ω} c(ω) φ_y(x, y, ω), (17)

where φ_x(x, y, ω) = ∂φ(·)/∂x and φ_y(x, y, ω) = ∂φ(·)/∂y. Suppose we now have the possibly nonintegrable estimate n(x, y), from which we can easily deduce from Eq. (2) the possibly nonintegrable partial derivatives ẑ_x(x, y) and ẑ_y(x, y). These partial derivatives can also be expressed as a series, giving

ẑ_x(x, y) = Σ_{ω∈Ω} ĉ_1(ω) φ_x(x, y, ω), (18)
ẑ_y(x, y) = Σ_{ω∈Ω} ĉ_2(ω) φ_y(x, y, ω). (19)

A method has been proposed for finding the expansion coefficients c(ω) given a possibly nonintegrable estimate of the surface slopes ẑ_x(x, y) and ẑ_y(x, y):

c(ω) = (p_x(ω) ĉ_1(ω) + p_y(ω) ĉ_2(ω)) / (p_x(ω) + p_y(ω)), for ω = (u, v) ∈ Ω, (20)

where p_x(ω) = ∫∫ |φ_x(x, y, ω)|^2 dx dy and p_y(ω) = ∫∫ |φ_y(x, y, ω)|^2 dx dy. Finally, we can reconstruct the object's surface by performing the inverse 2-D DCT on the coefficients c(ω).
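For illustration, the following is a minimal sketch of this projection using the Fourier basis instead of the cosine basis adopted in the paper (both are integrable bases; the DC handling is an implementation assumption):

```python
import numpy as np

def enforce_integrability(zx_hat, zy_hat):
    """Project possibly nonintegrable slope estimates onto the nearest
    integrable surface, in the spirit of Frankot-Chellappa [24]."""
    rows, cols = zx_hat.shape
    wx, wy = np.meshgrid(np.fft.fftfreq(cols) * 2 * np.pi,
                         np.fft.fftfreq(rows) * 2 * np.pi)
    Zx = np.fft.fft2(zx_hat)                # expansion coefficients c1_hat
    Zy = np.fft.fft2(zy_hat)                # expansion coefficients c2_hat
    denom = wx**2 + wy**2                   # plays the role of p_x + p_y
    denom[0, 0] = 1.0                       # avoid division by zero at DC
    Z = (-1j * wx * Zx - 1j * wy * Zy) / denom  # Eq. (20) in the Fourier basis
    Z[0, 0] = 0.0                           # the mean depth is unrecoverable
    return np.real(np.fft.ifft2(Z))         # inverse transform gives z(x, y)
```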
6. Experimental Results and Discussions

Quantitative experimental results were obtained by reconstructing a synthetic sphere object. The true depth map of the synthetic sphere is generated mathematically as

z(x, y) = √(r^2 − x^2 − y^2) if x^2 + y^2 ≤ r^2, and 0 otherwise, (21)

where r = 48, 0 < x, y ≤ 100, and the center is located at (x, y) = (51, 51). The sphere object is shown in Fig. 2.
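As a sketch of the setup (the tilt/pan-to-vector conversion is one plausible convention, not specified in the paper), the sphere of Eq. (21) can be generated and rendered with the Section 2 sketch as follows:

```python
import numpy as np

# Depth map of Eq. (21): r = 48 on a 100x100 grid centered at (51, 51).
r = 48
x, y = np.meshgrid(np.arange(1, 101), np.arange(1, 101))
dx, dy = x - 51, y - 51
z = np.where(dx**2 + dy**2 <= r**2,
             np.sqrt(np.clip(r**2 - dx**2 - dy**2, 0.0, None)), 0.0)

# Illustrative light direction, e.g. S1 = (tilt 30, pan 140) in degrees.
tilt, pan = np.radians(30.0), np.radians(140.0)
s = np.array([np.sin(tilt) * np.cos(pan),
              np.sin(tilt) * np.sin(pan),
              np.cos(tilt)])

image = lambertian_image(z, albedo=1.0, s=s)  # render via the Section 2 sketch
```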

This synthetic image was generated using the depth function in Eq. (21), and the surface gradients were computed using a discrete approximation. Figure 3 shows the synthetic images generated according to the Lambertian model with varying albedo and different lighting directions. The albedos are 0.6 for the bottom-right part of the sphere, 0.8 for the top-left part, and 1 for the rest. The light source locations in Figs. 3(a)-(i) are S1 = (30, 140), S2 = (30, 90), S3 = (30, 40), S4 = (30, 180), S5 = (0, 0), S6 = (30, 0), S7 = (30, -140), S8 = (30, -90), and S9 = (30, -40), where the first component is the tilt angle and the second is the pan angle, both in degrees. The center of the image is set as the origin of the coordinate system; the x-y plane is parallel to the image plane, and the z-axis is perpendicular to it. The experimental results are shown in Table 1, where the proposed method is compared with two photometric stereo algorithms, Hayakawa's method [19] and Georghiades's method [10]. In Table 1, we take 5 groups of images with different illuminant angles from the left, the right, and the front for 3D reconstruction. Both the estimated surface and the synthetic one are normalized to the interval [0, 1]. According to Table 1, the proposed method achieves the lowest mean errors compared with the other methods under all illumination conditions.

Figure 2. Synthetic sphere surface object.

Figure 3. The 2D sphere images, panels (a)-(i), generated with varying albedo and lighting directions S1-S9 (tilt angle, pan angle, in degrees).

Table 1. The absolute mean errors between the estimated depths and the desired depth of the synthetic object's 3D surface, for the sphere with variant albedo. (Both light and viewing directions are unknown in the experiment.)

Lights       Hayakawa's method [19]   Georghiades's method [10]   Our proposed method
S1, S2, S3   0.049                    0.077                       0.020
S7, S8, S9   0.050                    0.079                       0.027
S1, S5, S3   0.033                    0.073                       0.021
S1, S8, S6   0.034                    0.079                       0.018
S1, S5, S7   0.032                    0.076                       0.018

In the second experiment, we test the algorithm on a number of real images from the Yale Face Database B [25], which exhibit variability due to illumination as well as varying albedo at each point of the human face surface.

First, we arbitrarily take three images of the same person, photographed under three different light sources, from these test images, as shown in Figs. 4(a)-(c). We feed the normalized images into our algorithm. After updating the parameters over several iterations, we obtain the normal vector of the face surface corresponding to each pixel of the image from the weights of the network. The results are shown in the second row of Fig. 4, which presents the first, second, and third components of the surface normal vector, in that order.

Figure 4. (a)-(c) Three frontal training images with different light source positions from the Yale Face Database B. (d)-(f) Surface normals corresponding to the three source images.

Figure 5 presents the results of 3D human face reconstruction. Figure 5(a) shows the surface albedo of the human face in Fig. 4. Figure 5(b) shows the result of our proposed algorithm. The results reconstructed by Georghiades's approach [10] and Hayakawa's approach [19] are shown in Figs. 5(c) and 5(d), respectively. The results clearly indicate that the performance of our proposed nonlinear reflectance model is better than that of Georghiades's approach and Hayakawa's approach. Compared with the results obtained by Georghiades's approach, the surfaces reconstructed by our algorithm, which takes specular components into consideration, are noticeably better in high-gradient parts such as the nose. Moreover, Hayakawa's approach requires added constraints: with them it can reconstruct a 3D face model similar to ours, but when the constraints are unavailable it cannot reconstruct the 3D face model at all.

Figure 5. (a) The surface albedo of the human face in Fig. 4. The results of 3D model reconstruction by (b) our proposed algorithm, (c) Georghiades's approach in [10], and (d) Hayakawa's approach in [19].

7. Conclusions

In this paper, we used images taken under three different light source locations to solve the photometric stereo problem based on the Lambertian model, which normally requires the locations of the light sources to be known in advance. Our approach obtains very good results even if the locations of the light sources are not given. In addition, the structure of the proposed symmetric neural network for solving photometric stereo problems does not need any other desired output values or smoothing conditions, so it converges more easily and keeps the system stable.

Performance comparisons were made between our proposed neural-network-based photometric stereo approach, Georghiades's approach in [10], and Hayakawa's approach in [19]. We tested the algorithms on synthetically generated images for the reconstruction of object surfaces. The results clearly indicate that the performance of our proposed approach is better than that of Georghiades's approach in [10] and Hayakawa's approach in [19]. All the experimental results showed that the performance of the proposed approach is better than those of the two existing photometric stereo approaches.

Acknowledgment

This work was supported by the NSC under grant 94-2622-E-164-001-CC3.

References

[1] R. J. Woodham, "Photometric Method for Determining Surface Orientation from Multiple Images," Journal of Optical Engineering, Vol. 19, No. 1, 1980.
[2] K. M. Lee and C. J. Kuo, "Shape Reconstruction from Photometric Stereo," Journal of the Optical Society of America A, Vol. 10, No. 5, 1993.
[3] C. Cho and H. Minanitani, "A New Photometric Method Using 3 Point Light Sources," IEICE Trans. Inf. and Syst., Vol. E76-D, No. 8, pp. 898-904, August 1993.
[4] Y. Iwahori, R. Woodham, and A. Bagheri, "Principal Components Analysis and Neural Network Implementation of Photometric Stereo," Proc. of the Workshop on Physics-Based Modeling in Computer Vision, pp. 117-125, June 1995.
[5] G. Kay and T. Caelli, "Estimating the Parameters of an Illumination Model Using Photometric Stereo," Graphical Models and Image Processing, Vol. 57, No. 5, pp. 365-388, 1995.
[6] G. McGunnigle, "The Classification of Textured Surfaces under Varying Illuminant Direction," Ph.D. Thesis, Department of Computing and Electrical Engineering, Heriot-Watt University, Edinburgh, 1998.
[7] S. K. Nayar, K. Ikeuchi, and T. Kanade, "Determining Shape and Reflectance of Hybrid Surfaces by Photometric Sampling," IEEE Trans. on Robotics and Automation, Vol. 6, No. 4, pp. 418-431, 1990.
[8] E. Angelopoulou and J. P. Williams, "Photometric Surface Analysis in a Triluminal Environment," IEEE International Conf. on Computer Vision, 1999.
[9] P. N. Belhumeur, D. J. Kriegman, and A. L. Yuille, "The Bas-Relief Ambiguity," IEEE Conf. on CVPR, pp. 1060-1066, 1997.
[10] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose," IEEE Trans. on PAMI, Vol. 23, No. 6, pp. 643-660, June 2001.
[11] A. S. Georghiades, "Incorporating the Torrance and Sparrow Model of Reflectance in Uncalibrated Photometric Stereo," IEEE International Conf. on Computer Vision, 2003.
[12] K. Ikeuchi, "Determining Surface Orientations of Specular Surfaces by Using the Photometric Stereo Method," IEEE Trans. on PAMI, Vol. 3, pp. 661-669, 1981.
[13] E. N. Coleman and R. Jain, "Obtaining 3-Dimensional Shape of Textured and Specular Surfaces Using Four-Source Photometry," Computer Vision, Graphics, and Image Processing, Vol. 18, pp. 309-328, 1982.
[14] H. D. Tagare and R. J. P. deFigueiredo, "Simultaneous Estimation of Shape and Reflectance Maps from Photometric Stereo," IEEE International Conf. on Computer Vision, pp. 340-343, 1990.

[15] F. Solomon and K. Ikeuchi, "Extracting the Shape and Roughness of Specular Lobe Objects Using Four Light Photometric Stereo," IEEE Conf. on CVPR, pp. 466-471, 1992.
[16] H. Rushmeier, G. Taubin, and A. Gueziec, "Applying Shape from Lighting Variation to Bump Map Capture," in Eurographics Rendering Techniques '97, pp. 35-44, St. Etienne, France, June 1997.
[17] O. Drbohlav and A. Leonardis, "Detecting Shadows and Specularities by Moving Light," Proc. of the Computer Vision Winter Workshop, Ljubljana, Slovenia, pp. 60-74, 1998.
[18] O. Drbohlav and R. Sara, "Specularities Reduce Ambiguity of Uncalibrated Photometric Stereo," IEEE Conf. on Computer Vision, Copenhagen, Denmark, 2002.
[19] H. Hayakawa, "Photometric Stereo under a Light Source with Arbitrary Motion," Journal of the Optical Society of America A, Vol. 11, No. 11, 1994.
[20] G. Q. Wei and G. Hirzinger, "Learning Shape from Shading by a Multilayer Network," IEEE Trans. on Neural Networks, Vol. 7, pp. 985-995, 1996.
[21] S. Y. Cho and T. W. S. Chow, "Shape Recovery from Shading by a New Neural-Based Reflectance Model," IEEE Trans. on Neural Networks, Vol. 10, pp. 1536-1541, 1999.
[22] S. Y. Cho and T. W. S. Chow, "Neural Computation Approach for Developing a 3D Shape Reconstruction Model," IEEE Trans. on Neural Networks, Vol. 12, No. 5, September 2001.
[23] R. D. Morris, P. Cheeseman, V. N. Smelyanskiy, and D. A. Maluf, "A Bayesian Approach to High Resolution 3D Surface Reconstruction from Multiple Images," Proc. of the IEEE Signal Processing Workshop, June 14-16, 1999, pp. 140-143.
[24] R. T. Frankot and R. Chellappa, "A Method for Enforcing Integrability in Shape from Shading Algorithms," IEEE Trans. on PAMI, Vol. 10, No. 4, pp. 439-451, July 1988.
[25] Yale Face Database B. Available online: http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html
