
A Framework for Designing General Catadioptric Imaging and Projection Systems


Rahul Swaminathan, Shree K. Nayar and Michael D. Grossberg
Department of Computer Science, Columbia University
New York, New York 10027
Email: {srahul, nayar, mdog}@cs.columbia.edu

This research was supported by the DARPA HID Program under Contract No. N00014-00-1-0916.

Abstract
A key problem in designing catadioptric systems is finding the mirror shape that, together with
some known primary optics, provides the required imaging geometry. Previous methods for mirror
design usually derived, from the imaging geometry requirements, partial differential equations (PDEs)
that constrained the mirror shape. Then, analytic or approximate solutions to these PDEs for the
mirror shape were sought. Therefore, previous mirror designs used case-specific methods requiring
considerable designer interaction and skill. In this paper we present a fully automatic method to
determine the mirror shape for a large class of flexible catadioptric imaging systems. Flexibility is
achieved by allowing the user to specify a map from pixels in the image to points in the scene, called
the image-to-scene map. Also, the primary optics that constitute the catadioptric system can be
general and are not restricted to lenses based on perspective or orthographic projection.
We model arbitrary mirror surfaces using tensor-product splines and show that the parameters
of the spline can be efficiently computed by solving a set of linear equations. Although we focus
on the design of imaging systems, our framework is directly applicable to the design of projector
systems as well. Furthermore, the systems we can design need not even have a single viewpoint.
In such cases a locus of viewpoints, called a caustic, is formed. We also present a simple method to
compute these caustics using our framework. We demonstrate the effectiveness of our approach
by computing the mirror shapes for both catadioptric imaging and projector systems, as well as
their caustics. These include previously designed systems, for comparison with ground truth.
We believe that the method we propose is a powerful tool that is general and throws wide open
the space of imaging and projector systems that can be designed.

Contents

1 Design of Catadioptric Imaging Systems
2 Image-to-scene Map using Catadioptrics
  2.1 Modeling the Primary Optics
  2.2 Modeling the Mirror Shape
  2.3 The Image-To-Scene Map
3 Computing the Mirror Shape
  3.1 Normals Known: A Linear Method
  3.2 The General Iterative Method
4 Caustics of Catadioptric Systems
5 Example Mirror Designs
  5.1 Parabolic Mirror based Single Viewpoint System
  5.2 Elliptical Mirror based Single Viewpoint System
  5.3 Equi-Angular Sensors
  5.4 Plane Rectifying Imaging System
  5.5 Cylindrical Panorama Imaging System
  5.6 Conference Table Rectification
  5.7 Skewed Plane Mapping
6 Conclusions
A Weighting for Image/Scene Error Metric
  A.1 Relating Weights to Mirror Normal
  A.2 Scene Error from Perturbation of Mirror Normal
  A.3 Mapping Errors to the Image
B Computing Caustics
C Practical Issues with Solving Large Linear Systems

1 Design of Catadioptric Imaging Systems

Consider the light field associated with a scene. A perspective imaging system immersed in the
scene performs a very special type of sampling of the light field. In recent years, researchers in
the fields of computer vision and computer graphics have begun to realize that the light field
can be sampled in alternative ways to produce new forms of visual information. The time has
therefore arrived to broaden the notion of a camera. Such a notion should be more than just an
advanced implementation of the camera obscura. It should be viewed as a device that can sample
the light-field [13] in a manner that is most suited to the application at hand.
Consider for a moment the problem of acquiring panoramic images. This is useful for many
applications, including omnidirectional surveillance as well as video-conferencing. Typical omnidirectional systems acquire images as shown in Fig. 1(a). However, the acquired image appears
distorted and necessitates the use of computational resources to create undistorted cylindrical
panoramas. What if we could design a sensor that acquired such an image directly, as shown
in Fig. 1(b), by optically warping the scene rays? Similarly, for surveillance applications, one can
develop a camera that makes the faces of people standing in a hallway or seated in an auditorium
have the same size in the image, irrespective of where the people are located in the scene (see
Fig. 1(c,d)). As our final example we consider the application of machine inspection of known objects such as golf balls. The area of the dimples in the image acquired with a perspective imaging
system, shown in Fig. 1(e), varies across the image due to foreshortening. We would ideally like to
image the golf-ball uniformly such that these foreshortening effects are minimized (see Fig. 1(f)).

Clearly, there are many applications that benefit from sampling the light-field in unconventional
ways. One might imagine that such sampling of the light field can be accomplished by designing
an appropriate imaging lens. In practice, however, this is not the case. Lenses with exotic shapes
generally produce strong undesirable optical aberrations. Furthermore, fabrication of such lenses
is very difficult. As a result, the class of light field samplings that can be done using just lenses
is very restricted. Mirrors, however, do not suffer from these shortcomings. For this reason, in recent
years there has been significant interest in catadioptric imaging systems, which use a combination
of lenses and mirrors. The use of mirrors enables a designer to manipulate the sampling function
of the camera in a flexible manner without compromising image quality. Catadioptrics, therefore,
significantly broadens the class of cameras that can be realized in practice.
Figure 1: A comparison between the typical images acquired with conventional imaging systems and
desired flexible imaging systems. (a) An image acquired with a conventional para-catadioptric imaging
system [21] for omnidirectional imaging. Cylindrical panoramic views can be computed, but necessitate
the use of computational resources. (b) We would rather directly acquire such an image using the optics
alone [15], to ensure image quality and freedom from computational devices. (c) A perspective view of
an audience in an auditorium. Due to increasing depth from the imaging system, the people's faces decrease
in size in the image. If the faces occupied the same number of pixels in the image, as shown in (d),
applications such as face recognition could function just as well for people seated far from the sensor as
for those close to it. (e) Another example: a perspective view of a golf ball. The images of the dimples
on the ball decrease in size due to curvature-induced foreshortening. In applications such as machine
inspection, we would like to view the ball surface at uniform resolution. Thus all the dimples should
occupy the same number of pixels in the image irrespective of their location, as in (f). We present a
novel framework to automatically design such flexible catadioptric imaging systems.

A variety of catadioptric imaging systems have been proposed over the last century. Much of
this work has focused on the design of telescopes [19] and microwave antennae [9]. In the last
decade, there has been considerable interest in developing catadioptric systems for computer
vision, computer graphics and photography. These systems are assumed to consist of some known
primary optics and an unknown mirror. The key problem then is to determine the mirror shape
that implements the required image-to-scene map (see [23, 27, 28, 8, 3, 21, 10, 16, 6, 5, 12,
18] for examples). The complete class of single-mirror and two-mirror systems that have a single
viewpoint (center of projection) have been derived [1, 22]. In addition, several non-single viewpoint
systems [8, 3, 10, 16, 2, 11, 6, 20, 12, 18] have been designed that perform specific image-to-scene
maps. In each of these cases, the approach has been to use the constraints imposed by the image-to-scene map to derive partial differential equations (PDEs) that the mirror must satisfy. Then,
analytical solutions for the PDEs are sought. This approach is cumbersome and case specific,
making it hard to generalize.
It is therefore highly desirable to have a method for deriving mirror shapes for arbitrary image-to-scene maps. Hicks was the first to pose this general problem [17] using geometric distributions.
He also presented a novel method to test whether a mirror that implements the desired map exists.
However, Hicks' approach requires different analytical tools and an appropriate numerical method to be chosen for each case of mirror design. That is, although the formulation is a
general one, each mirror derivation requires the user to guide the process by imposing constraints
(generally PDEs) that are particular to the specific problem at hand.
The goal of this paper is to develop a completely automatic method to design catadioptric systems
for arbitrary image-to-scene maps with no human intervention or effort. There are cases where a
desired mapping simply cannot be accomplished by any mirror shape; that is, a solution does not
exist. In such cases, our method computes a mirror that is optimal in the sense that it minimizes
image distortions. Due to all these attributes, our method throws wide open the space of cameras
that can be implemented in practice.
At the core of our method is a linear and yet highly flexible and efficient representation of the
shape of the mirror. This representation is based on tensor product splines. It enables us to obtain
the optimal mirror shape for any given scene-image mapping using an iterative linear estimation
procedure. When the scene lies at infinity, a single iteration of the method produces the desired
mirror shape. As a result, the proposed method is very efficient. In addition, it can handle a
variety of models for the primary optics including orthographic, perspective, and the generalized
imaging model [14]. Furthermore, since our method is general, the systems we design need
not always have a single viewpoint. In such cases a locus of viewpoints called a caustic [4,
26] is formed. We also present a simple method to compute these caustics using our framework.
We demonstrate the power of the method by computing the mirror shapes for previously developed
catadioptric cameras as well as new ones. In the case of previously derived mirrors, we compare
our results with the known mirror shapes. In each case, we also compute the caustic of the imaging
system. In fact, we present the caustics of many previously designed non-single viewpoint cameras
whose caustics have not been known until this time.
While our focus in this paper is on designing cameras, it is worth noting that our method is
directly applicable to the design of novel projection systems as well. By simply defining the
mapping of image pixels to points on an arbitrary projection screen, we can compute the desired
mirror shape. In this case, the only difference is that we wish to minimize errors on the display
surface instead of in the image plane. This is shown to be an easy modification of the proposed
method.
We believe that the method we propose is a powerful tool that is general and throws wide open
the space of projector and imaging systems that can be designed.

2 Image-to-scene Map using Catadioptrics

The flexibility of our framework comes from the fact that the designer can define a map from pixels
in the image to points in the scene, which we call the image-to-scene map. The catadioptric system
we design must therefore implement this image-to-scene map. As with previous approaches, we
assume the catadioptric system to consist of some known primary optics and an unknown mirror.
Given the image-to-scene map and a model for the primary optics, we determine the mirror shape
that best implements the map. Throughout this paper we will address the problem as one of
designing catadioptric imaging systems. The same analysis applies directly to projector design as
well.

2.1 Modeling the Primary Optics

The catadioptric system is assumed to consist of some known primary optics and a mirror. For
example, the para-catadioptric camera [21] uses a telecentric lens (orthographic projection) along
with a parabolic reflector. Our framework accommodates the use of primary optics possessing
any projection model including: (1) perspective, which has a fixed viewpoint Sl (Fig. 2(a)), (2)

Sl(u,v)

Vl (u,v)

Vl (u,v)

Vl (u,v)

Sl(u,v)
Sl(u,v)
(a) Perspective
Figure

(b) Orthographic

(c) Generalized

2: The different models of the primary optics for which our method can compute a mirror shape.

(a) A perspective imaging model found in most projectors and imaging systems. (b) Orthographic
projection, typically obtained with telecentric lenses. (c) The generalized imaging model (caustic model)
[14], wherein each pixel can have its own associated viewpoint and viewing direction. This last model is
most general and allows the most flexibility in designing catadioptric systems.

orthographic, which has a fixed viewing direction Vl (Fig. 2(b)), and (3) the generalized imaging
model [14, 26], which can have a locus of viewpoints and viewing directions (Fig. 2(c)). In this
general case, pixel (u, v) in the image possesses a viewpoint Sl (u, v) and a viewing direction
Vl (u, v). Thus, the primary optics could be a simple perspective lens or a complex catadioptric
system itself.

2.2 Modeling the Mirror Shape

We now describe our model for the mirror shape. For convenience, we express the mirror shape
Sr in terms of the model for the primary optics as:
Sr(u, v) = Sl(u, v) + D(u, v) Vl(u, v),   (1)

where D(u, v) is the distance of the mirror from the viewpoint surface. We model D using tensor
product splines in order to facilitate simple and efficient estimation of the mirror shape. We define
D(u, v) to be:
D(u, v) = Σ_{i=1..Kf} Σ_{j=1..Kg} ci,j fi(u) gj(v),   (2)

where fi(u) and gj(v) are 1-D spline basis functions, ci,j are the coefficients of the spline model,
and Kf and Kg are the numbers of spline coefficients in u and v.
We now have a simple linear model for the mirror surface, which can be locally smooth and yet
flexible enough to model arbitrary mirror shapes.
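To make the tensor-product spline model of Eq.(2) concrete, the following sketch evaluates D(u, v) on an image grid. It is illustrative only: the choice of cubic B-spline bases, the knot vectors, and the random coefficients are assumptions for the example, not prescriptions from the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative evaluation of Eq. (2): D(u, v) = sum_ij c_ij f_i(u) g_j(v).
# Cubic B-spline bases, knots, and coefficients here are made-up placeholders.
Kf, Kg, deg = 8, 8, 3
tu = np.r_[np.zeros(deg), np.linspace(0, 1, Kf - deg + 1), np.ones(deg)]   # clamped knots in u
tv = np.r_[np.zeros(deg), np.linspace(0, 1, Kg - deg + 1), np.ones(deg)]   # clamped knots in v
C = np.random.rand(Kf, Kg)                  # spline coefficients c_ij (the unknowns in the paper)

def basis_matrix(knots, K, deg, x):
    """Evaluate all K 1-D B-spline basis functions at the sample points x."""
    B = np.empty((len(x), K))
    for i in range(K):
        e = np.zeros(K); e[i] = 1.0
        B[:, i] = BSpline(knots, e, deg)(x)
    return B

u = np.linspace(0, 1, 50)
v = np.linspace(0, 1, 50)
Fu = basis_matrix(tu, Kf, deg, u)           # f_i(u), shape (50, Kf)
Gv = basis_matrix(tv, Kg, deg, v)           # g_j(v), shape (50, Kg)
D = Fu @ C @ Gv.T                           # tensor-product mirror depth on the 50 x 50 grid
```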
Figure 3: A catadioptric imaging system consisting of known primary optics and a mirror. In general, a
pixel (u, v) in the image maps to the scene point M(u, v) after reflecting at Sr(u, v) on the mirror. This
imposes constraints on the surface normals Nr(u, v) of the mirror.

2.3 The Image-To-Scene Map

Fig. 3 shows a catadioptric system used to image some known scene. The primary optics can
be perspective, orthographic or the generalized projection model. The user provides a map M

from points (u, v) in the image I to points M(u, v) in the scene. The mirror surface Sr(u, v)
implements the mapping M by reflecting each scene point M(u, v) along the scene ray Vr(u, v)
into the primary optics, where:

Vr(u, v) = (Sr(u, v) − M(u, v)) / |Sr(u, v) − M(u, v)|.   (3)

This constrains the surface normal of the mirror Nr as:

Nr(u, v) = (Vl(u, v) + Vr(u, v)) / |Vl(u, v) + Vr(u, v)|.   (4)

We now have all the components needed by our framework to design any catadioptric system.
These include the general model for the primary optics, the spline based linear model for the
mirror surface and the image-to-scene map needed to describe the desired imaging system.
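As a quick illustration of how Eqs.(1), (3) and (4) fit together, the sketch below computes the mirror point, scene ray, and required surface normal for each pixel from hypothetical per-pixel arrays; the array names and the sign conventions follow the reconstructed equations above and are assumptions, not the authors' code.

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical per-pixel inputs (H x W x 3 arrays): Sl, Vl model the primary optics,
# D is the mirror depth along each ray, and M is the user-specified image-to-scene map.
H, W = 4, 5
Sl = np.zeros((H, W, 3))                          # e.g., perspective optics with pinhole at origin
Vl = normalize(np.random.randn(H, W, 3))          # per-pixel viewing directions
D  = 5.0 + np.random.rand(H, W)                   # mirror depth along each ray
M  = 100.0 * normalize(np.random.randn(H, W, 3))  # desired scene points

Sr = Sl + D[..., None] * Vl                       # Eq. (1): points of reflection
Vr = normalize(Sr - M)                            # Eq. (3): scene rays (sign as reconstructed)
Nr = normalize(Vl + Vr)                           # Eq. (4): required mirror normals (up to sign)
```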

3 Computing the Mirror Shape

We now present our method to compute the mirror shape for a general catadioptric imaging or
projection system. We begin by assuming that the surface normals of the mirror are known.
Later, we relax this constraint and present an iterative linear solution for the mirror shape for
the general case. In this case the normals are unknown and depend on the relative location of the
mirror and the scene.

3.1 Normals Known: A Linear Method

Many catadioptric systems are designed assuming the scene to be very distant (theoretically
at infinity) from the mirror. This assumption is often used when designing imaging systems (see [8, 16, 24] for
examples). In this scenario, the image-to-scene map essentially maps points (u, v) in the image
to the reflected ray direction Vr (u, v). Since the primary optics are known, we can derive the
required mirror surface normals using Eq.(4) (see Fig.3).
From Eq.(1), the tangent vectors to the mirror surface are given by:
Tu = ∂Sr/∂u,   Tv = ∂Sr/∂v.   (5)

These tangents must be orthogonal to the normal in Eq.(4), providing two constraints on the
mirror shape:
∂Sr(u, v)/∂u · Nr(u, v) = 0,
∂Sr(u, v)/∂v · Nr(u, v) = 0.   (6)

Rearranging the terms and substituting Eq.(1) into Eq.(6), we get:

(∂D/∂u Vl + D ∂Vl/∂u) · Nr = −∂Sl/∂u · Nr,
(∂D/∂v Vl + D ∂Vl/∂v) · Nr = −∂Sl/∂v · Nr.   (7)

Now, substituting Eq.(2) into Eq.(7), we get two new constraints:

Σ_{i,j} ci,j [ (Vl · Nr) f'i(u) gj(v) + (∂Vl/∂u · Nr) fi(u) gj(v) ] = −∂Sl/∂u · Nr,
Σ_{i,j} ci,j [ (Vl · Nr) fi(u) g'j(v) + (∂Vl/∂v · Nr) fi(u) gj(v) ] = −∂Sl/∂v · Nr,

where f'i(u) and g'j(v) denote the derivatives of fi(u) and gj(v), respectively. The above
constraints are linear in the spline coefficients ci,j and can therefore be re-written in the form:

Ã c = b̃,   (8)

where the two rows of Ã, the coefficient vector c, and the right-hand side b̃ have entries

Ã = [ ... , (Vl · Nr) f'i(u) gj(v) + (∂Vl/∂u · Nr) fi(u) gj(v) , ... ;
      ... , (Vl · Nr) fi(u) g'j(v) + (∂Vl/∂v · Nr) fi(u) gj(v) , ... ],

c = [ c1,1 , ... , ci,j , ... , cKf,Kg ]ᵀ,

b̃ = [ −∂Sl/∂u · Nr ;  −∂Sl/∂v · Nr ].
Here, c represents the set of unknown coefficients ci,j of the spline that models the mirror shape.
Every point (u, v) in the image provides two constraints. To solve for c, we therefore need at least
Kf Kg / 2 image points at which the normals are known. In practice, we solve for the spline coefficients
c at the resolution of the image, which gives an over-determined system of equations. This linear
system is formed by stacking the multiple per-pixel constraints Ã, b̃ to form A, b respectively, giving:

A c = b.   (9)

The least-squares solution to Eq.(9) for c minimizes the algebraic error in orthogonality between
the surface tangents and the desired normals. It does not, however, explicitly minimize the image
projection error¹. Mirrors computed using the above constraint minimize image errors only when
an exact solution for the mirror shape that implements the prescribed map exists. However, if no
exact solution for the mirror exists, then the computed mirror shape is not guaranteed to minimize
the image projection error. That is, minimizing the above algebraic error is not equivalent to
minimizing image projection errors.

¹Note that for projector systems we must measure the scene projection error instead.

Ideally, we should compute the mirror shape that minimizes error in the image. Generally speaking,
minimizing this metric makes the problem non-linear and unwieldy. For instance, a non-linear
search would require the estimation of hundreds of spline coefficients c. This is intractable and
not guaranteed to be free of local minima. We circumvent this problem by transforming the linear
system in Eqs.(8, 9) into a new weighted system. We compute weights W for every equation at
every image point (see details of this computation in Appendix A). We then compute the mirror
shape by solving this set of weighted linear constraints for c as:

(W A)ᵀ(W A) c = (W A)ᵀ W b.   (10)

So far we have derived a simple linear constraint to determine the mirror shape given the required
surface normals. The surface normals can be precisely known in cases when the scene is assumed
to be infinitely far away from the imaging system. This is a common case in many imaging system
designs, wherein we simply have a map from pixels in the image to viewing directions in the scene.
Also, we have presented a weighting scheme to bias the linear constraints in order to compute a
mirror shape that causes the least image distortion.
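The sketch below illustrates, under stated assumptions, how the two per-pixel constraints of Eqs.(7)-(8) can be assembled and the weighted system of Eq.(10) solved. The helper names and array shapes are hypothetical; only the algebra mirrors the text.

```python
import numpy as np

def pixel_rows(Vl, dVl_du, dVl_dv, dSl_du, dSl_dv, Nr, fu, dfu, gv, dgv):
    """Two constraint rows (and right-hand sides) for one pixel; fu, dfu, gv, dgv hold
    f_i(u), f'_i(u), g_j(v), g'_j(v) evaluated at that pixel."""
    row_u = (Vl @ Nr) * np.outer(dfu, gv) + (dVl_du @ Nr) * np.outer(fu, gv)
    row_v = (Vl @ Nr) * np.outer(fu, dgv) + (dVl_dv @ Nr) * np.outer(fu, gv)
    rhs = np.array([-(dSl_du @ Nr), -(dSl_dv @ Nr)])
    return np.stack([row_u.ravel(), row_v.ravel()]), rhs

def solve_weighted(A, b, W):
    """Eq. (10): weighted normal equations for the stacked system A c = b."""
    WA = W[:, None] * A
    return np.linalg.lstsq(WA.T @ WA, WA.T @ (W * b), rcond=None)[0]
```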

3.2 The General Iterative Method

In the previous section, we derived a simple linear method to compute the mirror shape given a
set of known surface normals. This is the case when the scene is very far from the sensor. We now
extend this method for the general case of arbitrary image-to-scene maps and scenes that can be
close to the sensor.
We first observe that the mirror can lie anywhere within the field of view of the primary optics. Fig. 4
shows two mirrors Sr (1) and Sr (2) at different locations, reflecting the scene point M(u, v) into
the primary optics. As seen, the direction Vr (u, v) along which the scene point M(u, v) is viewed

depends on the mirror location. This in turn influences the surface normals on the mirror and

hence its shape.


We resolve this cyclical dependency using an iterative method. We first approximate the mirror
by a set of facets (say, one for each pixel) on a plane, whose distance from the primary optics is
chosen by the designer. We then estimate the initial set of surface normals using Eqs.(3,4). These
normals are used to linearly solve for the mirror shape using Eq.(10). We now iterate as shown
in Fig. 6, by using the computed mirror shape in each iteration to obtain a better estimate of the
surface normals for the next iteration, until the mirror shape converges. Typically, convergence is
Figure 4: The direction along which a prescribed scene point M(u, v) is viewed by the sensor depends
on the position of the point of reflection on the mirror. As shown here, changing the location of the
reflection point from Sr(1) to Sr(2) alters the reflection direction, thus changing the surface normal.

achieved within 10 iterations. An example of how the mirror shape evolves from the initial planar
guess to its final shape is shown in Fig. 5. In most cases, the mirror shape is already close to its
final shape after the first iteration.
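A compact way to read the iteration of Fig. 6 is as the loop sketched below; the helper functions are placeholders standing in for the steps described above (normal computation via Eqs.(3, 4) and the weighted linear solve of Eq.(10)).

```python
import numpy as np

def design_mirror(Sl, Vl, M, D_init, compute_normals, solve_linear, tol=1e-6, max_iter=10):
    """Iterate between estimating normals and re-solving the linear system until the
    mirror depth D stops changing (typically within about 10 iterations)."""
    D = D_init                                  # e.g., a planar initial guess
    for _ in range(max_iter):
        Sr = Sl + D[..., None] * Vl             # Eq. (1): current mirror estimate
        Nr = compute_normals(Sr, Vl, M)         # Eqs. (3, 4): desired surface normals
        D_new = solve_linear(Sl, Vl, Nr)        # weighted linear solve of Eq. (10)
        if np.max(np.abs(D_new - D)) < tol:     # convergence test on the change in shape
            return D_new
        D = D_new
    return D
```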

Figure 5: Convergence of the mirror shape designed for the skewed plane projection (Section 5.7).
(a) Initial planar guess for the mirror, with facets having different surface normals. (b) After the first
iteration, the mirror shape is already close to the final solution. (c) After five iterations, the mirror
changes by a negligible amount, showing fast convergence. (d) The final mirror shape obtained after 10
iterations.

4 Caustics of Catadioptric Systems

The catadioptric systems our framework can design are not restricted to single viewpoint systems
alone. Due to the generality of the framework, the designed system may actually possess a locus
of viewpoints, described by its caustic [26]. The caustic is essentially the envelope of all the scene
rays imaged by the system.
Caustics are important as they completely describe the geometry of the catadioptric system. With
respect to imaging systems, they describe the effective viewpoint locus. For projector systems,
caustics describe the effective projection model of the catadioptric projector (see Fig. 2).
Our framework makes it very easy to compute the caustic of arbitrary imaging and projector
systems. In Appendix B, we present details on computing caustics for general imaging and
projector systems using our framework.

5 Example Mirror Designs

We now use our method to compute the mirror shapes for different catadioptric imaging and
projector systems. These include previously designed systems as well as two novel designs. The
previously designed systems we present include the parabolic and elliptic reflector based single
viewpoint systems [21, 1], the equi-angular system [8], the plane rectifying system [16] and the
cylindrical panorama imaging system [15, 24]. The new systems we present include a novel imaging
system for tele-conferencing applications and a new catadioptric projector design.

[Flow chart: the inputs are the scene-image map, the primary optics model, and an initial set of mirror
depths (planar by default); the method computes the desired surface normals, estimates the mirror
shape, and repeats until the change in mirror shape falls below a threshold, then outputs the mirror
shape, spline parameters, and the errors in image pixels and scene points.]

Figure 6: Flow-chart for the spline-based method to compute mirror shapes for general catadioptric
imaging systems. The user specifies a map between pixels in the image and corresponding scene points,
as well as the geometry of the primary optics (shown in Figs. 2(a,b,c)). Using these as inputs, our method
computes the required mirror shape automatically.

Figure 7: Results of applying our general mirror design method to the para-catadioptric imaging system
[21]. (a) The mirror shape computed using our approach is identical to the ground-truth solution
(parabolic mirror). (b) The 3D shape of the mirror, shown for visualization. (c) The mirror design is an
accurate fit to ground truth and produced no image projection error. (d) The caustic, essentially a very
compact point cloud, conforms to the required single viewpoint.

5.1 Parabolic Mirror based Single Viewpoint System

We computed the mirror for a para-catadioptric [21] imaging system using a telecentric lens for
the primary optics (orthographic projection). The theoretical mirror shape for this system is
known to be parabolic. The mirror was designed by specifying an image-to-scene map such that
all the imaged rays pass through a virtual viewpoint located 1cm below the apex of the reflector.

As seen in Fig. 7(a), the profile of the computed mirror using our method matches precisely with
the analytic parabolic profile. We also show the three dimensional shape of the computed mirror
in Fig. 7(b). Due to the precision of the recovered mirror shape, errors in terms of the reflected
ray directions are practically zero. The small perturbations in error we observed (on the order of
10⁻⁸) were due to numerical precision issues; therefore, we do not present an actual error plot
in Fig. 7(c). Finally, the caustic surface (viewpoint locus) for this system was computed and found
to be a very compact cluster of points (essentially a single viewpoint).

5.2 Elliptical Mirror based Single Viewpoint System

The previous design used a telecentric lens for its primary optics. However, one can also use
perspective lens based primary optics to design single viewpoint systems (see [1] for details).
Depending on the location of the mirror apex with respect to the effective viewpoint, the mirror
is either hyperbolic or elliptical. If the viewpoint is behind the mirror, we expect a hyperbolic
mirror. Conversely, if the viewpoint lies in front of the mirror, we expect an elliptical mirror. We
now present the latter design.
The perspective lens based primary optics were assumed to lie 10 inches away from the mirror
apex. The effective viewpoint was constrained to lie behind the entrance pupil of the perspective
primary optics. Thus, the entrance pupil of the primary sensor lies between the mirror surface and
the effective viewpoint. The mirror surface computed using our general method closely matches
the ground truth (see Fig. 8(a)). The computed mirror surface is also shown in Fig. 8(b). As
shown in Fig. 8(c), the errors in the resulting viewing directions of this sensor are quite small. The
effective viewpoint locus (see Fig. 8(d)), although not a single point, is still small in extent.

5.3 Equi-Angular Sensors

We now present our results on computing the mirror shape for the equi-angular imaging system
developed by Chahl and Srinivasan [8]. This mirror provides a linear map between the scene ray
and the reflected ray entering the primary optics, controlled by a single parameter, the angular
magnification α. The authors derived a family of mirror surfaces that conform to the required
constraint by solving a set of PDEs. We tested our method on various values of α = 3, 5, and 7, of
which we present results for α = 5. The analytic solution in this case becomes:

x (x² − 3y²) = x₀³.   (11)

Figure 8: Result of applying our general mirror design method to the elliptical mirror based single
viewpoint catadioptric imaging system [1]. (a) The mirror shape computed using our approach is identical
to the ground-truth solution (elliptic mirror). (b) The 3D shape of the mirror, shown for visualization.
(c) The errors in viewing direction at every pixel are very small, almost negligible (maximum error
0.0028 degrees). (d) We computed the caustic; although not a single point, it is quite small in extent.

As shown in Fig. 9(a), the plot of the mirror computed using our general technique closely matches
the ground truth mirror shape (see Eq.(11)). The computed mirror shape in three dimensions is
shown in Fig. 9(b). The errors in viewing direction (scene ray) across the entire image, plotted
in degrees, are also shown to be negligible (see Fig. 9(c)). The regular pattern in the error plot is
conjectured to be due to the piecewise spline modeling. This catadioptric system does not have
a single viewpoint but rather a locus of viewpoints described by its caustic. We computed the
caustic using our general framework, as shown in Fig. 9(d).

Figure 9: Results of applying our general mirror design method to the equi-angular imaging system
[8]. (a) The mirror shape computed using our approach is identical to the ground-truth solution (see
Eq.(11)). (b) The 3D shape of the mirror is also shown for visualization. (c) The mirror design is an
accurate fit to ground truth and produced negligible image errors (maximum error 0.0044 degrees). (d)
The caustic was computed using our general framework; this viewpoint locus had not been computed
until now.

Figure 10: Results of applying our mirror design method to the plane-rectifying imaging system [16]. (a)
The setup, showing the mirror reflecting the ground plane into the primary optics. The image-to-scene
map is defined in Eq.(12). (b) The 3D shape of the mirror is shown for visualization. (c) The computed
mirror is very accurate and produces almost no image error (maximum 0.03 pixels, RMS 0.01 pixels). (d)
The caustic of this system, computed using our general framework, had also not been computed until now.

5.4 Plane Rectifying Imaging System

Consider the scenario of imaging the ground plane with some imaging system. Most imaging
systems, including those designed above, image the ground plane in a non-linear fashion. Thus,
distances in the image are not linearly related to distances on the ground plane. Such a linear
relationship is, however, desirable in applications such as autonomous robot navigation. We now
present a catadioptric imaging system, called the plane rectifying system [16], that achieves this goal.
As shown in Fig. 10(a), the mirror faces down towards the ground plane. The primary optics are
located below the mirror facing upwards. We know the ground plane to be precisely 34 inches
below the mirror, as in [16]. Assuming the mirror apex to lie at the origin and using a scale factor of
54, we define the image-to-scene map as:

X = 54u,   Y = 54v,   Z = −34,   (12)

where (u, v) are the image coordinates and (X, Y, Z) the scene coordinates.
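For concreteness, the image-to-scene map of Eq.(12) can be specified as a simple function of the pixel coordinates, as sketched below; the sign of Z follows the reconstruction of Eq.(12) above.

```python
import numpy as np

def plane_rectifying_map(u, v):
    """Eq. (12): map pixel (u, v) to a point on the ground plane 34 inches below the
    mirror apex, with image distances scaled linearly by 54."""
    return np.stack([54.0 * u, 54.0 * v, np.full_like(u, -34.0)], axis=-1)
```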


Using the iterative method described in Section 3.2, we computed the mirror shape as shown in
Fig. 10(b). The errors in image projections of scene points are negligible as shown in Fig. 10(c).
Furthermore, the caustic of this system, shown in Fig. 10(d), was computed using our framework.

5.5 Cylindrical Panorama Imaging System

The omnidirectional catadioptric cameras designed above acquire a panoramic image but need
computational resources to unwarp it into a panorama. Hicks and Srinivasan [15, 24] independently presented a design of a catadioptric camera that acquires an optically unwarped panoramic
image.
In Fig. 11(a) we show a schematic to describe the desired imaging system. A telecentric lens is
used as the primary optics of the catadioptric system. The mirror then reflects the surrounding
scene into the image directly to form a cylindrical panorama. Every image pixel (u, v) is mapped
directly to the required viewing direction Vr (u, v) of a panorama assuming the scene to be at
infinity. The horizontal field of view is a full 360 degrees, while the vertical field of view is 60
degrees.
Since the scene is assumed to lie at infinity, it is possible to compute the mirror shape using a
single iteration of our method. This shape is shown in Fig. 11(b). Only for certain aspect ratios
of the image does this image-to-scene map possess an exact mirror solution. The aspect ratio we
chose was close enough to provide very small image distortions, as shown in Fig. 11(c). Note that we show the error in
the angle of the scene-ray being imaged. Also, we computed the caustic for this new and complex
catadioptric system using our general framework (see Fig. 11(d)).

Figure 11: Results of applying our general mirror design method to the cylindrical panoramic imaging
system [15, 24]. (a) The setup shows the mirror reflecting the entire panorama around the system into
the primary optics to directly form the cylindrical panorama in the image. (b) The 3D shape of the
mirror shown here closely resembles those obtained previously [15]. (c) The computed mirror exhibits
minimal viewing direction errors across the image (maximum 0.65 degrees). (d) The caustic was again
computed using our general method; the caustic of this complex catadioptric system was not known
until now.

5.6 Conference Table Rectification

We now present two new designs for catadioptric systems. Consider the scenario of imaging people
seated at an elliptic conference table. We would like to display the acquired image directly, such
that the curved edge of the table appears straight. Thus, all the people would appear as if seated
along a desk. In general, we might wish to acquire images or video with some pre-determined
warp, so as to display them directly without the use of any computational resources.

Figure 12: A novel imaging system designed using our mirror design method. (a) The mirror shape is
chosen so as to image people seated around a specified conference table such that they appear as if seated
along a straight bench. In theory, a mirror that implements this map need not exist. (b) We therefore
used the weighted scheme (see Appendix A for details) to compute the optimal mirror shape. (c) Since
an exact mirror does not exist, the computed mirror exhibits small errors as shown. (d) Again, we used
our framework to also compute the caustic surface.
We call the sensor used in such a conference table scenario the conference table rectifying
sensor. The setup of the mirror and table for the rectifying system is shown in Fig. 12(a).
The table consists of a semi-circular section of radius 30′′ with two extended straight sides, each 30′′
long. The camera is assumed to lie roughly 30′′ behind this table, facing away from the scene into
a mirror.
The computed mirror shape is shown in Fig. 12(b). Note that no mirror shape exists that
provides the required image-to-scene map exactly. The computed mirror shape approximates such a map
by minimizing image projection errors. It was therefore important to use the weighting scheme
discussed in Appendix A to design the mirror. The resulting image projection errors are shown
in Fig. 12(c). This imaging system also does not have a single effective viewpoint but rather a
locus of viewpoints shown in Fig. 12(d).

5.7 Skewed Plane Mapping

We now present a catadioptric projector design geared towards avoiding occlusions. Typical front
projection systems suffer from occlusions due to the user coming between the projector and the
display surface. Techniques to reduce such occlusion artifacts [25] work only to a limited extent
and produce artifacts at the shadow edges.
We propose to eliminate occlusions by positioning the projector very close to the display surface.
However, in such a configuration, the image needs to be unwarped prior to projection. Digitally
warping the image leads to image quality degradation due to re-sampling. Furthermore, projecting
onto large areas from such proximity is impossible using conventional projectors due to their
restricted field of view. We therefore have to optically warp the image using catadioptrics.
Referring to Fig. 13(a), we note that the display surface lies 1′ below and away from the projector
and spans a 10′ × 10′ square region. The projector's image plane was assumed to be parallel to
the XY-plane. The computed mirror shape using the weighted method, its associated scene
projection errors, and its caustic are shown in Fig. 13(b,c,d). As expected from the setup, the mirror
is symmetric about a single plane. Note that, in spite of the projector being only a foot away
from a large screen, the errors in projection are negligible.
A point to note is that the same method was used to estimate all the mirror shapes; we could
compute single viewpoint as well as non-single viewpoint systems just as easily. This makes
our method truly flexible and applicable to a large space of mirror design problems.

Figure 13: We used our mirror design method to also design a catadioptric projection system; the same
method can be directly applied to projectors. Here we design an occlusion-minimizing projector placed
very close to the wall. (a) We design a catadioptric projector system located 1′ above and 1′ away from
a 10′ × 10′ screen. (b) The computed mirror shape is shown for visualization. (c) Projection errors were
measured in the scene (on the display surface) and are shown to be negligible (maximum 0.02′′, RMS
0.014′′). (d) The point of projection is no longer a point but rather a locus called a caustic. We computed
this locus as well, using our general framework.

6 Conclusions

In this paper we presented a framework to design general catadioptric imaging and projector systems. Such systems are assumed to consist of some known primary optics and an unknown mirror.
Using our framework, the designer has complete freedom in designing arbitrary catadioptric systems. One can define any desired imaging geometry, by specifying a map from pixels in the image
to points in the scene, which we called the image-to-scene map. Our method then computes the
optimal mirror shape that implements the desired image-to-scene map.
A major advantage of our method is that it is flexible enough to be used with all possible models
for the primary optics. These include the perspective, orthographic and the generalized imaging
model [14]. Furthermore, the same method can be used to produce a very large class of mirror
designs, including rotationally symmetric, asymmetric, single viewpoint, and non-single viewpoint
systems.
We proposed the use of tensor-product splines to model mirrors. These are locally smooth as
well as flexible enough to model arbitrary mirror shapes. Using this framework, we presented a
simple linear solution to the mirror shape for a common class of image-to-scene maps, assuming
the scene to be at infinity. We also presented a generalized iterative method for arbitrary image-to-scene maps. Furthermore, the linear constraints were weighted in order to determine the mirror
shape that minimizes image or scene projection error. We also used our framework to compute
the caustics of such general catadioptric systems. The caustics completely describe the geometry
of the imaging or projection system. The caustic is needed for all vision tasks including stereo
and structure from motion.
We finally presented results by designing many catadioptric systems using our general method.
The systems included previously designed imaging systems for comparison with ground truth when
applicable. Also, we presented a new imaging system with applications to video conferencing as
well as a new projector system for occlusion elimination. For each designed system we also
computed its caustic surface.
We believe that the method we proposed is a powerful tool that is general and throws wide open
the space of projector and imaging systems that can be designed.

A Weighting for Image/Scene Error Metric

The linear constraints derived in Eq.(9) measured an algebraic error between the surface tangents
(Tu , Tv ) and the known surface normal Nr . However, in designing an imaging system we must
determine the mirror that minimizes image distortion rather than this algebraic error in surface normals. Similarly, for catadioptric projector systems, the computed mirror must minimize

projection errors on the display surface.


In general, the computed mirror shape Sr(u, v) only approximates the desired image-to-scene map
M(u, v). We denote the map actually achieved using the computed mirror by M̂(u, v). Then, the
distortion (projection error) in the scene, ∆(u, v), for projector systems is:

∆(u, v) = |M̂(u, v) − M(u, v)|.   (13)

Similarly, for imaging systems the image projection error is given by:

∆I(u, v) = |M⁻¹(M̂(u, v)) − (u, v)|.   (14)

Ideally, our algorithm should minimize these errors.


The above metric is extremely non-linear and intractable; recall that we potentially have to
estimate hundreds of spline coefficients. We therefore linearize the metric and solve for the
optimal² mirror shape. The linearization is achieved by weighting the constraints in Eq.(9) as
used in Eq.(10).

²By optimal we mean the mirror that minimizes the RMS image distortion or scene projection error.

Every image point provides two constraints on the mirror shape (see Eq.(7)). Our goal is to
compute two weights wu and wv to bias each of the constraint equations. We denote the weights
for projector systems by w∆,u, w∆,v, and for imaging systems by wI,u, wI,v, respectively. These
weights scale the errors in Eq.(9) so that they approximate the scene error ∆ and the image error
∆I at a point.

A.1 Relating Weights to Mirror Normal

Recall that Eq.(7) expresses the error in the direction of the mirror surface normals. To relate
this error to the image or scene error we consider small perturbations of the normal vector at a
point on the mirror. A change in the mirror normal Nr vector results in a change in the scene-ray
Vr direction. In turn this causes the scene-ray to intersect the scene at a point other than the
desired point. If we consider a perturbation of the normal vector from the desired mirror normal
vector Nr(u, v), then the displacement in the scene is the scene error ∆. For imaging systems, we
map this error onto the image to obtain the image error ∆I.
Revisiting Eq.(7), we observe that for an orthographic primary sensor, both the terms ∂Vl(u, v)/∂u
and ∂Vl(u, v)/∂v vanish. This implies that, independent of the mirror shape specified by D(u, v),

the tangent vectors Tu and Tv lie in planes normal to ωu and ωv respectively, where:

ωu = Vl × ∂Sl/∂u,   ωv = Vl × ∂Sl/∂v.   (15)

Similarly, in the case of a perspective primary sensor, the partial derivatives of Sl(u, v) vanish. In
this case, the tangent vectors Tu and Tv are normal to:

ωu = Vl × ∂Vl/∂u,   ωv = Vl × ∂Vl/∂v.   (16)

Thus, the tangent vectors lie in two unique planes, independent of the mirror shape. Furthermore,
the scene or image error due to perturbations of the normal vector can also be decoupled into two
independent components. In particular, the errors given by the dot products of Nr with Tu and Tv
depend only on the projections of Nr onto the planes orthogonal to the vectors ωu and ωv, denoted
Πu(Nr) and Πv(Nr), respectively. We therefore have:

Tu · Nr = |Tu| |Πu(Nr)| cos θu,   (17)
Tv · Nr = |Tv| |Πv(Nr)| cos θv,   (18)

where θu and θv are the angles between the tangent vectors and the projected normal vectors
Πu(Nr) and Πv(Nr), respectively. We only present the analysis for errors related to Tu, as the
analysis of errors related to Tv is exactly the same.
For small perturbations in θu, we have θu = π/2 + δθu, where δθu ≈ 0. Thus, Eq.(17) reduces to:

Tu · Nr ≈ −|Tu| |Πu(Nr)| δθu.   (19)

The weight wu that relates the error in Eq.(19) to the scene or image error must satisfy
wu |Tu| |Πu(Nr)| δθu ≈ δε, where the error ε is either the scene error ∆ or the image error ∆I.
Therefore we define the weights as:

wu = (1 / |Tu|) (1 / |Πu(Nr)|) (∂ε/∂θu),
wv = (1 / |Tv|) (1 / |Πv(Nr)|) (∂ε/∂θv).

Notice that the above weights depend on three components: (1) the length of the tangent vector
|Tu|, (2) the projection of Nr onto the plane orthogonal to ωu, and (3) the dependence of the error
on the angle, ∂ε/∂θu.
The magnitude of |Tu | depends on the spline parameters that are yet to be determined. However,

beyond the first iteration, the changes in the computed mirror shapes are negligible (see Section 3).
We therefore remove the dependence on |Tu| iteratively, by using the value of |Tu| computed in one
iteration in the next iteration. Also, we compute |Πu(Nr)| using Eqs.(15, 16) as:

|Πu(Nr)| = | Nr − ((ωu · Nr) / |ωu|²) ωu |.

We must now determine the dependence of the error on the angle, ∂ε/∂θu.

A.2 Scene Error from Perturbation of Mirror Normal

We first relate the angle θu to the displacement in the scene using ∂ε/∂θu ≈ d∆/dθu. We note
that a perturbation δθu represents a rotation of the normal about the axis ω̂u. Let G(θu) be the
corresponding one-parameter family of rotations. The infinitesimal rotation of the normal vector is:

d/dθu [G(θu) Nr] = ω̂u × Nr.   (20)

The change in the reflected ray due to this rotation is:

δu = d/dθu [ Vl − 2 (Vl · G(θu)Nr) G(θu)Nr ]
   = −2 (Vl · (ω̂u × Nr)) Nr − 2 (Vl · Nr) (ω̂u × Nr).

The change in the reflected ray produces a change in the projected scene point. This change is
computed by projecting δu onto the tangent plane at the scene point and scaling by the distance
DM of the mirror to the scene. If the unit normal to the surface of the scene is NM, then the
infinitesimal vector displacement in the scene is:

δ∆u = (δu − (NM · δu) NM) DM.   (21)

Similarly, the error corresponding to Tv is:

δ∆v = (δv − (NM · δv) NM) DM.   (22)

For a projection system, the changes in error d∆/dθu and d∆/dθv are simply the lengths |δ∆u| and
|δ∆v|, respectively.

A.3 Mapping Errors to the Image

For an imaging system we must compute weights that represent image error rather than scene
error. We must therefore inverse-map the scene error component vectors δ∆u and δ∆v onto the
image plane. The image error vectors [a, b] and [c, d] are related to the scene error as:

δ∆u = a Mu + b Mv,   δ∆v = c Mu + d Mv,

where M is the image-to-scene map, Mu = ∂M/∂u and Mv = ∂M/∂v.
Assuming the mapping M is non-degenerate, these equations can be solved, since the scene
displacement lies in the tangent plane to the scene, spanned by Mu and Mv. To solve these
equations we take the dot products with Mu and Mv to get four scalar equations in four unknowns:

δ∆u · Mu = a |Mu|² + b Mv·Mu,   δ∆u · Mv = a Mu·Mv + b |Mv|²,
δ∆v · Mu = c |Mu|² + d Mv·Mu,   δ∆v · Mv = c Mu·Mv + d |Mv|².

Re-writing these constraints in matrix form, we solve for a, b, c and d as:

[ a  c ]   [ |Mu|²   Mv·Mu ]⁻¹ [ δ∆u·Mu   δ∆v·Mu ]
[ b  d ] = [ Mu·Mv   |Mv|² ]   [ δ∆u·Mv   δ∆v·Mv ].   (23)

The change in the image error d∆I/dθu is the length of the image displacement associated with
the change in angle. Using Eq.(23) we see that:

wI,u = (λ |Tu| |Πu(Nr)|)⁻¹ √(a² + b²),   (24)
wI,v = (λ |Tv| |Πv(Nr)|)⁻¹ √(c² + d²),   (25)

where λ is a normalization factor given by the sum of all the weights. Referring to Eq.(10), the
weight matrix W is formed from the per-pixel weights [wI,u, wI,v]ᵀ.
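The per-pixel weight computation of Eqs.(23)-(25) can be summarized as in the sketch below. It assumes the per-pixel vectors (the plane normals ωu, ωv, the mirror normal Nr, the scene displacements δ∆u, δ∆v, and the map derivatives Mu, Mv) are already available; the |Tu|, |Tv| factors and the global normalization λ described above are left out for brevity.

```python
import numpy as np

def image_weights(omega_u, omega_v, Nr, dDelta_u, dDelta_v, Mu, Mv):
    """Unnormalized image-error weights for one pixel (Eqs. 23-25, up to the |Tu|, |Tv|
    factors and the global normalization by the sum of all weights)."""
    def proj_norm(omega, n):                    # |Pi(Nr)|: Nr projected onto the plane normal to omega
        p = n - (omega @ n) / (omega @ omega) * omega
        return np.linalg.norm(p)

    G = np.array([[Mu @ Mu, Mv @ Mu],
                  [Mu @ Mv, Mv @ Mv]])          # Gram matrix of the scene tangent basis
    rhs = np.array([[dDelta_u @ Mu, dDelta_v @ Mu],
                    [dDelta_u @ Mv, dDelta_v @ Mv]])
    (a, c), (b, d) = np.linalg.solve(G, rhs)    # Eq. (23): image-error components [a, b], [c, d]
    w_u = np.hypot(a, b) / proj_norm(omega_u, Nr)
    w_v = np.hypot(c, d) / proj_norm(omega_v, Nr)
    return w_u, w_v
```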

B Computing Caustics

We now present a simple method to compute the caustics of catadioptric systems designed using
the proposed spline based method. The use of splines to model the mirror shape also simplifies
the estimation of its caustic surface. We begin by deriving the caustic surface for the general
imaging system, and then describe ways to compute it numerically.
The caustic can be defined in terms of the reflector surface Sr(u, v) and the set of incoming light
rays Vr(u, v): it lies along Vr(u, v) at some distance rc from the point of reflection Sr(u, v), given by:

L(u, v, rc) = Sr(u, v) + rc Vr(u, v).   (26)

In order to determine rc we employ the Jacobian method [7], constraining the determinant of
the Jacobian of Eq.(26) to vanish:

det | ∂Sr(x)/∂u + rc ∂Vr(x)/∂u   ∂Sr(y)/∂u + rc ∂Vr(y)/∂u   ∂Sr(z)/∂u + rc ∂Vr(z)/∂u |
    | ∂Sr(x)/∂v + rc ∂Vr(x)/∂v   ∂Sr(y)/∂v + rc ∂Vr(y)/∂v   ∂Sr(z)/∂v + rc ∂Vr(z)/∂v | = 0,   (27)
    | Vr(x)(u, v)                Vr(y)(u, v)                Vr(z)(u, v)               |

where the superscripts (x), (y) and (z) denote the X-axis, Y-axis and Z-axis components,
respectively. Since the first two rows are linear in rc and the third is independent of it, Eq.(27) is
a quadratic in rc; substituting its roots into Eq.(26) yields the caustic points.
Note that we do not know the analytic forms of Sr(u, v) and Vr(u, v). To compute their partial
derivatives, we first fit splines to Sr(u, v) and Vr(u, v) and then compute the required partial
derivatives numerically. Thus, using the spline framework, we can easily determine the caustic
of any general projector or imaging system.
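A minimal numerical version of this procedure is sketched below, assuming Sr and Vr have been sampled on the image grid (for instance from the fitted splines) as H x W x 3 arrays; partial derivatives are taken by finite differences rather than from the spline coefficients, which is an implementation shortcut, not the only option.

```python
import numpy as np

def caustic(Sr, Vr):
    """Caustic points from Eqs. (26, 27): solve the quadratic in rc obtained by expanding
    the 3 x 3 determinant whose rows are dSr/du + rc dVr/du, dSr/dv + rc dVr/dv, and Vr."""
    dSu, dSv = np.gradient(Sr, axis=1), np.gradient(Sr, axis=0)
    dVu, dVv = np.gradient(Vr, axis=1), np.gradient(Vr, axis=0)

    def triple(p, q, r):                         # scalar triple product p . (q x r)
        return np.einsum('...i,...i->...', p, np.cross(q, r))

    a = triple(dVu, dVv, Vr)                     # rc^2 coefficient
    b = triple(dSu, dVv, Vr) + triple(dVu, dSv, Vr)   # rc coefficient
    c = triple(dSu, dSv, Vr)                     # constant term
    disc = np.sqrt(np.maximum(b**2 - 4*a*c, 0.0))
    rc = (-b + disc) / (2*a)                     # one root; the other is (-b - disc) / (2*a)
    return Sr + rc[..., None] * Vr               # Eq. (26): caustic surface
```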

C Practical Issues with Solving Large Linear Systems

The size of the matrix AᵀA in Eq.(10) is (Kf Kg) × (Kf Kg), where Kf and Kg are the numbers of
spline coefficients in u and v. Typically this system of equations can be very large, thereby inhibiting
memory-efficient and fast computation. For instance, using a spline with 30 coefficients along each
dimension produces a matrix of size 900 × 900. However, as seen in Fig. 14, the matrix AᵀA is
sparse. We therefore use a sparse representation for the matrices to efficiently compute the spline
parameters. Also, to solve for the spline coefficients we cannot use standard techniques such as
singular value decomposition due to the large non-sparse matrices they form. Hence we use iterative
methods such as the BiConjugate Gradients method found in Matlab.

Figure 14: A binary image view of the matrix AᵀA, showing the non-zero elements as black points; 82%
of the entries are zero. Most of this image is white, indicating a majority of zero-valued elements. We
therefore use a sparse representation for our computations. This not only optimizes the use of memory,
but also facilitates faster computations. This memory optimization is critical for large-scale problems,
where we need to compute the mirror shapes at high resolution.
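As an illustration of the sparse iterative solve described above (and visualized in Fig. 14), the sketch below sets up the weighted normal equations with SciPy's sparse BiConjugate Gradients solver; the function and variable names are placeholders, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import bicg

def solve_sparse(A, b, W):
    """Solve the weighted normal equations of Eq. (10) with a sparse iterative method."""
    WA = csr_matrix(W[:, None] * A)              # weighted constraint matrix, stored sparsely
    lhs = (WA.T @ WA).tocsr()                    # sparse (Kf*Kg) x (Kf*Kg) normal matrix
    rhs = WA.T @ (W * b)
    c, info = bicg(lhs, rhs)                     # BiConjugate Gradients; info == 0 means converged
    if info != 0:
        raise RuntimeError("bicg did not converge")
    return c
```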

References
[1] S. Baker and S. K. Nayar. A Theory of Catadioptric Image Formation. In Proc. ICCV, pages 35–42, 1998.
[2] R. Benosman, E. Deforas, and J. Devars. A New Catadioptric Sensor for the Panoramic
Vision of Mobile Robots. In Proc. OMNIVIS, 2000.

[3] S. Bogner. Introduction to Panoramic Imaging. In IEEE SMC Conference, volume 54, pages 3100–3106, 1995.
[4] M. Born and E. Wolf. Principles of Optics. Pergamon Press, 1965.
[5] A. M. Bruckstein and T. J. Richardson. Method and System for Panoramic Viewing with Curved Surface Mirrors. US Patent, 1999.
[6] A. M. Bruckstein and T. J. Richardson. Omniview Cameras with Curved Surface Mirrors. In Proc. OMNIVIS, pages 79–84, 2000.
[7] D. G. Burkhard and D. L. Shealy. Flux Density for Ray Propagation in Geometrical Optics. Journal of the Optical Society of America, 63(3):299–304, 1973.
[8] J. Chahl and M. Srinivasan. Reflective surfaces for panoramic imaging. Applied Optics, 36(31):8275–8285, 1997.
[9] S. Cornbleet. Microwave and Geometric Optics. Academic Press, 1994.
[10] S. Derrien and K. Konolige. Approximating a single viewpoint in panoramic imaging devices. In International Conference on Robotics and Automation, pages 3932–3939, 2000.
[11] S. Gachter. Mirror Design for an Omnidirectional Camera with a Uniform Cylindrical Projection when using the SVAVISCA Sensor. Technical Report CTU-CMP-2001-03, Czech Technical University, 2001.
[12] J. Gaspar, C. Decco, J. Okamoto Jr., and J. Santos-Victor. Constant Resolution Omnidirectional Cameras. In Proc. OMNIVIS, page 27, 2002.
[13] A. Gershun. Svetovoe Pole (The Light Field, in English). Journal of Mathematics and Physics, XVIII:51–151, 1939.
[14] M. Grossberg and S. K. Nayar. A General Imaging Model and a Method for Finding its Parameters. In Proc. ICCV, pages 108–115, 2001.
[15] A. Hicks. Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging. Technical report, Drexel University, Computer Science, 2002.
[16] R. A. Hicks and R. Bajcsy. Catadioptric Sensors that Approximate Wide-Angle Perspective Projections. In Proc. CVPR, pages I:545–551, 2000.
[17] R. A. Hicks and R. K. Perline. Geometric distributions for catadioptric sensor design. In Proc. CVPR, pages I:584–589, 2001.
[18] R. A. Hicks and R. K. Perline. Equi-areal Catadioptric Sensors. In Proc. OMNIVIS, page 13, 2002.
[19] P. L. Manly. Unusual Telescopes. Cambridge University Press, 1995.
[20] F. M. Marchese and D. G. Sorrenti. Mirror Design of a Prescribed Accuracy Omni-directional Vision System. In Proc. OMNIVIS, page 136, 2002.
[21] S. K. Nayar. Catadioptric Omnidirectional Cameras. In Proc. CVPR, pages 482–488, 1997.
[22] V. N. Peri and S. K. Nayar. Generation of Perspective and Panoramic Video from Omnidirectional Video. In Proc. DARPA-IUW, pages I:243–245, 1997.
[23] D. Rees. Panoramic television viewing system. United States Patent No. 3,505,465, 1970.
[24] M. V. Srinivasan. New Class of Mirrors for Wide-Angle Imaging. In Proc. OMNIVIS, 2003.
[25] R. Sukthankar, T. J. Cham, and G. Sukthankar. Dynamic Shadow Elimination for Multi-Projector Displays. In Proc. CVPR, pages II:151–157, 2001.
[26] R. Swaminathan, M. D. Grossberg, and S. K. Nayar. Caustics of Catadioptric Cameras. In Proc. ICCV, pages II:2–9, 2001.
[27] Y. Yagi and M. Yachida. Real-Time Generation of Environmental Map and Obstacle Avoidance Using Omnidirectional Image Sensor with Conic Mirror. In Proc. CVPR, pages 160–165, 1991.
[28] K. Yamazawa, Y. Yagi, and M. Yachida. Omnidirectional imaging with hyperboloidal projection. In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1029–1034, 1993.
