
LECTURE 11,12

CAMERA MODELS

CSE 320
GRAPHICS PROGRAMMING
Where are we?
Rendering Pipeline
Modeling Transformation -> Illumination -> Viewing Transformation -> Projection -> Clipping -> Rasterization -> Display
Pinhole Camera
Ingredients (www.kodak.com):
• Box
• Film
• Hole Punch
Results (www.pinhole.org):
• Pictures!
Pinhole Camera
A non-zero sized hole admits multiple rays of projection from each scene point onto the film plane.
Pinhole Camera
A theoretical pinhole admits exactly one ray of projection from each scene point onto the film plane.
Pinhole Camera
Field of View
The focal length (the pinhole-to-film-plane distance) and the size of the film plane determine the field of view.
Moving the Film Plane
Varying the distance to the film plane (d1 vs. d2): what does this do?
It changes the field of view: a shorter distance widens it, a longer distance narrows it.
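The relationship sketched above can be computed directly. A minimal sketch in plain Python (the function name and the 36 mm film size are illustrative, not from the slides): half the film subtends an angle atan((h/2)/f) at the pinhole, so fov = 2·atan((h/2)/f), and moving the film plane farther back narrows the view.

```python
import math

def field_of_view(film_size, focal_length):
    """Full field-of-view angle (radians) of a pinhole camera:
    half the film plane subtends film_size/2 at distance focal_length."""
    return 2.0 * math.atan((film_size / 2.0) / focal_length)

# Doubling the distance to the film plane (d1 -> d2) narrows the field of view.
fov_d1 = field_of_view(film_size=36.0, focal_length=50.0)
fov_d2 = field_of_view(film_size=36.0, focal_length=100.0)
```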
Adding a Lens
• Pinhole camera has a small aperture (lens opening)
– It’s hard to get enough light to expose the film
• Lens permits larger apertures
• Lens permits changing distance to film plane without actually moving the film plane
Computer Graphics Camera
We use:
• Center of Projection (COP)
• Projection Plane
Moving the COP
Perspective vs. Orthographic Views
Perspective: COP at a finite distance
When the COP is at infinity: Orthographic View
Multi-point Perspective
One-point Perspective
• One Vanishing Point
Two-point Perspective
• Two Vanishing Points
http://www.sanford-artedventures.com/create/tech_2pt_perspective.html
Perspective Projection
Our camera must model perspective.
How tall should this bunny be on the projection plane, as seen from the COP?
Perspective Projection
The geometry of the situation is that of similar triangles. View from above:
The COP sits at the origin (0, 0, 0), the view plane is at distance d along Z, and the point P = (x, y, z) projects to x’ on the view plane.
What is x’?
Perspective Projection
Desired result for a point [x, y, z, 1]T projected onto the view plane (by similar triangles):
x’ = x / (z/d),  y’ = y / (z/d),  z’ = d

What could a matrix look like to do this?
A Perspective Projection Matrix
Answer:
  [ 1  0  0    0 ]
  [ 0  1  0    0 ]
  [ 0  0  1    0 ]
  [ 0  0  1/d  0 ]
A Perspective Projection Matrix
Example:
M [x, y, z, 1]T = [x, y, z, z/d]T

Or, in 3-D coordinates (after dividing by w = z/d):
( x/(z/d), y/(z/d), d )
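A minimal numeric sketch of this projection (the function name is illustrative): apply the matrix whose last row is [0, 0, 1/d, 0] to a homogeneous point, then divide by the resulting w.

```python
def persp_project(point, d):
    """Apply the perspective matrix (last row [0, 0, 1/d, 0]) to a
    homogeneous point [x, y, z, 1], then perform the divide by w."""
    x, y, z, w = point
    # M * p = [x, y, z, z/d]
    xp, yp, zp, wp = x, y, z, z / d
    return (xp / wp, yp / wp, zp / wp)   # = (x/(z/d), y/(z/d), d)

# A point twice as far away projects half as high: perspective foreshortening.
p_near = persp_project((2.0, 4.0, 10.0, 1.0), 5.0)
p_far = persp_project((2.0, 4.0, 20.0, 1.0), 5.0)
```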
Homogeneous Coordinates
The role of w in (x, y, z, w)
• All 3-D points are described with a four vector
• All 3-D transformations are represented with a 4x4 matrix
• When projected to screen coordinates (rasterization)
– x, y, and z are divided by the point’s w value
• This allows us to perform perspective foreshortening while preserving the reversibility of the mapping
– We can retrieve x, y, and z by multiplying by w
Perspective Projection
• Perspective projection matrix is not affine
– Parallel lines are not preserved
• Perspective projection is irreversible
– Many 3-D points can be mapped to the same (x, y, d) on the projection plane
– No way to retrieve the unique z values
Orthographic Camera Projection
• Camera’s back plane parallel to lens
• Infinite focal length
• No perspective convergence
Pipeline
Modelview -> Projection -> Perspective Division -> Clip -> Rasterize
OpenGL Pipeline
• Projection matrix is stored in the GL_PROJECTION stack
– This controls the ‘type’ of camera
– All vertices are multiplied by this matrix
• GL_MODELVIEW controls camera location
– All vertices are multiplied by this matrix
Making GL_PROJECTION
glFrustum – for perspective projections
Parameters: xmin, xmax, ymin, ymax, near, far
• Camera looks along –z
• min/max need not be symmetric about any axis
• near and far planes are parallel to the plane z = 0
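glFrustum itself needs a GL context, but the matrix it builds can be sketched directly (row-major; the function name is illustrative, and the entries follow the documented glFrustum matrix):

```python
def frustum(l, r, b, t, n, f):
    """Matrix equivalent to glFrustum(l, r, b, t, n, f), row-major."""
    return [
        [2*n/(r-l), 0.0,        (r+l)/(r-l),   0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),   0.0],
        [0.0,       0.0,       -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,           0.0],  # puts -z into w for the divide
    ]
```

For a volume symmetric about both axes, the third-column skew terms vanish.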
Making GL_PROJECTION
gluPerspective – for perspective projections
Parameters: fovy, aspect, near, far
• fovy is the angle between the top and bottom of the viewing volume
• aspect is the ratio of width over height
• This volume is symmetrical
• View plane is parallel to the camera
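The matrix gluPerspective builds can likewise be sketched from its documented form (row-major; function name illustrative): with f = cot(fovy/2), the symmetric volume needs no skew terms.

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """Matrix equivalent to gluPerspective(fovy, aspect, near, far), row-major.
    fovy is the vertical field-of-view angle in degrees."""
    fcot = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)   # cot(fovy / 2)
    return [
        [fcot / aspect, 0.0,  0.0,                          0.0],
        [0.0,           fcot, 0.0,                          0.0],
        [0.0,           0.0,  (far + near) / (near - far),  2*far*near/(near - far)],
        [0.0,           0.0, -1.0,                          0.0],
    ]
```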
Making GL_PROJECTION
glOrtho – for orthographic projections
Parameters: left, right, bottom, top, near, far
• (left, bottom) and (right, top) define the dimensions of the projection plane
• near and far are used to clip
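The glOrtho matrix is just a translate-and-scale (sketch below, row-major, function name illustrative); note the last row stays [0, 0, 0, 1], so no perspective divide occurs.

```python
def ortho(l, r, b, t, n, f):
    """Matrix equivalent to glOrtho(l, r, b, t, n, f), row-major:
    translate the volume's center to the origin, scale each side to 2."""
    return [
        [2.0/(r-l), 0.0,        0.0,        -(r+l)/(r-l)],
        [0.0,       2.0/(t-b),  0.0,        -(t+b)/(t-b)],
        [0.0,       0.0,       -2.0/(f-n),  -(f+n)/(f-n)],
        [0.0,       0.0,        0.0,         1.0],
    ]
```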
Making GL_PROJECTION
It’s like any other matrix
• These OpenGL commands just build a matrix for you
• You could build the matrix yourself
• You can multiply the GL_PROJECTION matrix by any affine transformation you wish
– Not typically needed
Rendering with Natural Light
Fiat Lux
Light Stage
Moving the Camera or the World?
Two equivalent operations
• Initial OpenGL camera position is at the origin, looking along -Z
• Now create a unit square parallel to the camera at z = -10
• If we put a z-translation matrix of 3 on the stack, what happens?
– Camera moves to z = -3
▪ Note OpenGL models viewing in left-hand coordinates
– Camera stays put, but the square moves to -7
• Image at the camera is the same with both
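The equivalence can be checked with one line of arithmetic along the z-axis (a hypothetical 1-D sketch; only the square's position relative to the camera matters for the image):

```python
# Camera at the origin, unit square at z = -10 (as in the slide).
camera_z, square_z = 0.0, -10.0

# Reading 1: the z-translation of 3 moves the camera to z = -3.
rel_camera_moves = square_z - (camera_z - 3.0)    # -10 - (-3) = -7

# Reading 2: the camera stays put; the square moves to z = -7.
rel_world_moves = (square_z + 3.0) - camera_z     # -7 - 0 = -7
```

Either way, the square sits 7 units in front of the camera, so the image is identical.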
A 3D Scene
Notice the presence of the camera, the projection plane, and the world coordinate axes

Viewing
Viewing transformations define how to acquire the image on the projection plane
Viewing Transformations
Goal: To create a camera-centered view
• Camera is at the origin
• Camera is looking along the negative z-axis
• Camera’s ‘up’ is aligned with the y-axis (what does this mean?)
2 Basic Steps
Step 1: Align the world’s coordinate frame with camera’s by
rotation
2 Basic Steps
Step 2: Translate to align world and camera origins
Creating Camera Coordinate Space
Specify a point where the camera is located in world space, the eye point (View Reference Point = VRP)
Specify a point in world space that we wish to become the center of view, the lookat point
Specify a vector in world space that we wish to point up in the camera image, the up vector (VUP)
This supports intuitive camera movement.
Constructing Viewing Transformation, V
Create a vector from the eye-point to the lookat-point
Normalize the vector
The desired rotation matrix should map this vector to [0, 0, -1]T. Why?
Constructing Viewing Transformation, V
Construct another important vector from the cross product of the lookat-vector and the vup-vector
This vector, when normalized, should align with [1, 0, 0]T
Why?
Constructing Viewing Transformation, V
One more vector to define…
This vector, when normalized, should align with [0, 1, 0]T
Now let’s compose the results
Composing Matrices to Form V
We know the three world axis vectors (x, y, z)
We know the three camera axis vectors (u, v, n)
Viewing transformation, V, must convert from world to camera coordinate systems
Composing Matrices to Form V
Remember
• Each camera axis vector is unit length
• Each camera axis vector is perpendicular to the others
Camera matrix is orthogonal and normalized
• Orthonormal
Therefore, M-1 = MT
Composing Matrices to Form V
Therefore, rotation component of viewing transformation is
just transpose of computed vectors
Composing Matrices to Form V
Translation component too

Multiply it through
Final Viewing Transformation, V
To transform vertices, use this matrix:

And you get this:


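The whole construction (camera axes u, v, n from the eye point, lookat point, and VUP; rotation rows from those axes; translation multiplied through) can be sketched in a few lines of plain Python. Names and conventions below are illustrative; this follows the slides' recipe with n as the view direction mapped to [0, 0, -1]T.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def look_at(eye, lookat, vup):
    """Viewing matrix V (row-major): rotation rows are the camera axes,
    translation carries the eye point to the origin."""
    n = normalize([l - e for l, e in zip(lookat, eye)])   # view direction
    u = normalize(cross(n, vup))                          # camera 'right' -> [1,0,0]
    v = cross(u, n)                                       # camera 'up'    -> [0,1,0]
    rot = [u, v, [-c for c in n]]                         # n -> [0,0,-1]
    trans = [-sum(rot[i][j] * eye[j] for j in range(3)) for i in range(3)]
    return [rot[0] + [trans[0]],
            rot[1] + [trans[1]],
            rot[2] + [trans[2]],
            [0.0, 0.0, 0.0, 1.0]]
```

Applying V to the eye point itself should land on the origin, which is an easy sanity check.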
Canonical View Volume
A standardized viewing volume representation:
• Parallel (Orthogonal): a cube, with x and y ranging from -1 to 1 between the front and back planes along -z
• Perspective: a frustum bounded by the planes x or y = +/- z, with front and back planes along -z
Why do we care?
Canonical View Volume Permits Standardization
• Clipping
– Easier to determine if an arbitrary point is enclosed in the volume
– Consider clipping to six arbitrary planes of a viewing volume versus the canonical view volume
• Rendering
– Projection and rasterization algorithms can be reused
Projection Normalization
One additional step of standardization
• Convert the perspective view volume to an orthogonal view volume to further standardize the camera representation
– Convert all projections into orthogonal projections by distorting points in three space (actually four space, because we include the homogeneous coord w)
▪ Distort objects using a transformation matrix
Projection Normalization
Building a transformation matrix
• How do we build a matrix that
– Warps any view volume to the canonical orthographic view volume
– Permits rendering with an orthographic camera
All scenes are then rendered with an orthographic camera.
Projection Normalization - Ortho
Normalizing Orthographic Cameras
• Not all orthographic cameras define viewing volumes of the right size and location (canonical view volume)
• Transformation must map the given viewing volume to the canonical view volume
Projection Normalization - Ortho
Two steps
• Translate center to (0, 0, 0)
– Move x by –(xmax + xmin) / 2
• Scale volume to a cube with sides = 2
– Scale x by 2 / (xmax – xmin)
• Compose these transformation matrices
– Resulting matrix maps the orthogonal volume to the canonical one
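One axis of the two-step mapping as code (illustrative helper name; y and z work identically with their own min/max):

```python
def ortho_normalize(axis_min, axis_max, x):
    """Map x in [axis_min, axis_max] to canonical [-1, 1]:
    first translate the center to 0, then scale by 2/(max - min)."""
    return (x - (axis_max + axis_min) / 2.0) * (2.0 / (axis_max - axis_min))
```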
Projection Normalization - Persp
Perspective Normalization is Trickier
Perspective Normalization
Consider N =
  [ 1  0  0  0 ]
  [ 0  1  0  0 ]
  [ 0  0  α  β ]
  [ 0  0  -1 0 ]

After multiplying:
• p’ = Np = [x, y, αz + β, -z]T
Perspective Normalization
After dividing by w’, p’ -> p’’ = [-x/z, -y/z, -(α + β/z), 1]T
Perspective Normalization
Quick Check
• If x = z: x’’ = -1
• If x = -z: x’’ = 1
Perspective Normalization
What about z?
• If z = zmax: z’’ = -(α + β/zmax)
• If z = zmin: z’’ = -(α + β/zmin)
• Solve for α and β such that zmin -> -1 and zmax -> 1
• Resulting z’’ is nonlinear, but preserves ordering of points
– If z1 < z2, then z’’1 < z’’2
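Solving the two constraints gives closed forms for α and β. The sketch below assumes the common textbook form of N (third row [0, 0, α, β], fourth row [0, 0, -1, 0], so z’’ = -(α + β/z)) and uses positive z values for simplicity; names are illustrative.

```python
def solve_alpha_beta(zmin, zmax):
    """Solve -(alpha + beta/zmin) = -1 and -(alpha + beta/zmax) = 1."""
    alpha = -(zmax + zmin) / (zmax - zmin)
    beta = 2.0 * zmin * zmax / (zmax - zmin)
    return alpha, beta

def z_norm(z, alpha, beta):
    """Normalized depth after the perspective divide: z'' = -(alpha + beta/z)."""
    return -(alpha + beta / z)
```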
Perspective Normalization
We did it. Using matrix N:
• Perspective viewing frustum transformed to a cube
• Orthographic rendering of the cube produces the same image as perspective rendering of the original frustum
Color
Next topic: Color
To understand how to make realistic images, we need a basic understanding of the physics and physiology of vision. Here we step away from the code and math for a bit to talk about basic principles.
Basics of Color
Elements of color:
Physics:
• Illumination
– Electromagnetic spectra
• Reflection
– Material properties
– Surface geometry and microgeometry (i.e., polished versus matte versus brushed)
Perception:
• Physiology and neurophysiology
• Perceptual psychology
Physiology of Vision
The eye:
The retina
• Rods
• Cones
– Color!
Physiology of Vision
The center of the retina is a densely packed region called the fovea.
• Cones are much denser here than in the periphery
Physiology of Vision: Cones
Three types of cones:
• L or R, most sensitive to red light (610 nm)
• M or G, most sensitive to green light (560 nm)
• S or B, most sensitive to blue light (430 nm)
• Color blindness results from missing cone type(s)
Physiology of Vision: The Retina
Strangely, rods and cones are at the back of the retina, behind a mostly-transparent neural structure that collects their response.
http://www.trueorigin.org/retina.asp
Perception: Metamers
A given perceptual sensation of color derives from the stimulus of all three cone types
Identical perceptions of color can thus be caused by very different spectra
Perception: Other Gotchas
Color perception is also difficult because:
• It varies from person to person
• It is affected by adaptation (stare at a light bulb… don’t)
• It is affected by surrounding color
Perception: Relative Intensity
We are not good at judging absolute intensity
Let’s illuminate pixels with white light on a scale of 0 - 1.0
Intensity differences of neighboring colored rectangles with intensities:
▪ 0.10 -> 0.11 (10% change)
▪ 0.50 -> 0.55 (10% change)
will look the same
We perceive relative intensities, not absolute
Representing Intensities
Remaining in the world of black and white…
Use a photometer to obtain the min and max brightness of the monitor
This is the dynamic range
Intensity ranges from the min, I0, to the max, 1.0
How do we represent 256 shades of gray?
Representing Intensities
Equal distribution between min and max fails
• relative change near max is much smaller than near I0
• Ex: ¼, ½, ¾, 1
Preserve % change instead
• Ex: 1/8, ¼, ½, 1
• I0 = I0, I1 = r·I0, I2 = r·I1 = r^2·I0, …, I255 = r·I254 = r^255·I0
• In = r^n·I0, n > 0
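The geometric series can be generated directly: requiring I255 = 1.0 fixes the ratio at r = (1/I0)^(1/255). The minimum intensity I0 = 0.02 below is a hypothetical monitor value, not from the slides.

```python
I0 = 0.02                          # hypothetical monitor minimum intensity
r = (1.0 / I0) ** (1.0 / 255.0)    # constant ratio so that I0 * r**255 == 1.0

# 256 gray levels with equal *relative* (percentage) steps
levels = [I0 * r**n for n in range(256)]
```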
Dynamic Ranges

Display          Dynamic Range (max/min illum)   Max # of Perceived Intensities (r=1.01)
CRT              50-200                          400-530
Photo (print)    100                             465
Photo (slide)    1000                            700
B/W printout     100                             465
Color printout   50                              400
Newspaper        10                              234
Gamma Correction
But most display devices are inherently nonlinear: Intensity = k(voltage)^γ
• i.e., brightness * voltage != (2*brightness) * (voltage/2)
• γ is between 2.2 and 2.5 on most monitors
Common solution: gamma correction
• Post-transformation on intensities to map them to a linear range on the display device
• Can have separate γ for R, G, B
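A sketch of the correction (γ = 2.2 and k = 1 assumed for illustration): raising each intensity to the power 1/γ before display cancels the display's power-law response, so the perceived output is linear in the intended intensity.

```python
GAMMA = 2.2   # typical CRT exponent; between 2.2 and 2.5 on most monitors

def gamma_correct(intensity, gamma=GAMMA):
    """Pre-distort a linear intensity in [0, 1] to compensate for the
    display's I = k * voltage**gamma response."""
    return intensity ** (1.0 / gamma)

def display_response(voltage, gamma=GAMMA, k=1.0):
    """Model of the display's inherent nonlinearity."""
    return k * voltage ** gamma
```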
Gamma Correction
Some monitors perform the gamma correction in hardware (SGIs)
Others do not (most PCs)
Tough to generate images that look good on both platforms (e.g., images on web pages)
