

Achromatic light (black and white)

Perceptual Issues
Humans can discriminate about 0.5 minute of arc
- At fovea, so only in center of view, 20/20 vision
- At 1m, about 0.2mm (Dot Pitch of monitors)
- Limits the required number of pixels

Humans can discriminate about 8(9) bits of intensity


Intensity Perception
Humans are actually tuned to the ratio of intensities, not their absolute difference.
- So we should choose levels such as 0, 0.25, 0.5 and 1 (each step doubles the previous one)

Most computer graphics ignores this:

- It uses equally spaced levels such as 0, 0.33, 0.66, and 1

Dynamic Range
Humans can see contrast at very low and very high light levels,
but cannot see all levels all the time (high range, low res)
- use adaptation to adjust
- high range even at one adaptation level
Film has low dynamic range ~ 100:1
Monitors are even worse ~ 70:1 (low range, high res)

Display on a Monitor
The mapping from voltage to displayed intensity is not linear (for both digital and analog displays).
Gamma Control (gamma correction):
$I_{display} = I_{to\text{-}monitor}^{\gamma}$, so $I_{to\text{-}monitor} = I_{display}^{1/\gamma}$
- γ is controlled by the user
- Should be matched to a particular monitor
- Typical values are between 2.2 and 2.5
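A minimal sketch of this correction in Python/NumPy (the value 2.2 is just the typical γ quoted above; the function names are illustrative, not from any particular library):

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    # Pre-correct linear intensities in [0, 1] before sending them to the display.
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def display_response(value, gamma=2.2):
    # The monitor roughly raises its input to the power gamma.
    return np.clip(value, 0.0, 1.0) ** gamma

linear = np.linspace(0.0, 1.0, 5)
# Encoding with 1/gamma cancels the display's gamma, recovering the intended intensities.
print(display_response(gamma_encode(linear)))  # ~[0, 0.25, 0.5, 0.75, 1]
```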

Color
Light and Color
The frequency of light determines its color
- Frequency, wavelength, energy all related

Color Spaces
The principle of trichromacy means the displayable colors are all the linear combinations of the primaries.
Taking linear combinations of R, G and B defines the RGB color space
- The range of perceivable colors generated by adding some amount of each of R, G and B.
- If R, G and B correspond to a monitor's phosphors (monitor RGB), the space is the range of colors displayable on that monitor.

RGB
- Covers only a small range of all the colors humans can perceive (e.g. no pure magenta on a monitor)
- It is not easy for humans to say how much of R, G and B to use to make a given color
- Perceptually non-linear:
  - Two points a certain distance apart in one part of the space may be perceptually very different
  - Two other points, the same distance apart in another part of the space, may be perceptually the same

CIE-XYZ and CIE-xy


Color matching functions are everywhere positive
- The primaries themselves cannot be physically produced (that would need negative light!)
- But we can still describe a color by its matching weights
- The Y component is intended to correspond to intensity

Most frequently we set x = X/(X+Y+Z) and y = Y/(X+Y+Z)


- x, y are coordinates on a constant-brightness slice
- Linearity: colors obtainable by mixing A and B lie on the line segment AB
- Monochromatic (spectral) colors run along the spectral locus
- Dominant wavelength: the spectral color that can be mixed with white to match a given color C
- Purity = (distance from white to C)/(distance from white to the spectral locus along that line)
- Wavelength and purity can be used to specify a color
- Complementary colors: colors that can be mixed with C to get white

Linear transform: $[X, Y, Z]^T = \begin{bmatrix} X_r & X_g & X_b \\ Y_r & Y_g & Y_b \\ Z_r & Z_g & Z_b \end{bmatrix} [R, G, B]^T$
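A sketch of the linear transform and the x, y chromaticity computation; the matrix below is the commonly published linear-sRGB/D65 one, used here only as an example set of primaries:

```python
import numpy as np

# One possible RGB-to-XYZ matrix (linear sRGB primaries, D65 white point).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb):
    return RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

def chromaticity(xyz):
    X, Y, Z = xyz
    s = X + Y + Z
    return X / s, Y / s          # x = X/(X+Y+Z), y = Y/(X+Y+Z)

print(chromaticity(rgb_to_xyz([1.0, 1.0, 1.0])))  # roughly the white point (~0.31, ~0.33)
```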

Describe incoming light by a spectrum
- Intensity of light at each frequency
- Wavelengths in the visible spectrum: between the infra-red (700nm) and the ultra-violet (400nm)

Gamut: The range of colors that can be produced by a space

YIQ: mainly used in television
- Y is (approximately) intensity; I, Q are chromatic properties
- Linear color space: there is a linear transform from XYZ (and RGB) to YIQ
- I and Q can be transmitted with low bandwidth

Red paint absorbs green and blue wavelengths and reflects red wavelengths, resulting in you seeing a red appearance.
(Figure: reflected spectrum, # photons vs. wavelength in nm, 400-700)
A sensor is defined by its response to a frequency distribution,
expressed as sensitivity vs. wavelength, ρ(λ)
- For each unit of energy at the given wavelength, how much voltage/impulses/whatever the sensor provides.

To compute the response, take $\int \rho(\lambda)\, E(\lambda)\, d\lambda$
- E(λ) is the incoming energy at the particular wavelength
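The response integral can be approximated by a discrete sum over sampled wavelengths; a small sketch (the sensitivity and energy arrays are made up for illustration):

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)                                 # nm, visible range
sensitivity = np.exp(-((wavelengths - 600) ** 2) / (2 * 30.0 ** 2))   # rho(lambda), a "red-ish" sensor
energy = np.ones_like(wavelengths, dtype=float)                       # E(lambda), flat spectrum here

# response = integral of rho(lambda) * E(lambda) d(lambda), approximated numerically
response = np.trapz(sensitivity * energy, wavelengths)
print(response)
```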
Changing Response
Can you take a white sensor and change it into a red sensor? Yes: use a red filter.
You cannot change a red sensor into a white sensor.
Assume your eye is a white sensor. Why can you see a black light (UV) shining on a surface?
- Such surfaces are fluorescent: they change the frequency of the light
- Your eye is not really a white sensor - it just approximates one

Seeing in Color
Rods work at low light levels and do not see color.
Cones come in three types (experimentally and genetically proven); each responds in a different way to frequency distributions:
- L-cone: red
- M-cone: green
- S-cone: blue

HSV
- Hue: the color family: red, yellow, blue, ...
- Saturation: the purity of a color: white is totally unsaturated
- Value: the intensity of a color: white is intense, black isn't
- The space looks like a cone: parts of the cone can be mapped to RGB
- Not a linear space: there is no linear transform to take RGB to HSV
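Python's standard library already provides the (non-linear) RGB-to-HSV conversion, which can be used to check the statements above:

```python
import colorsys

# Pure red: hue 0, fully saturated, full value.
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)
# White: totally unsaturated but maximum value.
print(colorsys.rgb_to_hsv(1.0, 1.0, 1.0))   # (0.0, 0.0, 1.0)
# Round trip back to RGB.
print(colorsys.hsv_to_rgb(*colorsys.rgb_to_hsv(0.2, 0.6, 0.4)))
```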

Uniform Color Spaces
Distance in the color space corresponds to perceptual distance.
Only works for local distances: "how far is red from green?" is hard to define.
MacAdam's ellipses define perceptual distance.

CIE uv is a non-linear transform of XYZ in which color differences are more uniform:
$[u, v]^T = \frac{1}{X + 15Y + 3Z}\,[4X,\; 9Y]^T$
Subtractive mixing
Cyan = White - Red, Magenta = White - Green, Yellow = White - Blue
There is a linear transform between XYZ and CMY.
Color Quantization
Indexed Color
Assume k bits per pixel (typically 8)
Define a color table containing 2^k colors (24 bits per color)

Quantization Error
- Define an error for each color, c, in the original image: d(c, c'), where c' is the color c maps to under the quantization
  - e.g. squared distance in RGB, or distance in CIE uv space
- Sum the error over all the pixels
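A sketch of the total quantization error with squared RGB distance as the per-pixel metric (the array shapes are assumptions for illustration):

```python
import numpy as np

def quantization_error(original, quantized):
    # original, quantized: (H, W, 3) arrays; error is the summed squared RGB distance.
    d = original.astype(float) - quantized.astype(float)
    return np.sum(d * d)

img = np.random.rand(4, 4, 3)
print(quantization_error(img, np.round(img * 3) / 3))   # error of a crude 4-level quantization
```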

Uniform Quantization: break the color space into uniform cells
- Poor on smooth gradients (Mach banding)

Populosity:
Color histogram: count the number of times each color appears
- Typically group colors into small cells first
Choose the n most commonly occurring colors

Color Perception
Colors may be perceived differently:
- Affected by 1. other nearby colors, 2. adaptation to previous views, 3. state of mind

Median Cut:
Recursively:
- Find the longest dimension (r, g, or b)
- Choose the median of the long dimension as a color to use
- Split along the median plane, and recurse on both halves
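A compact sketch of the recursive median-cut selection (pixels as an (N, 3) array; each finished box is represented here by its mean color, a common variant of "choose the median"; the function name is illustrative):

```python
import numpy as np

def median_cut(pixels, n_colors):
    # pixels: (N, 3) array of RGB values; returns up to n_colors representative colors.
    boxes = [pixels]
    while len(boxes) < n_colors:
        # Work on the box with the largest extent along its longest dimension.
        boxes.sort(key=lambda b: np.ptp(b, axis=0).max(), reverse=True)
        box = boxes.pop(0)
        if len(box) < 2:
            boxes.append(box)
            break
        axis = np.argmax(np.ptp(box, axis=0))         # longest dimension (r, g or b)
        box = box[np.argsort(box[:, axis])]
        median = len(box) // 2                        # split along the median plane
        boxes.extend([box[:median], box[median:]])
    return np.array([b.mean(axis=0) for b in boxes])  # one representative color per box

palette = median_cut(np.random.rand(1000, 3), 8)
print(palette.shape)   # (8, 3)
```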

Color Deficiency
Red-green color blindness in men
- Red and green receptor genes are carried on the X chromosome
- Most such men have two red genes or two green genes

Other color deficiencies
- Anomalous trichromacy, achromatopsia, macular degeneration
- Deficiency can be caused by the central nervous system, by optical problems in the eye, by injury, or by absent receptors

Trichromacy
Experiment:
- Show a target color beside a user controlled color
- User has knobs that add primary sources to their color
- Ask the user to match the colors
- It is possible to match almost all colors using only three primary
sources - the principle of trichromacy
- Sometimes, have to add light to the target
- This was how experimentalists knew there were 3 types of cones

Math:
Primaries: A, B and C (can be R, G, B or r, g, b)
Colors: M = aA + bB + cC (additive matching)
Gives a color description system - two people who agree on A, B and C need only supply (a, b, c) to describe a color

Some colors require subtractive matching: M + aA = bB + cC
- Interpret this as (-a, b, c)
- Problem for reproducing colors: you cannot suck light back out of a display device

Color matching functions


Given a spectrum, how to determine how much each of R, G
and B to use to match it?
For a light of unit intensity at each wavelength, ask people to
match it with R, G and B primaries
Result is three functions, $\bar r(\lambda)$, $\bar g(\lambda)$ and $\bar b(\lambda)$, the RGB color matching functions

$E \equiv rR + gG + bB$, where
$r = \int \bar r(\lambda) E(\lambda)\, d\lambda, \quad g = \int \bar g(\lambda) E(\lambda)\, d\lambda, \quad b = \int \bar b(\lambda) E(\lambda)\, d\lambda$

E(λ) is the amount of energy at each wavelength; E is the color due to E(λ).
The RGB matching functions describe how much of each primary is needed to match one unit of energy at each wavelength.

(Populosity, continued:) Map other colors to the closest chosen color
- ignores under-represented but important colors

(Median cut, continued:) This algorithm builds a kD-tree, a common form of spatial data structure; it divides up the space in the most useful way.
Mach bands
- The difference between two colors is more pronounced when they
are side by side and the boundary is smooth.
- This emphasizes boundaries between colors, even if the color
difference is small.
- Rough boundaries are averaged by our vision system to give smooth
variation

Dithering
Why? 1. Adding noise along the boundaries can remove Mach bands. 2. General perceptual principle: replace structured errors with noisy ones and people complain less.
Color to grayscale: I = 0.299R + 0.587G + 0.114B
Threshold Dithering (naive): if the intensity < 0.5, replace with black, else replace with white
- Not good for non-balanced brightness

Constant Brightness Threshold: to keep the overall image brightness the same, compute the average intensity over the image and use a threshold that gives that average.
- e.g. if the average intensity is 0.6, use a threshold that is higher than 40% of the pixels and lower than the remaining 60%
- Not good when the brightness range is small

Random Modulation: add a random amount to each pixel before thresholding
- Not good for black and white, but OK for more colors

Ordered Dithering: define a threshold matrix, e.g.

  22 23 10 20 14
  25  3  8  5 17
  15  6  1  2 11
  12  9  4  7 21
  18 19 13 24 16

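A sketch of ordered dithering with the 5x5 matrix above, tiled over the image and compared per pixel (the thresholds are normalized by 26 here so they fall strictly inside (0, 1)):

```python
import numpy as np

DITHER = np.array([[22, 23, 10, 20, 14],
                   [25,  3,  8,  5, 17],
                   [15,  6,  1,  2, 11],
                   [12,  9,  4,  7, 21],
                   [18, 19, 13, 24, 16]]) / 26.0

def ordered_dither(gray):
    # gray: (H, W) intensities in [0, 1]; each pixel is compared to its own threshold.
    h, w = gray.shape
    tiled = np.tile(DITHER, (h // 5 + 1, w // 5 + 1))[:h, :w]
    return (gray > tiled).astype(float)   # 1 = white, 0 = black

print(ordered_dither(np.full((8, 8), 0.5)).mean())  # roughly half the pixels turn white
```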

Filtering
$H(\omega) = F(\omega)\,G(\omega) \quad\Leftrightarrow\quad h(x) = (f \otimes g)(x) = \int f(u)\, g(x-u)\, du$

Convolution Theorem
Convolution in the spatial domain ⇔ multiplication in the frequency domain
Multiplication in the spatial domain ⇔ convolution in the frequency domain
Aliasing
If the sampling rate is too low, high frequencies get
reconstructed as lower frequencies

Transformations convert points between coordinate systems

2D Affine Transformations
$x' = a_{xx} x + a_{xy} y + b_x$
$y' = a_{yx} x + a_{yy} y + b_y$
Why? Affine transformations are linear.

(Aliasing, continued:)
- High frequencies from one spectral copy get added to low frequencies from another
- Poor reconstruction also results in aliasing


Nyquist frequency: the minimum frequency with which a function must be sampled = twice the maximum frequency present in the signal

Filtering Algorithm
$I_{output}[x][y] = \sum_{i=-k/2}^{k/2} \sum_{j=-k/2}^{k/2} I_{input}[x+i][y+j]\; M[i+k/2][j+k/2]$
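A direct (unoptimized) sketch of that double sum, assuming an odd k×k mask and zero padding at the borders:

```python
import numpy as np

def filter_image(img, mask):
    # img: (H, W) array; mask: (k, k) array with odd k. Zero padding at the borders.
    k = mask.shape[0]
    r = k // 2
    padded = np.pad(img, r, mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            # I_out[x][y] += I_in[x+i][y+j] * M[i+k/2][j+k/2]
            out += padded[r + i: r + i + img.shape[0],
                          r + j: r + j + img.shape[1]] * mask[i + r, j + r]
    return out

box = np.ones((3, 3)) / 9.0
print(filter_image(np.random.rand(5, 5), box).shape)   # (5, 5)
```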

Box Filter
Spatial: box; Frequency: sinc
Box filters smooth by averaging neighbors; in the frequency domain this keeps low frequencies and attenuates (reduces) high frequencies.
$\frac{1}{9}\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}$

Bartlett Filter
Spatial: triangle (box ⊗ box); Frequency: sinc²
Attenuates high frequencies more than a box.
$\frac{1}{81}\begin{bmatrix}1&2&3&2&1\\2&4&6&4&2\\3&6&9&6&3\\2&4&6&4&2\\1&2&3&2&1\end{bmatrix}$

Gaussian Filter
Attenuates high frequencies even further.
In 2D it is rotationally symmetric, so fewer artifacts.
$\frac{1}{256}\begin{bmatrix}1&4&6&4&1\\4&16&24&16&4\\6&24&36&24&6\\4&16&24&16&4\\1&4&6&4&1\end{bmatrix}$
1D to 2D Filter
Multiply two 1D masks together using the outer product.
M is the 2D mask, m is the 1D mask: $M[i][j] = m[i]\, m[j]$
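For example, the 5×5 Bartlett mask above is the outer product of the 1D mask [1, 2, 3, 2, 1]/9 with itself:

```python
import numpy as np

m = np.array([1, 2, 3, 2, 1]) / 9.0   # 1D Bartlett mask
M = np.outer(m, m)                    # M[i][j] = m[i] * m[j]
print(M * 81)                         # recovers the integer 5x5 Bartlett mask
print(M.sum())                        # 1.0 - the 2D mask still sums to one
```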
High-Pass Filters can be obtained from a low-pass filter
- Subtracting the smoothed image from the original means subtracting out the low frequencies, leaving the high frequencies
- High-pass masks come from matrix subtraction, e.g.
$\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix} - \frac{1}{16}\begin{bmatrix}1&2&1\\2&4&2\\1&2&1\end{bmatrix} = \frac{1}{16}\begin{bmatrix}-1&-2&-1\\-2&12&-2\\-1&-2&-1\end{bmatrix}$


Edge Enhancement
Adding high frequencies back into the image enhances edges:
Image' = Image + [Image - smooth(Image)]
Fixing Negative Values
Truncate: chop off values below min or above max
Offset: add a constant to move the min value to 0
Re-scale: rescale the image values to fill the range (0, max)

Image Warping
A mapping from the points in one image to points in another:
$I_{out}[x] = I_{in}[f(x)]$
f tells where in the input image to get the data for location x in the output image
Reducing Image Size
Warp function: f(x) = kx, k > 1
Problem: more than one input pixel maps to each output pixel
Solution: apply the (smoothing) filter only at the desired output locations
Enlarging Image
Warp function: f(x) = kx, k < 1
Problem: have to create pixel data
Solution: apply the filter at intermediate pixel locations; new pixels are interpolated from old ones
May want to edge-enhance images after enlarging
Image Morphing: a process to turn one image into another

(Floyd-Steinberg error-diffusion weights: 7/16 to the right; 3/16, 5/16, 1/16 to the row below.)

Pattern Dithering: Compute the intensity of each sub-block and


index a pattern.
- Pixel is determined only by average intensity of sub-block

Floyd-Steinberg Dithering: start at one corner and work through the image pixel by pixel, thresholding each pixel.
- Usually top to bottom in a zig-zag
- Compute the error at that pixel, and propagate the error to neighbors by adding some proportion of the error to each unprocessed neighbor
Color Dithering: the same techniques can be applied, with some modification (for FS, the error is the difference from the nearest color in the color table)
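A sketch of grayscale Floyd-Steinberg error diffusion in plain row order (the classic 7/16, 3/16, 5/16, 1/16 weights; the zig-zag traversal is omitted for brevity):

```python
import numpy as np

def floyd_steinberg(gray):
    # gray: (H, W) intensities in [0, 1]; returns a 0/1 image.
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0          # threshold the pixel
            img[y, x] = new
            err = old - new                           # propagate error to unprocessed neighbors
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return img

print(floyd_steinberg(np.full((4, 4), 0.5)))
```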

- Blue-screening is the analog method
- Why blue? It is the color least present in the human body.
3. Store pixel depth instead of alpha
- Compositing can then truly take into account foreground and background

Transformations
Coordinate Systems are used to describe the locations of points in space.
Multiple coordinate systems make graphics algorithms easier to understand and implement
- Some operations are easier in one coordinate system than in another (box example)
- Transforming all the individual points on a line gives the same set of points as transforming the endpoints and joining them
- Interpolation is the same in either space.

Matrix form of a 2D affine transform:
$\begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}a_{xx}&a_{xy}\\a_{yx}&a_{yy}\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} + \begin{bmatrix}b_x\\b_y\end{bmatrix}$

2D Translation:
$\begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} + \begin{bmatrix}b_x\\b_y\end{bmatrix}$

2D Scaling:
$\begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}s_x&0\\0&s_y\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$

2D Rotation:
$\begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$

X-Axis Shear:
$\begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}1&sh_x\\0&1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$

Reflect About X Axis:
$\begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}1&0\\0&-1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$

- Easier in hardware and software

2.To compose transformations, simply multiply matrices


3.Allows for non-affine transformations:
- Perspective projections! Bends, tapers, many others.

3D Rotation
Rotation is about an axis in 3D passing through the origin.
Any matrix with an orthonormal top-left 3x3 sub-matrix is a
rotation
- Rows are mutually orthogonal (0 dot product)
- Determinant is 1
- columns are also orthogonal, and the transpose is equal to the inverse

Problems
Specifying a rotation really only requires 3 numbers
- Axis (a unit vector, requires 2) and Angle to rotate

Rotation matrix has a large amount of redundancy


- Orthonormal constraints reduce degrees of freedom back down to 3
- Keeping orthonormal is difficult when transformations are combined

Alternative Representations
1.Specify the axis and the angle
- Hard to compose multiple rotations

2.Euler angles: Specify how much to rotate about X, then how


much about Y, then how much about Z

Filtering in Color
Simply filter each of R,G and B separately
Re-scaling and truncating are more difficult to implement:

4.Quaternions

Compositing
Combines components from two or more images to make a new image

Mattes an image that shows which parts of another image are


foreground objects
To insert an object into a background:
- Call the image of the object the source
- Put the background into the destination
- For all the source pixels, if the matte is white, copy the pixel,
otherwise leave it unchanged
Blue Screen: Photograph/film the object in front of a blue BG, then
consider all the blue pixels in the image to be the background.

Alpha
Basic idea: encode opacity information in the image
Add an extra alpha channel to each image: RGBA
(With pre-multiplied alpha:)
- α = 0 is always black
- Some loss of precision as α gets small, but generally not a problem

Basic Compositing Operation
The different compositing operations define which image "wins" in each sub-region of the composite.
At each pixel, combine the pixel data from f and the pixel data from g with the equation: $c_o = F c_f + G c_g$
- F and G describe how much of each input image survives, c_f and c_g are pre-multiplied pixels, and all four channels are calculated
Over: F = 1, G = 1 - α_f (f covers g)
Inside: F = α_g, G = 0 (only parts of f that are inside g contribute)
Outside: F = 1 - α_g, G = 0 (only parts of f outside g contribute)
Atop: F = α_g, G = 1 - α_f (over, but restricted to where there is g)
Xor: F = 1 - α_g, G = 1 - α_f (f where there is no g, and g where there is no f)
Clear: F = 0, G = 0 (fully transparent)
Set: F = 1, G = 0 (copies f into the composite)
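A sketch of the "over" operator on pre-multiplied RGBA pixels (F = 1, G = 1 - α_f), with made-up example pixels:

```python
import numpy as np

def over(f, g):
    # f, g: (..., 4) pre-multiplied RGBA arrays. c_o = F*c_f + G*c_g with F = 1, G = 1 - alpha_f.
    alpha_f = f[..., 3:4]
    return f + (1.0 - alpha_f) * g       # all four channels, including alpha

fg = np.array([0.4, 0.0, 0.0, 0.5])      # half-transparent red, pre-multiplied
bg = np.array([0.0, 0.0, 1.0, 1.0])      # opaque blue
print(over(fg, bg))                       # [0.4, 0.0, 0.5, 1.0]
```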

Rotating About An Arbitrary Point
Say you wish to rotate about the point (a, b):
- Translate so that (a, b) is at (0, 0): x1 = x - a, y1 = y - b
- Rotate: x2 = (x - a)cosθ - (y - b)sinθ, y2 = (x - a)sinθ + (y - b)cosθ
- Translate back again: xf = x2 + a, yf = y2 + b
Scaling an Object About An Arbitrary Point
Translate, scale, and translate back again.
Homogeneous Coordinates
Use three numbers to represent a 2D point: (x, y) = (wx, wy, w) for any constant w ≠ 0
Typically, (x, y) becomes (x, y, 1)

$\begin{bmatrix}x'\\y'\\1\end{bmatrix} = \begin{bmatrix}a_{xx}&a_{xy}&b_x\\ a_{yx}&a_{yy}&b_y\\ 0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\1\end{bmatrix}$

Translation can now be done with matrix multiplication!

Translation: $\begin{bmatrix}1&0&b_x\\0&1&b_y\\0&0&1\end{bmatrix}$   Rotation: $\begin{bmatrix}\cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{bmatrix}$   Scaling: $\begin{bmatrix}s_x&0&0\\0&s_y&0\\0&0&1\end{bmatrix}$
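A sketch of building these 3x3 homogeneous matrices and composing them by multiplication, e.g. rotation about an arbitrary point (a, b) as T(a, b) · R(θ) · T(-a, -b) (the helper names are illustrative):

```python
import numpy as np

def translate(bx, by):
    return np.array([[1, 0, bx], [0, 1, by], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

# Rotate 90 degrees about the point (1, 1): translate to the origin, rotate, translate back.
M = translate(1, 1) @ rotate(np.pi / 2) @ translate(-1, -1)
p = np.array([2.0, 1.0, 1.0])            # the point (2, 1) in homogeneous form
print(M @ p)                              # ~[1, 2, 1]
```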
Advantages
1.Unified view of transformation as matrix multiplication

- (Euler angles:) Hard to think about, and hard to compose

(Filtering in color, continued:)
- Adjusting each channel separately may change the color significantly
- Adjusting intensity while keeping hue and saturation may be best, although some loss of saturation is probably OK


(Image morphing, continued:)
- Define a path from each point in the original image to its destination in the output image
- Animate points along the paths

(Pre-multiplied alpha:) To display and to do color conversions, you must extract RGB by dividing out α.

Clustered dithering & dot dispersion
- Clustered dithering looks like newsprint
- Dot-dispersion threshold matrix example:
   2 16  3 13
  10  6 11  7
   4 14  1 15
  12  8  9  5

Unary Operators
Darken: makes an image darker (or lighter) without affecting its opacity:
darken(f, φ) = (φ·r_f, φ·g_f, φ·b_f, α_f)
Dissolve: makes an image transparent without affecting its color:
dissolve(f, δ) = (δ·r_f, δ·g_f, δ·b_f, δ·α_f)
PLUS: c_o = c_f + c_g
Example (cross dissolve): cross(f, g, t) = dissolve(f, t) plus dissolve(g, 1 - t)
Obtaining Alpha Values
1. Hand generate (paint a grayscale image)
2. Automatically create by segmenting an image into foreground and background:
- alpha = 1 implies full opacity at a pixel
- alpha = 0 implies a completely clear pixel

Pre-Multiplied Alpha: instead of (R, G, B, α), store (αR, αG, αB, α)

(Ordered dithering, continued:)
- Use a different threshold for each pixel of the block
- Compare each pixel to its own threshold

Clustered Dithering

Signal Processing
Spatial domain: the signal is given as values at points in space
Frequency domain: the signal is given as values of frequency components
Periodic signal: can be represented as a sum of sine and cosine waves with harmonic frequencies, e.g. the square wave
$S(x) = \frac{1}{2} + \frac{2}{\pi}\left(\cos x - \frac{1}{3}\cos 3x + \frac{1}{5}\cos 5x - \cdots\right) = \frac{1}{2} + \frac{2}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{2k-1}\cos((2k-1)x)$
Non-periodic function: can be represented as a sum of sines and cosines of (possibly) all frequencies:
$f(x) = \frac{1}{2\pi}\int F(\omega)\, e^{i\omega x}\, d\omega, \qquad F(\omega) = \int f(x)\, e^{-i\omega x}\, dx, \qquad e^{i\omega x} = \cos\omega x + i\sin\omega x$
F(ω) is the spectrum of f(x)
- The spectrum is how much of each frequency is present in the function
Fourier Transform pairs
- Box: f(x) = 1 for |x| < 1/2, 0 otherwise; F(ω) = sin(ω/2)/(ω/2), a sinc
- Cos: f(x) = cos(x); F(ω) = δ(ω-1) + δ(ω+1)
- Sin: f(x) = sin(x); F(ω) = δ(ω-1) - δ(ω+1)
- Impulse: f(x) = δ(x); F(ω) = 1
- Shah function (impulse train) transforms to another shah function
- Gaussian: (1/√(2π)) exp(-x²/2) transforms to a Gaussian

Qualitative Properties
Sharp edges give high frequencies
Smooth variations give low frequencies
Bandlimited: a function is bandlimited if its spectrum has no frequencies above a maximum limit (sin and cos are; box and Gaussian are not)

3. Specify the axis, scaled by the angle
- Only 3 numbers, sometimes called the exponential map
4. Quaternions
- A 4-vector related to the axis and angle, of unit magnitude (rotation about axis (nx, ny, nz) by angle θ corresponds to q = (cos(θ/2), sin(θ/2)·nx, sin(θ/2)·ny, sin(θ/2)·nz))
- Easy to compose
- Easy to go to/from a rotation matrix
- Only normalized quaternions represent rotations, but you can normalize them just like vectors, so it isn't a problem

Viewing Transformation
Graphics Pipeline: Local Coordinate Space -> World Coordinate Space -> View Space -> 3D Screen Space -> Display Space

Local Coordinate Space


Defining individual objects in a local coordinate system is easy
- Define an object in a local coordinate system
- Use it multiple times by copying it and transforming it into the
global system
- This is the only effective way to have libraries of 3D objects, and
such libraries do exist

Global Coordinate System


Everything in the world is transformed into one coordinate
system - the global coordinate system
- Some things, like dashboards, may be defined in a different space, but we'll ignore that
- Lighting (locations, brightness and types), the camera, and some higher level operations, such as advanced visibility computations, can be done here

View Space
Associate a set of axes with the image plane
- The image plane is the plane in space on which the image should
appear, like the film plane of a camera
- One normal to, one up in, and one right in the image plane
- Some camera parameters are easy to define(focal length, image size)
- Depth is represented by a single number in this space

3D Screen Space
A cube: [-1,1][-1,1][-1,1] ; canonical view volume
- Parallel sides make many operations easier

Window Space also called screen space.


Convert the virtual screen into real screen coordinates
- Drop the depth coordinates and translate

The windowing system takes care of this


3D Screen to Window Transform
Windows are specified by an origin, width and height
- The origin is either the bottom-left or top-left corner, expressed as (x, y) on the total visible screen on the monitor or in the framebuffer
- This representation can be converted to (xmin, ymin) and (xmax, ymax)

Clipping
Parts of the geometry may lie outside the view volume
- The view volume maps to memory addresses
- Out-of-view geometry generates invalid addresses
- Geometry outside the view volume also behaves very strangely under perspective projection

(The canonical square corner (1, 1) maps to (xmax, ymax).)
$\begin{bmatrix}x_{pixel}\\y_{pixel}\\z_{pixel}\\1\end{bmatrix} = \begin{bmatrix}\tfrac{x_{max}-x_{min}}{2}&0&0&\tfrac{x_{max}+x_{min}}{2}\\ 0&\tfrac{y_{max}-y_{min}}{2}&0&\tfrac{y_{max}+y_{min}}{2}\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}\begin{bmatrix}x_{screen}\\y_{screen}\\z_{screen}\\1\end{bmatrix}$

- X coordinate in 3D must be > -1


- In homogeneous screen space, same as: xscreen> -wscreen

In general, a point p is inside a plane if:
- the plane is $n_x x + n_y y + n_z z + d = 0$, with $(n_x, n_y, n_z)$ pointing inward
- and $n_x p_x + n_y p_y + n_z p_z + d > 0$

Simple Projection Example
Specify the region of space that we wish to render as a view volume
- Assume the viewer is looking down -z, with x to the right and y up
- near: z = n; far: z = f (f < n)
- left: x = l; right: x = r (r > l)
- top: y = t; bottom: y = b (b < t)

$\mathbf{x}_{screen} = M_{view \to screen}\, \mathbf{x}_{view}$, with
$M_{view \to screen} = \begin{bmatrix}\frac{2}{r-l}&0&0&-\frac{r+l}{r-l}\\ 0&\frac{2}{t-b}&0&-\frac{t+b}{t-b}\\ 0&0&\frac{2}{n-f}&-\frac{n+f}{n-f}\\ 0&0&0&1\end{bmatrix}$

General Projection Cases

Specifying a View
- The center of the image plane: (cx, cy, cz)
- A vector that points back toward the viewer: (dx, dy, dz), normal to the image plane
- A direction that we want to appear up in the image (this vector does not have to be perpendicular to n)
- Size of the view volume: l, r, t, b, n, f

View Space
- Origin: at the center of the image plane: (cx, cy, cz)
- Normal vector: the normalized viewing direction: n = d
- u = up × n, normalized
- v = n × u
World to View Transformation
1. Translate the world so the origin is at (cx, cy, cz)
2. Rotate, such that (a) u in world space becomes (1, 0, 0) in view space, (b) v becomes (0, 1, 0), (c) n becomes (0, 0, 1)

$M_{world \to view} = \begin{bmatrix}u_x&u_y&u_z&-\mathbf{u}\cdot\mathbf{c}\\ v_x&v_y&v_z&-\mathbf{v}\cdot\mathbf{c}\\ n_x&n_y&n_z&-\mathbf{n}\cdot\mathbf{c}\\ 0&0&0&1\end{bmatrix}$

$M_{world \to screen} = M_{view \to screen}\, M_{world \to view}$
$\mathbf{x}_{screen} = M_{world \to screen}\, \mathbf{x}_{world}$

Perspective Projection
- Works like a pinhole camera
- Distant Objects Are Smaller
- Parallel lines meet

Vanishing points
Each set of parallel lines (=direction) meets at a different point:
The vanishing point for this direction
- Classic artistic perspective is 3-point perspective
- Sets of parallel lines on the same plane lead to collinear vanishing
points: the horizon for that plane
- Good way to spot faked images

Basic Perspective Projection
Assume x to the right, y up, and z back toward the viewer
Assume the origin of view space is at the center of projection
Define a focal distance d and put the image plane there (note d is negative)

By similar triangles: $x_s = \dfrac{x_v}{z_v}\,d, \qquad y_s = \dfrac{y_v}{z_v}\,d$

In homogeneous form:
$P_s = \begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&1/d&0\end{bmatrix} P_v$, followed by the perspective divide (the resulting w is $z_v/d$).
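A sketch of that projection followed by the homogeneous divide, under the conventions above (d negative, viewer looking down -z; function names are illustrative):

```python
import numpy as np

def perspective_matrix(d):
    # Projects view-space points onto the image plane at z = d (d < 0 here).
    return np.array([[1, 0, 0,       0],
                     [0, 1, 0,       0],
                     [0, 0, 1,       0],
                     [0, 0, 1.0 / d, 0]], dtype=float)

def project(p_view, d=-1.0):
    p = perspective_matrix(d) @ np.append(np.asarray(p_view, float), 1.0)
    return p[:3] / p[3]                  # perspective divide by w = z_v / d

print(project([2.0, 1.0, -4.0]))         # x_s = d*x/z = 0.5, y_s = 0.25
```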

Perspective View Volume
Near and far planes are parallel to the image plane: zv = n, zv = f
Other planes all pass through the center of projection (the origin of view space)
- The left and right planes intersect the image plane in vertical lines
- The top and bottom planes intersect the image plane in horizontal lines
(Figure: view volume bounded by the near, far, left and right clip planes, with field of view FOV, looking down -zv.)

We want to map all the lines through the center of projection to parallel lines.

General Perspective
$M_P = \begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&\frac{n+f}{n}&-f\\ 0&0&\frac{1}{n}&0\end{bmatrix}$
This maps the perspective view volume into a parallel-sided one; the complete view-to-screen transform applies the orthographic matrix after it:
$M_{view \to screen} = M_O\, M_P$, with $M_O = \begin{bmatrix}\frac{2}{r-l}&0&0&-\frac{r+l}{r-l}\\ 0&\frac{2}{t-b}&0&-\frac{t+b}{t-b}\\ 0&0&\frac{2}{n-f}&-\frac{n+f}{n-f}\\ 0&0&0&1\end{bmatrix}$

Near/Far and Depth Resolution
It may seem sensible to specify a very near clipping plane and a very far clipping plane, but it is a bad idea:
- OpenGL only has a finite number of bits to store screen depth
- Too large a range reduces resolution in depth - the wrong thing may be considered "in front"

Always place the near plane as far from the viewer as possible, and the far plane as close as possible.

To build an orthonormal basis from a vector w: let t = w, change the smallest-magnitude component of t to 1, then u = (t × w)/|t × w| and v = w × u.

Decomposition of a Transformation

A = R S R^T
- Rotate v1 and v2 to the x- and y-axes (the transform R^T)
- Scale in x and y (by λ1 and λ2)
- Rotate the x- and y-axes back to v1 and v2 (the transform R)

Eigen-decomposition: P(λ) = det(A - λI)
For each λ, substitute into A - λI and find the eigenvectors associated with each eigenvalue (solve the system). Remember: the eigenvectors go in as COLUMNS.
More about clipping
Sutherland-Hodgman Clip (polygons): inside tests against a clip plane through point x with inward normal n use dot products:
n·(s - x) ≥ 0 for the start point s, n·(f - x) ≥ 0 for the final point f, and n·(i - x) = 0 for an intersection point i on the plane.


Finding Intersection Points
Use the parametric form for the edge between x1 and x2:
$x(t) = x_1 + (x_2 - x_1)t, \quad 0 \le t \le 1$
For planes of the form x = a:
$x_i = \left(a,\; y_1 + \frac{y_2 - y_1}{x_2 - x_1}(a - x_1),\; z_1 + \frac{z_2 - z_1}{x_2 - x_1}(a - x_1)\right)$
Similar forms for y = a, z = a

Inside/Outside in Screen Space
- In canonical screen space, the clip planes are xs = ±1, ys = ±1, zs = ±1
- Inside/outside reduces to comparisons before the perspective divide:
  $-w_s \le x_s \le w_s, \quad -w_s \le y_s \le w_s, \quad -w_s \le z_s \le w_s$
Clipping Lines
Cohen-Sutherland
Works basically the same as Sutherland-Hodgman
Clip line against each edge of clip region in turn

Finding intersection points = finding the intersection between the plane and the line (remember).

When are we Inside?

- If both endpoints outside, discard line and stop


- If both endpoints in, continue to next edge (or finish)
- If one in, one out, chop line at crossing pt and continue

We want parameter values that are inside all the clip planes. For any clip plane that we started inside, we must not have left yet (the first parameter value to leave is the end of the visible segment). For any clip plane that we started outside, we must have already entered (the last parameter value to enter is the start of the visible segment). If we leave some clip plane before we enter another, we cannot see any part of the line. All this leads to the Liang-Barsky algorithm.

Some cases lead to early acceptance or rejection:
- If both endpoints are inside all edges (accept)
- If both endpoints are outside one edge (reject)
General rule: if a fast test can cover many cases, do it first

Liang-Barsky Sub-Tasks (numbered steps appear below)


Details: only need to clip the line against edges where one endpoint is out.
Use an outcode to record each endpoint's in/out status with respect to each edge: one bit per edge, 1 if out, 0 if in.
- Trivial reject: outcode(x1) & outcode(x2) != 0
- Trivial accept: outcode(x1) | outcode(x2) == 0
- Which edges to clip against? outcode(x1) ^ outcode(x2)
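A sketch of the outcode bookkeeping (the bit assignment per edge is arbitrary; only the &, | and ^ tests matter):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, xmax, ymin, ymax):
    code = 0
    if x < xmin: code |= LEFT
    if x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    if y > ymax: code |= TOP
    return code

def classify(p1, p2, window=(0, 1, 0, 1)):
    c1, c2 = outcode(*p1, *window), outcode(*p2, *window)
    if c1 & c2:           # both endpoints share an outside region
        return "trivial reject"
    if (c1 | c2) == 0:    # both endpoints inside
        return "trivial accept"
    return f"clip against edges with code {c1 ^ c2:04b}"

print(classify((0.2, 0.5), (0.8, 0.4)))    # trivial accept
print(classify((-1.0, 2.0), (-0.5, 3.0)))  # trivial reject
```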


2. Find entering/leaving flags
3. Find the last parameter to enter, and the first one to leave (check that we enter before we leave)
4. Convert these into the endpoints of the clipped segment

First Step (Liang-Barsky)



- More efficient, as it does not compute coordinate values at irrelevant vertices
- Works for rectilinear clip regions in 2D or 3D
- Clipping conditions on the parameter: the line is inside the clip region for values of t such that (for 2D):
  $x_{min} \le x_1 + t\,\Delta x \le x_{max}, \quad y_{min} \le y_1 + t\,\Delta y \le y_{max}$, where $\Delta x = x_2 - x_1$, $\Delta y = y_2 - y_1$
- Writing each condition as $t\,p_k \le q_k$ gives, for the left, right, bottom and top edges respectively:
  $q_1 = x_1 - x_{min}, \quad q_2 = x_{max} - x_1, \quad q_3 = y_1 - y_{min}, \quad q_4 = y_{max} - y_1$, with $t_k = q_k / p_k$
- Compute entering t values: $q_k/p_k$ for each $p_k < 0$
- Compute leaving t values: $q_k/p_k$ for each $p_k > 0$
- Parameter value for the small-t end of the line: $t_{small} = \max(0,\ \text{entering } t\text{'s})$
- Parameter value for the large-t end of the line: $t_{large} = \min(1,\ \text{leaving } t\text{'s})$
- If $t_{small} < t_{large}$, there is a visible segment - compute its endpoints by substituting the t values
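A sketch of 2D Liang-Barsky following the p/q definitions above (p1 = -Δx, q1 = x1 - xmin, and so on; the function name is illustrative):

```python
def liang_barsky(x1, y1, x2, y2, xmin, xmax, ymin, ymax):
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                       # left, right, bottom, top
    q = [x1 - xmin, xmax - x1, y1 - ymin, ymax - y1]
    t_small, t_large = 0.0, 1.0
    for pk, qk in zip(p, q):
        if pk == 0:                              # line parallel to this edge
            if qk < 0:
                return None                      # entirely outside this edge
        elif pk < 0:                             # entering
            t_small = max(t_small, qk / pk)
        else:                                    # leaving
            t_large = min(t_large, qk / pk)
    if t_small >= t_large:
        return None                              # left one plane before entering another
    return ((x1 + t_small * dx, y1 + t_small * dy),
            (x1 + t_large * dx, y1 + t_large * dy))

print(liang_barsky(-1.0, 0.5, 2.0, 0.5, 0.0, 1.0, 0.0, 1.0))   # ((0.0, 0.5), (1.0, 0.5))
```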

About additive color: Blue + Red = Magenta; Red + Green + Blue = White

(Weiler traversal:)
- For an out-to-in pair, follow the usual rule
- For an in-to-out pair, follow the clip edge
- Easiest to start outside


General Clipping
Clipping general polygons against general polygons is quite hard.
Outline of Weiler algorithm:

About color: Green + Blue = Cyan; Red + Green = Yellow

Otherwise, find the parametric intersection. Left clip edge: x = e + (x_left - e)t; line: x = x1 + (x2 - x1)s. Solve the simultaneous equations in t and s.
- Compute t's for each edge in turn (some rejects occur earlier this way)


Compute inside/outside for the endpoints of the line segment: determine which side of each clip plane the segment endpoints lie on (use the cross product). If both segment endpoints are outside one clip plane, stop here if we can.

Use the endpoint inside/outside information to label each crossing as entering or leaving:
- When pk < 0, as t increases the line goes from outside to inside: entering
- When pk > 0, the line goes from inside to outside: leaving
- When pk = 0, the line is parallel to an edge (clipping is easy)

Weiler-Atherton Polygon Clipping
- Faster than Sutherland-Hodgman for complex polygons
- For a clockwise polygon:
- Find the parametric intersection points

Liang-Barsky Clipping
Parametric clipping: view the line in parametric form and reason about the parameter values
$p_1 = -\Delta x, \quad p_2 = \Delta x, \quad p_3 = -\Delta y, \quad p_4 = \Delta y$

1. Improvement (and actual Liang-Barsky):


Here w is a unit vector in the direction of a.

Just do the diagonalization.

- Consider the polygon as a list of vertices
- One side of the line/plane is considered inside the clip region, the other side is outside
- We are going to rewrite the polygon one vertex at a time; the rewritten polygon will be the polygon clipped to the line/plane
- Check the start vertex: if inside, emit it, otherwise ignore it
- Continue processing vertices as follows

If there is a segment of the line inside the clip region, the sequence of infinite-line intersections must go: enter, enter, leave, leave.
Algorithm:

Complete Perspective Projection
$M_{view \to screen} = M_O\, M_P$ (the orthographic matrix applied after the perspective matrix $M_P$ above)

To clip a polygon to a line/plane:

Inside-Outside Testing

- Clip polygon each time to line containing edge


- Only works for convex clip regions

- Edge crosses the clip line/plane from out to in: emit the crossing point, then the next vertex
- Edge crosses the clip line/plane from in to out: emit the crossing point only
- Edge goes from out to out: emit nothing
- Edge goes from in to in: emit the next vertex
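A sketch of clipping against one line at a time using those four emit rules (here each clip line is given by a point on it and an inward normal; the function names are illustrative):

```python
def clip_polygon(vertices, point, normal):
    # Clip a polygon (list of (x, y)) against the line through `point` with inward `normal`.
    def inside(v):
        return (v[0] - point[0]) * normal[0] + (v[1] - point[1]) * normal[1] >= 0

    def crossing(a, b):
        # Intersection of segment a-b with the clip line (parametric form).
        da = (a[0] - point[0]) * normal[0] + (a[1] - point[1]) * normal[1]
        db = (b[0] - point[0]) * normal[0] + (b[1] - point[1]) * normal[1]
        t = da / (da - db)
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

    out = []
    for i, nxt in enumerate(vertices):
        last = vertices[i - 1]
        if inside(nxt):
            if not inside(last):
                out.append(crossing(last, nxt))   # out -> in: emit crossing point, then vertex
            out.append(nxt)                       # in -> in: emit next vertex
        elif inside(last):
            out.append(crossing(last, nxt))       # in -> out: emit crossing point only
        # out -> out: emit nothing
    return out

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(clip_polygon(square, (1, 0), (-1, 0)))      # keeps the half with x <= 1
```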

- Specified with respect to the image plane, not the world

Sutherland-Hodgman Clip
Clip the polygon against each edge of the clip region in turn.
Look at the next vertex in the list, and the edge from the last vertex to the next; then apply the emit rules above.



Emission is what light sources do. Absorption is what prints and inks do. Emission produces light while absorption removes light. Note that absorption depends on such things as the surface and the substrate.
In printers, pigments are distributed on paper or another medium so that when light reflects from the paper it forms the desired image. For example, if you want to print red, the printer will apply an ink that absorbs blue and green and reflects red. You see the color of the paper, filtered by the ink. White paper - red - blue = green. Cyan = White - Red. Magenta = White - Green. Yellow = White - Blue. C + M + Y = White - White = Black. C + M = White - Red - Green = Blue.

Why is clipping done in canonical view space?
e.g. to check against the left plane:

(Orthographic projection:)
- Projection lines are perpendicular to the image plane
- Like a camera with infinite focal length

Know how subtractive color spaces (printer inks)


work.

- The normals to the clip planes point inward, toward the visible stuff

Orthographic Projection
Orthographic projection projects all the points in the world along parallel lines onto the image plane

If we wish to rotate about a vector a, we can form an orthonormal basis with w = a, rotate that basis to the canonical xyz basis, rotate about the z-axis, and then rotate back to the uvw basis. In matrix form, this rotates about the w-axis by an angle φ.

Clipping removes parts of the geometry outside the view


Best done in screen space before perspective divide (dividing
out the homogeneous coordinate)
Clipping Points
A point is inside the view volume if it is on the (inside) of all
the clipping planes


3D Rotation - another approach

- Replace crossing points with vertices


- Double all edges and form linked lists of edges
- Change links at vertices
- Enumerate polygon patches

Can use clipping to break concave polygon into convex pieces;


main issue is inside-outside for edges
