
Visible Surface Detection
Visible Surface Detection
● Visible surface detection is also known as hidden surface removal.
● In realistic scenes, closer objects occlude the others.
● Classification:
– Object space methods
– Image space methods
Occlusion: Full, Partial, None

[Figure: three cases of a rectangle occluding a triangle: full, partial, none]

• The rectangle is closer than the triangle, so it should appear in front of the triangle.
Object Space Methods
● Algorithms to determine which parts of the shapes
are to be rendered in 3D coordinates.
● Methods based on comparison of objects for their
3D positions and dimensions with respect to a
viewing position.
● For n objects, may require n*n comparision
operations.
● Efficient for small number of objects but difficult to
implement.
● Depth sorting, Back face detection methods.
Image Space Methods
● Based on the pixels to be drawn in 2D: try to
determine which object should contribute to each
pixel.
● Running time complexity is (number of pixels in
display) * (number of objects) = N*n.
● Space complexity is two times the number of pixels:
– One array of pixels for the frame buffer
– One array of pixels for the depth buffer
● Coherence properties of surfaces can be used.
● Depth-buffer and ray casting methods.
Depth Cueing
● Hidden surfaces are not removed but are displayed with
different effects, such as intensity, color, or shadow,
to give a hint of the object's third dimension.
● Simplest solution: use different colors/intensities
based on the dimensions of the shapes.
Back-Face Detection

• A simple object-space algorithm is back-face removal (or back-face
culling), where no faces on the back of the object are displayed.
• Since, in general, about half of the faces of objects are back faces, this
algorithm removes about half of the total polygons in the image.
There are two methods to compute this:
1. Put the center of projection into the plane equation and determine
whether it is inside or outside.
2. If the angle between the plane normal (N) and the vector (V) from any point on
the plane to the center of projection is < 90° (i.e., N·V > 0), then the
plane is visible; otherwise it is a back face.
Back-Face Detection
An object can be well approximated using
polyhedrons (e.g. a tetrahedron).
A smooth surface can be well approximated using small
polygons (e.g. triangles), such that the normal to the plane of each
elemental polygon represents the average surface normal
at that point.
Back-Face Detection
If any three points (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) on a
plane surface are known, the unknown parameters A, B, C
and D of the plane surface equation

Ax + By + Cz + D = 0

can be computed as determinants, which expand to

A = (y1 − y2)(z2 − z3) − (z1 − z2)(y2 − y3)
B = (z1 − z2)(x2 − x3) − (x1 − x2)(z2 − z3)
C = (x1 − x2)(y2 − y3) − (y1 − y2)(x2 − x3)
D = −(A·x1 + B·y1 + C·z1), i.e. minus the determinant of the
3×3 matrix whose rows are the three points.
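As a quick illustration, the determinant expansion above reduces to a few multiplications in code. A minimal Python sketch (the function name is illustrative, not from the slides):

```python
def plane_coefficients(p1, p2, p3):
    """A, B, C, D of the plane Ax + By + Cz + D = 0 through three
    points, using the determinant expansion given above."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    a = (y1 - y2) * (z2 - z3) - (z1 - z2) * (y2 - y3)
    b = (z1 - z2) * (x2 - x3) - (x1 - x2) * (z2 - z3)
    c = (x1 - x2) * (y2 - y3) - (y1 - y2) * (x2 - x3)
    d = -(a * x1 + b * y1 + c * z1)   # the plane passes through p1
    return a, b, c, d

# A point (x, y, z) is inside or on the plane if A*x + B*y + C*z + D <= 0
# (the inside-outside test of the next slide).
```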
Back face detection
● A fast and simple object-space method for
identifying the back faces of a polyhedron is
based on the inside-outside test (whether a given
point is inside or outside a plane/surface).
● Let the plane parameters be A, B, C, D; then a
point (x, y, z) is inside or on
the plane if
Ax + By + Cz + D ≤ 0
Back-Face Detection
● Back-face detection of 3D polygon surface is
easy
● Recall the polygon surface equation:

Ax + By + Cz + D = 0
● We need to also consider the viewing
direction when determining whether a surface
is back-face or front-face.
Backface Culling
● Avoid drawing polygons facing away from the viewer:
front-facing polygons occlude these polygons in a closed
polyhedron.
● How can we test whether a polygon is front- or back-facing?

[Figure: a closed polyhedron with one back-facing and one front-facing polygon marked]
Detecting Back-face Polygons
● The polygon normal of a front-facing polygon points towards
the viewer; the normal of a back-facing polygon points away
from the viewer.
● In other words, a polygon is front-facing if its surface normal
vector n points towards the viewpoint; otherwise it is back-facing.
● Using n · V = |n| |V| cos θ, where V is the view vector (the viewing direction):
If (n · V) > 0 → back-face (θ < π/2)
If (n · V) ≤ 0 → front-face (θ > π/2)

[Figure: a back-facing polygon with θ < π/2 between n and V, and a front-facing polygon with θ > π/2]
Back-Face Detection
● A polygon surface is a back face if:

V_view · N > 0

● V_view = viewing vector, N = normal vector of the polygon surface.
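As a minimal sketch in Python (assuming N is the polygon's outward normal and V_view the viewing direction, as above; the names are illustrative):

```python
def is_back_face(v_view, normal):
    """Back-face test: the polygon faces away from the viewer
    when V_view . N > 0."""
    vx, vy, vz = v_view
    nx, ny, nz = normal
    return vx * nx + vy * ny + vz * nz > 0

# Viewing along the negative z axis, V_view = (0, 0, -1), the test
# reduces to -C > 0, i.e. C < 0, matching the C <= 0 rule below.
print(is_back_face((0, 0, -1), (0, 0, -1)))  # True: faces away from viewer
```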
Back-Face Detection

● In a right-handed viewing system with the viewing
direction along the negative z_v axis, a polygon is a
back face if C < 0, where N = (A, B, C).
● We will also be unable to see surfaces with
C = 0. Therefore, we can identify a polygon
surface as a back face if:
C ≤ 0
Back-Face Detection
● Back-face detection can identify all the hidden
surfaces in a scene that contains only non-
overlapping convex polyhedra.
● For scenes containing objects that overlap along
the line of sight, we have to apply further tests to
determine which objects obscure which.
Painter’s Algorithm
● The method used by this algorithm is similar to
the way a painter creates an oil painting on a
canvas sheet.
● The painter first paints the layer that is to be
shown as the background, and then sequentially
paints the objects in the foreground, layer by
layer, up to the nearest object.
● This algorithm is a combination of object-space
and image-space methods.
Painter’s Algorithm
● First, the triangles are appropriately numbered
and stored in an array.
● A depth order is created for each triangle which
overlaps others.

[Figure: a scene of six numbered, overlapping triangles]

Depth order table

Triangle | Behind counter | List of triangles in front of it
1        | 2              | 5
2        | 0              | 4, 3, 1
3        | 2              | 5, 1
4        | 1              | 3
5        | 2              |
6        | 0              |
Flow of algorithm
(A triangle whose behind counter is 0 is output; its counter is then set to -1, and the behind counter of every triangle in its front list is decremented by 1.)

Step 1 (triangles 2 and 6 are output)

Triangle | List of triangles in front of it | Behind counter | Output | New behind counter
1        | 5                                | 2              |        | 1
2        | 4, 3, 1                          | 0              | 2      | -1
3        | 5, 1                             | 2              |        | 1
4        | 3                                | 1              |        | 0
5        |                                  | 2              |        | 2
6        |                                  | 0              | 6      | -1
Step 2 (triangle 4 is output)

Triangle | List of triangles in front of it | Behind counter | Output | New behind counter
1        | 5                                | 1              |        | 1
2        | 4, 3, 1                          | -1             |        | -1
3        | 5, 1                             | 1              |        | 0
4        | 3                                | 0              | 4      | -1
5        |                                  | 2              |        | 2
6        |                                  | -1             |        | -1
Step 3 (triangle 3 is output)

Triangle | List of triangles in front of it | Behind counter | Output | New behind counter
1        | 5                                | 1              |        | 0
2        | 4, 3, 1                          | -1             |        | -1
3        | 5, 1                             | 0              | 3      | -1
4        | 3                                | -1             |        | -1
5        |                                  | 2              |        | 1
6        |                                  | -1             |        | -1
Step 4 (triangle 1 is output)

Triangle | List of triangles in front of it | Behind counter | Output | New behind counter
1        | 5                                | 0              | 1      | -1
2        | 4, 3, 1                          | -1             |        | -1
3        | 5, 1                             | -1             |        | -1
4        | 3                                | -1             |        | -1
5        |                                  | 1              |        | 0
6        |                                  | -1             |        | -1
Step 5 (triangle 5 is output)

Triangle | List of triangles in front of it | Behind counter | Output | New behind counter
1        | 5                                | -1             |        | -1
2        | 4, 3, 1                          | -1             |        | -1
3        | 5, 1                             | -1             |        | -1
4        | 3                                | -1             |        | -1
5        |                                  | 0              | 5      | -1
6        |                                  | -1             |        | -1

The final back-to-front output order is therefore 2, 6, 4, 3, 1, 5.
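The behind-counter bookkeeping in the tables above can be expressed compactly in code. Below is a minimal Python sketch (the front lists are taken from the depth order table; the structure is an illustrative reading of the slides, assuming no cyclic overlaps):

```python
# Front lists from the depth order table: front[t] = triangles in front of t.
front = {1: [5], 2: [4, 3, 1], 3: [5, 1], 4: [3], 5: [], 6: []}

# Behind counter of t = number of triangles behind t that must be
# drawn first, i.e. how many front lists t appears in.
behind = {t: 0 for t in front}
for t, in_front in front.items():
    for f in in_front:
        behind[f] += 1

order = []
while len(order) < len(front):
    # Output every triangle whose behind counter has reached 0.
    ready = [t for t, c in behind.items() if c == 0]
    for t in sorted(ready):
        order.append(t)
        behind[t] = -1          # -1 marks "already output"
        for f in front[t]:      # triangles in front of t get one step closer
            behind[f] -= 1

print(order)  # [2, 6, 4, 3, 1, 5] -- back-to-front drawing order
```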
Painter’s algorithm

● This method is well suited for a set


of triangles which do not intersect or
penetrate each other or any cyclic
over lapping.
● In such cases depth order doesn’t
exist.
● The algorithm can be extended to
work even with such cases, if the
intersecting polygones decomposed
to smaller ones from the point of
intersection of the planes and the
depth order is prepared.
Z- Buffer/ Depth-Buffer Method
● The z-buffer (depth buffer) method is one of the
simplest algorithms for hidden surface removal.
● This technique was originally proposed by
Catmull.
● It is an image space method.
● It can be implemented very effectively, especially
for objects whose faces can be described
individually as planar surfaces.
Z- Buffer/ Depth-Buffer Method

● Each point (x, y, z) on a planar surface
corresponds to the orthographic projection
point (x, y). Object depths can therefore be
compared by comparing the z values of all
object points that project to the same (x, y);
the point with the greatest depth value is the
visible one.
Depth-Buffer Method
Z- Buffer/ Depth-Buffer Method
● The z-buffer is a separate depth buffer used to
store the z-coordinate or depth of every visible
pixel in image space.
● The depth or z-value of a new pixel to be
written to a frame buffer is compared to the
depth or z-value of the pixel already stored in
the z-buffer.
● The depth values relative to the viewing plane
can be calculated using the plane equation.
Ax + By + Cz + D = 0

z(x, y) = (−Ax − By − D) / C
Depth-Buffer Method
● Two buffers are used
– Frame Buffer
– Depth Buffer
● The z-coordinates (depth values) are usually
normalized to the range [0,1]
Z buffer algorithm
Step 1: Initialize frame buffer to background colour.
Step 2: Initialize z-buffer to minimum z value.
Step 3: Scan-convert each polygon in arbitrary
order.
Step 4: For each (x, y) pixel of the polygon, calculate
the depth z at that pixel, z(x, y).
Step 5: Compare the newly calculated depth z(x, y)
with the value previously stored in the z-buffer at
location (x, y).
● Step 6: If the new z(x, y) is greater than the stored
value (the new point is closer), write the new
depth value to the z-buffer and update the frame
buffer.
● Step 7: Otherwise, no action is taken.
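These steps can be sketched in a few lines of Python (a sketch under the slides' conventions: depths normalized so that a larger z is closer; the scan-conversion that produces pixel fragments is assumed, not shown):

```python
WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

# Steps 1-2: frame buffer starts at the background colour,
# z-buffer at the minimum depth value.
frame = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]
zbuf = [[0.0] * WIDTH for _ in range(HEIGHT)]

def write_fragments(fragments):
    """Steps 3-7: fragments is an iterable of (x, y, z, colour)
    tuples produced by scan-converting the polygons in any order."""
    for x, y, z, colour in fragments:
        if z > zbuf[y][x]:        # Step 6: the new point is closer
            zbuf[y][x] = z
            frame[y][x] = colour
        # Step 7: otherwise no action is taken
```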
Calculating depth values efficiently
● For any scan line, adjacent horizontal x
positions or vertical y positions differ by 1
unit.
● The depth value of the next position (x+1, y)
on the scan line can be obtained incrementally:

z′ = (−A(x + 1) − By − D) / C = z − A/C
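In code, the division by C is done once per scan line; each step in x then costs one subtraction. A small sketch, with A, B, C, D the plane coefficients from before:

```python
def scanline_depths(a, b, c, d, y, x_start, x_end):
    """Yield (x, z) along one scan line using the increment z' = z - A/C."""
    z = (-a * x_start - b * y - d) / c   # full evaluation only once
    step = a / c
    for x in range(x_start, x_end + 1):
        yield x, z
        z -= step                        # incremental update for x + 1
```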
Z-buffer - Example

[Figure: an 8×8 screen and its z-buffer. For a polygon parallel to the image plane, every covered pixel receives the same depth value; for a polygon not parallel to the image plane, each covered pixel receives its own interpolated depth value.]
Z-Buffer Algorithm / Depth-Buffer Method
● The algorithm easily handles cases such as the one shown.

[Figure: view from the right-side]
A-Buffer Method
● Extends the depth-buffer algorithm so that
each position in the buffer can reference a
linked list of surfaces.
● More memory is required
● However, we can correctly compose different
surface colors and handle transparent
surfaces.
A-Buffer Method
● Each position in the A-buffer has two fields:
– a depth field: stores a positive or negative
real number
– an intensity field: stores surface intensity
information or a pointer value
A-Buffer Method
● If the depth field is positive, the number
stored at that position is the depth of a single
surface overlapping the corresponding pixel
area. The intensity field then stores the RGB
components of the surface color at that point
and the percent of pixel coverage.
A-Buffer Method
● If the depth field is negative, this indicates multiple-surface
contributions to the pixel intensity.
● The intensity field then stores a pointer to a linked list of
surface data.
● Data for each surface in the linked list includes:
– RGB intensity components
– Opacity parameter
– Depth
– Percent of area coverage
– Surface identifier
– Other surface-rendering parameters
– Pointer to next surface
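A hedged sketch of this two-field layout in Python (the field names are illustrative; a real A-buffer packs these fields far more compactly):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceData:
    """One node of the per-pixel linked list of surface contributions."""
    rgb: tuple             # RGB intensity components
    opacity: float         # opacity parameter
    depth: float
    coverage: float        # percent of pixel area covered
    surface_id: int
    next: Optional["SurfaceData"] = None   # pointer to next surface

@dataclass
class APixel:
    depth: float           # >= 0: depth of a single surface;
                           # < 0: multiple surfaces contribute
    intensity: object      # single surface: (rgb, coverage);
                           # multiple surfaces: head of the SurfaceData list
```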
WARNOCK’S ALGORITHM/AREA SUBDIVISION
METHOD
● It was developed by Warnock and uses a divide-and-
conquer strategy. The algorithm has two steps:
● Step 1: First of all, decide which polygons are visible in the
area; if this can be decided, they are displayed.
● Step 2: Otherwise, the area is divided into four equal areas, and
in each area the polygons are tested again to determine
which ones should be displayed. If a visibility decision
cannot be made for an area, it is subdivided further,
until either a visibility decision can be made or until the
screen area is a single pixel. This method is also known as
the Quadtree method.
WARNOCK’S ALGORITHM/AREA SUBDIVISION
METHOD
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD
● There are 4 possible relationships that a
surface can have with a specified area boundary.
They are as follows:
1. Surrounding surface: a polygon surrounds
a viewport if it completely encloses or covers
the viewport.
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD
2. Inside surface (contained): a polygon is
contained in a viewport if no part of it is outside
any of the edges of the viewport.
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD
3. Disjoint surface: a polygon is disjoint
from the viewport if the x-extent and y-extent
of the polygon do not overlap the viewport
anywhere.
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD

4. Overlapping (or intersecting) surface: a
polygon overlaps or intersects the viewport
if any of its edges cuts an edge of the
viewport.
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD
So, for a given area,
● If polygons are disjoint then the background
colour fills the area.
● If there is a single contained or intersecting
polygon, then the area is first filled with the
background colour, and then the part of the
polygon contained in the area is filled with the
colour of that polygon.
● If there is a single surrounding polygon and no
intersecting or contained polygons then the area
is filled with the colour of the surrounding
polygon.
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD
● If there is a single surrounding polygon that is in front of every other
surrounding, intersecting, or contained polygon, then the area is
filled with the colour of that front surrounding polygon.
● Otherwise break the area into four equal parts and repeat.
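A minimal Python sketch of the subdivision logic above, using axis-aligned rectangles as stand-ins for polygons so that the four classifications stay short (all names, and the depth convention larger = closer, are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned rectangle, used both for 'polygons' and for areas."""
    x0: float
    y0: float
    x1: float
    y1: float
    depth: float = 0.0      # larger = closer to the viewer
    colour: str = "bg"

def classify(p, a):
    """One of the four relationships of polygon p to area a."""
    if p.x1 <= a.x0 or p.x0 >= a.x1 or p.y1 <= a.y0 or p.y0 >= a.y1:
        return "disjoint"
    if p.x0 <= a.x0 and p.y0 <= a.y0 and p.x1 >= a.x1 and p.y1 >= a.y1:
        return "surrounding"
    if p.x0 >= a.x0 and p.y0 >= a.y0 and p.x1 <= a.x1 and p.y1 <= a.y1:
        return "contained"
    return "intersecting"

def warnock(polys, area, paint, min_size=1.0):
    """Recursive area subdivision; paint(area, colour) fills an area."""
    relevant = [p for p in polys if classify(p, area) != "disjoint"]
    surrounding = [p for p in relevant if classify(p, area) == "surrounding"]

    if not relevant:                         # all polygons are disjoint
        paint(area, "bg")
        return
    if len(relevant) == 1 and not surrounding:
        paint(area, "bg")                    # background first, then the
        paint(area, relevant[0].colour)      # polygon (a real renderer
        return                               # would clip it to the area)
    if surrounding:
        front = max(surrounding, key=lambda p: p.depth)
        if all(front.depth >= p.depth for p in relevant):
            paint(area, front.colour)        # one surrounder is in front
            return
    if area.x1 - area.x0 <= min_size:        # pixel-sized: pick the closest
        paint(area, max(relevant, key=lambda p: p.depth).colour)
        return
    mx, my = (area.x0 + area.x1) / 2, (area.y0 + area.y1) / 2
    for sub in (Rect(area.x0, area.y0, mx, my), Rect(mx, area.y0, area.x1, my),
                Rect(area.x0, my, mx, area.y1), Rect(mx, my, area.x1, area.y1)):
        warnock(polys, sub, paint, min_size)  # four equal parts, repeat
```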
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD

● Advantages of Warnock's algorithm:
1. An extra memory buffer is not needed.
2. Since it follows a divide-and-conquer strategy,
a parallel computer can be used to speed up
the process.
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD
WARNOCK’S ALGORITHM/AREA
SUBDIVISION METHOD
A binary space partitioning (BSP)

● A binary space partitioning (BSP) tree is
an efficient method for determining
object visibility by painting surfaces onto the
screen from back to front, as in the painter's
algorithm.
● The basic idea is to sort the polygons for
display in back-to-front order.
● The BSP tree is particularly useful when the
view reference point changes but the objects
in the scene are at fixed positions.
BSP Algorithm

The BSP tree method is a two-step process:
1. Construction of the BSP tree.
2. Display of the tree.
BSP Algorithm

● Step 1: Construction of the BSP tree
● The BSP algorithm recursively subdivides
space into two half-spaces. The dividing
plane is the plane of one of the polygons in the
scene; the other polygons are placed
in the appropriate half-space.
● For example, if we consider polygon A as the
dividing plane, then on the front side of A the
polygons will be B and C, whereas on the back
side of A the polygons will be D and E.
BSP Algorithm
BSP Algorithm

● Each half-space is then recursively subdivided,
using one of the polygons in that half-space as
the separating plane.
● This process of space subdivision continues until
there is only one polygon in each half-space.
The subdivided space is then represented by a
binary tree with the original polygon as the root.
BSP Algorithm
BSP Algorithm
BSP Algorithm

● Step 2: Display of the tree
● Our aim is to display first the polygons which are
farther away from the viewer, and then the
polygons which are nearer.
● So there should be some relation between the
viewpoint and the root of the tree. The relation
is: traverse the BSP tree in an in-order way, i.e.
visit one branch, then the root, and then the
other branch, choosing the order at each node
by which side of the node's plane the viewpoint is on.
BSP Algorithm

● If the view point is in front of the root


polygon, then BSP tree is traversed in the
order of back branch, root, front branch i.e.
reverse of in order traversal.
● For example, if the viewpoint is in front of the
root polygon then the sequence of polygons
for display will be
BSP Algorithm

● If the viewpoint is behind the root polygon,
then the BSP tree is traversed in the order: front
branch, root, back branch, i.e. a normal in-order
traversal.
● For example, if the viewpoint is behind the
root polygon, then the sequence of
polygons for display will be as shown.
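A minimal Python sketch of both steps (the side_of, eye_in_front and draw callbacks are assumed; splitting of polygons that straddle the dividing plane is omitted):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    poly: object                    # the partitioning polygon
    front: Optional["Node"] = None
    back: Optional["Node"] = None

def build_bsp(polys, side_of):
    """side_of(root, p) > 0 if p lies on the front side of root's plane."""
    if not polys:
        return None
    root, rest = polys[0], polys[1:]
    front = [p for p in rest if side_of(root, p) > 0]
    back = [p for p in rest if side_of(root, p) <= 0]
    return Node(root, build_bsp(front, side_of), build_bsp(back, side_of))

def display(node, eye_in_front, draw):
    """Back-to-front traversal: the subtree on the far side of each
    node's plane is drawn first, then the node, then the near side."""
    if node is None:
        return
    if eye_in_front(node.poly):     # viewer in front: back, root, front
        display(node.back, eye_in_front, draw)
        draw(node.poly)
        display(node.front, eye_in_front, draw)
    else:                           # viewer behind: front, root, back
        display(node.front, eye_in_front, draw)
        draw(node.poly)
        display(node.back, eye_in_front, draw)
```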
BSP Algorithm

The main advantage of the BSP method is:
● We are not dealing with z-coordinates at all to
decide the priority of the polygons for display.
The main disadvantage of the BSP method is:
● Waste of computational time, as we also perform
calculations for objects which are hidden.
Ray tracing/ Ray casting method

● The algorithm is based on the principles of
light and optics.
● In the ray-tracing algorithm, the basic idea is to
trace light rays and determine which ones
arrive back at the eye or viewpoint.
● Since this would involve an infinite number of light
rays,
Ray tracing / Ray casting method
● we work backward, i.e. we trace a ray from
the viewpoint through a pixel until it reaches
a surface.
● Since this represents the first surface seen at
the given pixel, we set the pixel to the color
of the surface at the point where the light ray
strikes it.
Ray tracing / Ray casting method
● Each ray is tested for intersections with each
object in the picture, including the non-clipping
plane.
● Since each ray can intersect several objects,
we find the intersection point I which is
closest to the viewpoint.
● We set the pixel belonging to the given ray to
the color of the surface on which this point I
lies.
● This represents the first surface intersected by
the ray. We repeat this process for each pixel.
Ray tracing / Ray casting method
● The computational expense can be reduced
by using the extent or bounding box of an
object or surface.
● If a ray does not intersect a bounding box,
there is no need to check for intersections
with the enclosed surface.
● The most important reason for using the ray-
tracing method is to create extremely realistic
renderings of pictures by incorporating the laws
of optics for reflecting and transmitting light
rays.
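A minimal Python sketch of this pixel loop, with the bounding-box rejection included (the pixel_ray function and the objects' bbox_hit/intersect/colour_at interfaces are assumptions for illustration):

```python
def ray_cast(width, height, eye, pixel_ray, objects, background):
    """For each pixel, trace a ray from the eye through the pixel and
    keep the intersection closest to the viewpoint."""
    image = [[background] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            ray = pixel_ray(x, y)             # direction through pixel (x, y)
            nearest_t, nearest_obj = float("inf"), None
            for obj in objects:
                # Bounding-box rejection avoids the costly full test.
                if not obj.bbox_hit(eye, ray):
                    continue
                t = obj.intersect(eye, ray)   # distance to hit, or None
                if t is not None and t < nearest_t:
                    nearest_t, nearest_obj = t, obj
            if nearest_obj is not None:       # first surface seen at the pixel
                image[y][x] = nearest_obj.colour_at(eye, ray, nearest_t)
    return image
```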
Ray tracing / Ray casting method
Coloring
● Light is an electromagnetic wave and has wave
properties such as frequency and wavelength.
● Human eyes are able to detect differences in
wavelength only over a small range, from
400 to 700 nm.
● Each wavelength appears to the human eye as
a color of the rainbow, ranging from violet at
400 nm to red at 700 nm.
● The eye contains three different types of
cones (red, blue and green). These three
colors are the primary colors.
Color models(RGB)

● The RGB model is usually


represented by a unit cube
with one corner located at
the origin of a 3D color
coordinate system.
● The axes are labelled R, G and B,
each with the range of
values [0, 1].
● The origin (0, 0, 0) is
considered black and
diagonally opposite corner
(1, 1, 1) is called white.
Colour models
Color models(RGB)
● The line joining black to
white represents the gray
scale and has equal
components of R, G, B.
● It is also called an
additive color model, as
we add the three color
components together to
form any color.
Color models(RGB)
● RGB is a basic color model used in TV, on
computers and for web graphics but it cannot
be used for print production.
● In RGB color model, the higher the values of
R, G, and B, the brighter is the color. If R = G
= B, the color will be a shade of gray.
Color models(RGB)

● Our systems will generally display RGB using
24-bit color.
● In the 24-bit RGB color model there are 256
variations for each of the additive colors.
● The RGB color model thus has 256 (reds) * 256
(greens) * 256 (blues) = 16,777,216
possible colors.
Color models(RGB)
● Actually, in RGB model, colors are
represented by varying intensities of R, G and
B light.
● The intensity of each of R, G and B
components are represented on a scale from
0 to 255,
Color models(RGB)
● 0 – least intensity (no light emitted)
● 255 – maximum intensity
● 127 – half intensity
● (255, 0, 0) is the brightest red.
● (0, 255, 0) is the brightest green.
● (0, 0, 255) is the brightest blue.
● (255, 255, 255) is the brightest white.
Color models(RGB)
● (127, 127, 127) is gray.
● (255, 255, 0) is yellow.
● (255, 0, 255) is magenta.
Color models(CMY)
Color models(CMY)
● Cyan, Magenta and Yellow are the primary
colors of this particular subtractive model.
This model is useful for describing color
output to hard-copy devices.
● For example, cyan is formed by adding blue
and green light; when white light is reflected
from cyan-colored ink, the reflected light
has no red component, i.e. red is
absorbed by the ink.
Color models(CMY)
● Cyan absorbs red, so C is sometimes called
−R, i.e. minus red.
● Similarly, magenta ink subtracts the green
component from the incident light (so M is
−G) and yellow subtracts the blue component
(so Y is −B).
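In code, this subtractive relationship is just a complement (a sketch with components normalized to [0, 1]):

```python
def rgb_to_cmy(r, g, b):
    """Subtractive complement: C = 1 - R, M = 1 - G, Y = 1 - B."""
    return 1.0 - r, 1.0 - g, 1.0 - b

# Pure red -> (0, 1, 1): no cyan ink, since cyan would absorb red.
print(rgb_to_cmy(1.0, 0.0, 0.0))
```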
Color models(CMY)
CMYK Model/Process
Model/Substrate Model
● CMYK is short for Cyan, Magenta, Yellow and
Key (black ink) and is a subtractive color model.
It is also known as a process model or substrate
model.
● CMY alone cannot reproduce the brightness of RGB
colors.
● This model works by partially or entirely
masking certain colors on the typically white
background, i.e. by absorbing particular
wavelengths of light.
CMYK Model/Process
Model/Substrate Model
● It is a subtractive model because inks
"subtract" brightness from white.
● In the RGB model discussed earlier, white is the
"additive" combination of all primary colored
lights, while black is the absence of light.
● In the CMYK model it is just the opposite: white
is the natural color of the paper or
other background, while black results from a
full combination of colored inks.
● But because of impurities in the inks, combining
CMY inks produces a muddy
brown color,
CMYK Model/Process
Model/Substrate Model
● so black ink is added to this system to
compensate for these impurities. To
provide genuine black, printers add black ink,
which is denoted K, i.e. the CMYK model.
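The slides motivate K but give no formula; a standard conversion pulls the common gray component of C, M, Y out as black ink (a sketch):

```python
def cmy_to_cmyk(c, m, y):
    """Extract the common gray component as black ink K, then
    rescale the remaining C, M, Y."""
    k = min(c, m, y)
    if k == 1.0:                  # pure black: only K is needed
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k
```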
CMYK Model/Process
Model/Substrate Model
HSV Color Model
The Hue (H), Saturation (S) and Value (V) or HSV color model was
created in 1978 by A. R. Smith.
HSV Color Model
HSV Color Model
● Value is defined here as the intensity of the
maximum of the RGB components of the color.
● Viewing the RGB cube along its main diagonal
gives it the shape of a hexagon. This boundary
represents the various hues and is used as the
top of the HSV hexcone.
● In the hexagon, saturation is measured
along the horizontal axis and value along
the vertical axis through the centre of the
hexagon.
HSV Color Model
● Hue is represented as an angle about the
vertical axis, from 0° at red through 360°. As
it is a hexagon, the vertices are separated by
60° intervals.
● Saturation S varies from 0 to 1; for this
model it is the ratio of the purity of a color
to its maximum purity at S = 1.
● The value V also varies from 0 to 1: V = 0
indicates black, and at V = 1 the colors at the
top of the hexcone have maximum intensity.
When V = 1 and S = 1, we have the "pure"
hues; white is the point at V = 1 and S = 0.
HSV Color Model
● The following points may be noted
● 1. The hue (H) is given by the angle about
the vertical axis, with red at 0°, yellow at 60°,
green at 120°, cyan at 180°, blue at 240°
and magenta at 300°.
● Please note here that complementary
colors are 180° apart, i.e. diagonally opposite:
(red + cyan), (blue + yellow), (green +
magenta).
HSV Color Model
● 2. The vertical axis is called value (V),
where 0.0 ≤ V ≤ 1.0. At V = 0 we have black,
and at V = 1 we have white.
● 3. The horizontal axis represents saturation
(S), where 0.0 ≤ S ≤ 1.0. It gives the purity of a
color: the ratio of the purity of the related hue
to its maximum purity at S = 1 and V = 1.
● At S = 0, we have the gray scale, i.e. the
diagonal of the RGB cube corresponds to the V
axis of the HSV hexcone.
● At S = V = 1, we have a pure hue.
HSV Color Model
● To add black, decrease V; to add white,
decrease S. To add both black and white,
decrease V and S.
● For example:
● For pure green: H = 120°, S = V = 1
● For dark green: H = 120°, S = 1, V = 0.40
● For light green: H = 120°, S = 0.3, V = 1.0
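These examples can be checked with Python's standard colorsys module, which expects hue in [0, 1] (i.e. degrees / 360):

```python
import colorsys

for name, h_deg, s, v in [("pure green", 120, 1.0, 1.0),
                          ("dark green", 120, 1.0, 0.40),
                          ("light green", 120, 0.3, 1.0)]:
    r, g, b = colorsys.hsv_to_rgb(h_deg / 360.0, s, v)
    print(f"{name}: RGB = ({r:.2f}, {g:.2f}, {b:.2f})")

# pure green  -> (0.00, 1.00, 0.00)
# dark green  -> (0.00, 0.40, 0.00)   lower V adds black
# light green -> (0.70, 1.00, 0.70)   lower S adds white
```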
YIQ Color Model
● The YIQ colour model is used in U.S.
commercial colour television broadcasting and
is therefore closely related to colour raster
graphics.
● YIQ is a recoding of RGB for transmission
efficiency and for downward compatibility
with black-and-white television. The recoded
signal is transmitted using the National
Television System Committee (NTSC) system.
YIQ Color Model
● In the YIQ colour model, parameter Y is the
same as in the XYZ model. Luminance
information is contained in the Y parameter,
while chromaticity information (hue and
purity) is incorporated into the I and Q
parameters.
● A combination of red, green and blue
intensities is chosen for the Y parameter to
yield the standard luminosity curve.
● Since Y contains luminance information,
black-and-white television monitors use only
the Y signal.
YIQ Color Model
● The largest bandwidth in the NTSC video
signal (about 4 MHz) is assigned to the Y
information.
● Parameter I contains orange-cyan information
that provides the flesh-tone shading and
occupies a bandwidth of approximately 1.5
MHz. Parameter Q carries green-magenta hue
information in a bandwidth of about 0.6 MHz.
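Python's standard colorsys module implements this RGB-to-YIQ recoding (with luminance weights of roughly 0.30, 0.59 and 0.11), which makes the claims above easy to check:

```python
import colorsys

# Black-and-white receivers use only the Y (luminance) component.
y, i, q = colorsys.rgb_to_yiq(1.0, 1.0, 1.0)
print(y, i, q)   # white: Y = 1.0, I = Q = 0 (no chromaticity)

y, _, _ = colorsys.rgb_to_yiq(1.0, 0.0, 0.0)
print(y)         # red alone contributes about 0.30 to luminance
```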
POLYGON-RENDERING METHOD
● Each polygon can be rendered with a single
intensity, or the intensity can be obtained at
each point of the surface.
● Illumination models (also called lighting models
or shading models) are used to calculate the
intensity of light at a given point on the
surface of an object.
● They are also known as surface-rendering
methods.
Constant – Intensity Shading
● Any surface of an object can be shaded pixel
by pixel, by calculating the surface normal vector,
obtaining its dot product with the
light vector and thereby calculating the
intensity of that pixel. But this is a slow
process.
● To solve this problem, we divide the
surface into a polygon mesh. Please note
that the intensity calculation is then applied
only at each vertex of a polygon, and the
intermediate points are interpolated.
Polygon mesh
Constant – Intensity Shading
● The fastest and simplest method of polygon
shading is constant shading, also called faceted
shading or flat shading.
● In this method, the illumination model is
applied only once per polygon to
determine a single intensity value. The entire
polygon is then displayed with that single
intensity value.
Z-flat shading.
Z-flat shading.

● It can be implemented by giving a polygon a
color intensity value that depends on the z-
coordinate average of the polygon's vertices,
scaled by a constant 'a'.
Z-flat shading.
● Here 'a' is a number chosen so that the scaled
z-average lies in the range from 0 to the
maximum intensity.
● It assumes that z is directed into the plane of
the screen. Hence, the intensity (I) reduces as
the point is farther away into the screen.
Lambert Flat Shading
● This method uses a real light source. It uses the
surface normal vector n for the intensity, where
n is the vector perpendicular to the surface
plane. Please note that n may be directed out
of or into the surface plane. The method involves
the following steps:
● Step 1. The surface is divided into polygon
meshes, preferably triangles, for shading.
● Step 2. The unit surface normal vector of each
polygon is calculated.
Lambert Flat Shading
● Step 3. Take the dot (scalar) product of the
unit normal with the light-source unit vector;
this gives the cosine of the angle between the
surface normal and the light-source vector.
● Step 4. Multiply it by the maximum color
intensity.
● Step 5. This intensity value is applied
uniformly to the polygon surface.
Lambert Flat Shading
● Please note that the greater the number of
polygons, the smaller the abrupt changes in
color, and the smoother the surface will appear.
● Also note that increasing the number of
polygons increases the calculation and
reduces the shading speed.
Lambert Flat Shading
L · n = |L| |n| cos θ = Lx·nx + Ly·ny + Lz·nz

L = incident light unit vector
n = unit normal vector to the surface

Hence the intensity I = I_max × (L · n).
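A minimal Python sketch of the five steps for a single polygon (clamping the intensity at zero for faces turned away from the light is a common convention the slides leave implicit):

```python
import math

def lambert_intensity(light_dir, normal, i_max=255):
    """Flat-shade a polygon: I = I_max * (L . n), clamped at zero."""
    lx, ly, lz = light_dir
    nx, ny, nz = normal
    # Normalize both vectors so the dot product is cos(theta).
    lm = math.sqrt(lx * lx + ly * ly + lz * lz)
    nm = math.sqrt(nx * nx + ny * ny + nz * nz)
    cos_theta = (lx * nx + ly * ny + lz * nz) / (lm * nm)
    return i_max * max(0.0, cos_theta)

print(lambert_intensity((0, 0, 1), (0, 0, 1)))   # 255.0: face-on
print(lambert_intensity((0, 0, 1), (1, 0, 1)))   # ~180.3: at 45 degrees
```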
Lambert Flat Shading
Gouraud Shading
● It is a method for linearly interpolating colors
or shades across a polygon. It was
developed by Henri Gouraud.
● Here, the polygon surface is displayed by
linearly interpolating intensity values across
the surface.
● The intensity values for each polygon are
matched with the values of adjacent polygons
along the common edges.
● This eliminates the intensity discontinuities
that can occur in flat shading.
Gouraud Shading
● The vertex normal N is the average vector of
the adjoining set of surface normals, i.e.

N = (1/n) Σ N_k

● where n is the number of adjoining
surfaces.
● The angle between the vertex normal and the
light vector is considered, rather than the
surface normal. The intensity at the vertex is
calculated by taking the dot product of the
vertex normal with the light vector and
multiplying it by the maximum intensity.
Gouraud Shading
Gouraud Shading
● After calculating the intensity at the vertex,
it is linearly interpolated over the polygon
surface.
● So, each polygon surface is rendered with
Gouraud shading by performing the following
calculations –
● (a) Determine the average unit normal
vector at each polygon vertex.
● (b) Apply an illumination model to each
vertex to calculate the vertex intensity.
Gouraud Shading
● (c) Linearly interpolate the vertex intensities
over the surface of the polygon.
● At each polygon vertex, we obtain a normal
vector by averaging the surface normals of all
polygons sharing that vertex, as shown in the
figure.
● The unit normal vector at vertex V is therefore given by

N_V = (Σ_{k=1..n} N_k) / |Σ_{k=1..n} N_k|
Gouraud Shading
● So, for any vertex position V, we obtain the unit
vertex normal with this calculation.
● Once we have the vertex normals, we can
determine the intensity at the vertices from a
lighting model.
● The next step, interpolating intensities
along the polygon edges,
is shown in the figure.
Gouraud Shading
● A fast method for obtaining the intensity at
point 4 is to interpolate between intensities I1
and I2 using only the vertical displacement of
the scan line:

I4 = ((y4 − y2)/(y1 − y2)) I1 + ((y1 − y4)/(y1 − y2)) I2
Gouraud Shading
● Similarly, the intensity at the right intersection,
position 5, is interpolated from the intensity
values at vertices 2 and 3 as follows:

I5 = ((y3 − y5)/(y3 − y2)) I2 + ((y5 − y2)/(y3 − y2)) I3
Gouraud Shading
● Once the intensities of the intersection points 4
and 5 are calculated for a scan line, the
intensity of an interior point P is
calculated as follows:

I_P = ((x5 − x_P)/(x5 − x4)) I4 + ((x_P − x4)/(x5 − x4)) I5
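A minimal Python sketch of these three interpolations for one scan line of a triangle (vertex numbering follows the slides; it assumes the scan line actually crosses edges 1-2 and 2-3):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def gouraud_scanline(y, v1, v2, v3, i1, i2, i3):
    """Yield (x, intensity) along scan line y of triangle (v1, v2, v3):
    I4 on edge 1-2, I5 on edge 2-3, interior points between them.
    Each v is an (x, y) pair."""
    (x1, y1), (x2, y2), (x3, y3) = v1, v2, v3
    t4 = (y - y2) / (y1 - y2)            # position along edge 2 -> 1
    x4, i4 = lerp(x2, x1, t4), lerp(i2, i1, t4)
    t5 = (y - y2) / (y3 - y2)            # position along edge 2 -> 3
    x5, i5 = lerp(x2, x3, t5), lerp(i2, i3, t5)
    for xp in range(int(min(x4, x5)), int(max(x4, x5)) + 1):
        ip = lerp(i4, i5, (xp - x4) / (x5 - x4)) if x5 != x4 else i4
        yield xp, ip
```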
Flat shading Vs Gouraud Shading
Gouraud Shading
● Advantages
● 1. In this model, since we calculate the intensity
for each pixel, neighbouring pixels across a
polygon border end up with nearly the same
color intensity, so the rendered model appears
continuous.
● 2. Gouraud shading can be combined with a
hidden-surface algorithm to fill in the visible
polygons along each scan line.
Gouraud Shading
Disadvantages
1. Highlights on the surface are sometimes
displayed with irregular shapes.
2. The linear intensity interpolation can cause
bright or dark intensity streaks to appear on
the surface. These streaks are called Mach
bands. The Mach-band effect can be reduced
by breaking the surface into a greater number
of smaller polygons.
Phong Shading
● Phong shading is similar to Gouraud
shading.
● The difference is that Phong shading
calculates the average unit normal vector
at each of the polygon vertices and then
interpolates the vertex normal over the
surface of the polygon.
● After calculating the surface normal, it
applies the illumination model to get the
colour intensity for each pixel of the polygon.
Phong Shading
● So the approach for rendering a polygon surface
is to interpolate normal vectors and then apply the
illumination model to each surface point.
● This method, also known as Phong shading
or normal-vector interpolation shading,
interpolates the surface normal vector N
instead of the intensity.
Phong Shading
● To display the polygon surface, we follow
three steps –
Step 1. Determine the average unit normal
vector at each polygon vertex.
Step 2. Linearly interpolate the vertex normals
over the surface of the polygon.
Step 3. Apply an illumination model along
each scanline to determine projected pixel
intensities for the surface points.
Phong Shading
● Please note here that the first step is the same as
in Gouraud shading.
● In the second step, the vertex normals are
linearly interpolated over the surface of the
polygon.
● The calculation of the interpolated surface
normals along a polygon edge is shown in the
figure.
Phong Shading
● The normal vector N for the scan-line
intersection point along the edge between
vertices 1 and 2 can be obtained by vertically
interpolating between the edge's endpoint
normals:

N = ((y − y2)/(y1 − y2)) N1 + ((y1 − y)/(y1 − y2)) N2
Phong Shading
● Like Gouraud shading, we can use incremental
methods to evaluate normals between scan
lines and along each individual scan line.
● Once the surface normals are evaluated, the
surface intensity at each point is determined by
applying the illumination model.
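A minimal Python sketch of one edge interpolation followed by the illumination step (a simple Lambert term stands in for the full illumination model):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def phong_edge_intensity(y, y1, y2, n1, n2, light, i_max=255):
    """Interpolate the normal (not the intensity) along an edge,
    per N = ((y - y2)/(y1 - y2)) N1 + ((y1 - y)/(y1 - y2)) N2,
    then apply the illumination model at that point."""
    t = (y - y2) / (y1 - y2)
    n = normalize(tuple(t * a + (1 - t) * b for a, b in zip(n1, n2)))
    l = normalize(light)
    cos_theta = sum(a * b for a, b in zip(n, l))
    return i_max * max(0.0, cos_theta)
```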
Phong Shading
● Advantages
● 1. It displays more realistic highlights on a
surface.
● 2. It reduces the Mach-Band effect.
● 3. It gives more accurate results.
Disadvantages
● 1. It requires more calculations.
● 2. It greatly increases the cost of shading.
