Unit-IV

Hidden Surface Removal

Z-Buffer Algorithm

The Z-buffer algorithm, also known as the depth buffer algorithm, is a
fundamental method used in computer graphics for hidden surface
removal. It efficiently determines which surfaces are visible at each pixel
on the screen and ensures that only the closest surfaces are rendered,
while hidden surfaces are discarded. Here's an overview of how the
Z-buffer algorithm works:

1. Initialization:
    Create two buffers: the frame buffer and the Z-buffer (depth buffer). The frame buffer stores the final image, while the Z-buffer stores depth values for each pixel.
    Set all depth values in the Z-buffer to the maximum depth (farthest from the viewer) and clear the frame buffer.

2. Rendering Process:
    For each object in the scene:
        Project the object's vertices onto the screen (viewport transformation).
        For each primitive (e.g., triangle) of the object:
            Determine the depth of each pixel covered by the primitive (typically using interpolation).
            Compare the depth of each pixel with the corresponding value stored in the Z-buffer.
            If the calculated depth is less than the value stored in the Z-buffer, update both the Z-buffer and the corresponding pixel in the frame buffer with the new depth and color information, respectively.
3. Completion:
 After rendering all objects in the scene, the frame buffer
contains the final image with hidden surfaces removed.
The Z-buffer algorithm efficiently handles complex scenes with
overlapping geometry, providing an accurate representation of the visible
surfaces. It's widely used in real-time rendering applications, such as video
games and interactive simulations, due to its simplicity and effectiveness.
However, it requires additional memory to store the Z-buffer, and it may
suffer from artifacts such as Z-fighting when two surfaces are extremely
close to each other in depth. Nonetheless, its benefits outweigh these
limitations in many scenarios, making it a popular choice for hidden
surface removal in computer graphics.
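
To make the update rule concrete, here is a minimal sketch in Python of the per-pixel depth test. The scene representation, the rasterizer that would produce (x, y, depth, color) fragments, and the buffer sizes are assumptions made for illustration.

    # A minimal Z-buffer sketch. Assumes fragments have already been produced
    # by rasterizing each primitive into (x, y, depth, color) samples.
    WIDTH, HEIGHT = 640, 480
    FAR = float("inf")  # maximum depth: farthest from the viewer

    frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]  # cleared to black
    z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]

    def draw_fragment(x, y, depth, color):
        # Keep the fragment only if it is closer than what is stored so far.
        if depth < z_buffer[y][x]:
            z_buffer[y][x] = depth
            frame_buffer[y][x] = color

    # Fragments may arrive in any order; the depth test resolves visibility.
    draw_fragment(100, 120, depth=5.0, color=(255, 0, 0))  # red surface
    draw_fragment(100, 120, depth=2.5, color=(0, 0, 255))  # closer blue surface wins
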
Scanline Algorithm
The scanline algorithm is another method for hidden surface removal and
rendering in computer graphics, particularly for polygonal scenes. It
operates by dividing the rendering process into horizontal bands, or
scanlines, across the screen and processing polygons within each band.

Here's an overview of how the scanline algorithm works:

1. Initialization:
    Sort the vertices of each polygon by their y-coordinates (top to bottom) to determine the order in which they intersect scanlines.
    Build an edge table (ET) with a bucket for each scanline, and initialize an empty active edge table (AET).

2. Edge Table (ET) Creation:
    For each polygon in the scene:
        Calculate the edges of the polygon and store them in the edge table, bucketed by the scanline at which each edge first appears.
        Associate attributes such as color, texture coordinates, etc., with each edge.
3. Scanline Processing:
    Traverse each scanline from top to bottom:
        Move edges from the ET into the AET when their minimum y-coordinate matches the current scanline.
        Sort the edges in the AET by their x-coordinates.
        Pair adjacent edges in the AET to form spans, which represent horizontal segments of the polygon within the current scanline.
        Fill pixels within each span with the appropriate color or texture using interpolation based on the attributes of the edges.
4. Edge Update:
    Update the x-coordinate of each edge in the AET by its inverse slope (1/m) as the scanline advances.
    Remove edges from the AET when their maximum y-coordinate is reached.

5. Completion:
    Once all scanlines have been processed, the entire frame is filled with the rendered polygons, with hidden surfaces removed.

The scanline algorithm efficiently handles complex scenes with
overlapping polygons and varying depths. However, it may require
additional processing to handle cases such as self-intersecting polygons
or polygons with holes.

While the scanline algorithm is effective, it's not always as efficient as
other techniques like the Z-buffer algorithm for real-time rendering due
to its complexity and the need for sorting and edge processing.
Nonetheless, it remains a valuable tool, especially in offline rendering or
situations where Z-buffering is not feasible or desirable.
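
As a rough illustration of steps 3 and 4 above, the sketch below fills a single convex polygon scanline by scanline. The edge records (y_min, y_max, and a running x updated by the inverse slope) follow the description above; the integer vertex coordinates, the triangle in the usage line, and the set_pixel callback are assumptions for the example.

    # A minimal scanline fill sketch for one convex polygon with integer
    # vertex y-coordinates.

    def build_edges(vertices):
        edges = []
        n = len(vertices)
        for i in range(n):
            (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
            if y0 == y1:
                continue  # horizontal edges never enter the edge table
            if y0 > y1:
                (x0, y0), (x1, y1) = (x1, y1), (x0, y0)
            edges.append({"y_min": y0, "y_max": y1, "x": x0,
                          "inv_slope": (x1 - x0) / (y1 - y0)})
        return edges

    def fill_polygon(vertices, set_pixel):
        edges = build_edges(vertices)
        y = min(e["y_min"] for e in edges)
        y_end = max(e["y_max"] for e in edges)
        active = []  # the active edge table (AET)
        while y < y_end:
            # Move edges whose minimum y matches this scanline into the AET.
            active += [e for e in edges if e["y_min"] == y]
            # Drop edges whose maximum y has been reached.
            active = [e for e in active if e["y_max"] > y]
            active.sort(key=lambda e: e["x"])
            # Pair adjacent edges into spans and fill the pixels between them.
            for left, right in zip(active[0::2], active[1::2]):
                for x in range(round(left["x"]), round(right["x"])):
                    set_pixel(x, y)
            # Update x by the inverse slope as the scanline advances.
            for e in active:
                e["x"] += e["inv_slope"]
            y += 1

    # Fill a triangle, printing each covered pixel.
    fill_polygon([(10, 5), (30, 5), (20, 25)], lambda x, y: print(x, y))
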

Octree Algorithm
The Octree algorithm is a hierarchical data structure commonly used in
computer graphics, particularly for spatial partitioning and accelerating
collision detection, visibility determination, and ray tracing. It's particularly
effective for managing spatial data in three dimensions.

Here's how the Octree algorithm works:

1. Initialization:
 The root of the Octree represents the bounding box of the
entire scene or a specific region of interest.
 The bounding box is divided into eight equally-sized octants,
forming the initial level-1 nodes of the tree.

2. Subdivision:
    Each octant is recursively subdivided into eight smaller octants if certain criteria are met. Typically, subdivision occurs when an octant contains more objects or vertices than a specified threshold, or when it exceeds a certain depth level in the tree.
    Subdivision continues until the criteria are no longer met or until a maximum depth level is reached, resulting in a tree structure with varying levels of detail.

3. Traversal and Query:
    To perform operations such as collision detection or visibility determination, a traversal of the Octree is initiated.
    Starting from the root node, traversal proceeds recursively down the tree based on the relative position of the objects or points of interest.
    At each level of the tree, nodes are checked for potential intersections or containment with the query object or region of interest.
    Depending on the specific application, different traversal algorithms may be used, such as depth-first search or breadth-first search.

4. Collision Detection:
    For collision detection, objects are placed into the Octree based on their bounding volumes (e.g., bounding boxes or spheres).
    During traversal, only the nodes that intersect with the bounding volume of the query object need to be examined for potential collisions, reducing the number of comparisons required.

5. Visibility Determination:
 In visibility determination, the Octree can be used to quickly
identify potentially visible objects from a given viewpoint.
 By traversing the Octree and performing visibility tests, objects
that are occluded by others or located behind obstacles can be
efficiently culled from the rendering process, improving
performance.

The Octree algorithm provides a balance between spatial partitioning
efficiency and memory overhead. It adapts to the spatial distribution of
objects in the scene, ensuring that regions with higher object density are
subdivided more finely, while sparse regions require fewer nodes. This
adaptability makes it suitable for a wide range of applications in computer
graphics and computational geometry.
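
The sketch below shows one possible shape of the data structure and the subdivision rule from step 2: a node splits into eight children when it holds more points than a capacity threshold and has not reached the maximum depth. Point-based storage, the capacity of 8, and the depth limit of 6 are assumptions for illustration.

    # A minimal point octree sketch: a leaf subdivides into eight children
    # when it exceeds its capacity and has not reached the depth limit.
    MAX_POINTS = 8
    MAX_DEPTH = 6

    class OctreeNode:
        def __init__(self, center, half_size, depth=0):
            self.center, self.half_size, self.depth = center, half_size, depth
            self.points = []
            self.children = None  # eight child nodes once subdivided

        def _octant(self, p):
            # Child index 0..7 from the sign of (p - center) on each axis.
            cx, cy, cz = self.center
            return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

        def _subdivide(self):
            h = self.half_size / 2
            cx, cy, cz = self.center
            self.children = [
                OctreeNode((cx + (h if i & 1 else -h),
                            cy + (h if i & 2 else -h),
                            cz + (h if i & 4 else -h)), h, self.depth + 1)
                for i in range(8)
            ]
            for p in self.points:  # push the stored points down a level
                self.children[self._octant(p)].insert(p)
            self.points = []

        def insert(self, p):
            if self.children is not None:
                self.children[self._octant(p)].insert(p)
            elif len(self.points) >= MAX_POINTS and self.depth < MAX_DEPTH:
                self._subdivide()
                self.insert(p)
            else:
                self.points.append(p)

    # Build a tree over a cube of half-size 100 centered at the origin.
    root = OctreeNode((0.0, 0.0, 0.0), 100.0)
    root.insert((12.0, -3.0, 40.0))
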

CURVED SURFACES

When dealing with curved surfaces in computer graphics, various
algorithms and techniques are used for rendering, modeling, and
manipulation. Here are some commonly employed algorithms for curved
surfaces:

1. Bezier Curves and Surfaces:
    Bezier curves and surfaces are popular for defining smooth, parametric curves and surfaces. They are defined by control points that influence the shape of the curve or surface.
    Algorithms for evaluating Bezier curves and surfaces, such as De Casteljau's algorithm, are used to compute points along the curve or surface efficiently (a short sketch appears at the end of this section).

2. B-spline Curves and Surfaces:
    B-spline curves and surfaces are similar to Bezier curves and surfaces but provide additional flexibility in controlling the shape using control points and knot vectors.
    Algorithms like the Cox-de Boor recursion formula are used to evaluate points on B-spline curves and surfaces.


3. NURBS (Non-Uniform Rational B-splines):
    NURBS curves and surfaces extend B-splines by incorporating rational functions, which allow for more complex and precise shapes.
    Algorithms for evaluating NURBS curves and surfaces involve dividing them into smaller segments and applying blending functions to compute points.

4. Subdivision Surfaces:
    Subdivision surfaces provide a flexible framework for representing curved surfaces using recursive subdivision of control meshes.
    Algorithms such as the Catmull-Clark subdivision scheme iteratively refine the control mesh to produce smoother surfaces.

5. Parametric Surface Patch Techniques:
    Techniques like Coons patches and Hermite patches allow for the construction of curved surfaces by interpolating or approximating boundary curves or constraints.
    Algorithms for constructing parametric surface patches involve blending boundary curves and interpolating interior points.

6. Implicit Surfaces:
    Implicit surfaces are defined by implicit functions that describe the surface as the zero set of the function.
    Algorithms for rendering implicit surfaces involve techniques like marching cubes for polygonization or ray marching for direct rendering.

7. Surface Reconstruction:
    Surface reconstruction algorithms reconstruct curved surfaces from point cloud data obtained from 3D scanning or other sources.
    Techniques such as Poisson surface reconstruction or moving least squares (MLS) surface reconstruction are commonly used.


These algorithms are used in various applications, including computer-
aided design (CAD), computer graphics, computer-aided manufacturing
(CAM), medical imaging, and scientific visualization, to model and
manipulate curved surfaces effectively. Each algorithm has its strengths
and weaknesses, making them suitable for different types of surfaces and
applications.
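
As referenced above, here is a minimal sketch of De Casteljau's algorithm for evaluating a Bezier curve at a parameter t: repeatedly interpolate linearly between adjacent control points until a single point remains. The cubic control polygon in the usage lines is an arbitrary example.

    # De Casteljau's algorithm: evaluate a Bezier curve at parameter t in [0, 1]
    # by repeated linear interpolation of the control polygon.
    def de_casteljau(control_points, t):
        points = [tuple(p) for p in control_points]
        while len(points) > 1:
            # One round of interpolation between each pair of adjacent points.
            points = [tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
                      for p0, p1 in zip(points, points[1:])]
        return points[0]

    # The midpoint of a cubic Bezier curve with four control points.
    cubic = [(0, 0), (1, 2), (3, 2), (4, 0)]
    print(de_casteljau(cubic, 0.5))  # -> (2.0, 1.5)
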

VISIBLE SURFACE RAY TRACING ALGORITHM

The Visible Surface Ray Tracing algorithm, also known simply as Ray
Tracing, is a sophisticated rendering technique used in computer graphics
to generate highly realistic images by simulating the behavior of light rays
as they interact with surfaces in a scene. It's particularly effective for
rendering complex lighting effects, reflections, refractions, and shadows.
Here's an overview of the algorithm:

1. Ray Casting:
    For each pixel in the image plane, a primary ray is cast from the viewer's eye (or the camera) through the pixel into the scene.

2. Intersection Testing:
    The primary ray intersects with objects in the scene. For each intersection, information such as the point of intersection, surface normal, material properties, and texture coordinates is recorded.

3. Shading:
    At each intersection point, various shading calculations are performed to determine the color and intensity of light reflected, transmitted, or emitted by the surface.
    This includes considering factors such as direct illumination from light sources, diffuse reflection, specular reflection (based on surface roughness), and transparency/refraction.

4. Secondary Rays:
 Secondary rays are generated based on reflection and
refraction properties of the materials at the intersection point.
 Reflection rays are cast in the direction of the reflected light
based on the surface normal and the incident ray direction.
 Refraction rays are cast when the material is transparent,
bending the ray based on the surface's refractive index.

5. Recursive Ray Tracing:
    The process of casting reflection and refraction rays can be recursive, meaning that for each intersection point, additional rays are traced to account for multiple bounces of light within the scene.
    The recursion depth can be limited to control computational complexity and prevent infinite loops.

6. Shadow Rays:
    Shadow rays are cast from the intersection point toward light sources in the scene to determine if the point is in shadow.
    If the shadow ray intersects with an object before reaching the light source, the point is in shadow and does not receive direct illumination from that light source.

7. Global Illumination:
    Some advanced ray tracing algorithms also account for global illumination effects, such as indirect lighting from surfaces reflecting light onto other surfaces in the scene.

8. Rendering and Display:
    After tracing rays and calculating colors for each pixel, the resulting image is displayed on the screen.

Ray tracing is computationally intensive, especially when dealing with
complex scenes and multiple light interactions. However, advancements
in hardware and algorithms have made real-time ray tracing feasible in
certain applications, leading to its increasing adoption in fields such as
gaming, architectural visualization, and film production.
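
To ground steps 1 and 2, the sketch below generates a primary ray for a pixel and tests it against a sphere, the classic ray-object intersection case. The pinhole camera at the origin looking down the -z axis, the 90-degree field of view, the image resolution, and the sphere are all assumptions for the example.

    import math

    # Step 1: primary ray through pixel (px, py) for a pinhole camera at the
    # origin looking down the -z axis with a 90-degree vertical field of view.
    def primary_ray(px, py, width, height):
        aspect = width / height
        x = (2 * (px + 0.5) / width - 1) * aspect
        y = 1 - 2 * (py + 0.5) / height
        d = (x, y, -1.0)
        n = math.sqrt(sum(c * c for c in d))
        return (0.0, 0.0, 0.0), tuple(c / n for c in d)  # origin, unit direction

    # Step 2: ray-sphere intersection; returns the nearest hit distance or None.
    def intersect_sphere(origin, direction, center, radius):
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c  # quadratic with a = 1 for a unit direction
        if disc < 0:
            return None  # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    # Test the center pixel against a sphere 5 units in front of the camera.
    origin, direction = primary_ray(320, 240, 640, 480)
    print(intersect_sphere(origin, direction, (0.0, 0.0, -5.0), 1.0))  # ~4.0
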
RECURSIVE RAY TRACING
Recursive ray tracing is a rendering technique used in computer graphics
to simulate the behavior of light rays as they interact with surfaces in a
scene. It's an extension of the basic ray tracing algorithm and allows for
the modeling of complex lighting effects such as reflections, refractions,
and indirect lighting. Here's a breakdown of the recursive ray tracing
algorithm:

1. Primary Ray Casting:
    For each pixel on the image plane, a primary ray is cast from the viewer's eye (or the camera) through the pixel into the scene.

2. Intersection Testing:
    The primary ray is tested for intersection with objects in the scene using techniques like ray-object intersection tests (e.g., ray-sphere, ray-triangle intersection).
    If an intersection is found, information about the intersection point (position, surface normal, material properties, texture coordinates) is recorded.

3. Shading:
    At each intersection point, shading calculations are performed to determine the color and intensity of light reflected, transmitted, or emitted by the surface.
    This includes factors such as direct illumination from light sources, diffuse reflection, specular reflection (based on surface roughness), and transparency/refraction.

4. Secondary Rays - Reflection and Refraction:
    If the material at the intersection point is reflective or transparent, secondary rays are cast to simulate reflection and refraction effects.
    Reflection rays are cast in the direction of the reflected light based on the surface normal and the incident ray direction.
    Refraction rays are cast through the transparent material, bending the ray based on the surface's refractive index (Snell's Law).

5. Recursive Tracing:
    For each reflected and refracted ray, the ray tracing process is recursively repeated starting from the new intersection point.
    The recursion depth is typically limited to prevent infinite loops and control computational complexity.
    Each recursive step contributes to the overall lighting and appearance of the scene, capturing multiple reflections and refractions.

6. Shadow Rays:
    Shadow rays are cast from each intersection point toward light sources to determine if the point is in shadow.
    If the shadow ray intersects with an object before reaching the light source, the point is in shadow and does not receive direct illumination from that light source.

7. Global Illumination:
    Some recursive ray tracing algorithms also incorporate global illumination effects, accounting for indirect lighting contributions from surfaces in the scene.
8. Rendering and Display:
    After tracing rays and calculating colors for each pixel, the resulting image is displayed on the screen.

Recursive ray tracing provides a powerful and flexible framework for
generating realistic images with complex lighting effects. However, it can
be computationally intensive, especially for scenes with many reflective
and refractive surfaces. Various optimization techniques, such as
bounding volume hierarchies and Monte Carlo integration, are often
employed to improve performance.
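
Here is a minimal sketch of the recursive structure described above, handling reflection only, with a hard depth limit. The toy scene (a single mirror plane) and the flat per-hit color standing in for full local shading are assumptions; refraction and shadow rays would slot in at the marked comment.

    from dataclasses import dataclass

    MAX_DEPTH = 4  # recursion limit: prevents infinite bounces

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    def reflect(d, n):
        # Mirror direction d about the unit normal n: r = d - 2(d . n)n.
        return add(d, scale(n, -2 * dot(d, n)))

    @dataclass
    class Hit:
        point: tuple
        normal: tuple
        color: tuple
        reflectivity: float

    def trace(origin, direction, nearest_hit, background, depth=0):
        hit = nearest_hit(origin, direction)
        if hit is None:
            return background  # the ray escapes the scene
        color = hit.color  # stand-in for full local shading at the hit point
        # Recursive step: follow the mirror reflection while depth allows.
        if hit.reflectivity > 0 and depth < MAX_DEPTH:
            r = reflect(direction, hit.normal)
            start = add(hit.point, scale(hit.normal, 1e-4))  # avoid self-hit
            bounce = trace(start, r, nearest_hit, background, depth + 1)
            color = add(color, scale(bounce, hit.reflectivity))
        # Refraction rays (Snell's Law) and shadow rays would be traced here.
        return color

    # A toy "scene" whose only surface is a half-mirror plane at y = 0.
    def nearest_hit(o, d):
        if d[1] >= 0:
            return None
        t = -o[1] / d[1]
        return Hit(point=add(o, scale(d, t)), normal=(0.0, 1.0, 0.0),
                   color=(0.1, 0.1, 0.1), reflectivity=0.5)

    print(trace((0.0, 1.0, 0.0), (0.0, -1.0, 0.0), nearest_hit, (0.2, 0.4, 0.8)))
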
Unit-V
Illumination and Shading Models
An illumination model, also known as a shading model or lighting model,
is used to calculate the intensity of light that is reflected at a given
point on a surface. The lighting effect depends on three factors:

1. Light Source: The light source is the light-emitting source. There are three types of light sources:
    1. Point Sources – The source emits rays in all directions (a bulb in a room).
    2. Parallel Sources – Can be considered as a point source that is far from the surface (the sun).
    3. Distributed Sources – Rays originate from a finite area (a tube light).
    Their position, electromagnetic spectrum, and shape determine the lighting effect.

2. Surface: When light falls on a surface, part of it is reflected and part of it is absorbed. The surface structure decides the amount of reflection and absorption of light. The position of the surface and the positions of all nearby surfaces also determine the lighting effect.

3. Observer: The observer's position and sensor spectrum sensitivities also affect the lighting effect.

1. Ambient Illumination: Assume you are standing on a road, facing a building with a glass exterior, and sun rays falling on that building reflect back from it and fall on the object under observation. This is ambient illumination. In simple words, ambient illumination is illumination whose light source is indirect. The reflected intensity Iamb of any point on the surface is:

    Iamb = Ka * Ia

where Ia is the intensity of the ambient light and Ka is the ambient reflection coefficient (0 ≤ Ka ≤ 1).

2. Diffuse Reflection: Diffuse reflection occurs on surfaces that are rough or grainy. In this reflection the brightness of a point depends upon the angle between the direction of the light source and the surface normal. The reflected intensity Idiff of a point on the surface is:

    Idiff = Kd * Il * cos θ = Kd * Il * (N · L)

where Il is the intensity of the light source, Kd is the diffuse reflection coefficient (0 ≤ Kd ≤ 1), N is the unit surface normal, L is the unit vector toward the light source, and θ is the angle between N and L.

3. Specular Reflection: When light falls on any shiny or glossy surface, most of it is reflected back; such reflection is known as specular reflection. The Phong model is an empirical model for specular reflection which provides the formula for calculating the reflected intensity Ispec:

    Ispec = Ks * Il * cos^n φ = Ks * Il * (R · V)^n

where Ks is the specular reflection coefficient, R is the unit mirror-reflection direction of the light about the normal, V is the unit vector toward the viewer, φ is the angle between R and V, and n is the shininess exponent (larger n gives a tighter highlight).
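
Putting the three terms together, the total intensity at a point is I = Iamb + Idiff + Ispec. Below is a minimal sketch of this sum for a single white light; the vectors are assumed to be unit length, and the coefficient values in the usage lines are arbitrary.

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        m = math.sqrt(dot(v, v))
        return tuple(x / m for x in v)

    def phong_intensity(N, L, V, Ia, Il, Ka, Kd, Ks, n):
        # Ambient term: indirect light, independent of geometry.
        I = Ka * Ia
        # Diffuse term: Lambert's cosine law, clamped to zero past grazing.
        I += Kd * Il * max(0.0, dot(N, L))
        # Specular term: R is L mirrored about N; falls off as (R . V)^n.
        R = tuple(2 * dot(N, L) * nc - lc for nc, lc in zip(N, L))
        I += Ks * Il * max(0.0, dot(R, V)) ** n
        return I

    # Light and viewer both 45 degrees off the normal, on opposite sides,
    # so the viewer sits exactly in the mirror direction.
    N = (0.0, 1.0, 0.0)
    L = normalize((1.0, 1.0, 0.0))
    V = normalize((-1.0, 1.0, 0.0))
    print(phong_intensity(N, L, V, Ia=0.2, Il=1.0, Ka=0.1, Kd=0.6, Ks=0.3, n=20))
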
Shading models
Shading models are used in computer graphics to determine the color
and brightness of pixels on surfaces based on lighting information. They
play a crucial role in rendering realistic images by simulating how light
interacts with surfaces. Here are some common shading models:
1. Flat Shading:
    Flat shading is a basic shading technique where each polygon in a scene is assigned a single color. This color is usually computed based on the illumination at one point on the polygon (often at its vertex), resulting in a flat, uniform appearance across the entire polygon. Flat shading does not consider the varying brightness across the polygon's surface and can produce visibly faceted edges between polygons.

    Example: Consider a simple 3D cube rendered using flat shading. Each face of the cube is assigned a single color, typically computed based on the illumination at one point on the face (e.g., its vertex). As a result, each face of the cube appears flat and uniformly colored, without any smooth transitions between adjacent faces. This can create a faceted appearance, especially when viewing the cube from certain angles.

2. Gouraud Shading:
    Gouraud shading is an improvement over flat shading that aims to provide smoother shading by interpolating vertex colors across the surface of polygons. In this technique, the color at each vertex of a polygon is computed based on illumination, and these vertex colors are then interpolated across the polygon's surface to determine the color of each pixel. Gouraud shading results in a more visually appealing appearance compared to flat shading and helps reduce the appearance of faceted edges.

    Example: Imagine the same 3D cube rendered using Gouraud shading. In this case, instead of assigning a single color to each face of the cube, colors are computed at each vertex of the cube based on illumination. These vertex colors are then interpolated across the surface of each face, resulting in smoother color transitions between adjacent faces. As a result, the cube appears more visually appealing compared to flat shading, with reduced faceting and smoother shading transitions.

3. Phong Shading:
    Phong shading is a shading model that calculates the color of each pixel on a surface by interpolating surface normals across the polygon's surface. This technique takes into account the angle between the surface normal, the light direction, and the viewer's direction to compute both diffuse and specular lighting contributions. Phong shading produces more realistic shading effects, including smooth specular highlights, and is widely used in computer graphics for its ability to simulate a wide range of materials and lighting conditions.

    Example: Continuing with our 3D cube example, let's render it using Phong shading. In Phong shading, the color of each pixel on the surface of the cube is computed by interpolating surface normals across the face of each polygon. This allows for more accurate shading calculations, taking into account the angle between the surface normal, light direction, and viewer's direction at each pixel. As a result, Phong shading produces more realistic shading effects, including smooth specular highlights and accurate representation of surface curvature. The cube appears even more visually appealing and realistic compared to Gouraud shading.
In summary, flat shading provides a basic and computationally
inexpensive approach to rendering objects, but it may result in a faceted
appearance. Gouraud shading improves upon flat shading by
interpolating colors across polygon surfaces, resulting in smoother
shading transitions. Phong shading further enhances realism by
calculating pixel colors based on interpolated surface normals, leading to
accurate representation of lighting effects such as specular highlights.
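
The essential difference between Gouraud and Phong shading is what gets interpolated across a triangle: colors computed once per vertex, or the normals themselves with lighting evaluated per pixel. The sketch below shows both rules at a single interior point using barycentric weights; the illuminate callback is a stand-in for a lighting model such as the Phong illumination formula above, and the specular-style falloff in the usage lines is an arbitrary choice that makes the two results visibly differ.

    def lerp3(a, b, c, w):
        # Barycentric interpolation of three equal-length tuples with weights w.
        return tuple(w[0] * x + w[1] * y + w[2] * z for x, y, z in zip(a, b, c))

    def gouraud_pixel(vertex_normals, w, illuminate):
        # Gouraud: light each vertex once, then interpolate the colors.
        colors = [illuminate(n) for n in vertex_normals]
        return lerp3(*colors, w)

    def phong_pixel(vertex_normals, w, illuminate):
        # Phong: interpolate the normal, then light the pixel with it.
        n = lerp3(*vertex_normals, w)
        return illuminate(n)  # a real renderer would re-normalize n first

    # A specular-style falloff makes the difference visible: Gouraud smears
    # the bright vertex across the face, Phong lights the pixel directly.
    illuminate = lambda n: (max(0.0, n[1]) ** 8,) * 3  # grayscale, light from above
    normals = [(0, 1, 0), (1, 0, 0), (0, 0, 1)]
    w = (1 / 3, 1 / 3, 1 / 3)
    print(gouraud_pixel(normals, w, illuminate))  # bright smeared highlight
    print(phong_pixel(normals, w, illuminate))    # much darker at this pixel
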

Color spaces:
Color spaces are mathematical models that describe how colors can be
represented numerically. They provide a systematic way to specify colors
in terms of numerical values, which is essential for digital image
processing, computer graphics, and color reproduction. Here are some
common color spaces:

1. RGB (Red, Green, Blue):
    RGB is an additive color model where colors are represented as combinations of red, green, and blue primary colors. Each color component is typically represented as an 8-bit value (ranging from 0 to 255) or a floating-point value (ranging from 0.0 to 1.0). The RGB color space is widely used in digital displays, cameras, and computer graphics.
2. CMY (Cyan, Magenta, Yellow):
 CMY is a subtractive color model used in color printing, where
colors are represented as combinations of cyan, magenta, and
yellow primary colors. CMYK extends CMY by adding black (K)
to improve color accuracy and reduce ink consumption in
printing.

3. HSV (Hue, Saturation, Value) and HSL (Hue, Saturation, Lightness):
    HSV and HSL are cylindrical color spaces that represent colors based on their hue (the type of color), saturation (the intensity or purity of the color), and value (brightness in HSV) or lightness (perceived brightness in HSL). These color spaces are often used in graphic design and image editing software for intuitive color manipulation.
4. HLS (Hue, Lightness, Saturation):
    HLS stands for Hue, Lightness, Saturation and is the same model as HSL with the components listed in a different order. Its color solid is a double hexcone: a hue is maximally saturated at S = 1 and L = 0.5. It is conceptually convenient for people who want to view white as a single point (the apex of the upper cone).
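
As a small worked example of moving between these spaces, the sketch below converts an RGB triple in [0, 1] to HSV using the standard hexcone formulas. Python's built-in colorsys module provides an equivalent conversion; this version just makes the arithmetic explicit.

    def rgb_to_hsv(r, g, b):
        # r, g, b in [0, 1]; returns (hue in degrees, saturation, value).
        mx, mn = max(r, g, b), min(r, g, b)
        delta = mx - mn
        v = mx                               # value: the largest component
        s = 0.0 if mx == 0 else delta / mx   # saturation: chroma relative to value
        if delta == 0:
            h = 0.0                          # achromatic: hue undefined, use 0
        elif mx == r:
            h = 60 * (((g - b) / delta) % 6)
        elif mx == g:
            h = 60 * ((b - r) / delta + 2)
        else:
            h = 60 * ((r - g) / delta + 4)
        return h, s, v

    # Pure yellow sits at 60 degrees, fully saturated, full value.
    print(rgb_to_hsv(1.0, 1.0, 0.0))  # -> (60.0, 1.0, 1.0)
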
