UNIT 4 p1
3D display methods in
Computer Graphics
By
Miss. T. Sarah Jeba Jency, M.Sc., M.Phil., B.Ed.,
What are 3D display methods in computer
graphics?
3D computer graphics (in contrast to 2D
computer graphics) are graphics that utilize
a three dimensional representation of
geometric data that is stored in the
computer for the purposes of performing
calculations and rendering 2D images.
Such images may be for later display or for
real-time viewing. We will discuss the following:
• Parallel Projection
• Perspective Projection
• Depth Cueing
Parallel Projection:
A parallel projection is a projection of an object in three-dimensional space onto a fixed plane,
known as the projection plane or image plane, where the rays, known as lines of sight or projection
lines, are parallel to each other.
Notes:
• Project points on the object surface along parallel lines onto the display plane.
• Parallel lines are still parallel after projection.
• Used in engineering and architectural drawings.
• Views maintain relative proportions of the object.
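The properties above can be sketched in a few lines. A minimal example (the function name `parallel_project` is hypothetical): an orthographic parallel projection onto the z = 0 plane simply discards the z coordinate, so parallel edges stay parallel and relative proportions are preserved.

```python
def parallel_project(point):
    """Project a 3D point (x, y, z) onto the xy display plane
    along parallel projection lines (orthographic projection)."""
    x, y, z = point
    return (x, y)

# Two parallel edges of a box remain parallel after projection,
# regardless of their depth.
edge_a = [(0, 0, 0), (1, 0, 0)]
edge_b = [(0, 1, 5), (1, 1, 5)]
print([parallel_project(p) for p in edge_a])  # [(0, 0), (1, 0)]
print([parallel_project(p) for p in edge_b])  # [(0, 1), (1, 1)]
```

Because z is dropped rather than used to scale x and y, a length measured on the object maps to the same length on the display plane wherever the object sits, which is why engineering drawings use this projection.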
Perspective Projection :
The perspective projection, on the other
hand, produces realistic views but does
not preserve relative proportions. In
perspective projection, the lines of
projection are not parallel. Instead, they
all converge at a single point called the
‘center of projection’ or ‘projection
reference point’.
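A minimal sketch of this idea (the helper name `perspective_project` and the choice of projection plane are assumptions for illustration): with the center of projection at the origin and the image plane at z = d, each point is scaled by d/z, so objects farther from the viewer project smaller.

```python
def perspective_project(point, d=1.0):
    """Project (x, y, z) through the origin (the center of projection)
    onto the image plane z = d. The division by z shrinks distant points."""
    x, y, z = point
    return (d * x / z, d * y / z)

# The same point, moved twice as far away, projects at half the size,
# so relative proportions are not preserved.
near = perspective_project((1.0, 1.0, 2.0))  # (0.5, 0.5)
far = perspective_project((1.0, 1.0, 4.0))   # (0.25, 0.25)
```

The divide-by-z step is exactly what makes projection lines converge at the projection reference point, and it is the reason perspective views look realistic while distorting proportions.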
Depth Cueing:
• Depth cueing makes it easy to identify the front and back of displayed objects.
• Depth information can be included using various methods.
• A simple method is to vary the intensity of objects according to their distance from the viewing position.
• E.g., lines closest to the viewing position are displayed with higher intensities,
and lines farther away are displayed with lower intensities.
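The intensity-variation method above can be sketched as a simple linear attenuation (the function name `depth_cue` and the near/far range are hypothetical choices):

```python
def depth_cue(base_intensity, depth, d_min=0.0, d_max=10.0):
    """Attenuate intensity linearly with distance from the viewer:
    full intensity at d_min (nearest), zero at d_max (farthest)."""
    t = (d_max - depth) / (d_max - d_min)
    return base_intensity * max(0.0, min(1.0, t))

# A line at the viewing position keeps full intensity;
# the same line at the far limit fades to black.
print(depth_cue(1.0, 0.0))   # 1.0
print(depth_cue(1.0, 10.0))  # 0.0
```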
Visible Line and Surface Identification
I. When we view a picture containing non-transparent objects and surfaces, we cannot see the
objects that are hidden behind objects closer to the eye.
II. We must remove these hidden surfaces to get a realistic screen image. The identification and
removal of these surfaces is called the hidden-surface problem.
Depth-Buffer Method:
A depth buffer is used to store a depth value for each (x, y) position as surfaces are processed, with
0 ≤ depth ≤ 1
The frame buffer is used to store the intensity (color) value at each (x, y) position.
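A minimal sketch of the two buffers working together (buffer sizes, the `plot` helper, and the convention that depth 0 is nearest and 1 is farthest are assumptions for illustration): a pixel is overwritten only when the incoming surface is closer than the one already stored.

```python
WIDTH, HEIGHT = 4, 4
BACKGROUND = (0, 0, 0)

# Depth buffer starts at the farthest value; frame buffer at the background color.
depth_buffer = [[1.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, depth, color):
    """Write the pixel only if this surface is closer (smaller depth)
    than whatever is currently stored for that position."""
    if 0.0 <= depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        frame_buffer[y][x] = color

plot(1, 1, 0.8, (255, 0, 0))  # far red surface is drawn first
plot(1, 1, 0.3, (0, 0, 255))  # nearer blue surface replaces it
plot(1, 1, 0.9, (0, 255, 0))  # farther green surface is rejected: hidden
```

After the three writes, the pixel holds the blue surface's color and depth 0.3, which is how hidden surfaces are eliminated without sorting the scene.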
Scan-Line Method:
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of
each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface
data, and possibly pointers back into the edge table.
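The two tables can be sketched as simple records (the field names below are hypothetical; only the contents listed above come from the method itself):

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    y_min: float       # lower endpoint of the edge
    y_max: float       # upper endpoint of the edge
    x_at_y_min: float  # x coordinate at the lower endpoint
    inv_slope: float   # inverse slope dx/dy, used to step x between scan lines
    surface_id: int    # pointer into the polygon table, connecting edge to surface

@dataclass
class Polygon:
    plane: tuple  # plane coefficients (A, B, C, D)
    color: tuple  # surface material property
    edges: list = field(default_factory=list)  # pointers back into the edge table
```

The inverse slope is stored so that, moving from one scan line to the next, the edge's x intersection can be updated incrementally as `x += inv_slope` instead of being recomputed.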
Area-Subdivision Method:
A. Surrounding surface − One that completely encloses the area.
B. Overlapping surface − One that is partly inside and partly outside the area.
C. Inside surface − One that is completely inside the area.
D. Outside surface − One that is completely outside the area.
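The four relationships can be tested with simple rectangle comparisons. A minimal sketch, assuming both the surface's projection and the screen area are axis-aligned rectangles given as (x_min, y_min, x_max, y_max) (the function name `classify` is hypothetical):

```python
def classify(surface, area):
    """Classify a surface rectangle against an area rectangle as
    surrounding, outside, inside, or overlapping."""
    sx0, sy0, sx1, sy1 = surface
    ax0, ay0, ax1, ay1 = area
    # Surrounding: the surface completely encloses the area.
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return "surrounding"
    # Outside: no overlap with the area at all.
    if sx1 <= ax0 or sx0 >= ax1 or sy1 <= ay0 or sy0 >= ay1:
        return "outside"
    # Inside: the surface lies completely within the area.
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return "inside"
    # Otherwise it is partly inside and partly outside.
    return "overlapping"
```

In the area-subdivision method these tests decide whether an area can be resolved immediately or must be subdivided further.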
A-Buffer Method:
The A-buffer expands on the depth buffer method to allow transparencies. The key data structure in
the A-buffer is the accumulation buffer.
Each position in the A-buffer has two fields −
Depth field − It stores a positive or negative real number
Intensity field − It stores surface-intensity information or a pointer value
If depth > = 0, the number stored at that position is the depth of a single surface overlapping the
corresponding pixel area. The intensity field then stores the RGB components of the surface color
at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then
stores a pointer to a linked list of surface data. The surface buffer in the A-buffer includes −
RGB intensity components
Opacity Parameter
Depth
Percent of area coverage
Surface identifier
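The two-field scheme above can be sketched as follows (the record and function names are hypothetical; the depth-sign convention is the one described above):

```python
from dataclasses import dataclass

@dataclass
class SurfaceData:
    rgb: tuple        # RGB intensity components
    opacity: float    # opacity parameter
    depth: float
    coverage: float   # percent of pixel-area coverage
    surface_id: int   # surface identifier

def read_pixel(depth_field, intensity_field):
    """Interpret one A-buffer position.
    depth >= 0: intensity_field holds the single overlapping surface's data.
    depth <  0: intensity_field is the list of contributing surfaces."""
    if depth_field >= 0:
        return [intensity_field]  # single-surface case
    return intensity_field        # multiple-surface list
```

Storing a list of contributing surfaces, rather than one depth winner per pixel, is what lets the A-buffer blend transparent surfaces that the plain depth buffer would simply discard.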
Surface Rendering:
Surface rendering involves setting the surface intensity of objects according to the lighting
conditions in the scene and according to assigned surface characteristics. The lighting conditions
specify the intensity and positions of light sources and the general background illumination
required for a scene.
The surface characteristics of objects, on the other hand, specify the degree of transparency and
the smoothness or roughness of the surface. Surface-rendering methods are usually combined with
perspective projection and visible-surface identification to generate a high degree of realism in a
displayed scene.
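A minimal sketch of setting surface intensity from lighting conditions and surface properties, using a simple diffuse (Lambertian) model as one common choice (the function name and parameter names are hypothetical):

```python
import math

def lambert_intensity(ambient, light_intensity, k_d, normal, light_dir):
    """Diffuse shading: background (ambient) term plus the light source's
    intensity scaled by the surface's diffuse coefficient k_d and the
    cosine of the angle between the surface normal and the light direction."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    n_len = math.sqrt(nx * nx + ny * ny + nz * nz)
    l_len = math.sqrt(lx * lx + ly * ly + lz * lz)
    cos_theta = (nx * lx + ny * ly + nz * lz) / (n_len * l_len)
    # Surfaces facing away from the light receive only the ambient term.
    return ambient + k_d * light_intensity * max(0.0, cos_theta)

# Light shining straight along the normal gives the full diffuse term;
# light from behind the surface contributes nothing.
print(lambert_intensity(0.1, 1.0, 0.8, (0, 0, 1), (0, 0, 1)))
print(lambert_intensity(0.1, 1.0, 0.8, (0, 0, 1), (0, 0, -1)))
```

Here `ambient` plays the role of the general background illumination, `light_intensity` and `light_dir` the light-source specification, and `k_d` the assigned surface characteristic.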
Set the surface intensity of objects according to:
• Lighting conditions in the scene
• Assigned surface characteristics
Lighting specifications include the intensity and positions of light
sources and the general background illumination required for
a scene.
Surface properties include the degree of transparency and how
rough or smooth the surfaces are.