
Q1 Explain Scan line polygon filling algorithms. (Steps should be written.)

Ans.

Figures on a computer screen can be drawn using polygons. To fill those figures with
color, we need a filling algorithm. There are two famous algorithms for this
purpose: the boundary fill and scanline fill algorithms.
Boundary filling requires a lot of processing and therefore encounters problems in
real time, so the viable alternative is scanline filling, which is very robust in nature.
This answer discusses how the scanline filling algorithm is used to fill colors in an image.
Scanline Polygon Filling Algorithm
Scanline filling is basically the filling up of a polygon using horizontal lines, or scanlines.
The purpose of the SLPF algorithm is to fill (color) the interior pixels of a polygon
given only the vertices of the figure. To understand scanline filling, think of the image
being drawn by a single pen starting from the bottom left, continuing to the right, plotting
points only where the image has a point, and, when the line is complete, starting
from the next line and continuing.
This algorithm works by intersecting each scanline with the polygon edges and filling the
polygon between pairs of intersections.

Special cases of polygon vertices:


1. If both lines intersecting at the vertex are on the same side of the scanline,
consider it as two points.
2. If lines intersecting at the vertex are at opposite sides of the scanline,
consider it as only one point.
Components of polygon fill:
1. Edge Buckets: An edge bucket contains an edge's information. The entries of an edge
bucket vary according to the data structure used. In the example taken
below, each edge bucket has three entries: ymax, xofymin, and
slopeinverse.
2. Edge Table: It consists of several edge lists and holds all of the edges that
compose the figure. When creating edges, the vertices of each edge need to be
ordered from left to right, and the edges are maintained in increasing yMin
order. Filling is complete once all of the edges are removed from the ET.
3. Active List: It maintains the current edges being used to fill in the
polygon. Edges are pushed into the AL from the edge table when an edge's
yMin is equal to the current scanline being processed.
The active list is re-sorted after every pass.
Algorithm:

1. Process the polygon edge by edge, and store the edges in the edge table.
2. Each edge is stored in the edge tuple of the scanline that matches the
y-coordinate of the edge's lowermost point.
3. After an edge is added to an edge tuple, the tuple is sorted using insertion
sort according to its xofymin value.
4. Once the whole polygon has been added to the edge table, the figure can be filled.
5. Filling starts from the first scanline at the bottom and continues to the top.
6. The active edge tuple is then maintained, and the following steps are repeated for
each scanline (a C sketch of one pass follows these steps):
i. Copy all edge buckets of the designated scanline into the active edge tuple.
ii. Perform an insertion sort according to the xofymin values.
iii. Remove all edge buckets whose ymax is equal to the current
scanline.
iv. Fill between pairs of edges in the active tuple; if a vertex is encountered, follow these
rules:
- If both edges meeting at the vertex are on the same side of the
scanline, consider it as two points.
- If the edges meeting at the vertex are on opposite sides of the
scanline, consider it as only one point.
v. Update xofymin by adding slopeinverse to each bucket.
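As a rough C sketch of one pass of step 6 (a minimal illustration, not the full algorithm: the EdgeBucket type and fill_scanline function are hypothetical names, the caller is assumed to have already copied, pruned and sorted the active edges, and putpixel is the usual graphics.h primitive):

// One pass of scanline filling. The caller is assumed to have moved
// edges whose yMin equals y from the edge table into active[], removed
// edges whose ymax equals y, and insertion-sorted active[] on xofymin.
typedef struct {
    int   ymax;          // scanline at which the edge is discarded
    float xofymin;       // current x intersection, starts at the lower end
    float slopeinverse;  // dx/dy, added to xofymin after every scanline
} EdgeBucket;

void fill_scanline(EdgeBucket active[], int count, int y, int color)
{
    int i, x;

    // step iv: fill between successive pairs of intersections
    for (i = 0; i + 1 < count; i += 2)
        for (x = (int)active[i].xofymin; x < (int)active[i + 1].xofymin; x++)
            putpixel(x, y, color);

    // step v: update each intersection for the next scanline
    for (i = 0; i < count; i++)
        active[i].xofymin += active[i].slopeinverse;
}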
Q2

Q3 What are the major application areas of computer graphics?

Ans.
Computer graphics deals with the creation, manipulation and storage of different types of
images and objects.
Some of the applications of computer graphics are:
1. Computer Art:
Using computer graphics we can create fine and commercial art, which includes
animation packages and paint packages. These packages provide facilities for
designing object shapes and specifying object motion. Cartoon drawing,
painting and logo design can also be done.

2. Computer Aided Drawing:

Designing of buildings, automobiles and aircraft is done with the help of computer
aided drawing. This helps in providing minute details in the drawing and
producing more accurate and sharp drawings with better specifications.

3. Presentation Graphics:
Reports can be prepared and financial, statistical,
mathematical, scientific and economic data summarised for research and managerial
reports; moreover, bar graphs, pie charts and time charts can be created
using the tools present in computer graphics.
4. Entertainment:
Computer graphics finds a major part of its utility in the movie industry and the
game industry. It is used for creating motion pictures, music videos, television
shows and cartoon animation films. In the game industry, where focus and
interactivity are the key players, computer graphics helps in providing such
features in an efficient way.

5. Education:
Computer-generated models are extremely useful for teaching a huge number of
concepts and fundamentals in a manner that is easy to understand and learn. Using
computer graphics, many educational models can be created through which
more interest can be generated among students in a subject.

6. Training:
Specialised training systems such as simulators can be used to train
candidates in a way that can be grasped in a short span of time with better
understanding. Creating training modules using computer graphics is simple
and very useful.

7. Visualisation:
Today the need to visualise things has increased drastically, and the need for
visualisation can be seen in many advanced technologies. Data visualisation
helps in finding insights in data, and to check and study the behaviour of
processes around us we need appropriate visualisation, which can be achieved
through the proper use of computer graphics.
8. Image Processing:
Various kinds of photographs or images require editing in order to be used in
different places. Processing existing images into refined ones for better
interpretation is one of the many applications of computer graphics.

9. Machine Drawing:
Computer graphics is very frequently used for designing, modifying and
creating various parts of a machine, and the machine as a whole. The main
reason for using computer graphics for this purpose is that the precision and
clarity obtained from such drawings is essential for the safe
manufacturing of machines from these drawings.

10. Graphical User Interface:

The use of pictures, images, icons, pop-up menus and graphical objects helps in
creating a user-friendly environment where working is easy and pleasant.
Using computer graphics we can create an atmosphere where everything
can be automated and anyone can get the desired action performed in an
easy fashion.
These are some of the applications of computer graphics, due to which its popularity
has increased to a huge extent and will keep on increasing with the progress in
technology.

Q-4. Write the Bresenham line generating algorithm for slope m is greater
than 1. Derive the various decision parameters also.

Ans.
To choose the next pixel between the bottom pixel S and the top pixel T:
            If S is chosen,
            we have x_(i+1) = x_i + 1   and   y_(i+1) = y_i
            If T is chosen,
            we have x_(i+1) = x_i + 1   and   y_(i+1) = y_i + 1

The actual y coordinate of the line at x = x_i + 1 is

            y = m(x_i + 1) + b

The distance from S to the actual line in the y direction is
            s = y - y_i

The distance from T to the actual line in the y direction is

            t = (y_i + 1) - y

Now consider the difference between these two distance values, s - t.

When (s - t) < 0 ⟹ s < t,

the closest pixel is S.

When (s - t) ≥ 0 ⟹ s ≥ t,

the closest pixel is T.

This difference is
            s - t = (y - y_i) - [(y_i + 1) - y]
                  = 2y - 2y_i - 1

Substituting m by Δy/Δx and introducing the decision variable

                d_i = Δx(s - t)

                d_i = Δx(2(Δy/Δx)(x_i + 1) + 2b - 2y_i - 1)

                    = 2Δy·x_i + 2Δy + Δx(2b - 1) - 2Δx·y_i

                d_i = 2Δy·x_i - 2Δx·y_i + c

where c = 2Δy + Δx(2b - 1).

We can write the decision variable d_(i+1) for the next step as

                d_(i+1) = 2Δy·x_(i+1) - 2Δx·y_(i+1) + c
                d_(i+1) - d_i = 2Δy·(x_(i+1) - x_i) - 2Δx·(y_(i+1) - y_i)

Since x_(i+1) = x_i + 1, we have

                d_(i+1) = d_i + 2Δy - 2Δx·(y_(i+1) - y_i)

Special Cases

If the chosen pixel is the top pixel T (i.e., d_i ≥ 0) ⟹ y_(i+1) = y_i + 1

                d_(i+1) = d_i + 2Δy - 2Δx

If the chosen pixel is the bottom pixel S (i.e., d_i < 0) ⟹ y_(i+1) = y_i

                d_(i+1) = d_i + 2Δy

Finally, we calculate d_1:
                d_1 = Δx[2m(x_1 + 1) + 2b - 2y_1 - 1]
                d_1 = Δx[2(mx_1 + b - y_1) + 2m - 1]
Since mx_1 + b - y_1 = 0 and m = Δy/Δx, we have
                d_1 = 2Δy - Δx

Step1: Start Algorithm

Step2: Declare variable x1,x2,y1,y2,d,i1,i2,dx,dy

Step3: Enter value of x1,y1,x2,y2


                Where x1, y1 are the coordinates of the starting point
                and x2, y2 are the coordinates of the ending point

Step4: Calculate dx = x2-x1
                Calculate dy = y2-y1
                Calculate i1=2*dy
                Calculate i2=2*(dy-dx)
                Calculate d=i1-dx

Step5: Consider (x, y) as starting point and xend as maximum possible value of x.


                If dx < 0
                        Then x = x2
                        y = y2
                          xend=x1
                If dx > 0
                    Then x = x1
                y = y1
                        xend=x2

Step6: Generate a point at the (x, y) coordinates.

Step7: Check if whole line is generated.


                If x >= xend
                Stop.

Step8: Calculate co-ordinates of the next pixel


                If d < 0
                    Then d = d + i1
                If d ≥ 0
                    Then d = d + i2
                    and increment y = y + 1

Step9: Increment x = x + 1

Step10: Draw a point of latest (x, y) coordinates

Step11: Go to step 7

Step12: End of Algorithm
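Note that the derivation and the steps above advance x on every iteration, which corresponds to |m| ≤ 1. For the slope m > 1 case the question asks about, the roles of x and y are interchanged: y is incremented on every step, the decision parameters become d_1 = 2Δx - Δy with increments 2Δx and 2(Δx - Δy), and the decision variable chooses whether x also advances. A minimal C sketch of this steep case (assuming the graphics.h putpixel primitive and endpoints with y2 > y1 and x2 >= x1):

// Bresenham line drawing for slope m > 1: y is the driving axis,
// so the increments use dx where the m < 1 case uses dy.
void bresenham_steep(int x1, int y1, int x2, int y2, int color)
{
    int dx = x2 - x1;
    int dy = y2 - y1;
    int d  = 2 * dx - dy;        // d1 = 2*dx - dy
    int i1 = 2 * dx;             // increment used when d < 0
    int i2 = 2 * (dx - dy);      // increment used when d >= 0
    int x = x1, y;

    for (y = y1; y <= y2; y++) {
        putpixel(x, y, color);
        if (d < 0)
            d += i1;             // stay in the same column
        else {
            d += i2;             // also step one column to the right
            x++;
        }
    }
}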
Q 5 is in PDF

Q6
Q 7 Window:
1. A world-coordinate area selected for display is called a window.
2. In computer graphics, a window is a graphical control element.
3. It consists of a visual area containing some of the graphical user interface of
the program it belongs to and is framed by a window decoration.
4. A window defines a rectangular area in world coordinates. You define a
window with a GWINDOW statement. You can define the window to be larger
than, the same size as, or smaller than the actual range of data values,
depending on whether you want to show all of the data or only part of the
data.
Viewport:
1. An area on a display device to which a window is mapped is called a viewport.
2. A viewport is a polygon viewing region in computer graphics. The viewport is
an area expressed in rendering-device-specific coordinates, e.g. pixels for
screen coordinates, in which the objects of interest are going to be rendered.
3. A viewport defines in normalized coordinates a rectangular area on the
display device where the image of the data appears. You define a viewport
with the GPORT command. You can have your graph take up the entire
display device or show it in only a portion, say the upper-right part.
{Q . Find the normalization transformation window to viewport composite matrix, with window
lower left corner at (1,1) and upper right corner at (3,5) on to a viewport with lower left corner at
(5,5) and upper right corner at (10, 10).}

Now the window parameters are,

Xwmin =1, Xwmax =3,

Ywmin =1, Ywmax = 5.

Xvmin =5, Xvmax =10,

Yvmin = 5, Yvmax = 10.

First of all, calculate the x scaling factor Sx and the y scaling factor Sy using
Sx = (Xvmax - Xvmin) / (Xwmax - Xwmin) and Sy = (Yvmax - Yvmin) / (Ywmax - Ywmin).

 Sx = ( 10 – 5 ) / ( 3 - 1 ) = 5 / 2,
 Sy = ( 10 – 5 ) / ( 5 - 1 ) = 5 / 4,

The composite matrix translates the window corner to the origin, scales, and then translates to the viewport corner:

N = T(Xvmin, Yvmin) · S(Sx, Sy) · T(-Xwmin, -Ywmin)

  = | 1 0 5 |   | 5/2  0    0 |   | 1 0 -1 |
    | 0 1 5 | · | 0    5/4  0 | · | 0 1 -1 |
    | 0 0 1 |   | 0    0    1 |   | 0 0  1 |

N = | 5/2  0    5/2  |
    | 0    5/4  15/4 |
    | 0    0    1    |

As a check, N maps the window corner (1, 1) to (5/2 + 5/2, 5/4 + 15/4) = (5, 5) and (3, 5) to (15/2 + 5/2, 25/4 + 15/4) = (10, 10), which are exactly the viewport corners.
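As a quick numerical check of N (a hypothetical helper program, not part of the question), the two non-trivial rows of the matrix can be applied to both window corners:

#include <stdio.h>

// Map the window corners through N = [5/2 0 5/2; 0 5/4 15/4; 0 0 1];
// (1,1) should map to (5,5) and (3,5) to (10,10).
int main(void)
{
    float pts[2][2] = { {1, 1}, {3, 5} };
    int i;
    for (i = 0; i < 2; i++) {
        float xv = 2.5f  * pts[i][0] + 2.5f;   // first row of N
        float yv = 1.25f * pts[i][1] + 3.75f;  // second row of N
        printf("(%g, %g) -> (%g, %g)\n", pts[i][0], pts[i][1], xv, yv);
    }
    return 0;
}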
Q8
Q-10. Write C language functions for the flood fill and boundary fill algorithms.

// flood fill function
// (4-connected; getpixel/putpixel are the graphics.h primitives)

void flood(int x, int y, int new_col, int old_col)

{
    // proceed only if the current pixel still has the old color
    if (getpixel(x, y) == old_col) {

        // recolor the current pixel
        putpixel(x, y, new_col);

        // recursive call for the right neighbor
        flood(x + 1, y, new_col, old_col);

        // recursive call for the left neighbor
        flood(x - 1, y, new_col, old_col);

        // recursive call for the lower neighbor
        flood(x, y + 1, new_col, old_col);

        // recursive call for the upper neighbor
        flood(x, y - 1, new_col, old_col);
    }
}

// boundary fill function

void boundaryfill(int x, int y, int f_color, int b_color)
{
    // recurse only while the pixel is neither boundary- nor fill-colored
    if (getpixel(x, y) != b_color && getpixel(x, y) != f_color) {
        putpixel(x, y, f_color);
        boundaryfill(x + 1, y, f_color, b_color);
        boundaryfill(x, y + 1, f_color, b_color);
        boundaryfill(x - 1, y, f_color, b_color);
        boundaryfill(x, y - 1, f_color, b_color);
    }
}
// getpixel(x, y) gives the color of the specified pixel
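A minimal driver showing how either routine might be invoked is sketched below (the coordinates, radii and colors are hypothetical). Note that both routines recurse once per pixel, so for large regions the stack can overflow; production code would use an explicit stack or a scanline variant.

// Hypothetical driver: draw two circles and fill one with each routine.
#include <graphics.h>
#include <conio.h>

void main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");

    circle(100, 100, 40);                // outline drawn in WHITE
    boundaryfill(100, 100, RED, WHITE);  // fill until the WHITE boundary

    circle(250, 100, 40);
    flood(250, 100, GREEN, BLACK);       // replace the BLACK interior

    getch();
    closegraph();
}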

Q11. Write a C program to draw a line between two points using the DDA algorithm.
#include<stdio.h>
#include<conio.h>
#include<graphics.h>
  
//Function for finding absolute value
int abs (int n)
{
    return ( (n>0) ? n : ( n * (-1)));
}
  
//DDA Function for line generation
void DDA(int X0, int Y0, int X1, int Y1)
{
    // calculate dx & dy
    int dx = X1 - X0;
    int dy = Y1 - Y0;
  
    // calculate steps required for generating pixels
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
  
    // calculate increment in x & y for each steps
    float Xinc = dx / (float) steps;
    float Yinc = dy / (float) steps;
  
    // Put pixel for each step
    float X = X0;
    float Y = Y0;
    for (int i = 0; i <= steps; i++)
    {
        putpixel ((int)(X + 0.5), (int)(Y + 0.5), RED);  // put pixel at the rounded (X,Y)
        X += Xinc;           // increment in x at each step
        Y += Yinc;           // increment in y at each step
        delay(100);          // for visualization of line-
                             // generation step by step
    }
}
  
// Driver program
void main()
{
    int gd = DETECT, gm;
  
    // Initialize graphics function
    initgraph (&gd, &gm, "");   
  
    int X0, Y0, X1, Y1;
    printf("Enter the values of X0, Y0, X1, Y1 respectively: ");
    scanf("%d%d%d%d", &X0, &Y0, &X1, &Y1);
    DDA(X0, Y0, X1, Y1);

    getch();
    closegraph();
}

Q12
Parametric Continuity Conditions

To ensure a smooth transition from one section of a piecewise parametric curve
to the next, we can impose various continuity conditions at the connection
points. If each section of a spline is described with a set of parametric coordinate
functions of the form
    x = x(u), y = y(u), z = z(u),   u1 <= u <= u2
we set parametric continuity by matching the parametric derivatives of adjoining
curve sections at their common boundary. Zero-order parametric continuity (C0)
means simply that the two sections meet at the joint; first-order continuity (C1)
means their first parametric derivatives are also equal there; second-order
continuity (C2) means the first and second parametric derivatives both match.

Geometric Continuity Conditions


An alternate method for joining two successive curve sections is to specify conditions
for geometric continuity. In this case, we only require the parametric derivatives
of the two sections to be proportional to each other at their common boundary,
instead of equal to each other.
Zero-order geometric continuity, described as G0 continuity, is the same as
zero-order parametric continuity. That is, the two curve sections must have the
same coordinate position at the boundary point. First-order geometric continuity,
or G1 continuity, means that the parametric first derivatives are proportional
at the intersection of two successive sections. If we denote the parametric position
on the curve as P(u), the direction of the tangent vector P'(u), but not necessarily
its magnitude, will be the same for two successive curve sections at their
joining point under G1 continuity. Second-order geometric continuity, or G2 continuity,
means that both the first and second parametric derivatives of the two
curve sections are proportional at their boundary. Under G2 continuity, the curvatures
of the two curve sections match at the joining position.
A curve generated with geometric continuity conditions is similar to one
generated with parametric continuity, but with slight differences in curve shape.
Figure 10-25 provides a comparison of geometric and parametric continuity. With
geometric continuity, the curve is pulled toward the section with the greater tangent
vector.
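Written out compactly for two sections P(u) and Q(u) meeting at a joint (a restatement of the definitions above, with u_max the parameter end of P and u_min the parameter start of Q):

            C0: P(u_max) = Q(u_min)
            C1: C0 holds, and P'(u_max) = Q'(u_min)
            C2: C1 holds, and P''(u_max) = Q''(u_min)
            G1: C0 holds, and P'(u_max) = k·Q'(u_min) for some scalar k > 0
            G2: G1 holds, and P''(u_max) is likewise proportional to Q''(u_min)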

Random and Raster Display System

Raster-Scan Displays
The most common type of graphics monitor employing a CRT is the raster-scan
display, based on television technology. In a raster-scan system, the electron
beam is swept across the screen, one row at a time from top to bottom. As the
electron beam moves across each row, the beam intensity is turned on and off to
create a pattern of illuminated spots. Picture definition is stored in a memory
area called the refresh buffer or frame buffer. This memory area holds the set of
intensity values for all the screen points. Stored intensity values are then retrieved
from the refresh buffer and "painted" on the screen one row (scan line) at
a time (Fig. 2-7). Each screen point is referred to as a pixel or pel (shortened
forms of picture element). The capability of a raster-scan system to store intensity
information for each screen point makes it well suited for the realistic display
of scenes containing subtle shading and color patterns. Home television sets and
printers are examples of other systems using raster-scan methods.
The intensity range for pixel positions depends on the capability of the raster
system. In a simple black-and-white system, each screen point is either on or off,
so only one bit per pixel is needed to control the intensity of screen positions. For
a bilevel system, a bit value of 1 indicates that the electron beam is to be turned
on at that position, and a value of 0 indicates that the beam intensity is to be off.
Additional bits are needed when color and intensity variations can be displayed.
Up to 24 bits per pixel are included in high-quality systems, which can require
several megabytes of storage for the frame buffer, depending on the resolution of
the system. A system with 24 bits per pixel and a screen resolution of 1024 by
1024 requires 3 megabytes of storage for the frame buffer. On a black-and-white
system with one bit per pixel, the frame buffer is commonly called a bitmap. For
systems with multiple bits per pixel, the frame buffer is often referred to as a
pixmap.
Refreshing on raster-scan displays is carried out at the rate of 60 to 80
frames per second, although some systems are designed for higher refresh rates.
Sometimes, refresh rates are described in units of cycles per second, or Hertz
(Hz), where a cycle corresponds to one frame. Using these units, we would describe
a refresh rate of 60 frames per second as simply 60 Hz. At the end of each
scan line, the electron beam returns to the left side of the screen to begin displaying
the next scan line. The return to the left of the screen, after refreshing each
scan line, is called the horizontal retrace of the electron beam. And at the end of
each frame (displayed in 1/80th to 1/60th of a second), the electron beam returns
(vertical retrace) to the top left corner of the screen to begin the next frame.
On some raster-scan systems (and in TV sets), each frame is displayed in
two passes using an interlaced refresh procedure. In the first pass, the beam
sweeps across every other scan line from top to bottom. Then after the vertical retrace,
the beam sweeps out the remaining scan lines (Fig. 2-8). Interlacing the
scan lines in this way allows us to see the entire screen displayed in one-half the
time it would have taken to sweep across all the lines at once from top to bottom.
Interlacing is primarily used with slower refresh rates. On an older, 30 frame-per-second,
noninterlaced display, for instance, some flicker is noticeable. But
with interlacing, each of the two passes can be accomplished in 1/60th of a second,
which brings the refresh rate nearer to 60 frames per second. This is an effective
technique for avoiding flicker, provided that adjacent scan lines contain similar
display information.

Random-Scan Displays
When operated as a random-scan display unit, a CRT has the electron beam directed
only to the parts of the screen where a picture is to be drawn. Random-scan
monitors draw a picture one line at a time and for this reason are also referred
to as vector displays (or stroke-writing or calligraphic displays). The
component lines of a picture can be drawn and refreshed by a random-scan system
in any specified order (Fig. 2-9). A pen plotter operates in a similar way and
is an example of a random-scan, hard-copy device.
The refresh rate on a random-scan system depends on the number of lines to be
displayed. Picture definition is now stored as a set of line-drawing commands in
an area of memory referred to as the refresh display file. Sometimes the refresh
display file is called the display list, display program, or simply the refresh
buffer. To display a specified picture, the system cycles through the set of commands
in the display file, drawing each component line in turn. After all line-drawing
commands have been processed, the system cycles back to the first line
command in the list. Random-scan displays are designed to draw all the component
lines of a picture 30 to 60 times each second. High-quality vector systems are
capable of handling approximately 100,000 "short" lines at this refresh rate.
When a small set of lines is to be displayed, each refresh cycle is delayed to avoid
refresh rates greater than 60 frames per second; otherwise, faster refreshing of
the set of lines could burn out the phosphor.
Random-scan systems are designed for line-drawing applications and cannot
display realistic shaded scenes. Since picture definition is stored as a set of
line-drawing instructions and not as a set of intensity values for all screen points,
vector displays generally have higher resolution than raster systems. Also, vector
displays produce smooth line drawings because the CRT beam directly follows
the line path. A raster system, in contrast, produces jagged lines that are plotted
as discrete point sets.

C) Vanishing point and perspective foreshortening

Perspective Projection
In perspective projection, the farther an object is from the viewer, the smaller it appears.
This property of the projection gives an idea of depth. Artists use perspective projection
for drawing three-dimensional scenes.

The two main characteristics of perspective are vanishing points and perspective
foreshortening. Due to foreshortening, objects and lengths appear smaller the farther
they are from the center of projection: the more we increase the distance from the center
of projection, the smaller the object appears.

Vanishing Point
It is the point where all parallel lines appear to meet. There can be one-point, two-point,
and three-point perspectives.

One Point: There is only one vanishing point, as shown in fig (a).

Two Points: There are two vanishing points, one in the x-direction and the other in the
y-direction, as shown in fig (b).

Three Points: There are three vanishing points, one in the x-direction, a second in the
y-direction, and a third in the z-direction.

In perspective projection the lines of projection do not remain parallel. The lines converge at
a single point called the center of projection. The projected image on the screen is
obtained from the points of intersection of the converging lines with the plane of the screen. The
image on the screen is seen as if the viewer's eye were located at the center of projection,
and the lines of projection correspond to the paths traveled by light beams originating from the object.
d) Projection and its types
Projection
It is the process of converting a 3D object into a 2D image: the mapping or transformation
of the object onto the projection plane, or view plane, which is the display surface.
Perspective Projection
Perspective projection, foreshortening, and the one-, two- and three-point vanishing-point
cases are exactly as described in part (c) above.

Important terms related to perspective:

1. View plane: The area of the world coordinate system that is projected onto the
viewing plane.
2. Center of Projection: The location of the eye, on which the projected light
rays converge.
3. Projectors: Also called projection vectors. These are rays that start from the
object scene and are used to create an image of the object on the viewing or view
plane.

Anomalies in Perspective Projection
Perspective projection introduces several anomalies that affect the shape and
appearance of objects:

1. Perspective foreshortening: The size of a projected object becomes smaller as its
distance from the center of projection increases.
2. Vanishing points: All parallel lines appear to meet at some point in the view plane.
3. Distortion of lines: A line that ranges from in front of the viewer to behind the
viewer is distorted by the projection.

Foreshortening of the z-axis in fig (a) produces one vanishing point, P1. Foreshortening
the x- and z-axes results in two vanishing points in fig (b). Adding y-axis foreshortening
in fig (c) adds a third vanishing point along the negative y-axis.
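The foreshortening anomaly can be made concrete with the standard perspective equations (assuming a center of projection at the origin and a view plane at z = d, a common textbook convention rather than anything stated in the question):

            x_p = x · d / z,    y_p = y · d / z

Doubling an object's distance z from the center of projection halves its projected size, which is exactly the behaviour described above.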

Parallel Projection
Parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, the projection is called orthographic. A parallel
projection is formed by extending parallel lines from each vertex of the object until they
intersect the plane of the screen; the point of intersection is the projection of the vertex.
Parallel projections are used by architects and engineers for creating working drawings of
an object; complete representations require two or more views of the object using
different planes.

1. Isometric Projection: The direction of projection makes equal angles with all three
principal axes (the axes are commonly drawn at 30°).
2. Dimetric: The direction of projection makes equal angles with two of the
principal axes.
3. Trimetric: The direction of projection makes unequal angles with the three principal
axes.
4. Cavalier: All lines perpendicular to the projection plane are projected with no
change in length.
5. Cabinet: All lines perpendicular to the projection plane are projected at one half
of their length. This gives a more realistic appearance of the object. (A compact
formula for the cavalier and cabinet cases follows this list.)
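The cavalier and cabinet cases can be summarised by the standard oblique-projection formula (with α the angle the projectors make in the view plane, a convention assumed here rather than given in the question):

            x_p = x + z · L · cos α,    y_p = y + z · L · sin α

where L = 1 for a cavalier projection (receding lines keep their full length) and L = 1/2 for a cabinet projection (receding lines are halved).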
e) Z-Buffer Algorithm
It is also called the depth buffer algorithm, and it is the simplest image-space
hidden-surface algorithm. For each pixel on the display screen, we keep a record of the depth of
the object within the pixel that lies closest to the observer. In addition to depth, we also
record the intensity that should be displayed to show the object. The depth buffer is an
extension of the frame buffer. The depth buffer algorithm requires two arrays, intensity and
depth, each of which is indexed by pixel coordinates (x, y).

Algorithm
1. For all pixels on the screen, set depth[x, y] to 1.0 and intensity[x, y] to a background
value.

2. For each polygon in the scene, find all pixels (x, y) that lie within the boundaries of the
polygon when it is projected onto the screen. For each of these pixels:

(a) Calculate the depth z of the polygon at (x, y).

(b) If z < depth[x, y], this polygon is closer to the observer than others already
recorded for this pixel. In this case, set depth[x, y] to z and intensity[x, y] to a value
corresponding to the polygon's shading. If instead z > depth[x, y], the polygon already
recorded at (x, y) lies closer to the observer than does this new polygon, and no action
is taken.

3. After all polygons have been processed, the intensity array will contain the solution.

The depth buffer algorithm illustrates several features common to all hidden-surface
algorithms. First, it requires a representation of all opaque surfaces in the scene, polygons
in this case. These polygons may be faces of polyhedra recorded in the model of the scene, or
they may simply represent thin opaque 'sheets' in the scene. The second important feature
of the algorithm is its use of a screen coordinate system: before step 1, all polygons in the
scene are transformed into the screen coordinate system
using matrix multiplication.
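The loop structure of steps 1-3 is easy to see in code. The self-contained miniature below replaces real scan-converted polygons with screen-aligned rectangles of constant depth (the Rect type and the 8 x 8 raster are hypothetical stand-ins), but the buffer initialisation and the depth comparison are exactly the steps described above:

#include <stdio.h>

#define W 8
#define H 8

// A stand-in for a scan-converted polygon: a screen-aligned
// rectangle with constant depth and shading value.
typedef struct { int x0, y0, x1, y1; float z; int shade; } Rect;

float depth[W][H];
int   intensity[W][H];

void zbuffer(const Rect *r, int n, int background)
{
    int i, x, y;

    for (x = 0; x < W; x++)                  // step 1: initialise buffers
        for (y = 0; y < H; y++) {
            depth[x][y] = 1.0f;              // farthest possible depth
            intensity[x][y] = background;
        }

    for (i = 0; i < n; i++)                  // step 2: each "polygon"
        for (x = r[i].x0; x <= r[i].x1; x++)
            for (y = r[i].y0; y <= r[i].y1; y++)
                if (r[i].z < depth[x][y]) {  // closer than anything so far
                    depth[x][y] = r[i].z;
                    intensity[x][y] = r[i].shade;
                }
}

int main(void)
{
    Rect scene[2] = { {1, 1, 5, 5, 0.6f, 1},   // far square
                      {3, 3, 7, 7, 0.3f, 2} }; // near square, overlapping it
    int x, y;

    zbuffer(scene, 2, 0);
    for (y = 0; y < H; y++) {                  // step 3: print the solution
        for (x = 0; x < W; x++)
            printf("%d", intensity[x][y]);
        printf("\n");
    }
    return 0;
}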

Limitations of Depth Buffer

1. The depth buffer algorithm is not always practical because of the enormous size
of the depth and intensity arrays.
2. Generating an image with a raster of 500 x 500 pixels requires 250,000 storage
locations for each array.
3. Even though the frame buffer may provide memory for the intensity array, the depth
array remains large.
4. To reduce the amount of storage required, the image can be divided into many
smaller images, and the depth buffer algorithm is applied to each in turn.
5. For example, the original 500 x 500 raster can be divided into 100 rasters, each
50 x 50 pixels.
6. Processing each small raster requires arrays of only 2,500 elements, but execution
time grows because each polygon is processed many times.
7. Subdivision of the screen does not always increase execution time; instead, it can
help reduce the work required to generate the image. This reduction arises
because of coherence between small regions of the screen.
