CG Assignment 20th July
Ans.
Figures on a computer screen can be drawn using polygons. To fill those figures with
color, we need an algorithm. There are two famous algorithms for this purpose: the
boundary-fill and scanline-fill algorithms.
Boundary filling requires a lot of processing and thus encounters problems in
real time. The viable alternative is scanline filling, as it is very robust in nature.
This article discusses how to use the scanline-fill algorithm to fill colors in an image.
Scanline Polygon Filling Algorithm
Scanline filling is basically the filling up of polygons using horizontal lines, or scanlines.
The purpose of the SLPF algorithm is to fill (color) the interior pixels of a polygon
given only the vertices of the figure. To understand scanline filling, think of the image
being drawn by a single pen starting from the bottom left, continuing to the right, plotting
only points where there is a point present in the image; when the line is
complete, the pen starts from the next line and continues.
The algorithm works by intersecting each scanline with the polygon edges and filling the
polygon between pairs of intersections.
1. We process the polygon edge by edge and store the edges in the edge table.
2. Each edge is stored in the edge tuple of the scanline equal to the
y-coordinate of the edge's lowermost point.
3. After any edge is added to an edge tuple, the tuple is sorted using insertion
sort according to its xofymin value.
4. After the whole polygon has been added to the edge table, the figure can be filled.
5. Filling starts from the first scanline at the bottom and continues till the top.
6. The active edge tuple is then maintained, and the following steps are repeated for
each scanline:
i. Copy all edge buckets of the designated scanline into the active edge tuple.
ii. Perform an insertion sort according to the xofymin values.
iii. Remove all edge buckets whose ymax is equal to the current scanline.
iv. Fill between pairs of x-intersections in the active tuple. If a vertex is
encountered, follow these rules:
- If both edges meeting at the vertex are on the same side of the
scanline, count it as two points.
- If the edges meeting at the vertex are on opposite sides of the
scanline, count it as only one point.
v. Update the xofymin of each bucket by adding its slope inverse.
Q2
Ans.
Computer graphics deals with the creation, manipulation, and storage of different types of
images and objects.
Some of the applications of computer graphics are:
1. Computer Art:
Using computer graphics we can create fine and commercial art, including
animation packages and paint packages. These packages provide facilities for
designing object shapes and specifying object motion. Cartoon drawing,
painting, and logo design can also be done.
2. Presentation Graphics:
Computer graphics tools can be used to prepare reports and to summarise
financial, statistical, mathematical, scientific, and economic data for research
and managerial reports; moreover, bar graphs, pie charts, and time charts can
be created with them.
3. Entertainment:
Computer graphics finds a major part of its utility in the movie industry and
the game industry. It is used for creating motion pictures, music videos, television
shows, and cartoon animation films. In the game industry, where focus and
interactivity are the key players, computer graphics helps provide such
features in an efficient way.
4. Education:
Computer-generated models are extremely useful for teaching a huge number of
concepts and fundamentals in an easy-to-understand manner. Using
computer graphics, many educational models can be created that generate
more interest among students in the subject.
5. Training:
Specialised training systems such as simulators can train
candidates in a way that can be grasped in a short span of time with better
understanding. Creating training modules using computer graphics is simple
and very useful.
6. Visualisation:
Today the need to visualise things has increased drastically, and it can be
seen in many advanced technologies. Data visualisation helps in finding
insights in data, and to check and study the behaviour of processes around
us we need appropriate visualisation, which can be achieved through proper
use of computer graphics.
7. Image Processing:
Various kinds of photographs or images require editing in order to be used in
different places. Processing existing images into refined ones for better
interpretation is one of the many applications of computer graphics.
8. Machine Drawing:
Computer graphics is very frequently used for designing, modifying, and
creating various parts of a machine, and the whole machine itself. The main
reason for using computer graphics here is that the precision and clarity of
such drawings are essential for the safe manufacture of machines from them.
Q-4. Write the Bresenham line generating algorithm for slope m is greater
than 1. Derive the various decision parameters also.
Ans.
When the slope m > 1, the line rises faster than it runs, so we step along the y-axis
in unit increments and, at each step, choose the x-coordinate of the next pixel. After
plotting (xi, yi), the next scanline is yi + 1, and we choose between the pixel S
directly above and the pixel T diagonally above-right.
If S is chosen
We have xi+1 = xi and yi+1 = yi + 1
If T is chosen
We have xi+1 = xi + 1 and yi+1 = yi + 1
Let x be the exact x-coordinate of the line at y = yi + 1. The distances to the two
candidates are s = x - xi and t = (xi + 1) - x, so their difference is
s - t = (x - xi) - [(xi + 1) - x]
     = 2x - 2xi - 1
Writing the line as y = mx + b with m = dy/dx, we have x = (yi + 1 - b)/m. Define the
decision parameter di = dy(s - t), which has the same sign as s - t:
di = 2dx(yi + 1 - b) - dy(2xi + 1)
Choose T when di >= 0, otherwise choose S. Subtracting successive parameters gives the
update rule:
di+1 - di = 2dx - 2dy(xi+1 - xi)
If S is chosen (xi+1 = xi): di+1 = di + 2dx
If T is chosen (xi+1 = xi + 1): di+1 = di + 2(dx - dy)
Finally, we calculate d1 at the starting endpoint (x1, y1). Since mx1 + b - y1 = 0 and
m = dy/dx, we have
d1 = 2dx - dy
Step 1: Start algorithm.
Step 2: Declare the variables x1, y1, x2, y2, dx, dy, i1, i2, d, x, y.
Step 3: Input the two endpoints, storing the lower endpoint in (x1, y1).
Step 4: Calculate dx = x2 - x1
Calculate dy = y2 - y1
Calculate i1 = 2*dx
Calculate i2 = 2*(dx - dy)
Calculate d = i1 - dy
Step 5: Set x = x1, y = y1.
Step 6: Plot the pixel (x, y).
Step 7: If y = y2, go to step 12.
Step 8: If d < 0, set d = d + i1.
Step 9: Otherwise, increment x = x + 1 and set d = d + i2.
Step 10: Increment y = y + 1.
Step 11: Go to step 6.
Step 12: End of Algorithm
Q 5 is in PDF
Q6
Q 7 Window:
1. A world-coordinate area selected for display is called a window.
2. In computer graphics, a window is a graphical control element.
3. It consists of a visual area containing some of the graphical user interface of
the program it belongs to and is framed by a window decoration.
4. A window defines a rectangular area in world coordinates. You define a
window with a GWINDOW statement. You can define the window to be larger
than, the same size as, or smaller than the actual range of data values,
depending on whether you want to show all of the data or only part of the
data.
Viewport:
1. An area on a display device to which a window is mapped is called a viewport.
2. A viewport is a polygon viewing region in computer graphics. The viewport is
an area expressed in rendering-device-specific coordinates, e.g. pixels for
screen coordinates, in which the objects of interest are going to be rendered.
3. A viewport defines in normalized coordinates a rectangular area on the
display device where the image of the data appears. You define a viewport
with the GPORT command. You can have your graph take up the entire
display device or show it in only a portion, say the upper-right part.
{Q . Find the normalization transformation window to viewport composite matrix, with window
lower left corner at (1,1) and upper right corner at (3,5) on to a viewport with lower left corner at
(5,5) and upper right corner at (10, 10).}
First of all, calculate the scaling factor of the x coordinate, Sx = (xvmax - xvmin) / (xwmax - xwmin),
and the scaling factor of the y coordinate, Sy = (yvmax - yvmin) / (ywmax - ywmin):
Sx = ( 10 - 5 ) / ( 3 - 1 ) = 5 / 2,
Sy = ( 10 - 5 ) / ( 5 - 1 ) = 5 / 4,
The composite matrix is N = T(5, 5) . S(5/2, 5/4) . T(-1, -1):
N =
1 0 5     5/2  0  0     1 0 -1
0 1 5  x   0  5/4 0  x  0 1 -1
0 0 1      0   0  1     0 0  1
=>
N =
5/2  0   5/2
 0  5/4 15/4
 0   0    1
Check: (1, 1) maps to (5/2 + 5/2, 5/4 + 15/4) = (5, 5) and (3, 5) maps to
(15/2 + 5/2, 25/4 + 15/4) = (10, 10), the two viewport corners.
Q8
Q-10. Write C language functions for flood fill and boundary
fill Algorithm
The recursive flood-fill function (getpixel and putpixel are the graphics.h primitives):
void flood(int x, int y, int new_col, int old_col)
{
    // check whether the current pixel still has the old color
    if (getpixel(x, y) == old_col) {
        // put new pixel with new color
        putpixel(x, y, new_col);
        // recursive call for the right pixel
        flood(x + 1, y, new_col, old_col);
        // recursive call for the left pixel
        flood(x - 1, y, new_col, old_col);
        // recursive call for the pixel below (screen y grows downward)
        flood(x, y + 1, new_col, old_col);
        // recursive call for the pixel above
        flood(x, y - 1, new_col, old_col);
    }
}
Q12
Parametric Continuity Conditions
Raster-Scan Displays
The most common type of graphics monitor employing a CRT is the raster-scan
display, based on television technology. In a raster-scan system, the electron
beam is swept across the screen, one row at a time from top to bottom. As the
electron beam moves across each row, the beam intensity is turned on and off to
create a pattern of illuminated spots. Picture definition is stored in a memory
area called the refresh buffer or frame buffer. This memory area holds the set of
intensity values for all the screen points. Stored intensity values are then retrieved
from the refresh buffer and "painted" on the screen one row (scan line) at
a time (Fig. 2-7). Each screen point is referred to as a pixel or pel (shortened
forms of picture element). The capability of a raster-scan system to store intensity
information for each screen point makes it well suited for the realistic display
of scenes containing subtle shading and color patterns. Home television sets and
printers are examples of other systems using raster-scan methods.
The intensity range for pixel positions depends on the capability of the raster
system. In a simple black-and-white system, each screen point is either on or off,
so only one bit per pixel is needed to control the intensity of screen positions. For
a bilevel system, a bit value of 1 indicates that the electron beam is to be turned
on at that position, and a value of 0 indicates that the beam intensity is to be off.
Additional bits are needed when color and intensity variations can be displayed.
Up to 24 bits per pixel are included in high-quality systems, which can require
several megabytes of storage for the frame buffer, depending on the resolution of
the system. A system with 24 bits per pixel and a screen resolution of 1024 by
1024 requires 3 megabytes of storage for the frame buffer. On a black-and-white
system with one bit per pixel, the frame buffer is commonly called a bitmap. For
systems with multiple bits per pixel, the frame buffer is often referred to as a
pixmap.
Refreshing on raster-scan displays is carried out at the rate of 60 to 80
frames per second, although some systems are designed for higher refresh rates.
Sometimes, refresh rates are described in units of cycles per second, or Hertz
(Hz), where a cycle corresponds to one frame. Using these units, we would describe
a refresh rate of 60 frames per second as simply 60 Hz. At the end of each
scan line, the electron beam returns to the left side of the screen to begin displaying
the next scan line. The return to the left of the screen, after refreshing each
scan line, is called the horizontal retrace of the electron beam. And at the end of
each frame (displayed in 1/80th to 1/60th of a second), the electron beam returns
(vertical retrace) to the top left corner of the screen to begin the next frame.
On some raster-scan systems (and in TV sets), each frame is displayed in
two passes using an interlaced refresh procedure. In the first pass, the beam
sweeps across every other scan line from top to bottom. Then after the vertical retrace,
the beam sweeps out the remaining scan lines (Fig. 2-8). Interlacing of the
scan lines in this way allows us to see the entire screen displayed in one-half the
time it would have taken to sweep across all the lines at once from top to bottom.
Interlacing is primarily used with slower refreshing rates. On an older, 30 frame-per-second,
noninterlaced display, for instance, some flicker is noticeable. But
with interlacing, each of the two passes can be accomplished in 1/60th of a second,
which brings the refresh rate nearer to 60 frames per second. This is an effective
technique for avoiding flicker, providing that adjacent scan lines contain similar
display information.
Random-Scan Displays
When operated as a random-scan display unit, a CRT has the electron beam directed
only to the parts of the screen where a picture is to be drawn. Random-scan
monitors draw a picture one line at a time and for this reason are also referred
to as vector displays (or stroke-writing or calligraphic displays). The
component lines of a picture can be drawn and refreshed by a random-scan system
in any specified order (Fig. 2-9). A pen plotter operates in a similar way and
is an example of a random-scan, hard-copy device.
Refresh rate on a random-scan system depends on the number of lines to be
displayed. Picture definition is now stored as a set of line-drawing commands in
an area of memory referred to as the refresh display file. Sometimes the refresh
display file is called the display list, display program, or simply the refresh
buffer. To display a specified picture, the system cycles through the set of commands
in the display file, drawing each component line in turn. After all line-drawing
commands have been processed, the system cycles back to the first line
command in the list. Random-scan displays are designed to draw all the component
lines of a picture 30 to 60 times each second. High-quality vector systems are
capable of handling approximately 100,000 "short" lines at this refresh rate.
When a small set of lines is to be displayed, each refresh cycle is delayed to avoid
refresh rates greater than 60 frames per second. Otherwise, faster refreshing of
the set of lines could burn out the phosphor.
Random-scan systems are designed for line-drawing applications and cannot
display realistic shaded scenes. Since picture definition is stored as a set of
line-drawing instructions and not as a set of intensity values for all screen points,
vector displays generally have higher resolution than raster systems. Also, vector
displays produce smooth line drawings because the CRT beam directly follows
the line path. A raster system, in contrast, produces jagged lines that are plotted
as discrete point sets.
d) Projection and its types
Projection
It is the process of converting a 3D object into a 2D representation. It is also defined
as the mapping or transformation of the object onto the projection plane or view plane.
The view plane is the displayed surface.
Perspective Projection
In perspective projection, the farther an object is from the viewer, the smaller it
appears. This property of the projection gives an idea of depth. Artists use
perspective projection for drawing three-dimensional scenes.
Vanishing Point
It is the point where all parallel lines appear to meet. There can be one-point,
two-point, and three-point perspectives.
One Point: There is only one vanishing point.
Two Points: There are two vanishing points, one in the x-direction and the other in
the y-direction, as shown in fig (b).
Three Points: There are three vanishing points, one each in the x, y, and z directions.
In perspective projection the lines of projection do not remain parallel; they converge
at a single point called the center of projection. The projected image on the screen is
obtained from the points of intersection of the converging lines with the plane of the
screen. The image on the screen is seen as if the viewer's eye were located at the
center of projection, so the lines of projection correspond to the paths travelled by
light beams originating from the object.
Parallel Projection
Parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, it is called orthographic projection.
The parallel projection is formed by extending parallel lines from each vertex on the
object until they intersect the plane of the screen; the point of intersection is the
projection of the vertex. Parallel projections are used by architects and engineers
for creating working drawings of an object, since complete representations require
two or more views of the object using different planes.
Algorithm
1. For all pixels on the screen, set depth [x, y] to 1.0 and intensity [x, y] to a
background value.
2. For each polygon in the scene, find all pixels (x, y) that lie within the boundaries
of the polygon when projected onto the screen. For each of these pixels:
(a) Calculate the depth z of the polygon at (x, y).
(b) If z < depth [x, y], this polygon is closer to the observer than others already
recorded for this pixel. In this case, set depth [x, y] to z and intensity [x, y] to a
value corresponding to the polygon's shading. If instead z > depth [x, y], the polygon
already recorded at (x, y) lies closer to the observer than does this new polygon, and
no action is taken.
3. After all polygons have been processed, the intensity array will contain the solution.
4. The depth-buffer algorithm illustrates several features common to all hidden-surface
algorithms.
5. First, it requires a representation of all opaque surfaces in the scene, polygons in
this case.
6. These polygons may be faces of polyhedra recorded in the model of the scene, or may
simply represent thin opaque 'sheets' in the scene.
7. The second important feature of the algorithm is its use of a screen coordinate
system. Before step 1, all polygons in the scene are transformed into the screen
coordinate system using matrix multiplications.