
Chap 2

1. Define Graphic, Graphics & Computer Graphics. Discuss about different types of computer
graphics.
 Graphic: A graphic is an image or visual representation of an object.
Graphics: Graphics are visual images or designs on some surface, such as a wall, canvas, screen, paper, or
stone to inform, illustrate, or entertain. Graphics often combine text, illustration, & color. Graphics can be
functional or artistic.
Computer Graphics: Computer graphics are simply images displayed on a computer screen.

There are basically 2 types of computer graphics. They are:


 Raster graphics is a dot matrix data structure, representing a generally rectangular grid of pixels, or
points of color, viewable via a monitor, paper, or other display medium. Raster images are stored in
image files with varying formats.
 Vector graphics is the use of polygons to represent images in computer graphics. Vector graphics
are based on vectors, which lead through locations called control points or nodes. It is the creation of
digital images through a sequence of mathematical statements that place lines & shapes in a given
two-dimensional or three-dimensional space. In vector graphics, the file that results from a graphic
artist's work is created & saved as a sequence of vector statements.

2. Define Pixel, Resolution & Aspect Ratio.


 Pixel: A pixel is the smallest unit of a digital image or graphic that can be displayed & represented on a
digital display device. A pixel is represented by a dot or square on a computer monitor display screen.
Basically, a pixel is the combination of 3 dots (red, green, & blue) called RGB.
Resolution: In computers, resolution is the number of pixels contained on a display monitor, expressed in
terms of the number of pixels on the horizontal axis & the number on the vertical axis. The sharpness of the
image on a display depends on the resolution & the size of the monitor.
Aspect Ratio: An aspect ratio is an attribute that describes the relationship between the width & height of an
image. Aspect ratio is expressed by the symbolic notation: X:Y. The values of X & Y are not the actual
width & height of the image, but describe the relationship between them.
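For a quick worked example (an illustrative sketch, not part of the original notes), the X:Y form of an aspect ratio can be computed by dividing the pixel width & height by their greatest common divisor; a 1920 x 1080 frame reduces to 16:9.

#include <stdio.h>

/* Greatest common divisor by Euclid's algorithm */
static int gcd(int a, int b) {
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}

int main(void) {
    int width = 1920, height = 1080;      /* example frame size */
    int g = gcd(width, height);           /* g = 120 here */
    printf("Aspect ratio = %d:%d\n", width / g, height / g);   /* prints 16:9 */
    return 0;
}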

3. Write the characteristics of vector & raster graphics. Differentiate between them.
 Characteristics of raster graphics:
 A raster graphics is a dot matrix data structure.
 A raster is a grid of x & y coordinates on a display space.
 It represents a generally rectangular grid of pixels, or points of color, so the color of individual pixels cannot easily be changed.
 Resolution is defined in the number of dots per inch (dpi). Scaling up an image decreases the image
quality.
 The most commonly used file formats are: BMP (Bitmap), TIFF (Tagged Image File Format), JPEG
(Joint Photographic Experts Group), GIF (Graphics Interchange Format), PNG (Portable Network
Graphics), PSD (Adobe Photoshop)
 The larger the raster image, & the more colors used – the larger the file.

Characteristics of Vector graphics:


 Vector graphics is made up of paths.
 It is the creation of digital images through a sequence of mathematical statements.
 Changing the color of the vector object – since it does not use pixels – is relatively easy.
 Vector images can easily be ‘scaled’ up or down without losing any image quality.
 The most commonly used file formats are: EPS (Encapsulated PostScript), WMF (Windows
Metafile), AI (Adobe Illustrator), DXF (AutoCAD), SVG (Scalable Vector Graphics).
 These files are relatively small in size by comparison.
Raster vs. Vector:
 The basic difference between raster & vector is that a raster image is made up of pixels, whereas a
vector image is made up of paths.
 A raster image uses different colored pixels, which are arranged in a manner that displays an image. A
vector image is made up of paths, each with a mathematical formula, which tells how each
part of the image is shaped & what color it is bordered with or filled by.
 Raster images are mainly used for high density images, or images with many different colors, as each
pixel can have a different color, which can be overlapped for different colors & shapes. A vector
image is limited, as an image has to be looped & closed off before it can be filled with color.
 Vector images can be resized without any visible negative impacts. A raster image, on the other hand,
is visibly pixelated when enlarged.
 Vector images are often used for logos, letterheads, texts, & other designs. Raster images are used for
photographs & high colored images in web & print.
 Vector images cannot be displayed directly in most electronic (raster) formats; they must first be converted
to a raster image. Conversion from vector to raster is easier than the other way around.

4. Briefly describe about RGB & CMY Color Model.


The purpose of a color model is to facilitate the specification of colors in some standard generally accepted
way. In essence, a color model is a specification of a 3-D coordinate system & a subspace within that system
where each color is represented by a single point.
RGB Color Model: In the RGB model, each color appears as a combination of red, green, & blue. This
model is called additive, & the colors are called primary colors. The primary colors can be added to produce
the secondary colors of light magenta (red plus blue), cyan (green plus blue), & yellow (red plus green). The
combination of red, green, & blue at full intensities makes white.
The color subspace of interest is the cube shown in Figure ‘a’, in which R, G, & B values are at three corners;
cyan, magenta, & yellow are at three other corners; black is at the origin; & white is at the corner farthest
from the origin.
The gray scale extends from black to white along the diagonal joining these two points. The colors are the
points on or inside the cube, defined by vectors extending from the origin.
CMY Color Model: The CMY color model is a subset of the RGB model & is primarily used in color print
production. CMY is an acronym for cyan, magenta, & yellow along with black (noted as K). The CMY color
space is subtractive, meaning that cyan, magenta, yellow, & black pigments or inks are applied to a white
surface to subtract some color from the white surface to create the final color. For example, cyan is white minus
red, magenta is white minus green, & yellow is white minus blue. Subtracting all colors by combining the
CMY at full saturation should, in theory, render black. The CMY cube is shown in Figure ‘b’, in which
CMY values are at three corners; red, green, & blue are at three other corners; white is at the origin; &
black is at the corner farthest from the origin.
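To make the “white minus” relationship concrete, here is a minimal sketch of RGB-to-CMY conversion, assuming color components normalized to [0, 1] (the type & function names are illustrative only):

/* Convert an RGB color to CMY, assuming all components lie in [0, 1]. */
typedef struct { double r, g, b; } RGB;
typedef struct { double c, m, y; } CMY;

CMY rgb_to_cmy(RGB in) {
    CMY out;
    out.c = 1.0 - in.r;   /* cyan    = white minus red   */
    out.m = 1.0 - in.g;   /* magenta = white minus green */
    out.y = 1.0 - in.b;   /* yellow  = white minus blue  */
    return out;
}

The inverse mapping, RGB = 1 – CMY, follows the same pattern.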
5. Define Direct Coding. Write the advantages & disadvantages of Direct Coding.
 In computer graphics, direct coding is a scheme that provides a certain amount of storage space for each
pixel so that the pixel is coded with a color.
Advantage of Direct Coding:
 Direct Coding allows a certain amount of storage space for each pixel to code its color.
 It allows each primary color to have 256 different intensity levels.
 A pixel can take on a color from 16.7 million possible choices in direct coding.
 It allows the representation of black & white & gray-scale images.
 It provides simplicity & supports a variety of applications.
Disadvantage of Direct Coding: Even if an image contained a different color in every pixel, a typical image has far
fewer pixels than 16.7 million, so the 24-bit representation’s ability to have 16.7 million different colors appear
simultaneously in a single image is often overkill & wastes storage space.
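As a rough illustration of the 24-bit scheme described above (a sketch, not from the original notes), one byte is stored per primary, so an RGB triple can be packed into a single word & the frame-buffer size grows linearly with the pixel count:

#include <stdint.h>
#include <stdio.h>

/* Pack one 8-bit value per primary (direct coding, 3 bytes per pixel). */
static uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

int main(void) {
    /* 256 levels per primary -> 256^3 = 16.7 million possible colors. */
    uint32_t white = pack_rgb(255, 255, 255);
    /* Storage needed for a 1024 x 768 image at 3 bytes per pixel. */
    long bytes = 1024L * 768L * 3L;          /* = 2,359,296 bytes (~2.25 MB) */
    printf("white = 0x%06X, frame buffer = %ld bytes\n", (unsigned)white, bytes);
    return 0;
}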

Chap 3
6. Describe the Bresenham's line algorithm.
 Bresenham's line algorithm: Bresenham's line algorithm produces mathematically accurate results using
only integer addition, subtraction, & multiplication by 2. The method works as follows:
Assume that we want to scan-convert a line where 0 < m < 1. We start with p1'(x1', y1'), then select
subsequent pixels as we work our way to the right, one pixel position at a time in the horizontal
direction towards p2'(x2', y2'). Once a pixel is chosen at any step, the next pixel is either the one to its right or
the one to its right & up, due to the limit on m. Using the notation of the given figure, the coordinates of the
last chosen pixel upon entering step i are (x_i, y_i). Now the next pixel is chosen between the bottom pixel S & the
top pixel T.
If S is chosen then
x_{i+1} = x_i + 1 ….(1)   y_{i+1} = y_i ….(2)
If T is chosen we have
x_{i+1} = x_i + 1 & y_{i+1} = y_i + 1 ….(3)
The actual y coordinate of the line at x = x_i + 1 is y = m(x_i + 1) + b ….(4)
The distance from S to the actual line in the y direction is s = y – y_i ….(5)
The distance from T to the actual line in the y direction is t = (y_i + 1) – y ….(6)
Now consider the difference between these two distance values: s – t.
When (s – t) < 0, then s < t & the closest pixel is S. Conversely, when (s – t) ≥ 0, then s ≥ t & the closest pixel is T.
This difference is
s – t = (y – y_i) – [(y_i + 1) – y] = 2y – 2y_i – 1 = 2m(x_i + 1) + 2b – 2y_i – 1 ….(7)
Substituting m by ∆y/∆x in equation (7) & multiplying both sides by ∆x:
∆x(s – t) = 2∆y(x_i + 1) + 2b∆x – 2y_i∆x – ∆x
          = 2∆y·x_i + 2∆y + 2b∆x – 2y_i∆x – ∆x
          = 2∆y·x_i – 2∆x·y_i + 2∆y + ∆x(2b – 1) ….(8)
Now we introduce the decision variable d_i = ∆x(s – t), which has the same sign as (s – t) since ∆x is positive in our
case, so we have
d_i = 2∆y·x_i – 2∆x·y_i + C  [Here, C = 2∆y + ∆x(2b – 1)] ….(9)
Similarly, we can write the decision variable d_{i+1} for the next step as
d_{i+1} = 2∆y·x_{i+1} – 2∆x·y_{i+1} + C ….(10)
Now (10) – (9) gives
d_{i+1} – d_i = 2∆y(x_{i+1} – x_i) – 2∆x(y_{i+1} – y_i) ….(11)
Since x_{i+1} = x_i + 1, from (11)
d_{i+1} = d_i + 2∆y(x_{i+1} – x_i) – 2∆x(y_{i+1} – y_i) = d_i + 2∆y – 2∆x(y_{i+1} – y_i) ….(12)
If the chosen pixel is the top pixel T (i.e. d_i ≥ 0) then y_{i+1} = y_i + 1, so from (12)
d_{i+1} = d_i + 2∆y – 2∆x(y_i + 1 – y_i) = d_i + 2∆y – 2∆x = d_i + 2(∆y – ∆x) ….(13)
On the other hand, if the chosen pixel is the bottom pixel S (i.e. d_i < 0) then y_{i+1} = y_i, so from (12)
d_{i+1} = d_i + 2∆y – 2∆x(y_i – y_i) = d_i + 2∆y ….(14)
Hence from equations (13) & (14) we have
d_{i+1} = d_i + 2(∆y – ∆x)   if d_i ≥ 0
d_{i+1} = d_i + 2∆y          if d_i < 0
Finally we calculate d_1, the base-case value for this recursive formula, from the original definition of the
decision variable:
d_1 = ∆x[2m(x_1 + 1) + 2b – 2y_1 – 1] = ∆x[2(m·x_1 + b – y_1) + 2m – 1] ….(15)
Since m·x_1 + b – y_1 = 0, we have from (15)
d_1 = ∆x[0 + 2m – 1] = 2∆x·∆y/∆x – ∆x = 2∆y – ∆x

In summary, Bresenham's algorithm for scan-converting a line from p1'(x1',y1') to p2'(x2',y2') with x1'< x2' &
0<m<1 can be stated as follows:
int x = x1', y = y1';
int dx = x2' – x1', dy = y2' – y1';
int dT = 2*(dy – dx), dS = 2*dy;
int d = 2*dy – dx;
setPixel(x, y);
while (x < x2') {
    x++;
    if (d < 0)
        d = d + dS;          /* stay on the same row (pixel S) */
    else {
        y++;                 /* move up one row (pixel T) */
        d = d + dT;
    }
    setPixel(x, y);
}

Fig: Scan-converting a line
Fig: Choosing a pixel in Bresenham's circle algorithm

7. Describe the Bresenham's circle algorithm.


 Bresenham's circle algorithm: Scan-converting a circle using Bresenham's algorithm works as follows:
If the eight-way symmetry of a circle is used to generate a circle, points will only have to be generated
through a 45° angle. And, if points are generated from 90° to 45°, moves will be made only in the +x & –y
directions.
The best approximation of the true circle will be described by those pixels in the raster that fall the least
distance from the true circle. In the following figure, if points are generated from 90° to 45°, each new point
closest to the true circle can be found by taking either of two actions:
(1) Move in the x direction one unit, or
(2) Move in the x direction one unit & move in the negative y direction one unit.
Therefore, a method of selecting between these two choices is all that is necessary to find the points closest
to the true circle.
Assume that (x_i, y_i) are the coordinates of the last scan-converted pixel upon entering step i. Let the distance
from the origin to pixel T squared minus the distance to the true circle squared be D(T), & let the distance
from the origin to pixel S squared minus the distance to the true circle squared be D(S). As the coordinates of
T are (x_i + 1, y_i) & those of S are (x_i + 1, y_i – 1), the following expressions can be developed:
D(T) = (x_i + 1)² + y_i² – r² ….(1)   D(S) = (x_i + 1)² + (y_i – 1)² – r² ….(2)
This function D provides a relative measurement of the distance from the center of a pixel to the true circle.
Since D(T) will always be positive & D(S) will always be negative, a decision variable d_i may be defined as
follows: d_i = D(T) + D(S) ….(3)
Therefore
d_i = (x_i + 1)² + y_i² – r² + (x_i + 1)² + (y_i – 1)² – r² = 2(x_i + 1)² + y_i² + (y_i – 1)² – 2r² ….(4)
If d_i < 0, then |D(T)| < |D(S)| & pixel T is chosen.
If d_i ≥ 0, then |D(T)| ≥ |D(S)| & pixel S is chosen.
Now we write the decision variable for the next step; from (4)
d_{i+1} = 2(x_{i+1} + 1)² + y_{i+1}² + (y_{i+1} – 1)² – 2r² ….(5)
Hence
d_{i+1} – d_i = 2(x_{i+1} + 1)² + y_{i+1}² + (y_{i+1} – 1)² – 2(x_i + 1)² – y_i² – (y_i – 1)² ….(6)
Since x_{i+1} = x_i + 1, we have
d_{i+1} = d_i + 2(x_i + 2)² + y_{i+1}² + (y_{i+1} – 1)² – 2(x_i + 1)² – y_i² – (y_i – 1)²
        = d_i + 2(x_i² + 4x_i + 4) + y_{i+1}² + (y_{i+1}² – 2y_{i+1} + 1) – 2(x_i² + 2x_i + 1) – y_i² – (y_i² – 2y_i + 1)
        = d_i + 4x_i + 2(y_{i+1}² – y_i²) – 2(y_{i+1} – y_i) + 6 ….(7)
If T is the chosen pixel (i.e. d_i < 0) then y_{i+1} = y_i, & so from (7)
d_{i+1} = d_i + 4x_i + 2(y_i² – y_i²) – 2(y_i – y_i) + 6
        = d_i + 4x_i + 6 ….(8)
On the other hand, if S is the chosen pixel (i.e. d_i ≥ 0) then y_{i+1} = y_i – 1, & so from (7)
d_{i+1} = d_i + 4x_i + 2[(y_i – 1)² – y_i²] – 2(y_i – 1 – y_i) + 6
        = d_i + 4x_i + 2[y_i² – 2y_i + 1 – y_i²] + 8
        = d_i + 4(x_i – y_i) + 10 ….(9)
Hence from (8) & (9) we get
d_{i+1} = d_i + 4x_i + 6            if d_i < 0
d_{i+1} = d_i + 4(x_i – y_i) + 10   if d_i ≥ 0

Finally, we set (0, r) to be the starting pixel coordinates & compute the base-case value d_1 for this recursive
formula from the original definition of d_i; from (4)
d_1 = 2(0 + 1)² + r² + (r – 1)² – 2r² = 3 – 2r

We can now summarize the algorithm for generating all the pixel coordinates in the 90° to 45° octant that are
needed when scan-converting a circle of radius r,
int x = 0, y = r, d = 3 - 2*r;
while (x <= y) {
    setPixel(x, y);
    if (d < 0)
        d = d + 4*x + 6;
    else {
        d = d + 4*(x - y) + 10;
        y--;
    }
    x++;
}
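The loop above covers only the 90°-to-45° octant; the remaining seven octants are normally obtained from the eight-way symmetry mentioned earlier. A small sketch of such a helper (the name setPixel8 is illustrative):

extern void setPixel(int x, int y);   /* assumed, as in the listings above */

/* Plot the eight symmetric points of (x, y) for a circle centered at (xc, yc). */
void setPixel8(int xc, int yc, int x, int y) {
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}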
8. Describe the Midpoint circle algorithm.
 Midpoint circle algorithm: The midpoint circle algorithm is based on the following function for testing the spatial relationship between
an arbitrary point (x, y) & a circle of radius r centered at the origin:

f(x, y) = x² + y² – r²   < 0  if (x, y) is inside the circle
                         = 0  if (x, y) is on the circle
                         > 0  if (x, y) is outside the circle

Now consider the coordinates of the point halfway between pixel T & pixel S in the figure: (x_i + 1, y_i – 1/2).
This is called the midpoint & we use it to define a decision parameter:
p_i = f(x_i + 1, y_i – 1/2) = (x_i + 1)² + (y_i – 1/2)² – r²
If p_i is negative, the midpoint is inside the circle & we choose pixel T. On the other hand, if p_i is positive (or
equal to zero), the midpoint is outside the circle (or on the circle) & we choose pixel S. Similarly, the
decision parameter for the next step is
p_{i+1} = (x_{i+1} + 1)² + (y_{i+1} – 1/2)² – r²
Since x_{i+1} = x_i + 1, we have
p_{i+1} – p_i = (x_{i+1} + 1)² – (x_i + 1)² + (y_{i+1} – 1/2)² – (y_i – 1/2)²
Hence
p_{i+1} = p_i + 2(x_i + 1) + 1 + (y_{i+1}² – y_i²) – (y_{i+1} – y_i)
If pixel T is chosen (meaning p_i < 0), we have y_{i+1} = y_i. On the other hand, if pixel S is chosen (meaning p_i ≥ 0),
we have y_{i+1} = y_i – 1. Thus,
p_{i+1} = p_i + 2(x_i + 1) + 1                  if p_i < 0
p_{i+1} = p_i + 2(x_i + 1) + 1 – 2(y_i – 1)     if p_i ≥ 0
We can continue to simplify this in terms of (x_i, y_i) & get
p_{i+1} = p_i + 2x_i + 3            if p_i < 0
p_{i+1} = p_i + 2(x_i – y_i) + 5    if p_i ≥ 0
Or we can write it in terms of (x_{i+1}, y_{i+1}) & have
p_{i+1} = p_i + 2x_{i+1} + 1                  if p_i < 0
p_{i+1} = p_i + 2(x_{i+1} – y_{i+1}) + 1      if p_i ≥ 0
Finally, we compute the initial value for the decision parameter using the original definition of p_i & the starting point (0, r):
p_1 = (0 + 1)² + (r – 1/2)² – r² = 5/4 – r
One can see that this is not really integer computation. However, when r is an integer we can simply set p_1 = 1
– r. The error of being 1/4 less than the precise value does not prevent p_1 from getting the appropriate sign. It
does not affect the rest of the scan-conversion process either, because the decision variable is only updated
with integer increments in subsequent steps.
The following is a description of this midpoint circle algorithm that generates the pixel coordinates in the
90° to 45° octant:
int x = 0, y = r, p = 1 - r;
while (x <= y) {
    setPixel(x, y);
    if (p < 0)
        p = p + 2*x + 3;
    else {
        p = p + 2*(x - y) + 5;
        y--;
    }
    x++;
}
9. Describe Region Filling, Flood-fill Algorithm, Boundary-fill Algorithm. Differentiate between 4-
connected & 8-connected pixels.
 Region Filling: Region filling is the process of “coloring in” a definite image area or region. Regions
may be defined at the pixel or geometrical level. At the pixel level, we describe a region either in terms of
the bounding pixels that outline it or as the totality of pixels that comprise it. In the first case the region is
called boundary-defined & the collections of algorithms used for filling such a region are collectively called
boundary fill algorithms. The other type of region is called an interior-defined region & the algorithms are
called flood-fill algorithms. At the geometric level a region is defined or enclosed by such contouring
elements as connected lines & curves. For example, a polygonal region, or a filled polygon, is defined by a
closed polyline, which is a polyline that has the end of the last line connected to the beginning of the first
line.

Flood-fill Algorithm: This algorithm also begins with a starting pixel (seed) inside the region. It checks to see if the
pixel has the region's original color. If the answer is yes, it fills the pixel with a new color & uses each of the
pixel's neighbors as a new seed in a recursive call. If the answer is no, it returns to the caller. This method shares
great similarities in its operating principle with the boundary-fill algorithm. It is particularly useful when the region to
be filled has no uniformly colored boundary; on the other hand, a region that has a well-defined boundary but is
itself multiply colored would be better handled by the boundary-fill method.

A Boundary-fill Algorithm: This is a recursive algorithm that begins with a starting pixel called a seed,
inside the region. The algorithm checks to see if this pixel is a boundary pixel or has already been filled. If
the answer is no, it fills the pixel & makes a recursive call to itself using each & every neighboring pixel as a
new seed. If the answer is yes, the algorithm simply returns to its caller. This algorithm works elegantly on
an arbitrarily shaped region by chasing & filling all non-boundary pixels that are connected to the seed,
either directly or indirectly through a chain of neighboring relations.
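A minimal recursive sketch of a 4-connected version of the boundary-fill procedure described above (getPixel is an assumed helper, & setPixel here takes a color argument, unlike the two-argument form in the earlier listings):

extern int  getPixel(int x, int y);                /* assumed helper: read pixel color  */
extern void setPixel(int x, int y, int color);     /* assumed helper: write pixel color */

/* Fill a boundary-defined region starting from seed (x, y).
   boundaryColor outlines the region; fillColor is the new interior color. */
void boundaryFill4(int x, int y, int fillColor, int boundaryColor) {
    int c = getPixel(x, y);                        /* current color of this pixel */
    if (c != boundaryColor && c != fillColor) {
        setPixel(x, y, fillColor);                 /* fill it, then recurse on the 4 neighbors */
        boundaryFill4(x + 1, y, fillColor, boundaryColor);
        boundaryFill4(x - 1, y, fillColor, boundaryColor);
        boundaryFill4(x, y + 1, fillColor, boundaryColor);
        boundaryFill4(x, y - 1, fillColor, boundaryColor);
    }
}

The flood-fill variant described above differs only in the test: it recurses while the pixel still has the region's original color, replacing it with the new color.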

4-Connected vs. 8-Connected: There are two ways in which pixels are considered connected to each other
to form a "continuous" boundary. One method is called 4-connected, where a pixel may have up to four
neighbors; the other is called 8-connected, where a pixel may have up to eight neighbors. Using the 4-
connected approach, the pixels do not define a region since several pixels such as A & B are not connected.
However, using the 8-connected definition we identify a triangular region. Using the 8-connected approach,
we do not have an enclosed region since "interior" pixel C is connected to "exterior" pixel D. On the other
hand, if we use the 4-connected definition we have a triangular region since no interior pixel is connected to
the outside. We use the 8-connected definition for the boundary pixels & the 4-connected definition for the
interior pixels. In fact, using the same definition for both boundary & interior pixels would simply result in
contradiction. For example, if we use the 8-connected approach we would have pixel A connected to pixel B
(continuous boundary) & at the same time pixel C connected to pixel D (discontinuous boundary). On the
other hand, if we use the 4-connected definition we would have pixel A disconnected from pixel B
(discontinuous boundary) & at the same time pixel C disconnected from pixel D (continuous boundary).
10. What is aliasing effect? What are the different types of aliasing effect? Describe briefly. What is
anti-aliasing effect? Write the techniques of anti aliasing.
 Aliasing: Scan conversion is essentially a systematic approach to mapping objects that are defined in
continuous space to their discrete approximation. The various forms of distortion that result from this
operation are collectively referred to as the aliasing effects of scan conversion.
Different aliasing effect:
Staircase: A common example of aliasing effects is the staircase or jagged appearance we see when scan-
converting a primitive such as a line or a circle. We also see the stair steps or “jaggies” along the border of a
filled region.
Unequal Brightness: Another artifact that is less noticeable is the unequal brightness of lines of different
orientation. A slanted line appears dimmer than a horizontal or vertical line, although all are presented at the
same intensity level. The reason for this problem can be seen in the figure, where the pixels on the
horizontal line are placed one unit apart, whereas those on the diagonal line are approximately 1.414 units apart. This
difference in density produces the perceived difference in brightness.

The picket fence problem: The picket fence problem occurs when an object is not aligned with, or does not fit
into, the pixel grid properly. Figure (a) shows a picket fence where the distance between two adjacent pickets
is not a multiple of the unit distance between pixels. Scan-converting it normally into the image space will result in
uneven distances between pickets, since the endpoints of the pickets have to be snapped to integer pixel coordinates [Fig. (b)].
This is sometimes called global aliasing, as the overall length of the picket fence is approximately correct.
On the other hand, an attempt to maintain equal spacing will greatly distort the overall length of the fence
[Fig. (c)]. This is sometimes called local aliasing, as the distances between pickets are kept close to their true
distances.

Anti-Aliasing: There are techniques that can greatly reduce aliasing artifacts & improve the appearance of
images without increasing their resolution. These techniques are collectively referred to as anti-aliasing
techniques.
Techniques of anti-aliasing: Pre-filtering & post-filtering are 2 types of general-purpose anti-aliasing
techniques. The concept of filtering originates from the field of signal processing, where true intensity values
are continuous signals that consist of elements of various frequencies. A pre-filtering technique works on the
true signal in the continuous space to derive proper values for individual pixels (filtering before sampling),
whereas a post-filtering technique takes discrete samples of the continuous signal and uses the samples to
compute pixel values (sampling before filtering).
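As one concrete post-filtering example (a sketch, not from the original notes), supersampling evaluates several sub-samples per pixel & averages them to obtain the displayed intensity; shade() is an assumed helper returning the true image intensity at a continuous point:

extern double shade(double x, double y);   /* assumed helper: true intensity at (x, y) */

/* Supersample a pixel with an n x n grid of sub-samples & average them. */
double supersamplePixel(int px, int py, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sx = px + (i + 0.5) / n;   /* sub-sample position inside the pixel */
            double sy = py + (j + 0.5) / n;
            sum += shade(sx, sy);
        }
    return sum / (n * n);                     /* averaged (filtered) pixel intensity */
}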

Chap 5
11. Define WCS, VCS, NDCS, Viewport, and Window to viewport mapping, Work station
transformation, Clipping, Clipping Window, Convex polygon & Concave Polygon.
 WCS: Objects are placed into the scene by modeling transformation to a master coordinate system,
commonly referred to as the world coordinate system.
VCS: Sometimes an additional coordinate system called the viewing coordinate system is introduced to
simulate the effect of moving and/or tilting the camera.
NDCS: Normalized device coordinate system is a tool in which a unit (1 x 1) square whose lower left corner
is at the origin of the coordinate system defines the display area of a virtual display device.
Viewport: A rectangular viewport with its edges parallel to the axes of the NDCS is used to specify a sub-
region of the display area that embodies the image.
Window to viewport mapping: The process that converts object coordinates in WCS to normalized device
coordinates is called window to viewport mapping or normalized transformation.
Work station transformation: The process that maps normalized device coordinates to discrete
device/image coordinate is called work station transformation.
Clipping: Clipping is an operation that eliminates object or portions of objects that are not visible through
the window to ensure the proper construction of the corresponding image.
Clipping Window: Clipping may occur in the world coordinate or viewing coordinate space, where the
window used to clip the objects known as the clipping window.
Convex & Concave Polygon: A polygon is called convex if the line segment joining any two interior points of the
polygon lies completely inside the polygon. A non-convex polygon is said to be concave. A polygon with
vertices P1, ……, PN is said to be positively oriented if a tour of the vertices in the given order produces a
counterclockwise loop, & negatively oriented if such a tour produces a clockwise loop.
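A small sketch of the window-to-viewport mapping defined above (variable names are illustrative): a point (xw, yw) in the world-coordinate window is scaled & translated onto the viewport, axis by axis:

/* Map a window point (xw, yw) to viewport coordinates (*xv, *yv).
   [wxmin, wxmax] x [wymin, wymax] is the window; [vxmin, vxmax] x [vymin, vymax] is the viewport. */
void windowToViewport(double xw, double yw,
                      double wxmin, double wxmax, double wymin, double wymax,
                      double vxmin, double vxmax, double vymin, double vymax,
                      double *xv, double *yv) {
    double sx = (vxmax - vxmin) / (wxmax - wxmin);   /* scale factor in x */
    double sy = (vymax - vymin) / (wymax - wymin);   /* scale factor in y */
    *xv = vxmin + (xw - wxmin) * sx;
    *yv = vymin + (yw - wymin) * sy;
}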

12. Discuss about the clipping categories available.


All lines fall into one of the following 3 clipping categories:
1. Visible – both endpoints of the line lie within the window.
2. Not visible – the line definitely lies outside the window. This will occur if the line from (x1,y1) to
(x2,y2) satisfies any one of the following 4 inequalities:
x1, x2 > xmax y1, y2 > ymax
x1, x2 < xmin y1, y2 < ymin
3. Clipping Candidate – the line is in neither category 1 nor 2.
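Categories 1 & 2 can be tested directly from the window limits, as in the sketch below; lines that fall into category 3 would then be passed on to an actual clipping routine:

/* Classify a line from (x1,y1) to (x2,y2) against the window
   [xmin, xmax] x [ymin, ymax].  Returns 1 = visible, 0 = not visible,
   -1 = clipping candidate. */
int classifyLine(double x1, double y1, double x2, double y2,
                 double xmin, double xmax, double ymin, double ymax) {
    int p1_in = (x1 >= xmin && x1 <= xmax && y1 >= ymin && y1 <= ymax);
    int p2_in = (x2 >= xmin && x2 <= xmax && y2 >= ymin && y2 <= ymax);
    if (p1_in && p2_in) return 1;                       /* both endpoints inside  */
    if ((x1 > xmax && x2 > xmax) || (x1 < xmin && x2 < xmin) ||
        (y1 > ymax && y2 > ymax) || (y1 < ymin && y2 < ymin))
        return 0;                                       /* entirely on one side   */
    return -1;                                          /* needs actual clipping  */
}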

13. Write the Weiler-Atherton algorithm.


 Let the clipping window be initially called the clip polygon, and the polygon to be clipped the subject
polygon. We start with an arbitrary vertex of the subject polygon and trace around its border in the clockwise
direction until an intersection with the clip polygon is encountered:
 If the edge enters the clip polygon, record the intersection point and continue to trace the subject
polygon.
 If the edge leaves the clip polygon, record the intersection point and make a right turn to follow the clip
polygon in the same manner.
Whenever our path of traversal forms a sub-polygon, we output the sub-polygon as part of the overall result. We
then continue to trace the rest of the original subject polygon from a recorded intersection point that
marks the beginning of a not-yet-traced edge or portion of an edge. The algorithm terminates when the entire border of the
original subject polygon has been traced exactly once.

14. Define polygon meshes. How to represent a polygon meshes?


 Polygon Meshes: A polygon mesh is a collection of edges, vertices and polygons connected such that
each edge is shared by at most two polygons. An edge connects two vertices and a polygon is a closed
sequence of edges.
There are 3 polygon mesh representations. They are:
1. Explicit Representation: In explicit representation each polygon is represented by a list of vertex
coordinates. P = ((x1,y1,z1), (x2,y2,z2), …… , (xn,yn,zn))
2. Pointer to a vertex list: Polygons defined with pointers to a vertex list have each vertex in the polygon
mesh stored just once. V = ((x1,y1,z1), (x2,y2,z2), …… , (xn,yn,zn)). A polygon is defined by a list of
indices into the vertices list.
3. Pointer to an Edge list: When polygons are defined by pointers to an edge list, we again have the vertex list V, but represent
a polygon as a list of pointers to an edge list, in which each edge occurs just once. In turn, each edge
in the edge list points to the two vertices in the vertex list defining the edge, and also to the one or
two polygons to which the edge belongs.
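A minimal sketch of the "pointer to a vertex list" representation in C (type and field names are illustrative):

#include <stddef.h>

typedef struct { double x, y, z; } Vertex;

/* Each polygon stores indices into a shared vertex list, so every vertex
   is stored only once in the mesh. */
typedef struct {
    int  nVertices;
    int *vertexIndex;       /* indices into Mesh.vertices */
} Polygon;

typedef struct {
    size_t   nVertices;
    Vertex  *vertices;      /* shared vertex list V */
    size_t   nPolygons;
    Polygon *polygons;
} Mesh;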
Image Processing
15. Write the processes of DCT & DWT.
 DCT (Discrete Cosine Transform) Process:
1. The image is broken into 8X8 blocks of pixels
2. Working from left to right, top to bottom, the DCT is applied to each block.
3. Each block is compressed through quantization
4. The array of compressed blocks that constitute the image is stored in a drastically reduced amount of
space.
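A direct, unoptimized sketch of the forward 8×8 DCT applied to one block, following the standard JPEG-style definition F(u,v) = ¼·C(u)·C(v)·ΣΣ f(x,y)·cos[(2x+1)uπ/16]·cos[(2y+1)vπ/16], with C(0) = 1/√2 & C(k) = 1 otherwise:

#include <math.h>

#define N 8

/* Forward 2-D DCT of one 8x8 block of pixel values (naive O(N^4) form). */
void dct8x8(const double in[N][N], double out[N][N]) {
    const double PI = 3.14159265358979323846;
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    sum += in[x][y]
                         * cos((2 * x + 1) * u * PI / 16.0)
                         * cos((2 * y + 1) * v * PI / 16.0);
            out[u][v] = 0.25 * cu * cv * sum;
        }
}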
DWT (Discrete Wavelet Transform) Process:
1. Find the average of each pair of samples
2. Find the difference between the average & sample
3. Fill the first half with averages
4. Fill the second half with differences
5. Repeat the process on the first half
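A one-level sketch of the averaging/differencing steps listed above for a 1-D signal (unnormalized Haar wavelet; further levels repeat the same step on the first half):

/* One level of the Haar DWT on a signal of even length n:
   first half of out = pairwise averages, second half = pairwise differences. */
void haarStep(const double *in, double *out, int n) {
    int half = n / 2;
    for (int i = 0; i < half; i++) {
        double a = in[2 * i], b = in[2 * i + 1];
        out[i]        = (a + b) / 2.0;   /* average    */
        out[half + i] = (a - b) / 2.0;   /* difference */
    }
}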

16. Write the advantage & disadvantages of DWT & Hadamard Transform.
 Advantages of DWT:
1. Higher flexibility: Wavelet function can be freely chosen. Such as Haar, Morlet & Daubechies.
2. No need to divide the input image into non-overlapping 2-D blocks; it offers higher compression
ratios & avoids blocking artifacts.
3. Transformation of the whole image
4. Better identification of which data is relevant to human perception.
Disadvantages of DWT
1. The cost of computing DWT as compared to DCT is higher.
2. The use of larger DWT basis functions produces blurring & ringing noise near edge regions in
images or video frames
3. Longer compression time
4. Lower quality than JPEG at low compression rates
Advantages of Hadamard Transform:
1. Good energy compaction property
2. Low loss in image information (higher image fidelity)
3. Greater reliability of watermark detection.
4. Higher data hiding capacity than others.
5. The Hadamard transform is faster than sinusoidal transforms. The fast Hadamard transform (FHT) has
been used for high-speed applications.
6. The Hadamard transform only requires additions, no multiplications.
7. The Hadamard transform is used in image compression when the DCT is too costly & cannot be done in
real time.
Disadvantages of Hadamard Transform:
1. Depends on N×N (Square size) Hadamard matrix, N= 2n, n=1,2,3,…, No other size is allowed.
2. It performs well on block wise signal such as 8×8 or 4 × 4.
18. Write the characteristics and energy compaction properties of Hadamard transform. Why we use
8×8 pixel groups in DCT?
 Characteristics of Hadamard Matrix
 It is an orthogonal square matrix.
 Only has +1 and -1 element values.
 It has unique sequences.
Energy Compaction Property: The Hadamard transform packs most of the energy into the upper left corner of the
transformed matrix (the DC coefficient), & the AC coefficients are arranged in zigzag order from low-frequency components to
high-frequency components.
With 8×8 blocks, a lot of the information can be dropped during quantization without creating unacceptable blocking
artifacts. That's why 8×8 blocks are used in DCT.
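For reference, the N×N Hadamard matrices mentioned above can be built by the Sylvester construction H_{2N} = [[H_N, H_N], [H_N, –H_N]] with H_1 = [1]; a small sketch (valid for N a power of two, using a C99 variable-length array):

/* Build an N x N Hadamard matrix (N must be a power of two) by the
   Sylvester construction: H_2N = [[H_N, H_N], [H_N, -H_N]], H_1 = [1]. */
void hadamard(int n, int H[n][n]) {
    H[0][0] = 1;
    for (int size = 1; size < n; size *= 2)
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++) {
                H[i][j + size]        =  H[i][j];
                H[i + size][j]        =  H[i][j];
                H[i + size][j + size] = -H[i][j];
            }
}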

19. What do you mean by LL, LH, HL, HH? Why the quality 1 or 100 is not used for quantization in
DCT?
 LL: Represents the approximated version of the original image at half the resolution.
LH: The LH block contains vertical edges.
HL: The HL block contains horizontal edges.
HH: The HH block contains edges of the original image in the diagonal direction.

The JPEG standard quantization matrix of the DCT coefficients with a quality level of 50 provides high
compression and excellent decompressed image quality. Let n be the quality level. When n is greater than 50,
we get less compression and higher quality, and when n is less than 50 we get more compression and lower
quality. At the extremes, quality 100 gives almost no compression, while quality 1 discards so much information
that the reconstructed image is unacceptable; that is why quality 1 or 100 is not used in practice.
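One commonly used way to scale the quality-50 table to another quality level n is sketched below (following the IJG-style scaling; treat the exact constants as an assumption here, not part of the original notes):

/* Scale the standard quality-50 quantization table to a chosen quality level
   (1..100), following the commonly used IJG-style scaling. */
void scaleQuantTable(const int base[64], int out[64], int quality) {
    int scale = (quality < 50) ? 5000 / quality : 200 - 2 * quality;
    for (int i = 0; i < 64; i++) {
        int q = (base[i] * scale + 50) / 100;
        if (q < 1)   q = 1;     /* quality 100 -> nearly all 1s (little compression)   */
        if (q > 255) q = 255;   /* quality 1   -> huge step sizes (severe quality loss) */
        out[i] = q;
    }
}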
20. Define Digital Image Processing. Write down the fundamental steps of digital image processing.
 Digital Image Processing: An image may be defined as a two-dimensional function, f (x, y), where x &
y are spatial coordinates & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray
level of the image at that point. When x, y & the intensity values of f are all finite, discrete quantities, we call
the image a digital image. The field of digital image processing refers to processing digital image by means
of a digital computer. A digital image is composed of a finite number of elements, each of which has a
particular location & value. These elements are called picture elements, image elements, pels & pixels.
The fundamental steps of digital image processing are:
1. Image acquisition: Acquisition could be as simple as being given an image that is already in digital
form. The image acquisition stage involves preprocessing such as scaling.
2. Image Enhancement: The idea behind enhancement techniques is to bring out detail that is obscured
or simply to highlight certain features of interest in an image.
3. Image Restoration: Image restoration is an area that also deals with improving the appearance of an
image. Image enhancement is subjective whereas image restoration is objective. In this sense
restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
4. Color image Processing: Color image processing is an area that has been gaining in importance because
of the significant increase in the use of digital images over the internet.
5. Wavelets: Wavelets are the foundation for representing images in various degrees of resolution, used in
areas such as image data compression, pyramidal representation, etc.
6. Compression: Compression deals with technique for reducing the storage required to save an image or
the bandwidth required to transmit it.
7. Morphological-processing: Morphological processing deals with the tools for extracting image
components that are useful in the representation & description of shape.
8. Segmentation: Segmentation procedures partition an image into its constituent parts or objects.
9. Representation & description: Representation & description almost always follow the output of a
segmentation stage, which usually is raw pixel data, constituting either the boundary of a region or all the
points in the region itself.
10. Recognition Object: Recognition is the process that assigns a label to an object based on its
descriptors.
11. Knowledge base: Knowledge about a problem domain is coded into an image processing system in the
form of a knowledge database. This knowledge may be as simple as detailing regions of an image where
the information of interest is known to be located.

21. Write down the components of image processing system and describe it.
 Components of image processing:
** With reference to Sensing, two elements are required to acquire digital images. The 1st is a physical
device that is sensitive to the energy radiated by the object we wish to image. The 2nd called a digitizer is a
device for converting the output of the physical sensing device into digital form.
** Specialized image processing hardware usually consists of the digitizer plus hardware that performs
other primitive operations, such as an Arithmetic Logic unit (ALU) performs arithmetic & logic operations
in parallel on entire images.
** The computer in an image processing system is a general-purpose computer & can range from a PC to a
supercomputer.
** Software for image processing consists of specialized modules that perform specific tasks.
** Mass storage capability is a must in image processing applications; an image of size 1024×1024 pixels, in which
the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.
** Image displays in use today are mainly color TV monitors. Monitors are driven by the output of image &
graphics display cards that are an internal part of the computer system.
** Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet
units & digital units such as optical & CD-ROM disks.
** Networking is almost a default in any computer system in use today. Because of the large amount of data
inherent in image processing applications, the key consideration in image transmission is bandwidth.
Fortunately, this situation is improving quickly as a result of optical fiber & other broadband technologies.

22. Short Notes


 Neighbors of a pixel: A pixel p at coordinates (x, y) has four horizontal & vertical neighbors whose
coordinates are given by
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
This set of pixels, called the 4-neighbors of p, is denoted by N4(p). The 4 diagonal neighbors of p have
coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted
by N8(p).
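For illustration (not part of the original notes), the neighbor sets can be written as coordinate offsets:

/* Offsets (dx, dy) defining the 4-neighbors N4(p) & the diagonal neighbors ND(p);
   their union gives the 8-neighbors N8(p). */
static const int N4[4][2] = { {1,0}, {-1,0}, {0,1}, {0,-1} };
static const int ND[4][2] = { {1,1}, {1,-1}, {-1,1}, {-1,-1} };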
Adjacency of a pixel: Let V be the set of intensity values used to define adjacency. In a binary image V = {1}
if we are referring to adjacency of pixels with value 1. We consider three types of adjacency:
a) 4-adjacency: Two pixels p & q with values from V are 4-adjacent if q is in the set N4(p).
b) 8-adjacency: Two pixels p & q with values from V are 8-adjacent if q is in the set N8(p).
c) m-adjacency: Two pixels p & q with values from V are m-adjacent if
1) q is in N4(p), or
2) q is in ND(p) & the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Connectivity: Two pixels p & q are said to be connected in S if there exists a path between them consisting
entirely of pixels in S. For any pixel p in S, the set of pixels that are connected to it in S is called a
connected component of S. If it has only one connected component, then the set S is called a connected set.
Region: Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set.
Two regions Ri & Rj are said to be adjacent if their union forms a connected set. Regions that are not adjacent
are said to be disjoint.

Boundary: The boundary of a region R is the set of points that are adjacent to points in the complement of
R. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor.

23. Describe about the image sharpening and image smoothing filters.
Image sharpening using frequency domain filters:
** Ideal high pass filter: A 2-D ideal high pass filter is defined as
H(u, v) = 0  if D(u, v) ≤ D0
H(u, v) = 1  if D(u, v) > D0
Here, D(u, v) is the distance between a point (u, v) in the frequency domain & the center of the frequency rectangle,
& D0 is the cutoff distance.
** Butterworth high pass filter: A 2-D Butterworth high pass filter of order n is defined as
H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))
** Gaussian high pass filter: The transfer function of a Gaussian high pass filter with cutoff frequency locus
at a distance D0 from the origin is given by
H(u, v) = 1 – e^(–D²(u, v) / 2D0²)
Image smoothing using frequency domain filters:
** Ideal low pass filter: A 2-D ideal low pass filter is defined as
H(u, v) = 1  if D(u, v) ≤ D0
H(u, v) = 0  if D(u, v) > D0
Here, D(u, v) is the distance between a point (u, v) in the frequency domain & the center of the frequency rectangle.
** Butterworth low pass filter: The transfer function of a Butterworth low pass filter of order n & with
cutoff frequency at a distance D0 from the origin is
H(u, v) = 1 / (1 + [D(u, v) / D0]^(2n))
** Gaussian low pass filter: The transfer function of a Gaussian low pass filter with cutoff frequency locus at
a distance D0 from the origin is given by
H(u, v) = e^(–D²(u, v) / 2D0²)
where D(u, v) is the distance from the center of the frequency rectangle.
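As a small illustration (a sketch with assumed array dimensions, using a C99 variable-length array), the Butterworth low pass transfer function can be evaluated over an M × N frequency rectangle as follows:

#include <math.h>

/* Fill H[M][N] with Butterworth low pass values of order n & cutoff D0,
   measuring D(u,v) from the center of the M x N frequency rectangle. */
void butterworthLowpass(int M, int N, double H[M][N], double D0, int n) {
    for (int u = 0; u < M; u++)
        for (int v = 0; v < N; v++) {
            double du = u - M / 2.0;
            double dv = v - N / 2.0;
            double D = sqrt(du * du + dv * dv);      /* distance from center */
            H[u][v] = 1.0 / (1.0 + pow(D / D0, 2.0 * n));
        }
}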
