Cs 8092 Computer Graphics Iii Yr Vi Sem 2017 Regulation: Unit I Illumination and Color Models Cs8092
Light Sources
A light source is anything that makes light, whether natural or artificial. Natural light sources include the Sun, stars and candles; artificial light sources include light bulbs, lamp posts and televisions. Without light sources we could not see the world around us; however, not every object we can see is a light source. Many objects simply reflect light from a light source, for example tables, trees and the Moon.
Light from the Sun includes all the colours of the rainbow. When this light hits the Moon it is reflected back to Earth and enters our eyes, allowing us to see the Moon.
Types of light sources
There are countless sources of light, but they can all be placed in one of the two following
categories:
Natural sources
Artificial sources
VEC/IT
CS 8092 COMPUTER GRAPHICS III YR VI SEM 2017 REGULATION
Light can be produced by accelerating charges in a luminescent material. One common way of doing this is by passing current through the material.
Example – Fluorescent tube light, electric bulb.
Gas Discharge Sources:
Passing electricity through certain gases at very low pressure can produce light too.
Example – Neon lamp, Sodium lamp.
An illumination model, also known as a shading model or lighting model, is used to calculate the intensity of light that is reflected at a given point on a surface. The lighting effect depends on three factors:
1. Light Source :
A light source is a light-emitting object. There are three types of light sources:
1. Point Sources – The source emits rays in all directions (a bulb in a room).
2. Parallel Sources – Can be considered as a point source which is far from the surface
(the Sun).
3. Distributed Sources – Rays originate from a finite area (a tube light).
The position, electromagnetic spectrum and shape of the source determine the lighting effect.
2. Surface :
When light falls on a surface, part of it is reflected and part of it is absorbed. The surface structure decides the amount of reflection and absorption of light. The position of the surface and the positions of all the nearby surfaces also determine the lighting effect.
3. Observer :
The observer’s position and sensor spectrum sensitivities also affect the lighting effect.
1. Ambient Illumination :
Assume you are standing on a road, facing a building with a glass exterior. Sun rays falling on that building are reflected back from it and then fall on the object under observation. This is Ambient Illumination. In simple words, Ambient Illumination is illumination where the source of light is indirect.
The reflected intensity Iamb of any point on the surface is
Iamb = Ka Ia
where Ia is the intensity of the ambient light and Ka (0 <= Ka <= 1) is the ambient reflection coefficient of the surface.
2. Diffuse Reflection :
Diffuse reflection occurs on surfaces which are rough or grainy. In this reflection the brightness of a point depends upon the angle θ between the light direction and the surface normal. The reflected intensity is
Idiff = Kd Il cos θ
where Kd is the diffuse reflection coefficient of the surface and Il is the intensity of the light source.
3. Specular Reflection :
When light falls on any shiny or glossy surface, most of it is reflected back; such reflection is known as Specular Reflection.
The Phong model is an empirical model for specular reflection which provides the formula for calculating the reflected intensity Ispec:
Ispec = Ks Il cos^ns φ
where Ks is the specular reflection coefficient, ns is the specular-reflection exponent (larger for shinier surfaces), and φ is the angle between the viewing direction and the direction of ideal specular reflection.
5. In traditional newspaper and magazine production, this process is carried out photographically by
projection of a transparency through a 'halftone screen' onto film.
6. The screen is a glass plate with a grid etched into it.
7. Different screens can be used to control the size and shape of the dots in the half toned image.
8. In computer graphics, halftone reproductions are approximated using rectangular pixel regions, say 2 x 2 pixels or 3 x 3 pixels.
9. These regions are called “Halftone Patterns” or “Pixel Patterns”.
10. 2 x 2 pixel patterns for creating five intensity levels are shown in figure 43.
Dithering technique:
1. Another technique for digital half toning is dithering.
2. It is the technique for approximating halftones without reducing resolution, as pixel grid patterns
do.
3. Dithering can be accomplished by thresholding the image against a dither matrix.
4. To obtain n^2 intensity levels, it is necessary to set up an n x n dither matrix Dn whose elements are distinct positive integers in the range 0 to n^2 - 1.
5. Matrices for 4 intensity levels (2 x 2) and 9 intensity levels (3 x 3) are shown below.
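Step 3 above (thresholding against a dither matrix) can be sketched in C. The 2 x 2 matrix below is the common Bayer ordering with entries 0..3; the notes' own matrices appeared in a figure that is not reproduced here, so treat these particular values as an illustrative assumption.

```c
/* Decide whether the pixel at (x, y) is turned on, by thresholding its
   intensity (in [0, 1)) against a 2x2 dither matrix tiled over the image. */
int ditherPixel(int x, int y, double intensity)
{
    static const int D2[2][2] = { {0, 2},
                                  {3, 1} };  /* distinct entries 0 .. n*n-1 */
    int n = 2;
    /* pixel is on when the scaled intensity exceeds the matrix entry */
    return (int)(intensity * n * n) > D2[y % n][x % n];
}
```

Because the matrix repeats across the image, an area of constant intensity turns on a fraction of its pixels proportional to that intensity, approximating a halftone without reducing resolution.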
Properties of Light
Reflection of light
Refraction of light
Diffraction of light
Interference of light
Polarization of light
Dispersion of light
Scattering of light
Light behaves differently when it falls on different objects.
When light falls on transparent objects, it is transmitted to the other side. That is why we can see across transparent objects.
When light falls on rough opaque objects, most of this light is absorbed and changed into heat energy. A black surface absorbs most of the light.
When light falls on a smooth shiny surface, it bounces off in one particular direction. This
bouncing off of light is called reflection of light.
The negative values that occur in the representation of colors by R-G-B values are unpleasant. Thus the Commission Internationale de l'Éclairage (CIE) defined in 1931 another basis in terms of three (virtual) primaries X, Y (the luminous-efficiency function) and Z, which allows all visible colors to be matched as linear combinations with positive coefficients only. The normalized weights x = X/(X+Y+Z) and y = Y/(X+Y+Z) are the so-called CHROMATICITY values, and plotting the visible colors in the (x,y)-plane gives the CIE CHROMATICITY DIAGRAM.
The DOMINANT WAVELENGTH of a color is given by the intersection of the ray from the white point through that color with the curved boundary formed by the pure spectral colors.
Some colors (purples and magentas) are non-spectral, i.e. have no dominant wavelength (since the intersection of the ray hits the boundary in the flat part). But they have a COMPLEMENTARY
COLOR. Colors COMPLEMENTARY to a given color lie opposite it on the line through the white point; e.g. we have the following complementary pairs: red-cyan, green-magenta, and blue-yellow.
EXCITATION PURITY is the ratio of the distance from the white point to the color and the distance from the white point to the color's dominant wavelength on the spectral boundary.
The CIE chromaticity diagram can also be used to visualize the COLOR GAMUTS (i.e. the ranges of
producible colors) for various output devices:
Figure: Color gamut
Color Models
The purpose of a color model is to facilitate the specification of colors in some standard generally
accepted way. In essence, a color model is a specification of a 3-D coordinate system and a subspace
within that system where each color is represented by a single point.
Each industry that uses color employs the most suitable color model. For example, the RGB color model
is used in computer graphics, YUV or YCbCr are used in video systems, PhotoYCC* is used in
PhotoCD* production and so on. Transferring color information from one industry to another requires
transformation from one set of values to another. Intel IPP provides a wide range of functions to convert different color spaces to RGB and vice versa.
In the RGB model, each color appears as a combination of red, green, and blue. This model is called
additive, and the colors are called primary colors. The primary colors can be added to produce the
secondary colors of light (see Figure "Primary and Secondary Colors for RGB and CMYK Models") -
magenta (red plus blue), cyan (green plus blue), and yellow (red plus green). The combination of red,
green, and blue at full intensities makes white.
The color subspace of interest is a cube shown in Figure "RGB and CMY Color Models" (RGB values are normalized to 0..1), in which RGB values are at three corners; cyan, magenta, and yellow are at the three other corners; black is at the origin; and white is at the corner farthest from the origin.
The gray scale extends from black to white along the diagonal joining these two points. The colors are the
points on or inside the cube, defined by vectors extending from the origin.
Thus, images in the RGB color model consist of three independent image planes, one for each primary
color.
As a rule, the Intel IPP color conversion functions operate with non-linear gamma-corrected images
R'G'B'.
The importance of the RGB color model is that it relates very closely to the way that the human eye
perceives color. RGB is a basic color model for computer graphics because color displays use red, green,
and blue to create the desired color. Therefore, the choice of the RGB color space simplifies the
architecture and design of the system. Besides, a system that is designed using the RGB color space can
take advantage of a large number of existing software routines, because this color space has been around
for a number of years.
However, RGB is not very efficient when dealing with real-world images. To generate any color within
the RGB color cube, all three RGB components need to be of equal pixel depth and display resolution.
Also, any modification of the image requires modification of all three planes.
The CMYK color model is a subset of the RGB model and is primarily used in color print production.
CMYK is an acronym for cyan, magenta, and yellow along with black (noted as K). The CMYK color
space is subtractive, meaning that cyan, magenta, yellow, and black pigments or inks are applied to a white
surface to subtract some color from the white surface to create the final color. For example
(see Figure "Primary and Secondary Colors for RGB and CMYK Models"), cyan is white minus red,
magenta is white minus green, and yellow is white minus blue. Subtracting all colors by combining the
CMY at full saturation should, in theory, render black. However, impurities in the existing CMY inks
make full and equal saturation impossible, and some RGB light does filter through, rendering a muddy
brown color. Therefore, the black ink is added to CMY. The CMY cube is shown in Figure "RGB and
CMY Color Models", in which CMY values are at three corners; red, green, and blue are the three other
corners, white is at the origin; and black is at the corner farthest from the origin.
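The subtractive relationship above (cyan is white minus red, and so on) amounts to a one-line conversion per channel. A sketch in C, with all values normalized to [0,1]; the function name is illustrative:

```c
/* RGB to CMY: each subtractive primary is white (1.0) minus the
   corresponding additive primary, e.g. cyan = 1 - red. */
void rgbToCmy(double r, double g, double b,
              double *c, double *m, double *y)
{
    *c = 1.0 - r;
    *m = 1.0 - g;
    *y = 1.0 - b;
}
```

For example, pure red (1,0,0) maps to (0,1,1): no cyan ink, full magenta and yellow.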
The HLS (hue, lightness, saturation) and HSV (hue, saturation, value) color models were developed to be
more “intuitive” in manipulating color and were designed to approximate the way humans perceive and
interpret color.
Hue defines the color itself. The values for the hue axis vary from 0 to 360 beginning and ending with red
and running through green, blue and all intermediary colors.
Saturation indicates the degree to which the hue differs from a neutral gray. The values run from 0, which
means no color saturation, to 1, which is the fullest saturation of a given hue at a given illumination.
Intensity component - lightness (HLS) or value (HSV), indicates the illumination level. Both vary from 0
(black, no light) to 1 (white, full illumination). The difference between the two is that maximum
saturation of hue (S=1) is at value V=1 (full illumination) in the HSV color model, and at lightness L=0.5
in the HLS color model.
The HSV color space is essentially a cylinder, but usually it is represented as a cone or hexagonal cone
(hexcone) as shown in the Figure "HSV Solid", because the hexcone defines the subset of the HSV space
with valid RGB values. The value V is the vertical axis, and the vertex V=0 corresponds to black color.
Similarly, a color solid, or 3D-representation, of the HLS model is a double hexcone (Figure "HLS
Solid") with lightness as the axis, and the vertex of the second hexcone corresponding to white.
Both color models have the intensity component decoupled from the color information. The HSV color space
yields a greater dynamic range of saturation. Conversions from RGB to HSV and vice versa in Intel IPP are
performed in accordance with the respective pseudo-code algorithms given in the descriptions of the
corresponding conversion functions.
HSV Solid
HLS Solid
A picture is completely specified by the set of intensities for the pixel positions in the display. Shapes and colors of the objects can be described internally with pixel arrays in the frame buffer or with sets of basic geometric structures such as straight line segments and polygon color areas. Structures that describe the basic objects are referred to as output primitives. Each output primitive is specified with input coordinate data and other information about the way that object is to be displayed. Additional output primitives that can be used to construct a picture include circles and other conic sections, quadric surfaces, spline curves and surfaces, polygon color areas and character strings.
Points and Lines
Point plotting is accomplished by converting a single coordinate position furnished by an application program into appropriate operations for the output device. With a CRT monitor, for example, the electron beam is turned on to illuminate the screen phosphor at the selected location.
Line drawing is accomplished by calculating intermediate positions along the line path between two specified endpoint positions. An output device is then directed to fill in these positions between the endpoints.
Digital devices display a straight line segment by plotting discrete points between the two end points.
Discrete coordinate positions along the line path are calculated from the equation of the line. For a raster
video display, the line color (intensity) is then loaded into the frame buffer at the corresponding pixel
coordinates. Reading from the frame buffer, the video controller then plots “the screen pixels”. Pixel
positions are referenced according to scan-line number and column number (pixel position across a scan
line). Scan lines are numbered consecutively from 0, starting at the bottom of the screen; and pixel
columns are numbered from 0, left to right across each scan line
To load an intensity value into the frame buffer at a position corresponding to column x along scan line y, we use
setpixel (x, y)
To retrieve the current frame buffer intensity setting for a specified location we use a low-level function
getpixel (x, y)
Given that the two endpoints of a line segment are specified at positions (x1,y1) and (x2,y2) as in the figure, we can determine the values for the slope m and y intercept b with the following calculations:
m = (y2 - y1) / (x2 - x1)
b = y1 - m · x1
For lines with m = 1, Δx = Δy and the horizontal and vertical deflection voltages are equal.
Algorithm
#define ROUND(a) ((int)(a+0.5))
void lineDDA (int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = xa, y = ya;
    if (abs(dx) > abs(dy)) steps = abs(dx);
    else steps = abs(dy);
    xIncrement = dx / (float) steps;
    yIncrement = dy / (float) steps;
    setpixel (ROUND(x), ROUND(y));
    for (k = 0; k < steps; k++)
    {
        x += xIncrement;
        y += yIncrement;
        setpixel (ROUND(x), ROUND(y));
    }
}
Algorithm Description:
Step 1 : Accept Input as two endpoint pixel positions
Step 2: Horizontal and vertical differences between the endpoint positions are assigned to parameters dx
and dy (Calculate dx=xb-xa and dy=yb-ya).
Step 3: The difference with the greater magnitude determines the value of parameter steps.
Step 4 : Starting with pixel position (xa, ya), determine the offset needed at each step to
generate the next pixel position along the line path.
Step 5: Loop the following process for steps number of times:
a. Use a unit of increment or decrement in the x and y direction.
b. If xa is less than xb, the values of increment in the x and y directions are 1 and m.
c. If xa is greater than xb, then the decrements -1 and -m are used.
7. Iterate the calculation for xIncrement and yIncrement for steps (6) number of times
8. Tabulation of each iteration

k   x                y       Plotted point (rounded to integer)
0   0+0.66=0.66      0+1=1   (1,1)
1   0.66+0.66=1.32   1+1=2   (1,2)
2   1.32+0.66=1.98   2+1=3   (2,3)
3   1.98+0.66=2.64   3+1=4   (3,4)
4   2.64+0.66=3.3    4+1=5   (3,5)
5   3.3+0.66=3.96    5+1=6   (4,6)
Result :
An accurate and efficient raster line generating algorithm developed by Bresenham, that uses only
incremental integer calculations.
In addition, Bresenham’s line algorithm can be adapted to display circles and other curves.
To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.
Pixel positions along a line path are then determined by sampling at unit x intervals. Starting from the left
endpoint (x0,y0) of a given line, we step to each successive column (x position) and plot the pixel whose
scan-line y value is closest to the line path.
Assuming that the pixel at (xk,yk) has been displayed, we next need to decide which pixel to plot in column xk+1 = xk + 1: either (xk+1, yk) or (xk+1, yk+1). At sampling position xk+1, we label the vertical pixel separations from the mathematical line path as d1 and d2. The y coordinate on the mathematical line at pixel column position xk+1 is calculated as
y = m(xk+1) + b     (1)
Then
d1 = y - yk = m(xk+1) + b - yk
d2 = (yk+1) - y = yk + 1 - m(xk+1) - b
To determine which of the two pixels is closest to the line path, an efficient test is based on the difference between the two pixel separations:
d1- d2 = 2m(xk+1)-2yk+2b-1 (2)
A decision parameter pk for the kth step in the line algorithm can be obtained by rearranging equation (2), substituting m = Δy/Δx, where Δx and Δy are the horizontal and vertical separations of the endpoint positions, and defining the decision parameter as
pk = Δx (d1 - d2)
   = 2Δy·xk - 2Δx·yk + c     (3)
The sign of pk is the same as the sign of d1 - d2, since Δx > 0.
Parameter c is constant and has the value 2Δy + Δx(2b - 1), which is independent of the pixel position and will be eliminated in the recursive calculations for pk.
If the pixel at yk is “closer” to the line path than the pixel at yk+1 (d1 < d2), then decision parameter pk is negative. In this case we plot the lower pixel; otherwise we plot the upper pixel.
Coordinate changes along the line occur in unit steps in either the x or y directions.
To obtain the values of successive decision parameters using incremental integer calculations: at step k+1, the decision parameter is evaluated from equation (3) as
pk+1 = 2Δy·xk+1 - 2Δx·yk+1 + c
Subtracting equation (3) from the preceding equation,
pk+1 - pk = 2Δy (xk+1 - xk) - 2Δx (yk+1 - yk)
But xk+1 = xk + 1, so that
pk+1 = pk + 2Δy - 2Δx (yk+1 - yk)     (4)
where the term yk+1 - yk is either 0 or 1, depending on the sign of pk.
The first parameter p0 is evaluated at the starting pixel position (x0,y0):
P0 = 2Δy - Δx     (5)
Bresenham’s line drawing for a line with a positive slope less than 1 in the following outline of
the algorithm.
The constants 2Δy and 2Δy-2Δx are calculated once for each line to be scan converted.
Bresenham’s line Drawing Algorithm for |m| < 1
1. Input the two line endpoints and store the left end point in (x0,y0)
2. load (x0,y0) into frame buffer, ie. Plot the first point.
3. Calculate the constants Δx, Δy, 2Δy and obtain the starting value for the decision parameter as
P0 = 2Δy-Δx
4. At each xk along the line, starting at k=0, perform the following test: if pk < 0, the next point to plot is (xk+1, yk) and pk+1 = pk + 2Δy; otherwise, the next point to plot is (xk+1, yk+1) and pk+1 = pk + 2Δy - 2Δx.
5. Repeat step 4 Δx times.
Example : To digitize the line with endpoints (20,10) and (30,18), we have
Δx = 10, Δy = 8
The initial decision parameter has the value
p0 = 2Δy- Δx = 6
and the increments for calculating successive decision parameters are
2Δy=16 2Δy-2 Δx= -4
We plot the initial point (x0,y0) = (20,10) and determine successive pixel positions along
the line path from the decision parameter as
Tabulation
k   pk    (xk+1, yk+1)
0 6 (21,11)
1 2 (22,12)
2 -2 (23,12)
3 14 (24,13)
4 10 (25,14)
5 6 (26,15)
6 2 (27,16)
7 -2 (28,16)
8 14 (29,17)
9 10 (30,18)
Result
Advantages
Algorithm is Fast
Uses only integer calculations
Disadvantages
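The outline and tabulation above can be sketched in C. This version handles only positive slopes with |m| < 1, matching the outline; setpixel is replaced here by a small recorder so the routine can be checked against the table (in a real package it would write the frame buffer).

```c
#include <stdlib.h>

#define MAXPIX 64
static int plotX[MAXPIX], plotY[MAXPIX], nPlot = 0;
/* stand-in for the package's setpixel: record the plotted positions */
static void setpixel(int x, int y) { plotX[nPlot] = x; plotY[nPlot] = y; nPlot++; }

/* Bresenham's line algorithm for lines with positive slope |m| < 1. */
void lineBres(int xa, int ya, int xb, int yb)
{
    int dx = abs(xb - xa), dy = abs(yb - ya);
    int p = 2 * dy - dx;                       /* p0 = 2*dy - dx */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y, xEnd;

    /* store the left endpoint in (x, y) */
    if (xa > xb) { x = xb; y = yb; xEnd = xa; }
    else         { x = xa; y = ya; xEnd = xb; }
    setpixel(x, y);

    while (x < xEnd) {
        x++;
        if (p < 0) p += twoDy;                  /* plot (xk+1, yk)   */
        else       { y++; p += twoDyMinusDx; }  /* plot (xk+1, yk+1) */
        setpixel(x, y);
    }
}
```

Calling lineBres(20, 10, 30, 18) reproduces the decision-parameter sequence 6, 2, -2, 14, ... and the pixel positions of the tabulation above, using only integer additions.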
Circle-Generating Algorithms
General function is available in a graphics library for displaying various kinds of curves,
including circles and ellipses.
Properties of a circle
A circle is defined as the set of points that are all at a given distance r from a center position (xc,yc).
Bresenham’s line algorithm for raster displays is adapted to circle generation by setting up
decision parameters for finding the closest pixel to the circumference at each sampling step.
Square root evaluations would be required to compute pixel distances from a circular path.
Bresenham’s circle algorithm avoids these square root calculations by comparing the squares of
the pixel separation distances. It is possible to perform a direct distance comparison without a
squaring operation.
The midpoint approach is to test the halfway position between two pixels to determine if this midpoint is inside or outside the circle boundary. This method is more easily applied to other conics, and for an integer circle radius the midpoint approach generates the same pixel positions as the Bresenham circle algorithm.
For a straight line segment, the midpoint method is equivalent to the Bresenham line algorithm.
The error involved in locating pixel positions along any conic section using the midpoint test is limited to one-half the pixel separation.
Successive midpoint decision parameter values and the corresponding coordinate positions along
the circle path are listed in the following table
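A C sketch of the midpoint circle method described above. The decision parameter starts at p0 = 1 - r (the rounded value of 5/4 - r) and is updated incrementally, so no square roots are evaluated. Only the second-octant positions are generated and recorded here; a full circlePlotPoints would plot all eight symmetric points, as noted in the comment.

```c
#define MAXPTS 64
static int octX[MAXPTS], octY[MAXPTS], nOct = 0;

/* Record the computed octant point relative to the center.  A real version
   would call setpixel for all eight symmetric positions
   (xc±x, yc±y) and (xc±y, yc±x). */
static void circlePlotPoints(int xc, int yc, int x, int y)
{
    octX[nOct] = xc + x; octY[nOct] = yc + y; nOct++;
}

void circleMidpoint(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                 /* initial decision parameter */
    circlePlotPoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)                 /* midpoint inside the circle  */
            p += 2 * x + 1;
        else {                     /* midpoint outside: step down */
            y--;
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints(xc, yc, x, y);
    }
}
```

For r = 10 this generates the octant positions (0,10), (1,10), (2,10), (3,10), (4,9), (5,9), (6,8), (7,7), each chosen by the sign of the incrementally updated decision parameter.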
Ellipse-Generating Algorithms
Properties of ellipses
An ellipse can be given in terms of the distances from any point on the ellipse to two fixed positions called the foci of the ellipse. The sum of these two distances is the same value for all points on the ellipse.
If the distances to the two focus positions from any point p=(x,y) on the ellipse are labeled d1
and d2, then the general equation of an ellipse can be stated as
d1+d2=constant
Expressing distances d1 and d2 in terms of the focal coordinates F1=(x1,y1) and F2=(x2,y2):
sqrt((x-x1)^2 + (y-y1)^2) + sqrt((x-x2)^2 + (y-y2)^2) = constant
By squaring this equation, isolating the remaining radical, and squaring again, we can write the general ellipse equation in the form
Ax^2 + By^2 + Cxy + Dx + Ey + F = 0
The coefficients A,B,C,D,E, and F are evaluated in terms of the focal coordinates and the
dimensions of the major and minor axes of the ellipse.
The major axis is the straight line segment extending from one side of the ellipse to the other
through the foci. The minor axis spans the shorter dimension of the ellipse, perpendicularly
bisecting the major axis at the halfway position (ellipse center) between the two foci.
An interactive method for specifying an ellipse in an arbitrary orientation is to input the two foci
and a point on the ellipse boundary.
Ellipse equations are greatly simplified if the major and minor axes are oriented to align with the coordinate axes. With the major and minor axes oriented parallel to the x and y axes, parameter rx labels the semi-major axis and parameter ry labels the semi-minor axis:
((x-xc)/rx)^2 + ((y-yc)/ry)^2 = 1
Using polar coordinates r and θ, we can describe the ellipse in standard position with the parametric equations
x = xc + rx cos θ
y = yc + ry sin θ
Angle θ, called the eccentric angle of the ellipse, is measured around the perimeter of a bounding circle.
We must calculate pixel positions along the elliptical arc throughout one quadrant, and then we
obtain positions in the remaining three quadrants by symmetry
The midpoint ellipse method is applied throughout the first quadrant in two parts.
The figure below shows the division of the first quadrant according to the slope of an ellipse with rx < ry. We take unit steps in the x direction where the slope of the curve has a magnitude less than 1, and unit steps in the y direction where the slope has a magnitude greater than 1.
Regions 1 and 2 can be processed in various ways:
1. Start at position (0,ry) and step clockwise along the elliptical path in the first quadrant, shifting from unit steps in x to unit steps in y when the slope becomes less than -1.
2. Start at (rx,0) and select points in a counterclockwise order,
2.1 shifting from unit steps in y to unit steps in x when the slope becomes greater than -1.0, or
2.2 using parallel processors to calculate pixel positions in the two regions simultaneously.
3. Start at (0,ry) and step along the ellipse path in clockwise order throughout the first quadrant.
We define the ellipse function with (xc,yc) = (0,0) as
fellipse(x,y) = ry^2 x^2 + rx^2 y^2 - rx^2 ry^2
which has the following properties:
fellipse(x,y) < 0, if (x,y) is inside the ellipse boundary
             = 0, if (x,y) is on the ellipse boundary
             > 0, if (x,y) is outside the ellipse boundary
Thus, the ellipse function fellipse (x,y) serves as the decision parameter in the midpoint
algorithm.
Starting at (0,ry), we take unit steps in the x direction until we reach the boundary between region 1 and region 2. Then we switch to unit steps in the y direction over the remainder of the curve in the first quadrant.
At each step we test the value of the slope of the curve. The ellipse slope is calculated as
dy/dx = -(2 ry^2 x) / (2 rx^2 y)
At the boundary between region 1 and region 2, dy/dx = -1.0 and
2 ry^2 x = 2 rx^2 y
Therefore, we move out of region 1 whenever
2 ry^2 x >= 2 rx^2 y
The following figure shows the midpoint between the two candidate pixels at sampling position xk+1 in the first region. To determine the next position along the ellipse path, we evaluate the decision parameter at this midpoint:
p1k = fellipse(xk + 1, yk - 1/2)
    = ry^2 (xk + 1)^2 + rx^2 (yk - 1/2)^2 - rx^2 ry^2
If p1k < 0, the midpoint is inside the ellipse and the pixel on scan line yk is closer to the ellipse boundary. Otherwise, the midpoint is outside or on the ellipse boundary, and we select the pixel on scan line yk - 1.
At the next sampling position (xk+1 + 1 = xk + 2), the decision parameter for region 1 is calculated as
p1k+1 = fellipse(xk+1 + 1, yk+1 - 1/2)
or
p1k+1 = p1k + 2 ry^2 xk+1 + ry^2 + rx^2 [(yk+1 - 1/2)^2 - (yk - 1/2)^2]
where yk+1 is either yk or yk - 1, depending on the sign of p1k.
The initial decision parameter is obtained by evaluating the ellipse function at the start position (x0,y0) = (0,ry):
p10 = ry^2 - rx^2 ry + ¼ rx^2
Over region 2, we sample at unit steps in the negative y direction, and the midpoint is now taken between horizontal pixels at each step. For this region, the decision parameter is evaluated as
p2k = fellipse(xk + 1/2, yk - 1)
    = ry^2 (xk + 1/2)^2 + rx^2 (yk - 1)^2 - rx^2 ry^2
1. If p2k > 0, the midpoint position is outside the ellipse boundary, and we select the pixel at xk.
2. If p2k <= 0, the midpoint is inside the ellipse boundary, and we select pixel position xk + 1.
To determine the relationship between successive decision parameters in region 2, we evaluate the ellipse function at the next sampling step yk+1 - 1 = yk - 2:
p2k+1 = fellipse(xk+1 + 1/2, yk+1 - 1)
      = ry^2 (xk+1 + 1/2)^2 + rx^2 [(yk - 1) - 1]^2 - rx^2 ry^2
or
p2k+1 = p2k - 2 rx^2 (yk - 1) + rx^2 + ry^2 [(xk+1 + 1/2)^2 - (xk + 1/2)^2]
with xk+1 set either to xk or to xk + 1, depending on the sign of p2k.
When we enter region 2, the initial position (x0,y0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is then
p20 = fellipse(x0 + 1/2, y0 - 1)
p10 = ry^2 - rx^2 ry + ¼ rx^2
3. At each xk position in region 1, starting at k=0, perform the following test: if p1k < 0, the next point along the ellipse centered on (0,0) is (xk + 1, yk) and p1k+1 = p1k + 2 ry^2 xk+1 + ry^2; otherwise, the next point is (xk + 1, yk - 1) and p1k+1 = p1k + 2 ry^2 xk+1 - 2 rx^2 yk+1 + ry^2.
Each calculated position (x,y) is then moved onto the elliptical path centered on (xc,yc):
x = x + xc, y = y + yc
8. Repeat the steps for region 1 until 2 ry^2 x >= 2 rx^2 y.
Example : Midpoint ellipse drawing
Given input ellipse parameters rx=8 and ry=6, we illustrate the midpoint ellipse algorithm by determining raster positions along the ellipse path in the first quadrant. Initial values and increments for the decision parameter calculations are
2 ry^2 x = 0 (with increment 2 ry^2 = 72)
2 rx^2 y = 2 rx^2 ry (with increment -2 rx^2 = -128)
For region 1, the initial point for the ellipse centered on the origin is (x0,y0) = (0,6), and the initial decision parameter value is
p10 = ry^2 - rx^2 ry + ¼ rx^2 = -332
Successive midpoint decision parameter values and the pixel positions along the ellipse are listed in the following table.
k   p1k    (xk+1, yk+1)   2ry^2 xk+1   2rx^2 yk+1
0   -332   (1,6)          72           768
1   -224   (2,6)          144          768
2   -44    (3,6)          216          768
3   208    (4,5)          288          640
4   -108   (5,5)          360          640
5   288    (6,4)          432          512
6   244    (7,3)          504          384
{
x++;
px+=twoRy2;
p+=Rx2-py+px;
}
ellipsePlotPoints(xCenter, yCenter,x,y);
}
}
void ellipsePlotPoints(int xCenter, int yCenter, int x, int y)
{
setpixel (xCenter + x, yCenter + y);
setpixel (xCenter - x, yCenter + y);
setpixel (xCenter + x, yCenter - y);
setpixel (xCenter- x, yCenter - y);
}
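The listing above is only the tail of the routine (part of the region 2 loop and the symmetry plotting). A complete sketch of the midpoint ellipse algorithm follows; it records the first-quadrant positions relative to the center instead of calling setpixel, so the output can be checked against the tabulation for rx = 8, ry = 6.

```c
#define ROUND(a) ((int)((a) + 0.5))
#define MAXQ 64
static int qx[MAXQ], qy[MAXQ], nQ = 0;
/* recorder standing in for ellipsePlotPoints/setpixel */
static void recordPoint(int x, int y) { qx[nQ] = x; qy[nQ] = y; nQ++; }

void ellipseMidpoint(int Rx, int Ry)
{
    int Rx2 = Rx * Rx, Ry2 = Ry * Ry;
    int twoRx2 = 2 * Rx2, twoRy2 = 2 * Ry2;
    int p, x = 0, y = Ry;
    int px = 0, py = twoRx2 * y;   /* running 2ry^2 x and 2rx^2 y terms */

    recordPoint(x, y);

    /* Region 1: p10 = ry^2 - rx^2 ry + rx^2/4 */
    p = ROUND(Ry2 - (Rx2 * Ry) + (0.25 * Rx2));
    while (px < py) {              /* until 2ry^2 x >= 2rx^2 y */
        x++;
        px += twoRy2;
        if (p < 0)
            p += Ry2 + px;
        else {
            y--;
            py -= twoRx2;
            p += Ry2 + px - py;
        }
        recordPoint(x, y);
    }

    /* Region 2: p20 = fellipse(x0 + 1/2, y0 - 1) */
    p = ROUND(Ry2 * (x + 0.5) * (x + 0.5)
              + Rx2 * (y - 1.0) * (y - 1.0) - (double)Rx2 * Ry2);
    while (y > 0) {
        y--;
        py -= twoRx2;
        if (p > 0)
            p += Rx2 - py;
        else {
            x++;
            px += twoRy2;
            p += Rx2 - py + px;
        }
        recordPoint(x, y);
    }
}
```

For rx = 8, ry = 6 the region 1 positions match the table above: (0,6), (1,6), (2,6), (3,6), (4,5), (5,5), (6,4), (7,3), after which region 2 continues with (8,2), (8,1), (8,0).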
Attributes of output primitives
Any parameter that affects the way a primitive is to be displayed is referred to as an attribute parameter. Example attribute parameters are color, size, etc. A line drawing function, for example, could contain parameters to set color, width and other properties.
1. Line Attributes
2. Curve Attributes
3. Color and Grayscale Levels
4. Area Fill Attributes
5. Character Attributes
6. Bundled Attributes
Line Attributes
Basic attributes of a straight line segment are its type, its width, and its color. In some graphics packages, lines can also be displayed using selected pen or brush options.
Line Type
Line Width
Pen and Brush Options
Line Color
Line type
Possible selection of line type attribute includes solid lines, dashed lines and dotted lines.
To set line type attributes in a PHIGS application program, a user invokes the function
setLinetype (lt)
Where parameter lt is assigned a positive integer value of 1, 2, 3 or 4 to generate lines that are solid, dashed, dotted or dash-dotted, respectively. Other values for the line type parameter lt could be used to display variations in dot-dash patterns.
Line width
Implementation of the line width option depends on the capabilities of the output device. The line width attribute is set with the function
setLinewidthScaleFactor(lw)
Line width parameter lw is assigned a positive number to indicate the relative width of line to be
displayed. A value of 1 specifies a standard width line. A user could set lw to a value of 0.5 to
plot a line whose width is half that of the standard line. Values greater than 1 produce lines
thicker than the standard.
Line Cap
We can adjust the shape of the line ends to give them a better appearance by adding line caps.
There are three types of line cap. They are
Butt cap
Round cap
Projecting square cap
Butt cap obtained by adjusting the end positions of the component parallel lines so that the thick
line is displayed with square ends that are perpendicular to the line path.
Round cap obtained by adding a filled semicircle to each butt cap. The circular arcs are centered
on the line endpoints and have a diameter equal to the line thickness
Projecting square cap extends the line and adds butt caps that are positioned one-half of the line width beyond the specified endpoints.
Line Join
1. A miter join is accomplished by extending the outer boundaries of each of the two lines until they meet.
2. A round join is produced by capping the connection between the two segments with a circular boundary whose diameter is equal to the line width.
3. A bevel join is generated by displaying the line segments with butt caps and filling in the triangular gap where the segments meet.
Line color
A polyline routine displays a line in the current color by setting this color value in the frame
buffer at pixel locations along the line path, using the setpixel procedure.
We set the line color value in PHIGS with the function
setPolylineColourIndex (lc)
Nonnegative integer values, corresponding to allowed color choices, are assigned to the line
color parameter lc
Example : Various line attribute commands in an applications program is given by the following
sequence of statements
setLinetype(2);
setLinewidthScaleFactor(2);
setPolylineColourIndex(5);
polyline(n1, wc_points1);
setPolylineColourIndex(6);
polyline(n2, wc_points2);
This program segment would display two figures, drawn with double-wide dashed lines. The
first is displayed in a color corresponding to code 5, and the second in color 6.
Curve attributes
Parameters for curve attributes are the same as those for line segments. Curves can be displayed
with varying colors, widths, dot-dash patterns, and available pen or brush options.
Color or gray scales are incorporated into a raster-graphics frame buffer by using additional bit
planes. The intensity of each pixel on the CRT is controlled by a corresponding pixel location in
each of the N bit planes. The binary value from each of the N bit planes is loaded into
corresponding positions in a register. The resulting binary number is interpreted as an intensity
level between 0 (dark) and 2^N - 1 (full intensity).
This is converted into an analog voltage between 0 and the maximum voltage of the electron
gun by the DAC. A total of 2^N intensity levels are possible. The figure given below illustrates a
system with 3 bit planes for a total of 8 (2^3) intensity levels. Each bit plane requires the full
complement of memory for a given raster resolution; e.g., a 3-bit-plane frame buffer for a 1024
x 1024 raster requires 3,145,728 (3 x 1024 x 1024) memory bits.
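The memory calculation above can be verified with a short Python helper (the function name is illustrative):

```python
def frame_buffer_stats(bit_planes, width, height):
    """Number of intensity levels and total memory bits for an
    N-bit-plane frame buffer at the given raster resolution."""
    levels = 2 ** bit_planes
    memory_bits = bit_planes * width * height
    return levels, memory_bits

# The 3-bit-plane, 1024 x 1024 example from the text:
levels, bits = frame_buffer_stats(3, 1024, 1024)
# levels == 8, bits == 3145728
```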
An increase in the number of available intensity levels is achieved for a modest increase in
required memory by using a lookup table. Upon reading the bit planes in the frame buffer, the
resulting number is used as an index into the lookup table. The lookup table must contain
2^N entries. Each entry in the lookup table is W bits wide, and W may be greater than N. When
this occurs, 2^W intensities are available, but only 2^N different intensities are available at one
time. To get additional intensities, the contents of the lookup table must be changed.
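A minimal sketch of the lookup-table idea, assuming N = 3 bit planes and W = 8-bit-wide entries; the table values themselves are illustrative:

```python
# N = 3 bit planes index into 2**N = 8 entries, each W = 8 bits
# wide, so 2**W = 256 intensities are available overall but only
# 8 at any one time.
N, W = 3, 8

# One possible table: spread the 8 usable levels evenly over the
# full 0 .. 2**W - 1 output range.
lut = [round(i * (2 ** W - 1) / (2 ** N - 1)) for i in range(2 ** N)]

def pixel_intensity(frame_buffer_value):
    """Map the N-bit frame-buffer value through the lookup table."""
    return lut[frame_buffer_value]
```

Changing the list contents (without touching the frame buffer) is what makes a different set of 8 intensities available.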
Because there are three primary colors, a simple color frame buffer is implemented with three
bit planes, one for each primary color. Each bit plane drives an individual color gun for each of
the three primary colors used in color video. These three primaries (red, green, and blue) are
combined at the CRT to yield eight colors.
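The eight combinable colors can be enumerated as (R, G, B) bit triples, one bit per plane:

```python
from itertools import product

# With one bit plane per primary, each pixel is an (R, G, B) bit
# triple, giving the eight colors of a 3-bit-plane color system:
# black, red, green, blue, yellow, cyan, magenta, and white.
colors = list(product((0, 1), repeat=3))
```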
The Pixel Addressing feature controls the number of pixels that are read from the ROI.
Pixel Addressing is controlled by two parameters – a Pixel Addressing mode and a value. The mode
of Pixel Addressing can be decimation (0), averaging (1), binning (2) or resampling (3). The Pixel
Addressing value ranges from 1 to 6. Not all Pixel Addressing values are supported on all cameras.
With a Pixel Addressing value of 1, the Pixel Addressing mode has no effect and all pixels in the
ROI will be returned. For Pixel Addressing values greater than 1, the number of pixels will be
reduced by the square of the value. A Pixel Addressing value of 2 results in 1/4 of the pixels; a value
of 3 results in 1/9 of the pixels; a value of 4 results in 1/16 of the pixels; a value of 6 results in 1/36 of
the pixels.
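The pixel-count reduction described above can be expressed as a one-line helper (a hypothetical function, not part of any camera API):

```python
def pixels_after_addressing(roi_pixels, pa_value):
    """For Pixel Addressing values greater than 1, the pixel count
    is reduced by the square of the value; a value of 1 has no
    effect."""
    if pa_value == 1:
        return roi_pixels
    return roi_pixels // (pa_value ** 2)
```

For a 144-pixel ROI, a value of 2 leaves 1/4 of the pixels (36) and a value of 6 leaves 1/36 (4).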
The Pixel Addressing mode determines how the number of pixels is reduced. The decimation
mode will skip all the pixels in the block except for the first group of four. At the highest Pixel
Addressing value of 6, a 12 x 12 block of pixels is reduced to 2 x 2. At this level of reduction,
detail in the scene can be lost and color artifacts can be introduced on a color sensor.
In decimation mode, the Pixel Addressing values produce the following results:
Value of 2: Read out 2 pixels, skip 2 pixels
Value of 3: Read out 2 pixels, skip 4 pixels
Value of 4: Read out 2 pixels, skip 6 pixels
Value of 6: Read out 2 pixels, skip 10 pixels
For monochrome cameras, examples of the blocks of pixels to be reduced (when using decimation)
or summed (when binning) are shown below:
(Figure: example pixel blocks for a Pixel Addressing value of 2)
The averaging mode (available on some PL-B cameras) will average pixels at the same location
of the 2x2 kernels within the larger block of pixels. On a color camera, the averaged pixels are
all the same color, resulting in a 2x2 Bayer pattern kernel. This allows details in the blocks to be
detected and reduces the effects of the color artifacts.
The binning mode will sum pixels at the same location of the 2x2 kernels within the larger
block of pixels. On a color camera, the binned pixels are all the same color, thus reducing the
block to a 2x2 Bayer pattern kernel. Unlike binning with CCD sensors, this summation occurs
after the image is digitized, so no increase in sensitivity will be noticed, but a dark image will
appear brighter.
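The difference between averaging and binning on a single 2x2 monochrome block can be sketched as follows (the function name and block representation are ours):

```python
def reduce_2x2(block, mode):
    """Reduce a 2x2 monochrome pixel block (given as a flat list of
    four values) to a single value. 'binning' sums the four pixels,
    which is why a dark image appears brighter; 'averaging' divides
    the sum by four, preserving the original brightness."""
    total = sum(block)
    if mode == "binning":
        return total
    if mode == "averaging":
        return total // 4
    raise ValueError("unsupported mode")
```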
The resampling mode is only applicable to some color cameras. This mode uses a different
approach involving the conversion of the Bayer pattern in the blocks to RGB pixels. With a
Pixel Addressing value of 1, resampling has no effect. With a Pixel Addressing mode of 2 or
more, resampling will convert the block of 10-bit pixels to one 30-bit RGB pixel by averaging
the red, green and blue channels. The 30-bit RGB value is output from the camera when the
video format is set to YUV mode. Resampling will create images with the highest quality and
the least artifacts.
Pixel Addressing will reduce the amount of data coming from the camera. However, only the
Decimate mode will permit an increase in the frame rate. Averaging, binning and resampling
modes will have the same frame rate as if the Pixel Addressing value was 1.
Filled Area Primitives
Area filling is the process of filling an object, area, or image with a color. In polygon filling, all
the pixels that lie inside the polygon are set to a color other than the background color.
There are two methods used to fill a polygon:
Seed Fill Algorithm
Scan Line Algorithm
Seed Fill Algorithm
In this method, we select a seed, or starting point, inside the boundary. The seed-fill approach is
further divided into two algorithms:
1. Flood-fill Algorithm
2. Boundary-fill Algorithm
Flood-fill Algorithm
Flood-fill algorithm helps to define a region in the boundary, attached to a point in the multi-
dimensional array. It is similar to the bucket tool used in the paint program. The stack-based
recursive function is used to implement the algorithm. In flood -fill algorithm, we replace all the
associated pixels of the selected color with a fill color. We also check the pixels for a particular
interior color, not for boundary color.
Algorithm of Flood-fill
Procedure flood_fill (p, q, fill_color, old_color: Integer)
Var
current: Integer
Start
current = getpixel (p, q)
If (current = old_color)
Then
Start
setpixel (p, q, fill_color);
flood_fill (p, q+1, fill_color, old_color);
flood_fill (p, q-1, fill_color, old_color);
flood_fill (p+1, q, fill_color, old_color);
flood_fill (p-1, q, fill_color, old_color);
End;
End;
Note: getpixel () returns the color of the specified pixel, and setpixel () sets the pixel to the
specified color.
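A runnable Python version of the flood fill above; it uses an explicit stack instead of recursion (so deep regions do not overflow the call stack) and adds the bounds checks the pseudocode omits. This is a sketch, with the image represented as a 2-D list of color values:

```python
def flood_fill(image, p, q, fill_color):
    """Iterative 4-connected flood fill: replace the connected
    region of image[p][q]'s color with fill_color."""
    old_color = image[p][q]
    if old_color == fill_color:
        return                      # nothing to do, avoid looping
    stack = [(p, q)]
    while stack:
        x, y = stack.pop()
        if 0 <= x < len(image) and 0 <= y < len(image[0]) \
                and image[x][y] == old_color:
            image[x][y] = fill_color
            # push the four 4-connected neighbours
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
```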
Algorithm of Boundary-fill
Procedure boundary_fill (p, q, fill_color, boundary_color)
Step 1: Initialize the boundary color of the region.
Step 2: Get the color of the interior pixel (p, q) and store it in an integer called current:
current = getpixel (p, q)
Step 3: If
(current != boundary_color) and (current != fill_color)
Then
setpixel (p, q, fill_color);
boundary_fill (p+1, q, fill_color, boundary_color);
boundary_fill (p-1, q, fill_color, boundary_color);
boundary_fill (p, q+1, fill_color, boundary_color);
boundary_fill (p, q-1, fill_color, boundary_color);
Step 4: End
Problems with Boundary-fill algorithm
Sometimes it may not fill all the regions.
In the 4-connected method, it does not fill pixels that can be reached only through a diagonal
corner, because it checks only the four directly adjacent positions of each pixel.
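A runnable Python version of the boundary fill, again using an explicit stack and bounds checks (a sketch, with the image as a 2-D list of color values):

```python
def boundary_fill(image, p, q, fill_color, boundary_color):
    """Iterative 4-connected boundary fill: fill every reachable
    pixel that is neither the boundary color nor already the
    fill color."""
    stack = [(p, q)]
    while stack:
        x, y = stack.pop()
        if not (0 <= x < len(image) and 0 <= y < len(image[0])):
            continue                # outside the image
        if image[x][y] in (boundary_color, fill_color):
            continue                # hit the boundary or done
        image[x][y] = fill_color
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
```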
Scan-Line Algorithm
The scan-line algorithm is an area-filling algorithm. In this algorithm, we fill the polygon along
horizontal lines, called scan lines. Each scan line intersects the edges of the polygon, and the
polygon is filled between pairs of intersections. The main purpose of this algorithm is to fill
color into the interior pixels of the polygon.
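A minimal Python sketch of the scan-line fill just described; the polygon representation (a list of vertex tuples) and the set_pixel callback are our assumptions, and this simplified version skips horizontal edges rather than handling all vertex special cases:

```python
def scanline_fill(vertices, set_pixel):
    """For each scan line, collect x-intersections with the polygon
    edges, sort them, and fill between successive pairs."""
    ys = [y for _, y in vertices]
    n = len(vertices)
    for y in range(min(ys), max(ys)):
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            if y1 == y2:
                continue                       # skip horizontal edges
            if min(y1, y2) <= y < max(y1, y2): # half-open span avoids
                                               # double-counting vertices
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(int(left), int(right) + 1):
                set_pixel(x, y)
```

The even-odd pairing of sorted intersections is what guarantees that only interior spans are filled, even for concave polygons.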