CAD (Unit-3)
for images (of objects) enhanced by removing the hidden parts that would not
be seen if objects were constructed from opaque material in real life. Edges
and surfaces of a given object may be hidden (invisible) or visible in a given
image depending on the viewing direction. The determination of hidden edges
and surfaces is considered one of the most challenging problems in computer
graphics. Its solution typically demands large amounts of computer time and
memory space. Techniques to reduce these demands and improve efficiencies
of related algorithms exist and are discussed here.
The solution to the problem of removing hidden edges and surfaces draws on
various concepts from computing (mainly sorting) and geometric modeling (mainly
projection and intersection). This problem can also be viewed as a visibility
problem. Therefore, a clear understanding of it and its solution is useful and can
be extended to solve relevant engineering problems. Consider, for example, the
vision and path planning problems in robotics applications. In the vision problem,
the camera location and orientation provide the viewing direction which, in turn,
can be used to determine the hidden edges and surfaces of objects encountered in
the robot working environment. In the path planning problem, the knowledge of
when a given surface changes from visible to hidden (via finding silhouette edges
and curves as seen later in this section) can be utilized to find the minimum path of
the robot end effector. Points on the surface where its status changes from visible to
invisible or vice versa can be considered as critical points which the path planning
algorithm can use as input. Another example is the display of finite element
meshes where the hidden elements are removed. In this case, each element is treated
as a planar polygon, and the collection of elements that forms the meshed object
from a finite element point of view forms a polyhedron from a computer graphics
viewpoint.
A wide variety of hidden line and hidden surface removal (visibility) algorithms
exists today. The development of these algorithms is influenced by the
types of graphics display devices they support and by the type of
data structure or geometric modeling they operate on (wireframe,
surface, or solid modeling). Some utilize parallel, rather than serial, processing to
speed up their execution. Special-purpose hardware to support hidden line and
surface removal is not restricted to a single algorithm; however, it is not easy to
cast the different algorithmic formulations into a form that can be implemented in
hardware.
9.3.1 Visibility of Object Views
The visibility of parts of objects of a scene depends on the location of the viewing
eye and the viewing direction. For orthographic projection, visibility is decided by
a depth comparison of points along the same projector: the point with the larger zv
coordinate lies closer to the viewer and obscures the other. If the direction of the
zv axis is reversed, the comparison is reversed as well.
520 CAD/CAM Theory and Practice
Fig. 9.3 Depth Comparison for Orthographic Projection
Fig. 9.4 Minimax Tests: (a) boxes and polygons do not overlap; (b) boxes overlap and polygons do not; (c) boxes and polygons overlap; (d) minimax test of individual edges
For nonconvex polygons, two other testing methods can be used. In the first method,
we draw a line from the point under testing to infinity, as shown in Fig. 9.5a, and
count the intersections of this semi-infinite line with the polygon edges: an odd
count indicates that the point is inside the polygon. A singular case arises if one of
the polygon edges lies on the semi-infinite line (case 1 in the figure), which needs
special treatment to guarantee the consistency of the test.

Fig. 9.5 Containment Tests for Nonconvex Polygons: (a) intersection method; (b) angle method
The second method for nonconvex polygons (Fig. 9.5b) computes the sum of the
angles subtended at the test point by the polygon edges; a sum of ±360° indicates
that the point is inside the polygon, while a sum of zero indicates that it is outside.
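As a sketch of the first (intersection) method, the function below casts the semi-infinite line horizontally and uses the even-odd crossing count; the L-shaped polygon is illustrative data, not taken from the book.

```python
def point_in_polygon(px, py, verts):
    """Even-odd (ray-crossing) containment test.

    Casts a semi-infinite horizontal line from (px, py) toward +x and
    counts crossings with the polygon edges; an odd count means the
    point is inside. Vertices lying exactly on the line (the singular
    case of Fig. 9.5a) are handled by the asymmetric y1 > py vs y2 > py
    comparison.
    """
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # Does this edge straddle the horizontal line through (px, py)?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside

# A nonconvex (L-shaped) polygon
poly = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(point_in_polygon(1, 1, poly))   # point in the solid part of the L
print(point_in_polygon(3, 3, poly))   # point in the notch, outside
```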
Another important use of the plane equation in hidden line removal is achieved
by using the normal vector to the plane. The first three coefficients a, b and c of
Eq. (9.1) represent the normal to the plane, and the vector [a, b, c, d] represents the
homogeneous coordinates of this normal. The coefficient d is found by knowing a
point on the plane. In Chap. 5, we have discussed the various ways of finding the
plane equation. Figure 9.6b shows how the normal to a face can be used to decide
its visibility. The basic idea of the test (Fig. 9.6b) is that faces whose outward
normal vector points toward the viewing eye are visible (face F1), while faces whose
normal points away from it are invisible (face F2). This test is implemented by
calculating the dot product N · S. If the angle θ is measured from N to S, the dot
product is positive when N points toward the viewing eye, that is, when the face and
its edges are visible. The right-hand side of Eq. (9.2) gives the component of N
along the direction of S. For orthographic projection, S coincides with the zv axis.
Thus the surface test can be stated as follows: faces whose normal has a positive
component in the zv direction are visible and those whose normal has a negative zv
component are not visible. The test by itself can solve the hidden line problem only
for a single convex polyhedron. Even for convex polyhedra, the test may fail for
perspective projection.
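For orthographic projection the surface test reduces to the sign of N · S with S along the zv axis. A minimal sketch follows; the unit-cube faces are illustrative data, not from the book.

```python
def face_normal(p0, p1, p2):
    """Outward normal of a planar face from three of its vertices,
    listed counterclockwise when seen from outside the object."""
    ux, uy, uz = (p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2])
    vx, vy, vz = (p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2])
    return (uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx)

def is_visible(normal, view_dir=(0.0, 0.0, 1.0)):
    """Surface (back-face) test: the face is visible when N . S > 0,
    i.e. when the outward normal has a positive component along the
    viewing direction S (the zv axis for orthographic projection)."""
    n_dot_s = sum(n*s for n, s in zip(normal, view_dir))
    return n_dot_s > 0

# Unit-cube faces: front face (z = 1) vs. back face (z = 0)
front = face_normal((0, 0, 1), (1, 0, 1), (1, 1, 1))
back = face_normal((0, 0, 0), (0, 1, 0), (1, 1, 0))
print(is_visible(front), is_visible(back))  # front visible, back not
```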
9.3.2.4 Computing Silhouettes
A set of edges that separates visible faces from invisible faces of an object with
respect to a given viewing direction is called silhouette edges (or silhouettes). The
signs of the zv components of the normal vectors of the object faces can be utilized
to determine the silhouette. An edge that is part of the silhouette is characterized as
the intersection of one visible face and one invisible face. An edge that is the
intersection of two visible faces is visible, but does not contribute to the silhouette.
The intersection of two invisible faces produces an invisible edge. Figure 9.7a shows
the silhouette of a cube.
Fig. 9.7a Silhouette Edges of a Cube
Figure 9.7b shows how to compute the silhouette edges. One cube is oriented at an
angle with the axes of the VCS and the other is parallel to the axes. In the first case, the
zv component of each normal is calculated (shown dashed in the figure). The edges
between faces F1 and F2, F1 and F6, F2 and F3, F2 and F5, F3 and F4, and F4 and F6 are
silhouette edges according to the above criterion. If a normal does not have a
zv component, as in the second case of Fig. 9.7b, additional information is
needed to compute the silhouette. This case implies that the corresponding
face is parallel to the zv axis and to either the xv or yv axis, and perpendicular to the
remaining axis. For example, a face may be parallel to both the zv and yv axes and
perpendicular to the xv axis. In this case, the face normal is parallel to one of the VCS
axes. If this normal points in the positive direction of the axis, the face is visible. Face F4
is visible and F2 is not; similarly, the face opposite F6 is visible while F6 is not.
Determining silhouette curves for curved surfaces follows a similar approach but is
more involved. The silhouette curve of a surface is a curve on the surface along which
the zv component of the surface normal is zero, as shown in Fig. 9.8.
Fig. 9.8 Silhouette Curve of a Surface (the curve separates the visible and invisible regions)
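The criterion above can be sketched directly: classify each face by the sign of the zv component of its normal and collect the edges shared by exactly one visible and one invisible face. The cube connectivity below is illustrative data, not from the book.

```python
def silhouette_edges(face_normals, face_edges):
    """Return the silhouette of a polyhedron.

    face_normals: {face: (nx, ny, nz)}; a face is visible when nz > 0.
    face_edges:   {face: [edge, ...]}, each edge a frozenset of vertex ids.
    A silhouette edge is shared by one visible and one invisible face.
    """
    visible = {f: n[2] > 0 for f, n in face_normals.items()}
    # Map each edge to the faces sharing it
    edge_faces = {}
    for f, edges in face_edges.items():
        for e in edges:
            edge_faces.setdefault(e, []).append(f)
    return {e for e, fs in edge_faces.items()
            if len(fs) == 2 and visible[fs[0]] != visible[fs[1]]}

# Axis-aligned cube viewed along zv: only the front face F1 is visible,
# so its four edges form the silhouette.
normals = {"F1": (0, 0, 1), "F2": (0, 0, -1), "F3": (1, 0, 0),
           "F4": (-1, 0, 0), "F5": (0, 1, 0), "F6": (0, -1, 0)}
edges = {"F1": [frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]],
         "F2": [frozenset(e) for e in [(5, 6), (6, 7), (7, 8), (8, 5)]],
         "F3": [frozenset(e) for e in [(2, 3), (3, 7), (7, 6), (6, 2)]],
         "F4": [frozenset(e) for e in [(1, 4), (4, 8), (8, 5), (5, 1)]],
         "F5": [frozenset(e) for e in [(3, 4), (4, 8), (8, 7), (7, 3)]],
         "F6": [frozenset(e) for e in [(1, 2), (2, 6), (6, 5), (5, 1)]]}
sil = silhouette_edges(normals, edges)
print(sorted(tuple(sorted(e)) for e in sil))  # [(1, 2), (1, 4), (2, 3), (3, 4)]
```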
If the face edges are directed clockwise about A (Fig. 9.9c), the angle criterion
reverses. The angle criterion is sometimes referred to as the vorticity of edge CD
with respect to point A.

Fig. 9.9 (edge AB partially visible)
Coherence is a measure of how rapidly a scene or its image changes. It describes
the gradual changes in the appearance of a scene or its image from one place to
another. If, for example, an initially sorted deck of cards is slightly shuffled, the
remaining order in the deck can be of great use in re-sorting it. Similarly, algorithms
that utilize coherence in their sorting techniques are more effective than other
algorithms that do not. The following types of coherence can be identified:
1. Edge Coherence: The visibility of an edge changes only when it crosses another edge.
2. Face Coherence: If a part of a face is visible, the entire face is probably visible.
Moreover, penetration of faces is a relatively rare occurrence and therefore is not
usually checked by hidden line removal algorithms.
3. Geometric Coherence: Edges that share the same vertex or faces that share the same edge
have similar visibilities in most cases. For example, if three edges (the case of a box) share the
same vertex, they may be all visible, all invisible, or two visible and one invisible. The proper
combination depends on the angle between any two edges (less or greater than 180°) and on
the location of any of the edges relative to the plane defined by the other two edges.
4. Frame Coherence: A picture does not change very much from frame to frame.
5. Scan-line Coherence: Segments of a scene visible on one scan line are most probably
visible on the next line.
6. Area Coherence: A particular element (area) of an image and its neighbors are likely to have
the same visibility and to be influenced by the same face.
7. Depth Coherence: The different surfaces at a given screen location are generally well
separated in depth relative to the depth range of each.
The first three types of coherence are object-space based while the last four are
image-space based. If an image exhibits a particular predominant coherence, that
coherence would form the basis of the related hidden line removal algorithm.
Formulation and Implementation
The hidden line removal problem can be stated as follows: for a given three-
dimensional scene and a given viewing direction, find the visible parts of its
projection onto the two-dimensional image (picture) space, which is the Xv Yv plane
of the VCS. A set of generic steps that can implement a solution to the above problem
is shown in Fig. 9.12.
The first step is to prepare the scene database. A three-dimensional scene is a set of
three-dimensional objects, each defined by its geometry and topology. A scene model
can be organized as a tree whose root represents the three-dimensional scene.
Databases of wireframe and surface models have to be modified to identify faces
(polygons) and the order of their edges (clockwise or counterclockwise). These
modifications are typically achieved in an interactive way by requesting the user to
digitize the edges of each face in a given order.
The second step is to apply the proper geometric transformations (Chap. 8) based on the
viewing direction to the three-dimensional scene data to obtain the two-dimensional image "raw"
data. At this stage, the image is a maze of all edges (visible and invisible). These
transformations are modified as discussed earlier to also produce the depth information,
which is stored in the image database for depth comparison purposes later.
The next steps (sorting and applying visibility techniques) shown in Fig. 9.12 are
interrelated and may be difficult to identify as such. Nevertheless, we apply one or more
of the visibility techniques covered earlier with the aid of a sorting technique. The surface
test to eliminate the back faces is usually sufficient to solve the hidden line problem if the
scene is only one convex polyhedron without holes. Otherwise a combination of techniques
is usually required. It is this combination of visibility and sorting techniques that
differentiates the existing algorithms. In order to apply the visibility techniques to the
image data, the sorting of this data by either polygons or edges is required.
Upon the completion of the sorting and the application of the visibility techniques
according to the visibility criteria, the hidden edges (or parts of edges) have been
identified from the image data. The last step in the algorithm is to display and/or
plot the final images.
9.3.6 Sample Hidden Line Algorithms
The basis of a visibility algorithm is to test whether a
picture element is hidden or not. A visibility algorithm can be
described as v(H, C) where H and C are the sets of the hidden and the
covering candidate edges of the scene under consideration,
respectively. Since every edge in the H set has to be tested
against all the edges in the C set, the algorithm requires H × C visibility tests.
The different visibility algorithms are mainly distinguished by the strategy used
to apply the visibility test, which in turn determines the H and C sets.
The efficiency of a visibility algorithm depends on the choice of the H and C sets.
Existing hidden line removal algorithms can take one of three approaches: the
edge-oriented approach, the silhouette (contour)-oriented approach or the area-oriented
approach. To examine the efficiency of each approach, let E and S be the number
of the edges and silhouette edges of a picture respectively. The edge-oriented
approach forms the basis of most existing hidden line algorithms. The underlying
strategy is the explicit computation of the visible segments of all individual
edges.
The visibility calculation consists of the test of all edges against all surfaces.
Therefore, H = E and C = E. Thus E × E visibility tests are required. This approach
is inefficient because it tests all edges against each other, whether they intersect or
not. It does not recognize the structural information of a picture. Sorting of all
edges improves the efficiency of the approach.
In the silhouette (contour)-oriented approach, the silhouette edges are calculated
first. The visibility calculation consists of testing all the edges against all the
silhouette edges only. Thus, all the edges are hidden candidates but only the
silhouette edges are covering candidates; that is, H = E and C = S. The calculation
rate is estimated at E × S. This approach is more efficient than the edge-oriented
approach. However, some edges (after sorting) will still be tested against some
nonintersecting silhouettes. Moreover, intersections of homogeneously (fully)
hidden edges against silhouette edges are unnecessarily computed. However, the
tests of homogeneously covering (visible) edges against each other are avoided
completely.
The area-oriented approach aims at the recognition of the visible areas of a picture. This
approach calculates the silhouette edges and connects them to form closed polygons
(areas). The connection can be done in S × S steps. The visibility calculations begin
with the test of all silhouette edges against each other. Thus H = S and C = S, and the
calculation rate is S × S. This approach works better than the silhouette-oriented
approach. First, sorting is reduced to the silhouette edges. Second, only silhouettes
(contours) are tested against each other, and the computation of intersection points of
both homogeneously hidden and homogeneously covering edges is avoided.
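The relative costs of the three approaches can be illustrated numerically; with E edges and S silhouette edges (S is typically much smaller than E), the H × C products are roughly E·E, E·S and S·S.

```python
def visibility_test_counts(E, S):
    """Approximate visibility-test counts (the H x C products) for the
    edge-oriented, silhouette-oriented and area-oriented approaches."""
    return {"edge-oriented": E * E,
            "silhouette-oriented": E * S,
            "area-oriented": S * S}

# An illustrative scene: 1000 edges, of which 50 are silhouette edges
counts = visibility_test_counts(1000, 50)
print(counts)  # {'edge-oriented': 1000000, 'silhouette-oriented': 50000, 'area-oriented': 2500}
```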
There exists a wide wealth of hidden line algorithms that utilize one or more of the
visibility techniques discussed in Sec. 9.3.2 and follow one of the three
approaches described above. These include the priority algorithm, the plane-sweep
paradigm, algorithms for scenes of high complexities, algorithms for finite element
models of planar elements, area-oriented algorithms, the overlay algorithm for
surfaces defined by u-v grids (Chap. 5) and algorithms for projected grid surfaces.
In the remainder of this section, we discuss the priority algorithm and an area-
oriented algorithm. Readers who are interested in the details of other algorithms
should consult the related literature.
The intersection of two faces may be an area, as in the case of F1 and F4; an edge,
as for faces that share an edge; or an empty set (no intersection), as for faces F1 and
F6. In the case of an area of intersection (A in Fig. 9.13a), the (xv, yv) coordinates of a
point c inside A can be computed (notice the corner points of A are
known). Utilizing Eq. (9.3) for both faces F1 and F4, the two
corresponding zv values of point c can be calculated and compared. The
face with the highest zv value is assigned the highest priority. In the case
of an edge of intersection, both faces are assigned the same priority.
They obviously do not obscure each other, especially after the removal of
the back faces. In the case of no face intersection, no priority is assigned.
Fig. 9.13b Face and Priority Lists over Iterations 1 to 4
Let us apply the above strategy to the scene of Fig. 9.13. Faces F1 and F2 intersect in
an edge; therefore both faces are assigned priority 1. F1 and F4 intersect in an area. Using
the depth test and assuming the depth of F4 is larger than that of F1, F4 is assigned
priority 2. When we intersect the next two faces in the list we obtain an empty set, that
is, no priority assignment is possible. In this case, the face F1 is moved to the end of the
face list and the sorting process to determine priorities starts all over again. In each
iteration, the first face in the face list is assigned priority 1. The end of each iteration is
detected by no intersection. Figure 9.13b shows four iterations before obtaining the final
priority list. In iteration 4, faces F4 to F6 are assigned the priority 1 first. When F4 is
intersected with F1 the depth test shows that F1 has higher priority. Thus, F1 is assigned
priority 1 and the priority of F4 to F6 is dropped to 2.
4. Reorder the face and priority lists so that the highest priority is on top of the list. In this case,
the face and priority lists are [F1, F2, F3, F4, F5, F6] and [1, 1, 1, 2, 2, 2] respectively.
5. In the case of a raster display, hidden line removal is done by the hardware (frame buffer of the
display). We simply display the faces in the reverse order of their priority. Any faces that would
have to be hidden by others would thus be displayed first, but would be covered later either
partially or entirely by faces of higher priority.
6. In the case of a vector display, the hidden line removal must be done by software by determining
coverings. For this purpose, the edges of a face are compared with all other edges of higher priority.
An edge list can be created that maintains a list of all line segments that will have to be drawn as
visible. Visibility techniques such as the containment test (Sec. 9.3.2.2) and edge intersection (Sec. 9.3.2.5)
are useful in this case. Figure 9.13c shows the scene with hidden lines removed.
In some scenes, ambiguities may result after applying the priority test. Figure 9.14 shows an example.
The figure shows a case in which the order of faces is cyclic: face F1 covers F2, F2 covers F3 and F3
covers F1. The reader is encouraged to find the priority list that produces this cyclic ordering and coverage.
To resolve this ambiguity, additional software to determine coverings (similar to that of vector displays)
must be added to the priority algorithm.
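Step 5 of the priority algorithm (raster display in reverse priority order) can be sketched with the face and priority lists of the example; the helper below is an illustrative simplification that assumes the priorities have already been assigned.

```python
def display_order(faces, priority):
    """Painter's-style display order: draw faces in reverse priority so
    that nearer faces (priority 1) are drawn last and overwrite farther
    ones in the frame buffer. Python's sort is stable, so faces with
    equal priority keep their list order."""
    return sorted(faces, key=lambda f: priority[f], reverse=True)

# Face and priority lists from the example in the text
faces = ["F1", "F2", "F3", "F4", "F5", "F6"]
priority = {"F1": 1, "F2": 1, "F3": 1, "F4": 2, "F5": 2, "F6": 2}
print(display_order(faces, priority))  # ['F4', 'F5', 'F6', 'F1', 'F2', 'F3']
```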
9.3.6.2 Area-Oriented Algorithm
The area-oriented algorithm described here subdivides the data set of a given scene in a stepwise
fashion until all visible areas in the scene are determined and displayed.
The data structure that the algorithm operates on is of the winged-edge type discussed in Chap. 6
and shown in Figs. 6.29 and 6.30. In this algorithm, as in the priority algorithm, no penetration of
faces is allowed. Considering the scene shown in Fig. 9.13a, the area-oriented algorithm can be
described as follows.
1. Identify silhouette polygons. Silhouette polygons are polygons made of connected silhouette edges.
2. Assign quantitative hiding (QH) values to edges of silhouette polygons. This is achieved
by intersecting the polygons (the containment test can be utilized first as a quick test).
The intersection points define the points where the value of QH may change. Applying
the depth test to the points of intersection (P1 and P2 in Fig. 9.15), we determine the
segments of the silhouette edges that are hidden. For example, if the depth test at P1
shows that zv at P1 of S1 is smaller than that of S2, edge C1C2 is partially visible.
Similarly, the depth test at P2 shows that edge C2C3 is also partially visible. To
determine which segment of an edge is visible, the visibility test of Sec. 9.3.2.5 can be
used. Determination of the values of QH at the various edges or edge segments of
silhouette polygons is based on the depth test. Step 2 in Fig. 9.15 shows the values of QH.
A value of 0 indicates that an edge or segment is visible and a value of 1 indicates that an
edge or segment is invisible.
3. Determine the visible silhouette segments. From the values of QH, the visibility of
silhouette segments can be determined with the following rules in mind. If a closed
silhouette polygon is completely invisible, it need not be considered any further.
Otherwise, its segments with the least QH values are visible (step 3 in Fig. 9.15).
4. Intersect the visible silhouette segments with partially visible faces. This step is used to
determine if the silhouette segments hide or partially hide non-silhouette edges in
partially visible faces. In Fig. 9.15, the edges E1, E2, … are intersected with the internal
edges (edges of the square in the face) of F1 and the visible segments of the internal
edges are determined. By accessing only the silhouette edges of the covering silhouette
polygon and the partially visible face, the algorithm avoids any unnecessary calculations.
5. Display the visible areas of the scene (the last step in Fig. 9.15).
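The QH rule of step 3 above can be sketched as follows; the segment names are illustrative, not taken from Fig. 9.15.

```python
def visible_segments(qh):
    """Apply the QH rule to one silhouette polygon.

    qh maps segment name -> quantitative hiding value (0 means nothing
    covers the segment). If every segment is hidden, the polygon is
    dropped from further consideration; otherwise the segments with the
    least QH value are visible.
    """
    least = min(qh.values())
    if least > 0:                 # completely invisible polygon
        return set()
    return {seg for seg, v in qh.items() if v == least}

# Illustrative segments of one silhouette polygon
print(visible_segments({"C1P1": 0, "P1C2": 1, "C2P2": 1, "P2C3": 0}))
```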
The two hidden line algorithms discussed in this section are sample
algorithms to enable understanding of the basic nature of the hidden line
removal problem. The area-oriented algorithm is more efficient than the
priority algorithm because it hardly involves any unnecessary edge/face
intersections.
The overlay hidden line algorithm mentioned in Sec. 9.3.6 is suitable for curved
surfaces. The algorithm begins by calculating the u-v grid using the surface
equation. It then creates the grid surface with linear edges. Various criteria discussed
previously can be utilized to determine the visibility of the grid surface.
There is no best hidden line algorithm. Many algorithms exist and some are more
efficient and faster in rendering images than others for certain applications.
Firmware and parallel-processing computations of hidden line algorithms make it
possible to render images in real time. This adds to the difficulty of deciding on a
best algorithm.

Sample priority and area-coherence algorithms for raster displays are covered in
this section; the painter's algorithm described previously is a sample priority algorithm.
The hidden solid removal problem involves the display of solid models with hidden lines or
surfaces removed. Due to the completeness and unambiguity of solid models as
discussed in Chap. 6, the hidden solid removal is done fully automatically. Therefore,
commands available on CAD/CAM systems for hidden solid removal, say a "hide
solid" command, require minimum user input. In fact, all that is needed to execute the
command is that the user identifies the solid to be hidden by digitizing it. The data
structure of a solid model (see Fig. 6.29) has all the necessary information to solve
the hidden line or hidden surface problem. All the algorithms discussed in Secs. 9.3
and 9.4 are applicable to hidden solid removal of B-rep models. Selected algorithms
such as the z-buffer have been extended to CSG models.
For displaying CSG models, both the visibility problem and the problem of
combining the primitive solids into one composite model have to be solved. There
are three approaches to display CSG models. The first approach converts the CSG
model into a boundary model that can be rendered with the standard hidden surface
algorithms. The second approach utilizes a spatial subdivision strategy. The third
approach works directly on the CSG model; the CSG ray-tracing and scan-line
algorithms utilize this approach. The attractiveness of the approach lies in the
conversion of the three-dimensional solid/solid intersection problem into a
one-dimensional intersection calculation. Due to its popularity and generality,
this section covers in more detail the ray-tracing (also called ray-casting) algorithm.
When the focal length, the distance between the focal point and the screen, is infinite,
parallel views result (Fig. 8.19) and all light rays become parallel to the z axis and
perpendicular to the screen (the Xv Yv plane). Figure 9.18 shows the elements of a
camera model as described here. The XYZ coordinate system shown is the same as
the VCS; we have dropped the subscript v for convenience. The origin of the XYZ
system is taken to be the center of the screen.
Fig. 9.18 Camera Model for Ray Tracing
A ray is a straight line which is best defined in a parametric form as a point
(x0, y0, z0) and a direction vector (Δx, Δy, Δz). Thus, a ray is defined as
[(x0, y0, z0) (Δx, Δy, Δz)]. For a parameter t, a point (x, y, z) on the ray is given by

x = x0 + t Δx
y = y0 + t Δy        (9.8)
z = z0 + t Δz

This form allows points on a ray to be ordered and accessed via a single
parameter t. Thus, a ray in a parallel view that passes through the pixel (x, y) is
defined as [(x, y, 0) (0, 0, 1)]. In a perspective view, the ray through pixel (x, y) is
defined as [(0, 0, ze) (x, y, −ze)], given the screen center (0, 0, 0) and the focal
point (0, 0, ze).
In the parallel view, t is taken to be zero at the pixel location, while it is zero at
the focal point (and one at the pixel location) in the perspective view.
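The ray definitions above can be sketched as follows; the perspective direction (x, y, −ze) is inferred from the camera geometry described in the text and should be treated as an assumption.

```python
def ray_point(origin, direction, t):
    """Evaluate Eq. (9.8): the point on the ray at parameter t."""
    return tuple(o + t * d for o, d in zip(origin, direction))

def parallel_ray(x, y):
    """Ray through pixel (x, y) in a parallel view: [(x, y, 0) (0, 0, 1)]."""
    return (x, y, 0.0), (0.0, 0.0, 1.0)

def perspective_ray(x, y, ze):
    """Ray through pixel (x, y) in a perspective view, starting at the
    focal point (0, 0, ze); the direction (x, y, -ze) reaches the pixel
    on the screen at t = 1 (assumed geometry)."""
    return (0.0, 0.0, ze), (x, y, -ze)

o, d = parallel_ray(2.0, 3.0)
print(ray_point(o, d, 5.0))   # (2.0, 3.0, 5.0)
o, d = perspective_ray(2.0, 3.0, 10.0)
print(ray_point(o, d, 1.0))   # (2.0, 3.0, 0.0) -- the pixel, at t = 1
```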
A ray-tracing algorithm takes the above ray definition given by Eqs. (9.8) as an
input and outputs information about how the ray intersects the scene. Knowing the
camera model and the solid in the scene, the algorithm can find where the given ray
enters and exits the solid, as shown in Fig. 9.19 for a parallel view. The output
information is an ordered list of ray parameters, ti, which denotes the enter/exit
points, and a list of pointers, Si, to the surfaces (faces) through which the ray passes.
The ray enters the solid at point t1, exits at t2, enters again at t3 and finally exits at t4.
Point t1 is closest to the screen and point t4 is furthest away. The lists of ray
parameters and surface pointers suffice for various applications.
Fig. 9.19 A Ray Intersecting a Solid (parallel view)
A ray-tracing algorithm for CSG models consists of three main modules: ray/
primitive intersection, ray/primitive classification and ray/solid (or ray/ Object)
classification.
Ray/primitive Intersection Utilizing the CSG tree structure, the general ray/solid
intersection problem reduces to the ray/primitive intersection problem. The ray
enters and exits the solid via the faces and the surfaces of the primitives.
For convex primitives (such as a block, cylinder, cone and sphere), the ray/ primitive
intersection test has four possible outcomes: no intersection (the ray misses the primitives),
the ray is tangent to (touches) the primitive at one point, the ray
lies on a face of the primitive, or the ray intersects the primitive at two different
points. In the case of a torus, the ray may be tangent to it at one or two points and
may intersect it at as many as four points.
The ray/primitive intersection is evaluated in the local coordinate system of the
primitive (see Fig. 6.4) because it is very easy to find there. The equation of a primitive
[Eqs. (6.61) to (6.66)] expressed in its local coordinate system is simple and
independent of any rigid-body motion the primitive in the scene may undergo. In the
local coordinate system, all elliptic cylinders are the same primitive: a right
circular cylinder at the origin. Similarly, ellipsoids are the same sphere, elliptic cones
are the same right circular cone and parallelepipeds are all the same block.
Given a ray originating in the screen coordinate system (SCS), it must be transformed
into the primitive (local) coordinate system (PCS) via the scene (model) coordinate
system (MCS) in order to find the ray/primitive intersection points. Each
primitive has its own local-to-scene transform and inverse, but there is only one
scene-to-screen transform and inverse. The local-to-scene [TLS] and scene-to-screen [TSS]
transformation matrices are determined from user input to orient primitives or define
views of the scene respectively. The transformation matrix [T] that transforms a ray
from a local to a screen coordinate system is given by
[T] = [TSS][TLS]        (9.10)

and a ray is transformed from the screen to a local coordinate system by the inverse

[T]^-1 = [TLS]^-1 [TSS]^-1        (9.11)
If Δz is zero, the ray is parallel to the plane, so they do not intersect. If the point of
intersection lies within the bounds of the primitive, then it is a good ray/primitive
intersection point. The bounds test for this point on the XY plane of a block
[given by Eq. (6.61)] is
Rearranging gives

[(Δx)² + (Δy)²] t² + 2 (x0 Δx + y0 Δy) t + x0² + y0² − R² = 0

Using the quadratic formula, we find t as

t = (−B ± √(B² − AC)) / A

where

A = (Δx)² + (Δy)²
B = x0 Δx + y0 Δy        (9.16)
C = x0² + y0² − R²

Obviously, the ray will intersect the cylinder only if A ≠ 0 and B² − AC ≥ 0.
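The quadratic solution above can be sketched for the infinite right circular cylinder x² + y² = R² in local coordinates; the height bounds test of the following paragraph would then be applied to the resulting t values.

```python
import math

def ray_cylinder_t(x0, y0, dx, dy, R):
    """Solve [(dx)^2 + (dy)^2] t^2 + 2 (x0 dx + y0 dy) t + x0^2 + y0^2 - R^2 = 0
    for the ray parameters t where the ray meets the cylinder x^2 + y^2 = R^2.
    Returns [] when the ray misses the cylinder or runs parallel to its axis
    (A = 0), matching the condition A != 0 and B^2 - AC >= 0."""
    A = dx * dx + dy * dy
    B = x0 * dx + y0 * dy
    C = x0 * x0 + y0 * y0 - R * R
    if A == 0 or B * B - A * C < 0:
        return []
    root = math.sqrt(B * B - A * C)
    return sorted([(-B - root) / A, (-B + root) / A])

# Ray along +x through the axis of a cylinder of radius 2
print(ray_cylinder_t(-5.0, 0.0, 1.0, 0.0, 2.0))  # [3.0, 7.0]
```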
Having found the one or two values of t, the bounds test for the cylindrical surface is

0 ≤ (z0 + t Δz) ≤ H        (9.17)

where H is the cylinder height. Intersecting rays with a torus is more
complicated because it is a quartic surface.
It is left as an exercise (see the problems at the end of the chapter) for the reader.
Ray/primitive Classification The classification of a ray with respect to a primitive is simple.
Utilizing the set membership classification function introduced in Sec. 6.5.3 and the
ray/primitive intersection points, the "in," "out," and "on" segments of the ray can be found.
As shown in Fig. 9.19, the odd intersection points signify the beginning of "in" segments
and the end of "out" segments.
For the convex primitives, if the ray misses or touches the primitive at one point, it is
classified as completely "out." If the ray intersects the primitive in two different points, it is
divided into three segments: "out-in-out." If the ray lies on a face of the primitive, it is
classified as "out-on-out." With respect to a torus, a ray is classified as "out," "out-in-out,"
or "out-in-out-in-out."
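The pairing of intersection points into "in" segments can be sketched as follows.

```python
def in_segments(ts):
    """Pair up sorted intersection parameters: the odd-numbered points
    (1st, 3rd, ...) begin 'in' segments and the even-numbered ones end
    them; everything outside these intervals is 'out'."""
    return [(ts[i], ts[i + 1]) for i in range(0, len(ts) - 1, 2)]

# The t1..t4 example from the text: enter, exit, enter, exit
print(in_segments([1.0, 2.0, 3.0, 4.0]))  # [(1.0, 2.0), (3.0, 4.0)]
```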
Ray/solid Classification Combining ray/primitive classifications produces the ray/solid (or
ray/object) classification. Ray/solid classification produces the "in," "on," and/or "out"
segments of the ray with respect to the solid. It also reorders ray/primitive intersection points
and gives the closest point and surface of the solid to the camera. To combine ray/primitive
classifications, a ray-tracing algorithm starts at the top of the CSG tree, recursively descends
to the bottom, classifies the ray with respect to the solid primitives and then returns up the
tree combining the classifications of the left and right subtrees. Combining the "on"
segments requires the use of neighborhood information as discussed in Chap. 6. Figures 9.20
and 9.21 illustrate ray/solid classification. The solid lines in Fig. 9.21 are "in" segments. The
combine operation is a three-step process. First, the ray/primitive intersection points from the
left and right subtrees are merged in sorted order, forming a segmented composite ray.
Second, the segments of the composite ray are classified according to the boolean operator
and the classifications of the left and right subtrees along these segments. Third, the
composite ray is simplified by merging segments with the same classification. Figure 9.22
illustrates these three steps.
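The three-step combine operation can be sketched for classifications represented as lists of "in" intervals; this interval representation is an illustrative choice, not the book's own data structure.

```python
def combine(left, right, op):
    """Combine two ray classifications given as lists of 'in' intervals.

    Step 1: merge the endpoints of both subtrees into a composite ray.
    Step 2: classify each composite segment with the boolean operator.
    Step 3: simplify by merging adjacent segments with equal classification.
    """
    def is_in(intervals, t):
        return any(a <= t < b for a, b in intervals)

    points = sorted({t for seg in left + right for t in seg})
    out = []
    for a, b in zip(points, points[1:]):
        mid = (a + b) / 2.0
        l, r = is_in(left, mid), is_in(right, mid)
        keep = {"union": l or r,
                "intersection": l and r,
                "difference": l and not r}[op]
        if keep:
            if out and out[-1][1] == a:       # simplify: merge adjacent
                out[-1] = (out[-1][0], b)
            else:
                out.append((a, b))
    return out

L = [(1.0, 5.0)]   # left subtree: "in" from t=1 to t=5
R = [(3.0, 7.0)]   # right subtree: "in" from t=3 to t=7
print(combine(L, R, "union"))         # [(1.0, 7.0)]
print(combine(L, R, "intersection"))  # [(3.0, 5.0)]
print(combine(L, R, "difference"))    # [(1.0, 3.0)]
```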
The combine operation is applied with the boolean operator of the CSG node; Table 9.1
defines the rules for combining the left (L) and right (R) classifications for the union
(L ∪ R), intersection (L ∩ R) and difference (L − R) operators.

Fig. 9.21 Combining Ray Classifications

Table 9.1 Combine Rules for Ray Segment Classifications

L      R      L ∪ R   L ∩ R   L − R
IN     IN     IN      IN      OUT
IN     OUT    IN      OUT     IN
OUT    IN     IN      OUT     OUT
OUT    OUT    OUT     OUT     OUT
The display loop of the algorithm scans the pixels and compares the surface pointers
of adjacent pixels: if S1 in pixel (x, y) is different from S1 in the neighboring pixel,
then a pixel-long horizontal line centered at the pixel is displayed.
Figure 9.24 shows the tree of box enclosures for the solid shown in Fig. 9.20.
The ray/box intersection test is basically two-dimensional because rays usually
start on the screen and extend infinitely into the scene. When rays are bounded in
depth (as in mass property applications), a ray/depth test can be added. Unlike the
union and intersection operators, the subtraction operator does not obey the
rules of algebra: the enclosure of A − B is equal to the enclosure of A,
regardless of B.
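These enclosure rules can be sketched with two-dimensional boxes given as (xmin, ymin, xmax, ymax), matching the two-dimensional nature of the ray/box test.

```python
def box_union(a, b):
    """Enclosure of A u B: the smallest box bounding both boxes."""
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = a, b
    return (min(ax1, bx1), min(ay1, by1), max(ax2, bx2), max(ay2, by2))

def box_intersection(a, b):
    """Enclosure of A n B: the overlap of the two boxes."""
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = a, b
    return (max(ax1, bx1), max(ay1, by1), min(ax2, bx2), min(ay2, by2))

def box_difference(a, b):
    """Enclosure of A - B: equal to the enclosure of A, regardless of B."""
    return a

A = (0, 0, 4, 4)
B = (2, 2, 6, 6)
print(box_union(A, B))         # (0, 0, 6, 6)
print(box_intersection(A, B))  # (2, 2, 4, 4)
print(box_difference(A, B))    # (0, 0, 4, 4)
```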
9.5.1.4 Remarks
The sampling rate is under user control. As the sampling becomes sparser, the
chance that solid edges and slivers may be overlooked becomes larger.
Line drawings, still the most common means of communicating the geometry of
mechanical parts, are limited in their ability to portray intricate shapes, and features
other than shape, such as surface finish, cannot be represented in line drawings.
Shaded or color images convey shape information that line drawings cannot. Shaded
images can also convey material type (plastic or metallic look).
Shaded-image-rendering algorithms filter information by displaying only the visible
surface. Many spatial relationships that are unresolved in simple wireframe displays
become clear with shaded displays. Shaded images are easier to interpret because they
resemble the real objects. Shaded images also have viewing problems not present in
wireframe displays. Objects of interest may be hidden or partially obstructed from view,
in which case various shaded images may be obtained from various viewing points.
Critical geometry such as lines, arcs and vertices are not explicitly shown. Well-known
techniques such as shaded-image/ wireframe overlay (Fig. 9.25), transparency and
sectioning can be used to resolve these problems.
To render an image, a pinhole camera model is almost universally used. Rendering
begins by solving the hidden surface removal problem to determine which objects and/or
portions of objects are visible in the scene. As the visible surfaces are found, they must
be broken down into pixels and shaded correctly. This process must take into account
the position and color of the light sources and the position, orientation and surface
properties of the visible objects.
Visual Realism
The spot size of the display must be adjusted to minimize the user's perception of
the raster array: an array of dots appears if the spot size is too small, but the
sharpness of the image suffers if the spot size is too large. The fidelity of a display
is a measure of how well the light energy calculated by a shading model is reproduced
on the screen. Nonlinearities in the intensity control circuits or in the phosphor
response can distort the amount of energy actually emitted at a pixel.
Ip = Id + Is + Ib    (9.18)

where Ip, Id, Is and Ib are respectively the resulting intensity (the amount of shade) at
point P, the intensity due to the diffuse reflection component of the point light source,
the intensity due to the specular reflection component and the intensity due to ambient
light. Equation (9.18) is written in a vector form to enable modeling of colored
surfaces. For the common red, green and blue color system, Eq. (9.18) represents three
scalar equations, one for each color. For simplicity of presentation, we develop Eq.
(9.18) for one color and therefore refer to it as Ip = Id + Is + Ib from now on
(dropping the vector notation).
To develop the intensity components in Eq. (9.18), consider the shading model
shown in Fig. 9.26. The figure shows the geometry of shading a point P on a surface S
due to a point light source. An incident ray falls from the source to P at an angle θ
(angle of incidence) measured from the surface unit normal n̂ at P. The unit vector Î
points from P to the light source. The reflected ray leaves P with an angle of reflection
θ (equal to the angle of incidence) in the direction defined by the unit vector r̂. The
unit vector v̂ defines the direction from P to the viewing eye.
Id = IL Kd (n̂ · Î)    (9.20)
Note that since diffuse light is radiated equally in all directions, the position of the
viewing eye is not required by the computations and the maximum intensity occurs
when the surface is perpendicular to the light source. On the other hand, if the
angle of incidence θ exceeds 90°, the surface is hidden from the light source and
Id must be set to zero. A sphere shaded with this model (diffuse reflection only) will be
brightest at the point on the surface between the center of the sphere and the
light source and will be completely dark on the far half of the sphere from the
light. Most shading models assume the point source of light to be coincident with the
viewing eye, so no shadows can be cast. For parallel projection, this means that light
rays striking a surface are all parallel. This means that n̂ · Î is constant for the
entire surface, that is, the intensity Id is constant for the surface, as shown by
Eq. (9.20).
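As a minimal sketch of Eq. (9.20) (the function name and tuple-based vector representation are illustrative, not from the book), the diffuse term with the 90° clamp can be written as:

```python
def diffuse_intensity(I_L, K_d, normal, to_light):
    """Lambertian diffuse term, Eq. (9.20): Id = IL * Kd * (n . l).

    `normal` and `to_light` are unit vectors; `to_light` points from the
    surface point P toward the light source.  When the angle of incidence
    exceeds 90 degrees the dot product goes negative and Id is set to zero.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, to_light))
    return I_L * K_d * max(n_dot_l, 0.0)

# Brightest where the normal points at the light source ...
print(diffuse_intensity(1.0, 0.8, (0, 0, 1), (0, 0, 1)))    # 0.8
# ... and completely dark when the surface faces away (angle > 90 degrees).
print(diffuse_intensity(1.0, 0.8, (0, 0, 1), (0, 0, -1)))   # 0.0
```

Because the viewing direction never appears, the result depends only on the surface orientation relative to the light, as the text observes.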
9.6.1.2 Specular Reflection
Specular reflection is a characteristic of shiny surfaces. Highlights visible on
shiny surfaces are due to specular reflection, while other light reflected from
these surfaces is caused by diffuse reflection. The location of a highlight on a
shiny surface depends on the directions of the light source and the viewing eye.
If you illuminate an apple with a bright light, you can observe the effects of
specular reflection. Note that at the highlight the apple appears to be white
(not red), which is the color of the incident light.
Specular reflection from a point source is modeled as

Is = IL W(θ) cos^n α    (9.21)

cos α = r̂ · v̂    (9.22)

where α is the angle between the reflection direction r̂ and the viewing direction v̂,
W(θ) is the specular reflectance of the surface as a function of the angle of incidence
θ, and the exponent n depends on the surface glossiness: n approaches infinity for a
perfect mirror and decreases for a dull surface.

If both the viewing eye and the point source of light are coincident and placed at
infinity, (r̂ · v̂)^n becomes constant for the entire surface. This is because n̂ · Î
is also constant. Combining the ambient, diffuse and specular components gives

Ip = Ia Ka + IL [Kd (n̂ · Î) + W(θ)(r̂ · v̂)^n]    (9.24)

If W(θ) is set to the constant Ks, this equation becomes

Ip = Ia Ka + IL [Kd (n̂ · Î) + Ks (r̂ · v̂)^n]    (9.25)
All the unit vectors can be calculated from the geometry of the shading model, while
the constants and intensities on the right-hand side of the above equation are assumed
by the model. Additional intensity terms can be added to the equation to create more
shading effects such as shadowing, transparency and texture. Some of these effects are
discussed later in this section.
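The complete model of Eq. (9.25) can be sketched as follows. The function name, the vector representation and the relation r̂ = 2(n̂ · Î)n̂ − Î used to derive the reflection vector are assumptions of this sketch, not notation from the book:

```python
def shade_point(Ia, Ka, IL, Kd, Ks, n_exp, normal, to_light, to_eye):
    """Evaluate Eq. (9.25): Ip = Ia*Ka + IL*[Kd*(n.l) + Ks*(r.v)^n].

    All direction arguments are unit vectors; `to_light` points from the
    surface point P toward the light, `to_eye` from P toward the viewer.
    The reflection vector is derived here as r = 2(n.l)n - l.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_dot_l = dot(normal, to_light)
    if n_dot_l <= 0.0:                 # light is behind the surface:
        return Ia * Ka                 # only the ambient term survives
    r = tuple(2.0 * n_dot_l * nc - lc for nc, lc in zip(normal, to_light))
    r_dot_v = max(dot(r, to_eye), 0.0)
    return Ia * Ka + IL * (Kd * n_dot_l + Ks * r_dot_v ** n_exp)

# Viewer sitting on the mirror direction: full specular highlight.
print(shade_point(0.2, 0.5, 1.0, 0.6, 0.4, 100, (0, 0, 1), (0, 0, 1), (0, 0, 1)))
```

A large exponent (such as the n = 100 used in Example 9.1) makes the highlight fall off sharply as the viewer moves away from the mirror direction.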
The unit vectors depend on the surface geometry and can be computed anew at each
point. This would require a large number of calculations; sometimes they are evaluated
incrementally. If a bicubic surface is to be shaded in this way, its parametric
representation is used to subdivide the visible surfaces into patches whose sizes are
equal to or less than the pixel size. After the patches are determined, the unit normal
vector of each patch is calculated exactly and the patch is shaded using Eq. (9.25).
Most surfaces, including those that are curved, are described by polygonal meshes
when the visible surface calculations are to be performed by the majority of rendering
algorithms. The majority of shading techniques are therefore applicable to objects
modeled as polyhedra. Among the many existing shading algorithms, we discuss three
of them: constant shading, Gouraud or first-derivative shading and Phong or second-
derivative shading.
9.6.2.1 Constant Shading
This is the simplest and least realistic shading algorithm. Since the unit normal
vector of a polygon never changes, each polygon will have just one shade: an entire
polygon has a single intensity value calculated from Eq. (9.25). Constant shading
makes the polygonal representation obvious and produces unsmooth shaded images
(intensity discontinuities). Actually, if the viewing eye or the light source is very
close to the surface, the shades of the pixels within the polygon will differ
significantly.
The choice of point P (Fig. 9.26) within the polygon becomes necessary if the light
source and the viewing eye are not placed at infinity. Such a choice affects the
calculation of the vectors Î and v̂. P can be chosen to be the center of the polygon.
On the other hand, Î and v̂ can be calculated at the polygon corners and the average
of these values can be used in Eq. (9.25).
9.6.2.2 Gouraud Shading
Gouraud shading is a popular form of intensity interpolation or first-derivative
shading. Gouraud proposed a technique to eliminate (though not completely) intensity
discontinuities between adjacent polygons. Averaged vertex normals such as
½(NA + NB) and ½(NC + ND) are used to interpolate shades between polygons A and B
and polygons C and D respectively. Thus, smooth shading occurs along the shared
polygon edges and along each scan line.

(a) Surface normals (b) Intensity interpolation along polygon edges
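The scan-line step of Gouraud shading can be sketched as follows (the function names and the assumption of uniformly spaced pixels are illustrative). Vertex intensities, computed from the averaged vertex normals, are first interpolated down the polygon edges; each scan line then interpolates linearly between its two edge intersections:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b for 0 <= t <= 1."""
    return a + (b - a) * t

def gouraud_scanline(I_left, I_right, n_pixels):
    """Interpolate one scan line between its two edge intensities.

    Returns one intensity per pixel; intermediate shades vary smoothly,
    which is what hides the intensity discontinuities of constant shading.
    """
    if n_pixels == 1:
        return [I_left]
    return [lerp(I_left, I_right, i / (n_pixels - 1)) for i in range(n_pixels)]

print(gouraud_scanline(0.2, 1.0, 5))   # evenly graded shades, no jumps
```

The same `lerp` would be applied along the polygon edges to obtain `I_left` and `I_right` for each scan line.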
shades are interpolated in between. Let us consider a display with 12 bits of color
output, that is, 2^12 = 4096 simultaneous colors. If we decide that 64 shading grades
per color are required to obtain fairly smooth shades, then 4096/64 = 64 gross
different colors are possible. Actually, only 63 colors are possible, as the remaining
one is reserved for the background. Within each color, 64 different shading grades are
possible. This means that six bits of each pixel are reserved for colors and the other
six for shades. The lookup table of the display would reflect this subdivision.
Usually, the six most significant bits (MSB) correspond to the color and the six least
significant bits (LSB) correspond to the shade, so that when interpolation is performed
to obtain shading grades, the six most significant bits (i.e., the color) remain the
same.
Address 0 of the lookup table holds the background; the 64 consecutive addresses
assigned to each color run from its lightest to its darkest shade, with addresses
4032 to 4095 holding the shades of the last color, 63.
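The 6-bit color/6-bit shade subdivision described above can be sketched as follows (the function names are illustrative):

```python
def pack_pixel(color_index, shade_grade):
    """Pack a 12-bit pixel value: 6 MSB = color index, 6 LSB = shade grade.

    With 2**12 = 4096 lookup-table entries, 64 colors x 64 shades fit
    exactly; interpolating shades changes only the low six bits, so the
    color held in the high six bits is preserved.
    """
    assert 0 <= color_index < 64 and 0 <= shade_grade < 64
    return (color_index << 6) | shade_grade

def unpack_pixel(value):
    """Recover (color index, shade grade) from a packed 12-bit value."""
    return value >> 6, value & 0x3F

p = pack_pixel(5, 17)
print(p, unpack_pixel(p))   # shade interpolation never disturbs the color bits
```

The last table address, 4095, corresponds to color 63 with shade 63, matching the layout described above.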
where I1 and I2 are the intensities of the front and back faces respectively,
calculated using, say, Eq. (9.25). K is a constant that measures the
transparency of the front face: when K = 0, the face is perfectly
transparent and does not change the intensity of the pixel; when K = 1, the
front face is opaque and transmits no light. Sometimes transparency is
referred to as X-ray due to the similarity in effect.
Example 9.1 Apply the shading model given by Eq. (9.25) to the scene of the boxes shown
in Fig. 9.13a. Assume that the visible surfaces have been identified as shown in Fig. 9.13.
Use n = 100 for the specular reflection component.

Solution Let us assume that the point light source is placed at infinity and is coincident
with the viewing eye. Table 9.2 shows the intensity calculations for the visible faces.
A wide variety of shading algorithms for solids exist and are directly related to the
representation scheme utilized. Three of these algorithms are described in this
section. The most popular algorithms are:
1. Shaded displays can be generated from B-reps (exact or faceted) by
the algorithms of surface shading discussed in Secs. 9.6.2 and 9.6.3.
These algorithms are particularly simple when B-reps are
triangulations or other tessellations. Special-purpose tiling engines
based on algorithms for displaying tessellations are
commercially available. Other VLSI chips for high-speed display of
tessellation are also available.
2. Algorithms to generate shaded images of octree and spatial
enumeration representations exist and have been implemented in
special-purpose hardware. Octree (for the first) and voxel (for the
second) machines are available.
3. Shading of CSG models can be achieved directly by utilizing the
CSG tree or indirectly by converting CSG into B-rep and then
utilizing B-rep-based algorithms. Two of the popular shading
algorithms that work directly on CSG are the ray-tracing and z-buffer
(depth-buffer) methods. Ray tracing is general and can also be
applied to all solid modeling representations.
9.6.4.1 A Ray-Tracing Algorithm for CSG
The ray-tracing algorithm described in Sec. 9.5.1 is considered a
natural tool for making shaded images. To make a shaded image, cast
one ray per pixel into the scene and identify the visible surface S1 (Fig.
9.19) in this pixel. Compute the surface normal at the visible point t1.
Use Eq. (9.25) to calculate the pixel intensity value. Processing all
pixels in this way produces a raster-type picture of the scene.
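The one-ray-per-pixel loop just described can be sketched as follows. The function names and callback signatures are illustrative, not from the book; the scene-specific work is delegated to the supplied callables:

```python
def render(width, height, cast_ray, surface_normal_at, shade):
    """Skeleton of a per-pixel ray-casting renderer.

    For every pixel, cast one ray into the scene; `cast_ray` returns the
    visible hit point and surface id (or None for a miss), the normal is
    evaluated there and the pixel intensity follows from the shading
    model, e.g. Eq. (9.25), supplied as `shade`.
    """
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            hit = cast_ray(x, y)
            if hit is not None:
                point, surface_id = hit
                image[y][x] = shade(point, surface_normal_at(surface_id, point))
    return image

# Toy scene: a single "surface" covering the left half of a 4x2 screen.
img = render(4, 2,
             cast_ray=lambda x, y: ((x, y, 1.0), 0) if x < 2 else None,
             surface_normal_at=lambda sid, p: (0.0, 0.0, 1.0),
             shade=lambda p, n: 0.75)
print(img)
```

For CSG, `cast_ray` would classify the ray against the tree (Sec. 9.5.1) to find the visible surface S1 and point t1.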
Special effects and additional realism are possible by adding
transparency and shadowing. Transparent surfaces may be modeled
with or without refraction. Nonrefractive transparency is easy because
the ray remains one straight line (no refraction by going through the
solid). With transparency, more than one surface may be visible at the
pixel. If surface S1 (Fig. 9.19) transmits any light, then the surface
normal at point t2 must be computed to calculate the intensity at that
point. The net intensity at the given pixel can then be calculated using Eq.
(9.30). If S2 also transmits light, the process is repeated for the surfaces
behind it. Shadows can be added as follows:

1. Cast a second ray connecting the visible point t1 with the point light
source as shown in Fig. 9.32.
2. If the ray intersects any surface (S3) between the visible (relative to
the viewing eye) point and the light source, then the point is in
shadow.
3. If the shadowing surface (S3) is transparent, then attenuate the
intensity of the light that passes through and add it to the intensity of
the pixel at point t1.

9.6.4.2 A z-Buffer Algorithm for B-Rep and CSG
Depth-buffer or z-buffer algorithms that operate on B-reps, especially polygonal
ones, are simple and well known. The basic algorithm defines two arrays,
z(x, y) and I(x, y), to store the depth and intensity information of the screen
pixels located by (x, y) values. Considering Fig. 9.33, a z-buffer algorithm
for B-rep can be described as follows. Initialize the z(x, y) and I(x, y) arrays
with a large number and zero respectively. For each face of the solid, check the
distance d between each point on the face and the viewing eye. If d is smaller
than the z value in the proper element of the z(x, y) array, write d onto z(x, y)
and compute the intensity using Eq. (9.25). Update the intensity buffer by writing
the intensity value onto I(x, y). At the end of the scan, the intensity buffer
contains the correct image values. Typically, the frame buffer of the graphics
display is used to store the intensity array. Thus, the display can be updated
incrementally, whenever a new intensity for a pixel is computed, by writing the
new value onto the frame buffer.
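The z-buffer loop just described can be sketched as follows for a B-rep (names and the sample-based face representation are illustrative):

```python
def zbuffer_render(width, height, faces, shade):
    """Basic z-buffer algorithm over point samples of the solid's faces.

    `faces` yields (x, y, d, face_id) samples, where d is the depth of the
    point from the viewing eye.  z(x, y) starts at a large number and
    I(x, y) at zero; a sample overwrites a pixel only when it is nearer
    than the depth already stored there.
    """
    BIG = float("inf")
    z = [[BIG] * width for _ in range(height)]
    I = [[0.0] * width for _ in range(height)]
    for x, y, d, face_id in faces:
        if d < z[y][x]:                  # nearer than what is stored
            z[y][x] = d
            I[y][x] = shade(face_id)     # e.g. Eq. (9.25)
    return I

# Two samples landing on the same pixel: the nearer one (d = 1.0) wins.
img = zbuffer_render(2, 1,
                     [(0, 0, 5.0, "far"), (0, 0, 1.0, "near")],
                     shade=lambda fid: 1.0 if fid == "near" else 0.2)
print(img)
```

Writing each new intensity straight into the frame buffer, as the text notes, lets the display update incrementally during the scan.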
To apply the above-described z-buffer algorithm for B-rep to CSG models,
it must be modified to reflect the fact that the actual solid's faces are not
available. Instead, the primitives' faces must be scanned to determine the
elements of the z(x, y) and I(x, y) arrays. Scanning the primitives' faces yields a
superset of the points needed by the z-buffer algorithm. Exploiting the
generate-and-classify paradigm based on the point-membership classification,
we can discard the points, from the superset, that are not on the actual solid's
faces. The z-buffer algorithm for CSG models is described as follows.
Initialize the z(x, y) and I(x, y) arrays as in B-rep. For each face of each
primitive of the solid, check the distance d between each point on the face and
the viewing eye. If d is smaller than the z value in the proper element of z(x,
y), then classify the point with respect to the solid (by traversing the CSG tree).
If the point is classified as "on" the solid, then write d onto z(x, y), compute
the intensity and write it onto I(x, y). Otherwise, if the point is classified as
"in" or "out" the solid, discard it and move to the next point.
Fig. 9.33 The z-Buffer Algorithm for B-rep Models
The value of each quadrant of the new quadtree node is determined by the pair of octants
aligned with that quadrant. Octants 0 and 4, 1 and 5, 2 and 6, and 3 and 7 determine the
results of quadrants 0, 1, 2 and 3 respectively. The rear octants (4, 5, 6, 7) are processed
by the hidden surface algorithm before the front octants (0, 1, 2, 3) for all four quadrants.
This is because the input to the algorithm is the rear quadrant to the octant being processed
and the output is the front quadrant. After both the rear and front octants are processed,
the quadtree holds the visible result from the octree.

The visible quadrants stored in the quadtree structure can be shaded via a suitable shading
model, e.g., Eq. (9.25). Surface normals are required to compute the intensity of light
reflected to the viewing eye. In the octree structure, no explicit information about surface
orientation is stored. The shading algorithm for octree-encoded objects must therefore
extract as much surface shading information as possible. To approximate the surface
normal, the four octants representing the top, bottom, left and right (Fig. 9.35b) neighbors
of the octant being processed (call it the active octant) are utilized. If the active octant
has a void (completely empty) octant on the left side, a surface normal exists in this
direction. If any other neighbors are void, surface normal vectors must lie along those
directions as well. The computations of these vectors are easy because the spatial
orientations of the octants in the octree are known.

Transparency can be modeled with octree structures. For opaque materials (as described
above), the quadtree is simply overwritten if an octant is found to obscure it. For transparent
materials, however, the intensity of the old quadrant is used to add a contribution to the
existing quadrant by using, say, Eq. (9.30).
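The octant-to-quadrant projection for one octree node can be sketched as follows (the `None`-for-void encoding and function names are illustrative):

```python
def octree_to_quadtree(octants, combine):
    """Project the eight octant values of one octree node into four
    quadrant values; quadrant q pairs octant q (front) with octant q+4
    (rear), and the rear value is processed before the front one.

    `combine(rear, front)` implements the hidden-surface rule.
    """
    quadtree = []
    for q in range(4):
        rear, front = octants[q + 4], octants[q]
        quadtree.append(combine(rear, front))
    return quadtree

# Opaque material: a non-void front octant simply obscures the rear one.
overwrite = lambda rear, front: front if front is not None else rear
print(octree_to_quadtree(["a", None, "c", None, "w", "x", "y", "z"], overwrite))
```

For transparent materials, `combine` would instead blend the rear contribution into the front value, e.g. with Eq. (9.30), rather than overwrite it.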
9.7 Colors
The use of colors in CAD/CAM has two main objectives: to facilitate creating geometry and
to display images. Colors can be used in geometric construction. In this case various wireframe,
surface, or solid entities can be assigned different colors to distinguish them. Color is one of
the two main ingredients (the second being texture) of shaded images produced by shading
algorithms. In some engineering applications such as finite element analysis, colors can be
used effectively to display contour images such as stress or heat-flux contours.
Black and white raster displays provide achromatic colors, while color displays (or television sets)
provide chromatic color. Achromatic colors are described as black, various levels of gray (dark or
light gray) and white. The only attribute of achromatic light is its intensity, or amount. A scalar value
between 0 (as black) and 1 (as white) is usually associated with the intensity. Thus, a medium gray is
assigned a value of 0.5. For multiple-plane displays, different levels (a scale) of gray can be
produced. For example, 256 (2^8) different levels of gray (intensities) per pixel can be produced for an
eight-plane display. The pixel value Vi (which is related to the voltage of the deflection
beam) is related to the intensity level Ii by the following equation:
Vi = C Ii^(1/γ)    (9.31)
The values C and γ depend on the display in use. If the raster display has no
lookup table, Vi (e.g., 00010111 in an eight-plane display) is placed directly in the
proper pixel. If there is a table, i is placed in the pixel and Vi is placed in entry i of the
table. Use of the lookup table in this manner is called gamma correction, after the exponent
in Eq. (9.31).
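Building such a gamma-correction lookup table can be sketched as follows. The specific values of C and γ, and the form Vi = C·Ii^(1/γ) for Eq. (9.31), are assumptions of this sketch; the actual values depend on the display in use:

```python
def gamma_lut(n_levels, C=1.0, gamma=2.2):
    """Build a gamma-correction lookup table for `n_levels` intensity levels.

    Entry i holds the pixel value Vi = C * Ii**(1/gamma) that compensates
    for the display's nonlinear response, where Ii = i/(n_levels - 1) is
    the desired normalized intensity (C and gamma are illustrative).
    """
    table = []
    for i in range(n_levels):
        Ii = i / (n_levels - 1)
        table.append(C * Ii ** (1.0 / gamma))
    return table

lut = gamma_lut(256)
print(lut[0], lut[255])   # endpoints map to 0 and C
```

Because the exponent 1/γ is less than one, mid-range intensities are boosted, countering the display's tendency to render them too dark.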
Chromatic colors produce more pleasing effects on the human vision system than
achromatic colors. However, they are more complex to study and generate. Color is created
by taking advantage of the fundamental trichromacy of the human eye. Three different
colored images are combined additively at photoreceptors in the eye to form a single
perceived image whose color is a combination of the three prime colors. Each of the three
images is created by an electron gun acting on a color phosphor. Using shadow-mask
technology, it is possible to make the images intermingle on the screen, causing the colors
to mix together because of spatial proximity. Well-saturated red, green and blue colors are
typically used to produce the desired wide range of colors.
Fig. 9.37 Color Models: the RGB/CMY color cube, with black and white at opposite
corners, red, green, blue, cyan, magenta and yellow at the remaining corners and the
grays along the main diagonal; view A-A shows the 60° hexagonal cross section used
by the HSV hexacone.
[C]   [1]   [R]
[M] = [1] - [G]    (9.32)
[Y]   [1]   [B]

The unit column vector represents white in the RGB model or black in the
CMY model.
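The componentwise subtraction of Eq. (9.32) can be sketched as follows (the function names are illustrative; components are normalized to [0, 1]):

```python
def rgb_to_cmy(r, g, b):
    """Eq. (9.32): [C M Y]^T = [1 1 1]^T - [R G B]^T."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c, m, y):
    """The inverse relation: subtracting from the unit vector again."""
    return (1.0 - c, 1.0 - m, 1.0 - y)

print(rgb_to_cmy(1.0, 0.0, 0.0))   # pure red in RGB -> (0, 1, 1) in CMY
print(cmy_to_rgb(0.0, 0.0, 0.0))   # the zero CMY vector is white in RGB
```

The last line shows the unit-vector correspondence stated above: zero CMY (no ink) is white, while the unit CMY vector would be black.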
9.7.1.3 YIQ Model
The YIQ space is used in raster color graphics. It has been in use as a television broadcast
standard since 1953, when it was adopted by the National Television Standards Committee
(NTSC) of the United States. It was designed to be compatible with black and white
television broadcast. The Y axis of the color model corresponds to the luminance (the total
amount of light). The I axis encodes chrominance information along a blue-green to
orange vector and the Q axis encodes chrominance information along a yellow-green to
magenta vector.
The conversion from YIQ coordinates to RGB coordinates is defined by the following
equation:

[R]   [1.0   0.95   0.62] [Y]
[G] = [1.0  -0.28  -0.64] [I]    (9.33)
[B]   [1.0  -1.11   1.73] [Q]
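The matrix product of Eq. (9.33) can be sketched directly (the function name is illustrative; the matrix entries are those given above):

```python
def yiq_to_rgb(y, i, q):
    """Eq. (9.33): multiply the (Y, I, Q) vector by the conversion matrix."""
    M = [[1.0,  0.95,  0.62],
         [1.0, -0.28, -0.64],
         [1.0, -1.11,  1.73]]
    return tuple(row[0] * y + row[1] * i + row[2] * q for row in M)

# Pure luminance (I = Q = 0) is achromatic: equal R, G and B components.
print(yiq_to_rgb(0.5, 0.0, 0.0))
```

The first matrix column being all ones is what makes a zero-chrominance signal reproduce as gray, which is exactly the black-and-white compatibility the standard was designed for.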
9.7.1.4 HSV Model
This color model (shown in Fig. 9.37c) is user oriented, because it is based on what artists
use to produce colors (hue, saturation and value). It is contrary to the RGB, CMY and
YIQ models, which are hardware oriented. The model approximates the perceptual
properties of hue, saturation and value.

The conversion from HSV coordinates to RGB coordinates can be defined as follows.
The hue value H (ranging from 0° to 360°) defines the angle of any point on or inside the
single hexacone shown in Fig. 9.37c. Each side of the cone bounds 60°, as shown in view A-A.
If we divide a given H by 60, we obtain an integer part i and a fractional part f. The integer
part i is between 0 and 5 (H = 360 is treated as if H = 0). Let us define the following
quantities:
p = V(1 - S)
q = V(1 - Sf)    (9.34)
t = V(1 - S(1 - f))
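The hexacone conversion can be sketched as follows. The per-sextant assignment of (V, p, q, t) to (R, G, B) is the standard hexacone construction and is assumed here rather than quoted from the book:

```python
def hsv_to_rgb(H, S, V):
    """HSV -> RGB via the hexacone construction described above.

    H is in degrees [0, 360]; S and V are in [0, 1].  i and f are the
    integer and fractional parts of H/60, and p, q, t are the quantities
    of Eq. (9.34); sextant i selects their arrangement into (R, G, B).
    """
    if H >= 360.0:
        H = 0.0                      # H = 360 is treated as H = 0
    i = int(H // 60)                 # integer part, 0..5
    f = H / 60.0 - i                 # fractional part
    p = V * (1.0 - S)
    q = V * (1.0 - S * f)
    t = V * (1.0 - S * (1.0 - f))
    return [(V, t, p), (q, V, p), (p, V, t),
            (p, q, V), (t, p, V), (V, p, q)][i]

print(hsv_to_rgb(0.0, 1.0, 1.0))     # pure red
print(hsv_to_rgb(120.0, 1.0, 1.0))   # pure green
```

Note that S = 0 collapses p, q and t to V, giving the achromatic grays along the hexacone axis.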
Major commercial CAD/CAM systems provide their users with shading packages that
implement most of the shading and coloring concepts covered in Secs. 9.6 and 9.7. A
shading command with the appropriate shading attributes and modifiers allows users
to shade either surface or solid models. A generic syntax for a typical shading
command may look as follows:

SHADE (geometric model) (resolution) (geometric model entities)

The type of geometric model could be a "surface" or "solid." The "shade surface"
command would require the user to digitize all the surfaces of the model that are to
be shaded.
Chapter 14
Computer Animation
Conventional animation, even for films, is a very labor-intensive process involving
hundreds, perhaps thousands, of hours. The majority of the time is spent in making all drawings of the
characters and the backgrounds for both the keyframes and the inbetweens. Computer animation
is a viable solution to alleviate the shortcomings of conventional animation.
The science of computer animation has advanced enough to meet the demands of modeling and
simulation in engineering applications.

Two classes of computer animation can be distinguished: entertainment and engineering. The
role of computers in the first class can be identified from Figure 14.1. Computer graphics
techniques can be used to generate the drawings of keyframes and inbetweens. The drawings of
keyframes can be created with an interactive graphics editor, can be digitized, or can be produced
by programming. The inbetweens can be completely calculated by the computer by interpolation
along complex paths of motion. The use of layers in generating these drawings is very helpful.
Drawings that are shared by more than one frame are stored on separate layers and shared by all
the frames by overlaying the proper layers.

Shading techniques can be used to paint the drawings of the various frames. Not only can these
techniques add visual realism to the animation, but they can also provide a tenfold reduction in
preparation time.

Filming an animated film can be assisted by computer. If filming is done with a camera, its
movements can be controlled by the computer, or virtual cameras can be completely programmed. If a
film is recorded with a video recorder, the computer can control the recorder.
Modeled animation (sometimes called 3D animation), on the other hand, is an entirely different
medium. It opens up the possibility of utilizing the available computer graphics and
CAD techniques to create scenes, movements, and images which are difficult to achieve by
conventional means. In particular, accurate representations of objects, as well as smooth and
complex 3D movements, are facilitated. After the storyboard is prepared for a film, modeled
animation is achieved as follows:

1. Create geometric description. In order to allow the complete and general 3D animation of
drawn objects, they should be described as geometric models. A high degree of realism can be
added to these models after their images are generated with shading attributes such as color,
texture, reflectance, translucency, and so forth.
2. Generate frames. With animation software, we can use geometric transformations to create
frames, both keyframes and inbetweens.
3. Perform line testing. After all frames are generated, their corresponding images are generated
by shading them. These images can be animated and displayed on the graphics display to test the
movements in real time.
4. Record sequence. When all the frames are satisfactory, they are recorded frame by frame onto
video recorders or films.