
2021

KOTEBE METROPOLITAN
UNIVERSITY
DEPARTMENT OF COMPUTER SCIENCE

COMPUTER GRAPHICS ASSIGNMENT

GROUP MEMBERS
1. YONATHAN BERHANU
2. FERDE NIGUSSIE
3. FIYORY TASSEW
4. MANDFERO MARU
5. YONAS
GRAPHICS COORDINATE SYSTEMS AND VIEWING
PIPELINE

A coordinate system is a way of assigning numbers to points. In two dimensions, you need
a pair of numbers to specify a point; the coordinates are often referred to as x and y. Coordinates
are just numbers we assign to points so that we can refer to them easily and work with them
mathematically. In three dimensions, you need three numbers to specify a point. The third
coordinate is often called z, and the z-axis is perpendicular to both the x-axis and the y-axis.

The term Viewing Pipeline describes the series of transformations that geometry data pass
through to end up as image data displayed on a device. It is simply a sequence of
transformations that every primitive must undergo before it is displayed. Although the details of
these transformations can vary slightly from package to package, they are generally very similar.
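As a rough illustration, the stages a single point passes through can be sketched as plain functions. The camera position, projection distance, and screen size below are made-up parameters for the sketch, not taken from any particular package.

```python
# A minimal sketch of the viewing pipeline for one point, assuming a camera
# that looks down the negative z-axis and a simple perspective projection.
# All names and parameter values here are illustrative.

def world_to_view(p, eye):
    """World -> viewing coordinates: translate so the camera sits at the origin."""
    return (p[0] - eye[0], p[1] - eye[1], p[2] - eye[2])

def view_to_projection(p, d=1.0):
    """Perspective-project a view-space point onto the view plane z = -d."""
    x, y, z = p
    return (d * x / -z, d * y / -z)

def projection_to_screen(p, width, height):
    """Map view-plane coordinates in [-1, 1] to pixel coordinates."""
    x, y = p
    return (int((x + 1) * 0.5 * width), int((1 - y) * 0.5 * height))

# Every primitive's vertices undergo the same sequence before display.
pw = (1.0, 1.0, -4.0)                        # world coordinates
pv = world_to_view(pw, eye=(0.0, 0.0, 1.0))  # viewing coordinates
pp = view_to_projection(pv)                  # projected onto the view plane
px = projection_to_screen(pp, 640, 480)      # device (pixel) coordinates
```

Real packages insert more stages (clipping, normalization), but the overall shape, a fixed sequence applied to every primitive, is the same.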

1. Global Coordinate System


The global coordinate system is an absolute reference frame. It is used to define the coordinate
locations of nodes and key points in space. It can also be used to identify or select solid
model and finite element model entities based on their locations in space. There are four
predefined global coordinate systems in ANSYS:

CS 0—Global Cartesian (X, Y, Z)
CS 1—Global cylindrical (R, θ, Z)
CS 2—Global spherical (R, θ, φ)
CS 5—Global cylindrical (R, θ, Y)

The global coordinate systems are all right handed and share the same global origin (0,0,0). All
new entities are created and all existing entities are selected in the active coordinate system. Only
one coordinate system can be active at a given time. By default, the active coordinate system is
CS 0 (Global Cartesian).
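For illustration, the cylindrical and spherical systems above can be related to Cartesian coordinates as follows. The angle conventions here (θ measured in the x-y plane, φ measured from the z-axis) are one common choice and may differ from a given package's definition.

```python
import math

def cylindrical_to_cartesian(r, theta, z):
    """(R, theta, Z) -> (X, Y, Z); theta in radians, measured in the x-y plane."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def spherical_to_cartesian(r, theta, phi):
    """(R, theta, phi) -> (X, Y, Z); phi measured from the positive z-axis.
    This is one common convention, not necessarily ANSYS's exact definition."""
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))
```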
2. Local coordinate systems
Local coordinate systems are coordinate systems other than the global coordinate system.
You can specify restraints and loads in any desired direction. For example, when defining a force
on a cylindrical face, you can apply it in the radial, circumferential, or axial direction. Similarly,
if you choose a spherical face, you can choose the radial, longitude, or latitude direction. In
addition, you can use reference planes and axes.

A local coordinate system (LCS) is a set of x, y, and z axes associated with each node in the
model. It is often preferable to use a local coordinate system for assigning constraints and loads,
since it can simplify the constraint or load to one direction.
To define a local coordinate system you need three reference points or nodes in the model. Given
three reference locations, the x, y, and z axes are defined as follows:

 Coordinate 1 defines the origin
 Coordinate 2 defines the local x direction
 Coordinate 3 defines the x-y plane, with the local y axis passing closest to coordinate 3
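The three-point construction above can be sketched with cross products: the local z-axis is taken normal to the plane of the three points, and y completes the frame so that it passes closest to coordinate 3. This is a generic sketch, not any particular package's implementation.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def local_axes(p1, p2, p3):
    """p1 defines the origin, p2 the local x direction, p3 the x-y plane."""
    x = normalize(sub(p2, p1))            # coordinate 2 fixes local x
    z = normalize(cross(x, sub(p3, p1)))  # normal to the plane of the 3 points
    y = cross(z, x)                       # local y lies in the plane, toward p3
    return x, y, z
```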
3. Viewing coordinate system
Usually a left-handed system called the UVN system is used. An object in world coordinate
space, whose vertices are (x, y, z), can be expressed in terms of viewing coordinates
(u, v, n). Generating a view of an object in 3D is similar to photographing the
object: whatever appears in the viewfinder is projected onto the flat film surface, and
depending on the position, orientation, and aperture size of the camera, corresponding views
of the scene are obtained. For a particular view of a scene we first establish the viewing-
coordinate system, with its origin at the view reference point P0 = (x0, y0, z0). A view plane
(or projection plane) is set up perpendicular to the viewing z-axis. World coordinates are
transformed to viewing coordinates, then viewing coordinates are projected onto the view
plane. To obtain a series of views of a scene, we can keep the view reference point fixed and
change the direction of N. This corresponds to generating views as we move around the
viewing coordinate origin.
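A common way to build the viewing (U, V, N) basis is from the view reference point P0, a look-at point, and an approximate up direction. The helper names and default up vector below are illustrative, and this sketch produces a right-handed variant with N pointing from the scene toward the viewer; handedness conventions differ between texts.

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def uvn_basis(p0, look_at, up=(0.0, 1.0, 0.0)):
    """N points from the look-at point toward P0, U is perpendicular to the
    up hint and N, and V completes the frame."""
    n = normalize(tuple(p - q for p, q in zip(p0, look_at)))
    u = normalize(cross(up, n))
    v = cross(n, u)
    return u, v, n

# Moving P0 around a fixed look-at point changes N and therefore
# generates the different views of the scene described above.
```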

4. Object space and Image space


Object space is the space, relative to an optical system, in which the objects to be
imaged by the system are located. It is the three-dimensional space in which a graphic object is
defined. An object-space method is implemented in the physical coordinate system in which
objects are described. It compares objects and parts of objects to each other within the scene
definition to determine which surfaces, as a whole, should be labeled as visible (for example,
using bounding boxes and checking limits along each direction). Object-space methods are
generally used in line-display algorithms: the surfaces being drawn are ordered so that they give
the correct impression of depth variations and positions. Because such methods deal with the
object definitions directly, they were initially developed for vector graphics systems and are used
where accuracy and continuous operation are required.
An image-space method deals with the projected images of the objects: visibility is decided
point by point at each pixel position on the projection plane, so screen resolution can be a
limitation. Image-space methods are implemented in the screen coordinate system in which the
objects are viewed; they were initially developed for raster-scan systems and are used where
time saving and discrete operation matter. Most hidden line/surface algorithms use image-space
methods.

Object Space vs Image Space

1. Object space: an object-based method; it concentrates on the geometrical relations among
   the objects in the scene.
   Image space: a pixel-based method; it is concerned with the final image, i.e. what is visible
   within each raster pixel.

2. Object space: surface visibility is determined.
   Image space: line visibility or point visibility is determined.

3. Object space: performed at the precision with which each object is defined; no resolution
   is considered.
   Image space: performed using the resolution of the display device.

4. Object space: calculations are not based on the resolution of the display, so a change of
   the object can easily be adjusted.
   Image space: calculations are resolution based, so such a change is difficult to adjust.

5. Object space: developed for vector graphics systems.
   Image space: developed for raster devices.

6. Object space: algorithms operate on continuous object data.
   Image space: algorithms operate on discrete (sampled) data.

7. Object space: vector displays used for object-space methods have a large address space.
   Image space: raster systems used for image-space methods have a limited address space.

8. Object space: suitable for applications where accuracy is required.
   Image space: suitable for applications where speed is required.

9. Object space: the image can be enlarged without losing accuracy.
   Image space: enlarging the image requires a lot of recalculation.

10. Object space: computation time increases as the number of objects in the scene increases.
    Image space: complexity increases with the complexity of the visible parts.

SURFACE RENDERING
Surface rendering involves the careful collection of data on a given object in order to create a
three-dimensional image of that object on a computer. It is an important technique used in a
variety of industries. In health care, for example, parts of the body are rendered so doctors can
closely examine specific areas of a patient or wounds they may have incurred. Archaeologists
also use rendering to make images of very fragile objects in order to examine them without
harming them. Surface rendering is a well-established visualization technique for
three-dimensional imaging of sectional image data.

1. POLYGON TABLES

Pointers are kept back into the edge table so that the edges associated with the polygon
surface under construction can be found. This tabular representation of a polygon surface
is demonstrated in the figure. Such representations let one rapidly refer to the data
associated with a polygon surface; moreover, once the data is organized for processing, the
processing can be fairly efficient, leading to efficient display of the object.

In a polygon table, the surface is specified by a set of vertex coordinates and
associated attributes. As shown in the following figure, there are five vertices, v1 to
v5. Each vertex stores its x, y, and z coordinate information, which is represented in the table
as v1: x1, y1, z1. The edge table stores the edge information of the polygon; in the
figure, edge E1 lies between vertices v1 and v2, which is represented in the table
as E1: v1, v2. The polygon surface table stores the surfaces present in the polygon;
in the figure, surface S1 is bounded by edges E1, E2, and E3, which is
represented in the polygon surface table as S1: E1, E2, E3.
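The three linked tables can be sketched as dictionaries. The vertex coordinates below are placeholders, and since the figure is not reproduced here, the entries for surface S2 are an assumed example; only S1: E1, E2, E3 comes from the text.

```python
# Polygon data organized as linked tables, following the pattern described
# above. Coordinate values are placeholders for illustration.
vertex_table = {
    "v1": (0.0, 0.0, 0.0),
    "v2": (1.0, 0.0, 0.0),
    "v3": (1.0, 1.0, 0.0),
    "v4": (0.5, 2.0, 0.0),
    "v5": (0.0, 1.0, 0.0),
}

edge_table = {
    "E1": ("v1", "v2"),
    "E2": ("v2", "v3"),
    "E3": ("v3", "v1"),
    "E4": ("v3", "v4"),
    "E5": ("v4", "v5"),
    "E6": ("v5", "v1"),
}

surface_table = {
    "S1": ("E1", "E2", "E3"),   # from the text: S1 bounded by E1, E2, E3
    "S2": ("E3", "E4", "E5", "E6"),  # assumed second surface sharing edge E3
}

def surface_vertices(surface):
    """Resolve a surface to its vertex coordinates via the edge table,
    demonstrating the 'pointers back into the edge table' idea."""
    names = []
    for edge in surface_table[surface]:
        for v in edge_table[edge]:
            if v not in names:
                names.append(v)
    return [vertex_table[v] for v in names]
```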

2. VISIBLE SURFACE DETECTION

When we view a picture containing non-transparent objects and surfaces, we cannot see
objects that lie behind other objects closer to the eye. We must remove these
hidden surfaces to get a realistic screen image; the identification and removal of these surfaces
is called the hidden-surface problem. When a 3D object is to be displayed on the 2D screen, the
parts of the scene that are visible from the chosen viewing position must be identified.
Algorithms that detect visible objects are referred to as visible-surface detection methods. Hidden
surface removal is the procedure used to find which surfaces are not visible from a certain view.
The procedure of hidden surface identification is called hiding, and such an algorithm is called a
'hider'.
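One of the simplest visibility tests is back-face culling: a polygon whose outward normal points away from the viewer cannot be visible. The sketch below assumes counter-clockwise vertex order and a fixed viewing direction; it illustrates the idea rather than any specific hider.

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(v0, v1, v2, view_dir=(0.0, 0.0, -1.0)):
    """True if the triangle faces the viewer (CCW winding assumed).

    The outward normal comes from the cross product of two edges; if it has
    a component against the viewing direction, the face is visible."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    return dot(normal, view_dir) < 0
```

Culling back faces removes roughly half the surfaces of a closed object up front; a full hider then resolves the remaining surfaces that occlude one another.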
