
Graphics

Name: Ahmed Gamal Abbas    ID: 41710176

1) A point P = (x, y) can be expressed as P = xu + yv + O, where u and v are unit vectors in the x and y directions, respectively. Write P in matrix form.
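A sketch of one standard answer, writing the frame vectors u, v and the origin O as the columns of a matrix and giving the point a homogeneous coordinate of 1:

$$P = \begin{pmatrix} u & v & O \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} u_x & v_x & O_x \\ u_y & v_y & O_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$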

2) Apply the following transformations to the vertices:

v1 = (0.5, -0.5, 0.0)

v2 = (-0.5, 0.5, 0.0)

Transformations:

$T_{(4,4)}$ (translation by 4 along x and 4 along y)

$R_{z,-45^\circ}$ (rotation about the z-axis by -45 degrees)
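A worked sketch, assuming the translation is applied first and the rotation second (the problem does not fix the order, and the two orders give different results). The matrices are

$$T_{(4,4)} = \begin{pmatrix} 1 & 0 & 0 & 4 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad R_{z,-45^\circ} = \begin{pmatrix} \cos 45^\circ & \sin 45^\circ & 0 & 0 \\ -\sin 45^\circ & \cos 45^\circ & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

The translation takes v1 to (4.5, 3.5, 0); the rotation then gives $x' = (4.5 + 3.5)\tfrac{\sqrt{2}}{2} \approx 5.657$ and $y' = (-4.5 + 3.5)\tfrac{\sqrt{2}}{2} \approx -0.707$, so $v_1' \approx (5.657, -0.707, 0)$. Similarly, v2 translates to (3.5, 4.5, 0) and rotates to $v_2' \approx (5.657, 0.707, 0)$.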

#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <iostream>

using namespace std;

#define myFirstTriangle \
    glColor3f(0.0, 1.0, 0.0); glVertex3f(-0.5, -0.5, 0.0); \
    glColor3f(1.0, 0.0, 0.0); glVertex3f(0.5, -0.5, 0.0);  \
    glColor3f(0.0, 0.0, 1.0); glVertex3f(0.0, 1.0, 0.0);

#define mySecondTriangle \
    glColor3f(1.0, 0.0, 0.0); glVertex3f(-0.9, -0.1, 0.0); \
    glColor3f(0.0, 0.0, 1.0); glVertex3f(0.1, -0.1, -0.9); \
    glColor3f(0.0, 1.0, 0.0); glVertex3f(-0.1, 0.9, 0.0);

int n = 1;
float counter = 100.0;

void display()
{
    /* this function draws over and over, every loop */
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glRotatef(counter / 150, 0.0, 0.0, 1.0); //accumulates a small rotation each frame
    //counter += (-n * 1.0);
    /*if (counter == (-n * 100)) {
        n = -n;
        counter = (n * 100.0);
    }*/
    glBegin(GL_TRIANGLES);
    myFirstTriangle;
    glEnd();

    /* Second triangle */
    //glLoadIdentity();
    //glRotatef(counter / 10, 0.0, 1.0, 0.0);
    /*glBegin(GL_TRIANGLES);
    mySecondTriangle;
    glEnd();*/

    glutSwapBuffers(); /* explain double buffers, and how it may work without this one */
}

void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    gluLookAt(0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0, 1.0, 0.0);
    //glLoadIdentity();
}

void initOpenGL() {}

int main(int argc, char* argv[])
{
    //build the window: initialize and start up freeglut
    glutInit(&argc, argv);

    //set modes for the OpenGL operating environment
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);

    //how big the window is and where to locate it
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("first opengl app"); //without a loop the window would be displayed but disappear right away

    //You need to tell glut to put the window into a loop so it is
    //always drawn on the screen. Basically you would like to tell it
    //to run until you ask it to stop. This is the glut library's glutMainLoop().

    initOpenGL();
    glutDisplayFunc(display); //tells the system which function to draw in each frame
    glutIdleFunc(display);

    /* whenever a window is created or changes are made to its size, glut should know that */
    glutReshapeFunc(reshape);

    glutMainLoop(); //now looping, and the window keeps showing something
    return 0;
}

3) A point P = (x, y, z, w) was transformed (via some transformation matrices) into the point P' = (x', y', z', 1). Explain the situation, and derive a formula for x', y', and z'.
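A sketch of the expected derivation: a final w component of 1 indicates that the homogeneous point was normalized by a perspective division, i.e., every component was divided by w (assuming w is nonzero):

$$x' = \frac{x}{w}, \qquad y' = \frac{y}{w}, \qquad z' = \frac{z}{w}, \qquad w' = \frac{w}{w} = 1$$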

4) What are the five camera model parameters?

1) Camera position in 3D coordinates

2) Pointing direction: This is the direction the camera is looking at

3) Up direction: the direction that fixes the camera's roll, i.e., which way is "up" in the image

4) Viewing angle (field of view), which in a physical camera is determined by the film size and focal length

5) Near and far clipping planes

A sketch mapping these five parameters onto OpenGL calls is shown below.
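A minimal sketch of how the five parameters map onto the fixed-function OpenGL calls used throughout this document; the numeric values here are illustrative assumptions:

// Viewing angle plus near/far clipping planes (parameters 4 and 5).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0,        // viewing angle (field of view) in degrees
               1.0,         // aspect ratio of the viewport
               0.1, 100.0); // near and far clipping planes

// Camera position, pointing direction, and up direction (parameters 1-3).
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,    // camera position in 3D coordinates
          0.0, 0.0, 0.0,    // point the camera looks at (pointing direction)
          0.0, 1.0, 0.0);   // up direction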


5) Explain how vertex data gets processed in OpenGL when the camera moves (rotates, translates, changes viewing direction, etc.). Run an OpenGL routine that proves your answer.

Vertex processing represents the set of stages of the OpenGL rendering pipeline where a sequence of vertices is processed by a series of shaders. Each subsequent shader stage takes its data from the previous one. Many of these shader stages are optional, and the last active stage in any rendering operation provides vertex data to vertex post-processing and beyond.

Translate

void glTranslate{fd}(TYPE x, TYPE y, TYPE z);

Multiplies the current matrix by a matrix that moves (translates) an object by the given x, y, and z values
(or moves the local coordinate system by the same amounts).

Note that using (0.0, 0.0, 0.0) as the argument for glTranslate*() is the identity operation - that is, it has
no effect on an object or its local coordinate system.

Rotate

void glRotate{fd}(TYPE angle, TYPE x, TYPE y, TYPE z);

Multiplies the current matrix by a matrix that rotates an object (or the local coordinate system) in a
counterclockwise direction about the ray from the origin through the point (x, y, z). The angle parameter
specifies the angle of rotation in degrees.

The effect of glRotatef(45.0, 0.0, 0.0, 1.0) is a rotation of 45 degrees about the z-axis.
Note that an object that lies farther from the axis of rotation is more dramatically rotated (has a larger
orbit) than an object drawn near the axis. Also, if the angle argument is zero, the glRotate*() command
has no effect.

Scale

void glScale{fd}(TYPE x, TYPE y, TYPE z);

Multiplies the current matrix by a matrix that stretches, shrinks, or reflects an object along the axes.
Each x, y, and z coordinate of every point in the object is multiplied by the corresponding argument x, y,
or z. With the local coordinate system approach, the local coordinate axes are stretched, shrunk, or
reflected by the x, y, and z factors, and the associated object is transformed with them.
glScale*() is the only one of the three modeling transformations that changes the apparent size of an
object: Scaling with values greater than 1.0 stretches an object, and using values less than 1.0 shrinks it.
Scaling with a -1.0 value reflects an object across an axis. The identity values for scaling are (1.0, 1.0,
1.0). In general, you should limit your use of glScale*() to those cases where it is necessary. Using
glScale*() decreases the performance of lighting calculations, because the normal vectors have to be
renormalized after transformation.
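Since the example below exercises only glTranslatef and glRotatef, here is a minimal hedged sketch of glScale* for completeness (the scale factors are illustrative):

glPushMatrix();                // save the current modelview matrix
glScalef(2.0f, 0.5f, 1.0f);    // stretch x by 2, shrink y by half, leave z unchanged
glBegin(GL_TRIANGLES);
glVertex2f(-0.3f, -0.3f);
glVertex2f( 0.3f, -0.3f);
glVertex2f( 0.0f,  0.3f);
glEnd();
glPopMatrix();                 // restore the unscaled matrix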

#include <windows.h> // for MS Windows

#include <GL/glut.h> // GLUT, include glu.h and gl.h

/* Initialize OpenGL Graphics */

void initGL() {

// Set "clearing" or background color

glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Black and opaque
}

/* Handler for window-repaint event. Call back when the window first appears and

whenever the window needs to be re-painted. */

void display() {

glClear(GL_COLOR_BUFFER_BIT); // Clear the color buffer

glMatrixMode(GL_MODELVIEW); // To operate on Model-View matrix

glLoadIdentity(); // Reset the model-view matrix

glTranslatef(-0.5f, 0.4f, 0.0f); // Translate left and up

glBegin(GL_QUADS); // Each set of 4 vertices form a quad

glColor3f(1.0f, 0.0f, 0.0f); // Red

glVertex2f(-0.3f, -0.3f); // Define vertices in counter-clockwise (CCW) order

glVertex2f( 0.3f, -0.3f); // so that the normal (front-face) is facing you

glVertex2f( 0.3f, 0.3f);

glVertex2f(-0.3f, 0.3f);
glEnd();

glTranslatef(0.1f, -0.7f, 0.0f); // Translate right and down

glBegin(GL_QUADS); // Each set of 4 vertices form a quad

glColor3f(0.0f, 1.0f, 0.0f); // Green

glVertex2f(-0.3f, -0.3f);

glVertex2f( 0.3f, -0.3f);

glVertex2f( 0.3f, 0.3f);

glVertex2f(-0.3f, 0.3f);

glEnd();

glTranslatef(-0.3f, -0.2f, 0.0f); // Translate left and down

glBegin(GL_QUADS); // Each set of 4 vertices form a quad

glColor3f(0.2f, 0.2f, 0.2f); // Dark Gray

glVertex2f(-0.2f, -0.2f);

glColor3f(1.0f, 1.0f, 1.0f); // White

glVertex2f( 0.2f, -0.2f);

glColor3f(0.2f, 0.2f, 0.2f); // Dark Gray

glVertex2f( 0.2f, 0.2f);

glColor3f(1.0f, 1.0f, 1.0f); // White

glVertex2f(-0.2f, 0.2f);

glEnd();

glTranslatef(1.1f, 0.2f, 0.0f); // Translate right and up

glBegin(GL_TRIANGLES); // Each set of 3 vertices form a triangle

glColor3f(0.0f, 0.0f, 1.0f); // Blue

glVertex2f(-0.3f, -0.2f);

glVertex2f( 0.3f, -0.2f);

glVertex2f( 0.0f, 0.3f);


glEnd();

glTranslatef(0.2f, -0.3f, 0.0f); // Translate right and down

glRotatef(180.0f, 0.0f, 0.0f, 1.0f); // Rotate 180 degree

glBegin(GL_TRIANGLES); // Each set of 3 vertices form a triangle

glColor3f(1.0f, 0.0f, 0.0f); // Red

glVertex2f(-0.3f, -0.2f);

glColor3f(0.0f, 1.0f, 0.0f); // Green

glVertex2f( 0.3f, -0.2f);

glColor3f(0.0f, 0.0f, 1.0f); // Blue

glVertex2f( 0.0f, 0.3f);

glEnd();

glRotatef(-180.0f, 0.0f, 0.0f, 1.0f); // Undo previous rotate

glTranslatef(-0.1f, 1.0f, 0.0f); // Translate right and down

glBegin(GL_POLYGON); // The vertices form one closed polygon

glColor3f(1.0f, 1.0f, 0.0f); // Yellow

glVertex2f(-0.1f, -0.2f);

glVertex2f( 0.1f, -0.2f);

glVertex2f( 0.2f, 0.0f);

glVertex2f( 0.1f, 0.2f);

glVertex2f(-0.1f, 0.2f);

glVertex2f(-0.2f, 0.0f);

glEnd();

glFlush(); // Render now
}

/* Handler for window re-size event. Called back when the window first appears and
whenever the window is re-sized with its new width and height */

void reshape(GLsizei width, GLsizei height) { // GLsizei for non-negative integer

// Compute aspect ratio of the new window

if (height == 0) height = 1; // To prevent divide by 0

GLfloat aspect = (GLfloat)width / (GLfloat)height;

// Set the viewport to cover the new window

glViewport(0, 0, width, height);

// Set the aspect ratio of the clipping area to match the viewport

glMatrixMode(GL_PROJECTION); // To operate on the Projection matrix

glLoadIdentity();

if (width >= height) {

// aspect >= 1, set the height from -1 to 1, with larger width

gluOrtho2D(-1.0 * aspect, 1.0 * aspect, -1.0, 1.0);

} else {

// aspect < 1, set the width to -1 to 1, with larger height

gluOrtho2D(-1.0, 1.0, -1.0 / aspect, 1.0 / aspect);
}
}

/* Main function: GLUT runs as a console application starting at main() */

int main(int argc, char** argv) {

glutInit(&argc, argv); // Initialize GLUT

glutInitWindowSize(640, 480); // Set the window's initial width & height - non-square

glutInitWindowPosition(50, 50); // Position the window's initial top-left corner

glutCreateWindow("Model Transform"); // Create window with the given title

glutDisplayFunc(display); // Register callback handler for window re-paint event

glutReshapeFunc(reshape); // Register callback handler for window re-size event


initGL(); // Our own OpenGL initialization

glutMainLoop(); // Enter the infinite event-processing loop

return 0;
}

6) A point (x,y,z,1) was transferred by CV (i.e. camera and viewing matrices) and the outcome was
(x',y',z',w). What does w represent? How can we bring this point to image space? How can we bring this
point to screen space?

In a homogeneous transformation matrix, the fourth column represents the translation vector (origin or position) of the space represented by the matrix; after the camera and viewing (projection) matrices are applied, the fourth component w of the output point is the homogeneous coordinate that carries the point's depth and is used in the perspective divide.

Computing the coordinates of a point from camera space onto the canvas can be done using perspective projection (camera space to image space). This process requires a simple division of the point's x- and y-coordinates by the point's z-coordinate. Before projecting the point onto the canvas, we need to convert the point from world space to camera space. The resulting projected point is a 2D point defined in image space (the z-coordinate can be discarded). We then convert the 2D point in image space to Normalized Device Coordinate (NDC) space (image space to NDC space), and finally the viewport transform maps NDC coordinates to pixel (screen) space.
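As a hedged summary in the OpenGL convention, with (x', y', z', w) the transformed output and a viewport of W by H pixels (viewport origin assumed at 0):

$$x_{ndc} = \frac{x'}{w}, \quad y_{ndc} = \frac{y'}{w}, \quad z_{ndc} = \frac{z'}{w}, \qquad x_{screen} = \frac{x_{ndc} + 1}{2}\,W, \quad y_{screen} = \frac{y_{ndc} + 1}{2}\,H$$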

7) Write and run an OpenGL routine to explain the use and effect of:

- gluLookAt() command

creates a viewing matrix derived from an eye point, a reference point indicating the center of the scene,
and an up vector

- gluPerspective() command

creates a symmetric perspective projection matrix and multiplies it by the current matrix

- the use of depth buffer

determines which portions of objects are visible within the scene: when two objects cover the same x and y positions but have different z values, the depth buffer ensures that only the closer object is visible

- the use of the modelview matrix stack

used for constructing hierarchical models, in which complicated objects are constructed from simpler ones (a combined sketch of all four follows below)
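A minimal combined sketch (window setup as in the earlier examples, with GLUT_DEPTH requested and glEnable(GL_DEPTH_TEST) called during initialization; the numeric values are illustrative):

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // gluPerspective: symmetric perspective projection matrix.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 1.0, 0.1, 100.0);  // fov, aspect, near, far

    // gluLookAt: viewing matrix from an eye point, a center point, and an up vector.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 3.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);

    // Modelview matrix stack: each quad gets its own transform,
    // pushed and popped so the two transforms stay independent.
    glPushMatrix();
    glTranslatef(-0.2f, 0.0f, 0.5f);        // nearer quad, drawn FIRST
    glColor3f(1.0f, 0.0f, 0.0f);
    glRectf(-0.5f, -0.5f, 0.5f, 0.5f);
    glPopMatrix();

    glPushMatrix();
    glTranslatef(0.2f, 0.2f, -0.5f);        // farther quad, drawn second
    glColor3f(0.0f, 1.0f, 0.0f);
    glRectf(-0.5f, -0.5f, 0.5f, 0.5f);
    glPopMatrix();

    // Depth buffer: with GL_DEPTH_TEST enabled the red quad stays in front even
    // though it was drawn first; without it, the green quad would simply
    // overwrite the red one because it is drawn later.
    glutSwapBuffers();
}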

8) Explain the difference between using glDrawElements() versus glDrawArrays(). Write a code that uses glDrawArrays(). Talk about advantages.


With glDrawArrays, OpenGL pulls data from the enabled arrays in order: vertex 0, then vertex 1, then vertex 2, and so on. With glDrawElements, you provide a list of vertex indices, and OpenGL goes through that list, pulling the data for the specified vertices from the arrays. The advantage of glDrawArrays is its simplicity (no index list is needed); the advantage of glDrawElements is that vertices shared by several primitives are stored and processed once and merely referenced by index.

void init(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_FLAT);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor4f(0.0, 0.0, 1.0, 1.0);
    glLoadIdentity();
    glTranslatef(0, 0, -20);

    const GLfloat triVertices[] = {
         0.0f,  1.0f, 0.0f,
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f
    };

    // The vertex array must be enabled (the original enabled GL_COLOR_ARRAY
    // without ever supplying a color pointer).
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, triVertices);
    glDrawArrays(GL_TRIANGLES, 0, 3); // draw vertices 0..2 in order, no index list needed
    glDisableClientState(GL_VERTEX_ARRAY);
    glFlush();
}

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(400, 400);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}

9) Explain the use of the OpenGL command glBindBuffer(). Write and run a code using glBindBuffer().

glBindBuffer lets you create or use a named buffer object. Calling glBindBuffer with target set to GL_ARRAY_BUFFER or GL_ELEMENT_ARRAY_BUFFER and buffer set to the name of the new buffer object binds the buffer object name to the target. When a buffer object is bound to a target, the previous binding for that target is automatically broken.

Buffer object names are unsigned integers. The value zero is reserved, but there is no default buffer object for each buffer object target. Instead, buffer set to zero effectively unbinds any buffer object previously bound, and restores client memory usage for that buffer object target. Buffer object names and the corresponding buffer object contents are local to the shared object space of the current GL rendering context. You may use glGenBuffers to generate a set of new buffer object names. The state of a buffer object immediately after it is first bound is a zero-sized memory buffer with GL_STATIC_DRAW usage.

Example (Java, a Geometry3D method from a GLES20-based engine):

/**
 * Creates the vertex and normal buffers only. This is typically used for a
 * VertexAnimationObject3D's frames.
 * @see VertexAnimationObject3D
 */
public void createVertexAndNormalBuffersOnly() {
    ((FloatBuffer) mBuffers.get(VERTEX_BUFFER_KEY).buffer).compact().position(0);
    ((FloatBuffer) mBuffers.get(NORMAL_BUFFER_KEY).buffer).compact().position(0);

    createBuffer(mBuffers.get(VERTEX_BUFFER_KEY),
            BufferType.FLOAT_BUFFER, GLES20.GL_ARRAY_BUFFER);
    createBuffer(mBuffers.get(NORMAL_BUFFER_KEY),
            BufferType.FLOAT_BUFFER, GLES20.GL_ARRAY_BUFFER);

    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
}
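For a plain C/C++ equivalent in the style of the other examples in this document, a minimal hedged sketch of the usual generate/bind/fill/unbind sequence (identifiers are illustrative; requires an OpenGL 1.5+ context):

GLuint vbo;                                 // name of our buffer object
GLfloat triVertices[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f
};

glGenBuffers(1, &vbo);                      // generate a new buffer object name
glBindBuffer(GL_ARRAY_BUFFER, vbo);         // bind it to the array-buffer target
glBufferData(GL_ARRAY_BUFFER, sizeof(triVertices),
             triVertices, GL_STATIC_DRAW);  // upload the vertex data

// While the buffer is bound, the pointer argument of glVertexPointer is
// interpreted as a byte offset into the buffer instead of a client pointer.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);

glBindBuffer(GL_ARRAY_BUFFER, 0);           // unbind, restoring client-memory usage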

10) Derivation of all model transformation matrices (rotation, translation ,and scaling) in 2D and 3D. You
should understand how geometrical transformations take place. And how to cascade multiple
transformations.

Translations and Rotations on the xy-Plane

We intend to translate a point in the xy-plane to a new place by adding a vector <h, k>. It is not difficult to see that between a point (x, y) and its new place (x', y'), we have x' = x + h and y' = y + k. Let us use a form similar to homogeneous coordinates: a point becomes a column vector whose third component is 1. Thus, point (x, y) becomes the following:

$$\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

Then, the relationship between (x, y) and (x', y') can be put into a matrix form like the following:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & h \\ 0 & 1 & k \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

Therefore, if a line has an equation $Ax + By + C = 0$, after plugging in the formulae for x and y, the line has the new equation $Ax' + By' + (-Ah - Bk + C) = 0$.

If a point (x, y) is rotated by an angle a about the coordinate origin to become a new point (x', y'), the relationship can be described as follows:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos a & -\sin a & 0 \\ \sin a & \cos a & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

Thus, rotating a line $Ax + By + C = 0$ about the origin by an angle a brings it to the new equation:

$$(A\cos a - B\sin a)\,x' + (A\sin a + B\cos a)\,y' + C = 0$$

Translations and rotations can be combined into a single equation like the following:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos a & -\sin a & h \\ \sin a & \cos a & k \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

The above rotates the point (x, y) by an angle a about the coordinate origin and then translates the rotated result in the direction of (h, k). However, if the translation (h, k) is applied first, followed by a rotation of angle a (about the coordinate origin), we have the following:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos a & -\sin a & h\cos a - k\sin a \\ \sin a & \cos a & h\sin a + k\cos a \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

Therefore, rotation and translation are not commutative!

In the above discussion, we always have two matrices, A and B, one transforming x to x' (i.e., x' = Ax) and the other transforming x' back to x (i.e., x = Bx'). You can verify that the product of A and B is the identity matrix. In other words, A and B are inverse matrices of each other. Therefore, if we know one of them, the other is the inverse of the given one. For example, if you know the A that transforms x to x', the matrix that transforms x' back to x is the inverse of A.

Let R be a transformation matrix sending x' to x: x = Rx'. Writing the conic as $\mathbf{x}^{T} Q\,\mathbf{x} = 0$ for a symmetric 3-by-3 matrix Q, plugging this equation of x into the conic equation gives the following:

$$(R\mathbf{x}')^{T} Q\,(R\mathbf{x}') = 0$$

Rearranging terms yields

$$\mathbf{x}'^{T} (R^{T} Q R)\,\mathbf{x}' = 0$$

This is the new equation of the given conic after the specified transformation. Note that the new 3-by-3 symmetric matrix that represents the conic in its new position is $R^{T} Q R$.

Now you see the power of matrices in describing the concept of transformation.

Translations and Rotations in Space

Translation in space is similar to the plane version:

$$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & p \\ 0 & 1 & 0 & q \\ 0 & 0 & 1 & r \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$

The above translates points by adding a vector <p, q, r>.


Rotations in space are more complex, because we can rotate about the x-axis, the y-axis, or the z-axis. When rotating about the z-axis, only the x- and y-coordinates change and the z-coordinate stays the same. In effect, it is exactly a rotation about the origin in the xy-plane. Therefore, the rotation equation is

$$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos a & -\sin a & 0 & 0 \\ \sin a & \cos a & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$

With this set of equations, letting a be 90 degrees rotates (1,0,0) to (0,1,0) and (0,1,0) to (-1,0,0). Therefore, the x-axis rotates to the y-axis, and the y-axis rotates to the negative direction of the original x-axis. This is the effect of rotating 90 degrees about the z-axis.

Based on the same idea, rotating about the x-axis by an angle a is the following:

$$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos a & -\sin a & 0 \\ 0 & \sin a & \cos a & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$

Let us verify the above again with a being 90 degrees. This rotates (0,1,0) to (0,0,1) and (0,0,1) to (0,-1,0). Thus, the y-axis rotates to the z-axis, and the z-axis rotates to the negative direction of the original y-axis.

But rotating about the y-axis is different! This is because of the way angles are measured. In a right-handed system, if your right hand holds a coordinate axis with the thumb pointing in the positive direction, your other four fingers give the positive direction of angle measurement. More precisely, the positive direction for measuring angles about the y-axis is from the z-axis to the x-axis; however, traditionally the angle is measured from the x-axis to the z-axis. As a result, rotating by an angle a about the y-axis in the sense of a right-handed system is equivalent to rotating by an angle -a measured from the x-axis to the z-axis. Therefore, the rotation equations are

$$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos a & 0 & \sin a & 0 \\ 0 & 1 & 0 & 0 \\ -\sin a & 0 & \cos a & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$

Let us verify the above by rotating 90 degrees about the y-axis. This rotates (1,0,0) to (0,0,-1) and (0,0,1) to (1,0,0). Therefore, the x-axis rotates to the negative direction of the z-axis, and the z-axis rotates to the original x-axis.

A rotation matrix and a translation matrix can be combined into a single matrix as follows, where the r's in the upper-left 3-by-3 block form a rotation and p, q, and r form a translation vector. This matrix represents rotations followed by a translation:

$$\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} r_{11} & r_{12} & r_{13} & p \\ r_{21} & r_{22} & r_{23} & q \\ r_{31} & r_{32} & r_{33} & r \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$

You can apply this transformation to a plane and to a quadric surface just as we did for lines and conics earlier.

11) Understand camera parameters and how they are reflected in matrices (remember OpenGL

implementation manifests all parameters in matrix format).


Usually, the camera parameters are represented in a 3 × 4 projection matrix called the camera matrix.
The extrinsic parameters define the camera pose (position and orientation) while the intrinsic
parameters specify the camera image format (focal length, pixel size, and image origin)
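A common way to write this (a sketch using standard computer-vision notation, not from the original text): the camera matrix factors into intrinsic and extrinsic parts,

$$P_{3\times4} = K\,[\,R \mid t\,], \qquad K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$

where R and t encode the pose (orientation and position), $f_x, f_y$ the focal length in pixel units, $(c_x, c_y)$ the image origin (principal point), and s a skew factor.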

12) Derivation of the camera transformation matrix 'C' (which is similar to a model transformation) and derivation of the perspective projection matrix 'V'.
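A hedged sketch of the standard answer: C is built exactly like a model transformation, from a rotation R and a translation T that move the world in front of the camera, C = R · T (this is what gluLookAt() constructs). For V, the symmetric perspective matrix produced by gluPerspective() with field of view $\theta$, aspect ratio $a$, and near/far distances $n$ and $f$ has the standard form

$$V = \begin{pmatrix} \frac{\cot(\theta/2)}{a} & 0 & 0 & 0 \\ 0 & \cot(\theta/2) & 0 & 0 \\ 0 & 0 & \frac{f+n}{n-f} & \frac{2fn}{n-f} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

which maps the view frustum to the canonical cube and places the (negated) depth into the w component for the later perspective divide.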
13) Understand all the spaces that we used (viewing space, frustum space, image space, screen space) and how to move between them (using different matrices).

View space

The view space is what people usually refer to as the camera of OpenGL (it is sometimes also known as
camera space or eye space). The view space is the result of transforming your world-space coordinates
to coordinates that are in front of the user's view. The view space is thus the space as seen from the
camera's point of view. This is usually accomplished with a combination of translations and rotations to
translate/rotate the scene so that certain items are transformed to the front of the camera. These
combined transformations are generally stored inside a view matrix that transforms world coordinates
to view space. In the next chapter we'll extensively discuss how to create such a view matrix to simulate
a camera.

Clip space

At the end of each vertex shader run, OpenGL expects the coordinates to be within a specific range and
any coordinate that falls outside this range is clipped. Coordinates that are clipped are discarded, so the
remaining coordinates will end up as fragments visible on your screen. This is also where clip space gets
its name from.

Because specifying all the visible coordinates to be within the range -1.0 and 1.0 isn't really intuitive, we
specify our own coordinate set to work in and convert those back to NDC as OpenGL expects them.

To transform vertex coordinates from view to clip-space we define a so called projection matrix that
specifies a range of coordinates e.g. -1000 and 1000 in each dimension. The projection matrix then
converts coordinates within this specified range to normalized device coordinates (-1.0, 1.0) (not
directly, a step called Perspective Division sits in between). All coordinates outside this range will not be
mapped between -1.0 and 1.0 and therefore be clipped. With this range we specified in the projection
matrix, a coordinate of (1250, 500, 750) would not be visible, since the x coordinate is out of range and
thus gets converted to a coordinate higher than 1.0 in NDC and is therefore clipped.

Frustum space

The view frustum is typically obtained by taking a frustum (that is, a truncation with parallel planes) of the pyramid of vision, which is the adaptation of the (idealized) cone of vision that a camera or eye would have to the rectangular viewports typically used in computer graphics. Some authors use "pyramid of vision" as a synonym for the view frustum itself, i.e., they consider it already truncated.

The exact shape of this region varies depending on what kind of camera lens is being simulated, but
typically it is a frustum of a rectangular pyramid (hence the name). The planes that cut the frustum
perpendicular to the viewing direction are called the near plane and the far plane. Objects closer to the
camera than the near plane or beyond the far plane are not drawn. Sometimes, the far plane is placed
infinitely far away from the camera so all objects within the frustum are drawn regardless of their
distance from the camera.

Viewing-frustum culling is the process of removing from the rendering process those objects that lie
completely outside the viewing frustum. Rendering these objects would be a waste of resources since
they are not directly visible. To make culling fast, it is usually done using bounding volumes surrounding
the objects rather than the objects themselves.

Local space

Local space is the coordinate space that is local to your object, i.e., the space in which your object begins. Imagine that you've created your cube in a modeling software package (like Blender). The origin of your cube is probably at (0,0,0) even though your cube may end up at a different location in your final application. Probably all the models you've created have (0,0,0) as their initial position. All the vertices of your model are therefore in local space: they are all local to your object.

The vertices of the container we've been using were specified as coordinates between -0.5 and 0.5 with
0.0 as its origin. These are local coordinates.

World space

If we imported all our objects directly into the application, they would probably all be positioned somewhere inside each other at the world's origin of (0,0,0), which is not what we want. We want to define a position for each object to place it inside a larger world. The coordinates in world space are exactly what they sound like: the coordinates of all your vertices relative to a (game) world. This is the coordinate space where you want your objects transformed to in such a way that they're all scattered around the place (preferably in a realistic fashion). The coordinates of your object are transformed from local to world space; this is accomplished with the model matrix.

The model matrix is a transformation matrix that translates, scales and/or rotates your object to place it
in the world at a location/orientation they belong to. Think of it as transforming a house by scaling it
down (it was a bit too large in local space), translating it to a suburbia town and rotating it a bit to the
left on the y-axis so that it neatly fits with the neighboring houses. You could think of the matrix in the
previous chapter to position the container all over the scene as a sort of model matrix as well; we
transformed the local coordinates of the container to some different place in the scene/world.
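Tying the spaces together, with the conventional matrix names used above, a vertex travels as

$$v_{clip} = P_{projection}\cdot V_{view}\cdot M_{model}\cdot v_{local}$$

after which the perspective divide produces normalized device coordinates and the viewport transform produces screen coordinates.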

14) Understand how clipping works, and how clipping in viewing space is a different process than clipping in image space.

Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm draws only the pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the visible volume (frustum) are removed.

Clip regions are often specified to improve rendering performance. A well-chosen clip region allows the renderer to save time and energy by skipping calculations for pixels that the user cannot see. Pixels that will be drawn lie inside the clip region; pixels that will not be drawn lie outside it. More informally, pixels that are not drawn are said to be clipped.

Clipping in image space

In two-dimensional graphics, a clip region can be defined so that pixels are only drawn within the boundaries of a window or frame. Clip regions can also be used to selectively control pixel rendering for aesthetic or artistic purposes. In many implementations, the final clip region is the composite (or intersection) of one or more application-defined shapes and any hardware limitations of the system.

Consider an image-editing application as an example. The application renders the image into a viewport. As the user zooms and scrolls to see a smaller portion of the image, the application can set a clip boundary so that pixels outside the viewport are not rendered. In addition, GUI widgets, overlays, and other windows or frames can hide some pixels from the original image. In this sense, the clip region is the intersection of the application-defined "user clip" and the "device clip" enforced by the system's software and hardware implementation. Application software can use this clip information to save computing time, energy, and storage space, avoiding work on pixels that are not visible.

Clipping in viewing space

In three-dimensional graphics, the terminology of clipping can be used to describe many related features. Typically, "clipping" refers to operations in the plane that work with rectangular shapes, and "culling" to more general methods of selectively processing scene model elements. This terminology is not rigid, and exact usage varies from source to source.

Scene model elements include geometric primitives: points or vertices, line segments or edges, polygons or faces, and more abstract model objects such as curves, splines, surfaces, and even text. In complicated scene models, individual elements can be selectively disabled (clipped) for reasons such as visibility within the viewport (frustum culling), orientation (backface culling), or obscuration by other scene or model elements (occlusion culling, depth or z clipping). There are sophisticated algorithms to efficiently detect and perform such clipping, and many optimized clipping methods rely on specific hardware acceleration logic provided by a GPU.

The concept of clipping can be extended to higher dimensionality using methods of abstract algebraic geometry.

15) What is meant by depth buffers and how could they be used in OpenGL?

In order to use the depth test, the current Framebuffer must have a depth buffer. A depth buffer is an
image that uses a depth image format. The Default Framebuffer may have a depth buffer, and user-
defined framebuffers can attach depth formatted images (either depth/stencil or depth-only) to the
GL_DEPTH_ATTACHMENT attachment point.

If the current framebuffer has no depth buffer, then the depth test behaves as if it is always disabled.
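A minimal hedged usage sketch in the fixed-function style of the earlier examples:

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH); // request a depth buffer for the default framebuffer

glEnable(GL_DEPTH_TEST);  // enable depth testing (it is off by default)
glDepthFunc(GL_LESS);     // a fragment passes if it is closer than the stored depth value

// Each frame, clear the depth buffer along with the color buffer before drawing:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);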

17) Given is a display method which draws three triangles. The triangles are centered on the z-axis and
parallel to the xy-plane. The corresponding vertices of the three triangles have the same x- and y-
coordinates, but different z-values. The first triangle drawn by the display method is red and closest to
the view point. The second triangle is green, and the third triangle blue and furthest from the view
point. The view point is on the z-axis and a perspective projection is used to render the scene. What will
happen if we disabled the depth buffer?

When the depth test is disabled, the depth comparison and any updates to the depth buffer are bypassed, and every fragment is passed on to the next operation. Fragments are then written purely in drawing order: the green triangle overwrites the red one where they overlap, and the blue triangle overwrites both. Since the perspective projection makes the farther triangles project smaller, the result shows the blue (furthest) triangle on top at the center, in front of the green and red ones, instead of the red (closest) triangle hiding the other two.
18) In OpenGL, graphical primitives are defined by vertices which are transformed by the OpenGL pipeline. List the sequence of this pipeline.

The OpenGL rendering pipeline is initiated when you perform a rendering operation. Rendering
operations require the presence of a properly-defined vertex array object and a linked Program
Object or Program Pipeline Object which provides the shaders for the programmable pipeline stages.

Once initiated, the pipeline operates in the following order:

1. Vertex Processing:

1. Each vertex retrieved from the vertex arrays (as defined by the VAO) is acted upon by
a Vertex Shader. Each vertex in the stream is processed in turn into an output vertex.

2. Optional primitive tessellation stages.

3. Optional Geometry Shader primitive processing. The output is a sequence of primitives.

2. Vertex Post-Processing: the outputs of the last stage are adjusted or shipped to different locations.

1. Transform Feedback happens here.

2. Primitive Assembly

3. Primitive Clipping, the perspective divide, and the viewport transform to window space.

3. Scan conversion and primitive parameter interpolation, which generate a number of Fragments.

4. A Fragment Shader processes each fragment. Each fragment generates a number of outputs.

5. Per-Sample Processing, including but not limited to:

1. Scissor Test

2. Stencil Test

3. Depth Test

4. Blending

5. Logical Operation

6. Write Mask
