Multi Mirror
Jnana Sangama, Belagavi, Karnataka - 590018
Submitted by:
ASHARANI H S 4GK20CS003
LAVANYASHREE B C 4GK20CS018
Examiners:
1:
2: Signature with Date
ABSTRACT
The multi mirror effect is an OpenGL project that uses the concept of placing multiple mirrors
back to back, or in parallel, so as to produce multiple images of an original object. The scene has
a room-like appearance with five walls: right, left, top, bottom and front. The front wall
of the room has 3 plane mirrors placed in parallel, so as to produce multiple reflections. In the
center of the room a 3D cone is placed upside down, with a 3D sphere
revolving around it from right to left at a constant speed. This picture of the sphere revolving
around the cone is reflected in all of the parallel mirrors. Hence, a total of 4
images/reflections can be seen, creating an illusion of multiple pictures back to back.
ACKNOWLEDGEMENT
It gives us immense pleasure to present before you our project titled ‘IMPLEMENTATION OF
MULTI MIRROR REFLECTION OF OBJECT USING OPENGL’. The joy and satisfaction that
accompany the successful completion of any task would be incomplete without the mention of those
who made it possible. We are glad to express our gratitude towards our prestigious institution,
Government Engineering College, K R Pete, for providing us with the utmost
knowledge, encouragement and the maximum facilities in undertaking this project.
We wish to express sincere thanks to our respected principal, Dr. K R Dinesh, for all their
support.
We express our deepest gratitude and special thanks to Dr. Hareesh K, H.O.D., Dept.
of Computer Science Engineering, for all her guidance and encouragement.
We sincerely acknowledge the guidance and constant encouragement of our mini-project
guide, Assistant Prof. Mrs. Preethi.
TABLE OF CONTENTS
Page No.
1. INTRODUCTION 1
Computer Graphics
OpenGL Technology
Project Description
2. REQUIREMENT SPECIFICATION 6
Hardware Requirements
Software Requirements
3. INTERFACE AND ARCHITECTURE 7
4. IMPLEMENTATION 17
5. SOURCE CODE 18
6. SNAPSHOTS 32
7. CONCLUSION AND FUTURE WORK 33
8. REFERENCES 34
Multimirror
Chapter-1
INTRODUCTION
The computer is an information processing machine. It is a tool for storing, manipulating
and correlating data. There are many ways to communicate processed information
to the user; computer graphics is one of the most common and effective ways to
communicate processed data to the user.
Computer Graphics
The phrase “Computer Graphics” was coined in 1960 by William Fetter, a graphic designer
for Boeing. The field of computer graphics developed with the emergence of computer
graphics hardware.
Computer graphics are graphics created using computers and, more generally, the
representation and manipulation of pictorial data by a computer.
The development of computer graphics has made computers easier to interact with
and better for understanding and interpreting many types of data. Developments in
computer graphics have had a profound impact on many types of media and have
revolutionized the animation and video game industries.
We can make pictures not only of real-world objects but also of abstract objects
such as mathematical surfaces in 4D, and of data that have no inherent geometry.
Applications of computer graphics: computational biology, computational physics,
computer-aided design, computer simulation, digital art, education, graphic design,
scientific visualization, video games, web design.
3D computer graphics are often referred to as 3D models.
2D computer graphics are mainly used in applications that were originally developed
upon traditional printing and drawing technologies, such as typography, cartography,
technical drawing and advertising.
Applications
Video games
Web design
Virtual reality
OpenGL Technology
OpenGL was developed by Silicon Graphics Inc. (SGI) in 1992 and is widely used in
CAD, virtual reality, scientific visualization, information visualization, and flight simulation.
It is also used in video games, where it competes with Direct3D on Microsoft Windows
platforms.
About OpenGL
OpenGL's basic operation is to accept primitives such as points, lines and polygons, and
convert them into pixels. This is done by a graphics pipeline known as the OpenGL state
machine. Most OpenGL commands either issue primitives to the graphics pipeline, or
configure how the pipeline processes these primitives. Refer to Fig 1.2.3.
GLUT
GLUT is a complete API written by Mark Kilgard, which lets you create windows
and handle messages. It exists for several platforms, which means that a program
that uses GLUT can be compiled on many platforms without (or at least with very few)
changes in the code.
OpenGL is based on state variables. There are many values, for example the current color,
that remain in effect after being specified. That means you can specify a color once and then
draw several polygons, lines or other primitives with that color.
To be hardware independent, OpenGL provides its own data types. They all begin
with "GL", for example GLfloat, GLint and so on. There are also many symbolic
constants; they all begin with "GL_", like GL_POINTS and GL_POLYGON. Finally, the
commands have the prefix "gl", like glVertex3f(). There is a utility library called
GLU, where the prefixes are "GLU_" and "glu". GLUT commands begin with "glut". The
same pattern holds for every library.
Most applications are designed to access OpenGL directly through functions
in these three libraries.
Project Description
There has not been much discussion on how to use hardware graphics, and OpenGL in
particular, to render a scene that contains a mirror. There are certainly demos that use this
effect. One was featured at the SGI booth at SIGGRAPH '95, and 3DFX (a PC game board
company) has a demo that uses a mirror effect. The soon-to-be-released game
"SuperMario64" for the soon-to-be-released "Nintendo64" uses a room with a mirror in it to
help solve a puzzle. Nate Robins has put up some pictures on his web page
(www.cs.utah.edu/~narobins/opengl.html) in which he used OpenGL to create images with
reflections in them.
To address the problem of little to no information on how to implement mirrors, this posting
discusses one implementation of mirrors. It is just one solution, and by no means the most
optimal. The algorithm covers creating mirrors that are restricted to being 2-D but can have
any 2-D shape and can exist anywhere, with any orientation, within a 3-D scene. The
algorithm tries to be fairly general and should be tuned for a specific application; some
obvious areas for tuning are pointed out. Comments, criticisms and enhancements are
encouraged. (tjh@world.std.com) A demo of the application discussed here should be made
available via ftp in ~3 weeks.
The first section covers basic computer graphics concepts. Its main use is to define the
terminology that is used later on; if you want to get to the 'meat', skip this section. Most basic
computer graphics texts go over matrix transformations. The most popular text is "Computer
Graphics: Principles and Practice" by Foley, van Dam et al. For a more in-depth discussion of
transformations and coordinate spaces I'd suggest "Robot Manipulators: Mathematics,
Programming, and Control" by Richard P. Paul (MIT Press).
During run time, the virtual vertices to be computed do not rely on other vertices. This
independence allows the program to exploit GPU parallelism. Because the point projections are
drawn with triangles, after at least three vertices are projected, the algorithm only needs one
point to draw a new triangle. The new vertex is indexed to the two previously projected vertices
that already share a triangle with another vertex. Using this method we ensure that all vertex
positions are updated fast enough when rendering a new frame. The program calls parallel
kernels (simple functions that perform arithmetic operations), so that each kernel executes in
parallel across a set of parallel threads. A stream of virtual vertices is thus a thread
block - a set of concurrent threads that can cooperate among themselves through barrier
synchronization and shared access to a memory space private to the block. In this context, a
virtual object in the scene is programmed as a grid, i.e. a set of thread blocks (vertices) that
may be executed independently and thus may execute in parallel. Since the algorithm to find
the reflection points is essentially a collection of arithmetic operations, the kernel is called to
perform the majority of the operations. In particular, each thread checks whether the corresponding
vertex satisfies the given properties to be projected later, and if so, it sets the value representing the
property for all immediate successors of the vertex. As for performance, the algorithm is
limited by memory bandwidth since, for each vertex update, only a few instructions are executed.
The quadric intersection method used to find the reflection points is a straightforward set of
hierarchical operations dependent on each other. Calling the CUDA kernel ensures that the
arithmetic operations are computed in parallel for several vertices. Once the static data is
processed, the context is created and the real-time calculations begin while the scene is
rendered. At each frame, the position of the camera is updated and accounted for in the
calculations, and the static objects are drawn. If CPVV is enabled, the rendering speeds
up and the most noticeable and animated vertices are drawn to maintain visual accuracy.
For convenience, the reflection is the last thing to be rendered. The rendering stage
begins with no updated information about the reflected vertex coordinates, which arrives
while the objects are being rendered. This is not considered a two-pass rendering, but a
single render pass that suffers an extremely small stall while switching to fixed
functionality to execute the last operations that find the reflection points.
Dept of CSE GECK 2022-23 Page 4
Chapter-2
REQUIREMENTS SPECIFICATION
Hardware requirements:
Pentium or higher processor.
128 MB or more RAM.
A standard keyboard and a Microsoft-compatible mouse.
VGA monitor.
Software requirements:
The graphics package has been designed for OpenGL; hence the machine must
have the Dev C++ software installed, preferably version 6.0 or a later version,
with the mouse driver installed.
Language: C
Chapter-3
INTERFACE AND ARCHITECTURE
OpenGL uses two matrices, 'projection' and 'modelview', to transform a point before it is
rendered to the screen. The projection matrix describes how a point is taken from a 3-D
world and placed on a 2-D screen. The 'modelview' matrix is actually two matrices in one.
The first matrix is the viewing matrix and is determined from where in the 3-D world the
viewer is and where the viewer is looking. The second matrix is the model matrix and
takes a model's original points and places them into the 3-D world. The model and viewing
matrices can be combined without any visual artifacts showing in the final image. In fact all
three matrices can be combined without any visual problems if 'lighting' is not used. Once
the use of lights is introduced, the projection matrix must be kept separate from the
modelview matrix.
1. Model Space. No transformation has been applied and all of the model's points are in
their original state.
2. World Space. The model transformation is applied to the model, which places it into
the 3-D world.
3. Eye Space. The viewing transformation is applied to the points in world space,
positioning them relative to the viewer.
4. Screen Space. The projection is applied, taking the points from eye space and putting
them onto the 2-D screen.
Modeling transformations, in OpenGL, are specified in order from the most global to the
most local. For example, most scenes are composed of a hierarchy of objects. The first
transformation specified would be from the root node of the hierarchy, followed by the
transformations in the order they are encountered as the hierarchy is traversed.
A model's points on this side of the projection are in 2-D screen space.
Define the projection. First it is necessary to tell OpenGL that the projection matrix is
going to be modified. The projection can then be defined. This is usually done with
either glOrtho or glFrustum.
Remember that the points on the other side of this transformation are in eye space.
This should be kept in mind when determining the values passed to these functions.
Tell OpenGL to modify the projection matrix
glMatrixMode( GL_PROJECTION );
Specify the projection
glOrtho( left, right, bottom, top, near, far ) or glFrustum( left, right, bottom, top,
near, far );
A model's points at this point are in eye space.
Inform OpenGL that the modelview matrix is going to be modified and load the viewing
transformation. From this point on only the modelview matrix will be modified.
The modelview matrix is now being used.
glMatrixMode( GL_MODELVIEW );
Load the viewing transformation.
glLoadMatrixf( viewingMatrix );
A model's points at this point are in world space.
Combine the transformation that takes the models' original points and puts them into
the 3-D world. To 'combine' two transformations they are multiplied together.
glMultMatrixf( modelMatrix );
At this point the model's points are in their original state.(model space)
It should be noted that once the model and viewing transformations have been combined, it is
difficult to undo that combination. Typically there are many models within a scene; however,
it should not be necessary to define the viewing transformation for every model. For that
reason OpenGL provides a mechanism for saving and restoring matrices. A simple
application might look like:
// Loop over all of the models. For each model, save the viewing matrix and then apply
// the model matrix. Draw the model and then restore the original viewing matrix.
It is possible to save many matrices on a stack. This makes it possible for applications
to define models relative to each other in a hierarchical manner and then render them in an
efficient way.
The concept of the different spaces (model, world, eye and screen) is critical to most
computer graphics applications. Creating mirrors in a scene is no different.
A simple example of using reflections is to reflect the scene in the floor of a 'walk
through' (i.e. Doom-style) application. In this type of application it is reasonable to assume that the
floor lies in the Z = 0 plane and that the remainder of the scene lies above (+Z direction) the
floor. To get the reflection in the floor it is necessary to render the scene as if it were below
(-Z direction) the floor. To do this, the models' Z coordinates in world space need to be
negated. Note that this is done in world space and is independent of the projection and of
where the viewer is located.
From the previous example the reflection matrix would be used as:
This would draw the scene reflected about the floor. In order to complete the scene it
would then have to be completely redrawn without use of the reflection matrix. Note that
when rendering a shaded image the floor cannot be rendered because it would cover up the
previously rendered reflection. How to overcome this problem will be discussed later.
It is relatively straightforward to render a scene with reflections in the floor. But what if
the mirror is placed and oriented arbitrarily in the world? This complicates the construction
of the reflection matrix.
Conceptually, the steps to construct this reflection matrix are:
1. Start with the mirror and the object in their world-space positions:
| <- Mirror
|
|
|
|
|
| @
| ^Object
|
2. Transform into mirror space, so the mirror lies in the X-Y plane and passes through the
origin. This is the inverse of the mirror's transformation matrix.
@
3. Now reflect about the Z-axis, i.e. scale by x = 1.0, y = 1.0, z = -1.0.
4. Finally, transform back to the mirror's original position. This is the mirror's original
transformation matrix.
|
|
|
|
|
|
@ |
|
|
The position of the object is now reflected about the plane of the mirror.
MirrorT' - The inverse of MirrorT. This maps from world space to mirror space.
(Note: This assumes vertices are represented as column vectors. If the application uses
row vectors the order of the matrix multiplies will have to be reversed.)
The reflection matrix can be used in the same manner as in the previous example.
Backface Culling
There is a problem with the reflection matrix regarding the rendering of shaded images.
Most applications use culling to remove backfacing polygons. This is done by checking
whether the winding of the polygon is clockwise or counterclockwise in screen space. Most
applications define clockwise polygons to be backfacing and have OpenGL cull those
polygons out. The reflection matrix is a 'left handed' matrix, which has the effect of reversing
the winding of polygons in screen space. To counteract this it is necessary to call:
glFrontFace( GL_CW );
This informs OpenGL that front-facing polygons will be clockwise. After the reflected
scene has finished rendering, and before rendering the scene normally, this should be reset
with:
glFrontFace( GL_CCW );
Clipping
In the first example, in which the mirror is the floor of the room, all objects are on one
side of the mirror. If the mirror is allowed to have any position and orientation within the
room, then some objects will be behind the mirror. This is a problem when rendering the
reflected scene because the objects behind the mirror would appear in front of the mirror.
It is possible to eliminate this problem by using the OpenGL arbitrary clipping planes. An
arbitrary clipping plane in OpenGL is defined as follows:
When the clipping plane is defined it is transformed by the modelview matrix, so the plane
exists in eye space. Given that the mirror lies in the Z = 0 plane, its plane equation is
0x + 0y + 1z + 0 = 0. In order to apply this clipping plane correctly, the modelview matrix
must be set up as if the mirror were being rendered.
This definition works if the mirror is reflective on only one side, such as the first
example of the floor or a mirror hanging on a wall. If the mirror is to be two sided, then the
plane equation must be defined so that objects on the side of the mirror containing the
viewpoint are clipped out.
Conceptually, the first three values of the plane equation form a vector that is perpendicular
to the plane being defined. This makes it relatively easy to determine these values for the
plane equation in world space.
The viewpoint should be on the negative side of the plane equation; that is, if the
viewpoint is plugged into the plane equation, the result should be negative. If the result is
not negative, the plane equation's coefficients should be negated.
Z-Buffer State
Another problem of having objects on either side of the mirror is the state of the Z-buffer.
After rendering the reflected scene it is still necessary to render the scene as it would appear
normally. However, objects that are normally behind the mirror should not appear. To
remove these objects, after the reflected scene is rendered, render the mirror into the Z-buffer.
In this manner other objects behind the mirror will be removed. The steps are:
Some applications choose not to use the stencil planes for various reasons, one being that
some machines have hardware support for z-buffering but fall back to software rendering
when using stenciling. If this is the case, the Z-buffer can be used as an image mask. To do
this, first clear the Z-buffer so that it is filled with the 'closest' values. The mirror is then
rendered into the Z-buffer such that the area it covers sets the Z-buffer depth to the furthest
value. The old SGI GL had a function 'zdraw' that made doing this very easy. There is no
analogous function in OpenGL, but things are only slightly more complicated. In
OpenGL, after applying the modelview and projection transformations, z values between the
near and far clipping planes are mapped into the range -1.0 to 1.0, where 1.0 corresponds to the
far clipping plane. What is needed is a transformation that forces all of the resulting z values to
be 1.0. The matrix to do this is:
// Turn on blending
glEnable( GL_BLEND );
// Simply combine what is to be rendered with what's already in the color buffers.
glBlendFunc( GL_ONE, GL_ONE );
Applications should experiment with how the mirror is blended into the scene and with what
material properties the mirror should have to get the best visual effects. A simple guideline,
when simply adding in the mirror as above, is to have little to no emission, ambient and
diffuse color. The shininess component should be relatively large.
A little creativity can produce some nice visual effects. Applying a marble texture to the
mirror can enhance its polished appearance. Using the alpha channel and texturing, it is
possible to vary the amount of reflection across the surface of the mirror. When using an
environment map on a mirror, it reflects both the geometry in the scene and the background
provided by the environment map, with the entire effect changing along with the view.
Chapter-4
IMPLEMENTATION
This implementation was done on my Dell laptop equipped with an Intel Core 2 Duo
processor and a display adapter of the ATI Mobility Radeon HD 4500 Series (Microsoft
Corporation WDDM 1.1). I basically implemented two different scenes. One is that of a lake
which was implemented using C++ and OpenGL. The second scene is that of a larger water
body. For the second scene I chose to use the projected grid concept as my baseline to start
with because of it being very realistic. I implemented this using C++ and DirectX.
After getting the source code, we coded the top view of the number of reflections, where we
can increase and decrease the number of reflections. Initially the object is reflected once;
then we can change the number of reflections using a key function. The object revolves
around the cone in the clockwise direction. There are six faces in total: four are walls, of
which two opposite walls have mirrors and the remaining two are plain walls, plus the top
and the floor. Among the mirror walls only one is displayed; the other is not, because of the
angle of view of that reflection. We applied a unique algorithm to reflect the image in a room
with two mirrors.
The above algorithm cleanly handles multiple mirrors, though not reflections of reflections.
However, the algorithm can be extended to handle such reflections of reflections by making
it support a finite level of recursion. The algorithm presented above simply
iterates through each mirror, tags the pixels visible in the given mirror, then renders the
mirror's reflected view to the appropriately tagged pixels. Only two stencil values (0 and 1)
are used in this process. A recursive algorithm takes advantage of the increment and
decrement stencil operations to track the depth of reflection using a depth-first rendering
approach. The program describes such a recursive reflection algorithm in detail. Figure 4 shows
the sort of hall-of-mirrors effect that is possible by recursively applying stenciled planar
reflections.
One issue to consider when implementing recursive reflections, which can even be an issue
with a single reflection, is that reflections, particularly deep recursive ones, tend to require a
far clipping plane that is much further away than would ordinarily be required.
Chapter-5
SOURCE CODE
#include<windows.h>
#include <assert.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>
#ifndef sgi
#define trunc(x) ((double)((int)(x)))
#endif
int draw_passes = 8;
int headsUp = 0;
typedef struct {
    GLfloat verts[4][3];
    GLfloat scale[3];
    GLfloat trans[3];
} Mirror;
Mirror mirrors[] = {
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, glutGet(GLUT_WINDOW_WIDTH),
        0.0, glutGet(GLUT_WINDOW_HEIGHT), -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glRasterPos2i(x, y);
for (p = s, lines = 0; *p; p++) {
    if (*p == '\n') {
        lines++;
        glRasterPos2i(x, y - (lines * 30));
    }
    glutBitmapCharacter(GLUT_BITMAP_8_BY_13, *p);
}
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
}
void init(void)
{
static GLfloat lightpos[] = {.5, .75, 1.5, 1};
glEnable(GL_DEPTH_TEST);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, lightpos);
glEnable(GL_CULL_FACE);
cone = gluNewQuadric();
qsphere = gluNewQuadric();
}
void make_viewpoint(void)
{
if (headsUp) {
float width = (1 + 2*(draw_passes/nMirrors)) * 1.25;
float height = (width / tan((30./360.) * (2.*M_PI))) + 1;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
}
void draw_room(void)
{
/* material for the walls, floor, ceiling */
static GLfloat wall_mat[] = {1.f, 1.f, 1.f, 1.f};
glBegin(GL_QUADS);
/* floor */
glNormal3f(0, 1, 0);
/* ceiling */
glNormal3f(0, -1, 0);
glVertex3f(-1, 1, -1);
glVertex3f(1, 1, -1);
glVertex3f(1, 1, 1);
glVertex3f(-1, 1, 1);
/* left wall */
glNormal3f(1, 0, 0);
glVertex3f(-1, -1, -1);
glVertex3f(-1, 1, -1);
glVertex3f(-1, 1, 1);
glVertex3f(-1, -1, 1);
/* right wall */
glNormal3f(-1, 0, 0);
glVertex3f(1, -1, 1);
glVertex3f(1, 1, 1);
glVertex3f(1, 1, -1);
glVertex3f(1, -1, -1);
/* far wall */
glNormal3f(0, 0, 1);
glVertex3f(-1, -1, -1);
glVertex3f(1, -1, -1);
glVertex3f(1, 1, -1);
glVertex3f(-1, 1, -1);
/* back wall */
glNormal3f(0, 0, -1);
glVertex3f(-1, 1, 1);
glVertex3f(1, 1, 1);
glVertex3f(1, -1, 1);
glVertex3f(-1, -1, 1);
glEnd();
}
void draw_cone(void)
{
static GLfloat cone_mat[] = {0.f, .5f, 1.f, 1.f};
glPushMatrix();
glTranslatef(0, -1, 0);
glRotatef(-90, 1, 0, 0);
glPopMatrix();
}
glPushMatrix();
glTranslatef(0, -.3, 0);
glRotatef(angle, 0, 1, 0);
glTranslatef(.6, 0, 0);
glPopMatrix();
}
GLdouble get_secs(void)
{
return glutGet(GLUT_ELAPSED_TIME) / 1000.0;
}
glMatrixMode(GL_PROJECTION);
/* must flip the cull face since reflection reverses the orientation
* of the polygons */
glCullFace(newCullFace);
return newCullFace;
}
glCullFace(cullFace);
}
/* draw mirror into stencil buffer but not color or depth buffers */
glColorMask(0, 0, 0, 0);
glDepthMask(0);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
draw_mirror(&mirrors[curMirror]);
glColorMask(1, 1, 1, 1);
glDepthMask(1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
draw_room();
}
void draw(void)
{
if(ven==0)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, 3000, 0, 3000);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glPush();
virtven(150, 180, "multimirror");
virtven(50, 100, "PRESS B KEY TO CONTINUE");
virtven(5,5,"ven");
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();
glutSwapBuffers();
glDisable(GL_DEPTH_TEST);
glDisable(GL_LIGHTING);
glDisable(GL_LIGHT0);
}
if(ven==1)
{
GLenum err;
GLfloat secs = get_secs();
glDisable(GL_STENCIL_TEST);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|
GL_STENCIL_BUFFER_BIT);
if (!headsUp) glEnable(GL_STENCIL_TEST);
draw_scene(secs, draw_passes, GL_BACK, 0, (unsigned)-1);
glDisable(GL_STENCIL_TEST);
if (headsUp) {
/* draw a red floor on the original scene */
glDisable(GL_LIGHTING);
glBegin(GL_QUADS);
glColor3f(1, 0, 0);
glVertex3f(-1, -.95, 1);
glVertex3f(1, -.95, 1);
err = glGetError();
if (err != GL_NO_ERROR) printf("Error: %s\n", gluErrorString(err));
glutSwapBuffers();
}
}
/* ARGSUSED1 */
void key(unsigned char key, int x, int y)
{
switch(key) {
case '.': case '>': case '+': case '=':
draw_passes++;
printf("Passes = %d\n", draw_passes);
make_viewpoint();
break;
case ',': case '<': case '-': case '_':
draw_passes--;
if (draw_passes < 1) draw_passes = 1;
printf("Passes = %d\n", draw_passes);
make_viewpoint();
break;
case 'h': case 'H':
/* heads up mode */
headsUp = (headsUp == 0);
make_viewpoint();
break;
case 27:
exit(0);
case 'b':
case 'B':
ven=1;
glEnable(GL_DEPTH_TEST);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
make_viewpoint();
break;
}
}
#define MIN_COLOR_BITS 4
#define MIN_DEPTH_BITS 8
glutMainLoop();
return 0;
}
void glPush(void)
{
char text[256];
sprintf(text, "%c%c%c", 86, 69, 78); /* writes "VEN" */
virtven(5, 5, text);/* dont delete this*/
}
Chapter-6
SNAPSHOTS
Conclusion
Our approach is simple and nearly perfectly accurate for real reflections. It is as fast as
cube-mapped techniques, and the accuracy of our method remains proper for objects at any
range. Future directions include further optimizing the computation of the forward projection
model solution. In the field of graphics we intend to test these methods and compare them in
the rendering of images with specular objects represented by arbitrary surfaces that can be
approximated by quadrics. We also intend to implement our method entirely in the pixel
shader to test performance gains.
Future Enhancement
Using this concept we can reflect an object with the realism of a true projection. We can
implement the multi mirror concept using more mirrors, and the concept can be implemented
in different configurations, such as hexagonal or octagonal arrangements. Queries can also be
added to report the number of reflections.
A sphere revolves around the cone and is reflected in the mirror; the reflection
in one mirror is reflected in another.
The concept of mirror reflection is implemented to capture objects realistically.
This can be applied in secure places such as banks and other confidential locations,
by detecting any change through the reflections.
Limitations
As for the limitations of our approach, we identified some issues to be addressed. Our rendering
method suffers a speed loss at one point of the workflow, when the scene is overly tessellated
(over 12000 vertices or 5000 CPVVs). When a new object is added to the scene, its
rendered reflection takes at least 24 frames to appear on the reflector surface. This happens
because a virtual object has to be created for the newly added one, so the algorithm has to
fetch its textures and vertex positions in order to input this object into the real-time
calculations and rendering. However, if the object is animated, the rendered reflection will not
have a noticeable latency. Another limitation is the computation of unnecessary points, due to
occlusion issues: vertices occluded from the reflector are computed as well, despite being
invisible, and the self-occluded part of the reflector is computed and projected too, even
with the CPVV method enabled. This problem will be solved in the near future with a different
approach to invisible vertices. Finally, another limitation that will be addressed in the future
is the non-reflection of animated dynamic shadows. This means that, if a new animated
object is added to the scene, its shadow will not be considered in the reflection; only the object
will be projected. In the near future we will deal with this limitation by creating virtual
shadows for each new object.
REFERENCES
Websites:
1. www.opengl.org
2. http://en.wikipedia.org/wiki/OpenGL
3. http://en.wikipedia.org/wiki/OpenGL_Utility_Toolkit