
Algorithms for Atmospheric Special Effects in Graphics and their Implementation

M.Tech Project - Second Stage Report


Submitted in partial fulfillment of the requirements for the degree of

Master of Technology

by

Pisith Hao
Roll No: 06329201

under the guidance of

Prof. Sharat Chandran

Department of Computer Science and Engineering
Indian Institute of Technology Bombay
Mumbai

Abstract
The computation speed of current graphics hardware keeps improving, which makes it increasingly feasible to simulate complex environments, especially atmospheric special effects; achieving a high degree of realism, however, remains challenging. In this report, we present special effects related to rainy environments and a brief introduction to Delta3D, the engine selected for implementing our algorithms. Three principal concepts for simulating the rain falling effect, using a particle system editor, a physical properties-based method, and cube maps with environment mapping, together with their drawbacks, are also covered.

Introduction

Special effects, especially atmospheric effects such as rain, clouds, snow and other outdoor phenomena in interactive applications (games, training systems, etc.), are important in creating realistic environments. As the computation speed of graphics hardware improves, a high degree of realism is also required to immerse the user in a visually convincing environment. However, rendering these effects is a hard problem, especially in real time. Rain is a very complex atmospheric physical phenomenon and consists of numerous effects (Figure 1), such as: rain falling (light rain, moderate rain, heavy rain, and extreme rain), puddling on the streets from raindrops, splashing of raindrops, water rippling, raindrops dripping off object surfaces, and water streaming off streets and objects. The objective of the project is to create algorithms for the aforementioned effects and implement them in Delta3D, which is detailed in the next section; moreover, the effects should be able to run in real time. The remainder of the report is organized as follows. Section 2 presents Delta3D briefly, Section 3 describes three concepts for creating the rain falling effect, and Section 4 presents the conclusion and future work.

Figure 1: Effects of a rainy environment that are the objective of the project

Delta3D

Delta3D [1][2] is an open source engine which can be used for games, simulations, or other graphical applications. Its modular design integrates other well-known open source projects such as Open Scene Graph [3], Open Dynamics Engine [4], Character Animation Library [5], and OpenAL [6], as well as projects such as Trolltech's Qt, Crazy Eddie's GUI (CEGUI), Xerces-C, Producer, InterSense Tracker Drivers, HawkNL, and the Game Networking Engine (GNE). It has a high-level, cross-platform (Win32 and Linux) C++ API designed with programmers in mind to soften the learning curve, but always makes lower levels of abstraction available to the developer. Programmers can develop content through the level editor, write Python scripts against the Delta3D API, or use the underlying tools directly. Delta3D uses the standard GNU Lesser General Public License (LGPL) [7]. It is completely modular and allows a best-of-breed approach whereby any module can be swapped out if a better option becomes available. Figure 2 shows the Delta3D architecture: all the products in the bottom layer are existing open source projects, and Delta3D unifies them into one consistent API with associated tools.

Figure 2: All the products in the bottom layer are existing open source projects. Delta3D unifies them into one consistent API with associated tools

2.1 Effects in Delta3D

Delta3D contains many effects such as motion models (Fly, UFO, Walk, Orbit, and First Person), smoke, explosions, animation blending, and notably a Graphical Particle Effect Editor. This editor allows developers to use a graphical tool to change the properties of a particle system and see the effects immediately in real time.

2.2 Why Delta3D

Delta3D was selected for implementing our algorithms because it leverages the success of existing open source tools such as OpenSceneGraph and contains an interesting particle system editor whose source code is available and may be modified to meet our requirements (realistic special effects). Hence it is a good starting point for our project.

Rain Falling Effect

In this section, we describe three main concepts for creating the rain falling effect: using the Graphical Particle Effect Editor [8], a physical properties-based method [9], and using cube maps and environment mapping.

3.1 Using the Delta3D Particle Editor

The principal concept is first to create a texture of a raindrop and map it onto a small billboard (particle) that always faces the viewer (camera). Second, a rectangular emitter is made to shoot particles randomly straight down from a particular height. Third, some particle properties have to be set, such as the life of a drop (for example 2 seconds; it is best to have raindrops disappear when they hit the ground), the size of a drop (for example from 0.15 m to 0.25 m), the particle creation rate (for example from 150 particles/sec to 200 particles/sec), and the initial velocity range of drops (for example from 6 m/s to 10 m/s), so that raindrops do not fall at regular rates, in regular numbers, or in regular patterns. Even though the particles are supposed to be random, most people will notice a pattern after a while. So finally, to further prevent this, several emitters (or layers) have to be created with different values for these properties and, especially, with different raindrop textures.

Whenever a raindrop hits a wet flat surface, such as the surface of the water in a pool, a circular ripple (water ring) occurs. This effect can be made with the same particle system editor in a similar manner. First, a water ring texture has to be mapped onto a particle that always lies flat on the ground. Then a rectangular emitter is made to shoot particles randomly, but the shooting velocity has to be set to zero to keep the particles from floating upwards. Lastly, the size of the particle (water ring) has to grow from 0.00 m to 0.16 m so that the enlargement of the ring can be seen.

Figures 3 and 4 show two raindrop textures and two water ring textures respectively. These textures and a simple terrain were imported into the particle system editor, whose properties were set with the example values mentioned above. Our result is shown in Figure 5. Figure 6 presents an unexpected result when raindrops are viewed downward from a particular height: this happens because the raindrop textures are mapped on billboards that always face the camera (viewer).
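The per-layer property ranges above can be sketched as a small configuration structure with randomized per-drop sampling. This is an illustrative sketch only; the structure and helper names below are hypothetical and are not Delta3D API names.

```cpp
#include <cassert>
#include <random>

// Illustrative parameter ranges for one rain emitter layer,
// using the example values from the text (not actual Delta3D types).
struct RainLayer {
    float lifeSeconds = 2.0f;                      // drop vanishes near the ground
    float minSize  = 0.15f, maxSize  = 0.25f;      // metres
    float minRate  = 150.0f, maxRate = 200.0f;     // particles/sec
    float minSpeed = 6.0f,  maxSpeed = 10.0f;      // m/s, straight down
};

struct Drop { float size, speed; };

// Sample one drop so that drops do not fall at regular rates or sizes.
Drop sampleDrop(const RainLayer& layer, std::mt19937& rng) {
    std::uniform_real_distribution<float> size(layer.minSize, layer.maxSize);
    std::uniform_real_distribution<float> speed(layer.minSpeed, layer.maxSpeed);
    return { size(rng), speed(rng) };
}
```

Several such layers with different ranges and textures would then be combined, as the text describes, to break up visible repetition.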

Figure 3: two raindrop textures

Figure 4: two water ring textures

Figure 5: Rain falling using a simple terrain, the two raindrop textures, and the two water ring textures above

Figure 6: Rain falling viewed downward from a particular height

Figure 7: Shape of drops: (a) Compared shapes of raindrops of radii R = 1 mm, 1.5 mm, 2 mm, 2.5 mm and 3 mm [Ros00]. (b) Shape of a droplet of undistorted radius 0.5. (c) Shape of a droplet of undistorted radius 1.0. (d) Shape of a droplet of undistorted radius 3.0. (e) Shape of a droplet of undistorted radius 4.5

3.2 Using a Physical Properties-Based Method

This method is based on the physical properties (geometrical, dynamic and optical) of raindrops. Ross [10] shows that falling raindrops look more like ellipsoids: small raindrops are almost spherical, and bigger raindrops get flattened at the bottom. Figure 7 shows typical raindrop shapes for common undistorted radii. Figure 8 illustrates the directions of reflected and refracted rays: at an interface, the law of reflection gives the direction of the reflected ray, and Snell's law gives the direction of the refracted ray. The basic idea is first to capture an image of the background scene into a texture; this texture is then mapped onto the raindrops according to optical laws (only Snell's law). The law of reflection is not used, since a raindrop appears rather small on the screen and reflection is visible only in a small part of each raindrop. Figure 9 compares a water drop simulated with this concept (left) to an image of a real falling droplet (right); a photograph of the original scene was used as the background image for the simulated drop. The bottom images show a close view of the original and simulated drops. As the real drop has just left the tap, its shape is not yet stabilized and not perfectly spherical, so it does not behave exactly like the simulated one. Figure 10 presents rain falling simulated with this method. The image still lacks realism, because the many other raindrops should also be visible refracted inside the biggest raindrop, not only the background. In addition, as this method ignores reflection, caustics on raindrops are not seen when the light source is placed behind the viewer. Collision detection and merging of raindrops are not included in this method either.
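Snell's law, the only optical law this method applies, can be written as a small vector routine. The sketch below is a standard formulation (not code from [9]); it returns false when total internal reflection occurs, in which case no refracted ray exists.

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-vector helpers (plain math, not tied to any engine).
struct Vec3 { double x, y, z; };
static Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Refract unit incident direction I through a surface with unit normal N,
// with eta = n1 / n2. Writes the refracted direction into T; returns false
// on total internal reflection (sin^2 of the transmitted angle exceeds 1).
bool refract(Vec3 I, Vec3 N, double eta, Vec3& T) {
    double cosI  = -dot(N, I);                     // cosine of incident angle
    double sin2T = eta * eta * (1.0 - cosI * cosI); // Snell: sinT = eta * sinI
    if (sin2T > 1.0) return false;                 // total internal reflection
    double cosT = std::sqrt(1.0 - sin2T);
    T = add(scale(I, eta), scale(N, eta * cosI - cosT));
    return true;
}
```

For air into water, eta is roughly 1/1.33; for the ray leaving the drop, eta is roughly 1.33, and grazing exit angles trigger the total-internal-reflection branch.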

Figure 8: Reflection / refraction of a ray in a raindrop

Figure 9: Left: An image simulated with this concept. Right: A photograph of a real raindrop

Figure 10: Rain falling simulated using the physical properties-based method

3.3 Using Cube Map and Environment Mapping

Environment mapping is a rendering technique used to create photorealistic images of an object by mapping the environment onto the object. It can be done using a cube map, whose six faces are mapped with six images captured from the scene surrounding the object. The main point of this method is therefore to map the environment onto each raindrop. The method is based on the cube map and environment mapping work of [11] (by Biswarup Choudhury, Ph.D. student working under Prof. Sharat Chandran). The following subsection briefly presents the three primary functions of the environment mapping: Pattern Generation, Map Generation, and Relighting and Compositing. The next subsection details our main method for making realistic rain falling.

3.3.1 Three primary functions
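A cube-map lookup picks the face hit by a direction vector: the face corresponding to the axis with the largest absolute component. The sketch below illustrates this standard rule; the 0..5 face numbering (+X, -X, +Y, -Y, +Z, -Z) is the common convention and an assumption here, not something mandated by the method above.

```cpp
#include <cassert>
#include <cmath>

// Return which cube-map face a lookup direction hits: the axis with the
// largest absolute component wins, and the sign picks the positive or
// negative face. Faces numbered +X,-X,+Y,-Y,+Z,-Z = 0..5 (a convention).
int cubeFace(double x, double y, double z) {
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0 ? 0 : 1; // +X / -X
    if (ay >= az)             return y >= 0 ? 2 : 3; // +Y / -Y
    return z >= 0 ? 4 : 5;                           // +Z / -Z
}
```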

Pattern Generation. This function makes a unique color code for each pixel of the cube map by generating patterns. Each pattern (a set of colors) is mapped onto the cube map when finding the map. The number of patterns depends on the number of pixels in the cube map and the number of colors used for coding them: for faces of resolution KxK and c coding colors,

number of patterns = log(|Pixels|) / log(|Colors|) = log(K x K x 6) / log(c).

For example, a cube map with faces of resolution 512x512 and 3 coding colors requires 13 patterns.
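The pattern count above follows because each pattern contributes one base-c digit to every pixel's code, so enough digits are needed to distinguish all 6*K*K pixels. A one-line check of the formula (illustrative arithmetic, not the actual Matlab code):

```cpp
#include <cassert>
#include <cmath>

// Number of patterns needed to give every cube-map pixel a unique code:
// 6 faces of K x K pixels, c coding colors, one base-c digit per pattern,
// hence ceil(log(6*K*K) / log(c)) patterns.
int patternCount(int K, int c) {
    double pixels = 6.0 * K * K;
    return static_cast<int>(std::ceil(std::log(pixels) / std::log(static_cast<double>(c))));
}
```

With K = 512 and c = 3 this gives 13, matching the example in the text (3^13 = 1,594,323 codes cover the 1,572,864 pixels).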

Figure 11: Rendered images of a transparent torus, exhibiting refractive and reflective properties, under novel environments

Map Generation. This function generates a map that stores the location and value of the contributing pixels on the cube map, by checking each pixel color of the input images. These input images are created by rendering an object A placed at location X inside the cube whose six faces are mapped with the generated patterns. Since the patterns contain unique color codes, so do the created input images; from these codes, the contributing pixels on the cube map can be found.

Relighting and Compositing. This function uses the generated map to produce new good-quality images by rendering object A at location X in the cube whose six faces are now mapped with six images captured from any scene. Once we have the map of object A, we can render it in a new real-world environment and see the illumination effects created by refraction and reflection through object A. Figure 11 shows rendered images of a transparent torus, exhibiting refractive and reflective properties, under novel environments. Hence, if object A is a set of raindrops, we can obtain convincing rain falling.

Figure 12: Division of the huge cube into L layers, R rows, and C columns in order to get L*R*C small cubes

3.3.2 Our main method

The key idea is to initially make an imaginary huge cube in the scene and set the camera (the viewpoint) in its middle. Wherever the camera moves (forward, backward, leftward, rightward, upward, downward), the cube moves along with it, so that the camera is always located in the middle of the cube. We then divide the cube into L layers, and each layer into R rows and C columns, giving L*R*C small cubes inside the huge cube. Finally, using the functions above, we create a map for each small cube that helps render realistic raindrops. Figure 12 presents the division of the huge cube into L layers, R rows, and C columns, yielding L*R*C small cubes.

Our algorithm for creating realistic rain falling has two parts: a pre-processing step and a processing step. In the pre-processing step, we use Pattern Generation and Map Generation to generate a map for a raindrop placed in the middle of each small cube, one by one; eventually, we get L*R*C maps to be used in the processing step. In the processing step, we capture six images of the scene every rendered frame and map them onto the six faces of the huge cube. Then we check the position of each raindrop and use the corresponding map to render it. For example, if a raindrop is positioned in small cube number three, then map number three is used by the Relighting and Compositing function to render that raindrop.

Rain is generated only inside the huge cube, since in the pre-processing step the maps are created only for the small cubes inside it; the huge cube is made big enough to convince viewers of the realism of the falling rain. The size of each small cube depends on its distance from the camera: the farthest small cubes are the biggest, because the eye cannot clearly see the small raindrops inside them.

Algorithm

Pre-processing step:
1. Generate patterns
2. For each small cube in the huge cube: generate a map for a raindrop set in the middle of the small cube

Processing step:
For each raindrop that is visible:
1. Get the raindrop position
2. Find the small cube that the raindrop is inside
3. Use the map of that small cube to render the raindrop
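The "find the small cube" step of the processing loop amounts to quantizing the raindrop's camera-relative position into the L x R x C grid. A minimal sketch, assuming a camera-centred huge cube of half-size H and uniform small cubes (the report's actual cubes vary in size with distance, so this is a simplification); the indexing order below is illustrative.

```cpp
#include <cassert>

// Map a raindrop position (relative to the camera, which sits at the
// centre of the huge cube of half-size H) to the index of the small cube
// containing it, for an L x R x C subdivision. Uniform cells assumed.
int smallCubeIndex(double x, double y, double z, double H,
                   int L, int R, int C) {
    // Normalise each axis from [-H, H) to a cell index, clamped to the grid.
    auto cell = [H](double v, int n) {
        int i = static_cast<int>((v + H) / (2.0 * H) * n);
        return i < 0 ? 0 : (i >= n ? n - 1 : i);
    };
    int layer = cell(y, L), row = cell(z, R), col = cell(x, C);
    return (layer * R + row) * C + col;   // flat index into the map array
}
```

The returned index selects which pre-computed map the Relighting and Compositing function uses for that drop.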

3.4 Implementation and result

The three functions Pattern Generation, Map Generation and Relighting and Compositing have already been implemented in Matlab, so we can use the first two (Matlab code) for our pre-processing step and convert only the third from Matlab to C++ for the processing step. So that the new C++ function can read the map, a Matlab function was created to transform the Matlab-format map into a binary file. After examining the Matlab code of the three functions, it seemed that the map structure and some parts of the code could be optimized to gain computation speed. Hence we restructured the map and modified some of the code, particularly in the Relighting and Compositing function. As a result, the running time of the modified function is much better than that of the original, while both produce identical output images. The detailed results of our experiment are shown below.

Assumptions used by both the original and the new Matlab code:
- Resolution of each cube face, input images, and output image: 512x512
- Number of colors for making unique color codes: 3
- Number of reflections: 4

Results with the original Matlab code:
- Map computation: 70 seconds (average)
- Relighting and Compositing: 30 seconds (average)

After restructuring the map and partly modifying the code in the Map Generation and Relighting and Compositing functions:
- Map computation: 70 + 10 = 80 seconds (average)
- Relighting and Compositing: 30 - 10 = 20 seconds (average)

After slightly changing the Map Generation code and modifying the main part of the Relighting and Compositing code:
- Map computation: 80 - 5 = 75 seconds (average)
- Relighting and Compositing: 20 - 19 = 1 second (average)

The three functions do not use any geometry information, so their computation time depends not on object complexity but on the output image resolution. As raindrops are very small and move very quickly, the resolution of the raindrop texture (output image) can be less than 128x128, in which case the compositing time also drops below 1/4 second. Hence, our realistic rain falling can hopefully run in real time.
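The 1/4-second claim follows from the cost scaling with pixel count. A rough check of that arithmetic (illustrative only; actual timings will vary with constant factors):

```cpp
#include <cassert>

// Compositing cost scales with output-pixel count, not scene geometry,
// so a square resolution change scales the time by (newRes/baseRes)^2.
// E.g. dropping from 512x512 (~1 s) to 128x128 divides the time by 16.
double scaledTime(double baseSeconds, int baseRes, int newRes) {
    double ratio = static_cast<double>(newRes) / baseRes;
    return baseSeconds * ratio * ratio;
}
```

Scaling the measured 1 second at 512x512 down to 128x128 predicts about 0.0625 s, comfortably under the 1/4 second budget quoted above.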


Conclusion and Future Work

In conclusion, we have found a new method that generates visually convincing rain falling using cube maps and environment mapping; its results are far more realistic than those of the other two methods. We have also heavily optimized the original Matlab code for environment mapping. For the third stage, we will focus on improving computation speed rather than the realism of raindrops, as our new result is currently almost perfect. We plan to finish the following seven steps sequentially; the starting date and number of weeks for each step are detailed in the time schedule table below.
1. Continue the implementation of the new method, since it was not completed in the second stage
2. Integrate the new method into the Delta3D Particle System Editor
3. Optimize rendering speed
4. Make streaked rain falling
5. Add collision detection to the new method
6. Create rain falling effects: light rain, moderate rain, heavy rain, and extreme rain
7. Create the raindrop splashing effect

Third Stage Project Time Schedule

Step   Starting Date   Weeks to Finish
1      February 1      3
2      February 22     3
3      March 14        3
4      April 4         3
5      April 25        3
6      May 16          2
7      May 30          3
Total                  20 weeks

Rest: 3 weeks (June 20, 2008 - July 10, 2008)


References

[1] Delta3D. www.delta3d.org. Last viewed on July 11, 2007.
[2] Rudy Darken, Perry McDowell, and Erik Johnson. Projects in VR: the Delta3D open source game engine. IEEE Computer Graphics and Applications, Vol. 25, No. 3, May-June 2005.
[3] Open Scene Graph. http://www.openscenegraph.org/. Last viewed on July 11, 2007.
[4] Open Dynamics Engine. http://www.ode.org/. Last viewed on July 11, 2007.
[5] 3D Character Animation Library. https://gna.org/projects/cal3d/. Last viewed on July 11, 2007.
[6] OpenAL, Cross-Platform 3D Audio. http://www.openal.org/. Last viewed on July 11, 2007.
[7] GNU Lesser General Public License. http://www.gnu.org/licenses/lgpl.html. Last viewed on July 11, 2007.
[8] Falling Rain Particle Effect. http://www.delta3d.org/filemgmt/visit.php?lid=80. Last viewed on July 11, 2007.
[9] Pierre Rousseau, Vincent Jolivet, and Djamchid Ghazanfarpour. Realistic real-time rain rendering. Computers & Graphics 30, 4 (2006), 507-518. Special issue on Natural Phenomena Simulation.
[10] Ross, O.N. (2000). Optical Remote Sensing of Rainfall Microstructures. Freie Universität Berlin, Fachbereich Physik, Diplom Thesis, 134 pp.
[11] Biswarup Choudhury. Fast Color-Space Decomposition Based Environment Matting. IIT Bombay, CSE, 2007 (personal communication).

13
