Animation Technologies Assignment
04/06/2011
system that allows us to experience the planet Mars by driving Mars Rovers from Earth. Incidentally, I found it hard to get any more information on this, as many sites on this specific topic seem to be forbidden!

Looking at A Multimode Immersive Conceptual Design System for Architectural Modelling and Lighting [1]

This experiment was carried out earlier this year and claims to be the first immersive system that allows simple conceptual design of textured geometric models together with basic lighting design. The system builds on and combines techniques previously researched and tested by others.

The real world environment

First, I will look at the physical environment that enables the 3D holographic effect used in the experiment. The group used a four-sided, stereoscopic, head-tracked immersive display: specifically, the BARCO i-Space 4 wall system [7], driven by an NVIDIA QuadroPlex. An example is pictured below:
Image from reference [7]

Here, stereoscopic projectors surround the area and bounce their projections off reflectors into the i-Space. The same technology used in 3D films is at work: because of the separation between our eyes, each eye has a slightly different viewpoint, and when the images from both eyes are sent to the brain, the difference between them creates a perception of depth. Each projector pictured above is actually a pair, each internally polarised to carry the information for the corresponding left or right eye. The user must wear glasses to filter the different polarisations. Projecting stereoscopic imagery from these different angles allows virtual information to be fed in by the computer and interpreted by the user in 3D space.

User interaction

Interaction between the user and the software takes place through a bi-manual interface. The user's head is tracked, and input comes from their non-dominant hand (which is also tracked) and a wand held in their dominant hand. The tracking uses ultrasonic technology. The wand serves as an input device like a mouse, allowing the user to click and drag different widgets, manipulate meshes and adjust the lighting. The user can perform the following actions on the model: add a door, add a window, connect rooms, control where the sun is, and adjust window and wall sizes and positions. They can also drag a human avatar inside a room, which changes the user's viewpoint to a 1:1 representation of the room. The user can manipulate distant objects using the HOMER technique, selecting an object by pointing a light ray at it and then attaching a virtual hand to the object.

Different Modes
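The HOMER technique described above (select a distant object with a light ray, then manipulate it through a virtual hand attached to it) can be sketched as follows. This is only my own minimal illustration of the idea, not the authors' implementation; the sphere-based scene and all names are hypothetical.

```python
import numpy as np

def ray_pick(origin, direction, centers, radii):
    """Return the index of the nearest sphere hit by the ray, or None.

    A ray hits a sphere when the perpendicular distance from the
    sphere's centre to the ray is no more than the radius.
    """
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for i, (c, r) in enumerate(zip(centers, radii)):
        t = np.dot(c - origin, direction)      # distance along the ray
        if t < 0:
            continue                           # sphere is behind the user
        closest = origin + t * direction
        if np.linalg.norm(c - closest) <= r and t < best_t:
            best, best_t = i, t
    return best

def homer_move(hand_start, hand_now, obj_start, scale):
    """Once attached, the virtual hand drags the object with the real
    hand, scaled up so distant objects can be repositioned comfortably."""
    return obj_start + scale * (hand_now - hand_start)

# Hypothetical scene: two spheres in front of the user at the origin.
centers = [np.array([0.0, 0.0, -5.0]), np.array([2.0, 0.0, -3.0])]
radii = [0.5, 0.5]
picked = ray_pick(np.zeros(3), np.array([0.0, 0.0, -1.0]), centers, radii)
print(picked)  # 0: the sphere straight ahead is selected

# Moving the real hand 10 cm right shifts the distant object 50 cm.
moved = homer_move(np.zeros(3), np.array([0.1, 0.0, 0.0]), centers[0], 5.0)
print(moved)
```

The scaling factor is the point of HOMER: small, comfortable hand movements map to large movements of objects that are far beyond arm's reach.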
This shows the system being used in mixed mode, which combines table mode and immersive mode. INRIA / Photo Kekkonen [1]

Another key feature is the ability to work in three different modes. In table mode the user sees a miniature version of the virtual world, enabling quicker and easier editing. Immersive mode offers a 1:1 representation, which suits closer inspection. Immersive mode is limited by the real-world space, but this can be overcome by flying through the environment using the fly stick. There is also mixed mode, which combines immersive and table modes. The user switches between modes by adjusting the velocity of their hand.

Software and System

The group based the software on an earlier system by Cabral et al. [3], which allows users to manipulate geometry and texture directly and automatically updates the model to respond to these changes. The user selects from a variety of model pieces in order to put the model together. Some vertices remain fixed while others are variable, which allows for manipulation without distorting the basic shape. The system was developed in C++/OpenGL using in-house libraries.

User evaluations

Users had problems judging where some virtual elements were in 3D space; shadows helped get around this. Generally, the users found interaction to be natural and pleasant.

Possible future applications of this technology for animation

The ideas demonstrated here allow for an increased perception of, and hands-on interactivity with, the digital world. Our minds and bodies would be more involved in the creation of entertainment design; we would operate more like virtual builders rather than being confined to the limits of our workstations and 2D interfaces.

Modelling

With an increase in software complexity, VR modelling could mirror what is currently achievable in Maya. Incorporating bi-manual interaction and 3D models that seem to exist in real space would allow for greater control and faster productivity. Digital landscapes, scenery, props and characters could all be created with fuller involvement than ever before. Further applications could apply virtual paint and textures to the models. A parallel can be drawn to modelling with clay; this would require tactile feedback to allow the user to feel the model.
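The fixed/variable vertex idea behind the structure-preserving editing described under Software and System can be illustrated with a small sketch. This is my own simplified picture of the idea, not code from [3]: a wall is a quad whose base vertices are fixed while the top edge is variable, so dragging can change its height without distorting its footprint.

```python
import numpy as np

# A wall as four 2D vertices; the base is fixed, the top edge variable.
wall = np.array([[0.0, 0.0],   # bottom-left  (fixed)
                 [4.0, 0.0],   # bottom-right (fixed)
                 [4.0, 3.0],   # top-right    (variable)
                 [0.0, 3.0]])  # top-left     (variable)
fixed = np.array([True, True, False, False])

def drag(vertices, fixed, delta):
    """Apply a drag only to the variable vertices, so the basic shape
    (here, the wall's footprint on the floor) is never distorted."""
    out = vertices.copy()
    out[~fixed] += delta
    return out

taller = drag(wall, fixed, np.array([0.0, 1.0]))  # raise the wall by 1 m
print(taller[2])  # top-right vertex is now at [4. 4.]
print(taller[0])  # base vertex stays at [0. 0.]
```

However the user pushes and pulls, the constraint set keeps the model a plausible piece of architecture, which is what makes direct manipulation safe for non-experts.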
In an experiment entitled Touchable Holography by Hoshi et al. [2], an ultrasound tactile display, a holographic display and hand tracking have been combined, allowing the user to feel virtual raindrops splashing on their hands.
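A system like this has to decide, every frame, whether the tracked hand is in contact with a virtual surface before firing the ultrasound feedback. A minimal sketch of that decision, assuming a flat virtual surface at a known height (all values and names hypothetical, not taken from [2]):

```python
def should_fire_feedback(hand_y, surface_y, tolerance=0.005):
    """Fire tactile feedback when the tracked hand is within
    `tolerance` metres of the virtual surface's height."""
    return abs(hand_y - surface_y) <= tolerance

print(should_fire_feedback(1.002, 1.0))  # True: hand is touching
print(should_fire_feedback(1.200, 1.0))  # False: hand is well above
```

In practice the ultrasound focus would then be steered to the contact point, but even this simple proximity test captures the basic tracking-to-feedback loop.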
Image from reference [2]

Scene Planning / Layout / Pre-Visualisation

Virtual reality could have useful implications for planning a scene. In table view you could identify more clearly where certain characters should be situated for the scene to support the narrative. You could work collaboratively, with a group of people standing around the table, all looking at the same setting and discussing the best place to put the virtual cameras or characters. You could physically move the virtual cameras to obtain the most effective compositions, which could be displayed in a separate 2D viewport.

Animation

Motion-tracked actors would be able to act within their virtual environments and situations. This might lead to more convincing acting, as the actors would have more to go on. Perhaps, with advancements in computational power and rendering speeds, the actions performed by the actors could be linked directly to the characters and rendered in real time.
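Driving a virtual camera from a physically moved, tracked position largely amounts to building a look-at basis from the camera position and the point it should frame; the separate 2D viewport then renders through that camera. A minimal sketch of the standard construction (names hypothetical, world up assumed to be +Y):

```python
import numpy as np

def look_at_basis(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build an orthonormal camera basis (right, true_up, forward)
    from a tracked camera position and the point it should frame."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)   # re-derived so the basis is orthonormal
    return right, true_up, forward

# Camera moved by hand to (0, 2, 5), framing a character at the origin.
r, u, f = look_at_basis(np.array([0.0, 2.0, 5.0]), np.zeros(3))
print(np.dot(r, f))  # ~0: the basis vectors are perpendicular
```

The same three vectors form the rotation part of the view matrix, so every physical nudge of the tracked camera immediately re-frames the shot.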
Conclusion

If virtual reality becomes more prominent as a means of entertainment and media in the future, then we should be careful not to lose awareness of our physical, real-world reality and identities. On a positive note, I think a more immersive and interactive virtual process would heighten the involvement of our bodies and minds. It would increase our physical engagement and could reduce RSI. Technologically, mimicking real-life interactions should lead to a more natural interface between the user and the software, which may prove to be a more enjoyable experience.

Bibliography

Experiments (the following links contain videos and PDFs documenting the experiments)

[1] M. Cabral, P. Vangorp, G. Chaurasia, E. Chapoulie, M. Hachet, G. Drettakis. A Multimode Immersive Conceptual Design System for Architectural Modelling and Lighting. REVES, 2011. http://www-sop.inria.fr/reves/Basilic/2011/CVCCHD11/
[2] T. Hoshi, M. Takahashi, K. Nakatsuma, H. Shinoda. Touchable Holography. SIGGRAPH 2009. http://www.alab.t.u-tokyo.ac.jp/~siggraph/09/TouchableHolography/SIGGRAPH09-TH.html
[3] M. Cabral, S. Lefebvre, C. Dachsbacher, G. Drettakis. Structure Preserving Reshape for Textured Architectural Scenes. CGF (Eurographics), 2009. http://www-sop.inria.fr/reves/Basilic/2009/CLDD09/

History of Virtual Reality

[4] Strickland, Jonathan (2007). Virtual Reality History. http://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality8.htm (accessed 2011)
[5] Carlson, Wayne (2003). Virtual Reality and Artificial Environments. http://design.osu.edu/carlson/history/lesson17.html (accessed 2011)
[6] Virtual Reality. http://en.wikipedia.org/wiki/Virtual_reality#cite_note-3 (accessed 2011)

BARCO

[7] i-Space stereoscopic environment. http://www.barco.com/en/product/732 (accessed 2011)