
AN

ASSIGNMENT

ON

VIRTUAL ARCHITECTURE

Amity School of Architecture and Planning


Amity University Rajasthan

SUBMITTED BY
Saloni Chauhan
A20104015018
B.Arch. Fifth Year
ASAP, AUR

Amity University, Kant Kalwar, NH-11C, Jaipur
Virtual Reality
It’s an all too familiar scenario: an architect enters a building for the first time and the
space doesn’t quite match the vision of his or her design. However beautiful a static
rendered image may be, traditional design visualisation can only convey so much,
even when the scene is rendered at eye level with furniture for scale.

Architectural VR experience created by TruVision

At Gensler, design director and principal Hao Ko knows the feeling. “You still have to
make a translation in your mind, in terms of how tall this space is going to feel,” he
says. “More often than not, I’ll go to my own projects and I’ll be like, ‘Wow! That’s a
lot bigger than I expected.’”
This, he says, is where virtual reality, or VR, comes in – and others in the industry
are starting to reach the same conclusion.
VR head-mounted displays (HMDs) such as the Oculus Rift and HTC Vive have the
power to change the way architects design and communicate buildings before they
are built. The wearer is instantly immersed in a truly three-dimensional environment
that gives an incredible sense of scale, depth and spatial awareness that simply
cannot be matched by traditional renders, animations or physical-scale models.
A VR experience with an HMD can fool your brain into thinking what you’re seeing is
actually real. The WorldViz ‘Walk the Plank’ demo at Nvidia’s GTC event in April
stopped me dead in my tracks. Even though I knew I was standing in an exhibition
hall, I literally could not step forward for fear of falling. The sense of presence was
overwhelming. It felt like I truly ‘existed’ in the scene and, from then on, the fight of
mind over matter was well and truly lost.
This sensation of actually being inside a building also makes VR an incredibly
powerful tool for communicating design intent. Clients, in particular, often don’t have
the ability to understand spatial relationships and scale simply by looking at a 2D
plan or 3D model. VR can evoke a visceral response in exactly the same way that
physical architecture can.
“We just had a client where we were showing some conceptual renderings and they
were having a hard time [understanding the building],” explains Mr Ko. “The second
we put goggles on them, it was like, ‘Oh yeah. Build that. That’s great. That’s what I
want.’”
VR can play an important role at all stages of the design-to-construction process,
from evaluating design options and showcasing proposals, to designing out errors
and ironing out construction and serviceability issues before breaking ground on site.
Even at the conceptual phase, VR can be an effective means of exploring the
relationships between spaces – the impact of light on a room at different times of the
day or year, or views from mezzanine floors. With a physical scale model or BIM
model on screen, you still have to imagine what it would be like to exist inside the
space. With VR, you actually experience the proportion and scale.

Augmented Reality

With augmented reality, architectural models are showcased in a completely new
way. The Augment app lets users easily manipulate 3D plans through their
smartphones or tablets.

As a key product feature, the prefabricated-home company ELK gives customers the
ability to customize their home interiors while maintaining the design of the exterior
structure. Clients can move some walls, delete others, and so on. Augmented reality
offers a real visualization of the final product for customers, and Augment is a key
tool for ELK to showcase its product’s versatility, enabling its clients to create their
dream homes.
Save money and time: replace plans and prototypes with 3D
models.

From manufacturing pieces to assembling buildings to modeling the landscaping, the
creation of an architectural plan is costly and time-consuming. Augmented reality is
an efficient alternative that lets you visualize plans as never before.

LSI uses Augment to showcase its building projects in the real estate and construction
industry. To present its new student residence project, the firm employed Augment
during an event to great effect: the 2D blueprints were replaced with augmented
models, and clients at the event were engaged and immersed in LSI’s project.

What’s more, Augment offers an engaging experience for users: potential buyers or
investors can project themselves into the plans and move about inside them. You can
visit an apartment or walk through the middle of a building as if you were really there.
Augmented reality is a practical and unique way to appreciate all the details of an
architecture model wherever you are.

Mixed Reality

Mixed Reality spans the spectrum between purely virtual and purely real
environments. In construction, and in the context of the building industry and BIM
modeling, mixed reality is the phase in which digital and real content co-exist: where
architectural design meets reality, and where construction teams transform digital
content into physical objects. It helps users efficiently interpret physical and digital
information, and the spatial relations between them.
The interpretation of digital content and its translation to real-world objects heavily
depend on the user’s spatial understanding. This is an error-prone process and
demands a highly skilled workforce. Interpretation errors are common during the
design and construction stages, and often result in poor quality, cost overruns, and
schedule delays.

Visualizing digital content as holograms in the context of the physical world bridges
the gap between virtual and real, and eliminates inefficiencies in the current
workflow. In addition, while our physical world is finite, Mixed Reality presents the
opportunity for an infinite environment in which additional data such as schedule,
specs, and simulation can be overlaid onto the world, creating a hyper-reality
environment.

Foveated Rendering

Foveated imaging is a digital image processing technique in which the image
resolution, or amount of detail, varies across the image according to one or more
"fixation points". A fixation point indicates the highest-resolution region of the image
and corresponds to the center of the eye's retina, the fovea.
The location of a fixation point may be specified in many ways. For example, when
viewing an image on a computer monitor, one may specify a fixation using a pointing
device, like a computer mouse. Eye trackers which precisely measure the eye's
position and movement are also commonly used to determine fixation points in
perception experiments. When the display is manipulated with the use of an eye
tracker, this is known as a gaze contingent display. Fixations may also be
determined automatically using computer algorithms.
Some common applications of foveated imaging include imaging sensor hardware
and image compression. These and other applications are described in the sections
below.
Foveated imaging is also commonly referred to as space variant imaging or gaze
contingent imaging.
Compression
Contrast sensitivity falls off dramatically as one moves from the center of the retina
to the periphery. In lossy image compression, one may take advantage of this fact in
order to compactly encode images. If one knows the viewer's approximate point of
gaze, one may reduce the amount of information contained in the image as the
distance from the point of gaze increases. Because the fall-off in the eye's resolution
is dramatic, the potential reduction in display information can be substantial. Also,
foveation encoding may be applied to the image before other types of image
compression are applied and therefore can result in a multiplicative reduction.
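As a minimal sketch of foveation encoding, assuming a grayscale image held as a NumPy array: pixel values can be quantized ever more coarsely with distance from the gaze point, so that an entropy coder applied afterwards compresses the periphery much harder while the fovea stays lossless. The function name, radius, and falloff parameters below are illustrative, not taken from any real codec.

```python
import numpy as np

def foveate(img, gaze, full_res_radius=32, falloff=64):
    """Coarsen image detail with distance from the gaze point.

    Pixels within `full_res_radius` of `gaze` keep full precision;
    farther pixels are quantized ever more coarsely, so a standard
    entropy coder applied afterwards compresses them much harder.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze[0], xs - gaze[1])
    # Quantization step grows from 1 (lossless) in the fovea
    # toward 32 (very coarse) in the far periphery.
    step = np.clip(1 + (dist - full_res_radius) / falloff * 31, 1, 32)
    return (np.round(img / step) * step).astype(img.dtype)

img = (np.arange(128 * 128) % 256).reshape(128, 128).astype(np.float64)
out = foveate(img, gaze=(64, 64))
center_err = np.abs(out[60:68, 60:68] - img[60:68, 60:68]).max()  # fovea: exact
corner_err = np.abs(out[:8, :8] - img[:8, :8]).max()              # periphery: lossy
```

The periphery loses detail the viewer cannot resolve anyway, which is exactly the redundancy the compression step exploits.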

Foveated sensors
Foveated sensors are multiresolution hardware devices that allow image data to be
collected with higher resolution concentrated at a fixation point. An advantage to
using foveated sensor hardware is that the image collection and encoding can occur
much faster than in a system that post-processes a high resolution image in
software.
Simulation
Foveated imaging has been used to simulate visual fields with arbitrary spatial
resolution. For example, one may present video containing a blurred region
representing a scotoma. By using an eye-tracker and holding the blurred region fixed
relative to the viewer's gaze, the viewer will have a visual experience similar to that
of a person with an actual scotoma. One frame from such a simulation of a glaucoma
patient shows the view with the eye fixated on the word "similar."
Video gaming
Foveated rendering is an emerging video game technique which uses an eye
tracker integrated with a virtual reality headset to reduce the rendering workload by
greatly reducing the image quality in the peripheral vision (outside the zone the
fovea is gazing at).
At CES 2016, SensoMotoric Instruments (SMI) demoed a new 250 Hz eye-tracking
system and a working foveated rendering solution. It resulted from a partnership with
camera sensor manufacturer OmniVision, who provided the camera hardware for the
new system.
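The core decision in foveated rendering can be sketched as a mapping from a pixel region's angular distance to the gaze direction down to a resolution scale. The tier boundaries and function name below are illustrative only, not taken from SMI's system or any shipped engine.

```python
def shading_scale(pixel_angle_deg, inner_deg=5.0, outer_deg=20.0):
    """Pick a per-region resolution scale from angular distance to gaze.

    Within `inner_deg` of the gaze direction, render at full resolution;
    beyond `outer_deg`, drop to quarter resolution; blend linearly in
    between. The thresholds here are hypothetical placeholders.
    """
    if pixel_angle_deg <= inner_deg:
        return 1.0            # foveal region: full detail
    if pixel_angle_deg >= outer_deg:
        return 0.25           # far periphery: quarter resolution
    # linear falloff between the two thresholds
    t = (pixel_angle_deg - inner_deg) / (outer_deg - inner_deg)
    return 1.0 - 0.75 * t

scales = [shading_scale(a) for a in (0.0, 12.5, 30.0)]
```

A renderer would evaluate this per tile or per viewport region each frame, using the latest gaze sample from the eye tracker.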
Quality assessment
Foveated imaging may be useful in providing a subjective image quality
measure. Traditional image quality measures, such as peak signal-to-noise ratio, are
typically performed on fixed resolution images and do not take into account some
aspects of the human visual system, like the change in spatial resolution across the
retina. A foveated quality index may therefore more accurately determine image
quality as perceived by humans.
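A foveated quality index can be sketched as an error measure whose weights fall off with eccentricity, so that artifacts near the gaze point dominate the score. The Gaussian falloff below is a crude stand-in for a real contrast-sensitivity model, and all names and parameters are illustrative.

```python
import numpy as np

def foveated_mse(ref, test, gaze, sigma=40.0):
    """Mean squared error weighted by a Gaussian falloff from gaze.

    Errors near the gaze point count fully; peripheral errors are
    discounted, mimicking the retina's loss of acuity with eccentricity.
    """
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weight = np.exp(-((ys - gaze[0])**2 + (xs - gaze[1])**2) / (2 * sigma**2))
    err = (ref.astype(np.float64) - test.astype(np.float64))**2
    return float((weight * err).sum() / weight.sum())

ref = np.zeros((64, 64))
foveal_noise = ref.copy(); foveal_noise[32, 32] = 10.0  # error at the gaze point
edge_noise = ref.copy();   edge_noise[0, 0] = 10.0      # same error, far periphery
m_fovea = foveated_mse(ref, foveal_noise, gaze=(32, 32))
m_edge = foveated_mse(ref, edge_noise, gaze=(32, 32))
```

The same pixel-level error scores worse when it lands in the fovea than in the periphery, which is the behaviour a fixed-resolution metric like PSNR cannot capture.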
Image database retrieval
In databases that contain very high resolution images, such as a satellite
image database, it may be desirable to interactively retrieve images in order to
reduce retrieval time. Foveated imaging allows one to scan low resolution images
and retrieve only high resolution portions as they are needed. This is sometimes
called progressive transmission.
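Progressive transmission over a tiled image can be sketched as a simple tile-selection rule: request full-resolution tiles only near the current gaze and leave the rest at the low-resolution preview. The grid size, radius, and function name below are hypothetical.

```python
def tiles_to_fetch(gaze_tile, grid=(8, 8), radius=1):
    """Choose which tiles of a tiled high-resolution image to request.

    Tiles within `radius` (Chebyshev distance) of the tile under the
    gaze are fetched at full resolution; all others stay at the
    low-resolution preview already on screen.
    """
    gy, gx = gaze_tile
    rows, cols = grid
    return {(y, x)
            for y in range(rows) for x in range(cols)
            if max(abs(y - gy), abs(x - gx)) <= radius}

hi = tiles_to_fetch((3, 3))  # 3x3 neighbourhood around the gazed tile
```

As the gaze moves, the set changes and only newly needed tiles are transferred, which keeps interactive retrieval times low even for satellite-scale imagery.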

Teleportation

A system architecture for achieving long-distance, high-fidelity teleportation and
long-duration quantum storage has been proposed. It uses polarization-entangled
photons and trapped-atom quantum memories, and is compatible with transmission
over standard telecommunication fibre. An extension of this architecture permits
long-distance transmission and storage of Greenberger-Horne-Zeilinger (GHZ)
states.
A related architecture, based on Quantum Cellular Automata, allows the use of only
one type of quantum gate per computational step, using nearest-neighbour
interactions. The model is built in partial steps, each of them analysed using
nearest-neighbour interactions, starting with single-qubit operations and continuing
with two-qubit ones. A demonstration of the model is given by analysing how the
techniques can be used to design a circuit implementing the Quantum Fourier
Transform. Since the model uses only one type of quantum gate at each phase of
the computation, physical implementation can be easier: at each step, only one kind
of input pulse needs to be applied to the apparatus.
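Since the text singles out the Quantum Fourier Transform, a small sketch may help: as a matrix, the QFT on n qubits is F[j, k] = ω^(jk)/√N with ω = e^(2πi/N) and N = 2^n, and any gate-level circuit of Hadamards and controlled phase rotations must reproduce exactly this unitary. The check below builds the matrix directly with NumPy; it is a numerical illustration, not the nearest-neighbour implementation the paper describes.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Quantum Fourier Transform as an explicit unitary matrix.

    F[j, k] = omega**(j*k) / sqrt(N), with omega = exp(2*pi*i/N)
    and N = 2**n_qubits.
    """
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    omega = np.exp(2j * np.pi / N)
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
# Any valid quantum operation must be unitary: F times its conjugate
# transpose gives the identity.
is_unitary = np.allclose(F @ F.conj().T, np.eye(8))
# The QFT maps the basis state |0> to the uniform superposition.
maps_zero_to_uniform = np.allclose(F[:, 0], 1 / np.sqrt(8))
```

Verifying a proposed gate decomposition against this matrix is a standard sanity check before worrying about pulse-level implementation details.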
