
The first systems, and the fundamental concepts that we still use today,
were developed in the 1960s by
Ivan Sutherland, who invented a machine that he called The Sword of Damocles.
His idea was to lead towards the ultimate display.
I'll come back to that in a few minutes.
Twenty years later in the 1980s,
NASA Ames was also building virtual reality systems, such as the VIEW system.
In the late 1980s,
Jaron Lanier, the man who coined the term virtual reality,
worked on a system called Reality Built For Two,
and he is probably one of the people who most popularized
the idea in the late 1980s and early 1990s.
The hardware concepts that we have today about
virtual realities really go back all that way to the 1960s.
Basically, what the hardware involves is some way of
replacing our natural sensory input with computer-generated sensory input.
In particular, let's think about vision.
The idea of the Sword of Damocles,
the original head-mounted display developed by Ivan Sutherland,
was that you had two eyepieces, essentially
very small computer displays that you saw through lenses,
and there was a big contraption hanging from
the ceiling which did mechanical tracking of your head movements.
So as you moved your head around,
the scene you saw was updated based on the mechanical tracking.
So if I turned my head over here,
what you saw inside this Sword of Damocles head-mounted display would similarly
update.
You'd see a different part of the scene just like in real life as
I turn my head around so I see different parts of the real thing.
Now, why two eyepieces?
One for the left eye and one for the right eye,
and each one displayed the view appropriate for that eye.
So the left eye saw only a left eye view,
the right eye only a right eye view,
and the brain fuses those together just like in real life
into one overall three-dimensional stereo image.
So not only does what you see change according to your head movements,
but what you see is also in stereo.
So it gives you a very strong illusion that you're in
the place which is being displayed by the computer screens.
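The stereo idea described above can be sketched in code. This is only an illustrative sketch, not Sutherland's actual implementation: it generates a left/right eye pair by offsetting a single tracked head position by half the interpupillary distance (IPD) along the head's "right" direction, and each eye then renders the scene from its own position.

```python
# Illustrative sketch: computing per-eye camera positions for stereo
# rendering from one tracked head position. Function names and the
# simple vector model are assumptions for this example.

import numpy as np

def eye_positions(head_pos, right_vec, ipd=0.064):
    """Return (left, right) eye positions for a tracked head.

    head_pos  : 3-vector, centre of the head in world coordinates
    right_vec : unit 3-vector pointing to the wearer's right
    ipd       : interpupillary distance in metres (~64 mm on average)
    """
    head_pos = np.asarray(head_pos, dtype=float)
    right_vec = np.asarray(right_vec, dtype=float)
    half = 0.5 * ipd * right_vec
    return head_pos - half, head_pos + half

# Each eye renders the scene from its own position; the brain fuses
# the two slightly different images into one stereo percept.
left, right = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left, right)
```

In a real renderer, each eye position would feed its own view matrix, so the two displays show views separated by the IPD.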
And let's remember that in the 1960s the kind of
computer graphics displays they had then were much simpler than what we have now.
Now we have full color displays with solid colors and it looks very realistic.
All they had in the 1960s were green lines.
So if I was in a room,
a virtual room depicted on one of these 1960s displays,
it would just be a set of green lines
mapping out the edges of the room and the objects in it and so on.
Yet nevertheless even at that time,
even with that kind of display,
Ivan Sutherland reported that people had this strong sense of what we now call
presence,
the sense of being in the world described by the computer displays.
So, you asked about the magic.
The fundamental magic is that
because the computer displays are tied so closely to your sensory input,
you move your head and the image changes.
You see in stereo,
they give you the illusion that you're in the place depicted by
the virtual reality rather than in the real world where of course you really are.
So this is part of the magic,
this wow factor that Sylvia mentioned earlier.
Levels of Immersion in VR Systems

So Mel, can you explain the definition of immersion,

a term often used to describe VR experiences?
Nowadays we have all these fascinating VR displays that enable
us to really see things in 3D with stereo vision.
But what about the other aspects of VR
which enable us to interact with it,
and how do they link back to the concept of immersion?
Okay. So I'm going to talk about
an ideal system rather than the systems that we have today.
So an ideal system
would provide displays for all of the sensory systems.
So what I mean by that, of course,
we are most familiar with vision, but,
of course, there's also sound, which is pretty feasible.
But even if you think about sound,
there are many different ways that sound itself can be portrayed.
So it could just be sound not coming from any particular direction.
It could be spatialized sound,
so that when, in virtual reality,
an event is happening over there,
I hear it from over there.
Or it could represent how sound reflects through an environment, a particular
room.
So it's properly modeled sound.
So just as with vision,
where there are many different levels you could have, from very simple
to realistic simulation of how light flows in an environment,
the same is true of sound,
and all of these things,
more or less, are done pretty well today.
So there's also haptics.
Haptics has two aspects:
the sense of touch when you touch something,
you feel it and different surfaces have different feelings.
And the other aspect of haptics is force feedback.
If something touches you or pushes you, you feel the force.
So a true ultimate virtual reality system would support both of those.
And today, it does to some extent.
So, for example, there are haptic devices where you
can feel particular kinds of touch feedback,
and there are haptic devices,
quite complicated ones, where you can feel force feedback.
But unlike vision, where one visual display device can represent any kind of visual input,
haptics is not like that.
For each kind of haptics,
there's a special device.
So you can have a really good device that gives you
the feeling of pushing a needle through flesh, for example,
if you're using it for training in surgery,
or you can have a haptic device where you're touching materials and it feels realistic.
But there's no generalized haptic device in the sense that,
in real life, I can be walking along and
my elbow happens to brush against the wall and I feel it.
There's no generalized haptic device in virtual reality
which makes this possible,
and there's certainly no generalized force feedback device.
For example, if a virtual character in virtual reality pushes my shoulder,
I'm typically never going to feel it because there's no device.
There's certainly no general device that does that.
So haptics is an area which requires a lot of development to get towards,
let's say, Ivan Sutherland's dream of the ultimate display.
And another one is smell.
So, of course, there again are
particular systems that can deliver smell in virtual reality,
but there is again no generalized smell system.
One of the problems with smell is that once it's in a place,
it doesn't go away very easily.
So you can make a smell,
let's suppose you're in the virtual reality,
you're going into a place where there's been a fire,
and you smell the fire.
So something has to be released in real life that makes you smell the fire.
But then when you go out of that place,
that smell of fire is still going to linger because it's going to be in the real world.
So if we go through the various sensory inputs: vision and sound
are pretty well catered for in today's systems.
But, of course, there's lots of room for improvement even in those.
For haptics, there are very good particular haptic devices, but no generalized haptics.
The same is also true of smell.
Can I just add something about smell?
Because that just reminded me,
I visited a lab in Switzerland where, basically,
in order to simulate a smell, and to solve that problem you've
mentioned of getting the smell actually out of the way after you
change to a different environment, they put
a little pipe into my nose which pumped oxygen constantly.
But then when I got close to a particular object in VR,
I could smell that object,
and when I moved away, obviously,
they started pumping oxygen instead.
So that's probably one way to do it.
So it's really fascinating to think about all this potential of VR,
the different VR systems we could develop in the future.
And so my question here is,
is there a way to actually measure immersion to kind
of have a way to compare different VR systems?
Is one more immersive than the other?
It's very important to understand what we mean by immersion in
virtual reality because it's an overused term.
So we might say, "Oh,
I felt very immersed," or "I felt this was immersive," and so on.
But what do we really mean by immersion?
To me, immersion is the description of a system.
It's a technical description of what a system can deliver.
So for example, if we take two head-mounted displays,
one that tracks in the full six degrees of freedom,
which means I can turn my head in any direction
and I can translate my head like this,
and always, the feedback will be correct.
The visual feedback and the sound feedback will be correct for
those movements because my head is tracked in all six degrees of freedom.
And then compare that with another head-mounted display,
which only tracks in terms of rotation.
So if I turn my head, it's okay,
but if I translate, nothing happens.
So these are two different types of immersion, and I would say,
the way I think about it is the first one that tracks in
six degrees of freedom is more immersive.
I mean immersive in this technical sense, because you can use
the first one to simulate the second; the second is a subset of the first.
The second one can only track rotations, not translations,
so it's a subset of the one that can track in all six degrees of freedom.
And this is actually really important because it gives you
qualitatively different experiences and different information.
Because if I go like this and nothing happens,
well actually, two things will happen.
One is I'm likely to get quite sick because I'm moving my head but
the world is not updating or my visual display is not updating accordingly.
And second, it means, well,
I can't look closer to an object,
I can't look further away from an object,
and I can't look behind an object by moving my head.
And this is something very real.
For example, among today's head-mounted displays,
the ones based on phones, where you slot your smartphone into a casing,
typically only have the rotational kind of
head-tracking, because they rely on the inertial sensors of the phone.
Whereas a head-mounted display
with the display built in,
not just a phone, typically
has the full six degrees of freedom tracking.
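The difference between rotation-only (3-DOF) and full 6-DOF tracking can be made concrete with a small sketch. This is a hypothetical model, not any real headset's API: a head pose is reduced to a position plus a yaw angle, and each "tracker" reports what it can actually sense, so a rotation-only tracker reports an unchanged view when you lean towards an object.

```python
# Illustrative sketch (hypothetical API): why rotation-only (3-DOF)
# tracking is a subset of full 6-DOF tracking.

from dataclasses import dataclass

@dataclass
class HeadPose:
    x: float
    y: float
    z: float      # translation in metres
    yaw: float    # rotation about the vertical axis, in degrees

def view_6dof(pose):
    # A 6-DOF tracker reports both rotation and translation.
    return (pose.x, pose.y, pose.z, pose.yaw)

def view_3dof(pose):
    # An inertial, phone-based tracker reports rotation only:
    # translation is ignored, so leaning closer changes nothing.
    return (0.0, 0.0, 0.0, pose.yaw)

standing = HeadPose(0.0, 1.7, 0.0, yaw=0.0)
leaning  = HeadPose(0.0, 1.7, -0.3, yaw=0.0)  # lean 30 cm towards an object

print(view_6dof(standing) != view_6dof(leaning))  # view updates with 6-DOF
print(view_3dof(standing) == view_3dof(leaning))  # view frozen with 3-DOF
```

The frozen 3-DOF view is exactly the mismatch between real head movement and visual feedback that, as described above, can make people feel sick.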
So if we generalize this idea,
I would say one system is more immersive than
another if the first can be used to simulate the second.
Should I give another example?
Actually, because we spent lots of time discussing in
this course the difference between CAVE and HMD,
so which one of these two systems you think is more immersive?
Remember, I'm talking about a technical definition.
I'm talking about specification of the system,
the hardware and software.
At this moment, I'm not talking about the effect of
those on you, though I gave that as an example that they do have different effects.
But I'm only talking about the technical specification.
So a head-mounted display with full six degrees
of freedom tracking is more immersive than
the CAVE because I could use a head-mounted display to
simulate the whole process of going in a CAVE and being in a CAVE.
But I can't use a CAVE to simulate
the process of picking up a head-mounted display and putting it on.
That's just not possible.
Or if we take another example,
for many years, people talked about desktop virtual reality.
So desktop virtual reality was that you'd sit in front of
a monitor and maybe with stereo vision,
you wear glasses and so on and you see stereo on the screen.
Some people call this virtual reality or desktop virtual reality.
Virtual reality through a full head-tracked head-mounted display is more immersive
than
desktop virtual reality because I can use
the full head-mounted display system to simulate a desktop system.
So this is what I mean by various levels
of immersion: when you have system A and system B,
and you can use A to simulate B,
then A is more immersive than B.
The last point on this: this doesn't apply to all pairs of systems,
so I could have two systems X and Y,
where X can't be used to simulate Y,
and Y can't be used to simulate X.
So immersion is not a total ordering of all possible systems;
it's what's called in mathematics a partial order.
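The "A is more immersive than B if A can simulate B" relation can be sketched as a subset test over capability sets. The capability names here are invented for illustration; the point is only that subset inclusion is a partial order, so some pairs of systems are simply incomparable.

```python
# Sketch of immersion as a partial order. Capability names are made up
# for this example; they stand in for whatever a system can deliver.

hmd_6dof = frozenset({"stereo", "rotation_tracking",
                      "translation_tracking", "360_view"})
hmd_3dof = frozenset({"stereo", "rotation_tracking", "360_view"})
cave     = frozenset({"stereo", "rotation_tracking", "translation_tracking"})
desktop  = frozenset({"stereo"})

def at_least_as_immersive(a, b):
    # A can simulate B if A offers everything B offers (B is a subset of A).
    return b <= a

print(at_least_as_immersive(hmd_6dof, hmd_3dof))  # 6-DOF HMD simulates 3-DOF
print(at_least_as_immersive(hmd_6dof, desktop))   # and desktop VR
# An incomparable pair: neither can simulate the other, so this is a
# partial order, not a total order.
print(at_least_as_immersive(hmd_3dof, cave),
      at_least_as_immersive(cave, hmd_3dof))
```

In this toy model the 3-DOF headset offers a full 360-degree view but no translation tracking, while the CAVE offers the reverse, so neither set contains the other.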
So one example you can think of is probably when you are in
the CAVE compared to when you're in a sort of IMAX cinema,
where they can't really simulate each other.
Yeah, they're different.
Yeah. Because in the CAVE, for instance,
when you turn around, normally, you don't have the back wall.
So it doesn't actually 100% simulate the
experience you can have in IMAX, and definitely not the other way around.
Yeah. You couldn't use either of those to simulate the other.
Yeah. And I'm talking about simulation in principle.
I mean as an ideal,
not like an actual hardware and so on.
