
Introduction to Augmented, Virtual and Mixed Reality

Sameh S. Fahim
Sameh.fahim@eslsca.edu.eg
Course Contents
• Part I: Introduction to Augmented and Virtual Reality
• Part II: Understanding the Human Senses and Their
Relationship to Output/Input Devices
• Part III: Applications of Augmented and Virtual Reality
• Part IV: Human Factors, Legal, and Social Considerations
Video
Contents
Part I: Introduction to Augmented and Virtual Reality
1. Computer-Generated Worlds
• What Is Augmented Reality?
• What Is Virtual Reality?
• Conclusion

2. Understanding Virtual Space


• Defining Visual Space and Content
• Defining Position and Orientation in Three Dimensions
• Navigation
• Conclusion
Computer-Generated Worlds: Augmented Reality - AR origin
• Augmented Reality: a general term applied to a variety of display
technologies capable of overlaying or combining alphanumeric, symbolic, or
graphical information with a user’s view of the real world.
• The term was coined in the early 1990s by Boeing research scientist Tom
Caudell (Caudell and Mizell, 1992).
• The underlying concept can be traced back to the early 1900s and a patent
filed by Irish telescope maker Sir Howard Grubb.
• His invention (patent No. 12108), titled “A New Collimating-Telescope Gun-
Sight for Large and Small Ordnance,” describes a device intended to help
aim projectile-firing weapons.
Computer-Generated Worlds: Augmented Reality - AR origin

• This image illustrates one of the grand challenges in shooting: the human
eye is able to focus at only one depth of field at a time.
• Grubb’s innovation directly inspired the development of more advanced gun
sights for use in military aircraft
Computer-Generated Worlds: Augmented Reality - AR origin

• The 1918 Oigee Reflector Sight, built by the German optics manufacturer
Optische Anstalt Oigee, used an electric lamp and collimating optics to create
a virtual targeting reticle on a partially reflecting glass element.
Computer-Generated Worlds: Augmented Reality - HUD
• HUD: Head-Up Displays
• As fighter aircraft and helicopters grew in complexity, the information
processing tasks required of pilots also increased dramatically.
• The sizeable array of sensors, weapons, avionics systems, and flight controls
increasingly resulted in pilots spending more time focusing on dials and
displays inside of the cockpit instead of what was happening outside of the
aircraft.
• US scientists introduced the first modern head-up (or heads-up) display
(HUD), a transparent display mounted in front of the pilot that enables
viewing with the head positioned “up” and looking forward, instead of angled
down, looking at instruments lower in the cockpit.
Computer-Generated Worlds: Augmented Reality - HUD
• A typical HUD contains three primary
components:
• Projector unit
• Combiner (the viewing glass)
• Video generation computer (also known as
a symbol generator) (Previc and Ercoline, 2004).
• Information shown on the display, to help pilots keep their eyes on the
environment, includes:
• Altitude
• Speed
• Level of the aircraft to aid in flight control
• Navigation

This image shows the basic flight data and symbology displayed in the HUD
of a U.S. Marine Corps AV-8B Harrier ground-attack aircraft.
Computer-Generated Worlds: Augmented Reality – HMS/VTAS
• HMS: Helmet Mounted Sight.
• The HMS was introduced in the late 1960s by the South African Air Force (SAAF).
• The HMS aided pilots in the targeting of heat-seeking missiles.
• To this point, pilots had been required to maneuver the aircraft so the target
fell within view of the HUD.
• In the early 1970s, the U.S. Army deployed a head-tracked sight for the AH-1G
Huey Cobra helicopter to direct weapon fire.
• This was followed by the U.S. Navy deploying the first version of the Visual
Target Acquisition System (VTAS) to exploit the lock-on capabilities of the
AIM-9G Sidewinder air-to-air missile.
• In operation, the Sidewinder seeker or the aircraft radar was “slaved” to the
position of the pilot’s head.
Computer-Generated Worlds: Augmented Reality – HMS/VTAS
• HMS: Helmet Mounted Sight.
• Over the years, there have been different helmet-mounted devices:
• Monocular (single image to one eye)
• Biocular (single image to both eyes)
• Binocular (separate viewpoint-corrected images to each eye)
• Visor projections, and more.

• Augmenting display technologies have transitioned from purely defense and
specialty application areas into commercially available products.
Computer-Generated Worlds: Augmented Reality – Smart Glasses
• On the left, an optical see-through display:
the user views the real world by looking
directly through monocular or binocular
optical elements.

• On the right, a video see-through display:
the real-world view is first captured by
one or two video cameras mounted on
the front of the display.
These images are combined with
computer-generated imagery and then
presented to the user.
Computer-Generated Worlds: Augmented Reality – Smart Glasses
• Handheld augmented reality
systems based on smartphones
and tablets display information
overlays and digital content tied
to physical objects and locations.

• In the example shown, the tablet app is able to recognize an AR icon
embedded in the wall-mounted illustration to reveal a 3D model of a valve
handle, which remains stable as the user moves the tablet.

This handheld device can be considered a form of video see-through AR.
Computer-Generated Worlds: Virtual Reality
• Virtual Reality: the display technologies, both worn and fixed placement,
that provide the user a highly compelling visual sensation of presence, or
immersion, within a 3D computer model or simulation.
• This is accomplished via two primary methods:
1. The use of stereoscopic head-mounted (or head-coupled) displays.
2. Large fully and semi-immersive projection-based systems such as
computer-assisted virtual environments (CAVEs) and domes.

As opposed to looking at a desktop monitor, which is essentially a 2D window
onto a 3D world, an immersive virtual reality system provides the user the
visual sensation of actual presence inside the 3D model or simulation.
Computer-Generated Worlds: Virtual Reality
• The U.S. Air Force, with Dr. Thomas Furness, focused on the development of
virtual interfaces for flight control (simulators).
• In 1982, Furness demonstrated a system known as VCASS (Visually Coupled
Airborne Systems Simulator).
• The VCASS system used high-resolution CRTs to display visual information
such as computer-generated 3D maps, sensor imagery, and avionics data to
the simulator operator.

The image on the left shows an engineer wearing the U.S. Air Force Visually
Coupled Airborne Systems Simulator (VCASS) helmet while seated in a
laboratory cockpit (circa 1982). The image on the right shows the terrain
scene, symbology, and avionics data representative of the imagery displayed
to the user.
Computer-Generated Worlds: Conclusion: AR vs VR
Between the History of AR
and
the Current
Smartphone’s Capabilities
Video
Contents
Part I: Introduction to Augmented and Virtual Reality
1. Computer-Generated Worlds
• What Is Augmented Reality?
• What Is Virtual Reality?
• Conclusion

2. Understanding Virtual Space


• Defining Visual Space and Content
• Defining Position and Orientation in Three Dimensions
• Navigation
• Conclusion
Understanding Virtual Space: Visual Space
• Visual Space: the perceived space, or visual scene, of the virtual environment
being experienced by a user or participant.
• The scene perceived by the user is actually the result of a sequence:
1. Light enters the eye.
2. It passes through the cornea and lens.
3. It falls on the rods and cones of the retina.
4. It is converted into electrical impulses.
5. The impulses are sent on to the brain for interpretation.
• The optical pathway of the eye and various sensory cues such as the
perception of depth, size, shape, distance, direction, and motion weigh
heavily on exactly what is ultimately “seen” by the user, but equally important
are the specifics of the manner in which stereo imagery is presented.
Understanding Virtual Space: Content
• Sophisticated 3D models, or simple individual
objects, can be created using any number of
CAD and geometric modeling programs.
• Examples of CAD packages: Autodesk 3ds Max, AutoCAD, Maya, and Dassault
Systèmes CATIA.
• The individual primitive elements used to create the visual representation
(points, straight lines, circles, curves, triangles, and polygons), the
solids (cubes, cylinders, spheres, and cones), as well as the infinite
variety of complex surfaces, are all the product of mathematical
predefinition by an engineer or software designer.
Understanding Virtual Space: Position in Three Dimensions
• Primary coordinate systems:
• Cartesian (rectilinear) coordinate system
• Spherical (polar) coordinate system
• Cylindrical coordinate system
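These coordinate systems are related by simple conversion formulas. A minimal Python sketch (the axis and angle conventions below are one common choice; individual VR toolkits may define them differently):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert spherical (polar) coordinates to Cartesian.
    theta: polar angle from the +Z axis, phi: azimuth from +X, in radians."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

def cylindrical_to_cartesian(rho, phi, z):
    """Convert cylindrical coordinates (radius, azimuth, height) to Cartesian."""
    return rho * math.cos(phi), rho * math.sin(phi), z

# A point 2 units from the origin, along the +X axis (theta = 90 deg, phi = 0):
print(spherical_to_cartesian(2.0, math.pi / 2, 0.0))  # ~(2.0, 0.0, 0.0)
```

The same point can be expressed in any of the three systems; which one is convenient depends on the task (e.g. spherical coordinates suit view directions, Cartesian suits object placement).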
Video
Understanding Virtual Space: Orientation in Three Dimensions
• It is frequently necessary to both define and track the orientation and
rotation of objects and user viewpoints in relation to the coordinate
system of the object space.
• We use Tait-Bryan angles, more commonly expressed as roll, pitch, and yaw.
• The six degrees of freedom (frequently abbreviated as 6 DOF):
• X, Y, Z (position)
• Roll, Pitch, Yaw (rotation)

The three types of rotation: roll, pitch, and yaw.
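The roll, pitch, and yaw rotations can be sketched as rotation matrices about the X, Y, and Z axes. This is a minimal illustration; the composition order and axis conventions vary between systems and must match the tracker actually being used:

```python
import math

def rotation_matrix(roll, pitch, yaw):
    """Compose rotations about X (roll), Y (pitch), and Z (yaw), in radians.
    Returns a 3x3 matrix as nested lists, applying roll, then pitch, then yaw."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]    # roll about X
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]    # pitch about Y
    rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]    # yaw about Z

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(rz, matmul(ry, rx))

# A 90-degree yaw turns the +X axis into the +Y axis:
m = rotation_matrix(0.0, 0.0, math.pi / 2)
rotated = [sum(m[i][j] * [1.0, 0.0, 0.0][j] for j in range(3)) for i in range(3)]
```

Together with the X, Y, Z position, such a rotation fully specifies a 6 DOF pose for a tracked head or hand.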
Understanding Virtual Space: Navigation

• Navigation within most virtual environments is handled in two ways:
1. By physical movement for close-in tasks
2. Via the use of manual interfaces for traversing greater virtual distances
• Manual interfaces introduced over the years for navigation purposes include:
3D mice, wands, game controllers, gesture recognition, gloves, motion-
detecting rings, joysticks, steering wheels, eye-tracking navigation, and a
host of other devices providing myriad options to suit various needs.

The VirtuSphere
The VirtuSphere
Video
Course Contents
• Part I: Introduction to Augmented and Virtual Reality
• Part II: Understanding the Human Senses and Their
Relationship to Output/Input Devices
• Part III: Applications of Augmented and Virtual Reality
• Part IV: Human Factors, Legal, and Social Considerations
Contents
• Part II: Understanding the Human Senses and Their
Relationship to Output/Input Devices
3 The Mechanics of Sight
• The Visual Pathway
• Spatial Vision and Depth Cues
• Conclusion

4 Component Technologies of Head-Mounted Displays


• Display Fundamentals
• Related Terminology and Concepts
• Optical Architectures
• Conclusion
The Mechanics of Sight: The Visual Pathway
• Everything starts with light.
• Light is a form of electromagnetic
radiation that is capable of exciting
the retina and producing a visual
sensation
• The electromagnetic spectrum includes:
1. Radio waves
2. Infrared
3. Visible light
4. Ultraviolet
5. X-rays and gamma rays

All the regions within the electromagnetic spectrum. The color callout shows
the small portion to which the retinas of the eyes have natural sensitivity.
The Mechanics of Sight: The Visual Pathway
• Light rays have wavelengths (the distance between two consecutive crests of a wave).
• The longest wavelengths perceptible by humans correspond to light we see as
red (740 nm), and the shortest wavelengths correspond to light that we see as
violet (380 nm).
• In the real world, an object’s visible color is determined by the wavelengths of
light it absorbs or reflects.
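The visible range described above can be captured in a small sketch. The 380 nm and 740 nm boundaries follow the text; finer color-band edges are deliberately left out because they vary between references:

```python
def visible_band(wavelength_nm):
    """Roughly classify a wavelength (in nm) against the visible range.
    The 380-740 nm edges are approximate conventions, not hard physical limits."""
    if wavelength_nm < 380:
        return "shorter than visible (ultraviolet side)"
    if wavelength_nm > 740:
        return "longer than visible (infrared side)"
    return "visible"

print(visible_band(550))   # "visible" (mid-spectrum, perceived as green)
print(visible_band(1000))  # "longer than visible (infrared side)"
```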
The Mechanics of Sight: Image Inversion
• The eye is equipped with a compound
lens system.
• Light enters the eye by passing
between mediums, passing from air
into a denser medium (the cornea)
• The cornea performs ~80% of the
refraction for focusing, with the
crystalline lens performing the
remaining 20%.
• Following the refraction rules for a converging lens, light rays will pass
through the focal point on the opposite side.

The double convex shape of the crystalline lens results in the inversion of
the light field entering the eye.
Contents
• Part II: Understanding the Human Senses and Their
Relationship to Output/Input Devices
3 The Mechanics of Sight
• The Visual Pathway
• Spatial Vision and Depth Cues
• Conclusion

4 Component Technologies of Head-Mounted Displays


• Display Fundamentals
• Related Terminology and Concepts
• Optical Architectures
• Conclusion
Component Technologies of Head-Mounted Displays: Display Fundamentals
• All head-mounted displays for virtual and augmented reality incorporate the
same basic subsystems, although in widely varying sizes and configurations.
• In their simplest form, these displays consist of at least one image source and
optics in a head mount (Melzer, 1997).
• Depending on the specific display design and intended application, this basic
definition will expand to include a variety of different attributes and features,
such as a second visual display channel, sensors for tracking gaze direction
and duration, and more.
Component Technologies of Head-Mounted Displays: Ocularity
• Head-mounted displays can be categorized by their ocularity, or the
specification of their design in serving one or two eyes.
• There are three types:
• Monocular
• Biocular
• Binocular.
Component Technologies of Head-Mounted Displays: Monocular
• Monocular Display: provides a single viewing channel via a small display
element and optics positioned in front of one eye, with the other eye free to
view the normal, real-world surroundings.
• Typically these devices are of a small form factor and are used as information
displays.
• Examples include various military aviation implementations and devices such
as Google Glass or the Vuzix M-100 Smart Glasses.
Component Technologies of Head-Mounted Displays: Biocular
• A Biocular Display: provides a single viewing channel to both eyes.
• This type of display is most common with head-mounted cinematic viewers
as well as those applications in which an immersive capability is needed, but
without stereopsis.
• Generally this type of application is in relation to close proximity tasks.
• An example of such an implementation is an arc welding training system (Ch18)
Component Technologies of Head-Mounted Displays: Binocular
• Binocular Display: Each eye receives its own separate viewing channel with
slightly offset viewpoints mimicking the human visual system to create a
stereoscopic view.
Component Technologies of Head-Mounted Displays: Display Types
• Building on the concept of ocularity, there are three general categories for
head-mounted displays.
o The first provides a computer-generated replacement of the true visual
surroundings.
o The other two provide an enhanced view of the real environment.
1- Fully immersive displays (VR):
o Completely occlude/replace the user’s view of the outside world.
o Fully immersive stereoscopic HMD (classic virtual reality).
o Combined with sensors to track position and orientation of the user’s head.
o Provide the visual sensation of actual presence within a computer-
generated environment.
Component Technologies of Head-Mounted Displays: Display Types
2- Video See-Through Displays (AR):
o Share the fully immersive design, but with the key difference that the
primary imagery displayed within the device comes from either front-
facing video cameras or cameras set in a remote location.
o Depending on the specific application for which the device was designed,
those video signals can be combined with computer-generated imagery
and output from other sensors.
3- Optical See-Through Displays (AR):
o They are the key enabling technology underpinning augmented reality.
o These systems are designed to overlay, supplement, or combine graphics,
symbology, and textual data with the user’s real-world view seen through
the optical elements.
Video
The Mechanics of Sight: Assignment #1
• Prepare a brief summary of the contents of the scientific paper: “Comparison
of optical and video see-through, head-mounted displays”
• Only bulleted definitions, without drawings.
• 3 to 4 pages, 1000 to 1350 words.
• Font: Times New Roman, 12, with 1.5 Line Spacing.
• Microsoft Word Format.
• Assignment File Name: Assignment1_StudentID_StudentFirstName
Ex: Assignment1_20123456_Mohammed
• To be Uploaded on Moodle
• Due Date: Saturday, March 2nd at 11:00 pm
Component Technologies of Head-Mounted Displays:
Imaging and Display Technologies
• The imaging and display technologies employed within head-mounted
viewing devices for virtual and augmented systems have made considerable
advances over the past two decades.
• What was once a realm completely dominated at the high end by CRTs has
been all but completely replaced by four key imaging and display
technologies, each providing a vastly improved, lightweight, and easier to
implement solution
• In the following sections we will explore the basic functionality of each display
type, highlighting the benefits and trade-offs within this application setting.
Component Technologies of Head-Mounted Displays:
Imaging and Display Technologies - Liquid Crystal Displays Panels
• Liquid Crystal Displays (LCDs): are a
relatively mature technology originally used
in near-eye displays for virtual and
augmented reality as far back as the 1980s
by groups such as:
• NASA (Fisher et al., 1986),
• the University of North Carolina (Chung et al., 1989), and
• companies such as VPL Research (Conn et al., 1989).
• LCDs are now used in many HDTVs, desktop
and laptop monitors, and tablet and cell
phone displays.
LCD Display
Component Technologies of Head-Mounted Displays:
Imaging and Display Technologies - Organic Light Emitting Diode Panels
• Organic Light-Emitting Diode
(OLED): is a solid-state display
technology based on organic (carbon
and hydrogen bonded) materials
that will emit light when electric
current is applied.
OLED Display
• There are two types of OLEDs:
o Passive Matrix (PMOLED)
o Active Matrix (AMOLED)

AMOLED Display
Component Technologies of Head-Mounted Displays:
Digital Light Projector (DLP) Microdisplay
• A Texas Instruments digital light projector (DLP) chip, technically
referred to as a digital micromirror device (DMD), is a type of spatial
light modulator.
• On the surface of this chip is an array of up to two million individually
controlled micromirrors, each of which can be used to represent a single
pixel in a projected image (Bhakta et al., 2014).
Component Technologies of Head-Mounted Displays:
Liquid Crystal on Silicon (LCoS) Microdisplay

• Liquid Crystal on Silicon (LCoS) imaging technology: a cross between LCD
and DLP technologies.

LCDs – LEDs – OLED Screens
Video
Component Technologies of Head-Mounted Displays:
Related Terminology and Concepts
• Light: Light is radiant energy capable of exciting photoreceptors in the human
eye.
• The human eye is only sensitive to a narrow band within the electromagnetic
spectrum, falling between wavelengths of 380 and 740 nanometers.
• Lumen (lm): the unit of measure for quantifying luminous flux, the total
quantity of visible light energy emitted by a source.
• Most light measurements are expressed in lumens.
• You will frequently see this unit of measure used in relation to displays based on
projection technologies.
Component Technologies of Head-Mounted Displays:
Related Terminology and Concepts
• Luminous flux:—Luminous flux is a quantitative expression of light energy per
unit of time radiated from a source over wavelengths to which the human eye
is sensitive (380 nm–740 nm).
• The unit of measure for luminous flux is the Lumen (lm).

• Luminous intensity:—Luminous intensity is the luminous flux per solid angle


emitted or reflected from a point.
• The unit of measure for expressing luminous intensity is the lumen per steradian, or
candela (cd).
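Because a candela is one lumen per steradian, an idealized point source radiating uniformly in all directions (4π steradians) ties the two units together. A minimal sketch, assuming a perfectly isotropic source:

```python
import math

def total_flux_lumens(intensity_cd):
    """Total luminous flux (lm) of an isotropic point source of the given
    luminous intensity (cd), integrated over the full 4*pi steradian sphere."""
    return 4 * math.pi * intensity_cd

# A hypothetical 100 cd isotropic source emits about 1257 lumens in total:
print(round(total_flux_lumens(100.0)))  # 1257
```

Real sources are rarely isotropic, so manufacturers quote intensity over specified beam angles rather than assuming the full sphere.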
Component Technologies of Head-Mounted Displays:
Related Terminology and Concepts
• Candela:—A candela is the unit of measure for quantifying luminous power
per unit solid angle emitted by a point light source in a particular direction.

• Luminance:—Luminance is the measure of luminous intensity per unit area
projected in a given direction.
• The unit of measure is the candela per square meter (cd/m2).

• Illuminance:—Illuminance is the luminous flux incident on a surface per unit


area.
• The unit of measure is the lux (lx), or lm/m2 (lumen per square meter).
• For a given luminous flux, the illuminance decreases as the illuminated area increases.
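The last point, that illuminance falls as the illuminated area grows, follows directly from the lux definition (lx = lm/m2). A small sketch with illustrative flux and area values:

```python
def illuminance_lux(flux_lumens, area_m2):
    """Illuminance (lx) of a luminous flux spread evenly over a surface area."""
    return flux_lumens / area_m2

# The same 1000 lm spread over a larger area yields a lower illuminance:
print(illuminance_lux(1000.0, 1.0))  # 1000.0 lx
print(illuminance_lux(1000.0, 4.0))  # 250.0 lx
```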
Component Technologies of Head-Mounted Displays:
Related Terminology and Concepts
• Brightness:—Brightness is purely a subjective attribute or property used to
express the luminance of a display.
• Brightness is perceived, not measured.
• From a technical standpoint, the words brightness and luminance should not be
interchanged (although they frequently are).
• Brightness is not a measurable quantity.
• Spatial Resolution:—Refers to the number of individual pixel elements of a
display and is presented as numerical values for both the vertical and the
horizontal directions.
• For instance, the spatial resolution of the Oculus Rift CV1 is 1080 × 1200
pixels per eye.
• Pixel pitch:—Pixel pitch refers to the distance from the center of one pixel to
the center of the next pixel measured in millimeters.
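Pixel pitch follows directly from the panel width and the horizontal pixel count. A small sketch; the 90 mm panel width below is a hypothetical value for illustration, not the specification of any particular headset:

```python
def pixel_pitch_mm(panel_width_mm, horizontal_pixels):
    """Center-to-center pixel distance (mm), assuming uniformly spaced pixels."""
    return panel_width_mm / horizontal_pixels

# A hypothetical 90 mm wide panel with 1080 horizontal pixels:
print(round(pixel_pitch_mm(90.0, 1080), 4))  # 0.0833
```

Smaller pitch packs more pixels into the same optics, which matters in HMDs because magnification makes individual pixels (and the gaps between them) visible.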
Video
