MODULE 07 Virtual Reality


EMERGING TECHNOLOGIES

(CPE0051)
Module 7
Virtual Reality
1. Introduce the concept of virtual reality.
2. Introduce the concept of augmented reality.
3. Identify the applications for virtual reality.
4. Identify the applications for augmented reality.
What is Virtual Reality?

Virtual Reality (VR) is the use of computer technology to create a simulated environment. Unlike traditional user interfaces, VR places the user inside an experience. Instead of viewing a screen in front of them, users are immersed and able to interact with 3D worlds.
By simulating as many senses as possible, such as vision, hearing, touch, even smell, the computer is transformed into a gatekeeper to this artificial world. The only limits to near-real VR experiences are the availability of content and cheap computing power.
What’s the difference Between Virtual Reality and Augmented
Reality?

Virtual Reality and Augmented Reality are two sides of the same coin. You could think of Augmented Reality as VR with one foot in the real world: Augmented Reality simulates artificial objects in the real environment; Virtual Reality creates an artificial environment to inhabit.
In Augmented Reality, the computer uses sensors and
algorithms to determine the position and orientation of a
camera. AR technology then renders the 3D graphics as they
would appear from the viewpoint of the camera,
superimposing the computer-generated images over a user’s
view of the real world.
In Virtual Reality, the computer uses similar sensors and math. However, rather than locating a real camera within a physical environment, the positions of the user's eyes are located within the simulated environment. If the user's head turns, the graphics react accordingly. Rather than compositing virtual objects and a real scene, VR technology creates a convincing, interactive world for the user.
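The pose-then-render step described above can be sketched with a minimal pinhole-camera projection: given a camera position and orientation, a virtual 3D point is mapped to a pixel so graphics can be superimposed at the right place. The function, focal length, and scene values below are illustrative assumptions, not any vendor's actual rendering code:

```python
import math

def project_point(world_pt, cam_pos, cam_yaw, focal=500.0, cx=320.0, cy=240.0):
    """Project a 3D world point into pixel coordinates for a camera at
    cam_pos, rotated by cam_yaw radians about the vertical axis.
    Returns None if the point is behind the camera."""
    # Translate the point into the camera's frame of reference
    dx = world_pt[0] - cam_pos[0]
    dy = world_pt[1] - cam_pos[1]
    dz = world_pt[2] - cam_pos[2]
    # Rotate by -yaw so the camera looks down the +z axis
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    x = c * dx + s * dz
    z = -s * dx + c * dz
    y = dy
    if z <= 0:
        return None  # behind the camera: nothing to draw
    # Pinhole projection: perspective divide, then shift to the pixel origin
    u = focal * x / z + cx
    v = focal * y / z + cy
    return (u, v)

# A virtual object 2 m straight ahead of an un-rotated camera
# lands at the image centre (cx, cy)
print(project_point((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.0))  # (320.0, 240.0)
```

AR and VR share this math: AR estimates the pose of a real camera and draws over its video feed, while VR uses the tracked pose of the user's head to render the whole scene.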
Virtual Reality technology

Virtual Reality’s most immediately recognizable component is the head-mounted display (HMD). Human beings are visual creatures, and display technology is often the single biggest difference between immersive Virtual Reality systems and traditional user interfaces.
For instance, CAVE automatic virtual environments actively
display virtual content onto room-sized screens. While they
are fun for people in universities and big labs, consumer and
industrial wearables are the wild west.
With a multiplicity of emerging hardware and software
options, the future of wearables is unfolding but yet unknown.
Concepts such as the HTC Vive Pro Eye, Oculus Quest and
Playstation VR are leading the way, but there are also
players like Google, Apple, Samsung, Lenovo and others
who may surprise the industry with new levels of immersion
and usability.
Whoever comes out ahead, the simplicity of buying a
helmet-sized device that can work in a living-room, office, or
factory floor has made HMDs center stage when it comes to
Virtual Reality technologies.
Virtual Reality and the importance of audio

Convincing Virtual Reality applications require more than just graphics. Both hearing and vision are central to a person’s sense of space. In fact, human beings react more quickly to audio cues than to visual cues.
In order to create truly immersive Virtual Reality experiences,
accurate environmental sounds and spatial characteristics
are a must. These lend a powerful sense of presence to a
virtual world. To experience the binaural audio details that go
into a Virtual Reality experience, put on some headphones
and tinker with this audio infographic published by The Verge.
While audio-visual information is most easily replicated in
Virtual Reality, active research and development efforts are
still being conducted into the other senses. Tactile inputs
such as omnidirectional treadmills allow users to feel as
though they’re actually walking through a simulation, rather
than sitting in a chair or on a couch.
Haptic technologies, also known as kinesthetic or touch
feedback tech, have progressed from simple spinning-weight
“rumble” motors to futuristic ultrasound technology. It is now
possible to hear and feel true-to-life sensations along with
visual VR experiences.
Major players in Virtual Reality: Oculus, HTC, Sony

As of the end of 2018, the three best-selling Virtual Reality headsets were Sony’s PlayStation VR (PSVR), Facebook’s Oculus Rift and the HTC Vive. This was not a surprise, seeing as the same three HMDs had also been best sellers in 2017.
2019 sees the VR landscape broadening with Google, HP,
Lenovo, and others looking to grab a piece of the still-
burgeoning market.

Here’s a look at 2019’s major VR hardware manufacturers and the devices they are manufacturing:
Oculus Rift, Oculus Rift S, Oculus Go, Oculus Quest

Originally funded as a Kickstarter project in 2012, and engineered with the help of John Carmack (co-founder of id Software, of Doom and Quake fame), Oculus became the early leader in Virtual Reality hardware for video games. Facebook bought Oculus in 2014, and brought the company’s high-end VR HMD to market for consumers. More recently, Oculus has seen success with the lower-priced, lower-powered Oculus Go, and 2019 will see the release of multiple new iterations on the hardware, including the tethered Rift S and the stand-alone Oculus Quest.
HTC Vive, HTC Vive Pro Eye, HTC Cosmos, HTC Focus,
HTC Plus

The HTC Vive has been one of the best VR HMDs on the
market since its consumer release back in 2016.
Manufactured by HTC, the Vive was the first VR HMD to
support SteamVR. The Vive has been locked in fierce
competition with the Oculus Rift since release, as both
headsets aimed at the same top end of the VR enthusiast
market.
The Vive has proven itself a durable workhorse for enterprise solutions, while also delivering one of the best consumer VR experiences available. Since its 2016 release, it has gone through several iterations, including the addition of a wireless module. The Vive Pro came out in 2018, and the Vive Pro Eye and the HTC Vive Cosmos are both slated for release in the second half of 2019.
HMD + Smartphone Virtual Reality

There’s a second class of Virtual Reality HMD that is really just a shell with special lenses that pairs with a smartphone to deliver a VR experience. These devices can sell for almost nothing (and are often given away free), and deliver a scaled-down VR experience that still approaches the immersive experiences generated by much more expensive hardware.
Applications of Virtual Reality
How Virtual Reality is being used today

Unsurprisingly, the video games industry is one of the largest proponents of Virtual Reality. Support for the Oculus Rift headsets has already been jerry-rigged into games like Skyrim and Grand Theft Auto, but newer games like Elite: Dangerous come with headset support built right in.
Many tried-and-true user interface metaphors in gaming have
to be adjusted for VR (after all, who wants to have to pick
items out of a menu that takes up your entire field of vision?),
but the industry has been quick to adapt as the hardware for
true Virtual Reality gaming has become more widely
available.
Virtual Reality and data visualization

Scientific and engineering data visualization has benefited for years from Virtual Reality, although recent innovation in display technology has generated interest in everything from molecular visualization to architecture to weather models.
VR for aviation, medicine, and the military

In aviation, medicine, and the military, Virtual Reality training is an attractive alternative to live training with expensive equipment, dangerous situations, or sensitive technology. Commercial pilots can use realistic cockpits with VR technology in holistic training programs that incorporate virtual flight and live instruction.
Surgeons can train with virtual tools and patients, and
transfer their virtual skills into the operating room, and studies
have already begun to show that such training leads to faster
doctors who make fewer mistakes. Police and soldiers are
able to conduct virtual raids that avoid putting lives at risk.
Virtual Reality and the treatment of mental illness

Speaking of medicine, the treatment of mental illness, including post-traumatic stress disorder, stands to benefit from the application of Virtual Reality technology to ongoing therapy programs.
Whether it’s allowing veterans to confront challenges in a
controlled environment, or overcoming phobias in
combination with behavioral therapy, VR has
a potential beyond gaming, industrial and marketing
applications to help people heal from, reconcile
and understand real-world experiences.
How Virtual Reality Works
Virtual reality is a way to create a computer-generated environment that immerses the user in a virtual world. When we put on a VR headset, it takes us to a simulated set-up, cutting us off completely from our actual surroundings. If you have ever put one on, you know exactly what this feels like.
While we understand the concept from an experiential perspective, what about the technical backend that makes it all possible? How does virtual reality actually work? This section covers the technology behind virtual reality and the basic terminology surrounding the development of a simulated ecosystem for a head-mounted display (VR headset).
The Basics of How VR Works:

The primary subject of virtual reality is simulating vision. Every headset aims to perfect its approach to creating an immersive 3D environment. Each VR headset puts a screen (or two – one for each eye) in front of the eyes, thus eliminating any interaction with the real world.
Two autofocus lenses are generally placed between the
screen and the eyes that adjust based on individual eye
movement and positioning. The visuals on the screen are
rendered either by using a mobile phone or HDMI cable
connected to a PC.
To create a truly immersive virtual reality there are certain prerequisites – a frame rate of at least 60 fps, an equally competent refresh rate, and a minimum 100-degree field of view (FOV) (though 180 degrees is ideal). The frame rate is the rate at which the GPU can process images per second, the screen refresh rate is the pace at which the display redraws images, and the FOV is the extent to which the display can support eye and head movement.
If any of these falls short of the standards, the user can experience latency, i.e. too long a gap between their actions and the response on the screen. We need the response to be less than 20 milliseconds to trick the brain, which is achieved by combining all the above factors in the right proportion.
Another issue that needs to be addressed is screen tearing, a visual artifact (and a contributor to cybersickness) resulting from inconsistency between the frame rate and refresh rate. If the GPU’s fps is higher than the screen refresh rate, the image can become distorted. To counter this issue, we limit the frame rate to the monitor’s refresh rate; this is done using a technique called Vertical Sync (VSync).
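The relationship between refresh rate, per-frame time budget, and the 20-millisecond latency target can be illustrated with a small sketch. The numbers mirror the text; real VSync is enforced in the GPU driver and display pipeline, not in application code like this:

```python
# Motion-to-photon budget check and a VSync-style frame cap (illustrative).

def frame_budget_ms(refresh_hz):
    """Time available to render one frame at the given refresh rate."""
    return 1000.0 / refresh_hz

def effective_fps(gpu_fps, refresh_hz):
    """With VSync on, presented frames can never exceed the refresh rate."""
    return min(gpu_fps, refresh_hz)

print(frame_budget_ms(90))       # ~11.1 ms per frame on a 90 Hz HMD
print(effective_fps(120, 90))    # a GPU producing 120 fps is capped to 90
print(frame_budget_ms(90) < 20)  # True: one frame fits the 20 ms latency target
```

This is why headset refresh rate matters: at 90 Hz a single frame already fits comfortably inside the 20 ms motion-to-photon target, while a missed frame doubles the wait.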
Among the major headsets available today, the Vive and Rift both have 110-degree FOVs, Google Cardboard has 90, the GearVR has 96 and the new Google Daydream offers up to 120 degrees. As for refresh rate, both the HTC Vive and Oculus Rift come with 90 Hz displays, while the PlayStation VR offers a display with 90 Hz and 120 Hz modes.
Other Elements of the VR Technology:

The Impact of Sound:

Sound effects, when synced with the visuals, can create very engaging effects. By using headphones and 3D sound effects, the user’s belief in the virtual environment can be reinforced.
While crafting sound effects, due care needs to be taken about the consistency between the graphics and the sound. If you start playing horror music in the background of a fairy tale movie, it will just put the user off.
Eye and Head Tracking:

Eye and head tracking can be implemented using laser pointers, LED lights or mobile sensors. On mobile devices, the accelerometer detects three-dimensional movement, the gyroscope detects angular movement, and the magnetometer identifies position relative to the Earth.
If we need to achieve very high accuracy, cameras and sensors can be installed in the room where the headset is used, although this is a much costlier setup compared to using basic phone sensors.
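As a rough illustration of how those phone sensors are combined, here is a minimal complementary filter for head pitch: the gyroscope dominates short-term motion while the accelerometer (which senses gravity) corrects long-term drift. The blend factor and sample rate are arbitrary assumptions; production headsets use far more sophisticated sensor fusion:

```python
import math

def accel_to_pitch(ax, ay, az):
    """Estimate pitch (degrees) from gravity as seen by the accelerometer."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Fuse a gyroscope rate (deg/s) with an accelerometer-derived pitch:
    integrate the gyro for responsiveness, blend in the accelerometer
    estimate to cancel the gyro's slow drift."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Head held level: gravity lies entirely on the z axis, so pitch stays at 0
pitch = 0.0
for _ in range(10):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_pitch=accel_to_pitch(0.0, 0.0, 9.81),
                                 dt=0.01)
print(round(pitch, 3))  # 0.0
```

The same blend-fast-and-slow-sensors idea extends to the magnetometer for yaw, which gravity alone cannot observe.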
Augmented Reality
What is Augmented Reality?

Augmented reality is technology that expands our physical world, adding layers of digital information onto it. Unlike Virtual Reality (VR), AR does not create a whole artificial environment to replace the real one with a virtual one. AR appears in the direct view of an existing environment and adds sounds, videos, and graphics to it.
AR is a view of the physical, real-world environment with superimposed computer-generated images that change the perception of reality.
The term itself was coined back in 1990, and some of the first commercial uses were in television and the military. With the rise of the Internet and smartphones, AR rolled out its second wave and is nowadays mostly associated with interactive concepts. 3D models are directly projected onto physical things or fused together in real time, and various augmented reality apps impact our habits, social life, and the entertainment industry.
AR apps typically connect digital animation to a special ‘marker’, or pinpoint the location with the help of GPS in phones. Augmentation happens in real time and within the context of the environment, for example, overlaying scores on a live feed of sporting events.
Four Types of AR
Marker-based AR.

Some also call it image recognition, as it requires a special visual object and a camera to scan it. It may be anything, from a printed QR code to special signs. In some cases, the AR device also calculates the position and orientation of the marker to position the content. Thus, a marker initiates digital animations for users to view, and so images in a magazine may turn into 3D models.
Markerless AR.

Also known as location-based or position-based augmented reality, markerless AR utilizes a GPS, a compass, a gyroscope, and an accelerometer to provide data based on the user’s location. This data then determines what AR content you find or get in a certain area.
With the availability of smartphones, this type of AR typically produces maps, directions, and information on nearby businesses.
Applications include events and information, business ads
pop-ups, navigation support.
Projection-based AR.

Projection-based AR projects synthetic light onto physical surfaces and, in some cases, allows users to interact with it. These are the holograms we have all seen in sci-fi movies like Star Wars. It detects user interaction with a projection by the projection’s alterations.
Superimposition-based AR.

Superimposition-based AR replaces the original view with an augmented one, fully or partially. Object recognition plays a key role; without it, the whole concept is simply impossible. We’ve all seen an example of superimposition-based augmented reality in the IKEA Catalog app, which allows users to place virtual items from the furniture catalog in their rooms.
Applications of Augmented Reality
1. Medical Training

From operating MRI equipment to performing complex surgeries, AR tech holds the potential to boost the depth and effectiveness of medical training in many areas. Students at the Cleveland Clinic at Case Western Reserve University, for example, will now learn anatomy utilizing an AR headset allowing them to delve into the human body in an interactive 3D format.
2. Retail

In today's physical retail environment, shoppers are using their smartphones more than ever to compare prices or look up additional information on products they're browsing. World-famous motorcycle brand Harley-Davidson is one great instance of a brand making the most of this trend, by developing an AR app that shoppers can use in-store. Users can view a motorcycle they might be interested in buying in the showroom, and customize it using the app to see which colors and features they might like.
3. Repair & Maintenance

One of the biggest industrial use cases of AR is for repair and maintenance of complex equipment. Whether it's a car motor or an MRI machine, repair and maintenance staff are beginning to use AR headsets and glasses while they perform their jobs to provide them with useful information on the spot, suggest potential fixes, and point out potential trouble areas.
This use case will only continue to get stronger as machine-
to-machine IoT technology grows and can feed information
directly to AR headsets.
4. Design & Modeling

From interior design to architecture and construction, AR is helping professionals visualize their final products during the creative process. Use of headsets enables architects, engineers, and design professionals to step directly into their buildings and spaces to see how their designs might look, and even make virtual, on-the-spot changes.
Urban planners can even model how entire city layouts might
look using AR headset visualization. Any design or modeling
jobs that involve spatial relationships are a perfect use case
for AR tech.
5. Business Logistics

AR presents a variety of opportunities to increase efficiency and cost savings across many areas of business logistics. This includes transportation, warehousing, and route optimization.
Shipping company DHL has already implemented smart AR glasses in some of its warehouses, where lenses display to workers the shortest route within a warehouse to locate and pick a certain item that needs to be shipped. Providing workers with more efficient ways to go about their jobs is one of the best ROI use cases in today's business environment.
6. Tourism Industry

Technology has gone a long way towards advancing the tourism industry in recent years, from review sites like TripAdvisor to informative websites like Lonely Planet. But AR presents a huge opportunity for travel brands and agents to give potential tourists an even more immersive experience before they travel.
Imagine taking a virtual "walkabout" of Australia on AR glasses before booking a ticket to Sydney, or a leisurely stroll around Paris to see what museums or cafes you might like to visit. AR promises to make selling trips, travel, and vacations a whole lot easier in the future.
7. Classroom Education

While technologies like tablets have become widespread in many schools and classrooms, teachers and educators are now ramping up students' learning experiences with AR. The Aurasma app, for example, is already being used in classrooms so that students can view their classes via a smartphone or tablet for a richer learning environment.
Students learning about astronomy might see a full map of
the solar system, or those in a music class might be able to
see musical notes in real time as they learn to play an
instrument.
8. Field Service

Whether it's something as small as an air conditioner, or as large as a wind turbine, every day field service technicians get dispatched to repair a piece of mission-critical equipment that needs to get up and running as soon as possible. Today, these technicians can arrive on-site with AR glasses or headsets and view whatever they're repairing to more quickly diagnose - and fix - the problem.
And instead of having to thumb through a repair manual,
technicians can go about their business hands-free to get in
and out faster than ever.
9. Entertainment Properties

In the entertainment industry, it's all about building a strong relationship between your branded characters and the audience. Properties like Harry Potter are immensely successful because readers of the books and watchers of the movies feel like they know the characters, and are hungry for additional content.
Entertainment brands are now seeing AR as a great
marketing opportunity to build deeper bonds between their
characters and audience. As a matter of fact, the makers of
AR sensation Pokemon Go are soon planning to release
a Harry Potter-themed AR game that fans can interact with
day in and day out.
10. Public Safety

In the event of an emergency today, people will immediately reach for their smartphone to find out what's going on, where to go, and whether their loved ones are safe. Moreover, first responders arrive on the scene of a fire or earthquake trying to figure out who needs help, and the best way to get them to safety. AR is showing promise in solving both pieces of the public safety puzzle.
First responders wearing AR glasses can be alerted to danger areas, and shown in real time the individuals who need assistance, while still remaining aware of their surroundings. For those in need, geolocation-enabled AR can show them directions and the best route to safe zones and areas with firefighters or medics.
Sources:
https://www.marxentlabs.com/what-is-virtual-reality/
https://www.inc.com/james-paine/10-real-use-cases-for-augmented-reality.html
https://thinkmobiles.com/blog/what-is-augmented-reality/
https://www.newgenapps.com/blog/how-vr-works-technology-behind-virtual-reality/
EMERGING TECHNOLOGIES
(CPE0051)
Module 8
Artificial Intelligence
1. Introduce the concept of artificial intelligence.
2. Identify the different categories of AI.
3. Identify the different types of AI.
4. Identify the applications of AI.
5. Orient the students on issues regarding AI.
What is Artificial Intelligence?

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.
HOW DOES ARTIFICIAL INTELLIGENCE WORK?

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"
Turing's paper "Computing Machinery and Intelligence" (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.

At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.
The expansive goal of artificial intelligence has given rise to many questions and debates. So much so, that no singular definition of the field is universally accepted.

The major limitation in defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is. What makes a machine intelligent?
In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is "the study of agents that receive percepts from the environment and perform actions." (Russell and Norvig viii)
Norvig and Russell go on to explore four different approaches that have historically defined the field of AI:

Thinking humanly
Thinking rationally
Acting humanly
Acting rationally
The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting "all the skills needed for the Turing Test also allow an agent to act rationally." (Russell and Norvig 4)
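Russell and Norvig's percept-to-action definition can be made concrete with a toy reflex agent: it receives a percept from its environment and maps it directly to an action. The thermostat domain and its thresholds are invented purely for illustration:

```python
# A minimal "agent" in the Russell & Norvig sense: it perceives the
# environment (a temperature reading) and performs an action.
# The thermostat scenario and its numbers are a made-up example.

def thermostat_agent(percept_temp, target=21.0, band=0.5):
    """Simple reflex agent: act rationally on the current percept alone."""
    if percept_temp < target - band:
        return "heat"
    if percept_temp > target + band:
        return "cool"
    return "idle"

# One percept in, one action out, on each step of the agent's loop
for temp in (18.0, 21.2, 24.0):
    print(temp, "->", thermostat_agent(temp))  # heat, idle, cool
```

A rational agent in their sense is one whose action, given its percepts, achieves the best expected outcome; here that simply means steering the temperature toward the target.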
Patrick Winston, the Ford professor of artificial intelligence
and computer science at MIT, defines AI as "algorithms
enabled by constraints, exposed by representations that
support models targeted at loops that tie thinking, perception
and action together."
While these definitions may seem abstract to the average
person, they help focus the field as an area of computer
science and provide a blueprint for infusing machines and
programs with machine learning and other subsets of artificial
intelligence.
While addressing a crowd at the Japan AI Experience in
2017, DataRobot CEO Jeremy Achin began his speech by
offering the following definition of how AI is used today:
"AI is a computer system able to perform tasks that ordinarily
require human intelligence... Many of these artificial
intelligence systems are powered by machine learning, some
of them are powered by deep learning and some of them are
powered by very boring things like rules."
Categories of AI
Narrow AI:

Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.
Artificial General Intelligence (AGI):

AGI, sometimes referred to as "Strong AI," is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.
Narrow Artificial Intelligence
Narrow AI is all around us and is easily the most successful
realization of artificial intelligence to date. With its focus on
performing specific tasks, Narrow AI has experienced
numerous breakthroughs in the last decade that have had
"significant societal benefits and have contributed to the
economic vitality of the nation," according to "Preparing for
the Future of Artificial Intelligence," a 2016 report released by
the Obama Administration.
A few examples of Narrow AI include:

Google search
Image recognition software
Siri, Alexa and other personal assistants
Self-driving cars
IBM's Watson
Machine Learning & Deep Learning
Much of Narrow AI is powered by breakthroughs in machine
learning and deep learning. Understanding the difference
between artificial intelligence, machine learning and deep
learning can be confusing. Venture capitalist Frank
Chen provides a good overview of how to distinguish
between them, noting:
"Artificial intelligence is a set of algorithms and intelligence to
try to mimic human intelligence. Machine learning is one of
them, and deep learning is one of those machine learning
techniques."
Simply put, machine learning feeds a computer data and
uses statistical techniques to help it "learn" how to get
progressively better at a task, without having been
specifically programmed for that task, eliminating the need for
millions of lines of written code. Machine learning consists of
both supervised learning (using labeled data sets) and
unsupervised learning (using unlabeled data sets).
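A minimal illustration of supervised learning, i.e. learning from labeled data with no task-specific rules programmed in, is a one-nearest-neighbour classifier. The data points and labels below are made up for illustration; real systems use libraries and far larger data sets:

```python
# Supervised learning in miniature: predict a label for a new point by
# finding the closest labeled training example (1-nearest-neighbour).

def nearest_neighbor(train, query):
    """train: list of ((features), label) pairs. Return the label of the
    training point closest to query by squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], query))[1]

# Labeled examples: (height_cm, weight_kg) -> species (toy data)
labeled = [((30, 4), "cat"), ((28, 5), "cat"),
           ((60, 25), "dog"), ((55, 20), "dog")]
print(nearest_neighbor(labeled, (32, 6)))   # cat
print(nearest_neighbor(labeled, (58, 22)))  # dog
```

No classification rule was ever written; the behavior comes entirely from the labeled examples, which is the essence of the "learning from data" idea above.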
Deep learning is a type of machine learning that runs inputs
through a biologically-inspired neural network architecture.
The neural networks contain a number of hidden layers
through which the data is processed, allowing the machine to
go "deep" in its learning, making connections and weighting
input for the best results.
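The hidden-layer idea can be sketched as a forward pass through a tiny fully-connected network: each layer computes weighted sums of its inputs and squashes them through a nonlinearity. The weights below are arbitrary illustrative values, not a trained model:

```python
import math

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: per neuron, a weighted sum plus bias, then sigmoid."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    """Two inputs -> one hidden layer of two neurons -> one output."""
    hidden = layer(x, weights=[[0.5, -0.4], [0.3, 0.8]], biases=[0.1, -0.2])
    output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
    return output[0]

# The data passes "deep" through the hidden layer before reaching the output
print(0.0 < forward([1.0, 0.0]) < 1.0)  # True: sigmoid outputs lie in (0, 1)
```

Training would adjust those weights from data; deep learning simply stacks many such hidden layers so the network can build increasingly abstract representations.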
Artificial General Intelligence
The creation of a machine with human-level intelligence that
can be applied to any task is the Holy Grail for many AI
researchers, but the quest for AGI has been fraught with
difficulty.
The search for a "universal algorithm for learning and acting in any environment" (Russell and Norvig 27) isn't new, but time hasn't eased the difficulty of essentially creating a machine with a full set of cognitive abilities.
AGI has long been the muse of dystopian science fiction, in
which super-intelligent robots overrun humanity, but experts
agree it's not something we need to worry about anytime
soon.
Types of AI
AI or artificial intelligence is the simulation of human
intelligence processes by machines, especially computer
systems. These processes include learning, reasoning and
self-correction. Some of the applications of AI include expert
systems, speech recognition and machine vision. Artificial
Intelligence is advancing dramatically. It
is already transforming our world socially, economically and
politically.
The term AI was coined by John McCarthy, an American computer scientist, in 1956 at the Dartmouth Conference, where the discipline was born. Today, it is an umbrella term that encompasses everything from robotic process automation to actual robotics. AI can perform tasks such as identifying patterns in data more efficiently than humans, enabling businesses to gain more insight from their data.
With help from AI, massive amounts of data can be analyzed to map poverty and climate change, automate agricultural practices and irrigation, individualize healthcare and learning, predict consumption patterns, and streamline energy usage and waste management.
Types of Artificial Intelligence:
Artificial Intelligence can be classified in several ways. The first classification distinguishes weak AI from strong AI. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a specific type of task. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities, so that when presented with an unfamiliar task, it has enough intelligence to find a solution.
The Turing Test, developed by mathematician Alan Turing in 1950, is a method used to determine if a computer can think like a human, although the method is controversial. A second classification comes from Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University. He categorized AI into four types, as follows:
Type 1: Reactive Machines. An example is Deep Blue, an IBM chess program that can identify pieces on the chess board and make predictions accordingly, analyzing possible moves of its own and of its opponent. But the major fault with this type is that it has no memory and cannot use past experiences to inform future ones. Deep Blue and AlphaGo were designed for narrow purposes and cannot easily be applied to any other situation.
Type 2: Limited Memory. These AI systems can use past experiences to inform future decisions. Most of the decision-making functions in autonomous vehicles have been designed in this way.
Type 3: Theory of Mind. This is a psychology term which refers to the understanding that others have their own beliefs and intentions that impact the decisions they make. At present, this kind of artificial intelligence does not exist.
Type 4: Self-Awareness. In this category, AI systems have a sense of self and have consciousness. Machines with self-awareness understand their current state and can use the information to infer what others are feeling. This type of AI does not yet exist.
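The difference between the first two types can be sketched with two toy driving agents: a reactive agent judges only the current gap to the car ahead, while a limited-memory agent also remembers recent gaps and can notice that the gap is shrinking. The scenario and thresholds are invented for illustration:

```python
from collections import deque

def reactive(gap_m):
    """Type 1: decide from the current percept alone."""
    return "brake" if gap_m < 10 else "cruise"

class LimitedMemory:
    """Type 2: keep a short sliding window of past gaps to detect closing."""
    def __init__(self, n=3):
        self.recent = deque(maxlen=n)
    def act(self, gap_m):
        closing = bool(self.recent) and gap_m < self.recent[-1]
        self.recent.append(gap_m)
        if gap_m < 10 or (closing and gap_m < 15):
            return "brake"  # brake early when the gap is shrinking
        return "cruise"

print(reactive(12))  # cruise: 12 m looks safe when viewed in isolation
agent = LimitedMemory()
print([agent.act(g) for g in (20, 16, 12)])  # ['cruise', 'cruise', 'brake']
```

At 12 m the reactive agent still cruises, but the limited-memory agent, having observed the gap fall from 20 m to 16 m to 12 m, brakes early, which is exactly the kind of past-informed decision-making autonomous vehicles need.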
AI Technologies
Artificial Intelligence Technologies:

The market for artificial intelligence technologies is flourishing. Artificial Intelligence involves a variety of technologies and tools; some of the recent technologies are as follows:
Natural Language Generation: A tool that produces text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights.
Speech Recognition: Transcribes and transforms human
speech into a format useful for computer applications.
Presently used in interactive voice response systems and
mobile applications.
Virtual Agent: A Virtual Agent is a computer-generated, animated, artificial-intelligence virtual character (usually with an anthropomorphic appearance) that serves as an online customer service representative. It leads an intelligent conversation with users, responds to their questions and performs adequate non-verbal behavior. An example of a typical Virtual Agent is Louise, the Virtual Agent of eBay, created by the French-American developer VirtuOz.
Machine Learning: Provides algorithms, APIs (Application
Program interface) development and training toolkits, data,
as well as computing power to design, train, and deploy
models into applications, processes, and other machines.
Currently used in a wide range of enterprise applications,
mostly involving prediction or classification.
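The "train on labeled data, then predict" workflow that these platforms wrap in APIs and toolkits can be shown with a tiny nearest-neighbour classifier. The data points are made up, and real systems use dedicated libraries rather than this hand-rolled sketch.

```python
# A tiny 1-nearest-neighbour classifier: predict the label of the
# training example closest to the query. Feature vectors and labels
# are invented for illustration.
import math

def predict(train, query):
    """train: list of (feature_vector, label); returns nearest label."""
    return min(train, key=lambda ex: math.dist(ex[0], query))[1]

# (feature vector, label) pairs, e.g. [height_cm, weight_kg]
train = [([150, 50], "small"), ([180, 90], "large"),
         ([160, 55], "small")]
print(predict(train, [175, 85]))  # large
```

Enterprise platforms add model training, deployment, and monitoring around this same predict-from-examples core.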
Deep Learning Platforms: A special type of machine
learning consisting of artificial neural networks with multiple
abstraction layers. Currently used in pattern recognition and
classification applications supported by very large data sets.
Biometrics: Biometrics uses methods for unique recognition
of humans based upon one or more intrinsic physical or
behavioral traits. In computer science, particularly,
biometrics is used as a form of identity access management
and access control. It is also used to identify individuals in
groups that are under surveillance. Currently used in market
research.
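A common building block of biometric verification is comparing a stored feature template against a fresh sample with a similarity score and a threshold. The feature vectors and the 0.95 threshold below are illustrative assumptions; real systems extract such vectors from fingerprints, faces, or voices with specialized models.

```python
# Sketch of biometric verification via cosine similarity.
# Vectors and threshold are invented for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def verify(template, sample, threshold=0.95):
    """Accept the claimed identity if the sample matches the template."""
    return cosine_similarity(template, sample) >= threshold

enrolled = [0.9, 0.1, 0.4]                   # stored template
print(verify(enrolled, [0.88, 0.12, 0.41]))  # True: near-identical sample
print(verify(enrolled, [0.1, 0.9, 0.2]))     # False: different person
```

The threshold trades off false accepts against false rejects, which is the central tuning decision in identity access management.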
Robotic Process Automation: uses scripts and other methods
to automate human actions to support efficient business
processes. Currently used where it is inefficient for humans to
execute a task.
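The essence of RPA is a script applying the same fixed rules a clerk would follow. The record fields, routing labels, and the 500 approval limit below are invented for illustration.

```python
# Sketch of robotic process automation: routing expense records
# with fixed clerical rules instead of manual handling.
# Fields and thresholds are illustrative assumptions.

def route_expense(record):
    """Apply the same fixed rules a human clerk would follow."""
    if record["amount"] > 500:
        return "needs-manager-approval"
    if record["category"] == "travel":
        return "travel-desk"
    return "auto-approved"

inbox = [
    {"id": 1, "amount": 120, "category": "office"},
    {"id": 2, "amount": 900, "category": "travel"},
    {"id": 3, "amount": 80, "category": "travel"},
]
for rec in inbox:
    print(rec["id"], route_expense(rec))
```

Commercial RPA tools add screen scraping and UI automation around rule sets like this so the "robot" can drive existing applications directly.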
Text Analytics and NLP: Natural language processing (NLP)
uses and supports text analytics by facilitating the
understanding of sentence structure and meaning, sentiment,
and intent through statistical and machine learning methods.
Currently used in fraud detection and security, a wide range
of automated assistants, and applications for mining
unstructured data.
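A minimal instance of text analytics is lexicon-based sentiment scoring. The word lists below are tiny illustrative samples; real NLP systems learn sentiment from data with statistical and machine learning methods rather than fixed lists.

```python
# Minimal lexicon-based sentiment scoring.
# The word lists are invented samples for illustration only.

POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "angry"}

def sentiment(text: str) -> str:
    """Score text by counting positive vs. negative words."""
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support was great and I love the product"))  # positive
```

Applications such as fraud detection and assistant software replace the word counts with learned models, but the classify-unstructured-text pipeline is the same.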
Applications for AI
Artificial Intelligence in Healthcare: Companies are
applying machine learning to make better and faster
diagnoses than humans. One of the best-known technologies
is IBM’s Watson. It understands natural language and can
respond to questions asked of it. The system mines patient
data and other available data sources to form a hypothesis,
which it then presents with a confidence scoring schema.
AI is a field that aims to emulate human intelligence in
computer technology, and it could assist both doctors and
patients in the following ways:
- By providing a laboratory for the examination,
representation, and cataloguing of medical information
- By devising novel tools to support decision making and
research
- By integrating activities in the medical, software, and
cognitive sciences
- By offering a content-rich discipline for future scientific
medical communities.
Artificial Intelligence in business: Robotic process
automation is being applied to highly repetitive tasks normally
performed by humans. Machine learning algorithms are being
integrated into analytics and CRM (Customer relationship
management) platforms to uncover information on how to
better serve customers.
Chatbots have already been incorporated into websites and
e-commerce companies to provide immediate service to customers.
Automation of job positions has also become a talking point
among academics and IT consultancies.
AI in education: It automates grading, giving educators
more time. It can also assess students and adapt to their
needs, helping them work at their own pace.
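Automated grading can be sketched as scoring a submission against an answer key and flagging the topics a student should revisit. The questions, answers, and topic tags below are invented for illustration.

```python
# Sketch of automated grading with per-topic feedback.
# The answer key and topic labels are illustrative assumptions.

ANSWER_KEY = {
    "q1": ("b", "algebra"),
    "q2": ("d", "algebra"),
    "q3": ("a", "geometry"),
}

def grade(submission):
    """Return (fraction correct, sorted topics needing review)."""
    correct, weak_topics = 0, set()
    for qid, (answer, topic) in ANSWER_KEY.items():
        if submission.get(qid) == answer:
            correct += 1
        else:
            weak_topics.add(topic)
    return correct / len(ANSWER_KEY), sorted(weak_topics)

score, review = grade({"q1": "b", "q2": "c", "q3": "a"})
print(score, review)
```

The per-topic flags are what lets adaptive systems route each student to the material they actually need.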
AI in Autonomous vehicles: Just like humans, self-driving
cars need sensors to understand the world around them and
a brain to collect, process, and choose specific actions based
on the information gathered. Autonomous vehicles are
equipped with advanced tools to gather information, including
long-range radar, cameras, and LIDAR. Each of these
technologies is used in a different capacity and each collects
different information.
This information is useless unless it is processed and some
form of action is taken based on what has been gathered.
This is where artificial intelligence comes into play; it can be
compared to the human brain. AI has several applications for
these vehicles, and among them the more immediate ones
are as follows:
- Directing the car to a gas station or recharging station when
it is running low on fuel.
- Adjusting the trip's directions based on known traffic
conditions to find the quickest route.
- Incorporating speech recognition for advanced
communication with passengers.
- Natural language interfaces and virtual assistance
technologies.
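The gather-process-act loop above can be sketched as a toy decision layer that fuses simplified sensor readings and picks an action. Real driving stacks fuse radar, camera, and LIDAR data with far more sophistication; the field names and thresholds here are assumptions for illustration.

```python
# Toy decision layer for a self-driving car.
# Thresholds and reading names are illustrative assumptions.

def choose_action(readings):
    """readings: dict with obstacle distance (m) and fuel level (0-1)."""
    if readings["obstacle_ahead_m"] < 10:
        return "brake"                    # safety first
    if readings["fuel_level"] < 0.1:
        return "route-to-fuel-station"    # low-fuel rerouting
    if readings["obstacle_ahead_m"] < 30:
        return "slow-down"
    return "maintain-speed"

print(choose_action({"obstacle_ahead_m": 8, "fuel_level": 0.5}))   # brake
print(choose_action({"obstacle_ahead_m": 50, "fuel_level": 0.05})) # route-to-fuel-station
```

Note the rule ordering encodes priorities: collision avoidance outranks refueling, which outranks speed keeping.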
AI for robotics will allow us to address the challenges of
taking care of an aging population and allow much longer
independence. It will drastically reduce, perhaps even
eliminate, traffic accidents and deaths, and enable disaster
response in dangerous situations, for example the nuclear
meltdown at the Fukushima power plant.
Cyborg Technology: One of the main limitations of being
human is simply our own bodies and brains. Researcher
Shimon Whiteson thinks that in the future, we will be able to
augment ourselves with computers and enhance many of our
own natural abilities.
Though many of these possible cyborg enhancements would
be added for convenience, others may serve a more
practical purpose. Yoky Matsuoka of Nest believes that AI will
become useful for people with amputated limbs, as the brain
will be able to communicate with a robotic limb to give the
patient more control. This kind of cyborg technology would
significantly reduce the limitations that amputees deal with
daily.
Seven Most Pressing Ethical Issues in AI
1. Job Loss and Wealth Inequality
One of the primary concerns people have with AI is the future
loss of jobs. Should we strive to fully develop and integrate AI
into society if it means many people will lose their jobs, and
quite possibly their livelihood?
According to the new McKinsey Global Institute report, by the
year 2030, about 800 million people will lose their jobs to AI-
driven robots. Some would argue that if their jobs are taken
by robots, perhaps they are too menial for humans and that
AI can be responsible for creating better jobs that take
advantage of unique human ability involving higher cognitive
functions, analysis and synthesis.
Another point is that AI may create more jobs — after all,
people will be tasked with creating these robots to begin with
and then manage them in the future.
One issue related to job loss is wealth inequality. Consider
that most modern economic systems require workers to
produce a product or service with their compensation based
on an hourly wage. The company pays wages, taxes and
other expenses, with left-over profits often being injected
back into production, training and/or creating more business
to further increase profits. In this scenario, the economy
continues to grow.
But what happens if we introduce AI into the economic flow?
Robots do not get paid hourly nor do they pay taxes. They
can contribute at a level of 100% with low ongoing cost to
keep them operable and useful. This opens the door for
CEOs and stakeholders to keep more company profits
generated by their AI workforce, leading to greater wealth
inequality.
Perhaps this could lead to a case of “the rich” — those
individuals and companies who have the means to pay for
AIs — getting richer.
2. AI is Imperfect — What if it Makes a Mistake?
AIs are not immune to making mistakes, and machine
learning takes time to become useful. If trained well, using
good data, AIs can perform well. However, if we feed AIs
bad data or make errors with internal programming, the AIs
can be harmful. Take Microsoft's AI chatbot, Tay, which was
released on Twitter in 2016.
In less than one day, due to the information it was receiving
and learning from other Twitter users, the robot learned
to spew racist slurs and Nazi propaganda. Microsoft shut the
chatbot down immediately since allowing it to live would have
obviously damaged the company’s reputation.
Yes, AIs make mistakes. But do they make greater or fewer
mistakes than humans? How many lives have humans taken
with mistaken decisions? Is it better or worse when an AI
makes the same mistake?
3. Should AI Systems Be Allowed to Kill?
In this TEDx talk, Jay Tuck describes AIs as software that
writes its own updates and renews itself. This means that, as
programmed, the machine is not created to do what we want
it to do; it does what it learns to do. Jay goes on to describe
an incident with a robot called Tallon. Its computerized gun
jammed and opened fire uncontrollably after an explosion,
killing 9 people and wounding 14 more.
Predator drones, such as the General Atomics MQ-1
Predator, have been in existence for over a decade. These
remotely piloted aircraft can fire missiles, although US law
requires that humans make the actual kill decisions. But with
drones playing more of a role in aerial military defense, we
need to further examine their role and how they are used.
Is it better to use AIs to kill than to put humans in the line of
fire? What if we only use robots for deterrence rather than
actual violence?
The Campaign to Stop Killer Robots is a non-profit organized
to ban fully-autonomous weapons that can decide who lives
and dies without human intervention. “Fully autonomous
weapons would lack the human judgment necessary to
evaluate the proportionality of an attack, distinguish civilian
from combatant, and abide by other core principles of the
laws of war. History shows their use would not be limited to
certain circumstances.”
4. Rogue AIs
If there is a chance that intelligent machines can make
mistakes, then it is within the realm of possibility that an AI
can go rogue, or create unintended consequences from its
actions in pursuing seemingly harmless goals.
One scenario of an AI going rogue is what we’ve already
seen in movies like The Terminator and TV shows where a
super-intelligent centralized AI computer becomes self-aware
and decides it doesn’t want human control anymore.
Right now experts say that current AI technology is not yet
capable of achieving this extremely dangerous feat of self-
awareness; however, future AI supercomputers might be.
The other scenario is where an AI is tasked, for instance, to
study the genetic structure of a virus in order to create a
vaccine to neutralize it. After making lengthy calculations, the
AI formulates a solution that weaponizes the virus instead of
making a vaccine out of it. It is like opening a modern-day
Pandora's box, and again ethics comes into play: legitimate
concerns need to be addressed in order to prevent a scenario
like this.
5. Singularity and Keeping Control Over AIs
Will AIs evolve to surpass human beings? What if they
become smarter than humans and then try to control us? Will
computers make humans obsolete? The point at which
technology growth surpasses human intelligence is referred
to as "technological singularity."
Some believe this will signal the end of the human era and
that it could occur as early as 2030 based on the pace of
technological innovation. AIs leading to human extinction —
it’s easy to understand why the advancement of AI is scary to
many people.
6. How Should We Treat AIs?
Should robots be granted human rights or citizenship? If we
evolve robots to the point that they are capable of "feeling,"
does that entitle them to rights similar to humans or animals?
If robots are granted rights, then how do we rank their social
status? This is one of the primary issues in “roboethics,” a
topic that was first raised by Isaac Asimov in 1942.
In 2017, the Hanson Robotics humanoid robot, Sophia, was
granted citizenship in Saudi Arabia. While some consider this
to be more of a PR stunt than actual legal recognition, it does
set an example of the type of rights AIs may be granted in the
future.
7. AI Bias
AI has become increasingly inherent in facial and voice
recognition systems, some of which have real business
implications and directly impact people. These systems are
vulnerable to biases and errors introduced by their human
makers. The data used to train these AI systems can itself
be biased.
For instance, facial recognition algorithms made by Microsoft,
IBM and Megvii all had biases when detecting people’s
gender.
These AI systems were able to detect the gender of white
men more accurately than the gender of darker-skinned men.
Similarly, Amazon.com's termination of its AI hiring and
recruitment tool is another example showing that AI cannot
be guaranteed to be fair; the algorithm preferred male
candidates over female ones. This was because Amazon's
system was trained with data collected over a 10-year period
that came mostly from male candidates.
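The kind of audit that surfaced these gaps can be sketched as measuring accuracy separately per demographic group. The predictions and labels below are synthetic illustrations, not data from the studies mentioned above.

```python
# Group-wise accuracy audit for a classifier's predictions.
# The records are synthetic, invented for illustration.

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    stats = {}
    for group, pred, actual in records:
        hit, total = stats.get(group, (0, 0))
        stats[group] = (hit + (pred == actual), total + 1)
    return {g: hit / total for g, (hit, total) in stats.items()}

data = [
    ("group_a", "male", "male"), ("group_a", "female", "female"),
    ("group_b", "male", "female"), ("group_b", "male", "male"),
]
print(accuracy_by_group(data))  # {'group_a': 1.0, 'group_b': 0.5}
```

A large gap between groups, as in this toy output, is exactly the disparity the facial-recognition studies reported.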
https://www.valluriorg.com/blog/artificial-intelligence-
and-its-applications/
https://builtin.com/artificial-intelligence
https://kambria.io/blog/the-7-most-pressing-ethical-
issues-in-artificial-intelligence/
2019 sees the VR landscape broadening, with Google, HP,
Lenovo, and others looking to grab a piece of the still-
burgeoning market. Here's a look at 2019's major VR
hardware manufacturers and the devices they are
manufacturing:
Oculus Rift, Oculus Rift S, Oculus Go, Oculus Quest
Originally funded as a Kickstarter project in 2012, and
engineered with the help of John Carmack (co-founder of id
Software, of Doom and Quake fame), Oculus became the
early leader in Virtual Reality hardware for video games.
Facebook bought Oculus in 2014, and brought the
company’s high-end VR HMD to market for consumers. More
recently, Oculus has seen success with the lower-price,
lower-powered Oculus Go, and 2019 will see the release of
multiple new iterations on the hardware, including the
tethered Rift S and the stand-alone Oculus Quest.
HTC Vive, HTC Vive Pro Eye, HTC Cosmos, HTC Focus,
HTC Plus
The HTC Vive has been one of the best VR HMDs on the
market since its consumer release back in 2016.
Manufactured by HTC, the Vive was the first VR HMD to
support SteamVR. The Vive has been locked in fierce
competition with the Oculus Rift since release, as both
headsets aimed at the same top end of the VR enthusiast
market.
The Vive has proven itself a durable workhorse for enterprise
solutions, while also delivering one of the best consumer VR
experiences available. The Vive was first released back in
2016, and has gone through several iterations, with the
addition of a wireless module. The Vive Pro came out in 2018
and the Vive Pro Eye and the HTC Vive Cosmos are both
slated for release in the second half of 2019.
HMD + Smartphone Virtual Reality
There's a second class of Virtual Reality HMD that is really
just a shell with special lenses that pairs with a smartphone
to deliver a VR experience. These devices can sell for almost
nothing (and are often given away free), and deliver a scaled-
down VR experience that still approaches the immersive
experiences generated by much more expensive hardware.
Examples: Oculus Rift, HTC Vive, Samsung VR.
Applications of Virtual Reality
How Virtual Reality is being used today
Unsurprisingly, the video games industry is one of the largest
proponents of Virtual Reality. Support for the Oculus Rift
headsets has already been jerry-rigged into games like
Skyrim and Grand Theft Auto, but newer games like Elite:
Dangerous come with headset support built right in.
Many tried-and-true user interface metaphors in gaming have
to be adjusted for VR (after all, who wants to have to pick
items out of a menu that takes up your entire field of vision?),
but the industry has been quick to adapt as the hardware for
true Virtual Reality gaming has become more widely
available.
Virtual Reality and data visualization
Scientific and engineering data visualization has benefited for
years from Virtual Reality, although recent innovation in
display technology has generated interest in everything from
molecular visualization to architecture to weather models.
VR for aviation, medicine, and the military
In aviation, medicine, and the military, Virtual Reality training
is an attractive alternative to live training with expensive
equipment, dangerous situations, or sensitive technology.
Commercial pilots can use realistic cockpits with VR
technology in holistic training programs that incorporate
virtual flight and live instruction.
Surgeons can train with virtual tools and patients, and
transfer their virtual skills into the operating room, and studies
have already begun to show that such training leads to faster
doctors who make fewer mistakes. Police and soldiers are
able to conduct virtual raids that avoid putting lives at risk.
Virtual Reality and the treatment of mental illness
Speaking of medicine, the treatment of mental illness,
including post-traumatic stress disorder, stands to benefit
from the application of Virtual Reality technology to ongoing
therapy programs.
Whether it’s allowing veterans to confront challenges in a
controlled environment, or overcoming phobias in
combination with behavioral therapy, VR has potential beyond
gaming, industrial, and marketing applications to help people
heal from, reconcile with, and understand real-world
experiences.
https://www.marxentlabs.com/what-is-virtual-reality/
https://www.inc.com/james-paine/10-real-use-cases-for-
augmented-reality.html
https://thinkmobiles.com/blog/what-is-augmented-reality/
https://www.newgenapps.com/blog/how-vr-works-
technology-behind-virtual-reality/
What is Virtual Reality?
Virtual Reality (VR) is the use of computer technology to
create a simulated environment. Unlike traditional user
interfaces, VR places the user inside an experience. Instead
of viewing a screen in front of them, users are immersed and
able to interact with 3D worlds.
By simulating as many senses as possible, such as vision,
hearing, touch, even smell, the computer is transformed into
a gatekeeper to this artificial world. The only limits to near-
real VR experiences are the availability of content and
cheap computing power.
What's the Difference Between Virtual Reality and Augmented
Reality?
Virtual Reality and Augmented Reality are two sides of the
same coin. You could think of Augmented Reality as VR with
one foot in the real world: Augmented Reality simulates
artificial objects in the real environment; Virtual Reality
creates an artificial environment to inhabit.
In Augmented Reality, the computer uses sensors and
algorithms to determine the position and orientation of a
camera. AR technology then renders the 3D graphics as they
would appear from the viewpoint of the camera,
superimposing the computer-generated images over a user’s
view of the real world.
In Virtual Reality, the computer uses similar sensors and
math. However, rather than locating a real camera within a
physical environment, the position of the user's eyes is
located within the simulated environment. If the user's head
turns, the graphics react accordingly. Rather than
compositing virtual objects and a real scene, VR technology
creates a convincing, interactive world for the user.
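The "similar sensors and math" can be sketched by projecting a virtual 3D point into 2D screen space from the tracked eye pose. A full VR renderer uses 4x4 matrices per eye; this single-point pinhole projection with one yaw angle is a deliberate simplification.

```python
# Project a virtual 3D point to screen coordinates given the
# viewer's eye position and head yaw. A simplified sketch, not a
# real VR rendering pipeline.
import math

def project(point, eye, yaw, focal=1.0):
    """Rotate the world around the viewer, then pinhole-project."""
    dx, dy, dz = (p - e for p, e in zip(point, eye))
    # rotate by -yaw about the vertical axis (the head turning)
    x = dx * math.cos(-yaw) - dz * math.sin(-yaw)
    z = dx * math.sin(-yaw) + dz * math.cos(-yaw)
    return (focal * x / z, focal * dy / z)  # screen coordinates

# A point 5 m straight ahead projects to the screen centre...
print(project((0, 0, 5), (0, 0, 0), yaw=0.0))  # (0.0, 0.0)
# ...and shifts on screen when the head turns slightly
print(project((0, 0, 5), (0, 0, 0), yaw=0.1))
```

This is why the graphics "react accordingly" when the head turns: the same world point lands at a different screen position for each tracked pose.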
Virtual Reality technology
Virtual Reality's most immediately recognizable component is
the head-mounted display (HMD). Human beings are visual
creatures, and display technology is often the single biggest
difference between immersive Virtual Reality systems and
traditional user interfaces.
For instance, CAVE automatic virtual environments actively
display virtual content onto room-sized screens. While they
are fun for people in universities and big labs, consumer and
industrial wearables are the wild west.
With a multiplicity of emerging hardware and software
options, the future of wearables is unfolding but yet unknown.
Concepts such as the HTC Vive Pro Eye, Oculus Quest and
PlayStation VR are leading the way, but there are also
players like Google, Apple, Samsung, Lenovo and others
who may surprise the industry with new levels of immersion
and usability.
Whoever comes out ahead, the simplicity of buying a
helmet-sized device that can work in a living-room, office, or
factory floor has made HMDs center stage when it comes to
Virtual Reality technologies.
Virtual Reality and the importance of audio
Convincing Virtual Reality applications require more than just
graphics. Both hearing and vision are central to a person's
sense of space. In fact, human beings react more quickly to
audio cues than to visual cues.
In order to create truly immersive Virtual Reality experiences,
accurate environmental sounds and spatial characteristics
are a must. These lend a powerful sense of presence to a
virtual world. To experience the binaural audio details that go
into a Virtual Reality experience, put on some headphones
and tinker with this audio infographic published by The Verge.
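One binaural cue such spatial audio engines reproduce is the interaural time difference (ITD): a sound off to one side reaches the nearer ear slightly earlier. A common approximation is Woodworth's spherical-head model; the head radius and speed of sound below are standard textbook values, used here as illustrative constants.

```python
# Interaural time difference via Woodworth's spherical-head model:
# ITD = (r / c) * (theta + sin(theta)), azimuth theta in radians.
import math

HEAD_RADIUS = 0.0875     # metres, average adult head
SPEED_OF_SOUND = 343.0   # metres per second in air

def itd_seconds(azimuth_deg):
    """Arrival-time difference between the two ears."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(itd_seconds(0))    # 0.0 - a source straight ahead
print(itd_seconds(90))   # roughly 0.00066 s for a source at one side
```

Delays this small (well under a millisecond) are what binaural renderers apply per ear to create the sense of presence described above.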
While audio-visual information is most easily replicated in
Virtual Reality, active research and development efforts are
still being conducted into the other senses. Tactile inputs
such as omnidirectional treadmills allow users to feel as
though they’re actually walking through a simulation, rather
than sitting in a chair or on a couch.
Haptic technologies, also known as kinesthetic or touch
feedback tech, have progressed from simple spinning-weight
“rumble” motors to futuristic ultrasound technology. It is now
possible to hear and feel true-to-life sensations along with
visual VR experiences.
Major players in Virtual Reality: Oculus, HTC, Sony
As of the end of 2018, the three best-selling Virtual Reality
headsets were Sony's PlayStation VR (PSVR), Facebook's
Oculus Rift, and the HTC Vive. This was not a surprise,
seeing as the same three HMDs had also been best sellers in
2017.