
Artificial Intelligence

Ms. Sania Yousuf


What is Intelligence?

• Ability to think
• Ability to learn
• Decision making
• Ability to acquire knowledge
• Ability to learn from past experience
Definition of AI?
• “Intelligence: The ability to learn and solve problems” Webster’s
Dictionary
• “Artificial intelligence (AI) is the intelligence exhibited by machines or
software” Wikipedia
• “The science and engineering of making intelligent machines”
McCarthy
• “The study and design of intelligent agents, where an intelligent agent
is a system that perceives its environment and takes actions that
maximize its chances of success.” Russell and Norvig AI book
What is AI?
Rational thinking?
• Rational thinking is the ability to consider the relevant variables of a
situation and to access, organize, and analyze relevant information
(e.g., facts, opinions, judgments, and data) to arrive at a sound
conclusion.

• Example: Choosing a university for further studies?


Collect all the relevant data, analyze it, and then make a decision.
Are humans rational?
1. Jack is looking at Anne, but Anne is looking at George. Jack is
married, but George is not. Is a married person looking at an
unmarried person?

• A) Yes
• B) No
• C) Cannot be determined

More than 80 percent of people choose C, but the correct answer is A: if Anne is married, then she (married) is looking at George (unmarried); if Anne is unmarried, then Jack (married) is looking at her. Either way, a married person is looking at an unmarried person.
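The case analysis above can be checked by brute force. The sketch below (the helper name `married_looks_at_unmarried` is mine, not from the slide) enumerates both possibilities for Anne:

```python
def married_looks_at_unmarried(anne_married):
    # Jack is married, George is not; Anne's status is the unknown.
    married = {"Jack": True, "Anne": anne_married, "George": False}
    # Who is looking at whom, per the puzzle.
    looking = [("Jack", "Anne"), ("Anne", "George")]
    # Is any married person looking at an unmarried person?
    return any(married[a] and not married[b] for a, b in looking)

# Whether Anne is married or not, the answer is yes.
print(married_looks_at_unmarried(True), married_looks_at_unmarried(False))  # True True
```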
Are humans rational?
2. A bat and a ball cost $1.10 in total. The bat costs $1 more than the
ball. How much does the ball cost?

Many people give the first response that comes to mind: 10 cents. But if they thought a little harder, they would realize that this cannot be right: the bat would then have to cost $1.10, for a total of $1.20.
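The correct answer drops out of a one-line equation. Writing x for the ball's price, the slide's two conditions give x + (x + 1.00) = 1.10:

```python
# x + (x + 1.00) = 1.10  =>  2x = 0.10  =>  x = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}, total = ${ball + bat:.2f}")
```

So the ball costs 5 cents, not 10.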
Are humans rational?
3. Imagine that XYZ viral syndrome is a serious condition that affects one person in
1,000. Imagine also that the test to diagnose the disease always indicates correctly that
a person who has the XYZ virus actually has it. Finally, suppose that this test occasionally
misidentifies a healthy individual as having XYZ. The test has a false-positive rate of 5
percent, meaning that the test wrongly indicates that the XYZ virus is present in 5
percent of the cases where the person does not have the virus.
Next we choose a person at random and administer the test, and the person tests positive
for XYZ syndrome. Assuming we know nothing else about that individual's medical history,
what is the probability (expressed as a percentage ranging from zero to 100) that the
individual really has XYZ?
The most common answer is 95 percent. But that is wrong. People tend to ignore the first part of
the setup, which states that only one person in 1,000 will actually have XYZ syndrome. If the
other 999 (who do not have the disease) are tested, the 5 percent false-positive rate means that
approximately 50 of them (0.05 times 999) will be told they have XYZ. Thus, for every 51
patients who test positive for XYZ, only one will actually have it. The answer to the question,
then, is that the probability a person who tests positive for XYZ syndrome actually has it is one in
51, or approximately 2 percent.
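The same answer follows directly from Bayes' rule using the numbers in the setup; a minimal sketch:

```python
prevalence = 1 / 1000          # one person in 1,000 has XYZ
sensitivity = 1.0              # the test always detects a true case
false_positive_rate = 0.05     # 5% of healthy people test positive

# Total probability of a positive test:
# P(pos) = P(pos | sick) P(sick) + P(pos | healthy) P(healthy)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' rule: P(sick | pos) = P(pos | sick) P(sick) / P(pos)
p_sick_given_positive = sensitivity * prevalence / p_positive
print(f"P(XYZ | positive) = {p_sick_given_positive:.1%}")  # about 2%
```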
Acting humanly: The Turing Test approach
• The Turing Test, proposed by Alan Turing (1950), was designed to
provide a satisfactory operational definition of intelligence. Turing
defined intelligent behavior as the ability to achieve human-level
performance in all cognitive tasks, sufficient to fool an interrogator.
• The computer would need to possess the following capabilities:
 Natural language processing to enable it to communicate successfully in English (or some other
human language);
 Knowledge representation to store information provided before or during the interrogation;
 Automated reasoning to use the stored information to answer questions and to draw new
conclusions;
 Machine learning to adapt to new circumstances and to detect and extrapolate patterns
 Computer vision to perceive objects, and
 Robotics to move them about.
Thinking humanly: The cognitive modelling
approach
• If we are going to say that a given program thinks like a human, we must
have some way of determining how humans think.
• We need to get inside the actual workings of human minds. There are
three ways to do this:
• through introspection: trying to catch our own thoughts as they go by;
• through psychological experiments: observing a person in action;
• through brain imaging: observing a brain in action.
• If the program's input/output and timing behavior matches human
behavior, that is evidence that some of the program's mechanisms may
also be operating in humans.
Thinking rationally: The laws of thought
approach
• Reasoning and logic
• For example: "Haris is a man and all men are mortal; therefore Haris is mortal." These laws of thought were supposed to govern the operation of the mind, and initiated the field of logic.
• The development of formal logic provided a precise notation for statements
about all kinds of things in the world and the relations between them.
• There are two main obstacles to this approach. First, it is not easy to take
informal knowledge and state it in the formal terms required by logical
notation. Second, there is a big difference between being able to solve a
problem "in principle" and doing so in practice.
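The syllogism above can be stated as a one-step forward-chaining inference. The representation below is my own illustration, not a notation from the slide:

```python
# Known facts, as (predicate, subject) pairs.
facts = {("man", "Haris")}

# Rule: for all x, man(x) -> mortal(x).
# Apply it once: every known man is concluded to be mortal.
derived = {("mortal", x) for (pred, x) in facts if pred == "man"}
facts |= derived

print(("mortal", "Haris") in facts)  # True
```

Even this toy example hints at the first obstacle: the hard part is getting informal knowledge into the formal facts-and-rules shape in the first place.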
Acting rationally: The rational agent
approach
• An agent is just something that perceives and acts. (This may be an
unusual use of the word, but you will get used to it.)
• In this approach, AI is viewed as the study and construction of rational
agents.
• In the "laws of thought" approach to AI, the whole emphasis was on
correct inferences. Making correct inferences is sometimes part of
being a rational agent, because one way to act rationally is to reason
logically to the conclusion that a given action will achieve one's goals,
and then to act on that conclusion.
THE FOUNDATIONS OF ARTIFICIAL
INTELLIGENCE

[Slide figure: the foundational disciplines of AI; only "Neuroscience" is legible.]
History of AI
Early 20th century
• In the early 20th century, the concepts that would ultimately result in
AI started out in the minds of science fiction writers and scientists.

• In 1927, the sci-fi film Metropolis was released and featured an artificially intelligent robot, and in 1950 a visionary collection of short stories by Isaac Asimov was published, called I, Robot.
The gestation of artificial intelligence (1943-
1956)
In 1943, a collaboration between Warren McCulloch and Walter Pitts drew on three sources:

• knowledge of the basic physiology and function of neurons in the brain;

• the formal analysis of propositional logic; and

• Turing's theory of computation.

From these they proposed a model of artificial neurons in which each neuron is characterized as being "on" or "off," with a switch to "on" occurring in response to stimulation by a sufficient number of neighboring neurons.
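The on/off unit described above can be sketched in a few lines; the function name `mp_neuron` is my own label for it:

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts-style unit: fires (1) when enough inputs are on."""
    return 1 if sum(inputs) >= threshold else 0

# With threshold 2 over two inputs the unit behaves like AND;
# with threshold 1 it behaves like OR.
print(mp_neuron([1, 1], 2), mp_neuron([1, 0], 2))  # 1 0
print(mp_neuron([1, 0], 1), mp_neuron([0, 0], 1))  # 1 0
```

That such simple threshold units can realize the basic logical connectives is what linked this model to propositional logic.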
The gestation of artificial intelligence (1943-
1956)
• In 1950, Alan Turing published "Computing Machinery and Intelligence," introducing the test that would essentially help us understand when machines reached intelligence.

• Claude Shannon (1950) and Alan Turing (1953) were writing chess programs for von Neumann-style conventional computers.

• Marvin Minsky and Dean Edmonds built the first neural network computer, the SNARC, in 1951.


The Dartmouth Conference — 1956
• The term artificial intelligence was first used at a summer workshop
organized by professor John McCarthy at Dartmouth College. The
event brought together experts in the fields of machine learning and
neural networks to generate new ideas and debate how to tackle AI.
In addition to neural networks, computer vision, natural language
processing and more were on the agenda at that summer event.
Historical year 1958
John McCarthy made three crucial contributions in one historic year: 1958.
• In MIT AI Lab Memo No. 1, McCarthy defined the high-level language Lisp, which was to
become the dominant AI programming language. Lisp is the second-oldest language in
current use.

• After getting an experimental time-sharing system up at MIT, McCarthy eventually attracted the interest of a group of MIT grads who formed Digital Equipment Corporation, which was to become the world's second-largest computer manufacturer.

• McCarthy published a paper entitled Programs with Common Sense, in which he described
the Advice Taker, a hypothetical program that can be seen as the first complete AI system.
(It is remarkable how much of the 1958 paper remains relevant to this day.)
The Chatbot ELIZA— 1966
• Before Alexa and Siri were a
figment of their developers’
imaginations, there was ELIZA—
the world’s first chatbot. As an
early implementation of natural
language processing, ELIZA was
created at MIT by Joseph
Weizenbaum. ELIZA couldn’t
speak, but she used text to
communicate.
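ELIZA worked by matching the user's text against a script of patterns and echoing the matched fragment back as a question. The rules below are a tiny illustrative sketch, not Weizenbaum's original script:

```python
import re

# (pattern, response template) pairs, tried in order; {0} is the captured text.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r".*", "Please tell me more."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = re.fullmatch(pattern, sentence)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about my exams"))
```

Because the last catch-all rule always matches, the "therapist" never runs out of replies, which is much of why ELIZA felt so convincing despite understanding nothing.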
Knowledge-based systems: The key to
power? (1969-1979)
• The picture of problem solving that had arisen during the first decade of AI research was of
a general-purpose search mechanism trying to string together elementary reasoning steps
to find complete solutions. Such approaches have been called weak methods, because they
use weak information about the domain

• The DENDRAL program (Buchanan et al., 1969) was an early example of the knowledge-intensive approach. It
was developed at Stanford, where Ed Feigenbaum (a former student of Herbert Simon),
Bruce Buchanan (a philosopher turned computer scientist), and Joshua Lederberg (a Nobel
laureate geneticist) teamed up to solve the problem of inferring molecular structure from
the information provided by a mass spectrometer. The naive version of the program
generated all possible structures consistent with the formula, and then predicted what
mass spectrum would be observed for each, comparing this with the actual spectrum. As
one might expect, this rapidly became intractable for decent-sized molecules.
XCON and the rise of useful AI — 1980
• Digital Equipment Corporation’s XCON expert system was deployed
in 1980 and by 1986 was credited with generating annual savings for the
company of $40 million.

• This is significant because until this point AI systems were generally regarded
as impressive technological feats with limited real-world usefulness.

• Now it was clear that the rollout of smart machines into business had begun
– by 1985 corporations were spending $1 billion per year on AI systems.
Principles of Probability— 1988
• IBM researchers publish A Statistical Approach to Language
Translation, introducing principles of probability into the until-then
rule-driven field of machine translation. It tackled the challenge of
automated translation between human languages – French and
English.
Internet – 1991
• When the World Wide Web launched in 1991, as CERN researcher Tim Berners-Lee published the hypertext transfer protocol (HTTP) and put the world’s first website online, it became possible for online connections to be made and data to be shared no matter who or where you are. Since data is the fuel for artificial intelligence, there’s little doubt that AI has progressed to where it is today thanks to Berners-Lee’s work.
Chess and AI– 1997
• Another milestone for AI is no doubt when
world chess champion Garry Kasparov was
defeated in a match of chess by IBM’s Deep
Blue supercomputer.
• This was a win for AI that allowed the
general population and not just those close
to the AI industry to understand the rapid
development and evolution of computers. In
this case, Deep Blue won by using its high-
speed capabilities (able to evaluate 200
million positions a second) to calculate every
possible option rather than analyzing game
play.
5 Autonomous Vehicles Complete the
DARPA Grand Challenge– 2005
• When the DARPA Grand Challenge
first ran in 2004, there were no
autonomous vehicles that
completed the 100-kilometer off-
road course through the Mojave
desert. In 2005, five vehicles made
it! This race helped spur the
development of autonomous
driving technology.
AI Wins Jeopardy! – 2011
• In 2011, IBM’s Watson challenged
human Jeopardy! players and ended
up winning the $1 million prize. This
was significant since prior challenges
against humans such as the Kasparov
chess match used the machine’s
stellar computing power. In
Jeopardy!, Watson had to compete in
a language-based, creative-thinking
game.
Deep Learning on Display – 2012
• AI learned to recognize pictures of cats in 2012. In this collaboration
between Stanford and Google, documented in the paper
 Building High-Level Features Using Large Scale Unsupervised Learning by
Jeff Dean and Andrew Ng, unsupervised learning of AI was accomplished.
Prior to this development, data needed to be manually labeled before it
could be used to train AI. With unsupervised learning, demonstrated with
the machines identifying cats, an artificial network could be put on the task.
In this case, the machines processed 10 million unlabeled pictures from
YouTube recordings to learn what images were cats. This ability to learn
from unlabeled data accelerated the pace of AI development and opened up
tremendous possibilities for what machines could help us do in the future.
Insightful Vision – 2015
• In 2015, the annual ImageNet
challenge highlighted that
 machines could outperform humans
 when recognizing and describing a
library of 1,000 images. Image
recognition was a major challenge for
AI. From the beginning of the contest
in 2010 to 2015, the algorithm’s
accuracy increased to 97.3% from
71.8%.
Gaming Capabilities Grow– 2016
• AlphaGo, created by DeepMind, now a Google subsidiary, defeated
the world’s Go champion over five
matches in 2016. The number of
variations of the game makes brute
force impractical (there are more than
100,000 possible opening moves in Go
compared to 400 in chess). In order to
win, AlphaGo used neural networks to
study and then learn as it played the
game.
On the Road with Autonomous Vehicles–
2018
• 2018 was a significant milestone for
autonomous vehicles because they hit
the road thanks to Waymo’s
 self-driving taxi service in Phoenix,
Arizona. And it wasn’t just for testing.
There were 400 individuals who paid to
be driven by the driverless cars to work
and school within a 100-square-mile
area. There were human co-pilots who
could step in if necessary.
Year of 2019
• Robot Hand’s Dexterity: OpenAI successfully trained a robot hand called Dactyl that adapted to the real-world environment in solving the Rubik’s cube.

• Deepfake – Bringing Pictures to Life: Samsung, in May, created a system that can transform facial images into video sequences. They used a generative adversarial network (GAN) to create deepfake videos by taking just one picture as input.

• AI-Generated Synthetic Text: OpenAI, in February, released a small model called Generative Pre-Training (GPT) to generate synthetic text automatically. The firm eventually released the full version of the model, GPT-2, in November. Given a few opening sentences, the model picked up the context and generated text on its own.

• Explainable AI: AI is making great strides, but understanding the methodologies inside the black box is crucial for building trust. Therefore, different companies released services to allow businesses to underline the prime factors that lead to outcomes from their machine learning models.
AI in Covid-19 pandemic
• In service of pandemic prediction
AI can be used as an early outbreak warning system. BlueDot, an AI-driven algorithm, not only successfully detected the outbreak of the Zika virus in Florida but also spotted COVID-19 nine days before the WHO released its statement alerting people to the emergence of a novel coronavirus.

Researchers from the Huazhong University of Science and Technology (HUST) and Tongji Hospital in Wuhan, Hubei have developed an AI diagnostic tool (an XGBoost machine learning-based prognostic model) that can quickly analyse blood samples to predict survival rates of COVID-19 infected patients, and it turns out to be 90% accurate.
AI in Covid-19 pandemic
• To track potentially infected persons

There are reports that facial recognition and geolocation technology is being used in China to track individuals who may have come in contact with infected persons. These tools can also be used to track compliance with self-isolation and quarantine orders, though whether or not democratic governments choose to deploy such tools despite privacy concerns remains to be seen.

Several AI-based computer vision camera systems are deployed in China and across the world to scan crowds for COVID-19 symptoms and monitor people during lockdown.
AI in Covid-19 pandemic
• To diagnose patients

FluSense, a contactless syndromic surveillance platform, is used to forecast seasonal flu and
other viral respiratory outbreaks, such as the COVID‐19 pandemic or SARS.

In Wuhan, China, an AI diagnostic tool is used to distinguish COVID-19 from other types of pneumonia within seconds by analyzing patients' chest CT scan images. The authors claimed that their new model holds great potential to relieve the pressure on frontline radiologists, improve early diagnosis, isolation and treatment, and thus contribute to the control of the epidemic.

COVID-Net, a deep learning model, is designed to detect COVID-19 positive cases from chest X-rays and accelerate treatment for those who need it the most.
AI in Covid-19 pandemic
• Diagnosing the virus structure

Google's DeepMind is helping scientists study various features of SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) and has predicted the protein structure of the virus.
AI in Covid-19 pandemic
• For service of patients in quarantine
Interestingly, AI-powered autonomous service robots and the humanoid robot “Cloud Ginger (aka XR-1)” are used in hospitals in Wuhan, China. The former assist healthcare workers by delivering food and medicine to patients, and the latter entertains patients during quarantine.
AI in Covid-19 pandemic
• Finding a cure

AI companies have been aiding the race to find a treatment for COVID-19; not only can intelligent algorithms help determine the necessary attributes of the drug, they can also uncover whether drugs previously used for other treatments could be an effective cure for this virus. UK-based BenevolentAI is just one such company that has used AI and machine learning to aid drug discovery.
Other innovations in 2020
• C-THRU

The C-THRU platform consists of a helmet-mounted device worn by each firefighter, a tablet application which
runs command coordination/video software, a cloud
archive of incidents, AR navigational tools (location
tracking, points of interest), object detection (lines around
objects, shape contour, see through smoke and darkness),
and proximity detection of nearby firefighters.  The
product is hands free, weighing about the same as two
iPhones, replacing the form factor of many helmet or body
mounted tools such as flashlights and radios.  All of these
features are designed to simplify understanding through
visual cues while providing support to first responders in
high stress situations. 
Other innovations in 2020

• Mojo Vision: Mojo Lens


Through the use of augmented reality (AR), data can be
presented on displays built into glasses or a headset. You
can see turn-by-turn directions while walking, important
steps for replacing an unfamiliar machine part, or talking
points for a presentation all without holding a device or
looking down at a screen. By using a wearable display,
AR helps you keep your concentration by providing
information heads-up and hands-free.
Other innovations in 2020
• Neuralink Brain chip

Developing ultra-high-bandwidth brain-machine interfaces to connect humans and computers.

The initial goal of our technology will be to help people with paralysis to regain
independence through the control of
computers and mobile devices. Our devices
are designed to give people the ability to
communicate more easily via text or speech
synthesis, to follow their curiosity on the web,
or to express their creativity through
photography, art, or writing apps.
Future of AI (2021 and beyond)

https://www.youtube.com/watch?v=fmR5ELoRnSE
