
Unit 1

What is AI?
The term "Artificial Intelligence" refers to the simulation of human intelligence processes by machines, especially computer
systems. Specific applications include expert systems, voice recognition, machine vision, and natural language processing (NLP).

AI programming focuses on three cognitive skills: learning, reasoning, and self-correction.
•Learning Processes
•Reasoning Processes
•Self-correction Processes

Learning Processes
This part of AI programming is concerned with gathering data and creating rules for transforming it into useful information.
The rules, also called algorithms, provide computing devices with step-by-step instructions for accomplishing a
particular task.

Reasoning Processes
This part of AI programming is concerned with selecting the best algorithm to achieve the desired result.

Self-Correction Processes
This part of AI programming aims to fine-tune algorithms regularly in order to ensure that they offer the most reliable results
possible.
Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-
made" and Intelligence means "thinking power"; hence AI means "man-made thinking power."

So finally, we can define AI as:

"A branch of computer science by which we can create intelligent machines that can behave
like humans, think like humans, and make decisions."
The History of Artificial Intelligence

Artificial Intelligence is neither a new term nor a new technology for researchers. This technology is much older than you
might imagine.
Maturation of Artificial Intelligence (1943-1952)

•Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They
proposed a model of artificial neurons.
•Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule
is now called Hebbian learning.
•Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. He published
"Computing Machinery and Intelligence", in which he proposed a test of a machine's ability
to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
The birth of Artificial Intelligence (1952-1956)
•Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named
the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for
some of them.
•Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the
Dartmouth Conference, where AI was coined as an academic field.
Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for
AI was very high.
The golden years-Early enthusiasm (1956-1974)
•Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum
created the first chatbot, named ELIZA, in 1966.
•Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
•The period from 1974 to 1980 was the first AI winter. "AI winter" refers to a period in which computer
scientists faced a severe shortage of government funding for AI research.
•During AI winters, public interest in artificial intelligence decreased.
A boom of AI (1980-1987)
•Year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programs that emulate the
decision-making ability of a human expert.
•Also in 1980, the first national conference of the American Association for Artificial Intelligence was held at Stanford
University.
The second AI winter (1987-1993)
•The period from 1987 to 1993 was the second AI winter.
•Investors and governments again stopped funding AI research because of the high cost and inefficient results. Expert
systems such as XCON proved very expensive to maintain.
The emergence of intelligent agents (1993-2011)
•Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to
beat a world chess champion.
•Year 2002: For the first time, AI entered the home in the form of Roomba, a robot vacuum cleaner.
•Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix also started
using AI.
Deep learning, big data and artificial general intelligence (2011-present)
•Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as
riddles. Watson proved that it could understand natural language and answer tricky questions quickly.
•Year 2012: Google launched an Android app feature, "Google Now", which could provide information to the user as a
prediction.
•Year 2014: The chatbot "Eugene Goostman" won a competition in the famous "Turing test".
•Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and
performed remarkably well.
•Google also demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment
over the phone; the person on the other end did not notice she was talking to a machine.
Now AI has developed to a remarkable level. Concepts such as deep learning, big data, and data science are booming.
Companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices.
The future of Artificial Intelligence is inspiring and will bring high intelligence.
Application of AI
1. AI in Astronomy
•Artificial Intelligence can be very useful for solving complex problems about the universe. AI technology can help us
understand the universe: how it works, its origin, and so on.

2. AI in Healthcare
•In the last five to ten years, AI has become increasingly advantageous for the healthcare industry and is going to have a
significant impact on it.
•Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with
diagnoses and can warn them when a patient's condition is worsening, so that medical help can reach the patient before hospitalization.

3. AI in Gaming
•AI can be used for gaming. AI machines can play strategic games like chess, where the machine needs to
think about a large number of possible positions.

4. AI in Finance
•AI and the finance industry are a perfect match for each other. The finance industry is implementing automation, chatbots,
adaptive intelligence, algorithmic trading, and machine learning in financial processes.
5. AI in Data Security
•The security of data is crucial for every company, and cyber-attacks are growing rapidly in the digital world. AI can be
used to make data more safe and secure. Examples such as the AEG bot and the AI2 platform are used to detect software
bugs and cyber-attacks more effectively.

6. AI in Social Media
•Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be stored and
managed very efficiently. AI can organize and manage these massive amounts of data, and it can analyze them to identify
the latest trends, hashtags, and the requirements of different users.

7. AI in Travel & Transport

•AI is in high demand in the travel industry. It can perform various travel-related tasks, from
making travel arrangements to suggesting hotels, flights, and the best routes to customers. Travel industries are using AI-
powered chatbots that can interact with customers in a human-like way for better and faster responses.

8. AI in Automotive Industry
•Some automotive companies are using AI to provide virtual assistants to their users for better performance. For example, Tesla has
introduced TeslaBot, an intelligent virtual assistant.
•Various companies are currently working on self-driving cars, which can make your journey safer and more secure.
9. AI in Robotics
•Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to perform
repetitive tasks, but with the help of AI we can create intelligent robots that perform tasks based on their own
experience without being pre-programmed.
•Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots Erica and Sophia
were developed, and they can talk and behave like humans.

10. AI in Entertainment
•We already use AI-based applications in daily life through entertainment services such as Netflix and
Amazon. With the help of ML/AI algorithms, these services show recommendations for programs and shows.

11. AI in Agriculture
•Agriculture is an area that requires various resources (labor, money, and time) for the best results. Nowadays agriculture is
becoming digital, and AI is emerging in this field. Agriculture applies AI in agricultural robotics, soil and crop monitoring,
and predictive analysis. AI in agriculture can be very helpful for farmers.

12. AI in E-commerce
•AI is providing a competitive edge to the e-commerce industry and is increasingly in demand in the e-commerce
business. AI helps shoppers discover associated products in their preferred size, color, or even brand.
Agents and Environments
Example of Agents
• Human agent
• Eyes, ears, skin, taste buds, etc. for sensors
• Hands, fingers, legs, mouth, etc. for actuators
• Powered by muscles
• Robot
• Camera, infrared, bumper, etc. for sensors
• Grippers, wheels, lights, speakers, etc. for
actuators
• Often powered by motors
• Software agent
• Functions serve as sensors
• Information is provided as input to functions in the form of
encoded bit strings or symbols
• Functions serve as actuators
• Function results deliver the output
Diagram of an agent
Simple Terms
Percept
 Agent’s perceptual inputs at any given instant
Percept sequence
 Complete history of everything that the agent has ever
perceived.
Agent function & program
Agent’s behavior is mathematically described by
 Agent function
 A function mapping any given percept sequence to an
action
Practically it is described by
 An agent program
 The real implementation
Vacuum-cleaner world
Percepts: Is the current square clean or dirty? Which square is the agent in?
Actions: Move left, move right, suck, do nothing
Vacuum-Cleaner World

Percepts: location and contents, e.g., [A,Dirty]


Actions: Left, Right, Suck, NoOp
Agent’s function → look-up table
For many agents this is a very large table
The program below implements the agent function
given by such a table
function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
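The pseudocode above translates almost line-for-line into Python. This is a minimal sketch of the agent program only, not a simulator; the string names for squares and actions are just one possible encoding:

```python
def reflex_vacuum_agent(percept):
    """Map the percept [location, status] to an action (reflex rules)."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(["A", "Dirty"]))  # Suck
print(reflex_vacuum_agent(["A", "Clean"]))  # Right
```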
Agents and Their Environment

 A rational agent does “the right thing”


 The action that leads to the best outcome under the given circumstances

 An agent function maps percept sequences to actions


 Abstract mathematical description

 An agent program is a concrete implementation of the respective function
 It runs on a specific agent architecture (“platform”)

 Problems:
 What is “the right thing”?
 How do you measure the “best outcome”?

Agents and environments

• The agent function maps from percept histories to actions:

• f : P* → A

• The agent program runs on the physical architecture to
produce f
• agent = architecture + program
Artificial Intelligence: A Modern Approach
Good Behavior: The Concept of Rationality

Concept of Rationality
Rational agent
 One that does the right thing
 = every entry in the table for the agent function
is correct (rational).
What is correct?
 The actions that cause the agent to be most
successful
 So we need ways to measure success.
Performance measure
 An objective function that determines
 how successfully the agent is doing
 E.g., 90% or 30%?

An agent, based on its percepts, produces an action sequence:
 if the sequence is desirable, the agent is said to be performing well.
 There is no universal performance measure for all
agents
Performance measure
• Consider the vacuum-cleaner agent from the preceding section.

• We might propose to measure performance by the amount of dirt cleaned up in a single eight-hour shift.

• With a rational agent, of course, what you ask for is what you get.

• A rational agent can maximize this performance measure by cleaning up the dirt, then dumping it all on the floor, then
cleaning it up again, and so on.

• A more suitable performance measure would reward the agent for having a clean floor.

• For example, one point could be awarded for each clean square at each time step (perhaps with a penalty for electricity
consumed and noise generated).

• As a general rule, it is better to design performance measures according to what one actually wants in the environment,
rather than according to how one thinks the agent should behave.
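The one-point-per-clean-square measure can be made concrete with a few lines of code. The two-square world and the scoring rule come from the text; the `score` helper and the sample history are illustrative:

```python
def score(history):
    """Award one point for each clean square at each time step.

    history is a list of world states, one per time step; each state
    maps a square name to "Clean" or "Dirty".
    """
    return sum(1 for state in history for square in state
               if state[square] == "Clean")

# Three time steps in the two-square world:
history = [
    {"A": "Dirty", "B": "Clean"},  # step 1: one clean square -> 1 point
    {"A": "Clean", "B": "Clean"},  # step 2: both clean       -> 2 points
    {"A": "Clean", "B": "Clean"},  # step 3: both clean       -> 2 points
]
print(score(history))  # 5
```

A penalty for electricity or noise would simply be subtracted from this sum per time step.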
Performance measure
A general rule:
 Design performance measures according to
 What one actually wants in the environment
 Rather than how one thinks the agent should behave

E.g., in the vacuum-cleaner world
 We want the floor clean, no matter how the
agent behaves
 We don’t restrict how the agent behaves
Rationality
What is rational at any given time depends on
four things:
 The performance measure defining the criterion of
success
 The agent’s prior knowledge of the environment

 The actions that the agent can perform

 The agent’s percept sequence up to now


Rational agent
For each possible percept sequence,
 a rational agent should select
 an action expected to maximize its performance
measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has
E.g., an exam
 Maximize marks, based on
the questions on the paper & your knowledge
Example of a rational agent
Performance measure
 Awards one point for each clean square
 at each time step, over 10000 time steps
Prior knowledge about the environment
 The geography of the environment
 Only two squares

 The effect of the actions


Example of a rational agent
Actions that the agent can perform
 Left, Right, Suck and NoOp
Percept sequences
 Where is the agent?
 Does the location contain dirt?

Under this circumstance, the agent is


rational.
One can see easily that the same agent would be irrational under different circumstances.

For example,

once all the dirt is cleaned up it will oscillate needlessly back and forth;

if the performance measure includes a penalty of one point for each movement left or right, the agent will fare poorly.

A better agent for this case would do nothing once it is sure that all the squares are clean.

If clean squares can become dirty again, the agent should occasionally check and re-clean them if needed.

If the geography of the environment is unknown, the agent will need to explore it rather than stick to squares A and B.
The Nature of Environments

Environment
Determine to a large degree the interaction between the “outside world” and the agent
The “outside world” is not necessarily the “real world” as we perceive it
In many cases, environments are implemented within computers
They may or may not have a close correspondence to the “real world”
Environment Properties
Fully observable vs. partially observable
Sensors capture all relevant information from the environment
Deterministic vs. stochastic (non-deterministic)
Changes in the environment are predictable
Episodic vs. sequential (non-episodic)
Independent perceiving-acting episodes
Static vs. dynamic
No changes while the agent is “thinking”
Discrete vs. continuous
Limited number of distinct percepts/actions
Single vs. multiple agents
Interaction and collaboration among agents
Competitive, cooperative
Properties of task environments
Fully observable vs. Partially observable
 If an agent’s sensors give it access to the complete state
of the environment at each point in time, then the
environment is effectively fully observable
 i.e., if the sensors detect all aspects
 that are relevant to the choice of action
Partially observable
• An environment might be partially observable
because of noisy and inaccurate sensors, or
because parts of the state are simply missing from
the sensor data.
• Example:
 A local dirt sensor of the cleaner cannot tell
 Whether other squares are clean or not
Properties of task environments
Deterministic vs. stochastic
 If the next state of the environment is completely determined
by the current state and the actions executed by the
agent, then the environment is deterministic;
otherwise, it is stochastic.
 Strategic environment: deterministic except for the actions
of other agents
• The vacuum cleaner and taxi driver are:
 stochastic, because of some unobservable aspects → noise or
unknown factors
Properties of task environments
Episodic vs. sequential
 An episode = agent’s single pair of perception & action
 The quality of the agent’s action does not depend on other
episodes
 Every episode is independent of each other
 Episodic environment is simpler
 The agent does not need to think ahead
Sequential
 Current action may affect all future decisions
• E.g., taxi driving and chess.
Properties of task environments
Static vs. dynamic
 A dynamic environment is always changing
over time
 E.g., the number of people in the street
 while a static environment does not change
 E.g., the destination
Semidynamic
 The environment does not change over time
 but the agent’s performance score does
Properties of task environments

Discrete vs. continuous


 If there are a limited number of distinct states,
clearly defined percepts and actions, the
environment is discrete
 E.g., Chess game
 Continuous: Taxi driving
Properties of task environments

Single agent vs. multiagent


 Playing a crossword puzzle – single agent
 Chess playing – two agents
 Competitive multiagent environment
 Chess playing
 Cooperative multiagent environment
 Automated taxi driver
 Avoiding collision
Properties of task environments
Known vs. unknown
This distinction refers not to the environment itself but to the
agent’s (or designer’s) state of knowledge about the
environment.
- In a known environment, the outcomes for all actions are
given (example: solitaire card games).
- If the environment is unknown, the agent will have to learn
how it works in order to make good decisions (example: a
new video game).
Task environments
Task environments are the problems
 while rational agents are the solutions
Specifying the task environment
 PEAS description, as fully as possible:
 Performance
 Environment
 Actuators
 Sensors
In designing an agent, the first step must always be to specify
the task environment as fully as possible.
We use the automated taxi driver as an example.
Task environments
Performance measure
 How can we judge the automated driver?
 Which factors are considered?
 getting to the correct destination
 minimizing fuel consumption
 minimizing the trip time and/or cost
 minimizing the violations of traffic laws
 maximizing the safety and comfort, etc.
Task environments
Environment
 A taxi must deal with a variety of roads
 Traffic lights, other vehicles, pedestrians, stray
animals, road works, police cars, etc.
 Interact with the customer
Task environments
Actuators (for outputs)
 Control over the accelerator, steering, gear
shifting and braking
 A display to communicate with the customers

Sensors (for inputs)


 Detect other vehicles, road situations
 GPS (Global Positioning System) to know where
the taxi is
 Many more devices are necessary
Task environments
A sketch of automated taxi driver
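One way to keep a PEAS description machine-readable is a plain record type. The field contents below are the taxi items listed above; the `PEAS` dataclass itself is just an illustrative container, not part of the original slides:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS description of a task environment."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["correct destination", "minimize fuel consumption",
                 "minimize trip time/cost", "minimize traffic-law violations",
                 "maximize safety and comfort"],
    environment=["roads", "traffic lights", "other vehicles", "pedestrians",
                 "stray animals", "road works", "police cars", "customers"],
    actuators=["accelerator", "steering", "gear shifting", "braking", "display"],
    sensors=["vehicle/road detectors", "GPS", "cameras"],
)
print(taxi.sensors)
```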
Examples of task environments
VacBot PEAS Description
Performance Measures: Cleanliness of the floor; time needed; energy consumed
Environment: Grid of tiles; dirt on tiles; possibly obstacles, varying amounts of dirt
Actuators: Movement (wheels, tracks, legs, ...); dirt removal (nozzle, gripper, ...)
Sensors: Position (tile ID reader, camera, GPS, ...); dirtiness (camera, sniffer, touch, ...); possibly movement (camera, wheel movement)
Chess Player PEAS Description
Performance Measures: Winning the game; time spent in the game
Environment: Chessboard; positions of every piece
Actuators: Move a piece
Sensors: Input from the keyboard

Specifying the
Environment (II)
PAGE Description:
Used for high-level characterization of agents
Percepts
Information acquired through the agent’s sensory system
Actions
Operations performed by the agent on the environment
through its actuators
Goals
Desired outcome of the task with a measurable
performance
Environment
Surroundings beyond the control of the agent
VacBot PAGE Description
Percepts: Tile properties like clean/dirty, empty/occupied; movement and orientation
Actions: Pick up dirt, move
Goals: Keep the floor clean
Environment: House, apartment
Structure of agents
Agent = architecture + program
 Architecture = some sort of computing device (sensors
+ actuators)
 (Agent) Program = some function that implements the
agent mapping
 Writing the agent program is the job of AI
Agent programs
Input for Agent Program
 Only the current percept
Input for Agent Function
 The entire percept sequence
 The agent must remember all of them
Implement the agent program as
 A look up table (agent function)
Agent programs
Skeleton design of an agent program
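The skeleton referred to here is the table-driven agent: append the new percept to the remembered sequence, then look the whole sequence up in a table. A minimal Python sketch; the tiny vacuum-world table is an invented example:

```python
def make_table_driven_agent(table):
    """Return an agent program backed by a look-up table.

    table maps entire percept sequences (as tuples) to actions.
    """
    percepts = []  # the agent must remember the whole percept sequence

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))

    return program

# A tiny illustrative table for the two-square vacuum world:
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

Note that the table is indexed by the *entire* history, which is exactly why it grows so fast.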
Agent Programs
P = the set of possible percepts
T = the lifetime of the agent
 (the total number of percepts it receives)
Size of the look-up table:

 ∑_{t=1}^{T} |P|^t

Consider playing chess:
 |P| = 10, T = 150
 This will require a table of at least 10^150 entries
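The table-size formula, the sum of |P|^t for t = 1..T, can be evaluated directly to confirm the chess figure; `table_size` is an illustrative helper:

```python
def table_size(num_percepts, lifetime):
    """Number of look-up table entries: sum of |P|**t for t = 1..T."""
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Small sanity check: |P| = 2, T = 3 -> 2 + 4 + 8 = 14
print(table_size(2, 3))  # 14

# The chess figure from the slides: |P| = 10, T = 150
# gives more than 10**150 entries.
print(table_size(10, 150) > 10 ** 150)  # True
```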
Agent programs
Despite its huge size, the look-up table does what we
want.
The key challenge of AI
 Find out how to write programs that, to the extent
possible, produce rational behavior
 from a small amount of code
 rather than from a large number of table entries
 E.g., a five-line program implementing Newton’s method
 vs. huge tables of square roots, sines, cosines, …
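The contrast mentioned here, a short program versus an enormous table, is easy to see with Newton's method for square roots; a sketch:

```python
def newton_sqrt(x, tolerance=1e-10):
    """Approximate sqrt(x) by Newton's method: g <- (g + x/g) / 2."""
    guess = x if x > 0 else 0.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2
    return guess

print(round(newton_sqrt(2.0), 6))  # 1.414214
```

A few lines of iteration replace a whole table of precomputed square roots.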
Types of agent programs
Four types
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
Simple reflex agents
It uses just condition-action rules
 The rules are of the form “if … then …”
 Efficient, but with a narrow range of applicability
 because knowledge sometimes cannot be stated explicitly
 Works only
 if the environment is fully observable
Simple reflex agents
Simple reflex agents
Limitation of simple reflex agents:

Simple reflex agents have the admirable property of being simple, but they turn out to be of very limited intelligence.
The agent shown in the figure will work only if the correct decision can be made on the basis of the current percept alone,
that is, only if the environment is fully observable. Even a little bit of unobservability can cause serious trouble.

For example
We can see a similar problem arising in the vacuum world.

Suppose that a simple reflex vacuum agent is deprived of its location sensor, and has only a dirt sensor.

Such an agent has just two possible percepts: [Dirty] and [Clean].

It can Suck in response to [Dirty]

What should it do in response to [Clean]?

Moving Left fails (for ever) if it happens to start in square A,

and moving Right fails (for ever) if it happens to start in square B.


Infinite loops are often unavoidable for simple reflex agents operating in partially observable
environments.
Model-based Reflex Agents
For a world that is partially observable
 the agent has to keep track of an internal state
 That depends on the percept history
 Reflecting some of the unobserved aspects
 E.g., driving a car and changing lane
Requiring two types of knowledge
 How the world evolves independently of the agent
 How the agent’s actions affect the world
Example Table Agent with Internal State

IF: Saw an object ahead, and turned right, and it’s now clear ahead → THEN: Go straight
IF: Saw an object ahead, turned right, and object ahead again → THEN: Halt
IF: See no objects ahead → THEN: Go straight
IF: See an object ahead → THEN: Turn randomly
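The rule table above needs memory of the previous percept and action, which is what makes it a reflex agent with internal state rather than a simple reflex agent. A minimal sketch; the percept and action strings are chosen for illustration:

```python
def make_state_agent():
    """Reflex agent with internal state for the obstacle-avoidance table."""
    state = {"last_percept": None, "last_action": None}

    def program(percept):  # percept is "clear" or "object ahead"
        # Rules that need history (the internal state):
        if (state["last_percept"] == "object ahead"
                and state["last_action"] == "turn"):
            action = "go straight" if percept == "clear" else "halt"
        # Rules on the current percept alone:
        elif percept == "clear":
            action = "go straight"
        else:
            action = "turn"
        state["last_percept"], state["last_action"] = percept, action
        return action

    return program

agent = make_state_agent()
print(agent("object ahead"))  # turn
print(agent("clear"))         # go straight (turned, now clear ahead)
```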


Model-based Reflex Agents

The agent has memory


Model-based Reflex Agents
Goal-based agents
The current state of the environment is not always
enough
The goal is another issue to achieve
 Judgment of rationality / correctness
Actions are chosen to achieve goals, based on
 the current state
 the current percept
Goal-based agents
Conclusion
 Goal-based agents are less efficient
 but more flexible
 Agent → different goals → different tasks
 Search and planning are
 two other sub-fields in AI
 used to find the action sequences that achieve the goal
Goal-based agents
Utility-based agents
Goals alone are not enough
 to generate high-quality behavior
 E.g., meals in a canteen: good or not?
Many action sequences can achieve the goals
 some are better and some are worse
 If goal means success,
 then utility means the degree of success (how
successful it is)
Utility-based agents
State A is said to have higher utility
 if state A is preferred over the others
Utility is therefore a function
 that maps a state onto a real number
 the degree of success
Utility-based agents
Utility has several advantages:
 When there are conflicting goals,
 only some of the goals (but not all) can be achieved
 utility describes the appropriate trade-off
 When there are several goals,
 none of which can be achieved with certainty,
 utility provides a way for decision-making
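When no goal is achieved with certainty, a utility function plus outcome probabilities yields expected utility, which makes action choice a comparison. A sketch with invented states, probabilities, and utility values:

```python
# A utility function maps states to real numbers (illustrative values).
utility = {"at_goal_fast": 10.0, "at_goal_slow": 6.0, "not_at_goal": 0.0}

# Each action may lead to several outcomes with some probability.
outcomes = {
    "highway": [("at_goal_fast", 0.7), ("not_at_goal", 0.3)],
    "back_roads": [("at_goal_slow", 0.9), ("not_at_goal", 0.1)],
}

def expected_utility(action):
    """Probability-weighted utility of an action's possible outcomes."""
    return sum(p * utility[s] for s, p in outcomes[action])

best = max(outcomes, key=expected_utility)
print(best, expected_utility(best))  # highway 7.0
```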
Learning Agents
After an agent is programmed, can it work
immediately?
 No, it still needs teaching
In AI,
 once an agent is done,
 we teach it by giving it a set of examples
 and test it using another set of examples
We then say the agent learns
 A learning agent
Learning Agents
Four conceptual components
 Learning element
 Making improvement
 Performance element
 Selecting external actions
 Critic
 Tells the learning element how well the agent is doing with
respect to a fixed performance standard.
(Feedback from the user or examples: good or not?)
 Problem generator
 Suggests actions that will lead to new and informative experiences.
Learning Agents
Reference:
S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 3rd edition, 2010.
