
Artificial Intelligence(AI)

Ramjee Dixit (Asst. Professor)


CSE Department
United College of Engineering & Research
Syllabus
Unit 1 : Introduction
Unit 2 : Problem Solving Methods

Unit 3 : Knowledge representation

Unit 4 : Software Agents

Unit 5 : Applications
Unit 1 : Introduction
 Introduction
Definition
Future of Artificial Intelligence

Characteristics of Intelligent Agents

Typical Intelligent Agents

Problem Solving Approach to Typical AI problems.


Unit 2 : Problem Solving Methods
Problem solving Methods
Search Strategies

Uninformed

Informed

Heuristics

Local Search Algorithms and Optimization Problems

Searching with Partial Observations

Constraint Satisfaction Problems

Constraint Propagation

Backtracking Search

 Game Playing

Optimal Decisions in Games

Alpha – Beta Pruning

Stochastic Games
Unit 3: Knowledge Representation
First Order Predicate Logic
Prolog Programming

Unification

Forward Chaining

Backward Chaining

Resolution

Knowledge Representation

Ontological Engineering

Categories and Objects

Events

Mental Events and Mental Objects

 Reasoning Systems for Categories

 Reasoning with Default Information


Unit 4 : SOFTWARE AGENTS

Architecture for Intelligent Agents


Agent communication

Negotiation and Bargaining

Argumentation among Agents

Trust and Reputation in Multi-agent systems.


Unit 5 : Applications
 AI applications
 Language Models

 Information Retrieval

 Information Extraction

 Natural Language Processing

 Machine Translation

Speech Recognition

Robot

Hardware

Perception

Planning

Moving
Text Books
1. S. Russell and P. Norvig, “Artificial Intelligence: A Modern Approach”,
Prentice Hall, Third Edition, 2009.
2. I. Bratko, “Prolog: Programming for Artificial Intelligence”, Fourth edition,
Addison-Wesley Educational Publishers Inc., 2011.
3. M. Tim Jones, “Artificial Intelligence: A Systems Approach (Computer
Science)”, Jones and Bartlett Publishers, Inc., First Edition, 2008.
4. Nils J. Nilsson, “The Quest for Artificial Intelligence”, Cambridge University
Press, 2009.
5. William F. Clocksin and Christopher S. Mellish, “Programming in
Prolog: Using the ISO Standard”, Fifth Edition, Springer, 2003.
6. Gerhard Weiss, ”Multi Agent Systems”, Second Edition, MIT Press, 2013.
7. David L. Poole and Alan K. Mackworth, “Artificial Intelligence:
Foundations of Computational Agents”, Cambridge University Press, 2010.
Course Outcome
CO 1 : Understand the basics of the theory and practice of Artificial
Intelligence as a discipline and about intelligent agents. (K2)

CO 2 : Understand search techniques and game playing theory. (K2, K3)

CO 3 : The student will learn to apply knowledge representation
techniques and problem-solving strategies to common applications. (K3, K4)

CO 4 : The student should be aware of techniques used for
classification and clustering. (K2, K3)

CO 5 : The student should be aware of the basics of pattern recognition
and the steps required for it. (K2, K4)
Unit 1 : Introduction
 Introduction
Definition
Future of Artificial Intelligence

Characteristics of Intelligent Agents

Typical Intelligent Agents

Problem Solving Approach to Typical AI problems.


Introduction
“The science and engineering of making
intelligent machines, especially intelligent
computer programs”

 -John McCarthy
AI is the study of how the human brain thinks, learns,
decides and works when it tries to solve a problem.
The output of this study is intelligent software systems.
•The aim of AI is to improve computer functions which
are related to human knowledge, for example
learning, reasoning and problem solving.
•AI is an approach to make a computer, a robot, or
a product think the way a smart human thinks.
Intelligence is intangible. It is composed of
-Reasoning
-Learning
- Problem Solving
- Perception
- Linguistic Intelligence
Definition
Artificial (man-made) & Intelligence (power of
thinking)
It means man-made thinking power.

 “It is a branch of computer science by which we

can create intelligent machines which can behave
like humans, think like humans and are able to make
decisions.”
Advantages

Reduction in human errors


Useful in risky areas

High reliability

Fast

Digital assistant

Faster decisions

Available 24*7
Disadvantages

High cost
Cannot replace humans

Lack of creativity

Risk of unemployment

No feelings and emotions


Types of AI

Type-1:Based on Capability
Artificial Narrow Intelligence

Artificial General Intelligence

Artificial Super Intelligence

Type-2 : Based on Functionality


Reactive Machines

Limited Memory

Theory of Mind

Self- Awareness
Type -1 : Based on Capability

 Artificial Narrow Intelligence


Artificial Narrow Intelligence (ANI) is also known as “Weak” AI.
 It is the AI that exists in our world today.

 Narrow AI is AI that is programmed to perform a single task,

whether it is checking the weather, playing chess,
or analyzing raw data to write journalistic reports.


Type -1 : Based on Capability

 Artificial General Intelligence


Artificial General Intelligence (AGI) is also known as “Strong” AI.
 It can perform a variety of functions.

 It is the concept of a machine with general intelligence that mimics

human intelligence or behaviour, with the ability to learn and
apply its intelligence to solve any problem.
Type -1 : Based on Capability

 Artificial Super Intelligence(ASI)


It is more capable than a human.
ASI is hypothetical: it refers to AI that does not merely mimic or
understand human intelligence and behaviour, but to machines that
become self-aware and surpass the capability of human intelligence
and ability.
ASI is purely speculative at this moment.
Type -2 : Based on Functionality

 Reactive Machines
These machines are the most common form of AI applications.
Such AI systems do not store memories or past experiences for

future actions.
These machines focus only on the current scenario and react to it

with the best possible action.

Example: Deep Blue, IBM's chess-playing supercomputer.
Type -2 : Based on Functionality

 Limited Memory
Limited memory machines can retain data for a short period of
time.
While they can use this data for a specific period of time, they

cannot add it to their library of experiences.

Many self-driving cars use the limited memory model; they store

data such as
Speed of nearby cars
Distance of such cars

The speed limits and other information that can help them navigate on

roads
Type -2 : Based on Functionality

 Theory of Mind
AI should understand human emotions and beliefs and should be
able to interact socially.
A lot of effort is going into developing such
machines.
Type -2 : Based on Functionality

 Self-Awareness
Self-aware AI is the future of artificial intelligence. These
machines will be super intelligent and will have their own
consciousness, sentiments and self-awareness.
These machines will be smarter than the human mind.

Hypothetical at this point.


AI Applications
Chatbots
AI in healthcare

Handwriting recognition

Speech recognition

Natural language processing

AI in gaming

AI in finance

AI in robotics

AI in security

AI in social media

AI in education
Future of Artificial Intelligence
Till now we are in the era of narrow AI.
When machines become as intelligent as humans, we call it

strong AI.
It will be able to understand and take decisions like a human.

Similar to humans, machines will design other machines.

The last level of AI is artificial super intelligence, or the singularity.

On the basis of trends in development, it is certain that the

singularity will arrive.

Future of AI

Some more points

Gastrobots – sustain themselves by eating naturally occurring substances.

If we are able to reorganize atoms at the nano-level, then there is a possibility of achieving immortality.

Natural language translation.

AI may be either a blessing for humans or a curse for humans.
Intelligent Agents
In artificial intelligence, an intelligent agent (IA) is
anything which perceives its environment, takes
actions autonomously in order to achieve goals, and
may improve its performance with learning or may
use knowledge.
They may be simple or complex.
A thermostat is considered an example of an
intelligent agent, as is a human being, as is any
system that meets the definition, such as a firm, a
state, or a biome.
Agent
Terminologies

Percept: the agent’s perceptual inputs


Percept sequence: the complete history of everything the
agent has perceived
Agent function: maps any given percept sequence to an
action [f: P* → A]
The agent program runs on the physical architecture to
produce f
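A minimal Python sketch (not from the slides) of an agent function f that maps a percept sequence to an action; the two-location vacuum-world percepts and actions used here are illustrative assumptions.

```python
# Minimal sketch: an agent function f maps a percept sequence P* to an action A.
# The two-location "vacuum world" percepts and actions below are illustrative only.

def vacuum_agent_function(percept_sequence):
    """Map the full percept history to an action (only the latest percept is used here)."""
    location, status = percept_sequence[-1]   # latest percept: (location, "Clean"/"Dirty")
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# The agent program feeds percepts to the agent function one at a time.
history = [("A", "Dirty")]
print(vacuum_agent_function(history))        # -> Suck
history.append(("A", "Clean"))
print(vacuum_agent_function(history))        # -> Right
```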
We must first specify the setting for intelligent agent design.
PEAS: Performance measure, Environment, Actuators, Sensors
Example: the task of designing a self-driving car
Performance measure : safe, fast, legal, comfortable trip
Environment : roads, other traffic, pedestrians
Actuators : steering wheel, accelerator, brake, signal, horn
Sensors : cameras, LIDAR (light detection and ranging), speedometer, GPS,
odometer, engine sensors, keyboard
Human Agent

In case of human agent


Sensors : eye, ear, nose, skin, tongue.
Actuators : mouth, arm, leg etc.
Example of Intelligent Agents
Sensor: Sensor is a device which detects the change in
the environment and sends the information to other
electronic devices. An agent observes its environment
through sensors.
Actuators : Actuators are the components of machines
that convert energy into motion. The actuators are
responsible for moving and controlling a system. An
actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the


environment. Effectors can be legs, wheels, arms, fingers,
wings, fins, and display screen.
Structure of Agent

Structure of agent = Architecture + Agent Program


Architecture : It is the machinery that the agent
executes on. It is a device with sensors and actuators,
for example, a robotic car, a camera, a PC.
Agent Program: It is an implementation of an agent
function. An agent function is a map from the percept
sequence(history of all that an agent has perceived to
date) to an action.
Rational Agent

A rational agent is an agent which has clear preference, models


uncertainty, and acts in a way to maximize its performance measure
with all possible actions.
A rational agent is said to perform the right things.
AI is about creating rational agents to use for game theory and
decision theory for various real-world scenarios.
For an AI agent, rational action is most important because in an AI
reinforcement learning algorithm, the agent gets a positive reward
for each best possible action and a negative reward for each wrong
action.
Rationality

The rationality of an agent is measured by its


performance measure.
Rationality can be judged on the basis of following
points:
Performance measure which defines the success
criterion.
Agent prior knowledge of its environment.
Best possible actions that an agent can perform.
The sequence of percepts.
Characteristics of Intelligent Agents

Internal characteristics are


Learning/reasoning: an agent has the ability to learn from
previous experience and to successively adapt its own
behavior to the environment
Reactivity: an agent must be capable of reacting
appropriately to influences or information from its environment.
Autonomy: an agent must have control over both its actions
and its internal state. The degree of the agent’s autonomy can be
specified; intervention from the user may be needed only for
important decisions.
 Goal-oriented: an agent has well-defined goals and
gradually influences its environment so as to achieve its own
goals.
External characteristics

Communication: an agent often requires an interaction with


its environment to fulfill its tasks, such as human, other agents,
and arbitrary information sources
Cooperation: cooperation of several agents permits faster
and better solutions for complex tasks that exceed the
capabilities of a single agent.
Mobility: an agent may navigate within electronic
communication networks.
Character: like a human, an agent may demonstrate an
external behaviour with as many human characteristics as possible.
Agent Environment in AI

An environment is everything in the world which surrounds


the agent, but it is not a part of an agent itself.

An environment can be described as a situation in which an


agent is present.

The environment is where agent lives, operates and provides


the agent with something to sense and act upon it.

An environment is mostly said to be non-deterministic.



Features of Environment

As per Russell and Norvig, an environment can have


various features from the point of view of an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
Fully Observable vs Partially Observable

If an agent's sensors can sense or access the complete
state of an environment at each point of time, then it is a
fully observable environment, else it is partially
observable.
A fully observable environment is easy as there is no

need to maintain an internal state to keep track of the history of

the world.
If an agent has no sensors in an environment, then such

an environment is called unobservable.


Fully Observable vs Partially Observable

For example, in a chess game the state of the system,
that is, the position of all the pieces on the chess board,
is available the whole time, so the player can make an
optimal decision; such an environment is fully observable.
An example of a partially observable system would be a

card game in which some of the cards are discarded into
a pile face down. In this case the observer is only able to
view their own cards and potentially those of the dealer.
Deterministic vs Stochastic

When the agent’s current state and selected action completely
determine the next state of the environment, the environment is said to
be deterministic.
A stochastic environment is random in nature: the next state is not
unique and cannot be completely determined by the agent.
Example:
Chess – there are only a limited number of possible moves for a piece in
the current state, and these moves can be determined.
Self-driving cars – the actions of a self-driving car are not
unique; they vary from time to time.
Competitive vs Collaborative

An agent is said to be in a competitive environment when it


competes against another agent to optimize the output.

The game of chess is competitive as the agents compete with


each other to win the game which is the output.

An agent is said to be in a collaborative environment when


multiple agents cooperate to produce the desired output.

When multiple self-driving cars are found on the roads, they


cooperate with each other to avoid collisions and reach their
destination which is the output desired.
Single-agent vs Multi-agent

An environment consisting of only one agent is said to be a


single-agent environment.
A person left alone in a maze is an example of the single-agent
system.

An environment involving more than one agent is a multi-agent


environment.
The game of football is multi-agent as it involves 11 players in
each team.
Dynamic vs Static

An environment that keeps constantly changing while the

agent is deliberating or acting is said to be dynamic.

A roller coaster ride is dynamic as it is set in motion and the


environment keeps changing every instant.

An idle environment with no change in its state is called a static


environment.

An empty house is static as there’s no change in the


surroundings when an agent enters.
Discrete vs Continuous

If an environment consists of a finite number of actions that can


be deliberated in the environment to obtain the output, it is said to
be a discrete environment.

The game of chess is discrete as it has only a finite number of


moves. The number of moves might vary with every game, but still,
it’s finite.

The environment in which the actions performed cannot be

numbered, i.e. is not discrete, is said to be continuous.

Self-driving cars are an example of continuous environments as


their actions are driving, parking, etc. which cannot be numbered.
Types of Agent

Agents can be grouped into five classes based on their degree of


perceived intelligence and capability.
 All these agents can improve their performance and generate
better actions over time.
These are given below:
•Simple Reflex Agent
•Model-based reflex agent
•Goal-based agents
•Utility-based agent
•Learning agent
Simple Reflex Agent

They ignore the rest of the percept history and act only on the
basis of the current percept.
The agent function is based on the condition-action rule.

A condition-action rule is a rule that maps a state, i.e. a condition, to

an action.
If the condition is true, then the action is taken, else not.

This agent function only succeeds when the environment is fully

observable.
For simple reflex agents operating in partially observable
environments, infinite loops are often unavoidable.
It may be possible to escape from infinite loops if the agent can

randomize its actions.

Simple Reflex Agent
Simple Reflex Agent

Problems with simple reflex agents are:

Very limited intelligence.

No knowledge of non-perceptual parts of the state.

Usually too big to generate and store.

If there occurs any change in the environment, then the

collection of rules needs to be updated.


Model-Based Reflex Agent

It works by finding a rule whose condition matches the
current situation.
A model-based agent can handle partially observable
environments by the use of a model about the world.
The agent has to keep track of the internal state which
is adjusted by each percept and that depends on the
percept history.
Contd...

The current state is stored inside the agent, which maintains
some kind of structure describing the part of the world which
cannot be seen.
Updating the state requires information about:

how the world evolves independently from the agent

how the agent’s actions affect the world.
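A hedged Python sketch of the idea: the agent folds each percept into an internal state using a simple model of the world and then applies a condition-action rule to that state; the thermostat domain and the smoothing model are assumptions chosen for brevity.

```python
# Sketch of a model-based reflex agent: it keeps an internal state that is
# updated from each percept using a (very crude) model of how the world evolves.

class ModelBasedThermostat:
    def __init__(self, target=21.0):
        self.target = target
        self.estimated_temp = None      # internal state: best guess about the world

    def update_state(self, percept):
        """Fold the new percept into the internal state (simple smoothing model)."""
        if self.estimated_temp is None:
            self.estimated_temp = percept
        else:
            self.estimated_temp = 0.7 * self.estimated_temp + 0.3 * percept

    def act(self, percept):
        self.update_state(percept)
        if self.estimated_temp < self.target - 0.5:
            return "heat_on"
        if self.estimated_temp > self.target + 0.5:
            return "heat_off"
        return "hold"

agent = ModelBasedThermostat()
for reading in [18.0, 19.5, 22.5]:
    print(agent.act(reading))
```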


Model-Based Reflex Agent
Goal -Based Agent

These kinds of agents take decisions based on how
far they are currently from their goal (a description of
desirable situations).
Their every action is intended to reduce the distance

from the goal.

This gives the agent a way to choose among multiple

possibilities, selecting the one which reaches a goal

state.
Contd..

The knowledge that supports its decisions is
represented explicitly and can be modified, which
makes these agents more flexible.
They usually require search and planning.

The goal-based agent’s behavior can easily be

changed.
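A small illustrative sketch, assuming a grid world with a Manhattan-distance estimate: the agent picks the action whose resulting state is closest to the goal. The grid, goal and actions are invented for the example.

```python
# Illustrative goal-based agent: pick the action that most reduces the distance
# to a goal position on a grid. The grid world and action set are assumptions.

GOAL = (3, 3)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def goal_based_agent(position):
    """Select the action whose resulting state is closest to the goal."""
    best = min(ACTIONS.items(),
               key=lambda kv: manhattan((position[0] + kv[1][0],
                                         position[1] + kv[1][1]), GOAL))
    return best[0]

print(goal_based_agent((0, 0)))   # -> "up" (ties broken by dictionary order)
```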
Contd..
Utility-Based Agents

The agents which are developed with their end
uses (utilities) as building blocks are called utility-based agents.
When there are multiple possible alternatives, then to

decide which one is best, utility-based agents are

used.
They choose actions based on a preference (utility)

for each state.


Contd...

Sometimes achieving the desired goal is not enough. We
may look for a quicker, safer, cheaper trip to reach a
destination.
Agent happiness should be taken into consideration.

Utility describes how “happy” the agent is.

Because of the uncertainty in the world, a utility agent

chooses the action that maximizes the expected utility.

A utility function maps a state onto a real number which

describes the associated degree of happiness.
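A brief sketch with made-up routes, probabilities and weights: each action can lead to several outcomes with some probability, a utility function maps each outcome to a real number, and the agent picks the action with the highest expected utility.

```python
# Sketch of a utility-based agent. The routes, outcome probabilities and the
# weights inside the utility function are illustrative assumptions only.

actions = {
    "highway":  [(0.8, {"time": 30, "cost": 5.0}), (0.2, {"time": 90, "cost": 5.0})],
    "backroad": [(1.0, {"time": 45, "cost": 2.0})],
}

def utility(outcome):
    """Map an outcome (state) onto a real number: higher means 'happier'."""
    return -1.0 * outcome["time"] - 2.0 * outcome["cost"]

def expected_utility(action):
    """Weight the utility of each possible outcome by its probability."""
    return sum(p * utility(o) for p, o in actions[action])

best = max(actions, key=expected_utility)
print(best, expected_utility(best))   # -> backroad -49.0
```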


Contd...
Learning Agents

A learning agent in AI is the type of agent that can learn from
its past experiences or it has learning capabilities.
It starts to act with basic knowledge and then is able to act

and adapt automatically through learning.


Contd...

A learning agent has mainly four conceptual components,
which are:
1. Learning element: It is responsible for making improvements by learning
from the environment.
2. Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
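A skeleton sketch showing how the four components could fit together; the bandit-style environment and the reward values are invented purely for illustration.

```python
# Skeleton sketch of the four conceptual components of a learning agent.
# The two-action domain and reward logic are assumptions for demonstration.
import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # knowledge used by the performance element
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        """Select the external action (here: the action with the best value so far)."""
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        """Suggest an exploratory action that leads to new, informative experience."""
        return random.choice(list(self.values))

    def critic(self, action, reward):
        """Feedback on how well the agent is doing against the performance standard."""
        return reward

    def learning_element(self, action, feedback):
        """Improve the agent's knowledge using the critic's feedback."""
        self.counts[action] += 1
        self.values[action] += (feedback - self.values[action]) / self.counts[action]

agent = LearningAgent(["A", "B"])
for step in range(20):
    a = agent.problem_generator() if step % 4 == 0 else agent.performance_element()
    reward = 1.0 if a == "B" else 0.2            # hypothetical environment response
    agent.learning_element(a, agent.critic(a, reward))
print(agent.values)                               # "B" should end up valued higher
```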
Contd...
Problem Solving Approaches to Typical AI
Problems

Problem-solving is commonly known as the method of
reaching the desired goal or finding a solution to a given
situation.
In computer science, problem-solving refers to artificial
intelligence techniques, including various techniques
such as forming efficient algorithms, using heuristics, and
performing root cause analysis to find desirable solutions.
In Artificial Intelligence, users can solve a problem
by performing logical algorithms, utilizing polynomial and
differential equations, and executing them using
modeling paradigms.
Contd.....

There can be various solutions to a single problem,


which are achieved by different heuristics.
Also, some problems have unique solutions.
It all rests on the nature of the given problem.
Examples of Problems in AI
Chess
N-Queen problem
Tower of Hanoi Problem
Travelling Salesman Problem
Water-Jug Problem
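As a concrete example of formulating one of these problems as a state-space search, the following sketch solves an assumed variant of the Water-Jug Problem (a 4-litre and a 3-litre jug, with a goal of exactly 2 litres in the first jug) using breadth-first search.

```python
# Sketch of the Water-Jug Problem solved as a state-space search with BFS.
# Assumed variant: a 4-litre and a 3-litre jug, goal = exactly 2 litres in jug 1.
from collections import deque

CAP = (4, 3)

def successors(state):
    x, y = state
    yield (CAP[0], y); yield (x, CAP[1])                      # fill either jug
    yield (0, y); yield (x, 0)                                # empty either jug
    pour = min(x, CAP[1] - y); yield (x - pour, y + pour)     # pour jug 1 -> jug 2
    pour = min(y, CAP[0] - x); yield (x + pour, y - pour)     # pour jug 2 -> jug 1

def bfs(start=(0, 0), goal_amount=2):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_amount:                           # goal test on jug 1
            path = []
            while state is not None:
                path.append(state); state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:
                parents[nxt] = state; frontier.append(nxt)

print(bfs())   # prints a shortest sequence of (jug1, jug2) states reaching 2 litres in jug 1
```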
Problem Solving Techniques

Following are some of the standard problem-solving


techniques used in AI
Heuristics
Types of Searching Algorithms
Evolutionary Computation
Genetic Algorithms
Heuristics

The heuristic method helps comprehend a problem and devises a


solution based purely on experiments and trial and error methods.
However, these heuristics do not often provide the best optimal
solution to a specific problem.
Instead, these undoubtedly offer efficient solutions to attain
immediate goals.
Therefore, the developers utilize these when classic methods do
not provide an efficient solution for the problem.
Since heuristics only provide time-efficient solutions and
compromise accuracy, these are combined with optimization
algorithms to improve efficiency.
Example: TSP

The most common example of using a heuristic is the Travelling

Salesman Problem.
There is a provided list of cities and their distances.
The user has to find the optimal route for the salesman to return
to the starting city after visiting every city on the list.
Greedy algorithms tackle this NP-hard problem by constructing a
good, though not necessarily optimal, solution.
According to this heuristic, picking the nearest unvisited city from the
current city at every step provides a good solution.
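A minimal sketch of the nearest-neighbour greedy heuristic described above; the city coordinates are made up, and the tour produced is typically good but not guaranteed optimal.

```python
# Illustrative nearest-neighbour (greedy) heuristic for the Travelling Salesman
# Problem: at each step go to the closest unvisited city. Coordinates are made up.
import math

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_tour(start="A"):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                      # return to the starting city
    length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
    return tour, length

print(nearest_neighbour_tour())             # a good (not necessarily optimal) tour
```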
Types of Searching Algorithms

Informed Search
Greedy Search
A* Search
Uninformed Search

Breadth-First Search
Depth First Search
Uniform Cost Search
Iterative Deepening Depth First Search
Bidirectional Search
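An illustrative sketch contrasting two of the uninformed strategies listed above, breadth-first search (FIFO frontier) and depth-first search (LIFO frontier), on a small assumed graph.

```python
# Compact sketch of two uninformed search strategies on a toy graph:
# breadth-first search (queue) and depth-first search (stack). The graph is assumed.
from collections import deque

GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": ["G"], "G": []}

def search(start, goal, frontier_pop):
    frontier, visited = deque([[start]]), set()
    while frontier:
        path = frontier_pop(frontier)        # popleft() -> BFS, pop() -> DFS
        node = path[-1]
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            for nbr in GRAPH[node]:
                frontier.append(path + [nbr])

print(search("S", "G", deque.popleft))       # BFS finds ['S', 'A', 'G'] (fewest edges)
print(search("S", "G", deque.pop))           # DFS finds ['S', 'B', 'C', 'G'] first here
```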
Evolutionary Computation

In computer science, evolutionary computation is a family of


algorithms for global optimization inspired by biological evolution.
In technical terms, they are a family of population-based trial and
error problem solvers with a metaheuristic or stochastic
optimization character.

In evolutionary computation, an initial set of candidate solutions is


generated and iteratively updated.
Evolutionary computation techniques can produce highly
optimized solutions in a wide range of problem settings, making
them popular in computer science.
Genetic Algorithms

A class of evolutionary computation methods


A genetic algorithm is a search heuristic that is inspired by
Charles Darwin's theory of natural evolution.
This algorithm reflects the process of natural selection where the
fittest individuals are selected for reproduction in order to produce
offspring of the next generation.
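A toy genetic-algorithm sketch for the OneMax problem (maximise the number of 1s in a bitstring), showing selection, crossover and mutation; the population size, rates and generation count are arbitrary illustrative choices, not recommendations.

```python
# Toy genetic algorithm (GA) sketch for "OneMax": evolve bitstrings so that the
# number of 1s is maximised. All parameters are illustrative assumptions.
import random

LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(bits):
    return sum(bits)                                   # count of 1s

def select(population):
    """Tournament selection: keep the fitter of two random individuals."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randint(1, LENGTH - 1)                # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best), best)                             # fitness approaches LENGTH
```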
Unit 2 : Problem Solving Methods
Problem solving Methods
Search Strategies

Uninformed

Informed

Heuristics

Local Search Algorithms and Optimization Problems

Searching with Partial Observations

Constraint Satisfaction Problems

Constraint Propagation

Backtracking Search

 Game Playing

Optimal Decisions in Games

Alpha – Beta Pruning

Stochastic Games
Problem Solving Methods
Problem solving Methods
Search Strategies

Uninformed

Informed

Heuristics

Local Search Algorithms and Optimization Problems

Searching with Partial Observations

Constraint Satisfaction Problems

Constraint Propagation

Backtracking Search

 Game Playing

Optimal Decisions in Games

Alpha – Beta Pruning

Stochastic Games
