Artificial Intelligence
VSSUT
Gate Smashers YT
INTRODUCTION
Artificial Intelligence
Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit characteristics we associate with intelligence in human behaviour – understanding language, learning, reasoning, solving problems, and so on.
● Intelligence: Intelligence is the ability to acquire, understand and apply knowledge to achieve goals in the world.
Objectives of AI
1. Problem-solving
2. Knowledge representation
3. Learning methods in AI for engineering intelligent systems
4. Assess the applicability, strengths and weaknesses of these methods in solving engineering problems.
4 categories of AI definition
2. Search
a. Extra information may be used to guide the search. Search methods include:
i. Informed searches
ii. Uninformed searches
3. Reasoning
a. Typically, rule-based systems are developed using human expertise to identify the rules of the problem;
b. A database of previous problems and solutions is searched for the closest match to the current problem.
4. Learning
a. AI systems use the ability to adapt or learn, based on the history, or knowledge, of the system.
b. Learning takes the form of updating knowledge, adjusting the search, reconfiguring the representation, and augmenting the reasoning.
c. Learning methods use statistical learning (using the frequencies of different types of historical events to bias future action or to develop inductive hypotheses, typically assuming that events follow some known distribution of occurrence).
AGENTS
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Examples -
● A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators.
● A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
● A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
Sensors
It is a device that senses changes in the environment & sends that information to other
electronic devices. An agent senses the environment through sensors only.
Actuators
It is a component of the machine that converts electrical signals into mechanical motion. Actuators are responsible for the movement & control of the system.
Goals of Agent
1. High Performance
2. Optimised Results
3. Rational Action
Rational Agent
A rational agent is one that does the right thing, i.e., for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
What is rational at any given time depends on four things:
● The performance measure defines the criterion of success
● The agent’s prior knowledge of the environment
● The actions that the agent can perform
● The agent’s percept sequence to date
Task Environments
Task environments are essentially the “problems” to which rational agents are the “solutions.” In designing an agent, the first step must always be to specify the task environment as fully as possible, which is done via PEAS (Performance, Environment, Actuators, Sensors).
Structure of Agents
● Architecture is the machinery that the agent executes on. It is a device with sensors and actuators, for example, a robotic car, a camera, a PC.
● Agent program is an implementation of an agent function.
● Agent function is a map from the percept sequence (history of all that an agent has perceived to date) to an action.
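The agent-function idea above can be sketched as a table-driven agent, which looks up the full percept sequence seen so far in a table of actions. This is only an illustrative sketch; the vacuum-world percepts, the `NoOp` default, and the table entries are all assumptions, not from the notes.

```python
# A minimal sketch of the agent function: a table-driven agent maps the
# percept sequence seen so far to an action. The tiny vacuum-world
# percept/action table below is a hypothetical example.

def make_table_driven_agent(table):
    percepts = []  # history of everything the agent has perceived to date

    def agent_function(percept):
        percepts.append(percept)
        # Look up the action for the full percept sequence to date.
        return table.get(tuple(percepts), "NoOp")

    return agent_function

# Hypothetical vacuum-world table: (location, status) percepts -> action
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept sequence, which is why the later agent designs in these notes replace the table with a program.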
Types of Agents
1. Simple Reflex Agents
2. Model-Based Reflex Agents
3. Goal-Based Agents
4. Utility-Based Agents
5. Learning Agent
1. Simple Reflex Agents
Problems:
● They have very limited intelligence.
● They do not have knowledge of non-perceptual parts of the current state.
● Mostly too big to generate and to store.
● Not adaptive to changes in the environment.
2. Model-Based Reflex Agents
● A model-based agent has two important factors:
○ Model: It is knowledge about "how things happen in the world," so it is called a model-based agent.
○ Internal State: It is a representation of the current state based on percept history.
● These agents have the model, "which is knowledge of the world", and based on the model they perform actions.
● The model-based agent can work in a partially observable environment and track the situation.
● Updating the agent state requires information about:
○ How the world evolves
○ How the agent's actions affect the world.
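A model-based reflex agent like the one described above can be sketched as a class that keeps an internal state updated from percepts. The two-square vacuum world, the state encoding, and the rules are illustrative assumptions, not part of the notes.

```python
# A minimal sketch of a model-based reflex agent: an internal state is
# updated from the percept history, and actions use that state to cope
# with a partially observable world. The vacuum world is hypothetical.

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal state: what the agent believes about each location.
        self.state = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        # Model: a percept reveals the true status of the current square.
        location, status = percept
        self.state[location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # Use the internal state: visit the other square if it may be dirty.
        other = "B" if location == "A" else "A"
        if self.state[other] != "Clean":
            return "Right" if location == "A" else "Left"
        return "NoOp"

agent = ModelBasedReflexAgent()
print(agent.choose_action(("A", "Dirty")))  # Suck
print(agent.choose_action(("A", "Clean")))  # Right
```

Unlike a simple reflex agent, the second decision here depends on remembered state (square B has not been seen yet), not only on the current percept.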
2. Problem formulation: It is one of the core steps of problem-solving, which decides what action should be taken to achieve the formulated goal. In AI this core part depends upon a software agent, which consists of the following components to formulate the associated problem.
3. Initial State: The problem requires an initial state, which starts the AI agent towards the specified goal.
4. Action: This stage of problem formulation works with a function that, given a state, yields all possible actions applicable in that state, starting from the initial state.
5. Transition: This stage of problem formulation integrates the actual action done by the previous action stage and produces the resulting state, which is forwarded to the next stage.
6. Goal test: This stage determines whether the specified goal is achieved by the integrated transition model or not; whenever the goal is achieved, stop the action and move to the next stage to determine the cost to achieve the goal.
7. Path costing: This component of problem-solving numerically assigns what the cost will be to achieve the goal. It accounts for all hardware, software and human working costs.
Well-defined Problems
Explaining this via example - Suppose the agent is in Arad, Romania & has to go to Bucharest the following day. Our agent has now adopted the goal of driving to Bucharest and is considering where to go from Arad. Three roads lead out of Arad, one toward Sibiu, one to Timișoara, and one to Zerind.
A problem can be defined formally by five components:
1. Initial State: The description of the starting configuration of the agent. For example, the initial state for our agent in Romania might be described as In(Arad).
2. Actions: A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. An action takes the agent from one state to another state; a state can have a number of successor states. For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timișoara), Go(Zerind)}.
3. Transition Model: A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. For example, RESULT(In(Arad), Go(Zerind)) = In(Zerind).
4. Goal Test: The goal test determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. The agent’s goal in Romania is the singleton set {In(Bucharest)}.
5. Path Cost: A path cost function assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometres.
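The five components can be sketched directly in code for the Romania example. Only the three roads out of Arad mentioned above are encoded; the step costs follow the commonly used textbook map but should be treated as assumed values, and city names are written without diacritics to keep identifiers ASCII.

```python
# A sketch of the five problem components for the Romania route example.
# Road lengths (km) are assumed values for illustration.

INITIAL_STATE = "Arad"
GOAL_STATES = {"Bucharest"}

ROADS = {  # state -> {neighbouring city: step cost in km}
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
}

def actions(s):
    """ACTIONS(s): the set of Go(city) actions applicable in state s."""
    return set(ROADS.get(s, {}))

def result(s, a):
    """RESULT(s, a): the transition model; going to a city puts us there."""
    return a

def goal_test(s):
    return s in GOAL_STATES

def path_cost(path):
    """Sum of step costs (kilometres) along a path of states."""
    return sum(ROADS[s][t] for s, t in zip(path, path[1:]))

print(sorted(actions("Arad")))        # ['Sibiu', 'Timisoara', 'Zerind']
print(result("Arad", "Zerind"))       # Zerind
print(goal_test("Bucharest"))         # True
print(path_cost(["Arad", "Zerind"]))  # 75
```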
State Space
● State space is the set of all states reachable from the initial state by any sequence of actions.
● The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions.
● A path in the state space is a sequence of states connected by a sequence of actions.
● The initial state, actions, and transition model define the state space of the problem.
Searching Process
1. Check the current state.
2. Execute the allowable actions to move to the next state.
3. Check if the new state is a solution state.
4. If it is not, then the new state becomes the current state and the process is repeated until the solution is found, OR the state space is exhausted.
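The four-step searching process above can be sketched as one generic loop. The `successors` function and the tiny example graph are assumptions for illustration; with a FIFO frontier this behaves like BFS, and swapping in a LIFO stack would make it DFS.

```python
# A generic sketch of the searching process: repeat until a solution
# state is found or the state space is exhausted.

from collections import deque

def search(initial, successors, is_goal):
    frontier = deque([initial])  # states waiting to become "current"
    visited = {initial}
    while frontier:                        # state space not yet exhausted
        current = frontier.popleft()       # 1. check the current state
        if is_goal(current):               # 3. is it a solution state?
            return current
        for nxt in successors(current):    # 2. execute allowable actions
            if nxt not in visited:         # 4. repeat with the new state
                visited.add(nxt)
                frontier.append(nxt)
    return None  # state space exhausted, no solution

# Tiny illustrative state graph (an assumption, not from the notes):
graph = {"S": ["A", "B"], "A": ["G"], "B": [], "G": []}
print(search("S", lambda s: graph[s], lambda s: s == "G"))  # G
```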
Search Strategies
1. Uninformed Search
a. BFS
b. DFS
c. Uniform Cost Search
d. Depth-Limited Search
e. Iterative Deepening Depth-First Search
f. Bidirectional Search
2. Informed Search
a. Best First Search
b. A* Search
Uninformed Search
Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms do not have additional information about the state or search space other than how to traverse the tree, so it is also called blind search.
1. Breadth-first Search
a. BFS algorithm starts searching breadthwise in a tree or graph, i.e. from the root
node of the tree and expands all successor nodes at the current level before
moving to nodes of the next level.
b. Breadth-first search is implemented using a FIFO queue data structure.
c. This will choose the shallowest node first (closest to the start node).
d. Advantages:
i. BFS will provide a solution if any solution exists.
ii. If there is more than one solution for a given problem, then BFS will provide the minimal solution, which requires the least number of steps.
e. Disadvantages:
i. It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.
ii. BFS needs lots of time if the solution is far away from the root node.
f. Properties:
i. Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed in BFS until the shallowest node, where d = depth of the shallowest solution and b = branching factor (number of successors of a node):
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
g. Algorithm:
i. Create a variable called NODE-LIST and set it to the initial state.
ii. Until a goal state is found or NODE-LIST is empty do:
1. Remove the first element from NODE-LIST and call it E. If
NODE-LIST was empty, quit.
2. For each way that each rule can match the state described in E do:
a. Apply the rule to generate a new state.
b. If the new state is a goal state, quit and return this state.
c. Otherwise, add the new state to the end of NODE-LIST
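The NODE-LIST algorithm above can be sketched in Python, with the "rules" modelled as a successor function. The example state space (numbers generated by n+1 and 2n) is a hypothetical illustration; note that, exactly as the algorithm states, duplicate states can reoccur in NODE-LIST.

```python
# A sketch of the NODE-LIST breadth-first algorithm from the notes.

from collections import deque

def bfs(initial_state, apply_rules, is_goal):
    node_list = deque([initial_state])  # FIFO queue: shallowest node first
    while node_list:                    # quit when NODE-LIST is empty
        e = node_list.popleft()         # remove the first element, call it E
        for new_state in apply_rules(e):    # apply each matching rule
            if is_goal(new_state):          # goal state: quit and return it
                return new_state
            node_list.append(new_state)     # else add to end of NODE-LIST
    return None

# Hypothetical rules: from a number n we can generate n + 1 and 2 * n.
rules = lambda n: [n + 1, 2 * n]
print(bfs(1, rules, lambda n: n == 6))  # 6
```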
2. Depth-first Search
e. Disadvantages:
i. There is the possibility that many states keep reoccurring, and there is no
guarantee of finding the solution.
ii. DFS algorithm goes for deep down searching and sometimes it may go to
the infinite loop.
f. Properties:
i. Time Complexity: The time complexity of the DFS algorithm can be obtained from the number of nodes traversed in DFS until the deepest node, where m = maximum depth of any node and n = number of nodes (it is far more than BFS):
T(n) = 1 + n^2 + n^3 + ... + n^m = O(n^m)
ii. Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
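A DFS sketch matching the properties above: the recursion stack stores only a single path from the root, and without a depth bound the search can run forever down an infinite branch, so a depth limit is added here as a guard. The example tree is an assumption.

```python
# A depth-first search sketch with a depth limit to avoid the
# infinite-descent problem noted in the disadvantages above.

def dfs(state, successors, is_goal, depth_limit=50):
    if is_goal(state):
        return [state]                 # path from this state to the goal
    if depth_limit == 0:
        return None                    # guard against going infinitely deep
    for child in successors(state):
        path = dfs(child, successors, is_goal, depth_limit - 1)
        if path is not None:
            return [state] + path      # prepend current state to the path
    return None

# Hypothetical search tree:
tree = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
print(dfs("S", lambda s: tree.get(s, []), lambda s: s == "G"))
# ['S', 'B', 'G']
```

Note how DFS fully explores the S→A→C branch (deep down searching) before backtracking to find the goal under B.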
Informed Search
Informed search uses problem-specific knowledge (i.e. heuristics) that can dramatically improve the search speed. Heuristics use domain-specific knowledge to estimate the quality or potential of partial solutions & identify the most promising search path.
Heuristics
Heuristics are criteria, methods or principles for deciding which among several alternative courses of action promises to be the most effective in order to achieve some goal.
Heuristic Function
● A heuristic function at a node n is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n).
h(n) = estimated cost of the cheapest path from node n to a goal node.
● It’s a function that returns a number, which tells how easy/difficult it is to move from the start state to the goal state.
● Examples:
○ We want a path from Kolkata to Guwahati. Heuristic for Guwahati may be
straight-line distance between Kolkata and Guwahati
h(Kolkata) = Euclidean Distance(Kolkata, Guwahati)
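The straight-line-distance heuristic from the example can be sketched as below. The city coordinates are made-up illustrative values (chosen to give round numbers), not real geographic positions.

```python
# A sketch of the straight-line (Euclidean) distance heuristic.
# The coordinates below are hypothetical, not real map data.

import math

coords = {"Kolkata": (0.0, 0.0), "Guwahati": (300.0, 400.0)}

def h(city, goal="Guwahati"):
    """h(n): estimated cost of the cheapest path from city n to the goal."""
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)  # Euclidean distance

print(h("Kolkata"))   # 500.0
print(h("Guwahati"))  # 0.0  (a goal node estimates zero remaining cost)
```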
Best First Search
d. Algorithm:
i. Start with OPEN containing just the initial state
ii. Until a goal is found or there are no nodes left on OPEN do:
1. Pick the best node on OPEN
2. Generate its successors
3. For each successor do:
a. If it has not been generated before, evaluate it, add it to
OPEN, and record its parent.
b. If it has been generated before, change the parent if this
new path is better than the previous one. In that case,
update the cost of getting to this node and to any
successors that this node may already have.
e. Properties:
i. Time Complexity: The worst-case time complexity of greedy best first search is O(b^m).
ii. Space Complexity: The worst-case space complexity of greedy best first search is O(b^m), where m is the maximum depth of the search space.
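The OPEN-list algorithm above can be sketched with a priority queue ordered by h(n) alone, which is what makes it greedy best-first search. The example graph and heuristic values are assumptions; the sketch returns the path rather than updating parent pointers in place, which simplifies step 3b.

```python
# A sketch of greedy best-first search: always expand the node on OPEN
# with the lowest heuristic value h(n).

import heapq

def greedy_best_first(start, goal, successors, h):
    open_list = [(h(start), start, [start])]   # OPEN holds (h, state, path)
    closed = set()
    while open_list:                           # until no nodes left on OPEN
        _, state, path = heapq.heappop(open_list)  # pick the best node
        if state == goal:
            return path
        if state in closed:                    # already generated before
            continue
        closed.add(state)
        for nxt in successors(state):          # generate its successors
            if nxt not in closed:
                heapq.heappush(open_list, (h(nxt), nxt, path + [nxt]))
    return None

# Hypothetical graph and heuristic estimates:
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h_values = {"S": 5, "A": 1, "B": 2, "G": 0}
print(greedy_best_first("S", "G", lambda s: graph[s], h_values.get))
# ['S', 'A', 'G']
```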
Hill Climbing Search
e. Algorithm:
i. Examine the current state; return success if it is a goal state.
ii. Determine the successors of the current state.
iii. Choose the successor of maximum goodness (break ties randomly).
iv. If the goodness of the best successor is less than the current state's goodness, stop.
v. Otherwise make the best successor the current state and go to step ii.
f. Problems in Hill Climbing Algorithm:
i. Local Maximum - The algorithm terminates when the current node is a local maximum, as it is better than its neighbours. However, there exists a global maximum where the objective function value is higher.
Solution: Backtracking can mitigate the problem of a local maximum, as it starts exploring alternate paths when it encounters a local maximum.
ii. Ridge - A ridge occurs when there are multiple peaks and all have the same value, or in other words, there are multiple local maxima which are the same as the global maximum.
Solution: Ridge obstacles can be overcome by moving in several directions at the same time.
iii. Plateau - A plateau is a region where all the neighbouring nodes have the same value of the objective function, so the algorithm finds it hard to select an appropriate direction.
Solution: Plateau obstacles can be overcome by making a big jump from the current state, which will land you in a non-plateau region.
g. Characteristics:
i. Not Optimal
ii. Not Complete
iii. Requires less space
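The hill-climbing loop above can be sketched as follows. The objective function and the integer state space are assumptions; note the sketch stops when the best successor is no better than the current state (`<=` rather than `<`), which also halts on a plateau instead of wandering forever.

```python
# A sketch of the hill-climbing algorithm: repeatedly move to the best
# successor until no successor improves on the current state.

def hill_climbing(current, successors, goodness):
    while True:
        neighbours = successors(current)        # step ii: determine successors
        if not neighbours:
            return current
        best = max(neighbours, key=goodness)    # step iii: maximum goodness
        if goodness(best) <= goodness(current): # step iv: no improvement, stop
            return current
        current = best                          # step v: repeat from the best

# Maximise f(x) = -(x - 3)^2 over integer states, moving by +/- 1.
f = lambda x: -(x - 3) ** 2
print(hill_climbing(0, lambda x: [x - 1, x + 1], f))  # 3
```

This toy objective has a single peak, so hill climbing finds the global maximum; on a function with several peaks the same loop would stop at whichever local maximum it reaches first, exactly the problem described above.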
Uninformed Search | Informed Search
Impractical to solve large search problem cases & uses more time & space. | Can handle large search problem cases & uses less time & space.
It can give the best solution. | It can give a good solution but not necessarily the most optimal solution.
There are three factors which are put into the machine, which makes it valuable:
1. Knowledge: The information related to the environment is stored in the machine.
2. Reasoning: The ability of the machine to understand the stored knowledge.
3. Intelligence: The ability of the machine to make decisions on the basis of the stored information.
Logic
● Logic is the primary vehicle for representing and reasoning about knowledge.
● It provides a way both to represent knowledge and to reason over it.
● To specify or define a particular logic, one needs to specify three things:
○ Syntax:
■ The atomic symbols of the logical language,
■ The rules for constructing well-formed, non-atomic expressions (symbol structures) of the logic.
■ Syntax specifies the symbols in the language and how they can be combined to form sentences. Hence facts about the world are represented as sentences in logic.
○ Semantics:
■ The meanings of the atomic symbols of the logic,
■ The rules for determining the meanings of non-atomic expressions of the logic.
■ It specifies what facts in the world a sentence refers to. Hence, it also specifies how you assign a truth value to a sentence based on its meaning in the world.
○ Syntactic Inference Method:
■ The rules for determining a subset of logical expressions, called the theorems of the logic.
■ It refers to a mechanical method for computing (deriving) new (true) sentences from existing sentences.
Syntax of PL
1. Atomic Proposition: Atomic propositions are simple propositions. An atomic proposition consists of a single proposition symbol. These are the sentences which must be either true or false.
2. Compound Proposition: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.
Logical Connectives
Rules of Inference
Properties of PL
● Commutativity:
○ P ∧ Q = Q ∧ P
○ P ∨ Q = Q ∨ P
● Associativity:
○ (P ∧ Q) ∧ R = P ∧ (Q ∧ R)
○ (P ∨ Q) ∨ R = P ∨ (Q ∨ R)
● Identity element:
○ P ∧ True = P
○ P ∨ True = True
● Distributivity:
○ P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
○ P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R)
● De Morgan's Laws:
○ ¬(P ∧ Q) = (¬P) ∨ (¬Q)
○ ¬(P ∨ Q) = (¬P) ∧ (¬Q)
● Double-negation elimination:
○ ¬(¬P) = P
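Because propositional formulas over a few variables have finitely many truth assignments, each equivalence above can be checked mechanically by enumeration. The helper below is an illustrative sketch, shown on De Morgan's law and distributivity.

```python
# Verify propositional equivalences by enumerating all truth assignments.

from itertools import product

def equivalent(f, g, n_vars=3):
    """True iff formulas f and g agree on every assignment of truth values."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n_vars))

# De Morgan: not (P and Q)  ==  (not P) or (not Q)
print(equivalent(lambda p, q, r: not (p and q),
                 lambda p, q, r: (not p) or (not q)))  # True

# Distributivity: P and (Q or R)  ==  (P and Q) or (P and R)
print(equivalent(lambda p, q, r: p and (q or r),
                 lambda p, q, r: (p and q) or (p and r)))  # True
```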
Limitations of PL
1. We cannot represent relations like ALL, some, or none with propositional logic. Examples:
a. All the girls are intelligent.
b. Some apples are sweet.
2. Propositional logic has limited expressive power.
3. In propositional logic, we cannot describe statements in terms of their properties or
logical relationships.
Predicate Logic
This technique is used to represent objects in the form of predicates or quantifiers. It is different from propositional logic as it removes the complexity of the sentence represented by it. In short, FOPL (first-order predicate logic) is an advanced version of propositional logic.
Predicates
Statements involving variables which are neither true nor false until the values of the variables are specified.
Eg - "x is an animal"
x = subject
"is an animal" = predicate
Quantifiers
Words that refer to quantities such as “some” or “all”. A quantifier tells for how many elements a given predicate is true. Quantifiers are used to express quantities without giving an exact number, i.e. some, many, all, etc.
Types of Quantifiers
1. Universal Quantifier (∀): P(x) holds for all values of x in the domain.
2. Existential Quantifier (∃): There exists an element x in the domain such that P(x).
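Over a finite domain, the two quantifiers correspond directly to Python's built-in `all()` and `any()`. The predicate and domain below are illustrative assumptions.

```python
# Universal and existential quantifiers over a finite domain.

domain = [2, 4, 6, 7]
is_even = lambda x: x % 2 == 0   # predicate P(x): "x is even"

# Universal quantifier (for all x in the domain, P(x)):
print(all(is_even(x) for x in domain))  # False (7 is odd)

# Existential quantifier (there exists x in the domain with P(x)):
print(any(is_even(x) for x in domain))  # True
```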
Rules of Inference