Ai - Unit I
OBJECTIVES
• To have a basic proficiency in a traditional AI language including
ability to write simple to moderate programs and to understand
code written in that language
• To improve analytical and problem-solving skills, choosing among
various heuristic search techniques based on the characteristics of
the problem, and to improve the design and playing of games
• To have knowledge of propositional calculus and propositional and
predicate logic, and to understand a few systems such as natural
deduction, the axiomatic system, etc.
Contd..
• To have an understanding of the basic issues of knowledge
representation and blind and heuristic search, as well as an
understanding of other topics such as minimax, resolution, etc. that
play an important role in AI programs.
• To have a basic understanding of some of the more advanced topics
of AI such as learning, natural language processing, agents and
robotics, expert systems, and planning
• To have basic knowledge on probabilistic analysis and networks as
well as fuzzy systems and fuzzy logics.
OUTCOMES:
• Identify problems that are amenable to solution by AI methods
TEXT BOOKS:
• Artificial Intelligence- Saroj Kaushik, CENGAGE Learning
• Artificial Intelligence: A Modern Approach, 2nd ed.,
Stuart Russell, Peter Norvig, PEA
• Artificial Intelligence- Elaine Rich, Kevin Knight, Shivashankar B
Nair, 3rd ed., TMH
• Introduction to Artificial Intelligence, Patterson, PHI
REFERENCE BOOKS
• Artificial Intelligence: Structures and Strategies for
Complex Problem Solving - George F. Luger, 5th
edition, PEA
• Introduction to Artificial Intelligence, Wolfgang
Ertel, Springer
• Artificial Intelligence: A New Synthesis, Nils J.
Nilsson, Elsevier
Why Artificial Intelligence?
With the help of AI, you can create software or devices which can solve
real-world problems easily and accurately, in areas such as healthcare,
marketing, traffic, etc.
With the help of AI, you can create your own personal virtual assistant, such as
Cortana, Google Assistant, Siri, etc.
With the help of AI, you can build robots which can work in environments
where human survival is at risk.
AI opens a path for other new technologies, new devices, and new
opportunities.
UNIT - I
• Introduction to artificial intelligence:
– Introduction
– History
– Intelligent systems
– Foundations of AI
– Applications
– Tic-tac-toe game playing
– Development of AI languages
– Current trends in AI
Introduction
• The foundation of AI was laid with Boolean theory by the
mathematician George Boole and other researchers
• Since the invention of the computer in 1943, AI has been of
interest to researchers
• They have always aimed to make machines more intelligent than
humans
• AI has grown substantially over the last six decades, progressing
from simple to intelligent programs
• AI comprises numerous subfields, ranging from general-purpose
areas to specific tasks
• General-purpose areas: perception, logical reasoning, etc.
• Specific tasks: game playing, theorem proving, diagnosing diseases,
etc.
• Scientists in other fields use AI to systematize and automate
intellectual tasks
• AI is engaged in two different significant fields:
– Science of human intelligence
– Engineering discipline
Goals of Artificial Intelligence
• Replicate human intelligence
• Solve Knowledge-intensive tasks
• An intelligent connection of perception and action
• Building a machine which can perform tasks that require
human intelligence, such as:
• Proving a theorem
• Playing chess
• Plan some surgical operation
• Driving a car in traffic
• Creating a system which can exhibit intelligent behaviour,
learn new things by itself, demonstrate, explain, and
advise its users.
Advantages
High Accuracy with less errors
High-Speed
Useful for risky areas
Digital Assistant
Useful as a public utility.
Disadvantages
High Cost
Can't think out of the box
No feelings and emotions
Increase dependency on machines
No Original Creativity
History
Around 400 B.C., philosophers held that the mind operates on knowledge
encoded in some internal language
• John McCarthy organized a conference on machine intelligence in
1956; since then the field has been known as AI
• In 1957, a program named GPS (General Problem Solver)
was developed and tested on problems requiring common sense
• John McCarthy announced his new development, LISP
(LISt Processing language), in 1958
• Marvin Minsky of MIT demonstrated that computer programs could
solve spatial and logic problems
• In the 1960s, another program, named STUDENT, was developed to
solve algebra problems
• L. Zadeh developed fuzzy sets and fuzzy logic to make decisions
under uncertain conditions
• Terry Winograd at MIT developed SHRDLU, a program which
carries on a simple dialogue with a user in English; it was written
in MACLISP
• Minsky also developed frame theory around 1970, used for
storing structured knowledge for use by AI programs
• R. Kowalski developed the PROLOG language around 1970 (expert
systems were also developed around the same time)
Intelligent systems
• AI is combination of Computer Science, Physiology &
Philosophy
• AI is a broad area from Machine Vision to Expert
Systems
• John McCarthy defines intelligence as the computational
part of the ability to achieve goals in the world
• Different people think of AI differently & there is no unique
definition
• AI is the study of making machines which do things
intelligently
• Hence, AI programs must have the capabilities and
characteristics of intelligence, such as
– Learning – e.g. trial and error
– Reasoning - a way to infer facts from existing data
– Problem solving - finding winning moves in board
games; identifying people from their photographs
– Inferencing - the process of using a trained neural
network model to make a prediction.
– perceiving - the process of interpreting vision, sounds,
smell, and touch
– comprehending information – reading passage and
getting answers
• Quality of response
– It is limited by the sophistication of the ways in
which the input can be processed
– E.g. the number of templates available is a limitation
• Coherence
– Earlier versions had no structure on the conversation
– Every statement was based entirely on the current
input, and no context information was maintained
– More complex versions can do better
– Any intelligence strongly depends on coherence
• Semantics
– Have no semantic representation of content
– It does not have intelligence of understanding
– It imitates the human conversation style
Categorization of Intelligent Systems
• Systems that think like humans
– Requires cognitive modelling approaches
– Should know the functioning of the brain and its
mechanism for processing information
– It is an area of cognitive science
– The stimuli are converted into mental
representation
– Cognitive processes manipulate it to build new
representation to generate actions
– The neural network is a computing model that processes
information in a way similar to the brain
• Systems that act like humans
– Requires that the overall behaviour of the system be human-like
– E.g. John is a human, and all humans are mortal, hence the conclusion is John
is mortal
– not that all intelligent behaviours are mediated by logical deliberation
• To summarize, it can be defined that intelligence is the property of
mind which encompasses capabilities like:
– Reason and draw meaningful conclusions
– Solve problems
– Think abstractly
– Learn new concepts and tasks that require high levels of intelligence
Components of AI Program
• AI Program should have:
– Knowledge base and
– navigational capacity which contains control strategy and
Inference mechanism
• Knowledge base:
– AI programs should be learning in nature, updating their
knowledge accordingly
– It consists of facts and rules, and is typically voluminous,
incomplete, imprecise, dynamic and constantly changing
• Control strategy:
– It determines which rule to be applied
• Inference mechanism:
– It requires search through knowledge base
Foundations of AI
• Commonly used AI techniques are rule-based, fuzzy logic,
neural networks, decision theory, statistics, probability
theory, genetic algorithms, etc.
• Since AI is interdisciplinary in general, foundations of AI
are
– Mathematics
– Neuroscience
– Control theory
– Linguistics
Applications
• Business: financial strategies, advising
• Engineering: check design, offer suggestions to create new products,
expert systems
• Manufacturing: assembly, inspection, maintenance
• Medicine: monitoring, diagnosing, prescribing
• Education: teaching
• Fraud detection
• Object identification
• Space shuttle scheduling
• Information retrieval
Tic-tac-toe game playing
• The objective is to write a program which never
loses
• Three approaches to playing the game are presented,
in increasing order of:
– Complexity
– Use of generalization
– Clarity of their knowledge
– Extensibility of their approach
Approach 1
• 3x3 board is represented as nine element vector
• Each element in a vector can contain any of the
following three digits
– 0- represents blank position
– 1 – indicates X player move
– 2 – indicates O player move
1. View the vector as a ternary number
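The first step above can be sketched in Python (the function name and example boards are assumed for illustration, not from the slides): the nine-element vector is read as a base-3 number, giving a decimal index into the move table of 3^9 = 19683 entries.

```python
# Treat the nine-element board vector (0 = blank, 1 = X, 2 = O) as a
# ternary number, most significant digit first, and convert it to a
# decimal index into the move table.

def board_to_index(board):
    """Return an index in the range 0 .. 3**9 - 1 for a nine-cell board."""
    index = 0
    for cell in board:          # cell is 0, 1 or 2
        index = index * 3 + cell
    return index

print(board_to_index([0] * 9))          # empty board -> 0
print(board_to_index([0] * 8 + [1]))    # X in the last square -> 1
print(board_to_index([2] + [0] * 8))    # O in the first square -> 2 * 3**8 = 13122
```

The move table itself (one precomputed reply per index) is what makes this approach fast but unintelligent.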
Disadvantages
• Requires lot of space to store move table
• Lot of work is required to create move table
• Move table creation is highly error prone
• Cannot be extended to 3D since 3^27 board
positions would be required
• This program is not intelligent at all since it does
not meet any of AI requirements
Approach 2
• The board B[1..9] is represented by a nine element vector
• Here
– 2 – represents blank position
– 3 – indicates X player move
– 5 – indicates O player move
• Go(n) – makes the computer's move in square n
• Make_2 – returns the centre square (5) if it is blank;
otherwise returns a blank non-corner square (2, 4, 6 or 8)
• PossWin(p) – if player p can win in the next move, then it
returns the index (from 1 to 9) of the square that constitutes the
winning move
• Otherwise it returns 0
• The function PossWin operates by checking, one at a time,
each of the rows, columns and diagonals
• If PossWin(p) = 0, then p cannot win on the next move
• Find whether the opponent can win; if so, block that square
• The check uses the product of a line's three squares:
– if the product of a row, column or diagonal is 3×3×2 = 18, then
player X can win, as the line holds two Xs and one blank square
– if the product is 5×5×2 = 50, then player O can win
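Under the 2/3/5 encoding, these products single out a line with two of a player's marks and one blank. A minimal Python sketch of PossWin along these lines (function name and example board assumed for illustration):

```python
# PossWin(p) sketch: board is B[1..9] (index 0 unused), with
# 2 = blank, 3 = X, 5 = O.  A line whose product is p*p*2 has two of
# p's marks and one blank, and that blank is the winning square.

LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),      # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),      # columns
         (1, 5, 9), (3, 5, 7)]                 # diagonals

def poss_win(board, p):
    """p is 3 for X or 5 for O; returns a winning square index, else 0."""
    target = p * p * 2
    for line in LINES:
        if board[line[0]] * board[line[1]] * board[line[2]] == target:
            for sq in line:
                if board[sq] == 2:             # the blank completes the line
                    return sq
    return 0

# X occupies squares 1 and 2, so square 3 completes the top row:
b = [None, 3, 3, 2, 2, 2, 2, 2, 2, 2]
print(poss_win(b, 3))   # 3
print(poss_win(b, 5))   # 0
```

Using prime values (3, 5) with a blank of 2 is exactly what makes a single multiplication enough to classify a line.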
Approach 3
• The board is chosen as a magic square of
order 3
• The magic square of order n consists of n²
distinct numbers from 1 to n²
• The numbers in every row, column and diagonal
sum to the same constant
How to create magic square 3x3
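One standard way to build an odd-order magic square is the Siamese (de la Loubère) method: start in the middle of the top row, repeatedly move up and to the right with wrap-around, and drop down one cell when the target is occupied. A Python sketch (this construction method is a common choice, not necessarily the one shown on the slide):

```python
def magic_square(n):
    """Siamese method for odd n: place 1 in the middle of the top row,
    then move up-right (with wrap-around); on collision, move down."""
    square = [[0] * n for _ in range(n)]
    row, col = 0, n // 2
    for value in range(1, n * n + 1):
        square[row][col] = value
        r, c = (row - 1) % n, (col + 1) % n
        if square[r][c]:                 # occupied: drop down one cell
            row = (row + 1) % n
        else:
            row, col = r, c
    return square

sq = magic_square(3)
for r in sq:
    print(r)        # rows, columns and diagonals all sum to 15
```

For n = 3 this yields the familiar square with magic constant 15, the value the game-playing approach below relies on.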
• In this approach, a list of blocks played by each player
is maintained
• Each pair of blocks a player owns is considered
• Difference D between 15 and the sum of the blocks is
calculated
• If D<0 or D>9, then those two blocks are not collinear
and so can be ignored
• Else, if the block representing the difference is blank, i.e.
not in either list, then the player can move into that block
1. Suppose H plays in block 8
2. C plays in block 5
3. H plays in block 1
4. C checks if H can win or not
– Compute sum of blocks played by H
• S=8+1=9
• Compute D=15-9=6
– The sixth block is a winning block for H and not there on either list. So, C blocks 6
5. H plays in block 4
6. C checks if C can win
– Compute sum of blocks played by C
• S=5+6=11
• Compute D=15-11=4; discard this block as it is already occupied (it is in H's list)
– Now C checks whether H can win
• Compute the sum of each pair of squares from H's list that has not been checked earlier
– S=8+4=12
– Compute D=15-12=3
• Block 3 is free, so C plays in block 3.
7. If H plays in block 2 or 9, then the computer wins. Let's assume H plays 2.
8. C checks if it can win
– Compute the sum of the pair of blocks played by C that has not been checked earlier
• S=5+3=8
• Compute D=15-8=7
– Block 7 is free, so C plays in block 7 and wins the game
9. If H plays in block 7, then the game is a draw
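The pairwise check in the trace above can be sketched compactly (function name and the guard against degenerate pairs are assumptions for illustration):

```python
# Approach 3's winning-move test: for each pair of blocks a player owns,
# D = 15 - (their sum); if 1 <= D <= 9 and block D is unoccupied, playing
# D completes a line of the magic square.

from itertools import combinations

def winning_move(player_blocks, all_played):
    for a, b in combinations(player_blocks, 2):
        d = 15 - (a + b)
        # d outside 1..9 means the two blocks are not collinear;
        # d == a or d == b would reuse a block, so skip those too
        if 1 <= d <= 9 and d != a and d != b and d not in all_played:
            return d
    return 0

# From the slides' trace: H holds {8, 1}, all played so far is {8, 1, 5},
# so C must block D = 15 - 9 = 6.
print(winning_move({8, 1}, {8, 1, 5}))          # 6
print(winning_move({8, 1, 4}, {8, 1, 4, 5, 6}))  # 3, as in step 6
```

Note how the magic-square numbering turns "three in a row" into plain arithmetic, which is what makes this approach both compact and extensible.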
Advantages & Disadvantages
3 Dimensional Tic-Tac-Toe
• In this game, one may use the magic cube as
shown in fig.
• Numbers from 1 to 27 are arranged in a 3x3x3
pattern
• The sum of the numbers in each row, column and
diagonal is 42, called the magic constant of the cube
• The magic cube of order n has a magic constant
equal to n(n³ + 1)/2
Development of AI languages
• AI languages stress knowledge representation schemes, pattern
matching, flexible search, and treating programs as data
• E.g. LISP, Pop-2, ML, Prolog, etc.
• LISP is a functional language based on the lambda calculus
• Prolog is a logic language based on first-order predicate logic
• Pop-2 is a stack-based language providing greater flexibility, and has
similarities with LISP
• Pop-11 is embedded in an AI programming environment which focuses on
the domain level
• AI programming can exploit any language from BASIC through C to
Smalltalk
Current trends in AI
• Evolutionary paths of AI and simulation started to converge
in cognitive psychology
• Simulation has been developed to study and understand
complex time varying behaviours exhibited by real physical
systems
• ANNs have been developed based on the functioning of the human
brain, to make predictions from previous data
• Evolutionary techniques mostly involve meta-heuristic
optimization algorithms such as evolutionary algorithms and
swarm intelligence
• Genetic algorithms, based on Darwin's theory of evolution,
were developed mainly by emulating the nature and
behaviour of biological chromosomes
• Swarm intelligence is based on the collective behaviour
of decentralized, self-organized systems
• Ants, bees and termites etc. solve complex problems by
mutual cooperation
• This emergent behaviour of self-organization by a group
of social insects is known as swarm intelligence
• In computational sense, the set of mobile agents
which are liable to communicate directly or
indirectly with each other and which collectively
carry out a distributed problem solving is known as
swarm intelligence
• Expert systems continue to be an attractive
field for their practical utility in all walks of real life
• Emergence of agent technology as a subfield of AI, is
a significant paradigm shift for s/w development
Contents
• Problem solving: state-space search and control
strategies:
– Introduction
– General problem solving
– Characteristics of problem
– Exhaustive searches
– Heuristic search techniques
– Iterative deepening A*
– Constraint satisfaction
Problem solving: state-space search and
control strategies:
Introduction
• Problem solving is a method of deriving solution steps, beginning from an initial
description of the problem and ending at the desired solution
• In AI, problems are frequently modelled as state space problems, where
the state space is the set of all possible states from the start to the goal state
• A set of states forms a graph in which two states are linked if there is
an operator that transforms one state into the other
General problem solving
The following sections describe a framework that facilitates the
modelling of problems and search processes:
• Production system
– Water jug problem
– Missionaries and cannibals problem
• Control strategies
Production System (PS)
Water jug problem
Problem statement:
• We have two jugs, a 5 Lt. and a 3 Lt., with no measuring
markers on them.
• There is an endless water supply through a tap
• Our task is to get 4 Lt. of water into the 5 Lt. jug
Solution
• The state space for this problem can be described as the set
of ordered pairs of integers (X,Y)
– X – number of litres of water in the 5 Lt. jug
– Y – number of litres of water in the 3 Lt. jug
• Operations are defined as production rules as in the table
Production rules for water jug problem
Solution path 1
Rule applied 5 Lt. jug 3 Lt. jug Step no.
Start state 0 0
1 5 0 1
8 2 3 2
4 2 0 3
6 0 2 4
1 5 2 5
8 4 3 6
Goal state 4 -
Solution path 2
Rule applied 5 Lt. jug 3 Lt. jug Step no.
Start state 0 0
3 0 3 1
5 3 0 2
3 3 3 3
7 5 1 4
2 0 1 5
5 1 0 6
3 1 3 7
5 4 0 8
Goal state 4 -
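A solution like path 1 can be found mechanically with breadth-first search over (X, Y) states. A minimal Python sketch (function names and the compact successor function are assumptions; the moves mirror the fill / empty / pour production rules):

```python
# BFS over the water jug state space: X is the 5 L jug, Y is the 3 L jug,
# goal: 4 L in the 5 L jug.

from collections import deque

def successors(x, y):
    pour_xy = min(x, 3 - y)                # pour 5 L jug into 3 L jug
    pour_yx = min(y, 5 - x)                # pour 3 L jug into 5 L jug
    return {(5, y), (x, 3),                # fill either jug
            (0, y), (x, 0),                # empty either jug
            (x - pour_xy, y + pour_xy),
            (x + pour_yx, y - pour_yx)}

def bfs(start=(0, 0)):
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state[0] == 4:                  # goal test
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parent:          # not discovered yet
                parent[nxt] = state
                queue.append(nxt)
    return None

print(bfs())
```

BFS returns a shortest solution of six moves, which coincides with Solution path 1 above.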
Missionaries and cannibals problem
Problem statement
• Three missionaries and three cannibals want to cross a
river.
• There is a boat on their side of the river that can be used by
either one or two persons
• If the cannibals outnumber the missionaries on either bank,
the missionaries will be killed
• How can they all cross over safely?
Solution
• State space of the problem can be described as the set of
ordered pairs of left and right banks of the river as (L, R)
• Each bank is represented as a list [nM, mC, B]
– n – number of missionaries M
– m – number of cannibals C
– B – boat
1. Start state: ([3M, 3C, 1B], [0M, 0C, 0B])
– 1B means the boat is present & 0B means absent
Production rules for missionaries and cannibals problem
Solution path
Rule number ([3M, 3C, 1B], [0M, 0C, 0B]) <- START STATE
L2 ([2M, 2C, 0B], [1M, 1C, 1B])
R4 ([3M, 2C, 1B], [0M, 1C, 0B])
L3 ([3M, 0C, 0B], [0M, 3C, 1B])
R5 ([3M, 1C, 1B], [0M, 2C, 0B])
L1 ([1M, 1C, 0B], [2M, 2C, 1B])
R2 ([2M, 2C, 1B], [1M, 1C, 0B])
L1 ([0M, 2C, 0B], [3M, 1C, 1B])
R5 ([0M, 3C, 1B], [3M, 0C, 0B])
L3 ([0M, 1C, 0B], [3M, 2C, 1B])
R5 ([0M, 2C, 1B], [3M, 1C, 0B])
L3 ([0M, 0C, 0B], [3M, 3C, 1B]) -> GOAL STATE
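The same solution path can be produced by a breadth-first search over compact states. In this Python sketch (state encoding and function names are assumptions), a state (m, c, b) gives the missionaries, cannibals and boat on the left bank:

```python
# BFS for missionaries and cannibals; (m, c, b) describes the left bank,
# the right bank is implied as (3-m, 3-c, 1-b).

from collections import deque

MOVES = [(2, 0), (1, 1), (0, 2), (1, 0), (0, 1)]   # {2M0C, 1M1C, 0M2C, 1M0C, 0M1C}

def safe(m, c):
    """Both banks are safe when each has no missionaries, or at least
    as many missionaries as cannibals."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve(start=(3, 3, 1), goal=(0, 0, 0)):
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, b = state
        for dm, dc in MOVES:
            # the boat always leaves the bank it is currently on
            nm, nc = (m - dm, c - dc) if b else (m + dm, c + dc)
            nxt = (nm, nc, 1 - b)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

path = solve()
print(len(path) - 1)   # 11 crossings, matching the slides' solution path
```

Eleven crossings is the minimum for three missionaries and three cannibals with a two-person boat.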
State-space search
• Solution path is a path through the graph from a node in S to
a node in G
• Objective of search algorithm is to find a solution path in the
graph
• There may be more than one solution path
• Possible operators (applied in this problem) are {2M0C,
1M1C, 0M2C, 1M0C, 0M1C}
– 2M - means two missionaries
– 1C – means one cannibal, etc.
– Invalid states (where cannibals outnumber missionaries) are discarded
• Applying the same operator in both directions is wasteful, since it leads
back to the previous state; this is called a looping situation
• To illustrate the progress, a tree of nodes is developed, where each
node represents a state
• The root node represents the start state
• Arcs represent the application of one of the operators
• Nodes to which no operator has been applied are leaf nodes
• The search space generated using valid operators is
shown in the figure
• The sequence of operators applied to solve the
problem is given in table
Eight puzzle problem
Problem statement:
• A 3x3 grid with 8 randomly numbered (1-8) tiles
arranged on it and one empty cell
• At any point, a tile adjacent to the empty cell can move
into it, creating a new empty cell
• Solving this problem involves arranging tiles such that
we get the goal state
• A state of the problem should keep track of the position of
all tiles on the game board
• 0 represents blank position
• The states are
– Start state: [[3,7,6], [5,1,2], [4,0,8]]
– Goal state: [[5,3,6], [7,0,2], [4,1,8]]
– Operations can be {up, down, left, right}
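The state representation and the four moves can be sketched in Python (helper names and the tuple-of-tuples encoding are assumptions; moves are described as directions in which the blank travels):

```python
# Eight puzzle state handling: a state is a 3x3 tuple of tuples with
# 0 as the blank; a move slides the neighbouring tile into the blank.

MOVES = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}

def find_blank(state):
    for r in range(3):
        for c in range(3):
            if state[r][c] == 0:
                return r, c

def apply_move(state, move):
    """Move the blank in the given direction; return the new state,
    or None if the move would fall off the board."""
    r, c = find_blank(state)
    dr, dc = MOVES[move]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < 3 and 0 <= nc < 3):
        return None
    grid = [list(row) for row in state]
    grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
    return tuple(tuple(row) for row in grid)

start = ((3, 7, 6), (5, 1, 2), (4, 0, 8))     # start state from the slides
print(apply_move(start, 'up'))                # blank swaps with tile 1
```

Immutable tuple states are convenient here because they can be stored directly in the OPEN and CLOSED collections used by the control strategies below.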
Control strategies
• One of the most important components of problem solving
• It describes the order of application of the rules to the
current state
• It should move towards solution by exploring the solution
space in a systematic way
• DFS & BFS are systematic approaches
• Both are blind searches
• In DFS, a single branch of the tree is followed until it
yields a solution or some pre-specified depth has been
reached; the search then backtracks to the most recent
node and explores its other branches
• In BFS, a search space tree is generated level wise until
a solution is found or some specified depth is reached
• Both are exhaustive, uninformed and blind
• To solve real-world problems, effective control strategy
must be used
• To find the correct strategy for a given
problem, there are two directions
– Data-driven search, called forward chaining, from
start state
– Goal-driven search, called backward chaining,
from the goal state
Forward chaining
• Begins with known facts and works towards a conclusion
• Ex. In eight puzzle problem, we start from start state and
work forward to the goal state
• A tree of move sequences is built, with the start state as
the root of the tree
• The states at the next level are generated by applying all the rules
• This process is continued until a configuration matching
the goal state is generated
Backward chaining
• It is a goal-directed strategy
• Begins with the goal state and continues working backward
• In many cases this is a useful strategy
• The eight puzzle problem has a single start state and a single
goal state
• It makes no difference which chaining strategy is used to solve
it, since the computational effort is the same in both
Characteristics of problem
• Analysing the problem along several key characteristics
is a must before solving it
• Some of the types are:
– Type of problem
– Decomposability of problem
– Role of knowledge
– Consistency of knowledge base used
– Requirement of solution
Type of problem
• There are 3 types of problems in real life
– Ignorable
• Solution steps can be ignored for these problems
• E.g. in proving a theorem, a lemma can be ignored
– Recoverable
• Solution steps can be undone for these problems
• E.g. water jug prob, if a jug is filled, it can be emptied
• Used in single player puzzles, solved by back tracking, using
push-down stack
– Irrecoverable
• Solution steps cannot be undone for these problems
• E.g. any two player games, like chess, snakes and ladders, etc.
Decomposability of problem
Role of knowledge
Consistency of knowledge base used
Exhaustive searches
– Breadth-First-Search (BFS)
– Depth-First-Search (DFS)
– Depth-First-Iterative Deepening
– Bidirectional Search
Breadth-First-Search (BFS)
• BFS expands all the states one step away from the start state, then all
states two steps away, then three steps away, and so on, until a goal
state is reached
• BFS always gives an optimal (shortest-path) solution
• OPEN is a queue of states waiting to be expanded; CLOSED contains the
states that have already been expanded
• E.g. BFS can be implemented to check whether a goal node exists for the
water jug problem
• At each state, the applicable rules are applied in order
• Solution path:
(0,0) -> (5,0) -> (2,3) -> (2,0) -> (0,2) -> (5,2) -> (4,3)
Depth-First-Search (DFS)
• DFS is implemented using two stacks OPEN &
CLOSED
• OPEN list contains states that are to be
expanded
• CLOSED list keeps track of states already
expanded
• If the first element of OPEN is found to be the
goal state, the path traversed to reach it is reported
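A minimal Python sketch of this OPEN/CLOSED organisation on the water jug problem (function names are assumptions; paths are kept alongside states so the solution can be reported):

```python
# DFS with an explicit OPEN stack and CLOSED set, on the water jug problem.

def successors(x, y):
    pour_xy = min(x, 3 - y)
    pour_yx = min(y, 5 - x)
    return [(5, y), (x, 3), (0, y), (x, 0),
            (x - pour_xy, y + pour_xy), (x + pour_yx, y - pour_yx)]

def dfs(start=(0, 0)):
    open_list = [(start, [start])]          # stack of (state, path so far)
    closed = set()
    while open_list:
        state, path = open_list.pop()       # LIFO: deepest state first
        if state[0] == 4:                   # goal: 4 L in the 5 L jug
            return path
        if state in closed:
            continue
        closed.add(state)
        for nxt in successors(*state):
            if nxt not in closed:
                open_list.append((nxt, path + [nxt]))
    return None

path = dfs()
print(path[0], path[-1])
```

Unlike BFS, the path DFS reports need not be the shortest one; it is simply the first solution found down the branch being explored.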
S.No | BFS | DFS
1. | BFS stands for Breadth First Search. | DFS stands for Depth First Search.
2. | BFS uses a Queue data structure and finds the shortest path. | DFS uses a Stack data structure.
4. | BFS is more suitable for searching vertices which are closer to the given source. | DFS is more suitable when there are solutions away from the source.
5. | BFS considers all neighbours first and is therefore not suitable for the decision trees used in games or puzzles. | DFS is more suitable for game or puzzle problems: we make a decision, then explore all paths through this decision, and stop if the decision leads to a win situation.
6. | The time complexity of BFS is O(V + E) when an adjacency list is used and O(V^2) when an adjacency matrix is used, where V stands for vertices and E for edges. | The time complexity of DFS is also O(V + E) with an adjacency list and O(V^2) with an adjacency matrix.
7. | Here, siblings are visited before the children. | Here, children are visited before the siblings.
Depth-First-Iterative Deepening
• DFID combines the advantages of BFS & DFS
• DFID expands all nodes at a given depth before expanding any
nodes at greater depth
• DFID gives optimal solution of shortest path from start state to
goal state
• At any time it is performing a DFS and never searches deeper than
depth ‘d’
• Thus it uses space O(d)
• Its disadvantage is that it performs wasted computation at
depths less than d
• The working of the DFID algorithm is as follows
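As a sketch (in Python, with assumed function names), DFID is a series of depth-limited depth-first searches with limits 0, 1, 2, …, so the first solution found is a shallowest one; here it is shown on the water jug problem:

```python
# Depth-first iterative deepening on the water jug problem.

def successors(x, y):
    pour_xy = min(x, 3 - y)
    pour_yx = min(y, 5 - x)
    return [(5, y), (x, 3), (0, y), (x, 0),
            (x - pour_xy, y + pour_xy), (x + pour_yx, y - pour_yx)]

def depth_limited(state, limit, path):
    if state[0] == 4:                      # goal: 4 L in the 5 L jug
        return path
    if limit == 0:
        return None
    for nxt in successors(*state):
        if nxt not in path:                # avoid cycles along the path
            found = depth_limited(nxt, limit - 1, path + [nxt])
            if found:
                return found
    return None

def dfid(start=(0, 0), max_depth=20):
    for limit in range(max_depth + 1):     # deepen one level at a time
        result = depth_limited(start, limit, [start])
        if result:
            return result
    return None

solution = dfid()
print(len(solution) - 1)   # number of moves in a shortest solution
```

Only the current path is stored at any time, which is what gives the O(d) space bound; the repeated shallow searches are the wasted computation mentioned above.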
Bidirectional Search
• It runs two simultaneous searches
• One search moves forward from start to goal and another moves back
from goal to start
• Searching is stopped when both meet in middle
• Useful if there are only one start and goal states
• If match is found, path can be traced from start to match state and
match to goal state
• Each node has links to its successors and its parent
• Each of the two searches has time complexity O(b^(d/2)), and
O(b^(d/2) + b^(d/2)) is a much smaller running time than O(b^d)
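A Python sketch of the two alternating frontiers on a small undirected graph (the example graph, function names and the stitching of the two half-paths are assumptions for illustration):

```python
# Bidirectional search: two BFS frontiers, one from the start and one
# from the goal, stopped as soon as they meet.

from collections import deque

def bidirectional(graph, start, goal):
    if start == goal:
        return [start]
    parents_s, parents_g = {start: None}, {goal: None}
    qs, qg = deque([start]), deque([goal])
    while qs and qg:
        # advance each frontier by one node per round
        for queue, parents, others in ((qs, parents_s, parents_g),
                                       (qg, parents_g, parents_s)):
            node = queue.popleft()
            for nxt in graph[node]:
                if nxt in parents:
                    continue
                parents[nxt] = node
                if nxt in others:               # the frontiers have met
                    left, n = [], nxt
                    while n is not None:        # half-path back to start
                        left.append(n)
                        n = parents_s[n]
                    right, n = [], parents_g[nxt]
                    while n is not None:        # half-path on to goal
                        right.append(n)
                        n = parents_g[n]
                    return left[::-1] + right
                queue.append(nxt)
    return None

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'],
         'D': ['B', 'C', 'E'], 'E': ['D']}
print(bidirectional(graph, 'A', 'E'))
```

Each frontier only has to reach depth about d/2 before the match is found, which is the source of the O(b^(d/2)) bound.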
Analysis of search methods
• Ex. travelling salesman problem: there are n cities, and the distance
between each pair of cities is given
• Find the shortest route that visits all cities once and returns to the
starting point
• Stop exploring any path as soon as its partial length becomes greater
than the shortest route length found so far
Heuristic search techniques
• A heuristic search uses problem-specific knowledge to guide the
search. This basically means that the search algorithm may not find
the optimal solution to the problem, but it will give a good solution
in a reasonable time
1. Simple hill climbing
• Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
Otherwise, make it the current state.
• Step 2: Loop until a solution is found or there are no new operators left to apply to
the current state.
• Step 3: Select an operator that has not yet been applied to the current state and
apply it to produce a new state.
• Step 4: Evaluate the new state:
– if it is a goal state, then return success and stop.
– else if it is better than the current state, then assign the new state as the current state.
– else if it is not better than the current state, then return to step 2.
• Step 5: Exit.
2. Steepest-Ascent hill climbing
• The steepest-Ascent algorithm is a variation of
the simple hill-climbing algorithm
• This algorithm examines all the neighbouring
nodes of the current state and selects one
neighbour node which is closest to the goal state
• This algorithm consumes more time as it searches
for multiple neighbours.
Algorithm for Steepest-Ascent hill climbing
• Step 1: Evaluate the initial state; if it is a goal state, then return success and stop, else
make the initial state your current state.
• Step 2: Loop until a solution is found or the current state does not change.
– Let S be a state such that any successor of the current state will be better than it.
– For each operator that applies to the current state: apply it to produce a new state
and evaluate the new state; if it is a goal state, return success and stop; else if it is
better than S, set S to this new state.
– If S is better than the current state, then set the current state to S.
• Step 3: Exit.
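The loop above can be sketched in a few lines of Python on a toy one-dimensional landscape (the objective function, neighbour function and names are assumptions for illustration):

```python
# Steepest-ascent hill climbing: evaluate all neighbours of the current
# state, move to the best one, and stop when no neighbour improves on
# the current state.

def steepest_ascent(value, neighbours, state):
    while True:
        best = max(neighbours(state), key=value, default=state)
        if value(best) <= value(state):
            return state                 # local (possibly global) maximum
        state = best

# Toy objective with a single peak at x = 3; neighbours are x-1 and x+1.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(steepest_ascent(value, neighbours, 0))    # 3
```

On a landscape with several peaks the same loop would stop at whichever local maximum it climbs first, which is exactly the failure mode discussed in the "problems in different regions" section below.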
3. Stochastic hill climbing
• It does not examine all the neighbouring nodes
before deciding which node to select
• It just selects a neighbouring node at random and
decides (based on the amount of improvement
in that neighbor) whether to move to that
neighbour or to examine another
Algorithm for Stochastic Hill climbing
• Step 1: Evaluate the initial state. If it is a goal state then stop and return success.
Otherwise, make initial state as current state.
• Step 2: Repeat these steps until a solution is found or current state does not
change.
• a) Select a state that has not been yet applied to the current state.
• b) Apply successor function to current state & generate all neighbour states.
• c) Among the generated neighbour states which are better than current state
choose a state randomly (or based on some probability function)
• d) If the chosen state is goal state, then return success, else make it
current state and repeat step 2: b) part.
• Step 3: Exit.
Problems in different regions in Hill climbing
Hill climbing cannot reach the best possible state if it enters any of the following
regions :
• 1. Local maximum: At a local maximum all neighbouring states have values which
are worse than the current state
• Since hill-climbing uses a greedy approach, it will not move to the worse state and
terminate itself
• The process will end even though a better solution may exist.
• To overcome the local maximum problem: Utilise the backtracking technique
• Maintain a list of visited states.
• If the search reaches an undesirable state, it can backtrack to the previous
configuration and explore a new path.
• 2. Plateau: On the plateau, all neighbours have the same value.
Hence, it is not possible to select the best direction.
• 3. Ridge: Any point on a ridge can look like a peak because the
movement in all possible directions is downward.
Best first search
• OPEN is a list of nodes ordered by heuristic merit; CLOSED holds the nodes
already expanded
1. Put the initial node on the OPEN list and evaluate it with the heuristic function.
2. Create a list called CLOSED, initially empty.
3. If OPEN is empty, exit with failure; if the first node on OPEN is a goal node, exit with
success and report the solution path.
4. Remove the first node on OPEN list and put this node on CLOSED list.
5. Expand the node, generating all of its successors.
6. For each successor:
(a) If it has not been discovered before i.e., it is not on OPEN, evaluate this node by applying the
heuristic function, add it to the OPEN and record its parent.
(b) If it has been discovered before, change the parent if the new path is better than the previous
one. Update cost of getting to this node & to any successors that this node may already have.
7. Reorder the nodes on OPEN by heuristic merit, best first.
8. Go to step 3.
• Best first searches will always find good paths to a goal node if there is
any
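A compact Python sketch of best-first search with OPEN as a priority queue ordered by a heuristic h(n); the toy graph, heuristic values and function names are assumptions for illustration:

```python
# Best-first (greedy) search: OPEN is a heap ordered by h(n), CLOSED is
# the set of expanded nodes; parents are recorded to report the path.

import heapq

def best_first(graph, h, start, goal):
    open_heap = [(h[start], start)]
    parents = {start: None}
    closed = set()
    while open_heap:
        _, node = heapq.heappop(open_heap)    # best node on OPEN
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        for nxt in graph[node]:
            if nxt not in parents:            # first discovery only
                parents[nxt] = node
                heapq.heappush(open_heap, (h[nxt], nxt))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
print(best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'G']
```

For brevity this sketch keeps the first parent it records rather than re-parenting on a better path (step 6(b) above); a full implementation would also track path costs.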
Constraint Satisfaction
• Many AI problems can be viewed as problems of constraint
satisfaction
• The objective is to find a solution state that satisfies a set of
constraints, rather than an optimal path
• These are Constraint Satisfaction (CS) Problems
• Search can be made easier in those cases in which the solution
is required to satisfy local consistency conditions
• E.g. crypt-arithmetic puzzles, the n-Queens problem, map colouring,
crossword puzzles, etc.
Crypt-Arithmetic puzzle
• Problem statement: solve the puzzle by assigning digits 0-9 in
such a way that each letter is assigned a unique digit that
satisfies the following addition:
    B A S E
  + B A L L
  ---------
  G A M E S
• Constraints: no two letters are assigned the same digit
• Writing the carries C1-C4 above the columns:
     C4 C3 C2 C1
      B  A  S  E
  +   B  A  L  L
  --------------
  G   A  M  E  S
• The constraint equations (with carries) are:
• E + L = S + 10·C1
• S + L + C1 = E + 10·C2
• A + A + C2 = M + 10·C3
• B + B + C3 = A + 10·C4
• G = C4
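A small brute-force sketch in Python checks these constraints directly by trying digit assignments for the seven distinct letters (the function name is an assumption; leading letters B and G must be non-zero):

```python
# Brute-force solver for BASE + BALL = GAMES: try digit permutations for
# the seven distinct letters and keep an assignment that satisfies the sum.

from itertools import permutations

def solve():
    letters = 'BASELGM'
    for digits in permutations(range(10), len(letters)):
        d = dict(zip(letters, digits))
        if d['B'] == 0 or d['G'] == 0:     # leading digits must be non-zero
            continue
        base = 1000 * d['B'] + 100 * d['A'] + 10 * d['S'] + d['E']
        ball = 1000 * d['B'] + 100 * d['A'] + 10 * d['L'] + d['L']
        games = (10000 * d['G'] + 1000 * d['A'] + 100 * d['M']
                 + 10 * d['E'] + d['S'])
        if base + ball == games:
            return base, ball, games
    return None

print(solve())   # (7483, 7455, 14938)
```

Working through the carry equations instead of brute force prunes the search drastically (for instance, G must be 1 because the sum of two 4-digit numbers has at most 5 digits), which is the point of treating this as a constraint satisfaction problem.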