Ai - 35 Imp Questions With Solution
IMP Question
Q. No. Questions
1 What is artificial intelligence? Define the different task domains of artificial
intelligence.
Artificial Intelligence is “the study of how to make computers do things, which, at the
moment, people do better”.
According to the father of Artificial Intelligence, John McCarthy, it is “The science
and engineering of making intelligent machines, especially intelligent computer
programs”.
Artificial Intelligence is a “way of making a computer, a computer-controlled robot,
or software think intelligently, in a manner similar to how intelligent humans think”.
Formal Tasks
Games: chess, checkers, etc.
Mathematics
Geometry
Logic
Integration and Differentiation
Verification
Theorem Proving
Expert Tasks:
Engineering (Design, Fault finding, Manufacturing planning)
Scientific Analysis
Medical Diagnosis
Financial Analysis
2 What is the significance of the “Turing Test” in AI? Explain how it is performed.
The success of intelligent behavior of a system can be measured with the Turing Test.
Two humans and the machine to be evaluated participate in the test. Of the two humans,
one plays the role of the tester. Each participant sits in a different room. The tester
does not know which respondent is the machine and which is the human. He interrogates
both by typing questions and sending them to the two respondents, from whom he receives
typed responses.
The test aims at fooling the tester. If the tester fails to distinguish the machine's
responses from the human's responses, then the machine is said to be intelligent.
1. Is the problem decomposable into small sub-problems which are easy to solve?
Can the problem be broken down into smaller problems to be solved independently?
The decomposable problem can be solved easily.
Example: In this case, the problem is divided into smaller problems. The smaller
problems are solved independently. Finally, the result is merged to get the final result.
2. Can solution steps be ignored or undone?
In the Theorem Proving problem, a lemma that has been proved can be ignored for
the next steps.
Such problems are called Ignorable problems.
In the 8-Puzzle, Moves can be undone and backtracked.
Such problems are called Recoverable problems.
3. Is the universe predictable? (certain vs. uncertain outcomes)
For certain-outcome problems, planning can be used to generate a sequence of
operators that is guaranteed to lead to a solution.
For uncertain-outcome problems, a sequence of generated operators can only have
a good probability of leading to a solution. Plan revision is made as the plan is carried
out and the necessary feedback is provided.
Playing Chess
Consider again the problem of playing chess. Suppose you had unlimited computing
power available. How much knowledge would be required by a perfect program? The
answer to this question is very little—just the rules for determining legal moves and
some simple control mechanism that implements an appropriate search procedure.
Additional knowledge about such things as good strategy and tactics could of course
help considerably to constrain the search and speed up the execution of the program.
Knowledge is important only to constrain the search for a solution.
Reading Newspaper
Now consider the problem of scanning daily newspapers to decide which are
supporting the Democrats and which are supporting the Republicans in some
upcoming election. Again assuming unlimited computing power, how much
knowledge would be required by a computer trying to solve this problem? This time
the answer is a great deal.
It would have to know such things as:
The names of the candidates in each party.
The fact that if the major thing you want to see done is have taxes lowered,
you are probably supporting the Republicans.
The fact that if the major thing you want to see done is improved education
for minority students, you are probably supporting the Democrats.
The fact that if you are opposed to big government, you are probably
supporting the Republicans.
And so on …
4 Explain Goal Based Agent and Utility based Agent architecture with proper
diagram.
Types of Agents
Agents can be grouped into five classes based on their degree of perceived
intelligence and capability:
Goal-based agents
These kinds of agents take decisions based on how far they currently are
from their goal (a description of desirable situations).
Every action is intended to reduce the distance from the goal. This
gives the agent a way to choose among multiple possibilities, selecting the
one which reaches a goal state.
The knowledge that supports its decisions is represented explicitly and can
be modified, which makes these agents more flexible.
They usually require search and planning. The goal-based agent’s behavior
can easily be changed.
They choose their actions in order to achieve goals. The goal-based approach is
more flexible than the reflex agent approach, since the knowledge supporting a
decision is explicitly modeled, thereby allowing for modifications.
Goal − It is the description of desirable situations.
Utility-based agents
Informed Search algorithms have information on the goal state which helps in more
efficient searching. This information is obtained by a function that estimates how
close a state is to the goal state. Examples: Greedy Search and A* Graph Search.
Uninformed Search algorithms have no additional information on the goal node
other than the one provided in the problem definition. The plans to reach the goal
state from the start state differ only by the order and length of actions. Examples:
Depth First Search and Breadth-First Search.
Parameters | Informed Search | Uninformed Search
Time | It consumes less time because of quick searching. | It consumes moderate time because of slow searching.
Direction | There is a direction given about the solution. | No suggestion is given regarding the solution in it.
Efficiency | It is more efficient, as efficiency takes into account cost and performance. The incurred cost is less and the speed of finding solutions is quick. | It is comparatively less efficient, as the incurred cost is more and the speed of finding the solution is slow.
Computational requirements | Computational requirements are lessened. | Comparatively higher computational requirements.
S. No. | Parameters | BFS | DFS
10. | Visiting of Siblings/Children | Here, siblings are visited before the children. | Here, children are visited before the siblings.
11. | Removal of Traversed Nodes | Nodes that are traversed several times are deleted from the queue. | The visited nodes are added to the stack and then removed when there are no more nodes to visit.
12. | Backtracking | In BFS there is no concept of backtracking. | DFS is a recursive algorithm that uses the idea of backtracking.
13. | Applications | BFS is used in various applications such as bipartite graphs, shortest paths, etc. | DFS is used in various applications such as acyclic graphs, topological order, etc.
14. | Memory | BFS requires more memory. | DFS requires less memory.
15. | Optimality | BFS is optimal for finding the shortest path. | DFS is not optimal for finding the shortest path.
16. | Space complexity | In BFS, the space complexity is more critical as compared to time complexity. | DFS has lesser space complexity, because at a time it needs to store only a single path from the root to the leaf node.
17. | Speed | BFS is slow as compared to DFS. | DFS is fast as compared to BFS.
18. | When to use? | When the target is close to the source, BFS performs better. | When the target is far from the source, DFS is preferable.
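The contrast in the table can be sketched in a few lines of Python; the adjacency-list graph below is an invented example, not one from the text.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first: siblings are visited before children, via a FIFO queue."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start, order=None):
    """Depth-first: follow one path down to a leaf, then backtrack recursively."""
    if order is None:
        order = []
    order.append(start)
    for nbr in graph[start]:
        if nbr not in order:
            dfs(graph, nbr, order)
    return order

# A small invented graph: A's children are B and C; B's child is D.
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```

Here `bfs` visits both of A's children before their children (`A, B, C, D`), while `dfs` runs down through B to D before coming back for C (`A, B, D, C`).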
8 What is a state space search? Explain with respect to the water jug problem.
Problem: There are two jugs of volume A gallons and B gallons. Neither has any
measuring mark on it. There is a pump that can be used to fill the jugs with
water. How can you get exactly x gallons of water into the A-gallon jug, assuming
an unlimited supply of water?
Let's assume we have an A = 4 gallon and a B = 3 gallon jug, and we want exactly
2 gallons of water in jug A (i.e. the 4-gallon jug). How will we do this?
Solution:
We are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring
marks on it. There is a pump, which can be used to fill the jugs with water. How can
we get exactly 2 gallons of water into the 4-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers
(X, Y) such that X = 0, 1, 2, 3 or 4 and Y = 0, 1, 2 or 3; X is the number of gallons of
water in the 4-gallon jug and Y the quantity of water in the 3-gallon jug.
The start state is (0, 0) and the goal state is (2, n) for any value of n, as the problem
does not specify how many gallons need to be filled in the 3-gallon jug (0, 1, 2, 3).
So the problem has one initial state and many goal states. Some problems may have
many initial states and one or many goal states.
As in chess playing, the operators are represented as rules whose left sides are
matched against the current state and whose right sides describe the new state
which results from applying the rule.
In order to describe the operators completely, here are some assumptions not
mentioned in the problem statement.
1. We can fill a jug from the pump.
2. We can pour water out of a jug, onto the ground.
3. We can pour water out of one jug into the other.
4. No other measuring devices are available.
All such additional assumptions need to be given when converting a problem
statement in English to a formal representation of the problem, suitable for use by a
program.
To solve the water jug problem, all we need, in addition to the problem description
given above, is a control structure which loops through a simple cycle in which some
rule whose left side matches the current state is chosen, the appropriate change to the
state is made as described in the corresponding right side and the resulting state is
checked to see if it corresponds to a goal state.
The operators to be used to solve the problem can be described as shown in below
table:
Rule | State | Process
3 | (X, Y | X ≥ d > 0) -> (X - d, Y) | Pour some water out of the 4-gallon jug
4 | (X, Y | Y ≥ d > 0) -> (X, Y - d) | Pour some water out of the 3-gallon jug
9 | (X, Y | X + Y <= 4 and Y > 0) -> (X + Y, 0) | Pour all the water from the 3-gallon jug into the 4-gallon jug
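The control cycle described above (match a rule against the current state, apply it, test for a goal state) can be sketched as a breadth-first search over the (X, Y) state space. The successor set below covers fill, empty, and pour moves generically rather than reproducing the numbered rule table verbatim.

```python
from collections import deque

def water_jug(goal=2, cap_a=4, cap_b=3):
    """Search the (X, Y) state space from (0, 0) until the A jug holds `goal`."""
    start = (0, 0)
    parents = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:
            # Reconstruct the path of states back to (0, 0).
            path, s = [], (x, y)
            while s is not None:
                path.append(s)
                s = parents[s]
            return path[::-1]
        # Successor states: fill, empty, or pour between jugs.
        successors = [
            (cap_a, y), (x, cap_b),  # fill a jug from the pump
            (0, y), (x, 0),          # empty a jug onto the ground
            (x - min(x, cap_b - y), y + min(x, cap_b - y)),  # pour A into B
            (x + min(y, cap_a - x), y - min(y, cap_a - x)),  # pour B into A
        ]
        for s in successors:
            if s not in parents:
                parents[s] = (x, y)
                queue.append(s)
    return None

solution = water_jug()
```

Because the search is breadth-first, `solution` is a shortest sequence of states from (0, 0) to a state with 2 gallons in the 4-gallon jug.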
Hill Climbing is a heuristic search used for mathematical optimization problems in the field
of Artificial Intelligence.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good
solution to the problem. This solution may not be the global optimum.
Features of Hill Climbing
1. Variant of generate and test algorithm: It is a variant of the generate and test algorithm.
The generate and test algorithm is as follows :
1. Generate possible solutions.
2. Test to see if this is the expected solution.
3. If the solution has been found quit else go to step 1.
Hence we call Hill Climbing a variant of the generate and test algorithm, as it takes
feedback from the test procedure. This feedback is then utilized by the generator in
deciding the next move in the search space.
2. Uses the Greedy approach: At any point in state space, the search moves in that
direction only which optimizes the cost of function with the hope of finding the optimal
solution at the end.
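Both features can be shown in a minimal sketch of the generate-and-test loop on a one-dimensional objective; the function and step size below are invented for illustration.

```python
def hill_climb(f, start, step=1, max_iters=1000):
    """Greedily move to a better neighbour until none exists (a local optimum)."""
    current = start
    for _ in range(max_iters):
        neighbours = [current - step, current + step]  # generate candidates
        best = max(neighbours, key=f)                  # test them
        if f(best) <= f(current):                      # no improvement: stop
            return current
        current = best                                 # greedy move uphill
    return current

# A concave objective with its maximum at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, start=-10)
```

On a function with several peaks, the same loop would stop at whichever local maximum is uphill from `start`, which is exactly why the text warns the solution may not be the global optimum.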
Algorithm of A* search:
Step1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure
and stops.
Step 3: Select the node from the OPEN list which has the smallest value of the
evaluation function (g+h). If node n is the goal node, then return success and
stop; otherwise:
Step 4: Expand node n and generate all of its successors, and put n into the closed
list. For each successor n', check whether n' is already in the OPEN or CLOSED list,
if not then compute evaluation function for n' and place into Open list.
Step 5: Otherwise, if node n' is already in OPEN or CLOSED, attach it to the back
pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
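The steps above can be sketched with a priority queue ordered by f = g + h. The example graph and heuristic values at the end are invented for illustration; with an admissible heuristic the cheapest route (S-A-B-G, cost 6) is found.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbour, edge_cost), ...]}; h: heuristic estimates."""
    # OPEN is a priority queue ordered by f = g + h (Steps 1-3).
    open_list = [(h[start], 0, start, [start])]
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                   # Step 4: put n into the CLOSED list
        for nbr, cost in graph[node]:      # expand n, generate successors
            if nbr not in closed:
                heapq.heappush(open_list,
                               (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")              # Step 2: OPEN empty -> failure

# Invented example: the cheapest route from S to G is S-A-B-G with cost 6.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}       # admissible heuristic estimates
path, cost = a_star(graph, h, "S", "G")
```

Note how the direct edge A-G (cost 12) is never chosen: its f value stays worse than the detour through B, which is the search-constraining effect of the heuristic.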
Advantages:
A* search algorithm performs better than other search algorithms.
A* search algorithm is optimal and complete.
This algorithm can solve very complex problems.
Disadvantages:
It does not always produce the shortest path, as it is mostly based on heuristics
and approximation.
A* search algorithm has some complexity issues.
The main drawback of A* is memory requirement as it keeps all generated
nodes in the memory, so it is not practical for various large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the f(n) of
each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach any
node from start state.
Here we will use OPEN and CLOSED list.
Solution:
When a problem can be divided into a set of sub problems, where each sub problem
can be solved separately and a combination of these will be a solution, AND-OR
graphs or AND - OR trees are used for representing the solution.
The decomposition of the problem or problem reduction generates AND arcs.
AND-OR Graph
Step 2: Now, first we find the utility values for the Maximizer. Its initial value is
-∞, so we compare each value in the terminal state with the initial value of the
Maximizer and determine the higher node values. It finds the maximum among them all.
For node D: max(-1, -∞) => max(-1, 4) = 4
For node E: max(2, -∞) => max(2, 6) = 6
For node F: max(-3, -∞) => max(-3, -5) = -3
For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step, it is the Minimizer's turn, so it will compare all node
values with +∞ and find the 3rd layer node values.
For node B= min(4,6) = 4
For node C= min (-3, 7) = -3
Step 4: Now it's a turn for Maximizer, and it will again choose the maximum of all
nodes value and find the maximum value for the root node. In this game tree, there
are only 4 layers, hence we reach immediately to the root node, but in real games,
there will be more than 4 layers.
For node A max(4, -3)= 4
That was the complete workflow of the minimax two player game.
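The worked example above can be replayed with a short recursive sketch; the nested lists encode the same tree (D: -1, 4; E: 2, 6; F: -3, -5; G: 0, 7), with Max at the root.

```python
def minimax(node, is_max):
    """node is either a terminal value (int) or a list of child nodes."""
    if isinstance(node, int):
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# The example tree: A (Max) -> B, C (Min) -> D..G (Max) -> terminal values.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
```

Evaluating the root as Max reproduces the hand computation: D=4, E=6, F=-3, G=7; B=min(4,6)=4, C=min(-3,7)=-3; A=max(4,-3)=4.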
Properties of Mini-Max algorithm:
Complete- Min-Max algorithm is Complete. It will definitely find a solution
(if exist), in the finite search tree.
Optimal- Min-Max algorithm is optimal if both opponents are playing
optimally.
Time complexity- As it performs DFS for the game tree, the time complexity
of the Min-Max algorithm is O(b^m), where b is the branching factor of
the game tree and m is the maximum depth of the tree.
Space Complexity- The space complexity of the Minimax algorithm is also similar
to DFS, which is O(b*m).
Limitation of the minimax Algorithm:
o The main drawback of the minimax algorithm is that it gets really slow for
complex games such as chess, Go, etc. Such games have a huge branching
factor, and the player has many choices to decide among.
o This limitation of the minimax algorithm can be improved by alpha-beta
pruning, which is discussed in the next topic.
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value
of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value
of α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as
it is Min's turn. Now β = +∞ is compared with the available subsequent node value,
i.e. min(∞, 3) = 3; hence at node B, α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node
E, and the values α = -∞ and β = 3 are passed along.
Step 4: At node E, Max will take its turn, and the value of alpha will change. The
current value of alpha will be compared with 5, so max (-∞, 5) = 5, hence at node E
α= 5 and β= 3, where α>=β, so the right successor of E will be pruned, and algorithm
will not traverse it, and the value at node E will be 5.
Step 5: At the next step, the algorithm again backtracks the tree, from node B to
node A. At node A the value of alpha will be changed; the maximum available value
is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right
successor of A, which is node C.
At node C, α=3 and β= +∞, and the same values will be passed on to node F.
Step 6: At node F, again the value of α will be compared with left child which is 0,
and max(3,0)= 3, and then compared with right child which is 1, and max(3,1)= 3
still α remains 3, but the node value of F will become 1.
Step 7: Node F returns the value 1 to node C. At C, α = 3 and β = +∞; here the
value of beta will be changed: it is compared with 1, so min(∞, 1) = 1. Now at C,
α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of
C, which is G, will be pruned, and the algorithm will not compute the entire subtree of G.
Step 8: C now returns the value of 1 to A here the best value for A is max (3, 1) = 3.
Following is the final game tree, showing the nodes which were computed and the
nodes which were never computed. Hence the optimal value for the maximizer is 3
for this example.
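The same walk can be sketched by threading α and β through minimax. The tree below encodes the example's leaf values, with invented stand-ins (9, and the pair 7, 5) for the pruned leaves; since those branches are cut off, they cannot change the root value of 3.

```python
def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta cutoffs; node is an int or a list of children."""
    if isinstance(node, int):
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:    # beta cutoff: Min will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:        # alpha cutoff: Max will never allow this branch
            break
    return value

# The worked example tree; 9 and (7, 5) stand in for the pruned leaves.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
```

The cutoff tests correspond exactly to Steps 4 and 7 above: at E, α = 5 meets β = 3, and at C, β = 1 meets α = 3, so the right siblings are skipped.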
Relational Knowledge:
The simplest way to represent declarative facts is as a set of relations of the
same sort used in the database system.
Provides a framework to compare two objects based on equivalent attributes.
Any instance in which two different objects are compared is a relational type
of knowledge.
The reason this representation is simple is that, standing alone, it provides
very weak inferential capabilities; but knowledge represented in this form may
serve as the input to a more powerful inference engine.
The table below shows a simple way to store facts.
o The facts about a set of objects are put systematically in columns.
o This representation provides little opportunity for inference.
Given the facts it is not possible to answer simple question such as: “Who is
the heaviest player?”
But if a procedure for finding heaviest player is provided, then these facts will
enable that procedure to compute an answer.
We can ask things like who “bats — left” and “throws — right”.
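The point that the table alone cannot answer "who is the heaviest player" but a procedure over it can is easy to sketch; the player rows below are invented sample data.

```python
# Each fact row: (player, height, weight, bats, throws) -- invented sample data.
players = [
    ("Aaron", 72, 180, "right", "right"),
    ("Mays",  71, 170, "right", "right"),
    ("Ruth",  74, 215, "left",  "left"),
]

def heaviest(rows):
    """The table alone cannot answer this; a procedure over it can."""
    return max(rows, key=lambda row: row[2])[0]

def who(rows, bats, throws):
    """Answer queries like: who bats left and throws right?"""
    return [r[0] for r in rows if r[3] == bats and r[4] == throws]
```

The relations themselves carry no inference; all the reasoning lives in the procedures supplied alongside them.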
Inheritable Knowledge:
Here, the knowledge elements inherit attributes from their parents.
The knowledge is embodied in the design hierarchies found in the functional,
physical and process domains.
Within the hierarchy, elements inherit attributes from their parents, but in
many cases not all attributes of the parent elements can be prescribed to the
child elements.
The inheritance is a powerful form of inference, but not adequate.
The basic KR (Knowledge Representation) needs to be augmented with
inference mechanism.
In order to support property inheritance, objects must be organized into
classes, and classes must be arranged into a generalization hierarchy.
Figure below shows some additional baseball knowledge inserted into a
structure that is so arranged.
Boxed nodes — objects and values of attributes of objects.
Lines represent attributes.
Arrows — point from object to its value.
This structure is known as a slot and filler structure, semantic network or a
collection of frames.
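A slot-and-filler structure with property inheritance can be sketched with chained dictionaries; the frame names and slot values below are illustrative, in the spirit of the baseball example.

```python
# Each frame: a dict of slots plus an "isa" link to its parent frame.
frames = {
    "Person":          {"isa": None, "legs": 2},
    "Adult-Male":      {"isa": "Person", "height": 5.9},
    "Baseball-Player": {"isa": "Adult-Male", "batting-average": 0.252},
    "Pitcher":         {"isa": "Baseball-Player", "batting-average": 0.106},
}

def get_slot(frame, slot):
    """Look up a slot locally, then climb the isa hierarchy (inheritance)."""
    while frame is not None:
        if slot in frames[frame]:
            return frames[frame][slot]
        frame = frames[frame]["isa"]   # inherit from the parent frame
    return None
```

A local slot overrides an inherited one: a Pitcher's batting-average (0.106) shadows the Baseball-Player default (0.252), while "legs" is inherited all the way from Person.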
Inferential Knowledge:
This knowledge generates new information from the given information.
This new information does not require further data gathering from source, but
does require analysis of the given information to generate new knowledge.
Example: given a set of relations and values, one may infer other values or
relations. A predicate logic (a mathematical deduction) is used to infer from
a set of attributes. Inference through predicate logic uses a set of logical
operations to relate individual data.
Represent knowledge as formal logic:
All dogs have tails ∀x: dog(x) → hastail(x)
Advantages:
A set of strict rules.
Can be used to derive more facts.
Truths of new statements can be verified.
Guaranteed correctness.
Many inference procedures available to implement standard rules of logic popular in
AI systems. e.g. Automated theorem proving.
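The rule ∀x: dog(x) → hastail(x) can be sketched as a tiny forward inference to closure; the fact representation as (predicate, argument) pairs is invented for illustration.

```python
facts = {("dog", "Fido"), ("dog", "Rex"), ("cat", "Tom")}
rules = [("dog", "hastail")]   # encodes: for all x, dog(x) -> hastail(x)

def infer(facts, rules):
    """Apply each implication to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

derived = infer(facts, rules)
```

New statements such as hastail(Fido) are derived and can be verified against the rule set, while nothing is concluded about Tom, matching the guaranteed-correctness property above.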
Procedural Knowledge:
A representation in which the control information, to use the knowledge, is embedded
in the knowledge itself. For example, computer programs, directions, and recipes;
these indicate specific use or implementation;
Knowledge is encoded in some procedures, small programs that know how to do
specific things, how to proceed.
Advantages:
Heuristic or domain specific knowledge can be represented.
Extended logical inferences, such as default reasoning, are facilitated.
Side effects of actions may be modeled. Some rules may become false in time.
Keeping track of this in large systems may be tricky.
Disadvantages:
Completeness — not all cases may be represented.
Consistency — not all deductions may be correct. e.g. if we know that Fred
is a bird we might deduce that Fred can fly. Later we might discover that Fred
is an emu.
Modularity is sacrificed. Changes in knowledge base might have far-reaching
effects.
Cumbersome control information.
B. Backward Chaining:
Backward-chaining is also known as a backward deduction or backward reasoning
method when using an inference engine. A backward chaining algorithm is a form of
reasoning, which starts with the goal and works backward, chaining through rules to
find known facts that support the goal.
Properties of backward chaining:
It is known as a top-down approach.
Backward-chaining is based on modus ponens inference rule.
In backward chaining, the goal is broken into sub-goal or sub-goals to prove
the facts true.
It is called a goal-driven approach, as a list of goals decides which rules are
selected and used.
Backward-chaining algorithm is used in game theory, automated theorem
proving tools, inference engines, proof assistants, and various AI applications.
The backward-chaining method mostly uses a depth-first search strategy for
proof.
Example:
In backward-chaining, we will use the same above example, and will rewrite all the
rules.
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Enemy(z, America) →
Criminal(x) ...(1)
Owns(Nono, x) ...(2)
Missile(x) ...(3)
Missile(x) ∧ Owns(Nono, x) → Sells(Colonel, x, Nono) ...(4)
Missile(x) → Weapon(x) ...(5)
Enemy(Nono, America) ...(7)
American(Colonel) ...(8)
Backward-Chaining proof:
In Backward chaining, we will start with our goal predicate, which is
Criminal(Colonel), and then infer further rules.
Step-1:
At the first step, we will take the goal fact. And from the goal fact, we will infer other
facts, and at last, we will prove those facts true. So our goal fact is "Colonel is
Criminal," so following is the predicate of it.
Step-2:
At the second step, we will infer other facts from the goal fact which satisfy the
rules. So as we can see in Rule-1, the goal predicate Criminal(Colonel) is present
with substitution {Colonel/x}. So we will add all the conjunctive facts below the
first level and will replace x with Colonel. Here we can see American(Colonel) is
a fact, so it is proved here.
Step-3: At step 3, we extract the further fact Missile(x), from which Weapon(x)
is inferred, as it satisfies Rule-(5). Weapon(x) is also true with the substitution
of x for y.
Step-4:
At step-4, we can infer the facts Missile(x) and Owns(Nono, x) from
Sells(Colonel, x, z), which satisfies Rule-4 with the substitution of Nono in
place of z. So these two statements are proved here, and hence all the statements
are proved true using backward chaining.
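The goal-driven search above can be sketched in a ground (variable-free) form; real backward chaining needs unification, so the rules below are pre-instantiated with Colonel and Nono, and M1 is an invented constant naming the missile.

```python
# Ground version of the example knowledge base.
facts = {"American(Colonel)", "Missile(M1)",
         "Owns(Nono,M1)", "Enemy(Nono,America)"}
rules = {
    "Criminal(Colonel)": ["American(Colonel)", "Weapon(M1)",
                          "Sells(Colonel,M1,Nono)", "Enemy(Nono,America)"],
    "Weapon(M1)": ["Missile(M1)"],
    "Sells(Colonel,M1,Nono)": ["Missile(M1)", "Owns(Nono,M1)"],
}

def backward_chain(goal):
    """Prove goal by recursively proving the premises of a rule for it."""
    if goal in facts:
        return True                      # known fact: proved
    premises = rules.get(goal)           # otherwise find a rule concluding goal
    return premises is not None and all(backward_chain(p) for p in premises)
```

Starting from the goal Criminal(Colonel), the recursion reproduces Steps 1-4: it descends through Weapon and Sells to the known facts, working backward from goal to data.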
P(A|B) = P(B|A) * P(A) / P(B) ...(a)
The above equation (a) is called Bayes' rule or Bayes' theorem. This equation is
the basis of most modern AI systems for probabilistic inference.
It shows the simple relationship between joint and conditional probabilities. Here,
P(A|B) is known as the posterior, which we need to calculate; it is read as the
probability of hypothesis A given that evidence B has occurred.
P(B|A) is called the likelihood: assuming the hypothesis is true, we calculate the
probability of the evidence.
P(A) is called the prior probability: the probability of the hypothesis before
considering the evidence.
P(B) is called the marginal probability: the pure probability of the evidence.
In equation (a), in general, we can write P(B) = Σi P(Ai) * P(B|Ai); hence Bayes'
rule can be written as:
P(Ai|B) = P(B|Ai) * P(Ai) / Σk P(B|Ak) * P(Ak)
where A1, A2, A3, ..., An is a set of mutually exclusive and exhaustive events.
Applying Bayes' rule:
Bayes' rule allows us to compute the single term P(B|A) in terms of P(A|B), P(B), and
P(A). This is very useful in cases where we have a good probability of these three
terms and want to determine the fourth one. Suppose we want to perceive the effect
of some unknown cause and want to compute that cause; then Bayes' rule becomes:
P(cause | effect) = P(effect | cause) * P(cause) / P(effect)
Example-1:
Question: what is the probability that a patient has diseases meningitis with a
stiff neck?
Given Data:
A doctor is aware that disease meningitis causes a patient to have a stiff neck, and it
occurs 80% of the time. He is also aware of some more facts, which are given as
follows:
The Known probability that a patient has meningitis disease is 1/30,000.
The Known probability that a patient has a stiff neck is 2%.
Let a be the proposition that the patient has a stiff neck and b be the proposition
that the patient has meningitis, so we can calculate the following:
P(a|b) = 0.8
P(b) = 1/30000
P(a) = 0.02
Applying Bayes' rule: P(b|a) = P(a|b) * P(b) / P(a) = (0.8 * 1/30000) / 0.02
= 1/750 ≈ 0.00133.
Hence, we can assume that 1 patient out of 750 patients who have a stiff neck has
meningitis.
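The meningitis figures can be checked in a couple of lines:

```python
def posterior(likelihood, prior, evidence):
    """Bayes' rule: P(b|a) = P(a|b) * P(b) / P(a)."""
    return likelihood * prior / evidence

# Stiff neck (a) given meningitis (b), with the figures from the example.
p_b_given_a = posterior(likelihood=0.8, prior=1 / 30000, evidence=0.02)
```

The result is 1/750, confirming that a stiff neck is only very weak evidence of meningitis, because the prior is so small.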