Artificial Intelligence


CIA II INTERNAL EXAMINATION ANSWER KEY

PART A (5x2=10 marks)

1. What is an agent?

An “agent” is an independent program or entity that interacts with its environment by perceiving its surroundings via sensors, then acting through actuators or effectors.

2. Name any two types of agents.

Simple reflex agents.

Model-based reflex agents.

3. List any four features of environment.

 Fully observable vs Partially Observable


 Static vs Dynamic
 Discrete vs Continuous
 Deterministic vs Stochastic

4. Define heuristics.

A heuristic is a method of problem-solving where the goal is to come up with a workable (not necessarily optimal) solution in a feasible amount of time.

5. Compare shoulder and flat local maximum.

Flat local maximum: It is a flat space in the landscape where all the neighboring states of the current state have the same value.
Shoulder: It is a plateau region which has an uphill edge.

PART B (2x13=26 marks)

6. Illustrate any three uninformed search algorithms with example.


o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving on to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.

o Example traversal order: S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K

Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed until the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor (the maximum number of successors of any node).

T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
Depth-first Search
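The FIFO-queue behaviour described above can be sketched in Python; the adjacency list below is a small hypothetical example, not the figure from the question paper:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue."""
    frontier = deque([[start]])          # queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: oldest (shallowest) path out first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

# Hypothetical tree: S's children are A and B, and so on
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'], 'C': ['G']}
print(bfs(graph, 'S', 'G'))              # ['S', 'A', 'C', 'G']
```

A deque gives O(1) pops from the front; a plain list with `pop(0)` would also work but costs O(n) per pop.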
Depth-first Search

o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called depth-first search because it starts from the root node and follows each path to its greatest depth before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
o Root node ---> Left node ---> Right node.
o It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack, as E has no other successor and the goal node is still not found. After backtracking it will traverse node C and then G, where it terminates because the goal node is found.

o Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
o Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm, O(b^m), where m is the maximum depth of any node.
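A minimal recursive sketch of this backtracking behaviour, using a hypothetical graph chosen so the visit order mirrors the S, A, B, D, E, C, G description above:

```python
def dfs(graph, goal, path, visited):
    """Recursive depth-first search: follow one path to its greatest depth,
    then backtrack when a dead end is reached."""
    node = path[-1]
    if node == goal:
        return path
    for succ in graph.get(node, []):
        if succ not in visited:
            visited.add(succ)
            found = dfs(graph, goal, path + [succ], visited)
            if found:
                return found
    return None                          # dead end: backtrack

# Hypothetical graph: visits run S, A, B, D, E (backtrack), C, G
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'G', ['S'], {'S'}))     # ['S', 'C', 'G']
```

The recursion stack plays the role of the explicit stack mentioned in the bullets.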

Depth-Limited Search Algorithm:


A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search can solve the drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.

Example:

Completeness: The DLS algorithm is complete if the solution lies above the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even when ℓ > d.
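A sketch of DLS under these assumptions: it is ordinary DFS, except that a node at the depth limit ℓ is treated as a leaf. The graph below is hypothetical:

```python
def dls(graph, node, goal, limit, path):
    """Depth-limited search: DFS, but a node at the depth limit is treated
    as if it had no successors."""
    if node == goal:
        return path
    if limit == 0:                       # depth limit reached: act as a leaf
        return None
    for succ in graph.get(node, []):
        if succ not in path:             # avoid cycles along the current path
            found = dls(graph, succ, goal, limit - 1, path + [succ])
            if found:
                return found
    return None

graph = {'S': ['A', 'C'], 'A': ['B'], 'C': ['G']}   # hypothetical graph
print(dls(graph, 'S', 'G', 2, ['S']))   # ['S', 'C', 'G']  (goal within the limit)
print(dls(graph, 'S', 'G', 1, ['S']))   # None             (goal below the limit)
```

The second call illustrates the completeness condition: the search fails when the solution lies below the depth limit.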

7. Compare Best first search and A* algorithm.

Best-first Search Algorithm (Greedy Search):

Greedy best-first search always selects the path which appears best at that moment. It is a combination of depth-first search and breadth-first search, and it uses a heuristic function to guide the search, allowing us to take the advantages of both algorithms. With best-first search, at each step we can choose the most promising node: we expand the node which appears closest to the goal, where the closeness is estimated by the heuristic function, i.e.

f(n) = h(n)

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented using a priority queue.

Best first search algorithm:

o Step 1: Place the starting node into the OPEN list.
o Step 2: If the OPEN list is empty, stop and return failure.
o Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
o Step 4: Expand the node n and generate the successors of node n.
o Step 5: Check each successor of node n to find whether any of them is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
o Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.
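The steps above can be sketched with Python's `heapq` module as the OPEN priority queue. The graph and h-values below are assumptions chosen so the result matches the S ---> B ---> F ---> G trace of the worked example:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: OPEN is a priority queue ordered by h(n),
    CLOSED is the set of already-expanded nodes (Steps 1-7 above)."""
    open_list = [(h[start], [start])]
    closed = set()
    while open_list:                     # Step 2: stop when OPEN is empty
        _, path = heapq.heappop(open_list)   # Step 3: lowest h(n) out first
        node = path[-1]
        if node == goal:                 # Step 5: goal test
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []): # Step 4: expand successors
            if succ not in closed:       # Step 6: skip expanded nodes
                heapq.heappush(open_list, (h[succ], path + [succ]))
    return None

# Assumed graph and h-values reproducing the S ---> B ---> F ---> G trace
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```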

Advantages:
o Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:
o In the worst-case scenario it can behave like an unguided depth-first search.
o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.
Example:

Consider the below search problem, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the below table.

In this search example, we are using two lists, OPEN and CLOSED. The iterations for traversing the above example are as follows.

Expand the nodes of S and put them in the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ----> B ----> F ----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Completeness: Greedy best-first search is not complete, even if the given state space is finite.

A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines features of UCS and greedy best-first search, which lets it solve the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function; it expands a smaller search tree and provides an optimal result faster. A* is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only the nodes with the lowest value of f(n) are expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:

Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise go to Step 4.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
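A sketch of these steps in Python. The edge costs and heuristic values below are assumptions chosen to reproduce the iteration trace of the worked example (the original figure and table are not reproduced in this text):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, [start])]     # entries are (f, g, path)
    best_g = {start: 0}
    while open_list:
        f, g, path = heapq.heappop(open_list)
        node = path[-1]
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # keep the lowest g(n')
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, path + [succ]))
    return None, float('inf')

# Assumed edge costs and h-values, chosen so that f(S)=5, f(S->A)=4,
# f(S->G)=10, f(S->A->C)=4, f(S->A->B)=7, f(S->A->C->G)=6, as in the example
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))        # (['S', 'A', 'C', 'G'], 6)
```

Tracking the best-known g(n') and only re-inserting improved paths plays the role of the back-pointer update in Step 5.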

Advantages:
o The A* search algorithm performs better than the other search algorithms discussed here.
o The A* search algorithm is optimal and complete (given an admissible heuristic).
o This algorithm can solve very complex problems.

Disadvantages:
o If the heuristic is not admissible, it may not produce the shortest path, since it is based on heuristics and approximation.
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the f(n)
of each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach
any node from start state.

Here we will use the OPEN and CLOSED lists.


Solution:

Initialization: {(S, 5)}

Iteration 1: {(S--> A, 4), (S-->G, 10)}

Iteration 2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration 3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 gives the final result: S--->A--->C--->G is the optimal path, with cost 6.

8. Explain simple hill climbing algorithm.


o The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.
o The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the widely discussed examples is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.
o It is also called greedy local search, as it only looks at its immediate neighbor states and not beyond them.
o A node of the hill climbing algorithm has two components: state and value.
o Hill climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
o Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack in the search space, as it does not remember previous states.
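A minimal sketch of simple hill climbing with these features (greedy, no backtracking), on a hypothetical one-dimensional objective:

```python
def simple_hill_climbing(objective, neighbors, state):
    """Simple hill climbing: move to the first neighbor that improves the
    objective; stop at a peak, i.e. when no neighbor is better."""
    while True:
        improved = False
        for succ in neighbors(state):
            if objective(succ) > objective(state):
                state = succ             # greedy uphill move, no backtracking
                improved = True
                break                    # take the first improving neighbor
        if not improved:
            return state                 # peak (possibly only a local maximum)

# Toy objective with a single peak at x = 3 (hypothetical example)
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(f, step, 0))  # 3
```

On a landscape with several peaks, the same loop can stop at a local maximum or wander on a flat plateau, which is exactly the drawback discussed in the state-space diagram.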
State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, the goal of the search is to find the global minimum. If the function on the Y-axis is an objective function, the goal of the search is to find the global maximum.

9. Outline in detail about heuristics and its types.

Types of Heuristics

There are many different kinds of heuristics. While each type plays a role in decision-making, they occur in different contexts. Understanding the types can help you better understand which one you are using and when.

Availability
The availability heuristic involves making decisions based upon how easy it is to
bring something to mind. When you are trying to make a decision, you might
quickly remember a number of relevant examples. Since these are more readily
available in your memory, you will likely judge these outcomes as being more
common or frequently occurring.
For example, if you are thinking of flying and suddenly think of a number of recent
airline accidents, you might feel like air travel is too dangerous and decide to travel
by car instead. Because those examples of air disasters came to mind so easily, the
availability heuristic leads you to think that plane crashes are more common than
they really are.

Familiarity

The familiarity heuristic refers to how people tend to have more favorable opinions
of things, people, or places they've experienced before as opposed to new ones. In
fact, given two options, people may choose something they're more familiar with
even if the new option provides more benefits.

Representativeness
The representativeness heuristic involves making a decision by comparing the
present situation to the most representative mental prototype. When you are trying
to decide if someone is trustworthy, you might compare aspects of the individual to
other mental examples you hold.

A soft-spoken older woman might remind you of your grandmother, so you might immediately assume that she is kind, gentle, and trustworthy. However, this is an example of a heuristic bias, as you can't know that someone is trustworthy based on their age alone.

Affect
The affect heuristic involves making choices that are influenced by the emotions
that an individual is experiencing at that moment. For example, research has shown
that people are more likely to see decisions as having benefits and lower risks when
they are in a positive mood. Negative emotions, on the other hand, lead people to
focus on the potential downsides of a decision rather than the possible benefits.
Anchoring
The anchoring bias involves the tendency to be overly influenced by the first bit of
information we hear or learn. This can make it more difficult to consider other
factors and lead to poor choices. For example, anchoring bias can influence how
much you are willing to pay for something, causing you to jump at the first offer
without shopping around for a better deal.
Scarcity

Scarcity is a principle in heuristics in which we view things that are scarce or less
available to us as inherently more valuable. The scarcity heuristic is one often used
by marketers to influence people to buy certain products. This is why you'll often
see signs that advertise "limited time only" or that tell you to "get yours while
supplies last."
Trial and Error

Trial and error is another type of heuristic in which people use a number of different
strategies to solve something until they find what works. Examples of this type of
heuristic are evident in everyday life. People use trial and error when they're
playing video games, finding the fastest driving route to work, and learning to ride
a bike (or learning any new skill).

PART C (1x14=14 marks)

10. Demonstrate search algorithms in detail.

Search algorithms are one of the most important areas of Artificial Intelligence. This
topic will explain all about the search algorithms in AI.

Problem-solving agents:

In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation. In this topic, we will learn about various problem-solving search algorithms.

o Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:

a. Search Space: Search space represents the set of possible solutions which a system may have.
b. Start State: It is the state from which the agent begins the search.
c. Goal test: It is a function which observes the current state and returns whether the goal state is achieved or not.

o Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
o Actions: It gives the description of all the actions available to the agent.
o Transition model: A description of what each action does; it can be represented as a transition model.
o Path Cost: It is a function which assigns a numeric cost to each path.
o Solution: It is an action sequence which leads from the start node to the goal node.
o Optimal Solution: A solution that has the lowest cost among all solutions.
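The components listed above (start state, goal test, actions, transition model, path cost) can be sketched as a small container class. All names here are illustrative, not from any specific library:

```python
class SearchProblem:
    """Illustrative container for the components of a search problem."""

    def __init__(self, start, goals, transitions):
        self.start = start               # start state
        self.goals = goals               # set of goal states
        self.transitions = transitions   # state -> {action: (next_state, cost)}

    def goal_test(self, state):          # is the goal state achieved?
        return state in self.goals

    def actions(self, state):            # actions available to the agent
        return list(self.transitions.get(state, {}))

    def result(self, state, action):     # transition model
        return self.transitions[state][action][0]

    def step_cost(self, state, action):  # contribution to the path cost
        return self.transitions[state][action][1]

# A two-state toy problem: one action 'go' leads from S to the goal G at cost 3
p = SearchProblem('S', {'G'}, {'S': {'go': ('G', 3)}})
print(p.actions('S'), p.result('S', 'go'), p.step_cost('S', 'go'))
```

A search algorithm such as BFS or A* would take an object like this and explore states via `actions` and `result` until `goal_test` succeeds.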

Properties of Search Algorithms:

Following are the four essential properties of search algorithms to compare the
efficiency of these algorithms:

Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, it is said to be an optimal solution.

Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.

Space Complexity: It is the maximum storage space required at any point during the search, as a function of the complexity of the problem.

Types of search algorithms

Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.
Uninformed/Blind Search:

The uninformed search does not use any domain knowledge, such as closeness or the location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search searches the search tree without any information about the search space, such as the initial state operators and the test for the goal, so it is also called blind search. It examines each node of the tree until it reaches the goal node.

It can be divided into five main types:

o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
Informed Search

Informed search algorithms use domain knowledge. In an informed search, problem information is available which can guide the search. Informed search strategies can find a solution more efficiently than an uninformed search strategy. Informed search is also called heuristic search.

A heuristic is a method which is not always guaranteed to find the best solution, but is designed to find a good solution in a reasonable time.

Informed search can solve much more complex problems which could not be solved otherwise.

The travelling salesman problem is an example of a problem solved with informed search algorithms.

1. Greedy Search
2. A* Search

11. Illustrate the nature of environments and its features.

An environment in artificial intelligence is the surrounding of the agent. The agent takes input from the environment through sensors and delivers output to the environment through actuators. There are several types of environments:
 Fully Observable vs Partially Observable
 Deterministic vs Stochastic
 Competitive vs Collaborative
 Single-agent vs Multi-agent
 Static vs Dynamic
 Discrete vs Continuous
 Episodic vs Sequential
 Known vs Unknown
Environment types

1. Fully Observable vs Partially Observable


 When an agent's sensors are capable of sensing or accessing the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
 Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
 An environment is called unobservable when the agent has no sensors at all.
 Examples:
 Chess – the board is fully observable, and so are the opponent’s
moves.
 Driving – the environment is partially observable because what’s
around the corner is not known.
2. Deterministic vs Stochastic
 When the agent's current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.
 A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
 Examples:
 Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
 Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.
3. Competitive vs Collaborative
 An agent is said to be in a competitive environment when it competes against
another agent to optimize the output.
 The game of chess is competitive as the agents compete with each other to win
the game which is the output.
 An agent is said to be in a collaborative environment when multiple agents
cooperate to produce the desired output.
 When multiple self-driving cars are found on the roads, they cooperate with
each other to avoid collisions and reach their destination which is the output
desired.
4. Single-agent vs Multi-agent
 An environment consisting of only one agent is said to be a single-agent
environment.
 A person left alone in a maze is an example of the single-agent system.
 An environment involving more than one agent is a multi-agent environment.
 The game of football is multi-agent as it involves 11 players in each team.
5. Dynamic vs Static
 An environment that keeps changing while the agent is acting is said to be dynamic.
 A roller coaster ride is dynamic as it is set in motion and the environment
keeps changing every instant.
 An idle environment with no change in its state is called a static environment.
 An empty house is static as there’s no change in the surroundings when an
agent enters.
6. Discrete vs Continuous
 If an environment consists of a finite number of actions that can be deliberated
in the environment to obtain the output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite number of moves. The
number of moves might vary with every game, but still, it’s finite.
 An environment in which the actions performed cannot be counted, i.e. is not discrete, is said to be continuous.
 Self-driving cars are an example of continuous environments, as their actions, such as driving and parking, cannot be counted.
7. Episodic vs Sequential
 In an Episodic task environment, each of the agent’s actions is divided into
atomic incidents or episodes. There is no dependency between current and
previous incidents. In each incident, an agent receives input from the
environment and then performs the corresponding action.
 Example: Consider a Pick and Place robot used to detect defective parts on conveyor belts. Each time, the robot (agent) makes its decision based on the current part only, i.e. there is no dependency between current and previous decisions.
 In a Sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future.
 Example:
Checkers- Where the previous move can affect all the following moves.
8. Known vs Unknown
In a known environment, the output for all probable actions is given.
In an unknown environment, by contrast, the agent has to gain knowledge about how the environment works before it can make a decision.
