Artificial Intelligence

ASSIGNMENT COVER

REGION: MASHONALAND WEST
PROGRAMME: BACHELOR OF SOFTWARE ENGINEERING INTAKE: 4
FULL NAME OF STUDENT: MUTINGWENDE WITNESS PIN: P1723869D
MAILING ADDRESS: wmutingwende88@gmail.com
CONTACT CELL: 0715139901 I.D NO: 70-231354-C-04
COURSE NAME: ARTIFICIAL INTELLIGENCE COURSE CODE: BSEH351
ASSIGNMENT NO. e.g 1 or 2: 1 DUE DATE: 10 July 2021
ASSIGNMENT TITLE:

MARKER’S COMMENTS:

OVERALL MARK: MARKER’S NAME:


MARKER’S SIGNATURE: DATE:

Issue Date: 3 October 2013 Revision 0


Question 1: Rule-Based Systems
i) Define agent [2]
“An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.” (Russell & Norvig, page 32). By this
definition, humans, robots, and software programs are all agents. For example, a human is an
agent because it possesses sensors such as eyes and actuators such as hands (which can also act
as sensors, through touch), and it interacts with an environment (the world).

ii) When do you say an agent is autonomous? [3]


An autonomous agent is a system situated within and a part of an environment that senses that
environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it
senses in the future. An agent is said to be autonomous when it exercises control over its own
actions.
iii) What design issues would you consider when making an autonomous rational agent? [5]
As systems become more complex, poorly specified goals or control mechanisms may cause AI
agents to produce unwanted and harmful outcomes, so agents must be designed to keep following
their initial programming intentions as the program grows in complexity. How to specify these
initial intentions is itself an obstacle to designing safe AI agents. The agent also needs
redundant safety mechanisms to ensure that any programming errors do not cascade into major
problems. Humans are autonomous intelligent agents that have largely avoided these problems,
and it has been argued that by understanding human self-regulation and goal setting we may be
better able to design safe autonomous rational agents. Two further design issues are the
program, the method of turning environmental input into actions, and the architecture, the
hardware and/or software on which the agent's program runs. Finally, it is vital to know what
the goal of the agent is in order to design a good autonomous rational agent.
Translate the following sentences A to E below into First Order Logic. Use only one logic
sentence for each English sentence. Do not include any ground variables. Also, you must use
predicates.

A. Every home has a fly [2]

∀x (home(x) → ∃y (fly(y) ∧ has(x, y)))
B. Some homes are made of pole and daga [2]
∃x (home(x) ∧ made(x, pole) ∧ made(x, daga))
C. Every home has an owner [2]
∀x (home(x) → ∃y owns(y, x))
D. I own a home [2]
∃x (home(x) ∧ owns(I, x))
E. Peter does not own a home [2]
¬∃x (home(x) ∧ owns(Peter, x))

Question 2: Knowledge representation

a. What is a Semantic Network? [2]


A semantic network is a knowledge base that represents semantic relations between concepts in
a network. It is often used as a form of knowledge representation: a directed or undirected
graph whose vertices represent concepts and whose edges represent semantic relations between
concepts, mapping or connecting semantic fields.
b. When would you want to use a Semantic Network in AI? [3]
Semantic networks are useful in AI when one has knowledge that is best understood as a set of
concepts related to one another. Most semantic networks are cognitively based. They consist of
arcs and nodes that can be organized into a taxonomic hierarchy, and they contributed the ideas
of spreading activation, inheritance, and nodes as proto-objects.
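As an illustration, such a network can be held as a set of (concept, relation, concept) triples, with inheritance following the is_a links. The concepts, relations, and function names below are made up for the sketch:

```python
# Illustrative triples only; the concepts and relations are made up.
triples = [
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("canary", "has_colour", "yellow"),
]

def related(network, node, relation):
    """All nodes reachable from `node` along edges labelled `relation`."""
    return [t for (s, r, t) in network if s == node and r == relation]

def inherits(network, node, relation, value):
    """Check a property directly, then follow is_a links upward so that
    concepts inherit the properties of their parents."""
    if value in related(network, node, relation):
        return True
    return any(inherits(network, parent, relation, value)
               for parent in related(network, node, "is_a"))
```

Here `inherits(triples, "canary", "can", "fly")` holds because canary inherits flight from bird, which is the inheritance idea mentioned above.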
Nursery Rhymes
Jack and Jill went up the hill to fetch a pail of water. Jack fell down and broke his crown and Jill
came tumbling after. He ran home as fast as he could. Jack mended his head with vinegar and
brown paper.
a. Represent the nursery rhyme above using a semantic network [5]

[Semantic network diagram: nodes Jack, Jill, Hill, Water, and Vinegar and Brown paper, linked
by edges labelled Ran and Fell down.]

A shoe is coloured i.e. black, brown etc. A shoe has a given size, and has a tongue and laces.
A person can buy and own a shoe.
b. Represent this information using semantic Frames [5]
Shoe (
(Colour : Black or Brown)
(Size : Number)
(Tongue : Yes)
(Laces : Yes)
(Owner : Person)
)

Person (
(Name : MyName)
(Age : Number)
(Sex : Male or Female)
(Buys : Shoe)
(Owns : Shoe)
)
c. Write a Prolog representation of this network [5]
% Facts describing the shoe frame network above.
colour(shoe, black).
colour(shoe, brown).
size(shoe, 7).      % example size
has_part(shoe, tongue).
has_part(shoe, laces).
buys(person, shoe).
owns(person, shoe).

Question 3: Search Strategies


i) Distinguish between an informed and an uninformed search strategy [2]
The primary difference between informed and uninformed search is that an informed search
provides guidance on where and how to find the solution, whereas an uninformed search is given
no information about the problem beyond its specification. Of the two, informed search is the
more efficient and cost-effective.
ii) What is an evaluation function? [3]
The evaluation function, also known as a heuristic function, is a way to inform the search
about the direction to a goal. It provides an informed way to guess which neighbour of a node
will lead to a goal, and it must use only information that can be readily obtained about a
node. An evaluation function calculates an approximate cost to a problem (or ranks
alternatives). For example, the problem might be finding the shortest driving distance to a
point; a heuristic cost would be the straight-line distance to the point, which is simple and
quick to calculate, an important property of most heuristics. The true distance would likely be
higher, as we have to stick to roads, and is much harder to calculate.
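A minimal sketch of the straight-line heuristic described above, with made-up coordinates:

```python
import math

def straight_line(a, b):
    """Straight-line (Euclidean) distance between points (x, y): quick to
    compute and never longer than any road route between the same points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Made-up coordinates: the 3-4-5 triangle gives a heuristic cost of 5.0.
h = straight_line((0.0, 0.0), (3.0, 4.0))
```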
iii) When do you say that a search strategy is admissible? [3]
A search strategy is admissible if it never overestimates the cost of reaching the goal, i.e.
the cost it estimates from the current point in the path to the goal is never higher than the
lowest possible cost.
iv) Write a note on what you understand by Means-Ends Analysis [5]
Means-ends analysis is a problem solving strategy that arose from the work on problem solving
of Newell and Simon (1972).  In means-ends analysis, one solves a problem by considering the
obstacles that stand between the initial problem state and the goal state.  The elimination of these
obstacles (and, recursively, the obstacles in the way of eliminating these obstacles) are then
defined as (simpler) sub goals to be achieved.  When all of the sub goals have been achieved –
when all of the obstacles are out of the way – then the main goal of interest has been achieved. 
Because the sub goals have been called up by the need to solve this main goal, means-ends
analysis can be viewed as a search strategy in which the long-range goal is always kept in mind
to guide problem solving.  It is not as near-sighted as other search techniques, like hill climbing.
Means-ends analysis is a version of divide-and-conquer.  The difference between the two is that
divide-and-conquer is purely recursive: the sub problems that are solved are always of the same
type.  Means-ends analysis is more flexible, and less obviously recursive, because the sub
problems that are defined for it need not all be of the same type.
References: Newell, A., & Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ:
Prentice-Hall.
v) What do you understand by the travelling salesman problem? [3]
The travelling salesman problem (TSP) is an algorithmic problem tasked with finding the
shortest route between a set of points and locations that must be visited. In the problem
statement, the points are the cities a salesperson might visit. The salesman's goal is to keep both
the travel costs and the distance travelled as low as possible. Focused on optimization, TSP is
often used in computer science to find the most efficient route for data to travel between various
nodes.
vi) Why would you think the Hill-Climbing Algorithm is best suited to dealing with the
travelling salesman problem? [2]
The Hill-Climbing Algorithm suits the travelling salesman problem because it is easy to find an
initial solution that visits all the cities, even though that solution will likely be very poor
compared to the optimal one. The algorithm starts from such a solution and makes small
improvements to it, such as switching the order in which two cities are visited. Eventually, a
much shorter route is likely to be obtained.
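The swap-and-keep-if-shorter idea can be sketched as follows; the distance matrix, function names, and iteration count are illustrative choices, not a definitive implementation:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def hill_climb_tsp(dist, iterations=2000, seed=0):
    """Start from an arbitrary tour and repeatedly swap two cities,
    keeping the swap only when it shortens the tour."""
    rng = random.Random(seed)
    tour = list(range(len(dist)))
    best = tour_length(tour, dist)
    for _ in range(iterations):
        i, j = rng.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
        length = tour_length(tour, dist)
        if length < best:
            best = length                        # keep the improvement
        else:
            tour[i], tour[j] = tour[j], tour[i]  # undo the swap
    return tour, best

dist = [[0, 1, 9, 9],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [9, 9, 1, 0]]   # made-up symmetric distances
tour, best = hill_climb_tsp(dist)
```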
Describe the following algorithms:
1. Simulated Annealing
Simulated Annealing (SA) is an effective and general form of optimization.  It is useful in
finding global optima in the presence of large numbers of local optima.  “Annealing” refers to an
analogy with thermodynamics, specifically with the way that metals cool and anneal.  Simulated
annealing uses the objective function of an optimization problem instead of the energy of a
material.
The algorithm is basically hill-climbing except instead of picking the best move, it picks a
random move.  If the selected move improves the solution, then it is always accepted. 
Otherwise, the algorithm makes the move anyway with some probability less than 1. The
probability decreases exponentially with the “badness” of the move, which is the amount deltaE
by which the solution is worsened (that is, by which the energy is increased):
Prob(accepting uphill move) ≈ exp(-deltaE / (kT))
A parameter T is also used to determine this probability.  It is analogous to temperature in an
annealing system.  At higher values of T, uphill moves are more likely to occur.  As T tends to
zero, they become more and more unlikely, until the algorithm behaves more or less like hill-
climbing.  In a typical SA optimization, T starts high and is gradually decreased according to an
“annealing schedule”.  The parameter k is some constant that relates temperature to energy (in
nature it is Boltzmann’s constant.)
Simulated annealing is typically used in discrete, but very large, configuration spaces, such as
the set of possible orders of cities in the Travelling Salesman problem and in VLSI routing. It has
a broad range of application that is still being explored.
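A minimal sketch of the loop described above; the energy function, neighbour move, starting temperature, and cooling rate are all made-up choices for illustration:

```python
import math
import random

def simulated_annealing(energy, neighbour, state, t_start=10.0,
                        t_min=1e-3, cooling=0.95, seed=0):
    """Generic simulated-annealing loop. `energy` scores a state (lower is
    better) and `neighbour` proposes a random move."""
    rng = random.Random(seed)
    t = t_start
    current, e = state, energy(state)
    while t > t_min:
        candidate = neighbour(current, rng)
        delta = energy(candidate) - e
        # Always accept improvements; accept uphill moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, e = candidate, e + delta
        t *= cooling  # geometric annealing schedule
    return current, e

# Toy usage: minimise x**2 over the integers by +/-1 moves.
best, value = simulated_annealing(
    lambda x: x * x,                         # energy to minimise
    lambda x, rng: x + rng.choice((-1, 1)),  # random neighbour move
    state=40)
```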
2. Hill Climbing [5]
A hill-climbing algorithm is a local search algorithm that moves continuously upward
(increasing) until the best solution is attained. This algorithm comes to an end when the peak is
reached. This algorithm has a node that comprises two parts: state and value. It begins with a
non-optimal state (the hill’s base) and upgrades this state until a certain precondition is met. The
heuristic function is used as the basis for this precondition. The process of continuously
improving the current state across iterations is called climbing, which explains why this is
termed a hill-climbing algorithm.
A hill-climbing algorithm’s objective is to attain an optimal state that is an upgrade of the
existing state. When the current state is improved, the algorithm will perform further incremental
changes to the improved state. This process will continue until a peak solution is achieved. The
peak state cannot undergo further improvements.
A hill-climbing algorithm has four main features:
It employs a greedy approach: it moves in a direction in which the cost function is optimized,
which enables the algorithm to establish local maxima or minima.
No Backtracking: A hill-climbing algorithm only works on the current state and succeeding
states (future). It does not look at the previous states.
Feedback mechanism: The algorithm has a feedback mechanism that helps it decide on the
direction of movement (whether up or down the hill). The feedback mechanism is enhanced
through the generate-and-test technique. 
Incremental change: The algorithm improves the current solution by incremental changes.

3. A* Search Algorithm [5]


The A* algorithm is a searching algorithm that searches for the shortest path between the
initial and the final state. It is used in various applications, such as maps.
In maps the A* algorithm is used to calculate the shortest distance between the source (initial
state) and the destination (final state).
A* algorithm has 3 parameters:
g : the cost of moving from the initial cell to the current cell; basically, the sum of the
step costs incurred since leaving the first cell.
h : also known as the heuristic value, the estimated cost of moving from the current cell to
the final cell. The actual cost cannot be calculated until the final cell is reached, so h is
an estimate, and we must make sure the cost is never overestimated.
f : the sum of g and h, so f = g + h
The way that the algorithm makes its decisions is by taking the f-value into account. The
algorithm selects the smallest f-valued cell and moves to that cell. This process continues until
the algorithm reaches its goal cell.
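The f = g + h selection loop can be sketched as follows; the graph, node names, and zero heuristic are made up for illustration (with h = 0 the search degenerates into uniform-cost search):

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search. `neighbours(node)` yields (next_node, step_cost) pairs
    and `h(node)` is an admissible heuristic estimate to the goal."""
    open_set = [(h(start), 0, start, [start])]  # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry superseded by a cheaper one
        for nxt, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                f2 = g2 + h(nxt)  # f = g + h drives the expansion order
                heapq.heappush(open_set, (f2, g2, nxt, path + [nxt]))
    return None, float("inf")

# Made-up graph of step costs.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
path, cost = a_star("A", "D", lambda n: graph[n], lambda n: 0)
```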

4. Iterative Deepening [5]


Iterative Deepening Search is an iterative graph searching strategy that takes advantage of the
completeness of the Breadth-First Search (BFS) strategy but uses much less memory in each
iteration (similar to  Depth-First Search).
IDS achieves the desired completeness by enforcing a depth limit on DFS, which removes the
possibility of getting stuck in an infinite or very long branch. It searches each branch of a
node from left to right until it reaches the required depth; once it has, IDS goes back to the
root node and explores a different branch, just as DFS would.
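A minimal sketch of IDS as described above, assuming a small made-up tree; `depth_limited` and `iterative_deepening` are illustrative names:

```python
def depth_limited(node, goal, children, limit, path=None):
    """DFS that refuses to descend below `limit`; returns a path or None."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in children(node):
        found = depth_limited(child, goal, children, limit - 1, path)
        if found:
            return found
    return None

def iterative_deepening(start, goal, children, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... to get BFS's
    completeness with DFS's memory footprint."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal, children, limit)
        if found:
            return found
    return None

# Made-up tree: A branches to B and C, which lead to D and E.
tree = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
route = iterative_deepening("A", "E", lambda n: tree[n])
```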
Iterative deepening A* (IDA*) is a graph traversal and path search algorithm that can find
the shortest path between a designated start node and any member of a set of goal nodes in a
weighted graph. It is a variant of iterative deepening search that borrows the idea to use a
heuristic function to evaluate the remaining cost to get to the goal from the A* search algorithm.
Since it is a depth-first search algorithm, its memory usage is lower than in A*, but unlike
ordinary iterative deepening search, it concentrates on exploring the most promising nodes and
thus does not go to the same depth everywhere in the search tree. Unlike A*, IDA* does not
utilize dynamic programming and therefore often ends up exploring the same nodes many times.
While the standard iterative deepening depth-first search uses search depth as the cutoff for
each iteration, IDA* uses the more informative f(n) = g(n) + h(n), where g(n) is the cost to
travel from the root to node n and h(n) is a problem-specific heuristic estimate of the cost to
travel from n to the goal.

5. British Museum [5]


The British Museum algorithm is a general approach to find a solution by checking all
possibilities one by one, beginning with the smallest. The term refers to a conceptual, not a
practical, technique used where the number of possibilities is enormous.
For instance, one may, in theory, find the smallest program that solves a particular problem in
the following way:
Generate all possible source codes of length one character. Check each one to see if it solves the
problem. (Note: the halting problem makes this check troublesome.)
If not, generate and check all programs of two characters, three characters and so forth.
Conceptually, this finds the smallest program, but in practice it tends to take an unacceptable
amount of time (more than the lifetime of the universe, in many instances).
Similar arguments can be made to show that optimization, theorem proving, or language
recognition is possible or impossible.
Newell, Shaw, and Simon  called this procedure the British Museum algorithm.
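The generate-and-check procedure can be sketched on a toy stand-in, searching strings rather than programs (which sidesteps the halting-problem caveat above); the alphabet and predicate are made up:

```python
from itertools import product

def british_museum(alphabet, test, max_len=5):
    """Enumerate every string of length 1, 2, ... over `alphabet` and
    return the first (hence shortest) one satisfying `test`."""
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if test(candidate):
                return candidate
    return None

# Toy stand-in for "smallest program that solves the problem":
# the shortest string over {a, b} that contains "ab".
found = british_museum("ab", lambda s: "ab" in s)
```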

Question 4 Game Playing

Describe

a) MinMax principle [5]

The minimax algorithm is a recursive, backtracking algorithm used in decision-making and game
theory. It provides an optimal move for the player on the assumption that the opponent is also
playing optimally.
The algorithm uses recursion to search through the game tree.
Minimax is mostly used for two-player games in AI, such as chess, checkers, tic-tac-toe, and
Go; it computes the minimax decision for the current state.
In this algorithm the two players are called MAX and MIN. They are opponents of each other:
MAX selects the maximized value and MIN selects the minimized value, so each plays to secure
the maximum benefit for itself while conceding the minimum to the opponent.
The minimax algorithm performs a depth-first search to explore the complete game tree: it
proceeds all the way down to the terminal nodes of the tree and then backs the values up the
tree as the recursion unwinds.
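The backed-up MAX/MIN values can be sketched on a tiny made-up game tree:

```python
def minimax(node, is_max, value, children):
    """Depth-first minimax: score leaves with `value`, then back the
    max (on MAX levels) or min (on MIN levels) up the tree."""
    kids = children(node)
    if not kids:
        return value(node)
    scores = [minimax(k, not is_max, value, children) for k in kids]
    return max(scores) if is_max else min(scores)

# Tiny made-up game tree; leaf payoffs are from MAX's point of view.
tree = {"root": ["l", "r"], "l": ["l1", "l2"], "r": ["r1", "r2"],
        "l1": [], "l2": [], "r1": [], "r2": []}
payoff = {"l1": 3, "l2": 5, "r1": 2, "r2": 9}
decision = minimax("root", True, payoff.get, lambda n: tree[n])
```

MAX at the root chooses between MIN's replies min(3, 5) = 3 on the left and min(2, 9) = 2 on the right, so the backed-up decision value is 3.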

b) Alpha-Beta Pruning [5]

Alpha-Beta Pruning is a method that optimizes the minimax algorithm. The number of states to
be visited by the minimax algorithm is exponential, which shoots up the time complexity. Some
of the branches of the decision tree are useless, and the same result is achieved if they are
never visited. Alpha-Beta Pruning therefore cuts off these useless branches and, in the best
case, halves the exponent of the search effort.
Alpha-Beta Pruning derives its name from the two parameters, alpha and beta, that it uses to
decide whether to prune a branch.
– Alpha is used and updated only by the Maximizer and represents the maximum value found so
far. Its initial value is -∞. The value of alpha is passed down to child nodes, while the
actual node values are passed up during backtracking.
– Beta is used and updated only by the Minimizer and represents the minimum value found so
far. Its initial value is ∞. Again, beta is passed down to child nodes, while the actual node
values are passed up during backtracking.
The condition used by alpha beta pruning to prune the useless branches is: alpha >= beta
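A sketch of the pruning condition in code, on a small made-up tree; `alphabeta` and the payoffs are illustrative:

```python
import math

def alphabeta(node, is_max, value, children, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: once alpha >= beta, the remaining
    siblings cannot change the decision, so they are cut off."""
    kids = children(node)
    if not kids:
        return value(node)
    if is_max:
        best = -math.inf
        for k in kids:
            best = max(best, alphabeta(k, False, value, children, alpha, beta))
            alpha = max(alpha, best)   # Maximizer updates alpha only
            if alpha >= beta:
                break                  # prune the useless branches
        return best
    best = math.inf
    for k in kids:
        best = min(best, alphabeta(k, True, value, children, alpha, beta))
        beta = min(beta, best)         # Minimizer updates beta only
        if alpha >= beta:
            break
    return best

# A tree plain minimax would search in full; here the leaf r2 is pruned
# once the right MIN node's beta drops to 2 while alpha is already 3.
tree = {"root": ["l", "r"], "l": ["l1", "l2"], "r": ["r1", "r2"],
        "l1": [], "l2": [], "r1": [], "r2": []}
payoff = {"l1": 3, "l2": 5, "r1": 2, "r2": 9}
decision = alphabeta("root", True, payoff.get, lambda n: tree[n])
```

The result matches plain minimax (the decision value is 3), only with fewer leaves evaluated.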
