Unit 1
"It is a branch of computer science by which we can create intelligent machines that
can behave like humans, think like humans, and are able to make decisions."
Artificial Intelligence exists when a machine can exhibit human skills such as
learning, reasoning, and problem solving. With Artificial Intelligence you do not need to
preprogram a machine to do some work; instead, you can create a machine with programmed
algorithms that can work with its own intelligence, and that is the power of AI. It is
believed that AI is not a new idea: some people say that, as per Greek myth, there
were mechanical men in early days that could work and behave like humans.
DEFINITIONS OF AI
AI definitions can be categorized into four groups, as follows:
• Systems that think like humans
• Systems that think rationally
• Systems that act like humans
• Systems that act rationally
• With the help of AI, you can build robots that can work in environments where
human survival would be at risk.
• AI opens a path for other new technologies, new devices, and new opportunities.
• Proving a theorem
• Playing chess
• Planning a surgical operation
• Driving a car in traffic
5. Creating systems which can exhibit intelligent behavior, learn new things by themselves,
demonstrate, explain, and advise their users.
To create AI, we should first know how intelligence is composed. Intelligence
is an intangible faculty of our brain that combines reasoning, learning, problem solving,
perception, language understanding, etc. To achieve these capabilities in a machine or software,
Artificial Intelligence requires the following disciplines:
• Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician, pioneered machine
learning in 1950. Turing published "Computing Machinery and Intelligence", in
which he proposed a test that checks a machine's ability to exhibit intelligent
behavior equivalent to human intelligence, now called the Turing test.
• Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence
program", named the "Logic Theorist". This program proved
38 of 52 mathematics theorems and found new and more elegant proofs for some
of them.
• Year 1966: Researchers emphasized developing algorithms which could solve
mathematical problems. Joseph Weizenbaum created the first chatbot, named
ELIZA, in 1966.
• Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in
Japan.
• The period from 1974 to 1980 was the first AI winter. AI winter refers
to a period in which computer scientists dealt with a severe shortage of government
funding for AI research.
• During AI winters, public interest in artificial intelligence decreased.
• Year 1980: After the AI winter, AI came back with "expert systems". Expert systems
were programs that emulate the decision-making ability of a human expert.
• In the year 1980, the first national conference of the American Association for Artificial
Intelligence was held at Stanford University.
• The period from 1987 to 1993 was the second AI winter.
• Investors and governments again stopped funding AI research because of high costs and
inefficient results, even though expert systems such as XCON had been very cost effective.
• Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov,
becoming the first computer to beat a reigning world chess champion.
• Year 2002: For the first time, AI entered the home, in the form of Roomba, a vacuum
cleaner.
1.4.7 Deep learning, big data and artificial general intelligence (2011-present)
• Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to
solve complex questions as well as riddles. Watson proved that it could understand
natural language and solve tricky questions quickly.
• Year 2012: Google launched the Android app feature "Google Now", which could
provide predictive information to the user.
• Year 2014: The chatbot "Eugene Goostman" won a competition based on the
famous "Turing test."
• Year 2018: The "Project Debater" from IBM debated on complex topics with two master
debaters and also performed extremely well.
• Google demonstrated an AI program, "Duplex", a virtual assistant which
booked a hairdresser appointment over the phone, and the woman on the other side did not
notice that she was talking to a machine.
AI has now developed to a remarkable level. Concepts such as deep learning, big data, and data
science are now booming. Companies like Google, Facebook, IBM, and
Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence
is inspiring and promises ever higher intelligence.
1.4.8 Future of artificial intelligence
Autonomous Transportation:
In the future, automated transportation technology will evolve, and our roads will see
scenes reminiscent of Back to the Future: public buses, cabs, and even
private vehicles will go driverless and on autopilot. With more precision, smart vehicles will
take over the roads and pave the way for safer, faster, and more economical transport systems.
Robots into Risky Jobs:
Today, some of the most dangerous jobs are done by humans. From cleaning sewage to
fighting fires and defusing bombs, it's we who get down, get our hands dirty, and risk our lives,
and the number of human lives we lose in these processes is very high. In the near future, we can
expect machines or robots to take care of these tasks. As artificial intelligence evolves and smarter
robots roll out, we can expect them to replace humans at some of the riskiest jobs in the world.
That's the only time we expect automation to take away jobs.
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through effectors. An agent runs in a cycle of perceiving, thinking,
and acting.
Hence the world around us is full of agents, such as thermostats, cellphones, and cameras; even
we ourselves are agents.
Sensor: A sensor is a device which detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of a machine that convert energy into motion.
The actuators are responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and a display screen.
Robotic Agent: A robotic agent may have cameras and infrared range finders for
sensors and various motors for actuators.
Software Agent: A software agent can have keystrokes and file contents as sensory input,
and it acts on those inputs and displays output on the screen.
2.1 PROBLEM-SOLVING AGENTS
In Artificial Intelligence, search techniques are universal problem-solving methods.
Rational agents, or problem-solving agents, in AI mostly use these search strategies or algorithms
to solve a specific problem and provide the best result. Problem-solving agents are goal-based
agents and use an atomic representation.
2.1.1 Search Algorithm Terminologies
Search: Searching is a step-by-step procedure for solving a search problem in a given search space.
A search problem can have three main factors:
• Search Space: The set of possible solutions a system may have.
• Start State: The state from which the agent begins the search.
• Goal Test: A function which observes the current state and returns whether the goal state
has been achieved.
Search tree: A tree representation of the search problem is called a search tree. The root of the
search tree is the root node, which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition model: A description of what each action does, represented as a transition
model.
Path Cost: A function which assigns a numeric cost to each path.
Solution: An action sequence which leads from the start node to the goal node.
Optimal Solution: A solution with the lowest cost among all solutions.
Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:
• DFS requires less memory, as it only needs to store a stack of the nodes on the path
from the root node to the current node.
• It takes less time than BFS to reach the goal node (if it traverses the right path).
This algorithm performs depth-first search up to a certain "depth limit" and keeps increasing
the depth limit after each iteration until the goal node is found. This search algorithm combines
the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
Iterative deepening is a useful uninformed search when the search space is large and the
depth of the goal node is unknown.
Advantages:
• It combines the benefits of BFS and DFS search algorithm in terms of fast search and
memory efficiency.
Disadvantages:
• The main drawback of IDDFS is that it repeats all the work of the previous phase.
Example:
The following tree structure illustrates iterative deepening depth-first search. The IDDFS
algorithm performs successive iterations until it finds the goal node. The iterations performed by
the algorithm are given as:
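The iterative deepening loop described above can be sketched in code; the small graph and its node names are invented for illustration:

```python
# A minimal sketch of iterative deepening depth-first search (IDDFS) on a
# graph given as an adjacency dict. Each iteration runs a depth-limited DFS,
# and the depth limit grows until the goal is found.

def depth_limited_search(graph, node, goal, limit, path):
    path.append(node)
    if node == goal:
        return True
    if limit > 0:
        for child in graph.get(node, []):
            if depth_limited_search(graph, child, goal, limit - 1, path):
                return True
    path.pop()          # dead end within the limit: backtrack
    return False

def iddfs(graph, start, goal, max_depth=10):
    # Keep increasing the depth limit until the goal node is found.
    for limit in range(max_depth + 1):
        path = []
        if depth_limited_search(graph, start, goal, limit, path):
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iddfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

Note that, as the disadvantage above says, each new iteration repeats all the work of the previous one: the depth-0 and depth-1 searches are rerun before the depth-2 search succeeds.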
At each point in the search space, only those nodes are expanded which have the lowest value of
f(n), and the algorithm terminates when the goal node is found.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.
Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation
function (g + h). If node n is the goal node, return success and stop; otherwise:
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each
CS8491 – AI & ML page
55
successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the
evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which
reflects the lowest g(n') value.
Step 6: Return to Step 2.
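The steps above can be sketched as a small program. The graph, edge costs, and heuristic values below are invented for illustration, and the sketch simplifies Step 5 by re-inserting improved entries into OPEN instead of updating back pointers (which is adequate when the heuristic is consistent):

```python
import heapq

# A minimal A* sketch over a weighted graph. OPEN is a priority queue
# ordered by f = g + h; CLOSED is the set of already-expanded nodes.

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g                        # success: path and its cost
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph.get(node, []):
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")                     # OPEN empty: failure

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 2, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'A', 'B', 'G'] 4
```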
Advantages:
• The A* search algorithm performs better than other search algorithms.
• A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.
Disadvantages:
• It does not always produce the shortest path, as it is mostly based on heuristics and
approximation.
Problems are the issues which come across any system. A solution is needed to solve that
particular problem.
Steps to Solve Problem using Artificial Intelligence
Defining the Problem:
The problem must be defined precisely. The definition should include the possible
initial as well as final situations which would result in an acceptable solution.
Analyzing the Problem:
The problem and its requirements must be analyzed, as a few features can have
an immense impact on the resulting solution.
Identification of Solutions:
This phase generates a reasonable number of solutions to the given problem within a particular
range.
Choosing a Solution:
From all the identified solutions, the best solution is chosen based on the results produced
by the respective solutions.
Implementation:
Problem Formulation
An element contains the value 0 if the corresponding square is blank, 1 if it is filled with
"O", and 2 if it is filled with "X".
Hence the starting state is {0,0,0,0,0,0,0,0,0}. A goal state, or winning combination, is a
board position having "O" or "X" alone in one of the position combinations ({1,2,3}, {4,5,6},
{7,8,9}, {1,4,7}, {2,5,8}, {3,6,9}, {1,5,9}, {3,5,7}). Hence two goal states can be
{2,0,1,1,2,0,0,0,2} and {2,2,2,0,1,0,1,0,0}. Any board position satisfying this condition
is declared a win for the corresponding player.
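The goal test for this formulation can be sketched as follows (the helper name `winner` is ours, not from the text; positions are the 1-based combinations listed above):

```python
# Goal test for the tic-tac-toe formulation: the board is a list of 9 values
# (0 = blank, 1 = "O", 2 = "X"); WIN_LINES are the 1-based winning positions.

WIN_LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 4, 7),
             (2, 5, 8), (3, 6, 9), (1, 5, 9), (3, 5, 7)]

def winner(board):
    # Return 1 if "O" wins, 2 if "X" wins, 0 if no line is complete.
    for a, b, c in WIN_LINES:
        v = board[a - 1]                       # convert to 0-based index
        if v != 0 and v == board[b - 1] == board[c - 1]:
            return v
    return 0

# The first goal state from the text: "X" completes the 1-5-9 diagonal.
print(winner([2, 0, 1, 1, 2, 0, 0, 0, 2]))  # 2
```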
The 8-puzzle problem belongs to the category of “sliding-block puzzle” types of problems.
It is described as follows:
“It has a 3x3 board with 9 block spaces, of which 8 hold tiles bearing numbers
from 1 to 8; one space is left blank. A tile adjacent to the blank space can move into it.
We have to arrange the tiles in a sequence.” The start state is any arrangement of the tiles, and
the goal state is the tiles arranged in a specific sequence. The solution to this problem is the
sequence of “tile movements” required to reach the goal state. The transition function, or legal
move, is any one tile moving by one space in any direction (left, right, up, or down) into the
blank space.
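The transition model just described can be sketched as a successor function; the state encoding (a flat tuple of 9 entries with 0 marking the blank) is our choice for illustration:

```python
# 8-puzzle transition model: a state is a tuple of 9 tiles read row by row,
# with 0 for the blank. A legal move slides an adjacent tile into the blank,
# which is equivalent to moving the blank up, down, left, or right.

def successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    states = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:                   # stay on the board
            swap = 3 * r + c
            new = list(state)
            new[blank], new[swap] = new[swap], new[blank]
            states.append(tuple(new))
    return states

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)    # blank in the centre: 4 legal moves
print(len(successors(start)))  # 4
```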
It is a normal chess game. In the chess game problem, the start state is the initial
configuration of the chessboard. The final or goal state is any board configuration that is a
winning position for either player (clearly, there may be multiple final positions, and each such
board configuration is a goal state).
All or some of these production rules will have to be used in a particular sequence to find
the solution to the problem. The rules applied and their sequence are presented in the following
table.
Local search algorithms are widely applied to numerous hard computational problems, including
problems from computer science (particularly artificial intelligence), mathematics, operations
research, engineering, and bioinformatics. Examples of local search algorithms are WalkSAT,
the 2-opt algorithm for the Traveling Salesman Problem and the Metropolis–Hastings algorithm.
A local search algorithm starts from a candidate solution and then iteratively moves to a
neighbor solution. This is only possible if a neighborhood relation is defined on the search space.
As an example, the neighborhood of a vertex cover is another vertex cover only differing by one
node. For boolean satisfiability, the neighbors of a truth assignment are usually the truth
assignments only differing from it by the evaluation of a variable. The same problem may have
multiple different neighborhoods defined on it; local optimization with neighborhoods that
involve changing up to k components of the solution is often referred to as k-opt.
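As a concrete illustration of the boolean-satisfiability neighborhood mentioned above, here is a sketch that enumerates the assignments differing from a given one in exactly one variable (the function name is ours):

```python
# Neighborhood relation for boolean satisfiability: the neighbors of a truth
# assignment are the assignments that flip the value of exactly one variable.

def neighbors(assignment):
    # assignment is a tuple of booleans, one entry per variable.
    result = []
    for i in range(len(assignment)):
        flipped = list(assignment)
        flipped[i] = not flipped[i]        # flip variable i only
        result.append(tuple(flipped))
    return result

print(neighbors((True, False)))  # [(False, False), (True, True)]
```

An n-variable assignment therefore has exactly n neighbors; a k-opt neighborhood would instead allow up to k flips at once.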
Typically, every candidate solution has more than one neighbor solution; the choice of
which one to move to is taken using only information about the solutions in the neighborhood of
the current one, hence the name local search. When the choice of the neighbor solution is done
by taking the one locally maximizing the criterion, the metaheuristic takes the name hill climbing.
When no improving configurations are present in the neighborhood, local search is stuck at a
locally optimal point. This local-optima problem can be cured by using restarts (repeated local
search with different initial conditions), or by more complex schemes based on iterations (like
iterated local search), on memory (like reactive search optimization), or on memoryless
stochastic modifications (like simulated annealing).
Termination of local search can be based on a time bound. Another common choice is to
terminate when the best solution found by the algorithm has not been improved in a given number
of steps. Local search is an anytime algorithm: it can return a valid solution even if it's interrupted
at any time before it ends. Local search algorithms are typically approximation or incomplete
algorithms, as the search may stop even if the best solution found by the algorithm is not optimal.
This can happen even if termination is due to the impossibility of improving the solution, as the
optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
For specific problems it is possible to devise neighborhoods which are very large, possibly
exponentially sized. If the best solution within the neighborhood can be found efficiently, such
algorithms are referred to as very large-scale neighborhood search algorithms.
Features of Local search
• Keep track of single current state
• Move only to neighboring states
• Ignore paths
Advantages:
• Uses very little memory.
• Can often find reasonable solutions in large or infinite (continuous) state spaces.
Features of "pure optimization" problems:
• All states have an objective function.
• The goal is to find the state with the maximum (or minimum) objective value.
• Does not quite fit into the path-cost/goal-state formulation.
• Local search can do quite well on these problems.
2.4.1 Hill Climbing Algorithm in Artificial Intelligence
• The hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or the best solution
to the problem. It terminates when it reaches a peak where no neighbor has a higher
value.
• Hill climbing is a technique used for optimizing mathematical problems. One of the
widely discussed examples of the hill climbing algorithm is the Traveling Salesman
Problem, in which we need to minimize the distance traveled by the salesman.
• It is also called greedy local search, as it only looks to its good immediate neighbor state
and not beyond.
• A node of the hill climbing algorithm has two components: state and value.
• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain a search tree or graph, as it only
keeps a single current state.
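The behavior described above, keeping a single current state and greedily moving to the best neighbor, can be sketched as follows; the one-dimensional objective and the ±1 neighborhood are toy choices for illustration:

```python
# A minimal hill-climbing sketch: repeatedly move to the best neighbor while
# it improves the objective, and stop at a peak (a local maximum).

def hill_climb(objective, start, max_steps=1000):
    current = start
    for _ in range(max_steps):
        candidates = [current - 1, current + 1]      # toy neighborhood: step +/- 1
        best = max(candidates, key=objective)
        if objective(best) <= objective(current):
            return current                           # no neighbor is better: peak
        current = best
    return current

# A concave objective with its single peak at x = 3.
print(hill_climb(lambda x: -(x - 3) ** 2, start=0))  # 3
```

On an objective with several peaks this sketch would stop at whichever local maximum it reaches first, which is exactly the local-optima problem the restarts discussed earlier are meant to cure.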
c. Ridges: A ridge is a special form of local maximum. It is an area higher than its
surrounding areas, but it has a slope of its own and cannot be reached in a single move.
Solution: Using bidirectional search, or moving in different directions, we can mitigate
this problem.
Perfect information: A game with perfect information is one in which agents can see the
complete board. Agents have all the information about the game and can see each other's
moves. Examples are chess, checkers, Go, etc.
Imperfect information: If agents in a game do not have all the information about the game
and are not aware of what's going on, such games are called games with imperfect
information, e.g. Battleship, blind tic-tac-toe, bridge, etc.
Note: In this topic, we will discuss deterministic games, fully observable environment, zero-sum,
and where each agent acts alternatively.
In final game states, the AI should select the winning move; to do so, each move is
assigned a numerical value based on its board state. The ranking is given as:
a) Win: 1
b) Draw: 0
c) Lose: -1
The important aspects are that winning gets the highest ranking, losing the lowest, and a
draw falls between the two. The Max part of the Minimax algorithm states that
the player has to select the move with the highest value. Final game states are ranked on the
basis of their status as a win, a loss, or a draw. Ranking of intermediate game states is based on
whose turn it is to make the available moves. If it's X's turn, set the rank to that of the maximum
available move: if a move results in a win, X can take it. If it's O's turn, set the rank to that of
the minimum available move: if a move results in a loss, X can avoid it.
Search tree: A tree that is superimposed on the full game tree, and examines enough nodes
to allow a player to determine what move to make.
2.7.3 Optimal decisions in games
Optimal solution: In adversarial search, the optimal solution is a contingent strategy,
which specifies MAX (the player on our side)'s move in the initial state, then MAX's moves in the
states resulting from every possible response by MIN (the opponent), then MAX's moves in the
states resulting from every possible response by MIN to those moves, and so on.
Minimax Algorithm: The Min-Max algorithm is generally used for games of
two players such as tic-tac-toe, checkers, chess, etc. All these games are logical games, so they
can be described by a set of rules, and it is possible to determine the next available moves from
a given position.
Step 2: Now, first we find the utility value for the Maximizer. Its initial value is -∞, so we
compare each value in the terminal state with the Maximizer's initial value and determine the
higher node values.
Note: To better understand this topic, kindly study the minimax algorithm.
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is
compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the
node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is
Min's turn. Now β = +∞ is compared with the available subsequent node values, i.e.
min(∞, 3) = 3; hence at node B now α = -∞ and β = 3.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of β
will be changed: comparing with 1, min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again this
satisfies the condition α >= β, so the next child of C, which is G, will be pruned, and the
algorithm will not compute the entire sub-tree of G.
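The α-β updates and the pruning condition α >= β walked through in these steps can be sketched generically as follows; the small tree and its leaf values are invented and are not the tree from the steps above:

```python
# A minimal alpha-beta sketch on an explicit tree (leaves are utilities,
# inner lists are nodes). alpha is the best value found so far for MAX,
# beta for MIN; a branch is cut off as soon as alpha >= beta.

def alphabeta(node, alpha, beta, is_max):
    if isinstance(node, (int, float)):
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # beta cut-off: MIN will never allow this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                # alpha cut-off: MAX will never allow this branch
    return value

tree = [[2, 3], [1, 8]]          # MAX at the root, MIN at the inner nodes
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 2
```

In this toy tree the first MIN branch yields 2, so α = 2 at the root; in the second branch the leaf 1 drives β to 1, α >= β holds, and the leaf 8 is pruned without ever being evaluated, just as sub-tree G is pruned in Step 7.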