
Sources: MRCET, VSSUT, Gate Smashers YT

‭INTRODUCTION‬
‭Artificial Intelligence‬
Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit characteristics we associate with intelligence in human behaviour – understanding language, learning, reasoning, solving problems, and so on.
● Intelligence: Intelligence is the ability to acquire, understand and apply knowledge to achieve goals in the world.

Objectives of AI
1. Problem-solving
2. Knowledge representation
3. Learning methods in AI in engineering intelligent systems
4. Assess the applicability, strengths and weaknesses of these methods in solving engineering problems.

‭4 categories of AI definition‬

‭1.‬ ‭Thinking humanly: The cognitive modelling approach‬


a. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
‭2.‬ ‭Acting humanly: The Turing Test approach‬
a. The inability to distinguish computer responses from human responses is called the Turing test. Intelligence requires knowledge.
b. The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.
c. The computer would need to possess the following capabilities:
i. natural language processing to enable it to communicate successfully in English
ii. knowledge representation to store what it knows or hears
iii. automated reasoning to use the stored information to answer questions and to draw new conclusions
iv. machine learning to adapt to new circumstances and to detect and extrapolate patterns

‭3.‬ ‭Thinking rationally: The “laws of thought” approach‬


‭a.‬ ‭In the “laws of thought” approach to AI, the emphasis is on correct inferences.‬

‭4.‬ ‭Acting rationally: The rational agent approach‬


a. Computer agents operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
b. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
c. The rational-agent approach has two advantages over the other approaches.
‭AI Applications‬
1. Expert Systems: Application-specific systems that rely on obtaining the knowledge of human experts in an area and programming that knowledge into a system.
‭2.‬ ‭Machine Learning‬
‭3.‬ ‭Natural Language Processing:‬‭It is possible to interact‬‭with a computer that‬
‭understands natural language spoken by humans.‬
4. Speech Processing: Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. Much of speech recognition is statistically based, hence it is called statistical learning.
‭5.‬ ‭Vision Processing:‬‭These systems understand, interpret,‬‭and comprehend visual input‬
‭on the computer.‬
‭6.‬ ‭Intelligent Tutoring‬
‭7.‬ ‭Game Playing:‬‭AI plays a vital role in strategic games‬‭such as chess, poker, tic-tac-toe,‬
‭etc., where machines can think of a large number of possible positions based on‬
‭heuristic knowledge.‬
‭8.‬ ‭Robot Programming:‬‭Robots are able to perform the‬‭tasks given by a human. They have‬
‭sensors to detect physical data from the real world such as light, heat, temperature,‬
‭movement, sound, bump, and pressure. They have efficient processors, multiple sensors‬
‭and huge memory, to exhibit intelligence. In addition, they are capable of learning from‬
‭their mistakes and they can adapt to the new environment.‬

‭Four foundations of AI‬


‭1.‬ ‭Representation‬
a. Representation is the way knowledge is encoded. It defines a system's performance in doing something.
b. It includes all the knowledge, including basic programs for testing and measuring a structure in question, plus all the programs for transforming the structure into another one in ways appropriate to the task.
‭c.‬ ‭Example -‬
‭i.‬ ‭symbolic description of a room for a moving robot and‬
‭ii.‬ ‭a description of a person with a disease for a classification program.‬

‭2.‬ ‭Search‬
a. Extra information can be used to guide the search. Search methods include:
‭i.‬ ‭Informed Searches‬
‭ii.‬ ‭Uninformed Searches‬
‭3.‬ ‭Reasoning‬
a. Typically, rule-based systems are developed using human expertise to identify the rules of the problem;
b. A database of previous problems and solutions is searched for the closest match to the current problem.

‭4.‬ ‭Learning‬
a. AI systems use the ability to adapt or learn, based on the history, or knowledge, of
‭the system.‬
‭b.‬ ‭Learning takes the form of updating knowledge, adjusting the search,‬
‭reconfiguring the representation, and augmenting the reasoning.‬
‭c.‬ ‭Learning methods use statistical learning (using the number of the different‬
‭types of historical events to bias future action or to develop inductive‬
‭hypotheses, typically assuming that events follow some known distribution of‬
‭occurrence)‬
‭AGENTS‬
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Example -
‭●‬ ‭A‬‭human agent‬‭has eyes, ears, and‬
‭other organs for sensors and hands,‬
‭legs, vocal tract, and so on for‬
‭actuators.‬
‭●‬ ‭A‬‭robotic agent‬‭might have cameras‬
‭and infrared range finders for‬
‭sensors and various motors for‬
‭actuators.‬
‭●‬ ‭A‬‭software agent‬‭receives‬
‭keystrokes, file contents, and‬
‭network packets as sensory inputs‬
‭and acts on the environment by‬
‭displaying on the screen, writing‬
‭files, and sending network packets.‬

‭Sensors‬
It is a device that senses changes in the environment & sends that information to other
‭electronic devices. An agent senses the environment through sensors only.‬

‭Actuators‬
It is a component of the machine that converts electrical signals into mechanical motion.
‭Actuators are responsible for the movement & control of the system.‬
Goals of Agent
1. High Performance
2. Optimised Results
3. Rational Action

‭Agent → Perceive → Decision → Action‬

‭Rational Agent‬
A rational agent is one that does the right thing, i.e., for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
‭What is rational at any given time depends on four things:‬
‭●‬ ‭The performance measure defines the criterion of success‬
‭●‬ ‭The agent’s prior knowledge of the environment‬
‭●‬ ‭The actions that the agent can perform‬
‭●‬ ‭The agent’s percept sequence to date‬

‭Task Environments‬
Task environments are essentially the “problems” to which rational agents are the “solutions.” In designing an agent, the first step must always be to specify the task environment as fully as possible, which is done via PEAS (Performance, Environment, Actuators, Sensors).
‭Structure of Agents‬

‭Agent = Architecture + Agent Program‬

‭●‬ A ‭ rchitecture‬‭is the machinery that the agent executes‬‭on. It is a device with sensors and‬
‭actuators, for example, a robotic car, a camera, a PC.‬
‭●‬ ‭Agent program‬‭is an implementation of an agent function.‬
‭●‬ ‭Agent function‬‭is a map from the percept sequence‬‭(history of all that an agent has‬
‭perceived to date) to an action.‬

Types of Agents
1. Simple Reflex Agents
2. Model-Based Reflex Agents
3. Goal-Based Agents
4. Utility-Based Agents
5. Learning Agents

‭1. Simple Reflex Agents‬


● The Simple reflex agents are the simplest agents. These agents make decisions on the basis of the current percepts and ignore the rest of the percept history.
‭●‬ ‭These agents only succeed in the‬‭fully observable‬‭environment‬‭.‬
● The Simple reflex agent works on the Condition-action rule, which means it maps the current state to an action; for example, a Room Cleaner agent works only if there is dirt in the room (a code sketch follows this section).

‭Problems‬‭:‬
‭●‬ ‭They have very limited‬
‭intelligence‬
‭●‬ ‭They do not have‬
‭knowledge of‬
‭non-perceptual parts of the‬
‭current state‬
‭●‬ ‭Mostly too big to generate‬
‭and to store.‬
‭●‬ ‭Not adaptive to changes in‬
‭the environment.‬
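To make the condition-action rule concrete, here is a minimal Python sketch of the classic two-location vacuum world (the location names and percept format are illustrative assumptions, not from these notes):

```python
# Minimal simple reflex agent sketch: the classic two-location vacuum
# world. The agent looks only at the current percept and applies
# condition-action rules; it keeps no percept history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept      # current percept only
    if status == "Dirty":           # condition-action rules
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```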
‭2. Model Based Reflex Agents‬
‭●‬ ‭A model-based agent has two important factors:‬
‭○‬ ‭Model:‬‭It is‬‭knowledge‬‭about "how things happen in‬‭the world," so it is called a‬
‭Model-based agent.‬
‭○‬ ‭Internal State:‬‭It is a representation of the‬‭current‬‭state based on percept history.‬
‭●‬ ‭These agents have the model, "which is knowledge of the world" and based on the‬
‭model they perform actions.‬
‭●‬ ‭The Model-based agent can work in a‬‭partially observable‬‭environment,‬‭and track the‬
‭situation.‬
‭●‬ ‭Updating the agent state requires information about:‬
‭○‬ ‭How the world evolves‬
‭○‬ ‭How the agent's action affects the world.‬

‭3. Goal-based agents (IMP)‬


● Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do. The agent needs to know its goal, which describes desirable situations.
‭●‬ ‭Goal-based agents expand the capabilities of the model-based agent by having the‬
‭"goal" information.‬
‭●‬ ‭They choose an action, so that they can achieve the goal.‬
‭●‬ ‭These agents may have to consider a long sequence of possible actions before deciding‬
‭whether the goal is achieved or not. Such considerations of different scenarios are‬
‭called‬‭searching and planning‬‭, which makes an agent‬‭proactive.‬
‭●‬ ‭Two types of Goal-based agents -‬
‭○‬ ‭Problem-solving agent:‬‭Use atomic representations‬
‭○‬ ‭Planning agents:‬‭use more advanced factored or structured representations‬

‭4. Utility-based agents‬


● These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
● Utility-based agents act based not only on goals but also on the best way to achieve the goal.
‭●‬ ‭The Utility-based agent is useful when there are multiple possible alternatives, and an‬
‭agent has to choose in order to perform the best action.‬
‭●‬ ‭The utility function maps each state to a real number to check how efficiently each‬
‭action achieves the goals.‬
‭5. Learning Agents‬
● A learning agent in AI is the type of agent which can learn from its past experiences, or it has learning capabilities.
‭●‬ ‭It starts to act with basic knowledge and then is able to act and adapt automatically‬
‭through learning.‬
‭●‬ ‭A learning agent has mainly four conceptual components, which are:‬
‭○‬ ‭Critic:‬‭Critic describes‬‭how well the agent is doing‬‭with respect to a fixed‬
‭performance standard. It‬‭gives feedback‬‭to the learning‬‭element.‬
‭○‬ ‭Learning element:‬‭It is responsible for‬‭making improvements‬‭by learning from‬
‭the environment.‬
‭○‬ ‭Performance element:‬‭It is responsible for‬‭selecting‬‭external action.‬
‭○‬ ‭Problem generator:‬‭This component is responsible for‬‭suggesting actions‬‭that‬
‭will lead to new and informative experiences.‬
‭●‬ ‭Hence, learning agents are able to learn, analyze performance, and look for new ways to‬
‭improve the performance.‬
‭PROBLEM SOLVING BY SEARCHING‬
Problem solving is the major area of concern in Artificial Intelligence. It is the process of generating the solution from given observed data. To solve a particular problem, we need to build a system or a method which can generate the required solution.
‭●‬ ‭The process of looking for a sequence of actions that reaches the goal is called‬‭search.‬
‭●‬ ‭A search algorithm takes a problem as input and returns a solution in the form of an‬
‭action sequence.‬
‭●‬ ‭Once a solution is found, the actions it recommends can be carried out. This is called the‬
‭execution phase.‬

Steps for problem solving in AI

1. Goal Formulation: This is the first and simplest step in problem-solving. It organizes finite steps to formulate a target/goal which requires some action to achieve. Today the formulation of the goal is based on AI agents.

2. Problem Formulation: It is one of the core steps of problem-solving, which decides what action should be taken to achieve the formulated goal. In AI this core part depends upon a software agent, which consists of the following components to formulate the associated problem.

3. Initial State: The state from which the AI agent starts working towards the specified goal; it initializes the problem-solving process.

4. Action: This stage of problem formulation works with a function that, given a state, returns all of the possible actions that can be taken from it.

5. Transition: This stage of problem formulation integrates the actual effect of the action chosen in the previous stage and produces the resulting state, which is forwarded to the next stage.

6. Goal Test: This stage determines whether the specified goal has been achieved by the integrated transition model or not; whenever the goal is achieved, stop the actions and move to the next stage to determine the cost of achieving the goal.

7. Path Costing: This component of problem-solving numerically assigns the cost of achieving the goal. It accounts for all hardware, software and human working costs.
‭Well-defined Problems‬
Explaining this via an example - Suppose the agent is in Arad, Romania & has to go to Bucharest the following day. Our agent has now adopted the goal of driving to Bucharest and is considering where to go from Arad. Three roads lead out of Arad, one toward Sibiu, one to Timișoara, and one to Zerind.
‭A problem can be defined formally by five components:‬
1. Initial State: The description of the starting configuration of the agent. For example, the initial state for our agent in Romania might be described as In(Arad).

2. Actions: A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. An action takes the agent from one state to another, and a state can have a number of successor states. For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timișoara), Go(Zerind)}.

3. Transition Model: A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. For example, RESULT(In(Arad), Go(Zerind)) = In(Zerind).

4. Goal Test: The goal test determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. The agent's goal in Romania is the singleton set {In(Bucharest)}.

5. Path Cost: A path cost function assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometres.
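To tie the five components together, here is a minimal Python sketch for a small fragment of the Romania map (the encoding, function names, and map subset are illustrative; distances are the usual textbook values):

```python
# Minimal sketch of the five problem components for a small fragment of
# the Romania map. Each entry maps an action to (resulting state, step cost).

ROAD = {
    "Arad":      {"Go(Sibiu)": ("Sibiu", 140),
                  "Go(Timisoara)": ("Timisoara", 118),
                  "Go(Zerind)": ("Zerind", 75)},
    "Sibiu":     {"Go(Fagaras)": ("Fagaras", 99), "Go(Arad)": ("Arad", 140)},
    "Fagaras":   {"Go(Bucharest)": ("Bucharest", 211), "Go(Sibiu)": ("Sibiu", 99)},
    "Timisoara": {"Go(Arad)": ("Arad", 118)},
    "Zerind":    {"Go(Arad)": ("Arad", 75)},
    "Bucharest": {},
}

INITIAL_STATE = "Arad"                 # 1. initial state

def actions(s):                        # 2. ACTIONS(s)
    return list(ROAD[s])

def result(s, a):                      # 3. transition model RESULT(s, a)
    return ROAD[s][a][0]

def goal_test(s):                      # 4. goal test ({In(Bucharest)})
    return s == "Bucharest"

def step_cost(s, a):                   # 5. path cost: sum of step costs (km)
    return ROAD[s][a][1]

print(actions("Arad"))                 # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(result("Arad", "Go(Zerind)"))    # Zerind
```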

‭State Space‬
● State space is the set of all states reachable from the initial state by any sequence of actions.
‭●‬ ‭The state space forms a directed network or graph in which the‬‭nodes are states‬‭and the‬
‭links between nodes are actions.‬
‭●‬ ‭A path in the state space is a sequence of states connected by a sequence of actions.‬
‭●‬ ‭The‬‭initial state, actions, and transition model‬‭define‬‭the‬‭state space of the problem‬‭.‬
Searching Process
1. Check the current state.
2. Execute the allowable actions to move to the next state.
3. Check if the new state is a solution state.
4. If it is not, then the new state becomes the current state and the process is repeated until the solution is found OR the state space is exhausted.

‭Properties of Search Algorithms‬


1. Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.
‭2.‬ ‭Optimality:‬‭If a solution found for an algorithm is‬‭guaranteed to be the best solution‬
‭(lowest path cost) among all other solutions, then such a solution is said to be an‬
‭optimal solution.‬
‭3.‬ ‭Time Complexity:‬‭Time complexity is a‬‭measure of time‬‭for an algorithm to complete its‬
‭task.‬
4. Space Complexity: It is the maximum storage space required at any point during the search, with respect to the complexity of the problem.

‭Search Strategies‬
‭1.‬ ‭Uninformed Search‬
‭a.‬ ‭BFS‬
‭b.‬ ‭DFS‬
‭c.‬ ‭Uniform Cost Search‬
‭d.‬ ‭Depth limited Search‬
e. Iterative Deepening Depth-First Search
‭f.‬ ‭Bidirectional Search‬
‭2.‬ ‭Informed Search‬
‭a.‬ ‭Best First Search‬
‭b.‬ ‭A* Search‬

‭Uninformed Search‬
Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms do not have additional information about the state or search space other than how to traverse the tree, so it is also called blind search.
‭1.‬ ‭Breadth-first Search‬
Breadth First Search Algorithm ‭|‬‭BFS InterviewBit‬
‭a.‬ ‭BFS algorithm starts searching breadthwise in a tree or graph, i.e. from the root‬
‭node of the tree and expands all successor nodes at the current level before‬
‭moving to nodes of the next level.‬
b. Breadth-first search is implemented using a FIFO queue data structure.
‭c.‬ ‭This will choose the‬‭shallowest node first‬‭(closest‬‭to start node).‬

‭d.‬ ‭Advantages:‬
‭i.‬ ‭BFS will provide a solution if any solution exists.‬
‭ii.‬ ‭If there are more than one solutions for a given problem, then BFS will‬
‭provide the minimal solution which requires the least number of steps.‬

‭e.‬ ‭Disadvantages:‬
‭i.‬ ‭It requires lots of memory since each level of the tree must be saved into‬
‭memory to expand the next level.‬
‭ii.‬ ‭BFS needs lots of time if the solution is far away from the root node.‬

‭f.‬ ‭Properties:‬
i. Time Complexity: Time complexity of the BFS algorithm can be obtained from the number of nodes traversed in BFS until the shallowest node, where d = depth of the shallowest solution and b = branching factor (number of successors of each node).
T(b) = 1 + b + b² + b³ + ... + bᵈ = O(bᵈ)

ii. Space Complexity: Space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(bᵈ).
iii. Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find a solution.
iv. Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.

‭g.‬ ‭Algorithm:‬
‭i.‬ ‭Create a variable called NODE-LIST and set it to initial state.‬
‭ii.‬ ‭Until a goal state is found or NODE-LIST is empty do:‬
‭1.‬ ‭Remove the first element from NODE-LIST and call it E. If‬
‭NODE-LIST was empty, quit.‬
‭2.‬ ‭For each way that each rule can match the state described in E do:‬
‭a.‬ ‭Apply the rule to generate a new state.‬
‭b.‬ ‭If the new state is a goal state, quit and return this state.‬
‭c.‬ ‭Otherwise, add the new state to the end of NODE-LIST‬
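A minimal Python sketch of the NODE-LIST procedure above, using a FIFO queue as described (the graph encoding is an illustrative assumption):

```python
from collections import deque

# Minimal BFS sketch: NODE-LIST is a FIFO queue, so the shallowest
# node is always expanded first.

def bfs(graph, start, goal):
    node_list = deque([[start]])          # queue of paths, not just states
    visited = {start}
    while node_list:                      # until NODE-LIST is empty
        path = node_list.popleft()        # remove the first element
        for succ in graph[path[-1]]:      # generate new states
            if succ == goal:              # goal test on generation
                return path + [succ]
            if succ not in visited:
                visited.add(succ)
                node_list.append(path + [succ])  # add to the END (FIFO)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(bfs(graph, "A", "E"))  # ['A', 'C', 'E']
```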

‭2.‬ ‭Depth-first Search‬


Depth First Search Algorithm ‭|‬‭DFS InterviewBit‬
‭a.‬ ‭It is called the depth-first search because it starts from the root node and follows‬
‭each path to its greatest depth node before moving to the next path.‬
‭b.‬ ‭DFS uses a‬‭LIFO stack data structure‬‭for its implementation.‬
‭c.‬ ‭We pop the element from the stack when there is no adjacent unvisited node.‬
‭d.‬ ‭Advantages:‬
i. DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
ii. It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

‭e.‬ ‭Disadvantages:‬
i. There is the possibility that many states keep reoccurring, and there is no guarantee of finding the solution.
ii. The DFS algorithm goes deep down in the search and sometimes it may enter an infinite loop.

‭f.‬ ‭Properties:‬
i. Time Complexity: Time complexity of the DFS algorithm can be obtained from the number of nodes traversed in DFS up to the deepest node, where m = maximum depth of any node and n = number of nodes (this can be far more than for BFS).
T(n) = 1 + n² + n³ + ... + nᵐ = O(nᵐ)

ii. Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm) (b = branching factor, m = maximum depth).
iii. Completeness: The DFS search algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
iv. Optimality: The DFS search algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach the goal node.
‭g.‬ ‭Algorithm:‬
‭i.‬ ‭Create a variable called NODE-LIST and set it to initial state.‬
‭ii.‬ ‭Until a goal state is found or NODE-LIST is empty do:‬
‭1.‬ ‭Remove the first element from NODE-LIST and call it E. If‬
‭NODE-LIST was empty, quit.‬
‭2.‬ ‭For each way that each rule can match the state described in E do:‬
‭a.‬ ‭Apply the rule to generate a new state.‬
‭b.‬ ‭If the new state is a goal state, quit and return this state.‬
c. Otherwise, add the new state to the FRONT of NODE-LIST (so that NODE-LIST behaves as a stack).
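The same procedure with NODE-LIST treated as a LIFO stack gives DFS; a minimal sketch (same illustrative graph encoding as the BFS sketch):

```python
# Minimal DFS sketch: NODE-LIST is a LIFO stack, so the most recently
# generated (deepest) node is expanded first.

def dfs(graph, start, goal):
    node_list = [[start]]                 # stack of paths
    visited = {start}
    while node_list:                      # until NODE-LIST is empty
        path = node_list.pop()            # remove the LAST element (LIFO)
        state = path[-1]
        if state == goal:                 # goal test on expansion
            return path
        for succ in graph[state]:         # generate new states
            if succ not in visited:
                visited.add(succ)
                node_list.append(path + [succ])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(dfs(graph, "A", "E"))  # ['A', 'C', 'E']
```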

‭3.‬ ‭Uniform Cost Search‬


‭What is Uniform Cost Search‬‭|‬‭Algorithm‬
‭a.‬ ‭Uniform-cost search is uninformed search: it doesn't use any domain knowledge.‬
‭It expands the least cost node, and it does so in every direction because no‬
‭information about the goal is provided.‬
b. It can be viewed as evaluating nodes with the function
f(n) = g(n)
where g(n) is the cumulative path cost.
c. The path cost is usually taken to be the sum of the step costs.
d. Properties:
i. Complete
ii. Optimal/Admissible
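A minimal sketch of uniform-cost search using a priority queue ordered by the cumulative cost g(n) (the graph and costs are illustrative):

```python
import heapq

# Minimal uniform-cost search sketch: always expand the node with the
# smallest cumulative path cost g(n), using a priority queue.

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]      # (g(n), state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)   # least-cost node first
        if state == goal:
            return g, path
        for succ, step in graph[state]:
            new_g = g + step              # g(n) is the cumulative path cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None

graph = {
    "A": [("B", 1), ("C", 5)],
    "B": [("C", 1), ("D", 6)],
    "C": [("D", 2)],
    "D": [],
}
print(uniform_cost_search(graph, "A", "D"))  # (4, ['A', 'B', 'C', 'D'])
```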

‭Informed Search‬
Informed Search uses problem-specific knowledge (i.e. heuristics) that can dramatically improve the search speed. Heuristics use domain-specific knowledge to estimate the quality or potential of partial solutions & identify the most promising search path.
‭Heuristics‬
Heuristics are criteria, methods or principles for deciding which among several alternative courses of action promises to be the most effective in order to achieve some goal.

‭Heuristic Function‬
● A heuristic function at a node n is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n).
h(n) = estimated cost of the cheapest path from node n to a goal node.

● It's a function that returns a number, which tells how easy/difficult it is to move from the start state to the goal state.
● Examples:
○ We want a path from Kolkata to Guwahati. A heuristic for Guwahati may be the straight-line distance between Kolkata and Guwahati:
h(Kolkata) = Euclidean Distance(Kolkata, Guwahati)
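A tiny sketch of such a straight-line-distance heuristic; the (x, y) coordinates below are illustrative placeholders, not real map data:

```python
import math

# Straight-line-distance heuristic sketch: h(n) estimates the remaining
# cost from the current city to the goal.

coords = {"Kolkata": (88.4, 22.6), "Guwahati": (91.7, 26.1)}

def h(city, goal="Guwahati"):
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)   # Euclidean (straight-line) distance

print(round(h("Kolkata"), 2))             # estimate of remaining distance
```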

‭1.‬ ‭Greedy Best-first Search‬


a. DFS is good because it allows a solution to be found without expanding all competing branches. BFS is good because it does not get trapped on dead-end paths. Best-first search combines the advantages of both DFS and BFS into a single method by following a single path at a time, but switching paths whenever some competing path looks more promising than the current one does.
‭b.‬ ‭In the best first search algorithm, we‬‭expand the‬‭node which is closest to the‬
‭goal node‬‭and the closest cost is estimated by heuristic‬‭function, i.e.‬
f(n) = h(n)‬

f(n) = heuristic function‬

h(n) = distance remaining to a goal‬

c. The algorithm maintains a priority queue of nodes to be explored. A cost function f(n) is applied to each node. The nodes are put in the priority queue (here named OPEN) in the order of their f(n) values. Nodes with smaller f(n) values are expanded earlier.

‭d.‬ ‭Algorithm:‬
‭i.‬ ‭Start with OPEN containing just the initial state‬
‭ii.‬ ‭Until a goal is found or there are no nodes left on OPEN do:‬
‭1.‬ ‭Pick the best node on OPEN‬
‭2.‬ ‭Generate its successors‬
‭3.‬ ‭For each successor do:‬
‭a.‬ ‭If it has not been generated before, evaluate it, add it to‬
‭OPEN, and record its parent.‬
‭b.‬ ‭If it has been generated before, change the parent if this‬
‭new path is better than the previous one. In that case,‬
‭update the cost of getting to this node and to any‬
‭successors that this node may already have.‬

e. Properties:
i. Time Complexity: The worst-case time complexity of greedy best-first search is O(bᵐ).
ii. Space Complexity: The worst-case space complexity of greedy best-first search is O(bᵐ), where m is the maximum depth of the search space.
iii. Completeness: Greedy best-first search is not complete in general; it can get stuck following a misleading heuristic (it is complete in finite spaces with repeated-state checking).
iv. Optimality: The greedy best-first search algorithm is not necessarily optimal.
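A minimal sketch of this OPEN-queue procedure, ordering nodes purely by f(n) = h(n); for brevity it omits the re-parenting in step 3b, and the graph and heuristic values are illustrative:

```python
import heapq

# Minimal greedy best-first search sketch: OPEN is a priority queue
# ordered by f(n) = h(n) alone; the cost so far, g(n), is ignored.

def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]   # (f(n) = h(n), state, path)
    visited = {start}
    while open_list:
        _, state, path = heapq.heappop(open_list)  # best (smallest h) node
        if state == goal:
            return path
        for succ in graph[state]:                  # generate successors
            if succ not in visited:
                visited.add(succ)
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'A', 'G']
```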
What is the difference between uniform-cost search and best-first search methods?

‭2.‬ ‭A* Search‬


a. A* is a best-first search algorithm with
f(n) = g(n) + h(n)
where
g(n) = path cost from start to n
h(n) = estimate of lowest cost path from n to goal
b. The difference between best-first search & A* is that even though A* follows a best-first search algorithm, it is optimal/admissible, meaning that provided a solution exists, the first solution found by A* is an optimal one.
c. Algorithm:
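A minimal sketch of the standard A* loop with f(n) = g(n) + h(n), using the same illustrative weighted-graph format as the earlier sketches; with an admissible h, the first solution popped is optimal:

```python
import heapq

# Minimal A* sketch: OPEN is ordered by f(n) = g(n) + h(n). With an
# admissible h (never overestimating), the first goal popped is optimal.

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return g, path                         # first solution is optimal
        for succ, step in graph[state]:
            new_g = g + step
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(open_list,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # (5, ['S', 'B', 'G'])
```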
‭Local Search‬
● Local search algorithms operate using a single current node and generally move to the neighbours of that node.
‭●‬ ‭Local search method keeps a‬‭small number of nodes‬‭in memory‬‭, they are suitable for‬
‭problems where the‬‭solution is the goal state itself‬‭and not the path.‬
● In addition to finding goals, local search algorithms are also useful for solving pure optimization problems.
‭●‬ ‭Example - Hill Climbing & Simulated Annealing.‬

‭1.‬ ‭Hill Climbing‬


‭Hill Climbing in Artificial Intelligence | Types of Hill Climbing Algorithm‬
a. Hill Climbing is a form of heuristic search algorithm used to solve optimization-related problems. It takes into account the current state and the immediate neighbouring state.
‭b.‬ ‭The algorithm starts with a non-optimal state (current state) and iteratively‬
‭improves its state until some predefined condition is met (optimal state).‬
‭c.‬ ‭Memory efficient‬
‭d.‬ ‭Particularly useful when we want to‬‭maximize or minimize‬‭any particular function‬
‭based on the input which it is taking.‬

e. Algorithm (a code sketch follows at the end of this subsection):
i. Examine the current state; return success if it is a goal state.
‭ii.‬ ‭Determine successors of the current state.‬
‭iii.‬ ‭Choose successor of maximum goodness (break ties randomly)‬
‭iv.‬ ‭If goodness of best successor is less than current state's goodness, stop‬
‭v.‬ ‭Otherwise make the best successor, the current state and go to step 2.‬
‭f.‬ ‭Problems in Hill Climbing Algorithm:‬
i. Local Maximum - The algorithm terminates when the current node is a local maximum, as it is better than its neighbours. However, there exists a global maximum where the objective function value is higher.
Solution: Backtracking can mitigate the problem of local maxima, as it starts exploring alternate paths when it encounters a local maximum.

ii. Ridge - A ridge occurs when there are multiple peaks and all have the same value; in other words, there are multiple local maxima which are the same as the global maximum.
Solution: Ridge obstacles can be solved by moving in several directions at the same time.

iii. Plateau - A plateau is a region where all the neighbouring nodes have the same value of the objective function, so the algorithm finds it hard to select an appropriate direction.
Solution: Plateau obstacles can be solved by making a big jump from the current state, which will land you in a non-plateau region.

‭g.‬ ‭Characteristics:‬
‭i.‬ ‭Not Optimal‬
‭ii.‬ ‭Not Complete‬
‭iii.‬ ‭Requires less space‬
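A minimal sketch of the hill-climbing loop from the algorithm above, maximizing a toy one-dimensional objective over integer states (the objective and neighbour set are illustrative):

```python
# Minimal hill-climbing sketch: always move to the best immediate
# neighbour; stop when no neighbour improves the objective.

def hill_climb(objective, start):
    current = start
    while True:
        neighbours = [current - 1, current + 1]      # immediate neighbours
        best = max(neighbours, key=objective)        # successor of max goodness
        if objective(best) <= objective(current):    # no better neighbour: stop
            return current
        current = best                               # move and repeat

objective = lambda x: -(x - 3) ** 2                  # single peak at x = 3
print(hill_climb(objective, start=0))                # -> 3
```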

2. Steepest-Ascent Hill Climbing (Gradient Search)


A useful variation on simple hill climbing considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing or gradient search. Notice that this contrasts with the basic method, in which the first state that is better than the current state is selected.

3. Simulated Annealing

a. Simulated Annealing is a stochastic global search optimization algorithm.
b. The algorithm is inspired by annealing in metallurgy, where metal is heated to a high temperature quickly, then cooled slowly, which increases its strength and makes it easier to work with.
c. It is a variation of hill climbing in which, at the beginning of the process, some downhill moves may be made.
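A minimal sketch of that idea: worse moves are accepted with a probability that shrinks as a temperature parameter cools (the objective and cooling schedule are illustrative):

```python
import math
import random

# Minimal simulated-annealing sketch: at high temperature some downhill
# moves are accepted; as T cools, it behaves like hill climbing.

def simulated_annealing(objective, start, t0=10.0, cooling=0.95, steps=500):
    current, t = start, t0
    for _ in range(steps):
        candidate = current + random.choice([-1, 1])     # random neighbour
        delta = objective(candidate) - objective(current)
        # always accept improvements; accept worse moves with prob e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling                                     # cool the temperature
    return current

objective = lambda x: -(x - 3) ** 2                      # same toy peak at x = 3
random.seed(0)
print(simulated_annealing(objective, start=-10))         # converges to the peak, 3
```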
Difference between Uninformed & Informed Search

| Uninformed (Blind) Search | Informed (Heuristic) Search |
| --- | --- |
| Strategies have no additional information about states beyond that provided in the problem definition. | Strategies use problem-specific knowledge beyond the definition of the problem itself. |
| Search efficiency is low because nodes in the state space are searched mechanically, until the goal is reached or the time limit is over/failure occurs. | Search efficiency is high because nodes are searched strategically with the knowledge & information provided. |
| Impractical for solving large problem cases & uses more time & space. | Can handle large search problem cases & uses less time & space. |
| It can give the best solution. | It can give a good solution but not necessarily the most optimal solution. |
| Examples - DFS, BFS, Iterative Deepening, Uniform Cost Search, etc. | Examples - (Greedy) Best First Search, A*, AO*, Hill Climbing, etc. |

Searching and Optimization Techniques in Artificial Intelligence: A Comparative Study & Complexity Analysis
‭KNOWLEDGE REPRESENTATION‬
In order to solve complex problems encountered in artificial intelligence, one needs both a large
‭amount of knowledge and some mechanism for manipulating that knowledge to create‬
‭solutions.‬

‭There are three factors which are put into the machine, which makes it valuable:‬
‭1.‬ ‭Knowledge:‬‭The‬‭information‬‭related to the environment‬‭is stored in the machine.‬
‭2.‬ ‭Reasoning:‬‭The ability of the machine to‬‭understand‬‭the stored knowledge.‬
‭3.‬ ‭Intelligence:‬‭The ability of the machine to‬‭make decisions‬‭on the basis of the stored‬
‭information.‬

Knowledge is a description of the world. It determines a system's competence by what it knows, and Representation is the way knowledge is encoded. It defines a system's performance in doing something.

‭Knowledge Representation IMP stuff that doesn’t seem to be in syllabus‬

‭The Knowledge Representation models/mechanisms are often based on:‬


‭●‬ ‭Logic‬
‭●‬ ‭Rules‬
‭●‬ ‭Frames‬
‭●‬ ‭Semantic Net‬

Logic
● Logic is the primary vehicle for representing and reasoning about knowledge.
● It provides a way to both represent knowledge and reason about it.
‭●‬ ‭To specify or define a particular logic, one needs to specify three things:‬
‭○‬ ‭Syntax:‬
‭■‬ ‭The atomic symbols of the logical language,‬
‭■‬ ‭The rules for‬‭constructing‬‭well formed, non-atomic‬‭expressions (symbol‬
‭structures) of logic.‬
‭■‬ ‭Syntax specifies the symbols in the language and how they can be‬
‭combined to form sentences. Hence facts about the world are‬
‭represented as sentences in logic.‬
‭○‬ ‭Semantics:‬
‭■‬ ‭The meanings of the atomic symbols of the logic,‬
‭■‬ ‭The rules for‬‭determining the meanings‬‭of non-atomic‬‭expressions of‬
‭logic.‬
■ It specifies what facts in the world a sentence refers to. Hence, it also specifies how you assign a truth value to a sentence based on its meaning in the world.
○ Syntactic Inference Method:

‭■‬ ‭The rules for determining a subset of logical expressions, called theorems‬
‭of the logic.‬
‭■‬ ‭It refers to a mechanical method for computing (deriving) new (true)‬
‭sentences from existing sentences.‬

Types of Logic Representation used in AI

1. Propositional Logic: Represents knowledge about what is true and what is false.
2. First-order/Predicate Logic: Represents objects in the form of predicates or quantifiers.
3. Temporal Logic: Represents truth over time.
4. Modal Logic: Represents doubt.
5. Higher-order Logic: Allows variables to represent many relations between objects.

‭Propositional logic (PL)‬


Propositional logic is the simplest form of logic where all the statements are made by propositions, i.e. in the form of True or False. A proposition can be either true or false, but it cannot be both. This technique is also known as propositional calculus, statement logic, or sentential logic.

‭Syntax of PL‬
1. Atomic Proposition: Atomic propositions are simple propositions. Each consists of a single proposition symbol. These are the sentences which must be either true or false.
‭2.‬ ‭Compound proposition:‬‭Compound propositions are constructed‬‭by combining simpler‬
‭or atomic propositions, using parentheses and logical connectives.‬

‭Logical Connectives‬

‭Truth Tables:‬‭Propositional Logic in Artificial Intelligence‬‭- Javatpoint‬


Precedence
‭1.‬ ‭Negation‬
‭2.‬ ‭Conjunction‬
‭3.‬ ‭Disjunction‬
‭4.‬ ‭Implication‬
‭5.‬ ‭Biconditional‬

‭Rules of Inference‬

Rules of Inference - Definition & Types of Inference Rules


Rules of Deduction 1: Constructive Dilemma and Destructive Dilemma

‭Properties of PL‬
● Commutativity:
○ P ∧ Q = Q ∧ P
○ P ∨ Q = Q ∨ P
● Associativity:
○ (P ∧ Q) ∧ R = P ∧ (Q ∧ R)
○ (P ∨ Q) ∨ R = P ∨ (Q ∨ R)
● Identity element:
○ P ∧ True = P
○ P ∨ True = True
● Distributivity:
○ P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
○ P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R)
● De Morgan's Laws:
○ ¬(P ∧ Q) = (¬P) ∨ (¬Q)
○ ¬(P ∨ Q) = (¬P) ∧ (¬Q)
● Double-negation elimination:
○ ¬(¬P) = P
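These equivalences can be checked mechanically by enumerating all truth assignments; a small sketch verifying De Morgan's law and distributivity:

```python
from itertools import product

# Brute-force truth-table check of two of the equivalences above.

for p, q, r in product([True, False], repeat=3):
    assert (not (p and q)) == ((not p) or (not q))          # De Morgan
    assert (p and (q or r)) == ((p and q) or (p and r))     # distributivity
print("Both equivalences hold for every truth assignment.")
```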

‭Limitations of PL‬
‭1.‬ ‭We cannot represent relations like‬‭ALL, some, or none‬‭with propositional logic. Example:‬
‭a.‬ ‭All the girls are intelligent.‬
‭b.‬ ‭Some apples are sweet.‬
‭2.‬ ‭Propositional logic has limited expressive power.‬
3. In propositional logic, we cannot describe statements in terms of their properties or logical relationships.

‭Predicate Logic‬
This technique is used to represent objects in the form of predicates or quantifiers. It is different from propositional logic as it removes the complexity of the sentence represented by it. In short, FOPL (First-Order Predicate Logic) is an advanced version of propositional logic.

AI - PREDICATE LOGIC PART 2 - Knowledge Representation

‭Predicates‬
Statements involving variables are neither true nor false until or unless the values of the variables are specified.
Eg - "x is an animal"
x = subject
"is an animal" = predicate

‭Quantifiers‬
Words that refer to quantities such as "some" or "all". A quantifier tells for how many elements a given predicate is true. Quantifiers are used to express quantities without giving an exact number, i.e. some, many, all, etc.

‭Types of Quantifiers‬
‭1.‬ ‭Universal Quantifier:‬‭P(x) for all values of x in‬‭the domain.‬
2. Existential Quantifier: There exists an element x in the domain such that P(x).
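Over a finite domain, both quantifiers can be evaluated directly; a tiny sketch with an illustrative predicate:

```python
# Evaluating quantifiers over a finite domain: a universal claim is
# all(...), an existential claim is any(...).

domain = range(1, 11)
P = lambda x: x * x >= x           # illustrative predicate P(x)

print(all(P(x) for x in domain))   # ∀x P(x)      -> True
print(any(x > 9 for x in domain))  # ∃x (x > 9)   -> True
```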

‭Rules of Inference‬
