Artificial Intelligence Unit 1 Notes - Intelligent Agent
3. If it is hard to characterize the data (i.e., to understand the properties and
patterns within the data).
Eg: Traditional problem – in a database, finding customer information.
In AI – the data is different from the usual kind, e.g. identifying an object's
color.
4. If the knowledge base of the problem is voluminous. (If a mountain of data is
available, AI can use its special skills to find hidden patterns that are
invisible to traditional methods.)
Eg: Emoji detection.
5. The data or knowledge base is changing fast. The size of the knowledge
base increases day by day.
• Example: Training a language model to write poems.
6. In doing dirty jobs, humans get fatigued, whereas computers can perform
such jobs without any tiredness.
• Example: labeling thousands of images with the correct category (e.g.,
cat, dog, car); an ATM acts as clerk-cum-cashier for the entire day.
What are the characteristics of an AI problem? (How will you analyse a problem?)
1. Is the problem decomposable or not?
• Example: 8-queens problem is a classic AI problem that is decomposable.
2. Can the solution steps be ignored or undone?
• Example: In theorem proving, an unhelpful step can simply be ignored.
• In playing chess, a move cannot be taken back, so steps cannot be ignored.
3. Is the universe predictable?
• Example: weather prediction, the universe is not predictable.
• In 8 Puzzle, the universe is predictable.
4. Is the solution to the goal absolute or relative?
• Example: In the 8-queens problem, the solution to the goal is absolute.
• In the traveling salesman problem, the solution to the goal is relative.
5. Is the knowledge base consistent or not?
• Example: A knowledge base is consistent if it contains no
contradictions. For example, a knowledge base that asserts both "A is B"
and "A is not B" is inconsistent. Inconsistent knowledge bases lead to
problems for AI systems.
6. The role of knowledge.
• Example: In 8-puzzle, chess, etc., we get a solution with the help of a
knowledge base.
• Political Information – Knowledge base is not used.
7. Is the interaction with the computer necessary?
• Example: In some AI problems, such as playing chess against a computer,
interaction with the computer is necessary.
• For weather prediction, interaction with the computer is not necessary.
Subsystem of AI
Learning Systems
• Inductive learning – It allows the system to learn from unlabelled data and generalize
its knowledge to unseen examples.
• Example: Natural language processing: Text analysis tools use inductive learning to
identify topics, relationships between words, and even generate new text that
mimics the style of the data it was trained on.
Adaptive learning
• Heuristic function
• Deduction process – Games like chess use deduction to predict the opponent's
moves and make optimal decisions based on the current game state.
Knowledge Acquisition
Intelligent Search
• The search problems in AI are non-deterministic, and the order of visiting the
elements in the search space depends completely on the given data set.
Logic Programming
It is one of the prime areas of research in AI that covers knowledge representation structures.
The aim of this research is to extend the PROLOG compiler to handle spatio-temporal models and
maintain a parallel programming environment.
Soft Computing
Statistical learning, fuzzy logic, and genetic algorithms are the most important constituents of soft
computing.
The main problem in AI is that data and knowledge bases, and also reasoning and planning,
are often contaminated with various forms of incompleteness.
This incompleteness in data and knowledge bases is always a problem for reasoning
and planning in AI problems.
Production System
A production system (PS) consists of:
• A global DB – It stores the current state of the problem that the PS is trying to solve.
• A set of rules – It is a collection of if-then statements that specify how the
PS can change the state of the problem.
• A goal – It is the desired state of the problem that the PS is trying to reach.
Types of Production System
1. Monotonic production system: the application of one rule never prevents the later
application of another rule that could also have been applied at the time the first
rule was selected.
• Consistency: The system always applies rules in the same order and doesn't
change its approach.
• Unidirectionality: Once a decision or action is made, it is not reversed.
The system doesn't go back and undo previous steps.
2. Non-monotonic production system: the application of one rule may prevent the later
application of another rule that could have been applied at the time the first rule
was selected.
• It allows for the revision or modification of previous conclusions based on new
information or changes in the system's knowledge.
3. Partially commutative production system: if the application of a particular sequence
of allowable rules transforms state x into state y, then any allowable permutation of
those rules also transforms state x into state y.
4. Commutative production system: operations or rules can be executed in any order
without affecting the final outcome.
Advantages
• It is modular: The rules are independent of each other, so it is easy to add or remove rules without affecting the rest of the
system.
• It is declarative: The rules specify what the system should do, not how it should do it. This makes the system easier to
understand and maintain.
• It is flexible: The system can be used to solve a wide variety of problems, from simple puzzles to complex planning tasks.
Disadvantages
• It can be inefficient: The system can sometimes explore many unnecessary states before finding a solution.
• It can be difficult to debug: It can be difficult to determine why the system is not working as expected.
Working memory: This is the part of the system
that stores the current state of the problem being
solved.
Production rules: These are the rules that the
system uses to change the state of the problem. In
the figure, they are represented by the arrows
labeled "C1->A1," "C2->A2," and "C3->A3." Each
arrow represents a rule that has two parts: a
condition (C) and an action (A).
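The working-memory/rule loop described above can be sketched in a few lines of Python (an illustrative sketch only; the function name, the rule list, and the toy counting example are made up, not from the notes):

```python
# Minimal production-system sketch: working memory holds the current
# state; each rule is a (condition, action) pair in the spirit of the
# "C1->A1" arrows in the figure.

def run_production_system(state, rules, goal):
    """Repeatedly fire the first rule whose condition matches the
    working memory (state) until the goal test succeeds."""
    while not goal(state):
        for condition, action in rules:
            if condition(state):
                state = action(state)  # apply the rule's action
                break
        else:
            return None  # no rule applies: failure
    return state

# Toy example: a single rule that counts up until the goal state 3.
rules = [
    (lambda s: s < 3, lambda s: s + 1),  # C1 -> A1
]
print(run_production_system(0, rules, lambda s: s == 3))  # 3
```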
Comparison of production systems:

Adaptability to new information:
• Monotonic – limited adaptation to new information; decisions are final.
• Non-monotonic – high adaptability to new information; decisions can be revised.
• Partially commutative – moderate adaptability; can adjust decisions partially based on new information.
• Commutative – high adaptability; decisions can be reordered without impacting the outcome.

Revision of previous decisions:
• Monotonic – no revision of decisions; decisions are not reversed.
• Non-monotonic – decisions can be revised based on new data or changes in context.
• Partially commutative – decisions can be partially revised or adjusted as new information emerges.
• Commutative – decisions can be rearranged without altering the final result.

Example:
• Monotonic – simple rule-based system where each step is final.
• Non-monotonic – medical diagnosis system that adjusts diagnoses based on new tests.
• Partially commutative – manufacturing workflow with some flexibility in the order of certain tasks.
• Commutative – mathematical addition, where the order of the numbers does not impact the sum.
Water Jug Problem
The global database and the set of rules for the water jug problem.
There are two jugs of 4 L and 3 L capacity. There is a pump used to fill the jugs with water. How can
we obtain exactly 2 L of water in the 4 L jug?
Solution:
1. We can empty a jug fully onto the ground or from one jug into the other.
2. We cannot fill or empty a jug partially (fractionally), since the jugs have no
measuring markers.
The global DB can be represented by an ordered pair of numbers (x, y), for x = 0,1,2,3,4 and y = 0,1,2,3,
which represent the amount of water in the 4 L jug and the 3 L jug respectively.
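The search for a state with 2 L in the 4 L jug can be sketched as a breadth-first search over the (x, y) global database (an illustrative sketch; the function name is made up):

```python
from collections import deque

# BFS over the water-jug state space. State (x, y) = litres in the
# 4 L and 3 L jugs; the goal is x == 2.

def water_jug(goal=2, cap_x=4, cap_y=3):
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:
            path, s = [], (x, y)        # reconstruct the solution path
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        moves = [
            (cap_x, y), (x, cap_y),     # fill a jug from the pump
            (0, y), (x, 0),             # empty a jug onto the ground
            # pour the 4 L jug into the 3 L jug
            (x - min(x, cap_y - y), y + min(x, cap_y - y)),
            # pour the 3 L jug into the 4 L jug
            (x + min(y, cap_x - x), y - min(y, cap_x - x)),
        ]
        for m in moves:
            if m not in parent:
                parent[m] = (x, y)
                queue.append(m)
    return None

print(water_jug())
```

BFS guarantees the shortest rule sequence; for this problem it finds a six-move solution such as (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0).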
Branches of AI
• Heuristic Search: Uses "rules of thumb" to guide the search for optimal
solutions, often faster than exhaustive search. Example: A* algorithm finds
the shortest path in a maze by prioritizing promising directions.
• Genetic Programming: Evolves programs or algorithms through natural
selection principles. Example: Designing airplane wings: Genetic
algorithms can test millions of wing shapes to find the one that lifts the
most with the least fuel.
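The heuristic-search bullet above can be sketched with A* on a tiny grid maze using a Manhattan-distance heuristic (the maze and function names are illustrative, not from the notes):

```python
import heapq

# A* search on a grid: 0 = free cell, 1 = wall. The Manhattan-distance
# heuristic prioritizes cells that look closer to the goal.

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]       # (f = g + h, g, position)
    best_g = {start: 0}
    while open_set:
        f, g, pos = heapq.heappop(open_set)
        if pos == goal:
            return g                        # length of the shortest path
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (2, 0)))  # 6
```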
Intelligent Agents in Artificial Intelligence
What is an Agent?
• An agent can be anything that perceives its environment through sensors and
acts upon that environment through actuators. An agent runs in a cycle
of perceiving, thinking, and acting. An agent can be:
• Human-Agent: A human agent has eyes, ears, and other organs which work
for sensors and hand, legs, vocal tract work for actuators.
• Robotic Agent: A robotic agent can have cameras and infrared range finders
for sensors and various motors for actuators.
• Software Agent: Software agent can have keystrokes, file contents as
sensory input and act on those inputs and display output on the screen.
Task environments are essentially the “problems” to which rational agents are the
“solutions”.
A task environment is specified as a PAGE or PEAS description; both mean the same.
PAGE
• P – Percept
• A – Action
• G – Goal
• E – Environment
PEAS Representation
agent or rational agent, then we can group its properties under PEAS representation
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
PEAS for a self-driving car:
Let's suppose a self-driving car; then the PEAS representation will be:
• Performance: safety, time, legal drive, comfort
• Environment: roads, other vehicles, road signs, pedestrians
• Actuators: steering, accelerator, brake, signal, horn
• Sensors: camera, GPS, speedometer, odometer, accelerometer, sonar
Example of Agents with their PEAS representation
3. Part -picking •Percentage of parts •Conveyor belt with •Jointed Arms •Camera
Robot in correct bins. parts, •Hand •Joint angle sensors.
•Bins
Properties of task environment
1. Fully observable vs Partially observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs Sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
Fully observable vs Partially Observable:
• If an agent sensor can sense or access the complete state of an
environment at each point of time then it is a fully
observable environment, else it is partially observable.
• If the sensors are not active or the environment changes every second
then the environment is partially observable or inaccessible.
Deterministic vs Stochastic:
• If an agent's current state and selected action can completely determine
the next state of the environment, then such environment is called a
deterministic environment.
Static vs Dynamic:
• Static environments are easy to deal with because an agent does not need to
keep looking at the world while deciding on an action.
Known vs Unknown:
• Self-driving car:
• Known : The car has detailed maps and can identify familiar landmarks, traffic rules, and road
signs. It can navigate known routes safely and make predictable decisions based on these
elements.
• Unknown : The car faces an unusual event like a car accident blocking its path. Its algorithms
trained on regular driving situations might not be equipped to handle this unforeseen
situation, requiring human intervention or adaptive learning capabilities.
Accessible vs Inaccessible
• If an agent can obtain complete and accurate information about the state's
environment, then such an environment is called an Accessible
environment else it is called inaccessible.
The agent program is a function that implements the agent mapping from percepts to
actions.
Types of agents:
• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
Simple Reflex agent:
• The Simple reflex agents are the simplest agents. These agents take
decisions on the basis of the current percepts and ignore the rest of the
percept history.
• The Simple reflex agent does not consider any part of percepts history
during their decision and action process.
• The Simple reflex agent works on Condition-action rule, which means
it maps the current state to action.
• Example: Automated hand sanitizer dispensers that release sanitizer
when they detect someone placing their hands beneath them, without
considering past interactions.
• Room Cleaner agent: it works only if there is dirt in the room.
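A condition-action rule agent of this kind can be sketched as a single function of the current percept, in the spirit of the room-cleaner example (an illustrative sketch; the percept format and action names are made up):

```python
# Simple reflex room-cleaner agent: decisions depend only on the
# current percept (location, status); no percept history is kept.

def reflex_cleaner(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":               # condition-action rule 1
        return "Suck"
    return "Right" if location == "A" else "Left"  # rule 2: move on

print(reflex_cleaner(("A", "Dirty")))   # Suck
print(reflex_cleaner(("A", "Clean")))   # Right
```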
• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They do not have knowledge of non-perceptual parts of the current state.
• The condition-action rule table is mostly too big to generate and store.
• They are not adaptive to changes in the environment.
• Model-based reflex agent
• A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
• Internal State: It is a representation of the current state based on percept history.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
Example
• Chess-playing programs that consider the current state of the board and
the history of moves to determine the best next move.
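Keeping an internal state can be sketched by extending the room-cleaner idea: the agent remembers which squares it has already seen clean, so it can act sensibly even when the current percept alone is ambiguous (illustrative only; the class, percept format, and action names are made up):

```python
# Model-based reflex agent sketch: the internal state (a set of squares
# known to be clean) is updated from the percept history and used to
# decide when the whole world is clean.

class ModelBasedCleaner:
    def __init__(self):
        self.clean = set()              # internal model of the world

    def act(self, percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        self.clean.add(location)        # update the model
        if {"A", "B"} <= self.clean:
            return "NoOp"               # model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedCleaner()
print(agent.act(("A", "Clean")))   # Right
print(agent.act(("B", "Clean")))   # NoOp
```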
• Goal-based agents
• Goal-based agents expand the capabilities of the model-based agent by
having the "goal" information.
• They choose an action, so that they can achieve the goal.
• An agent knows the description of the current state as well as the goal state.
The action that matches the current state is selected depending on the
goal state.
• The goal-based agent is also more flexible when there is more than one
destination.
• After reaching one destination, if a new destination is specified, the goal-based
agent comes up with a new behavior.
• Search and planning techniques are required to achieve the goal.
• Utility-based agents
• If more than one action sequence exists to reach the goal state, then the sequence that is
more reliable, safer, quicker, or cheaper is preferred; a utility function measures this preference.
• (i) When there are conflicting goals, the utility function specifies the appropriate trade-off.
• (ii) When the agent aims for several goals, none of which can be achieved with certainty, the
utility function weighs the likelihood of success against the importance of the goals.
Example
• Ride-sharing apps that optimize routes based on factors like distance, time, and user preferences.
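The trade-off between conflicting goals can be sketched with a hypothetical utility function over candidate routes (all route data and weights below are made up for illustration):

```python
# Utility-based choice among routes: conflicting goals (travel time vs
# safety) are combined into a single numeric utility. The weights are
# illustrative assumptions, not real app parameters.

routes = [
    {"name": "highway", "time_min": 20, "safety": 0.70},
    {"name": "backroad", "time_min": 35, "safety": 0.95},
]

def utility(route, w_time=1.0, w_safety=100.0):
    # shorter time raises utility; higher safety raises utility
    return -w_time * route["time_min"] + w_safety * route["safety"]

best = max(routes, key=utility)
print(best["name"])  # backroad (safety is weighted heavily here)
```

Changing the weights changes the chosen route, which is exactly the trade-off a utility function makes explicit.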
• Learning Agents
• A learning agent in AI is the type of agent which can learn from its past
experiences, or it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically
through learning.
• A learning agent has mainly four conceptual components, which are:
• Learning element: It is responsible for making improvements by learning from
environment
• Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
• Performance element: It is responsible for selecting external actions.
• Problem generator: This component is responsible for suggesting actions that will lead to
new and informative experiences.
• Hence, learning agents are able to learn, analyze performance, and look for new
ways to improve the performance.
Example: Recommendation systems on streaming platforms that learn user preferences and suggest
movies or music based on viewing and listening history.