AI and ML - 1


Introduction to Artificial Intelligence

• Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.
• Machine learning (ML) is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention.
Artificial Intelligence
• The term artificial intelligence stirs emotions. The central question for the engineer, and especially for the computer scientist, is whether a machine can behave like a person and show intelligent behavior.
• In 1955, John McCarthy first defined the term artificial intelligence:
• “The goal of AI is to develop machines that behave as though they were intelligent.”
• To test this definition, imagine the following scenario: fifteen or so small robotic vehicles are moving on an enclosed four-by-four-meter square surface. One can
observe various behavior patterns. Some vehicles form small groups with relatively
little movement. Others move peacefully through the space and gracefully avoid any
collision. Still others appear to follow a leader. Aggressive behaviors are also
observable. Is what we are seeing intelligent behavior? According to McCarthy’s
definition the aforementioned robots can be described as intelligent.
• The psychologist Valentin Braitenberg has shown that this seemingly complex behavior can be produced by very simple electrical circuits. So-called Braitenberg vehicles have two wheels, each of which is driven by an independent electric motor. The speed of each motor is influenced by a light sensor on the front of the vehicle: the more light that hits the sensor, the faster the motor runs. Vehicle 1, according to its wiring, moves away from a point light source, while Vehicle 2 moves toward the light source. Further small modifications can create other behavior patterns, such that with these very simple vehicles we can realize the impressive behavior described above. Clearly the above definition is insufficient, because AI has the goal of solving difficult practical problems, which are surely too demanding for Braitenberg vehicles.
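As a rough illustration, here is a minimal Python sketch (not from the text) of this sensor-to-motor wiring; the gain constant and light readings are assumptions chosen for illustration:

```python
# Braitenberg-style wiring: each motor's speed grows with the light
# hitting the sensor connected to it. Uncrossed wiring makes the vehicle
# turn away from a light source; crossed wiring makes it turn toward it.

def motor_speeds(left_light: float, right_light: float, crossed: bool) -> tuple:
    gain = 1.0  # assumed proportionality between light level and motor speed
    if crossed:
        # Left sensor drives the right motor and vice versa.
        return gain * right_light, gain * left_light
    # Left sensor drives the left motor, right sensor the right motor.
    return gain * left_light, gain * right_light

# Light is brighter on the left of the vehicle:
print(motor_speeds(0.9, 0.2, crossed=False))  # (0.9, 0.2): left wheel faster,
                                              # vehicle veers right, away from light
print(motor_speeds(0.9, 0.2, crossed=True))   # (0.2, 0.9): right wheel faster,
                                              # vehicle veers left, toward the light
```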


• The Encyclopedia Britannica offers a definition along the following lines:
• “AI is the ability of digital computers or computer-controlled robots to solve problems that are normally associated with the higher intellectual processing capabilities of humans.”
• But this definition also has weaknesses. It would admit, for example, that a computer with a large memory that can store a long text and retrieve it on demand displays intelligent capabilities, for memorization of long texts can certainly be considered a higher intellectual processing capability of humans, as can, for example, the quick multiplication of two 20-digit numbers. According to this definition, then, every computer is an AI system.
• This dilemma is resolved by Elaine Rich's definition:
• “Artificial Intelligence is the study of how to make computers do things at which, at the moment, people are better.”
• The execution of many computations in a short amount of time is the strong point of digital computers. In this regard they outperform humans by many multiples. In many other areas, however, humans are far superior to machines. For instance, a person entering an unfamiliar room will recognize the surroundings within fractions of a second and, if necessary, just as swiftly make decisions and plan actions. To date, this task is too demanding for autonomous robots. According to Rich's definition, this is therefore a task for AI. In fact, research on autonomous robots is an important current theme in AI.
• It would be dangerous, however, to conclude from Rich’s definition that AI is only
concerned with the pragmatic implementation of intelligent processes.
• Intelligent systems, in the sense of Rich's definition, cannot be built without a deep understanding of human reasoning and intelligent action in general, which is why neuroscience is of great importance to AI. This also shows that the other cited definitions reflect important aspects of AI.
• A particular strength of human intelligence is adaptivity. We are capable of adjusting to various environmental conditions and changing our behavior accordingly through learning. Precisely because our learning ability is so vastly superior to that of computers, machine learning is, according to Rich's definition, a central subfield of AI.
Brain Science and Problem Solving
• Through research of intelligent systems, we can try to understand how the human
brain works and then model or simulate it on the computer.
• Many ideas and principles in the field of neural networks stem from brain science with
the related field of neuroscience.
• A very different approach results from taking a goal-oriented line of action: starting from a problem and trying to find the optimal solution. How humans solve the problem is treated as unimportant here; the method is secondary. First and foremost is the optimal intelligent solution to the problem. Rather than employing a fixed method, AI has as its constant goal the creation of intelligent agents for as many different tasks as possible. Because the tasks may be very different, it is unsurprising that the methods currently employed in AI are often also quite different.
• Similar to medicine, which encompasses many different, often life-saving diagnostic and therapy procedures, AI also offers a broad palette of effective solutions for widely varying applications. Just as in medicine, there is no universal method for all application areas of AI, but rather a great number of possible solutions for the great number of various everyday problems, big and small.

• Cognitive science is devoted to research into human thinking at a somewhat higher level. Similarly to brain science, this field furnishes practical AI with many important ideas. Conversely, algorithms and implementations lead to further important conclusions about how human reasoning functions. Thus these three fields benefit from a fruitful interdisciplinary exchange.
• There are many interesting philosophical questions surrounding intelligence and
artificial intelligence. We humans have consciousness; that is, we can think about
ourselves and even ponder that we are able to think about ourselves. How does
consciousness come to be? Many philosophers and neurologists now believe that the
mind and consciousness are linked with matter, that is, with the brain.
• The question of whether machines could one day have a mind or consciousness could
at some point in the future become relevant. The mind-body problem in particular
concerns whether or not the mind is bound to the body.
Turing Test in AI
• In 1950, Alan Turing introduced a test to check whether a machine can think like a human; this test is known as the Turing Test. In it, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.

• The Turing test is based on the party game “Imitation Game,” with some modifications. The game involves three players: one is a computer, another is a human responder, and the third is a human interrogator, who is isolated from the other two players and whose job is to determine which of the two is the machine.
• Consider: Player A is a computer, Player B is a human, and Player C is the interrogator. The interrogator is aware that one of them is a machine but needs to identify which on the basis of questions and their responses.
• The conversation between all players is via keyboard and screen, so the result does not depend on the machine's ability to render words as speech.
• The test result does not depend on each answer being correct, but only on how closely the responses resemble a human's answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.
• The questions and answers can go like this:
• Interrogator: Are you a computer?
• Player A (Computer): No
• Interrogator: Multiply two large numbers such as (256896489 * 456725896)
• Player A: pauses for a long time and gives a wrong answer.
• In this game, if the interrogator is not able to identify which is the machine and which is the human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
Basics
• Sensor: A sensor is a device that detects changes in the environment and sends this information to other electronic devices. An agent observes its environment through sensors.
• Actuators: Actuators are the components of a machine that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
• Effectors: Effectors are the devices that affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, or a display screen.

AI Agents
• An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
• Agent denotes, rather generally, a system that processes information and produces an output from an input. Agents may be classified in many different ways.
• A human agent has sensory organs such as eyes, ears, nose, tongue, and skin, which act as sensors, and other organs such as hands, legs, and mouth, which act as effectors.
• In classical computer science, software agents are primarily employed. In this case
the agent consists of a program that calculates a result from user input.

• In robotics, on the other hand, hardware agents (also called autonomous robots) are employed, which additionally have sensors and actuators at their disposal. The agent can perceive its environment with the sensors; with the actuators it carries out actions and changes its environment.


• Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. These classes are given below:

1. Simple Reflex agent


• Simple reflex agents are the simplest agents. They take decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on the condition-action rule, which means it maps the current state directly to an action. An example is a room-cleaner agent, which acts only if there is dirt in the room.
• Percept: an impression of an object obtained by use of the senses.

• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They do not have knowledge of non-perceptual parts of the current state.
• They are not adaptive to changes in the environment.
Ex:
• iDraw, a drawing robot which converts typed characters into handwriting without storing the past.
• A robotic vacuum cleaner that deliberates in an infinite loop: each percept contains the state of the current location, [clean] or [dirty], and accordingly the agent decides whether to [suck] or [continue moving]. A condition-action sketch of this agent follows below.
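A minimal Python sketch of such a condition-action mapping, with the location names and action strings assumed for illustration:

```python
# Simple reflex vacuum agent: decides from the current percept only,
# with no memory of the percept history.

def simple_reflex_vacuum(percept: tuple) -> str:
    """percept = (location, status); a condition-action rule maps the
    current state directly to an action."""
    location, status = percept
    if status == "dirty":
        return "suck"
    # Otherwise keep moving between the two assumed locations A and B.
    return "move_right" if location == "A" else "move_left"

print(simple_reflex_vacuum(("A", "dirty")))  # suck
print(simple_reflex_vacuum(("A", "clean")))  # move_right
```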
2. Model-based reflex agent
• The model-based agent can work in a partially observable environment and track the situation.

• A model-based agent has two important components:
o Model: knowledge about “how things happen in the world”; this is why it is called a model-based agent.
o Internal state: a representation of the current state based on the percept history.
• These agents have the model, which is knowledge of the world, and based on the model they perform actions.
• Updating the agent state requires information about how the world evolves.
Ex: a self-steering vehicle using vision, where it is necessary to check the percept history to fully understand how the world is evolving. A sketch of the internal-state idea follows below.
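A minimal Python sketch (names and rules assumed) of how an internal state lets the agent act on parts of the world it is not currently observing:

```python
# Model-based reflex agent: an internal state is updated from each percept,
# so information persists even when it is no longer directly observed.

class ModelBasedAgent:
    def __init__(self):
        self.state: dict = {}  # internal state: the agent's best guess of the world

    def update_state(self, percept: dict) -> None:
        # The "model" here is deliberately simple: merge the new percept
        # into the tracked state, keeping previously observed facts.
        self.state.update(percept)

    def act(self, percept: dict) -> str:
        self.update_state(percept)
        # The condition-action rule reads the tracked state, not the raw percept.
        if self.state.get("obstacle_ahead"):
            return "brake"
        return "drive"

agent = ModelBasedAgent()
print(agent.act({"obstacle_ahead": True}))  # brake
print(agent.act({}))  # still brake: the obstacle is remembered in the state
```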
3. Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.

• Goal-based agents expand the capabilities of the model-based agent by having the “goal” information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
Ex: a mobile robot which should move from room 112 to room 179 in a building takes actions different from those of a robot that should move to room 105; the actions depend on the goals.
• Goal-based agents: it is not sufficient to have the current state information unless the goal is decided. Therefore, a goal-based agent selects a way among multiple possibilities that helps it reach its goal (see the search sketch below).
• Note: With the help of searching and planning (subfields of AI), it becomes easy for the goal-based agent to reach its destination.
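Searching is what turns goal information into an action sequence. The following minimal Python sketch uses breadth-first search; the corridor graph is invented for illustration, and only the room numbers come from the example above:

```python
# Goal-based action selection: search for a sequence of rooms leading
# from the start to the goal, then follow it.
from collections import deque

def plan_route(start: str, goal: str, corridors: dict) -> list:
    """Breadth-first search over the corridor graph; returns a room path."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in corridors.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []  # no route to the goal exists

corridors = {"112": ["105", "130"], "130": ["179"], "105": []}
print(plan_route("112", "179", corridors))  # ['112', '130', '179']
```

A robot with goal 105 would, with the same search code, return the one-step path ['112', '105']: the actions depend only on the goal, not on new programming.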
4. Utility-based agents
• These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success in a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose the best action.
• The utility function maps each state to a real number, which measures how efficiently each action achieves the goals.

• Example: a route recommendation system that finds the best route to reach a destination.


• Example: The main goal of chess playing is to checkmate the king, but the player completes several smaller goals along the way.
• Note: Utility-based agents keep track of their environment, and before reaching the main goal, they complete the several small goals that may come up along the path (a utility-function sketch follows below).
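A minimal Python sketch of the utility idea for the route example; the routes, attributes, and weights are invented for illustration:

```python
# Utility-based choice: map each candidate state to a real number and
# pick the alternative with the highest utility.

routes = {
    "highway":    {"time_min": 30, "toll": 5.0},
    "side_roads": {"time_min": 45, "toll": 0.0},
}

def utility(route: dict) -> float:
    # Assumed preference: shorter time and lower toll give higher utility.
    return -1.0 * route["time_min"] - 2.0 * route["toll"]

best = max(routes, key=lambda name: utility(routes[name]))
print(best)  # 'highway': utility -40.0 beats side_roads at -45.0
```

The weights encode how much the agent cares about time versus toll; changing them changes which alternative counts as best.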
5. Learning Agents
• A learning agent in AI is a type of agent that can learn from its past experiences; that is, it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has four main conceptual components:
o Learning element: responsible for making improvements by learning from the environment.
o Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
o Performance element: responsible for selecting external actions.
o Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve that performance (a component sketch follows below).
• Any agent designed and expected to be successful under uncertainty is considered a learning agent.
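A minimal Python sketch (structure and numbers assumed) of how the four components can fit together:

```python
# Learning agent skeleton: the critic's feedback drives the learning
# element, which improves the performance element's action choices;
# the problem generator occasionally proposes exploratory actions.
import random

class LearningAgent:
    def __init__(self, actions: list):
        # Performance element's knowledge: a value estimate per action.
        self.values = {a: 0.0 for a in actions}

    def act(self) -> str:
        # Problem generator: sometimes try something new and informative.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        # Performance element: select the external action rated best so far.
        return max(self.values, key=self.values.get)

    def learn(self, action: str, reward: float) -> None:
        # Critic supplies 'reward' relative to the performance standard;
        # the learning element nudges the estimate toward it.
        self.values[action] += 0.1 * (reward - self.values[action])

agent = LearningAgent(["left", "right"])
agent.learn("right", reward=1.0)
print(agent.act())  # usually 'right' once it has been rewarded
```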

• The sum of all weighted errors gives the total cost caused by erroneous decisions. The goal of a cost-based agent is to minimize the cost of erroneous decisions in the long term, that is, on average. The medical diagnosis system LEXMED is an example of a cost-based agent (a small expected-cost sketch follows below).
• Distributed agents are increasingly coming into use; their intelligence is not localized in one agent but can only be seen through the cooperation of many agents.
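A minimal Python sketch of the expected-cost computation; the probabilities, costs, and decision names are invented for illustration and are not taken from LEXMED:

```python
# Cost-based decision: weight each possible error by its cost and choose
# the decision with the lowest expected (average) cost.

# Assumed posterior probabilities of the true state of the world.
p_state = {"appendicitis": 0.3, "harmless": 0.7}

# cost[decision][state]: cost incurred by 'decision' when 'state' holds.
cost = {
    "operate":   {"appendicitis": 0.0,    "harmless": 100.0},
    "send_home": {"appendicitis": 5000.0, "harmless": 0.0},
}

def expected_cost(decision: str) -> float:
    return sum(p_state[s] * cost[decision][s] for s in p_state)

best = min(cost, key=expected_cost)
print(best)  # operate: expected cost 70.0 versus 1500.0 for send_home
```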

Nature of the Environment
• The design of an agent is oriented, along with its objective, strongly toward its environment, or alternatively its picture of the environment, which strongly depends on its sensors.
• There are several types of environments
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Fully Observable vs Partially Observable:
• The environment is fully observable if the agent always knows the complete state of the world; otherwise the environment is only partially observable.
• An environment is called unobservable when the agent has no sensors at all.
• Chess – the board is fully observable, and so are the opponent's moves.
• Driving – the environment is partially observable because what's around the corner is not known.
• Deterministic vs Stochastic – If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is stochastic (non-deterministic).
• Example:
• Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars – the actions of a self-driving car are not uniquely determined; they vary from time to time.
• Competitive vs Collaborative
• An agent is said to be in a competitive environment when it competes against
another agent to optimize the output.
• The game of chess is competitive, as the agents compete with each other to win the game, which is the output.
• An agent is said to be in a collaborative environment when multiple agents
cooperate to produce the desired output.
• When multiple self-driving cars are found on the roads, they cooperate with each
other to avoid collisions and reach their destination which is the output desired.
• Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-agent environment.
• Example: an agent solving a crossword puzzle.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent, as it involves 11 players in each team.
• Chess is a two-agent environment.
• Dynamic vs Static
• If the environment can change while an agent is deliberating, then such an environment is called dynamic; otherwise it is called static.
• An idle environment with no change in its state is called a static environment.
• Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.
• Discrete vs Continuous
• If an environment consists of a finite number of actions that can be deliberated in
the environment to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of moves. The number of
moves might vary with every game, but still, it’s finite.
• An environment in which the actions performed cannot be enumerated, i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of a continuous environment, as their actions, such as steering and speed control, cannot be enumerated.
• Episodic vs Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current percept is required for each action.
• In a sequential environment, however, an agent requires memory of past actions to determine the next best action.
• Crosswords, chess – sequential
• Image analysis – episodic
Knowledge-Based Systems
• An agent is a program that implements a mapping from perceptions to actions. For simple agents this way of looking at the problem is sufficient. For complex applications, in which the agent must be able to rely on a large amount of knowledge and is meant to do a difficult task, programming the agent can be very costly, and it is unclear how to proceed. Here AI provides a clear path to follow that will greatly simplify the work.
• First we separate the knowledge from the system or program that uses the knowledge to, for example, reach conclusions, answer queries, or come up with a plan. This system is called the inference mechanism.
• The knowledge is stored in a knowledge base (KB). Acquisition of knowledge for the knowledge base is called knowledge engineering and is based on various knowledge sources such as human experts, the knowledge engineer, and databases.
• The general architecture of knowledge-based systems thus consists of a knowledge base coupled with an inference mechanism.
• Moving toward a separation of knowledge and inference has several crucial advantages. It allows inference systems to be implemented in a largely application-independent way. For example, applying a medical expert system to other diseases is much more easily done by replacing the knowledge base than by programming a whole new system.
• Through the decoupling of the knowledge base from inference, knowledge can be stored declaratively. In the knowledge base there is only a description of the knowledge, which is independent of the inference system in use. Without this clear separation, knowledge and the processing of inference steps would be interwoven, and any changes to the knowledge would be very costly.
• A formal language, as a convenient interface between man and machine, lends itself to the representation of knowledge in the knowledge base (a tiny rule-based sketch follows below).
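A minimal Python sketch of this separation, with made-up facts and rules: the knowledge base below is purely declarative data, and the inference mechanism is a generic procedure that would work unchanged with a knowledge base for a completely different domain:

```python
# Knowledge-based system in miniature: a declarative knowledge base (facts
# and if-then rules) plus a generic forward-chaining inference mechanism.

facts = {"fever", "cough"}                       # KB: known facts
rules = [({"fever", "cough"}, "flu_suspected"),  # KB: premises -> conclusion
         ({"flu_suspected"}, "recommend_rest")]

def forward_chain(facts: set, rules: list) -> set:
    """Apply rules repeatedly until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'fever', 'cough', 'flu_suspected', 'recommend_rest'} (set order may vary)
```

Swapping in a different rule set, say for another disease, changes the system's behavior without touching forward_chain; this is exactly the advantage of keeping knowledge declarative.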
