
1. Describe an AI agent and explain the agent's environment.

2. Explain the steps in defining a problem as a state space search.

3. Describe the DFS search strategy.
4. Elaborate on propositional logic and its operations. Also explain tautology.
5. Explain the forward chaining process.
6. Describe Bayes' theorem in detail.
7. What is learning? Explain rote learning and inductive learning in AI.
8. What is decision tree learning? Explain its terminology.
9. Define expert systems and explain how they work.
10. Describe the components and applications of an expert system.
1.

Agent:

 An Agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
 A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and
other body parts for actuators.
 A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.

A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts
on the environment by displaying on the screen, writing files, and sending network packets.

Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has
ever perceived.

Agent function:
Mathematically speaking, we say that an agent's behaviour is described by the
agent function that maps any given percept sequence to an action.

Agent program:
Internally, the agent function for an artificial agent will be implemented by an
agent program. It is important to keep these two ideas distinct. The agent function
is an abstract mathematical description; the agent program is a concrete
implementation, running on the agent architecture.
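
Below is a minimal sketch of an agent program in Python, assuming a simple two-location "vacuum world" style agent purely for illustration (the environment, percept format, and action names are not taken from the text above).

```python
# Minimal sketch of an agent program: a concrete implementation of the
# abstract agent function, mapping percepts to actions.
# The two-location "vacuum world" below is an assumed, illustrative environment.

class SimpleVacuumAgent:
    def __init__(self):
        self.percept_sequence = []            # complete history of everything perceived

    def program(self, percept):
        """Map the latest percept (location, status) to an action."""
        self.percept_sequence.append(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = SimpleVacuumAgent()
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]:
    print(percept, "->", agent.program(percept))
```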

2.

Steps in Defining the problem as State Space Search:


The state space representation forms the basis of most AI methods.
 Formulate a problem as a state space search by specifying the legal
problem states, the legal operators, and the initial and goal states.
 A state is defined by the specification of the values of all attributes of interest in
the world.
 An operator changes one state into another; it has a precondition,
which is the value of certain attributes prior to the application of the
operator, and a set of effects, which are the attributes altered by the
operator.
 The initial state is the state from which the search begins.
 The goal state is a partial description of the solution.
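
As a small illustration of these steps, the sketch below formulates the classic two-jug water-measuring puzzle as a state space; the puzzle itself is only an assumed example and is not part of the notes above.

```python
# State space sketch for the 4-litre / 3-litre water jug puzzle (assumed example).
# A state is the pair (x, y) of water amounts; each operator maps a state to a new state.

def successors(state):
    x, y = state                                   # x: 4-litre jug, y: 3-litre jug
    return {
        (4, y), (x, 3),                            # fill either jug
        (0, y), (x, 0),                            # empty either jug
        (max(0, x - (3 - y)), min(3, x + y)),      # pour the 4-litre jug into the 3-litre jug
        (min(4, x + y), max(0, y - (4 - x))),      # pour the 3-litre jug into the 4-litre jug
    }

initial_state = (0, 0)                             # both jugs empty
goal_test = lambda s: s[0] == 2                    # goal: exactly 2 litres in the 4-litre jug
print(successors(initial_state))                   # legal states reachable in one step
```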
3.
Depth-First Search:
We may search for the goal along the greatest depth of the tree and move up only
when further traversal along that depth is not possible. We then attempt to find alternative
offspring of the parent of the node (state) last visited. If we visit the nodes of a tree using this
principle to search for the goal, the traversal made is called depth-first traversal and,
consequently, the search strategy is called depth-first search.

Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.

2. Until a goal state is found or NODE-LIST is empty, do:

   a. Remove the first element from NODE-LIST and call it E. If NODE-LIST
      was empty, quit.
   b. For each way that each rule can match the state described in E, do:
      i. Apply the rule to generate a new state.
      ii. If the new state is a goal state, quit and return this state.
      iii. Otherwise, add the new state to the front of NODE-LIST.
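
The following Python sketch mirrors the NODE-LIST algorithm above; the tiny example tree and the goal test are assumptions added only to make it runnable.

```python
# Depth-first search following the NODE-LIST algorithm above.

def depth_first_search(initial_state, goal_test, successors):
    node_list = [initial_state]                # 1. NODE-LIST starts as the initial state
    while node_list:                           # 2. until a goal is found or NODE-LIST is empty
        e = node_list.pop(0)                   # 2a. remove the first element and call it E
        for new_state in successors(e):        # 2b. for each rule that matches E
            if goal_test(new_state):           # 2b-ii. goal state: quit and return it
                return new_state
            node_list.insert(0, new_state)     # 2b-iii. otherwise add it to the FRONT (depth first)
    return None                                # NODE-LIST emptied without finding a goal

# Assumed example: a small tree given as an adjacency dictionary.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}
print(depth_first_search("A", lambda s: s == "E", lambda s: tree[s]))   # -> 'E'
```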
4.

A proposition is a declarative statement that has a truth value of either "true" or
"false". A propositional formula consists of propositional variables and connectives. We
denote the propositional variables by capital letters (A, B, etc.). The connectives connect the
propositional variables.

Some examples of Propositions are given below −

 "Man is Mortal", it returns truth value “TRUE”


 "12 + 9 = 3 – 2", it returns truth value “FALSE”

The following is not a Proposition −

 "A is less than 2". It is because unless we give a specific value of A, we cannot say
whether the statement is true or false.
Connectives

In propositional logic, we generally use five connectives, which are −

 OR (∨)
 AND (∧)
 Negation/ NOT (¬)
 Implication / if-then (→)
 If and only if (⇔).
OR (∨) − The OR operation of two propositions A and B (written as A∨B) is true if at least
one of the propositional variables A or B is true.
AND (∧) − The AND operation of two propositions A and B (written as A∧B) is true if both
of the propositional variables A and B are true.
Negation (¬) − The negation of a proposition A (written as ¬A) is false when A is true and is
true when A is false.
Implication / if-then (→) − An implication A→B is the proposition "if A, then B". It is false
if A is true and B is false; in all other cases it is true.
If and only if (⇔) − A⇔B is the bi-conditional logical connective, which is true when A and B
are the same, i.e., both are false or both are true.
Tautologies

A Tautology is a formula which is always true for every value of its propositional variables.

Example − Prove [(A→B)∧A]→B is a tautology


A       B       A→B     (A→B)∧A     [(A→B)∧A]→B

True    True    True    True        True
True    False   False   False       True
False   True    True    False       True
False   False   True    False       True
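
The same result can be checked mechanically; the short sketch below enumerates every combination of truth values, mirroring the table above.

```python
# Verify that [(A -> B) AND A] -> B is true for every assignment of A and B.
from itertools import product

def implies(p, q):
    return (not p) or q          # an implication is false only when p is true and q is false

for A, B in product([True, False], repeat=2):
    value = implies(implies(A, B) and A, B)
    print(f"A={A!s:5}  B={B!s:5}  [(A->B) AND A] -> B = {value}")

# Every row prints True, so the formula is a tautology.
```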

5.

Forward chaining is also known as forward deduction or the forward reasoning method when
using an inference engine. Forward chaining is a form of reasoning that starts with atomic
sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward
direction to extract more data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all rules whose premises
are satisfied, and adds their conclusions to the known facts. This process repeats until the
problem is solved.

Properties of Forward-Chaining:
o It is a bottom-up approach, as it moves from the bottom to the top.
o It is a process of reaching a conclusion based on known facts or data,
starting from the initial state and reaching the goal state.
o The forward-chaining approach is also called data-driven, as we reach the
goal using the available data.
o The forward-chaining approach is commonly used in expert systems, such as
CLIPS, and in business and production rule systems.

Example:

"As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were sold to it
by Robert, who is an American citizen."

Prove that "Robert is a criminal."

To solve the above problem, first, we will convert all the above facts into first-order definite
clauses, and then we will use a forward-chaining algorithm to reach the goal.

Facts Conversion into FOL:


o It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r
are variables)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
o Country A has some missiles. ∃p Owns(A, p) ∧ Missile(p). It can be written in two
definite clauses by using Existential Instantiation, introducing the new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
o All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
o An enemy of America is known as hostile.
Enemy(p, America) → Hostile(p) ........(6)
o Country A is an enemy of America.
Enemy (A, America) .........(7)
o Robert is American
American(Robert). ..........(8)

Forward chaining proof:


Step-1:

In the first step, we start with the known facts and choose the sentences that do not
have implications, such as American(Robert), Enemy(A, America), Owns(A, T1), and
Missile(T1). These facts form the initial fact base.

Step-2:

In the second step, we add the facts that can be inferred from the available facts by rules
whose premises are satisfied.

The premises of Rule (1) are not yet satisfied, so it is not fired in the first iteration.

Facts (2) and (3) are already in the fact base.

Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is
inferred from the conjunction of facts (2) and (3).

Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from
fact (7).
Step-3:

At step 3, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A},
so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have
reached the goal.
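
The whole derivation can be reproduced with a small forward-chaining loop. In the sketch below the first-order clauses are flattened into ground (propositional) rules over the constants Robert, T1, and A; this simplification is an assumption made purely for illustration.

```python
# Forward chaining over the ground facts and rules of the example above.
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

rules = [
    ({"Missile(T1)"}, "Weapon(T1)"),                                  # rule (5)
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),            # rule (4)
    ({"Enemy(A,America)"}, "Hostile(A)"),                             # rule (6)
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"),       # rule (1)
]

# Repeatedly fire every rule whose premises are already known,
# adding its conclusion, until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
            print("derived:", conclusion)

print("Goal proved:", "Criminal(Robert)" in facts)   # -> True
```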

6.
Bayes' Theorem, named after 18th-century British mathematician Thomas Bayes, is a
mathematical formula for determining conditional probability. Conditional probability is the
likelihood of an outcome occurring, based on a previous outcome having occurred in similar
circumstances.
Bayes' theorem relies on incorporating prior probability distributions in order to
generate posterior probabilities.

Prior probability, in Bayesian statistical inference, is the probability of an event occurring
before new data is collected. In other words, it represents the best rational assessment of the
probability of a particular outcome based on current knowledge before an experiment is
performed.

Posterior probability is the revised probability of an event occurring after taking into
consideration the new information. Posterior probability is calculated by updating the prior
probability using Bayes' theorem. In statistical terms, the posterior probability is the
probability of event A occurring given that event B has occurred:

P(A|B) = P(B|A) × P(A) / P(B)

Numerical Example of Bayes' Theorem


As a numerical example, imagine there is a drug test that is 98% accurate, meaning that 98%
of the time, it shows a true positive result for someone using the drug, and 98% of the time,
it shows a true negative result for nonusers of the drug.

Next, assume 0.5% of people use the drug. If a person selected at random tests positive for
the drug, the following calculation can be made to determine the probability the person is
actually a user of the drug.

(0.98 x 0.005) / [(0.98 x 0.005) + ((1 - 0.98) x (1 - 0.005))] = 0.0049 / (0.0049 + 0.0199) =
19.76%
Bayes' Theorem shows that even if a person tested positive in this scenario, there is a
roughly 80% chance the person does not take the drug.
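
The same calculation, written out as Bayes' theorem in a few lines of Python (the variable names are ours, not part of the example above):

```python
# P(user | positive) = P(positive | user) * P(user) / P(positive)
sensitivity = 0.98     # P(positive | user): true positive rate of the test
specificity = 0.98     # P(negative | non-user): true negative rate of the test
p_user = 0.005         # prior probability that a random person uses the drug

p_positive = sensitivity * p_user + (1 - specificity) * (1 - p_user)
p_user_given_positive = sensitivity * p_user / p_positive

print(f"P(user | positive) = {p_user_given_positive:.4f}")   # about 0.1976, i.e. 19.76%
```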

7.
Learning: Learning is one of the fundamental building blocks of artificial intelligence (AI)
solutions. From a conceptual standpoint, learning is a process that improves the knowledge of
an AI program by making observations about its environment.
Rote Learning:
The rote learning technique avoids understanding the inner complexities and focuses
on memorizing the material so that it can be recalled by the learner exactly the way it
was read or heard.

Learning by memorization: this avoids understanding the inner complexities of the
subject that is being learned.

Learning by repetition: saying the same thing and trying to remember how
to say it; it does not help us to understand, it helps us to remember, the way we learn a
poem, a song, etc.
Inductive Learning:
There are two types of inductive learning,

 Supervised
 Unsupervised

Supervised learning (the machine has access to a teacher who corrects it):

Supervised learning is the machine learning task of inferring a function from labeled training data.
The training data consist of a set of training examples. In supervised learning, each example is a
pair consisting of an input object (typically a vector) and a desired output value (also called the
supervisory signal). Example: face recognition.

Unsupervised learning (no access to a teacher; instead, the machine must search for
"order" and "structure" in the environment):

Since no desired output is provided in this case, the algorithm must categorize the data
itself so that it correctly differentiates, for example, between the face of a horse, a cat, or a human
(clustering of data).

Clustering:
In clustering or unsupervised learning, the target features are not given in the
training examples. The aim is to construct a natural classification that can be used to cluster the
data. The general idea behind clustering is to partition the examples
into clusters or classes. Each class predicts feature values for the examples in the class. Each
clustering has a prediction error on the predictions. The best clustering is the one that minimizes
the error.
Example: An intelligent tutoring system may want to cluster students' learning behavior so that
strategies that work for one member of a class may work for other members.
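
As a concrete sketch of the idea, the code below runs a simple k-means style clustering with NumPy; the algorithm choice and the toy data are assumptions, since the notes above do not prescribe a particular clustering method.

```python
import numpy as np

def kmeans(points, k, iterations=10, seed=0):
    """Partition points into k clusters by repeatedly reassigning and re-centring."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iterations):
        # assign each point to its nearest centre (minimising the prediction error)
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # move each centre to the mean of the points assigned to it
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

data = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],    # one natural cluster
                 [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])   # another natural cluster
labels, centers = kmeans(data, k=2)
print(labels)
print(centers)
```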

Reinforcement Learning:

Imagine a robot that can act in a world, receiving rewards and punishments and determining
from these what it should do. This is the problem of reinforcement learning. Most
reinforcement learning research is conducted within the mathematical framework of
Markov Decision Processes.

8.
Decision trees are a form of supervised machine learning in which the training data is continually
split based on a particular parameter, describing the input and the associated output. Decision
nodes and leaves are the two components that make up the tree. The choices or results
are represented by the leaves, and the data is split at the decision nodes.

Two criteria are commonly used to choose the attribute on which to split:

1. Gini index: The Gini index has a range of 0 to 1, with 0 denoting a pure set (all samples are from
the same class) and 1 denoting maximum impurity (an equal distribution of samples among classes).
The attribute with the lowest Gini index is chosen as the optimal attribute for splitting the data when
building a decision tree.

2. Information gain: A second criterion for assessing the quality of a split is information gain. It
measures the reduction in entropy (uncertainty) achieved by dividing the data on a specific
attribute. The entropy of a set S is calculated as follows:

Entropy(S) = −Σ p · log2(p)

where p is the proportion of each class in the set S. The information gain for a particular attribute is
the difference between the entropy of the parent set S and the weighted average entropy of its child
subsets following the split:

Information Gain = Entropy(S) − weighted average entropy of the child subsets
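
The two criteria can be computed directly; the small sketch below uses an assumed toy set of class labels to show the Gini index, entropy, and information gain of a split.

```python
import math
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def entropy(labels):
    n = len(labels)
    return -sum((count / n) * math.log2(count / n) for count in Counter(labels).values())

def information_gain(parent, children):
    n = len(parent)
    weighted = sum(len(child) / n * entropy(child) for child in children)
    return entropy(parent) - weighted

parent = ["yes", "yes", "yes", "no", "no", "no"]
split  = [["yes", "yes", "yes"], ["no", "no", "no"]]     # a perfectly pure split
print(gini(parent))                     # 0.5
print(entropy(parent))                  # 1.0
print(information_gain(parent, split))  # 1.0 (the pure split removes all uncertainty)
```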

Terminologies of Decision Tree


1. Decision Tree: A decision tree is a flowchart-like structure in which each leaf node
represents a class label or a decision, and each inside node represents a test on an
attribute.
2. Root Node: The topmost node from which all other nodes in a decision tree are
derived. Based on the chosen attribute, it divides into child nodes to represent the
complete dataset.
3. Internal Node: A node in a decision tree that represents an attribute test. Based on
the attribute value, it divides the dataset into subsets.
4. Leaf Node: In a decision tree, this node stands in for a decision or a class label. There
are no child nodes of it.
5. Attribute: A characteristic or feature that is used to divide a dataset at a node.
Each internal node of a decision tree represents an attribute test.
6. Splitting: The division of the dataset into more manageable subsets in accordance
with the chosen attribute. It establishes the decision tree's structure.
7. Split Criterion: A parameter or measurement that assesses the effectiveness of a
split. Gini impurity, entropy, and misclassification error are common split criteria.
8. Gini Impurity: A gauge of an example set's impurity or disorder. It shows the
likelihood of classifying an example wrongly at a node.
9. Entropy: A metric for the disorder or impurity of a collection of samples. It
quantifies the dataset's ambiguity or unpredictability.
10. Information Gain: A measurement of the reduction in entropy or impurity brought
about by a split. The attribute that provides the most information about the class
labels is chosen.
11. Pruning: Removing superfluous branches or nodes from a decision tree in order to
lessen overfitting and increase generalization.
12. Overfitting: When a decision tree model too closely mimics the noise or random
fluctuations in the training data, it performs poorly on data that has not yet been
observed.
13. Underfitting: This refers to a decision tree model that performs poorly on both
training and test data because it is too simplistic and cannot identify the underlying
patterns in the data.
14. Depth: The length of the longest path in a decision tree from the root node to a
leaf node. It establishes the tree's level of complexity.
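
The terminology maps directly onto library implementations. Below is a brief sketch assuming scikit-learn is available (it is not mentioned in the notes above); the iris dataset is used only as an illustrative example.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# criterion="entropy" chooses splits by information gain ("gini" is the default);
# max_depth limits tree depth, a simple form of pre-pruning against overfitting.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Print the root node, internal nodes (attribute tests), and leaf nodes (class labels).
print(export_text(tree, feature_names=list(iris.feature_names)))
```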

9.

An expert system is a computer program that uses artificial intelligence (AI) technologies to
simulate the judgment and behavior of a human or an organization that has expertise and
experience in a particular field.

Expert systems are usually intended to complement, not replace, human experts.
How does an expert system work?

Modern expert knowledge systems use machine learning and artificial intelligence to
simulate the behavior or judgment of domain experts. These systems can improve their
performance over time as they gain more experience, just as humans do.

Expert systems accumulate experience and facts in a knowledge base and integrate them with
an inference or rules engine -- a set of rules for applying the knowledge base to situations
provided to the program.

The inference engine uses one of two methods for acquiring information from the knowledge
base:

1. Forward chaining reads and processes a set of facts to make a logical prediction about
what will happen next. An example of forward chaining would be making predictions
about the movement of the stock market.

2. Backward chaining reads and processes a set of facts to reach a logical conclusion about
why something happened. An example of backward chaining would be examining a set of
symptoms to reach a medical diagnosis.
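
The contrast between the two methods can be sketched with a toy rule base; the rules and facts below are invented purely for illustration and are not part of any real expert system.

```python
rules = [({"fever", "rash"}, "measles"),        # IF fever AND rash THEN measles
         ({"measles"}, "stay_home")]            # IF measles THEN stay_home
facts = {"fever", "rash"}

def forward_chain(facts, rules):
    """Data-driven: keep firing rules until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    """Goal-driven: work backwards from the goal until known facts are reached."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain(facts, rules))                # derives 'measles' and 'stay_home'
print(backward_chain("stay_home", facts, rules))  # True: explains why 'stay_home' holds
```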

An expert system relies on having a good knowledge base. Experts add information to the
knowledge base, and nonexperts use the system to solve complex problems that would
usually require a human expert.

The process of building and maintaining an expert system is called knowledge engineering.
Knowledge engineers ensure that expert systems have all the necessary information to solve a
problem. They use various knowledge representation methodologies, such as symbolic
patterns, to do this. The system's capabilities can be enhanced by expanding the knowledge
base or creating new sets of rules.

10.
What are the components of an expert system?

There are three main components of an expert system:

 The knowledge base. This is where the information the expert system draws upon is
stored. Human experts provide facts about the expert system's particular domain or
subject area, and these facts are organized in the knowledge base. The knowledge base
often contains a knowledge acquisition module that enables the system to gather
knowledge from external sources and store it in the knowledge base.

 The inference engine. This part of the system pulls relevant information from the
knowledge base to solve a user's problem. It is a rules-based system that maps known
information from the knowledge base to a set of rules and makes decisions based on those
inputs. Inference engines often include an explanation module that shows users how the
system came to its conclusion.

 The user interface. This is the part of the expert system that end users interact with to get
an answer to their question or problem.
Applications and use cases of expert systems

Expert systems can be effective in specific domains or subject areas where experts are
required to make diagnoses, judgments or predictions.

These systems have played a large role in many industries, including the following:

 Financial services, where they make decisions about asset management, act as robo-
advisors and make predictions about the behavior of various markets and other financial
indicators.

 Mechanical engineering, where they troubleshoot complex electromechanical machinery.

 Telecommunications, where they are used to make decisions about network technologies
used and maintenance of existing networks.

 Healthcare, where they assist with medical diagnoses.

 Agriculture, where they forecast crop damage.

 Customer service, where they help schedule orders, route customer requests and solve
problems.

 Transportation, where they contribute in a range of areas, including pavement
conditions, traffic light control, highway design, bus and train scheduling and
maintenance, and aviation flight patterns and air traffic control.
 Law, where automation is starting to be used to deliver legal services, and to make civil
case evaluations and assess product liability.
