
SC-BA-06 : ARTIFICIAL INTELLIGENCE IN BUSINESS APPLICATIONS
(404BA)
(2019 Pattern) (Semester-IV)
Time : 2½ Hours] [Max. Marks : 50
Instructions to the candidates:
1) All questions are compulsory.
2) Neat diagrams must be drawn wherever necessary.
3) Assume suitable data, if necessary.

Q1) Answer any 5 out of 8 :


a) Define Artificial intelligence?
b) Define first order logic.
c) State Breadth-First search.
d) Define machine learning.
e) Define Hierarchical clustering.
f) Define Artificial Neural Networks.
g) Define terms ‘Fact’ and ‘Rule’.
h) State uniform-cost search.

a) Define Artificial intelligence?


Artificial intelligence (AI) is a branch of computer science that deals with the
creation of intelligent agents, which are systems that can reason, learn, and act
autonomously. AI research has been highly successful in developing effective
techniques for solving a wide range of problems, from game playing to medical
diagnosis.

b) Define first order logic.

First-order logic (FOL) is a formal language that is used to represent knowledge and reasoning. FOL is based on the idea of predicates, which are statements that can be true or false. For example, the predicate "John is tall" is true if John is tall and false if he is not tall.

c) State Breadth-First search.

Breadth-first search (BFS) is a graph search algorithm that explores the nodes of a
graph in a breadth-first manner. This means that the algorithm starts at the root
node and then explores all of the nodes at the same level before moving on to the
next level.
d) Define machine learning.

Machine learning (ML) is a field of computer science that gives computers the
ability to learn without being explicitly programmed. ML algorithms are trained on
data, and they learn to make predictions or decisions based on that data.

e) Define Hierarchical clustering.

Hierarchical clustering is a family of clustering algorithms that build a hierarchy (tree) of clusters. In the common agglomerative form, each data point starts in its own cluster and the two most similar clusters are repeatedly merged until a single cluster remains; the divisive form works the other way, starting with one cluster containing all the points and recursively splitting it.

f) Define Artificial Neural Networks.

Artificial neural networks (ANNs) are a type of machine learning algorithm that
are inspired by the human brain. ANNs are made up of interconnected nodes,
which are similar to neurons in the brain. ANNs learn by adjusting the weights of
the connections between the nodes.

g) Define terms ‘Fact’ and ‘Rule’.

In first-order logic, a fact is an atomic statement that is asserted to be true, for example tall(john). A rule is a statement that describes how new facts can be derived from existing facts. For example, "If X is taller than Y and Y is taller than Z, then X is taller than Z" is a rule that combines two facts to produce a new fact.

h) State uniform-cost search.

Uniform-cost search (UCS) is a graph search algorithm that expands nodes in order of their path cost from the start node, always choosing the frontier node with the lowest cost so far. When edge costs are non-negative, UCS finds a least-cost path to the goal.

Q2) Answer any 2 out of 3 :


a) Distinguish between forward and backward chaining.
Forward chaining and backward chaining are two different approaches to reasoning in artificial intelligence.

Forward chaining is a data-driven (bottom-up) approach. It starts from the known facts and repeatedly applies rules to derive new facts until the goal is established or no new facts can be derived. For example, given the fact that John is 6 feet 5 inches tall and the rule that anyone over 6 feet is tall, forward chaining derives the new fact that John is tall.

Backward chaining is a goal-driven (top-down) approach. It starts with the goal and works backwards: it looks for rules whose conclusion matches the goal and then tries to establish their premises from the known facts. For example, to answer the goal "Is John tall?", backward chaining finds the rule "anyone over 6 feet is tall" and then checks whether the fact "John is over 6 feet" holds.

Here is a table that summarizes the key differences between forward chaining and
backward chaining:

Feature                  Forward chaining                   Backward chaining
Approach                 Data-driven (bottom-up)            Goal-driven (top-down)
Starting point           Known facts                        Goal (hypothesis)
Direction of reasoning   From facts to goal                 From goal to facts
Strengths                Derives all consequences of the    Focused: only examines rules
                         data; suits monitoring and          relevant to the goal; suits
                         data-driven expert systems          query answering and diagnosis
Weaknesses               May derive many facts that are     Requires a clearly stated goal;
                         irrelevant to the goal              may re-prove the same subgoals

Forward chaining is natural when new data keep arriving and all of their consequences are wanted, but it can waste effort deriving facts that are irrelevant to the goal. Backward chaining is more focused, since it only explores rules that can contribute to the goal, but it requires the goal to be stated in advance.

Here are some examples of forward chaining and backward chaining:

• Forward chaining: A rule-based spam filter works forward from the facts about an email, for example the fact that it contains the word "viagra" or that it comes from an unknown sender. It applies its rules to these facts, and if enough rules fire it derives the conclusion that the email is spam.
• Backward chaining: A diagnostic expert system typically works backward from a hypothesis, for example the goal "the patient has the flu". It looks for rules whose conclusion is that diagnosis and then checks whether the supporting facts (the symptoms) are present; if they are, the goal is proved.

A small forward-chaining sketch is given below.
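To make the forward-chaining loop concrete, here is a minimal, hypothetical sketch in Python. The fact and rule names (contains_viagra, unknown_sender, spam) are invented for this illustration; a real spam filter would use many more rules and weighted evidence.

facts = {"contains_viagra", "unknown_sender"}

# Each rule is (set of premises, conclusion).
rules = [
    ({"contains_viagra"}, "suspicious_content"),
    ({"suspicious_content", "unknown_sender"}, "spam"),
]

# Forward chaining: keep firing rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("spam" in facts)   # True: the derived facts now include "spam"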


b) What is the common way to represent and parse grammars for natural
language processing?
The most common way to represent and parse grammars for natural language
processing is using context-free grammars (CFGs). CFGs are a type of formal
grammar that is used to describe the structure of sentences in a natural language.
CFGs are made up of production rules, which are statements that describe how a
sequence of symbols can be generated.

For example, the following CFG production rule describes how a noun phrase
(NP) can be generated:

NP → Det N

This rule states that a noun phrase can be generated by starting with a determiner
(Det) and then following it with a noun (N).

CFGs can be used to parse sentences by starting from the grammar's start symbol (usually S, for sentence) and applying production rules until the words of the sentence are produced. Suppose the grammar also contains the rules S → NP VP, VP → V NP, Det → "the", N → "boy" | "apple", and V → "ate". Then the following sentence can be parsed:

The boy ate the apple.

The parser expands S into NP VP, applies NP → Det N to produce "the boy", expands VP into V NP to produce the verb "ate", and applies NP → Det N again to produce "the apple", yielding a parse tree for the complete sentence.

CFGs are a powerful tool for representing and parsing grammars for natural
language processing. They are relatively easy to understand and implement, and
they can be used to represent a wide variety of grammatical structures.
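As an illustration, here is a minimal sketch of this grammar in Python using the NLTK library (this assumes NLTK is installed; the grammar fragment simply mirrors the rules above and covers only this one sentence):

import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the'
N -> 'boy' | 'apple'
V -> 'ate'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse(["the", "boy", "ate", "the", "apple"]):
    print(tree)   # prints the parse tree for "the boy ate the apple"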

Here are some other ways to represent and parse grammars for natural language
processing:

• Dependency grammars: Dependency grammars describe a sentence as a set of directed head-dependent relations between individual words rather than as nested phrases. Dependency parses are widely used for tasks such as information extraction, semantic parsing, and question answering.
• Probabilistic grammars: Probabilistic grammars (for example, probabilistic CFGs) attach probabilities to the production rules, which lets a parser choose the most likely parse when a sentence is ambiguous. They are also used in tasks such as natural language generation and machine translation.
The choice of which representation and parsing algorithm to use will depend on
the specific task that is being performed.

c) Explain state space approach for solving any AI problem.


The state space approach is a general problem-solving technique that can be used to solve a wide variety of problems. It works by representing the problem as a state space: a graph whose nodes are the possible states of the problem and whose edges are the actions (transitions) that move from one state to another. The goal of the state space approach is to find a path from the initial state of the problem to a goal state.

The state space approach can be used to solve any AI problem that can be
represented as a state space. Some examples of AI problems that can be solved
using the state space approach include:

• Pathfinding: Pathfinding is the problem of finding a route between two points. The state space of a pathfinding problem is the graph of locations, with an edge for every move that leads from one location to another.
• Game playing: Game playing is the problem of playing a game against an
opponent. The state space of a game playing problem is the graph of all
possible states of the game.
• Scheduling: Scheduling is the problem of finding a schedule for a set of
tasks. The state space of a scheduling problem is the graph of all possible
schedules for the tasks.

The state space approach is a powerful problem-solving technique, but it can be computationally expensive for problems with large state spaces.

Here are some of the steps involved in the state space approach:

1. Problem representation: The first step is to represent the problem as a state space. This involves identifying the states of the problem, the transitions between states, and the goal state.
2. Search: The next step is to search the state space for a path to the goal state.
This can be done using a variety of search algorithms, such as breadth-first
search, depth-first search, and A* search.
3. Planning: Once a path to the goal state has been found, the next step is to
plan how to execute the path. This involves generating a sequence of actions
that will take the problem from the initial state to the goal state.
4. Execution: The final step is to execute the plan and solve the problem.

As noted above, the main cost of the approach is that the state space can grow very large. Despite this, it is a versatile technique that underlies the solution of a wide variety of AI problems; a generic sketch of the approach is given below.
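The following minimal, generic Python sketch assumes the problem is supplied as an initial state, a goal test, and a successor function; the tiny graph at the end is illustrative data, not part of the original answer.

from collections import deque

def state_space_search(initial_state, goal_test, successors):
    """Breadth-first search over a state space; returns a list of actions or None."""
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path                      # solution extraction: the plan is the path
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, path + [action]))
    return None

# Example: pathfinding on a small graph, searching from node 0 to node 4.
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
plan = state_space_search(0, lambda s: s == 4,
                          lambda s: [("go to %d" % n, n) for n in graph[s]])
print(plan)   # ['go to 1', 'go to 3', 'go to 4']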

Q3) Answer 3 (a) or 3 (b) :


a) Discuss the role of reasoning in AI. How is predicate logic used in AI to represent knowledge?
Reasoning is a fundamental part of artificial intelligence (AI). It is the ability to
draw inferences from premises, and to use those inferences to solve problems.
Reasoning is used in AI for a variety of tasks, including:

• Natural language processing: Reasoning is used in natural language processing to understand the meaning of text. For example, reasoning can be used to determine the logical implications of a sentence, or to infer the relationships between entities mentioned in a sentence.
• Machine learning: Reasoning is used in machine learning to make
predictions. For example, reasoning can be used to combine multiple
features to make a prediction, or to reason about the uncertainty of a
prediction.
• Robotics: Reasoning is used in robotics to plan and execute actions. For
example, reasoning can be used to plan a path for a robot to follow, or to
reason about the consequences of an action.

Predicate logic is a formal language that is used to represent knowledge in AI. Predicate logic is based on the idea of predicates, which are statements that can be true or false. For example, the predicate "John is tall" is true if John is tall and false if he is not tall.

Predicate logic can be used to represent a wide variety of knowledge, including facts, rules, and relationships. For example, the following predicate logic statement represents the fact that John is tall and Mary is short:

tall(john) & short(mary)

The following statement represents the general rule that anyone who is tall is taller than anyone who is short (X and Y are variables):

tall(X) & short(Y) -> taller(X, Y)

Predicate logic can be used to reason about knowledge in a variety of ways. For
example, predicate logic can be used to:
• Deduce new knowledge: New knowledge can be deduced from existing
knowledge by using the rules of logic. For example, the following predicate
logic statement can be deduced from the statements above:
taller(john, mary)
• Answer questions: Questions can be answered by using predicate logic to
reason about the knowledge that is represented. For example, the following
question can be answered using the statements above:
Is John taller than Mary?
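As an illustration, here is a minimal, hypothetical Python sketch of how these facts and the rule could be encoded and the new fact derived. The tuple-based encoding is an assumption made for this example, not a standard knowledge-representation library:

# Facts: tall(john), short(mary), stored as predicate/argument tuples.
facts = {("tall", "john"), ("short", "mary")}

# Rule: tall(X) & short(Y) -> taller(X, Y), applied for every binding of X and Y.
def apply_rule(known):
    derived = set()
    for pred1, x in known:
        for pred2, y in known:
            if pred1 == "tall" and pred2 == "short":
                derived.add(("taller", x, y))
    return derived

facts |= apply_rule(facts)

# Answering the question "Is John taller than Mary?"
print(("taller", "john", "mary") in facts)   # True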

Predicate logic is a powerful tool for representing and reasoning about knowledge
in AI. It is a versatile language that can be used to represent a wide variety of
knowledge, and it can be used to reason about knowledge in a variety of ways.

b) Explain A* searching technique in detail with example. Discuss conditions for the optimality of this technique.
A* search is a graph search algorithm that is used to find the least-cost path between two nodes in a graph. It is a best-first search: it always expands the node that currently looks most promising according to the evaluation function f(n) = g(n) + h(n), where g(n) is the cost of the path from the start node to n and h(n) is a heuristic estimate of the cost from n to the goal.

The A* search algorithm works by maintaining a priority queue of nodes that have been generated but not yet expanded, ordered by f(n). The algorithm pops the node with the lowest f value from the priority queue and expands it. If the node is the goal node, then the algorithm terminates and returns the path found. Otherwise, the algorithm generates the node's successors and adds them to the priority queue.

Here is an example of how the A* search algorithm finds the shortest path between two nodes in a graph. Let's say we have a graph that is the simple chain:

A-B-C-D-E

The start node is A, the goal node is E, the cost of each edge is 1, and the heuristic gives the remaining distance to E: h(A) = 4, h(B) = 3, h(C) = 2, h(D) = 1, h(E) = 0.

The algorithm starts by adding A to the priority queue with f(A) = g(A) + h(A) = 0 + 4 = 4.

It then pops A and expands it, adding its successor B with f(B) = 1 + 3 = 4. Next it pops B and adds C with f(C) = 2 + 2 = 4, then pops C and adds D with f(D) = 3 + 1 = 4, then pops D and adds E with f(E) = 4 + 0 = 4.

Finally the algorithm pops E, recognizes it as the goal node, and terminates, returning the path A-B-C-D-E with total cost 4.
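A small Python sketch of A* using a binary heap is shown below. It is a minimal illustration rather than a production implementation; the edge list and heuristic values simply reproduce the chain example above.

import heapq

def a_star(start, goal, neighbors, h):
    """neighbors(n) yields (edge_cost, next_node); h(n) is the heuristic estimate."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for cost, nxt in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float("inf")

edges = {"A": [(1, "B")], "B": [(1, "C")], "C": [(1, "D")], "D": [(1, "E")], "E": []}
heuristic = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}
print(a_star("A", "E", lambda n: edges[n], lambda n: heuristic[n]))
# (['A', 'B', 'C', 'D', 'E'], 4)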

The A* search algorithm is optimal under the following conditions:

• The heuristic h is admissible: it never overestimates the true cost of reaching the goal. (For graph search with repeated states it should also be consistent, i.e., h(n) ≤ cost(n, n') + h(n') for every edge from n to n'.)
• The edge costs are non-negative.

If these conditions are met, then the A* search algorithm is guaranteed to find a least-cost path from the start node to the goal.

Here are some of the advantages of the A* search algorithm:

• It is a relatively efficient algorithm.


• It is guaranteed to find the shortest path between two nodes in the graph, if
the conditions for optimality are met.

Here are some of the disadvantages of the A* search algorithm:

• It can be computationally expensive, and in particular memory-hungry, for graphs with a large number of nodes, because it keeps every generated node on the frontier.
• Its performance depends heavily on the quality of the heuristic; with a weak heuristic it degenerates towards uniform-cost search.

Overall, the A* search algorithm is a powerful and efficient algorithm for finding
the shortest path between two nodes in a graph. However, it can be
computationally expensive for graphs with a large number of nodes.

Q4) Answer 4 (a) or 4 (b)


a) What are steps involved in natural language processing (NLP) of an
English sentence? Explain with an example sentence.
Natural language processing (NLP) is a field of computer science that deals with
the interaction between computers and human (natural) languages. NLP
encompasses a wide range of tasks, such as:

• Text analysis: This involves extracting meaning from text, such as identifying the sentiment of a text or the topics that are discussed in a text.
• Machine translation: This involves translating text from one language to
another.
• Question answering: This involves answering questions that are posed in
natural language.

The steps involved in NLP of an English sentence can be summarized as follows:

1. Tokenization: This involves breaking the sentence down into individual words or tokens. For example, the sentence "The quick brown fox jumps over the lazy dog" would be tokenized into the following tokens: "The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog".
2. Part-of-speech tagging: This involves assigning a part-of-speech tag to each
token. For example, the token "The" would be tagged as a determiner, the
token "quick" would be tagged as an adjective, and the token "jumps" would
be tagged as a verb.
3. Stemming: This involves reducing inflected words to their root form. For
example, the words "jumping" and "jumped" would both be stemmed to the
root word "jump".
4. Lemmatization: This involves grouping together different inflected forms of
a word under a single lemma. For example, the words "jumping", "jumped",
and "jump" would all be lemmatized to the lemma "jump".
5. Named entity recognition: This involves identifying named entities in a text, such as people, places, organizations, and dates. The example sentence happens to contain no named entities; in text that does, NER marks spans such as person names, place names, organization names, and dates.
6. Parsing: This involves analyzing the syntactic structure of a sentence. For
example, the sentence "The quick brown fox jumps over the lazy dog"
would be parsed into a tree structure that shows the relationships between
the different words in the sentence.
7. Semantic analysis: This involves determining the meaning of a sentence.
This can be done by using a variety of techniques, such as word sense
disambiguation, coreference resolution, and sentiment analysis.

Here is an example of how the steps involved in NLP of an English sentence can
be applied to the sentence "The quick brown fox jumps over the lazy dog":
• Tokenization: The sentence is tokenized into the following tokens: "The",
"quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog".
• Part-of-speech tagging: The tokens are tagged with their part-of-speech tags.
For example, the token "The" is tagged as a determiner, the token "quick" is
tagged as an adjective, and the token "jumps" is tagged as a verb.
• Stemming: The words are reduced to their root form. For example, the word "jumps" is stemmed to the root "jump".
• Lemmatization: The words are mapped to their dictionary form (lemma). Here too, "jumps" is lemmatized to the lemma "jump".
• Named entity recognition: The sentence is scanned for named entities. This particular sentence contains no people, places, organizations, or dates, so no entities are recognized.
• Parsing: The sentence is parsed into a tree structure that shows the
relationships between the different words in the sentence.
• Semantic analysis: The meaning of the sentence is determined. This can be
done by using a variety of techniques, such as word sense disambiguation,
coreference resolution, and sentiment analysis.
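A minimal sketch of these steps in Python using the NLTK library is shown below. It assumes NLTK is installed and that the required resources (punkt, the POS tagger, WordNet, and the NE chunker data) have been downloaded; the parsing and semantic-analysis steps would need a further grammar or model and are omitted here.

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

sentence = "The quick brown fox jumps over the lazy dog"

tokens = nltk.word_tokenize(sentence)                         # 1. tokenization
tags = nltk.pos_tag(tokens)                                   # 2. part-of-speech tagging
stems = [PorterStemmer().stem(t) for t in tokens]             # 3. stemming, e.g. "jumps" -> "jump"
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]   # 4. lemmatization
entities = nltk.ne_chunk(tags)                                # 5. named entity recognition (none here)

print(tags, stems, lemmas, sep="\n")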

b) Give an example of a problem for which breadth-first search would work better than depth-first search.
Breadth-first search (BFS) is a graph search algorithm that explores the nodes of a graph level by level: it first visits the nodes closest to the start node, then the nodes one level further away, and so on. As a result, the first time BFS reaches a node, it has done so along a path with the fewest possible edges.

Depth-first search (DFS) is a graph search algorithm that explores the nodes of a graph in a depth-first manner: it follows a single path as deep as possible and only backtracks to try an alternative branch when it reaches a node with no unvisited neighbours.

Here is an example of a problem for which BFS would work better than DFS:

• Finding the shortest path between two nodes in an unweighted graph: because BFS expands nodes in order of their distance from the start node, the first path it finds to the goal is guaranteed to contain the fewest edges. DFS, by contrast, commits to one branch and follows it deeply before considering alternatives, so the first path it finds may be much longer than necessary. A minimal sketch of BFS for this problem is given below.
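The sketch below is a small illustration of BFS shortest-path search in Python; the example graph is invented for the illustration.

from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()            # oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(bfs_shortest_path(graph, "A", "E"))   # ['A', 'B', 'D', 'E']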
Here is an example of a problem for which DFS would work better than BFS:

• Searching a very deep graph when memory is limited: DFS only needs to remember the current path (and where to backtrack), whereas BFS must keep every node of the current frontier in memory. In a search tree that is deep and has a high branching factor, BFS can exhaust memory long before it reaches a solution, while DFS can still make progress, provided infinite paths are handled, for example with a depth limit.


Q5) Answer 5(a) or 5(b) :


a) Write in detail about any two informed search strategies.
Here are two informed search strategies:

A* search: A* search is a graph search algorithm that is used to find the least-cost path between two nodes in a graph. It is a best-first strategy that always expands the node with the lowest value of the evaluation function f(n) = g(n) + h(n), where g(n) is the cost of the path from the start node to n and h(n) is a heuristic estimate of the cost from n to the goal.

A* search works by maintaining a priority queue of nodes that have been generated but not yet expanded, ordered by f(n). The algorithm pops the node with the lowest f value and expands it. If the node is the goal node, the algorithm terminates; otherwise its successors are generated and added to the priority queue.

The heuristic function should be admissible, which means that it should never overestimate the cost of reaching the goal; this is what guarantees that A* returns an optimal path.

Greedy best-first search: Greedy best-first search is a graph search algorithm that is similar to A* search, but it orders the frontier by the heuristic alone, f(n) = h(n), ignoring the cost already spent on the path. It therefore always moves toward whatever node looks closest to the goal.

Greedy best-first search is a simpler algorithm than A* search and often expands fewer nodes, but it is not guaranteed to find an optimal path (and, without a visited set, it may loop). It is useful when a quick, reasonably good solution is acceptable. A small sketch contrasting the two priority functions is given below.
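The following minimal Python sketch is an illustration of the single difference between the two strategies: the priority used to order the frontier (g + h for A*, h alone for greedy best-first). The function name and parameters are invented for this example.

import heapq

def informed_search(start, goal, neighbors, h, use_path_cost):
    """use_path_cost=True gives A*-style behaviour; False gives greedy best-first."""
    frontier = [(h(start), 0, start)]        # entries: (priority, g, node)
    visited = set()
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                          # cost of the path that reached the goal
        if node in visited:
            continue
        visited.add(node)
        for cost, nxt in neighbors(node):
            priority = (g + cost + h(nxt)) if use_path_cost else h(nxt)
            heapq.heappush(frontier, (priority, g + cost, nxt))
    return None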

Here are some of the advantages of informed search strategies:

• By exploiting problem-specific knowledge through the heuristic, they usually expand far fewer nodes, and therefore find a solution faster, than uninformed strategies such as breadth-first or uniform-cost search.
• With an admissible heuristic, A* still guarantees an optimal (least-cost) solution while doing this reduced amount of work.

Here are some of the disadvantages of informed search strategies:

• A good heuristic has to be designed for each problem, and evaluating it adds overhead at every node.
• Algorithms such as A* can require a large amount of memory, because every generated node is kept on the frontier.

Overall, informed search strategies are a powerful tool for solving problems that involve searching a graph, provided a suitable heuristic function is available for the problem at hand.

b) Distinguish ambiguity and disambiguation in AI.


Ambiguity and disambiguation are two important concepts in artificial
intelligence (AI).

Ambiguity is the property of having multiple meanings. In AI, ambiguity can occur in natural language processing (NLP), where a word or phrase can have multiple meanings depending on the context. For example, the word "bank" can refer to a financial institution, the side of a river, or a mound of earth.
Disambiguation is the process of resolving ambiguity. In AI, disambiguation is
used to determine the correct meaning of a word or phrase in a given context. For
example, if a machine learning model is given the sentence "I went to the bank,"
the model would need to disambiguate the word "bank" to determine whether the
sentence refers to a financial institution or the side of a river.

Here are some of the techniques that can be used for disambiguation in AI:

• Context: The context of a word or phrase can help to disambiguate its meaning. For example, the word "bank" is more likely to refer to a financial institution in the sentence "I went to the bank to deposit my paycheck."
• Knowledge base: A knowledge base is a collection of facts and relationships
that can be used to disambiguate words and phrases. For example, a
knowledge base that includes the fact that "banks are financial institutions"
would be helpful for disambiguating the word "bank" in the sentence "I
went to the bank."
• Statistical methods: Statistical methods can be used to disambiguate words
and phrases by analyzing the frequency of different meanings in a corpus of
text. For example, a statistical method might be used to determine that the
word "bank" is more likely to refer to a financial institution than to the side
of a river based on the frequency of these two meanings in a corpus of text.
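As an illustration, the classic Lesk algorithm is a simple knowledge-base approach to word sense disambiguation: it picks the WordNet sense whose dictionary gloss overlaps most with the surrounding words. A minimal sketch using Python and NLTK (assuming NLTK and its WordNet and punkt resources are installed) might look like this:

from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

context = word_tokenize("I went to the bank to deposit my paycheck")
sense = lesk(context, "bank", "n")   # choose the noun sense of "bank" that best fits the context
if sense is not None:
    print(sense.name(), "-", sense.definition())

Lesk is only a baseline; modern systems usually combine such knowledge-base methods with statistical or neural models trained on annotated corpora.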

Overall, ambiguity and disambiguation are important concepts in AI that are used
to resolve the multiple meanings of words and phrases. The techniques that can be
used for disambiguation in AI include context, knowledge bases, and statistical
methods.

  
