
Artificial Intelligence 417:

Most Expected Questions 2024

▪ Chapter-wise questions
▪ Subjective and objective questions

Introduction to AI

• Machine Learning: It is a subset of Artificial Intelligence which enables machines to improve at tasks with experience (data).

• The intention of Machine Learning is to enable machines to learn by themselves using the provided data and make accurate predictions/decisions.

• Machine Learning is used in Snapchat filters and the Netflix recommendation system.


This is because our model will simply remember the whole training set,
and will therefore always predict the correct label for any point in the
training set.


Google Maps, Apple Maps, Ola, Uber


Naturalist Intelligence


Any machine that has been trained with data and can make decisions/predictions on its own can be termed as AI.

E.g., a bot or automation machine that is not trained with any data is not AI, while a chatbot that understands and processes human language is AI.


In the given scenario, we are concerned about bias.

When we talk about a machine, we know that it is artificial and cannot think on its own. It can have
intelligence, but we cannot expect a machine to have any biases of its own.

Any bias can transfer from the developer to the machine while the algorithm is being developed.


1. Mathematical Logical Reasoning: the ability to regulate, measure, and understand numerical symbols, abstraction, and logic.

2. Linguistic Intelligence: language processing skills, both in terms of understanding and implementation, in writing or verbally.

3. Spatial Visual Intelligence: the ability to perceive the visual world and the relationship of one object to another.

4. Kinesthetic Intelligence: the ability to use one's limbs in a skilled manner.

5. Musical Intelligence: the ability to recognize and create sounds, rhythms, and sound patterns.

Artificial Intelligence (AI) refers to any technique that enables computers to mimic human intelligence, i.e., make decisions, predict the future, and learn and improve on their own.

With respect to the type of data fed into the AI model, AI models can be broadly categorised into three domains:

1. Data Science: takes input in the form of numeric and alphanumeric data.

2. Computer Vision: takes input in the form of images and videos.

3. Natural Language Processing: takes input in the form of text and speech.

Neural networks are loosely modelled after how neurons in the human brain behave. The features of a neural network are:

1. They are able to extract data features automatically, without needing the input of the programmer.

2. A neural network is essentially a system of organising machine learning algorithms to perform certain tasks.

3. It is a fast and efficient way to solve problems where the dataset is very large, such as with images.
Project Cycle

2. Data can be collected from one of the following sources:
   a. Surveys
   b. Observing the therapist's sessions
   c. Databases available on the internet
   d. Interviews

3. Once the textual data has been collected, it needs to be processed and cleaned so that an easier version can be sent to the machine. Thus, the text is normalised through various steps and reduced to a minimum vocabulary, since the machine does not require grammatically correct statements but the essence of the text.

Problem Statement Template


Data Science

Comma Separated Values


While accessing data from any data source, the following points should be kept in mind:

1. Only data which is available for public usage should be taken up.

2. Personal datasets should only be used with the consent of the owner.

3. One should never breach someone's privacy to collect data.

4. Reliable sources of data ensure the authenticity of the data, which helps in the proper training of the AI model.


Computer Vision

Apple Vision Pro, self-driving cars


Object Detection


Resolution of an image refers to the number of pixels in an image, across its width and height.

For example, a monitor resolution of 1280×1024 means there are 1280 pixels from one side to the other and 1024 from top to bottom.
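
As a quick arithmetic check, a minimal Python sketch (the numbers are the ones from the example above):

# Total pixel count follows directly from the resolution.
width, height = 1280, 1024
total_pixels = width * height
print(total_pixels)  # 1310720, i.e. roughly 1.3 megapixels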

Natural Language Processing


The first step in data processing is Text Normalisation.

It helps in cleaning up the textual data in such a way that its complexity comes down to a level lower than that of the actual data.

In text normalisation, we undergo several steps to normalise the text to a lower level.

The whole textual data from all the documents is known as the corpus.


Stemming is the process in which the affixes of words are removed and the words are converted to their base form.

In lemmatization, the word we get after affix removal (known as the lemma) is a meaningful one.

Lemmatization makes sure that the lemma is a word with meaning, and hence it takes longer to execute than stemming.
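
A minimal sketch of the difference, using Python's NLTK library (NLTK is an assumption here; the document does not prescribe a library, and the WordNet data must be downloaded once before lemmatizing):

# Contrasting stemming and lemmatization on the same word.
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

# Stemming chops affixes; the result need not be a real word.
print(stemmer.stem("studies"))          # studi
# Lemmatization returns a meaningful base form (the lemma).
print(lemmatizer.lemmatize("studies"))  # study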


46 tokens


The whole textual data from all the documents altogether is known as the corpus.


Stopwords in the given sentence are: is, the, of, that, into, are, and


• Yes, the given statement is correct.

• Automatic summarization is relevant not only for summarizing the meaning of documents and information, but also for understanding the emotional meanings within the information, such as when collecting data from social media.

• Automatic summarization is especially relevant when used to provide an overview of a news item or blog post, while avoiding redundancy from multiple sources and maximizing the diversity of the content obtained.


The steps to implement the bag of words algorithm are as follows (a small Python sketch follows the list):

1. Text Normalisation: collect data and pre-process it.

2. Create Dictionary: make a list of all the unique words occurring in the corpus. (Vocabulary)

3. Create document vectors: for each document in the corpus, find out how many times each word from the unique list of words has occurred.

4. Create document vectors for all the documents.
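
A minimal sketch of these four steps in Python (the two-document corpus below is illustrative, not from the source):

# Bag of words: a vocabulary plus per-document word counts.
corpus = [
    "Aman and Anil are stressed",
    "Aman went to a therapist",
]

# Step 1: text normalisation (here, just lowercasing and splitting).
documents = [doc.lower().split() for doc in corpus]

# Step 2: create the dictionary of unique words (the vocabulary).
vocabulary = sorted({word for doc in documents for word in doc})

# Steps 3 and 4: create a document vector for every document.
vectors = [[doc.count(word) for word in vocabulary] for doc in documents]

print(vocabulary)
for vector in vectors:
    print(vector)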


Corpus


Term Frequency Inverse Document Frequency


Script bot


Bag of words gives us two things:

1. A vocabulary of words for the corpus

2. The frequency of these words (the number of times each word has occurred in the whole corpus)


1. Tokenisation:
Akash, and, Ajay, are, best, friends | Akash, likes, to, play, football, but, Ajay, prefers, to, play, online, games

2. Removal of stopwords:
Akash, Ajay, best, friends | Akash, likes, play, football, Ajay, prefers, play, online, games

3. Converting text to a common case:
akash, ajay, best, friends | akash, likes, play, football, ajay, prefers, play, online, games

4. Stemming/Lemmatisation:
akash, ajay, best, friend | akash, like, play, football, ajay, prefer, play, online, game
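
The same steps as a minimal Python sketch (the stopword list here is a tiny illustrative one; real pipelines use a full list, and step 4 uses a stemmer or lemmatizer from a library such as NLTK):

# Text normalisation: tokenise, drop stopwords, lowercase.
sentence = "Akash and Ajay are best friends"
stopwords = {"and", "are", "to", "but"}  # illustrative only

tokens = sentence.split()                                   # 1. tokenisation
tokens = [t for t in tokens if t.lower() not in stopwords]  # 2. stopword removal
tokens = [t.lower() for t in tokens]                        # 3. common case
print(tokens)  # ['akash', 'ajay', 'best', 'friends']
# 4. stemming/lemmatisation would then reduce "friends" to "friend".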
Evaluation


F1 score can be defined as the measure of balance between precision and recall.
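
In formula form, as used in the calculations later in this document: F1 Score = (2 × Precision × Recall)/(Precision + Recall), where both Precision and Recall range between 0 and 1.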


The model should have an F1 score of 1 if it has to be 100% accurate.


• Let us consider a model that predicts whether a mail is spam or not.

• If the model always predicts that a mail is spam, people would not look at it and might eventually lose important information.

• Here, the False Positive condition (predicting a mail as spam when it is not spam) would have a high cost.


• The confusion matrix is used to store the results of the comparison between the prediction and the reality.

• From the confusion matrix, we can calculate parameters like recall, precision, and F1 score, which are used to evaluate the performance of an AI model.
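
For reference, the standard layout of a binary confusion matrix (labels only; the counts depend on the model being evaluated):

                     Reality: Yes          Reality: No
Prediction: Yes      True Positive (TP)    False Positive (FP)
Prediction: No       False Negative (FN)   True Negative (TN)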


As shown in the graph, the occurrence and the value of a word are inversely proportional.

The words which occur most frequently (like stopwords) have negligible value.

As the occurrence of words drops, the value of such words rises. These words are termed rare or valuable words: they occur the least but add the most value to the corpus.
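
This intuition is what TFIDF formalises. In the standard formulation (stated here for reference): TFIDF(W) = TF(W) × log(N/DF(W)), where TF(W) is the frequency of word W in a document, DF(W) is the number of documents containing W, and N is the total number of documents. Frequent words thus get scores near zero, while rare words score high.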


Confusion Matrix


• Predicting heavy rain is important for farmers to protect their crops.

• Focusing only on positive predictions ("a storm is coming") can lead to farmers needlessly delaying their crop if the prediction is not accurate.

• Focusing only on negative predictions ("a storm is not coming") can lead to a damaged crop if the prediction is not accurate.

• The best approach is to balance accuracy with catching important events. This is what the F1 score measures.


(i) TP = 60, TN = 10, FP = 25, FN = 5

Total cases = 60 + 25 + 5 + 10 = 100, so 100 cases have been performed in all.

(ii) (Note: for calculating Precision, Recall and F1 score, we need not multiply the formula by 100, as all these parameters need to range between 0 and 1.)

Precision = TP/(TP+FP) = 60/(60+25) = 60/85 ≈ 0.71

Recall = TP/(TP+FN) = 60/(60+5) = 60/65 ≈ 0.92

F1 Score = (2 × Precision × Recall)/(Precision + Recall)
         = (2 × 0.71 × 0.92)/(0.71 + 0.92)
         ≈ 0.8
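
A minimal Python sketch to verify these numbers (values exactly as given above):

# Verifying precision, recall and F1 score from the confusion-matrix counts.
TP, TN, FP, FN = 60, 10, 25, 5

precision = TP / (TP + FP)  # 60/85 ≈ 0.71
recall = TP / (TP + FN)     # 60/65 ≈ 0.92
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.71 0.92 0.8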
