myCBSEguide

Class 10 - Artificial Intelligence (417)


Evaluation Test - 01

1. What is the term used for (1-Accuracy)?


a) Accuracy
b) Misclassification
c) F1 Score
d) Recall
2. The production of an analysis that corresponds too closely or exactly to a particular set of data, and may thus fail to fit additional data or predict future observations reliably, is known as:
a) None of these
b) Evaluation
c) Modelling
d) Overfitting
3. What is evaluation?
i. It is the method for understanding the reliability of an AI model at this stage
ii. Testing data is very crucial
iii. It is done by comparing the output generated by the model with the actual outputs required by the model
iv. All of these
a) Option (iii)
b) Option (i)
c) Option (iv)
d) Option (ii)
4. The ratio of Correct predictions to the Total cases or samples is called:
a) Accuracy
b) Misclassification
c) F1 Score
d) Precision
5. The process of understanding the reliability of an AI model, which is based on outputs by feeding the testing dataset into
the model and comparing it with actual answers, is known as:
a) Modelling
b) Confusion matrix
c) Evaluation
d) Overfitting
To practice more questions & prepare well for exams, download myCBSEguide App. It provides complete study
material for CBSE, NCERT, JEE (main), NEET-UG and NDA exams. Teachers can use Examin8 App to create similar
papers with their own name and logo.
6. When will an AI model exhibit high performance?
7. What is the size of the confusion matrix?
8. Define Prediction in Evaluation.
9. What is the other name given for False Positive?
10. How is F1 Score calculated?
11. What are the different types of classification evaluation metrics?
12. How do you suggest which evaluation metric is more important for any case?

13. Define F1 score and write its formula. Given below are variations in precision and recall; what will be the F1 score for each of these instances?
Precision   Recall   F1 Score
Low         Low
Low         High
High        Low
High        High
14. Out of Precision and Recall, which Metric is more important? Explain with the help of examples.
15. What is a confusion matrix? Explain in detail with the help of an example.


Class 10 - Artificial Intelligence (417)


Evaluation Test - 01

Solution

1. (b) Misclassification
Explanation: Misclassification
2. (d) Overfitting
Explanation: Overfitting
3. (c) Option (iv)
Explanation: All of these
4. (a) Accuracy
Explanation: Accuracy
5. (c) Evaluation
Explanation: Evaluation
6. An AI model will have good performance if the F1 Score for that model is high.
7. It is a 2 × 2 matrix denoting the right and wrong predictions.
8. Prediction is the output given by the machine, and Reality is the actual scenario (for example, whether a forest fire has actually occurred) at the time the Prediction is made.
9. Type 1 error
10. F1 is based on Precision and Recall. When one of them (precision or recall) increases, the other often goes down. Hence, the F1 score combines precision and recall into a single number, calculated as:
F1 = (2 × Precision × Recall) / (Precision + Recall)
It is clear that the F1 Score is the harmonic mean of Recall and Precision.
We can also rewrite the F1 score in terms of the values that we saw in the confusion matrix: true positives, false negatives, and false positives.
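As a minimal illustration, the same calculation can be written in Python; the counts below (tp, fp, fn) are assumed example values, not taken from the question paper:

# F1 from Precision and Recall, and equivalently from the confusion-matrix counts.
# tp, fp, fn are assumed example counts, used only for illustration.
tp, fp, fn = 40, 10, 20                                  # true positives, false positives, false negatives
precision = tp / (tp + fp)                               # 40 / 50 = 0.80
recall = tp / (tp + fn)                                  # 40 / 60 ≈ 0.67
f1 = 2 * precision * recall / (precision + recall)       # harmonic mean of precision and recall
f1_from_counts = 2 * tp / (2 * tp + fp + fn)             # same value, written with the raw counts
print(round(f1, 3), round(f1_from_counts, 3))            # both print 0.727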
11. The following are the classification evaluation metrics:
i. Accuracy
ii. Confusion matrix
iii. Precision
iv. Recall
v. F1 Score
vi. Log Loss
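As a minimal sketch (not part of the syllabus answer), the metrics listed above can be computed with scikit-learn, assuming it is installed; y_true, y_pred and y_prob below are made-up example values:

# Computing the listed metrics with scikit-learn (assumed to be installed).
# y_true, y_pred and y_prob are made-up example values, not from the question paper.
from sklearn.metrics import (accuracy_score, confusion_matrix, precision_score,
                             recall_score, f1_score, log_loss)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                       # actual labels (1 = positive class)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]                       # predicted labels
y_prob = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]       # predicted probability of the positive class

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Confusion:", confusion_matrix(y_true, y_pred))   # note: here rows = actual, columns = predicted
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 Score :", f1_score(y_true, y_pred))
print("Log Loss :", log_loss(y_true, y_prob))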
12. The F1 Score is generally the most useful evaluation metric, because it maintains a balance between the Precision and the Recall of the classifier. When the Precision is low, F1 is low, and when the Recall is low, the F1 Score is again low.
The F1 Score is a number between 0 and 1 and is the harmonic mean of Precision and Recall:
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

When we have a value of 1 (that is, 100%) for both Precision and Recall, the F1 Score is also an ideal 1 (100%). This is known as the perfect value for the F1 Score. Because the values of both Precision and Recall range from 0 to 1, the F1 Score also ranges from 0 to 1.
13. The F1 score can be defined as the balance between precision and recall. If both precision and recall have a value of 1, the F1 score will automatically be 1, i.e. the model's performance is 100%. The formula for the F1 score is:
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

Precision   Recall   F1 Score
Low         Low      Low
Low         High     Low
High        Low      Low
High        High     High
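The table can be checked quickly in Python using assumed stand-in values, Low = 0.1 and High = 0.9 (any similar small and large values behave the same way):

# Quick check of the table above with assumed stand-in values: Low = 0.1, High = 0.9.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

for p_label, p in (("Low", 0.1), ("High", 0.9)):
    for r_label, r in (("Low", 0.1), ("High", 0.9)):
        print(f"Precision={p_label:<4} Recall={r_label:<4} F1={f1(p, r):.2f}")
# F1 stays low (0.10 or 0.18) unless both Precision and Recall are high (then F1 = 0.90).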


14. Choosing the better metric out of Precision and Recall depends on the situation in which the model is being used. In a case like Forest Fire detection or COVID-19 detection, a False Negative can cost us a lot and is risky too. Let's imagine that no alert is given even when there is a Forest Fire; then, the whole forest might burn down. Similarly, consider the case of coronavirus detection, where a False Negative can be dangerous. Suppose that a deadly coronavirus has begun spreading and the model that is supposed to predict a viral outbreak fails to detect it; then the virus might spread widely and infect a lot of people.
On the contrary, there may be cases in which a False Positive costs more than a False Negative. Consider the case of mining. Let's imagine that a model tells us that petroleum exists at a point, and we keep on digging there without finding any petroleum. It turns out to be a false alarm. Here, the False Positive case (predicting there is petroleum when there is none) is very costly. Similarly, consider a model that predicts whether a mail is spam or not. If the model predicts that an important mail is spam when it is not, the user would not look at it and might eventually lose important information. Here, the False Positive condition (predicting the mail as spam while the mail is not spam) has a high cost.
We may conclude that when we want to know whether our model's performance is good, we need two measures: Recall and Precision. In some cases we might prioritise High Precision even at the cost of Low Recall, while in other cases we might accept Lower Precision to get High Recall. So, both measures are important.
15. Confusion Matrix: A Confusion Matrix is a 2 × 2 table used to describe the performance of an AI classification model/classifier on a set of testing data. Evaluation of the performance of an AI classification model is based on the counts of test records correctly and incorrectly predicted by the model. Therefore, the confusion matrix is useful for measuring Recall (also known as Sensitivity), Accuracy, Precision, and F1 Score.
The following table shows the four classification outcomes (TP, TN, FP, FN) obtained by comparing the predicted value with the actual value in a confusion matrix:
The Confusion Matrix   Reality: Yes           Reality: No
Prediction: Yes        True Positive (TP)     False Positive (FP)
Prediction: No         False Negative (FN)    True Negative (TN)
The features of the confusion matrix are as below:
The target variable may have two values: Positive or Negative.
The rows represent the predicted values for the target variable.
The columns represent the actual values of the target variable.
A Confusion Matrix contains four values: True Positive, True Negative, False Positive, and False Negative.
i. True Positive (TP): When the predicted value matches the actual positive value, it is called True Positive. The actual value was positive, and the model also predicted a positive value.
ii. True Negative (TN): When the predicted value matches the actual negative value, then it is called True Negative.
The actual value was negative, and the model also predicted a negative value.
iii. False Positive (FP): When the model falsely predicts a positive value, it is called a False Positive. Thus, the actual value was negative, but the model predicted a positive value. This is also known as a Type 1 error.

iv. False Negative (FN): When the model falsely predicts a negative value, it is called a False Negative. Thus, the actual value was positive, but the model predicted a negative value. This is also known as a Type 2 error.
Example: Case of Loan given by banks (Good loan & Bad loan)

Here, the result of TP will be that bad loans are correctly predicted as bad loans.
While the value of TN will be that good loans are correctly predicted as good loans.
The value of FP will be that (actual) good loans are incorrectly predicted as bad loans.
The value of FN will be that (actual) bad loans are incorrectly predicted as good loans.
Therefore, the bank will lose a lot of money when actual bad loans are predicted as good loans, because those loans are not repaid. On the other hand, the bank only misses out on some revenue when actual good loans are predicted as bad loans. Thus, in this case, the cost of False Negatives (FN) is much higher than the cost of False Positives (FP).
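A minimal sketch of this example in Python, with made-up labels where 1 stands for a bad loan (the positive class) and 0 for a good loan:

# Counting TP, TN, FP, FN for the loan example; the label lists are made-up illustrative data.
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # 1 = bad loan, 0 = good loan
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)   # bad loan correctly flagged
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)   # good loan correctly approved
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)   # good loan wrongly rejected (Type 1 error)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)   # bad loan wrongly approved (Type 2 error)

print("                 Reality: Yes   Reality: No")
print(f"Prediction: Yes    TP = {tp}         FP = {fp}")
print(f"Prediction: No     FN = {fn}         TN = {tn}")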

Copyright © myCBSEguide.com. Mass distribution in any mode is strictly prohibited.

