10 Ai Evaluation tp01
14. Out of Precision and Recall, which Metric is more important? Explain with the help of examples.
15. What is a confusion matrix? Explain in detail with the help of an example.
Solution
1. (b) Misclassification
Explanation: Misclassification occurs when the model's Prediction does not match the Reality, i.e. an FP or FN case.
2. (d) Overfitting
Explanation: Overfitting occurs when a model performs very well on its training data but poorly on new, unseen data.
3. (c) Option (iv)
Explanation: All of these
4. (a) Accuracy
Explanation: Accuracy is the fraction of correct predictions out of all predictions made.
5. (c) Evaluation
Explanation: Evaluation is the stage where a trained model's performance is measured by comparing its Predictions with Reality.
6. An AI model will have good performance if the F1 Score for that model is high.
7. It is a 2 × 2 matrix that records the right and wrong predictions: TP, TN, FP and FN.
8. Prediction is the output given by the machine, and Reality is the actual scenario in the forest at the time the Prediction is made.
9. Type 1 error
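The short answers above (confusion-matrix cells, Prediction vs. Reality, Type 1 error) can be sketched as a small function. This is an illustrative example, not from the worksheet; "Yes"/"No" stand for any positive/negative class, such as "fire"/"no fire" in the forest scenario.

```python
# Label one (Prediction, Reality) pair with its confusion-matrix cell.
def cell(prediction, reality):
    """Return TP, TN, FP or FN for a single case."""
    if prediction == "Yes" and reality == "Yes":
        return "TP"   # True Positive
    if prediction == "No" and reality == "No":
        return "TN"   # True Negative
    if prediction == "Yes" and reality == "No":
        return "FP"   # False Positive -> Type 1 error
    return "FN"       # False Negative -> Type 2 error

print(cell("Yes", "No"))   # FP, i.e. a Type 1 error
```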
10. F1 is based on Precision and Recall. When one of them increases, the other tends to go down. Hence, the F1 Score combines Precision and Recall into a single number, calculated as:
F1 = (2 × Precision × Recall) / (Precision + Recall)
It is clear that the F1 Score is the harmonic mean of Recall and Precision.
We can also rewrite the F1 Score in terms of the values that we saw in the confusion matrix: true positives, false negatives, and false positives.
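As a quick sketch of this answer, the snippet below computes Precision, Recall and the F1 Score directly from confusion-matrix counts. The counts are made-up numbers, chosen only to illustrate the formulas.

```python
# Hypothetical confusion-matrix counts (not from the worksheet).
tp, fp, fn = 30, 10, 20

precision = tp / (tp + fp)   # 30 / 40 = 0.75
recall = tp / (tp + fn)      # 30 / 50 = 0.60

# Harmonic mean of Precision and Recall.
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))
```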
11. The following are the classification evaluation metrics:
i. Accuracy
ii. Confusion matrix
iii. Precision
iv. Recall
v. F1 Score
vi. Log Loss
12. The F1 Score is the Evaluation metric that is more important in any case, because it maintains a balance between Precision and Recall for the classifier. When Precision is low, F1 is low, and when Recall is low, the F1 Score is again low.
The F1 Score is a number between 0 and 1 and is the harmonic mean of Precision and Recall:
F1 Score = (2 × Precision × Recall) / (Precision + Recall)
When both Precision and Recall have a value of 1 (that is, 100%), the F1 Score is also an ideal 1 (100%), known as the perfect value for the F1 Score. Because the values of both Precision and Recall range from 0 to 1, the F1 Score also ranges from 0 to 1.
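The balancing behaviour described in this answer can be demonstrated with assumed example values: the harmonic mean pulls the F1 Score toward the smaller of Precision and Recall, unlike a simple average would.

```python
# Harmonic mean of Precision and Recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(f1(1.0, 1.0))   # 1.0 -> the perfect F1 Score
print(f1(1.0, 0.1))   # low Recall drags the F1 Score down
```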
13. The F1 Score can be defined as the balance between Precision and Recall. If both Precision and Recall have a value of 1, then the F1 Score is automatically 1, i.e. the model scores 100%. The formula for the F1 Score is:
F1 Score = (2 × Precision × Recall) / (Precision + Recall)
Here, a TP means that a bad loan is correctly predicted as a bad loan,
while a TN means that a good loan is correctly predicted as a good loan.
An FP means that an (actually) good loan is incorrectly predicted as a bad loan,
and an FN means that an (actually) bad loan is incorrectly predicted as a good loan.
Therefore, the bank will lose a lot of money when actual bad loans are predicted as good loans (FN), because those loans are not repaid. On the other hand, the bank merely misses out on revenue when actual good loans are predicted as bad loans (FP). Thus, in this case, the cost of False Negatives is much higher than the cost of False Positives.
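The loan scenario can be sketched with made-up data. "bad" is treated as the positive class here, so an FN is a bad loan predicted as good, the costly error for the bank as argued above; the labels below are purely illustrative.

```python
# Hypothetical loan outcomes (Reality) and model outputs (Prediction).
reality = ["bad", "good", "bad", "good", "bad", "good"]
prediction = ["bad", "good", "good", "bad", "good", "good"]

# FN: a bad loan approved (not repaid) -> direct loss of money.
fn = sum(1 for p, r in zip(prediction, reality)
         if r == "bad" and p == "good")
# FP: a good loan rejected -> only lost revenue.
fp = sum(1 for p, r in zip(prediction, reality)
         if r == "good" and p == "bad")

print(fn, fp)   # FN = 2, FP = 1
```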