
Evaluation Metrics

1. Accuracy
❖ Example: Medical data
➢ The number of instances that are correctly predicted as sick (TP)
➢ The number of instances that are correctly predicted as healthy (TN)
➢ The number of instances that are incorrectly predicted as sick when the
person is healthy (FP)
➢ The number of instances that are incorrectly predicted as healthy when
person is sick (FN)
                                      Machine Predicted       Machine Predicted
                                      as Sick (Class A)       as Healthy (Class B)
In Real Life Sick (Class A, +)        True Positive (TP)      False Negative (FN)
In Real Life Healthy (Class B, -)     False Positive (FP)     True Negative (TN)

(Recall is read along the actual-positive row; precision is read down the predicted-sick column.)
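As a quick sketch, the four counts above can be tallied from paired lists of actual and predicted labels. The data here is purely illustrative (1 = sick, 0 = healthy), not from a real medical dataset:

```python
# Hypothetical labels: 1 = sick (positive), 0 = healthy (negative)
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # correctly predicted sick
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # correctly predicted healthy
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # healthy predicted as sick
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # sick predicted as healthy

print(f"TP={tp} TN={tn} FP={fp} FN={fn}")  # TP=3 TN=3 FP=1 FN=1
```

Every instance falls into exactly one of the four cells, so TP + TN + FP + FN equals the total number of predictions.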

❖ Formula
Accuracy = (Number of Correct Predictions / Total Number of Predictions) × 100 = (TP + TN) / (TP + TN + FP + FN) × 100
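The accuracy formula can be checked with a small sketch. The counts are illustrative (assumed numbers for a hypothetical medical test set), not real data:

```python
# Illustrative confusion-matrix counts (hypothetical medical example)
tp, tn, fp, fn = 3, 3, 1, 1

# Accuracy = correct predictions / total predictions, as a percentage
accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
print(f"Accuracy = {accuracy:.1f}%")  # Accuracy = 75.0%
```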

2. Precision
➢ The Proportion of true positive predictions among all positive predictions
made by the model.

❖ Formula
Precision = TP / (TP + FP) × 100

Where TP is the number of true positive predictions, and
FP is the number of false positive predictions.
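A minimal sketch of the precision formula, using the same illustrative (assumed) counts as the medical example:

```python
# Illustrative counts: of 4 "sick" predictions, 3 were actually sick
tp, fp = 3, 1

# Precision = true positives / all positive predictions, as a percentage
precision = tp / (tp + fp) * 100
print(f"Precision = {precision:.1f}%")  # Precision = 75.0%
```

Precision answers: of everything the model flagged as sick, how much was really sick?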

3. Recall
➢ Also known as "sensitivity" or the "true positive rate", recall measures the
proportion of true positive predictions among all actual positive instances.
❖ Formula
Recall = TP / (TP + FN) × 100

Where TP is the number of true positive predictions, and
FN is the number of actual positives incorrectly predicted as negative.

In words, recall quantifies the model's ability to capture all the positive instances
without missing any. It answers the question: "Out of all the actual positive
instances, how many did the model correctly identify as positive?" A high recall value
indicates that the model is effectively capturing most of the positive instances, with
fewer false negatives.

4. F1-Score (Harmonic mean)


➢ F1-Score is a single metric that combines precision and recall into one
value, providing a balance between them.
➢ It is particularly useful when classes are imbalanced.
➢ A high F1-Score indicates both good precision and recall.

❖ Formula
F1-Score = 2 × (Precision × Recall) / (Precision + Recall)

Where Precision is the proportion of true positive predictions among all
positive predictions, and
Recall is the proportion of true positive predictions among all actual
positive instances.
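A minimal sketch of the F1-Score as the harmonic mean of precision and recall. The precision and recall values here are illustrative (assumed percentages), chosen to differ so the balancing effect is visible:

```python
# Illustrative precision/recall percentages (assumed values)
precision = 75.0
recall = 60.0

# Harmonic mean: penalizes imbalance more than the arithmetic mean would
f1 = 2 * (precision * recall) / (precision + recall)
print(f"F1-Score = {f1:.2f}")  # F1-Score = 66.67
```

Note that the harmonic mean (66.67) sits below the arithmetic mean (67.5); if either precision or recall drops toward zero, the F1-Score drops toward zero with it.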

ENJOY LEARNING!
