Assignment 3: AI

NAME: ABDULLAH NASEER
CLASS: BSSE-5TH
ROLL NO: FSD-SP-073
ASSIGNMENT: ARTIFICIAL INTELLIGENCE
SUBMITTED TO: MAM SAIRA
QUESTION 1(A)

Describe Bayesian decision theory with and without prior knowledge.

Bayesian Decision Theory: With and Without Prior Knowledge

Bayesian decision theory is a framework for making optimal decisions based on probabilities and
potential consequences. It's particularly useful in situations where you need to classify something
or choose an action under uncertainty.

Here's how it works with and without prior knowledge:

With Prior Knowledge:

1. States of Nature (ω): These are the possible true states of the world you're trying to
classify. For example, a disease test might have states "healthy" (ω₁) and "sick" (ω₂).
2. Prior Probabilities (P(ω)): This is your initial belief about how likely each state is, before you see any evidence. You might have a prior belief that P(healthy) = 0.9, based on general population health.
3. Evidence (x): This is some data or observation you make. In the disease test, this could
be a positive test result.
4. Likelihood (p(x|ω)): This is the probability of observing the evidence (x) given a
particular state (ω). For instance, p(positive test result | healthy) might be low.
5. Posterior Probability (P(ω|x)): Using Bayes' theorem, you can update your beliefs about the states after considering the evidence: P(ω|x) = p(x|ω) · P(ω) / P(x).
6. Decision Rule and Loss Function: You define the potential actions (treat, don't treat) and the cost (loss) associated with making the wrong decision for each state. The decision rule chooses the action that minimizes the expected loss, as the sketch after this list illustrates.
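
To make these steps concrete, here is a minimal Python sketch of the disease-test example; every probability and loss value below is a made-up number for illustration:

```python
# Step 2: prior beliefs P(ω) about the states of nature.
priors = {"healthy": 0.9, "sick": 0.1}
# Step 4: likelihoods p(x | ω) of a positive test under each state.
likelihood = {"healthy": 0.05, "sick": 0.95}

# P(x): total probability of the evidence.
p_x = sum(likelihood[w] * priors[w] for w in priors)

# Step 5: posterior P(ω | x) = p(x | ω) * P(ω) / P(x).
posterior = {w: likelihood[w] * priors[w] / p_x for w in priors}

# Step 6: loss[action][true state]; missing a sick patient costs the most.
loss = {
    "treat": {"healthy": 1.0, "sick": 0.0},
    "don't treat": {"healthy": 0.0, "sick": 10.0},
}

# Expected loss (risk) of each action given the evidence; choose the minimum.
risk = {a: sum(loss[a][w] * posterior[w] for w in priors) for a in loss}
best = min(risk, key=risk.get)

print(posterior)         # P(sick | positive) ≈ 0.68
print(risk, "->", best)  # "treat" minimizes the expected loss
```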

Without Prior Knowledge (Purely Empirical):

If you have no initial belief about the states (P(ω) is unknown), you can still use a simpler form of Bayesian decision theory, which amounts to treating every state as equally likely and deciding from the data alone. Here's how:

1. Focus on Likelihood: You directly estimate the likelihood of observing the evidence (x)
given each state (ω) from your data, p(x|ω).
2. Decision Rule: Similar to the case with prior knowledge, you choose the action that minimizes the expected loss, this time based on the likelihoods alone (a small sketch follows this list).
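
A minimal sketch of this likelihood-only decision, reusing the hypothetical test numbers from part (A); with no prior, it reduces to picking the state with the largest likelihood, which is equivalent to Bayes' rule with equal priors:

```python
# p(x | ω) for a positive test result, estimated purely from data.
likelihood = {"healthy": 0.05, "sick": 0.95}

# With equal (unknown) priors, minimizing expected loss under zero-one loss
# reduces to choosing the state that maximizes the likelihood of the evidence.
best_state = max(likelihood, key=likelihood.get)
print(best_state)  # -> "sick"
```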

In essence:

• With prior knowledge, you leverage your existing beliefs to refine your decision-making after observing evidence.
• Without prior knowledge, you rely solely on the data to determine the likelihoods and make the best decision based on that information.

QUESTION 1(B)

Briefly describe applications of Bayesian decision theory with and without prior knowledge.

ANS

Bayesian Applications: Knowledge Makes a Difference

With Prior Knowledge:

• Spam Filtering: Email providers use Bayes' theorem to consider your past behavior (prior knowledge) and the content of an email (evidence) to classify it as spam or not.
• Medical Diagnosis: Doctors use their experience (prior knowledge) alongside test results (evidence) to calculate the probability of a disease and recommend treatment.
• Search Engine Ranking: Search engines consider your past searches (prior knowledge) along with keywords in your query (evidence) to rank the most relevant webpages.

Without Prior Knowledge (Purely Empirical):

• Optical Character Recognition (OCR): OCR software analyzes the shapes of pixels (evidence) to directly determine the most likely letter or character (state) written in an image, without prior knowledge of specific fonts.
• Self-Driving Cars: These cars use real-time sensor data (evidence) to directly calculate the likelihood of pedestrians or obstacles (states) being present on the road.
• A/B Testing: Website designers run A/B tests where users are randomly shown different versions of a page (actions). By analyzing user behavior (evidence), they can directly determine which version performs better (optimal state).

QUESTION 2(A)

How does a Bayesian classifier classify input data? Explain with an example (continuous data).

Classifying with Naive Bayes and Continuous Data

Naive Bayes classifiers are a family of algorithms that use Bayes' theorem to predict the class label (category) of a new data point. They can handle continuous data (numerical values) but make an important assumption:

The Assumption: Within each class, every continuous feature is assumed to be normally distributed, meaning its values follow a bell-shaped curve.
Here's how a Naive Bayes classifier (like Gaussian Naive Bayes) classifies data with an example:

1. Features and Classes:

Imagine you have a dataset for classifying emails as spam (class 1) or not spam (class 2). The
features could be things like word count (continuous) and presence of certain keywords
(discrete).

2. Training:

The classifier is trained on existing labeled emails. It learns:

• The prior probability (P(ω)) of each class (spam vs. not spam) based on the training data.
• The distribution parameters (mean and standard deviation) of each feature within each class. This tells us how "spread out" the word count is for spam emails vs. non-spam emails.

3. Classifying New Email:

When a new email arrives with a certain word count (continuous data) and keywords (discrete
data), the classifier does this:

• Calculates the likelihood (p(x|ω)) of observing this word count given each class (spam or not spam) using the normal distribution parameters learned during training.
• Uses Bayes' theorem to calculate the posterior probability (P(ω|x)) of each class given the email's features (word count and keywords).
• The class with the highest posterior probability is predicted as the class label for the new email (spam or not spam).

Essentially:

The classifier uses the distribution of features within each class to estimate how likely it is that a new data point belongs to each class. It then picks the class with the highest posterior probability.
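
To tie the three steps together, here is a minimal from-scratch sketch of Gaussian Naive Bayes on a single continuous feature (word count); the training emails, word counts, and test values are all invented for illustration, and a real classifier would also fold in the discrete keyword features:

```python
import math

# Steps 1-2 (training data): word counts of labeled emails, one continuous feature.
word_counts = {"spam": [310.0, 280.0, 350.0], "not spam": [120.0, 90.0, 150.0]}

def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(var)

# Priors P(ω) from class frequencies, and per-class Gaussian parameters.
n_total = sum(len(v) for v in word_counts.values())
priors = {c: len(v) / n_total for c, v in word_counts.items()}
params = {c: mean_std(v) for c, v in word_counts.items()}

def gaussian_pdf(x, mean, std):
    # Normal-distribution likelihood p(x | ω).
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Step 3 (classification): posterior ∝ prior × likelihood; pick the largest.
def classify(x):
    scores = {c: priors[c] * gaussian_pdf(x, *params[c]) for c in priors}
    return max(scores, key=scores.get)

print(classify(300.0))  # -> "spam"
print(classify(100.0))  # -> "not spam"
```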

Limitations:

• The assumption of normally distributed features can be a weakness. If the data doesn't follow this distribution, the classification might not be accurate.
• Naive Bayes also assumes features are independent of one another given the class, which isn't always true.

QUESTION 2(B)

Describe problems in pattern classification, with examples.

Pattern classification, despite its successes, faces several challenges. Here are some common
problems with examples:
1. Class Imbalance:

• Problem: Imagine classifying emails as spam or not spam. If spam emails make up only 1% of your data, the classifier might struggle to learn the patterns of spam because it sees "not spam" emails far more often (the snippet after this list shows why accuracy hides this).
• Example: Fraud detection systems often deal with class imbalance. Fraudulent transactions are rare compared to normal transactions, making it difficult to identify fraudulent patterns effectively.
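
A tiny sketch of the "accuracy paradox," using made-up labels that mirror the 1% spam figure above:

```python
# 100 emails: 1 spam (label 1), 99 not spam (label 0).
y_true = [1] + [0] * 99
# A trivial classifier that always predicts "not spam".
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
spam_recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(f"accuracy:    {accuracy:.2f}")    # 0.99 — looks excellent
print(f"spam recall: {spam_recall:.2f}")  # 0.00 — catches no spam at all
```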

2. Overfitting:

• Problem: The classifier memorizes the training data too well and fails to generalize to unseen data. Imagine training a classifier on handwritten digits that only sees digits written in a very specific style (a sketch of this train/test gap follows this list).
• Example: A spam filter trained only on a specific spam campaign might incorrectly classify future, different spam emails.
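
A minimal sketch of the train/test gap, assuming numpy and scikit-learn are installed; the data is synthetic, with 10% of the labels flipped to simulate noise:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic two-feature data with a simple linear boundary, plus ~10% label noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y[rng.random(200) < 0.1] ^= 1  # flip ~10% of labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the noisy training labels (overfits);
# a depth-limited tree is forced to learn the broader pattern instead.
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print("deep tree    train:", deep.score(X_tr, y_tr), "test:", deep.score(X_te, y_te))
print("shallow tree train:", shallow.score(X_tr, y_tr), "test:", shallow.score(X_te, y_te))
# The deep tree typically scores ~1.0 on the training set but worse than the
# shallow tree on the held-out test set.
```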

3. High Dimensionality:

• Problem: When data has many features (dimensions), it can become difficult to visualize patterns and the classifier can struggle with the "curse of dimensionality." For instance, an image recognition system might struggle with very high-resolution images with millions of pixels.
• Example: Analyzing gene expression data often involves thousands of genes. Finding relevant genes for a specific disease can be challenging due to the high dimensionality.

4. Class Overlap:

• Problem: The boundaries between classes are not clear-cut. Imagine classifying images of cats and dogs: some dog breeds might look very cat-like, making classification difficult.
• Example: Medical diagnosis can be challenging because some diseases share similar symptoms. Classifying a disease based on symptoms alone can lead to errors.

5. Noise and Outliers:

• Problem: Real-world data often contains noise (random errors) and outliers (data points very different from the majority). These can confuse the classifier.
• Example: Speech recognition systems can struggle with background noise or unusual accents.
