
Hidden Markov Model (HMM)

A Hidden Markov Model (HMM) is a statistical model that represents systems which follow a Markov process with hidden states. In simpler terms, it models situations where an underlying system can be in different states that are not directly observable (hidden). Instead, we observe outcomes that are influenced by these hidden states.
Applications of HMM
HMMs are widely used in various fields, such as:

1. Speech Recognition: To model sequences of speech sounds and predict spoken words.
2. Natural Language Processing: For part-of-speech tagging, named entity recognition, and other sequence labeling tasks.
3. Bioinformatics: To predict gene sequences, protein structures, and DNA sequence alignments.
4. Finance: To model and predict market trends and stock prices.
5. Activity Recognition: In video analysis, to recognize human activities.
Components of an HMM
1. States: The hidden parts of the model. For example, in weather prediction, the states could be "Sunny," "Cloudy," and "Rainy."
2. Observations: The visible parts of the model. For example, observations could be whether people are carrying umbrellas, wearing sunglasses, or wearing coats.
3. Transition Probabilities: The probabilities of moving from one state to another. For instance, the probability of it being Sunny today given that it was Cloudy yesterday.
4. Emission Probabilities: The probabilities of observing a particular observation from a given state. For example, the probability of people carrying umbrellas given that it is Rainy.
5. Initial Probabilities: The probabilities of the system starting in a particular state.
Example: Weather Prediction
Imagine you are trying to predict the weather based on people's clothing and accessories:

- States (hidden): Sunny, Cloudy, Rainy
- Observations (visible): Sunglasses, No sunglasses, Umbrella, No umbrella
Scenario: Observing People in a City

1. Sunny: People are likely to wear sunglasses.
2. Cloudy: People may not wear sunglasses and might carry an umbrella or not.
3. Rainy: People are likely to carry umbrellas.

Let's set some example probabilities:

- Transition Probabilities:
  - P(Sunny | Sunny) = 0.8
  - P(Cloudy | Sunny) = 0.15
  - P(Rainy | Sunny) = 0.05
  - P(Sunny | Cloudy) = 0.2
  - P(Cloudy | Cloudy) = 0.6
  - P(Rainy | Cloudy) = 0.2
  - P(Sunny | Rainy) = 0.1
  - P(Cloudy | Rainy) = 0.3
  - P(Rainy | Rainy) = 0.6
- Emission Probabilities:
  - P(Sunglasses | Sunny) = 0.9
  - P(No Sunglasses | Sunny) = 0.1
  - P(Sunglasses | Cloudy) = 0.3
  - P(No Sunglasses | Cloudy) = 0.7
  - P(Umbrella | Cloudy) = 0.4
  - P(No Umbrella | Cloudy) = 0.6
  - P(Sunglasses | Rainy) = 0.1
  - P(No Sunglasses | Rainy) = 0.9
  - P(Umbrella | Rainy) = 0.8
  - P(No Umbrella | Rainy) = 0.2
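The tables above can be encoded directly as nested dictionaries. This is a minimal sketch, not a library API; the slides give no initial probabilities, so a uniform distribution is assumed here, and only the sunglasses emissions are encoded (the umbrella table could be handled the same way):

```python
states = ["Sunny", "Cloudy", "Rainy"]

# Transition probabilities: transition[yesterday][today]
transition = {
    "Sunny":  {"Sunny": 0.8, "Cloudy": 0.15, "Rainy": 0.05},
    "Cloudy": {"Sunny": 0.2, "Cloudy": 0.6,  "Rainy": 0.2},
    "Rainy":  {"Sunny": 0.1, "Cloudy": 0.3,  "Rainy": 0.6},
}

# Emission probabilities: emission[state][observation] (sunglasses only)
emission = {
    "Sunny":  {"Sunglasses": 0.9, "No Sunglasses": 0.1},
    "Cloudy": {"Sunglasses": 0.3, "No Sunglasses": 0.7},
    "Rainy":  {"Sunglasses": 0.1, "No Sunglasses": 0.9},
}

# Initial probabilities: assumed uniform (not given on the slide)
initial = {s: 1 / 3 for s in states}

# Sanity check: each probability row must sum to 1
for table in (transition, emission):
    for state, row in table.items():
        assert abs(sum(row.values()) - 1.0) < 1e-9, state
```

The row-sum check is a useful habit: a valid HMM requires each state's outgoing transition probabilities and emission probabilities to form a proper distribution.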
Using HMM
Suppose you observe that people are wearing sunglasses on a particular day. Based on the emission probabilities, you can use the HMM to estimate that the most likely weather (hidden state) on that day is Sunny. Over a sequence of days, by observing patterns of sunglasses and umbrellas, you can predict the sequence of weather states more accurately using the HMM.
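For a single day, this estimate is just the state that maximizes prior times likelihood, P(state) × P(observation | state). A small sketch using the example emission probabilities (the uniform prior is an assumption, since the slides do not give initial probabilities):

```python
emission = {
    "Sunny":  {"Sunglasses": 0.9, "No Sunglasses": 0.1},
    "Cloudy": {"Sunglasses": 0.3, "No Sunglasses": 0.7},
    "Rainy":  {"Sunglasses": 0.1, "No Sunglasses": 0.9},
}
prior = {"Sunny": 1 / 3, "Cloudy": 1 / 3, "Rainy": 1 / 3}  # assumed uniform

def most_likely_state(observation):
    # Unnormalized posterior: P(state | obs) is proportional to
    # prior[state] * P(obs | state), so the argmax is the same.
    scores = {s: prior[s] * emission[s][observation] for s in prior}
    return max(scores, key=scores.get)

print(most_likely_state("Sunglasses"))     # -> Sunny
print(most_likely_state("No Sunglasses"))  # -> Rainy
```

For a sequence of days, this per-day argmax ignores the transition probabilities; that is exactly what the Viterbi algorithm below adds.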
Key Algorithms

1. Forward Algorithm: Used to calculate the probability of a sequence of observations.
2. Viterbi Algorithm: Used to find the most likely sequence of hidden states given the observations.
3. Baum-Welch Algorithm: Used to train the model (i.e., to find the unknown parameters, such as the transition and emission probabilities).
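As an illustration of the second algorithm, here is a minimal Viterbi sketch over the example weather HMM. It keeps, for each state, the probability of the best path ending there, then traces back pointers to recover the full state sequence. The uniform initial distribution is an assumption, and only the sunglasses observations are modelled:

```python
states = ["Sunny", "Cloudy", "Rainy"]
initial = {s: 1 / 3 for s in states}  # assumed uniform
transition = {
    "Sunny":  {"Sunny": 0.8, "Cloudy": 0.15, "Rainy": 0.05},
    "Cloudy": {"Sunny": 0.2, "Cloudy": 0.6,  "Rainy": 0.2},
    "Rainy":  {"Sunny": 0.1, "Cloudy": 0.3,  "Rainy": 0.6},
}
emission = {
    "Sunny":  {"Sunglasses": 0.9, "No Sunglasses": 0.1},
    "Cloudy": {"Sunglasses": 0.3, "No Sunglasses": 0.7},
    "Rainy":  {"Sunglasses": 0.1, "No Sunglasses": 0.9},
}

def viterbi(observations):
    # delta[s] = probability of the best state path ending in s
    delta = {s: initial[s] * emission[s][observations[0]] for s in states}
    backpointers = []
    for obs in observations[1:]:
        new_delta, back = {}, {}
        for s in states:
            # Best previous state from which to transition into s
            prev = max(states, key=lambda p: delta[p] * transition[p][s])
            back[s] = prev
            new_delta[s] = delta[prev] * transition[prev][s] * emission[s][obs]
        delta = new_delta
        backpointers.append(back)
    # Trace back from the best final state
    path = [max(states, key=delta.get)]
    for back in reversed(backpointers):
        path.append(back[path[-1]])
    return list(reversed(path))

print(viterbi(["Sunglasses", "Sunglasses", "No Sunglasses"]))
# -> ['Sunny', 'Sunny', 'Cloudy']
```

Note how the transitions matter: on the third day "No Sunglasses" alone would favor Rainy, but because two sunny days make a transition to Cloudy (0.15) three times more likely than to Rainy (0.05), the best overall path ends in Cloudy.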
Summary

In essence, an HMM helps make educated guesses about unobservable states based on observable events and the statistical relationships between them. This makes it a powerful tool for sequence analysis in many domains.
