The story goes thus (from Wikipedia): two friends, Alice and Bob, live far apart from each other and talk daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. His choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather where Bob lives, but she knows the general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.
Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).
Alice knows the general weather trends in the area, and what Bob likes to do on average. Since this is a second-order model, start_probability is keyed by a pair of consecutive days ("previous|current"): for example, 'Rainy|Rainy' : 0.7 reflects Alice's belief that, given it was rainy on one day, there is a probability of 0.7 that it will rain the next day as well. Below is the Python code (with some helpful print statements added). Feel free to copy :)
states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')

start_probability = {
    'Rainy|Rainy': 0.7,
    'Rainy|Sunny': 0.3,
    'Sunny|Rainy': 0.4,
    'Sunny|Sunny': 0.6,
}

transition_probability = {
    'Rainy|Rainy': {'Rainy': 0.8, 'Sunny': 0.2},
    'Rainy|Sunny': {'Rainy': 0.5, 'Sunny': 0.5},
    'Sunny|Rainy': {'Rainy': 0.6, 'Sunny': 0.4},
    'Sunny|Sunny': {'Rainy': 0.3, 'Sunny': 0.7},
}

emission_probability = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}
def forward_viterbi(obs, states, start_p, trans_p, emit_p):
    # T maps a state pair "prv|curr" to a triple:
    #   (forward probability, best Viterbi path, Viterbi path probability)
    T = {}
    for prv_state in states:
        for curr_state in states:
            pair = prv_state + "|" + curr_state
            T[pair] = (start_p[pair], [curr_state], start_p[pair])

    for output in obs:
        U = {}
        print("--------------------\nObservation:", output)
        for next_state in states:
            print("Next state: " + next_state)
            for curr_state in states:
                # Sum (forward) and maximize (Viterbi) over the previous
                # state, separately for each (curr, next) pair.
                total = 0
                argmax = None
                valmax = 0
                for prv_state in states:
                    pair = prv_state + "|" + curr_state
                    print("\tprv_state|curr_state:", pair)
                    (prob, v_path, v_prob) = T.get(pair, (0, None, 0))
                    p = emit_p[curr_state][output] * trans_p[pair][next_state]
                    prob *= p
                    v_prob *= p
                    total += prob
                    if v_prob > valmax:
                        argmax = v_path + [next_state]
                        valmax = v_prob
                    print("\t\t", v_path, v_prob)
                U[curr_state + "|" + next_state] = (total, argmax, valmax)
                print("\targmax:", argmax, "valmax:", valmax)
        T = U

    # apply sum/max to the final state pairs:
    total = 0
    argmax = None
    valmax = 0
    for (prob, v_path, v_prob) in T.values():
        total += prob
        if v_prob > valmax:
            argmax = v_path
            valmax = v_prob
    return (total, argmax, valmax)
def example():
    return forward_viterbi(observations,
                           states,
                           start_probability,
                           transition_probability,
                           emission_probability)

res = example()
print("\nResult:", res)
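To double-check the dynamic program, here is an independent brute-force check (my addition, not in the original post): enumerate every possible weather trajectory, score it directly with the same tables, then sum the scores (forward) and take the best one (Viterbi). Under my reading of the recursion above, a trajectory is an initial pair (s0, s1), one hidden state per observation, plus the trailing transition the recursion also applies; the printed total and best score should agree with forward_viterbi's output up to floating-point rounding.

```python
from itertools import product

states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')
start_p = {'Rainy|Rainy': 0.7, 'Rainy|Sunny': 0.3,
           'Sunny|Rainy': 0.4, 'Sunny|Sunny': 0.6}
trans_p = {'Rainy|Rainy': {'Rainy': 0.8, 'Sunny': 0.2},
           'Rainy|Sunny': {'Rainy': 0.5, 'Sunny': 0.5},
           'Sunny|Rainy': {'Rainy': 0.6, 'Sunny': 0.4},
           'Sunny|Sunny': {'Rainy': 0.3, 'Sunny': 0.7}}
emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

total = 0.0
best_path, best_score = None, 0.0
# seq = (s0, s1, s2, s3, s4): initial pair plus one state per step.
for seq in product(states, repeat=len(observations) + 2):
    score = start_p[seq[0] + '|' + seq[1]]
    for t, obs in enumerate(observations):
        prv, curr, nxt = seq[t], seq[t + 1], seq[t + 2]
        score *= emit_p[curr][obs] * trans_p[prv + '|' + curr][nxt]
    total += score
    if score > best_score:
        best_score, best_path = score, list(seq[1:])

print('brute-force total:', total)
print('brute-force best path:', best_path, 'score:', best_score)
```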
3 comments:
Isn't this a "first-order" HMM? Your state transition for j only depends on ONE previous state i, e.g. p(j|i); a second-order HMM should be p(k|i,j): the probability of being in state k given that the previous TWO states are i and j.
Nice extrapolation. But I think it could be done better: what if someone wants an nth-order HMM? If I'm able to write it, I'll surely post a link.
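One standard trick in this direction (my addition, not the commenter's code): an nth-order HMM can always be reduced to a first-order one by taking n-tuples of states as the new states, after which the ordinary first-order Viterbi applies. A sketch, assuming the order-n table is keyed by n-tuples of the history (a convention made up for this example):

```python
from itertools import product

def to_first_order(states, trans_n, n):
    """Rewrite an order-n transition table as a first-order one whose
    states are n-tuples of the original states.

    trans_n maps an n-tuple history to {next_state: prob}.
    Returns {history_tuple: {next_history_tuple: prob}}.
    """
    first_order = {}
    for hist in product(states, repeat=n):
        row = {}
        for nxt, p in trans_n[hist].items():
            # The new state drops the oldest day and appends the next one.
            row[hist[1:] + (nxt,)] = p
        first_order[hist] = row
    return first_order

# The post's order-2 weather table, re-keyed as tuples:
trans_2 = {
    ('Rainy', 'Rainy'): {'Rainy': 0.8, 'Sunny': 0.2},
    ('Rainy', 'Sunny'): {'Rainy': 0.5, 'Sunny': 0.5},
    ('Sunny', 'Rainy'): {'Rainy': 0.6, 'Sunny': 0.4},
    ('Sunny', 'Sunny'): {'Rainy': 0.3, 'Sunny': 0.7},
}
fo = to_first_order(('Rainy', 'Sunny'), trans_2, 2)
print(fo[('Rainy', 'Sunny')])
# prints {('Sunny', 'Rainy'): 0.5, ('Sunny', 'Sunny'): 0.5}
```

The cost is that the state space grows to |S|^n, which is why higher-order models get expensive quickly.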
Question: how did you calculate the transition and emission probabilities?
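The usual answer (mine, not the post author's): if you have historical days labelled with both the weather and Bob's activity, the maximum-likelihood estimates are just normalized counts. A minimal sketch with made-up data; for the second-order transitions you would count pairs of previous days instead, and in practice you would smooth to avoid zero probabilities for unseen events.

```python
from collections import Counter, defaultdict

# Hypothetical labelled diary: (weather, activity) per day, in order.
diary = [('Sunny', 'walk'), ('Sunny', 'shop'), ('Rainy', 'clean'),
         ('Rainy', 'shop'), ('Rainy', 'clean'), ('Sunny', 'walk')]

emit_counts = defaultdict(Counter)
trans_counts = defaultdict(Counter)
for (w, a) in diary:
    emit_counts[w][a] += 1                 # count activities per weather
for (w1, _), (w2, _) in zip(diary, diary[1:]):
    trans_counts[w1][w2] += 1              # count day-to-day weather pairs

def normalize(counter):
    total = sum(counter.values())
    return {k: v / total for k, v in counter.items()}

emission_est = {w: normalize(c) for w, c in emit_counts.items()}
transition_est = {w: normalize(c) for w, c in trans_counts.items()}
print(emission_est)
print(transition_est)
```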