
Correction to: "A Tutorial on Hidden Markov Models and Selected Applications in Speech
Recognition," Lawrence R. Rabiner, Proc. IEEE, Feb. 1989
p. 271, Comparison of HMMs (based on correction provided by Vladimir Petroff)
There is an error in the equation immediately preceding equation (88); the corrected equation should read:
$$ s = \frac{p + q - 2pq - r}{1 - 2r} $$
The term $-r$ was inadvertently omitted from the equation in the tutorial.
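As a quick sanity check of the corrected formula (with $p$, $q$, and $r$ as defined in the tutorial's HMM-comparison section), a minimal sketch; the numeric values below are illustrative placeholders, not values from the tutorial:

```python
def s(p, q, r):
    # Corrected formula: s = (p + q - 2pq - r) / (1 - 2r).
    # The tutorial's printed version omitted the -r term in the numerator.
    return (p + q - 2 * p * q - r) / (1 - 2 * r)

print(s(0.6, 0.7, 0.2))  # placeholder probabilities for illustration
```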
p. 272, Scaling Section (After equation (90)).
Consider the computation of $\alpha_t(i)$. We use the notation $\alpha_t(i)$ to denote the unscaled $\alpha$'s, $\hat{\alpha}_t(i)$ to denote the scaled (and iterated) $\alpha$'s, and $\hat{\hat{\alpha}}_t(i)$ to denote the local version of $\alpha$ before scaling. For each $t$, we first compute $\hat{\hat{\alpha}}_t(i)$ according to the induction formula (20), in terms of the previously scaled $\hat{\alpha}_{t-1}(j)$, i.e.,

$$ \hat{\hat{\alpha}}_t(i) = \sum_{j=1}^{N} \hat{\alpha}_{t-1}(j)\, a_{ji}\, b_i(O_t) \qquad (91a) $$
We determine the scaling coefficient $c_t$ as

$$ c_t = \frac{1}{\sum_{i=1}^{N} \hat{\hat{\alpha}}_t(i)} \qquad (91b) $$

giving

$$ \hat{\alpha}_t(i) = c_t\, \hat{\hat{\alpha}}_t(i) \qquad (91c) $$
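A minimal sketch of one induction step of this scaled forward recursion, implementing (91a)–(91c) with NumPy; the function name and the convention `A[j, i]` $= a_{ji}$ are assumptions for illustration, not notation from the tutorial:

```python
import numpy as np

def scaled_forward_step(alpha_hat_prev, A, b_t):
    """One scaled forward induction step.

    alpha_hat_prev: scaled alphas at time t-1, shape (N,)
    A:              transition matrix with A[j, i] = a_ji, shape (N, N)
    b_t:            emission probabilities b_i(O_t) at time t, shape (N,)
    Returns (alpha_hat_t, c_t).
    """
    # (91a): local alpha-double-hat from the previously scaled alphas
    alpha_local = (alpha_hat_prev @ A) * b_t
    # (91b): scaling coefficient is the reciprocal of the sum over states
    c_t = 1.0 / alpha_local.sum()
    # (91c): scaled alpha for time t
    return c_t * alpha_local, c_t
```

By construction each scaled vector sums to one, which is what keeps the recursion within machine precision over long observation sequences.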
so that equation (91a) can be written as

$$ \hat{\alpha}_t(i) = \sum_{j=1}^{N} c_t\, \hat{\alpha}_{t-1}(j)\, a_{ji}\, b_i(O_t) \qquad (92a) $$
We can now write the scaled $\hat{\alpha}_t(i)$ as

$$ \hat{\alpha}_t(i) = \frac{\sum_{j=1}^{N} \hat{\alpha}_{t-1}(j)\, a_{ji}\, b_i(O_t)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \hat{\alpha}_{t-1}(j)\, a_{ji}\, b_i(O_t)} \qquad (92b) $$

By induction we can write $\hat{\alpha}_{t-1}(j)$ as

$$ \hat{\alpha}_{t-1}(j) = \left( \prod_{\tau=1}^{t-1} c_\tau \right) \alpha_{t-1}(j) \qquad (93a) $$
Thus we can write $\hat{\alpha}_t(i)$ as

$$ \hat{\alpha}_t(i) = \frac{\sum_{j=1}^{N} \alpha_{t-1}(j) \left( \prod_{\tau=1}^{t-1} c_\tau \right) a_{ji}\, b_i(O_t)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_{t-1}(j) \left( \prod_{\tau=1}^{t-1} c_\tau \right) a_{ji}\, b_i(O_t)} = \frac{\alpha_t(i)}{\sum_{i=1}^{N} \alpha_t(i)} \qquad (93b) $$

i.e., each $\alpha_t(i)$ is effectively scaled by the sum over all states of $\alpha_t(i)$.
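This normalization property can be checked numerically: running the unscaled recursion (20) alongside the scaled one, each scaled vector should equal the unscaled $\alpha_t(i)$ divided by its sum over states. A small sketch with a made-up random model (the state count, sequence length, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 3, 6
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)   # A[j, i] = a_ji; each row sums to 1
B = rng.random((N, T))              # B[i, t] stands in for b_i(O_t)
pi = np.full(N, 1.0 / N)

# Unscaled forward recursion (20), keeping every alpha_t.
alphas = [pi * B[:, 0]]
for t in range(1, T):
    alphas.append((alphas[-1] @ A) * B[:, t])

# Scaled recursion (91a)-(91c), starting from a normalized alpha_1.
alpha_hat = alphas[0] / alphas[0].sum()
for t in range(1, T):
    local = (alpha_hat @ A) * B[:, t]   # (91a)
    alpha_hat = local / local.sum()     # (91b) and (91c) combined
    # Property (93b): scaled alpha equals unscaled alpha, normalized.
    assert np.allclose(alpha_hat, alphas[t] / alphas[t].sum())
```

The raw $\alpha_t(i)$ decay geometrically with $t$, so for long sequences the unscaled comparison above would underflow; the scaled recursion avoids this while preserving exactly the per-time-step ratios that (93b) describes.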