This is an implementation of a state-emitting MarkovModel.  I am using
terminology similar to Manning and Schütze.

Functions:
train_bw        Train a MarkovModel using the Baum-Welch algorithm.
train_visible   Train a visible MarkovModel using MLE.
find_states     Find a state sequence that explains some observations.
load            Load a MarkovModel.
save            Save a MarkovModel.
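
As a quick orientation, here is a minimal usage sketch.  It assumes this
module is the one shipped as Bio.MarkovModel in Biopython (the import path is
an assumption), that each observation sequence is a list of symbols from
alphabet, and that the coin-flip data is made up purely for illustration.

    from Bio import MarkovModel  # assumed import path

    states = ["fair", "biased"]
    alphabet = ["H", "T"]

    # Unlabeled output sequences; Baum-Welch infers the hidden states.
    training_data = [list("HTHTHTHHTT"), list("HHHHHHHTHH"), list("HTTHTHTHTH")]
    mm = MarkovModel.train_bw(states, alphabet, training_data)

    # Decode a new observation sequence into likely state paths.
    for state_path, score in MarkovModel.find_states(mm, list("HHHHHTHTHT")):
        print(state_path, score)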

Classes:

MarkovModel()
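
A MarkovModel ties together the quantities that the functions below operate
on: the state names, the output alphabet, and the initial, transition, and
emission probabilities.  Purely as an illustration of those quantities (the
variable names follow the parameter names used elsewhere on this page, not
any attribute of the class), a two-state coin model looks like this:

    import numpy as np

    states = ["fair", "biased"]    # N = 2 states
    alphabet = ["H", "T"]          # M = 2 output symbols

    p_initial = np.array([0.5, 0.5])          # shape (N,):   P(start in state i)
    p_transition = np.array([[0.9, 0.1],      # shape (N, N): P(next state j | state i)
                             [0.2, 0.8]])
    p_emission = np.array([[0.5, 0.5],        # shape (N, M): P(symbol k | state i)
                           [0.9, 0.1]])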

Functions:

train_bw(states, alphabet, training_data, pseudo_initial=None,
         pseudo_transition=None, pseudo_emission=None,
         update_fn=None) -> MarkovModel
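
A hedged sketch of the optional arguments.  The pseudocount arrays are
assumed to mirror the shapes of the corresponding probability matrices
((N,), (N, N), (N, M)), and since the callback's exact arguments are not
documented here, the example simply accepts whatever it is given.

    import numpy as np
    from Bio import MarkovModel  # assumed import path

    states = ["fair", "biased"]
    alphabet = ["H", "T"]
    training_data = [list("HTHTHTHHTT"), list("HHHHHHHTHH")]

    # Pseudocounts keep every probability nonzero even for unseen events.
    pseudo_initial = np.ones(2)          # assumed shape (N,)
    pseudo_transition = np.ones((2, 2))  # assumed shape (N, N)
    pseudo_emission = np.ones((2, 2))    # assumed shape (N, M)

    def progress(*args):
        # Called as training proceeds; the exact arguments are an assumption.
        print("Baum-Welch update:", args)

    mm = MarkovModel.train_bw(states, alphabet, training_data,
                              pseudo_initial=pseudo_initial,
                              pseudo_transition=pseudo_transition,
                              pseudo_emission=pseudo_emission,
                              update_fn=progress)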

_baum_welch(N, M, training_outputs, p_initial=None, p_transition=None,
            p_emission=None, pseudo_initial=None, pseudo_transition=None,
            pseudo_emission=None, update_fn=None)

_baum_welch_one(N, M, outputs, lp_initial, lp_transition, lp_emission,
                lpseudo_initial, lpseudo_transition, lpseudo_emission)
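
To make the role of these two helpers concrete, here is a generic sketch of
the re-estimation that one Baum-Welch pass performs for a single output
sequence.  It works in plain probability space rather than the log space
implied by the lp_* parameters above, takes forward and backward matrices
like those computed by the helpers below, ignores pseudocounts, and is not
the module's implementation.

    import numpy as np

    def reestimate(p_transition, p_emission, outputs, alpha, beta):
        """One Baum-Welch re-estimation (E + M step) for a single sequence.

        alpha[t, i] = P(o_0..o_t, state_t = i)            (forward, T x N)
        beta[t, i]  = P(o_{t+1}..o_{T-1} | state_t = i)   (backward, T x N)
        outputs is a sequence of symbol indices into the alphabet.
        """
        T = len(outputs)
        evidence = alpha[T - 1].sum()        # P(outputs | current model)

        # E step: posterior state occupancies and transition probabilities.
        gamma = alpha * beta / evidence      # gamma[t, i] = P(state_t = i | outputs)
        xi = (alpha[:-1, :, None] * p_transition[None, :, :]
              * (p_emission[:, outputs[1:]].T * beta[1:])[:, None, :]) / evidence

        # M step: renormalize the expected counts into probabilities.
        new_initial = gamma[0]
        new_transition = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        new_emission = np.zeros_like(p_emission)
        outputs = np.asarray(outputs)
        for k in range(p_emission.shape[1]):
            new_emission[:, k] = gamma[outputs == k].sum(axis=0)
        new_emission /= gamma.sum(axis=0)[:, None]
        return new_initial, new_transition, new_emission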

_forward(N, T, lp_initial, lp_transition, lp_emission, outputs)
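
For reference, a generic log-space forward recursion looks like the sketch
below.  It illustrates the algorithm only (the exact matrix layout and
indexing of the module's helper may differ); working with log-probabilities
and logaddexp avoids underflow on long sequences.

    import numpy as np

    def forward_log(lp_initial, lp_transition, lp_emission, outputs):
        """Generic log-space forward pass.

        lp_initial:    (N,)   log initial-state probabilities
        lp_transition: (N, N) log transition probabilities
        lp_emission:   (N, M) log emission probabilities
        outputs:       sequence of T symbol indices
        Returns a (T, N) matrix: row t, column i holds log P(o_0..o_t, state_t = i).
        """
        N, T = len(lp_initial), len(outputs)
        f = np.empty((T, N))
        f[0] = lp_initial + lp_emission[:, outputs[0]]
        for t in range(1, T):
            # Sum over the previous state in log space with logaddexp.
            f[t] = (np.logaddexp.reduce(f[t - 1][:, None] + lp_transition, axis=0)
                    + lp_emission[:, outputs[t]])
        return f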

_backward(N, T, lp_transition, lp_emission, outputs)
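
The backward recursion mirrors the forward one but runs right to left, which
is also why the signature above needs no lp_initial.  Again, this is an
illustrative sketch rather than the module's exact code.

    import numpy as np

    def backward_log(lp_transition, lp_emission, outputs):
        """Generic log-space backward pass.

        Returns a (T, N) matrix: row t, column i holds
        log P(o_{t+1}..o_{T-1} | state_t = i).
        """
        N, T = lp_transition.shape[0], len(outputs)
        b = np.empty((T, N))
        b[T - 1] = 0.0    # log(1): nothing follows the final symbol
        for t in range(T - 2, -1, -1):
            b[t] = np.logaddexp.reduce(
                lp_transition + lp_emission[:, outputs[t + 1]] + b[t + 1], axis=1)
        return b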

train_visible(states, alphabet, training_data, pseudo_initial=None,
              pseudo_transition=None, pseudo_emission=None) -> MarkovModel
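
A hedged usage sketch.  The shape of each training example, a pair of
(output sequence, observed state sequence) of equal length, is an assumption
based on what MLE training of a visible model requires, and the weather data
is invented.

    from Bio import MarkovModel  # assumed import path

    states = ["rainy", "sunny"]
    alphabet = ["walk", "shop", "clean"]

    # Assumed format: (outputs, observed states), aligned position by position.
    training_data = [
        (["walk", "shop", "clean"], ["sunny", "sunny", "rainy"]),
        (["clean", "clean", "walk"], ["rainy", "rainy", "sunny"]),
    ]

    mm = MarkovModel.train_visible(states, alphabet, training_data)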

_mle(N, M, training_outputs, training_states, pseudo_initial,
     pseudo_transition, pseudo_emission)
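
Conceptually, MLE for a visible model is just counting and normalizing; the
sketch below (generic, not the module's code) shows the idea for
integer-encoded states and outputs, with optional pseudocounts folded in as
prior counts.

    import numpy as np

    def mle_sketch(N, M, training_outputs, training_states,
                   pseudo_initial=None, pseudo_transition=None,
                   pseudo_emission=None):
        """Count observed events, add optional pseudocounts, and normalize."""
        p_initial = np.zeros(N) + (0 if pseudo_initial is None else pseudo_initial)
        p_transition = np.zeros((N, N)) + (0 if pseudo_transition is None else pseudo_transition)
        p_emission = np.zeros((N, M)) + (0 if pseudo_emission is None else pseudo_emission)

        for outputs, states in zip(training_outputs, training_states):
            p_initial[states[0]] += 1
            for prev, nxt in zip(states, states[1:]):
                p_transition[prev, nxt] += 1
            for state, symbol in zip(states, outputs):
                p_emission[state, symbol] += 1

        # Turn counts into row-normalized probability distributions.
        p_initial /= p_initial.sum()
        p_transition /= p_transition.sum(axis=1, keepdims=True)
        p_emission /= p_emission.sum(axis=1, keepdims=True)
        return p_initial, p_transition, p_emission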

find_states(markov_model, output) -> list of (states, score)

_viterbi(N, lp_initial, lp_transition, lp_emission, output)
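
find_states returns full state-name paths together with their scores;
underneath, the decoding is the standard Viterbi recursion sketched below in
log space.  This is a generic single-best-path illustration, not the
module's implementation.

    import numpy as np

    def viterbi_log(lp_initial, lp_transition, lp_emission, output):
        """Generic log-space Viterbi: best state path and its log score."""
        N, T = len(lp_initial), len(output)
        score = np.empty((T, N))    # best log prob of any path ending in state i at time t
        backptr = np.zeros((T, N), dtype=int)

        score[0] = lp_initial + lp_emission[:, output[0]]
        for t in range(1, T):
            cand = score[t - 1][:, None] + lp_transition   # cand[i, j]: come from i, move to j
            backptr[t] = np.argmax(cand, axis=0)
            score[t] = cand[backptr[t], np.arange(N)] + lp_emission[:, output[t]]

        # Trace the best path backwards from the highest-scoring final state.
        path = [int(np.argmax(score[T - 1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(backptr[t, path[-1]]))
        path.reverse()
        return path, float(np.max(score[T - 1]))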