Markov process
In probability theory, a Markov process is a stochastic process characterized as follows: the state <math>c_k</math> at time <math>k</math> takes one of a finite number of values in the set <math>\{1,\ldots,M\}</math>. Under the assumption that the process runs only from time <math>0</math> to time <math>N</math> and that the initial and final states are known, the state sequence can be represented by a finite vector <math>C=(c_0,\ldots,c_N)</math>.
Let <math>P(c_k | c_0,c_1,\ldots,c_{k-1})</math> denote the probability of the state <math>c_k</math> at time <math>k</math> conditioned on all states up to time <math>k-1</math>. Suppose the process is such that <math>c_k</math> depends only on the previous state <math>c_{k-1}</math> and is independent of all earlier states. Such a process is known as a first-order Markov process: the probability of being in state <math>c_k</math> at time <math>k</math>, given all states up to time <math>k-1</math>, depends only on the previous state <math>c_{k-1}</math> at time <math>k-1</math>:
- <math>P(c_k|c_0,c_1,\ldots,c_{k-1})=P(c_k|c_{k-1}).\,</math>
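A practical consequence of the first-order property is that the distribution over states can be pushed forward one step at a time using only a transition matrix. A minimal sketch in Python (the two-state transition matrix and initial distribution below are illustrative toy values, not from the article):

```python
def propagate(dist, trans):
    """One step of a first-order Markov chain:
    P(c_k = j) = sum_i P(c_k = j | c_{k-1} = i) * P(c_{k-1} = i).

    dist[i]     = P(c_{k-1} = i)
    trans[i][j] = P(c_k = j | c_{k-1} = i)
    """
    M = len(dist)
    return [sum(dist[i] * trans[i][j] for i in range(M)) for j in range(M)]

# Illustrative two-state chain (toy numbers).
trans = [[0.9, 0.1],
         [0.5, 0.5]]
dist = [1.0, 0.0]          # start surely in state 0
for _ in range(2):
    dist = propagate(dist, trans)
# dist is now the distribution of c_2: [0.86, 0.14]
```

Because the process is first-order, the entire history can be forgotten at each step; only the current distribution is needed.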
For an <math>n</math>th-order Markov process,
- <math>P(c_k|c_0,c_1,\ldots,c_{k-1})=P(c_k|c_{k-n},\ldots,c_{k-1}).\,</math>
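An <math>n</math>th-order process can always be rewritten as a first-order one by enlarging the state to the tuple of the last <math>n</math> states. A sketch for the second-order case, assuming the conditionals are given as a dictionary `p2[(a, b)][c]` <math>= P(c_k=c\,|\,c_{k-2}=a, c_{k-1}=b)</math> (the names and data layout here are illustrative, not from the article):

```python
from itertools import product

def lift_to_first_order(p2, states):
    """Turn a second-order chain into a first-order chain over
    pair-states (c_{k-1}, c_k).  A pair (a, b) can only move to a
    pair of the form (b, c); that move has probability
    P(c_k = c | c_{k-2} = a, c_{k-1} = b)."""
    pairs = list(product(states, states))
    T = {s: {t: 0.0 for t in pairs} for s in pairs}
    for (a, b) in pairs:
        for c in states:
            T[(a, b)][(b, c)] = p2[(a, b)][c]
    return pairs, T
```

The resulting pair-state chain satisfies the first-order Markov property, at the cost of squaring the number of states.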
In general, for the Viterbi algorithm, the underlying process is assumed to be a Markov process with the following characteristics:
- finite-state, this means that the number of states <math>M</math> is finite.
- discrete-time, this means that going from one state to another takes the same unit of time.
- observed in memoryless noise, this means that the observation at time <math>k</math> depends probabilistically only on the state transition at time <math>k</math>, not on earlier observations or transitions.
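Under these three assumptions, the most likely state sequence given a noisy observation sequence can be recovered by dynamic programming. A minimal sketch of the Viterbi algorithm in Python (the start, transition, and emission probabilities used in the example are illustrative toy values, not from the article):

```python
def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely state sequence for a finite-state, discrete-time
    Markov process observed in memoryless noise.  States are 0..M-1.

    start_p[i]    = P(c_0 = i)
    trans_p[i][j] = P(c_k = j | c_{k-1} = i)
    emit_p[i][o]  = P(observation o | state i)
    """
    M = len(start_p)
    # v[i]: probability of the best state path ending in state i
    v = [start_p[i] * emit_p[i][obs[0]] for i in range(M)]
    back = []  # back[k][j]: best predecessor of state j at step k
    for o in obs[1:]:
        bp, nv = [], []
        for j in range(M):
            i_best = max(range(M), key=lambda i: v[i] * trans_p[i][j])
            bp.append(i_best)
            nv.append(v[i_best] * trans_p[i_best][j] * emit_p[j][o])
        back.append(bp)
        v = nv
    # Trace the best final state back through the stored pointers.
    last = max(range(M), key=lambda i: v[i])
    path = [last]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    path.reverse()
    return path, v[last]

# Toy two-state example with three possible observations.
path, p = viterbi([0, 1, 2],
                  [0.6, 0.4],
                  [[0.7, 0.3], [0.4, 0.6]],
                  [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
# path == [0, 0, 1], p == 0.01512
```

The memoryless-noise assumption is what allows each observation's probability to be factored into the per-step update, so the search over all <math>M^{N+1}</math> state sequences reduces to <math>O(N M^2)</math> work.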