The notion of Markov dependence, defined in section 5 for Bernoulli trials, may be extended to trials with several possible outcomes. Consider a sequence of trials, in each of which the outcome is one of a finite number of possibilities, labeled $1, 2, \ldots, K$.
In discussing Markov dependent trials with several possible outcomes, it is convenient to speak of the possible outcomes as states and to say that the system is in state $k$ at time $n$ if the outcome of the $n$th trial is $k$.
Now suppose that for any states $j$ and $k$ the conditional probability that the system will pass in one trial from state $j$ to state $k$ has the same value, denoted by $p_{jk}$, no matter what the outcomes of the trials preceding the one that brought the system to state $j$; such a sequence of trials is called a Markov chain, and the quantities $p_{jk}$ are called its one-step transition probabilities.
The $n$-step transition probability $p_{jk}(n)$ is defined as the conditional probability that the system will be in state $k$ after $n$ trials, given that it is now in state $j$. For $n = 2$ the theorem on total probability yields
$$p_{jk}(2) = \sum_i p_{ji}\,p_{ik}. \qquad (6.4)$$
One may similarly prove that
$$p_{jk}(n+1) = \sum_i p_{ji}(n)\,p_{ik}. \qquad (6.5)$$
The unconditional probabilities
$$p_k(n) = P[\text{the system is in state } k \text{ at the } n\text{th trial}] \qquad (6.6)$$
satisfy, in terms of the initial unconditional probabilities $p_j(0)$,
$$p_k(n) = \sum_j p_j(0)\,p_{jk}(n). \qquad (6.7)$$
One proves (6.7) in exactly the same way that one proves (6.4) and (6.5). Similarly, one may show that
$$p_{jk}(m+n) = \sum_i p_{ji}(m)\,p_{ik}(n). \qquad (6.8)$$
The transition probabilities $p_{jk}$ are conveniently exhibited in a matrix $P = (p_{jk})$, called the transition probability matrix of the Markov chain; its entries are nonnegative, and each of its rows sums to 1.
Given an initial set of unconditional probabilities $\{p_k(0)\}$ and a transition probability matrix $P$, the probability law of the Markov chain is completely determined.
If we define the $n$-step transition probability matrix $P(n)$ as the matrix whose entry in row $j$ and column $k$ is $p_{jk}(n)$, then (6.5) asserts that $P(n+1) = P(n)\,P$; consequently $P(n) = P^n$, the $n$th power of the matrix $P$.
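The relation $P(n) = P^n$ is easy to check numerically. The following sketch, in Python with NumPy, uses a hypothetical two-state transition matrix (not the matrix of any example in this section) to compute 2-step and 3-step transition matrices and the unconditional probabilities of (6.7) for an assumed initial distribution:

```python
import numpy as np

# A hypothetical 2-state transition probability matrix; each row sums to 1.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# n-step transition probabilities: P(n) = P^n.
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

# Unconditional probabilities p(n) = p(0) P^n, for an assumed p(0).
p0 = np.array([0.5, 0.5])
p3 = p0 @ P3

print(P2)             # 2-step transition matrix; rows still sum to 1
print(p3, p3.sum())   # unconditional probabilities at n = 3; entries sum to 1
```

Each power of a transition probability matrix is again a matrix with nonnegative entries and unit row sums, which the printed output confirms.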
Example 6A. If the transition probability matrix
If the initial unconditional probabilities are assumed to be given by
We define a Markov chain with transition probabilities $p_{jk}$ to be ergodic if for every pair of states $j$ and $k$ the limit
$$\pi_k = \lim_{n\to\infty} p_{jk}(n) \qquad (6.14)$$
exists and does not depend on the initial state $j$. In words, a Markov chain is ergodic if, as $n$ tends to infinity, the unconditional probabilities $p_k(n)$ tend to limits $\pi_k$, which are the same, no matter what the values of the initial unconditional probabilities $p_k(0)$ may be. In view of (6.14), we call the limits $\pi_k$ the stationary probabilities of the Markov chain.
One of the important problems of the theory of Markov chains is to determine conditions under which a Markov chain is ergodic. A discussion of this problem is beyond the scope of this book. We state without proof the following theorem.
If there exists an integer $n$ such that every entry of the $n$-step transition probability matrix $P^n$ is positive, then the Markov chain is ergodic.
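This sufficient condition can be tested mechanically: raise $P$ to successive powers and look for a power with all entries positive. The following sketch (Python with NumPy; the helper name is our own) uses the fact that for a $K$-state chain it suffices to check powers up to $(K-1)^2 + 1$, Wielandt's bound for primitive nonnegative matrices:

```python
import numpy as np

def is_regular(P, max_power=None):
    """Return True if some power of the stochastic matrix P has all
    entries positive (the sufficient condition for ergodicity above)."""
    K = P.shape[0]
    if max_power is None:
        max_power = (K - 1) ** 2 + 1   # Wielandt's bound for primitivity
    Q = P.copy()
    for _ in range(max_power):
        if (Q > 0).all():
            return True
        Q = Q @ P
    return False

# A chain with a zero entry in P whose square is strictly positive:
print(is_regular(np.array([[0.0, 1.0],
                           [0.5, 0.5]])))   # True
```

Note that a zero entry in $P$ itself does not preclude ergodicity, as the example shows; only the existence of some strictly positive power matters.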
It is sometimes possible to establish that a Markov chain is ergodic without having to exhibit an integer $n$ for which all the entries of $P^n$ are positive.
Two states $j$ and $k$ are said to communicate if there exist integers $m$ and $n$ such that $p_{jk}(m) > 0$ and $p_{kj}(n) > 0$; in words, $j$ and $k$ communicate if the system can pass from each of the two states to the other in some number of steps.
If all states in a Markov chain communicate and if a state $j$ exists for which the one-step transition probability $p_{jj}$ is positive, then the Markov chain is ergodic.
Having established that a Markov chain is ergodic, the next problem is to obtain the stationary probabilities $\pi_k$. Letting $n$ tend to infinity in (6.5), one sees that they satisfy the system of linear equations
$$\pi_k = \sum_j \pi_j\,p_{jk}. \qquad (6.17)$$
Consequently, if a Markov chain is ergodic, then a solution of (6.17) that satisfies the conditions $\pi_k \ge 0$ for every $k$ and $\sum_k \pi_k = 1$ gives the stationary probabilities of the chain.
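In matrix form, (6.17) together with the normalization $\sum_k \pi_k = 1$ is a linear system that can be solved directly; since the balance equations are linearly dependent, one of them is replaced by the normalization. A sketch in Python with NumPy, using a hypothetical two-state matrix (not the matrix of any example here):

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    K = P.shape[0]
    A = P.T - np.eye(K)     # (P^T - I) pi = 0  (balance equations)
    A[-1, :] = 1.0          # replace the last equation by sum(pi) = 1
    b = np.zeros(K)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = stationary(P)
print(pi)   # for this P the solution is pi = (4/7, 3/7)
```

The returned vector satisfies $\pi P = \pi$ and sums to 1, as (6.17) requires.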
Example 6B. The Markov chain considered in example 6A is ergodic, since
A matrix $P$ whose entries are nonnegative and whose rows each sum to 1 is called a stochastic matrix. The matrix of $n$-step transition probabilities, being the $n$th power $P^n$, is again a stochastic matrix. It is clear that the transition probability matrix of any Markov chain is a stochastic matrix; conversely, any stochastic matrix can serve as the transition probability matrix of a Markov chain.
Example 6C. Random walk with retaining barriers. Consider a straight line on which are marked off positions $0, 1, 2, \ldots, K$. A particle moves among these positions in steps: from any position $k$ ($1 \le k \le K - 1$) it moves one step to the right with probability $p$ and one step to the left with probability $q = 1 - p$; from position 0 it moves to 1 with probability $p$ and remains at 0 with probability $q$; from position $K$ it remains at $K$ with probability $p$ and moves to $K - 1$ with probability $q$. The positions 0 and $K$ are called retaining barriers.
The chain is ergodic, since all its states communicate and $p_{00} = q > 0$. The stationary probabilities therefore satisfy (6.17), which here reads
$$\pi_0 = q\pi_0 + q\pi_1, \qquad \pi_k = p\pi_{k-1} + q\pi_{k+1} \quad (k = 1, \ldots, K-1), \qquad \pi_K = p\pi_{K-1} + p\pi_K.$$
We solve these equations by successive substitution. From the first equation we obtain $\pi_1 = (p/q)\pi_0$; substituting this into the equation for $\pi_1$ gives $\pi_2 = (p/q)^2\pi_0$, and continuing in this way we find
$$\pi_k = (p/q)^k\,\pi_0, \qquad k = 0, 1, \ldots, K.$$
To determine $\pi_0$ we use the condition $\sum_{k=0}^{K}\pi_k = 1$. We finally conclude that the stationary probabilities for the random walk with retaining barriers, for $p \ne q$, are
$$\pi_k = \frac{(p/q)^k\,[1 - (p/q)]}{1 - (p/q)^{K+1}}, \qquad k = 0, 1, \ldots, K,$$
while for $p = q = \frac{1}{2}$ they are $\pi_k = 1/(K+1)$.
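The stationary probabilities of this walk have the geometric form $\pi_k \propto (p/q)^k$, and this can be checked numerically by building the transition matrix of the walk and verifying that $\pi P = \pi$. A sketch in Python with NumPy (the particular values of $p$ and $K$ are arbitrary):

```python
import numpy as np

p, K = 0.4, 5
q = 1.0 - p

# Transition matrix of the random walk with retaining barriers at 0 and K.
P = np.zeros((K + 1, K + 1))
P[0, 0], P[0, 1] = q, p
P[K, K], P[K, K - 1] = p, q
for k in range(1, K):
    P[k, k + 1] = p
    P[k, k - 1] = q

# Stationary probabilities from the geometric form pi_k ∝ (p/q)^k.
r = p / q
pi = r ** np.arange(K + 1)
pi /= pi.sum()

print(np.allclose(pi @ P, pi))   # True: pi is left-invariant under P
```

The same construction with $p = q = \tfrac12$ gives the uniform vector $\pi_k = 1/(K+1)$.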
If a Markov chain is ergodic, then the physical process represented by the Markov chain can continue indefinitely. Indeed, after a long time it achieves statistical equilibrium, in the sense that the probability of finding the system in state $k$ is then given, no matter what the initial conditions were, by the stationary probability $\pi_k$.
We next desire to study an important class of non-ergodic Markov chains, namely those that possess absorbing states. A state $j$ is said to be absorbing if $p_{jj} = 1$; once the system enters an absorbing state, it remains there forever.
Example 6D. Random walk with absorbing barriers. Consider a straight line on which positions $0, 1, 2, \ldots, K$ are marked off. A particle moves among these positions exactly as in example 6C, except that upon reaching position 0 or position $K$ it remains there forever; the positions 0 and $K$ are called absorbing barriers.
In the Markov chain with transition probability matrix corresponding to this random walk, the states 0 and $K$ are absorbing, and the chain is therefore not ergodic.
Given a Markov chain with an absorbing state $j$, let $f_{kj}$ denote the probability that a system that begins in state $k$ will eventually be absorbed in state $j$. We call $f_{kj}$ the probability of absorption in state $j$ from state $k$. The probability $f_{jj}$ is clearly equal to 1, while $f_{kj} = 0$ if $k$ is any other absorbing state. If $k$ is a state that is not absorbing, then
$$f_{kj} = \sum_i p_{ki}\,f_{ij}. \qquad (6.28)$$
Equation (6.28) is proved as follows. The probability of going from state $k$ to eventual absorption in state $j$ may be computed, using the theorem on total probability, by conditioning on the state $i$ occupied after the first step: the first step carries the system from state $k$ to state $i$ with probability $p_{ki}$, and from state $i$ the probability of eventual absorption in state $j$ is $f_{ij}$.
Example 6E. Probability of a gambler's ruin. Let a gambler whose initial capital is $k$ dollars play a series of independent games, in each of which he wins one dollar with probability $p$ and loses one dollar with probability $q = 1 - p$. He stops playing as soon as his capital is either 0 dollars (he is ruined) or $K$ dollars. Find the probability $u_k$ that the gambler will be ruined.
Solution
For $k = 1, 2, \ldots, K - 1$, the gambler's capital performs a random walk with absorbing barriers at 0 and $K$, and by (6.28) the ruin probabilities $u_k$ satisfy
$$u_k = p\,u_{k+1} + q\,u_{k-1},$$
with the boundary conditions $u_0 = 1$ and $u_K = 0$.
To solve these equations we note that, defining $d_k = u_k - u_{k+1}$ (where $u_k$ is the probability of ruin when the gambler's initial capital is $k$ dollars), the equation $u_k = p\,u_{k+1} + q\,u_{k-1}$ may be rewritten, since $p + q = 1$, as $p(u_k - u_{k+1}) = q(u_{k-1} - u_k)$; that is, $d_k = (q/p)\,d_{k-1}$, so that $d_k = (q/p)^k d_0$. Since $\sum_{k=0}^{K-1} d_k = u_0 - u_K = 1$, it follows that for $p \ne q$
$$u_k = \frac{(q/p)^k - (q/p)^K}{1 - (q/p)^K}, \qquad k = 0, 1, \ldots, K,$$
while for $p = q = \frac{1}{2}$ the ruin probability is $u_k = 1 - k/K$.
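The closed form for the ruin probabilities can be compared with a direct numerical solution of the difference equations $u_k = p\,u_{k+1} + q\,u_{k-1}$ with $u_0 = 1$, $u_K = 0$. A sketch in Python with NumPy (the values $p = 0.45$ and $K = 10$ are arbitrary):

```python
import numpy as np

p, K = 0.45, 10
q = 1.0 - p

# Closed form for p != q.
r = q / p
u_closed = (r ** np.arange(K + 1) - r ** K) / (1.0 - r ** K)

# Direct solution of the linear system of difference equations.
A = np.zeros((K + 1, K + 1))
b = np.zeros(K + 1)
A[0, 0], b[0] = 1.0, 1.0          # boundary condition u_0 = 1
A[K, K], b[K] = 1.0, 0.0          # boundary condition u_K = 0
for k in range(1, K):
    A[k, k] = 1.0                 # u_k - p u_{k+1} - q u_{k-1} = 0
    A[k, k + 1] = -p
    A[k, k - 1] = -q
u_num = np.linalg.solve(A, b)

print(np.allclose(u_closed, u_num))   # True
```

With $p < q$ the ruin probabilities are close to 1 except for initial capitals near $K$, reflecting the unfavorable odds.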
Exercises
6.1. Compute the 2-step and 3-step transition probability matrices for the Markov chains whose transition probability matrices are given by
(i)
Answer
(i), (iii)
(ii)
(iv)
6.2. For each Markov chain in exercise 6.1, determine whether or not (i) it is ergodic, (ii) it has absorbing states.
6.3. Find the stationary probabilities for each of the following ergodic Markov chains:
(ii)
(iii)
Answer
(i), (iii)
6.4. Find the stationary probabilities for each of the following ergodic Markov chains:
6.5. Consider a series of independent repeated tosses of a coin that has probability
Answer
6.6. Random walk with retaining barriers. Consider a straight line on which positions
6.7. Gambler's ruin. Let two players
Answer
6.8. Consider 2 urns, I and II, each of which contains 1 white and 1 red ball. One ball is drawn simultaneously from each urn and placed in the other urn. Let the probabilities that after
6.9. In exercise 6.8 find the most probable number of red balls in urn
Answer