Of interest in many problems of applied probability theory is the evolution in time of the state of a random phenomenon. For example, suppose one has two urns (I and II), each of which contains one white and one black ball. One ball is drawn simultaneously from each urn and placed in the other urn. One is often concerned with questions such as, what is the probability that after 100 repetitions of this procedure urn I will contain two white balls? The theory of Markov chains¹ is applicable to questions of this type.
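As a sketch of how such a question can be answered (not part of the text's development), the urn procedure can be modeled as a three-state Markov chain whose state is the number of white balls in urn I; the matrix and variable names below are illustrative only.

```python
# Illustrative sketch: the two-urn model as a Markov chain whose state
# is the number of white balls in urn I (0, 1, or 2).
import numpy as np

# One-step transition matrix.  From state 1 (each urn holds one white
# and one black ball) the swap moves to 0, 1, or 2 white balls in urn I
# with probabilities 1/4, 1/2, 1/4; states 0 and 2 return to 1 surely.
P = np.array([[0.0,  1.0, 0.0],
              [0.25, 0.5, 0.25],
              [0.0,  1.0, 0.0]])

start = np.array([0.0, 1.0, 0.0])       # both urns begin mixed (state 1)
dist = start @ np.linalg.matrix_power(P, 100)

# After 100 repetitions the chain has essentially reached its
# stationary distribution (1/6, 2/3, 1/6).
print(dist[2])                          # P(urn I holds two white balls)
```

The answer is, for all practical purposes, the stationary probability 1/6, since the transient terms decay geometrically.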
The theory of Markov chains has applications throughout the physical and social sciences (see the forthcoming book by A. T. Bharucha-Reid, Introduction to the Theory of Markov Processes and their Applications; for applications of the theory of Markov chains to the description of social or psychological phenomena, see the book by Kemeny and Snell cited in the next paragraph). There is an immense literature concerning the theory of Markov chains. In this section and the next we can provide only a brief introduction.
Excellent elementary accounts of this theory are to be found in the works of W. Feller, An Introduction to Probability Theory and Its Applications, second edition, Wiley, New York, 1957, and J. G. Kemeny and J. L. Snell, Finite Markov Chains, Van Nostrand, Princeton, New Jersey, 1959. The reader is referred to these books for proofs of the assertions made in section 6.
The natural generalization of the notion of independent Bernoulli trials is the notion of Markov dependent Bernoulli trials. Given
Suppose that the quantities \begin{align} P(s, s)= & \text { probability of success at the }(k+1) \text {st trial, } \\ & \text { given that there was success at the } k \text {th trial, } \\ P(f, s)= & \text { probability of success at the }(k+1) \text {st trial, } \\ & \text { given that there was failure at the } k \text {th trial, } \tag{5.3} \\ P(f, f)= & \text { probability of failure at the }(k+1) \text {st trial, } \\ & \text { given that there was failure at the } k \text {th trial, } \\ P(s, f)= & \text { probability of failure at the }(k+1) \text {st trial, } \\ & \text { given that there was success at the } k \text {th trial, } \end{align} are independent of
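The four quantities in (5.3) can be arranged as a 2 × 2 transition matrix; the sketch below uses hypothetical numerical values for P(s, s) and P(f, f) to show the row constraint.

```python
# Illustrative sketch: the quantities of (5.3) as a transition matrix.
# The numerical values are hypothetical, not from the text.
p_ss, p_ff = 0.7, 0.6                # P(s, s) and P(f, f)
p_sf, p_fs = 1 - p_ss, 1 - p_ff     # P(s, f) and P(f, s)

transition = {
    ('s', 's'): p_ss, ('s', 'f'): p_sf,
    ('f', 's'): p_fs, ('f', 'f'): p_ff,
}

# Each row is a conditional probability distribution, so
# P(s, s) + P(s, f) = 1 and P(f, s) + P(f, f) = 1.
assert abs(transition[('s', 's')] + transition[('s', 'f')] - 1) < 1e-12
assert abs(transition[('f', 's')] + transition[('f', 'f')] - 1) < 1e-12
```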
Example 5A. Let the weather be observed on
In the case of independent repeated Bernoulli trials the probability function
The quantities
To justify (5.6), we reason as follows: if
From (5.7) and the fact that (5.8)
Equation (5.6) constitutes a recursive relationship for
By using theoretical exercise 4.5, it follows from (5.6) and (5.9) that for
By interchanging the role of
It is readily verifiable that the expressions in (5.10) and (5.11) sum to one, as they ought.
In many problems involving Markov dependent repeated Bernoulli trials we do not know the probability
\begin{align} P_{k}(s, s)= & \text { conditional probability of success at the } \\ & (k+1) \text { st trial, given success at the first trial, } \\ P_{k}(s, f)= & \text { conditional probability of failure at the } \\ & (k+1) \text { st trial, given success at the first trial, } \\ P_{k}(f, f)= & \text { conditional probability of failure at the } \tag{5.12} \\ & (k+1) \text { st trial, given failure at the first trial, } \\ P_{k}(f, s)= & \text { conditional probability of success at the } \\ & (k+1) \text { st trial, given failure at the first trial. } \end{align}
Since
In the same way that we obtained (5.6) we obtain \begin{align} P_{k}(s, s) & =P_{k-1}(s, s) P(s, s)+P_{k-1}(s, f) P(f, s) \tag{5.14} \\ & =P_{k-1}(s, s) P(s, s)+\left[1-P_{k-1}(s, s)\right][1-P(f, f)] \\ & =P_{k-1}(s, s)[P(s, s)+P(f, f)-1]+[1-P(f, f)] \end{align}
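As a numerical check on (5.14) (a sketch; p_ss and p_ff are hypothetical values of P(s, s) and P(f, f)), the recursion can be iterated and compared with the closed-form solution used later in (5.21).

```python
# Illustrative sketch: the recursion (5.14) agrees with the closed form.
# p_ss and p_ff are hypothetical values of P(s, s) and P(f, f).
p_ss, p_ff = 0.7, 0.6
d = p_ss + p_ff - 1                  # the factor [P(s,s) + P(f,f) - 1]

def p_k_recursive(k):
    """P_k(s, s) by iterating (5.14), starting from P_1(s, s) = P(s, s)."""
    p = p_ss
    for _ in range(k - 1):
        p = p * d + (1 - p_ff)
    return p

def p_k_closed(k):
    """Closed form: a geometric term plus the equilibrium value."""
    denom = 2 - p_ss - p_ff
    return (1 - p_ss) / denom * d ** k + (1 - p_ff) / denom

for k in range(1, 10):
    assert abs(p_k_recursive(k) - p_k_closed(k)) < 1e-12
```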
By using theoretical exercise 4.5, it follows from (5.14) and (5.9) that for
By interchanging the role of
Example 5B. Consider a communications system which transmits the digits 0 and 1. Each digit transmitted must pass through several stages, at each of which there is a probability
Solution
In observing the passage of the digit through the communications system, we are observing a 4-tuple
The probability that a digit entering the system as 0 will be transmitted by the third stage as 0 is given by \begin{align} P_{3}(0,0)= & \frac{1-P(0,0)}{2-P(0,0)-P(1,1)}[P(0,0)+P(1,1)-1]^{3} \tag{5.21} \\ & \quad+\frac{1-P(1,1)}{2-P(0,0)-P(1,1)} \\ = & \frac{1-p}{2-2 p}(2 p-1)^{3}+\frac{1-p}{2-2 p} \\ = & \frac{1}{2}\left[1+(2 p-1)^{3}\right] . \end{align} If
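The algebra in (5.21) can be checked numerically (a sketch, with p = 0.9 as a hypothetical value): cubing the 2 × 2 transition matrix of the symmetric channel reproduces (1/2)[1 + (2p − 1)³].

```python
# Illustrative sketch checking (5.21): with P(0,0) = P(1,1) = p, three
# stages give P_3(0,0) = (1/2)[1 + (2p - 1)**3].  The value p = 0.9 is
# hypothetical.
import numpy as np

p = 0.9
T = np.array([[p, 1 - p],
              [1 - p, p]])          # rows/columns indexed by digit 0, 1
T3 = np.linalg.matrix_power(T, 3)   # three stages of transmission

assert abs(T3[0, 0] - 0.5 * (1 + (2 * p - 1) ** 3)) < 1e-12
```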
Example 5C. Suppose that the digit transmitted through the communications system described in example 5B (with
Solution
The conditional probability that a digit transmitted by the third stage as 0 entered the system as 0 is given by
To justify (5.22), note that
From (5.21) and (5.23) it follows that the conditional probability that a digit leaving the system as 0 entered the system as 0 is given by
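Although the equations of (5.22)–(5.23) are elided here, the Bayes computation they describe can be sketched as follows; the prior probability q that a digit enters as 0, and the stage probability p, are hypothetical values, not the text's.

```python
# Illustrative sketch of the Bayes computation in example 5C; p and q
# are hypothetical values.
p, q = 0.9, 0.5
P3_00 = 0.5 * (1 + (2 * p - 1) ** 3)   # P_3(0, 0), from (5.21)
P3_10 = 0.5 * (1 - (2 * p - 1) ** 3)   # P_3(1, 0), by symmetry

# P(entered as 0 | left as 0), by Bayes' rule:
posterior = q * P3_00 / (q * P3_00 + (1 - q) * P3_10)
```

With a uniform prior (q = 1/2) the prior factors cancel and the posterior reduces to P_3(0, 0) itself.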
Example 5D. Let us use the considerations of examples 5B and 5C to solve the following problem, first proposed by the celebrated cosmologist A. S. Eddington (see W. Feller, “The Problem of
Solution
We consider a sample description space of 4-tuples
The persons
Statistical Equilibrium. For large values of
To justify (5.24), use (5.10) and (5.11) and the fact that (5.9) implies that
Equation (5.24) has the following significant interpretation: After a large number of Markov dependent repeated Bernoulli trials has been performed, one is in a state of statistical equilibrium, in the sense that the probability
From (5.24) one sees that the trials are asymptotically fair in the sense that approximately
Example 5E. If the communications system described in example 5B consists of a very large number of stages, then half the digits transmitted by the system will be 0’s, irrespective of the proportion of 0’s among the digits entering the system.
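The equilibrium claim can be illustrated numerically (a sketch; p_ss and p_ff are hypothetical values of P(s, s) and P(f, f)): iterating the recursion (5.14) from either initial state converges to (1 − P(f, f))/(2 − P(s, s) − P(f, f)), and for the symmetric channel of example 5E this limit is 1/2.

```python
# Illustrative sketch of statistical equilibrium; p_ss and p_ff are
# hypothetical values of P(s, s) and P(f, f).
p_ss, p_ff = 0.7, 0.6
d = p_ss + p_ff - 1                     # the factor P(s,s) + P(f,f) - 1
limit = (1 - p_ff) / (2 - p_ss - p_ff)  # the equilibrium probability

for p0 in (1.0, 0.0):                   # start from success or failure
    p = p0
    for _ in range(200):                # iterate the recursion (5.14)
        p = p * d + (1 - p_ff)
    assert abs(p - limit) < 1e-12       # same limit from either state
```

Since |P(s, s) + P(f, f) − 1| < 1, the geometric term dies out and the initial state is forgotten.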
Exercises
5.1. Consider a series of Markov dependent Bernoulli trials such that
Answer
5.2. Consider a series of Markov dependent Bernoulli trials such that
5.3. Consider a series of Markov dependent Bernoulli trials such that
Answer
5.4. Consider a series of Markov dependent Bernoulli trials such that
5.5. If
Answer
0.35.
5.6. Suppose the probability is equal to
5.7. Consider a game played as follows: a group of
Answer
(i)
5.8. Suppose you are confronted with 2 coins,
5.9. A certain young lady is being wooed by a certain young man. The young lady tries not to be late for their dates too often. If she is late on 1 date, she is
Answer
5.10. Suppose that people in a certain group may be classified into 2 categories (say, city dwellers and country dwellers; Republicans and Democrats; Easterners and Westerners; skilled and unskilled workers, and so on). Let us consider a group of engineers, some of whom are Easterners and some of whom are Westerners. Suppose that each person has a certain probability of changing his status: The probability that an Easterner will become a Westerner is 0.04, whereas the probability that a Westerner will become an Easterner is 0.01. In the long run, what proportion of the group will be (i) Easterners, (ii) Westerners, (iii) will move from East to West in a given year, (iv) will move from West to East in a given year? Comment on your answers.
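The long-run computation asked for in exercise 5.10 can be sketched as follows (the method only, using the exercise's own probabilities): for a two-state chain the equilibrium proportions are proportional to the opposite transition probabilities.

```python
# Sketch for exercise 5.10: long-run proportions of a two-state chain
# with P(East -> West) = 0.04 and P(West -> East) = 0.01.
a, b = 0.04, 0.01            # East-to-West and West-to-East probabilities

prop_east = b / (a + b)      # long-run proportion of Easterners
prop_west = a / (a + b)      # long-run proportion of Westerners

# In equilibrium the yearly flows in the two directions balance:
east_to_west = prop_east * a
west_to_east = prop_west * b
assert abs(east_to_west - west_to_east) < 1e-12
```

The balancing of the two flows is exactly what keeps the proportions constant, which is the point the exercise's "comment" invites.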
- The theory of Markov chains derives its name from the celebrated Russian probabilist A. A. Markov (1856–1922). ↩︎