Independent Trials

The notion of independent families of events leads us next to the notion of independent trials . Let $S$ be a sample description space of a random observation or experiment on which is defined a probability function $P[\cdot]$. Suppose further that each description in $S$ is an $n$-tuple $(s_1, s_2, \ldots, s_n)$. Then the random phenomenon which $S$ describes is defined as consisting of $n$ trials. For example, suppose one is drawing a sample of size $n$ from an urn containing balls. The sample description space of such an experiment consists of $n$-tuples. It is also useful to regard this experiment as a series of $n$ trials, in each of which a ball is drawn from the urn. Mathematically, the fact that in drawing a sample of size $n$ one is performing $n$ trials is expressed by the fact that the sample description space consists of $n$-tuples $(s_1, s_2, \ldots, s_n)$; the first component $s_1$ represents the outcome of the first trial, the second component $s_2$ represents the outcome of the second trial, and so on, until $s_n$ represents the outcome of the $n$th trial.

We next define the important notion of an event depending on a trial. Let $S$ be a sample description space consisting of $n$ trials, and let $A$ be an event on $S$. Let $k$ be an integer, 1 to $n$. We say that $A$ depends on the $k$th trial if the occurrence or nonoccurrence of $A$ depends only on the outcome of the $k$th trial. In other words, in order to determine whether or not $A$ has occurred, one must have a knowledge only of the outcome of the $k$th trial. From a more abstract point of view, an event $A$ is said to depend on the $k$th trial if the decision as to whether a given description in $S$ belongs to the event $A$ depends only on the $k$th component of the description. It should be especially noted that the certain event $S$ and the impossible event $\emptyset$ may be said to depend on every trial, since the occurrence or nonoccurrence of these events can be determined without knowing the outcome of any trial.

Example 2A . Suppose one is drawing a sample of size 2 from an urn containing white and black balls. The event $A$ that the first ball drawn is white depends on the first trial. Similarly, the event $B$ that the second ball drawn is white depends on the second trial. However, the event $C$ that exactly one of the balls drawn is white does not depend on any one trial. Note that one may express $C$ in terms of $A$ and $B$ by $C = AB^c \cup A^cB$.
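The set identity $C = AB^c \cup A^cB$ can be checked by brute-force enumeration. The sketch below assumes, purely for illustration, an urn of 3 white and 2 black balls (the text does not fix the counts) and records each sample of size 2 as an ordered pair of ball indices:

```python
from itertools import permutations

# Hypothetical urn: 3 white (W) and 2 black (B) balls; the counts are
# illustrative, since the text only says the urn holds white and black balls.
urn = ["W", "W", "W", "B", "B"]

# Sample description space: ordered samples of size 2 drawn without
# replacement, recorded as pairs of ball indices so descriptions are distinct.
S = list(permutations(range(len(urn)), 2))

A = {s for s in S if urn[s[0]] == "W"}  # first ball drawn is white
B = {s for s in S if urn[s[1]] == "W"}  # second ball drawn is white
C = {s for s in S if (urn[s[0]] == "W") != (urn[s[1]] == "W")}  # exactly one white

# C = A B^c ∪ A^c B, expressed with set difference and union
assert C == (A - B) | (B - A)
```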

Example 2B . Choose a summer day at random on which both the Dodgers and the Giants are playing baseball games, but not with one another. Let $s_1 = 1$ or 0, depending on whether the Dodgers win or lose their game, and, similarly, let $s_2 = 1$ or 0, depending on whether the Giants win or lose their game. The event that the Dodgers win depends on the first trial of the sample description space $S = \{(1, 1), (1, 0), (0, 1), (0, 0)\}$.

We next define the very important notion of independent trials . Consider a sample description space $S$ consisting of $n$ trials. For $k = 1, 2, \ldots, n$ let $\mathcal{A}_k$ be the family of events on $S$ that depend on the $k$th trial. We define the trials as independent (and we say that $S$ consists of $n$ independent trials) if the families of events $\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_n$ are independent . Otherwise, the trials are said to be dependent or nonindependent. More explicitly, the trials are said to be independent if (1.11) holds for every set of $n$ events $A_1, A_2, \ldots, A_n$ such that, for $k = 1, 2, \ldots, n$, $A_k$ depends only on the $k$th trial.

If the reader traces through the various definitions that have been made in this chapter, it should become clear to him that the mathematical definition of the notion of independent trials embodies the intuitive meaning of the notion, which is that two trials (of the same or different experiments) are independent if the outcome of one does not affect the outcome of the other and are otherwise dependent.

In the foregoing definition of independent trials it was assumed that the probability function $P[\cdot]$ was already defined on the sample description space $S$, which consists of $n$-tuples. If this were the case, it is clear that to establish that $S$ consists of independent trials requires the verification of a large number of relations of the form of (1.11) . However, in practice, one does not start with a probability function on $S$ and then proceed to verify all of the relations of the form of (1.11) in order to show that $S$ consists of independent trials. Rather, the notion of independent trials derives its importance from the fact that it provides an often-used method for setting up a probability function on a sample description space $S$. This is done in the following way.

Let $S_1, S_2, \ldots, S_n$ be $n$ sample description spaces (which may be alike) on whose subsets, respectively, are defined probability functions $P_1[\cdot], P_2[\cdot], \ldots, P_n[\cdot]$. For example, suppose we are drawing, with replacement, a sample of size $n$ from an urn containing $N$ balls, numbered 1 to $N$. We define (for $k = 1, 2, \ldots, n$) $S_k$ as the sample description space of the outcome of the $k$th draw; consequently, $S_k = \{1, 2, \ldots, N\}$. If the descriptions in $S_k$ are assumed to be equally likely, then the probability function $P_k[\cdot]$ is defined on the events $C_k$ of $S_k$ by $P_k[C_k] = N[C_k]/N$, in which $N[C_k]$ denotes the number of descriptions in $C_k$.

Now suppose we perform in succession the $n$ random experiments whose sample description spaces are $S_1, S_2, \ldots, S_n$, respectively. The sample description space $S$ of this series of random experiments consists of $n$-tuples $(s_1, s_2, \ldots, s_n)$, which may be formed by taking for the first component $s_1$ any member of $S_1$, by taking for the second component $s_2$ any member of $S_2$, and so on, until for the $n$th component $s_n$ we take any member of $S_n$. We introduce a notation to express these facts; we write $S = S_1 \times S_2 \times \cdots \times S_n$, which we read “$S$ is the combinatorial product of the spaces $S_1, S_2, \ldots, S_n$”. More generally, we define the notion of a combinatorial product event on $S$. For any events $C_1$ on $S_1$, $C_2$ on $S_2$, …, and $C_n$ on $S_n$ we define the combinatorial product event $C_1 \times C_2 \times \cdots \times C_n$ as the set of all $n$-tuples $(s_1, s_2, \ldots, s_n)$, which can be formed by taking for the first component any member of $C_1$, for the second component any member of $C_2$, and so on, until for the $n$th component we take any member of $C_n$.

We now define a probability function $P[\cdot]$ on the subsets of $S$. For every event $A$ on $S$ that is a combinatorial product event, so that $A = C_1 \times C_2 \times \cdots \times C_n$ for some events $C_1, C_2, \ldots, C_n$, which belong, respectively, to $S_1, S_2, \ldots, S_n$, we define
\begin{align} P[A] = P\left[C_{1} \times C_{2} \times \cdots \times C_{n}\right] = P_{1}\left[C_{1}\right] P_{2}\left[C_{2}\right] \cdots P_{n}\left[C_{n}\right]. \tag{2.1} \end{align}

Not every event in $S$ is a combinatorial product event. However, it can be shown that it is possible to define a unique probability function $P[\cdot]$ on the events of $S$ in such a way that (2.1) holds for combinatorial product events.
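The construction of $P[\cdot]$ from the component probability functions can be sketched in a few lines of code. The helper names below (`product_probability`, `prob_of_product_event`) are ours, and exact rational arithmetic with `fractions.Fraction` is used so that the product rule holds without rounding error:

```python
from itertools import product
from fractions import Fraction

def product_probability(component_probs):
    """Build P on S = S_1 x ... x S_n from P_1, ..., P_n, as in (2.2).

    Each element of component_probs is a dict mapping the descriptions of
    one component space S_k to their probabilities under P_k."""
    P = {}
    for desc in product(*(pk.keys() for pk in component_probs)):
        prob = Fraction(1)
        for pk, s in zip(component_probs, desc):
            prob *= pk[s]  # multiply the component probabilities
        P[desc] = prob
    return P

def prob_of_product_event(P, events):
    """P[C_1 x ... x C_n], computed by summing single-member probabilities.
    By (2.1) this equals the product P_1[C_1] ... P_n[C_n]."""
    return sum(p for desc, p in P.items()
               if all(s in Ck for s, Ck in zip(desc, events)))

# Illustration with two fair-coin component spaces (an assumed example):
P1 = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
P2 = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
P = product_probability([P1, P2])
assert P[("H", "H")] == Fraction(1, 4)
assert prob_of_product_event(P, [{"H"}, {"H", "T"}]) == Fraction(1, 2)
```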

It may help to clarify the meaning of the foregoing ideas if we consider the special (but, nevertheless, important) case, in which each sample description space $S_1, S_2, \ldots, S_n$ is finite, of sizes $N_1, N_2, \ldots, N_n$, respectively. As in section 6 of Chapter 1 , we list the descriptions in $S_k$: $s_{k1}, s_{k2}, \ldots, s_{kN_k}$ for $k = 1, 2, \ldots, n$.

Now let $S = S_1 \times S_2 \times \cdots \times S_n$ be the sample description space of the random experiment, which consists in performing in succession the $n$ random experiments whose sample description spaces are $S_1, S_2, \ldots, S_n$, respectively. A typical description in $S$ can be written $(s_{1 j_1}, s_{2 j_2}, \ldots, s_{n j_n})$, where, for $k = 1, 2, \ldots, n$, $s_{k j_k}$ represents a description in $S_k$ and $j_k$ is some integer, 1 to $N_k$. To determine a probability function $P[\cdot]$ on the subsets of $S$, it suffices to specify it on the single-member events of $S$. Given probability functions $P_1[\cdot], P_2[\cdot], \ldots, P_n[\cdot]$ defined on $S_1, S_2, \ldots, S_n$, respectively, we define $P[\cdot]$ on the subsets of $S$ by defining
\begin{align} P\left[\{(s_{1 j_1}, s_{2 j_2}, \ldots, s_{n j_n})\}\right] = P_{1}\left[\{s_{1 j_1}\}\right] P_{2}\left[\{s_{2 j_2}\}\right] \cdots P_{n}\left[\{s_{n j_n}\}\right]. \tag{2.2} \end{align}

Equation (2.2) is a special case of (2.1) , since a single-member event on $S$ can be written as a combinatorial product event; indeed,
\begin{align} \{(s_{1 j_1}, s_{2 j_2}, \ldots, s_{n j_n})\} = \{s_{1 j_1}\} \times \{s_{2 j_2}\} \times \cdots \times \{s_{n j_n}\}. \tag{2.3} \end{align}

Example 2C . Let $S_1 = \{H, T\}$ be the sample description space of the experiment of tossing a coin, and let $S_2 = \{1, 2, 3, 4, 5, 6\}$ be the sample description space of the experiment of throwing a fair die. Let $S = S_1 \times S_2$ be the sample description space of the experiment, which consists of first tossing a coin and then throwing a die. What is the probability that in the jointly performed experiment one will obtain heads on the coin toss and a 5 on the die throw? The assumption made by (2.2) is that it is equal to the product of (i) the probability that the outcome of the coin toss will be heads and (ii) the probability that the outcome of the die throw will be a 5.
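This computation can be checked directly by enumerating the product space. The sketch below assumes the coin is fair (the text states fairness only for the die):

```python
from fractions import Fraction
from itertools import product

S1 = ["H", "T"]                 # coin toss
S2 = [1, 2, 3, 4, 5, 6]         # die throw
# Equally likely descriptions on each component space (fair coin assumed)
P1 = {s: Fraction(1, 2) for s in S1}
P2 = {s: Fraction(1, 6) for s in S2}

# Probability of the combinatorial product event {H} x {5} under (2.2)
p = sum(P1[c] * P2[d] for c, d in product(S1, S2) if c == "H" and d == 5)
assert p == Fraction(1, 2) * Fraction(1, 6) == Fraction(1, 12)
```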

We now desire to show that the probability space, consisting of the sample description space $S = S_1 \times S_2 \times \cdots \times S_n$, on whose subsets a probability function $P[\cdot]$ is defined by means of (2.1) , consists of $n$ independent trials.

We first note that an event $A_k$ in $S$, which depends only on the $k$th trial, is necessarily a combinatorial product event; indeed, for some event $C_k$ in $S_k$,
\begin{align} A_{k} = S_{1} \times \cdots \times S_{k-1} \times C_{k} \times S_{k+1} \times \cdots \times S_{n}. \tag{2.4} \end{align}

Equation (2.4) follows from the fact that an event $A_k$ depends on the $k$th trial if and only if the decision as to whether or not a description belongs to $A_k$ depends only on the $k$th component of the description. Next, let $A_1, A_2, \ldots, A_n$ be events depending, respectively, on the first, second,…, $n$th trial. For each $A_k$ we have a representation of the form of (2.4) , with some event $C_k$ in $S_k$. We next assert that the intersection $A_1 A_2 \cdots A_n$ may be written as a combinatorial product event:
\begin{align} A_{1} A_{2} \cdots A_{n} = C_{1} \times C_{2} \times \cdots \times C_{n}. \tag{2.5} \end{align}

We leave the verification of (2.5) , which requires only a little thought, to the reader. Now, from (2.1) and (2.5) ,
\begin{align} P\left[A_{1} A_{2} \cdots A_{n}\right] = P_{1}\left[C_{1}\right] P_{2}\left[C_{2}\right] \cdots P_{n}\left[C_{n}\right], \tag{2.6} \end{align}
whereas from (2.1) and (2.4) \begin{align} P\left[A_{k}\right] & =P_{1}\left[S_{1}\right] \cdots P_{k-1}\left[S_{k-1}\right] P_{k}\left[C_{k}\right] P_{k+1}\left[S_{k+1}\right] \cdots P_{n}\left[S_{n}\right] \tag{2.7} \\ & =P_{k}\left[C_{k}\right], \end{align} since $P_j[S_j] = 1$ for every $j \neq k$. From (2.6) and (2.7) it is seen that (1.11) is satisfied, so that $S$ consists of $n$ independent trials.
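The argument above can also be confirmed exhaustively on a small example. The sketch below builds the product probability (2.2) on two illustrative, deliberately non-uniform component spaces (the letters and probability values are assumptions, not taken from the text) and checks the product rule for every pair of events of the form (2.4):

```python
from fractions import Fraction
from itertools import chain, combinations, product

# Two small, deliberately non-uniform component spaces (illustrative values)
P1 = {"a": Fraction(1, 4), "b": Fraction(3, 4)}
P2 = {"x": Fraction(1, 3), "y": Fraction(1, 6), "z": Fraction(1, 2)}

# Product probability on S = S1 x S2, as in (2.2)
P = {(s1, s2): P1[s1] * P2[s2] for s1, s2 in product(P1, P2)}

def prob(event):
    return sum(P[d] for d in event)

def subsets(xs):
    """All subsets of xs, including the empty set and xs itself."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Check the product rule for every pair of events A1, A2 such that A1
# depends only on the first trial and A2 only on the second; each is a
# combinatorial product event of the form (2.4).
for C1 in subsets(P1):
    for C2 in subsets(P2):
        A1 = {d for d in P if d[0] in C1}   # C1 x S2
        A2 = {d for d in P if d[1] in C2}   # S1 x C2
        assert prob(A1 & A2) == prob(A1) * prob(A2)
```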

The foregoing considerations are not only sufficient to define a probability space that consists of independent trials but are also necessary in the sense of the following theorem, which we state without proof. Let the sample description space $S$ be a combinatorial product $S = S_1 \times S_2 \times \cdots \times S_n$ of $n$ sample description spaces $S_1, S_2, \ldots, S_n$. Let $P[\cdot]$ be a probability function defined on the subsets of $S$. The probability space $S$ consists of $n$ independent trials if and only if there exist probability functions $P_1[\cdot], P_2[\cdot], \ldots, P_n[\cdot]$, defined, respectively, on the subsets of the sample description spaces $S_1, S_2, \ldots, S_n$, with respect to which $P[\cdot]$ satisfies (2.6) for every set of $n$ events $A_1, A_2, \ldots, A_n$ on $S$ such that, for $k = 1, 2, \ldots, n$, $A_k$ depends only on the $k$th trial (and then $C_k$ is defined by (2.4) ).

To illustrate the foregoing considerations, we consider the following example.

Example 2D . A man tosses two fair coins independently. Let $A$ be the event that the first coin tossed is a head, let $B$ be the event that the second coin tossed is a head, and let $C$ be the event that both coins tossed are heads. Consider 3 sample description spaces: $S = \{(H, H), (H, T), (T, H), (T, T)\}$, $S_1 = S_2 = \{H, T\}$. Clearly $S$ is the sample description space of the outcome of the two tosses, whereas $S_1$ and $S_2$ are the sample description spaces of the outcome of the first and second tosses, respectively. We assume that each of these sample description spaces has equally likely descriptions.

The event $A$ may be defined on either $S$ or $S_1$. If defined on $S$, $A = \{(H, H), (H, T)\}$. If defined on $S_1$, $A = \{H\}$. The event $B$ may in a similar manner be defined on either $S$ or $S_2$. However, the event $C$ can be defined only on $S$.

The spaces on which $A$ and $B$ are defined determine the relation that exists between $A$, $B$, and $C$. If both $A$ and $B$ are defined on $S$, then $C = AB$. If $A$ and $B$ are defined on $S_1$ and $S_2$, respectively, then $C = A \times B$.

In order to speak of the independence of $A$ and $B$, we must regard them as being defined on the same sample description space $S$. That $A$ and $B$ are independent events is intuitively clear, since $S$ consists of two independent trials and $A$ depends on the first trial, whereas $B$ depends on the second trial. Events can be independent without depending on independent trials. For example, consider the event $D$ that the two tosses have the same outcome. One may verify that $A$ and $D$ are independent and also that $B$ and $D$ are independent. On the other hand, the events $A$, $B$, and $D$ are not independent.
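These claims of pairwise independence without mutual independence are easy to verify by enumeration. In the sketch below the event that the two tosses have the same outcome is labeled `D` (a label introduced here for convenience):

```python
from fractions import Fraction
from itertools import product

# Two independent tosses of a fair coin: four equally likely descriptions
S = list(product("HT", repeat=2))
P = {d: Fraction(1, 4) for d in S}

def prob(event):
    return sum(P[d] for d in event)

A = {d for d in S if d[0] == "H"}     # first coin is a head
B = {d for d in S if d[1] == "H"}     # second coin is a head
D = {d for d in S if d[0] == d[1]}    # the two tosses have the same outcome

# A, D and B, D are each independent pairs ...
assert prob(A & D) == prob(A) * prob(D)
assert prob(B & D) == prob(B) * prob(D)
# ... but A, B, D fail the triple product rule, hence are not independent
assert prob(A & B & D) != prob(A) * prob(B) * prob(D)
```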

Exercises

2.1 . Consider a man who has made 2 tosses of a die. State whether each of the following six statements is true or false.

Let $A$ be the event that the outcome of the first throw is a 1 or a 2.

Statement 1: $A$ depends on the first throw.

Let $B$ be the event that the outcome of the second throw is a 1 or a 2.

Statement 2: $A$ and $B$ are mutually exclusive events.

Let $C$ be the event that the sum of the outcomes is 7.

Statement 3: $C$ depends on the first throw.

Let $D$ be the event that the sum of the outcomes is 3.

Statement 4: $C$ and $D$ are mutually exclusive events.

Let $E$ be the event that one of the outcomes is a 1 and the other is a 2.

Statement 5: $E$ is a sub-event of $D$.

Statement 6: $D$ is a sub-event of $E$.

 

Answer

(1) True; (2) False; (3) False; (4) True; (5) True; (6) True.

 

2.2 . Consider a man who has made 2 tosses of a coin. He assumes that the possible outcomes of the experiment, together with their probabilities, are given by the following table:

Sample Descriptions

Show that this probability space does not consist of 2 independent trials. Is there a unique probability function that must be assigned on the subsets of the foregoing sample description space in order that it consist of 2 independent trials?

2.3 . Consider 3 urns; urn I contains 1 white and 2 black balls, urn II contains 3 white and 2 black balls, and urn III contains 2 white and 3 black balls. One ball is drawn from each urn. What is the probability that among the balls drawn there will be (i) 1 white and 2 black balls, (ii) at least 2 black balls, (iii) more black than white balls?

 

Answer

(i) 32/75; (ii), (iii) 44/75.

 

2.4 . If you had to construct a mathematical model for events $A$ and $B$, as described below, would it be appropriate to assume that $A$ and $B$ are independent? Explain the reasons for your opinion.
(i) $A$ is the event that a subscriber to a certain magazine owns a car, and $B$ is the event that the same subscriber is listed in the telephone directory.

(ii) $A$ is the event that a married man has blue eyes, and $B$ is the event that his wife has blue eyes.

(iii) $A$ is the event that a man aged 21 is more than 6 feet tall, and $B$ is the event that the same man weighs less than 150 pounds.

(iv) $A$ is the event that a man lives in the Northern Hemisphere, and $B$ is the event that he lives in the Western Hemisphere.

(v) $A$ is the event that it will rain tomorrow, and $B$ is the event that it will rain within the next week.

2.5 . Explain the meaning of the following statements:

(i) A random phenomenon consists of $n$ trials.

(ii) In drawing a sample of size $n$, one is performing $n$ trials.

(iii) An event depends on the third trial.

(iv) The event that the third ball drawn is white depends on the third trial.

(v) In drawing with replacement a sample of size 6, one is performing 6 independent trials of an experiment.

(vi) If $S$ is the sample description space of the experiment of drawing with replacement a sample of size 6 from an urn containing 10 balls, numbered 1 to 10, then $S = S_1 \times S_2 \times \cdots \times S_6$, in which $S_k = \{1, 2, \ldots, 10\}$ for $k = 1, 2, \ldots, 6$.

(vii) If, in (vi), balls numbered 1 to 7 are white and if $A$ is the event that all balls drawn are white, then $A = C_1 \times C_2 \times \cdots \times C_6$, in which $C_k = \{1, 2, \ldots, 7\}$ for $k = 1, 2, \ldots, 6$.