The Probability Law of a Function of a Random Variable

In this section we develop formulas for the probability law of a random variable $Y$ which arises as a function of another random variable $X$, so that $Y = g(X)$ for some Borel function $g(\cdot)$. To find the probability law of $Y$, it is best in general first to find its distribution function $F_Y(\cdot)$, from which one may obtain the probability density function $f_Y(\cdot)$ or the probability mass function $p_Y(\cdot)$ in cases in which these functions exist. From (2.2) we obtain the following formula for the value at the real number $y$ of the distribution function $F_Y(\cdot)$:
\begin{align} F_Y(y) = P[Y \le y] = P[g(X) \le y] = P[X \in \{x \colon g(x) \le y\}]. \tag{8.1} \end{align}
Of great importance is the special case of a linear function, in which $a \neq 0$ and $b$ are given real numbers, so that $g(x) = ax + b$ and $Y = aX + b$. The distribution function of the random variable $aX + b$ is given by
\begin{align} F_{aX+b}(y) = \begin{cases} F_X\!\left(\dfrac{y-b}{a}\right), & \text{if } a > 0 \\[6pt] 1 - F_X\!\left(\left(\dfrac{y-b}{a}\right)^{-}\right), & \text{if } a < 0, \end{cases} \tag{8.2} \end{align}
in which $F_X(x^-)$ denotes the limit of $F_X$ from the left at $x$. If $X$ is continuous, so is $aX + b$, with a probability density function for any real number $y$ given by
\begin{align} f_{aX+b}(y) = \frac{1}{|a|}\, f_X\!\left(\frac{y-b}{a}\right). \tag{8.3} \end{align}
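The linear case can be checked numerically. The following sketch (all numerical values are illustrative, not from the text) simulates $aX + b$ for a standard normal $X$ and compares the empirical distribution function at a point with $F_X\left(\frac{y-b}{a}\right)$:

```python
import random
from statistics import NormalDist

# Check F_{aX+b}(y) = F_X((y-b)/a) for a > 0 by simulation.
# X ~ N(0,1); a, b, y are arbitrary illustrative values.
a, b, y = 2.0, 1.0, 1.5
random.seed(0)
samples = [a * random.gauss(0.0, 1.0) + b for _ in range(200_000)]
empirical = sum(s <= y for s in samples) / len(samples)   # fraction with aX+b <= y
theoretical = NormalDist().cdf((y - b) / a)               # F_X((y-b)/a)
assert abs(empirical - theoretical) < 0.01
```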

If $X$ is discrete, so is $aX + b$, with a probability mass function for any real number $y$ given by
\begin{align} p_{aX+b}(y) = p_X\!\left(\frac{y-b}{a}\right). \tag{8.4} \end{align}

Next, let us consider $Y = X^2$. Then
\begin{align} F_{X^2}(y) = P[X^2 \le y]. \tag{8.5} \end{align}
For $y < 0$ the set $\{x \colon x^2 \le y\}$ is the empty set of real numbers. Consequently,
\begin{align} F_{X^2}(y) = 0 \quad \text{for } y < 0. \tag{8.6} \end{align}

For $y \ge 0$,
\begin{align} F_{X^2}(y) = P[-\sqrt{y} \le X \le \sqrt{y}] = F_X(\sqrt{y}) - F_X\!\left((-\sqrt{y})^{-}\right). \tag{8.7} \end{align}

One sees from (8.7) that if $X$ possesses a probability density function $f_X(\cdot)$, then the distribution function of $X^2$ may be expressed as an integral; this is the necessary and sufficient condition that $X^2$ possess a probability density function $f_{X^2}(\cdot)$. To evaluate the value of $f_{X^2}(\cdot)$ at a real number $y$, we differentiate (8.7) and (8.6) with respect to $y$. We obtain \begin{align} f_{X^{2}}(y) &= \begin{cases} \left[f_{X}(\sqrt{y}) + f_{X}(-\sqrt{y})\right] \dfrac{1}{2 \sqrt{y}}, & \text{for } y > 0 \\ 0, & \text{for } y < 0. \end{cases} \tag{8.8} \end{align} It may help the reader to recall the so-called chain rule for differentiation of a function of a function, required to obtain (8.8), if we point out that
$$\frac{d}{dy}\, F_X(\sqrt{y}) = f_X(\sqrt{y})\, \frac{d}{dy} \sqrt{y} = f_X(\sqrt{y})\, \frac{1}{2\sqrt{y}}.$$
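As a numerical check on (8.8), the following sketch takes $X$ to be standard normal (an illustrative assumption); (8.8) should then reproduce the chi-square density with one degree of freedom, $e^{-y/2}/\sqrt{2\pi y}$:

```python
import math

# Density of X^2 via (8.8), with X standard normal.
def f_X(x):                      # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_X2(y):                     # formula (8.8)
    if y <= 0:
        return 0.0
    r = math.sqrt(y)
    return (f_X(r) + f_X(-r)) / (2 * r)

def chi2_1(y):                   # chi-square density, one degree of freedom
    return math.exp(-y / 2) / math.sqrt(2 * math.pi * y)

for y in (0.1, 1.0, 4.0):
    assert abs(f_X2(y) - chi2_1(y)) < 1e-12
```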

If $X$ is discrete, it then follows from (8.7) that $X^2$ is discrete, since the distribution function may be expressed entirely as a sum. The probability mass function of $X^2$ for any real number $y$ is given by
\begin{align} p_{X^{2}}(y) = \begin{cases} p_X(\sqrt{y}) + p_X(-\sqrt{y}), & \text{for } y > 0 \\ p_X(0), & \text{for } y = 0 \\ 0, & \text{for } y < 0. \end{cases} \tag{8.9} \end{align}

Example 8A. The random sine wave. Let
\begin{align} X = a \sin \Theta, \tag{8.11} \end{align}
in which the amplitude $a$ is a known positive constant and the phase $\Theta$ is a random variable uniformly distributed on the interval $-\pi/2$ to $\pi/2$. The distribution function of $X$ is given, for $|x| < a$, by
$$F_X(x) = P[a \sin \Theta \le x] = P\!\left[\Theta \le \sin^{-1}\frac{x}{a}\right] = \frac{1}{2} + \frac{1}{\pi} \sin^{-1}\frac{x}{a},$$
while $F_X(x) = 0$ for $x \le -a$ and $F_X(x) = 1$ for $x \ge a$. Consequently, the probability density function of $X$ is given by
\begin{align} f_X(x) = \begin{cases} \dfrac{1}{\pi \sqrt{a^2 - x^2}}, & \text{for } |x| < a \\ 0, & \text{for } |x| \ge a. \end{cases} \tag{8.12} \end{align}
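A simulation of the random sine wave, assuming for illustration an amplitude $a = 2$ and a phase uniformly distributed on $(-\pi/2, \pi/2)$; the empirical distribution function is compared with $\tfrac{1}{2} + \tfrac{1}{\pi}\sin^{-1}(x/a)$:

```python
import math, random

# X = a*sin(Theta), Theta uniform on (-pi/2, pi/2); a = 2 is illustrative.
a = 2.0
random.seed(1)
xs = [a * math.sin(random.uniform(-math.pi / 2, math.pi / 2))
      for _ in range(100_000)]
x0 = 0.7                                           # evaluation point, |x0| < a
empirical = sum(x <= x0 for x in xs) / len(xs)     # empirical F_X(x0)
theoretical = 0.5 + math.asin(x0 / a) / math.pi    # arcsine-law distribution function
assert abs(empirical - theoretical) < 0.01
```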

Random variables of the form of (8.11) arise in the theory of ballistics. If a projectile is fired at an angle $\theta$ to the earth, with a velocity of magnitude $v$, then the point at which the projectile returns to the earth is at a distance $R$ from the point at which it was fired; $R$ is given by the equation $R = (v^2/g) \sin 2\theta$, in which $g$ is the gravitational constant, equal approximately to 32 feet per second per second or 9.8 meters per second per second. If the firing angle $\theta$ is a random variable with a known probability law, then the range $R$ of the projectile is also a random variable with a known probability law.

A random variable similar to the one given in (8.11) was encountered in the discussion of Bertrand’s paradox in section 7; namely, the random variable $2r \sin \Theta$, in which $\Theta$ is uniformly distributed over the interval 0 to $\pi$.

Example 8B. The positive part of a random variable. Given any real number $x$, we define the symbols $x^+$ and $x^-$ as follows:
\begin{align} x^{+} = \begin{cases} x, & \text{if } x \ge 0 \\ 0, & \text{if } x < 0, \end{cases} \qquad x^{-} = \begin{cases} 0, & \text{if } x \ge 0 \\ -x, & \text{if } x < 0. \end{cases} \tag{8.13} \end{align}

Then $x = x^+ - x^-$ and $|x| = x^+ + x^-$. Given a random variable $X$, let $Y = X^+$. We call $X^+$ the positive part of $X$. The distribution function of the positive part of $X$ is given by
\begin{align} F_{X^{+}}(y) = \begin{cases} F_X(y), & \text{for } y \ge 0 \\ 0, & \text{for } y < 0. \end{cases} \tag{8.14} \end{align}

Thus, if $X$ is normally distributed with parameters $m$ and $\sigma$,
\begin{align} F_{X^{+}}(y) = \begin{cases} \Phi\!\left(\dfrac{y - m}{\sigma}\right), & \text{for } y \ge 0 \\ 0, & \text{for } y < 0. \end{cases} \tag{8.15} \end{align}

The positive part of a normally distributed random variable is neither continuous nor discrete but has a distribution function of mixed type.
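The mixed character of the positive part can be seen in simulation: for a normal $X$ (illustrative parameters $m = 1$, $\sigma = 2$ below), $X^+ = \max(X, 0)$ carries an atom at 0 of probability $P[X \le 0]$ and is otherwise continuous on $(0, \infty)$:

```python
import random
from statistics import NormalDist

# X normal with mean m, standard deviation s (illustrative values);
# X+ has probability mass P[X <= 0] concentrated at the single point 0.
m, s = 1.0, 2.0
random.seed(2)
xplus = [max(random.gauss(m, s), 0.0) for _ in range(200_000)]
atom = sum(x == 0.0 for x in xplus) / len(xplus)   # empirical mass at 0
assert abs(atom - NormalDist(m, s).cdf(0.0)) < 0.01
```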

The Calculus of Probability Density Functions . Let $X$ be a continuous random variable, and let $Y = g(X)$. Unless some conditions are imposed on the function $g(\cdot)$, it is not necessarily true that $Y$ is continuous. For example, $Y = X^+$ is not continuous if $X$ has a positive probability of being negative. We now state some conditions on the function $g(\cdot)$ under which $Y = g(X)$ is a continuous random variable if $X$ is a continuous random variable. At the same time, we give formulas for the probability density function of $Y$ in terms of the probability density function of $X$ and the derivatives of $g(\cdot)$.

We first consider the case in which the function $g(\cdot)$ is differentiable at every real number $x$ and, further, either $g'(x) > 0$ for all $x$ or $g'(x) < 0$ for all $x$. We may then prove the following facts (see R. Courant, Differential and Integral Calculus , Interscience, New York, 1937, pp. 144–145): (i) $g(x)$, as $x$ goes from $-\infty$ to $\infty$, is either monotone increasing or monotone decreasing; (ii) the limits
\begin{align} \alpha = \min\Bigl(\lim_{x \to -\infty} g(x),\; \lim_{x \to \infty} g(x)\Bigr), \qquad \beta = \max\Bigl(\lim_{x \to -\infty} g(x),\; \lim_{x \to \infty} g(x)\Bigr) \tag{8.16} \end{align}

exist (although they may be infinite); (iii) for every value of $y$ such that $\alpha < y < \beta$ there exists exactly one value of $x$ such that $g(x) = y$ [this value of $x$ is denoted by $g^{-1}(y)$]; (iv) the inverse function $g^{-1}(\cdot)$ is differentiable and its derivative is given by
\begin{align} \frac{d}{dy}\, g^{-1}(y) = \frac{1}{g'\bigl(g^{-1}(y)\bigr)}. \tag{8.17} \end{align}

For example, let $g(x) = e^x$. Then $g'(x) = e^x$ is positive for all $x$. Here $\alpha = 0$ and $\beta = \infty$. The inverse function is $g^{-1}(y) = \log y$, defined for $0 < y < \infty$. The derivative of the inverse function is given by $\frac{d}{dy} \log y = 1/y$. One sees that $1/y$ is equal to $1/g'\bigl(g^{-1}(y)\bigr) = 1/e^{\log y}$, as asserted by (8.17). We may now state the following theorem:

If $g(x)$ is differentiable for all $x$, and either $g'(x) > 0$ for all $x$ or $g'(x) < 0$ for all $x$, and if $X$ is a continuous random variable, then $Y = g(X)$ is a continuous random variable with probability density function given by

\begin{align} f_{Y}(y) = \begin{cases} f_{X}\left[g^{-1}(y)\right] \left|\dfrac{d}{d y} g^{-1}(y)\right|, & \text{if } \alpha < y < \beta \\ 0, & \text{otherwise,} \end{cases} \tag{8.18} \end{align}

in which $\alpha$ and $\beta$ are defined by (8.16).

To illustrate the use of (8.18), let us note the following formula: if $X$ is a continuous random variable, then $e^X$ has probability density function
\begin{align} f_{e^{X}}(y) = \begin{cases} \dfrac{1}{y}\, f_X(\log y), & \text{for } y > 0 \\ 0, & \text{for } y \le 0. \end{cases} \tag{8.19} \end{align}
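A sketch of (8.18) in code, assuming for illustration $g(x) = e^x$ (so that $g^{-1}(y) = \log y$, $\alpha = 0$, $\beta = \infty$) and a standard normal $X$; the resulting density should integrate to approximately 1:

```python
import math

# Change of variables (8.18) for the strictly increasing g(x) = e^x:
# f_Y(y) = f_X(g^{-1}(y)) * |d/dy g^{-1}(y)| = f_X(log y) / y for y > 0.
def f_X(x):                      # standard normal density (illustrative choice)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y(y):                      # density of Y = e^X via (8.18)
    if y <= 0:
        return 0.0               # outside (alpha, beta) = (0, infinity)
    return f_X(math.log(y)) / y  # |d/dy log y| = 1/y

# Spot check: f_Y should integrate to (approximately) 1.
grid = [i * 0.01 for i in range(1, 3000)]
area = sum(f_Y(y) * 0.01 for y in grid)
assert abs(area - 1.0) < 0.01
```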

To prove (8.18), we distinguish two cases: the case in which the function $g(\cdot)$ is monotone increasing and that in which it is monotone decreasing. In the first case the distribution function of $Y$ for $\alpha < y < \beta$ may be written
\begin{align} F_Y(y) = P[g(X) \le y] = P[X \le g^{-1}(y)] = F_X\bigl(g^{-1}(y)\bigr). \tag{8.20} \end{align}

In the second case, for $\alpha < y < \beta$,
\begin{align} F_Y(y) = P[g(X) \le y] = P[X \ge g^{-1}(y)] = 1 - F_X\bigl(g^{-1}(y)\bigr). \tag{8.21} \end{align}


If (8.20) is differentiated with respect to $y$, (8.18) is obtained; the monotone decreasing case is handled similarly. We leave it to the reader to consider the case in which $y \le \alpha$ or $y \ge \beta$.

One may extend (8.18) to the case in which the derivative $g'(\cdot)$ is continuous and vanishes at only a finite number of points. We leave the proof of the following assertion to the reader.

Let $g(\cdot)$ be differentiable for all $x$ and assume that the derivative $g'(\cdot)$ is continuous and nonzero at all but a finite number of values of $x$. Then, to every real number $y$, (i) there is a positive integer $n(y)$ and points $x_1(y), x_2(y), \ldots, x_{n(y)}(y)$ such that, for $k = 1, 2, \ldots, n(y)$,
$$g\bigl(x_k(y)\bigr) = y, \qquad g'\bigl(x_k(y)\bigr) \neq 0,$$

or (ii) there is no value of $x$ such that $g(x) = y$ and $g'(x) \neq 0$; in this case we write $n(y) = 0$. If $X$ is a continuous random variable, then $Y = g(X)$ is a continuous random variable with a probability density function given by

\begin{align} f_Y(y) = \begin{cases} \displaystyle\sum_{k=1}^{n(y)} f_X\bigl(x_k(y)\bigr)\, \bigl|g'\bigl(x_k(y)\bigr)\bigr|^{-1}, & \text{if } n(y) > 0 \\ 0, & \text{if } n(y) = 0. \end{cases} \tag{8.22} \end{align}

We obtain as an immediate consequence of (8.22): if $X$ is a continuous random variable, then
\begin{align} f_{|X|}(y) &= f_X(y) + f_X(-y) \quad \text{for } y > 0, \tag{8.23} \\ f_{X^2}(y) &= \bigl[f_X(\sqrt{y}) + f_X(-\sqrt{y})\bigr] \frac{1}{2\sqrt{y}} \quad \text{for } y > 0. \tag{8.24} \end{align}

Equations (8.23) and (8.24) may also be obtained directly, by using the same technique with which (8.8) was derived.
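The multi-root technique can also be checked by simulation. The following sketch, assuming a standard normal $X$ for illustration, compares the probability that $|X|$ falls in an interval with the integral of $f_X(y) + f_X(-y)$, the density obtained from the two roots $\pm y$ of $|x| = y$:

```python
import math, random

# Density of |X| from the two roots of |x| = y: f_X(y) + f_X(-y), y > 0.
random.seed(3)
ys = [abs(random.gauss(0, 1)) for _ in range(200_000)]
lo, hi = 0.5, 1.5
empirical = sum(lo <= y <= hi for y in ys) / len(ys)   # P[lo <= |X| <= hi]

def f_abs(y):
    fx = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return fx(y) + fx(-y)

# Midpoint-rule integral of the density over [lo, hi].
h = 0.001
theoretical = sum(f_abs(lo + (k + 0.5) * h) * h
                  for k in range(int((hi - lo) / h)))
assert abs(empirical - theoretical) < 0.01
```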

The Probability Integral Transformation . It is a somewhat surprising fact, of great usefulness both in theory and in practice, that to obtain a random sample of a random variable $X$ it suffices to obtain a random sample of a random variable $U$, which is uniformly distributed over the interval 0 to 1. This follows from the fact that the distribution function $F_X(\cdot)$ of the random variable $X$ is a nondecreasing function. Consequently, an inverse function $F_X^{-1}(\cdot)$ may be defined for values of $u$ between 0 and 1: $F_X^{-1}(u)$ is equal to the smallest value of $x$ satisfying the condition that $F_X(x) \ge u$.
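A minimal sketch of this sampling idea, assuming for illustration the exponential distribution $F_X(x) = 1 - e^{-x}$, whose inverse is $F_X^{-1}(u) = -\log(1 - u)$:

```python
import math, random

# Inverse-distribution-function sampling: X = F^{-1}(U) with U uniform on (0,1).
# Illustrative target: exponential with F(x) = 1 - e^{-x}, so F^{-1}(u) = -log(1-u).
random.seed(4)
xs = [-math.log(1.0 - random.random()) for _ in range(200_000)]

mean = sum(xs) / len(xs)
assert abs(mean - 1.0) < 0.02               # exponential(1) has mean 1

x0 = 0.8
emp_cdf = sum(x <= x0 for x in xs) / len(xs)
assert abs(emp_cdf - (1 - math.exp(-x0))) < 0.01
```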

Example 8C. If $X$ is normally distributed with parameters $m$ and $\sigma$, then $F_X(x) = \Phi\!\left(\frac{x - m}{\sigma}\right)$ and $F_X^{-1}(u) = m + \sigma\, \Phi^{-1}(u)$, in which $\Phi^{-1}(u)$ denotes the value of $x$ satisfying the equation $\Phi(x) = u$.

In terms of the inverse function $F_X^{-1}(\cdot)$ to the distribution function $F_X(\cdot)$ of the random variable $X$, we may state the following theorem, the proof of which we leave as an exercise for the reader.

Theorem 8A . Let $U_1, U_2, \ldots, U_n$ be independent random variables, each uniformly distributed over the interval 0 to 1. The random variables defined by
\begin{align} X_k = F_X^{-1}(U_k), \qquad k = 1, 2, \ldots, n, \tag{8.25} \end{align}

are then a random sample of the random variable $X$. Conversely, if $X_1, X_2, \ldots, X_n$ are a random sample of the random variable $X$ and if the distribution function $F_X(\cdot)$ is continuous, then the random variables
\begin{align} U_k = F_X(X_k), \qquad k = 1, 2, \ldots, n, \tag{8.26} \end{align}

are a random sample of a random variable $U$, which is uniformly distributed on the interval 0 to 1.

The transformation of a random variable $X$ into the uniformly distributed random variable $U = F_X(X)$ is called the probability integral transformation . It plays an important role in the modern theory of goodness-of-fit tests for distribution functions; see T. W. Anderson and D. A. Darling, “Asymptotic theory of certain ‘goodness of fit’ criteria based on stochastic processes,” Annals of Mathematical Statistics, Vol. 23 (1952), pp. 193–212.
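The converse half of Theorem 8A can be checked by simulation as well; the sketch below (with illustrative normal parameters) applies $F_X$ to normal samples and tests that the results behave like a uniform sample:

```python
import random
from statistics import NormalDist

# Probability integral transformation: if X is normal(m, s), then
# U = F_X(X) should be uniform on (0, 1). Parameters are illustrative.
nd = NormalDist(3.0, 2.0)
random.seed(5)
us = [nd.cdf(random.gauss(3.0, 2.0)) for _ in range(200_000)]

assert abs(sum(us) / len(us) - 0.5) < 0.005          # uniform mean is 1/2
assert abs(sum(u <= 0.3 for u in us) / len(us) - 0.3) < 0.01
```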

Exercises

8.1 . Let have a distribution with parameters and . Show that has a distribution with parameters and .

8.2 . The temperature $F$ of a certain object, recorded in degrees Fahrenheit, obeys a normal probability law with mean 98.6 and variance 2. The temperature $C$ measured in degrees centigrade is related to $F$ by $C = \frac{5}{9}(F - 32)$. Describe the probability law of $C$.

8.3 . The magnitude $V$ of the velocity of a molecule with mass $m$ in a gas at absolute temperature $T$ is a random variable, which, according to the kinetic theory of gas, possesses the Maxwell distribution with parameter $\alpha = \sqrt{m/kT}$, in which $k$ is Boltzmann’s constant. Find and sketch the probability density function of the kinetic energy $E = \frac{1}{2} m V^2$ of a molecule. Describe in words the probability law of $E$.

 

Answer

$f_E(y) = \dfrac{2}{\sqrt{\pi}}\, (kT)^{-3/2}\, \sqrt{y}\; e^{-y/kT}$ for $y > 0$; $= 0$ otherwise.

 

$E$ obeys a gamma probability law with parameters $r = \tfrac{3}{2}$ and $\lambda = 1/kT$.

8.4 . A hardware store discovers that the number $N$ of electric toasters it sells in a week obeys a Poisson probability law with mean 10. The profit on each toaster sold is \$2. If at the beginning of the week 10 toasters are in stock, the profit $Y$ from the sale of toasters during the week is $Y = 2 \min(N, 10)$. Describe the probability law of $Y$.

8.5 . Find the probability density function of , in which is uniformly distributed on to .

 

Answer

for otherwise.

 

8.6 . Find the probability density function of the random variable , in which and are known constants and is a random variable uniformly distributed on the interval to , in which (i) is a constant such that , (ii) for some integer .

8.7 . Find the probability density function of $e^X$, in which $X$ is normally distributed with parameters $m$ and $\sigma$. The random variable $e^X$ is said to have a lognormal distribution with parameters $m$ and $\sigma$. (The importance and usefulness of the lognormal distribution is discussed by J. Aitchison and J. A. C. Brown, The Lognormal Distribution , Cambridge University Press, 1957.)

 

Answer

$f_{e^X}(y) = \dfrac{1}{y \sigma \sqrt{2\pi}} \exp\!\left(-\dfrac{(\log y - m)^2}{2\sigma^2}\right)$ for $y > 0$; $= 0$ otherwise.

 

In exercises 8.8 to 8.11 let $X$ be uniformly distributed on (a) the interval 0 to 1, (b) the interval -1 to 1. Find and sketch the probability density function of each of the random variables given.

8.8 . (i) , (ii) .

8.9 . (i) , (ii) .

 

Answer

8.9 (i) , (ii) .

 

8.10 . (i) , (ii) .

8.11 . (i) , (ii) .

 

Answer

(a): (i) for ; =0 otherwise; (ii) for ; otherwise; (b) for ; =0 otherwise.

 

In exercises 8.12 to 8.15 let $X$ be normally distributed with parameters $m$ and $\sigma$. Find and sketch the probability density functions of the random variables given.

8.12 . (i) , (ii) .

8.13 . (i) , (ii) .

 

Answer

(i) for otherwise; (ii) for otherwise.

 

8.14 . (i) , (ii) .

8.15 . (i) , (ii) .

 

Answer

(i) where for ; otherwise; (ii) for otherwise.

 

8.16 . At time $t = 0$ a particle is located at the point $x = 0$ on an $x$-axis. At a time $T$ randomly selected from the interval 0 to 1, the particle is suddenly given a velocity $v$ in the positive $x$-direction. For any time $t$ let $X(t)$ denote the position of the particle at time $t$. Then $X(t) = 0$, if $t < T$, and $X(t) = v(t - T)$, if $t \ge T$. Find and sketch the distribution function of the random variable $X(t)$ for any given time $t$.

In exercises 8.17 to 8.20 suppose that the amplitude $X(t)$ at a time $t$ of the signal emitted by a certain random signal generator is known to be a random variable (a) uniformly distributed over the interval -1 to 1, (b) normally distributed with parameters $m$ and $\sigma$, (c) Rayleigh distributed with parameter $\alpha$.

8.17 . The waveform is passed through a squaring circuit; the output of the squaring circuit at time $t$ is assumed to be given by $Y(t) = X^2(t)$. Find and sketch the probability density function of $Y(t)$ for any time $t$.

 

Answer

(a) for otherwise; for otherwise; (c) for otherwise.

 

8.18 . The waveform is passed through a rectifier, giving as its output $Y(t) = |X(t)|$. Describe the probability law of $Y(t)$ for any time $t$.

8.19 . The waveform is passed through a half-wave rectifier, giving as its output $Y(t) = X^+(t)$, the positive part of $X(t)$. Describe the probability law of $Y(t)$ for any $t$.

 

Answer

Distribution function $F_{Y(t)}(y)$:

 

(a) 0 for ; for ; for ; 1 for ; (b) 0 for ;

for for ; (c) 0 for for .

8.20 . The waveform is passed through a clipper, giving as its output $Y(t)$, where $Y(t) = 1$ or 0, depending on whether $X(t) \ge 0$ or $X(t) < 0$. Find and sketch the probability mass function of $Y(t)$ for any $t$.

8.21 . Prove that the function given in (8.12) is a probability density function. Does the fact that the function is unbounded cause any difficulty?