Functions of transformations

One of the most useful concepts in the theory of normal transformations on unitary spaces is that of a function of a transformation. If $A$ is a normal transformation with spectral form $A = \sum_j \alpha_j E_j$ (for this discussion we temporarily assume that the underlying vector space is a unitary space), and if $f$ is an arbitrary complex-valued function defined at least at the points $\alpha_j$, then we define a linear transformation $f(A)$ by $$f(A) = \sum_j f(\alpha_j) E_j.$$ Since for polynomials $p$ (and even for rational functions) we have already seen that our earlier definition of $p(A)$ yields, if $A$ is normal, $p(A) = \sum_j p(\alpha_j) E_j$, we see that the new notion is a generalization of the old one. The advantage of considering $f(A)$ for arbitrary functions $f$ is for us largely notational; it introduces nothing conceptually new. Indeed, for an arbitrary $f$, we may write $\beta_j = f(\alpha_j)$, and then we may find a polynomial $p$ that at the finite set of distinct complex numbers $\alpha_1, \ldots, \alpha_r$ takes, respectively, the values $\beta_1, \ldots, \beta_r$. With this polynomial $p$ we have $f(A) = p(A)$, so that the class of transformations defined by the formation of arbitrary functions is nothing essentially new; it only saves the trouble of constructing a polynomial to fit each special case. Thus, for example, if, for each complex number $\lambda$, we write $f_\lambda(\zeta) = 1$ or $0$ according as $\zeta = \lambda$ or $\zeta \neq \lambda$, then $f_\lambda(A)$ is the perpendicular projection on the subspace of solutions of $Ax = \lambda x$.
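For concreteness, one polynomial that accomplishes this interpolation (the familiar Lagrange formula; the explicit expression is supplied here only as an illustration and is not essential to the argument) is

$$p(\zeta) = \sum_{j=1}^{r} \beta_j \prod_{i \neq j} \frac{\zeta - \alpha_i}{\alpha_j - \alpha_i};$$

each product vanishes at every $\alpha_i$ with $i \neq j$ and takes the value $1$ at $\alpha_j$, so that $p(\alpha_j) = \beta_j$ for all $j$.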

We observe that if $g(\zeta) = 1/\zeta$, then (assuming of course that $g$ is defined for all $\alpha_j$, that is, that $A$ is invertible) $g(A) = A^{-1}$, and if $h(\zeta) = \bar{\zeta}$, then $h(A) = A^*$. These statements imply that if $r$ is an arbitrary rational function of $\zeta$ and $\bar{\zeta}$, we obtain $r(A)$ by the replacements $\zeta \to A$, $\bar{\zeta} \to A^*$, and $1/\zeta \to A^{-1}$. The symbol $f(A)$ is, however, defined for much more general functions, and in the sequel we shall feel free to make use of expressions such as $\sqrt{A}$ and $e^{iA}$.
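To illustrate (a sample computation, assuming only the spectral form above): if $A = \sum_j \alpha_j E_j$ is Hermitian, so that every $\alpha_j$ is real, then

$$e^{iA} = \sum_j e^{i\alpha_j} E_j, \qquad (e^{iA})^* e^{iA} = \sum_j \overline{e^{i\alpha_j}}\, e^{i\alpha_j} E_j = \sum_j E_j = 1$$

(the cross terms vanish because $E_j E_k = 0$ whenever $j \neq k$), so that $e^{iA}$ is unitary; this is the kind of use to which the symbol $f(A)$ will be put.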

A particularly important function is the square root of positive transformations. We consider $f(\zeta) = \sqrt{\zeta}$, defined for all real $\zeta \geq 0$, as the positive square root of $\zeta$, and for every positive transformation $A$ we write $\sqrt{A} = f(A) = \sum_j \sqrt{\alpha_j}\, E_j$. (Recall that $\alpha_j \geq 0$ for all $j$. The discussion that follows applies to both real and complex inner product spaces.) It is clear that $\sqrt{A} \geq 0$ and that $(\sqrt{A})^2 = A$; we should like to investigate the extent to which these properties characterize $\sqrt{A}$. At first glance it may seem hopeless to look for any uniqueness, since if we consider $B = \sum_j \pm\sqrt{\alpha_j}\, E_j$, with an arbitrary choice of sign in each place, we still have $B^2 = A$. The transformation $\sqrt{A}$ that we constructed, however, was positive, and we can show that this additional property guarantees uniqueness. In other words: if $B \geq 0$ and $B^2 = A$, then $B = \sqrt{A}$. To prove this, let $B = \sum_k \beta_k F_k$ be the spectral form of $B$; then $$A = B^2 = \sum_k \beta_k^2 F_k.$$ Since the $\beta_k$ are distinct and positive, so also are the $\beta_k^2$; the uniqueness of the spectral form of $A$ implies that each $\beta_k^2$ is equal to some $\alpha_j$ (and vice versa), and that the corresponding $E$'s and $F$'s are equal. By a permutation of the indices we may therefore achieve $\beta_j^2 = \alpha_j$ and $F_j = E_j$ for all $j$, so that $B = \sum_j \sqrt{\alpha_j}\, E_j = \sqrt{A}$, as was to be shown.
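As a worked illustration (the particular matrix is chosen here only as an example), let

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = 3E_1 + E_2, \qquad E_1 = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad E_2 = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix};$$

the proper values $3$ and $1$ are positive, and

$$\sqrt{A} = \sqrt{3}\,E_1 + E_2 = \frac{1}{2}\begin{pmatrix} \sqrt{3}+1 & \sqrt{3}-1 \\ \sqrt{3}-1 & \sqrt{3}+1 \end{pmatrix},$$

which is positive and satisfies $(\sqrt{A})^2 = A$, as direct multiplication confirms.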

There are several important applications of the existence of square roots for positive operators; we shall now give two of them.

First: we recall that in Section: Positive transformations we mentioned three possible definitions of a positive transformation $A$, and adopted the weakest one, namely, that $A$ is self-adjoint and $(Ax, x) \geq 0$ for all $x$. The strongest of the three possible definitions was that we could write $A$ in the form $A = B^2$ for some self-adjoint $B$. We point out that the result of this section concerning square roots implies that the (seemingly) weakest one of our conditions implies and is therefore equivalent to the strongest. (In fact, we can even obtain a unique positive square root $B = \sqrt{A}$.)

Second: in Section: Positive transformations we stated also that if $A$ and $B$ are positive and commutative, then $AB$ is also positive; we can now give an easy proof of this assertion. Since $\sqrt{A}$ and $\sqrt{B}$ are functions of (polynomials in) $A$ and $B$ respectively, the commutativity of $A$ and $B$ implies that $\sqrt{A}$ and $\sqrt{B}$ commute with each other; consequently $$AB = \sqrt{A}\,\sqrt{A}\,\sqrt{B}\,\sqrt{B} = (\sqrt{A}\,\sqrt{B})^2.$$ Since $\sqrt{A}$ and $\sqrt{B}$ are self-adjoint and commutative, their product $\sqrt{A}\,\sqrt{B}$ is self-adjoint and therefore its square is positive.
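The commutativity hypothesis cannot be dropped; the following pair of matrices (an illustrative counterexample, not part of the original assertion) shows why. The matrices

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$

are both positive, but they do not commute, and

$$AB = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$$

is not even self-adjoint, let alone positive.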

Spectral theory also makes it quite easy to characterize the matrix (with respect to an arbitrary orthonormal coordinate system) of a positive transformation $A$. Since $\det A$ is the product of the proper values of $A$, it is clear that $A \geq 0$ implies $\det A \geq 0$. (The discussion in Section: Multiplicity applies directly to complex inner product spaces only; the appropriate modification needed for the discussion of self-adjoint transformations on possibly real spaces is, however, quite easy to supply.) If we consider the defining property of positiveness expressed in terms of the matrix $[\alpha_{ij}]$ of $A$, that is, $$\sum_i \sum_j \alpha_{ij}\, \xi_j \bar{\xi}_i \geq 0,$$ we observe that the last expression remains positive if we restrict the coordinates $\xi_i$ by requiring that certain ones of them vanish. In terms of the matrix this means that if we cross out the columns numbered $j_1, \ldots, j_k$, say, and cross out also the rows bearing the same numbers, the remaining small matrix is still positive, and consequently so is its determinant. This fact is usually expressed by saying that the principal minors of the determinant of a positive matrix are positive. The converse is true. The coefficient of the $(n-k)$-th power of $\lambda$ in the characteristic polynomial of $A$ is (except for sign) the sum of all principal minors of $k$ rows and columns. The sign is alternately plus and minus; this implies that if $A$ has positive principal minors and is self-adjoint (so that the zeros of the characteristic polynomial are known to be real), then the proper values of $A$ are positive. Since the self-adjoint character of a matrix is ascertainable by observing whether or not it is (Hermitian) symmetric ($\alpha_{ij} = \bar{\alpha}_{ji}$), our comments reduce the problem of finding out whether or not a matrix is positive to a finite number of elementary computations.
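By way of illustration (a sample verification of the criterion just described), consider the symmetric matrix

$$\begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}.$$

Its principal minors of order one are $2$, $2$, $2$; those of order two are $3$, $4$, $3$; and its determinant is $4$. Since the matrix is symmetric and all seven principal minors are positive, the criterion shows that the matrix is positive.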

EXERCISES

Exercise 1. Corresponding to every unitary transformation $U$ there is a Hermitian transformation $A$ such that $U = e^{iA}$.

Exercise 2. Discuss the theory of functions of a normal transformation on a real inner product space.

Exercise 3. If $A \leq B$ and if $C$ is a positive transformation that commutes with both $A$ and $B$, then $AC \leq BC$.

Exercise 4. A self-adjoint transformation has a unique self-adjoint cube root.

Exercise 5. Find all Hermitian cube roots of the matrix

Exercise 6. 

  1. Give an example of a linear transformation $A$ on a finite-dimensional unitary space such that $A$ has no square root.
  2. Prove that every Hermitian transformation on a finite-dimensional unitary space has a square root.
  3. Does every self-adjoint transformation on a finite-dimensional Euclidean space have a square root?

Exercise 7. 

  1. Prove that if $A$ is a positive linear transformation on a finite-dimensional inner product space, then $\rho(A^2) = \rho(A)$ (where $\rho$ denotes rank).
  2. If $A$ is a linear transformation on a finite-dimensional inner product space, is it true that $\rho(A^2) = \rho(A)$?

Exercise 8. If $A \geq 0$ and if $(Ax, x) = 0$ for some vector $x$, then $Ax = 0$.

Exercise 9. If $A \geq 0$, then $|(Ax, y)|^2 \leq (Ax, x)(Ay, y)$ for all $x$ and $y$.

Exercise 10. If the vectors $x_1, \ldots, x_n$ are linearly independent, then their Gramian (the matrix of inner products $[(x_i, x_j)]$) is non-singular.

Exercise 11. Every positive matrix is a Gramian.

Exercise 12. If and are linear transformations on a finite-dimensional inner product space, and if , then . (Hint: the conclusion is trivial if ; if , then is invertible.)

Exercise 13. If a linear transformation on a finite-dimensional inner product space is strictly positive and if , then . (Hint: try first.)

Exercise 14. 

  1. If $A$ is a Hermitian transformation on a finite-dimensional unitary space, then $1 + iA$ is invertible.
  2. If $A$ is positive and invertible and if $B$ is Hermitian, then $A + iB$ is invertible.

Exercise 15. If , then . (Hint: compute and prove thereby that the second factor is invertible whenever .)

Exercise 16. Suppose that $A$ is a self-adjoint transformation on a finite-dimensional inner product space; write $B = \sqrt{A^2}$, $A^+ = \frac{1}{2}(B + A)$, and $A^- = \frac{1}{2}(B - A)$.

  1. Prove that $B$ is the smallest Hermitian transformation that commutes with $A$ and for which both $A \leq B$ and $-A \leq B$. ("Smallest" refers, of course, to the ordering of Hermitian transformations.)
  2. Prove that $A^+$ is the smallest positive transformation that commutes with $A$ and for which $A \leq A^+$.
  3. Prove that $A^-$ is the smallest positive transformation that commutes with $A$ and for which $-A \leq A^-$.
  4. Prove that if $A_1$ and $A_2$ are self-adjoint and commutative, then there exists a smallest self-adjoint transformation $C$ that commutes with both $A_1$ and $A_2$ and for which both $A_1 \leq C$ and $A_2 \leq C$.

Exercise 17. 

  1. If $A$ and $B$ are positive linear transformations on a finite-dimensional unitary space, and if $A^2$ and $B^2$ are unitarily equivalent, then $A$ and $B$ are unitarily equivalent.
  2. Is the real analogue of part 1 true?