Triangular form

It is now quite easy to prove the easiest one of the so-called canonical form theorems. Our assumption about the scalar field (namely, that it is algebraically closed) is still in force.

Theorem 1. If $A$ is any linear transformation on an $n$-dimensional vector space $\mathcal{V}$, then there exist $n+1$ subspaces $\mathcal{M}_0, \mathcal{M}_1, \ldots, \mathcal{M}_n$ with the following properties:

  (i) each $\mathcal{M}_j$ ($j = 0, 1, \ldots, n$) is invariant under $A$,
  (ii) the dimension of $\mathcal{M}_j$ is $j$,
  (iii) $\mathcal{M}_{j-1} \subset \mathcal{M}_j$ ($j = 1, \ldots, n$).

Proof. If $n = 0$ or $n = 1$, the result is trivial; we proceed by induction, assuming that the statement is correct for $n - 1$. Consider the dual transformation $A'$ on $\mathcal{V}'$; since it has at least one proper vector, say $y$, there exists a one-dimensional subspace $\mathcal{N}$ invariant under it, namely, the set of all multiples of $y$. Let us denote by $\mathcal{M}$ the annihilator (in $\mathcal{V} = \mathcal{V}''$) of $\mathcal{N}$, $\mathcal{M} = \mathcal{N}^0$; then $\mathcal{M}$ is an $(n-1)$-dimensional subspace of $\mathcal{V}$, and $\mathcal{M}$ is invariant under $A$. Consequently we may consider $A$ as a linear transformation on $\mathcal{M}$ alone, and we may find $\mathcal{M}_0, \mathcal{M}_1, \ldots, \mathcal{M}_{n-1}$ satisfying the conditions (i), (ii), (iii). We write $\mathcal{M}_n = \mathcal{V}$, and we are done. ◻

The chief interest of this theorem comes from its matricial interpretation. Since $\mathcal{M}_1$ is one-dimensional, we may find in it a vector $x_1 \neq 0$. Since $\mathcal{M}_1 \subset \mathcal{M}_2$, it follows that $x_1$ is also in $\mathcal{M}_2$, and since $\mathcal{M}_2$ is two-dimensional, we may find in it a vector $x_2$ such that $x_1$ and $x_2$ span $\mathcal{M}_2$. We proceed in this way by induction, choosing vectors $x_1, \ldots, x_n$ so that $x_1, \ldots, x_j$ lie in $\mathcal{M}_j$ and span $\mathcal{M}_j$ for $j = 1, \ldots, n$. We obtain finally a basis $\mathcal{X} = \{x_1, \ldots, x_n\}$ in $\mathcal{V}$; let us compute the matrix $[A] = (\alpha_{ij})$ of $A$ in this coordinate system. Since $x_j$ is in $\mathcal{M}_j$ and since $\mathcal{M}_j$ is invariant under $A$, it follows that $Ax_j$ must be a linear combination of $x_1, \ldots, x_j$. Hence in the expression $$Ax_j = \sum_i \alpha_{ij} x_i$$ the coefficient of $x_i$ must vanish whenever $i > j$; in other words, $i > j$ implies $\alpha_{ij} = 0$. Hence the matrix of $A$ has the triangular form $$[A] = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\ 0 & \alpha_{22} & \cdots & \alpha_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{nn} \end{pmatrix}.$$ It is clear from this representation that $$\det(A - \lambda) = \prod_j (\alpha_{jj} - \lambda)$$ for every $\lambda$, so that the $\alpha_{jj}$ are the proper values of $A$, appearing on the main diagonal of $[A]$ with the proper multiplicities. We sum up as follows.

Theorem 2. If $A$ is a linear transformation on an $n$-dimensional vector space $\mathcal{V}$, then there exists a basis $\mathcal{X}$ in $\mathcal{V}$ such that the matrix $[A; \mathcal{X}]$ is triangular; or, equivalently, if $[A]$ is any matrix, there exists a non-singular matrix $[B]$ such that $[B]^{-1}[A][B]$ is triangular.
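As a concrete illustration of Theorem 2, the following Python sketch (the $2 \times 2$ matrix is my own arbitrary example, and the proper-vector formula assumes its upper-right entry is nonzero) triangularizes a matrix by the method of the proof: take a proper vector as the first basis vector, complete it to a basis, and conjugate.

```python
import cmath

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Arbitrary 2x2 example (an illustrative choice, not from the text).
A = [[2.0, 1.0], [1.0, 2.0]]
a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]

# Proper values: roots of the characteristic polynomial t^2 - (a+d)t + (ad-bc).
disc = cmath.sqrt((a + d) ** 2 - 4 * (a * d - b * c))
lam1, lam2 = ((a + d) + disc) / 2, ((a + d) - disc) / 2

# x1 = a proper vector belonging to lam1 (this formula assumes b != 0);
# x2 = any vector completing x1 to a basis.
x1 = (b, lam1 - a)
x2 = (1.0, 0.0)

# Change of basis: the columns of B are x1 and x2.
B = [[x1[0], x2[0]], [x1[1], x2[1]]]
detB = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[B[1][1] / detB, -B[0][1] / detB],
        [-B[1][0] / detB, B[0][0] / detB]]

T = matmul(Binv, matmul(A, B))   # the matrix of A in the basis {x1, x2}
print(T)   # lower-left entry vanishes; the diagonal carries the proper values
```

For larger matrices the same induction applies: restrict $A$ to the invariant subspace spanned by the basis vectors found so far and repeat.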

The triangular form is useful for proving many results about linear transformations. It follows from it, for example, that for any polynomial $p$, the proper values of $p(A)$, including their algebraic multiplicities, are precisely the numbers $p(\lambda)$, where $\lambda$ runs through the proper values of $A$.
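This claim can be checked directly on a triangular matrix, since sums and products of triangular matrices are again triangular and their diagonals combine entrywise. A minimal Python sketch (the matrix and the polynomial are arbitrary illustrative choices):

```python
def matmul(X, Y):
    """Multiply two n x n matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# An upper-triangular example; its proper values are the diagonal entries.
A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
n = len(A)
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# Apply the polynomial p(t) = t^2 - 3t + 2 to the transformation A.
A2 = matmul(A, A)
pA = [[A2[i][j] - 3 * A[i][j] + 2 * I[i][j] for j in range(n)] for i in range(n)]

# p(A) is again triangular, and its diagonal entry in place j is p(alpha_jj).
p = lambda t: t * t - 3 * t + 2
print([pA[j][j] for j in range(n)], [p(A[j][j]) for j in range(n)])
```

Since every matrix is similar to a triangular one (Theorem 2), and similarity changes neither proper values nor their multiplicities, the special case computed here gives the general statement.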

A large part of the theory of linear transformations is devoted to improving the triangularization result just obtained. The best thing a matrix $(\alpha_{ij})$ can be is not triangular but diagonal (that is, $\alpha_{ij} = 0$ unless $i = j$); if a linear transformation $A$ is such that its matrix with respect to a suitable coordinate system is diagonal, we shall call the transformation diagonable.
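Not every transformation is diagonable, even over an algebraically closed field. A quick Python check on the standard triangular example $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ (my own choice of example): its only proper value is $1$, so a diagonal form would have to be the identity $1$, and then $A = B \cdot 1 \cdot B^{-1} = 1$; but $A \neq 1$, as the computation of $N = A - 1$ shows.

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]   # triangular, with 1 as its only proper value
I = [[1, 0], [0, 1]]

# If A were diagonable it would be similar to I, hence equal to I.
N = [[A[i][j] - I[i][j] for j in range(2)] for i in range(2)]   # N = A - 1
print(N != [[0, 0], [0, 0]], matmul(N, N) == [[0, 0], [0, 0]])  # True True
```

The same computation shows $N \neq 0$ while $N^2 = 0$: the failure of diagonability is exactly the presence of this nilpotent part.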

EXERCISES

Exercise 1. Interpret each of the following matrices as a linear transformation on a complex coordinate space of the appropriate dimension and, in each case, find a basis of that space such that the matrix of the transformation with respect to that basis is triangular.

  1. .
  2. .
  3. .
  4. .
  5. .
  6. .

Exercise 2. Two commutative linear transformations on a finite-dimensional vector space over an algebraically closed field can be simultaneously triangularized. In other words, if $AB = BA$, then there exists a basis $\mathcal{X}$ such that both $[A; \mathcal{X}]$ and $[B; \mathcal{X}]$ are triangular. (Hint: to imitate the proof of Theorem 1, it is desirable to find a subspace of $\mathcal{V}$ invariant under both $A$ and $B$. With this in mind, consider any proper value $\lambda$ of $A$ and examine the set of all solutions of $Ax = \lambda x$ for the role of $\mathcal{M}$.)
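The idea behind the hint can be watched in a small Python example (the commuting pair below is my own illustrative choice; note $B = A - 2 \cdot 1$, so the two certainly commute): a proper vector of $A$ is automatically a proper vector of $B$ here, and completing it to a basis triangularizes both matrices at once.

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]
B = [[0, 1], [1, 0]]                   # B = A - 2*1, so AB = BA
assert matmul(A, B) == matmul(B, A)    # the pair commutes

# x1 = (1, 1) is a common proper vector: A x1 = 3 x1 and B x1 = 1 x1.
# Complete it to the basis {x1, x2} with x2 = (1, 0); P has columns x1, x2.
P = [[1, 1], [1, 0]]
Pinv = [[0, 1], [1, -1]]               # inverse of P (det P = -1)

TA = matmul(Pinv, matmul(A, P))
TB = matmul(Pinv, matmul(B, P))
print(TA, TB)   # both matrices are triangular: lower-left entries vanish
```

In general the common proper vector must itself be produced by the argument of the exercise; the point of the hint is that the solution space of $Ax = \lambda x$ is invariant under $B$, because $AB = BA$ gives $A(Bx) = B(Ax) = \lambda(Bx)$.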

Exercise 3. Formulate and prove the analogues of the results of this section for matrices that are triangular below the diagonal (instead of above it).

Exercise 4. Suppose that $A$ is a linear transformation on an $n$-dimensional vector space. For every alternating $n$-linear form $w$, write $w_A$ for the function defined by $$w_A(x_1, \ldots, x_n) = \sum_{j=1}^{n} w(x_1, \ldots, x_{j-1}, Ax_j, x_{j+1}, \ldots, x_n).$$

Since $w_A$ is an alternating $n$-linear form, and since, in fact, the mapping $w \mapsto w_A$ is a linear transformation on the (one-dimensional) space of such forms, it follows that $w_A = (\operatorname{tr} A) \cdot w$, where $\operatorname{tr} A$ (the trace of $A$) is a scalar. Prove the following assertions.

  (a) $\operatorname{tr}(A + B) = \operatorname{tr} A + \operatorname{tr} B$.
  (b) $\operatorname{tr}(\alpha A) = \alpha \operatorname{tr} A$.
  (c) $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.
  (d) $\operatorname{tr} 1 = n$.
  (e) If the scalar field has characteristic zero and if $E$ is a projection, then $\operatorname{tr} E = \rho(E)$ (the rank of $E$).
  (f) If $(\alpha_{ij})$ is the matrix of $A$ in some coordinate system, then $\operatorname{tr} A = \sum_i \alpha_{ii}$.
  (g) $\operatorname{tr}(A') = \operatorname{tr} A$.
  (h) $\operatorname{tr}(B^{-1}AB) = \operatorname{tr} A$ whenever $B$ is invertible.
  (i) For which permutations $\pi$ of the integers $1, \ldots, k$ is it true that $\operatorname{tr}(A_{\pi(1)} \cdots A_{\pi(k)}) = \operatorname{tr}(A_1 \cdots A_k)$ for all $k$-tuples $(A_1, \ldots, A_k)$ of linear transformations?
  (j) If the field of scalars is algebraically closed, then $\operatorname{tr} A = \sum_j \lambda_j$, where the $\lambda_j$ are the proper values of $A$, counted with their algebraic multiplicities. (For this reason trace is sometimes defined to be $\sum_j \lambda_j$; the most popular procedure is to use (f) as the definition.)
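Two standard trace identities, $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ and the similarity invariance $\operatorname{tr}(B^{-1}AB) = \operatorname{tr} A$, are easy to check numerically; in this Python sketch (the matrices are my own illustrative choices) the trace is computed as the sum of the diagonal entries of a matrix.

```python
def matmul(X, Y):
    """Multiply two n x n matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(X):
    """Trace computed as the sum of the diagonal entries."""
    return sum(X[i][i] for i in range(len(X)))

# Arbitrary 2x2 examples (illustrative choices, not from the text).
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

print(tr(matmul(A, B)), tr(matmul(B, A)))   # equal: tr(AB) = tr(BA)

# Similarity invariance: tr(B^{-1} A B) = tr(A) for invertible B.
detB = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[B[1][1] / detB, -B[0][1] / detB],
        [-B[1][0] / detB, B[0][0] / detB]]
print(tr(matmul(Binv, matmul(A, B))), tr(A))
```

The similarity invariance is what makes the diagonal-sum recipe a legitimate definition of trace: it gives the same scalar in every coordinate system.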

Exercise 5. 

  (a) Suppose that the scalar field has characteristic zero. Prove that if $E_1, \ldots, E_k$ and $E = E_1 + \cdots + E_k$ are projections, then $E_i E_j = 0$ whenever $i \neq j$. (Hint: from the fact that $\operatorname{tr} E = \operatorname{tr} E_1 + \cdots + \operatorname{tr} E_k$ conclude that the range of $E$ is the direct sum of the ranges of $E_1, \ldots, E_k$.)
  (b) If $A_1, \ldots, A_k$ are linear transformations on an $n$-dimensional vector space, and if $A_1 + \cdots + A_k = 1$ and $\rho(A_1) + \cdots + \rho(A_k) \leq n$, then each $A_j$ is a projection and $A_i A_j = 0$ whenever $i \neq j$. (Start with $k = 2$ and proceed by induction; use a direct sum argument as in (a).)
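The conclusion of (a) can be seen in a small example: a Python sketch with two non-orthogonal projections (the matrices are my own illustrative choice) whose sum is the identity; their mutual products vanish, as the exercise predicts.

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two (non-orthogonal) projections chosen so that E1 + E2 = 1.
E1 = [[1, 1], [0, 0]]
E2 = [[0, -1], [0, 1]]

assert matmul(E1, E1) == E1 and matmul(E2, E2) == E2   # both are idempotent
S = [[E1[i][j] + E2[i][j] for j in range(2)] for i in range(2)]
assert S == [[1, 0], [0, 1]]                            # their sum is 1

print(matmul(E1, E2), matmul(E2, E1))   # both products are the zero matrix
```

Here the ranges of $E_1$ and $E_2$ are two distinct lines whose direct sum is the whole plane, which is exactly the situation the hint asks you to establish in general.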

Exercise 6. 

  (a) If $A$ is a linear transformation on a finite-dimensional vector space over a field of characteristic zero, and if $\operatorname{tr} A = 0$, then there exists a basis $\mathcal{X}$ such that if $[A; \mathcal{X}] = (\alpha_{ij})$, then $\alpha_{jj} = 0$ for all $j$. (Hint: using the fact that $A$ is not a scalar, prove first that there exists a vector $x$ such that $x$ and $Ax$ are linearly independent. This proves that $\alpha_{11}$ can be made to vanish; proceed by induction.)
  (b) Show that if the characteristic is not zero, the conclusion of (a) is false. (Hint: if the characteristic is $2$, compute $AB - BA$, where $[A] = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $[B] = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$.)
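The failure in nonzero characteristic can be checked numerically. In this Python sketch (the two matrices are my own illustrative choice), arithmetic is done with integers reduced mod $2$: the commutator $AB - BA$ always has trace zero, yet over characteristic $2$ it comes out to be the identity, whose diagonal is $(1, 1)$ in every coordinate system.

```python
def matmul2(X, Y):
    """Multiply 2x2 matrices with entries reduced mod 2 (the field Z/2Z)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % 2 for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]

AB, BA = matmul2(A, B), matmul2(B, A)
C = [[(AB[i][j] - BA[i][j]) % 2 for j in range(2)] for i in range(2)]
print(C)   # the identity matrix mod 2

# tr C = 1 + 1 = 0 in characteristic 2, but P^{-1} C P = C for every
# invertible P, so no choice of basis gives C a vanishing diagonal.
print((C[0][0] + C[1][1]) % 2)
```

Since the identity commutes with everything, it is similar only to itself, so its diagonal can never be made to vanish even though its trace is zero.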