Adjoints of projections

There is one important case in which multiplication does not get turned around, that is, when $(AB)' = A'B'$; namely, the case when $A$ and $B$ commute. We have, in particular, $(A^2)' = (A')^2$, and, more generally, $(p(A))' = p(A')$ for every polynomial $p$. It follows from this that if $E$ is a projection, $E^2 = E$, then so is $E'$. The question arises: what direct sum decomposition is $E'$ associated with?
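These algebraic facts can be spot-checked numerically. A minimal sketch with NumPy (not from the text), identifying the space with $\mathbb{R}^2$ so that the adjoint of a matrix is its transpose; the particular matrices are chosen arbitrarily for illustration:

```python
import numpy as np

# An oblique projection: E projects onto span{(1, 0)} along span{(1, 1)},
# so E is idempotent but not symmetric.
E = np.array([[1.0, -1.0],
              [0.0,  0.0]])
assert np.allclose(E @ E, E)             # E is a projection
assert np.allclose(E.T @ E.T, E.T)       # hence so is its adjoint E'

# If A and B commute, so do their adjoints: (AB)' = (BA)' = A'B'.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = A @ A + 2 * A                        # a polynomial in A commutes with A
assert np.allclose(A @ B, B @ A)
assert np.allclose(A.T @ B.T, B.T @ A.T)

# (p(A))' = p(A') for the polynomial p(t) = t^2 + 2t.
p = lambda M: M @ M + 2 * M
assert np.allclose(p(A).T, p(A.T))
```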

Theorem 1. If $E$ is the projection on $M$ along $N$, then $E'$ is the projection on $N^0$ along $M^0$.

Proof. We know already that $V = M \oplus N$ and $V' = N^0 \oplus M^0$ (cf. Section: Dual of a direct sum). It is necessary only to find the subspaces consisting of the solutions of $E'y = y$ and of $E'y = 0$. This we do in four steps.

  1. If $y$ is in $N^0$, then, for all $x$, $[x, E'y] = [Ex, y] = [x, y]$ (since $x - Ex$ is in $N$), so that $E'y = y$.
  2. If $E'y = y$, then, for all $x$ in $N$, $[x, y] = [x, E'y] = [Ex, y] = 0$, so that $y$ is in $N^0$.
  3. If $y$ is in $M^0$, then, for all $x$, $[x, E'y] = [Ex, y] = 0$ (since $Ex$ is in $M$), so that $E'y = 0$.
  4. If $E'y = 0$, then, for all $x$ in $M$, $[x, y] = [Ex, y] = [x, E'y] = 0$, so that $y$ is in $M^0$.

Steps (1) and (2) together show that the set of solutions of $E'y = y$ is precisely $N^0$; steps (3) and (4) together show that the set of solutions of $E'y = 0$ is precisely $M^0$. This concludes the proof of the theorem. ◻
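Theorem 1 can be illustrated concretely (a sketch not taken from the text): in $\mathbb{R}^2$ with the pairing $[x, y] = y \cdot x$, the annihilator of a line is the line of vectors orthogonal to it, and the adjoint of a matrix is its transpose.

```python
import numpy as np

# E: projection on M = span{(1, 0)} along N = span{(1, 1)} in R^2.
E = np.array([[1.0, -1.0],
              [0.0,  0.0]])

# Under the pairing [x, y] = y . x, the annihilator of span{v} is the
# line of vectors orthogonal to v.
n_perp = np.array([1.0, -1.0])   # spans N^0 (annihilates (1, 1))
m_perp = np.array([0.0,  1.0])   # spans M^0 (annihilates (1, 0))

# E' = E^T is the projection on N^0 along M^0:
assert np.allclose(E.T @ n_perp, n_perp)        # E'y = y for y in N^0
assert np.allclose(E.T @ m_perp, np.zeros(2))   # E'y = 0 for y in M^0
```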

Theorem 2. If $M$ is invariant under $A$, then $M^0$ is invariant under $A'$; if $A$ is reduced by the pair $(M, N)$, then $A'$ is reduced by the pair $(N^0, M^0)$.

Proof. We shall prove only the first statement; the second one clearly follows from it. We first observe the following identity, valid for any three linear transformations $A$, $E$, and $F$, subject to the relation $E + F = 1$:

$$FAE = AE - EAE. \tag{1}$$

(Compare this with the proof of Section: Projections and invariance, Theorem 2.) Let $E$ be any projection on $M$; by Section: Projections and invariance, Theorem 1, the right member of (1) vanishes, and, therefore, so does the left member. By taking adjoints, we obtain $E'A'F' = 0$, that is, $(1 - F')A'F' = 0$; since, by Theorem 1 of the present section, $F'$ is a projection on $M^0$, another application of Section: Projections and invariance, Theorem 1 shows that $M^0$ is invariant under $A'$, and the proof of Theorem 2 is complete. (Here is an alternative proof of the first statement of Theorem 2, a proof that does not make use of the fact that $V$ is the direct sum of $M$ and some other subspace. If $y$ is in $M^0$, then $[x, A'y] = [Ax, y] = 0$ for all $x$ in $M$, and therefore $A'y$ is in $M^0$. The only advantage of the algebraic proof given above over this simple geometric proof is that the former prepares the ground for future work with projections.) ◻

We conclude our treatment of adjoints by discussing their matrices; this discussion is intended to illuminate the entire theory and to enable the reader to construct many examples.

We shall need the following fact: if $X = \{x_1, \ldots, x_n\}$ is any basis in the $n$-dimensional vector space $V$, if $X' = \{y_1, \ldots, y_n\}$ is the dual basis in $V'$, and if the matrix of the linear transformation $A$ in the coordinate system $X$ is $[A] = (\alpha_{ij})$, then

$$\alpha_{ij} = [Ax_j, y_i]. \tag{2}$$

This follows from the definition of the matrix of a linear transformation; since $Ax_j = \sum_k \alpha_{kj} x_k$, we have

$$[Ax_j, y_i] = \sum_k \alpha_{kj} [x_k, y_i] = \alpha_{ij}.$$

To keep things straight in the applications, we rephrase formula (2) verbally, thus: to find the $(i, j)$ element of $[A]$ in the basis $X$, apply $A$ to the $j$-th element of $X$ and then take the value of the $i$-th linear functional (in $X'$) at the vector so obtained.
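This recipe can be carried out in coordinates (a sketch under the usual identification, not from the text): if the basis vectors are the columns of an invertible matrix, the dual basis functionals are the rows of its inverse, and the matrix of $A$ in that basis is the familiar change-of-basis conjugate.

```python
import numpy as np

# A basis X of R^2 (columns) and its dual basis (rows of X^{-1}),
# so that [x_k, y_i] = Y[i] @ X[:, k] = delta_{ik}.
X = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Y = np.linalg.inv(X)
assert np.allclose(Y @ X, np.eye(2))     # duality of the two bases

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_mat = Y @ A @ X                        # the matrix [A] in the basis X

# Formula (2): alpha_ij = [A x_j, y_i] -- apply A to the j-th basis
# vector, then evaluate the i-th dual functional on the result.
for i in range(2):
    for j in range(2):
        assert np.isclose(A_mat[i, j], Y[i] @ (A @ X[:, j]))
```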

It is now very easy to find the matrix $[A'] = (\beta_{ij})$ of $A'$ in the coordinate system $X'$; we merely follow the recipe just given. In other words, we consider $A'y_j$, and take the value of the $i$-th linear functional in $X''$ (that is, of $x_i$ considered as a linear functional on $V'$) at this vector; the result is that

$$\beta_{ij} = [x_i, A'y_j] = [Ax_i, y_j] = \alpha_{ji}.$$

Since $\beta_{ij} = \alpha_{ji}$, so that $[A']$ is obtained from $[A]$ by interchanging rows and columns, this matrix is called the transpose of $[A]$.
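A coordinate check of this result (a sketch, not from the text): represent a functional $y$ by a row vector $r$, so that $[x, y] = rx$; then $A'$ sends $r$ to $rA$, and computing the matrix of $A'$ column by column in the dual basis recovers the transpose.

```python
import numpy as np

X = np.array([[1.0, 1.0],        # basis of R^2 (columns)
              [0.0, 1.0]])
Y = np.linalg.inv(X)             # dual basis: functionals y_j = rows of X^{-1}
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_mat = Y @ A @ X                # [A] in the coordinate system X

# A' sends the functional with row vector r to the functional r A, since
# [x, A'y] = [Ax, y] = r A x.  The coordinates of a functional r in the
# dual basis are r X, so column j of [A'] is (Y[j] @ A) @ X.
A_adj_mat = np.column_stack([(Y[j] @ A) @ X for j in range(2)])

assert np.allclose(A_adj_mat, A_mat.T)   # [A'] is the transpose of [A]
```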

Observe that our results on the relation between $E$ and $E'$ (where $E$ is a projection) could also have been derived by using the facts about the matricial representation of a projection together with the present result on the matrices of adjoint transformations.
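To sketch that alternative derivation numerically (an illustration, not from the text): in a basis adapted to the decomposition, the matrix of a projection is $\operatorname{diag}(1, \ldots, 1, 0, \ldots, 0)$, which is its own transpose; so the matrix of $E'$ in the dual basis is the same diagonal matrix, and $E'$ projects on the span of the dual vectors of the first block.

```python
import numpy as np

# Basis adapted to R^2 = M (+) N: the first column spans M = span{(1, 0)},
# the second spans N = span{(1, 1)}.
X = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Y = np.linalg.inv(X)             # dual basis functionals as rows
E = np.array([[1.0, -1.0],       # projection on M along N
              [0.0,  0.0]])

# In the adapted basis the matrix of E is diag(1, 0), its own transpose;
# hence [E'] = diag(1, 0) in the dual basis, and E' is the projection on
# the span of y_1 (which is N^0) along the span of y_2 (which is M^0).
E_mat = Y @ E @ X
assert np.allclose(E_mat, np.diag([1.0, 0.0]))
assert np.allclose(E_mat.T, E_mat)
assert np.allclose(E.T @ Y[0], Y[0])          # E' fixes y_1, spanning N^0
assert np.allclose(E.T @ Y[1], np.zeros(2))   # E' kills y_2, spanning M^0
```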