Alternating forms

A $k$-linear form $w$ is skew-symmetric if $w(x_{\pi(1)}, \dots, x_{\pi(k)}) = -w(x_1, \dots, x_k)$ for every odd permutation $\pi$ in $S_k$. Equivalently, $w$ is skew-symmetric if $w(x_{\pi(1)}, \dots, x_{\pi(k)}) = (\operatorname{sgn} \pi)\, w(x_1, \dots, x_k)$ for every permutation $\pi$ in $S_k$. (If $w(x_{\pi(1)}, \dots, x_{\pi(k)}) = (\operatorname{sgn} \pi)\, w(x_1, \dots, x_k)$ for all $\pi$, then, in particular, $w(x_{\pi(1)}, \dots, x_{\pi(k)}) = -w(x_1, \dots, x_k)$ whenever $\pi$ is odd. If, conversely, the latter equation holds for all odd $\pi$, then, given an arbitrary $\pi$, factor it into transpositions, say $\pi = \tau_1 \tau_2 \cdots \tau_m$, observe that each transposition reverses the sign of $w$, and, since $\operatorname{sgn} \pi = (-1)^m$, conclude that $w(x_{\pi(1)}, \dots, x_{\pi(k)}) = (-1)^m\, w(x_1, \dots, x_k) = (\operatorname{sgn} \pi)\, w(x_1, \dots, x_k)$, as asserted. This proof makes tacit use of the unproved but easily available fact that if $\sigma$ and $\tau$ are permutations in $S_k$, then $\operatorname{sgn}(\sigma\tau) = (\operatorname{sgn} \sigma)(\operatorname{sgn} \tau)$.) The set of all skew-symmetric $k$-linear forms is a subspace of the space of all $k$-linear forms. To get a non-trivial example of a skew-symmetric bilinear form $w$, let $y_1$ and $y_2$ be linear functionals and write $w(x_1, x_2) = y_1(x_1)\, y_2(x_2) - y_1(x_2)\, y_2(x_1)$. More generally, if $w$ is an arbitrary $k$-linear form, a skew-symmetric $k$-linear form can be obtained from $w$ by forming $\sum_{\pi} (\operatorname{sgn} \pi)\, w(x_{\pi(1)}, \dots, x_{\pi(k)})$, where the summation is extended over all permutations $\pi$ in $S_k$.
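The antisymmetrization construction at the end of the paragraph can be checked numerically. The following Python sketch (all names, functionals, and vectors are illustrative choices, not fixed by the text) builds the bilinear example $y_1(x_1)y_2(x_2) - y_1(x_2)y_2(x_1)$ and then recovers it by summing $(\operatorname{sgn}\pi)\, u(x_{\pi(1)}, x_{\pi(2)})$ over all permutations:

```python
from itertools import permutations

# Two linear functionals on R^2, represented concretely (illustrative choices).
def y1(x): return 2 * x[0] + 3 * x[1]
def y2(x): return -1 * x[0] + 5 * x[1]

# The skew-symmetric bilinear form built from y1 and y2 as in the text.
def w(x1, x2):
    return y1(x1) * y2(x2) - y1(x2) * y2(x1)

def sign(perm):
    # sgn = (-1)^(number of inversions)
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

# General antisymmetrization: sum of (sgn pi) * u(args[pi(1)], ..., args[pi(k)]).
def antisymmetrize(u, args):
    return sum(sign(p) * u(*(args[i] for i in p))
               for p in permutations(range(len(args))))

a, b = (1.0, 0.0), (0.0, 1.0)
# w is skew-symmetric: swapping its arguments reverses the sign.
assert w(a, b) == -w(b, a)

# Antisymmetrizing the (non-skew) product form u recovers w.
def u(x1, x2): return y1(x1) * y2(x2)
alt_u = lambda x1, x2: antisymmetrize(u, (x1, x2))
assert alt_u(a, b) == -alt_u(b, a)
assert alt_u(a, b) == w(a, b)
```

For $k = 2$ the sum has only two terms, $u(x_1, x_2) - u(x_2, x_1)$; the same `antisymmetrize` works for any $k$.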

A $k$-linear form $w$ is called alternating if $w(x_1, \dots, x_k) = 0$ whenever two of the $x_i$'s are equal. (Note that if $k = 1$, then this condition is vacuously satisfied.) The set of all alternating $k$-linear forms is a subspace of the space of all $k$-linear forms. There is an important relation between alternating and skew-symmetric forms.

Theorem 1. Every alternating multilinear form is skew-symmetric.

Proof. Suppose that $w$ is an alternating $k$-linear form, and that $i$ and $j$ are integers, $1 \le i < j \le k$. If $x_1, \dots, x_k$ are vectors, we write $w_0(x_i, x_j) = w(x_1, \dots, x_k)$; if the $x$'s other than $x_i$ and $x_j$ are held fixed (temporarily), then $w_0$ is an alternating bilinear form of its two arguments. Since, by bilinearity, $w_0(x_i + x_j,\, x_i + x_j) = w_0(x_i, x_i) + w_0(x_i, x_j) + w_0(x_j, x_i) + w_0(x_j, x_j)$, and since, by the alternating character of $w_0$, the left side and the two extreme terms of the right side of this equation all vanish, we see that $w_0(x_i, x_j) + w_0(x_j, x_i) = 0$. This, however, says that $w_0(x_j, x_i) = -w_0(x_i, x_j)$, or, since the $x$'s are arbitrary, that $w$ reverses its sign whenever two of its arguments are interchanged. Since every odd permutation is the product of an odd number of transpositions, such as the one interchanging $i$ and $j$, it follows that $w(x_{\pi(1)}, \dots, x_{\pi(k)}) = -w(x_1, \dots, x_k)$ for every odd $\pi$, and the proof of the theorem is complete. ◻
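The key computation in the proof can be traced numerically for a concrete alternating bilinear form on $\mathbb{R}^2$; the form below (the $2 \times 2$ determinant) and the vectors are illustrative choices, not from the text:

```python
# An alternating bilinear form on R^2 (the 2x2 determinant; illustrative).
def w(x, y):
    return x[0] * y[1] - x[1] * y[0]

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

x, y = (2.0, -1.0), (4.0, 3.0)

# Alternating: equal arguments give 0, so the left side and the two
# extreme terms of the expansion all vanish...
assert w(add(x, y), add(x, y)) == 0
assert w(x, x) == 0 and w(y, y) == 0

# ...and bilinearity gives exactly the four-term expansion from the proof:
assert w(add(x, y), add(x, y)) == w(x, x) + w(x, y) + w(y, x) + w(y, y)

# Hence the two cross terms cancel: alternating implies skew-symmetric.
assert w(y, x) == -w(x, y)
```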

The connection between alternating forms and skew-symmetric ones involves one subtle point. Consider the following “proof” of the converse of Theorem 1: if $w$ is a skew-symmetric $k$-linear form, if $i \ne j$, and if $x_1, \dots, x_k$ are vectors such that $x_i = x_j$, then interchanging the $i$-th and $j$-th arguments leaves $w(x_1, \dots, x_k)$ unchanged, since $x_i = x_j$, and at the same time reverses its sign, since $w$ is skew-symmetric; consequently $w(x_1, \dots, x_k) = -w(x_1, \dots, x_k)$, so that $w$ is alternating. This argument is wrong; the trouble is in the inference “if $w = -w$, then $w = 0$.” If we examine that inference in more detail, we find that it is based on the following reasoning: if $w = -w$, then $2w = 0$, so that $w = \frac{1}{2} \cdot 2w = 0$. This is correct whenever $\frac{1}{2}$ makes sense. The trouble is that in certain fields $1 + 1 = 0$, and therefore the inference from $2w = 0$ to $w = 0$ is not justified; the converse of Theorem 1 is, in fact, false for vector spaces over such fields.
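The failure in characteristic $2$ can be exhibited concretely. The sketch below (an illustrative setup, not from the text) works over the two-element field $\{0, 1\}$ with arithmetic mod $2$, where $-1 = 1$: the form shown is skew-symmetric but not alternating.

```python
# A bilinear form on GF(2)^2: w(x, y) = x1*y1 + x2*y2 (mod 2).
# Illustrative counterexample for characteristic 2.
def w(x, y):
    return (x[0] * y[0] + x[1] * y[1]) % 2

pts = [(a, b) for a in (0, 1) for b in (0, 1)]

# Skew-symmetric: since -1 = 1 in GF(2), w(y, x) = -w(x, y) for all x, y.
assert all(w(x, y) == (-w(y, x)) % 2 for x in pts for y in pts)

# But not alternating: equal arguments need not give 0.
assert w((1, 0), (1, 0)) == 1
```

Here $w = -w$ holds identically, yet $w \ne 0$: the step from $2w = 0$ to $w = 0$ is exactly what breaks.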

Theorem 2. If $x_1, \dots, x_k$ are linearly dependent vectors and if $w$ is an alternating $k$-linear form, then $w(x_1, \dots, x_k) = 0$.

Proof. If $x_i = 0$ for some $i$, the conclusion is trivial. If all the $x_i$ are different from $0$, we apply the theorem of Section: Linear combinations to find an $x_j$, $1 < j \le k$, that is a linear combination of the preceding ones. If, say, $x_j = \sum_{i < j} \alpha_i x_i$, we replace $x_j$ in $w(x_1, \dots, x_k)$ by this expansion and use the linearity of $w$ in its $j$-th argument; the result is a linear combination of terms in each of which two arguments are equal, and, since $w$ is alternating, every such term vanishes. ◻
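The mechanism of the proof is visible already for $k = 2$. The sketch below uses the $2 \times 2$ determinant as the alternating form and a dependent pair $x_2 = 2x_1$ (illustrative choices throughout):

```python
# An alternating bilinear form on R^2 (the 2x2 determinant; illustrative).
def w(x, y):
    return x[0] * y[1] - x[1] * y[0]

x1 = (3.0, -2.0)
x2 = (6.0, -4.0)          # x2 = 2 * x1: linearly dependent

# Replacing x2 by 2*x1 and using linearity in the second argument gives
# w(x1, x2) = 2 * w(x1, x1), which vanishes since w is alternating.
assert w(x1, x2) == 2 * w(x1, x1)
assert w(x1, x2) == 0
```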

In one extreme case (namely, when $k = n$, the dimension of the space) a sort of converse of Theorem 2 is true.

Theorem 3. If $w$ is a non-zero alternating $n$-linear form on an $n$-dimensional vector space, and if $x_1, \dots, x_n$ are linearly independent vectors, then $w(x_1, \dots, x_n) \ne 0$.

Proof. Since (Section: Dimension, Theorem 2) the vectors $x_1, \dots, x_n$ form a basis, we may, given an arbitrary set of vectors $y_1, \dots, y_n$, write each $y_i$ as a linear combination of the $x$'s. If we replace each $y_i$ in $w(y_1, \dots, y_n)$ by the corresponding linear combination of $x$'s and expand the result by multilinearity, we obtain a long linear combination of terms such as $w(z_1, \dots, z_n)$, where each $z_i$ is one of the $x$'s. If, in such a term, two of the $z$'s coincide, then, since $w$ is alternating, that term must vanish. If, on the other hand, all the $z$'s are distinct, then $(z_1, \dots, z_n) = (x_{\pi(1)}, \dots, x_{\pi(n)})$ for some permutation $\pi$. Since (Theorem 1) $w$ is skew-symmetric, it follows that $w(z_1, \dots, z_n) = \pm\, w(x_1, \dots, x_n)$. If $w(x_1, \dots, x_n)$ were $0$, it would follow that $w(y_1, \dots, y_n) = 0$ for all $y_1, \dots, y_n$, contradicting the assumption that $w \ne 0$. ◻
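The expansion in this proof can be carried out explicitly for $n = 2$. The sketch below (basis, coefficients, and the choice of the $2 \times 2$ determinant as the non-zero alternating form are all illustrative) expands $w(y_1, y_2)$ over the basis and checks that every surviving term is $\pm$ a multiple of $w(x_1, x_2)$:

```python
from itertools import permutations

# A non-zero alternating bilinear form on R^2 (the 2x2 determinant).
def w(x, y):
    return x[0] * y[1] - x[1] * y[0]

x1, x2 = (1.0, 0.0), (0.0, 1.0)            # a basis of R^2

# Arbitrary vectors written in that basis:
# y1 = 2*x1 + 5*x2,  y2 = -1*x1 + 3*x2
A = [[2.0, 5.0], [-1.0, 3.0]]              # rows of coefficients
y1, y2 = (2.0, 5.0), (-1.0, 3.0)

# Expanding by multilinearity: terms with a repeated basis vector vanish,
# and each remaining term is (sgn pi) * (coefficient product) * w(x1, x2).
expansion = sum(
    (1 if p == (0, 1) else -1) * A[0][p[0]] * A[1][p[1]] * w(x1, x2)
    for p in permutations((0, 1))
)
assert expansion == w(y1, y2)
assert w(y1, y2) != 0                      # independent y's, non-zero w
```

Since every term carries the factor $w(x_1, x_2)$, that value being $0$ would force $w$ to vanish everywhere, exactly as the proof argues.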

The proof (not the statement) of this result yields a valuable corollary.

Theorem 4. Any two alternating $n$-linear forms on an $n$-dimensional vector space are linearly dependent.

Proof. Suppose that $w_1$ and $w_2$ are alternating $n$-linear forms and that $\{x_1, \dots, x_n\}$ is a basis. Given any vectors $y_1, \dots, y_n$, write each of them as a linear combination of the $x$'s, and, just as above, replace each of them, in both $w_1(y_1, \dots, y_n)$ and $w_2(y_1, \dots, y_n)$, by the corresponding linear combination. It follows that each of $w_1(y_1, \dots, y_n)$ and $w_2(y_1, \dots, y_n)$ is a linear combination (the same linear combination) of terms such as $w_1(z_1, \dots, z_n)$ and $w_2(z_1, \dots, z_n)$, where each $z_i$ is one of the $x$'s. Since $w_1(x_1, \dots, x_n)$ and $w_2(x_1, \dots, x_n)$ are scalars, they are linearly dependent, so that there exist scalars $\alpha_1$ and $\alpha_2$, not both zero, such that $\alpha_1 w_1(x_1, \dots, x_n) + \alpha_2 w_2(x_1, \dots, x_n) = 0$; from these facts we may infer that $\alpha_1 w_1 + \alpha_2 w_2 = 0$, as asserted. ◻
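The conclusion can be illustrated numerically for $n = 2$; the two forms below (the $2 \times 2$ determinant and a multiple of it) are illustrative choices, not from the text. A dependence relation found on the basis extends to arbitrary arguments:

```python
# Two alternating bilinear forms on R^2 (illustrative choices).
def w1(x, y):
    return x[0] * y[1] - x[1] * y[0]

def w2(x, y):
    return 3.0 * (x[0] * y[1] - x[1] * y[0])

x1, x2 = (1.0, 0.0), (0.0, 1.0)            # a basis of R^2

# The two values on the basis are scalars, hence linearly dependent:
a1, a2 = w2(x1, x2), -w1(x1, x2)
assert (a1, a2) != (0.0, 0.0)
assert a1 * w1(x1, x2) + a2 * w2(x1, x2) == 0

# The same relation then holds for arbitrary arguments:
y1, y2 = (2.0, -7.0), (4.0, 1.0)
assert a1 * w1(y1, y2) + a2 * w2(y1, y2) == 0
```

The choice $a_1 = w_2(x_1, x_2)$, $a_2 = -w_1(x_1, x_2)$ always satisfies the relation on the basis; the expansion argument of Theorem 3 is what propagates it to all arguments.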