Geometric Identities in Invariant Theory
by
Michael John Hawrylycz
B.A. Colby College (1981)
M.A. Wesleyan University (1984)
Submitted to the Department of Mathematics
in Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
February 1995
© 1995 Massachusetts Institute of Technology
All rights reserved
Signature of Author
Department of Mathematics
26 September, 1994
Certified by
Gian-Carlo Rota
Professor of Mathematics
Accepted by
David Vogan
Chairman, Departmental Graduate Committee
Department of Mathematics
Geometric Identities in Invariant Theory
by
Michael John Hawrylycz
Submitted to the Department of Mathematics
on 26 September, 1994, in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy
Abstract
The Grassmann-Cayley (GC) algebra has proven to be a useful setting for proving
and verifying geometric propositions in projective space. A GC algebra is essentially
the exterior algebra of a vector space, endowed with the natural dual to the wedge
product, an operation called the meet. A geometric identity in a GC algebra is
an identity between expressions $P(A, \vee, \wedge)$ and $Q(B, \vee, \wedge)$, where $A$ and
$B$ are sets of antisymmetric tensors and $P$ and $Q$ contain no summations. The idea
of a geometric identity is due to Barnabei, Brini, and Rota.
We show how the classic theorems of projective geometry, such as the theorems of
Desargues, Pappus, and Möbius, as well as several higher-dimensional analogs,
can be realized as identities in this algebra.
By exploiting properties of bipartite matchings in graphs, a class of expressions,
called Desarguean polynomials, is shown to yield a set of dimension-independent
identities in a GC algebra, representing the higher Arguesian laws, and a variety
of theorems of arbitrary complexity in projective space. The class of Desarguean
polynomials is also shown to be sufficiently rich to yield representations of the general
projective conic and cubic.
Thesis Supervisor: Gian-Carlo Rota
Title: Professor of Mathematics
Acknowledgements
I would like to thank foremost my thesis advisor Professor Gian-Carlo Rota without
whom this thesis would not have been written. He contributed ideas, inspiration,
and time far beyond what could ever be expected of an advisor. I would like to
thank Professors Kleitman, Propp, and Stanley for their teaching during my stay
at M.I.T. I am particularly grateful that Professors Propp and Stanley were able to
serve on my thesis committee. Several other people who contributed technically to
the thesis were Professors Neil White of the University of Florida, Andrea Brini of
the University of Bologna, and Rosa Huang of Virginia Polytechnic Institute, and
Dr. Emanuel Knill of the Los Alamos National Laboratory.
A substantial portion of the work was done as a member of the Computer Research
and Applications Group of the Los Alamos National Laboratory. The group is
directed by two of the most generous and interesting people I have known, group
leader Dr. Vance Faber and deputy group leader Ms. Bonnie Yantis. I am very
indebted to both of them. The opportunity to come to the laboratory is due to my
friend Professor William Y.C. Chen of the Nankai Institute and LANL.
I especially thank Ms. Phyllis Ruby of M.I.T. for many years of assistance and
advice.
I would also like to express my sincere gratitude to my very supportive family and
friends. Three special friends are John MacCuish, Martin Muller, and Alain Isaac
Saias.
Contents

1 The Grassmann-Cayley Algebra 9
1.1 Introduction 9
1.2 The Exterior Algebra of a Peano Space 12
1.3 Bracket Methods in Projective Geometry 21
1.4 Duality and Step Identities 26
1.5 Alternative Laws 30
1.6 Geometric Identities 34

2 Arguesian Polynomials 43
2.1 The Alternative Expansion 44
2.2 The Theory of Arguesian Polynomials 48
2.3 Classification of Planar Identities 57
2.4 Arguesian Lattice Identities 66
2.5 A Decomposition Theorem 74

3 Arguesian Identities 83
3.1 Arguesian Identities 83
3.2 Projective Geometry 93
3.3 The Transposition Lemma 98

4 Enlargement of Identities 105
4.1 The Enlargement Theorem 105
4.2 Examples 122
4.3 Geometry 126

5 The Linear Construction of Plane Curves 129
5.1 The Planar Conic 130
5.2 The Planar Cubic 134
5.3 The Spatial Quadric and Planar Quartic 144
List of Figures

1.1 The Theorem of Pappus 36
2.1 The Theorem of Desargues
2.2 The graphs $B_{P_i}$, for $i = 1, \dots, 5$
2.3 The Theorem of Bricard
2.4 The Theorem of the third identity
2.5 The First Higher Arguesian Identity
4.1 $B_P$ for $P = (a \vee BC) \wedge (b \vee AC) \wedge (c \vee BCD) \wedge (d \vee CD)$ and $B_{2P}$ 109
4.2 The matrix representation of two polynomials $P$ and $Q$ 125
4.3 A non-zero term of an identity $P = Q$ 126
5.1 Linear Construction of the Conic 133
5.2 Linear Construction of the Cubic 138
Chapter 1
The Grassmann-Cayley Algebra
Despite the limited dimensions of this book, one will find
in it, I hope, a fairly complete exposition of
descriptive geometry.

Raoul Bricard, Géométrie Descriptive, 1911
1.1 Introduction
The Peano space of an exterior algebra, especially when endowed with the additional
structure of the join and meet of extensors, has proven to be a useful setting for
proving and verifying geometric propositions in projective space. The meet, which
is closely related to the regressive product defined by Grassmann, was recognized as
the natural dual operation to the exterior product, or join, by Doubilet, Rota, and
Stein [PD76]. Recently several researchers, including Barnabei, Brini, Crapo, Kung,
Huang, Rota, Stein, Sturmfels, White, Whiteley, and others, have studied the bracket
ring of the exterior algebra of a Peano space, showing that this structure is a natural
one for geometric theorem proving from an algebraic standpoint. Their work
has largely focused on the bracket ring itself, and less upon the Grassmann-Cayley
algebra, the algebra of antisymmetric tensors endowed with the two operations of
the wedge product, or join, and its natural dual, the meet.
The primary goal of this thesis is to develop tools for generating identities in the
Grassmann-Cayley algebra. In his Calculus of Extensions, Forder [For60], using precursors to this method, develops thoroughly the geometry of the projective plane,
with some attention to projective three space. The work of Forder contains implicitly, although not stated as such, the idea of a geometric identity, a concept first made
precise in the work of Barnabei, Brini, and Rota [MB85]. Informally, a geometric
identity is an identity between expressions $P(A, \vee, \wedge)$ and $Q(B, \vee, \wedge)$, involving the
join and meet, where $A$ and $B$ are sets of extensors, and each expression is multiplied
by possible scalar factors. The characteristic distinguishing geometric identities in
a Grassmann-Cayley algebra from expressions in the Peano space of a vector space
is that in the former no summands appear in either expression. Such identities
are inherently algebraic encodings of theorems valid in projective space by propositions which interpret the join and meet geometrically. One problem in constructing
Grassmann-Cayley algebra identities is that the usual expansion of the meet combinatorially or via alternative laws, leaves summations over terms which are not
easily interpreted. While the work of Sturmfels and Whiteley [BS91] is remarkable
in showing that any bracket polynomial can be "factored" into a Grassmann-Cayley
algebra expression by multiplication by a suitable bracket factor, their work does not
provide a direct means for constructing interesting identities. Furthermore, because
of the inherent restrictions in forming the join and meet based on rank, natural
generalizations of certain basic propositions in projective geometry do not seem to
have analogs as identities in this algebra.
The thesis is organized into chapters as follows: The first chapter develops the basic
notions of the Grassmann-Cayley algebra, within the context of the exterior algebra
of a Peano space, following the presentation of Barnabei, Brini, and Rota [MB85].
We define the notion of an extensor polynomial as an expression in extensors, join
and meet and prove several elementary properties about extensor polynomials which
will be useful in the sequel. Next we demonstrate how bracket ring methods are
useful in geometry by giving a new result for an n-dimensional version of Desargues'
Theorem, as well as several results about higher-dimensional projective configurations. This chapter concludes by defining precisely the notion of geometric identity
in the Grassmann-Cayley algebra, and giving several examples of geometric identities,
including identities for theorems of Bricard [Haw93], Möbius, and Pappus [Haw94].
In Chapter 2 we identify a class of expressions, which we call Arguesian polynomials,
so named because they yield geometric identities most closely related to the theorem of Desargues in the projective plane and its many generalizations to higher-
dimensional projective space. The notion of equivalence between two Arguesian
polynomials is made precise by $E$-equivalence. In essence, two Arguesian polynomials
$P$ and $Q$ are $E$-equivalent, written $P \stackrel{E}{=} Q$, if $P$ and $Q$ reduce to the same bracket
polynomial in the monomial basis of column tableaux, in vectors and covectors, via
a certain expansion, called the alternative expansion $E(P)$. The alternative expansion
is a recursive evaluation of $P$ subject to the application of alternative laws for
vectors and covectors, as presented in Chapter 1. After presenting several technical
lemmas necessary in the subsequent chapters, Chapter 2 explores the structure of
Arguesian polynomials by classifying the planar Arguesian identities. Surprisingly,
there are only three distinct theorems up to $E$-equivalence in the plane: the theorem
of Desargues, a theorem attributable to Raoul Bricard, and a third, lesser known
theorem of plane projective geometry. In addition, a particularly simple subclass of
Arguesian identities is characterized which yields geometric identities for the higher
Arguesian lattice laws, justifying our choice of terminology. The characterization
results of Chapter 2 rely on a decomposition theorem for Arguesian polynomials.
The proof of this theorem is given in the final section of this chapter.
Identities between Arguesian polynomials are closely related to properties of perfect
matchings in bipartite graphs. Each perfect matching in a certain associated graph
$B_P$ corresponds to a non-zero term of the given polynomial $P$. The theory of
Arguesian identities is more complex than the theory of bipartite matchings because
of a sign associated with each such matching. In Chapter 3 we present a general
construction, from which all Arguesian identities follow, enabling a variety of identities in any dimension. The construction may be seen as a kind of alternative law for
Arguesian polynomials in the sense of Barnabei, Brini, and Rota [MB85]. Ideally,
our identities would be proven in the context of superalgebras [RQH89, GCR89],
thereby eliminating the need for detailed sign considerations. To this
date, however, the meet as an operation in supersymmetric algebra has not been
rigorously defined, and such attempts have led to contradictory results, or results
which are difficult to interpret. A recent announcement by Brini [Bri94] indicates
that the theory of Capelli operators and Lie superalgebras may provide the required
setting.
The fourth chapter proves a dimension independence theorem for Arguesian identities,
called the enlargement theorem. Specifically, given any identity $P = Q$ between
two Arguesian polynomials $P(\mathbf{a}, \mathbf{X})$ and $Q(\mathbf{a}, \mathbf{X})$, both in step $n$, we may formally
substitute for each vector $a \in \mathbf{a}$ (and each covector $X \in \mathbf{X}$) the join (or meet) of
distinct vectors $a_1 \vee \cdots \vee a_k = a^{(k)} \in \mathbf{a}^{(k)}$ (or covectors $X_1 \wedge \cdots \wedge X_k = X^{(k)} \in \mathbf{X}^{(k)}$)
of steps $k$ (and $n-k$), to yield Arguesian polynomials $P^{(k)}$ and $Q^{(k)}$ which then
satisfy $P^{(k)} \stackrel{E}{=} Q^{(k)}$. This theorem suggests that Arguesian identities are in fact
consequences of underlying lattice identities, which we conjecture. The enlargement
theorem strongly suggests that indeed Arguesian identities are a class of identities
valid in supersymmetric algebra [GCR89], in terms of positive variables, an idea suggested by Rota. Indeed, the enlargement theorem itself was first intuited by Rota
as an effort to understand when Grassmann-Cayley algebra identities are actually
identities in supersymmetric algebra.
In the fifth and final chapter we give another application of
vector/covector methods to the study of projective plane curves and surfaces. The
vanishing of an Arguesian polynomial in step 3, with certain vectors (or equivalently
covectors) replaced by common variable vectors, represents the locus of a projective
plane curve of given order. This addresses an old problem of algebraic geometry,
dating back even to Newton: the linear construction of plane curves. This idea is due
to Grassmann and sees a considerable simplification in the language of Grassmann-Cayley
algebra. We show how the forms for Arguesian polynomials in the plane
yield symmetric and elegant expressions for the conic, cubic, and a partial solution
to the quartic. As a final result, a generalization of Pascal's Theorem for the planar
cubic is given.
A Maple V program was written which reduces any Arguesian polynomial to its
canonical monomial basis. This code was extremely useful in obtaining and verifying
many of the results of the thesis, and undoubtedly the code will be useful for further work.
The author will gladly supply this code upon request.
1.2 The Exterior Algebra of a Peano Space
A Peano space is a vector space equipped with the additional structure provided by
a form with values in a field. The definition of a Peano space and its basic properties
were first developed by Doubilet, Rota, and Stein [PD76] and later Barnabei, Brini,
and Rota [MB85]. We will state and prove only some of their results, for completeness, and the reader is referred to these papers for a more complete treatment. Let
K be an arbitrary field, whose values will be called scalars, and let V be a vector
space of dimension n over K, which will remain fixed throughout.
Definition. A bracket of step $n$ over the vector space $V$ is a non-degenerate
alternating $n$-linear form defined over the vector space $V$; in symbols, a function
$$x_1, x_2, \dots, x_n \mapsto [x_1, x_2, \dots, x_n] \in K$$
defined as the vectors $x_1, x_2, \dots, x_n$ range over the vector space $V$, with the following
properties:
1. $[x_1, x_2, \dots, x_n] = 0$ if any two of the $x_i$ coincide.

2. For every $x, y \in V$ and $\alpha, \beta \in K$ the bracket is multilinear:
$$[x_1, \dots, x_{i-1}, \alpha x + \beta y, x_{i+1}, \dots, x_n] = \alpha[x_1, \dots, x_{i-1}, x, x_{i+1}, \dots, x_n] + \beta[x_1, \dots, x_{i-1}, y, x_{i+1}, \dots, x_n].$$

3. There exists a basis $b_1, b_2, \dots, b_n$ of $V$ such that $[b_1, b_2, \dots, b_n] \neq 0$.
Definition. A Peano space of step $n$ is defined to be a pair $(V, [\cdot])$, where $V$ denotes
a vector space of dimension $n$ and $[\cdot]$ is a bracket of step $n$ over $V$.
A Peano space will be denoted by a single letter $V$, leaving the bracket understood
when no confusion is possible. A non-degenerate multilinear alternating $n$-form is
uniquely determined to within a non-zero multiplicative constant; however, the choice
of this constant will determine the structure of the Peano space. A Peano space can
be viewed geometrically as a vector space in which an oriented volume element is
specified. The bracket $[x_1, x_2, \dots, x_n]$ gives the volume of the parallelepiped whose
sides are the vectors $x_i$. If $V$ is a vector space of dimension $n$, a bracket on $V$
of step $n$ can be defined in several ways. The usual way is simply to take a basis
$e_1, e_2, \dots, e_n$ of $V$ and then, given vectors
$$x_i = \sum_j x_{ij} e_j, \qquad i = 1, 2, \dots, n,$$
to set
$$[x_1, x_2, \dots, x_n] = \det(x_{ij}).$$
Although a bracket can always be computed as a determinant, it will prove more
interesting in this context to view the bracket as an operation not unlike a norm in
the theory of Hilbert space, rather than as a determinant in a specific basis. The
exterior algebra of a vector space is a special case of a Peano space and can be
developed in this context.
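The determinant computation above is easy to sketch directly. The following is our own illustration, not part of the thesis; the function names `parity` and `bracket` are ours. It evaluates a bracket of step n by the Leibniz formula, which is adequate for the small dimensions used in later examples:

```python
from itertools import permutations
from math import prod

def parity(perm):
    # signature of a permutation given as a tuple of indices
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def bracket(*vectors):
    # sketch: [x1, ..., xn] as the determinant of the matrix whose
    # rows are the vectors, via the Leibniz formula
    n = len(vectors)
    assert all(len(v) == n for v in vectors), "need n vectors of length n"
    return sum(parity(p) * prod(vectors[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# the bracket is alternating: it vanishes on repeated entries
# and changes sign under a transposition
assert bracket((1, 0, 0), (0, 1, 0), (0, 0, 1)) == 1
assert bracket((0, 1, 0), (1, 0, 0), (0, 0, 1)) == -1
assert bracket((1, 2, 3), (1, 2, 3), (0, 0, 1)) == 0
```

The assertions check exactly the defining properties 1 and 2 of a bracket on the standard basis.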
To construct the exterior algebra of a Peano space $V$ of step $n$ over the field $K$, let
$S(V)$ be the free associative algebra with unity over $K$ generated by the elements
of $V$. For every integer $k$, $1 \le k \le n$, consider the subspace $N_k(V)$ of $S(V)$ of all
vectors $f = \sum_i \alpha_i w_i$ with $w_i = x_1^{(i)} x_2^{(i)} \cdots x_k^{(i)}$ such that $x_j^{(i)} \in V$, $\alpha_i \in K$, and such
that for every $z_1, z_2, \dots, z_{n-k} \in V$,
$$\sum_i \alpha_i [x_1^{(i)}, \dots, x_k^{(i)}, z_1, z_2, \dots, z_{n-k}] = 0.$$
For $k > n$ we denote by $N_k(V)$ the subspace of $S(V)$ spanned by all words of length
$k$, and let
$$N(V) = N_1(V) \oplus N_2(V) \oplus \cdots.$$
It is easy to see that $N(V)$ is an ideal of the algebra $S(V)$. The quotient algebra
$G(V) = S(V) \backslash N(V)$ is called the exterior algebra of the Peano space $V$.
It is readily seen that $G(V)$ decomposes as
$$G(V) = \bigoplus_{k=0}^{n} G_k(V),$$
where
$$\dim G_k(V) = \binom{n}{k}.$$
The above construction may be equivalently performed by imposing an equivalence
relation on sequences of vectors. Given two sequences of vectors of length $k$, we write
$a_1, a_2, \dots, a_k \sim b_1, b_2, \dots, b_k$ when for every choice of vectors $x_{k+1}, \dots, x_n$ we have
$$[a_1, a_2, \dots, a_k, x_{k+1}, \dots, x_n] = [b_1, b_2, \dots, b_k, x_{k+1}, \dots, x_n].$$
An equivalence class under this relation will be called an extensor. More precisely,
let $\phi : S(V) \to G(V)$ denote the canonical projection of $S(V)$ onto $G(V)$. If
$x_1 x_2 \cdots x_k$ is a word in $S(V)$ with $x_1, x_2, \dots, x_k \in V$, for $k > 0$ we denote its
image under $\phi$ by
$$\phi(x_1 x_2 \cdots x_k) = x_1 \vee x_2 \vee \cdots \vee x_k,$$
and provided $\phi(x_1 x_2 \cdots x_k) \neq 0$ the element is called the extensor $x_1 x_2 \cdots x_k$ of
step $k$.
The product in the exterior algebra of a Peano space is called the join, in order to
emphasize its geometric significance, and is denoted by the symbol $\vee$. We note that
this usage differs from the ordinary usage where exterior multiplication is denoted
as the wedge product $\wedge$. It is clear that the join of two extensors, since non-zero by
definition, is an extensor. When an extensor $A$ is written as
$$A = a_1 \vee a_2 \vee \cdots \vee a_k,$$
we say that the linearly ordered set $\{a_1, a_2, \dots, a_k\}$ is a representation of the
extensor $A$.
It is not always possible to write a sum of two or more extensors of step $k$ as
another extensor of step $k$, and hence one also has indecomposable $k$-vectors in
$G_k(V)$. For example, if $a$, $b$, $c$, and $d$ are linearly independent in $V$, then $ab + cd$ is
an indecomposable 2-vector in $G_2(V)$. Let $B = b_1 b_2 \cdots b_j$ be an extensor of step $j$.
Then
$$A \vee B = a_1 \vee a_2 \vee \cdots \vee a_k \vee b_1 \vee \cdots \vee b_j = a_1 a_2 \cdots a_k b_1 \cdots b_j$$
is an extensor of step $j + k$. In particular, $A \vee B$ is nonzero if and only if
$a_1, a_2, \dots, a_k, b_1, \dots, b_j$ are distinct and linearly independent.
Consider any $k$ vectors $a_1, \dots, a_k \in V$ and their expansions $a_i = \sum_{j=1}^{n} a_{i,j} e_j$ in terms
of the given basis $\{e_1, \dots, e_n\}$. By multilinearity and antisymmetry, the expansion
of their join equals
$$a_1 \vee a_2 \vee \cdots \vee a_k = \sum_{1 \le i_1 < \cdots < i_k \le n} \begin{vmatrix} a_{1,i_1} & a_{1,i_2} & \cdots & a_{1,i_k} \\ a_{2,i_1} & a_{2,i_2} & \cdots & a_{2,i_k} \\ \vdots & \vdots & & \vdots \\ a_{k,i_1} & a_{k,i_2} & \cdots & a_{k,i_k} \end{vmatrix} \, e_{i_1} \vee e_{i_2} \vee \cdots \vee e_{i_k},$$
but in general we will avoid passing to the coordinate level.
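The expansion above can be made concrete in a few lines. This sketch is our own illustration (the names `det` and `join_coordinates` are ours): it returns the coordinates of a join in the basis of increasing joins of basis vectors, one k-by-k minor per choice of columns:

```python
from itertools import combinations, permutations
from math import prod

def det(rows):
    # Leibniz-formula determinant of a small square matrix
    n = len(rows)
    total = 0
    for p in permutations(range(n)):
        s = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    s = -s
        total += s * prod(rows[i][p[i]] for i in range(n))
    return total

def join_coordinates(vectors):
    # sketch: coordinates of a1 ∨ ... ∨ ak in the basis
    # {e_{i1} ∨ ... ∨ e_{ik} : i1 < ... < ik}, as k×k minors
    # of the k×n matrix whose rows are the vectors
    k, n = len(vectors), len(vectors[0])
    return {cols: det([[v[c] for c in cols] for v in vectors])
            for cols in combinations(range(n), k)}

# e1 ∨ e2 in a space of step 3 has a single non-zero coordinate
assert join_coordinates([(1, 0, 0), (0, 1, 0)]) == {(0, 1): 1, (0, 2): 0, (1, 2): 0}
# a join of linearly dependent vectors vanishes
assert all(c == 0 for c in join_coordinates([(1, 2, 3), (2, 4, 6)]).values())
```

The second assertion illustrates the remark above: the join is non-zero exactly when the vectors are linearly independent.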
Summarized below are some of the properties of the exterior algebra of a Peano
space. These results are well-known in the context of an exterior algebra.
Proposition 1.1 Let $V$ be a Peano space of step $n$ over the field $K$ and let $G(V)$
be its exterior algebra. Choose a basis $\{a_1, a_2, \dots, a_n\}$ of $V$ and let $S(a_1, a_2, \dots, a_n)$
be the free associative algebra with unity over $K$ in the variables $a_1, a_2, \dots, a_n$. The
algebra $G(V)$ is isomorphic to the quotient of the algebra $S(a_1, a_2, \dots, a_n)$ by the
ideal generated by the following elements of $S(a_1, a_2, \dots, a_n)$:
$$a_i^2, \qquad i = 1, 2, \dots, n$$
$$a_i a_j + a_j a_i, \qquad i, j = 1, 2, \dots, n.$$
Proposition 1.2 For every $i = 1, 2, \dots, k$, $x_1, x_2, \dots, x_{i-1}, x_{i+1}, \dots, x_k, x, y \in V$,
and for every $\alpha, \beta \in K$ we have:

1. $x_1 \vee x_2 \vee \cdots \vee x_{i-1} \vee (\alpha x + \beta y) \vee x_{i+1} \vee \cdots \vee x_k = \alpha(x_1 \vee \cdots \vee x_{i-1} \vee x \vee x_{i+1} \vee \cdots \vee x_k) + \beta(x_1 \vee \cdots \vee x_{i-1} \vee y \vee x_{i+1} \vee \cdots \vee x_k)$.

2. For every permutation $\sigma$ of $\{1, 2, \dots, k\}$, $x_{\sigma(1)} \vee x_{\sigma(2)} \vee \cdots \vee x_{\sigma(k)} = \operatorname{sgn}(\sigma)\, x_1 \vee x_2 \vee \cdots \vee x_k$, where $\operatorname{sgn}(\sigma)$ is the signature of the permutation $\sigma$.
Proposition 1.3 Let $A \in G_h(V)$ and $B \in G_k(V)$; then
$$B \vee A = (-1)^{hk} A \vee B. \tag{1.1}$$
Proposition 1.4 Let $A$ be a subspace of $V$ of dimension $k > 0$; if $\{x_1, x_2, \dots, x_k\}$
and $\{y_1, y_2, \dots, y_k\}$ are two bases of $A$, then
$$x_1 \vee x_2 \vee \cdots \vee x_k = C\, y_1 \vee y_2 \vee \cdots \vee y_k$$
for some non-zero scalar $C$.
By Proposition 1.4 every non-trivial subspace of $V$ is uniquely represented, modulo
a non-zero scalar, by a non-zero extensor, and vice-versa. The zero subspace is
represented by scalars. We say that the extensor $x_1 \cdots x_k$ is associated to the
subspace generated by the vectors of $V$ corresponding to $\{x_1, \dots, x_k\}$. We also
remark that the join $a_1 \vee \cdots \vee a_k$ is non-zero if and only if the set of associated
vectors is a linearly independent set. The following proposition is fundamental.
Proposition 1.5 Let $A$, $B$ be two subspaces of $V$ with associated extensors $F$ and
$G$ respectively. Then

1. $F \vee G = 0$ if and only if $A \cap B \neq \{0\}$.

2. If $A \cap B = \{0\}$, then the extensor $F \vee G$ is the extensor associated to the
subspace generated by $A \cup B$.
Let $W$ be a subspace of a Peano space $V$, and let $w_1, w_2, \dots, w_n$ be a basis of $V$ such
that $w_1, w_2, \dots, w_k$ is a basis of $W$. We define the restriction of a Peano space $V$
to $W$ to be the Peano space obtained by giving $W$ the bracket
$$[x_1, x_2, \dots, x_k]_W = [x_1, x_2, \dots, x_k, w_{k+1}, \dots, w_n].$$
The bracket $[\cdot]_W$ depends to within a multiplicative constant on a choice of the basis
elements $w_{k+1}, w_{k+2}, \dots, w_n$. Let $W'$ be the subspace spanned by $w_{k+1}, \dots, w_n$. We
define the Peano space on the quotient space $V \backslash W'$ by setting
$$[v_1, v_2, \dots, v_k]_{V \backslash W'} = [x_1, x_2, \dots, x_k, w_{k+1}, \dots, w_n],$$
where the $v_i$ are vectors in $V \backslash W'$ and $x_i$ is any vector that is mapped to $v_i$ by the
canonical map of $V$ into $V \backslash W'$. The bracket in the quotient Peano space depends
on a choice of a basis of $W'$, again to within a multiplicative constant. These
constructions suggest there might be a relationship between matroids and the bracket
algebra, and this connection is fully explored in White [Whi75].
Proposition 1.6 Let W be a subspace of the Peano space V endowed with a restriction of the bracket, and let V \ W be endowed with the quotient bracket. Then
the exterior algebra of the Peano space W is naturally isomorphic to the restriction
to W of the exterior algebra of the Peano space V, and the exterior algebra of the
Peano space V \ W is naturally isomorphic to the quotient of the exterior algebra of
V by the ideal generated by the exterior algebra of W.
A second operation in the exterior algebra of a vector space is the meet. A precursor
to this operation was originally recognized by Hermann Grassmann in his
famous Ausdehnungslehre [Gra11], whose intention was to develop a calculus for the
geometry of linear varieties. The equivalent of the meet was called the regressive
product, unfortunately denoted by Grassmann with the same notation as the join or
wedge product. While this operation was used by later authors such as Whitehead
[Whi97] and Forder [For60], the realization that the exterior algebra of a Peano
space, with its two operations of join $\vee$ and meet $\wedge$, is the natural structure for the
study of projective invariant theory under the special linear group was not made
until Rota [PD76].
Given a representation of an extensor $A = a_1 \vee a_2 \vee \cdots \vee a_k$ and an ordered $r$-tuple
of non-negative integers $h_1, h_2, \dots, h_r$ such that $h_1 + h_2 + \cdots + h_r = k$, a split of
type $(h_1, h_2, \dots, h_r)$ of the representation $A = a_1 \vee a_2 \vee \cdots \vee a_k$ is an ordered $r$-tuple
of extensors $(A_1, A_2, \dots, A_r)$ such that

1. $A_i = 1$ if $h_i = 0$ and $A_i = a_{i_1} \vee a_{i_2} \vee \cdots \vee a_{i_{h_i}}$ if $h_i \neq 0$.

2. $A_i \vee A_j \neq 0$.

3. $A_1 \vee A_2 \vee \cdots \vee A_r = \pm A$.
In what follows we shall denote by $S(a_1, a_2, \dots, a_k; h_1, h_2, \dots, h_r)$ the finite set of
all splits of type $(h_1, h_2, \dots, h_r)$ of the extensor $A$ relative to the representation
$A = a_1 \vee \cdots \vee a_k$. One can easily extend the definition of the signature of $A$ to the
signature of the split $A = A_1 \vee A_2 \vee \cdots \vee A_r$ as follows:
$$\operatorname{sgn}(A_1, A_2, \dots, A_r) = \begin{cases} 1 & \text{if } A_1 \vee A_2 \vee \cdots \vee A_r = A \\ -1 & \text{if } A_1 \vee A_2 \vee \cdots \vee A_r = -A \end{cases}$$
The bracket notation can be extended to include the case where its entries are
extensors instead of just vectors. Furthermore, this definition is easily verified to be
independent of the choice of representation of the extensors $A_1, A_2, \dots, A_r$.
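Splits and their signatures can be enumerated mechanically. The following is our own sketch (the name `splits` is ours), restricted to splits of type (h, k-h) with both parts non-empty; each split is reported with the signature of the shuffle carrying a1 ∨ ... ∨ ak to A1 ∨ A2:

```python
from itertools import combinations

def splits(word, h):
    # sketch: all splits (A1, A2) of type (h, k-h) of the
    # representation a1 ∨ ... ∨ ak, each with the signature of
    # the shuffle (A1, A2) relative to the original order
    k = len(word)
    for chosen in combinations(range(k), h):
        rest = tuple(i for i in range(k) if i not in chosen)
        perm = chosen + rest
        sign = 1
        for i in range(k):
            for j in range(i + 1, k):
                if perm[i] > perm[j]:
                    sign = -sign
        yield sign, [word[i] for i in chosen], [word[i] for i in rest]

# splits of type (1, 2) of a ∨ b ∨ c: the middle one picks up a sign,
# since b ∨ a ∨ c = -(a ∨ b ∨ c)
assert list(splits("abc", 1)) == [
    (1, ["a"], ["b", "c"]),
    (-1, ["b"], ["a", "c"]),
    (1, ["c"], ["a", "b"]),
]
```

These signed splits are exactly the data summed over in the definition of the meet below.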
Definition. Let $A_1, A_2, \dots, A_r$ be extensors in a Peano space of step $n$. Choose
representations
$$A_1 = a_{11} \vee a_{12} \vee \cdots \vee a_{1 s_1}$$
$$A_2 = a_{21} \vee a_{22} \vee \cdots \vee a_{2 s_2}$$
$$\vdots$$
$$A_r = a_{r1} \vee a_{r2} \vee \cdots \vee a_{r s_r}$$
and define
$$[A_1, A_2, \dots, A_r] = [a_{11}, a_{12}, \dots, a_{1 s_1}, a_{21}, a_{22}, \dots, a_{2 s_2}, \dots, a_{r 1}, \dots, a_{r s_r}]$$
when $s_1 + s_2 + \cdots + s_r = n$, and $[A_1, A_2, \dots, A_r] = 0$ otherwise.
A proposition which easily follows from this definition is

Proposition 1.7 Let $A$ and $B$ be two extensors of step $k$ and step $n - k$. Then
$$[B, A] = (-1)^{k(n-k)} [A, B].$$
The definition of the meet of two extensors is based on the following fundamental
property of Peano spaces.
Proposition 1.8 Let $a_1, a_2, \dots, a_k$ and $b_1, b_2, \dots, b_p$ be vectors of a Peano space $V$
of step $n$ with $k + p \ge n$. If $A = a_1 \vee a_2 \vee \cdots \vee a_k$ and $B = b_1 \vee b_2 \vee \cdots \vee b_p$, then
the following identity holds:
$$\sum_{(A_1, A_2) \in S(A;\, n-p,\, k+p-n)} \operatorname{sgn}(A_1, A_2)\, [A_1, B]\, A_2 = \sum_{(B_1, B_2) \in S(B;\, k+p-n,\, n-k)} \operatorname{sgn}(B_1, B_2)\, [A, B_2]\, B_1.$$
PROOF. We consider the functions
$$f, g : V^{k+p} \to G_{k+p-n}(V)$$
defined as
$$f(a_1, a_2, \dots, a_k, b_1, b_2, \dots, b_p) = \sum_{(A_1, A_2) \in S(A;\, n-p,\, k+p-n)} \operatorname{sgn}(A_1, A_2)\, [A_1, B]\, A_2$$
$$g(a_1, a_2, \dots, a_k, b_1, b_2, \dots, b_p) = \sum_{(B_1, B_2) \in S(B;\, k+p-n,\, n-k)} \operatorname{sgn}(B_1, B_2)\, [A, B_2]\, B_1$$
where $A = a_1 \vee a_2 \vee \cdots \vee a_k$ and $B = b_1 \vee b_2 \vee \cdots \vee b_p$. Direct verification shows that $f$ and
$g$ are $(k+p)$-multilinear functions in the vectors $a_1, a_2, \dots, a_k, b_1, b_2, \dots, b_p$. Hence
$f$ and $g$ coincide if and only if they take the same values on any $(k+p)$-tuple of
vectors taken from a given basis $\{e_1, e_2, \dots, e_n\}$ of $V$. Since $f$ and $g$ are alternating
in the first $k$ variables and in the last $p$ variables, separately, it is sufficient to prove:
$$f(e_{i_1}, \dots, e_{i_k};\, e_{j_1}, \dots, e_{j_p}) = g(e_{i_1}, \dots, e_{i_k};\, e_{j_1}, \dots, e_{j_p})$$
in the case where $i_1 < i_2 < \cdots < i_k$ and $j_1 < j_2 < \cdots < j_p$. Since $f$ and $g$ must agree on any
basis, we may set $d = k + p - n$ and simultaneously require that $i_1 = j_1, \dots, i_d = j_d$.
We then compute
$$f(e_{i_1}, \dots, e_{i_k};\, e_{j_1}, \dots, e_{j_p}) = (-1)^{d(k-d)}\, [e_{i_{d+1}}, \dots, e_{i_k}, e_{j_1}, \dots, e_{j_p}]\, e_{i_1} \vee e_{i_2} \vee \cdots \vee e_{i_d}$$
and
$$g(e_{i_1}, \dots, e_{i_k};\, e_{j_1}, \dots, e_{j_p}) = [e_{i_1}, \dots, e_{i_k}, e_{j_{d+1}}, \dots, e_{j_p}]\, e_{i_1} \vee e_{i_2} \vee \cdots \vee e_{i_d}.$$
Now since $e_{j_1}, e_{j_2}, \dots, e_{j_d}$ agree with $e_{i_1}, e_{i_2}, \dots, e_{i_d}$, the former vectors, which are the
first $d$ vectors in $e_{j_1}, e_{j_2}, \dots, e_{j_p}$, may each be shifted as far to the left as possible in
the bracket in $f$ to yield $g$, after $d(k-d)$ sign changes. This proves the result. □
We may now define the meet of two extensors $A$ and $B$. Given extensors $A = a_1 \vee a_2 \vee \cdots \vee a_k$ and $B = b_1 \vee b_2 \vee \cdots \vee b_p$ with $k, p \ge 1$, we define the binary
operation $\wedge$ by setting:

1. $A \wedge B = 0$ if $k + p < n$.

2. $A \wedge B = \sum_{(A_1, A_2)} \operatorname{sgn}(A_1, A_2)\, [A_1, B]\, A_2 = \sum_{(B_1, B_2)} \operatorname{sgn}(B_1, B_2)\, [A, B_2]\, B_1$,
where the summations range over the splits $S(a_1, a_2, \dots, a_k;\, n-p,\, k+p-n)$ and
$S(b_1, b_2, \dots, b_p;\, k+p-n,\, n-k)$ respectively.

We remark that in the above equivalent formulas for the meet of two extensors $A$
and $B$, the sign of the split extensor $\operatorname{sgn}(B_1, B_2)$ is computed as the sign of the
ordered pair $B_1, B_2$ with respect to the order of vectors in $B$, and not with respect
to an underlying linear order on $A$ or $B$. Geometrically, the meet of two extensors
plays a similar role to the join.
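In the simplest interesting case, two step-2 extensors in a space of step n = 3, the first formula has just two terms: (a1 ∨ a2) ∧ (b1 ∨ b2) = [a1, b1, b2] a2 - [a2, b1, b2] a1. The following is our own coordinate-level sketch of this case (the names `bracket3` and `meet` are ours):

```python
def bracket3(x, y, z):
    # [x, y, z]: 3×3 determinant of the rows x, y, z
    return (x[0] * (y[1] * z[2] - y[2] * z[1])
            - x[1] * (y[0] * z[2] - y[2] * z[0])
            + x[2] * (y[0] * z[1] - y[1] * z[0]))

def meet(A, B):
    # sketch: (a1 ∨ a2) ∧ (b1 ∨ b2) = [a1,b1,b2] a2 - [a2,b1,b2] a1,
    # the meet of two step-2 extensors in a Peano space of step 3
    (a1, a2), (b1, b2) = A, B
    c1, c2 = bracket3(a1, *B), bracket3(a2, *B)
    return tuple(c1 * v - c2 * u for u, v in zip(a1, a2))

# the lines z = 0 and y = 0 in the projective plane meet at (1 : 0 : 0)
L1 = ((1, 0, 0), (0, 1, 0))
L2 = ((1, 0, 0), (0, 0, 1))
assert meet(L1, L2) == (1, 0, 0)
# anticommutativity: A ∧ B = (-1)^{(n-k)(n-p)} B ∧ A = -B ∧ A here
assert meet(L2, L1) == (-1, 0, 0)
```

The second assertion instantiates Proposition 1.10 below with n = 3, k = p = 2, and the first anticipates Proposition 1.9: the meet of two lines in the projective plane is their point of intersection.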
Proposition 1.9 Let $A$ and $B$ be associated to the subspaces $X$ and $Y$ of $V$. If the
union $X \cup Y$ spans the whole space $V$ and if $X \cap Y \neq \{0\}$, then $A \wedge B$ is the extensor
associated to the subspace $X \cap Y$ of $V$.
PROOF. Suppose that $X \cup Y$ spans $V$ and let $k$ be the step of $A$ and $p$ the step of $B$.
The claim holds trivially when $k + p < n$. Suppose therefore
that $k + p \ge n$ and take a basis $C = \{c_1, c_2, \dots, c_d\}$ of the subspace $X \cap Y$, where
$d = k + p - n$. We then complete $C$ to a basis $\{c_1, c_2, \dots, c_d, a_1, \dots, a_{k-d}\}$ of $X$ associated
to $A$ and a basis $\{c_1, c_2, \dots, c_d, b_1, \dots, b_{p-d}\}$ of $Y$ associated to $B$, such that
$$A = c_1 \vee c_2 \vee \cdots \vee c_d \vee a_1 \vee \cdots \vee a_{k-d}$$
$$B = c_1 \vee c_2 \vee \cdots \vee c_d \vee b_1 \vee \cdots \vee b_{p-d}.$$
Compute the meet $A \wedge B$ by splitting either extensor. In particular,
$$A \wedge B = \pm [c_1, c_2, \dots, c_d, a_1, a_2, \dots, a_{k-d}, b_1, \dots, b_{p-d}]\, c_1 \vee c_2 \vee \cdots \vee c_d,$$
since all other terms vanish. Thus $c_1 \vee \cdots \vee c_d$ is the extensor associated to the
intersection subspace $X \cap Y$. □
The following commutativity and associativity relations hold for the meet; their
proofs can be found in [MB85].

Proposition 1.10 Let $A$ and $B$ be two extensors of steps $k$ and $p$ respectively.
Then
$$A \wedge B = (-1)^{(n-k)(n-p)}\, B \wedge A.$$

Proposition 1.11 (Linearity) Let $A$, $A_1$, $A_2$, and $B$ be extensors with $A = A_1 + A_2$. Then
$$A \wedge B = A_1 \wedge B + A_2 \wedge B.$$

Proposition 1.12 (Associativity) Let $A$, $B$, $C$ be extensors; then the associative
law holds for the meet:
$$(A \wedge B) \wedge C = A \wedge (B \wedge C).$$

The definition of the meet of extensors can be extended to the sum of extensors as
follows: set $T = A + C$; then by Proposition 1.11 we define $T \wedge B = A \wedge B + C \wedge B$.
The following definition is fundamental to this thesis.

Definition. A Peano space of step $n$ equipped with the two operations of join $\vee$ and
meet $\wedge$ is called the Grassmann-Cayley algebra of step $n$ and denoted $GC(n)$.
A few other notations will be useful as well. By a bracket polynomial on the
alphabet $A$ of letters, we mean a sum of terms consisting of products of brackets
$[\cdot]$, whose entries are selected from the letter set $A$. The content of a bracket is
the set of elements in that bracket. By an extensor polynomial $P(A_i, \vee, \wedge)$ in
the extensors $\{A_i\}$ we mean a formal expression in the $A_i$ and the binary operations
$\vee$ and $\wedge$. The step of the polynomial $P$ is the step of the extensor obtained upon
evaluating all join and meet operations in $P$. By Propositions 1.5 and 1.9, $P$ is an
extensor.
1.3 Bracket Methods in Projective Geometry
The methods of bracket algebra provide natural machinery for proving theorems
of projective geometry. This connection, originally developed by Grassmann, was
explored fully by Forder [For60], although the concept of identities involving only
join and meet in a Grassmann-Cayley algebra was not made explicit there. Crapo
[Cra91], Sturmfels [BS89, BS91], White [Whi75, Whi91], and others have worked
extensively using these methods. A main problem in computational synthetic geometry
is to find coordinates or non-realizability proofs for abstractly defined configurations.
Of particular interest is Sturmfels' method of final polynomials, which is
discussed in Bokowski and Sturmfels [BS87]. As an illustration of bracket
techniques we give a new proof of an $n$-dimensional generalization of Desargues'
theorem [Jon54], and simplify several results of Forder using the language of Peano
spaces.
Theorem 1.13 (Desargues) Let a1, ..., an and b1, ..., bn be tetrahedra in n − 1 dimensional projective space. If the lines aibi, 1 ≤ i ≤ n, pass through a common point p, then the colines formed by the intersections of the pairs of hyperplanes a1 ··· âi ··· an and b1 ··· b̂i ··· bn all lie on a common hyperplane.
PROOF. If p is the center of perspectivity, then the vertices b1, ..., bn lie on the lines pa1, pa2, ..., pan. We first find a bracket expression for p. Let xi, 1 ≤ i ≤ n − 3, be variable points whose coordinates in a given basis are independent transcendentals (xi,1, xi,2, ..., xi,n). Consider the expression

a1b1x1x2 ··· xn−3 ∧ a2b2x1x2 ··· xn−3 = ±([a1, b1, x1, ..., xn−3, a2] b2x1 ··· xn−3 − [a1, b1, x1, ..., xn−3, b2] a2x1 ··· xn−3)
CHAPTER 1. THE GRASSMANN-CAYLEY ALGEBRA
Since a1b1 and a2b2 have p as a common point of intersection we may also write a1b1x1x2 ··· xn−3 ∧ a2b2x1x2 ··· xn−3 = px1x2 ··· xn−3, so that

px1x2 ··· xn−3 = ±([a1, b1, x1, ..., xn−3, a2] b2x1 ··· xn−3 − [a1, b1, x1, ..., xn−3, b2] a2x1 ··· xn−3)
From this one concludes that

(p ∓ ([a1, b1, x1, ..., xn−3, a2] b2 − [a1, b1, x1, ..., xn−3, b2] a2)) ∨ x1x2 ··· xn−3 = 0.

Since the above linear combination of p, b2 and a2 is a point in n − 1 dimensional projective space and x1, ..., xn−3 is a basis for a flat of rank n − 3,

p = ±([a1, b1, x1, ..., xn−3, a2] b2 − [a1, b1, x1, ..., xn−3, b2] a2) + k1x1 + ··· + kn−3xn−3

where k1, ..., kn−3 are elements of the underlying field K.
Since the brackets in this expression are non-zero, the sum of the first two terms represents a point on the line a2b2 (it suffices to join b2 − a2 with a2b2) while the linear combination of the xi represents a point off the line a2b2, since the indeterminates may be chosen in general position. Hence we conclude that ki = 0 for all i and that

p = ±([a1, b1, x1, ..., xn−3, a2] b2 − [a1, b1, x1, ..., xn−3, b2] a2),
the entries x1, ..., xn−3 acting as arbitrary scale factors. In this way we may write k'ibi = p + kiai for 1 ≤ i ≤ n, and hence

k'1 ··· k̂'i ··· k'n b1 ··· b̂i ··· bn = ⋁_{j=1, j≠i}^{n} (p + kjaj).    (1.2)
Expanding the right side of 1.2 and transposing p to the left in each term one has

k'1 ··· k̂'i ··· k'n b1 ··· b̂i ··· bn = p ∨ ( Σ_{j=1}^{i−1} k1 ··· k̂j ··· k̂i ··· kn (−1)^{j−1} a1 ··· âj ··· âi ··· an + Σ_{j=i+1}^{n} k1 ··· k̂i ··· k̂j ··· kn (−1)^{j} a1 ··· âi ··· âj ··· an ) + k1 ··· k̂i ··· kn a1 ··· âi ··· an
Computing the meet with the hyperplane a1 ··· âi ··· an, the last term in the above expression vanishes and the non-vanishing terms consist of a sum of terms of the form (minus the scalars)

p a1a2 ··· âj ··· âi ··· an ∧ a1 ··· âi ··· an    (1.3)
for j = 1, ..., n, j ≠ i. The result of computing the meet of 1.3 is a single non-zero term, and therefore the n − 2 dimensional coline k'1 ··· k̂'i ··· k'n b1 ··· b̂i ··· bn ∧ a1 ··· âi ··· an is given by a scalar multiple [p, a1, ..., âi, ..., an] of the sum

Σ_{j=1}^{i−1} k1 ··· k̂j ··· k̂i ··· kn (−1)^{j−1} a1 ··· âj ··· âi ··· an + Σ_{j=i+1}^{n} k1 ··· k̂i ··· k̂j ··· kn (−1)^{j} a1 ··· âi ··· âj ··· an    (1.4)
By Proposition 1.34 a linear combination of hyperplanes is again a hyperplane. We show that all colines of the form (1.4) lie in the hyperplane

H = Σ_{i=1}^{n} k1 ··· k̂i ··· kn (−1)^{i−1} a1 ··· âi ··· an.    (1.5)
Since the sum of the steps of the hyperplane and the coline span the underlying Peano space, it suffices to show, by Proposition 1.9, that the meet of any coline (1.4) with the hyperplane (1.5) is zero. The coline (1.4) has the property that each subset of n − 3 vectors of {a1, ..., an} occurs in exactly two terms or not at all. Therefore in computing the meet H ∧ L by splitting L, in the sum of step n − 3 terms resulting, each non-zero extensor of step n − 3 on the vector set {a1, ..., an} occurs either twice or not at all. We show that each such pair of extensors occurs with opposite sign; that the coefficients are ±1 is obvious. The terms of H ∧ L contributing to an identical pair of extensors of step n − 3 may be written as

(−1)^{l−1} a1 ··· âl ··· an ∧ (−1)^{j−1} a1 ··· âj ··· âi ··· an    (1.6)

(−1)^{j−1} a1 ··· âj ··· an ∧ (−1)^{l−1} a1 ··· âl ··· âi ··· an    (1.7)

with l ≠ j and where the vectors of L required by the bracket of H ∧ L are al in the first term and aj in the second, and the extensors are otherwise identical. Since the signs of the terms of H and L alternate, the difference in sign between 1.6 and 1.7 is equivalently given by the difference in sign of

a1 ··· âl ··· an ∧ a1 ··· âj ··· âi ··· an    (1.8)

a1 ··· âj ··· an ∧ a1 ··· âl ··· âi ··· an    (1.9)

where the positions of aj and al are identical in the first extensor and identical in the second extensor of 1.8 and 1.9. An easy calculation using Proposition 1.8 shows that the signs of 1.8 and 1.9 alternate. □
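For n = 3 the theorem reduces to the classical Desargues theorem for triangles perspective from a point. As an illustration (our own sketch, not from the thesis; the helper names are ours), the conclusion can be confirmed with exact integer arithmetic, using the bracket as a 3 × 3 determinant and the splitting (a ∨ b) ∧ (c ∨ d) = [abd]c − [abc]d for the meet of two lines in GC(3):

```python
import random

def det3(p, q, r):
    # bracket [p, q, r] in a Peano space of step 3
    return (p[0] * (q[1] * r[2] - q[2] * r[1])
          - p[1] * (q[0] * r[2] - q[2] * r[0])
          + p[2] * (q[0] * r[1] - q[1] * r[0]))

def meet(a, b, c, d):
    # (a v b) ^ (c v d) = [abd]c - [abc]d: intersection point of two lines
    ab_d, ab_c = det3(a, b, d), det3(a, b, c)
    return [ab_d * ci - ab_c * di for ci, di in zip(c, d)]

rng = random.Random(0)
p = [rng.randint(-9, 9) for _ in range(3)]                   # center of perspectivity
a = [[rng.randint(-9, 9) for _ in range(3)] for _ in range(3)]
k = [rng.randint(1, 9) for _ in range(3)]
b = [[p[t] + k[i] * a[i][t] for t in range(3)] for i in range(3)]  # b_i on the line p a_i

# intersections of corresponding sides of the two triangles
pts = [meet(a[1], a[2], b[1], b[2]),
       meet(a[0], a[2], b[0], b[2]),
       meet(a[0], a[1], b[0], b[1])]
print(det3(*pts))  # 0: the three intersection points are collinear
```

The vanishing bracket is exact: collinearity of the three side intersections is a polynomial identity in the coordinates of p, the ai and the ki.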
The following is a theorem in four-dimensional projective space.

Theorem 1.14 If A1, A2, A3, A4 are four lines in general position in a projective space of four dimensions, and C1 is the line of intersection of the solids A2 ∪ A3, A3 ∪ A4 and A2 ∪ A4, and similarly for the lines C2, C3, C4, then the solids A1 ∪ C1, A2 ∪ C2, A3 ∪ C3, A4 ∪ C4 intersect in a line, called the associate of A1, A2, A3, A4.
PROOF. A projective space of 4 dimensions corresponds to a Peano space of step 5. Let A, B, C, D be lines, and let us denote the juxtaposition of lines as their join. By choosing representations A = ab, B = cd, C = ef, D = gh it follows by direct expansion using the definition of the meet that

AB ∧ CD = (AB ∧ C) ∨ D + (AB ∧ D) ∨ C

Similarly we have

AB ∧ AC ∧ AD = ((AB ∧ C) ∨ A) ∧ AD = (AB ∧ C ∧ AD) ∨ A = A[A, B, e][f, A, D] − A[A, B, f][e, A, D]    (1.10)
where C = e ∨ f. Since lines are of step 2, they commute with each other and we have AB = BA. If now A1, A2, A3, A4 denote four lines in general position then, denoting Aij = Ai ∨ Aj, we set

B1 = A12 ∧ A13 ∧ A14,   C1 = A23 ∧ A34 ∧ A42,
B2 = A23 ∧ A24 ∧ A21,   C2 = A34 ∧ A41 ∧ A13,
B3 = A34 ∧ A32 ∧ A31,   C3 = A41 ∧ A12 ∧ A24,
B4 = A41 ∧ A42 ∧ A43,   C4 = A12 ∧ A23 ∧ A31.
By the definition of the meet,

B1 = [A12 ∧ A3 ∧ A41] A1,   B2 = [A23 ∧ A4 ∧ A12] A2,
B3 = [A34 ∧ A1 ∧ A23] A3,   B4 = [A41 ∧ A2 ∧ A34] A4.
The join of any six vectors in step 5 must vanish, so we have the dual relation that the meet of any six covectors of type Aij is zero. By expanding the expression

(A12 ∧ A23 ∧ A31 ∧ A14 ∧ A24) ∨ A34    (1.11)

in two different ways using 1.15 we obtain

Σ_σ ±[Aσ(12) ∧ Aσ(23) ∧ Aσ(31) ∧ Aσ(14) ∧ Aσ(34)] Aσ(24) = 0    (1.12)
where the sum σ is taken over all permutations of the covectors. By expanding each of the expressions B1 ∨ C1, B2 ∨ C2, B3 ∨ C3, B4 ∨ C4 a direct calculation shows that 1.12 may be expressed as

B1 ∨ C1 + B2 ∨ C2 + B3 ∨ C3 + B4 ∨ C4 = 0

Now resubstituting for the Bi above we have

[A12 ∧ A3 ∧ A41] A1 ∨ C1 + [A23 ∧ A4 ∧ A12] A2 ∨ C2 + [A34 ∧ A1 ∧ A23] A3 ∨ C3 + [A41 ∧ A2 ∧ A34] A4 ∨ C4 = 0.    (1.13)

Since the brackets are scalars, the dependence of the Ai ∨ Ci, i = 1, ..., 4, means that these subspaces intersect in a line. □
We finally give a theorem holding in a projective space of any dimension.

Theorem 1.15 Let R1, R2, ..., Rm be flats of dimension r1 − 1, r2 − 1, ..., rm − 1 which span a projective space of dimension n − 1. Let p be any point which lies outside the span of any m − 1 of the Rj, 1 ≤ j ≤ m. Then p is incident with a unique flat of dimension m − 1 which intersects each of the Rj in one point.
PROOF. Let the Rj for 1 ≤ j ≤ m be represented by extensors Rj = a1(j) ··· arj(j). Since the Rj span the space and there is a point p outside the span of any m − 1 of them, the vectors ai(j) form a basis. We may therefore expand p = Σ xi(j) ai(j), where the xi(j) are scalar coordinates. The required flat of step m may be described as

M = (Σi xi(1) ai(1)) ∨ ··· ∨ (Σi xi(m) ai(m)),    (1.14)

where the scalars xi(j) of M are the same as the coordinates of p. The flat M contains the point of Rj given by the jth factor of 1.14. The flat also evidently contains the point p, and if M contained another point q of some Rj then we would have q ∨ M = 0. If the point q belongs to R1, for example, we may write q = y1a1(1) + ··· + yr1 ar1(1). Then computing the join q ∨ M and expanding by linearity leaves a relation between independent extensors of step m + 1, which is a contradiction. If any other flat of step m which cuts each Ri in just one point passes through p, let p1, ..., pm be the points in which it cuts the Ri; then p ∨ p1 ∨ ··· ∨ pm may be expanded to obtain another dependency amongst extensors of step m + 1. The assumption that M is non-zero is valid, as otherwise the factors of 1.14 are linearly dependent and hence p is on the join of m − 1 of the Rj, which is contrary to hypothesis. □
Example 1.16 In a Peano space of step 4, let l1, l2 be lines spanning projective three-space and let p be a point not on either line. Then there exists a unique line containing p and intersecting both lines.
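The example can be checked by a direct coordinate computation. The sketch below (our own illustration; the variable names are ours) realizes the construction of Theorem 1.15 for m = 2 in a Peano space of step 4 and verifies p ∨ M = 0 by checking that every Plücker coordinate of the join p ∨ q1 ∨ q2 vanishes, i.e. all 3 × 3 minors of the matrix with rows p, q1, q2 are zero:

```python
import random
from itertools import combinations

rng = random.Random(1)
# two lines l1 = a1 v a2 and l2 = a3 v a4 spanning the step 4 Peano space
a = [[rng.randint(-9, 9) for _ in range(4)] for _ in range(4)]
x = [rng.randint(1, 9) for _ in range(4)]   # coordinates of p in this basis

p  = [x[0]*a[0][t] + x[1]*a[1][t] + x[2]*a[2][t] + x[3]*a[3][t] for t in range(4)]
q1 = [x[0]*a[0][t] + x[1]*a[1][t] for t in range(4)]  # point of the transversal M on l1
q2 = [x[2]*a[2][t] + x[3]*a[3][t] for t in range(4)]  # point of the transversal M on l2

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

# p v q1 v q2 = 0 exactly, since p = q1 + q2
rows = [p, q1, q2]
minors = [det3([[rows[r][c] for c in cols] for r in range(3)])
          for cols in combinations(range(4), 3)]
print(minors)  # [0, 0, 0, 0]
```

The vanishing is exact because p = q1 + q2 as vectors, which is precisely why the line M = q1 ∨ q2 of 1.14 passes through p.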
1.4 Duality and Step Identities
Several useful identities follow from the properties of the join and meet alone. In this section we give those which are necessary in subsequent chapters.
Let V be a Peano space of step n over the field K. We say that a linearly ordered basis {a1, a2, ..., an} of V is unimodular whenever

[a1, a2, ..., an] = 1.

Let a1, a2, ..., an be a unimodular basis. The extensor

E = a1 ∨ a2 ∨ ··· ∨ an

will be called the integral. The integral is well-defined and does not depend on the choice of unimodular basis. For details and properties of unimodular bases the reader is referred to [MB85].
The integral behaves like an identity in a GC algebra. Thus for every extensor B with step(B) > 0, we have

B ∨ E = 0,   B ∧ E = B

while for every scalar k, we have

k ∨ E = kE,   k ∧ E = k.

For every n-tuple (b1, b2, ..., bn) of vectors in V, we have the identity

b1 ∨ b2 ∨ ··· ∨ bn = [b1, b2, ..., bn] E.
The following propositions are examples of what may be called step identities. These are identities in GC(n) which follow strictly from the join and meet and the steps of the extensors.

Proposition 1.17 Let A, B be extensors such that step(A) + step(B) = n. Then

A ∨ B = (A ∧ B) ∨ E.
Proposition 1.18 Let A, B, C be extensors such that step(A) + step(B) + step(C) = n; then

A ∧ (B ∨ C) = [A, B, C] = (A ∨ B) ∧ C.
The following proposition may be regarded as a generalization of Proposition 1.18.
The polynomial P is said to be properly step k if P is of step k with no proper
subpolynomial of P evaluating to step 0.
Proposition 1.19 Let {Ai} be an ordered set of extensors in a Grassmann-Cayley algebra of step n such that Σi step(Ai) = n. Let P(Ai, ∨, ∧) be a properly step 0 extensor polynomial and let Q(Ai, ∨, ∧) be another properly step 0 extensor polynomial on the same ordered set {Ai} of extensors, where the operations ∨ and ∧ have been interchanged at will, subject to the condition that step(Q(Ai, ∨, ∧)) = 0. Then Q = ±P as extensors.
PROOF. For any P(Ai, ∨, ∧) either P = R ∨ S or P = R ∧ S, but as step(P) = 0 and step(P') ≠ 0 for any proper subexpression P' of P, only P = R ∧ S is possible. Further, we may assume step(R) = k and step(S) = n − k, since step(P) = 0 iff R and S have complementary step. Assume that k ≤ n − k. Let {Bi} ⊆ {Ai} be the subset of the extensors used to form the extensor R. If Σi step(Bi) > k, then since Σi step(Ai) = n the complementary set of extensors {Ci} = {Ai} \ {Bi} must satisfy Σi step(Ci) < n − k. Since step(A ∧ B) ≤ step(A), step(A ∧ B) ≤ step(B), and step(A ∨ B) = step(A) + step(B) (assuming A ∨ B non-zero), S({Ci}, ∨, ∧) would have step less than n − k = step(S). It follows that Σi step(Bi) = k and Σi step(Ci) = n − k. If R contained any operation ∧ then step(R) < k, so R = ±B1 ∨ B2 ∨ ··· ∨ Bi, S = ±C1 ∨ C2 ∨ ··· ∨ Cj, and

P = (B1 ∨ B2 ∨ ··· ∨ Bi) ∧ (C1 ∨ C2 ∨ ··· ∨ Cj) = [B1, B2, ..., Bi, C1, C2, ..., Cj].

Now let Q be any other extensor polynomial satisfying the hypothesis of the theorem. Q is a polynomial in ∨ and ∧ on the same ordered set of extensors {Ai} = {B1, B2, ..., Bi, C1, C2, ..., Cj}. Without loss of generality we may write Q = ±R' ∧ S' with R' = B1 ∨ B2 ∨ ··· ∨ Bi' and i' ≤ i. Then

Q = ±(B1 ∨ B2 ∨ ··· ∨ Bi') ∧ (Bi'+1 ∨ ··· ∨ Bi ∨ C1 ∨ ··· ∨ Cj) = ±[B1, B2, ..., Bi', Bi'+1, ..., Bi, C1, C2, ..., Cj] = ±P. □
Corollary 1.20 Let A1, A2, ..., Am be extensors in GC(n) such that Σ_{i=1}^{m} step(Ai) = n. Then for any i, j < m the following is an identity:

(A1 ∨ A2 ∨ ··· ∨ Ai) ∧ (Ai+1 ∨ ··· ∨ Am) = (A1 ∨ A2 ∨ ··· ∨ Aj) ∧ (Aj+1 ∨ ··· ∨ Am)
Example 1.21 If the sum of the steps of A, B, C, D, E, F is n then ((A ∧ B) ∨ C) ∧ (D ∨ (E ∧ F)) = (A ∧ B) ∧ (C ∨ D ∨ (E ∧ F)) is a GC identity.
Proposition 1.22 Let P be a non-zero extensor polynomial of step k > 0 in GC(n). Then P = ⋁i Zi ∨ R, where the Zi and R are extensor polynomials with step(Zi) = 0 for all i, and R is properly step k.
PROOF. The polynomial P involves a set {Ai} of extensors in the operations ∨ and ∧. If a subexpression Q ⊆ P has step 0, then if P is non-zero and Q ≠ P, the next outermost operation in P involving Q must be a ∨, so that Q ∨ S ⊆ P for some S. If step(S) = 0 set Q ← Q ∨ S and repeat this step. Hence (Q ∨ S) ∧ T ⊆ P occurs with step(S) > 0, step(T) > 0, unless the Proposition is true. In this case set [Q] S ∧ T ← (Q ∨ S) ∧ T. On the other hand, if Q is of step n, since P is non-zero, the next outermost parenthesization containing Q must be of the form Q ∧ R = [Q] R. In either case, factoring the scalar to the left, by induction an extensor R of step k times a product of step 0 brackets remains. □
Proposition 1.23 Let P(Ai, ∨, ∧) be a properly step 0 non-zero extensor polynomial in GC(n) in ∨, ∧ and extensors A1, A2, ..., Am, such that Σi step(Ai) = kn for some k > 0. Then P contains k occurrences of the meet ∧, and m − 1 − k occurrences of the join ∨.
PROOF. The operations ∨ and ∧ are binary operations over the alphabet of extensors {Ai, i = 1, ..., m}, and P(Ai, ∨, ∧) is a parenthesized word; hence the total number of occurrences of ∨ and ∧ is m − 1. We now show the number of ∧ occurrences must be k. If any join or meet of a subexpression of P vanishes then the extensor P = 0. Recursively evaluate step(P) by applying the rules step(R ∨ S) = step(R) + step(S) and step(R ∧ S) = step(R) + step(S) − n. Let T be the binary tree corresponding to a parenthesized word, whose vertices represent recursively evaluated subexpressions, and in which a node Q has children R and S if Q = R ∨ S or Q = R ∧ S. Evaluate the step of P recursively, labeling each node of T by the step of the corresponding extensor. It is evident that the root of T has label

step(P) = 0 = Σi step(Ai) − (# of occurrences of ∧) · n.
Since Σi step(Ai) = kn it follows that there must be k occurrences of the meet in P. □
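The step bookkeeping in this proof is easy to mechanize. The following sketch (our own; the names are ours) evaluates the step of a parenthesized word by the two rules above and counts meets, for n = 5 and extensor steps summing to 2n, so that k = 2 meets are forced:

```python
n = 5  # step of the ambient GC algebra

# an expression is an int (the step of an extensor) or a tuple (op, left, right)
def step(expr):
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    s = step(left) + step(right)
    return s if op == 'v' else s - n   # join adds the steps, meet subtracts n

def meets(expr):
    if isinstance(expr, int):
        return 0
    op, left, right = expr
    return meets(left) + meets(right) + (1 if op == '^' else 0)

# extensor steps 2, 3, 2, 2, 1 sum to 2n = 10: a properly step 0 word has 2 meets
P = ('^', ('^', ('v', 2, 3), ('v', 2, 2)), 1)
print(step(P), meets(P))  # 0 2
```

The word has m = 5 extensors, hence m − 1 = 4 operations, of which k = 2 are meets and m − 1 − k = 2 are joins, exactly as Proposition 1.23 predicts.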
Proposition 1.24 Let P(Ai, ∨, ∧) be a non-zero extensor polynomial in GC(n). Then step(P) = k if and only if Σi step(Ai) ≡ k (mod n).

PROOF. By the formula derived in Proposition 1.23 we have

Σi step(Ai) − (# of occurrences of ∧) · n = step(P). □
Proposition 1.25 Any two extensor polynomials P(Ai, ∨, ∧) and Q(Bi, ∨, ∧) having the same sum of steps of extensors consequently have the same number of join and the same number of meet operations.

PROOF. Assume P has step k. By Proposition 1.23, step(P) = Σi step(Ai) − (#meets) · n, which implies that the number of meets in P is (Σi step(Ai) − step(P))/n. By Proposition 1.24, Σi step(Ai) = tn + k for some t ≥ 0. Then the number of meets in P is in fact (tn + k − k)/n = t. Similarly, the number of meets in Q is t, and the number of joins in both P and Q must be equal as well. □
Corollary 1.26 Let {Ai}, {Bj} be sets of extensors, each formed by joining vectors from a fixed alphabet A. Then any two non-zero polynomials P(Ai, ∨, ∧) and Q(Bj, ∨, ∧) having the same number of occurrences of each vector from A have the same step and the same number of meets and joins.

PROOF. Provided Ai is non-zero, the step of an extensor Ai is the number of vectors joined in Ai. Since each vector occurs the same number of times in each polynomial, Σi step(Ai) = Σj step(Bj) as both are non-zero. □
The meet operation defines a second exterior algebra structure on the vector space G(V). The duality operator connecting the two is the Hodge star operator [WVDH46, MB85]. Given a linearly ordered basis {a1, a2, ..., an}, the associated cobasis of covectors of V is the set of covectors {ā1, ..., ān} where

āi = [ai, a1, ..., âi, ..., an]^{−1} a1 ∨ ··· ∨ âi ∨ ··· ∨ an.
Let {a1, a2, ..., an} be a linearly ordered basis of V. The Hodge star operator relative to the basis {a1, a2, ..., an} is defined to be the (unique) linear operator * : G(V) → G(V) such that *1 = E and, for every subset S of {1, 2, ..., n},

*(ai1 ∨ ··· ∨ aik) = (−1)^{i1+···+ik − k(k+1)/2} [a1, ..., an]^{−1} ap1 ∨ ··· ∨ ap(n−k)

where, if S = {i1, ..., ik} with i1 < i2 < ··· < ik, then S^c = {p1, ..., p(n−k)} and p1 < ··· < p(n−k). This definition is equivalent to setting *1 = E and *(ai1 ∨ ··· ∨ aik) = āi1 ∧ ··· ∧ āik, where {ā1, ..., ān} is the associated cobasis of covectors of {a1, ..., an}.
We shall require the following two propositions, whose proofs can be found in [MB85].

Proposition 1.27 A Hodge star operator is an algebra isomorphism between the exterior algebra of the join (G(V), ∨) and the exterior algebra of the meet (G(V), ∧). Moreover, it maps the set of extensors of Gk(V) onto the set of extensors of Gn−k(V).

When the basis is unimodular, the Hodge star operator implements the duality between join and meet. In this case we can state the duality principle: for any identity p(A1, ..., Ap) = 0 between extensors Ai of steps ki, joins and meets, holding in a Grassmann-Cayley algebra, the identity p̄(Ā1, ..., Āp) = 0 obtained by exchanging joins and meets and replacing each Ai by an extensor Āi of step n − ki also holds.
Proposition 1.28 Let the linearly ordered basis {a1, ..., an} be unimodular. Then the Hodge star operator * relative to {a1, ..., an} satisfies the following:

1. * maps extensors of step k to extensors of step n − k.
2. *(x ∨ y) = (*x) ∧ (*y) and *(x ∧ y) = (*x) ∨ (*y), for every x, y ∈ G(V).
3. *1 = E and *E = 1.
4. *(*x) = (−1)^{k(n−k)} x, for every x ∈ Gk(V).
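Item 4 can be read off from the sign in the definition of the star: applying * twice picks up the signs attached to a subset S and to its complement. The sketch below (our own check, for a unimodular basis so the bracket factor is 1) verifies that their product is always (−1)^{k(n−k)}:

```python
from itertools import combinations

def star_sign(S, k):
    # the sign (-1)^(i1+...+ik - k(k+1)/2) from the definition of the Hodge star
    return (-1) ** (sum(S) - k * (k + 1) // 2)

n = 6
for k in range(n + 1):
    for S in combinations(range(1, n + 1), k):
        comp = tuple(i for i in range(1, n + 1) if i not in S)
        # applying * twice multiplies the two signs, giving *(*x) = (-1)^(k(n-k)) x
        assert star_sign(S, k) * star_sign(comp, n - k) == (-1) ** (k * (n - k))
print("ok")
```

The check succeeds because (sum of S) + (sum of S^c) = n(n+1)/2, and n(n+1)/2 − k(k+1)/2 − (n−k)(n−k+1)/2 simplifies to exactly k(n − k).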
1.5 Alternative Laws
Let X = (xi,j) be a generic n × d matrix over the complex numbers, and let C[xi,j] denote the corresponding polynomial ring in nd variables. The matrix X may be thought of as a configuration of n vectors in the vector space C^d. These vectors also represent a configuration of n points in d − 1 dimensional projective space P^{d−1}. Consider the set Λ(n, d) = {[λ1, ..., λd] | 1 ≤ λ1 < λ2 < ··· < λd ≤ n} of ordered d-tuples in [n], whose elements are the brackets. Define C[Λ(n, d)] to be the polynomial
ring generated by the (n choose d)-element set Λ(n, d). The algebra homomorphism φn,d : C[Λ(n, d)] → C[xi,j] is defined, for a bracket [λ] = [λ1, ..., λd], by

φn,d([λ]) = det | xλ1,1  xλ1,2  ···  xλ1,d |
                | xλ2,1  xλ2,2  ···  xλ2,d |
                |   ⋮      ⋮           ⋮   |
                | xλd,1  xλd,2  ···  xλd,d |

which maps each bracket [λ] to the d × d subdeterminant of X whose rows are indexed by λ. The image of the ring map φn,d coincides with the subring Bn,d of C[xi,j] generated by the d × d minors of X, called the bracket ring. The ring map φn,d is in general not injective, and if In,d ⊆ C[Λ(n, d)] denotes the kernel of φn,d, this ideal is called the ideal of syzygies. It is not difficult to see that Bn,d ≅ C[Λ(n, d)]/In,d. It is shown in Sturmfels [BS89] that an explicit Gröbner basis exists for the ideal In,d and that the standard tableaux form a C-vector space basis for the bracket ring Bn,d.
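The smallest interesting case is n = 4, d = 2, where the kernel contains the classical three-term Grassmann-Plücker relation. A quick numerical sanity check (our own sketch; names are ours) confirms that it maps to 0 under φ4,2:

```python
import random

rng = random.Random(2)
n, d = 4, 2
X = [[rng.randint(-9, 9) for _ in range(d)] for _ in range(n)]  # 4 points in C^2

def bracket(i, j):
    # phi_{4,2}([i j]): the 2 x 2 minor of X on rows i and j
    return X[i][0] * X[j][1] - X[i][1] * X[j][0]

# the three-term Grassmann-Pluecker relation, an element of the kernel I_{4,2}
syzygy = (bracket(0, 1) * bracket(2, 3)
        - bracket(0, 2) * bracket(1, 3)
        + bracket(0, 3) * bracket(1, 2))
print(syzygy)  # 0
```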
The group SL(C^d) of d × d matrices with determinant 1 acts on the right on the ring C[xi,j] of polynomial functions on a generic n × d matrix X = (xi,j). The two fundamental theorems of invariant theory give an explicit description of the invariant ring C[xi,j]^{SL(C^d)}, although only the first is relevant here. Every d × d minor of X is invariant under SL(C^d), and therefore Bn,d ⊆ C[xi,j]^{SL(C^d)}. The equality of these rings is given by the following.
Theorem 1.29 (First Fundamental Theorem of Invariant Theory) The invariant ring C[xi,j]^{SL(C^d)} is generated by the d × d minors of the matrix X = (xi,j).
Alternative laws were first introduced by Rota in [MB85] and are useful for calculation in C[xi,j]^{SL(C^d)}. An alternative law is an identity which can be used in simplifying expressions containing joins and meets of extensors of different step. The two laws we use, given in Propositions 1.30 and 1.36, are most closely related to the Laplace expansion for determinants. We use the following notational convention throughout: juxtaposition of vectors a1a2 ··· as shall denote their join a1 ∨ a2 ∨ ··· ∨ as, while juxtaposition of covectors X1X2 ··· Xk denotes their meet X1 ∧ X2 ∧ ··· ∧ Xk.
Proposition 1.30 Let a1, a2, ..., ak be vectors and X1, X2, ..., Xs covectors, with k > s. Set A = a1a2 ··· ak; then:

A ∧ (X1 ∧ X2 ∧ ··· ∧ Xs) = Σ_{(A1,...,As+1) ∈ S(A; 1,...,1,k−s)} sgn(A1, A2, ..., As+1) [A1, X1][A2, X2] ··· [As, Xs] As+1
The proof relies essentially on the associativity of the meet operation.

PROOF. Associating the covector X1 to A and computing the meet by splitting the extensor A we obtain

A ∧ (X1 ∧ ··· ∧ Xs) = (A ∧ X1) ∧ (X2 ∧ ··· ∧ Xs) = ( Σ_{(A1,B2) ∈ S(A;1,k−1)} sgn(A1, B2) [A1, X1] B2 ) ∧ (X2 ∧ ··· ∧ Xs)

Again associating B2 with X2 and computing the meet B2 ∧ X2 by splitting the extensor B2, we see that the above expression equals

Σ_{(A1,B2) ∈ S(A;1,k−1)} sgn(A1, B2) [A1, X1] ( Σ_{(A2,B3) ∈ S(B2;1,k−2)} sgn(A2, B3) [A2, X2] B3 ) ∧ (X3 ∧ ··· ∧ Xs) = Σ_{(A1,A2,B3) ∈ S(A;1,1,k−2)} sgn(A1, A2, B3) [A1, X1][A2, X2] B3 ∧ (X3 ∧ ··· ∧ Xs).

Continuing in this way, the expression on the right side is obtained. □
Example 1.31 (a1 ∨ a2 ∨ a3) ∧ (X1 ∧ X2) = [a1, X1][a2, X2]a3 − [a1, X1][a3, X2]a2 − [a2, X1][a1, X2]a3 + [a2, X1][a3, X2]a1 + [a3, X1][a1, X2]a2 − [a3, X1][a2, X2]a1
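In step 3 the example can be verified numerically. Representing covectors as row functionals, the meet X1 ∧ X2 of two covectors is, under the unimodular conventions, the intersection point computed by the cross product, and the left side is [a1, a2, a3] times that point. The sketch below is our own illustration (helper names ours):

```python
import random
from itertools import permutations

rng = random.Random(3)
a = [[rng.randint(-9, 9) for _ in range(3)] for _ in range(3)]   # vectors
X = [[rng.randint(-9, 9) for _ in range(3)] for _ in range(2)]   # covectors

pair = lambda v, f: sum(vi * fi for vi, fi in zip(v, f))         # the bracket [a, X]
cross = lambda u, v: [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def det3(p, q, r):
    return pair(p, cross(q, r))      # bracket [p, q, r]

def sgn(p):
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

# left side: (a1 v a2 v a3) ^ (X1 ^ X2) = [a1, a2, a3] (X1 ^ X2)
lhs = [det3(*a) * c for c in cross(X[0], X[1])]

# right side: the six signed terms of Example 1.31
rhs = [0, 0, 0]
for p in permutations(range(3)):
    i, j, k = p
    coeff = sgn(p) * pair(a[i], X[0]) * pair(a[j], X[1])
    rhs = [r + coeff * t for r, t in zip(rhs, a[k])]

print(lhs == rhs)  # True
```

Both sides are multilinear and alternating in (a1, a2, a3), which is why checking a random integer instance is a meaningful test of the expansion.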
Corollary 1.32 Let a1, a2, ..., an be vectors and X1, X2, ..., Xn be covectors; then

(a1 ∨ ··· ∨ an) ∧ (X1 ∧ ··· ∧ Xn) = det([ai, Xj])_{i,j=1,2,...,n}

The double bracket of covectors X1, X2, ..., Xn, denoted [[X1, X2, ..., Xn]], is defined to be X1 ∧ X2 ∧ ··· ∧ Xn. We may conclude from the properties of the meet that the double bracket is also non-degenerate and is of step zero, a scalar. Thus, the vector space spanned by the covectors is of dimension n. A set of covectors with non-zero double bracket constitutes a basis of covectors. In this case, a corresponding basis of vectors a1, a2, ..., an can be found satisfying Xi = a1 ··· âi ··· an. A simple calculation shows that [[X1, ..., Xn]] = [a1, ..., an]^{n−1}, an identity known as Cauchy's theorem on the adjugate.
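The name comes from the classical determinant identity det(adj A) = det(A)^{n−1}, since the covectors a1 ··· âi ··· an have the signed (n−1)-minors of the basis matrix as coordinates. For n = 3 the identity is easy to confirm exactly (our own sketch, integer arithmetic):

```python
import random

rng = random.Random(4)
A = [[rng.randint(-9, 9) for _ in range(3)] for _ in range(3)]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def cofactor(m, i, j):
    r = [t for t in range(3) if t != i]
    c = [t for t in range(3) if t != j]
    minor = m[r[0]][c[0]] * m[r[1]][c[1]] - m[r[0]][c[1]] * m[r[1]][c[0]]
    return (-1) ** (i + j) * minor

adj = [[cofactor(A, j, i) for j in range(3)] for i in range(3)]  # transpose of cofactors
print(det3(adj) == det3(A) ** 2)  # True: det(adj A) = det(A)^(n-1) with n - 1 = 2
```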
Following the notation of [MB85], we shall have need of the notion of the split of an extensor written as the meet of covectors. If A is an extensor and

A = X1 ∧ ··· ∧ Xk

with X1, X2, ..., Xk covectors, then given an ordered s-tuple of non-negative integers k1, k2, ..., ks such that k1 + ··· + ks = k, a cosplit of type (k1, ..., ks) of the extensor A = X1 ∧ ··· ∧ Xk is an ordered s-tuple of extensors (A1, ..., As) such that

1. Ai = E if ki = 0 and Ai = Xi1 ∧ Xi2 ∧ ··· ∧ Xiki if ki ≠ 0.
2. Ai ∧ Aj ≠ 0.
3. A1 ∧ A2 ∧ ··· ∧ As = ±A.

Denote by C(X1 ∧ ··· ∧ Xk; k1, ..., ks) the set of all cosplits of type (k1, ..., ks) of the set of covectors {X1, ..., Xk}. Define as above

sgn(A1, A2, ..., As) = 1 if A1 ∧ A2 ∧ ··· ∧ As = A, and −1 if A1 ∧ A2 ∧ ··· ∧ As = −A.
We have a dual expansion for covectors, whose proof follows easily from Hodge duality.

Proposition 1.33 Let A1, A2, ..., Ak and B1, B2, ..., Bp be covectors in a Peano space V of step n with k + p ≥ n. Setting A' = A1 ∧ A2 ∧ ··· ∧ Ak and B' = B1 ∧ B2 ∧ ··· ∧ Bp, the following identity holds:

Σ_{(A'1,A'2) ∈ C(A'; n−p, k+p−n)} sgn(A'1, A'2) [[A'1, B']] A'2 = Σ_{(B'1,B'2) ∈ C(B'; k+p−n, n−k)} sgn(B'1, B'2) [[A', B'2]] B'1    (1.15)

Given A = X1 ∧ ··· ∧ Xk, B = Y1 ∧ ··· ∧ Yp with k, p ≥ 1, define A ∨ B = 0 if k + p < n, and A ∨ B equivalently by either side of (1.15) if k + p ≥ n.
Proposition 1.34 Every non-trivial linear combination of covectors is a covector.

PROOF. It suffices to prove the assertion in the case of a linear combination of two covectors α and β, α ≠ β. Let the linear combination be hα + kβ, h, k ∈ K. The meet α ∧ β has step ((n − 1) + (n − 1)) − n = n − 2. There exist vectors e, f such that

α = (α ∧ β) ∨ e,   β = (α ∧ β) ∨ f.

Hence hα + kβ = (α ∧ β) ∨ he + (α ∧ β) ∨ kf; since h, k are scalars this last expression equals (α ∧ β) ∨ (he + kf), which is a covector. □
Proposition 1.35 Every extensor of step k < n can be written as the meet of n − k covectors.

The dual proposition to 1.30 is now easily stated and proved.

Proposition 1.36 Let X1, ..., Xk be covectors and a1, ..., as be vectors, with k > s. Set A = X1 ∧ X2 ∧ ··· ∧ Xk. Then

A ∨ (a1 ∨ ··· ∨ as) = Σ_{(A1,...,As+1) ∈ C(A; 1,1,...,k−s)} sgn(A1, ..., As+1) [A1, a1][A2, a2] ··· [As, as] As+1

PROOF. Apply the Hodge star operator relative to a unimodular basis to both sides of the identity in Proposition 1.30. □
1.6 Geometric Identities
The Grassmann-Cayley algebra is an invariant algebraic language for describing statements in synthetic projective geometry. The modern version was developed in the 1970's by Gian-Carlo Rota and his collaborators. The fundamental relation in a GC algebra is the notion of a geometric identity. Informally, this is an identity amongst extensor polynomials involving only join and meet, and was first made precise in Barnabei, Brini, and Rota [MB85]. We shall now precisely define a geometric identity.

Let S be a set, whose elements are called variables, with a grading g, namely a function from S to the set of non-negative integers. We say that e ∈ S is of step k if g(e) = k. Consider the free algebra, with no identities or associativity conditions, generated by S with two binary operations ∨f and ∧f, which we call the free join and free meet respectively. This free algebra is denoted Pf(S), and called the anarchic GC algebra on the graded set S. An element of Pf(S) is a parenthesized polynomial, in the sense of universal algebra, in the operations of free join and free meet. In terms of the anarchic algebra the notion of geometric identity can be made precise.

Let V be a Peano space of step n over a field K and write E(V) for the set of all extensors of V. Let S be a set with a grading g. Given a function φ0 : S → E(V) such that φ0(e) is of step k if g(e) = k < n and φ0(e) = 0 otherwise, define a canonical map φ associated to φ0 from the anarchic algebra Pf(S) to E(V) as follows:
1. For every element e ∈ S, φ(e) = φ0(e).
2. If p ∈ Pf(S) and p = p1 ∨f p2, set φ(p) = φ(p1) ∨ φ(p2); if p ∈ Pf(S) and p = p1 ∧f p2, set φ(p) = φ(p1) ∧ φ(p2).

This defines φ(p) for every p ∈ Pf(S), by induction on the number of pairs of parentheses of the polynomial p. We say that a pair of polynomials (p, q) with p, q ∈ Pf(S) is a geometric identity of dimension n over the field K when φ(p) = φ(q) for every canonical map φ of the anarchic algebra Pf(S) to the set of extensors E(V) of a Grassmann-Cayley algebra of step n over the field K.
The salient property that distinguishes a geometric identity from an equality in a Peano space is that a geometric identity contains no summands, and contains brackets only to balance the homogeneity of occurrences of the variables. Thus geometric identities allow direct geometric interpretation via Propositions 1.5 and 1.9. We now give several examples of geometric identities using the proof techniques developed thus far. In particular, by close study of an identity of Rota [PD76] in the exterior algebra of a Peano space, we are able to give a new geometric identity for Pappus' Theorem, a result perhaps believed by some not to be expressible in a Grassmann-Cayley algebra. To illustrate we begin with some simple examples of geometric identities. The first is often taken as an axiom in projective geometry.

Proposition 1.37 The intersections of the opposite diagonals of a complete quadrilateral are never collinear.
PROOF. The following is an identity in a GC algebra of step 3:

((ab ∧ cd) ∨ (ac ∧ bd)) ∧ (ad ∧ bc) = 2[abc][abd][acd][bcd]

as follows from expanding the left hand side as

([abd]c − [abc]d) ∨ ([acd]b − [acb]d) ∨ ([adc]b − [adb]c).

The non-zero terms of this expression may be rearranged as the right side. The rank one extensors a, b, c, d may now be interpreted, modulo the field of scalars, as points in the projective plane P2. Assuming that the points a, b, c, d are distinct, so that the lines ab, cd, ac, bd exist, the expressions ab ∧ cd, ac ∧ bd and ad ∧ bc represent points in the projective plane. Form the line (ab ∧ cd) ∨ (ac ∧ bd). If this line is non-zero then by Proposition 1.5 the left side vanishes precisely when the point ad ∧ bc lies on the line, i.e. the three points are collinear. Since the above is an algebraic identity, the right side vanishes as well. The brackets of the right side are elements of the field K, so this happens only if one of the four brackets vanishes, i.e., one of the four triples of points is collinear. This contradicts that a, b, c, d is a complete quadrilateral. □

Figure 1.1: The Theorem of Pappus
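The identity can be confirmed by exact integer computation. The sketch below is our own (helper names ours), using the splitting (a ∨ b) ∧ (c ∨ d) = [abd]c − [abc]d; with this expansion order the overall scalar comes out as −2, the sign of the factor depending on the convention chosen for the meet and being irrelevant to the geometric conclusion:

```python
import random

def det3(p, q, r):
    # bracket [p, q, r] in a Peano space of step 3
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def meet(a, b, c, d):
    # (a v b) ^ (c v d) = [abd]c - [abc]d
    ab_d, ab_c = det3(a, b, d), det3(a, b, c)
    return [ab_d * ci - ab_c * di for ci, di in zip(c, d)]

rng = random.Random(5)
a, b, c, d = ([rng.randint(-9, 9) for _ in range(3)] for _ in range(4))

p1 = meet(a, b, c, d)   # ab ^ cd
p2 = meet(a, c, b, d)   # ac ^ bd
p3 = meet(a, d, b, c)   # ad ^ bc

lhs = det3(p1, p2, p3)  # ((ab ^ cd) v (ac ^ bd)) ^ (ad ^ bc)
rhs = det3(a, b, c) * det3(a, b, d) * det3(a, c, d) * det3(b, c, d)
print(lhs == -2 * rhs)  # True
```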
Each extensor polynomial with parenthesization in a GC algebra determines a maximal geometry GP: letting P = Q ∧ R (interchangeably Q ∨ R) be of step 0, if Q and R are non-zero with corresponding geometries GQ and GR, then P vanishes if GQ and GR lie on a common hyperplane, by Proposition 1.9. The weak order on the set of finite geometries in projective n-space has as points the set of finite geometries, with G ≤ H if every incidence of H is an incidence of G. Thus

Proposition 1.38 Every extensor polynomial P in GC(n) vanishes on a set of geometries G ≤ GP in the weak order of some finite geometry.
An identity in a Grassmann-Cayley algebra of step 3 for Pappus' theorem, involving joins and meets alone, may be achieved as a consequence of a proposition following the statement of Theorem 1.39. This identity appears in Hawrylycz [Haw94].

Theorem 1.39 (Pappus)

(bc' ∧ b'c) ∨ (ca' ∧ c'a) ∨ (ab' ∧ a'b) = (c'b ∧ b'c) ∨ (ca' ∧ ab) ∨ (ab' ∧ a'c').

Proposition 1.40 If a, b, c, a', b', c' are any six points in the plane then the expression

(bc' ∧ b'c) ∨ (ca' ∧ c'a) ∨ (ab' ∧ a'b)

changes at most in sign under any permutation of the points.
PROOF.
sion as
We may first expand each of the parenthesized terms to write this expres([bc'c]b' - [bc'b']c) V ([ca'a]c' - [ca'c']a) V ([abb]a' - [ab'a']b).
Expanding further;
+[bc'c][ca'a][ab'b][b'ca']- [bc'c][ca'a][ab'a'][b'c'b]
- [bc'c][ca'c'][ab'b][b'aa']+ [bc'c][ca'c'][ab'a'][b'ab]
- [bc'b'][ca'a][ab'b][cc'a' + [bc'b'][ca'a][a'][
[cc'b]
+[bc'b'][ca'c'][ab'b][caa']- [bc'b'][ca'c'][ab'a'][cab]
of which the only surviving terms are
[bc'c][ca'a][ab'b][b'c'da']- [bc'b][ca'c'][aba'][cab].
(1.16)
Expression 1.16 changes sign alone under interchanging any of the points
in the pairs bc', b'c, ca', c'a, ab', a'b. If the points a, b are interchanged then the sum
of 1.16 and the expression obtained may be written as
([aa'b'][bb'c']- [ba'b'][ab'c'])[cc'a'][abc]+ ([cac'][bca'] - [bcc'][caa'])[abb'][a'b'c'].
By expanding the meet aa'bA b' in two different ways by Proposition 1.8 we obtain
the identity [aa'b']b+ [a'bb']a + [bab']a' = [aa'b]b' which may be meeted with b'c' to
obtain
(1.17)
[aa'b'][bb'c']+ [a'bb'][ab'c']+ [bab'][a'bc']= 0.
Substituting relation 1.17 into the sum above, the first term in parentheses reduces to
[abb'][a'b'c']. By similarly splitting the extensor abc A c' and meeting with a'c we
obtain
[cac'][bca'] + [cbc'][caa'] + [cc'a'][abc] = 0,
which reduces the second parenthesized term to -[cc'a'][abc], and thus the sum vanishes. By symmetry the expression changes sign under any transposition of unprimed
or primed letters, and the proposition follows. □
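A numeric spot-check of this antisymmetry is easy in the homogeneous-coordinate model of the plane. The sketch below is an illustration only (the function names are ours, not the thesis's): joins of points and meets of lines are both cross products, and the join of three points is a 3 x 3 determinant, so the Pappus expression is an exact integer.

```python
# Illustrative sketch (not part of the thesis): model GC(3) over the integers.
import random

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def det3(p, q, r):
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def pappus_expr(a, b, c, ap, bp, cp):
    """(bc' ^ b'c) v (ca' ^ c'a) v (ab' ^ a'b) as an exact integer."""
    m1 = cross(cross(b, cp), cross(bp, c))
    m2 = cross(cross(c, ap), cross(cp, a))
    m3 = cross(cross(a, bp), cross(ap, b))
    return det3(m1, m2, m3)

random.seed(1)
for _ in range(20):
    a, b, c, ap, bp, cp = [tuple(random.randint(-9, 9) for _ in range(3))
                           for _ in range(6)]
    e0 = pappus_expr(a, b, c, ap, bp, cp)
    # interchanging b and c' (Theorem 1.39), or a and b (as in the proof
    # above), changes the expression at most in sign
    assert abs(pappus_expr(a, cp, c, ap, bp, b)) == abs(e0)
    assert abs(pappus_expr(b, a, c, ap, bp, cp)) == abs(e0)
```

Because all arithmetic is over the integers, the sign claim is exercised exactly, with no floating-point error.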
From Proposition 1.40 we obtain an identity in joins and meets alone which yields
Pappus' theorem.
Theorem 1.41 (Pappus) If a, b, c are collinear and a', b', c' are collinear and all
distinct, then the intersections ab' ∩ a'b, bc' ∩ b'c and ca' ∩ c'a are collinear.
CHAPTER 1. THE GRASSMANN-CAYLEY ALGEBRA
PROOF. That the identity 1.39 is valid follows from Proposition 1.40 by interchanging the vectors b and c' on the right side, or by direct expansion. To obtain
Pappus' theorem, assume a, b, c and a', b', c' are two sets of three collinear points. Now
ca' A ab = c[a'ab] and ab' A a'c' = -b'[aa'c'], and so the right side of 1.39 reduces to
(c'b A b'c) V c V b' = 0. Hence the left hand side vanishes as well. But this side of the
identity vanishes most generally if the three points ab' ∩ a'b, bc' ∩ b'c, and ca' ∩ c'a are
collinear. Applying Proposition 1.9 the result follows. □
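Pappus' theorem itself can be checked exactly in the same integer model (an illustrative sketch under our own naming, not the thesis's): the determinant of the three intersection points vanishes identically when the two triples are collinear.

```python
# Illustrative sketch: Pappus' theorem with exact integer arithmetic.
def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def det3(p, q, r):
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def pappus_points(a, b, c, ap, bp, cp):
    m1 = cross(cross(b, cp), cross(bp, c))   # bc' ^ b'c
    m2 = cross(cross(c, ap), cross(cp, a))   # ca' ^ c'a
    m3 = cross(cross(a, bp), cross(ap, b))   # ab' ^ a'b
    return m1, m2, m3

# a, b, c on the line y = 0; a', b', c' on the line y = 1
a, b, c = (0, 0, 1), (1, 0, 1), (3, 0, 1)
ap, bp, cp = (0, 1, 1), (2, 1, 1), (5, 1, 1)
m1, m2, m3 = pappus_points(a, b, c, ap, bp, cp)
assert det3(m1, m2, m3) == 0   # the three intersections are collinear
assert any(x != 0 for x in m1) # and are genuine points
```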
Following Forder [For60] we define,
Definition 1.42 Given the ordered basis a = {a_1, ..., a_n} of a GC algebra of step
n, and an extensor e = a_1 ··· a_l of step 0 < l ≤ n, define the supplement |e to be
the unique extensor [a_1, ..., a_l, a'_1, ..., a'_{n-l}] a'_1 ··· a'_{n-l}, where {a'_1, ..., a'_{n-l}} ⊆ a is
the set of basis elements linearly independent from {a_1, ..., a_l}. In the case where
the basis a is unimodular we may write
|a_1 ··· a_l = a_{l+1} ··· a_n.
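In the unimodular case the supplement is pure bookkeeping: the complementary basis elements, weighted by the sign of the permutation that sorts the chosen indices followed by their complement. A minimal sketch (our own helper, assuming the unimodular case only):

```python
# Illustrative sketch of Definition 1.42 for a unimodular basis: the supplement
# of a basis extensor is the complementary basis extensor, with coefficient
# equal to the sign of the permutation (chosen indices, complement).
def supplement(indices, n):
    comp = [i for i in range(n) if i not in indices]
    perm = list(indices) + comp
    sign = 1
    for i in range(len(perm)):          # parity by inversion count
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign, comp

assert supplement([0, 1], 3) == (1, [2])      # |a1a2 = a3 in step 3
assert supplement([0, 2], 4) == (-1, [1, 3])  # |a1a3 = -a2a4 in step 4
```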
Let A = {a_1, ..., a_n} and B = {b_1, ..., b_n} be two alphabets of linearly ordered
vectors in a GC algebra of step n. Let k be an integer 1 ≤ k ≤ n, and let A_i^(k)
denote the step k extensor a_i V a_{i+1} V ··· V a_{i+k-1}, with indices taken mod n and
then reordered with respect to A. Given 1 ≤ i ≤ n, let γ_{a_i} denote the covector
a_1 a_2 ··· a_{i-1} a_{i+1} ··· a_n. The notation B_i^(k) is similarly defined. In Hawrylycz [Haw93] we
prove the following identity in a GC algebra of step n. In Chapter 3, we obtain
identity 1.43 as a corollary of a much more general result.
Theorem 1.43 In a GC algebra of step n let k be a positive integer 1 ≤ k < n and
A_i^(k), B_i^(k) the sets of extensors of step k constructed as above. Then
[b_1, b_2, ..., b_n]^{n-1} V_{i=1}^{n} ((|A_i^(k)) b_i A A_i^(k)) =
[a_1, a_2, ..., a_n] Λ_{i=1}^{n} (B_{i+1}^{(n-k)} V (γ_{a_i} A B_{n-k+i+1}^{(k)})) E,
with all subscripts taken mod n, where | denotes the supplement and E the integral.
Theorem 1.43 leads to several interesting theorems in n-dimensional projective
space.
Theorem 1.44 (Bricard) Let a, b, c and a', b', c' be two triangles in the projective
plane. Form the lines aa', bb', cc' joining respective vertices. Then these lines intersect the opposite edges bc, ac, and ab in collinear points if and only if the joins of the
points bc A b'c', ac A a'c' and ab A a'b' to the opposite vertices a', b' and c' form three
concurrent lines.
PROOF. In GC(3) set A_1^(2) = bc, A_2^(2) = ac, A_3^(2) = ab. Then the covectors,
supplements, and B_i^(2) are determined and Theorem 1.43 yields
[a'b'c']^2 (aa' A bc) V (bb' A ac) V (cc' A ab) =
[abc] (a' V (bc A b'c')) A (b' V (ac A a'c')) A (c' V (ab A a'b')),
which interprets to give the theorem. □
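This GC(3) instance can be spot-checked numerically in the homogeneous-coordinate model used earlier (an illustrative sketch, with names of our choosing): joins and meets become cross products and determinants, and since our model fixes each operation only up to a sign convention, the two sides are compared in absolute value.

```python
# Illustrative sketch of the Theorem 1.44 identity over the integers.
import random

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def det3(p, q, r):
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def bricard_sides(a, b, c, ap, bp, cp):
    # left side: [a'b'c']^2 (aa' ^ bc) v (bb' ^ ac) v (cc' ^ ab)
    p1 = cross(cross(a, ap), cross(b, c))
    p2 = cross(cross(b, bp), cross(a, c))
    p3 = cross(cross(c, cp), cross(a, b))
    lhs = det3(ap, bp, cp) ** 2 * det3(p1, p2, p3)
    # right side: [abc] (a' v (bc ^ b'c')) ^ (b' v (ac ^ a'c')) ^ (c' v (ab ^ a'b'))
    l1 = cross(ap, cross(cross(b, c), cross(bp, cp)))
    l2 = cross(bp, cross(cross(a, c), cross(ap, cp)))
    l3 = cross(cp, cross(cross(a, b), cross(ap, bp)))
    rhs = det3(a, b, c) * det3(l1, l2, l3)
    return lhs, rhs

random.seed(2)
for _ in range(10):
    pts = [tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(6)]
    lhs, rhs = bricard_sides(*pts)
    assert abs(lhs) == abs(rhs)
```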
Theorem 1.45 In projective three space, let {ab, cd}, {bc, ad}, {cd, ab}, and {ad, bc}
be four pairs of skew lines and let {a'd', b'c'}, {a'b', c'd'}, {b'c', a'd'}, {c'd', a'b'} be
another set of pairs of skew lines. Then the four points, formed by intersecting each of the
second four lines from the first set, cd, ad, ab, bc, with the planes formed by the join
of the first four lines of that set, ab, bc, cd, ad, to the vertices a', b', c', d', are coplanar,
if and only if the four planes, formed by intersecting the planes bcd, acd, abd, abc with
the second four lines of the second set, b'c', c'd', a'd', a'b', and joining those points to
the first four lines of the second set, a'd', a'b', b'c', c'd', pass through a common point.
PROOF. In GC(4) set A_1^(2) = cd, A_2^(2) = ad, A_3^(2) = ab, A_4^(2) = bc. Then the
covectors, supplements, and B_i^(2) are determined and Theorem 1.43 yields
[a'b'c'd']^3 (aba' A cd) V (bcb' A ad) V (cdc' A ab) V (add' A bc) =
[abcd] (a'd' V (bcd A b'c')) A (a'b' V (acd A c'd')) A
(b'c' V (abd A a'd')) A (c'd' V (abc A a'b')).     (1.18)
□
We are able to give several dimension dependent theorems, including a version of
the Möbius tetrahedron theorem, a proposition not known to have any analog in
the plane or in dimensions higher than three.
Theorem 1.46 (H.M. Taylor) If a, b, c and a', b', c' are triangles and if p is a
point such that pa', pb', pc' meet bc, ca, ab respectively in collinear points, then pa, pb,
pc meet b'c', c'a', a'b' respectively in collinear points.
PROOF. Expanding each meet, by the linearity of the join we may write
[a'b'c'] (pa' A bc) V (pb' A ca) V (pc' A ab) =
[a'b'c'] ([pa'c]b - [pa'b]c) V ([pb'a]c - [pb'c]a) V ([pc'b]a - [pc'a]b) =
[a'b'c'][abc] ([pa'c][pb'a][pc'b] - [pa'b][pb'c][pc'a]).     (1.19)
This last relation remains invariant upon exchange of primed and unprimed letters
and so we conclude
(a' V b' V c') A (pa' A bc) V (pb' A ca) V (pc' A ab) =
(a V b V c) A (pa A b'c') V (pb A c'a') V (pc A a'b'),
which proves the theorem. □
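The self-duality of the bracket polynomial in 1.19 is a purely algebraic fact about 3 x 3 determinants, and can be verified exactly (an illustrative sketch with our own names; brackets [pxy] are determinants det(p, x, y)):

```python
# Illustrative sketch: the bracket polynomial of 1.19 is literally unchanged
# when primed and unprimed letters are exchanged.
import random

def det3(p, q, r):
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def taylor(p, a, b, c, ap, bp, cp):
    """[pa'c][pb'a][pc'b] - [pa'b][pb'c][pc'a] as an exact integer."""
    return (det3(p, ap, c) * det3(p, bp, a) * det3(p, cp, b)
          - det3(p, ap, b) * det3(p, bp, c) * det3(p, cp, a))

random.seed(3)
for _ in range(10):
    p, a, b, c, ap, bp, cp = [tuple(random.randint(-9, 9) for _ in range(3))
                              for _ in range(7)]
    # exchanging primed and unprimed letters leaves the polynomial unchanged
    assert taylor(p, a, b, c, ap, bp, cp) == taylor(p, ap, bp, cp, a, b, c)
```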
Theorem 1.47 (Bricard) If the meets of pa, pb, pc with the sides b'c', c'a', a'b' of triangle a'b'c', when joined to the opposite vertices, give concurrent lines, then the meets of
pa', pb', pc' with the sides bc, ca, ab of triangle abc, when joined to the opposite vertices,
give concurrent lines.
PROOF. Again by the definition of meet and linearity of the join,
(pa A b'c') V a' = [pac']b'a' - [pab']c'a',
so that
((pa A b'c') V a') A ((pb A c'a') V b') A ((pc A a'b') V c') =
[a'b'c']^2 ([pb'c][pc'a][pa'b] - [pbc'][pca'][pab']).
Thus we conclude, interchanging primed and unprimed letters,
[abc]^2 ((pa A b'c') V a') A ((pb A c'a') V b') A ((pc A a'b') V c') =
[a'b'c']^2 ((pa' A bc) V a) A ((pb' A ca) V b) A ((pc' A ab) V c),
which proves the theorem. □
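The concluding identity can be spot-checked in the same integer model (an illustrative sketch under our own naming; since the model fixes each operation only up to a sign convention, the two sides are compared in absolute value):

```python
# Illustrative sketch of the concluding identity of Theorem 1.47.
import random

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def det3(p, q, r):
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def triple(p, x, y):
    """((px1 ^ y2y3) v y1) ^ ((px2 ^ y3y1) v y2) ^ ((px3 ^ y1y2) v y3)."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    l1 = cross(cross(cross(p, x1), cross(y2, y3)), y1)
    l2 = cross(cross(cross(p, x2), cross(y3, y1)), y2)
    l3 = cross(cross(cross(p, x3), cross(y1, y2)), y3)
    return det3(l1, l2, l3)

random.seed(6)
for _ in range(10):
    p, a, b, c, ap, bp, cp = [tuple(random.randint(-9, 9) for _ in range(3))
                              for _ in range(7)]
    lhs = det3(a, b, c) ** 2 * triple(p, (a, b, c), (ap, bp, cp))
    rhs = det3(ap, bp, cp) ** 2 * triple(p, (ap, bp, cp), (a, b, c))
    assert abs(lhs) == abs(rhs)
```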
Theorem 1.48 (Möbius) If a, b, c, d and a', b', c', d' are tetrahedra and a', b', c' are
on the planes bcd, cda, dab, and a, b, c, d are on the planes b'c'd', c'd'a', d'a'b', a'b'c',
then the point d' is on the plane abc. That is, each tetrahedron has its vertices on
the faces of the other.
PROOF. The following is an identity in a GC algebra of step 4:
bca' A cab' A abc' A a'b'c' = c'b'a A c'a'b A a'b'c A abc.
Expand the left hand side as abc' A a'b'c' = [a, b, c', a']b'c' - [a, b, c', b']a'c'. Then
cab' A abc' A a'b'c' = [a, b, c', a'][c, a, b', c']b' + [a, b, c', b'][c, a, b', a']c' - [a, b, c', b'][c, a, b', c']a',
and hence the left hand side may be written as
bca' A cab' A abc' A a'b'c' =
[a, b, c', a'][c, a, b', c'][b, c, a', b'] - [a, b, b', c'][c, a, a', b'][b, c, a', c'].
In a similar manner the right hand side may be expanded to give an identical
bracket polynomial. If the left hand side is zero then the planes bca', cab', abc' meet
in a point which lies on the plane a'b'c'. In this case the right hand side is zero and
the planes b'c'a, c'a'b, a'b'c meet in a point d' which lies on the plane abc. This yields
the theorem. □
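The step 4 identity can be spot-checked numerically (an illustrative sketch with our own names): a plane through three points of P^3 is modelled by a generalized cross product of their coordinate vectors, the meet of four planes by a 4 x 4 determinant of their dual coordinates, and the two sides are compared in absolute value since the model fixes signs only up to convention.

```python
# Illustrative sketch of the step 4 identity in the Moebius proof.
import random
from itertools import permutations

def det(m):
    """Leibniz-formula determinant of a small integer matrix."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign, prod = 1, 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        for i, j in enumerate(perm):
            prod *= m[i][j]
        total += sign * prod
    return total

def plane(x, y, z):
    """Dual coordinates of the plane through x, y, z in P^3."""
    n = []
    for i in range(4):
        minor = [[v[j] for j in range(4) if j != i] for v in (x, y, z)]
        n.append((-1) ** (i + 1) * det(minor))
    return n

def side(p1, p2, p3, p4):
    """Meet of four planes, modelled as a 4x4 determinant."""
    return det([p1, p2, p3, p4])

random.seed(4)
for _ in range(5):
    a, b, c, ap, bp, cp = [tuple(random.randint(-5, 5) for _ in range(4))
                           for _ in range(6)]
    lhs = side(plane(b, c, ap), plane(c, a, bp), plane(a, b, cp), plane(ap, bp, cp))
    rhs = side(plane(cp, bp, a), plane(cp, ap, b), plane(ap, bp, c), plane(a, b, c))
    assert abs(lhs) == abs(rhs)
```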
Chapter 2
Arguesian Polynomials
We introduce a class of expressions in a Grassmann-Cayley algebra called Arguesian
polynomials, so named as each represents a projective invariant closely related to
the configuration of Desargues' theorem in the plane. Let a and X be two
alphabets, called vectors and covectors. Initially, we shall take both sets of
cardinality equal to the step n of the GC algebra and write a = {a_1, a_2, ..., a_n} and
X = {X_1, X_2, ..., X_n}. In examples we will often simply let lowercase letters denote
vectors and uppercase letters denote covectors. We shall also use the convention that
the juxtaposition of vectors denotes their join
a_1 a_2 ··· a_k = a_1 V a_2 V ··· V a_k,
while the juxtaposition of covectors denotes the meet of covectors, as these are
the only non-zero operations between extensors of step 1 and n - 1. For example,
a V BCD is an extensor of step 3, representing a plane, if its underlying GC algebra
has step 5.
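The step bookkeeping behind this example can be sketched as follows (an illustrative helper of our own, not the thesis's notation): a join of extensors adds their steps, while a meet of extensors of steps j and k in a GC algebra of step n has step j + k - n.

```python
# Illustrative step arithmetic in a GC algebra of step n:
# join adds steps; a meet of steps j and k has step j + k - n.
def join_step(steps, n):
    s = sum(steps)
    return s if s <= n else None       # a join of total step > n vanishes

def meet_step(steps, n):
    s = sum(steps) - n * (len(steps) - 1)
    return s if s >= 0 else None       # a meet of total step < n vanishes

n = 5
BCD = meet_step([4, 4, 4], n)    # meet of three covectors, each of step n - 1
assert BCD == 2                  # BCD is a step 2 extensor
assert join_step([1, BCD], n) == 3   # a V BCD has step 3, as in the text
```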
By an Arguesian polynomial we mean an expression in a Grassmann-Cayley
(GC) algebra of step n involving only joins and meets and the two sets of variables
a = {a_1, a_2, ..., a_n} and X = {X_1, X_2, ..., X_n}. An Arguesian polynomial will be
called homogeneous if it has k occurrences of every vector and l occurrences of every
covector, for k, l positive integers. If k ≤ l we say that P is of type I, while if k > l
then P is of type II. When an Arguesian polynomial P is of step 0 or step n we shall say
that P is of full step. By a subexpression Q ⊆ P of an Arguesian polynomial P we
mean that the expression Q in vectors, covectors, join and meet is a subexpression
of the parenthesized expression P in the alphabets a, X and binary operations V
and A. For example, abc V XYZQ is not a subexpression of abc V XYZQR, since
the meet of covectors XYZQR has higher precedence than the join of vectors abc
and meet of covectors XYZQR.
We develop certain properties of Arguesian polynomials which arise from perfect
matchings in an associated bipartite graph. The most natural matching properties
occur when each vector occurs exactly once and each covector several times, or
conversely, and following the notation of White [Whi91] we call such polynomials multilinear of type I or type II. Thus an Arguesian polynomial P is type I multilinear
if each vector occurs once and each covector l times for l ≥ 1. The integer l is called
the order of P. In Chapters 2-4, Arguesian polynomials will be multilinear unless
otherwise explicitly stated. A type I basic extensor e is an expression of the form
a_1 ··· a_k V X_1 ··· X_l for l ≥ k; a type II basic extensor has l ≤ k and the meet A
replacing the join V. An Arguesian polynomial P is trivial if P can be written as
the product of brackets, each bracket consisting only of vectors or only of covectors.
A type I (type II) Arguesian polynomial P is proper if every proper subexpression
Q ⊆ P has step > 0. Unless otherwise stated, we shall assume P is proper.
Given Q ⊆ P, let V(Q) denote the subset (not multiset) of vectors of a occurring
in Q and C(Q) the subset of covectors of X occurring in Q. We remark that if a type
I/II polynomial P has order 1 then Proposition 1.24 shows that P is necessarily of full step (yet
P may still be the zero polynomial).
2.1
The Alternative Expansion
We shall study the equivalence of Arguesian polynomials subject to a canonical
expansion in the monomial basis of brackets in vectors and covectors, [PD76].
Proposition 2.1 Any non-trivial non-zero type I Arguesian polynomial in a
Grassmann-Cayley algebra of step n can be written in the form:
P = [[X_1, X_2, ..., X_n]]^{l-1} Σ_σ C_σ [a_1, X_σ(1)][a_2, X_σ(2)] ··· [a_n, X_σ(n)],
where X_σ is a permutation of the covector set X and C_σ is an integer constant
depending on σ.
PROOF. If step(P) = n then P = R V S for extensors R, S of complementary step
and P = (R A S)E. Therefore let P = R A S be a non-zero type I Arguesian
polynomial on vector set a = {a_1, ..., a_n} and covector set X = {X_1, ..., X_n}.
By applying Propositions 1.30 and 1.36 we evaluate the subexpressions Q ⊆ P
recursively, extending the join and meet by linearity, and factoring each ordered
bracket [[X_1, ..., X_n]] = X_1 A ··· A X_n out of the expansion. At each level of
recursion, an expression of one of the following types is evaluated:
1. T_1 A T_2 (dually T_1 V T_2) with T_1 and T_2 both linear combinations of terms
T_1 = Σ_i k_i X_{i_1} ··· X_{i_j},   T_2 = Σ_{i'} k_{i'} X_{i'_1} ··· X_{i'_j},
with each term a scalar k times the meet of covectors. Extending by linearity, a
term of the meet T_1 A T_2 is non-zero iff the sets of covectors of the corresponding
terms of T_1 and T_2 have empty intersection. Then
T_1 A T_2 = Σ_{i,i'} k_i k_{i'} X_{i_1} ··· X_{i_j} X_{i'_1} ··· X_{i'_j}.
The case T_1 V T_2 replaces covectors with vectors, and A with V.
2. T_1 V T_2 (or T_1 A T_2) with T_1 a linear combination of terms, each term a scalar
times the join of vectors a_1 ··· a_k, and T_2 a linear combination of terms, each
term a scalar times the meet of covectors X_1 ··· X_l. The resulting expression
is non-zero iff both T_1 and T_2 are non-zero. Expanding by Proposition 1.30,
by setting A = X_1 A ··· A X_l, a term of T_1 V T_2, the basic extensor
k_1 a_1 ··· a_k V k_2 X_1 ··· X_l, becomes
k_1 k_2 Σ_{(A_1, ..., A_{k+1}) ∈ S(A; 1, ..., 1, l-k)} sgn(A_1, A_2, ..., A_{k+1}) [a_1, A_1][a_2, A_2] ··· [a_k, A_k] A_{k+1},     (2.1)
where each A_i, i = 1, ..., k, is a distinct covector (since the covectors X_1, ..., X_l
are necessarily distinct if P is non-zero), and A_{k+1} is the meet of a linearly
ordered set of l - k covectors.
3. T_1 V T_2 with each term a scalar times the meet of covectors. A term of T_1 V T_2 is
non-zero iff the union of the sets of covectors in the corresponding terms of T_1, T_2 spans X. The term
is calculated in either of two equivalent ways by Proposition 1.33. We choose
by convention to split the term of T_2 to obtain
k_1 X_{i_1} ··· X_{i_l} V k_2 X_{j_1} ··· X_{j_m} =
k_1 k_2 [[X_1, ..., X_n]] × sgn([[X_{i_1}, ..., X_{i_l}, X \ {X_{i_1}, ..., X_{i_l}}]])     (2.2)
× sgn(X_{j_1} ··· X_{j_m} \ (X \ {X_{i_1}, ..., X_{i_l}}), X \ {X_{i_1}, ..., X_{i_l}})     (2.3)
× (X_{j_1} ··· X_{j_m} \ (X \ {X_{i_1}, ..., X_{i_l}})),
where the sign term of 2.2 accounts for linearly ordering the bracket
[[X_{i_1}, ..., X_{i_l}, X \ {X_{i_1}, ..., X_{i_l}}]], and the sign term of 2.3 for the computation of the join.
The ordered bracket [[X_1, ..., X_n]] may be factored from the expansion. Since type I P is multilinear in vectors
and non-zero, if a meet R A S with R, S the join of vectors occurs in P then R A S =
±k[a_1, ..., a_n] and the resulting bracket may be factored from the expression. Then
since P is homogeneous of order l in covectors, the resulting expression, by 1 and 3,
reduces to ±[[X_1, ..., X_n]]^l and P is trivial. Hence P must contain a basic extensor
e = a_1 ··· a_k V X_1 A X_2 A ··· A X_l which may be expanded by Case 2. Since P has
step 0, and is proper, recursively evaluating P by applying 1, 2, 3, each vector a_i ∈ a
is contained in a bracket of the form [a, X] in the final expansion. Thus any non-zero
term of the expansion contains the product of n scalar brackets [a_i, X_j], one for each
vector a_i ∈ a. If for a monomial M of the final expansion C(M) ≠ X, the remaining
covectors of P form a non-homogeneous set of (l - 1)n covectors. Application of 1
and 3 reduces this set to l - 1 brackets [[X_1, ..., X_n]]. Since each non-zero bracket
must contain an entire set X, some bracket must contain two occurrences of the
same covector and the term vanishes. □
We have the dual result for type II non-zero Arguesian polynomials, whose proof is
omitted.
Proposition 2.2 Any non-trivial non-zero type II Arguesian polynomial Q in GC(n)
can be written in the form:
Q = [a_1, a_2, ..., a_n]^{l-1} Σ_σ C_σ [a_σ(1), X_1][a_σ(2), X_2] ··· [a_σ(n), X_n],
where a_σ is a permutation of the vector set a and C_σ is an integer constant depending
on σ.
PROOF. Using the alternative laws, Propositions 1.30 and 1.36, and Proposition 1.8, the
proof is entirely analogous to Proposition 2.1. □
Definition 2.3 Given a non-trivial non-zero type I or type II Arguesian polynomial
P, the bracket polynomial
E(P) = Σ_σ C_σ [a_1, X_σ(1)][a_2, X_σ(2)] ··· [a_n, X_σ(n)],
defined by either Proposition 2.1 or 2.2, is called the alternative expansion E(P)
of P.
Definition 2.4 Given an Arguesian polynomial P on vector set a and covector set
X, a transversal π is a bijection π : a → X such that the monomial
[a_1, X_π(1)][a_2, X_π(2)] ··· [a_n, X_π(n)]
occurs with non-zero coefficient C_π in E(P). We shall denote by E(P)|_π the non-zero
monomial of E(P) determined by π.
Example 2.5 The map π : a → A, d → C, b → D, c → B is a transversal of the
type I Arguesian polynomial (((a V AB) A C) V d) A ((b V CD) A A) A (c V BD), with corresponding
non-zero monomial ±[a, A][d, C][b, D][c, B], as can be seen by forming the alternative
expansion E(P) and reordering the bracket [[A, B, C, D]] of order 1.
Theorem 2.6 (Desargues) The corresponding sides of two coplanar triangles intersect in collinear points if and only if the joins of the corresponding vertices are concurrent.
[a, b, c] (a V BC) A (b V AC) A (c V AB) · E = [[A, B, C]] (bc A A) V (ac A B) V (ab A C).
PROOF. The Arguesian polynomial P in step 3,
(a V BC) A (b V AC) A (c V AB),
is expanded using Proposition 2.1 to obtain
(B[a, C] - C[a, B]) A (A[b, C] - C[b, A]) A (A[c, B] - B[c, A]).     (2.4)
The meet of any two common covectors must vanish, hence by the linearity of the
meet, 2.4 becomes
- BCA[a, C][b, A][c, B] + CAB[a, B][b, C][c, A].
Also,
Q = (bc A A) V (ac A B) V (ab A C)
may be similarly expanded as
([b, A]c - [c, A]b) V ([a, B]c - [c, B]a) V ([a, C]b - [b, C]a),
whose non-zero terms are
- [b, A][c, B][a, C]cab + [c, A][a, B][b, C]bca.     (2.5)
Figure 2.1: The Theorem of Desargues
Since the join and meet are anti-symmetric operations on vectors and covectors,
interchanging the positions of any two vectors (or covectors) changes the parity of
sign. Thus E(P) = E(Q), and since the extensor abc is of step 3 while ABC is an
extensor of step 0, we may multiply expressions 2.4 and 2.5 by these factors, and the
integral E on the left, to obtain the given identity. A somewhat more appealing form
is obtained by taking a new basis a', b', c', setting A = b'c', B = a'c' and C = a'b'.
Hence by Cauchy's theorem ABC = [a', b', c']^2, and we obtain after cancellation,
[a, b, c][a', b', c'] (aa' A bb' A cc') · E = (bc A b'c') V (ac A a'c') V (ab A a'b').     (2.6)
The identity 2.6 may now be easily interpreted: assuming the points a, b, c and
a', b', c' are in general position, the left side vanishes, most generally, when the intersection of the lines aa' and bb' lies on the line cc', that is, when the three lines are concurrent. Since
2.6 is an algebraic identity, the left side vanishes iff the right side vanishes, which occurs
when the line formed by joining the points bc ∩ b'c' and ac ∩ a'c' contains the point ab ∩ a'b',
that is, when the three points are collinear. For a synthetic proof, see [Cox87, Sam88]. □
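Identity 2.6 can also be spot-checked in the exact integer model of the plane used earlier (an illustrative sketch with names of our choosing; the sides are compared in absolute value because the model fixes each meet only up to a sign convention):

```python
# Illustrative sketch of the Desargues identity 2.6 over the integers.
import random

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def det3(p, q, r):
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def desargues_sides(a, b, c, ap, bp, cp):
    lhs = (det3(a, b, c) * det3(ap, bp, cp)
           * det3(cross(a, ap), cross(b, bp), cross(c, cp)))
    rhs = det3(cross(cross(b, c), cross(bp, cp)),
               cross(cross(a, c), cross(ap, cp)),
               cross(cross(a, b), cross(ap, bp)))
    return lhs, rhs

random.seed(5)
for _ in range(10):
    pts = [tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(6)]
    lhs, rhs = desargues_sides(*pts)
    assert abs(lhs) == abs(rhs)

# concurrency of aa', bb', cc' forces collinearity of the three meets
a, ap, b, bp, c = (0, 0, 1), (1, 1, 1), (3, 0, 1), (2, 1, 1), (1, 3, 1)
p = cross(cross(a, ap), cross(b, bp))       # aa' ^ bb'
cp = tuple(c[i] + p[i] for i in range(3))   # choose c' on the line through c, p
lhs, rhs = desargues_sides(a, b, c, ap, bp, cp)
assert lhs == 0 and rhs == 0
```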
2.2
The Theory of Arguesian Polynomials
We study the problem of determining when two Arguesian polynomials represent
the same projective invariant, [PD76]. Given Arguesian polynomials P and Q, define
P =_E Q, read P is E-equivalent to Q, if there is a real number r such that
the identity P = rQ is valid in a GC algebra, where we allow that either side may
be multiplied by the integral extensor E. In the case of Arguesian polynomials, E-equivalence incorporates the fact that the scalar brackets [a_1, ..., a_n], [[X_1, ..., X_n]]
and the overall sign difference of P and Q have no bearing on the geometry. Multiplication by the integral E merely formalizes the equivalence P V Q = (P A Q) · E
when step(P) + step(Q) = n.
We shall have need to distinguish between the l homogeneous occurrences of the
covectors (vectors) in a type I (II) polynomial P, replacing the covector X_j ∈ X
by distinct X_{j_1}, X_{j_2}, ..., X_{j_l} (and similarly for vectors). The resulting polynomial
is called the repeated variable representation P*(a, X*) of P, and we shall say
that X_{j_k} is a repeated covector of label X_j. We shall often write X* to denote a
generic repeated covector of label X. The laws for joins and meets hold identically
for repeated variables. The expansion of Proposition 2.1 (2.2) can be applied to a
type I (II) polynomial P of order l in repeated representation, as a multilinear polynomial
in l·n covectors (vectors), the resulting expansion having no cancellation of terms.
This expansion, including brackets [[X_{l_1}, ..., X_{l_m}]], is defined to be the repeated
alternative expansion E(P*) of P*, and as every variable of P* is distinct, each
monomial of E(P*) occurs with scalar coefficient ±1. As P* is evaluated recursively,
the expansion E(Q*) is well-defined for subexpressions Q* ⊆ P*, where it signifies
the linear combination of extensors and brackets [a, X_{j_k}], [[X_{l_1}, ..., X_{l_m}]] over the
field K. For type I Q ⊆ P denote [a, X_{j_k}] ∈ E(Q*) to mean the bracket [a, X_{j_k}]
occurs amongst the brackets of E(Q*). If R is a vector or covector, E(R*) = R*. By
definition, if R = S V/A T then E(R*) = E(S*) V/A E(T*).
If G(a, X*) (G(a, X)) denotes the exterior algebra of step n generated by vectors a
and covectors X* (X), and I is the ideal of G(a, X*) generated by the relations X_{i_j} - X_{i_k}
for X_{i_j}, X_{i_k} ∈ X*, then G(a, X*)/I ≅ G(a, X) under the canonical projection p :
G(a, X*) → G(a, X*)/I. It is clear that p is an algebra homomorphism, and if
A*, B* denote elements of G(a, X*), then p(A* V/A B*) = p(A*) V/A p(B*), where
the join or meet are evaluated in G(a, X*) and G(a, X), respectively. Ignoring scalar
brackets [[X_1, ..., X_n]] in the image of E(P*) under p, we may write
E(P*) →_p E(P).
Given type I Arguesian P and Q ⊆ P, Proposition 2.1 recursively evaluates E(P) by
expanding Q as a linear combination of brackets [a, X] and extensors over K, in the
variable set V(Q) ∪ C(Q). The resulting expression we call the partial alternative
expansion E(Q) of Q. By definition, if Q = R V/A S, then E(Q) = E(R) V/A E(S).
Let [a, X] ∈ E(P) denote that the bracket with content [a, X] occurs in some monomial of E(P). If {a_1, ..., a_k} ({X_1, ..., X_l}) denotes a set of vectors (covectors)
contained in the support of the extensors of the linear combination E(Q) (well-defined
as step Q > 0 for any proper Q ⊆ P), we shall write Q(a_1, ..., a_k) (Q(X_1, ..., X_l))
to make this explicit. The notations Q*(a_1, ..., a_k) and Q*(X*_1, ..., X*_l) are similarly defined by E(Q*). Indeed, Q ⊆ P is an extensor which is representable in
GC(n) as a linear combination of extensors which are either the join of vectors or
the meet of covectors. Thus [a, X*] ∈ E(P*) if and only if there exists R V/A S ⊆ P with
a ∈ V(R), X* ∈ C(S*), and R*(a), S*(X*). We thus define,
Definition 2.7 A subexpression Q ⊆ P of type I P is type I (type II) if E(Q*)
is a linear combination Q*(X*_1, ..., X*_l) (Q*(a_1, ..., a_k)), for a set of covectors
{X*_1, ..., X*_l} ⊆ X* (vectors {a_1, ..., a_k} ⊆ a).
Example 2.8 Let P = (((a V AB) A C) V d) A ((b V CD) A A) A (c V BD). The
subexpression Q = ((a V AB) A C) V d is type I, while b is a type II subexpression.
Example 2.9 The type I polynomial P = (a V BC) A (b V AC) A (c V AB) in repeated
representation is P* = (a V B_1C_1) A (b V A_1C_2) A (c V A_2B_2). The repeated alternative
expansion is formed as
(B_1[a, C_1] - C_1[a, B_1]) A (A_1[b, C_2] - C_2[b, A_1]) A (A_2[c, B_2] - B_2[c, A_2]),
then expanding by linearity of the meet, yielding the terms
B_1A_1A_2[a, C_1][b, C_2][c, B_2] - B_1A_1B_2[a, C_1][b, C_2][c, A_2] -
B_1C_2A_2[a, C_1][b, A_1][c, B_2] + B_1C_2B_2[a, C_1][b, A_1][c, A_2] -
C_1A_1A_2[a, B_1][b, C_2][c, B_2] + C_1A_1B_2[a, B_1][b, C_2][c, A_2] +
C_1C_2A_2[a, B_1][b, A_1][c, B_2] - C_1C_2B_2[a, B_1][b, A_1][c, A_2].
Since the meet of any two covectors of the same letter type is zero, only two of the
terms survive in E(P).
Let Q ⊆ P be Q = (a V BC) A (b V AC). Then
E(Q*) = [a, C_1][b, C_2]B_1A_1 - [a, C_1][b, A_1]B_1C_2 - [a, B_1][b, C_2]C_1A_1 + [a, B_1][b, A_1]C_1C_2,
E(Q) = [a, C][b, C]BA - [a, C][b, A]BC - [a, B][b, C]CA.
The linear combination E(Q*) is represented as Q*(A_1, B_1, C_1, C_2), while E(Q) as
Q(A, B, C). Given the transversal π : a → B, b → C, c → A, E(P)|_π = +[a, B][b, C][c, A],
hence [a, B] ∈ E(P)|_π.
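The two surviving terms can be found mechanically (a brute-force sketch of our own, with the covector letter sets read directly off the three factors of P): choose one covector letter from each factor and keep exactly the choices in which all three letters are distinct.

```python
# Illustrative brute-force sketch of Example 2.9: each factor of P contributes
# one covector letter ({B,C}, {A,C}, {A,B} respectively); a term survives the
# projection iff the three chosen letters are distinct.
from itertools import product

factors = [("B", "C"), ("A", "C"), ("A", "B")]
survivors = [t for t in product(*factors) if len(set(t)) == 3]
# the survivors correspond to the terms B1C2A2[...] and C1A1B2[...] above
assert sorted(survivors) == [("B", "C", "A"), ("C", "A", "B")]
assert len(survivors) == 2
```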
In studying the transversals of Arguesian polynomials, the following definition is
useful.
Definition 2.10 A pre-transversal of a type I Arguesian polynomial P*(a, X*)
is a map f* : a → X* such that the projection f : a → X is a bijection,
and f* : a_i → X_{j_k} only if [a_i, X_{j_k}] ∈ E(P*).
Given Q ⊆ P, a pre-transversal f* identifies a set of monomials {E(Q*)|_{f*}} of E(Q*)
as follows: M ∈ {E(Q*)|_{f*}} iff for every [a, X*] ∈ M, f* : a → X*, a ∈ V(Q), X* ∈ C(Q*).
As C(P) = X for any Arguesian P, an easy induction shows,
Proposition 2.11 Given Arguesian P, Q ⊆ P, and a pre-transversal f*, there is
at most one monomial of {E(Q*)|_{f*}} having non-zero projection E(Q)|_f under p.
If E(Q)|_f is non-zero under f*, we denote the unique monomial defined by Proposition 2.11 as E(Q*)|_{f*}. Its extensor is denoted ext(E(Q*)|_{f*}). If the projection E(Q)|_f
is non-zero, its extensor is denoted ext(E(Q)|_f). We may write [a, X] ∈ E(Q)|_f
to indicate that the bracket [a, X] occurs amongst the brackets of the monomial
E(Q)|_f. Write X ∈ ext(E(Q)|_f) to mean ext(E(Q)|_f) is the meet of covectors one
of which is X. The dual notations are defined similarly.
Since the vectors of P are multilinear, an easy induction establishes:
Proposition 2.12 Let P be type I Arguesian, Q ⊆ P a type I subexpression, and
let f*, g* be pre-transversals of P* having non-zero projections E(Q)|_f and E(Q)|_g.
If E(Q)|_f and E(Q)|_g have the same sets of brackets {[a, X]}, then ext(E(Q)|_f) =
ext(E(Q)|_g).
We shall require several technical lemmas.
Lemma 2.13 Let f* be a pre-transversal of a type I Arguesian P*, Q ⊆ P, with
E(Q)|_f non-zero, and X ∈ C(Q). Then
1. Let Q ⊆ P be type I. If [a, X] ∈ (∉) E(Q)|_f for some (any) a ∈ V(Q), and
X ∉ (∈) ext(E(Q)|_f), then: for any pre-transversal g* with E(Q)|_g non-zero,
[b, X] ∈ E(Q)|_g for some b ∈ V(Q) iff X ∉ ext(E(Q)|_g).
2. Let Q ⊆ P be type I. If [a, X] ∈ (∉) E(Q)|_f for some (any) a ∈ V(Q), and
X ∈ (∉) ext(E(Q)|_f), then: for any pre-transversal g* with E(Q)|_g non-zero,
[b, X] ∈ (∉) E(Q)|_g for some (any) b ∈ V(Q), and X ∈ (∉) ext(E(Q)|_g).
3. Let Q ⊆ P be type II. If [a, X] ∈ E(Q)|_f for a ∈ V(Q), then: for every
pre-transversal g* of P* with E(Q)|_g non-zero, there is b ∈ V(Q) such that
[b, X] ∈ E(Q)|_g.
PROOF. We prove 1-3 simultaneously by induction on Q ⊆ P. A basic extensor Q = e
has unique vectors and covectors, and 1 and 3 are clear. For 2, X ∈ ext(E(e)|_f) iff
[a, X] ∉ E(e)|_f.
Let Q = R A S for type I R, S.
Case 1) Suppose [a, X] ∈ E(Q)|_f and X ∉ ext(E(Q)|_f). Then without loss of
generality [a, X] ∈ E(R)|_f and [b, X] ∉ E(S)|_f for any b ∈ V(S). If [b, X] ∈ E(Q)|_g,
and in particular [b, X] ∈ E(R)|_g, then if X ∈ ext(E(R)|_g), by 2) applied to R,
X ∈ ext(E(R)|_f), a contradiction. As [b, X] ∉ E(S)|_f and X ∉ ext(E(S)|_f), 2) applied
to S yields [b, X] ∉ E(S)|_g. If conversely [b, X] ∉ E(Q)|_g, then [b, X] ∉ E(R)|_g,
so [a, X] ∈ E(R)|_f and X ∉ ext(E(R)|_f) imply, by part 1 applied to R, X ∈
ext(E(R)|_g). Then by 2) applied to S, X ∉ ext(E(S)|_g), so X ∈ ext(E(Q)|_g).
Suppose that [a, X] ∉ E(Q)|_f for any a ∈ V(Q) but X ∈ ext(E(Q)|_f). Without loss
of generality X ∈ ext(E(R)|_f). Then by 2) applied to S, [a, X] ∉ E(S)|_g, and
if [a, X] ∈ E(R)|_g then X ∉ ext(E(R)|_g), and then X ∉ ext(E(Q)|_g). If conversely
[b, X] ∉ E(Q)|_g, then by 1) applied to R and 2) applied to S, X ∈ ext(E(R)|_g) and
X ∉ ext(E(S)|_g), and then X ∈ ext(E(Q)|_g) as required. In the remaining cases we
shall omit the proof of the parenthesized case as it follows identically.
Case 2) Suppose [a, X] ∈ E(R)|_f for some a ∈ V(R), and X ∈ ext(E(R)|_f).
By induction on R, using 2), [b, X] ∈ E(R)|_g for some b ∈ V(R), and X ∈
ext(E(R)|_g). Then X ∉ ext(E(S)|_g) by 2) on S and the Lemma holds. Therefore let
[a, X] ∈ E(R)|_f for some a ∈ V(R), and X ∈ ext(E(S)|_f). If [b, X] ∈ E(R)|_g,
by 1) applied to R, X ∉ ext(E(R)|_g). Then [a, X] ∉ E(S)|_f, X ∈ ext(E(S)|_f)
and [b, X] ∉ E(S)|_g imply, by 1), X ∈ ext(E(S)|_g), and then X ∈ ext(E(Q)|_g).
Otherwise, if [b, X] ∈ E(S)|_g, then if X ∉ ext(E(S)|_g), by 2) applied to S,
[a, X] ∈ E(R)|_f is contradicted. If [b, X] ∈ E(S)|_g but X ∉ ext(E(S)|_g), 1) applied
to R yields X ∈ ext(E(R)|_g), and then X ∈ ext(E(Q)|_g).
Let Q = R V S for type I R, S.
Case 1) Let [a, X] ∈ E(R)|_f. Then as E(Q)|_f is non-zero, X ∈ ext(E(R)|_f) or
X ∈ ext(E(S)|_f), but not both, as X ∉ ext(E(Q)|_f). If X ∈ ext(E(R)|_f), by 2)
applied to R, [b, X] ∈ E(R)|_g and X ∈ ext(E(R)|_g). Then [b, X] ∉ E(S)|_f for any
b ∈ V(S), and X ∉ ext(E(S)|_f), imply by 2) applied to S, X ∉ ext(E(S)|_g) and
so X ∉ ext(E(Q)|_g). If [a, X] ∈ E(R)|_f but X ∈ ext(E(S)|_f), then by 1) applied to
R, if [a, X] ∈ E(R)|_g, then X ∉ ext(E(R)|_g), and by 1) applied to S, [b, X] ∉ E(S)|_f
implies X ∈ ext(E(S)|_g), so again the Lemma holds. If [b, X] ∈ E(S)|_g then by 2)
applied to S, X ∉ ext(E(S)|_g), or else [a, X] ∈ E(R)|_f is contradicted. By 1) applied
to R, X ∈ ext(E(R)|_g), and X ∉ ext(E(Q)|_g), as required. The converse is similar.
Case 2) If [a, X] ∈ E(Q)|_f for some a ∈ V(Q), and X ∈ ext(E(Q)|_f), then X ∈
ext(E(R)|_f) ∩ ext(E(S)|_f). Let [a, X] ∈ E(R)|_f. By 2) applied to R, [b, X] ∈ E(R)|_g,
X ∈ ext(E(R)|_g). Then [b, X] ∉ E(S)|_g, and by 1) applied to S, X ∈ ext(E(S)|_g),
so X ∈ ext(E(Q)|_g).
Let Q = R V S for type II R, type I S.
Case 1) Let [a, X] ∈ E(Q)|_f and X ∉ ext(E(Q)|_f). If [a, X] ∈ E(R)|_f,
for a ∈ V(R), by 3) applied to R, [b, X] ∈ E(R)|_g. Then by
2) applied to S, X ∉ ext(E(S)|_g), and the Lemma holds. If [a, X] ∈ E(S)|_f, then
S satisfies 1). Then [b, X] ∈ E(S)|_g implies X ∉ ext(E(S)|_g), which implies X ∉
ext(E(Q)|_g) as required. If [b, X] ∈ E(S)|_g with b ∈ ext(E(R)|_g), then X ∈ ext(E(S)|_g)
again implies X ∉ ext(E(Q)|_g). If [a, X] ∈ E(Q)|_f with a ∈ ext(E(R)|_f) and X ∈
ext(E(S)|_f), then, if [b, X] ∈ E(S)|_g, by 2) applied to S, X ∉ ext(E(S)|_g), or else
[a, X] ∈ E(Q)|_f is violated, and then X ∉ ext(E(Q)|_g). If b ∈ ext(E(R)|_g) and X ∈
ext(E(S)|_g), then X ∉ ext(E(Q)|_g). Conversely, if [a, X] ∉ E(Q)|_g, then [a, X] ∈
E(S)|_f, X ∉ ext(E(S)|_f) implies, by 1) on S, X ∈ ext(E(S)|_g) and X ∈ ext(E(Q)|_g).
If a ∈ ext(E(R)|_f) and X ∈ ext(E(S)|_f), then by 1) applied to S, X ∈ ext(E(S)|_g), and
X ∈ ext(E(Q)|_g).
Case 2) If [a, X] ∈ E(Q)|_f for some a ∈ V(Q), and X ∈ ext(E(Q)|_f), then as the
covectors of ext(E(S)|_g) are distinct, it is not possible to have a ∈ ext(E(R)|_f) and
X ∈ ext(E(S)|_f), and it follows that [a, X] ∈ E(R)|_f or [a, X] ∈ E(S)|_f. In the
former case, by 3) applied to R and 1) applied to S, [b, X] ∈ E(R)|_g and X ∈
ext(E(S)|_g), so X ∈ ext(E(Q)|_g). In the latter, by 2) applied to S, [b, X] ∈ E(S)|_g,
X ∈ ext(E(S)|_g), and then X ∈ ext(E(Q)|_g).
Let Q = R A S for type II R, type I S.
Case 3) If [a, X] ∈ E(R)|_f, for a ∈ V(R), the Lemma holds by induction. Suppose [a, X] ∈ E(S)|_f, for a ∈ V(S). Then if X ∈ ext(E(S)|_f), as Q is type II,
there is b ∈ ext(E(R)|_f) with [b, X] ∈ E(Q)|_f, a contradiction. Hence X ∉ ext(E(S)|_f),
and S satisfies 1). Then for all g* with E(S)|_g non-zero, [a, X] ∉ E(S)|_g implies X ∈
ext(E(S)|_g), and again [b, X] ∈ E(Q)|_g for b ∈ V(R). If finally [a, X] ∈ E(Q)|_f for
a ∈ ext(E(R)|_f) and X ∈ ext(E(S)|_f), then again S satisfies 1), and the result follows. □
The following result is fundamental to Arguesian polynomials; it is false
when the assumption of multilinearity is dropped.
Proposition 2.14 Let P be a non-zero type I Arguesian polynomial with transversal
π. Then for any Q ⊆ P, there is a unique monomial E(Q*)|_{π*} of E(Q*) having non-zero projection E(Q)|_π under p.
PROOF. If Q is the join of vectors or meet of covectors, then E(Q*) = Q*, E(Q) = Q,
and the result is trivial. Let Q = e = a_1 ··· a_k V/A X_1 ··· X_l be a type I (II)
basic extensor. Then each a_i ∈ V(e) (X_j ∈ C(e)) satisfies [a_i, X_j] ∈ E(e)|_π, for
X_j ∈ C(e) (a_i ∈ V(e)), and as the covectors C(e) occur without repetition, the map
π : a_i → X_j, a_i ∈ V(e) (X_j ∈ C(e)), defines a unique X*_j ∈ C(e*) (a ∈ V(e)).
The unique monomial of E(e*) having brackets {[a_i, X*_j]} is the required monomial
E(e*)|_{π*}.
Let Q = R A S with R, S type I. For any pre-transversal π* with E(Q)|_π non-zero,
[a, X] ∈ E(Q)|_π implies [a, X] ∈ E(R)|_π or [a, X] ∈ E(S)|_π. By Proposition 2.12,
the brackets {[a, X]} of E(R)|_π and E(S)|_π uniquely determine ext(E(R)|_π) and
ext(E(S)|_π). Hence E(Q)|_π factors uniquely as E(R)|_π A E(S)|_π. By induction there
are unique E(R*)|_{π*} →_p E(R)|_π and E(S*)|_{π*} →_p E(S)|_π, and as p is a homomorphism of
algebras,
E(Q*)|_{π*} := E(R*)|_{π*} A E(S*)|_{π*} →_p E(R)|_π A E(S)|_π = E(Q)|_π
is the required monomial of E(Q*). Let Q = R V S for type I R, S. The argument
is identical, and as C(P) = X, a single monomial E(Q*)|_{π*} of E(R*)|_{π*} V E(S*)|_{π*}
survives in the projection under p.
Let Q = R V S for type II R, type I S. Let g* be a pre-transversal of P* with
E(Q)|_g non-zero. If a ∈ V(S) then [a, X] ∈ E(S)|_g, for some X ∈ C(S), and by
Proposition 2.12, ext(E(S)|_g) is determined by the set of brackets [a, X] ∈ E(S)|_g.
By Lemma 2.13 part 3, given any pre-transversal g* with E(Q)|_g non-zero, the
covectors X ∈ C(R) satisfying [a, X] ∈ E(R)|_g determine a set C ⊆ C(R) such
that [a, X] ∈ E(R)|_g for some a ∈ V(R) iff X ∈ C. Thus we conclude, for every
[a, X] ∈ E(Q)|_π,
[a, X] ∈ E(S)|_π iff a ∈ V(S),
[a, X] ∈ E(R)|_π iff X ∈ C,
[a, X] ∈ E(Q)|_π with a ∈ V(R) and X ∈ C(S) \ C otherwise.
Then ext(E(R)|_π) and ext(E(S)|_π) are determined by V(S), C and π*, and E(R)|_π V
E(S)|_π is the unique factorization of E(Q)|_π, with corresponding unique E(R*)|_{π*} →_p
E(R)|_π and E(S*)|_{π*} →_p E(S)|_π. As the covectors of ext(E(S)|_π) are distinct, the
brackets of the third type above determine a unique map π : a → X, a ∈ V(R), X ∈
C(S) \ C, and a unique monomial E(Q*)|_{π*} of E(R*)|_{π*} V E(S*)|_{π*} having projection
E(Q)|_π under p. The proof is similar for Q = R A S with R type II, S type I. □
Corollary 2.15 Given an Arguesian polynomial P and transversal π, the coefficient Cπ of E(P)|π is always ±1.

Corollary 2.15 motivates the following definition.

Definition 2.16 Given an Arguesian polynomial P with transversal π, the coefficient Cπ of E(P)|π is called the sign of π and denoted sgn(E(P)|π).
Proposition 2.17 (Grassmann Condition for Arguesian Polynomials) If f* is a pre-transversal of a type I Arguesian polynomial P* but E(P)|f = 0, then either:

1. There exists type I R ∧ S ⊆ P with R, S type I, E(R)|f, E(S)|f non-zero, and Xj ∈ X such that Xj ∈ ext(E(R)|f) ∩ ext(E(S)|f).

2. There exists type I R ∨ S ⊆ P with R, S type I, E(R)|f, E(S)|f non-zero, and Xj ∈ X such that Xj ∉ ext(E(R)|f) ∪ ext(E(S)|f).
PROOF. Given a pre-transversal f* of type I P*, if E(P)|f = 0 then there exists a minimal Q ⊆ P with E(Q)|f = 0. If Q is a vector or covector then E(Q)|f = Q, which is non-zero. Hence we may assume Q = R ∨/∧ S with E(R)|f and E(S)|f non-zero.

Let Q = R ∧ S for type I R, S. By Proposition 2.14 the unique monomial of {E(Q*)|f*} projecting to E(Q)|f and having E(R)|f, E(S)|f non-zero is given by E(R*)|f* ∧ E(S*)|f*. Then

E(R*)|f* ∧ E(S*)|f* ↦ E(R)|f ∧ E(S)|f = E(Q)|f = 0

iff ∃ Xj ∈ X with Xj ∈ ext(E(R)|f) ∩ ext(E(S)|f).

Let Q = R ∨ S for type I R, S. The set of monomials of {E(Q*)|f*} projecting to E(Q)|f and having E(R)|f, E(S)|f non-zero is given by E(R*)|f* ∨ E(S*)|f*. Then

E(R*)|f* ∨ E(S*)|f* ↦ E(R)|f ∨ E(S)|f = E(Q)|f = 0

iff ∃ Xj ∈ X with Xj ∉ ext(E(R)|f) ∪ ext(E(S)|f).

If Q = R ∨/∧ S with R type II and S type I, then f* identifies a unique monomial of E(Q*) projecting to E(Q)|f non-zero, having E(R)|f, E(S)|f non-zero. □
Example 2.18 The bijection f : a ↦ F, b ↦ E, c ↦ A, d ↦ B, e ↦ C, f ↦ D corresponds to a pre-transversal of

P = ((a ∨ ADF) ∧ (b ∨ ACE)) ∨ ((c ∨ AEF) ∧ (d ∨ BCD)) ∨ ((e ∨ BCE) ∧ (f ∨ BDF)),

yet f is not a transversal: for R = (a ∨ ADF) ∧ (b ∨ ACE) we have E(R)|f = [a, F][b, E] AD ∧ AC = 0, as the covector A is repeated.
The following Lemmas will be necessary for the proof of the main theorems of
Chapters 3 and 4.
Lemma 2.19 Let Q ⊆ P be a type I subexpression of a type I Arguesian polynomial P, and let f1*, f2* be pre-transversals with E(Q)|f1, E(Q)|f2 non-zero. If Xji ∈ ext(E(Q*)|fi*), i = 1, 2, for Xj1 ≠ Xj2 ∈ X* of label Xj, then for every f* with E(Q)|f non-zero, there is a ∈ V(Q) such that [a, Xj] ∈ E(Q)|f.
PROOF. If e = a1 ··· ak ∨ X1 ··· Xl the result is trivially valid, as the covectors of e are distinct. If Q = R ∧ S with R type II, S type I, the result is vacuous. If Q = R ∨ S with S type II, R type I, then Xji ∈ ext(E(Q*)|fi*), i = 1, 2, implies Xji ∈ ext(E(S*)|fi*), i = 1, 2. Then for every f*, [a, Xj] ∈ E(S)|f for some a ∈ V(S), so [a, Xj] ∈ E(Q)|f with a ∈ V(Q).

If Q = R ∧ S with R, S type I, then the result holds by induction unless Xj1 ∈ ext(E(R*)|f1*) and Xj2 ∈ ext(E(S*)|f2*). Suppose [a, Xj] ∉ E(Q)|f, for some pre-transversal f* and every a ∈ V(Q). Then [b, Xj] ∉ E(R)|f for any b ∈ V(R), and [c, Xj] ∉ E(S)|f for any c ∈ V(S). If [a, Xj] ∈ E(R)|f1 for some a ∈ V(R), then as Xj ∈ ext(E(R)|f1), by Lemma 2.13, part 2, [b, Xj] ∈ E(R)|f for some b ∈ V(R), a contradiction. Similarly, [a, Xj] ∉ E(S)|f2. Then by Lemma 2.13, part 1, [a, Xj] ∉ E(Q)|f implies Xj ∈ ext(E(R)|f) ∩ ext(E(S)|f) and E(Q)|f = 0, a contradiction. If Q = R ∨ S for type I R, S, then Xji ∈ ext(E(S*)|fi*), i = 1, 2, as the extensor S is split by convention. Then [a, Xj] ∈ E(S)|f and [a, Xj] ∈ E(Q)|f. □
Lemma 2.20 Let P be a non-zero type I Arguesian polynomial of step n, homogeneous of order l, with vector set a and covector set X* = {Xj,i : j = 1, ..., n, i = 1, ..., l} in the repeated variable representation of P. Let π, σ be two transversals of P, with corresponding pre-transversals π*, σ*. Then for any a ∈ a, if [a, Xj,i] ∈ E(P*)|π* and [a, Xj',i'] ∈ E(P*)|σ* with j = j', then i = i'.
PROOF. By induction on Q ⊆ P. If Q = e = a1 ··· ak ∨ X1 ··· Xl is a basic extensor then, as each vector and covector is distinct, the Lemma is valid. If Q = R ∧ S with R, S type I (similarly Q = R ∨ S) then, as [a, X] ∈ E(Q)|π implies either [a, X] ∈ E(R)|π with a ∈ V(R), or [b, X] ∈ E(S)|π with b ∈ V(S), the result follows by induction. Therefore let Q = R ∨ S with R type II and S type I, and let a ∈ V(Q) satisfy [a, Xj1] ∈ E(Q*)|π* and [a, Xj2] ∈ E(Q*)|σ*, for Xj1 ≠ Xj2 ∈ X* of label Xj and distinct π, σ. If Xj1 ∈ C(R*) then [a, Xj1] ∈ E(R*)|π* with a ∈ V(R), as R is type II, and there exists R1 ∧ R2 ⊆ R with R1 type II, R2 type I, Xj1 ∈ C(R*), a ∈ V(R1 ∧ R2). Apply Lemma 2.13, part 3: then [b, Xj] ∈ E(R1 ∧ R2)|σ for every transversal σ and some b ∈ V(R1 ∧ R2), which implies Xj2 ∈ C(R*), and the Lemma is true by induction. We may therefore assume Xj1, Xj2 ∈ C(S*) and a ∈ V(R). Then [a, Xj1] ∈ E(Q*)|π*, [a, Xj2] ∈ E(Q*)|σ*, Xj1 ∈ ext(E(S*)|π*), Xj2 ∈ ext(E(S*)|σ*), and by Lemma 2.19, for every transversal γ, [b, Xj] ∈ E(S)|γ for some b ∈ V(S), a contradiction. □
We conclude this section with a definition.
Definition 2.21 Given an Arguesian polynomial P(a, X), define the associated graph Bp = (a ∪ X, E) to be the bipartite multigraph on vertex sets a and X, having an edge (a, X) ∈ E if [a, X*] ∈ E(P*) for some X* ∈ X* of label X.
The pre-transversals of P are perfect matchings of the associated graph Bp, but the graph Bp of an Arguesian polynomial P unfortunately does not completely determine E(P), as the following example shows.
Example 2.22 The planar polynomials P = (a ∨ BC) ∧ (b ∨ AC) ∧ (c ∨ AB) and Q = ((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B) ∨ ((c ∨ AB) ∧ C) satisfy Bp ≅ BQ, both being the cycle C6, yet P and Q are not E-equivalent, as is shown in Lemma 2.23.
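Since Bp is a finite bipartite graph, the matchings of Example 2.22 can be checked mechanically. The sketch below is my own illustration, not from the text: it encodes the 6-cycle Bp = BQ by its edge set and enumerates its perfect matchings by brute force, finding exactly two, one per bracket monomial in each expansion.

```python
from itertools import permutations

# Edges of B_P = B_Q from Example 2.22: vector a is adjacent to covector X
# whenever a bracket [a, X] can occur.  The factor (a v BC) contributes
# [a, B] or [a, C], and similarly for the other factors, giving a 6-cycle.
edges = {('a', 'B'), ('a', 'C'), ('b', 'A'), ('b', 'C'), ('c', 'A'), ('c', 'B')}
vectors, covectors = ['a', 'b', 'c'], ['A', 'B', 'C']

def perfect_matchings(edges):
    """Enumerate perfect matchings of the bipartite graph by brute force."""
    found = []
    for image in permutations(covectors):
        pairing = list(zip(vectors, image))
        if all(e in edges for e in pairing):
            found.append(pairing)
    return found

matchings = perfect_matchings(edges)
print(matchings)  # the two transversals a->B, b->C, c->A and a->C, b->A, c->B
```

The two matchings correspond to the monomials [a, B][b, C][c, A] and [a, C][b, A][c, B]; the graph cannot record that these enter the two expansions with different relative signs, which is exactly why Bp fails to determine E(P).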
2.3 Classification of Planar Identities

In this section we determine the complete set of indecomposable identities P =_E Q valid in step 3, where P and Q are multilinear Arguesian polynomials of type I and II. The definition of a decomposable Arguesian polynomial, and a theorem characterizing these polynomials, is contained in the final section of this chapter. Lemma 2.23 shows that the set of such identities is limited, including only the Theorem of Desargues, a theorem due to Raoul Bricard [Brill], and a third planar duality-based theorem, provably equivalent to Desargues' Theorem. We proceed by determining the set of type I Arguesian polynomials up to E-equivalence, and
permutation of the vectors and covectors in either variable set. In GC(1), there is evidently only the trivial polynomial a ∨ A, which is always zero. In step 2, if P contains a subexpression a ∨ A then P is decomposable, and if P contains a ∨ AB this may be rewritten as [[A, B]]a. It is easy to see that the only possible indecomposable type I polynomial, up to multiplication by the bracket [[A, B]], is [a, b]. In step 3 we have the more interesting:
Lemma 2.23 The complete set of indecomposable type I Arguesian polynomials in step 3, up to E-equivalence and permutation of the variable sets a, X, is:

1. [a, b, c]
2. ((a ∨ BC) ∧ A) ∧ bc
3. ((a ∨ BC) ∧ A) ∨ b ∨ (B ∧ (c ∨ AC))
4. (a ∨ BC) ∧ (b ∨ AC) ∧ (c ∨ AB)
5. ((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B) ∨ ((c ∨ AB) ∧ C)
PROOF. In step 3 let a = {a, b, c}, X = {A, B, C}. To demonstrate the non-equivalence of the set P1–P5, we give E(Pi) for each of P1–P5 above; these are clearly distinct up to permutation of the variable sets. By Theorem 2.1 each expands to a distinct polynomial in the monomial basis, and hence each corresponds to a distinct projective invariant [PD76].
1. [a,b,c]
2. [c, C][b, A][a, B] - [b, C][c, A][a, B] + [b, B][c, A][a, C] - [c, B][b, A][a, C]
3. [b, C][a, B][c, A] - [b, A][a, B][c, C] - [b, B][a, C][c, A]
4. [a, B][b, C][c, A] - [a, C][b, A][c, B]
5. [a, B][b, C][c, A] + [a, C][b, A][c, B]
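Expansion 4 can be spot-checked in coordinates. The following sketch is my own verification, not part of the text: joins and meets in step 3 are modelled by cross products in R^3, and [a, X] is the evaluation of the covector X at the vector a. With this convention the triple meet (a ∨ BC) ∧ (b ∨ AC) ∧ (c ∨ AB) reproduces expansion 4 times the suppressed scalar factor [[A, B, C]].

```python
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    # bracket of three vectors, or of three covectors
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

def pair(a, X):
    # the bracket [a, X] of a vector against a covector
    return a[0]*X[0] + a[1]*X[1] + a[2]*X[2]

rng = random.Random(0)
rnd = lambda: tuple(rng.randint(-9, 9) for _ in range(3))
a, b, c, A, B, C = (rnd() for _ in range(6))

# P4 = (a v BC) ^ (b v AC) ^ (c v AB): join each vector to the point where
# two covector lines meet, then take the triple meet of the three lines.
p4 = det3(cross(a, cross(B, C)), cross(b, cross(A, C)), cross(c, cross(A, B)))

# Expansion 4, carrying the scalar [[A, B, C]] suppressed in the list above.
exp4 = det3(A, B, C) * (pair(a, B)*pair(b, C)*pair(c, A)
                        - pair(a, C)*pair(b, A)*pair(c, B))
print(p4 == exp4)
```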
The associated graphs B_Pi for i = 1, ..., 5 are given in Figure 2.2; these represent the complete set of simple bipartite graphs on (3, 3) vertices having each vertex of degree at least 2, up to isomorphism.

To see that the list P1–P5 is complete we proceed as follows. Since P is multilinear in the vectors, each of the three vectors occurs exactly once, while the covectors occur homogeneously. Let
Figure 2.2: The graphs B_Pi for i = 1, ..., 5 (panels P1, P2, P3, P4–P5).
P be such a polynomial, and let Q ⊆ P be a subexpression. By Proposition 2.1, a non-zero expression on covectors alone in step 3 reduces, up to scalar multiples, to A, B, C, AB, AC or BC. We may not have Q ⊆ P with Q = a ∨/∧ C or ab ∨/∧ BC (or any relabelling), as then by Theorem 2.36 P is decomposable. The polynomial P = abc ∨ ABC =_E [a, b, c], the first polynomial P1. Since a ∨ ABC =_E a, we may assume that Q = ABC does not appear in P. Similarly abc ∧ A =_E [a, b, c] =_E P1. Therefore, if P does not contain a subexpression Q of the form ab ∧ C or a ∨ BC, or a permutation of the variables {a, b, c}, {A, B, C}, then P reduces to the first polynomial P1.
In any step 3 indecomposable polynomial, every proper subexpression Q ⊂ P evaluates to either step 1 or step 2. At the outermost level of parenthesization an Arguesian P evaluates to step 0 or 3. Thus if P = Q ∨/∧ R then, without loss of generality, Q is step 1 and R step 2. In this case we shall say the subexpressions Q, R are maximal. For every proper Q ⊂ P, if Q = Q1 ∧ Q2 then Q1, Q2 are both step 2, while if Q = Q1 ∨ Q2 then Q1, Q2 are both step 1.
We first consider the case where Q = ab ∧ C ⊆ P is a step 1 subexpression (or equivalently ac ∧ B, bc ∧ A, ab ∧ B, etc.) occurring in P. Thus either P = Q ∧ R with R step 2, or there exists a step 1 S with Q ∨ S ⊆ P having step 2. Given Q = ab ∧ C ⊆ P, we may enumerate up to E-equivalence the complete set of possible non-maximal step 1 R and maximal step 2 S such that Q ∧ R ⊆ P or Q ∨ S ⊆ P. The list of step 1 (point) subexpressions is S ∈ {c, AB, BC, AC, (c ∨ AB) ∧ C}, whereas the set of step 2 (line) subexpressions is R ∈ {A, B, C, c ∨ AB}. To see that the list is complete, we observe that since Q = ab ∧ C, it is impossible that R be literally c ∨ AC (or c ∨ BC), as then ab ∧ C forces, for every π, [a, C] ∈ E(P)|π or [b, C] ∈ E(P)|π, and hence [c, A] ∈ E(P)|π ([c, B] ∈ E(P)|π) and P is decomposable. Then S = (c ∨ AB) ∧ B is decomposable, and R = ((c ∨ AB) ∧ C) ∨ AC is decomposable, forcing [c, A] ∈ E(P)|π, while

((c ∨ AB) ∧ C) ∨ AB = ([c, B]AC − [c, A]BC) ∨ AB = [c, B][[A, C, B]]A − [c, A][[B, C, A]]B.

This last expression may be more compactly written as [[A, B, C]]([c, A]B − [c, B]A) =_E c ∨ AB, and the list terminates.
The proof now proceeds in several cases. If Q = ab ∧ C is maximal and P = (ab ∧ C) ∧ X where X ∈ {A, B}, then P is either zero or decomposable; otherwise P = (ab ∧ C) ∧ (c ∨ AB), which is P2 of the list. If instead Q is non-maximal, we have three cases, and P contains one of the following subexpressions:

1. (ab ∧ C) ∨ ((c ∨ AB) ∧ C)
2. (ab ∧ C) ∨ c
3. (ab ∧ C) ∨ AB
In case 1, we have

(ab ∧ C) ∨ ((c ∨ AB) ∧ C) = ([a, C]b − [b, C]a) ∨ ([c, B]AC − [c, A]BC),

which is E-equivalent to ((ab ∧ C) ∧ (c ∨ AB)) ∧ C = P2 ∧ C. Case 2 is essentially identical, and extends only to P2. If in case 3 (ab ∧ C) ∨ AB is maximal, then we may have P = ((ab ∧ C) ∨ AB) ∨ c =_E P2 or P = ((ab ∧ C) ∨ AB) ∧ ((c ∨ AB) ∧ C) =_E P2.

If case 3 is non-maximal we have two sub-cases: 3a) R = ((ab ∧ C) ∨ AB) ∧ C ⊆ P (meeting with A or B yielding decomposable P), and 3b) ((ab ∧ C) ∨ AB) ∧ (c ∨ AB).

In case 3a), if R is maximal then the only non-zero, indecomposable case is P =_E (((ab ∧ C) ∨ AB) ∧ C) ∧ (c ∨ AB) =_E P2. If R is non-maximal then P contains R ∨ S with S a point. The only surviving cases are R ∨ S = (((ab ∧ C) ∨ AB) ∧ C) ∨ AB =_E (ab ∧ C) ∨ AB, or R ∨ S = (((ab ∧ C) ∨ AB) ∧ C) ∨ ((c ∨ AB) ∧ C) =_E ((ab ∧ C) ∨ (c ∨ AB)) ∧ C = P2 ∧ C. Consider case 3b): we may write R = ((ab ∧ C) ∨ AB) ∧ (c ∨ AB) =_E ((ab ∧ C) ∧ (c ∨ AB)) ∧ AB =_E P2 ∧ AB. We conclude that if P contains any subexpression of the form ab ∧ C then P =_E P2.
If ab ⊆ P then either ab ∧ C ⊆ P (or ac ∧ B, etc.), which is the first case considered, or ab is maximal. If ab is maximal then we must have c ∨ AB, c ∨ AC or c ∨ BC ⊆ P, or else P is P1. Any indecomposable expression S containing c ∨ AB and only covectors, such that P = ab ∧ S, must be equivalent to S = (c ∨ AB) ∧ C, or a permutation of {A, B, C}; thus P = P2.
To continue, next suppose that R = a ∨ BC ⊆ P, and in particular that P contains R = ((a ∨ BC) ∧ A) ∨ b (or any permutation of {A, B, C}). Again we calculate the allowable points and lines which may be joined and met with R. It is not difficult to see that the complete set of points S such that R ∨ S ⊆ P is {c, AB, AC, BC, (c ∨ AB) ∧ C, (c ∨ BC) ∧ A, (c ∨ AC) ∧ B}, while the complete set of lines T such that R ∧ T ⊆ P is {A, B, C, c ∨ AB, c ∨ BC, c ∨ AC}. If R is maximal, the case R ∨ S = P, then P = ((a ∨ BC) ∧ A) ∨ b ∨ c =_E P2, while if P is R ∧ AB, R ∧ AC, or R ∧ BC then P is decomposable. Further, if P is

(((a ∨ BC) ∧ A) ∨ b) ∧ ((c ∨ AB) ∧ C)

(or an equivalent permutation of the covectors B and C), then P = P3. Otherwise R is not maximal, there are two cases, and P contains a subexpression S of the form:

1. (((a ∨ BC) ∧ A) ∨ b) ∧ B
2. (((a ∨ BC) ∧ A) ∨ b) ∧ (c ∨ AB)

where B may be exchanged with C in the first case, and c ∨ AB with c ∨ AC in the second. In case 1, if S is maximal then P = ((((a ∨ BC) ∧ A) ∨ b) ∧ B) ∨ (c ∨ AC), which is P3. If S is non-maximal then there are three sub-cases for 1, in which P contains a subexpression T:

1a) ((((a ∨ BC) ∧ A) ∨ b) ∧ B) ∨ AC
1b) ((((a ∨ BC) ∧ A) ∨ b) ∧ B) ∨ ((c ∨ AC) ∧ B)
1c) ((((a ∨ BC) ∧ A) ∨ b) ∧ B) ∨ c
where in 1b) we may equivalently join with (c ∨ AB) ∧ C. By an expansion analogous to Equation 2.7, the expression in 1a) is E-equivalent to ((a ∨ BC) ∧ A) ∨ b, thus reducing this case. The expression in 1b) expands as

[a, B][b, A][c, C]B + [a, C][b, B][c, A]B − [a, B][b, C][c, A]B =_E P3 · B

and the scalar P3 may be factored. Finally, in case 1c), [c, B] ∉ E(P), and 1c) may be factored as (((a ∨ BC) ∧ A) ∨ bc) ∧ B, which is P2 ∧ B.
In case 2 above, this expression may be joined with C if maximal, or joined with AC or BC identically if not. If maximal, we have

((((a ∨ BC) ∧ A) ∨ b) ∧ (c ∨ AB)) ∨ C =_E ((a ∨ BC) ∧ A) ∨ b ∨ ((c ∨ AB) ∧ C),

which is just P3. If case 2 is not maximal, and R' = ((((a ∨ BC) ∧ A) ∨ b) ∧ (c ∨ AB)) ∨ AC ⊆ P (equivalently BC), then R' may be replaced by the E-equivalent

[(((a ∨ BC) ∧ A) ∨ b) ∧ (c ∨ AB) ∧ C] ∨ A =_E P3 ∨ A.
We are now ready to consider the final cases. The complete set of R ⊆ P (with P indecomposable) containing a single vector a, up to permutation of covectors, has been shown to be a, (a ∨ BC), and ((a ∨ BC) ∧ A). Thus if ((a ∨ BC) ∧ A) ∨ b ⊈ P, or a case previously considered, we must have R' ⊆ P (up to permutation or exchange of the multilinear vectors), for R' one of:

1. (a ∨ BC) ∧ (b ∨ AC),
2. ((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B),
3. (a ∨ BC) ∧ ((b ∨ AC) ∧ B).
In the first case R' is a point, in the second a line, and in the third a scalar. Case 3 is easily dismissed as decomposable. If case 1 is maximal, then either P =_E (a ∨ BC) ∧ (b ∨ AC) ∧ (c ∨ AB), which is P4, or P =_E (a ∨ BC) ∧ (b ∨ AC) ∧ A (equivalently B, C), which is decomposable. If case 1 is not maximal, we remark that in any P containing (a ∨ BC) ∧ (b ∨ AC), and for every transversal π of P, either [a, C] ∈ E(P)|π or [b, C] ∈ E(P)|π. Thus if P is indecomposable, there exist distinct π, σ such that [c, A] ∈ E(P)|π and [c, B] ∈ E(P)|σ. Thus E(P) = k1 [a, C][b, A][c, B] + k2 [a, B][b, C][c, A]. The constants k1, k2 are necessarily plus or minus 1, and if these signs are opposite P =_E P4, while if equal P =_E P5.
Finally, as in case 2, let R = ((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B) ⊆ P. If R is maximal then P is equivalent to either (((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B)) ∨ c =_E P3, or P =_E (((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B)) ∨ ((c ∨ AB) ∧ C) =_E P5. If case 2 is non-maximal, then R = (a ∨ BC) ∧ ((b ∨ AC) ∧ B) is met with a subexpression E-equivalent to A, B, C or c ∨ AB. If R ∧ A (or equivalently B) ⊆ P, then [b, A] ∈ E(P)|π for every π. If R ∧ C ⊆ P then for every π, [a, C] ∈ E(P)|π or [b, C] ∈ E(P)|π. Hence there is no transversal of P with π : c ↦ C. If P is indecomposable, P =_E P4 or P =_E P5. If P = R ∧ (c ∨ AB) then again P =_E P4 or P5. The proof is complete. □
Corollary 2.24 The complete set of indecomposable type II Arguesian polynomials in step 3, up to E-equivalence, is:

1. [[A, B, C]]
2. ((a ∨ BC) ∧ A) ∧ bc
3. ((ab ∧ C) ∨ c) ∧ B ∧ (a ∨ (bc ∧ A))
4. (bc ∧ A) ∨ (ac ∧ B) ∨ (ab ∧ C)
5. ((bc ∧ A) ∨ a) ∧ ((ac ∧ B) ∨ b) ∧ ((ab ∧ C) ∨ c)

PROOF. The steps of Lemma 2.23 can be dualized, replacing join with meet and vectors with covectors, to yield the type II Arguesian polynomials Q1–Q5 above. □
Theorem 2.25 The complete set of planar indecomposable Arguesian identities P =_E Q is:

[a, b, c] (a ∨ BC) ∧ (b ∨ AC) ∧ (c ∨ AB) =_E [[A, B, C]] (bc ∧ A) ∨ (ac ∧ B) ∨ (ab ∧ C)

[a, b, c]² ((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B) ∨ ((c ∨ AB) ∧ C) =_E [[A, B, C]]² ((bc ∧ A) ∨ a) ∧ ((ac ∧ B) ∨ b) ∧ ((ab ∧ C) ∨ c)

[a, b, c] ((a ∨ BC) ∧ A) ∨ b ∨ (C ∧ (c ∨ AB)) =_E [[A, B, C]] ((ab ∧ C) ∨ c) ∧ B ∧ (a ∨ (bc ∧ A))
PROOF. It suffices to compare E(P) and E(Q) for type I P and type II Q in the lists above. The expansions below may be compared with equations 3–5 of Lemma 2.23, giving the valid identities. The first identity has already been demonstrated. To see the second identity, we have

[a, b, c]² ((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B) ∨ ((c ∨ AB) ∧ C) =
[a, b, c]² ([a, C]BA − [a, B]CA) ∨ ([b, C]AB − [b, A]CB) ∨ ([c, B]AC − [c, A]BC) =
[a, b, c]² [[A, B, C]]² ([a, C][b, A][c, B] + [a, B][b, C][c, A]) =
[[A, B, C]]² ([b, A]ca − [c, A]ba) ∧ ([a, B]cb − [c, B]ab) ∧ ([a, C]bc − [b, C]ac) =_E
[[A, B, C]]² ((bc ∧ A) ∨ a) ∧ ((ac ∧ B) ∨ b) ∧ ((ab ∧ C) ∨ c).
The third identity is slightly deeper:

[a, b, c] ((a ∨ BC) ∧ A) ∨ b ∨ (C ∧ (c ∨ AB)) =
[a, b, c] ([a, C]BA − [a, B]CA) ∨ b ∨ (CA[c, B] − CB[c, A]) =
[a, b, c] ([a, C](BA ∨ b) − [a, B](CA ∨ b)) ∨ (CA[c, B] − CB[c, A]) =
[a, b, c] (−[a, C](b ∨ BA) + [a, B](b ∨ CA)) ∨ (CA[c, B] − CB[c, A]) =
[a, b, c] ((−[a, C][b, A]B + [a, C][b, B]A) + ([a, B][b, A]C − [a, B][b, C]A)) ∨ (CA[c, B] − CB[c, A]) =
[a, b, c][[A, B, C]] ([b, C][a, B][c, A] − [b, A][a, B][c, C] − [b, B][a, C][c, A]) =
[[A, B, C]] ([a, C]([b, B]c − [c, B]b) − [b, C]([a, B]c − [c, B]a)) ∧ ([b, A]ac − [c, A]ab) =_E
[[A, B, C]] ([a, C]bc ∧ B − [b, C]ac ∧ B) ∧ ([b, A]ac − [c, A]ab) =_E
[[A, B, C]] ([a, C]bc − [b, C]ac) ∧ B ∧ ([b, A]ac − [c, A]ab) =_E
[[A, B, C]] ((ab ∧ C) ∨ c) ∧ B ∧ (a ∨ (bc ∧ A)),

proving the last identity. □
The identities of Theorem 2.25 may be given a direct geometric interpretation. The first yields the Theorem of Desargues in the plane, the second a lesser-known theorem due to Raoul Bricard, and the third another duality-based planar theorem [Cox87, Sam88].
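The first identity admits a quick coordinate check. The sketch below is an illustration of my own, not the thesis's computation: modelling both join and meet in the plane by cross products in R^3, with brackets as determinants, the two sides of the Desargues identity agree exactly on random integer configurations.

```python
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

rng = random.Random(1)
rnd = lambda: tuple(rng.randint(-9, 9) for _ in range(3))

for _ in range(100):
    a, b, c, A, B, C = (rnd() for _ in range(6))
    # [a,b,c] (a v BC) ^ (b v AC) ^ (c v AB): triple meet of three lines.
    lhs = det3(a, b, c) * det3(cross(a, cross(B, C)),
                               cross(b, cross(A, C)),
                               cross(c, cross(A, B)))
    # [[A,B,C]] (bc ^ A) v (ac ^ B) v (ab ^ C): triple join of three points.
    rhs = det3(A, B, C) * det3(cross(cross(b, c), A),
                               cross(cross(a, c), B),
                               cross(cross(a, b), C))
    assert lhs == rhs
print("Desargues identity verified on random integer configurations")
```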
Theorem 2.26 (Bricard) Let a, b, c and a', b', c' be two triangles in the projective plane. Form the lines aa', bb', and cc' joining respective vertices. Then these lines intersect the opposite edges bc, ac, and ab in collinear points if and only if the joins of the points bc ∩ b'c', ac ∩ a'c' and ab ∩ a'b' to the opposite vertices a', b' and c' form three concurrent lines.
PROOF. In GC(3), let a, b, c be vectors and A, B, C be covectors. Then the identity

[a, b, c]² ((a ∨ BC) ∧ A) ∨ ((b ∨ AC) ∧ B) ∨ ((c ∨ AB) ∧ C) =
[[A, B, C]]² ((bc ∧ A) ∨ a) ∧ ((ac ∧ B) ∨ b) ∧ ((ab ∧ C) ∨ c)

is valid by Theorem 2.25. Upon specialization A = b'c', B = a'c', C = a'b' one obtains

[a, b, c]² (aa' ∧ b'c') ∨ (bb' ∧ a'c') ∨ (cc' ∧ a'b') =
[a', b', c'] ((bc ∧ b'c') ∨ a) ∧ ((ac ∧ a'c') ∨ b) ∧ ((ab ∧ a'b') ∨ c).
Figure 2.3: The Theorem of Bricard
The left side vanishes when the points a, b, c are non-collinear and the points aa' ∧ b'c' and bb' ∧ a'c', forming a line in the projective plane, when joined to the point cc' ∧ a'b', do not span the plane, i.e. the three points are collinear. Interpreting the right side of the identity, the two lines (bc ∧ b'c') ∨ a and (ac ∧ a'c') ∨ b intersect in a point of the plane. The right side vanishes when the join of this point and the extensor corresponding to the line (ab ∧ a'b') ∨ c does not span the plane, i.e. the three lines are concurrent. □
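Bricard's identity can be sampled the same way. In the naive cross-product model below (my own sketch, not from the text), each meet and join is realized only up to a fixed sign, and for this identity the residual signs do not cancel between the two sides; the sides therefore agree up to a global sign, so absolute values are compared.

```python
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

rng = random.Random(2)
rnd = lambda: tuple(rng.randint(-9, 9) for _ in range(3))

for _ in range(100):
    a, b, c, A, B, C = (rnd() for _ in range(6))
    # [a,b,c]^2 ((a v BC) ^ A) v ((b v AC) ^ B) v ((c v AB) ^ C):
    # meet each line (a v BC) with the line A to get a point, then join.
    lhs = det3(a, b, c)**2 * det3(cross(cross(a, cross(B, C)), A),
                                  cross(cross(b, cross(A, C)), B),
                                  cross(cross(c, cross(A, B)), C))
    # [[A,B,C]]^2 ((bc ^ A) v a) ^ ((ac ^ B) v b) ^ ((ab ^ C) v c):
    # join each point (bc ^ A) with the vertex a to get a line, then meet.
    rhs = det3(A, B, C)**2 * det3(cross(cross(cross(b, c), A), a),
                                  cross(cross(cross(a, c), B), b),
                                  cross(cross(cross(a, b), C), c))
    # the sides agree up to a fixed overall sign in this model
    assert abs(lhs) == abs(rhs)
```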
The third identity of Theorem 2.25 gives the following theorem of plane projective geometry. The identity has an interesting symmetric form.

Theorem 2.27 Let a, b, c and a', b', c' be two triangles in a projective plane. The points determined by the intersections aa' ∩ b'c' and a'b' ∩ cc', together with the point b, are collinear if and only if the line a'c' and the two lines, obtained by intersecting ab ∩ a'b' and bc ∩ b'c' and joining these two points to the point c and the point a respectively, are concurrent.
Figure 2.4: The Theorem of the third identity.
PROOF. Specialization to a basis of vectors in the third identity of Theorem 2.25 yields

[a, b, c] (aa' ∧ b'c') ∨ b ∨ (a'b' ∧ cc') =
((ab ∧ a'b') ∨ c) ∧ a'c' ∧ (a ∨ (bc ∧ b'c')),

which directly interprets the Theorem. □
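The third identity of Theorem 2.25, and hence Theorem 2.27, can also be sampled numerically. As with Bricard's theorem, the cross-product model below (my own sketch) realizes each join and meet only up to a fixed sign, so the two sides are compared in absolute value.

```python
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

rng = random.Random(3)
rnd = lambda: tuple(rng.randint(-9, 9) for _ in range(3))

for _ in range(100):
    a, b, c, A, B, C = (rnd() for _ in range(6))
    # [a,b,c] ((a v BC) ^ A) v b v (C ^ (c v AB)): join of three points.
    lhs = det3(a, b, c) * det3(cross(cross(a, cross(B, C)), A),
                               b,
                               cross(C, cross(c, cross(A, B))))
    # [[A,B,C]] ((ab ^ C) v c) ^ B ^ (a v (bc ^ A)): meet of three lines.
    rhs = det3(A, B, C) * det3(cross(cross(cross(a, b), C), c),
                               B,
                               cross(a, cross(cross(b, c), A)))
    # the sides agree up to a fixed overall sign in this model
    assert abs(lhs) == abs(rhs)
```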
While a classification of spatial Arguesian polynomials P may be given by a laborious case analysis of Bp, a classification of Arguesian polynomials in steps greater than 4 seems to be a difficult problem.
2.4 Arguesian Lattice Identities

We consider a restricted form of type I Arguesian polynomial P = e1 ∧ ··· ∧ el, where each ei is a type I basic extensor ei = ai,1 ··· ai,ki ∨ Xi,1 ··· Xi,li with li > ki. Let fj denote a type II basic extensor. Theorem 2.29 answers the question, given P = e1 ∧ ··· ∧ el, of when there exists a set {fj} such that e1 ∧ ··· ∧ el and f1 ∨ ··· ∨ fm represent the same projective invariant. Indecomposable identities of this form are shown to represent the higher-order Arguesian lattice identities.
Lemma 2.28 Let P = e1 ∧ ··· ∧ el be a non-zero meet of type I basic extensors. If P(a, X) is indecomposable then for any X* ∈ X* with label X ∈ X, and a ∈ a, [a, X*] ∈ E(P*) iff [a, X] ∈ E(P).

PROOF. Since P(a, X) is non-zero there is a pre-transversal π* of P* with corresponding transversal π of P. We have E(P)|π = E(e1)|π ∧ ··· ∧ E(el)|π, and given X* ∈ C(ei*), a ∈ V(ei), if [a, X*] ∉ E(ei)|π then X* ∈ ext(E(ei*)|π*). As a covector of label X occurs exactly once in a bracket [a, X] of E(P)|π, by the Grassmann condition for Arguesian polynomials, the order of non-zero P is either 1 or 2. Thus E(P)|π is non-zero for every pre-transversal π*. Applying Proposition 2.41, since P is indecomposable, given X* ∈ C(ei*), there is a pre-transversal π* : a ↦ X* for any a ∈ V(ei), and the Lemma follows. □
If P = e1 ∧ ··· ∧ el it suffices to consider the sets C(ei) alone. Form a multigraph C(P) = ({ei}, E), called the covector graph, whose vertices are the basic extensors {ei}, and e = (ei, ej) ∈ E is an edge labelled X if X ∈ C(ei) ∩ C(ej). We may without loss of generality assume that C(P) contains no isolated vertices or vertices of degree one, as then P is decomposable.
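The covector-graph condition appearing in Theorem 2.29 below is easy to test mechanically. The sketch is my own illustration with a made-up extensor list: it forms the underlying simple graph of C(P) from the covector sets of the ei and checks whether it is a single path or a single cycle.

```python
from collections import Counter
from itertools import combinations

def covector_graph(covector_sets):
    """Underlying simple graph of C(P): vertices are extensor indices,
    with an edge (i, j) whenever e_i and e_j share a covector label."""
    edges = set()
    for i, j in combinations(range(len(covector_sets)), 2):
        if covector_sets[i] & covector_sets[j]:
            edges.add((i, j))
    return edges

def is_single_path_or_cycle(n, edges):
    deg = Counter()
    for i, j in edges:
        deg[i] += 1; deg[j] += 1
    degrees = [deg[v] for v in range(n)]
    if len(edges) not in (n - 1, n):
        return False
    # connectivity: grow one component from vertex 0
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for i, j in edges:
            w = j if i == v else i if j == v else None
            if w is not None and w not in seen:
                seen.add(w); stack.append(w)
    if len(seen) != n:
        return False
    if len(edges) == n:                       # connected, n edges: a cycle
        return all(d == 2 for d in degrees)
    return degrees.count(1) == 2 and all(d in (1, 2) for d in degrees)

# Hypothetical P = e1 ^ e2 ^ e3 whose covector sets share labels cyclically,
# as in the step-3 Arguesian law: C(P) is a triangle (a 3-cycle).
sets = [{'X', 'Y'}, {'Y', 'Z'}, {'Z', 'X'}]
print(is_single_path_or_cycle(3, covector_graph(sets)))
```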
Theorem 2.29 Let P = e1 ∧ ··· ∧ el be an indecomposable meet of type I basic extensors ei = ai,1 ··· ai,ki ∨ Xi,1 ··· Xi,li, li > ki, and let fj = aj,1 ··· aj,kj ∧ Xj,1 ··· Xj,lj denote a type II basic extensor. Then

e1 ∧ ··· ∧ el =_E f1 ∨ ··· ∨ fm

for some set {fj}, j = 1, ..., m, if and only if C(P) is a multicycle of length n or a multipath of length n. The former in step n represents a Grassmann-Cayley identity for the (n − 3)-rd higher Arguesian law.
PROOF. Let P = e1 ∧ ··· ∧ el be indecomposable and E(P) = E(Q) for Q = f1 ∨ ··· ∨ fm. By Lemma 2.28, [a, X*] ∈ E(P*) iff [a, X] ∈ E(P) for any a ∈ a, X* ∈ X* of label X. Let V_X = {a ∈ a : [a*, X] ∈ E(Q) for X ∈ X and a* ∈ a* of label a}. Then as P =_E Q, for every X ∈ X, if a ∈ a with [a, X*] ∈ E(P*) for X* of label X, then a ∈ V_X. If V_X ≠ V_Y for X, Y ∈ X, then f_X ≠ f_Y. For let ai ∉ V_Y and suppose

f_X = f_Y = a1 ··· ai ··· aj ··· ak ∧ X1 ··· X ··· Y ··· Xl

occurs in Q = f1 ∨ ··· ∨ fm with k > 1. As P =_E Q there is a transversal of indecomposable P with π : ai ↦ X and π : aj ↦ Y, for some aj ∈ V(f_X) = V(f_Y). Then σ(a) = π(a) for a ∉ {ai, aj}, and σ : ai ↦ Y, aj ↦ X is another transversal of P, and [ai, Y] ∈ E(P), a contradiction.
Suppose the underlying simple graph of C(P) has a vertex u of degree three (or more), with edges labelled X, Y, Z. If f_X, f_Y, f_Z denote the type II basic extensors corresponding to X, Y, Z, then f_X, f_Y, f_Z are distinct, by the multilinearity of the vectors in P. There is a vector ai assigned to u with repeated occurrences ai1 ∈ V(f_X), ai2 ∈ V(f_Y), ai3 ∈ V(f_Z). Since, given π, aij occurs in E(Q)|π for at most one j = 1, 2, 3, Q is zero by Grassmann condition I. Thus the underlying simple graph of C(P) consists of a union of cycles and paths. If this graph is not a single path or cycle, then let {ei'} denote the set of extensors corresponding to the vertices in some component. Then V1 = V({ei'}) and C1 = C({ei'}) are equinumerous sets satisfying Theorem 2.36, and P is decomposable.
Given P = e1 ∧ ··· ∧ el, form Q = f1 ∨ ··· ∨ fm and suppose C(P) is a multicycle or multipath. We show that the signs of each monomial in E(P) and E(Q) are equal or opposite. As the vectors are multilinear and the covectors of each ei are distinct, we may pass to non-repeated variables. Consider P = e1 ∧ ··· ∧ el. Given two transversals π, σ : a → X, a permutation ρ of X is defined by ρ(π(i)) = σ(i), i = 1, ..., n, where the multilinear vectors a are indexed by i. Since P is homogeneous of order 2, the scalar bracket B = [[X1, ..., Xn]] produced by E(P) has order 1. Let ei = ai,1 ai,2 ··· ai,k ∨ Xi,1 Xi,2 ··· Xi,p, p > k. Then E(P)|π = E(e1)|π ∧ ··· ∧ E(el)|π with

E(ei)|π = [ai,1, Xi,π(1)][ai,2, Xi,π(2)] ··· [ai,k, Xi,π(k)] · Xi,1 ··· Xi,p \ {Xi,π(1), ..., Xi,π(k)} · sgn(Xi,1 ··· Xi,p \ {Xi,π(1), ..., Xi,π(k)}, Xi,π(1), ..., Xi,π(k)),

and if sgn(B_π) denotes the parity of the number of transpositions necessary to linearly order the covectors of ext(E(ei)|π), i = 1, ..., l, contained in B under π, then

sgn(E(P)|π) = ∏_{i=1}^{l} sgn(E(ei)|π) × sgn(B_π).    (2.7)
Case I) Suppose Xm,i, Xm,j ∈ C(em) for some m = 1, ..., l, with π : am,1 ↦ Xm,i, π : am,2 ↦ Xm,j, σ : am,1 ↦ Xm,j, σ : am,2 ↦ Xm,i, for am,1, am,2 ∈ V(em). Suppose also that π(ar,s) = σ(ar,s) for r ≠ m, and for s ≠ 1, 2 if r = m. The covectors in the bracket B under π and σ are ordered identically, so sgn(B_π) = sgn(B_σ). Only sgn(E(em)|π) may change in the product of equation 2.7. For transversal π,

sgn(E(em)|π) = sgn(Xm,1 ··· Xm,p \ {Xm,π(1), ..., Xi, ..., Xj, ..., Xm,π(k)}, Xm,π(1), ..., Xi, ..., Xj, ..., Xm,π(k)).

The sign sgn(E(em)|σ) is identical except with Xi and Xj interchanged, so sgn(E(P)|π) = −sgn(E(P)|σ).
Case II) Suppose a ∈ V(em) and either π(a) = σ(a), or if π(a) ≠ σ(a) then σ(a) ≠ π(b) for any b ∈ V(em), b ≠ a. Then if π : a ↦ Xm,i and σ : a ↦ Xm,j,

sgn(E(em)|π) = sgn(Xm,1 ··· Xm,i ··· Xm,j ··· Xm,p \ {Xm,π(1), ..., Xm,i, ..., Xm,j, ..., Xm,π(k)}, Xm,π(1), ..., Xm,i, ..., Xm,π(k))    (2.8)

while the covectors of em contributing to B_π are

Xm,1 ··· Xm,i ··· Xm,j ··· Xm,p \ {Xm,π(1), ..., Xm,i, ..., Xm,j, ..., Xm,π(k)}.    (2.9)

Similarly,

sgn(E(em)|σ) = sgn(Xm,1 ··· Xm,i ··· Xm,j ··· Xm,p \ {Xm,π(1), ..., Xm,i, ..., Xm,j, ..., Xm,π(k)}, Xm,π(1), ..., Xm,j, ..., Xm,π(k))    (2.10)

with covectors contributing to B_σ

Xm,1 ··· Xm,i ··· Xm,j ··· Xm,p \ {Xm,π(1), ..., Xm,i, ..., Xm,j, ..., Xm,π(k)}.    (2.11)

The positions of Xm,i and Xm,j in the ordered tails of 2.8 and 2.10 are equal for both π and σ, since a fixed order on V(em) is maintained. The covectors common to 2.9 and 2.11 may be simultaneously transposed, without affecting sgn(E(P)|π) × sgn(E(P)|σ), so that sgn(E(em)|π) and sgn(E(em)|σ) differ by a transposition of Xm,i and Xm,j, while 2.9 differs from 2.11 in exactly one position, in which 2.9 contains Xm,j while 2.11 contains Xm,i.
We show that, given transversals π, σ of P, sgn(P|π) × sgn(P|σ) = sgn(Q|π) × sgn(Q|σ). It suffices to verify the case in which the permutation ρ determined by π and σ is a cycle. We say that the transversals π and σ differ by a transposition if π(ai) = σ(ai) for ai ∈ a except on distinct a1, a2, in which case π : a1 ↦ X, π : a2 ↦ Y, while σ : a1 ↦ Y, σ : a2 ↦ X, for distinct X, Y ∈ X.

If π, σ differ by a transposition, then either X, Y ∈ C(em) for some m, corresponding to Case I above, or X ∈ C(ei), Y ∈ C(ej) for distinct i, j, which is Case II. Both are sign reversing in P. If f_X = f_Y, the transposition corresponds to the dual of Case I for vectors; if f_X ≠ f_Y, to the dual of Case II; so both are sign reversing in Q as well.
Claim: There is a sequence of transversals of P, π = ρ0, ρ1, ..., ρm = π', where ρi differs from ρi−1 by a transposition, such that π' satisfies the following property P1:

P1: If π' : a ↦ X, π' : b ↦ Y, and X, Y ∈ C(ea) ∩ C(eb), with possibly a = b, then σ(a) ≠ π'(b).

For suppose there are a ∈ V(ea), b ∈ V(eb) with ρi−1 : a ↦ X, ρi−1 : b ↦ Y, X, Y ∈ C(ea) ∩ C(eb) and σ(a) = ρi−1(b). Set ρi = ρi−1 except ρi(a) = σ(a), ρi(b) = ρi−1(a). If this transposition is applied to ρi−1 then ρi−1(a) ≠ σ(a) and ρi(a) = σ(a). Since |a| = n, there can be at most n violations of P1. Each transposition reduces the number of violations by one, so the sequence is finite.
The bijections π'⁻¹ and σ⁻¹ are transversals of Q, and satisfy the dual property to P1 on Q, since by construction X, Y ∈ C(ea) ∩ C(eb) iff a, b ∈ V(f_X) ∩ V(f_Y), and π'⁻¹ : X ↦ a, π'⁻¹ : Y ↦ b with σ⁻¹(X) = π'⁻¹(Y), iff π' : a ↦ X, π' : b ↦ Y with σ(b) = π'(a).
Let ρ' be the cycle induced by π' and σ on P. For each vector a assigned to a covector permuted in ρ', the pair π' : a ↦ Xm,i, σ : a ↦ Xm,j satisfies Case II. Then

sgn(P|π') × sgn(P|σ) = (−1)^|ρ'| × sgn(B_π') × sgn(B_σ),

and since the i-th element of B_π' is either identical to the i-th element of B_σ, or equal to the element of ρ' occurring before the i-th element of B_σ, we have sgn(B_π') × sgn(B_σ) = (−1)^(|ρ'|−1), and sgn(P|π') × sgn(P|σ) = −1. The dual of P1 holds on Q for π'⁻¹, σ⁻¹, so sgn(P|π') × sgn(P|σ) = sgn(Q|π'⁻¹) × sgn(Q|σ⁻¹), and as transpositions are sign reversing in both P and Q, sgn(P|π) × sgn(P|σ) = sgn(Q|π⁻¹) × sgn(Q|σ⁻¹). □
Theorem 2.30 (Arguesian Law) In a Grassmann-Cayley algebra of step n, let the vector set a be partitioned into three sets {a1, a2, ..., ak1}, {b1, b2, ..., bk2}, and {c1, c2, ..., ck3} of sizes k1, k2 and k3 respectively, with k1 + k2 + k3 = n, and set a^(k1) = a1a2 ··· ak1, b^(k2) = b1b2 ··· bk2 and c^(k3) = c1c2 ··· ck3. Similarly, partition the covector set X into sets {X1, X2, ..., Xl1}, {Y1, Y2, ..., Yl2}, and {Z1, Z2, ..., Zl3}, setting X^(l1) = X1X2 ··· Xl1, Y^(l2) = Y1Y2 ··· Yl2 and Z^(l3) = Z1Z2 ··· Zl3, with l1 + l2 + l3 = n. Then the following is an identity in a Grassmann-Cayley algebra, provided l1 + l2 > k3, l2 + l3 > k1, l1 + l3 > k2:

[a^(k1), b^(k2), c^(k3)] (a^(k1) ∨ Y^(l2)Z^(l3)) ∧ (b^(k2) ∨ X^(l1)Z^(l3)) ∧ (c^(k3) ∨ X^(l1)Y^(l2)) =
[[X^(l1), Y^(l2), Z^(l3)]] (a^(k1)b^(k2) ∧ Z^(l3)) ∨ (a^(k1)c^(k3) ∧ Y^(l2)) ∨ (b^(k2)c^(k3) ∧ X^(l1))

PROOF. A corollary of Theorem 2.29, with the covector graph C(P) a multicycle of length 3: since l1 + l2 > k3 and l1 + l2 + l3 = n = k1 + k2 + k3, we have l3 < k1 + k2. □
Replace X^(l1) = X1 ··· Xl1, Y^(l2) = Xl1+1 ··· Xl1+l2, Z^(l3) = Xl1+l2+1 ··· Xn, and choose a new basis of vectors a'1, ..., a'n, setting Xi = a'1 ··· a'n omitting a'i. The meet X^(l1) then becomes an extensor in the vectors a'i, which we denote by a'^(l1). After appropriate cancellation, the identity of Theorem 2.30 may be written:
Theorem 2.31 (Arguesian Law) If k1 + k2 + k3 = n and l1 + l2 + l3 = n, then

[a^(k1), b^(k2), c^(k3)][a'^(l1), b'^(l2), c'^(l3)] (a^(k1)a'^(l1) ∧ b^(k2)b'^(l2) ∧ c^(k3)c'^(l3)) =
(b^(k2)c^(k3) ∧ b'^(l2)c'^(l3)) ∨ (a^(k1)c^(k3) ∧ a'^(l1)c'^(l3)) ∨ (a^(k1)b^(k2) ∧ a'^(l1)b'^(l2))
Corollary 2.32 Let a^(2), b^(2) be lines and c a point in projective four-space, and let a' be a point and b'^(2), c'^(2) lines. Then the plane formed by joining a^(2) and a', the solid formed by joining the lines b^(2) and b'^(2), and the plane formed by joining c and c'^(2) contain a common point if and only if the line formed by intersecting the plane b^(2)c with the solid b'^(2)c'^(2), the point formed by intersecting the planes a^(2)c and a'c'^(2), and the line formed by intersecting the solid a^(2)b^(2) with the plane a'b'^(2) all lie in a common solid.
The Arguesian identities given by Theorem 2.29 are in fact direct consequences of the
Arguesian lattice identity. Any lattice equality is equivalent to a lattice inequality,
and it can be shown [Hai85] that the Arguesian law may be written

c ∧ ([(a ∨ a') ∧ (b ∨ b')] ∨ c') ≤ a ∨ ([((a ∨ b) ∧ (a' ∨ b')) ∨ ((b ∨ c) ∧ (b' ∨ c'))] ∧ (a' ∨ c'))   (2.15)
where the operations ∨ and ∧ are lattice theoretic join and meet. The
equivalence of 2.31 to the Arguesian lattice identity, in the case where the flats
corresponding to a^(k_1), b^(k_2), c^(k_3) and a'^(l_1), b'^(l_2), c'^(l_3) are in general position, is easily
seen. Identity 2.15 was shown by Haiman [Hai85] to hold in all linear lattices, lattices
representable by commuting equivalence relations on a set, and is therefore valid in
the lattice of subspaces of a projective space. Assuming that the flats corresponding
to a, b, c have the zero element as their meet, the lattice elements a ∨ a', b ∨ b' represent
the subspaces of V spanned by a, a' and by b, b'. Intersecting these two subspaces,
joining the resulting flat with the flat c', and then meeting with c, the result gives
c or the zero element, depending on whether the subspace configuration contains
a common point. It is the zero element precisely when the left side of 2.31 vanishes,
and in this case the subspaces are centrally perspective. On the right side of
2.15, the clause in square brackets is the subspace containing (a ∨ b) ∧ (a' ∨ b') and
(b ∨ c) ∧ (b' ∨ c'), which, assuming general position of the subspaces represented by
CHAPTER 2. ARGUESIAN POLYNOMIALS
the extensors, corresponds to the flat (a^(k_1)b^(k_2) ∧ a'^(l_1)b'^(l_2)) ∨ (b^(k_2)c^(k_3) ∧ b'^(l_2)c'^(l_3)).
Meeting this subspace with a' ∨ c' and then joining with a, we obtain a subspace passing through
c only when the given term lies on a common hyperplane with (a^(k_1)c^(k_3) ∧ a'^(l_1)c'^(l_3)).
We conclude that the Arguesian law, in the case of subspaces in general position, is
realizable as a set of Grassmann-Cayley algebra identities.
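Since the lattice of subspaces of a vector space is linear, inequality 2.15 can be verified by brute force in a small example. The sketch below (all helper names are ours) works over the sixteen subspaces of GF(2)^3, with join as the span of the union and meet as intersection.

```python
from itertools import combinations, product
import random

def span(vecs):
    # subspace of GF(2)^3 generated by vecs: closure under addition
    s = {(0, 0, 0)}
    grew = True
    while grew:
        grew = False
        for v in list(s):
            for w in vecs:
                u = tuple((x + y) % 2 for x, y in zip(v, w))
                if u not in s:
                    s.add(u)
                    grew = True
    return frozenset(s)

VECS = list(product((0, 1), repeat=3))
SUBSPACES = list({span(S) for r in range(4) for S in combinations(VECS, r)})
assert len(SUBSPACES) == 16   # 1 + 7 + 7 + 1 subspaces of GF(2)^3

def join(A, B): return span(A | B)   # lattice join: span of the union
def meet(A, B): return A & B         # lattice meet: intersection
def leq(A, B):  return A <= B        # lattice order: inclusion

def arguesian(a, ap, b, bp, c, cp):
    # c ^ ([(a v a') ^ (b v b')] v c')  <=  a v ([...] ^ (a' v c'))
    left = meet(c, join(meet(join(a, ap), join(b, bp)), cp))
    axis = join(meet(join(a, b), join(ap, bp)),
                meet(join(b, c), join(bp, cp)))
    right = join(a, meet(axis, join(ap, cp)))
    return leq(left, right)

random.seed(0)
assert all(arguesian(*random.choices(SUBSPACES, k=6)) for _ in range(500))
print("Arguesian inequality holds on 500 random subspace 6-tuples")
```

The check covers all elements of the lattice, not only points in general position, which matches the claim that 2.15 holds identically in linear lattices.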
Theorem 2.33 (M-th Higher Order Arguesian Law) In a Grassmann-Cayley
algebra of step n let the vector set a be partitioned into m+3 sets {a_{i1}, a_{i2}, ..., a_{ik_i}}
of sizes k_i for 1 ≤ i ≤ m+3. Set a_i^(k_i) = a_{i1} ∨ ⋯ ∨ a_{ik_i}. Similarly partition the
covector set X into sets {X_{i1}, X_{i2}, ..., X_{il_i}}, setting X_i^(l_i) = X_{i1} ∧ ⋯ ∧ X_{il_i}.
Then provided l_i + l_{i+1} > k_i and k_i + k_{i+1} > l_{i+1}, for i = 1, ..., m+3, the order
being cyclic modulo m+3 so that m+4 = 1, the following identity is valid:

[a_1^(k_1), ..., a_{m+3}^(k_{m+3})] ⋀_{i=1}^{m+3} (a_i^(k_i) ∨ X_i^(l_i) X_{i+1}^(l_{i+1})) =
[[X_1^(l_1), ..., X_{m+3}^(l_{m+3})]] ⋁_{i=1}^{m+3} (a_i^(k_i) a_{i+1}^(k_{i+1}) ∧ X_{i+1}^(l_{i+1}))
PROOF. A corollary to Theorem 2.29 with the covector graph C(P) a multicycle
of length m + 3. □
Corollary 2.34 Let a, b, c, d and a', b', c', d' be two sets of points in three-dimensional
projective space, and consider the two sets of lines ab, bc, cd, ad and a'b', b'c', c'd', a'd'.
Then the four planes ac'd', ba'd', ca'b', db'c' pass through a common point if and
only if the four points formed by intersecting the lines ab, bc, cd, ad with the
corresponding planes a'c'd', a'b'd', a'b'c', b'c'd' all lie on a common plane.
PROOF. The identity

[a, b, c, d](a ∨ AB) ∧ (b ∨ BC) ∧ (c ∨ CD) ∧ (d ∨ AD) =
[[A, B, C, D]](ab ∧ B) ∨ (bc ∧ C) ∨ (cd ∧ D) ∨ (ad ∧ A)

is valid in GC(4). Substituting A = b'c'd', B = a'c'd', C = a'b'd', D = a'b'c' we
obtain

[a, b, c, d][a', b', c', d'](ac'd' ∧ ba'd' ∧ ca'b' ∧ db'c') =
(ab ∧ a'c'd') ∨ (bc ∧ a'b'd') ∨ (cd ∧ a'b'c') ∨ (ad ∧ b'c'd')
Figure 2.5: The First Higher Arguesian Identity
An illustration of Corollary 2.34 is given in Figure 2.5. Geometrically, this identity is
strictly weaker than the generalization of Desargues' theorem given by Theorem 1.13
in three dimensions, for if aa', bb', cc', dd' are four concurrent lines in three space, then
the planes ac'd', ba'd', ca'b', db'c' all pass through a common point, so the geometric
identity holds. Conversely, if the three planes db'c', ba'd' and ca'b' intersect in a
point p, the point a can be chosen such that the plane ac'd' passes through p. Then
a is free to move anywhere in this plane. If bb', cc', dd' intersect in a point q with a'
chosen in space, there is no plane P containing a such that qa'a is a line for every
choice of a in P.
To understand the higher Arguesian identities we proceed as follows. The N-th
higher Arguesian law as given by Haiman in [Hai85] may be written, given alphabets
a_1, a_2, ..., a_n and b_1, b_2, ..., b_n, as

a_n ∧ ([⋀_{i=1}^{n−1} (a_i ∨ b_i)] ∨ b_n) ≤ a_1 ∨ ([⋁_{i=1}^{n−1} ((a_i ∨ a_{i+1}) ∧ (b_i ∨ b_{i+1}))] ∧ (b_1 ∨ b_n))   (2.16)
Proposition 2.35 (Rota) Any lattice identity P ≤ Q is equivalent to one in which
every variable appears exactly once on each side.
By applying Proposition 2.35 the N-th higher Arguesian law may be written in the
following form. A remarkable property of this identity is that it is self-dual.
N-th Higher Order Arguesian Law Let a_1, ..., a_n, a'_1, ..., a'_n and b_1, ..., b_n,
b'_1, ..., b'_n be alphabets. Then the following identity holds as a linear lattice identity:

a_n ∧ ([⋀_{i=1}^{n−1} ((a_i ∧ a'_i) ∨ (b_i ∧ b'_i))] ∨ (b_n ∧ b'_n)) ≤ a_1 ∨ ([⋁_{i=1}^{n−1} ((a'_i ∨ a_{i+1}) ∧ (b'_i ∨ b_{i+1}))] ∧ (b_1 ∨ b_n))   (2.17)
In identity 2.17 let A_1, A_2, ..., A_n, B_1, B_2, ..., B_n be new variables and substitute
b_i = A_i, b'_i = A_{i+1} with b'_n = A_1 and a_i = a'_i = B_i. Then 2.17 becomes, after
application of the lattice rules B_i ∧ B_i = B_i, A_i ∨ A_i = A_i and the commutativity of
lattice theoretic join and meet,

B_n ∧ ([⋀_{i=1}^{n−1} (B_i ∨ (A_i ∧ A_{i+1}))] ∨ (A_n ∧ A_1)) ≤ B_1 ∨ ([⋁_{i=1}^{n−1} ((B_i ∨ B_{i+1}) ∧ A_{i+1})] ∧ (A_1 ∨ A_n))   (2.18)
and Equation 2.18 is a linear lattice identity. The left hand side of this identity
is zero when the subspace B_n ∨ (A_1 ∧ A_n) has some point in common with the
bracketed term on the left hand side of 2.17. Meeting both sides of this identity
with B_n, the left hand side remains invariant, while the right hand side vanishes
when A_1 ∨ (B_1 ∧ B_n) lies on a common hyperplane with the bracketed term on
that same side. These are precisely the conditions making the left and right hand
sides of 2.33 vanish. The identity 2.17 has a natural geometric interpretation. If
a_1b_1, ..., a_{n+1}b_{n+1} are n + 1 concurrent lines in projective n-space, then the
n + 1 points, which must exist, a_1a_2 ∩ b_1b_2, a_2a_3 ∩ b_2b_3, ..., a_1a_{n+1} ∩ b_1b_{n+1}, all
lie on a common hyperplane. (See Theorem 1.13 of Chapter 1 for a proof.) Haiman
[Hai85] has shown that the (N+3)-rd higher Arguesian law is a strictly stronger
lattice identity than the N-th order law. It is conjectured that the (N+1)-st is
strictly stronger than the N-th.
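The same brute-force strategy used for the ordinary Arguesian law extends to the higher laws. Below is a sketch of the n = 4 instance of identity 2.16, checked over the subspaces of GF(2)^3 (the indexing follows our reading of 2.16; all helper names are ours).

```python
from itertools import combinations, product
from functools import reduce
import random

def span(vecs):
    # subspace of GF(2)^3 generated by vecs: closure under addition
    s = {(0, 0, 0)}
    grew = True
    while grew:
        grew = False
        for v in list(s):
            for w in vecs:
                u = tuple((x + y) % 2 for x, y in zip(v, w))
                if u not in s:
                    s.add(u)
                    grew = True
    return frozenset(s)

VECS = list(product((0, 1), repeat=3))
SUBSPACES = list({span(S) for r in range(4) for S in combinations(VECS, r)})

def join(A, B): return span(A | B)
def meet(A, B): return A & B

def higher_arguesian(a, b, n):
    # a, b: lists of n subspaces; the N-th higher Arguesian law, identity 2.16
    lhs = meet(a[n-1],
               join(reduce(meet, (join(a[i], b[i]) for i in range(n - 1))),
                    b[n-1]))
    rhs = join(a[0],
               meet(reduce(join, (meet(join(a[i], a[i+1]), join(b[i], b[i+1]))
                                  for i in range(n - 1))),
                    join(b[0], b[n-1])))
    return lhs <= rhs   # lattice order is inclusion

random.seed(1)
for _ in range(300):
    a = random.choices(SUBSPACES, k=4)
    b = random.choices(SUBSPACES, k=4)
    assert higher_arguesian(a, b, 4)
print("n = 4 higher Arguesian law holds on 300 random instances")
```

For n = 3 the same function reproduces the check of identity 2.15, since the two coincide after renaming variables.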
2.5
A Decomposition Theorem
In the final section of this Chapter we prove a decomposition theorem for Arguesian
polynomials. Theorem 2.36 enables the classification results of the previous section.
We begin with a definition.
Definition. A non-zero type I (II) Arguesian polynomial P(a, X) is decomposable
if there exist Arguesian polynomials Q(a_1, X_1) and R(a_2, X_2) of the same type on
disjoint variable sets a_1 ∪ a_2 = a and X_1 ∪ X_2 = X such that E(P) = E(Q) × E(R).
We first state the Theorem.
Theorem 2.36 Let P(a, X) be a non-zero type I Arguesian polynomial in GC(n).
Then P is decomposable if and only if P =_E P'(a, X) with the property that there
exist equinumerous proper subsets a_1 ⊂ a of vectors and X_1 ⊂ X of covectors such
that: for any a ∈ a, X* ∈ X*, if [a, X*] ∈ E(P'*) and either a ∈ a_1 or X* ∈ X_1*,
then a ∈ a_1 and X* ∈ X_1*.
For non-zero Arguesian P, Theorem 2.36 may be equivalently stated as: P is
decomposable iff P =_E P' with associated graph B_{P'} disconnected. Corollary 2.37
follows immediately from Theorem 2.36.
Corollary 2.37 Let P and Q be Arguesian polynomials in GC(n) with P =_E Q and
P type I, Q type II. Then P is decomposable if and only if Q is decomposable.
PROOF. We have E(P) = E(Q) = E(R) × E(S) for type I Arguesian polynomials
R and S. By the proof of Theorem 2.36 applied to type II polynomials and setting
a_1 = V(R) and X_1 = C(R), we may write E(Q) = E(R') × E(S') with R' and S'
both type II. □
Lemma 2.38 Let P(a, X) be a non-zero type I Arguesian polynomial in GC(n).
Then there exists an Arguesian polynomial P'(a, X) with P =_E P' satisfying:
for any Q' ⊆ P' with X_j* ∈ C(Q'*), [a, X_j*] ∈ E(Q'*) for some a ∈ V(Q') if and only
if there is a transversal π' of P' with [b, X_j] ∈ E(Q')|_{π'} for some b ∈ V(Q').
PROOF. The direction (⇐) is clear. (⇒) We construct P' recursively. Replace the
join of vectors or meet of covectors identically. If f is a type II basic extensor, then
for every transversal π of P, X_j ∈ C(f) satisfies [b, X_j] ∈ E(f)|_π for some b ∈ V(f);
replace f identically. If Q is a type I basic extensor, Q = e = a_1 a_2 ⋯ a_k ∨ X_1 X_2 ⋯ X_l
with k < l, then replace Q by Q' = (a_1 a_2 ⋯ a_k ∨ X_1 X_2 ⋯ X_l \ {Y_1, ..., Y_m}) ∧ Y_1 ∧
⋯ ∧ Y_m, where {Y_1, ..., Y_m} ⊆ X and [a_i, Y_j] ∉ E(P) for any a_i ∈ V(e), j = 1, ..., m.
Since P is multilinear in the set of vectors, the meet of type II subexpressions does not
occur unless P is zero or trivial. If Q = S ∧ T with both S and T type I, or of
different types, replace Q identically.
Let e = a_1 ⋯ a_s ∨ X_1 ⋯ X_k ⊆ P. Then e =_E e' = a_1 ⋯ a_s ∨ X_1 ⋯ X_k \ {Y_1, ..., Y_m} ∧
Y_1 ⋯ Y_m, and setting U = {X_1, X_2, ..., X_k} \ {Y_1, ..., Y_m},

E(e') = Σ_{σ : {X_{σ(1)}, ..., X_{σ(s)}} ⊆ U} sgn(σ) [a_1, X_{σ(1)}][a_2, X_{σ(2)}] ⋯ [a_s, X_{σ(s)}] X_{σ(s+1)} ⋯ X_{σ(k−m)} ∧ Y_1 Y_2 ⋯ Y_m
      + Σ_{σ : ∃i : X_{σ(i)} ∈ {Y_1, ..., Y_m}} sgn(σ) [a_1, X_{σ(1)}][a_2, X_{σ(2)}] ⋯ [a_s, X_{σ(s)}] X_{σ(s+1)} ⋯ X_{σ(k)}   (2.19)

If [a_i, Y_j] ∉ E(e') for j = 1, ..., m and a_i ∈ V(e), then E(e') is precisely the first
summation of equation 2.19, which may be rewritten as (a_1 a_2 ⋯ a_s ∨ X_1 ⋯ X_k \
{Y_1, ..., Y_m}) ∧ Y_1 ∧ ⋯ ∧ Y_m. Thus E(e) = ±E(e').
If Q = S ∨ T with S type II and T type I then minimally write

Q = S ∨ (T_1 ∧ ⋯ ∧ T_k)   (2.20)

where S is type II and each T_j is type I, and is either a covector, a type I
R ∨ W with R type II and W type I, or the join of type I expressions. Given
S(a_1, ..., a_k) and a set {T_{i_j}}, j = 1, ..., l, if there does not exist a transversal π with
[a, X*] ∈ E(Q*)|_{π*} for a ∈ {a_1, ..., a_k}, X* ∈ ext(E(T_{i_j})|_{π*}), for any j = 1, ..., l,
then replace Q with

Q' = (S ∨ ((T_1 ∧ ⋯ ∧ T_k) \ (T_{i_1}, ..., T_{i_l}))) ∧ T_{i_1} ∧ ⋯ ∧ T_{i_l}   (2.21)

which by an extension of 2.19 by linearity satisfies E(Q') = ±E(Q). If Q = S ∧ T
with S type II and T type I, then for every transversal π, X_j* ∈ ext(E(T*)|_{π*}) implies
[a, X_j*] ∈ E(Q)|_π for some a ∈ V(S). Replace Q identically.
The case Q = S ∨ T, both S, T of type I, remains. Suppose Q*(X_{j_1}) for X_{j_1} ∈ X*
but [a, X_j] ∉ E(Q)|_π for any transversal π and a ∈ V(Q). Then either
1. T*(X_{j_1}) and S*(X_{j_2}), for X_{j_1}, X_{j_2} ∈ X* of label X_j, or
2. T*(X_{j_1}) and there is no X_{j_2} ∈ C(S*) of label X_j with S*(X_{j_2}).
In case 1 (and if, as a base case, X_{j_1} occurs external to the join of subexpressions of
T*, and similarly for X_{j_2} in S*), then by induction [a, X_j] ∉ E(S)|_π, E(T)|_π, and we
may write Q* = (S'* ∧ X_{j_2}) ∨ (T'* ∧ X_{j_1}), for type I S', T'. Compute the join by
splitting T' ∧ X_{j_1}. Then

Q = Σ_{(B_0, B_1) ∈ C(T'; m_0, m_1)} [S' ∧ X_{j_2}, B_1] B_0 X_{j_1}   (2.22)

where m_1 = n − step(S') and m_0 = step(T') − m_1; equation 2.22 is (S* ∨ T'*) ∧ X_{j_1},
and X_{j_1} is external to S* ∨ T'*. As P is non-zero, there is no X_l* ≠ X_{j_1} with T'(X_l*).
If case 2 occurs, then by formula 2.22, Q* may be replaced by the non-zero E-equivalent
Q'* = (S* ∧ X_{j_1}) ∨ T'*, and again X_{j_1} is not contained amongst the extensors of
E(Q'*).
Let P = P_0, P_1, ..., P_n = P' be a sequence of Arguesian polynomials produced by
the construction. The expansion E(P_r) is evaluated recursively, and P_r differs from
P_{r−1} by replacement of Q_{r−1} by Q_r with E(Q_{r−1}) = E(Q_r). Thus E(P_{r−1}) = E(P_r)
for all r = 1, ..., n, and E(P) = E(P').
Suppose there is Q' ⊆ P' with X_j* ∈ C(Q'*), [a, X_j*] ∈ E(Q'*) for some a ∈ V(Q'),
but [b, X_j] ∉ E(Q')|_{π'} for any b ∈ V(Q') and transversal π' of P'. In passing from
P_i to P_{i+1}, Q ⊆ P_i is replaced by Q' ⊆ P_{i+1} such that if Q'*(X_j*) then Q*(X_j).
Thus if [a, X_j] ∈ E(P_{i+1}*) then [a, X_j] ∈ E(P_i*). By induction [a, X_j] ∈ E(P*).
There is either R ∧ S ⊆ P or R ∨ S ⊆ P with R type II, S type I, and a ∈ V(R),
X_j* ∈ C(S*). In the former case, as [b, X_j] ∉ E(S*)|_{π*} for b ∈ V(S) and transversal
π, there is some P_i, i > 0, such that Q''* = R* ∧ (S_1* ∧ ⋯ ∧ X_j* ∧ ⋯ ∧ S_k*) ⊆ P_i*. Then
[a, X_j*] ∈ E(Q'')|_{π*} for some transversal π, a contradiction. In the latter, there is P_i,
i > 0, such that Q''* = R* ∨ (S_1* ∧ ⋯ ∧ X_j* ∧ ⋯ ∧ S_k*) ⊆ P_i*, and Q''* is replaced by
(R* ∨ (S_1* ∧ ⋯ ∧ S_k*)) ∧ X_j* in some P_r, r > i. □
Lemma 2.39 Let T_i ⊆ P, i = 1, ..., 4, be type I subexpressions of a type I Arguesian
polynomial P in GC(n). Let C_i = {X ∈ X | X ∈ ext(E(T_i)|_π) for some transversal π
of P}, and suppose that C_1 ∩ C_2 = C_3 ∩ C_4 = ∅. Further, suppose that X ∈ C_1 but
X ∉ ext(E(T_1)|_π) for a transversal π implies X ∈ ext(E(T_3)|_π), and that X ∈ C_2
but X ∉ ext(E(T_2)|_π) for a transversal π implies X ∈ ext(E(T_4)|_π). Then

(T_1 ∧ T_2) ∨ (T_3 ∧ T_4) =_E ((T_1 ∧ (⋀ X \ C_1)) ∨ T_3) ∧ ((T_2 ∧ (⋀ X \ C_2)) ∨ T_4)   (2.23)
PROOF. Given π of P, let T_π = E((T_1 ∧ T_2) ∨ (T_3 ∧ T_4))|_π. Then sgn(T_π) =
∏_{i=1}^{4} sgn(E(T_i)|_π) times the sign from computing the join. Let ext(E(T_i)|_π), i =
1, ..., 4, be denoted U = U_1 ⋯ U_k, V = V_1 ⋯ V_l, Z = Z_1 ⋯ Z_m, W = W_1 ⋯ W_p
respectively. In computing the join of the left side of equation 2.23, compute UV ∨ ZW
as

Σ [U, V, Z_{i_1}, ..., Z_{i_r}, W_{j_1}, ..., W_{j_s}] Z \ {Z_{i_1}, ..., Z_{i_r}} W \ {W_{j_1}, ..., W_{j_s}}
    × sgn(Z \ {Z_{i_1}, ..., Z_{i_r}} W \ {W_{j_1}, ..., W_{j_s}}, Z_{i_1}, ..., Z_{i_r}, W_{j_1}, ..., W_{j_s})   (2.24)

where the sets {Z_{i_1}, ..., Z_{i_r}} and {W_{j_1}, ..., W_{j_s}} are the disjoint sets of covectors,
disjoint from U ∪ V, required to complete the bracket in a non-zero T_π. We claim
that, subject to the hypotheses, the sets U ∪ {Z_{i_1}, ..., Z_{i_r}} and V ∪ {W_{j_1}, ..., W_{j_s}}
are identical over all π. For each π, the corresponding expression 2.24 is then equal,
for all π, to

(−1)^{r(p−s+1)} [U, Z_{i_1}, ..., Z_{i_r}, V, W_{j_1}, ..., W_{j_s}] Z \ {Z_{i_1}, ..., Z_{i_r}} W \ {W_{j_1}, ..., W_{j_s}}
    × sgn(Z \ {Z_{i_1}, ..., Z_{i_r}}, Z_{i_1}, ..., Z_{i_r}, W \ {W_{j_1}, ..., W_{j_s}}, W_{j_1}, ..., W_{j_s})   (2.25)

Once the sets U ∪ {Z_{i_1}, ..., Z_{i_r}} and V ∪ {W_{j_1}, ..., W_{j_s}} have been linearly ordered
as in X, the bracket of 2.25 may be linearly ordered in the same number of
transpositions for every π. Expression 2.25 is then equivalent, up to global change
of sign, to

((U ∧ (⋀ X \ C_1)) ∨ Z) ∧ ((V ∧ (⋀ X \ C_2)) ∨ W)   (2.26)

The Lemma then follows by the linearity of the join and meet.
Proof of claim: We first show for any π, X ∈ C_1 implies X ∈ U ∪ {Z_{i_1}, ..., Z_{i_r}}, and
X ∈ C_2 implies X ∈ V ∪ {W_{j_1}, ..., W_{j_s}}. For let X ∈ C_1 but [a, X] ∉ E(T_1)|_π for
any transversal π and a ∈ V(T_1). Then by Lemma 2.13 part 1, X ∈ ext(E(T_1)|_π) for
every π, X ∈ U, and X does not occur elsewhere in the bracket of 2.24. If X ∈ C_1
and ∃π such that [a, X] ∈ E(T_1)|_π but X ∉ ext(E(T_1)|_π), then X ∈ ext(E(T_3)|_π) and
X ∉ ext(E(T_4)|_π), as π is a transversal. Thus X = Z_{i_j} for some j. If X ∈ C_1 and
∃π such that [a, X] ∈ E(T_1)|_π and X ∈ ext(E(T_1)|_π), then by Lemma 2.13 part 2,
X ∈ ext(E(T_1)|_π) for every transversal π, and X ∈ U again. If X ∈ C_2, the proof is
identical.
If X ∈ C_3 but X ∉ C_1 ∪ C_2 then X ∉ C_4, as C_3 ∩ C_4 = ∅, so X = Z_{i_j} for some j.
If X ∈ C_4 but X ∉ C_1 ∪ C_2, then X = W_{j_i} for some i. It is not possible to have
X ∈ C_2 ∩ C_3 or X ∈ C_1 ∩ C_4, and by hypothesis there does not exist X ∈ C_1 ∩ C_2, so
the Lemma is proved. □
We are now ready to prove the Decomposition Theorem.
PROOF OF THEOREM 2.36. (⇒) Suppose P(a, X) is decomposable. Then there
exist Arguesian Q(a_1, X_1) in step |a_1| and R(a_2, X_2) in step |a_2|, on disjoint variable
sets a_1 ∪ a_2 = a and X_1 ∪ X_2 = X, such that E(P) = E(Q) × E(R). We show
by induction that P =_E P_1 ∨ P_2 with P_1, P_2 type I, both step 0, such that if
[a, X*] ∈ E(P_i*) then a ∈ a_i and X* ∈ X_i*, i = 1, 2. Recursively construct a sequence
P = P_0, P_1, ..., P_m = P_1 ∨ P_2. Let R ⊆ P_i be given and assume by induction for
all Q ⊆ R, with Q ≠ Q_1 ∧ Q_2 for proper Q_1, Q_2 both type I, and Q ≠ Q_1 ∨ Q_2 for
proper Q_1, Q_2 both type II, the following: if [a, X*] ∈ E(Q*) then either
1. a ∈ a_1, X* ∈ X_1*, and the linear combination Q*(X_1*, ..., X_k*) implies
{X_1*, ..., X_k*} ⊆ X_1* if Q is type I, or Q*(a_1, ..., a_k) implies {a_1, ..., a_k} ⊆ a_1
if Q is type II.
2. a ∈ a_2, X* ∈ X_2*, and the linear combination Q*(X_1*, ..., X_k*) implies
{X_1*, ..., X_k*} ⊆ X_2* if Q is type I, or Q*(a_1, ..., a_k) implies {a_1, ..., a_k} ⊆ a_2
if Q is type II.
Construct P_1 ∨ P_2 as follows. Replace the join of vectors or meet of covectors in
P_i identically. Let e be a type I basic extensor occurring in P. By Lemma 2.38,
replace e by e' with V(e') ⊆ a_1 and C(e') ⊆ X_1, without loss of generality. The
extensor e' trivially satisfies inductive hypothesis 1. If f is a type II basic extensor,
take a minimal type I R with f ⊆ R ⊆ P. For any transversal π of P_i and each
X ∈ C(f), there is a ∈ V(f) such that π : a → X. If b ∈ ext(E(f)|_π) then, as R is
type I, ∃Y ∈ C(R) such that π : b → Y. Then there is another transversal σ with
σ(c) = π(c) for c ≠ a, b, and σ : a → Y, σ : b → X, and so without loss of generality
V(f) ⊆ a_1 and C(f) ⊆ X_1.
If R = S ∨/∧ T with S and T of the same type, replace R identically. If R = S ∨ T with S
type II and T type I, minimally write, as the join of type II S_i and meet of type I T_j,
R = (S_1 ∨ ⋯ ∨ S_k) ∨ (T_1 ∧ ⋯ ∧ T_l). By induction, S_j*(a_1, ..., a_m) has {a_1, ..., a_m} ⊆ a_1
or {a_1, ..., a_m} ⊆ a_2 for every 1 ≤ j ≤ k, and by Equation 2.19 we may factor T
such that for every 1 ≤ j ≤ l' there is X_j* ∈ {X_1*, ..., X_p*} in T_j(X_1*, ..., X_p*) and
a transversal π with [a, X_j*] ∈ E(P_{i+1}*)|_{π*}, i.e. [a, X_j] ∈ E(P_{i+1})|_π. By induction,
[a, X*] ∈ E(T_j), 1 ≤ j ≤ l', implies a ∈ a_1, X* ∈ X_1*, {X_1*, ..., X_p*} ⊆ X_1* or a ∈ a_2,
X* ∈ X_2*, and {X_1*, ..., X_p*} ⊆ X_2*. Thus R can be replaced by an E-equivalent R'
satisfying either 1 or 2.
If R = S ∧ T with S type II and T type I, write R = (S_1 ∨ ⋯ ∨ S_k) ∧ (T_1 ∧ ⋯ ∧ T_m)
with S_i, T_j satisfying 1 or 2. Let R' be minimal type I with R ⊆ R'. By an argument
analogous to the case of a type II basic extensor, if P is non-zero, R must satisfy 1
or 2; replace R identically.
Let R = R_1 ∨ R_2 where R_1, R_2 are type I and R_1 = T_1 ∧ T_2, R_2 = T_3 ∧ T_4, where the
result holds for T_i, i = 1, ..., 4, by induction, and T_i or T_{i+1} may be empty, i = 1, 2,
but not both. For any π, the union of the covectors in ext(E(R_1)|_π) and ext(E(R_2)|_π)
must span X. If at most one T_i is empty and either R_i contains T_i, T_{i+1} satisfying
different hypotheses, let T_1, T_3 satisfy hypothesis 1 and T_2, T_4 satisfy hypothesis 2.
Then as P is non-zero, Lemma 2.39 applies. In the E-equivalent

((T_1 ∧ ⋀ X_2) ∨ T_3) ∧ ((T_2 ∧ ⋀ X_1) ∨ T_4)   (2.27)

no covector of X_2* appears amongst the extensors of E(((T_1 ∧ ⋀ X_2) ∨ T_3)*), and
(T_1 ∧ ⋀ X_2) ∨ T_3 satisfies hypothesis 1 (similarly for (T_2 ∧ ⋀ X_1) ∨ T_4). If T_2, T_4
are empty and R = T_1 ∨ T_3, replace R identically, for given π, the union of the
covectors in the extensors of E(T_1)|_π and E(T_3)|_π must span X. Hence T_1 and T_3
satisfy opposite hypotheses, and P has step 0. If P_i = R_1 ∨/∧ R_2 with R_1, R_2
maximal, we may take P_i = R_1 ∨ R_2, and the outermost ∧ of 2.27 is a ∨ of step 0
subexpressions.
Let P = P_0, ..., P_m = P' be a sequence produced by the construction. Since each
R is replaced identically or with R' satisfying E(R) = E(R'), we have E(P_{i−1}) =
E(P_i), i = 1, ..., m, and thus E(P) = E(P'). Suppose [a, X*] ∈ E(P'*) with a ∈ a_1
and X* ∈ X_2*. As in the proof of Lemma 2.38 we must have [a, X*] ∈ E(P*). Then
there is either
1. Type II R ∧ S ⊆ P with R type II and S type I, a ∈ V(R), X* ∈ C(S*), and
[a, X*] ∈ E(R* ∧ S*). Then there is m > 0 such that (R_1 ∨ ⋯ ∨ R_k) ∧ (S_1 ∧
⋯ ∧ S_l) ⊆ P_m, with R_i, S_j satisfying hypothesis 1 or 2, with a ∈ V(R_p) for
some 1 ≤ p ≤ k, X* ∈ C(S_q*) for some 1 ≤ q ≤ l. As P is type I, R ∧ S is
replaced identically and a ∈ a_1, X* ∈ X_1* or a ∈ a_2, X* ∈ X_2*, a contradiction.
2. Type I R ∨ S ⊆ P with R type II, S type I, and a ∈ V(R), X* ∈ C(S*). Then
there is m > 0 such that (R_1 ∨ ⋯ ∨ R_k) ∨ (S_1 ∧ ⋯ ∧ S_l) ⊆ P_m, with R_i, S_j
satisfying hypothesis 1 or 2, with a ∈ V(R_p) for some 1 ≤ p ≤ k, X* ∈ C(S_q*)
for some 1 ≤ q ≤ l. Then R ∨ S is replaced in some P_q, q > m, by equivalent
R' ∨ S' with [a, X*] ∉ E(R'* ∧ S'*). Since P is multilinear in V(P), [a, X*] ∉ E(P'*),
a contradiction.
(⇐) Suppose there exist equinumerous a_1 and X_1 satisfying: [a, X*] ∈ E(P'*) and
a ∈ a_1 or X* ∈ X_1* implies a ∈ a_1 and X* ∈ X_1*. If P is non-zero, [a, X] ∈ E(P)
implies a ∈ a_1, X ∈ X_1 or a ∈ a \ a_1 = a_2, X ∈ X \ X_1 = X_2. By the above
argument, P' =_E P_1 ∨ P_2 with [a, X] ∈ E(P') implies [a, X] ∈ E(P_1) or [a, X] ∈ E(P_2),
and [a, X] ∈ E(P_i) implies a ∈ a_i and X ∈ X_i, i = 1, 2. The expressions P_1 (P_2) have
the property that covectors of X_2 (X_1) occur only in subexpressions Q_1 ⊆ P_1 (P_2) of
form (T_1 ∧ (⋀ X_2)) ∨ T_3 ((T_2 ∧ (⋀ X_1)) ∨ T_4). Let T_1 ∨ T_3, an expression in a_1, X_1, be
obtained from (T_1 ∧ (⋀ X_2)) ∨ T_3 by deleting all occurrences of X_2. Then as X_1 ∩ X_2 = ∅,
E((T_1 ∧ ⋀ X_2) ∨ T_3) in step n is identical, in a field of characteristic 2, to E(T_1 ∨ T_3) in
step |a_1|. For a transversal π, denoting ext(E(T_1)|_π) = X_1 ⋯ X_p, ⋀ X_2 = Y_1 ⋯ Y_q,
and ext(E(T_3)|_π) = Z_1 ⋯ Z_r, the monomial E(T_1 ∧ ⋀ X_2) ∨ E(T_3)|_π is given by

Σ [X_1, ..., X_p, Y_1, ..., Y_q, Z_{i_1}, ..., Z_{i_{n−p−q}}] Z_1 ⋯ Z_r \ {Z_{i_1}, ..., Z_{i_{n−p−q}}}
    × sgn(E(T_1)|_π) × sgn(E(T_3)|_π)   (2.28)
    × sgn(X_1, ..., X_p, Y_1, ..., Y_q, Z_{i_1}, ..., Z_{i_{n−p−q}})   (2.29)
    × sgn(Z_1 ⋯ Z_r \ {Z_{i_1}, ..., Z_{i_{n−p−q}}}, Z_{i_1}, ..., Z_{i_{n−p−q}})

Then sgn(E(T_1 ∨ T_3)|_{π'}) in step |a_1|, with π' = π|_{P_1}, is identical except that 2.28 is
replaced by sgn(E(T_1)|_{π'}) × sgn(E(T_3)|_{π'}) and 2.29 is replaced by

sgn(X_1, ..., X_p, Z_{i_1}, ..., Z_{i_{n−p−q}})   (2.30)

Since over all π, {Y_1, ..., Y_q} are identical and disjoint from X_1, the sign of 2.29
is consistently equal or opposite to the sign of 2.30. By induction, E(P_1) equals
the alternative expansion of the Arguesian polynomial Q in step |a_1| obtained by
deleting all occurrences of X_2. If R(a_2, X_2) denotes the corresponding polynomial in
a_2, X_2 upon deleting X_1, then E(P') = E(Q) × E(R) = E(P), as P =_E P'. □
Example 2.40 The following Arguesian polynomial P in GC(8), if non-zero, must
be decomposable:

(((((a ∨ AB) ∧ C) ∧ DEF) ∨ b) ∧ GH) ∨
((((c ∨ ABC) ∧ DEFG) ∨ de) ∧ H) ∨
((((DEH ∨ g) ∧ (f ∨ AC) ∧ B) ∨ h) ∧ FG).

By repeated applications of Lemma 2.38,

P =_E ((DEF ∨ b) ∧ GH ∧ ((a ∨ AB) ∧ C)) ∨
((DEFG ∨ de) ∧ H ∧ (c ∨ ABC)) ∨
((DEH ∨ gh) ∧ FG ∧ ((f ∨ AC) ∧ B)).

An application of Lemma 2.39 gives

((((a ∨ AB) ∧ C) ∧ DEFGH) ∨ (c ∨ ABC)) ∧
((((DEFG ∨ de) ∧ H) ∧ ABC) ∨ ((DEF ∨ b) ∧ GH)) ∨
((DEH ∨ gh) ∧ FG ∧ (f ∨ AC) ∧ B)

and a second yields

((((((a ∨ AB) ∧ C) ∧ DEFGH) ∨ (c ∨ ABC)) ∧ DEFGH) ∨ ((f ∨ AC) ∧ B)) ∨
(((((DEFG ∨ de) ∧ H) ∧ ABC) ∨ ((DEF ∨ b) ∧ GH)) ∧ ABC) ∨ ((DEH ∨ gh) ∧ FG)

whose alternative expansion, by Theorem 2.36, is equal to the product E(Q) × E(R) where

Q = (((a ∨ AB) ∧ C) ∨ (c ∨ ABC)) ∨ ((f ∨ AC) ∧ B)
R = ((DEF ∨ b) ∧ GH) ∨ ((DEFG ∨ de) ∧ H) ∨ ((DEH ∨ gh) ∧ FG)

The Arguesian polynomial P vanishes iff either Q or R vanishes. By applying
Propositions 1.5 and 1.9 the identity may be interpreted as a geometric theorem of
seven-dimensional projective space.
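The graph form of Theorem 2.36 — P decomposes iff B_P is disconnected — can be illustrated on Example 2.40. The edge set below is our reading of the vector-covector incidences in the basic extensors (the text does not list B_P explicitly); connected components are found with a small union-find.

```python
def components(edges):
    """Connected components of a graph given as (u, v) edge pairs (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for u, v in edges:
        union(u, v)
    comps = {}
    for x in list(parent):
        comps.setdefault(find(x), set()).add(x)
    return list(comps.values())

# incidences read off the basic extensors of Example 2.40 (our reading):
# vectors a, c, f meet only covectors A, B, C; b, d, e, g, h only D, ..., H
edges = ([(v, X) for v in 'acf' for X in 'ABC'] +
         [(v, X) for v in 'bde' for X in 'DEFGH'] +
         [(v, X) for v in 'gh' for X in 'DEFGH'])

comps = components(edges)
assert len(comps) == 2
assert {'a', 'c', 'f', 'A', 'B', 'C'} in comps
print("B_P splits into", [sorted(c) for c in comps])
```

The two components recover exactly the variable sets of the factors Q and R above.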
If a non-zero Arguesian polynomial is indecomposable, we may prove the following
proposition.
Proposition 2.41 If P(a, X) is a non-zero indecomposable type I Arguesian polynomial
and [a, X_j*] ∈ E(P*) for X_j* ∈ X*, then there is a pre-transversal f* with
f* : a → X_j*.
PROOF. The pre-transversals of P are perfect matchings of the associated multigraph
B_P. Suppose for contradiction there is no perfect matching containing a
multi-edge (a_i, X_j) ∈ B_P corresponding to f* : a → X_j*. Then by Hall's matching
theorem, which remains valid for multigraphs, there is a subset A ⊆ a \ {a} with
|A| > |R_{X\X_j}(A)|, where |R_{X\X_j}(A)| denotes the cardinality of the set of relatives
of A in X \ X_j. As P is non-zero, [b, X_{j_l}] ∈ E(P*) for some repeated X_{j_l} of label
X_j and some b ∈ A. The sets A and R_{X\X_j}(A) ∪ {X_j} therefore form equinumerous
subsets of a and X. Thus in any monomial M ∈ E(P), if [a, X] ∈ M and a ∈ A or
X ∈ R_X(A), then a ∈ A and X ∈ R_X(A). By the proof of Theorem 2.36 (⇐), P
may be converted to an E-equivalent P' satisfying: [a, X_j] ∈ E(P*) and, a ∈ A or
X_j ∈ R_X(A)* implies a ∈ A and X_j ∈ R_X(A)*, and we obtain the contradiction
that P is decomposable. □
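The proof rests on Hall's matching theorem: a perfect matching exists iff no subset A of left vertices has fewer relatives than elements. A small sketch of both sides of that equivalence, on hypothetical incidence data (all names are ours):

```python
from itertools import combinations

def hall_violator(adj):
    """Return a set A of left vertices with |A| > |R(A)|, or None (brute force)."""
    left = list(adj)
    for r in range(1, len(left) + 1):
        for A in combinations(left, r):
            rel = set().union(*(adj[a] for a in A))
            if len(A) > len(rel):
                return set(A)
    return None

def matching_size(adj):
    """Maximum bipartite matching by augmenting paths."""
    match = {}  # right vertex -> left vertex
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return sum(augment(u, set()) for u in adj)

good = {1: {'X', 'Y'}, 2: {'Y', 'Z'}, 3: {'X', 'Z'}}
bad  = {1: {'X'},      2: {'X'},      3: {'X', 'Z'}}

assert hall_violator(good) is None and matching_size(good) == 3
assert hall_violator(bad) == {1, 2} and matching_size(bad) < 3
print("Hall's condition holds exactly when a perfect matching exists")
```

For multigraphs, parallel edges do not change the relative sets R(A), which is why Hall's criterion carries over as the proof asserts.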
Chapter 3
Arguesian Identities
3.1
Arguesian Identities
We present a general construction for identities between Arguesian polynomials
of either type, which may be viewed as the analog of alternative laws in the sense
of Barnabei, Brini, and Rota [MB85]. In general, the existence of the Grassmann
condition makes the construction of Arguesian identities quite complicated; however,
Theorem 3.1 gives a construction fundamental to all Arguesian identities, yielding a
large class of dimension independent identities, including all those presently given.
Particularly interesting is an n-dimensional generalization of Bricard's Theorem.
The proof of Theorem 3.1 forms the starting point for construction of a much larger
class of identities P =_E Q; identities whose validity depends both on subtle matching
properties of B_P, and on sgn(E(P)).
Theorem 3.1 (Main Theorem on Arguesian Identities) Let B = (a ∪ X, E)
be a simple bipartite graph with equinumerous labeled vertex sets a and X of
cardinality n, and edge set E, satisfying ∀A ⊆ a, |A| ≤ |R(A)|, where R(A) = {X ∈
X | (a, X) ∈ E, a ∈ A}. For a ∈ a, form type I basic extensors e_a = a ∨ ⋀{X_j}
where X ∈ {X_j} if (a, X) ∈ E. Similarly, for X ∈ X, form type II basic extensors
f_X = ⋁{a_i} ∧ X. Let P be a type I Arguesian polynomial in a Grassmann-Cayley
algebra of step n formed using the set {e_a} ∪ X and the rules:
1. Given type I R, whose covectors C(R*) contain no repeated labels of X, and
a type I basic extensor e_a = a ∨ ⋀{Y_j} with {X_i} ⊆ {Y_j}, set R' = a ∨ (R ∧
(⋀{Y_j} \ {X_i})).
2. Given type I R and S, where R, S may be simply the meet of covectors, form
R ∧ S.
Let Q be a type II Arguesian polynomial formed using the set {f_X} ∪ a and the dual
rules 1 and 2.
a) If P and Q are type I, II Arguesian polynomials, homogeneous of order 2, then
P =_E Q.
b) If P = ⋁_{i=1}^{l} Q_i and Q = ⋀_{j=1}^{m} P_j, l, m ≥ 3, with C(Q_i) = X, V(P_j) = a,
multilinearly, for 1 ≤ i ≤ l, 1 ≤ j ≤ m, then P =_E Q.
PROOF. We prove parts a) and b) simultaneously by verifying that, given Arguesian
P and Q constructed in this way, P and Q have the same transversals, occurring
with equal or opposite sign uniformly.
By construction, X ∈ C(e_a) ↔ a ∈ V(f_X), and therefore ∃X* ∈ X* of label X
such that [a, X*] ∈ E(e_a*) iff ∃a* ∈ a* of label a such that [a*, X] ∈ E(f_X*). Suppose
e_a = a ∨ ⋀{Y_j}, and [a, Y_{1_1}] ∈ E(e_a*) for Y_{1_1} ∈ X* of label Y_1 ∈ {Y_j}. Let R be type I
with C(R) = {X_i} ⊆ {Y_j}, and apply rule 1. Forming R' = a ∨ (R ∧ (⋀{Y_j} \ {X_i})),
if Y_1 ∈ {Y_j} \ {X_i} then [a, Y_{1_1}] ∈ E(R'*). If Y_1 ∈ {X_i}, then Y_1 ∈ C(R). As
C(R*) contains no repeated labels of X, R contains no non-trivial joins of type I
subexpressions. By construction, R contains no type II subexpressions other than
the join of vectors. Thus ∃Y_{1_2} ∈ X* of label Y_1 such that R*(Y_{1_2}), and then [a, Y_{1_2}] ∈
E(R'*). For rule 2, [a, X*] ∈ E((R ∧ S)*) iff [a, X*] ∈ E(R*) or [a, X*] ∈ E(S*).
We conclude that [a, X_{1_1}] ∈ E(e_a*) for X_{1_1} of label X_1 iff ∃X_l* of the same label
with [a, X_l*] ∈ E(P*). Therefore to each pre-transversal of P there corresponds a
pre-transversal of Q with identical bijection π : a → X, and vice-versa.
Let P = ⋁_{i=1}^{l} Q_i. Grassmann condition 1 does not apply to any Q_i as C(Q_i) = X
without repetition. If π* is a pre-transversal of P with π* : a → X*, X* ∈ X*,
then as C(Q_j) = X, 1 ≤ j ≤ l, the covector of label X appears in l − 1 distinct
ext(E(Q_i)|_π), 1 ≤ i ≤ l, and the join E(P)|_π = ⋁_{i=1}^{l} E(Q_i)|_π is non-zero; Grassmann
condition 2 does not apply. Suppose P has order two. Only Grassmann condition
1 may apply. By Proposition 2.17, if P is zero under pre-transversal π*, there is
T = R ∧ S ⊆ P with X_j ∈ ext(E(R)|_π) ∩ ext(E(S)|_π). Let X_{j_1} ∈ ext(E(R*)|_{π*}) and
X_{j_2} ∈ ext(E(S*)|_{π*}). Then there does not exist a ∈ V(R) ∪ V(S) with π* : a → X_{j_1}
or π* : a → X_{j_2}. Since π* : b → X_{j_i}, i = 1, 2, for some b ∈ a, we have T ⊆ S',
R' ∨/∧ S' ⊆ P and b ∈ V(R') \ V(T). But C(R ∧ S) has repeated X_{j_1}, X_{j_2}, so R' ∨ S'
is not formed. Hence P and Q have the same transversals. By Corollary 2.15 all
transversals occur with coefficient ±1.
Following the proof technique of Theorem 2.29, let π and σ be transversals of P; at
least one exists as the hypotheses of Hall's matching theorem are satisfied on B_P.
When no confusion results we shall identify π with its corresponding transversal π⁻¹
of Q. We construct a sequence of transversals π = π_0, π_1, ..., π_s = σ in which

sgn(E(P)|_{π_i}) × sgn(E(P)|_{π_{i+1}}) = sgn(E(Q)|_{π_i}) × sgn(E(Q)|_{π_{i+1}})   (3.1)

from which it follows that

sgn(E(P)|_π) × sgn(E(P)|_σ) = sgn(E(Q)|_π) × sgn(E(Q)|_σ)   (3.2)

and P =_E Q. If π and σ are transversals of P then by Lemma 2.20, if π* : a → X_{j_l}
and σ* : a → X_{j_m} for repeated X_{j_l}, X_{j_m} ∈ X* of label X_j, then l = m. Thus
π and σ induce a permutation of X defined as ρ : π(a_i) → σ(a_i), and it suffices
to verify 3.2 for the case ρ is a cycle. Set V(ρ) = {a_i ∈ a | π(i) ≠ σ(i)}, and
C(ρ) = {X ∈ X | π : a → X, a ∈ V(ρ)}.
Lemma 3.2 Let π, σ be two transversals of a type I Arguesian P(a, X) in GC(n),
and suppose that the permutation ρ of X induced by ρ : π(i) → σ(i) is a cycle. Then
there is a sequence of transversals

π = π_0, π_1, ..., π_{m−1}, π_m, π_{m+1}, ..., π_s = σ

such that the permutation ρ_i induced by π_i, π_{i+1} is a transposition for all i ≠ m, and
if i = m, the ρ_m induced by π_m, π_{m+1} is a transposition or is a cycle satisfying: for
a ∈ V(ρ_m), if [a, X_j*] ∈ E(P*)|_{σ*} for X_j* ∈ X* of label X_j, then there does not
exist b ∈ V(ρ_m) such that [b, X_j*] ∈ E(P*)|_{π_m*}.
PROOF. While valid more generally, we prove the Lemma only for P satisfying the
hypotheses of Theorem 3.1. The sequence is constructed iteratively. If π and σ differ
by a transposition, π = σ, or π, σ satisfy the condition of the transversals π_m, π_{m+1},
the Lemma is trivial. Let C_t denote the cycle induced by the pair π_t, σ, 0 ≤ t ≤ s.
If [a, X_j*] ∈ E(P*)|_{σ*} and [b, X_j*] ∈ E(P*)|_{π_t*} for a, b ∈ V(C_t) and 0 ≤ t ≤ s, let
π_t* : a → X_k*, π_t* : b → X_j*, σ* : a → X_j*, and σ* : b → X_l*, with X_j*, X_k*, X_l* ∈ X* of
labels X_j, X_k, X_l ∈ X. Then either
Case 1.) ∃R ∨ S ⊆ P with R type II, S type I and linear combinations R*(a, b),
S*(X_j*, X_k*, X_l*).
Case 2.) ∃T = R ∨ S ⊆ P with R type II, S type I, R*(b), S*(X_j*, X_k*, X_l*), and
T ⊆ S' with S' ∨ R' ⊆ P and S' type I, R' type II, R'*(a), and X_k* ∈ C(S*). If S ⊆ P
with P of any order satisfying the hypotheses of Theorem 3.1 then, as S is proper
and contains no non-trivial type II subexpressions, an easy induction shows repeated
X* ∈ C(S*) implies X* ∈ {X_{j_1}*, ..., X_{j_l}*} in the linear combination S*(X_{j_1}*, ..., X_{j_l}*).
Therefore S'*(X_j*, X_k*, X_l*), and [a, X_k*] ∈ E((R' ∨ S')*).
Case 3.) ∃T = R ∨ S ⊆ P with R type II, S type I, R*(b), S*(X_j*, X_k*), and T ⊆ S'
with S' ∨ R' ⊆ P and S' type I, R' type II, R'*(a), S'*(X_j*, X_k*, X_l*) and X_k* ∉ C(S*).
In case 1 or 2, set π_{t+1}*(c) = π_t*(c) if c ≠ a, b, and π_{t+1}* : a → X_j*, b → X_k*.
Then as π_{t+1}(a) = σ(a) and π_{t+1}(b) = π_t(a), no new violations as in cases 1-3 occur.
Since π_{t+1}* is a pre-transversal, π_{t+1} is a transversal, and if C_t has length l then C_{t+1}
has length l − 1. Given π = π_0, eliminate each occurrence of case 1 or 2 by the
above reassignment, to form π = π_0, π_1, ..., π_{m−1}, π_m.
We therefore assume every violation σ* : a → X_j*, π_m* : b → X_j* for X_j* ∈ X*,
a, b ∈ V(C_m) is of the form of case 3. Let X_1* ∈ X* and consider a maximal length
sequence π_m* : a_i → X_i*, σ* : a_i → X_{i+1}*, for 1 ≤ i ≤ k. By construction of P, for
every i there must exist R_i ∨ S_i ⊆ P with R_i*(a_i), S_i*(X_i*, X_{i+1}*), as P contains no
proper type II subexpressions.
We claim the sequence {R_i ∨ S_i, 1 ≤ i ≤ k} satisfies R_{i+1} ∨ S_{i+1} ⊆ S_i, where the inclusion is proper, or else case 1 or 2 applies. As R_i ∨ S_i ⊆ P, R_i*(a_i), S_i*(X_i*, X_{i+1}*), and [a_{i+1}, X_{i+1}] ∈ E(P*)|_{π*}, we must have either a_{i+1} ∉ V(R_i ∨ S_i), a_{i+1} ∈ V(R_i), or a_{i+1} ∈ V(S_i). In the first case, there is type I S' with R_i ∨ S_i ⊆ S', R' ∨ S' ⊆ P, and R'*(a_{i+1}), S'*(X_i*, X_{i+1}*). Then the case 2 transposition π_{t+1}*: a_i → X_{i+1}*, π_{t+1}*: a_{i+1} → X_i* applies, a contradiction. Similarly, if a_{i+1} ∈ V(R_i), case 1 applies. As S_i is type I, then R_{i+1} ∨ S_{i+1} ⊆ S_i. The inclusion must be proper, for else R_i ∨ (R_{i+1} ∨ S_{i+1}) ⊆ P, and by associativity and anti-commutativity (R_i ∨ R_{i+1}) ∨ S_{i+1} ⊆ P and case 1 applies. Thus for 1 ≤ i ≤ k, R_{i+1} ∨ S_{i+1} ⊆ S_i, R_1 ∨ S_1 ⊆ P is maximal, and we may write,
R_1*(a_1)    S_1*(X_1*, X_2*, ..., X_k*, X_{k+1}*)
R_2*(a_2)    S_2*(X_2*, ..., X_k*, X_{k+1}*)
  ...
R_k*(a_k)    S_k*(X_k*, X_{k+1}*)
Form π_{m+1} as follows: for each maximal sequence of the above type, set π_{m+1}*: a_1 → X_{k+1}*, leaving fixed π_{m+1}*: a_i → X_i*, 2 ≤ i ≤ k. Further, if a ∈ V(C_m) is such that there does not exist b ∈ V(C_m) with π_m*(b) = σ*(a), set π_{m+1}*(a) = σ*(a). Since π_m*, σ* are bijections, π_{m+1} is a well-defined transversal, and the cycle ρ_m induced by π_m and π_{m+1} has cardinality strictly less than C_m.
3.1. ARGUESIAN IDENTITIES
The transversal σ may be recovered from π_{m+1} by a sequence of case 2 transpositions applied to each maximal sequence. As R_1*(a_1), S_1*(X_1*, ..., X_k*, X_{k+1}*), and R_{k−i+1}*(a_{k−i+1}), S_{k−i+1}*(X_{k−i+1}*, ..., X_{k+1}*), for 1 ≤ i ≤ k, define π_{m+i+1} as π_{m+i+1}*(a) = π_{m+i}*(a) for a ≠ a_1, a_{k−i+1}, and π_{m+i+1}*: a_1 → X_{k−i+1}*, π_{m+i+1}*: a_{k−i+1} → X_{k−i+2}*. Thus at step i, π_{m+i+1}(a_{k−i+1}) = σ(a_{k−i+1}).
As |C_m| = |ρ| − m, and there is a bijection between the set of transversals {π_i : i ≥ m + 2} and the set {X* ∈ C(C_m) : σ*(a) = π_m*(b) = X*, a, b ∈ V(C_m), X* of label X}, the sequence π = π_0, ..., π_s = σ is of length |ρ| − 1. □
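The elimination procedure of the proof — resolving a cycle of length k into k − 1 transposition steps between bijections — can be sketched computationally. The maps below are merely bijections from (hypothetical) vector labels to covector labels; they need not be genuine transversals of any particular polynomial:

```python
def transversal_path(pi, sigma):
    """Connect two bijections (vector -> covector) by successive swaps of
    two image values; a k-cycle is resolved in k - 1 swaps."""
    cur, path = dict(pi), [dict(pi)]
    for v in pi:
        if cur[v] != sigma[v]:
            # find the vector currently holding the covector that v needs
            w = next(u for u in cur if cur[u] == sigma[v])
            cur[v], cur[w] = cur[w], cur[v]
            path.append(dict(cur))
    return path

pi    = {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D'}
sigma = {'a': 'B', 'b': 'C', 'c': 'D', 'd': 'A'}   # induced cycle is a 4-cycle
path = transversal_path(pi, sigma)
assert path[-1] == sigma and len(path) == 4        # 3 swaps for a 4-cycle
```

Each consecutive pair in the returned path differs by a single transposition of two covectors, mirroring the sequence π = π_0, ..., π_s = σ of the lemma.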
Lemma 3.3 If the cycle ρ_m induced by π_m, π_{m+1} satisfies the property of Lemma 3.2, then
sgn(E(P)|_{π_m}) × sgn(E(P)|_{π_{m+1}}) = −1 if P has order 2, and (−1)^{|ρ_m|−1} if P has order l ≥ 3.
PROOF. If ρ_m has length 2, then the Lemma is evidently true. Let R ⊆ P in which R contains no join of type I subexpressions, and P has any order. Given T ⊆ R, let ext(E(T)|_{π_m}) = X_1 ⋯ X_p and ext(E(T)|_{π_{m+1}}) = Y_1 ⋯ Y_p, with repeated representations X_1*, ..., X_p*, Y_1*, ..., Y_p*. We claim by induction that ext(E(T)|_{π_{m+1}}) can be reordered, without affecting sgn(E(P)|_{π_m}) × sgn(E(P)|_{π_{m+1}}), so that ext(E(T)|_{π_m}) and the reordered ext(E(T)|_{π_{m+1}}) satisfy:
For all j = 1, ..., p, either 1) X_j* = Y_j*, or 2) if X_j* ≠ Y_j* then X_j ≠ Y_j, X_j ≠ X_k for j ≠ k, and there is a ∈ V(T) ∩ V(ρ_m) such that π_{m+1}*: a → X_j* and π_m*: a → Y_j*.
The result is trivially valid if T = e ⊆ R is a type I basic extensor, as 1) is satisfied. Also if T = T_1 ∧ T_2 with T_1, T_2 type I, since by induction T_1, T_2 satisfy 1 or 2, the result is valid.
Thus let T = T_1 ∨ T_2 with T_1 type II and T_2 type I. We may write
ext(E(T_1)|_{π_m}) ∨ ext(E(T_2)|_{π_m}) = a_1 ⋯ a_k ∨ X_1 ⋯ X_p
ext(E(T_1)|_{π_{m+1}}) ∨ ext(E(T_2)|_{π_{m+1}}) = a_1 ⋯ a_k ∨ Y_1 ⋯ Y_p
Then
E(T_1 ∨ T_2)|_{π_m} = X_1 ⋯ X_p \ {X_{π_m(1)}, ..., X_{π_m(k)}}   (3.3)
× sgn(E(T_1)|_{π_m}) × sgn(E(T_2)|_{π_m})   (3.4)
× sgn(X_1 ⋯ X_p \ {X_{π_m(1)}, ..., X_{π_m(k)}}, X_{π_m(1)}, ..., X_{π_m(k)})   (3.5)
while
E(T_1 ∨ T_2)|_{π_{m+1}} = Y_1 ⋯ Y_p \ {Y_{π_{m+1}(1)}, ..., Y_{π_{m+1}(k)}}   (3.6)
× sgn(E(T_1)|_{π_{m+1}}) × sgn(E(T_2)|_{π_{m+1}})   (3.7)
× sgn(Y_1 ⋯ Y_p \ {Y_{π_{m+1}(1)}, ..., Y_{π_{m+1}(k)}}, Y_{π_{m+1}(1)}, ..., Y_{π_{m+1}(k)})   (3.8)
Let a ∈ {a_1, ..., a_k}, and π_m*: a → X_i*, π_{m+1}*: a → Y_j*. By induction, assuming the claim holds for E(T_2)|_{π_m} and E(T_2)|_{π_{m+1}}, there are cases.
Case 1.) If i = j and X_i* = Y_i*, then a ∉ V(ρ_m), and X_i ∉ ext(E(T)|_{π_m}), X_i ∉ ext(E(T)|_{π_{m+1}}). The case i ≠ j, X_i* = Y_j*, does not occur, as then X_i ≠ Y_i (else π_{m+1} is zero), contradicting the inductive hypothesis.
Case 2.) If i ≠ j, X_i* = Y_i* and X_j* = Y_j*, then X_j ∈ ext(E(T)|_{π_m}), Y_i ∈ ext(E(T)|_{π_{m+1}}), and π_{m+1}*(a) = X_j* = Y_j*, π_m*(a) = Y_i* = X_i*.
Case 3.) If i = j but X_i* ≠ Y_i*, then by induction X_i* = π_{m+1}*(b) = π_m*(a), Y_i* = π_m*(b) = π_{m+1}*(a), for b ∈ V(ρ_m) ∩ V(T_2). Then |ρ_m| = 2, and Lemma 3.3 is valid.
Case 4.) If i ≠ j, and X_i* ≠ Y_i* or X_j* ≠ Y_j*, then there is b ∈ V(ρ_m) ∩ V(T_2) with π_{m+1}*(b) = X_j* or π_m*(b) = Y_i*, assuming the latter. Then π_m*(b) = π_{m+1}*(a), a ∈ V(T_1 \ T_2), and a ≠ b, contradicting Lemma 3.2, unless |ρ_m| = 2.
All elements of {a_1, ..., a_k} in V(ρ_m) are assigned by π_m, π_{m+1} as in Case 2. By hypothesis there does not exist a_i, a_j ∈ {a_1, ..., a_k} ∩ V(ρ_m) with π_{m+1}*(a_j) = π_m*(a_i); the position of π_{m+1}*(a_i) in X_1* ⋯ X_p* is distinct from the position of π_m*(a_i) in Y_1* ⋯ Y_p*, and both are distinct from either of the positions of π_{m+1}*(a_j) and π_m*(a_j) for j ≠ i. Thus the covectors of 3.6 occurring in both 3.6 and 3.7 may be simultaneously reordered, without affecting sgn(E(P)|_{π_m}) × sgn(E(P)|_{π_{m+1}}), to satisfy the claim. The claim follows by induction. Furthermore 3.4, and the reordered 3.7, may be written
sgn(X_{π_{m+1}(i_1)}, ..., X_{π_{m+1}(i_s)}, X_1 ⋯ X_p \ {X_{π_{m+1}(1)}, ..., X_{π_{m+1}(k)}}, X_{π_{m+1}(1)}, ..., X̂_{π_{m+1}(i_1)}, ..., X̂_{π_{m+1}(i_s)}, ..., X_{π_{m+1}(k)})   (3.9)
sgn(Y_{π_m(i_1)}, ..., Y_{π_m(i_s)}, Y_1 ⋯ Y_p \ {Y_{π_m(1)}, ..., Y_{π_m(k)}}, Y_{π_m(1)}, ..., Ŷ_{π_m(i_1)}, ..., Ŷ_{π_m(i_s)}, ..., Y_{π_m(k)})   (3.10)
where each i_j ∈ {1, ..., k} corresponds to one of the s vectors of {a_1, ..., a_k} ∩ V(ρ_m) (a hat denoting omission), and 3.9 and 3.10 are identical except in a set of common positions occupied by covectors with subscript i_j. Then the product of 3.9 and sgn(X_1, ..., X_p) differs from the product of 3.10 and sgn(Y_1, ..., Y_p) by the parity of |{a_1, ..., a_k} ∩ V(ρ_m)|.
Let P be Arguesian of order 2. By the claim just proved, sgn(E(P)|_{π_m}) × sgn(E(P)|_{π_{m+1}}) is (−1)^{|ρ_m|} times the parity difference of the step zero extensor X_1, ..., X_n, given by ext(E(P)|_{π_{m+1}}), from the reordered Y_1, ..., Y_n, given by ext(E(P)|_{π_m}). Passing to non-repeated variables, for j = 1, ..., n either X_j = Y_j, or if X_j ≠ Y_j there is a ∈ V(ρ_m) with π_{m+1}: a → X_j and π_m: a → Y_j. Let P = {i : X_i ≠ Y_i} and j ∈ P. The covector X_j ∈ X must appear as some Y_k amongst the set {Y_1, ..., Y_n}, and hence k ∈ P as {X_1, ..., X_n} has no repetitions. Thus π_m: c → Y_k, c ∈ V(ρ_m), {X_i}_{i∈P} = {Y_i}_{i∈P}, and the map ρ': X_i → Y_i for i ∈ P defines a cycle of length |ρ_m|. The extensor Y_1, ..., Y_n may therefore be ordered as X_1, ..., X_n in |ρ_m| − 1 transpositions, and
sgn(E(P)|_{π_m}) × sgn(E(P)|_{π_{m+1}}) = (−1)^{|ρ_m|} × (−1)^{|ρ_m|−1} = −1.
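The counting step just used — a cycle of length |ρ_m| is sorted by |ρ_m| − 1 transpositions, contributing a sign of (−1)^{|ρ_m|−1} — can be illustrated by a small sketch (arbitrary labels stand in for the covectors; this is an illustration of the permutation fact only, not of the evaluation E):

```python
def perm_sign(perm):
    """Sign of a permutation given as a dict, via cycle decomposition:
    each cycle of length k contributes (-1)**(k-1)."""
    seen, sign = set(), 1
    for start in perm:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        sign *= (-1) ** (length - 1)
    return sign

def reorder_sign(X, Y):
    """Sign needed to reorder the word Y (distinct letters) as X."""
    pos = {x: i for i, x in enumerate(X)}
    return perm_sign({i: pos[y] for i, y in enumerate(Y)})

# A single cycle of length k costs k - 1 transpositions: sign (-1)**(k-1).
X = ['X1', 'X2', 'X3', 'X4', 'X5', 'X6']
Y = ['X2', 'X3', 'X4', 'X5', 'X6', 'X1']   # one 6-cycle
assert reorder_sign(X, Y) == (-1) ** (6 - 1)
```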
Let P = ⋁_{i=1}^l Q_i, l ≥ 3. Then sgn(E(P)|_{π_{m+1}}) × sgn(E(P)|_{π_m}) is given by (−1)^{|ρ_m|} times the parity change of ⋁_{i=1}^l ext(E(Q_i)|_{π_m}) from the reordered ⋁_{i=1}^l ext(E(Q_i)|_{π_{m+1}}). For i = 1, ..., l let the jth covector of ext(E(Q_i)|_{π_m}) and of the reordered ext(E(Q_i)|_{π_{m+1}}) be denoted X_{i,j} and Y_{i,j}. Let P = {(i,j) : X_{i,j} ≠ Y_{i,j}}. If (i,j) ∈ P there is a ∈ V(ρ_m) with π_{m+1}: a → X_{i,j}, π_m: a → Y_{i,j}. Then π_{m+1}(a) = π_m(b) for b ∈ V(ρ_m), and so π_m(b) = Y_{i',j'} for (i',j') ∈ P (as π_{m+1} is a bijection), and i ≠ i' as C(Q_i) = X identically. Similarly, π_m(a) = π_{m+1}(c) for c ∈ V(ρ_m), so {X_{i,j}}_{(i,j)∈P} = {Y_{i,j}}_{(i,j)∈P}, and the map ρ': X_{i,j} → Y_{i,j}, for (i,j) ∈ P, defines a cycle of length |ρ_m|. By Lemma 3.5, the parity change of ⋁_{i=1}^l ext(E(Q_i)|_{π_m}) from the reordered ⋁_{i=1}^l ext(E(Q_i)|_{π_{m+1}}) is always −1 (see Example 3.7), and therefore
sgn(E(P)|_{π_m}) × sgn(E(P)|_{π_{m+1}}) = (−1)^{|ρ_m|} × (−1) = (−1)^{|ρ_m|−1}. □
We now complete the proof of Theorem 3.1. The transversals π of P and π^{−1} of Q are identical perfect matchings of the associated bipartite graph B. Given transversals π, σ of P, the inverses π^{−1}, σ^{−1} induce a permutation ρ^{−1} of σ defined as ρ^{−1}: π^{−1}(X_j) → σ^{−1}(X_j), for 1 ≤ j ≤ n. If ρ induces a cycle C_P, then the set of edges {(a_i, π(a_i)), (a_i, σ(a_i))}, for a_i ∈ V(C_P), forms a cycle in B. The same cycle of B may be equivalently defined as {(X_j, π^{−1}(X_j)), (X_j, σ^{−1}(X_j))}, for X_j ∈ C(C_P), and thus ρ^{−1} induces a cycle C_Q in σ of length |C_P| = |C_Q|. For notational convenience let π, σ and π^{−1}, σ^{−1} be denoted identically.
Let π, σ be transversals of P, inducing a cycle C_P, and let
π = π_0, π_1, ..., π_{m−1}, π_m, π_{m+1}, ..., π_s = σ   (3.11)
be the sequence given by Lemma 3.2. Consider 3.11 as a sequence of transversals of Q. As ρ^{−1}, induced by π_i, π_{i+1} in Q, 0 ≤ i ≤ m − 1, m + 1 ≤ i ≤ s, is a transposition,
we may apply Lemma 3.16 to obtain,
sgn(E(P)|_π) × sgn(E(P)|_{π_m}) = sgn(E(Q)|_π) × sgn(E(Q)|_{π_m}),
sgn(E(P)|_{π_{m+1}}) × sgn(E(P)|_σ) = sgn(E(Q)|_{π_{m+1}}) × sgn(E(Q)|_σ).
If ρ_m^{−1} is a transposition, then Theorem 3.1 is true. Hence assume ρ_m^{−1} is a cycle C_Q' with |C_Q'| ≥ 3. By Lemma 3.5, |C_Q'| < |C_P|, with the inequality strict. If ρ_m^{−1} satisfies the dual to Lemma 3.5, then Theorem 3.1 is true, as
sgn(E(P)|_{π_m}) × sgn(E(P)|_{π_{m+1}}) = sgn(E(Q)|_{π_m}) × sgn(E(Q)|_{π_{m+1}}) = −1.
Otherwise, apply the dual of Lemma 3.2 to the transversals π_m, π_{m+1} of Q, substituting the resulting sequence π_m = ζ_0, ..., ζ_q, ζ_{q+1}, ..., ζ_r = π_{m+1} (with r ≤ |C_Q'|) for π_m, π_{m+1} in 3.11. If C'' denotes the cycle ρ_q induced by ζ_q, ζ_{q+1}, then by Lemma 3.5, |C''| < |C_Q'| < |C_P|. We may iterate this procedure, obtaining a refinement
π = τ_0, τ_1, ..., τ_p, τ_{p+1}, ..., τ_t = σ
with t ≤ |C_P|, where τ_i, τ_{i+1}, i ≠ p, differ by a transposition, and as |C_P| is finite, τ_p, τ_{p+1} differ by a transposition or by a cycle simultaneously satisfying Lemma 3.2 in P and its dual in Q. Thus
sgn(E(P)|_{τ_p}) × sgn(E(P)|_{τ_{p+1}}) = sgn(E(Q)|_{τ_p}) × sgn(E(Q)|_{τ_{p+1}})
and Theorem 3.1 follows. □
The connection between Arguesian polynomials of order two and those of higher order is implicit in the proof of Theorem 3.1. Corollary 3.4 itself gives a large class of geometric identities.
Corollary 3.4 If Arguesian P has order 2 and Q has order l ≥ 3, where P and Q are constructed from the same bipartite graph B using Theorem 3.1, then P = Q if and only if the permutation ρ induced by any pair of transversals π and σ is even.
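The parity condition of Corollary 3.4 is a purely combinatorial computation on the two transversals, viewed as bijections from vectors to covectors. A minimal sketch (the transversals below are hypothetical and not drawn from any particular polynomial):

```python
def induced_permutation(pi, sigma):
    """Permutation rho of the covectors induced by two transversals
    (vector -> covector bijections): rho maps pi(v) to sigma(v)."""
    return {pi[v]: sigma[v] for v in pi}

def is_even(perm):
    """A permutation is even iff its sign is +1: each cycle of length k
    contributes (-1)**(k-1)."""
    seen, sign = set(), 1
    for s in perm:
        if s in seen:
            continue
        k, x = 0, s
        while x not in seen:
            seen.add(x); x = perm[x]; k += 1
        sign *= (-1) ** (k - 1)
    return sign == 1

# Hypothetical transversals on vectors a..d and covectors A..D.
pi    = {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D'}
sigma = {'a': 'B', 'b': 'A', 'c': 'D', 'd': 'C'}
rho = induced_permutation(pi, sigma)   # two 2-cycles: (A B)(C D)
assert is_even(rho)                    # even, the case in which P = Q holds
```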
The following Lemma completes the proof of Lemma 3.3.
Lemma 3.5 Let A_i, i = 1, ..., l, be ordered sets of distinct covectors C(A_i) ⊆ X with
⋁_{i=1}^l ∧A_i = ±[[X_1, ..., X_n]]^{l−1}.
Let {C_j}_{j=1,...,d} ⊆ X, 2 ≤ d ≤ n, be a set of distinct covectors, each occupying a fixed position of some A_i. Let ρ be a permutation of {C_j} which is a cycle, having the property that if C_j ∈ C(A_i), for i = 1, ..., l, then ρ(C_j) ∉ C(A_i). Further, let B_i, i = 1, ..., l, be the ordered sets of covectors formed by setting B_i = A_i except in a position occupied by an element C_j, which is replaced by ρ(C_j). Then ⋁_{i=1}^l ∧B_i = ±[[X_1, ..., X_n]]^{l−1} and
sgn(⋁_{i=1}^l ∧A_i) = −sgn(⋁_{i=1}^l ∧B_i).
PROOF. The join ⋁_{i=1}^l ∧B_i must be non-zero, as if ⋁_{i=1}^l ∧A_i = ±[[X_1, ..., X_n]]^{l−1} then the covector X_i is contained l − 1 times amongst the covectors of A_i, i = 1, ..., l. No A_i may contain two X_i, so ρ must permute X_i to the unique A_j not containing X_i. Then in each pair of the B_j, the covector X_i is contained at least once.
It suffices to prove the Lemma for the case where each A_i contains at most one element of {C_j}_{j=1,...,d}; for suppose C_{i_1}, C_{i_2} ∈ C(A_i). There are C_{j_1}, C_{j_2} with ρ: C_{i_1} → C_{j_1}, ρ: C_{i_2} → C_{j_2}, and since ρ is a bijection, C_{j_1} ≠ C_{j_2}. Further, C_{j_1}, C_{j_2} ∉ C(A_i). Let ρ' = ρ except ρ': C_{i_1} → C_{j_2}, ρ': C_{i_2} → C_{j_1}. Since ∧B_i is non-zero, ∧B_i' is non-zero, where B_i' is formed from A_i by replacing C_j by ρ'(C_j). In this case ρ' decomposes into two cycles ρ_1, ρ_2, as the set of edges E of ρ starting from the edge e = (C_{i_1}, ρ(C_{i_1})) and containing (ρ(C_{i_1}), ρ^2(C_{i_1})) must pass through (C_{i_2}, ρ(C_{i_2})) before returning to e. Since ρ': C_{i_1} → C_{j_2}, the set E \ e ∪ (C_{i_1}, C_{j_2}) forms a cycle. The ordered set B_i, the only set in which ρ' ≠ ρ, may be obtained from B_i' by applying the transposition C_{j_1} ↔ C_{j_2}. Assuming the result holds by induction on ρ_1, ρ_2, since a transposition of covectors of ∧B_i is sign reversing, the overall sign change is −1, and the Lemma holds.
As the position of C_j in A_i is identical to the position of ρ(C_j) in B_i, and A_i and B_i are otherwise identical, we may assume that the sets A_i are linearly ordered as in X. Denote by B_i* the set B_i linearly ordered as in X. For {C_j}_{j=1,...,d} we have C_j ∈ C(∧A_i) ⇔ ρ(C_j) ∈ C(∧B_i) ⇔ ρ(C_j) ∈ C(∧B_i*), and the position of ρ(C_j) ∈ B_i* differs from the position of C_j ∈ A_i by the number n_j of covectors X ∈ C(∧B_i*) ∪ C(∧A_i) satisfying C_j < X < ρ(C_j) in the order X (or > if C_j > ρ(C_j)). For {C_j}_{j=1,...,d} we have C_j ∈ C(∧A_i) ⇔ C_j ∉ C(∧B_i) ⇔ C_j ∈ C(I ∧ B_i*), and I ∧ B_i* is linearly ordered as in X. Similarly, ρ(C_j) ∈ C(∧B_i) ⇔ ρ(C_j) ∈ C(I ∧ A_i). If X ∉ C(∧B_i) ∪ C(∧A_i), then X ∈ C(I ∧ B_i*) ∩ C(I ∧ A_i). Thus the sum of the number of covectors between the position of C_j in ∧A_i and ρ(C_j) in ∧B_i*, plus the number of covectors between the position of ρ(C_j) in I ∧ A_i and C_j in I ∧ B_i*, is the total number of covectors satisfying C_j < X < ρ(C_j) (or >) in X. Denoting this number by n_j, and summing over {C_j}_{j=1,...,d}, the result depends only on d and Σ_{j=1}^d n_j = d − 2.
Thus in d − 2 transpositions we may simultaneously reorder each ∧B_i* as ∧B_i, and each I ∧ B_i* as an extensor ∧D_i satisfying: 1) I ∧ A_i and ∧D_i are identical except in a set of common positions which contain elements of {C_j}_{j=1,...,d}; 2) if a position of I ∧ A_i contains ρ(C_j), then the corresponding position of ∧D_i contains C_j.
As I ∧ A_i and ∧D_i are both the meet of covectors, it is then clear that
sgn(I ∧ A_1, ..., I ∧ A_d) × sgn(∧D_1, ..., ∧D_d) = (−1)^{d−1}   (3.12)
By an application of the dual to Lemma 3.6, and the above remarks, we have
sgn(I ∧ A_1, ..., I ∧ A_d) × sgn(⋁_{i=1}^d ∧A_i) = sgn(I ∧ B_1*, ..., I ∧ B_d*) × sgn(⋁_{i=1}^d ∧B_i*)
sgn(⋁_{i=1}^d ∧B_i*) × sgn(⋁_{i=1}^d ∧B_i) = (−1)^{d−2} sgn(∧D_1, ..., ∧D_d) × sgn(I ∧ B_1*, ..., I ∧ B_d*)
from which it follows that
sgn(⋁_{i=1}^d ∧A_i) × sgn(⋁_{i=1}^d ∧B_i) = (−1)^{2d−3}
for a sign change of −1, as d ≥ 2.
O
Lemma 3.6 Let {S_i ⊆ a}, 1 ≤ i ≤ k, be proper subsets of ordered vectors of a unimodular basis a = {a_1, ..., a_n} in GC(n), ⋁S_i the join of the vectors in S_i, and I⋁S_i the supplement of S_i. If ⋀_{i=1}^k ⋁S_i = ±[a_1, ..., a_n]^{k−1}, then
sgn(I⋁S_1, ..., I⋁S_k) = (−1)^p sgn(⋀_{i=1}^k ⋁S_i)
where p = (1/2)(n^2 − Σ_{i=1}^k |S_i|^2).
PROOF. If ⋀_{i=1}^k ⋁S_i = ±[a_1, ..., a_n]^{k−1}, then the set {S_i}_{i=1,...,k} forms a partition of a. Let the extensor I S_j be denoted a_{j_1} ∨ ... ∨ a_{j_{n−|S_j|}}, where each index j_i ∈ {1, ..., n}. We apply Hodge duality with respect to a unimodular basis. Applying the * operator to the join of these vectors,
*(a_{j_1} ∨ ... ∨ a_{j_{n−|S_j|}}) = (−1)^{j_1+...+j_{n−|S_j|} − card(I S_j)(card(I S_j)+1)/2} a_{p_1} ∨ ... ∨ a_{p_{|S_j|}}
3.2. PROJECTIVE GEOMETRY
where the a_{p_i} are the set of vectors of a linearly independent from the a_{j_i}. By Proposition 1.27 the operator * is an algebra isomorphism of (G(V), ∨) and (G(V), ∧), so
sgn(I⋁S_1, ..., I⋁S_k) = sgn(⋀_{i=1}^k a_{i_1} ... a_{i_{n−|S_i|}})
= sgn(*(⋀_{i=1}^k a_{i_1} ... a_{i_{n−|S_i|}})) = sgn(⋁_{i=1}^k *(a_{i_1} ... a_{i_{n−|S_i|}}))
= (−1)^{(1/2)(n^2 − Σ_{i=1}^k |S_i|^2)} sgn(⋀_{i=1}^k ⋁S_i)
where the last equality holds by a simple calculation using Σ_{i=1}^k card(S_i) = n. □
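The sign exponent appearing in the proof, j_1 + ... + j_{n−|S_j|} minus a triangular-number correction, is the inversion count of the shuffle permutation that places the indices of the extensor before their complement. A brute-force sketch checking this combinatorial identity for a small step (n = 6 is an arbitrary choice):

```python
import itertools

def star_sign(J, n):
    """Sign of the Hodge star on the basis extensor a_J (J a sorted tuple
    of indices from 1..n, unimodular basis): the sign of the shuffle
    permutation (J, complement of J) of (1, ..., n)."""
    comp = [i for i in range(1, n + 1) if i not in J]
    word = list(J) + comp
    inv = sum(1 for i in range(len(word)) for j in range(i + 1, len(word))
              if word[i] > word[j])                   # inversion count
    return (-1) ** inv

def formula_sign(J, k):
    """Closed form from the proof: (-1)**(j_1 + ... + j_k - k(k+1)/2)."""
    return (-1) ** (sum(J) - k * (k + 1) // 2)

n = 6                                                 # arbitrary small step
for k in range(1, n + 1):
    for J in itertools.combinations(range(1, n + 1), k):
        assert star_sign(J, n) == formula_sign(J, k)
```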
Example 3.7 The Arguesian polynomial P = ⋁_{i=1}^3 Q_i in step 6,
P = ((a ∨ ABC) ∧ (b ∨ DEF)) ∨ ((((c ∨ ADE) ∧ BC) ∨ d) ∧ F) ∨ ((((e ∨ BDE) ∧ AF) ∨ f) ∧ C)
has transversals π: a → C, b → F, c → E, d → B, e → D, f → A, and σ: a → A, b → E, c → D, d → C, e → B, f → F. We may write E(P)|_π = ⋁_{i=1}^3 E(Q_i)|_π and (after reordering) E(P)|_σ = ⋁_{i=1}^3 E(Q_i)|_σ, as
ABDE ∨ ADCF ∨ BEFC
CBDF ∨ AEBF ∨ DEAC
A cycle ρ is defined as (A, C, B, D, E, F), and sgn(E(P)|_π) differs from sgn(E(P)|_σ) by the difference in parity of ABDE ∨ ADCF ∨ BEFC from CBDF ∨ AEBF ∨ DEAC, as the six transpositions are of even parity. By Lemma 3.5, sgn(E(P)|_π) = −sgn(E(P)|_σ).
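The quoted cycle may be recomputed directly from the two transversals; the sketch below assumes the convention ρ: X → π(σ^{−1}(X)), which is the one that reproduces (A, C, B, D, E, F):

```python
# The two transversals of Example 3.7, as vector -> covector maps.
pi    = {'a': 'C', 'b': 'F', 'c': 'E', 'd': 'B', 'e': 'D', 'f': 'A'}
sigma = {'a': 'A', 'b': 'E', 'c': 'D', 'd': 'C', 'e': 'B', 'f': 'F'}

def covector_cycle(pi, sigma, start):
    """Cycle through the covectors under X -> pi(sigma^{-1}(X)), an
    assumed convention matching the cycle quoted in the example."""
    sigma_inv = {X: v for v, X in sigma.items()}
    cycle, X = [], start
    while X not in cycle:
        cycle.append(X)
        X = pi[sigma_inv[X]]
    return cycle

assert covector_cycle(pi, sigma, 'A') == ['A', 'C', 'B', 'D', 'E', 'F']
```

The cycle has length six, consistent with the six covectors being permuted as a single cycle.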
3.2 Projective Geometry
We illustrate Theorem 3.1 with several examples from higher-dimensional projective
geometry.
Theorem 3.8 In a four-dimensional projective space, the intersection of the solid abd'e' and the line a'b'c', when joined with the point c, yields a plane. The two planes da'c' and b'd'e', when intersected and joined to the point e, give a line. Denote this plane by P_1 and the line by l_1. The planes cde and a'b'c', when joined with the line ab, yield a plane. Intersect this plane with the solid a'c'd'e' to obtain a line l_2. Intersect the solid abce with the plane b'd'e' to obtain another line l_3. Then the plane P_1 and line l_1 contain a common point if the lines l_2, l_3 and the point d lie on a common hyperplane.
PROOF. In step 5, let the polynomial P be
P = (((ab ∨ ABC) ∧ DE) ∨ c) ∧ (((d ∨ BDE) ∧ AC) ∨ e).
The basic extensors f_j obtained from B_P are abce ∧ A, abcde ∧ B, abce ∧ C, cde ∧ D, and cde ∧ E.
Applying rule 1, form cde ∧ DE from cde ∧ D and cde ∧ E. The resulting extensor may be combined with abcde ∧ B to form ((cde ∧ DE) ∨ ab) ∧ B. Similarly, combine abce ∧ A, abce ∧ C to form abce ∧ AC. Finally, by two applications of rule 2, join these expressions with the vector d to form:
Q = (((cde ∧ DE) ∨ ab) ∧ B) ∨ (abce ∧ AC) ∨ d.
Then by Theorem 3.1 we have
(((ab ∨ ABC) ∧ DE) ∨ c) ∧ (((d ∨ BDE) ∧ AC) ∨ e) = (((cde ∧ DE) ∨ ab) ∧ B) ∨ (abce ∧ AC) ∨ d. □
Theorem 3.9 In 7-dimensional projective space, the plane formed by intersecting the hyperplane abd'e'f'g'h' with the solid a'b'c'f', then joining this plane to the point c and intersecting the resulting solid with the hyperplane a'b'c'd'e'g'h', lies on a common 6-dimensional hyperplane with the line formed by intersecting the 4-dimensional subspace da'c'd'h' with the 5-dimensional subspace eb'e'f'g'h' and the hyperplane a'b'c'd'e'f'g', and with the plane formed by intersecting the two 5-dimensional subspaces fc'e'f'g'h', a'b'd'f'g'h', joining the resulting solid to the point g, intersecting this 4-dimensional subspace with another 4-dimensional subspace a'b'c'd'e', and joining the resulting line to the point h, if and only if the hyperplane formed by intersecting the 4-dimensional subspace cefgh with the hyperplane a'b'c'e'f'g'h' and joining this solid with the plane abd contains: the line formed by intersecting the hyperplane a'b'c'd'e'f'g' with the line ch, joining this point to the line dg, intersecting the resulting plane with the hyperplane a'b'c'd'f'g'h', then joining this line with the plane abf to form a 4-dimensional subspace, then intersecting with the hyperplane a'c'd'e'f'g'h' and joining with the point e; the 5-dimensional subspace formed by intersecting the line dh with the hyperplane a'b'c'd'e'g'h', forming the line which is the join of this point with the point c, intersecting the resulting line with the hyperplane a'b'c'd'e'f'h' to form a point which is joined with the 4-dimensional subspace abefg; and the 5-dimensional subspace formed by intersecting the 5-dimensional subspace abcegh with the hyperplane a'b'd'e'f'g'h' to form a 5-dimensional subspace which is joined to the point f, then intersecting with the hyperplane b'c'd'e'f'g'h' and joining this 4-dimensional subspace with the point d.
PROOF. Let P = ⋁ Q_i be the type I Arguesian polynomial in step 8 with
Q_1 = ((((ab ∨ ABC) ∧ DEHG) ∨ c) ∧ F)
Q_2 = (d ∨ BEGF) ∧ (e ∨ ACD) ∧ H
Q_3 = (((((f ∨ ABD) ∧ CE) ∨ g) ∧ FGH) ∨ h)
Forming the eight basic extensors abcefgh ∧ A, abcdfgh ∧ B, abcegh ∧ C, cefgh ∧ D, cdgh ∧ E, dh ∧ F, cdh ∧ G, ch ∧ H, they may be combined using rules 1-3 in several ways, one of which yields the polynomial
Q = ((((((ch ∧ H) ∨ dg) ∧ E) ∨ abf) ∧ B) ∨ e) ∧ ((((dh ∧ F) ∨ c) ∧ G) ∨ abefg) ∧ ((((abcegh ∧ C) ∨ f) ∧ A) ∨ d) ∧ ((cefgh ∧ D) ∨ abd)
which by Theorem 3.1 are E-equivalent. Upon specialization to a second basis of vectors,
[a, b, c, d, e, f, g, h]^3 [a', b', c', d', e', f', g', h']^7 (((abd'e'f'g'h' ∧ a'b'c'f') ∨ c) ∧ a'b'c'd'e'g'h') ∨ ((da'c'd'h' ∧ eb'e'f'g'h') ∧ a'b'c'd'e'f'g') ∨ ((((fc'e'f'g'h' ∧ a'b'd'f'g'h') ∨ g) ∧ a'b'c'd'e') ∨ h)
= ((((((ch ∧ a'b'c'd'e'f'g') ∨ dg) ∧ a'b'c'd'f'g'h') ∨ abf) ∧ a'c'd'e'f'g'h') ∨ e) ∧ ((((dh ∧ a'b'c'd'e'g'h') ∨ c) ∧ a'b'c'd'e'f'h') ∨ abefg) ∧ ((((abcegh ∧ a'b'd'e'f'g'h') ∨ f) ∧ b'c'd'e'f'g'h') ∨ d) ∧ ((cefgh ∧ a'b'c'e'f'g'h') ∨ abd)
which directly interprets the theorem.
□
Many natural and less absurd examples are also possible, and the reader may verify that every Arguesian identity thus far given may be constructed via Theorem 3.1. Some interesting examples include the following.
Theorem 3.10 In three-dimensional projective space, the lines aa', cb' intersect the planes bcd and acd in two distinct points. Form the lines l_1, l_2 joining these distinct points to the points b and d respectively. Construct two other lines l_1', l_2' as follows: Intersect the planes bcd, b'c'd' and abd, a'c'd'. Join these lines to the points a and c respectively. The two planes thus formed are then intersected with the planes a'b'd' and a'b'c' to give l_1', l_2'. Then the lines l_1, l_2 are coplanar if and only if the lines l_1', l_2' are coplanar.
PROOF. In a GC algebra of step 4, let
P = (((a ∨ BCD) ∧ A) ∨ b) ∧ (((c ∨ ACD) ∧ B) ∨ d).
Form the basic extensors bcd ∧ A, abd ∧ B, abcd ∧ C, abcd ∧ D. Applying rule 2, combine the basic extensors bcd ∧ A, abcd ∧ C to form ((bcd ∧ A) ∨ a) ∧ C. Similarly combine abd ∧ B, abcd ∧ D to form ((abd ∧ B) ∨ c) ∧ D. Applying rule 3 we may write, by Theorem 3.1,
[a, b, c, d] (((a ∨ BCD) ∧ A) ∨ b) ∧ (((c ∨ ACD) ∧ B) ∨ d) = [[A, B, C, D]] (((bcd ∧ A) ∨ a) ∧ C) ∨ (((abd ∧ B) ∨ c) ∧ D).
Specializing the covectors, this identity is equivalent to
((aa' ∧ b'c'd') ∨ b) ∧ ((cb' ∧ a'c'd') ∨ d) = (((bcd ∧ b'c'd') ∨ a) ∧ a'b'd') ∨ (((abd ∧ a'c'd') ∨ c) ∧ a'b'c'). □
Theorem 3.11 (Fontené) Let a, b, c, d and a', b', c', d' be the vertices of two tetrahedra in projective three space. Intersect the lines aa', bb', cc' and dd' with the faces bcd, acd, abd and abc of the tetrahedron a, b, c, d. Then these four points are coplanar if and only if the four planes, formed by joining the lines bcd ∩ b'c'd', acd ∩ a'c'd', abd ∩ a'b'd' and abc ∩ a'b'c', which are the intersections of opposite face planes of the tetrahedra, to the points a', b', c', d', all pass through a common point.
PROOF. In a Grassmann-Cayley algebra of step 4, let a, b, c, d be points and A, B, C, D be planes. Then the identity
[a, b, c, d] ((a ∨ BCD) ∧ A) ∨ ((b ∨ ACD) ∧ B) ∨ ((c ∨ ABD) ∧ C) ∨ ((d ∨ ABC) ∧ D)
= [[A, B, C, D]] ((bcd ∧ A) ∨ a) ∧ ((acd ∧ B) ∨ b) ∧ ((abd ∧ C) ∨ c) ∧ ((abc ∧ D) ∨ d)
holds. Upon specialization A = b'c'd', B = a'c'd', C = a'b'd', D = a'b'c', one obtains
[a, b, c, d] [a', b', c', d']^3 (aa' ∧ b'c'd') ∨ (bb' ∧ a'c'd') ∨ (cc' ∧ a'b'd') ∨ (dd' ∧ a'b'c') =
((bcd ∧ b'c'd') ∨ a) ∧ ((acd ∧ a'c'd') ∨ b) ∧ ((abd ∧ a'b'd') ∨ c) ∧ ((abc ∧ a'b'c') ∨ d). □
Corollary 3.12 In projective three space, let {ab, cd}, {bc, ad}, {cd, ab}, and {ad, bc} be four pairs of skew lines, and let {a'd', b'c'}, {a'b', c'd'}, {b'c', a'd'}, {c'd', a'b'} be another set of skew lines. Then the four points, formed by intersecting each of the second four lines from the first set, cd, ad, ab, bc, with the planes formed by the join of the first four lines of that set ab, bc, cd, ad to the vertices a', b', c', d', are coplanar, if and only if the four planes, formed by intersecting the planes bcd, acd, abd, abc with the second four lines of the second set b'c', c'd', a'd', a'b' and joining those points to the first four lines of the second set a'd', a'b', b'c', c'd', pass through a common point.
PROOF. In a Grassmann-Cayley algebra of step 4, the following identity holds:
[a, b, c, d]^3 ((a ∨ CD) ∧ AB) ∨ ((b ∨ AD) ∧ BC) ∨ ((c ∨ AB) ∧ CD) ∨ ((d ∨ BC) ∧ AD) =
[[A, B, C, D]]^3 ((bc ∧ A) ∨ ad) ∧ ((cd ∧ B) ∨ ab) ∧ ((ad ∧ C) ∨ cd) ∧ ((ab ∧ D) ∨ cd)
Upon specializing the covectors one obtains the desired theorem. □
Theorem 3.13 (N-dimensional Bricard) Let a_1, ..., a_n be vectors and X_1, ..., X_n be covectors in a Grassmann-Cayley algebra of step n. Then the following identity is valid:
[a_1, a_2, ..., a_n] ⋁_{i=1}^n ((a_i ∨ X_1 X_2 ... X̂_i ... X_n) ∧ X_i) = [[X_1, X_2, ..., X_n]] ⋀_{i=1}^n ((a_1 a_2 ... â_i ... a_n ∧ X_i) ∨ a_i)
(a hat denoting omission), or, upon substituting X_i = a_1' a_2' ... â_i' ... a_n' for 1 ≤ i ≤ n,
[a_1, a_2, ..., a_n] [a_1', a_2', ..., a_n']^{n^2−3n+1} ⋁_{i=1}^n (a_i a_i' ∧ a_1' a_2' ... â_i' ... a_n') = ⋀_{i=1}^n ((a_1 a_2 ... â_i ... a_n ∧ a_1' a_2' ... â_i' ... a_n') ∨ a_i).
Theorem 3.14 (N-dimensional Generalized Bricard) Let a_1, ..., a_n be vectors and X_1, ..., X_n be covectors in a Grassmann-Cayley algebra of step n. Let π_a = (π_1, ..., π_k) be a partition of the vectors into k sets of size |π_i| ≥ 1. Let ⋁π_i = ⋁{a : a ∈ π_i}. For each i let C_i be a set of covectors with |C_i| ≥ |π_i|. For each covector X ∈ X form the basic extensor f_X as in Theorem 3.1, using only rule 1, to produce a set of l type II extensors. The type II extensors {f_i : i = 1, ..., l} define a partition π_X = (E_1, ..., E_l) of the covectors X, and associated to each f_i is a set of vectors V_i. Then the following is an identity:
[a_1, ..., a_n]^{k−1} ⋁_{i=1}^k ((⋁π_i ∨ ⋀C_i^c) ∧ ⋀C_i) = [[X_1, ..., X_n]]^{l−1} ⋀_{j=1}^l ((⋁V_j ∧ ⋀E_j) ∨ ⋁V_j^c)
(the superscript c denoting the complementary set). □
Corollaries of the following type can be obtained from Theorem 3.14.
Corollary 3.15 In four-dimensional projective space, let a, b, c, d, e and a', b', c', d', e' be two sets of points. Then the points determined by the intersection of the five pairs of planes aba' ∩ cde, bcb' ∩ ade, cdc' ∩ abe, ded' ∩ abc and aee' ∩ bcd all lie in a common three-dimensional hyperplane if and only if the five three-dimensional solids, determined by joining the lines a'e', a'b', b'c', c'd', d'e' respectively to the lines bcde ∩ b'c'd', acde ∩ c'd'e', abde ∩ a'd'e', abce ∩ a'b'e' and abcd ∩ a'b'c', all contain a common point.
PROOF. In a Grassmann-Cayley algebra of step 5 the following identity holds:
[a, b, c, d, e]^4 ((a ∨ CDE) ∧ AB) ∨ ((b ∨ ADE) ∧ BC) ∨ ((c ∨ ABE) ∧ CD) ∨ ((d ∨ ABC) ∧ DE) ∨ ((e ∨ BCD) ∧ AE) =
[[A, B, C, D, E]]^4 (ae ∨ (A ∧ bcd)) ∧ (ab ∨ (B ∧ cde)) ∧ (bc ∨ (C ∧ ade)) ∧ (cd ∨ (D ∧ abe)) ∧ (de ∨ (E ∧ abc)).
Specializing the covectors yields the theorem. □
3.3 The Transposition Lemma
Two transversals π, σ of a type I Arguesian polynomial P define a permutation ρ of the set of covectors X, as ρ: π(i) → σ(i), for i = 1, ..., n, and ρ naturally decomposes into a union of cycles. Lemma 3.16 shows that if π and σ have corresponding ρ which amounts to a transposition of covectors, then sgn(E(P)|_π) and sgn(E(P)|_σ) alternate. In general, given transversals π, σ of P, there does not necessarily exist a sequence of transversals π = π_0, ..., π_s = σ with each induced ρ_i, 0 ≤ i ≤ s − 1, a transposition. Further, Lemma 3.16 is somewhat surprising in that no obvious extensions based on standard permutation statistics, such as cycle length or the number of inversions of ρ, are valid. In his study of Cayley factorization algorithms, White [Whi91] has studied the change of sign of Grassmann-Cayley algebra expressions upon permuting variables.
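Lemma 3.16 below is a Grassmann-Cayley analogue of the elementary fact that composing a permutation with a transposition reverses its sign. The following sketch verifies only that elementary fact (exhaustively for n = 5), not the lemma itself:

```python
import itertools

def sign(perm):
    """Sign of a permutation of 0..n-1 given as a list, via cycles."""
    seen, s = set(), 1
    for i in range(len(perm)):
        if i in seen:
            continue
        k, x = 0, i
        while x not in seen:
            seen.add(x); x = perm[x]; k += 1
        s *= (-1) ** (k - 1)
    return s

n = 5
for p in itertools.permutations(range(n)):
    p = list(p)
    for i in range(n):
        for j in range(i + 1, n):
            q = p[:]                  # compose p with the transposition (i j)
            q[i], q[j] = q[j], q[i]
            assert sign(p) * sign(q) == -1
```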
Lemma 3.16 Let P be an Arguesian polynomial in a GC(n) with transversals π and σ such that π = σ except π: a_i → X_l, π: a_j → X_m, while σ: a_i → X_m, σ: a_j → X_l, for distinct a_i, a_j ∈ a, X_l, X_m ∈ X. Then
sgn(E(P)|_π) = −sgn(E(P)|_σ)
PROOF. Let P be type I, and by Lemma 2.20 let X_{l_1}*, X_{l_2}*, X_{m_1}*, X_{m_2}* ∈ X* be the unique covectors satisfying π*: a_i → X_{l_1}*, a_j → X_{m_1}*, σ*: a_i → X_{m_2}*, a_j → X_{l_2}*, where possibly l_1 = l_2 or m_1 = m_2. As [a_i, X_{l_1}] ∈ E(P*)|_{π*}, there is type I/II Q = R ∨/∧ S, with a_i ∈ V(R), X_{l_1}* ∈ C(S*) and linear combinations R*(a_i), S*(X_{l_1}*).
Suppose for type I S ⊆ P that X_{l_1}*, X_{l_2}* ∈ C(S*) (equivalently X_{m_1}*, X_{m_2}* ∈ C(S*)) with both a_i, a_j ∉ V(S). Then π*: a_i → X_{l_1}* and σ*: a_j → X_{l_2}* necessarily imply
X_{l_1} ∈ ext(E(S*)|_{π*}),  X_{l_2} ∈ ext(E(S*)|_{σ*})   (3.13)
and by Lemma 2.19, for every transversal γ, [a, X_l] ∈ E(S)|_γ for some a ∈ V(S), which is a contradiction. It follows at once that if Q = R ∨/∧ S, with a_i, a_j ∈ V(R), and X_{l_1}*, X_{l_2}*, X_{m_1}*, X_{m_2}* ∈ C(S*), then l_1 = l_2, m_1 = m_2. Denoting these unique covectors as X_l*, X_m*, the expansion E(Q*) is given as case 1 in the list below.
a) Now suppose Q = R ∧ S is type II, with R type II, S type I, a_i ∈ V(R), and X_{l_1}* ∈ C(S*). By Lemma 2.13 part 3, the covector X_{l_1} ∈ X satisfies [a, X_{l_1}] ∈ E(Q)|_γ for a ∈ V(Q), for every transversal γ of P. Thus if a_j ∉ V(S) then a_j ∈ V(R). Again by Lemma 2.19, l_1 = l_2 and we may write R*(a_i, a_j) and S*(X_l*) for the unique X_l* of label X_l. If R ∧ S is minimal with respect to a), then we may further assume that X_{m_1}*, X_{m_2}* ∉ C(R*), for suppose X_{m_1}* ∈ C(R*). As π*: a_j → X_{m_1}*, ∃ R' ∨/∧ S' ⊆ R with type II R', type I S', a_j ∈ V(R'), X_{m_1}* ∈ C(S'*). Clearly we may not have type I R' ∨ S' ⊆ R, as this violates R*(a_j). Therefore R' ∧ S' ⊆ R, and a_i ∉ V(S') or R*(a_i) is violated. Then a_i ∈ V(R'), and R' ∧ S' ⊆ R is a smaller subexpression satisfying a).
b) On the other hand, suppose Q = R ∧ S with a_i ∈ V(R), X_{l_1}* ∈ C(S*), but a_j ∈ V(S). Then there exists type I Q' = R' ∨ S' ⊆ S with a_j ∈ V(R') and X_{l_2}* ∈ C(S'*). Then X_{m_1}* ∈ C(S'*), X_{l_2}* ∈ C(S'*), and S'*(X_{l_2}*, X_{m_1}*, X_{m_2}*) is required.
If minimal a) occurs but case 1 does not, then R ∧ S ⊆ P with R*(a_i, a_j), S*(X_l*), and R ⊆ R' with R' ∨/∧ S' ⊆ P minimal such that R'*(a_i, a_j), and S'*(X_{m_1}*) or S'*(X_{m_2}*). If either ∨/∧, then X_{m_1} ∈ ext(E(S'*)|_{π*}) and X_{m_2} ∈ ext(E(S'*)|_{σ*}), which by equation 3.13 is a contradiction. Thus again m_1 = m_2, and denoting the unique covectors as X_l*, X_m*, we obtain case 2 below.
If b) occurs but a) does not, then consequently cases 1 and 2 below do not occur. We may assume without loss of generality that Q = R ∨ S with a_i ∈ V(R), X_{l_1}*, X_{m_1}*, X_{m_2}* ∈ C(S*), and a_j ∉ V(Q). Then as σ*: a_j → X_{l_2}*, ∃ Q' = R' ∨/∧ S' ⊆ P with R'*(a_j), S'*(X_{l_2}*). As Q, Q' are subexpressions of a parenthesized binary expression in ∨/∧, either Q ⊆ Q', Q' ⊆ Q, or Q*, Q'* contain no variables in common. If Q*, Q'* have no variables in common and type I Q' = R' ∧ S' ⊆ P, then by Lemma 2.13 part 3, π*: a_i → X_{l_1}* is violated. Hence R' ∨ S' ⊆ P with R'*(a_j) and then S'*(X_{l_2}*, X_{m_2}*), which is case 4 below. Otherwise, Q*, Q'* share variables of a, X*, but a_j ∉ V(Q). As a_j ∈ V(R'), R' ⊄ Q. We may not have Q ⊆ R' either, as R' is type II and σ*: a_j → X_{l_2}* is violated. Thus R'* and Q* share no variables. Then if S'* and Q* have variables in common, and in particular if S' ⊆ Q with the inclusion proper, then R' ∨/∧ S' is not a subexpression. The only remaining possibility is that Q' = R' ∨/∧ S' ⊆ P with type I Q = R ∨ S ⊆ S' and a_j ∈ V(R'). Then S'*(X_{l_2}*, X_{m_2}*) is required, and we proceed to consider this case.
Let Q = R ∨ S with R*(a_i), S*(X_{l_1}*, X_{m_1}*), a_j ∉ V(Q), and further suppose Q ⊆ S' with S' ∨/∧ R' ⊆ P, and S'*(X_{l_2}*, X_{m_2}*), R'*(a_j). Again we claim l_1 = l_2, m_1 = m_2. To show this we proceed as follows: [a_i, X_{l_1}] ∈ E(Q*)|_{π*}, and if X_{l_1} ∈ ext(E(Q)|_γ), then by Lemma 2.13 part 2, for every transversal γ, [a, X_{l_1}] ∈ E(Q)|_γ for a ∈ V(Q). As a_j ∉ V(Q) this contradicts σ*: a_j → X_{l_2}*. Therefore Q satisfies the hypothesis of Lemma 2.13 part 1 for X_{l_1}. In fact, by Lemma 2.19, for any transversal γ we must have, for any a ∈ V(Q),
[a, X_{l_1}] ∉ E(Q)|_γ,  X_{l_1} ∈ ext(E(Q*)|_{π*})   (3.14)
[a, X_{m_2}] ∉ E(Q)|_γ,  X_{m_2} ∈ ext(E(Q*)|_{σ*})   (3.15)
We recursively evaluate the subexpressions T with Q ⊆ T ⊆ S', showing in fact that equations 3.14 and 3.15 remain valid when Q* is replaced by T*. If this is true, then as P is non-zero it follows that l_1 = l_2, m_1 = m_2 as claimed. If any T is type II, Lemma 2.13 part 3 contradicts π*: a_j → X_{m_1}*. Also, every type I T must satisfy Lemma 2.13 part 1. Initially, set Q' = Q, and assume by induction that equations 3.14 and 3.15 are valid for Q'*.
Let Q' ∧ Q'' ⊆ S', with Q', Q'' both type I, where by induction Q'* satisfies Lemma 2.13 part 1 with respect to X_{l_1}, X_{m_2}, and equations 3.14, 3.15 hold. Then [a, X_{l_1}] ∉ E(Q' ∧ Q'')|_γ implies X_{l_1} ∈ ext(E(Q'*)|_{γ*}), and as γ is a transversal, X_{l_1} ∈ ext(E((Q' ∧ Q'')*)|_{γ*}) (and similarly for X_{m_2}). Set Q' ← Q' ∧ Q'' and proceed to the next level of the recursion.
Let Q' ∨ Q'' ⊆ S' for Q', Q'' type I. If [a, X_{l_1}] ∉ E(Q' ∨ Q'')|_γ, then by induction X_{l_1} ∈ ext(E(Q'*)|_{γ*}), and as Q' ∨ Q'' satisfies Lemma 2.13 part 1, π*: a_i → X_{l_1}* implies X_{l_1} ∈ ext(E(Q'')|_γ). Replacing Q' ∨ Q'' by Q'' ∨ Q', the resulting Arguesian polynomial is E-equivalent to P, and equation 3.14 is valid for Q'' ∨ Q'. As the same argument holds for X_{m_2}* ∈ C(Q'*), equation 3.15 is valid for Q'' ∨ Q' as well. Set Q' ← Q'' ∨ Q' and continue.
Let Q' ∨ Q'' ⊆ S' with Q' type I, Q'' type II. If [a, X_{l_1}] ∉ E(Q' ∨ Q'')|_γ, then X_{l_1} ∈ ext(E(Q''*)|_{γ*}) by induction, and then X_{l_1} ∈ ext(E((Q' ∨ Q'')*)|_{γ*}). Set Q' ← Q' ∨ Q'' and continue.
By induction we obtain the contradiction that X_{l_1} ∈ ext(E(S'*)|_{σ*}) and X_{l_2} ∈ ext(E(S'*)|_{σ*}), and thus l_1 = l_2 (m_1 = m_2) as required. This case gives the third of the list.
We remark that the above argument depends strongly on the multilinearity of the vectors. It follows that if π and σ differ by a transposition of covectors, then one of the following expansions in E(P*) occurs, where in each case we may assume X_l*, X_m* are the unique covectors satisfying the hypothesis of the Lemma:
1. R*(a_i, a_j) ∨/∧ S*(X_l*, X_m*),
2. (R*(a_i, a_j) ∧ S*(X_l*)) ⊆ R'*(a_i, a_j), and R'*(a_i, a_j) ∨/∧ S'*(X_m*),
3. (R*(a_i) ∨ S*(X_l*, X_m*)) ⊆ S'*(X_l*, X_m*), and S'*(X_l*, X_m*) ∨/∧ R'*(a_j),
4. ∃ two type I R_1*(a_i) ∨ S_1*(X_l*, X_m*), R_2*(a_j) ∨ S_2*(X_l*, X_m*).
It remains to show that sgn(E(P)|_π) and sgn(E(P)|_σ) alternate. We pass to non-repeated notation for the remainder of the proof. The sign of a transversal π of a type I Arguesian polynomial is calculated recursively by Proposition 2.1.
Suppose that a type I Q, as in case 1, occurs in P. Since a_i, a_j are assigned to either of X_l, X_m in both π and σ, and π and σ differ only in the assignment of these vectors, we must have ext(E(R)|_π) = ext(E(R)|_σ) and ext(E(S)|_π) = ext(E(S)|_σ), with corresponding signs equal. Given ext(E(R)|_π) = ext(E(R)|_σ) = a_1 ... a_i ... a_j ... a_k, and ext(E(S)|_π) = ext(E(S)|_σ) = X_1 ... X_l ... X_m ... X_k, then
sgn(E(Q)|_π) × sgn(E(Q)|_σ) is precisely the difference in sign of

sgn(X_1 ... X_l ... X_m ... X_k \ {X_{π(1)}, ..., X_l, ..., X_m, ..., X_{π(k)}}, X_{π(1)}, ..., X_l, ..., X_m, ..., X_{π(k)})

and

sgn(X_1 ... X_l ... X_m ... X_k \ {X_{σ(1)}, ..., X_l, ..., X_m, ..., X_{σ(k)}}, X_{σ(1)}, ..., X_m, ..., X_l, ..., X_{σ(k)})

and sgn(E(Q)|_π) and sgn(E(Q)|_σ) differ by the sign change of a transposition. Interchange a_i and a_j in a_1 ... a_i ... a_j ... a_k, and evaluate E(Q)|_σ. Interchanging a_i and a_j does not affect ext(E(Q)|_π) and ext(E(Q)|_σ), and sgn(E(Q)|_π) before the interchange is equal to sgn(E(Q)|_σ) after the interchange. Now sgn(E(T)|_π) for any Q ⊆ T depends only on sgn(E(Q)|_π) and ext(E(Q)|_π). Hence sgn(E(T)|_π) × sgn(E(T)|_σ) before and after the interchange are identical, so sgn(E(P)|_π) and sgn(E(P)|_σ) alternate.
If a type I Q as in case 2 occurs in P then ext(E(R)|_π) = ext(E(R)|_σ), ext(E(S)|_π) = ext(E(S)|_σ), and ext(E(S')|_π) = ext(E(S')|_σ), and the corresponding sign terms are equal. Thus E(R)|_π ∧ E(S)|_π = E((a_1 ... a_i ... a_j ... a_k) ∧ X_1 ... X_l ... X_k), up to scalars. As above, interchanging the positions of the vectors a_i, a_j and applying σ, sgn(E(R)|_π ∧ E(S)|_π) before the interchange and sgn(E(R)|_σ ∧ E(S)|_σ) after the interchange are identical. Further, sgn(E(R')|_π) equals sgn(E(R')|_σ) after the interchange, and the position of the vector a_j in ext(E(R')|_π) is identical to the position of a_i in ext(E(R')|_σ). Then E(S')|_π = E(S')|_σ, and as π : a_j → X_m, σ : a_i → X_l, again sgn(E(P)|_π) and sgn(E(P)|_σ) alternate. Case 3 is identical.
We may therefore assume that P contains two type I subexpressions Q_1, Q_2 as in case 4. Since π(a_l) = σ(a_l) for all l ≠ i, j, we may write

E(Q_1)|_π = E(R_1 ∨ S_1)|_π = E(k_1 a_1 ... a_i ... a_k ∨ k_2 X_1 ... X_l ... X_m ... X_p)|_π
= k_1 k_2 X_1 ... X_m ... X_p \ {X_{π(1)}, ..., X_l, ..., X_{π(k)}}   (3.16)
× sgn(X_1 ... X_m ... X_p \ {X_{π(1)}, ..., X_l, ..., X_{π(k)}}, X_{π(1)}, ..., X_l, ..., X_{π(k)})   (3.17)
where k_1, k_2 are constants containing any brackets [a, X]. We may similarly write

E(Q_1)|_σ = k_1 k_2 X_1 ... X_l ... X_p \ {X_{σ(1)}, ..., X_m, ..., X_{σ(k)}}   (3.18)
× sgn(X_1 ... X_l ... X_p \ {X_{σ(1)}, ..., X_m, ..., X_{σ(k)}}, X_{σ(1)}, ..., X_m, ..., X_{σ(k)})   (3.19)
where equations 3.16 and 3.18 are identical except for X_l and X_m occurring in possibly distinct positions. The meet X_1 ... X_m ... X_p \ {X_{π(1)}, ..., X_l, ..., X_{π(k)}} occurs in ext(E(Q_1)|_π), and in the corresponding sign term. Hence sgn(E(Q_1)|_π) is unaffected by simultaneously transposing X_m within equation 3.16, and within the sign term of 3.17, to the position occupied by X_l in the corresponding term of 3.18. The positions of X_l and X_m are identical in X_{π(1)} ... X_l ... X_{π(k)} and X_{σ(1)} ... X_m ... X_{σ(k)}, since the position of a_i is fixed. Now equations 3.16 and 3.18 differ only in that the X_l of equation 3.16 is exchanged for X_m in equation 3.18, while 3.17 and 3.19 are identical except for a transposition of X_l and X_m. The same holds for E(Q_2)|_π and E(Q_2)|_σ. Then 3.17 and 3.19 differ by a sign change, as do the corresponding sign terms of E(Q_2)|_π and E(Q_2)|_σ. Both may be ignored, since they will not contribute to the overall change in sign of sgn(E(P)|_π) × sgn(E(P)|_σ).
As P is type I, there exists a type I R containing both Q_1 and Q_2. Evaluate E(R)|_π and E(R)|_σ recursively. Any T' ⊆ Q_1 has E(T')|_π = E(T')|_σ. Suppose first that R = Q_1 ∧ Q_2 identically. Then

E(R)|_π = k_1 X_{i_1} ... X_{i_p} ∧ X_m ∧ X_{i_{p+1}} ... X_{i_q} ∧ X_{j_1} ... X_{j_r} ∧ X_l ∧ X_{j_{r+1}} ... X_{j_s}   (3.20)
E(R)|_σ = k_1 X_{i_1} ... X_{i_p} ∧ X_l ∧ X_{i_{p+1}} ... X_{i_q} ∧ X_{j_1} ... X_{j_r} ∧ X_m ∧ X_{j_{r+1}} ... X_{j_s}   (3.21)

and after sign adjustment, 3.20 and 3.21 differ by a transposition alone. Hence, interchanging X_l, X_m, every subexpression R ⊆ T ⊆ P evaluates identically, and sgn(E(P)|_π) = -sgn(E(P)|_σ).
If R = Q_1 ∨ Q_2 identically then

E(R)|_π = k_1 (X_{i_1} ... X_{i_p} ∧ X_m ∧ X_{i_{p+1}} ... X_{i_q}) ∨ (X_{j_1} ... X_{j_r} ∧ X_l ∧ X_{j_{r+1}} ... X_{j_s})   (3.22)
E(R)|_σ = k_1 (X_{i_1} ... X_{i_p} ∧ X_l ∧ X_{i_{p+1}} ... X_{i_q}) ∨ (X_{j_1} ... X_{j_r} ∧ X_m ∧ X_{j_{r+1}} ... X_{j_s})   (3.23)

and we may write equations 3.22 and 3.23 as

E(R)|_π = k_1 (X_{i_1} ... X_{i_p} ∧ X_m ∧ X_{i_{p+1}} ... X_{i_q}) ∨ (-1)^{s-r} (X_{j_1} ... X_{j_r} ∧ X_{j_{r+1}} ... X_{j_s} ∧ X_l)
E(R)|_σ = k_1 (X_{i_1} ... X_{i_p} ∧ X_l ∧ X_{i_{p+1}} ... X_{i_q}) ∨ (-1)^{s-r} (X_{j_1} ... X_{j_r} ∧ X_{j_{r+1}} ... X_{j_s} ∧ X_m)
Then, computing the join, and setting B = X \ {X_{i_1}, ..., X_m, ..., X_{i_q}} and B' = X \ {X_{i_1}, ..., X_l, ..., X_{i_q}},

E(R)|_π = k_1 (-1)^{q-p} [X_{i_1}, ..., X_m, ..., X_{i_q}, B \ {X_l}, X_l] X_{j_1} ... X_{j_s} X_l \ B   (3.24)
× sgn(X_{j_1} ... X_{j_s} X_l \ B, B \ {X_l}, X_l)
E(R)|_σ = k_1 (-1)^{q-p} [X_{i_1}, ..., X_l, ..., X_{i_q}, B' \ {X_m}, X_m] X_{j_1} ... X_{j_s} X_m \ B'   (3.25)
× sgn(X_{j_1} ... X_{j_s} X_m \ B', B' \ {X_m}, X_m)

Since X_l and X_m occur in the final position of the split extensor, and X_{j_1} ... X_{j_s} X_l \ B, B \ {X_l} = X_{j_1} ... X_{j_s} X_m \ B', B' \ {X_m}, the sign terms are equal, while the two brackets in equations 3.24 and 3.25 differ by a transposition. Hence again E(T)|_π = E(T)|_σ for any R ⊆ T, and then sgn(E(P)|_π) = -sgn(E(P)|_σ).
If the subexpression R containing Q_1 and Q_2 is neither Q_1 ∧ Q_2 nor Q_1 ∨ Q_2, let Q_1 be innermost and let P contain (Q_1 ∧ S) ∨ T, with S or T possibly empty, and with E(S)|_π = E(S)|_σ and E(T)|_π = E(T)|_σ. We may not have X_l or X_m in ext(E(S)|_π) (= E(S)|_σ), since π and σ are both non-zero. For the same reason, ext(E(T)|_π) must contain both X_l and X_m in identical positions. The evaluations (E(Q_1)|_π ∧ E(S)|_π) ∨ E(T)|_π and (E(Q_1)|_σ ∧ E(S)|_σ) ∨ E(T)|_σ have the form

k_1 (X_{i_1} ... X_{i_p} ∧ X_m ∧ X_{i_{p+1}} ... X_{i_q} ∧ E(S)|_π) ∨ X_{m_1} ... X_l ... X_m ... X_{m_r}   (3.26)
k_1 (X_{i_1} ... X_{i_p} ∧ X_l ∧ X_{i_{p+1}} ... X_{i_q} ∧ E(S)|_σ) ∨ X_{m_1} ... X_l ... X_m ... X_{m_r}   (3.27)
To calculate the joins of 3.26 and 3.27, we proceed as above, equivalently transposing both X_l and X_m without affecting sgn(E(P)|_π) × sgn(E(P)|_σ), to obtain

(X_{i_1} ... X_{i_p} ∧ X_m ∧ X_{i_{p+1}} ... X_{i_q} ∧ E(S)|_π) ∨ X_{m_1} ... X_{m_r} ∧ X_l ∧ X_m   (3.28)
(X_{i_1} ... X_{i_p} ∧ X_l ∧ X_{i_{p+1}} ... X_{i_q} ∧ E(S)|_σ) ∨ X_{m_1} ... X_{m_r} ∧ X_l ∧ X_m   (3.29)
Setting B, B' as above, and splitting the extensor on the right, 3.28 and 3.29 become

[X_{i_1} ... X_{i_p} ∧ X_m ∧ X_{i_{p+1}} ... X_{i_q} ∧ E(S)|_π, B \ {X_l}, X_l] X_{m_1} ... X_{m_r} ∧ X_l ∧ X_m \ B   (3.30)
× sgn(X_{m_1} ... X_{m_r} ∧ X_l ∧ X_m \ B, B \ {X_l}, X_l)
[X_{i_1} ... X_{i_p} ∧ X_l ∧ X_{i_{p+1}} ... X_{i_q} ∧ E(S)|_σ, B' \ {X_m}, X_m] X_{m_1} ... X_{m_r} ∧ X_l ∧ X_m \ B'   (3.31)
× sgn(X_{m_1} ... X_{m_r} ∧ X_l ∧ X_m \ B', B' \ {X_m}, X_m)

In equations 3.30 and 3.31, each bracket and its corresponding sign term differ by a transposition of X_l and X_m alone, all other covectors being equal. Interchange X_l and X_m in 3.31, in both the extensor and the sign term, without affecting sgn(E(P)|_π) × sgn(E(P)|_σ). The covectors of the extensors of 3.30 are then identical to those of 3.31, with the X_l and X_m of this new extensor occupying identical positions. A simple induction shows that sgn(E(R)|_π) = -sgn(E(R)|_σ), so sgn(E(P)|_π) = -sgn(E(P)|_σ) as required. □
Chapter 4
Enlargement of Identities
The algebra of vectors created by Grassmann and
Hamilton has at last won an established place in
Physics. Grassmann's methods are of equal use in
Geometry but this application is less widely appreciated.
The Calculus of Extension, H.G. Forder 1939
4.1
The Enlargement Theorem
The goal of Chapter 4 is to prove a theorem which effectively states that any identity P ≡ Q between Arguesian polynomials in a Grassmann-Cayley algebra is dimension independent. Theorem 4.1 identifies a class of Grassmann-Cayley algebra identities which may have elegant and simple proofs in the context of superalgebras [FDG85], [RQH89], [GCR89]. Such a demonstration, however, will require extending our current understanding of the properties of the meet of positive variables.
One of the most interesting aspects of Theorem 4.1 is that it depends critically on the multilinearity of Arguesian polynomials. For example, Counterexample 4.9 gives a valid identity, neither multilinear in vectors nor in covectors, which does not enlarge to a valid identity in any higher dimension. More interesting perhaps
are two Arguesian polynomials, a type I P and a type II Q, having identical sets of transversals yet inconsistent parity of signs (Example 4.10). In the case of the meet of basic extensors P = ⋀ e_i, the enlargement theorem produces a set of identities representing consequences of the n-th higher Arguesian law. Theorem 4.1, in conjunction with Theorem 3.1, suggests that all Arguesian identities represent geometric theorems that are in fact consequences of lattice identities, which we conjecture.
Let P be a type I Arguesian polynomial of step n on vector set a and covector set X. We define the enlargement by k of P(a, X) to be the multilinear Arguesian polynomial P^(k)(a^(k), X^(k)) in step kn on variable sets a^(k) = {a_{i,l} | i = 1, ..., n, l = 1, ..., k} and X^(k) = {X_{j,m} | j = 1, ..., n, m = 1, ..., k}, in which each vector a_i ∈ a is formally replaced by the join of distinct vectors a_{i,1} ∨ a_{i,2} ∨ ... ∨ a_{i,k}, and each repeated covector X_{j_p} ∈ X* of label X_j is replaced by the meet of distinct covectors X_{j_p,1} ∧ X_{j_p,2} ∧ ... ∧ X_{j_p,k}, each X_{j_p,i} ∈ X^(k). The enlargement of a type II Arguesian polynomial is defined analogously. The variable sets a^(k) (and X^(k)) are ordered lexicographically by convention: for a_{i,l}, a_{i',l'} ∈ a^(k), a_{i,l} < a_{i',l'} if i < i', or if i = i' and l < l'. The enlargement theorem may be stated precisely as follows:
Theorem 4.1 (Enlargement) Let P(a, X) and Q(a, X) be type I or type II non-zero Arguesian polynomials in a Grassmann-Cayley algebra of step n with P ≡ Q. Let P^(k)(a^(k), X^(k)) and Q^(k)(a^(k), X^(k)) be enlargements by k of P and Q in a Grassmann-Cayley algebra of step kn. Then

P ≡ Q ⟺ P^(k) ≡ Q^(k).
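The formal substitution defining the enlargement can be sketched concretely. The expression-tree encoding below is ours, purely for illustration, and is not the notation of the text; only the variable replacement is modeled, not evaluation in the Grassmann-Cayley algebra.

```python
# Sketch (our encoding): a polynomial is a nested tuple
# ('join', ...) / ('meet', ...) over leaves ('vec', i) and ('covec', j).

def enlarge(expr, k):
    """Replace each vector a_i by a_{i,1} v ... v a_{i,k} and each
    covector X_j by X_{j,1} ^ ... ^ X_{j,k}."""
    op = expr[0]
    if op == 'vec':
        i = expr[1]
        return ('join',) + tuple(('vec', (i, l)) for l in range(1, k + 1))
    if op == 'covec':
        j = expr[1]
        return ('meet',) + tuple(('covec', (j, m)) for m in range(1, k + 1))
    # interior node: recurse on subexpressions, keeping the operation
    return (op,) + tuple(enlarge(sub, k) for sub in expr[1:])

# toy example: P = (a_1 v X_1) ^ (a_2 v X_2)
P = ('meet', ('join', ('vec', 1), ('covec', 1)),
             ('join', ('vec', 2), ('covec', 2)))
P2 = enlarge(P, 2)
```

Running `enlarge(P, 2)` replaces each leaf by a two-fold join or meet while leaving the shape of the expression tree unchanged.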
To prove the theorem we shall require some preliminary results. Our overall strategy is as follows. We use the fact that a pre-transversal π^(k)* of a type I P^(k) decomposes into k partial mappings π_i^(k)* : a^(k) → X^(k)*, for i = 1, ..., k, having the property that each π_i^(k)* has an associated π_i, formed by deleting second subscripts from the variables, which is a pre-transversal of P*. Lemma 4.5 shows that a transversal π^(k) of P^(k) is, in this sense, composed of k transversals π_i, i = 1, ..., k, of P. Given P ≡ Q and a transversal π^(k) of P^(k), one verifies that π^(k) is a transversal of Q^(k), and vice versa. To show that P^(k) ≡ Q^(k), we must then show that sgn(E(P^(k))|_{π^(k)}) and sgn(E(Q^(k))|_{π^(k)}) are uniformly equal or opposite over all π^(k). We exploit the block structure of the enlargement P^(k). Given a transversal π of P, a canonical transversal π̂^(k) is defined. Then the vectors and covectors assigned by the partial mappings π_i^(k) of
P^(k) are reassigned, for i = 1, ..., k, as in π̂^(k), to obtain a sequence of transversals of P^(k), π^(k) = π_1^(k), ..., π_k^(k) = π̂^(k), having the property that

sgn(E(P^(k))|_{π^(k)}) × sgn(E(P^(k))|_{π̂^(k)}) = sgn(E(P)|_π) × sgn(E(P)|_σ).   (4.1)
As P ≡ Q, Q^(k) satisfies an equation similar to 4.1, and the Theorem follows. A simple induction shows:

Proposition 4.2 The polynomial P(a, X) is a well-defined Arguesian polynomial in step n if and only if the enlargement P^(k)(a^(k), X^(k)) is a well-defined Arguesian polynomial in step kn.
Given an Arguesian polynomial P(a, X) and enlargement P^(k)(a^(k), X^(k)), the vector a_{i,l} ∈ a^(k), for l = 1, ..., k (covector X_{j,m} ∈ X^(k), m = 1, ..., k), will be said to be associated to the vector a_i (covector X_j) of P. The well-known Birkhoff-von Neumann theorem will be necessary in the proof of the subsequent lemma. For a proof, we refer the reader to [LP86]. A real-valued n × n matrix is doubly stochastic if all row sums and column sums are equal to 1.

Theorem 4.3 (Birkhoff-von Neumann) Every doubly stochastic matrix A has a permutation set of non-zero entries.

Corollary 4.4 A doubly stochastic matrix A can be written as a convex combination of permutation matrices, A = c_1 P_1 + ... + c_k P_k, where Σ_{i=1}^{k} c_i = 1, each P_i is a permutation matrix, and all c_i > 0.
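The decomposition asserted by Corollary 4.4 can be computed by the standard greedy argument: repeatedly extract a permutation supported on the positive entries (which exists by Theorem 4.3) and subtract off as much of it as possible. A minimal sketch, with function names of our own choosing; exact rational arithmetic avoids rounding:

```python
from fractions import Fraction

def find_perm(A):
    """Find a permutation supported on the positive entries of A,
    via Kuhn's augmenting-path bipartite matching."""
    n = len(A)
    match = [-1] * n  # match[col] = row currently matched to that column

    def try_row(r, seen):
        for c in range(n):
            if A[r][c] > 0 and c not in seen:
                seen.add(c)
                if match[c] == -1 or try_row(match[c], seen):
                    match[c] = r
                    return True
        return False

    for r in range(n):
        # guaranteed to succeed for doubly stochastic A (Theorem 4.3)
        assert try_row(r, set())
    perm = [0] * n
    for c, r in enumerate(match):
        perm[r] = c
    return perm  # row r is matched to column perm[r]

def birkhoff(A):
    """Greedy Birkhoff-von Neumann decomposition (Corollary 4.4):
    returns [(c_i, perm_i)] with A = sum c_i P_i, sum c_i = 1."""
    A = [[Fraction(x) for x in row] for row in A]  # work on a copy
    terms = []
    while any(x > 0 for row in A for x in row):
        perm = find_perm(A)
        c = min(A[r][perm[r]] for r in range(len(A)))
        terms.append((c, perm))
        for r in range(len(A)):
            A[r][perm[r]] -= c  # zeroes at least one entry per pass
    return terms

A = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(1, 2), Fraction(1, 2)]]
terms = birkhoff(A)
```

Each pass zeroes at least one positive entry, so the loop terminates after at most n² extractions.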
Lemma 4.5 Let P(a, X) be an Arguesian polynomial and P^(k)(a^(k), X^(k)) an enlargement, with π^(k)* a pre-transversal of P^(k)*. Then

π^(k)* = ⋃_{i=1}^{k} π_i^(k)*

where for each i = 1, ..., k, π_i^(k)* : a^(k) → X^(k)* is a partial mapping such that each π_i* ⊆ a × X*, obtained by deleting second subscripts from the variables of π_i^(k)*, is a pre-transversal π_i* : a → X* of P*. Further, for any Q^(k) ⊆ P^(k), E(Q^(k))|_{π^(k)} is non-zero if and only if, for i = 1, ..., k, E(Q)|_{π_i} is non-zero for the corresponding Q ⊆ P.
PROOF. Let P be type I. We first claim that [a_{i,l}, X_{j_1,m}] ∈ E(P^(k)*) iff [a_i, X_{j_1}] ∈ E(P*), for all l, m = 1, 2, ..., k, X_{j_1} ∈ X*, X_{j_1,m} ∈ X^(k)*, a_i ∈ a, a_{i,l} ∈ a^(k). Let Q^(k) ⊆ P^(k) be type I. We show by induction that Q^(k)*(X_{i_1,1}), for X_{i_1,1} ∈ X^(k)*, iff Q^(k)*(X_{i_1,1}, ..., X_{i_1,k}) for the set of associated covectors {X_{i_1,1}, ..., X_{i_1,k}} ⊆ X^(k)*. If X_{i_1} is a repeated covector of Q then, by the substitution X_{i_1} → X_{i_1,1} ∧ ... ∧ X_{i_1,k}, the latter occurs as a subexpression of Q^(k). Assume Q^(k) = R^(k) ∨ S^(k) with R^(k) type II and S^(k) type I, and that the result holds by induction for the subexpressions. We must have S^(k)*(X_{i_1,1}) as R^(k) is type II, and then S^(k)*(X_{i_1,1}, ..., X_{i_1,k}). Since by Proposition 4.2, Q^(k) is proper type I iff Q is proper type I, the expansion E(Q^(k)*) = E(R^(k)*) ∨ E(S^(k)*) yields Q^(k)*(X_{i_1,1}, ..., X_{i_1,k}). The result is analogous for type II subexpressions, and for Q^(k) = R^(k) ∨/∧ S^(k) with R^(k) and S^(k) type I. The converse of the claim is obvious.

Next, observe that for type I Q ⊆ P, Q*(X_{j_1}) iff Q^(k)*(X_{j_1,1}, ..., X_{j_1,k}) for the corresponding Q^(k) ⊆ P^(k). For let Q = R ∨ S where R is type II and S is type I. If Q*(X_{j_1}) then, since Q is type I, S*(X_{j_1}), and then S^(k)*(X_{j_1,1}, ..., X_{j_1,k}). Again E(Q^(k)*) = E(R^(k)*) ∨ E(S^(k)*) and, as Q^(k) is proper, Q^(k)*(X_{j_1,1}, ..., X_{j_1,k}). The converse is analogous.

Suppose that [a_{i,l}, X_{j_1,m}] ∈ E(P^(k)*) for a_{i,l} ∈ a^(k) and X_{j_1,m} ∈ X^(k)*. Then ∃ Q^(k) = R^(k) ∨/∧ S^(k) with R^(k) type II and S^(k) type I such that R^(k)*(a_{i,l}) and S^(k)*(X_{j_1,m}). By the above remarks, R^(k)*(a_{i,1}, ..., a_{i,k}) and S^(k)*(X_{j_1,1}, ..., X_{j_1,k}), and then R*(a_i) and S*(X_{j_1}) for the corresponding R, S ⊆ P; hence [a_i, X_{j_1}] ∈ E(P*). The converse follows similarly.
Let B be the bipartite multigraph associated to P and B^k the bipartite multigraph associated to P^(k). Since [a_i, X_{j_1}] ∈ E(P*) iff [a_{i,l}, X_{j_1,m}] ∈ E(P^(k)*) for l, m = 1, ..., k, the multigraph B^k may be constructed from B by replacing each vertex with label a_i ∈ a (or X_j ∈ X) of B by k distinct vertices a_{i,l} ∈ a^(k) (and X_{j,m} ∈ X^(k)), l, m = 1, ..., k, in B^k. Two distinct vertices a_{i,l} and X_{j,m} of B^k are connected by an edge (a_{i,l}, X_{j,m}) if and only if the associated vertices a_i and X_j have (a_i, X_j) ∈ B. For any l, m = 1, ..., k we say that the edge (a_{i,l}, X_{j,m}) of B^k is associated with the edge (a_i, X_j) of B.
The perfect matchings of the multigraphs B and B^k are closely related. Let M_k be a perfect matching of B^k. We show that the edges of M_k may be partitioned into disjoint sets M_k = ⋃_{i=1}^{k} M_i such that, for each M_i, the set of edges of B associated to M_i forms a perfect matching of B. Given the induced subgraph M_k of B^k, form the k-regular bipartite multigraph B_M by contracting to a single vertex â_i all the vertices a_{i,l}, l = 1, ..., k, of M_k associated to a_i, and to X̂_j all the vertices X_{j,m} of M_k associated to X_j. By taking the adjacency matrix of the multigraph B_M, recording multiple incidences if vertices â, X̂ of B_M are connected by more than one edge, the graph B_M may be represented by an n × n matrix Mat_{B_M} having all row and column sums equal to k. By appropriate scaling, the Birkhoff-von Neumann Theorem 4.3 clearly holds when a doubly stochastic matrix is extended to have all row and column
Figure 4.1: B_P for P = (a ∨ BC) ∧ (b ∨ AC) ∧ (c ∨ BCD) ∧ (d ∨ CD), and B^4.
sums equal to a constant k, since we may replace every element of Mat_{B_M} by 1/k times its value. Hence Mat_{B_M} contains a permutation set of non-zero entries. Each such set is a perfect matching of B_M, corresponding to a perfect matching of B (there are no two edges in a matching of B_M of the form (â, X̂_j), (â, X̂_{j'})). Since all entries of the scaled matrix are multiples of 1/k, by Corollary 4.4, Mat_{B_M} is the sum of permutation matrices with integer coefficients, each permutation matrix corresponding to a perfect matching of B, and the claim follows. As a consequence, an edge (a_{i,l}, X_{j,m}) of B^k is contained in a perfect matching of B^k only if its associated edge (a_i, X_j) is contained in a perfect matching of B. The converse is evident: if M is a matching of B with (a_i, X_j) ∈ M, then a canonical matching of B^k may be formed by setting (a_{i,l}, X_{j,l}) ∈ M_k for l = 1, ..., k.
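The canonical matching construction and the projection by deleting second subscripts can be checked on a small example. The encoding of B and B^k as sets of index pairs is ours, for illustration only:

```python
# Sketch (our encoding): vertices of B are labels 1..n on each side;
# an edge of B is a pair (i, j).  B^k replaces each vertex by k copies,
# and (a_{i,l}, X_{j,m}) is an edge of B^k iff (i, j) is an edge of B.

def enlarged_edges(edges, k):
    return {((i, l), (j, m))
            for (i, j) in edges
            for l in range(1, k + 1)
            for m in range(1, k + 1)}

def canonical_matching(M, k):
    """Canonical perfect matching of B^k induced by a perfect
    matching M of B: take (a_{i,l}, X_{j,l}) for l = 1..k."""
    return {((i, l), (j, l)) for (i, j) in M for l in range(1, k + 1)}

def project(Mk):
    """Delete second subscripts, keeping edge multiplicities."""
    proj = {}
    for ((i, _), (j, _)) in Mk:
        proj[(i, j)] = proj.get((i, j), 0) + 1
    return proj

B = {(1, 1), (1, 2), (2, 1), (2, 2)}   # complete bipartite B on 2 + 2
M = {(1, 2), (2, 1)}                   # a perfect matching of B
M3 = canonical_matching(M, 3)
assert M3 <= enlarged_edges(B, 3)      # it uses only edges of B^3
```

Projecting M3 recovers each edge of M with multiplicity k = 3, as in the partition M_k = ⋃ M_i above.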
We have shown that [a_{i,l}, X_{j_1,m}] ∈ E(P^(k)*) iff [a_i, X_{j_1}] ∈ E(P*) for all l, m = 1, 2, ..., k, X_{j_1} ∈ X*, X_{j_1,m} ∈ X^(k)*, and that the pre-transversal π^(k)* = ⋃_{i=1}^{k} π_i^(k)*, where each associated π_i* is a pre-transversal of P*. By the Grassmann condition 2.17 for Arguesian polynomials, a pre-transversal π* of P may fail to be a transversal only if one of two conditions applies:

1. There exists a type I R ∧ S ⊆ P with R and S both type I such that E(R)|_π ∧ E(S)|_π = 0, but E(R)|_π, E(S)|_π are non-zero. This is equivalent to ∃ X_j ∈ X such that X_j ∈ ext(E(R)|_π) ∩ ext(E(S)|_π).

2. There exists a type I R ∨ S ⊆ P with R and S both type I such that E(R)|_π ∨ E(S)|_π = 0, but E(R)|_π, E(S)|_π are non-zero. This is equivalent to ∃ X_j ∉ ext(E(R)|_π) ∪ ext(E(S)|_π).
Given a pre-transversal π* of an Arguesian P(a, X), define the canonical extension π̂^(k)* to be the pre-transversal of P^(k)* given by π̂^(k)* : a_{i,l} → X_{j,l}* for l = 1, ..., k, X_{j,l}* ∈ X^(k)*, whenever π* : a_i → X_j* with associated X_j* ∈ X*. A pre-transversal π* of P is a transversal π of P iff π̂^(k)* is a transversal π̂^(k) of P^(k). For by induction, given π*, with R ⊆ P, R type I, and the corresponding π̂^(k)*, R^(k) ⊆ P^(k), we show that E(R)|_π is non-zero iff E(R^(k))|_{π̂^(k)} is non-zero. If R = e = a_1 ... a_p ∨ X_1 ... X_q, then R^(k) = e^(k) = a_{1,1} ... a_{1,k} ... a_{p,1} ... a_{p,k} ∨ X_{1,1} ... X_{1,k} ... X_{q,1} ... X_{q,k}. As V(e), C(e), V(e^(k)), C(e^(k)) are distinct, π* and π̂^(k)* determine unique maps π : V(e) → C(e) and π̂^(k) : V(e^(k)) → C(e^(k)). The covector X_j ∈ ext(E(e)|_π) iff {X_{j,1}, ..., X_{j,k}} ⊆ ext(E(e^(k))|_{π̂^(k)}). If Q = R ∧ S, then Q^(k) = R^(k) ∧ S^(k), and E(Q)|_π is non-zero iff both E(R)|_π, E(S)|_π are non-zero and each covector of label X_j ∈ ext(E(R)|_π) satisfies X_j ∉ ext(E(S)|_π), and vice versa. By induction, X_j ∈ ext(E(R)|_π) iff {X_{j,1}, ..., X_{j,k}} ⊆ ext(E(R^(k))|_{π̂^(k)}). Thus E(Q^(k))|_{π̂^(k)} is non-zero iff E(Q)|_π is non-zero. The cases Q = R ∨ S, for R, S type I, and R ∧ S, for R type II and S type I, are similar.
Now, given a transversal π with π* : a_i → X_j*, a transversal π'^(k) with π'^(k)* : a_{i,l} → X_{j,m}*, for any l, m = 1, ..., k, may easily be constructed. If π̂^(k)* : a_{i,l} → X_{j,l}* and π̂^(k)* : a_{i,m} → X_{j,m}*, form π'^(k) by setting π'^(k)*(a_{j',s}) = π̂^(k)*(a_{j',s}) for j' = 1, ..., n, j' ≠ i, s = 1, ..., k, and for j' = i and all s ≠ l, m. Then set π'^(k)* : a_{i,l} → X_{j,m}* and π'^(k)* : a_{i,m} → X_{j,l}*. The previous argument is identical, and the resulting map clearly remains a transversal of P^(k) with π'^(k)* : a_{i,l} → X_{j,m}*. We shall call this map the canonical extension of π containing the assignment π^(k)* : a_{i,l} → X_{j,m}*, and denote it π̂^(k) if no confusion results.
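Viewed purely as a map on index pairs, a canonical extension is a one-line construction. The encoding below is ours; the variant containing a particular crossed assignment a_{i,l} → X_{j,m} is obtained by swapping two values afterwards.

```python
def canonical_extension(pi, k):
    """pi maps a vector label i to a covector label j; the canonical
    extension sends a_{i,l} to X_{j,l} for l = 1, ..., k."""
    return {(i, l): (pi[i], l) for i in pi for l in range(1, k + 1)}

# pi : a_1 -> X_2, a_2 -> X_1
pi = {1: 2, 2: 1}
ext = canonical_extension(pi, 2)
assert ext[(1, 1)] == (2, 1) and ext[(1, 2)] == (2, 2)
```

By construction the extension is a bijection on each second subscript separately, which is the property used repeatedly below.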
(⇒) Suppose there is a pre-transversal π^(k)* = ⋃_{i=1}^{k} π_i^(k)*, and Q^(k) ⊆ P^(k), such that for some i = 1, ..., k the associated pre-transversal π_i* of P* has E(Q)|_{π_i} = 0. Then the canonical extension π̂^(k)* containing the partial matching determined by π_i^(k)* : a_{i,l} → X_{j,m}* satisfies E(Q^(k))|_{π̂^(k)} = 0, by the preceding paragraph. Therefore in Q^(k), under π̂^(k), one of the two Grassmann conditions holds.

Case 1) ∃ R^(k) ∧ S^(k) ⊆ Q^(k) with {X_{j,1}, ..., X_{j,k}} ⊆ ext(E(R^(k))|_{π̂^(k)}) ∩ ext(E(S^(k))|_{π̂^(k)}). Let M_k denote the perfect matching in B^k corresponding to π̂^(k)* of P^(k)*, with M_k = ⋃_{i=1}^{k} M_i as above, and let M_i denote the set of edges corresponding to π_i^(k)*. The set M_k \ M_i is the disjoint union of sets whose associated edges form (k-1) perfect matchings of B. If (a_{i,l}, X_{j,m}) and (a_{i',l'}, X_{j',m'}) are both edges of M_i, then i ≠ i' and j ≠ j'. The pre-transversal π^(k)* may now be reconstructed by removing the vector and covector assignments of π̂^(k)* corresponding to M_k \ M_i and reassigning according to the assignment of π^(k)*. In replacing each assignment corresponding to an M_i, at most one repeated covector with label from the set {X_{j,m} : m = 1, ..., k} is reassigned for each i. Since only (k-1) sets M_i are reassigned, the assignment of one of the covectors X_{j,m} ∈ {X_{j,1}, ..., X_{j,k}} must remain unchanged. By Lemma 2.13, X_{j,m} ∈ ext(E(R^(k))|_{π^(k)}) ∩ ext(E(S^(k))|_{π^(k)}), so that E(R^(k))|_{π^(k)} ∧ E(S^(k))|_{π^(k)} = 0, a contradiction.

Case 2) ∃ R^(k) ∨ S^(k) ⊆ Q^(k), and the extension π̂^(k)* of the pre-transversal π_i* satisfies: ext(E(R^(k))|_{π̂^(k)}) and ext(E(S^(k))|_{π̂^(k)}) contain a set C of covectors which do not span X^(k). Then there is a set {X_{j,1}, ..., X_{j,k}} ⊆ C for some j. In the same manner, reconstruct π^(k)* by replacing each assignment of M_k \ M_i. This reassignment changes the assignment of at most one covector with label in {X_{j,1}, ..., X_{j,k}} for each set M_i reassigned. Then ∃ X_{j,m} ∉ ext(E(R^(k))|_{π^(k)}) ∪ ext(E(S^(k))|_{π^(k)}), and E(Q^(k))|_{π^(k)} = 0, a contradiction.
(⇐) Suppose π^(k)* = ⋃_{i=1}^{k} π_i^(k)* is such that E(Q)|_{π_i} is non-zero for each i = 1, ..., k, yet E(Q^(k))|_{π^(k)} = 0. Then either:

Case 1) ∃ R^(k) ∧ S^(k) ⊆ Q^(k) with X_{j,m} ∈ ext(E(R^(k))|_{π^(k)}) ∩ ext(E(S^(k))|_{π^(k)}). If ∃ a_{i,l} ∈ V(R^(k)) such that [a_{i,l}, X_{j,m}] ∈ E(R^(k))|_{π^(k)}, then as X_{j,m} ∈ ext(E(R^(k))|_{π^(k)}), the type I R^(k) satisfies the hypothesis of Lemma 2.13 part 2. Then for every pre-transversal σ^(k)* of P^(k)* with E(R^(k))|_{σ^(k)} non-zero, [a_{i,l}, X_{j,m}] ∈ E(R^(k))|_{σ^(k)} and X_{j,m} ∈ ext(E(R^(k))|_{σ^(k)}). Hence there is no a_{i',l'} ∈ V(S^(k)) with [a_{i',l'}, X_{j,m}] ∈ E(S^(k))|_{σ^(k)}, and as X_{j,m} ∈ ext(E(S^(k))|_{π^(k)}), by Lemma 2.13 part 1, X_{j,m} ∈ ext(E(S^(k))|_{σ^(k)}). Therefore X_{j,m} ∈ ext(E(R^(k))|_{σ^(k)}) ∩ ext(E(S^(k))|_{σ^(k)}) for the arbitrary pre-transversal σ^(k)*, and P^(k) is zero, a contradiction.

Thus R^(k), S^(k) ⊆ P^(k) both satisfy Lemma 2.13 part 1. We claim the corresponding R, S ⊆ P are type I satisfying Lemma 2.13 part 1 as well. For given X_{j,m} ∈ C(R^(k)) ∩ C(S^(k)), as above, the associated X_j ∈ C(R) must satisfy the hypothesis of either part 1 or part 2 with respect to the type I R (similarly for X_j ∈ C(S)), and these hypotheses are mutually exclusive. Let π̃ be a pre-transversal of P with E(R)|_{π̃} non-zero, satisfying Lemma 2.13 part 2. The canonical extension π̃^(k), by the above remarks, has E(R^(k))|_{π̃^(k)} non-zero, and [a_i, X_j] ∈ (∉) E(R)|_{π̃} and X_j ∈ (∉) ext(E(R)|_{π̃}) iff [a_{i,l}, X_{j,l}] ∈ (∉) E(R^(k))|_{π̃^(k)} and X_{j,l} ∈ (∉) ext(E(R^(k))|_{π̃^(k)}), for l = 1, ..., k. Thus for m ∈ {1, ..., k}, X_{j,m} satisfies the hypotheses of part 2 of Lemma 2.13 with respect to R^(k) (or S^(k)) iff X_j satisfies the same hypotheses with respect to R, and the claim holds.

Then R ⊆ P satisfies: for any pre-transversal π* of P* with E(R)|_π non-zero, [a_i, X_j] ∈ E(R)|_π for some a_i ∈ V(R) iff X_j ∉ ext(E(R)|_π). (Similarly for S.)
By hypothesis, E(Q)|_{π_p} is non-zero for p = 1, ..., k. Hence there exists a set of repeated covectors {X_j*} ⊆ C((R ∧ S)*), each of label X_j, such that for all p = 1, ..., k, π_p* : a_i → X* for some a_i ∈ V(R ∧ S) and some X* ∈ {X_j*}. As the π_p are obtained by deleting second subscripts from the π_p^(k)*, the pre-transversal π^(k)* maps some vector of V(R^(k) ∧ S^(k)) to a repeated covector of the set {X_{j,1}*, ..., X_{j,k}*}. As the projection π^(k) : a^(k) → X^(k) is a bijection, the image of V(R^(k) ∧ S^(k)) under π^(k)|_{R^(k)}, π^(k)|_{S^(k)} contains the entire set of labels {X_{j,1}, ..., X_{j,k}}. Then for X_{j,m}, ∃ a_{i,l} ∈ V(R^(k) ∧ S^(k)) such that [a_{i,l}, X_{j,m}] ∈ E(R)|_{π^(k)} or [a_{i,l}, X_{j,m}] ∈ E(S)|_{π^(k)}, and X_{j,m} ∉ ext(E(R)|_{π^(k)}) ∩ ext(E(S)|_{π^(k)}), a contradiction.
Case 2) ∃ R^(k) ∨ S^(k) ⊆ Q^(k) with X_{j,m} ∉ ext(E(R^(k))|_{π^(k)}) ∪ ext(E(S^(k))|_{π^(k)}). We proceed similarly. If [a_{i,l}, X_{j,m}] ∈ E(Q^(k))|_{π^(k)} then, by Lemma 2.13 part 2, P^(k) is zero. Without loss of generality assume [a_{i,l}, X_{j,m}] ∈ E(R^(k))|_{π^(k)}. As X_{j,m} ∉ ext(E(R^(k))|_{π^(k)}), R^(k) satisfies Lemma 2.13 part 1, and therefore S^(k) satisfies Lemma 2.13 part 2, as [b_{i,l}, X_{j,m}] ∉ E(S^(k))|_{σ^(k)} for any b_{i,l} ∈ V(S^(k)) and X_{j,m} ∉ ext(E(S^(k))|_{σ^(k)}). Again, the associated X_j ∈ C(R) (which must exist, as P is non-zero) satisfies the same hypotheses with respect to the corresponding R, S ⊆ P.

For R, S ⊆ P and any π* with E(R)|_π, E(S)|_π non-zero, [a_i, X_j] ∈ E(R)|_π iff X_j ∉ ext(E(R)|_π), and [b_i, X_j] ∉ E(S)|_π for any b_i ∈ V(S) and X_j ∉ ext(E(S)|_π). As each factor π_i*, i = 1, ..., k, satisfies E(Q)|_{π_i} non-zero, the set of covectors {X_j*} ⊆ C((R ∨ S)*) of label X_j has π_i* : a ↛ X* for any a ∈ V(R ∨ S). The pre-transversal π^(k)* maps no vector of V(R^(k) ∨ S^(k)) to the set {X_{j,1}*, ..., X_{j,k}*}, and the image of V(R^(k) ∨ S^(k)) under π^(k)|_{R^(k)}, π^(k)|_{S^(k)} must avoid the set of labels {X_{j,1}, ..., X_{j,k}}. As R^(k) satisfies Lemma 2.13 part 1, we have the contradiction X_{j,m} ∈ ext(E(R^(k))).
We now give the proof of the Enlargement Theorem:

P ≡ Q ⟺ P^(k) ≡ Q^(k).
PROOF OF ENLARGEMENT THEOREM. Let P be type I and Q type II Arguesian, although the proof is valid if P, Q have the same type. Let π^(k) be a transversal of P^(k) with factorization π^(k)* = ⋃_{p=1}^{k} π_p^(k)*. By Lemma 4.5 each partial mapping π_p^(k)*, for p = 1, ..., k, has associated E(P)|_{π_p} non-zero, so π_p is a transversal of P. If π_p^(k)* : a_{i,l} → X_{j,m} in P^(k)*, the associated transversal of P satisfies π_p* : a_i → X_j. As P ≡ Q, regarding π_p as a transversal of Q, the canonical extension π̂_p^(k) in Q^(k) identifies the unique a_{i,l} ∈ a^(k) such that [a_{i,l}, X_{j,m}] ∈ E(Q^(k)). As Q^(k) is multilinear in covectors, setting π^(k)* : X_{j,m} → a_{i,l}, the partial mappings π_p^(k)*, p = 1, ..., k, and the pre-transversal π^(k)* are well-defined in Q^(k)*. Then E(Q)|_{π_p} is non-zero for each partial mapping π_p^(k)*, so by the dual of Lemma 4.5, E(Q)|_{π^(k)} is non-zero; that is, π^(k) is a transversal of Q^(k). The coefficient of each transversal in E(P^(k))|_{π^(k)} is ±1 by Proposition 2.15.
To show that P^(k) ≡ Q^(k), it remains to show that sgn(P^(k)|_{π^(k)}) = sgn(Q^(k)|_{π^(k)}) for every transversal π^(k), or sgn(P^(k)|_{π^(k)}) = -sgn(Q^(k)|_{π^(k)}) for every transversal π^(k). We may relate the sign of a transversal of P^(k) to the sign of a transversal of P by the following steps:
1. Given a transversal π of non-zero P with π : a_i → X_j, calculate the sign of the canonical extension π̂^(k) of P^(k), π̂^(k) : a_{i,l} → X_{j,l} for all l = 1, ..., k, as sgn(E(P^(k))|_{π̂^(k)}) = (-1)^k sgn(E(P)|_π).

2. A transversal σ^(k) of P^(k) represents a matching M_k = ⋃_{s=1}^{k} M_s in B^k in which each set M_s corresponds to a partial mapping σ_s^(k) : V(P^(k)) → C(P^(k)) which determines a transversal σ_s of P.

3. Using Lemma 3.16, apply a set of transpositions converting σ^(k) = {σ_s^(k)}_{s=1,...,k} to a new transversal σ̃^(k) = {σ̃_s^(k)}_{s=1,...,k} such that for all s = 1, ..., k, σ̃_s^(k) assigns a_{i,s} to a covector X_{j,s} with second subscript s, and each σ̃_s^(k) remains a partial mapping of P^(k) whose associated vectors and covectors determine a transversal σ̃_s of P. By Lemma 3.16, each such transposition reverses the sign of the previous transversal.

4. Form a sequence π̂^(k) = ρ_0^(k), ρ_1^(k), ..., ρ_k^(k) = σ̃^(k) such that for s = 1, ..., k, ρ_s^(k) is a transversal, and ρ_s^(k)* = ρ_{s-1}^(k)* except on the vectors a_{i,s} for which σ̃_s^(k) : a_{i,s} → X_{j,s}, in which case if π̂^(k)* : a_{i,s} → X_{j',s} then ρ_s^(k)* : a_{i,s} → X_{j,s}. Then

sgn(E(P^(k))|_{ρ_s^(k)}) × sgn(E(P^(k))|_{ρ_{s-1}^(k)}) = sgn(E(P)|_{σ̃_s}) × sgn(E(P)|_π).

Steps 1-4 are illustrated in Example 4.8; several require justification.
Step 1) By construction of π̂^(k), if ext(E(R)|_π) ∨ ext(E(S)|_π) = a_1 ... a_l ∨ X_1 ... X_m, then

ext(E(R^(k))|_{π̂^(k)}) ∨ ext(E(S^(k))|_{π̂^(k)}) = a_{1,1} ... a_{1,k} ... a_{l,1} ... a_{l,k} ∨ X_{1,1} ... X_{1,k} ... X_{m,1} ... X_{m,k}   (4.2)
Calculate sgn(P^(k)|_{π̂^(k)}) recursively. If R^(k) ∨ S^(k) ⊆ P^(k) is type I with R^(k) type II and S^(k) type I, then

sgn(E(R^(k))|_{π̂^(k)} ∨ E(S^(k))|_{π̂^(k)}) = sgn(E(R)|_{π̂^(k)}) × sgn(E(S)|_{π̂^(k)}) × sgn(X_{1,1} ... X_{m,k} \ {X_{π̂(1,1)}, ..., X_{π̂(j,k)}}, X_{π̂(1,1)}, ..., X_{π̂(j,k)})

which may be written as

sgn(E(R)|_{π̂^(k)}) × sgn(E(S)|_{π̂^(k)}) × sgn(X_{1,1} ... X_{m,k} \ {X_{i,1}, ..., X_{i,k}, ..., X_{j,1}, ..., X_{j,k}}, X_{i,1}, ..., X_{i,k}, ..., X_{j,1}, ..., X_{j,k})

for some subset of covectors {X_{i,1}, ..., X_{i,k}, ..., X_{j,1}, ..., X_{j,k}}. In the lexicographic order on X^(k), if i < j then X_{i,l} < X_{j,m} for all l, m. Thus if i < j, sgn(X_{j,1}, ..., X_{j,k}, X_{i,1}, ..., X_{i,k}) = (-1)^{k²} sgn(X_j, X_i). If by induction sgn(E(R^(k))|_{π̂^(k)}) = (-1)^{k²} sgn(E(R)|_π) and sgn(E(S^(k))|_{π̂^(k)}) = (-1)^{k²} sgn(E(S)|_π), then sgn(E(R^(k))|_{π̂^(k)} ∨ E(S^(k))|_{π̂^(k)}) = (-1)^{3k²} sgn(E(R)|_π ∨ E(S)|_π) = (-1)^k sgn(E(R)|_π ∨ E(S)|_π). A similar calculation gives the same result for the join or meet of type I subexpressions.
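The parity fact driving this calculation — that exchanging two adjacent blocks of k variables costs sign (-1)^{k²}, which equals (-1)^k since k² ≡ k (mod 2) — can be verified numerically. The helper `perm_sign` below is ours, not a function of the text:

```python
def perm_sign(p):
    """Sign of a permutation given as a list of images of 0..n-1,
    computed from its cycle decomposition."""
    n, sign, seen = len(p), 1, [False] * len(p)
    for i in range(n):
        if not seen[i]:
            j, cycle = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                cycle += 1
            sign *= (-1) ** (cycle - 1)  # a c-cycle has sign (-1)^(c-1)
    return sign

for k in range(1, 6):
    # permutation sending (block1, block2) -> (block2, block1)
    swap = list(range(k, 2 * k)) + list(range(k))
    assert perm_sign(swap) == (-1) ** (k * k) == (-1) ** k
```

The block swap factors into k·k adjacent transpositions (each element of one block crossing each element of the other), giving the (-1)^{k²} directly.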
Step 2) is proven in Lemma 4.5.
Step 3) Given σ^(k)* such that σ^(k)* : a_{i,l} → X_{j,m}, a_{i',l'} → X_{j',m'}, define γ_a^(k)* by γ_a^(k)* = σ^(k)* if i ≠ i', and if i = i' then γ_a^(k)* = σ^(k)* except that γ_a^(k)* : a_{i,l} → X_{j',m'}, a_{i',l'} → X_{j,m}. Define γ_X^(k)* by γ_X^(k)* = σ^(k)* if j ≠ j', and if j = j' then γ_X^(k)* = σ^(k)* except that γ_X^(k)* : a_{i,l} → X_{j',m'}, a_{i',l'} → X_{j,m}. Both γ_a^(k)*, γ_X^(k)* define perfect matchings of B^(k), since if (a_{i,l}, X_{j,m}) ∈ B^(k) then (a_{i,l'}, X_{j,m'}) ∈ B^(k) for any l', m' = 1, ..., k, and thus γ_a^(k), γ_X^(k) are pre-transversals of P^(k). Both are transversals as well, for if P^(k) is type I and E(P^(k))|_{γ^(k)} = 0 then either:

Case 1) ∃ R^(k) ∧ S^(k) ⊆ P^(k) with some covector X_{j,m} ∈ ext(E(R^(k))|_{γ^(k)}) ∩ ext(E(S^(k))|_{γ^(k)}). Since σ^(k) is non-zero, there is a vector a_{i,l} ∈ V(R^(k)) ∪ V(S^(k)) with σ^(k) : a_{i,l} → X*, X* ∈ C(R^(k) ∧ S^(k)), with label X_{j,m}. By Lemma 2.13, X_{j,m} ∉ ext(E(R^(k))|_{σ^(k)}) ∩ ext(E(S^(k))|_{σ^(k)}). Applying γ^(k) we have γ^(k) : a_{i',l'} → X_{j,m}, where a_{i',l'} ∈ V(R^(k)) ∪ V(S^(k)) is unique by the multilinearity of vectors, a contradiction; or

Case 2) A similar argument holds when ∃ R^(k) ∨ S^(k) ⊆ P^(k). Similarly, given σ^(k), γ_X^(k) is a transversal of Q^(k), Q^(k) being multilinear in covectors. As P^(k) and Q^(k) have the same transversals, γ_a^(k), γ_X^(k) are both transversals of P^(k) and Q^(k).
Now construct σ̃^(k). By Lemma 2.20 we may pass to unrepeated variables. Given σ^(k), suppose σ_1^(k) : a_{i,1} → X_{j,m} with m ≠ 1. There exists σ_s^(k) : a_{i',l} → X_{j,1} with s ≠ 1. Replacing the edges (a_{i,1}, X_{j,m}) ∈ M_1 and (a_{i',l}, X_{j,1}) ∈ M_s of B^k with the new edges (a_{i,1}, X_{j,1}) ∈ M_1 and (a_{i',l}, X_{j,m}) ∈ M_s, the new set {σ'_s^(k)} defines a set of partial mappings with each σ'_s a transversal of P, yielding σ'^(k) : a_{i,1} → X_{j,1}. Inductively, assume σ'^(k) matches vectors and covectors with second subscript 1 for a subset S of pairs (a_{i,1}, X_{j,1}) of size t, and let σ'^(k) : a_{i',1} → X_{j',m'} with m' ≠ 1. Since σ'^(k) projects to a transversal σ' of P, the transpositions γ_a^(k), γ_X^(k) required to map a_{i',1} to X_{j',1} must involve covectors disjoint from those in S, and after two transpositions a set of t + 1 pairs having second subscript equal to 1 has been matched. By induction, σ̃_1 : a_{i,1} → X_{j,1} for the n pairs matched by σ'_1. There are no vectors a_{i,1} or covectors X_{j,1} matched by σ̃_s^(k) for s ≥ 2, since σ̃_1 and σ̃_s are both transversals of P. The same argument applies for each s ≥ 2; forming σ'^(k) for all s ≥ 2, the transversal σ̃^(k) may be constructed.
Step 4) Since σ̃_s^(k) and π̂^(k) both map vectors to covectors with common second subscript s, and σ̃_s and π are both transversals of P, each ρ_s^(k)* is a pre-transversal of P^(k). By hypothesis, ρ_0^(k) is a transversal. Suppose by induction that ρ_1^(k), ..., ρ_{s-1}^(k) are transversals of P^(k) but that ρ_s^(k) is not, for 1 ≤ s ≤ k. Then Case 1) ∃ R^(k) ∧ S^(k) ⊆ P^(k) such that X_{r,s} ∈ ext(E(R^(k))|_{ρ_s^(k)}) ∩ ext(E(S^(k))|_{ρ_s^(k)}). The covector X_{r,s} must have second subscript s, as only covectors of this type have been reassigned in passing from ρ_{s-1}^(k) to ρ_s^(k). But ρ_{s+1}^(k), ..., ρ_k^(k), for s+1 ≤ t ≤ k, reassign only covectors with second subscript t, and hence no assignment to X_{r,s} is changed. By Lemma 2.13, X_{r,s} ∈ ext(E(R^(k))|_{σ̃^(k)}) ∩ ext(E(S^(k))|_{σ̃^(k)}), which is a contradiction. An identical argument holds in Case 2).
To prove the sign equality of step 4 we shall require two final lemmas. Intuitively, the lemmas say that the block structure of ρ^(k), i.e. the structure of P, determines the sign of a transversal π^(k). The following Lemma remains valid when ∨ and ∧ are interchanged, and vectors are interchanged with covectors.
Lemma 4.6 Let A_1, A_2 (B_1, B_2) be ordered sets of distinct vectors (covectors) with V(A_i) ⊆ a (C(B_i) ⊆ X), i = 1, 2, with |V(A_1)| = |V(A_2)| (|C(B_1)| = |C(B_2)|), and let A_1^(k), A_2^(k) (B_1^(k), B_2^(k)) be ordered sets of distinct vectors (covectors) with V(A_i^(k)) ⊆ a^(k) (C(B_i^(k)) ⊆ X^(k)), with |V(A_1^(k))| = |V(A_2^(k))| (|C(B_1^(k))| = |C(B_2^(k))|). Suppose that A_i^(k) (B_i^(k)), i = 1, 2, are identical except in a set S (T) of common positions which are occupied by vectors {a_{i,s}} (covectors {X_{j,s}}) with second subscript s for some 1 ≤ s ≤ k. Suppose also that the associated set to the ordered subset of A_i^(k) (B_i^(k)) determined by the positions S (T) is precisely A_i (B_i), i = 1, 2.

116    CHAPTER 4. ENLARGEMENT OF IDENTITIES

Let R_i = (∨ A_i) ∨ (∧ B_i) and R_i^(k) = (∨ A_i^(k)) ∨ (∧ B_i^(k)) be type I basic extensors with π_i : A_i → B_i, π_i^(k) : A_i^(k) → B_i^(k), i = 1, 2, assignments from vectors to covectors such that π_2^(k) = π_1^(k) except on vectors and covectors with second subscript s, which are assigned to each other such that π_i^(k) : a_{l,s} → X_{m,s} iff π_i : a_l → X_m for i = 1, 2. Then

sgn(E(R_1^(k))|π_1^(k)) × sgn(E(R_2^(k))|π_2^(k)) = sgn(E(R_1)|π_1) × sgn(E(R_2)|π_2)    (4.3)

PROOF.
We may choose representations

R_1 = a_1 ⋯ a_l ∨ X_1 ⋯ X_m    (4.4)

R_2 = b_1 ⋯ b_l ∨ Y_1 ⋯ Y_m    (4.5)

R_1^(k) = c_{1,1} ⋯ a_{1,s} ⋯ a_{l,s} ⋯ c_{p,q} ∨ Z_{1,1} ⋯ X_{1,s} ⋯ X_{m,s} ⋯ Z_{r,t}    (4.6)

R_2^(k) = c_{1,1} ⋯ b_{1,s} ⋯ b_{l,s} ⋯ c_{p,q} ∨ Z_{1,1} ⋯ Y_{1,s} ⋯ Y_{m,s} ⋯ Z_{r,t}    (4.7)

with m ≥ l. Computing E(R_1)|π_1, E(R_2)|π_2, E(R_1^(k))|π_1^(k) and E(R_2^(k))|π_2^(k), write
E(R_1)|π_1 = X_1 ⋯ X_m \ {X_{π_1(1)}, …, X_{π_1(l)}}
    × sgn(X_1 ⋯ X_m \ {X_{π_1(1)}, …, X_{π_1(l)}}, X_{π_1(1)}, …, X_{π_1(l)})    (4.8)

E(R_2)|π_2 = Y_1 ⋯ Y_m \ {Y_{π_2(1)}, …, Y_{π_2(l)}}
    × sgn(Y_1 ⋯ Y_m \ {Y_{π_2(1)}, …, Y_{π_2(l)}}, Y_{π_2(1)}, …, Y_{π_2(l)})    (4.9)

E(R_1^(k))|π_1^(k) = Z_{1,1} ⋯ X_{1,s} ⋯ X_{m,s} ⋯ Z_{r,t} \ {Z_{π_1^(k)(1,1)}, …, Z_{π_1^(k)(p,q)}}
    × sgn(Z_{1,1} ⋯ X_{1,s} ⋯ X_{m,s} ⋯ Z_{r,t} \ {Z_{π_1^(k)(1,1)}, …, Z_{π_1^(k)(p,q)}}, Z_{π_1^(k)(1,1)}, …, Z_{π_1^(k)(p,q)})    (4.10)

E(R_2^(k))|π_2^(k) = Z_{1,1} ⋯ Y_{1,s} ⋯ Y_{m,s} ⋯ Z_{r,t} \ {Z_{π_2^(k)(1,1)}, …, Z_{π_2^(k)(p,q)}}
    × sgn(Z_{1,1} ⋯ Y_{1,s} ⋯ Y_{m,s} ⋯ Z_{r,t} \ {Z_{π_2^(k)(1,1)}, …, Z_{π_2^(k)(p,q)}}, Z_{π_2^(k)(1,1)}, …, Z_{π_2^(k)(p,q)})    (4.11)
4.1. THE ENLARGEMENT THEOREM
117
By hypothesis π_i^(k) : a_{l,s} → X_{m,s} iff π_i : a_l → X_m, so the covector X_{j,s} appears in the extensor of 4.10 iff the associated X_j appears in the extensor of 4.8, and similarly for 4.11 and 4.9. Also, Z_{π_1^(k)(j,s)} = X_{l,s} for some covector X_{l,s} iff X_{π_1(j)} = X_l for the associated X_l. Hence X_{j,s} or Y_{j,s} appears in the sign term of 4.10 or 4.11 iff the associated X_j or Y_j appears in the sign term of 4.8 or 4.9. The extensors and sign terms of 4.10 and 4.11 are of the same length as sequences, and since the extensor

Z_{1,1} ⋯ Y_{1,s} ⋯ Y_{m,s} ⋯ Z_{r,t} \ {Z_{π_2^(k)(1,1)}, …, Z_{π_2^(k)(p,q)}}    (4.12)

appears as both the extensor of 4.11 and in its corresponding sign term, we may simultaneously reorder the covectors of 4.12 in both without change of sign to 4.3, so that in position i of the extensors (and sign terms) of both 4.10 and 4.11 the covector Z_{j,l} occurs identically if l ≠ s, and if l = s and X_{j,s} appears in position i of 4.10 then in position i of 4.11 the covector Y_{j,s} appears, where the associated X_j and Y_j appear in the same position of the extensors of 4.8 and 4.9 respectively. Let 4.10 and 4.11 now denote the reordered versions of extensor and corresponding sign term. To compute the left side of 4.3, transpose the covectors X_{1,s} ⋯ X_{m,s} \ {Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)}} and Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)}
to the far right in 4.10 to obtain

sgn(Z_{1,1} ⋯ Z_{r,t} \ {Z_{π_1^(k)(1,1)}, …, Z_{π_1^(k)(p,q)}}, Z_{π_1^(k)(1,1)}, …, Z_{π_1^(k)(p,q)},
    X_{1,s} ⋯ X_{m,s} \ {Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)}}, Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)})    (4.13)

and in an identical number of transpositions 4.11 may be written

sgn(Z_{1,1} ⋯ Z_{r,t} \ {Z_{π_2^(k)(1,1)}, …, Z_{π_2^(k)(p,q)}}, Z_{π_2^(k)(1,1)}, …, Z_{π_2^(k)(p,q)},
    Y_{1,s} ⋯ Y_{m,s} \ {Z_{π_2^(k)(1,s)}, …, Z_{π_2^(k)(l,s)}}, Z_{π_2^(k)(1,s)}, …, Z_{π_2^(k)(l,s)})    (4.14)
Since π_1^(k)(a_{i,l}) = π_2^(k)(a_{i,l}) if l ≠ s, the terms Z_{1,1} ⋯ Z_{r,t} \ {Z_{π_i^(k)(1,1)}, …, Z_{π_i^(k)(p,q)}}, i = 1, 2, are identical in 4.13 and 4.14. Since the associated covectors to

X_{1,s} ⋯ X_{m,s} \ {Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)}}, Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)}    (4.15)

are identical in content and order to X_1 ⋯ X_m \ {X_{π_1(1)}, …, X_{π_1(l)}}, X_{π_1(1)}, …, X_{π_1(l)} (with X_{π_1(i)} associated to Z_{π_1^(k)(i,s)}), and similarly for

Y_{1,s} ⋯ Y_{m,s} \ {Z_{π_2^(k)(1,s)}, …, Z_{π_2^(k)(l,s)}}, Z_{π_2^(k)(1,s)}, …, Z_{π_2^(k)(l,s)}    (4.16)
the number of transpositions required to order 4.15 and 4.16 as in 4.6 and 4.7 is identical to the number of transpositions required to linearly order the associated covectors of 4.8 and 4.9 as in 4.4 and 4.5, and we may write

sgn(X_{1,s} ⋯ X_{m,s} \ {Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)}}, Z_{π_1^(k)(1,s)}, …, Z_{π_1^(k)(l,s)})
    × sgn(Y_{1,s} ⋯ Y_{m,s} \ {Z_{π_2^(k)(1,s)}, …, Z_{π_2^(k)(l,s)}}, Z_{π_2^(k)(1,s)}, …, Z_{π_2^(k)(l,s)})    (4.17)
= sgn(X_1 ⋯ X_m \ {X_{π_1(1)}, …, X_{π_1(l)}}, X_{π_1(1)}, …, X_{π_1(l)})
    × sgn(Y_1 ⋯ Y_m \ {Y_{π_2(1)}, …, Y_{π_2(l)}}, Y_{π_2(1)}, …, Y_{π_2(l)})    (4.18)

After reordering the sets of covectors of 4.17 according to 4.6 and 4.7 to obtain X_{1,s}, …, X_{m,s} and Y_{1,s}, …, Y_{m,s}, and the sets in 4.18 to obtain X_1, …, X_m and Y_1, …, Y_m, the latter sets are in original order. Since the positions T of covectors with subscript s are identical in 4.6 and 4.7 we have

sgn(Z_{1,1}, …, Z_{r,t} \ {X_{1,s}, …, X_{m,s}}, X_{1,s}, …, X_{m,s}) × sgn(Z_{1,1}, …, X_{1,s} ⋯ X_{m,s}, …, Z_{r,t})
= sgn(Z_{1,1}, …, Z_{r,t} \ {Y_{1,s}, …, Y_{m,s}}, Y_{1,s}, …, Y_{m,s}) × sgn(Z_{1,1}, …, Y_{1,s} ⋯ Y_{m,s}, …, Z_{r,t})    (4.19)

and the Lemma follows. □
Lemma 4.7 Let A_i, i = 1, …, 4, each be ordered sets of covectors with C(A_i) ⊆ X for i = 1, …, 4, with |A_1| = |A_3| and |A_2| = |A_4|. Let A_i^(k), i = 1, …, 4, be ordered sets of covectors with C(A_i^(k)) ⊆ X^(k), with |A_1^(k)| = |A_3^(k)| and |A_2^(k)| = |A_4^(k)|. Suppose that the ordered sets A_1^(k) (A_2^(k)) and A_3^(k) (A_4^(k)) are identical except in a set S (T) of common positions which are occupied by covectors X_{j,s} having second subscript s, for some 1 ≤ s ≤ k. Suppose also that A_i, for i = 1, …, 4, is the ordered set of associated covectors of the subset of A_i^(k) with second subscript s. Then

sgn(∧A_1^(k) ∨ ∧A_2^(k)) × sgn(∧A_3^(k) ∨ ∧A_4^(k)) = sgn(∧A_1 ∨ ∧A_2) × sgn(∧A_3 ∨ ∧A_4)    (4.20)
PROOF.

Choose representations

∧A_1 = X_1 ∧ ⋯ ∧ X_l,    ∧A_3 = Y_1 ∧ ⋯ ∧ Y_l    (4.21)

∧A_2 = Z_1 ∧ ⋯ ∧ Z_m,    ∧A_4 = W_1 ∧ ⋯ ∧ W_m    (4.22)

∧A_1^(k) = U_{1,1} ⋯ X_{1,s} ⋯ X_{l,s} ⋯ U_{p,k}    (4.23)

∧A_3^(k) = U_{1,1} ⋯ Y_{1,s} ⋯ Y_{l,s} ⋯ U_{p,k}    (4.24)

∧A_2^(k) = V_{1,1} ⋯ Z_{1,s} ⋯ Z_{m,s} ⋯ V_{q,k}    (4.25)

∧A_4^(k) = V_{1,1} ⋯ W_{1,s} ⋯ W_{m,s} ⋯ V_{q,k}    (4.26)

where the positions of X_{j,s} and Y_{j,s} are identical, as are the positions of Z_{i,s} and W_{i,s}. In computing the joins ∧A_1^(k) ∨ ∧A_2^(k) and ∧A_3^(k) ∨ ∧A_4^(k), since the sets S (and T) are identical for A_1^(k) (A_2^(k)) and A_3^(k) (A_4^(k)), we may simultaneously transpose the covectors {X_{i,s}}_{i=1,…,l}, {Y_{i,s}}_{i=1,…,l}, {Z_{i,s}}_{i=1,…,m}, {W_{i,s}}_{i=1,…,m} to the far right in 4.23–4.26 without affecting the right-hand side of 4.20. We compute

∧A_1^(k) ∨ ∧A_2^(k) = U_{1,1} ⋯ U_{p,k} X_{1,s} ⋯ X_{l,s} ∨ V_{1,1} ⋯ V_{q,k} Z_{1,s} ⋯ Z_{m,s}    (4.27)

∧A_3^(k) ∨ ∧A_4^(k) = U_{1,1} ⋯ U_{p,k} Y_{1,s} ⋯ Y_{l,s} ∨ V_{1,1} ⋯ V_{q,k} W_{1,s} ⋯ W_{m,s}    (4.28)

By splitting the extensors on the right of 4.27 and 4.28, the Lemma easily follows. □
To complete the proof of step 4, we show recursively that for any s = 1, …, k, and type I Q^(k) ⊆ P^(k) with corresponding Q ⊆ P,

sgn(E(Q^(k))|ρ_s^(k)) × sgn(E(Q^(k))|ρ_{s−1}^(k)) = sgn(E(Q)|σ_s) × sgn(E(Q)|π)    (4.29)

Any non-trivial type I Arguesian P contains a basic extensor Q = e = a_1 ⋯ a_l ∨ X_1 ⋯ X_m, and thus P^(k) contains a basic extensor Q^(k) = e^(k) = a_{1,1} ⋯ a_{l,k} ∨ X_{1,1} ⋯ X_{m,k}. Setting ∨A_1 = ∨A_2 = a_1 ⋯ a_l, ∧B_1 = ∧B_2 = X_1 ⋯ X_m, ∨A_1^(k) = ∨A_2^(k) = a_{1,1} ⋯ a_{l,k}, ∧B_1^(k) = ∧B_2^(k) = X_{1,1} ⋯ X_{m,k}, we have that ρ_s^(k) and ρ_{s−1}^(k) satisfy the hypothesis of π_1^(k), π_2^(k) of Lemma 4.6 (or its dual) with respect
to σ_s and π. Thus sgn(E(e^(k))|ρ_s^(k)) × sgn(E(e^(k))|ρ_{s−1}^(k)) = sgn(E(e)|σ_s) × sgn(E(e)|π). By the reordering property on covectors in the proof of Lemma 4.6, X_{j,l} occurs identically in ext(E(e^(k))|ρ_s^(k)) and ext(E(e^(k))|ρ_{s−1}^(k)) in position i if l ≠ s, and if l = s and X_{j,s} appears in position i of ext(E(e^(k))|ρ_s^(k)) then Y_{n,s} appears in position i of ext(E(e^(k))|ρ_{s−1}^(k)), where the associated X_j and Y_n occur in identical positions of ext(E(e)|σ_s) and ext(E(e)|π). Inductively, there are three cases:
Case 1) Q^(k) = R^(k) ∨ S^(k) where both R^(k) and S^(k) are type I. By induction the result holds for R^(k) and S^(k), and ext(E(R^(k))|ρ_s^(k)), ext(E(R^(k))|ρ_{s−1}^(k)) satisfy the reordering property with respect to ext(E(R)|σ_s) and ext(E(R)|π), and similarly for S^(k) and S. Setting ext(E(R^(k))|ρ_s^(k)) = ∧A_1^(k), ext(E(S^(k))|ρ_s^(k)) = ∧A_2^(k), ext(E(R^(k))|ρ_{s−1}^(k)) = ∧A_3^(k), ext(E(S^(k))|ρ_{s−1}^(k)) = ∧A_4^(k), while ext(E(R)|σ_s) = ∧A_1, ext(E(S)|σ_s) = ∧A_2, ext(E(R)|π) = ∧A_3, ext(E(S)|π) = ∧A_4, the hypotheses of Lemma 4.7 are satisfied. Then

sgn(E(Q^(k))|ρ_s^(k)) × sgn(E(Q^(k))|ρ_{s−1}^(k))    (4.30)

= sgn(E(R^(k) ∨ S^(k))|ρ_s^(k)) × sgn(E(R^(k) ∨ S^(k))|ρ_{s−1}^(k))    (4.31)

= sgn(E(R^(k))|ρ_s^(k)) × sgn(E(S^(k))|ρ_s^(k)) × sgn(E(R^(k))|ρ_{s−1}^(k)) × sgn(E(S^(k))|ρ_{s−1}^(k))
    × sgn(E(R^(k))|ρ_s^(k) ∨ E(S^(k))|ρ_s^(k)) × sgn(E(R^(k))|ρ_{s−1}^(k) ∨ E(S^(k))|ρ_{s−1}^(k))    (4.32)
By induction hypothesis,

sgn(E(R^(k))|ρ_s^(k)) × sgn(E(R^(k))|ρ_{s−1}^(k)) = sgn(E(R)|σ_s) × sgn(E(R)|π)    (4.33)

and similarly for S, S^(k). An application of Lemma 4.7 gives

sgn(E(R^(k))|ρ_s^(k) ∨ E(S^(k))|ρ_s^(k)) × sgn(E(R^(k))|ρ_{s−1}^(k) ∨ E(S^(k))|ρ_{s−1}^(k))    (4.34)
= sgn(E(R)|σ_s ∨ E(S)|σ_s) × sgn(E(R)|π ∨ E(S)|π)

and so 4.32 is equal to

sgn(E(R)|σ_s) × sgn(E(R)|π) × sgn(E(S)|σ_s) × sgn(E(S)|π)
    × sgn(E(R)|σ_s ∨ E(S)|σ_s) × sgn(E(R)|π ∨ E(S)|π)    (4.35)

= sgn(E(R ∨ S)|σ_s) × sgn(E(R ∨ S)|π) = sgn(E(Q)|σ_s) × sgn(E(Q)|π)    (4.36)
Hence 4.29 is satisfied, and by the remarks in Lemma 4.7 the reordering property holds on the covectors of ext(E(Q^(k))|ρ_s^(k)) and ext(E(Q^(k))|ρ_{s−1}^(k)) with respect to ext(E(Q)|σ_s), ext(E(Q)|π).

Case 2) Q^(k) = R^(k) ∨ S^(k) is type I (or equivalently Q^(k) = R^(k) ∧ S^(k) is type II), where R^(k) is type II and S^(k) is type I. By induction the result holds for R^(k) and S^(k), and ext(E(R^(k))|ρ_s^(k)), ext(E(R^(k))|ρ_{s−1}^(k)) satisfy the reordering property with respect to ext(E(R)|σ_s) and ext(E(R)|π), and similarly for S^(k) and S. Assume Q^(k) is type I. Setting ext(E(R^(k))|ρ_s^(k)) = ∨A_1^(k), ext(E(S^(k))|ρ_s^(k)) = ∧B_1^(k), ext(E(R^(k))|ρ_{s−1}^(k)) = ∨A_2^(k), ext(E(S^(k))|ρ_{s−1}^(k)) = ∧B_2^(k), while ext(E(R)|σ_s) = ∨A_1, ext(E(S)|σ_s) = ∧B_1, ext(E(R)|π) = ∨A_2, ext(E(S)|π) = ∧B_2, the hypotheses of Lemma 4.7 are satisfied with π_1^(k) = ρ_s^(k), π_2^(k) = ρ_{s−1}^(k), π_1 = σ_s and π_2 = π. Then the sequence of calculations from 4.30 to 4.36 is identical, and the reordering property holds on the covectors (or vectors if Q^(k) is type II) of ext(E(Q^(k))|ρ_s^(k)) and ext(E(Q^(k))|ρ_{s−1}^(k)) with respect to ext(E(Q)|σ_s), ext(E(Q)|π).

Case 3) Q^(k) = R^(k) ∧ S^(k) where both R^(k) and S^(k) are type I. By induction the result holds for R^(k) and S^(k), and ext(E(R^(k))|ρ_s^(k)), ext(E(R^(k))|ρ_{s−1}^(k)) satisfy the reordering property with respect to ext(E(R)|σ_s) and ext(E(R)|π), and similarly for S^(k) and S. Since ext(E(Q^(k))|ρ_s^(k)) is the meet of covectors of ext(E(R^(k))|ρ_s^(k)) and ext(E(S^(k))|ρ_s^(k)), each satisfying the reordering property by induction, ext(E(Q^(k))|ρ_s^(k)) and ext(E(Q^(k))|ρ_{s−1}^(k)) satisfy the reordering property with respect to ext(E(Q)|σ_s) and ext(E(Q)|π). No additional sign is computed in this case. If the step of Q^(k) is full, R^(k) ∧ S^(k) ≡ R^(k) ∨ S^(k) and this case agrees with Case 1. □
We now complete the proof. Since P ≡ Q we may without loss of generality assume that for every transversal π, sgn(E(P)|π) = sgn(E(Q)|π). Any transversal σ^(k) of P^(k) (also of Q^(k)) may be converted to σ̂^(k) by steps 2 and 3. Since by Lemma 3.16 each transposition is sign reversing in any Arguesian polynomial,

sgn(E(P^(k))|σ^(k)) × sgn(E(Q^(k))|σ̂^(k)) = sgn(E(P^(k))|σ̂^(k)) × sgn(E(Q^(k))|σ^(k))    (4.37)

The sequence σ̂^(k) = ρ_0^(k), ρ_1^(k), …, ρ_k^(k) = π^(k) is a sequence of transversals such that for s = 1, …, k,

sgn(E(P^(k))|ρ_s^(k)) × sgn(E(P^(k))|ρ_{s−1}^(k)) = sgn(E(P)|σ_s) × sgn(E(P)|π)    (4.38)
= sgn(E(Q)|σ_s) × sgn(E(Q)|π) = sgn(E(Q^(k))|ρ_s^(k)) × sgn(E(Q^(k))|ρ_{s−1}^(k))
since P ≡ Q, applying step 4 and regarding σ^(k) as a transversal of Q^(k). By repeated application of 4.38, sgn(E(P^(k))|σ̂^(k)) × sgn(E(P^(k))|π^(k)) = sgn(E(Q^(k))|σ̂^(k)) × sgn(E(Q^(k))|π^(k)), and then by 4.37,

sgn(E(P^(k))|π^(k)) × sgn(E(P^(k))|σ^(k))    (4.39)
= sgn(E(Q^(k))|σ^(k)) × sgn(E(Q^(k))|π^(k))

Given any π of P, the canonical π̂^(k) satisfies sgn(E(P^(k))|π̂^(k)) = (−1)^k sgn(E(P)|π). Then by step 1,

sgn(E(P^(k))|π̂^(k)) × sgn(E(Q^(k))|π̂^(k))    (4.40)
= (−1)^k sgn(E(P)|π) × (−1)^k sgn(E(Q)|π) = 1,

and the Theorem is proved. □
4.2  Examples

Example 4.8 The step 6 Arguesian polynomial

((a ∨ ADF) ∧ (b ∨ ACE)) ∨ ((c ∨ AEF) ∧ (d ∨ BCD)) ∨ ((e ∨ BCE) ∧ (f ∨ BDF))    (4.41)

may be enlarged by 3, replacing a_i → a_{i,1} ∨ a_{i,2} ∨ a_{i,3} and X_j → X_{j,1} ∧ X_{j,2} ∧ X_{j,3}, giving a step 18 polynomial

((a_1a_2a_3 ∨ A_1A_2A_3D_1D_2D_3F_1F_2F_3) ∧ (b_1b_2b_3 ∨ A_1A_2A_3C_1C_2C_3E_1E_2E_3))    (4.42)
∨ ((c_1c_2c_3 ∨ A_1A_2A_3E_1E_2E_3F_1F_2F_3) ∧ (d_1d_2d_3 ∨ B_1B_2B_3C_1C_2C_3D_1D_2D_3))
∨ ((e_1e_2e_3 ∨ B_1B_2B_3C_1C_2C_3E_1E_2E_3) ∧ (f_1f_2f_3 ∨ B_1B_2B_3D_1D_2D_3F_1F_2F_3))

Let π be the transversal of P given by π : a → A, b → C, c → F, d → D, e → E, f → B. Then the canonical transversal π^(k) of P^(k) is:

a_1 → A_1,  b_1 → C_1,  c_1 → F_1,  d_1 → D_1,  e_1 → E_1,  f_1 → B_1,
a_2 → A_2,  b_2 → C_2,  c_2 → F_2,  d_2 → D_2,  e_2 → E_2,  f_2 → B_2,
a_3 → A_3,  b_3 → C_3,  c_3 → F_3,  d_3 → D_3,  e_3 → E_3,  f_3 → B_3.
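As a small computational aside (our own sketch, not part of the thesis), the canonical enlarged transversal can be generated mechanically: each assignment a → A of the base transversal π is simply replicated over the k copies, a_i → A_i for i = 1, …, k.

```python
# Hypothetical helper (not from the thesis): build the canonical enlarged
# transversal pi^(k) of Example 4.8 from the base transversal pi.

def enlarge_transversal(pi, k):
    """Map each copy (vector, i) to (covector, i), for i = 1..k."""
    return {(v, i): (X, i) for v, X in pi.items() for i in range(1, k + 1)}

pi = {'a': 'A', 'b': 'C', 'c': 'F', 'd': 'D', 'e': 'E', 'f': 'B'}
pi3 = enlarge_transversal(pi, 3)
print(len(pi3))          # 18 assignments for the step 18 polynomial
print(pi3[('c', 2)])     # ('F', 2)
```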
Let σ^(k) be the transversal of P^(k) given by

a_1 → A_2,  a_2 → A_3,  a_3 → D_1,
b_1 → E_2,  b_2 → C_1,  b_3 → A_1,
c_1 → F_3,  c_2 → E_1,  c_3 → F_1,
d_1 → C_3,  d_2 → D_3,  d_3 → D_2,
e_1 → C_2,  e_2 → E_3,  e_3 → B_1,
f_1 → F_2,  f_2 → B_3,  f_3 → B_2.
Then σ^(k) factors as σ_1^(k), σ_2^(k), σ_3^(k):

σ_1^(k) : a_1 → A_2,  b_1 → E_2,  c_1 → F_3,  d_1 → C_3,  e_1 → C_2,  f_1 → F_2,
σ_2^(k) : a_2 → A_3,  b_2 → C_1,  c_2 → E_1,  d_2 → D_3,  e_2 → E_3,  f_2 → B_3,
σ_3^(k) : a_3 → D_1,  b_3 → A_1,  c_3 → F_1,  d_3 → D_2,  e_3 → B_1,  f_3 → B_2,
which may be converted by sign alternating transpositions to a new set of partial mappings σ̂_1^(k), σ̂_2^(k), σ̂_3^(k) forming the transversal σ̂^(k), in which each σ̂_s^(k) matches vectors and covectors with common second subscript s (for instance f_1 → F_1, f_2 → B_2, f_3 → B_3). Then ρ_1^(k) is obtained from σ̂^(k) by reassigning the covectors with second subscript 1 according to π^(k), ρ_2^(k) by further reassigning those with second subscript 2, and finally ρ_3^(k) = π^(k).
The following counterexample shows that non-multilinear identities in general do not enlarge.

Example 4.9 An alternate non-multilinear identity for Bricard's Theorem in a Grassmann-Cayley algebra of step 3 is the following:

[[A, B, C, D]]((a ∨ BC) ∧ bc) ∨ ((b ∨ AC) ∧ ac) ∨ ((c ∨ AB) ∧ ab)
= [a, b, c, d]((bc ∧ A) ∨ BC) ∧ ((ac ∧ B) ∨ AC) ∧ ((ab ∧ C) ∨ AB)

which may be verified by directly expanding either side to the sixteen terms:

+[c, B][b, A][c, C][a, B][b, A][a, C] − [c, B][b, A][c, C][a, B][a, A][b, C]
−[c, B][b, A][a, C][c, B][b, A][a, C] + [c, B][b, A][a, C][c, B][a, A][b, C]
−[c, C][b, A][c, A][a, B][b, B][a, C] + [c, C][b, A][c, A][a, B][a, B][b, C]
+[c, C][b, A][a, A][c, B][b, C][a, C] + [c, C][b, A][a, A][c, B][a, B][b, C]
−[b, B][c, A][c, C][a, B][b, A][a, C] + [b, B][c, A][c, C][a, B][a, A][b, C]
+[b, B][c, A][a, C][c, B][b, A][a, C] − [b, B][c, A][a, C][c, B][a, A][b, C]
+[b, C][c, A][c, A][a, B][b, B][a, C] − [b, C][c, A][c, A][a, B][a, B][b, C]
−[b, C][c, A][a, A][c, B][b, B][a, C] + [b, C][c, A][a, A][c, B][a, B][b, C]
Enlarging this identity, replacing each vector a_i by the line a_i^(2) = a_{i,1} ∨ a_{i,2} and each line X_j by the coline X_j^(2) = X_{j,1} ∧ X_{j,2}, we obtain a candidate identity in step 6 which is equivalent to

[[A, B, C, D, E, F]]((ab ∨ CDEF) ∧ cdef) ∨ ((cd ∨ ABEF) ∧ abef) ∨ ((ef ∨ ABCD) ∧ abcd)    (4.43)
= [a, b, c, d, e, f]((cdef ∧ AB) ∨ CDEF) ∧ ((abef ∧ CD) ∨ ABEF) ∧ ((abcd ∧ EF) ∨ ABCD)

Denoting the right hand side in repeated variables as

((c_1d_1e_1f_1 ∧ A_1B_1) ∨ C_1D_1E_1F_1) ∧ ((a_1b_1e_2f_2 ∧ C_2D_2) ∨ A_2B_2E_2F_2)    (4.44)
∧ ((a_2b_2c_2d_2 ∧ E_3F_3) ∨ A_3B_3C_3D_3)

we may easily construct a transversal π of 4.44 in which π : d_1 → C_1 and π : d_2 → D_3, with π : f_1 → F_1, a_1 → B_2, f_2 → E_2, b_2 → A_3. Any unassigned vector may then be mapped to one of the covectors C_2, D_2 or E_3, F_3. However, any assignment of the vectors d to the covectors C, D on the left hand side of 4.43 must involve the vectors d from the expressions ((ab ∨ CDEF) ∧ cdef) and ((ef ∨ ABCD) ∧ abcd). In any non-zero term of the left side of 4.43 both these repeated vectors may not simultaneously appear in a transversal.
Problem. The Pappus identity of Chapter 1 may be written as a non-multilinear Arguesian identity

((a ∨ AC) ∧ (b ∨ BC)) ∨ ((b ∨ AB) ∧ (c ∨ AC)) ∨ ((a ∨ AB) ∧ (c ∨ BC))    (4.45)
= ((a ∨ AC) ∧ B) ∨ ((b ∨ AB) ∧ (c ∨ AC)) ∨ (ab ∧ (c ∨ BC))

as can be verified by direct expansion, and by specifying the covectors A, B, C to a basis of vectors a'b'c'. Does this identity enlarge, giving a higher-dimensional analog of Pappus' Theorem?
[Figure 4.2 shows a 9 × 9 incidence matrix with rows labelled by the vectors a–i and columns by the covectors A–I; an X marks each covector occurring in the basic extensor of a row.]

Figure 4.2: The matrix representation of two polynomials P ≢ Q.
The enlargement theorem depends both on the equality of the sets of transversals of P and Q and on the signs of these transversals. The following example shows two Arguesian polynomials, a type I P and a type II Q, having the same transversals yet with signs which are not uniformly equal or opposite.

Example 4.10 The matrix M of Figure 4.2 represents the associated graph B_P = B_Q of two Arguesian polynomials P, Q with

P = ((a ∨ BDI) ∧ (b ∨ BEG) ∧ (c ∨ AFH))
    ∨ ((d ∨ CHI) ∧ (e ∨ BEF) ∧ (f ∨ ADG)) ∨ ((g ∨ CEG) ∧ (h ∨ CDI) ∧ (i ∨ AFH)),

Q = ((cfi ∧ A) ∨ (abe ∧ B) ∨ (dgh ∧ C))
    ∧ ((afh ∧ D) ∨ (beg ∧ E) ∨ (cei ∧ F)) ∧ ((bfg ∧ G) ∨ (cdi ∧ H) ∨ (adh ∧ I)).

Each row i of M corresponds to a type I basic extensor e_i formed by joining the vector labelling row i with the meet of the covectors whose labels X_j have X's in row i. The basic extensors {e_i} are combined by interpreting the unbolded row lines as ∧ and the bolded ones as ∨. The columns and column lines are interpreted dually. The polynomials P and Q have the same transversals, as may be verified from M. However P ≢ Q, as the transversal π : a → B, b → E, c → A, d → I, e → F, f → G, g → C, h → D, i → H has sgn(E(P)|π) = sgn(E(Q)|π) = +1, while the transversal σ : a → I, b → B, c → F, d → H, e → E, f → G, g → C, h → D, i → A has sgn(E(P)|σ) = +1 and sgn(E(Q)|σ) = −1.
[Figure 4.3 shows an incidence matrix with rows labelled by the vectors a–f and columns by the covectors A–F, with certain entries marked to select a non-zero term.]

Figure 4.3: A non-zero term of an identity P ≡ Q.

4.3  Geometry
Theorem 4.1 suggests that each Arguesian polynomial has a minimal reduced form in which the uniform occurrence of the join of distinct vectors a_{i,1} ⋯ a_{i,l} is replaced by the vector a_i, and the meet of a set of covectors X_{j,1} ⋯ X_{j,m} is replaced by the covector X_j, subject to step requirements. We conjecture:

Conjecture 4.11 Let P ≡ Q be an identity of non-zero Arguesian polynomials P(a, X), Q(a, X) of arbitrary type in a Grassmann-Cayley algebra of step n, and let a^(m) and X^(m) be enlarged alphabets of vectors and covectors of common size m in a GC algebra of step m. Let (k_1, …, k_n) and (l_1, …, l_n) be two sequences of integers with Σ_{i=1}^{n} k_i = Σ_{j=1}^{n} l_j = m. For every a_i ∈ a, X_j ∈ X occurring in P and Q formally substitute a_i → a_{i,1} ∨ ⋯ ∨ a_{i,k_i} and X_j → X_{j,1} ∧ ⋯ ∧ X_{j,l_j}, and let P^(m) and Q^(m) be the resulting Arguesian polynomials. Then if P^(m) and Q^(m) have the same transversals,

P ≡ Q  ⟺  P^(m) ≡ Q^(m)

Conjecture 4.11 suggests:

Conjecture 4.12 Geometrically, all Arguesian identities P ≡ Q with P multilinear in vectors and Q multilinear in covectors are consequences of the n-th higher-order Arguesian Laws.
Example 4.13 The identity 4.46, whose matrix representation is shown in Figure 4.3, is a valid identity, as follows by direct expansion of both sides yielding, in GC(6):

[a, b, c, d, e, f]^2 ((a ∨ ADF) ∧ (b ∨ ACE)) ∨ ((c ∨ AEF) ∧ (d ∨ BCD))    (4.46)
∨ ((e ∨ BCE) ∧ (f ∨ BDF)) =
+ [a, A][b, C][c, E][d, D][e, B][f, F] − [a, A][b, C][c, F][d, D][e, E][f, B]
− [a, A][b, E][c, F][d, C][e, B][f, D] + [a, A][b, E][c, F][d, D][e, C][f, B]
− [a, D][b, A][c, E][d, C][e, B][f, F] + [a, D][b, A][c, F][d, C][e, E][f, B]
+ [a, F][b, A][c, E][d, C][e, B][f, D] − [a, F][b, A][c, E][d, D][e, C][f, B]
= [[A, B, C, D, E, F]]^2 ((abc ∧ A) ∨ (def ∧ B)) ∧ ((bde ∧ C) ∨ (adf ∧ D)) ∧ ((bce ∧ E) ∨ (acf ∧ F))
Geometrically, the identity 4.46 interprets as

Theorem 4.14 In a five-dimensional projective space, the lines

cb'c'd' ∩ da'e'f',    ab'c'e' ∩ bb'd'f',    ea'd'f' ∩ fa'c'e'

lie on a common four-dimensional hyperplane iff the three solids, formed by the span of the lines abc ∩ b'c'd'e'f' and def ∩ a'c'd'e'f', the span of the lines bde ∩ a'b'd'e'f' and adf ∩ a'b'c'e'f', and the span of the lines bce ∩ a'b'c'd'f' and acf ∩ a'b'c'd'e', contain a common point.
By Theorem 4.1 we may enlarge the identity 4.46 to the following identity, valid in a GC algebra of step 18:

[a^(3), b^(3), c^(3), d^(3), e^(3), f^(3)]^2 ((a^(3) ∨ A^(3)D^(3)F^(3)) ∧ (b^(3) ∨ A^(3)C^(3)E^(3)))    (4.47)
∨ ((c^(3) ∨ A^(3)E^(3)F^(3)) ∧ (d^(3) ∨ B^(3)C^(3)D^(3))) ∨ ((e^(3) ∨ B^(3)C^(3)E^(3)) ∧ (f^(3) ∨ B^(3)D^(3)F^(3)))
= [[A^(3), B^(3), C^(3), D^(3), E^(3), F^(3)]]^2 ((a^(3)b^(3)c^(3) ∧ A^(3)) ∨ (d^(3)e^(3)f^(3) ∧ B^(3)))
∧ ((b^(3)d^(3)e^(3) ∧ C^(3)) ∨ (a^(3)d^(3)f^(3) ∧ D^(3))) ∧ ((b^(3)c^(3)e^(3) ∧ E^(3)) ∨ (a^(3)c^(3)f^(3) ∧ F^(3)))

The identity 4.47 may be given a geometric interpretation in 17-dimensional projective space, which we leave to the reader.
Example 4.15 The identity for Desargues' Theorem in the plane

[a, b, c](a ∨ BC) ∧ (b ∨ AC) ∧ (c ∨ AB) = [[A, B, C]](bc ∧ A) ∨ (ac ∧ B) ∨ (ab ∧ C)

may be enlarged to obtain a set of consequences of the Arguesian law

[a^(k), b^(k), c^(k)](a^(k) ∨ B^(k)C^(k)) ∧ (b^(k) ∨ A^(k)C^(k)) ∧ (c^(k) ∨ A^(k)B^(k))
≡ [[A^(k), B^(k), C^(k)]](b^(k)c^(k) ∧ A^(k)) ∨ (a^(k)c^(k) ∧ B^(k)) ∨ (a^(k)b^(k) ∧ C^(k))

as verified in Chapter 2.
Example 4.16 The third of the planar identities of Chapter 2

[a, b, c]((a ∨ BC) ∧ A) ∨ b ∨ (C ∧ (c ∨ AB)) = [[A, B, C]](((ab ∧ C) ∨ c) ∧ B) ∨ (a ∨ (bc ∧ A))

may be enlarged by a factor of 2 to obtain a new identity

[a^(2), b^(2), c^(2)]((a^(2) ∨ B^(2)C^(2)) ∧ A^(2)) ∨ b^(2) ∨ (C^(2) ∧ (c^(2) ∨ A^(2)B^(2)))
≡ [[A^(2), B^(2), C^(2)]](((a^(2)b^(2) ∧ C^(2)) ∨ c^(2)) ∧ B^(2)) ∨ (a^(2) ∨ (b^(2)c^(2) ∧ A^(2)))

This new identity may be rewritten in extensors of step 2 — lines — alone, as an identity in five-dimensional projective space:

(a^(2)a'^(2) ∧ b'^(2)c'^(2)) ∨ b^(2) ∨ (a'^(2)b'^(2) ∧ c^(2)c'^(2))    (4.48)
≡ ((a^(2)b^(2) ∧ a'^(2)b'^(2)) ∨ c^(2)) ∧ a'^(2)c'^(2) ∧ (a^(2) ∨ (b^(2)c^(2) ∧ b'^(2)c'^(2)))

which interprets to give the following theorem.
Theorem 4.17 In a five-dimensional projective space, let a^(2), b^(2), c^(2) and a'^(2), b'^(2), c'^(2) be lines. The two solids, formed by the spans of a^(2) ∪ a'^(2) and of b'^(2) ∪ c'^(2), are intersected to form a line l_1. Similarly, let the line determined by the intersection of the two solids a'^(2) ∪ b'^(2) and c^(2) ∪ c'^(2) be denoted l_2. Let the line l_3 be b^(2). Also, let the three solids S_1, S_2, S_3 be determined as follows: S_1 is the span of the line c^(2) with the intersection of the solids a^(2) ∪ b^(2) and a'^(2) ∪ b'^(2); S_3 is the span of the line a^(2) with the intersection of the solids b^(2) ∪ c^(2) and b'^(2) ∪ c'^(2); while S_2 is the solid which is the span of the lines a'^(2) and c'^(2). Then the lines l_1, l_2, l_3 lie on a common four-dimensional hyperplane if and only if the three solids S_1, S_2, S_3 contain a common point.
Chapter 5

The Linear Construction of Plane Curves

Why should one study Pappian geometry? To this question, put by enthusiasts for ternary rings, I would reply that the classical projective plane is an easy first step. The theory of conics is beautiful in itself and provides a natural introduction to algebraic geometry.

Projective Geometry, H.S.M. Coxeter, 1963
In this final chapter we discuss another application of Arguesian polynomials to projective geometry. While the results below apply similarly to quadric and higher dimensional projective surfaces, we shall concentrate primarily on the projective plane. The basic notion is that an Arguesian polynomial P(a, X), with a and X now of arbitrary size, and with a certain number of vectors replaced by variable vectors, represents, upon setting P(a, x, X) = 0, the locus of a plane curve. In general, given P(a, x, X) = 0 it is natural to ask whether this curve can, by suitable choice of the fixed vectors and covectors in a and X, represent an arbitrary plane curve of given order. This was first investigated by Grassmann [Grall] in giving certain forms representing the conic and cubic curve, and later by Whitehead [Whi97]. The results below represent a significant simplification, in the language of the Grassmann-Cayley algebra, of this work. We develop this theory in the context of the Grassmann-Cayley algebra, show how a certain symmetric form can be used to represent both the planar conic and cubic, and give a partial solution to the planar quartic, a problem considered difficult by the classical algebraic geometers. As a final result, we obtain a generalization to cubics of Pascal's Theorem on conics. This work may be of particular interest to those studying algorithmic computational geometry.
5.1  The Planar Conic

We begin by studying the conic. A Proposition that we use throughout is the following:

Proposition 5.1 Let P(a, x, X) denote an Arguesian polynomial in step 3 in which n distinct vectors are replaced by a common variable vector x. Then the locus of points {x : P(a, x, X) = 0} describes an n-th order projective plane curve.
PROOF.

In step 3, the vectors a represent points in the projective plane, while the covectors X represent lines. If we choose a reference frame e_1, e_2, e_3 for the projective plane, then an arbitrary point x may be expressed barycentrically as

x = α_1e_1 + α_2e_2 + α_3e_3    (5.1)

while an arbitrary line, expanding in the dual basis, may be written

X = β_1e_1e_2 + β_2e_2e_3 + β_3e_1e_3    (5.2)

Given P(a, x, X), express each vector a ∈ a and each covector X ∈ X of P in terms of the basis e_1, e_2, e_3. For each variable vector x we also write x = φ_1e_1 + φ_2e_2 + φ_3e_3. We now recursively evaluate P. If Q ∨ R ⊆ P is the join of step 1 points, with Q = Q_1e_1 + Q_2e_2 + Q_3e_3 and R = R_1e_1 + R_2e_2 + R_3e_3, then Q ∨ R is the line (Q_1R_2 − Q_2R_1)e_1e_2 + (Q_1R_3 − Q_3R_1)e_1e_3 + (Q_2R_3 − Q_3R_2)e_2e_3. Similarly, if Q ∧ R is the meet of step 2 lines, since the basis elements e_1e_2, e_2e_3, e_1e_3 meet respectively to give e_1e_2 ∧ e_2e_3 = e_2, e_1e_3 ∧ e_2e_3 = e_3, e_1e_2 ∧ e_1e_3 = e_1, Q ∧ R reduces to a linear combination f_1e_1 + f_2e_2 + f_3e_3 for constants f_1, f_2, f_3. Since the variable vector x appears homogeneously n times, upon collecting coefficients of the various powers of φ_1, φ_2, φ_3 we may write P as an n-th degree homogeneous polynomial in the variables φ_1, φ_2, φ_3:

P[e_1, e_2, e_3] = Σ_{i+j+k=n} λ_{i,j,k} φ_1^i φ_2^j φ_3^k = 0    (5.3)

Projectively we may take a basis with e_3 = 1, absorbing the constant φ_3, and hence 5.3 describes an n-th degree projective plane curve in the variables φ_1, φ_2. □
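The recursive evaluation in the proof can be checked in a few lines of code. The sketch below is our own (not the thesis's): it represents points and lines by homogeneous coordinate triples, where in GC(3) both the join of two points and the meet of two lines are given by the cross product, and verifies the claimed meets of the basis lines directly.

```python
def cross(u, v):
    # join of two points (giving a line) or meet of two lines (giving a
    # point) in homogeneous coordinates for the projective plane
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
e1e2, e2e3, e1e3 = cross(e1, e2), cross(e2, e3), cross(e1, e3)

print(cross(e1e2, e2e3))   # [0, 1, 0] = e2, as claimed in the proof
print(cross(e1e3, e2e3))   # [0, 0, 1] = e3
print(cross(e1e2, e1e3))   # [1, 0, 0] = e1
```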
The generalization of Proposition 5.1 to polynomials of higher step is evident. It is interesting to inquire whether the vectors and covectors of a given P(a, x, X) can be chosen so as to represent an arbitrary n-th order plane curve. In general, (n+2 choose 2) − 1 points uniquely determine an n-th order plane curve, as can be seen by considering the n-th degree homogeneous equation a_0x^n + a_1x^{n−1}y + ⋯ + a_{(n+2 choose 2)} = 0 and specifying one coefficient. A particularly trivial linear construction problem is the line. Any line can be represented by the expression w_x = (ax ∧ A) ∨ B = 0, as w_x vanishes for x = a, and if x = A ∧ B then expanding, ((a ∨ (A ∧ B)) ∧ A) ∨ B = [a, A](A ∧ B) ∨ B = 0. Given any line L, choose a and A ∧ B on L, with A and B otherwise arbitrary. The expression w_x is a line by Proposition 5.1; since it vanishes on the two given points, it vanishes on all points of L by construction. In the sequel, by Proposition 5.1, we shall informally refer to any Arguesian polynomial with n occurrences of a variable vector x as an n-th degree projective curve.
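A short self-contained numeric sketch (ours; the concrete coordinates are arbitrary choices, not the thesis's) confirms the line case: w_x = (ax ∧ A) ∨ B vanishes at a and at A ∧ B, and is homogeneous of degree 1 in x, as Proposition 5.1 predicts.

```python
def cross(u, v):
    # join of two points (a line) or meet of two lines (a point)
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def w(x, a, A, B):
    # w_x = (ax ^ A) v B: meet the line ax with the line A, then pair
    # the resulting point with the line B (a step 3 bracket, a scalar)
    return dot(cross(cross(x, a), A), B)

a = [1, 2, 1]
A, B = [3, -1, 2], [1, 4, -2]
p = cross(A, B)                     # the point A ^ B

print(w(a, a, A, B))                # 0: a lies on the curve
print(w(p, a, A, B))                # 0: A ^ B lies on the curve
x = [5, 1, 2]
print(w([3*t for t in x], a, A, B) == 3 * w(x, a, A, B))  # True: degree 1
```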
Theorem 5.2 An arbitrary planar conic may be represented by the expression

(xa ∧ B) ∨ c ∨ (D ∧ ex) = 0.    (5.4)
PROOF.

If a step 1 extensor p, represented as joins and meets of the vectors and covectors a, c, e and B, D, is found to vanish when substituted for x, then p represents a projective point which lies on the conic. This is immediately clear for the points a and e, as x ∨ a and e ∨ x appear in 5.4, whereas in general c is not a point on the conic, since c, ac ∧ B, and ec ∧ D need not be collinear. Next we calculate the points in which the lines B, D meet the conic. It is clear that

(xa ∧ B) ∨ c ∨ (D ∧ ex) = −c ∨ (ax ∧ B) ∨ (ex ∧ D).

Suppose that B meets the curve in a point p and let B = pq. Then ap ∧ B = ap ∧ pq = [apq]p = [a, B]p. We may therefore write

ω_p = −(c ∨ (ap ∧ B) ∨ (ep ∧ D)) = [a, B](cp ∨ (ep ∧ D)) = 0.

Hence either the bracket [a, B] = 0, or cp ∨ (ep ∧ D) = 0.
Case 1) The bracket [a, B] = 0. Then ax ∧ B = [a, B]x − [x, B]a = −[x, B]a, and 5.4 becomes

ω_x = −[x, B](ca ∨ (ex ∧ D)) = 0.

In this case either [x, B] = 0 and x lies on the line B, or ca ∨ (ex ∧ D) = 0, in which case the variable point ex ∧ D lies on the line ca, so the curve degenerates into two distinct lines. If D meets the curve in p so that D = pq, we have similarly [e, D] = 0, and the curve reduces to [x, D] = 0 and ce ∨ (ax ∧ B) = 0. Case 1 therefore corresponds to the degenerate conic of two distinct lines.

Case 2) Assume cp ∨ (ep ∧ D) = 0. Then

cp ∨ (ep ∧ D) = c ∨ p ∨ ([e, D]p − [p, D]e) = [p, D][cep] = 0

so that p lies on the line D or on the line ce. Since the curve is a conic, the line B, if in general position, intersects it twice, in B ∧ D and B ∧ ce, and this gives two additional points on the conic. Carrying through the same analysis for the line D, by the symmetry of the form we see that the points in which D intersects the curve 5.4 are B ∧ D and D ∧ ca.

We have therefore obtained the required number of points, five, namely a, e, B ∧ D, B ∧ ce, D ∧ ca, which lie on the conic, and these points are in general distinct. Label these points of intersection as g = B ∧ D, b = B ∧ ce and d = D ∧ ca. Computing the meets b = [B, e]c − [B, c]e and d = [D, a]c − [D, c]a, one may write

eb ∧ ad = [B, e]ec ∧ [D, a]ac = [B, e][D, a](ec ∧ ac) = [B, e][D, a][e, a, c]c.

Since g = B ∧ D, b = B ∧ ce, d = D ∧ ca, we have B = bg and D = dg. It is easy to see that 5.4 represents any conic: given three points a, e, g on a conic, let the two lines B, D passing through g intersect the conic in two additional points b, d. Then the five points a, b, g, d, e determine the conic, and setting c = ad ∩ be, 5.4 is a conic passing through the five points and satisfies the required relations amongst the points. In the case when the conic degenerates into two straight lines, equation 5.4 represents the conic by letting the lines be B = D and a, c, e collinear. See Figure 5.1. □
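As a numeric check of the proof (our own sketch; the coordinates are arbitrary assumptions, not the thesis's), one can evaluate the form (xa ∧ B) ∨ c ∨ (D ∧ ex) at the five points a, e, B ∧ D, B ∧ ce, D ∧ ca with exact integer arithmetic and confirm that all five lie on the conic.

```python
def cross(u, v):
    # join of two points (a line) or meet of two lines (a point)
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(p, q, r):
    # the bracket [p, q, r]: join of three points, a scalar in step 3
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def conic(x, a, c, e, B, D):
    # w_x = (xa ^ B) v c v (D ^ ex), the form of Theorem 5.2
    return det3(cross(cross(x, a), B), c, cross(D, cross(e, x)))

a, c, e = [0, 0, 1], [1, 3, 1], [2, 1, 1]
B, D = [1, -1, 1], [3, 1, -2]

g = cross(B, D)                      # B ^ D
b = cross(B, cross(c, e))            # B ^ ce
d = cross(D, cross(c, a))            # D ^ ca
for p in (a, e, g, b, d):
    print(conic(p, a, c, e, B, D))   # 0 for each of the five points
```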
An interesting consequence of Theorem 5.2 is that one obtains expressions of both Pascal's Theorem and its dual, Brianchon's Theorem. We note also the close relationship of this form of the conic to the third of the three inequivalent Arguesian polynomials in step 3.
Figure 5.1: Linear Construction of the Conic.
Theorem 5.3 (Pascal) If a hexagon is inscribed in a conic, the three pairs of opposite sides meet in collinear points.

PROOF. Substituting for b, d, g in 5.4, an arbitrary conic may be represented as

(xa ∧ bg) ∨ (eb ∧ ad) ∨ (dg ∧ ex) = 0

which vanishes precisely when x is a point on the conic. Let x be fixed. Then the points xa ∧ bg, eb ∧ ad, dg ∧ ex are collinear. □
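Pascal's theorem can likewise be checked exactly (again our own sketch, with arbitrarily chosen coordinates): construct a sixth rational point x on the conic through a, b, g, d, e by intersecting a line through a with the curve, and verify that the three points xa ∧ bg, eb ∧ ad, dg ∧ ex are collinear.

```python
from fractions import Fraction

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(p, q, r):
    return (p[0]*(q[1]*r[2] - q[2]*r[1])
          - p[1]*(q[0]*r[2] - q[2]*r[0])
          + p[2]*(q[0]*r[1] - q[1]*r[0]))

def conic(x, a, c, e, B, D):
    # w_x = (xa ^ B) v c v (D ^ ex) from Theorem 5.2
    return det3(cross(cross(x, a), B), c, cross(D, cross(e, x)))

a, c, e = [0, 0, 1], [1, 3, 1], [2, 1, 1]
B, D = [1, -1, 1], [3, 1, -2]
g, b, d = cross(B, D), cross(B, cross(c, e)), cross(D, cross(c, a))

# w(a + t*q) = alpha*t + beta*t^2, since w(a) = 0; the second root
# t = -alpha/beta gives a rational sixth point x on the conic.
q = [7, 2, 5]
w1 = Fraction(conic([a[i] + q[i] for i in range(3)], a, c, e, B, D))
w2 = Fraction(conic([a[i] + 2*q[i] for i in range(3)], a, c, e, B, D))
beta = (w2 - 2*w1) / 2
t = -(w1 - beta) / beta
x = [a[i] + t*q[i] for i in range(3)]

p1 = cross(cross(x, a), cross(b, g))   # xa ^ bg
p2 = cross(cross(e, b), cross(a, d))   # eb ^ ad
p3 = cross(cross(d, g), cross(e, x))   # dg ^ ex
print(conic(x, a, c, e, B, D))         # 0: x is on the conic
print(det3(p1, p2, p3))                # 0: the three points are collinear
```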
By dualizing (and taking supplements), we have a direct expression of Brianchon's Theorem, avoiding a direct geometric proof.

Theorem 5.4 (Brianchon) If a hexagon is circumscribed about a conic, the three diagonals are concurrent.

PROOF. By dualizing the identity for Pascal's theorem,

(XA ∨ BG) ∧ (EB ∨ AD) ∧ (DG ∨ EX) = 0.  □
We may approach the analysis of the conic by the following expansions, which will be useful in the sequel. Since ax A B = [a, B]x - [x, B]a, we have

(((ax A B) V c) A D) V x = ([a, B]xc A D - [x, B]ac A D) V x
= [a, B][x, D]cx - [x, B][a, D]cx + [x, B][c, D]ax

Factoring the coefficients of cx, and using [a, B][x, D] - [x, B][a, D] = [xa, DB], we obtain

(((ax A B) V c) A D) V x = [xa, DB]cx + [x, B][c, D]ax
(ax A B) V c V (D A xe) = [xa, DB][c, x, e] + [x, B][c, D][a, x, e]

so

wx = (ax A B) V c V (D A xe) = [xa, DB][c, x, e] + [x, B][c, D][a, x, e] = 0
is the equation of the conic. It is easy to find where B meets the curve. Set [x, B] = 0. Then either [xa, DB] = 0 or [c, x, e] = 0. Hence B meets the curve either at ce A B or at B A D. Similarly D meets the curve at the points B A D and ca A D, and a, e lie on the conic. The rest of the derivation is identical.
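As a concrete sanity check of this analysis, wx = (ax A B) V c V (D A xe) can be evaluated in the cross-product model of the projective plane (join of points and meet of lines both given by the cross product, brackets by determinants); the integer coordinates below are arbitrary choices, not data from the text.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    c = cross(v, w)
    return u[0]*c[0] + u[1]*c[1] + u[2]*c[2]

a, e, c = (1, 0, 0), (0, 1, 0), (0, 0, 1)   # three points on the conic
B, D = (1, 2, 3), (3, 1, 2)                 # two lines

def w(x):
    p1 = cross(cross(a, x), B)   # ax A B
    p2 = cross(D, cross(x, e))   # D A xe
    return det3(p1, c, p2)       # join of the three points, a bracket

# The five points a, e, B A D, B A ce, D A ca all satisfy w(x) = 0 exactly.
pts = [a, e, cross(B, D), cross(B, cross(c, e)), cross(D, cross(c, a))]
for x in pts:
    assert w(x) == 0

# A generic point is not on the conic, so w is not identically zero.
assert w((1, 1, 1)) == -24
```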
5.2 The Planar Cubic
The problem of representing a planar cubic requires a somewhat more complicated form, although many forms yield the general cubic. Several techniques are possible, the most elegant perhaps being a highly symmetric form closely related to the conic.

Theorem 5.5 An arbitrary planar cubic may be represented by the expression

(((xa A B) V c) A D) V x V (D1 A (c1 V (B A a1x))) = 0

subject to [c, D1] = [c1, D] = 0.
PROOF. We begin by studying the more general form

(((xa A B) V c) A D) V x V (D1 A (c1 V (B1 A a1x))) = 0    (5.5)

It is clear that the points a, a1 lie on the curve. By the remarks following Theorem 5.2 we may write

(((ax A B) V c) A D) V x = [xa, DB]cx + [x, B][c, D]ax

and hence, if we denote equation 5.5 by wx, then

wx = [xa, DB](cx A (D1 A (c1 V (B1 A a1x)))) + [x, B][c, D](ax A (D1 A (c1 V (B1 A a1x))))
which, being of step 0, is equivalent to

wx = [xa, DB]((cx A D1) V c1 V (B1 A a1x)) + [x, B][c, D]((ax A D1) V c1 V (B1 A a1x))    (5.6)

where we have changed sign twice. The form 5.6 is the sum of two conics, and the results of Theorem 5.2 may be applied. Therefore,

(cx A D1) V c1 V (B1 A xa1) = 0

is a conic through the five points c, a1, B1 A D1, c1a1 A D1, c1c A B1, while

(ax A D1) V c1 V (B1 A xa1) = 0

is also a conic through the five points a, a1, B1 A D1, c1a1 A D1, c1a A B1.

The points B1 A D1 and c1a1 A D1 lie on both conics and hence lie on the cubic.
By the symmetry of the form and by anti-commutativity, we may reverse the order of terms in wx to write

wx = (((xa1 A B1) V c1) A D1) V x V (D A (c V (B A ax))) = 0    (5.7)

and conclude by symmetry that B A D and ca A D are also points on the curve.
As a check, it is possible to verify directly that these points lie on the cubic: setting x = B A D, we have xa A B ≡ x, so (((xa A B) V c) A D) ≡ xc A D ≡ x, and hence (((xa A B) V c) A D) V x = 0, so 5.5 vanishes. Similarly, if x = ca A D then xa ≡ ca, so (xa A B) V c ≡ (ca A B) V c ≡ ca, and (((xa A B) V c) A D) V x = (ca A D) V x = x V x = 0.

In projective space a line in general cuts a cubic in three points; hence the line D cuts the curve in another point, and we proceed to determine this point as follows. Observe that ((ax A B) V c) A D represents some point on D, and if x is another point on D then

(((ax A B) V c) A D) V x ≡ D

except when (((ax A B) V c) A D) V x is zero. Hence, substituting D for (((ax A B) V c) A D) V x in 5.5, we see that x satisfies both ((D A D1) V c1) A B1 A a1x = 0 and [x, D] = 0, and therefore

x ≡ ((((D A D1) V c1) A B1) V a1) A D,

so D cuts the curve in this point as well as B A D and ca A D. By a similar argument on the symmetric form 5.7, D1 cuts the cubic in the three points B1 A D1, c1a1 A D1 and ((((D1 A D) V c) A B) V a) A D1.
We can find the other point in which any line through the point a cuts the cubic. Let L be the line; then [a, L] = 0, and if x is the required point of L, then ax ≡ L and hence

(((xa A B) V c) A D) A ex ≡ (((L A B) V c) A D) A ex = 0

Since x is incident on the line (((L A B) V c) A D) V e and incident on L, we may write

x ≡ ((((L A B) V c) A D) V e) A L    (5.8)
Using 5.8 one concludes that ca meets both conics of 5.6, and the cubic 5.5, in ((((ca A D1) V c1) A B1) V a1) A ca. Hence the three points in which the line ca meets the cubic are a, ca A D, and ((((ca A D1) V c1) A B1) V a1) A ca. One similarly concludes that the three points in which the line c1a1 meets the cubic are a1, c1a1 A D1 and ((((c1a1 A D) V c) A B) V a) A c1a1.
Using this technique, the points where the line (B A D) V a cuts the cubic can also be found. We already know that two of these points are B A D and a. To find the third, we again use the factorization

wx = [xa, DB]((cx A D1) V c1 V (B1 A a1x)) + [x, B][c, D]((ax A D1) V c1 V (B1 A a1x))    (5.9)

and hence this third point must be the point other than a where the line cuts the second conic, for if x is on (B A D) V a then the first bracket vanishes. By the above remarks this point is given by

((((((B A D) V a) A D1) V c1) A B1) V a1) A ((B A D) V a).
Similarly, the three points where (B1 A D1) V a1 cuts the cubic are given by a1, B1 A D1 and ((((((B1 A D1) V a1) A D) V c) A B) V a) A ((B1 A D1) V a1).
To find the three points where B cuts the cubic, remark that if [x, B] = 0, the factorization reduces the curve to

[xa, DB]((cx A D1) V c1 V (B1 A a1x)) = 0

Hence either xa A DB = 0, which implies x = B A D, a point already discovered, or (cx A D1) V c1 V (B1 A a1x) = 0, and therefore the two remaining points are where the line B meets this conic. We now simplify the expression of the cubic in order to determine these points, setting B = B1. The cubic becomes

wx = (((xa A B) V c) A D) V x V (D1 A (c1 V (B A a1x))) = 0
and we show that this form still yields the general cubic. The points where B meets the conic (cx A D1) V c1 V (B A a1x) have been shown to be B A D1 and cc1 A B, and the points where B meets the simplified cubic are these two together with B A D. Ten points which lie on 5.5 are therefore:

1. B A D, ca A D, B A D1, c1a1 A D1

2. Points where D and D1 meet the cubic: ((((D A D1) V c1) A B) V a1) A D and ((((D1 A D) V c) A B) V a) A D1.

3. Points where ca and c1a1 meet the cubic: ((((ca A D1) V c1) A B) V a1) A ca and ((((c1a1 A D) V c) A B) V a) A c1a1.

4. Points where (B A D) V a and (B A D1) V a1 meet the cubic: ((((((B A D) V a) A D1) V c1) A B) V a1) A ((B A D) V a) and ((((((B A D1) V a1) A D) V c) A B) V a) A ((B A D1) V a1).

5. cc1 A B
Claim: The specialized form

wx = (((xa A B) V c) A D) V x V (D1 A (c1 V (B A a1x))) = 0

where [c, D1] = [c1, D] = 0 represents the most general form of the cubic.

The three points in which D cuts the curve are B A D, ca A D, and ((((D A D1) V c1) A B) V a1) A D. Since [c1, D] = 0, we have (D A D1) V c1 ≡ D, so ((((D A D1) V c1) A B) V a1) A D ≡ ((D A B) V a1) A D ≡ D A B. Hence D is tangent to the curve at B A D and cuts it at ca A D. Similarly, D1 is tangent to the curve at B A D1 and cuts it again at c1a1 A D1. The line B cuts the curve in the points B A D, B A D1 and cc1 A B.
Now take any cubic curve, as in Figure 5.2, and draw the lines D, D1 tangent to it at two points g, g1. Join g, g1 by the line B, which cuts the curve in some other point h. The point h must be distinct from g, g1 since the lines D, D1 are tangent. Through h draw any line not passing through D A D1, cutting D in c1 and D1 in c. The tangents D and D1 cut the curve again in two points e, e1. Now join ec; this line cuts the curve in two further points. Call one of the two a. Similarly, call one of the two points in which e1c1 cuts the curve a1. Since the line through h was arbitrary, the points c on D1 and c1 on D are arbitrary, and thus the points a, a1 can be chosen to be distinct from any of the others. Then by construction h ≡ cc1 A B, e ≡ ca A D and e1 ≡ c1a1 A D1. Now the tangents D, D1 at g and g1 and the points h, e, e1, a, a1 represent the nine points necessary to determine a planar cubic, the tangents representing double points. Since the claimed form is a cubic satisfying these conditions, and the required points are distinct, this equation represents the assumed cubic. □

Figure 5.2: Linear Construction of the Cubic.
The four lines A = ca, D, A1 = c1a1, D1 have a special relationship to the simplified cubic

wx = (((xa A B) V c) A D) V x V (D1 A (c1 V (B A a1x))) = 0,

in addition to the fact that the points ca A D and c1a1 A D1 lie on the curve. Let the lines A, D, A1, D1 be in general position. The points A A D = e and A1 A D1 = e1 are determined. Also, the remaining points, other than a, a1, in which A, A1 cut the curve are given by

f = ((((A A D1) V c1) A B1) V a1) A A,    (5.10)
f1 = ((((A1 A D) V c) A B) V a) A A1.    (5.11)

We suppose a, a1, f, f1 are known, and therefore equations 5.10 and 5.11 partially determine the remaining c1, c, B, B1.
Again, the arbitrarily assumed lines D, D1 are supposed to meet the curve in e, e1, and if we let the two other points be k, k1,

k = ((((D A D1) V c1) A B1) V a1) A D
k1 = ((((D1 A D) V c) A B) V a) A D1

then the remaining points in which D, D1 respectively meet the curve are B A D and B1 A D1. We denote these points by g and g1. We now show that g, g1 are determined by the previous eight points a, a1, e, e1, f, f1, k, k1.
Let L1, L2 be lines and p1, p2 be points. By the alternative laws,

(L1 A L2) V p1 = [L1, p1]L2 - [L2, p1]L1,
(p1 V p2) A L1 = [p1, L1]p2 - [p2, L1]p1.
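In the cross-product model of the plane (points and lines as 3-vectors, join and meet both given by the cross product, [·,·] the dot pairing), both alternative laws are instances of the vector triple-product expansion and can be checked exactly on arbitrary integer data; the vectors below are arbitrary choices.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def lin(s, u, t, v):   # the linear combination s*u + t*v, componentwise
    return tuple(s*u[i] + t*v[i] for i in range(3))

L1, L2 = (1, 2, 3), (4, -1, 2)   # two lines
p1, p2 = (2, 0, 5), (-1, 3, 1)   # two points

# (L1 A L2) V p1 = [L1, p1]L2 - [L2, p1]L1
assert cross(cross(L1, L2), p1) == lin(dot(L1, p1), L2, -dot(L2, p1), L1)

# (p1 V p2) A L1 = [p1, L1]p2 - [p2, L1]p1
assert cross(cross(p1, p2), L1) == lin(dot(p1, L1), p2, -dot(p2, L1), p1)
```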
Using these rules we may write

fa1 = ((((AD1 V c1) A B1) V a1) A A) V a1 = -[A, a1](((AD1 V c1) A B1) V a1).

Similarly,

fa1 A B1 ≡ (AD1 V c1) A B1,
(fa1 A B1) V c1 ≡ AD1 V c1.
Hence we conclude

(AD1 V c1) A fa1 A B1 = 0,

and by a similar argument

(DD1 V c1) A ka1 A B1 = 0.

The line B1 passes through the two points p = (AD1 V c1) A fa1 and q = (DD1 V c1) A ka1, and so the line B1 may actually be written

B1 = ((AD1 V c1) A fa1) V ((DD1 V c1) A ka1)
Then

g1 = B1 A D1 = (((AD1 V c1) A fa1) V ((DD1 V c1) A ka1)) A D1 = pq A D1 = [p, D1]q - [q, D1]p.
But now [p, D1] = [(AD1 V c1) A fa1 A D1] = -[(AD1 V c1) A D1 A fa1], which may be written as [c1, D1][AD1 A fa1]. Similarly,

[q, D1] = [(DD1 V c1) A ka1 A D1] = -[(DD1 V c1) A D1 A ka1] = [c1, D1][DD1 A ka1]
The point g1 may then be written, dropping the common factor [c1, D1], as

g1 = [AD1 A fa1]q - [DD1 A ka1]p

with

p = (AD1 V c1) A fa1 = [AD1, fa1]c1 - [c1, fa1]AD1
q = (DD1 V c1) A ka1 = [DD1, ka1]c1 - [c1, ka1]DD1

But then [c1, fa1] = -[A1, f] and [c1, ka1] = -[A1, k], and

g1 = [A1, k][AD1, fa1]DD1 - [A1, f][DD1, ka1]AD1
Thus the point g1 is completely determined by the other elements. The calculation illustrates that although the points are distinct, there may be linear relationships amongst them. A similar expression holds for g.
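The two claims underlying this calculation — that p = (AD1 V c1) A fa1 and q = (DD1 V c1) A ka1 lie on B1, and that g1 = pq A D1 therefore recovers B1 A D1 — hold identically and can be checked exactly in the cross-product model; all coordinates below are arbitrary integer choices, not data from the text.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

A, D, D1, B1 = (1, 0, 2), (0, 1, 1), (2, 1, 0), (1, 1, 1)   # four lines
c1, a1 = (1, 2, 0), (0, 1, 2)                               # two points

def further_point(L):   # ((((L A D1) V c1) A B1) V a1) A L, as for f and k
    return cross(cross(cross(cross(cross(L, D1), c1), B1), a1), L)

f, k = further_point(A), further_point(D)

p = cross(cross(cross(A, D1), c1), cross(f, a1))   # (AD1 V c1) A fa1
q = cross(cross(cross(D, D1), c1), cross(k, a1))   # (DD1 V c1) A ka1

assert dot(p, B1) == 0 and dot(q, B1) == 0    # p and q both lie on B1
g1 = cross(cross(p, q), D1)                   # (p V q) A D1
assert cross(g1, cross(B1, D1)) == (0, 0, 0)  # g1 is B1 A D1 up to a scalar
```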
A particularly nice form for the general cubic is given by the following theorem.

Theorem 5.6 An arbitrary planar cubic may be represented by the expression

((xa A A) V a') A ((xb A B) V b') A xc = 0    (5.12)
PROOF. This cubic yields an easier analysis. Nine points lying on 5.12 are

a, b, c, a'c A A, b'c A B, A A B, aa' A B, bb' A A, aa' A bb'
Verification is straightforward; we illustrate only the points aa' A B and aa' A bb'. For the first, set x = aa' A B. Joining, (aa' A B) V a = -[a, B]aa', and then (aa' A A) V a' = [a', A]aa'. Next, (((aa' A B) V b) A B) V b' ≡ (aa' A B) V b', and the intersection of the lines (aa' A B) V c and (aa' A B) V b' is again the point aa' A B. This point lies on the line aa', so 5.12 vanishes. For the second, (aa' A bb') V a = -a V (aa' A bb') = -[a, bb']aa', and then (aa' A A) V a' = -[a', A]aa'. Similarly, (((aa' A bb') V b) A B) V b' = [b, aa'][b', B]bb'. Then the line (aa' A bb') V c contains the point aa' A bb', which lies on both aa' and bb', so 5.12 vanishes.
Denote the above nine points as a, b, c, d, e, f, g, h, k respectively. To prove that 5.12 represents any cubic, take a cubic and inscribe it in any quadrilateral khef. Let the side kh meet the curve again in the point b, the side he meet the curve again in the point d, the side ef in g, and the side fk in a. Let c be any other point of the curve not collinear with any pair of these four points. The assumed nine points determine the cubic. Construct the points a' = cd A kf and b' = cg A hk. Now set fe = B and he = A. Then the equation 5.12 is a cubic passing through the nine points, and hence the equation can represent any cubic. □
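The nine stated points can be checked exactly in the cross-product model of the plane (ap and bp stand for a' and b'; all coordinates are arbitrary integer choices, not data from the text):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

a, ap = (1, 0, 0), (1, 2, 1)
b, bp = (0, 1, 0), (2, 1, 3)
c = (0, 0, 1)
A, B = (1, 1, 2), (2, -1, 1)

def w(x):   # ((xa A A) V a') A ((xb A B) V b') A xc
    l1 = cross(cross(cross(x, a), A), ap)
    l2 = cross(cross(cross(x, b), B), bp)
    return dot(cross(l1, l2), cross(x, c))

pts = [a, b, c,
       cross(A, cross(ap, c)),              # a'c A A
       cross(B, cross(bp, c)),              # b'c A B
       cross(A, B),                         # A A B
       cross(cross(a, ap), B),              # aa' A B
       cross(cross(b, bp), A),              # bb' A A
       cross(cross(a, ap), cross(b, bp))]   # aa' A bb'
for x in pts:
    assert w(x) == 0
```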
The cubic 5.12 gives an interesting generalization of Pascal's Theorem on a conic to the cubic. Namely,

Theorem 5.7 (Pascal's Theorem for the Cubic) Let a, b, c, d, e, f be any six points on a cubic, and let four other points g, h, i, j be determined by the third points of intersection of the lines ab, bc, cd, ad with the cubic. Let l1 be the line formed by joining the point fh A ad to the point ej A bc, and let l2 be the line formed by joining the point fg A cd to the point ei A ab. Then the lines l1, l2, ef are always concurrent.
PROOF. In Theorem 5.6, equation 5.12 was shown to represent an arbitrary cubic. Using the relations a' = cd A kf and b' = cg A hk, and setting fe = B and he = A, this last equation may, after a change of notation, be rewritten as

((fh A ad) V (ej A bc)) A ((fg A cd) V (ei A ab)) A ef = 0,    (5.13)

which upon interpretation yields the result. □
Another form of the cubic is given by the following expression.

Theorem 5.8 An arbitrary planar cubic may be represented by the expression

(((((xe A D) V p) A E) V d) A F) V (xf A B) V (xd A C) = 0    (5.14)

subject to the restrictions [F, f] = [B, d] = 0.
PROOF. The points d, e, f clearly lie on the cubic. Consider the point B A C. The expression xd A C, upon substituting x = B A C and expanding, becomes ((B A C) V d) A C = ([B, d]C - [C, d]B) A C ≡ B A C. Substituting the same point into xf A B = ((B A C) V f) A B ≡ B A C, the join of xd A C and xf A B is zero. The point C A F lies on the cubic as well, since ((((xe A D) V p) A E) V d) A F is a point on F, while ((C A F) V f) A B = [C, f]F A B - [F, f]C A B. But by hypothesis [F, f] = 0, so the expression reduces to the point F A B on F. Hence the join of these two points represents the line F. Now ((C A F) V d) A C similarly reduces to F A C, a point on F again, and so the cubic vanishes at C A F.
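These incidences can be replayed exactly in the cross-product model, with the two constraint lines F and B chosen to satisfy [F, f] = [B, d] = 0; the remaining coordinates are arbitrary integer choices, not data from the text.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def det3(u, v, w):
    return dot(u, cross(v, w))

d, e, f, p = (1, 0, 1), (0, 1, 1), (1, 2, 1), (2, 1, 1)   # points
D, E, C = (1, 2, 0), (0, 1, 2), (1, 1, 2)                 # lines
F = (1, 0, -1)   # chosen so that [F, f] = 0
B = (1, 1, -1)   # chosen so that [B, d] = 0
assert dot(F, f) == 0 and dot(B, d) == 0

def w(x):   # (((((xe A D) V p) A E) V d) A F) V (xf A B) V (xd A C)
    P1 = cross(cross(cross(cross(cross(cross(x, e), D), p), E), d), F)
    P2 = cross(cross(x, f), B)
    P3 = cross(cross(x, d), C)
    return det3(P1, P2, P3)

# d, e, f, B A C and C A F all lie on the cubic.
for x in [d, e, f, cross(B, C), cross(C, F)]:
    assert w(x) == 0
```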
By a similar argument to that used earlier, two more points can be found, representing the additional points where (B A C) V f and (B A C) V d cut the cubic. They are

p1 = ((((((F A C) V d) A E) V p) A D) V e) A ((B A C) V f)
p2 = ((((((B A F) V d) A E) V p) A D) V e) A ((B A C) V d)

Their verification is essentially identical and we show only the first. First compute the expression ((((p1e A D) V p) A E) V d) A F. The innermost join p1e is, up to a scalar, equivalent to (((((F A C) V d) A E) V p) A D) V e. Continuing to expand, we see that after computing each join or meet only one non-zero term survives, and the resulting expression reduces to F A C. Now expanding

p1f A B ≡ ((B A C) V f) A B,

this is the point C A B. Hence the join of these two points is the line C. Finally, the expression p1d A C is a point on C, and so the cubic vanishes. A similar computation verifies that p2 is on the cubic.
We may obtain two additional points on the cubic by further analysis. Let q1, q2 be the third points where de, ef meet the curve, given by

q1 = ((((((((de A D) V p) A E) V d) A F) V (de A C)) A B) V f) A de
q2 = ((((((((ef A D) V p) A E) V d) A F) V (ef A B)) A C) V d) A ef

We compute first q1d A C. Expanding by the alternative laws, the resulting expression is equivalent to de A C. Computing then q1f A B, and denoting the point r1 = ((((de A D) V p) A E) V d) A F, the resulting expression reduces to (r1 V (de A C)) A B, which when joined with de A C yields the line r1 V (de A C). Now computing ((((q1e A D) V p) A E) V d) A F, this expression reduces to r1, and hence the join of r1 and r1 V (de A C) is zero. In a similar manner, q2f A B is equivalent to ef A B, ((((q2e A D) V p) A E) V d) A F reduces to r2 = ((((ef A D) V p) A E) V d) A F, and q2d A C reduces to (r2 V (ef A B)) A C, and the join of these three points is again zero.
We have obtained the nine required points for the cubic:

1. d
2. e
3. f
4. B A C
5. C A F
6. ((((((F A C) V d) A E) V p) A D) V e) A ((B A C) V f)
7. ((((((B A F) V d) A E) V p) A D) V e) A ((B A C) V d)
8. ((((((((de A D) V p) A E) V d) A F) V (de A C)) A B) V f) A de
9. ((((((((ef A D) V p) A E) V d) A F) V (ef A B)) A C) V d) A ef
We next convert the equation of the cubic to the simplified form by a set of transformations. Replacing a = q1, b = q2, c = F A C, A = (B A C) V f and

a' = ((((((cd A E) V p) A D) V e) A A) V c) A de
b' = ((((((((B A F) V d) A E) V p) A D) V e) A B) V c) A ef

the above cubic may be written in the form of Theorem 5.6.
To verify this, we show the above nine points lie on 5.12. The points a and b evidently lie on the cubic, as does F A C. To verify that the point B A C lies on the cubic, note that the expression xc reduces to the line C under substitution of this point. The expression (xb A B) V b' reduces to (B A C) V b', which when meeted with the first expression reduces to the point B A C. Finally, (((B A C) V a) A ((B A C) V f)) V a' is a line containing B A C, so the cubic vanishes. The points d, e, f are shown to lie on the curve by essentially the same reasoning. Consider the point f. We see that fc = f V (F A C) ≡ F, since by hypothesis [f, F] = 0. Since b, b' are both points on ef, (fb A B) V b' reduces to ef. Now consider (fa A A) V a'. Since A = (B A C) V f, this is a line fa' containing f, and F A ef A fa' = 0, so the cubic vanishes. A similar argument shows that d, e lie on the curve, and that the points p1, p2 above lie on this cubic. Appealing to Theorem 5.6, we conclude that 5.14 is the general cubic. □
5.3 The Spacial Quadric and Planar Quartic
To conclude the thesis, we give some partial results and conjectures about linear representations of the spacial quadric and planar quartic. A surface in an affine or projective space defined by a quadratic equation is called a quadric surface. By studying Arguesian polynomials in step 4 one is led to theorems involving quadric surfaces. Consider the relation in step 4,

xA A B A Cx = 0    (5.15)
This relation does not vanish identically and is satisfied by all points x on the lines A, B, C; hence 5.15 represents a quadric passing through A, B, C. To verify this assertion, let the lines A, B, C be denoted A = ab, B = cd, C = ef. We may then write

xab A cd A efx = [xabd][cefx] - [xabc][defx]

This expression clearly vanishes for points on A and C, which are of the form x = x1a + x2b and x = x1e + x2f, as substitution and expansion by linearity force both bracket products to vanish. If x = x1c + x2d the expression becomes

[x1c + x2d, abd][cef, x1c + x2d] - [x1c + x2d, abc][def, x1c + x2d]

which reduces to

x1x2[cabd][cefd] - x1x2[dabc][defc] = 0
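This bracket computation can be replayed exactly over the integers, with the step-4 bracket realized as a 4x4 determinant; the six points below are arbitrary choices, not data from the text.

```python
def det4(m):   # 4x4 determinant, cofactor expansion along the first row
    def d3(n):
        return (n[0][0]*(n[1][1]*n[2][2] - n[1][2]*n[2][1])
              - n[0][1]*(n[1][0]*n[2][2] - n[1][2]*n[2][0])
              + n[0][2]*(n[1][0]*n[2][1] - n[1][1]*n[2][0]))
    total, sign = 0, 1
    for j in range(4):
        minor = [[row[k] for k in range(4) if k != j] for row in m[1:]]
        total += sign * m[0][j] * d3(minor)
        sign = -sign
    return total

def br(w, x, y, z):   # the step-4 bracket [wxyz]
    return det4([list(w), list(x), list(y), list(z)])

a, b = (1, 0, 0, 0), (0, 1, 0, 0)
c, d = (0, 0, 1, 0), (0, 0, 0, 1)
e, f = (1, 1, 1, 1), (1, 2, 3, 4)

def Q(x):   # xab A cd A efx = [xabd][cefx] - [xabc][defx]
    return br(x, a, b, d)*br(c, e, f, x) - br(x, a, b, c)*br(d, e, f, x)

def comb(s, u, t, v):   # a point s*u + t*v on the line uv
    return tuple(s*u[i] + t*v[i] for i in range(4))

# Q vanishes for any point on any of the three lines ab, cd, ef ...
for (u, v) in [(a, b), (c, d), (e, f)]:
    for (s, t) in [(1, 1), (2, -3), (5, 7)]:
        assert Q(comb(s, u, t, v)) == 0
# ... but not identically.
assert Q((1, -1, 2, 3)) == 1
```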
Now let a, b, c, a', b', c' denote points, let Q = 0 be the quadric through the lines aa', bb', cc', let R = 0 be the quadric through bc', ca', ab', and let S = 0 be the quadric through b'c', c'a, a'b. Then we may write

Q = [xaa'b'][bcc'x] - [xcb'c'][aba'x]
R = [xbc'a'][cab'x] - [xaa'b'][bcc'x]
S = [xcb'c'][aba'x] - [xbc'a'][cab'x]

and the three equations sum to zero: Q + R + S = 0. This implies that the quadrics have a common curve of intersection, for projectively any two of the quadrics intersect in a curve. If a point is on two of these curves, then two of the three expressions Q, R, S are zero, and hence the third must be as well. Hence one proves:

Theorem If a, b, c, a', b', c' are six points on a skew hexagon, then the quadrics containing the two sets of opposite sides and the set of diagonals respectively have a common curve of intersection.
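The telescoping identity Q + R + S = 0, and the fact that Q vanishes on the three lines it was built on, can both be checked exactly with 4x4 determinant brackets (ap, bp, cp stand for a', b', c'; all coordinates are arbitrary integer choices):

```python
def det4(m):   # 4x4 determinant, cofactor expansion along the first row
    def d3(n):
        return (n[0][0]*(n[1][1]*n[2][2] - n[1][2]*n[2][1])
              - n[0][1]*(n[1][0]*n[2][2] - n[1][2]*n[2][0])
              + n[0][2]*(n[1][0]*n[2][1] - n[1][1]*n[2][0]))
    total, sign = 0, 1
    for j in range(4):
        minor = [[row[k] for k in range(4) if k != j] for row in m[1:]]
        total += sign * m[0][j] * d3(minor)
        sign = -sign
    return total

def br(w, x, y, z):
    return det4([list(w), list(x), list(y), list(z)])

a, b, c = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
ap, bp, cp = (0, 0, 0, 1), (1, 1, 1, 1), (1, 2, 3, 4)

def Q(x): return br(x, a, ap, bp)*br(b, c, cp, x) - br(x, c, bp, cp)*br(a, b, ap, x)
def R(x): return br(x, b, cp, ap)*br(c, a, bp, x) - br(x, a, ap, bp)*br(b, c, cp, x)
def S(x): return br(x, c, bp, cp)*br(a, b, ap, x) - br(x, b, cp, ap)*br(c, a, bp, x)

def comb(s, u, t, v):
    return tuple(s*u[i] + t*v[i] for i in range(4))

# The three quadrics sum to zero identically ...
for x in [(1, 2, 3, 4), (2, -1, 0, 5), (3, 3, 1, -2)]:
    assert Q(x) + R(x) + S(x) == 0
# ... and Q vanishes on each of the lines aa', bb', cc'.
for (u, v) in [(a, ap), (b, bp), (c, cp)]:
    assert Q(comb(2, u, 3, v)) == 0
```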
Conjecture 5.9 An arbitrary quadric surface in step 4 may be represented by the expression

[(((ax A B) V c) A D) V ((ex A F) V g)] A H A ij = 0    (5.16)

One verifies in a straightforward manner that the points a, e, BDF, ((H A ij) V eg) A BD, ((H A ij) V cg) A BF, ([(H A ij) V g V (ac A D)] A F) A eac, ((([(H A ij) V g V (ec A F)] A D) V c) A B) V aec and (([((H A ij) V ge) A D] V c) A B) A aeg all lie on the quadric. Since nine points in general determine a quadric in three dimensions, one point remains to be found. It is also necessary to demonstrate that 5.16 is the most general quadric.
The techniques of the previous sections may be applied to a candidate quartic, for which we give a partial solution. The general quartic is determined by fourteen points, of which we are able to give eleven by appealing to the following form. Consider the expression

wx = (((((((xa A B) V c) A D) V x) A B1) V c1) A D1) V x V (D2 A (c2 V (B2 A a2x))) = 0.    (5.17)
By the transformation following Theorem 5.2,

(((xa A B) V c) A D) V x = [xa, DB]cx + [x, B][c, D]ax.

Factoring, we obtain 5.17 as the sum of two cubics,

wx = [xa, DB]((((cx A B1) V c1) A D1) V x V (D2 A (c2 V (B2 A a2x))))
+ [x, B][c, D]((((ax A B1) V c1) A D1) V x V (D2 A (c2 V (B2 A a2x)))).

Thus the previous symmetric decomposition applies, obtaining the quartic as the sum of the two cubics

(((cx A B1) V c1) A D1) V x V (D2 A (c2 V (B2 A a2x))),
(((ax A B1) V c1) A D1) V x V (D2 A (c2 V (B2 A a2x))),

and there are several points on both. Five points which appear immediately are a, a2, B1 A D1, B2 A D2, c2a2 A D2.
A similar expansion gives

wx = [xa2 A B2D2]((((c2x A D1) V c1) A B1) V x V (D A (c V (B A ax))))
+ [x, B2][c2, D2]((((a2x A D1) V c1) A B1) V x V (D A (c V (B A ax))))

giving the points B A D and ca A D, for a total of seven points.
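The seven points found so far can be verified exactly in the cross-product model of the plane; all coordinates below are arbitrary integer choices, not data from the text.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    c3 = cross(v, w)
    return u[0]*c3[0] + u[1]*c3[1] + u[2]*c3[2]

a, a2 = (1, 0, 0), (0, 0, 1)
c, c1, c2 = (1, 1, 0), (0, 1, 1), (1, 0, 1)
B, D, B1, D1, B2, D2 = (1, 2, 3), (3, 1, 2), (2, 1, 1), (1, 1, 2), (1, 3, 1), (2, 1, 3)

def w(x):   # the quartic 5.17
    y = cross(cross(cross(cross(x, a), B), c), D)       # ((xa A B) V c) A D
    P = cross(cross(cross(cross(y, x), B1), c1), D1)    # ((yx A B1) V c1) A D1
    Q = cross(D2, cross(c2, cross(B2, cross(a2, x))))   # D2 A (c2 V (B2 A a2x))
    return det3(P, x, Q)                                # the join P V x V Q

pts = [a, a2,
       cross(B1, D1), cross(B2, D2), cross(B, D),
       cross(cross(c2, a2), D2),   # c2a2 A D2
       cross(cross(c, a), D)]      # ca A D
for x in pts:
    assert w(x) == 0
```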
We can get two more points in this way. The line D1 cuts both cubics in the first expansion in ((((D1 A D2) V c2) A B2) V a2) A D1, while in the symmetric expansion the line B1 cuts both cubics in ((((B1 A D) V c) A B) V a) A B1, and these points lie on the quartic, giving a total of nine points.
It is easy to verify directly that the above points lie on the curve. That a, a2 lie on the curve is evident. By setting the point y = (((xa A B) V c) A D), the quartic may be written

wx = (((yx A B1) V c1) A D1) V x V (D2 A (c2 V (B2 A a2x)))

The expansion of the previous section applied to the expression (((yx A B1) V c1) A D1) V x shows immediately that B1 A D1 is on the curve. Similarly, B2 A D2 and B A D are easily verified to lie on 5.17. To verify that the point z = ((((D1 A D2) V c2) A B2) V a2) A D1 lies on the curve, let p = ((D1 A D2) V c2) A B2. Then the point z may be written z = pa2 A D1, and (pa2 A D1) V a2 = -[D1, a2]pa2. Computing the meet with B2, we obtain

-[a2, D1]((((D1 A D2) V c2) A B2) V a2) A B2,

which is -[a2, D1]pa2 A B2, yielding -[a2, D1][a2, B2](R A B2), where R ≡ (D1 A D2) V c2. Successively computing the join with c2 and the remaining factors, we obtain finally a scalar multiple of the point D1 A D2, which lies on D2 and hence yields zero when meeted with D2.
Set D1 = B and B1 = B2 in 5.17:

wx = (((((((xa A B) V c) A D) V x) A B2) V c1) A B) V x V (D2 A (c2 V (B2 A a2x)))

The two cubics in the above expansions reduce to

(((cx A B2) V c1) A B) V x V (D2 A (c2 V (B2 A a2x)))    (5.18)
(((ax A B2) V c1) A B) V x V (D2 A (c2 V (B2 A a2x)))    (5.19)
and in the symmetric expansion

wx = [xa2 A B2D2]((((c2x A B) V c1) A B2) V x V (D A (c V (B A ax))))
+ [x, B2][c2, D2]((((a2x A B) V c1) A B2) V x V (D A (c V (B A ax))))
In this case we may obtain two more points, and the total list of points, with relabeling, is:

1. a
2. a2
3. B A B2
4. B A D
5. B2 A D2
6. a2c2 A D2
7. ca A D
8. ((((B A D2) V c2) A B2) V a2) A B
9. ((((B2 A D) V c) A B) V a) A B2
10. c1c2 A B2
11. cc1 A B
It suffices to show that c1c2 A B2 is on the simplified quartic, by showing it is on both of the above cubics. Consider the cubic 5.18. Computing, cx A B2 = (c V (c1c2 A B2)) A B2 = [c, B2]c1c2 A B2. Joining this expression with c1 simplifies to the line c1c2, and meeting with B gives c1c2 A B. Similarly, performing the calculation B2 A (a2 V (c1c2 A B2)), the cubic reduces to

(c1c2 A B) V (c1c2 A B2) V (c1c2 A D2)    (5.20)

which vanishes. The argument is symmetric for cc1 A B.
Bibliography

[Bri11] Raoul Bricard. Géométrie Descriptive. Octave Doin et Fils, Paris, 1911.

[Bri94] Andrea Brini. Invariant methods in discrete and computational geometry. Curaçao, Netherlands Antilles, 1994. Invited lecture.

[BS87] J. Bokowski and B. Sturmfels. Polytopal and non-polytopal spheres: an algorithmic approach. Israel J. Math., 96:257-271, 1987.

[BS89] B. Sturmfels and N. L. White. Gröbner bases and invariant theory. Advances in Mathematics, 76:245-259, 1989.

[BS91] B. Sturmfels and W. Whiteley. On the synthetic factorization of projectively invariant polynomials. J. Symbolic Computation, 11:439-454, 1991.

[Cox87] H. S. M. Coxeter. Projective Geometry. Springer-Verlag, New York, 1987.

[Cra91] Henry Crapo. Invariant-theoretic methods in scene analysis and structural mechanics. J. Symbolic Computation, 11:523-548, 1991.

[FDG85] F. D. Grosshans, G.-C. Rota, and J. Stein. Invariant theory and superalgebras. Volume 69. Conference Board of the Mathematical Sciences, AMS, 1985.

[For60] Henry D. Forder. The Calculus of Extension. Chelsea Publishing Company, New York, 1960.

[GCR89] G.-C. Rota and J. Stein. Standard basis in supersymmetric algebras. Proc. Natl. Acad. Sci. USA, 86:2521-2524, 1989.

[Gra11] Hermann Grassmann. Gesammelte Werke. Teubner, Leipzig, 1911. 6 volumes.

[Hai85] Mark Haiman. Proof theory for linear lattices. Advances in Mathematics, 58:209-242, 1985.

[Haw93] M. Hawrylycz. Geometric identities, invariant theory, and a theorem of Bricard. J. of Algebra, 1993. Accepted, in press.

[Haw94] M. Hawrylycz. A geometric identity for Pappus' geometry. Proc. Natl. Acad. Sci., 1994. Accepted, in press.

[Jon54] Bjarni Jónsson. Modular lattices and Desargues' theorem. Math. Scand., 2:295-314, 1954.

[LP86] L. Lovász and M. D. Plummer. Matching Theory. North-Holland, Amsterdam, 1986. Annals of Discrete Mathematics 29.

[MB85] M. Barnabei, A. Brini, and G.-C. Rota. On the exterior calculus of invariant theory. Journal of Algebra, 96:120-160, 1985.

[PD76] P. Doubilet, G.-C. Rota, and J. Stein. On the foundations of combinatorial theory IX: Combinatorial methods in invariant theory. Studies in Applied Mathematics, 53:185-216, 1976.

[RQH89] R. Q. Huang, G.-C. Rota, and J. Stein. Supersymmetric algebra, supersymmetric space, and invariant theory. Annali Scuola Normale Superiore, Pisa, 1989. Volume dedicated to L. Radicati.

[Sam88] Pierre Samuel. Projective Geometry. Springer-Verlag, Paris, 1988.

[Whi97] Alfred N. Whitehead. A Treatise on Universal Algebra with Applications. Cambridge University Press, Cambridge, 1897. Reprinted by Hafner, New York, 1960.

[Whi75] Neil L. White. The bracket ring of a combinatorial geometry I. Trans. Amer. Math. Soc., 202:79-95, 1975.

[Whi91] Neil L. White. Multilinear Cayley factorization. J. Symbolic Computation, 11:421-438, 1991.

[WVDH46] W. V. D. Hodge and D. Pedoe. Methods of Algebraic Geometry. Cambridge University Press, London, 1946. Vols. 1 and 2.