LECTURE 2
Properties of the intersection poset and graphical
arrangements
2.1. Properties of the intersection poset
Let A be an arrangement in the vector space V. A subarrangement of A is a subset B ⊆ A. Thus B is also an arrangement in V. If x ∈ L(A), define the subarrangement A_x ⊆ A by
(6)  A_x = {H ∈ A : x ⊆ H}.
Also define an arrangement A^x in the affine subspace x ∈ L(A) by
A^x = {x ∩ H ≠ ∅ : H ∈ A − A_x}.
Note that if x ∈ L(A), then
(7)  L(A_x) ≅ Λ_x := {y ∈ L(A) : y ≤ x},
     L(A^x) ≅ V_x := {y ∈ L(A) : y ≥ x}.
[Figure: an arrangement A, the subarrangement A_x, and the arrangement A^x, for an element x ∈ L(A).]
Choose H_0 ∈ A. Let A' = A − {H_0} and A'' = A^{H_0}. We call (A, A', A'') a triple of arrangements with distinguished hyperplane H_0.
[Figure: a triple (A, A', A'') with distinguished hyperplane H_0.]
The main goal of this section is to give a formula in terms of ψ_A(t) for r(A)
and b(A) when K = R (Theorem 2.5). We first establish recurrences for these two
quantities.
Lemma 2.1. Let (A, A', A'') be a triple of real arrangements with distinguished hyperplane H_0. Then
r(A) = r(A') + r(A''),
b(A) = b(A') + b(A''),  if rank(A) = rank(A'),
b(A) = 0,               if rank(A) = rank(A') + 1.
Note. If rank(A) = rank(A'), then also rank(A) = 1 + rank(A''). The figure below illustrates the situation when rank(A) = rank(A') + 1.
[Figure: an arrangement with distinguished hyperplane H_0 for which rank(A) = rank(A') + 1.]
Proof. Note that r(A) equals r(A') plus the number of regions of A' cut into two regions by H_0. Let R' be such a region of A'. Then R' ∩ H_0 ∈ R(A''). Conversely, if R'' ∈ R(A''), then points near R'' on either side of H_0 belong to the same region R' ∈ R(A'), since any H ∈ A' separating them would intersect R''. Thus R' is cut in two by H_0. We have established a bijection between regions of A' cut into two by H_0 and regions of A'', establishing the first recurrence.
The second recurrence is proved analogously; the details are omitted. □
We now come to the fundamental recursive property of the characteristic polynomial.
Lemma 2.2. (Deletion-Restriction) Let (A, A', A'') be a triple of real arrangements. Then
ψ_A(t) = ψ_{A'}(t) − ψ_{A''}(t).
Figure 1. Two non-lattices
For the proof of this lemma, we will need some tools. (A more elementary proof
could be given, but the tools will be useful later.)
Let P be a poset. An upper bound of x, y ∈ P is an element z ∈ P satisfying z ≥ x and z ≥ y. A least upper bound or join of x and y, denoted x ∨ y, is an upper bound z such that z ≤ z' for all upper bounds z'. Clearly if x ∨ y exists, then it is unique. Similarly define a lower bound of x and y, and a greatest lower bound or meet, denoted x ∧ y. A lattice is a poset L for which any two elements have a meet and join. A meet-semilattice is a poset P for which any two elements have a meet. Dually, a join-semilattice is a poset P for which any two elements have a
join. Figure 1 shows two non-lattices, with a pair of elements circled which don’t
have a join.
Lemma 2.3. A finite meet-semilattice L with a unique maximal element 1̂ is a
lattice. Dually, a finite join-semilattice L with a unique minimal element 0̂ is a
lattice.
Proof. Let L be a finite meet-semilattice. If x, y ∈ L then the set of upper bounds of x, y is nonempty since 1̂ is an upper bound. Hence
x ∨ y = ⋀_{z≥x, z≥y} z.
The statement for join-semilattices is by “duality,” i.e., interchanging ≤ with ≥, and ∧ with ∨. □
The reader should check that Lemma 2.3 need not hold for infinite semilattices.
Proposition 2.3. Let A be an arrangement. Then L(A) is a meet-semilattice. In
particular, every interval [x, y] of L(A) is a lattice. Moreover, L(A) is a lattice if
and only if A is central.
Proof. If ⋂_{H∈A} H = ∅, then adjoin ∅ to L(A) as the unique maximal element, obtaining the augmented intersection poset L'(A). In L'(A) it is clear that x ∨ y = x ∩ y. Hence L'(A) is a join-semilattice. Since it has a 0̂, it is a lattice by Lemma 2.3. Since L(A) = L'(A) or L(A) = L'(A) − {1̂}, it follows that L(A) is always a meet-semilattice, and is a lattice if A is central. If A isn't central, then ⋁_{x∈L(A)} x does not exist, so L(A) is not a lattice. □
We now come to a basic formula for the Möbius function of a lattice.
Theorem 2.2. (the Crosscut Theorem) Let L be a finite lattice. Let X be a subset of L such that 0̂ ∉ X, and such that if y ∈ L, y ≠ 0̂, then some x ∈ X satisfies x ≤ y. Let N_k be the number of k-element subsets of X with join 1̂. Then
µ_L(0̂, 1̂) = N_0 − N_1 + N_2 − · · · .
We will prove Theorem 2.2 by an algebraic method. Such a sophisticated proof
is unnecessary, but the machinery we develop will be used later (Theorem 4.13).
Let L be a finite lattice and K a field. The Möbius algebra of L, denoted A(L), is the semigroup algebra of L over K with respect to the operation ∨. (Sometimes the operation is taken to be ∧ instead of ∨, but for our purposes ∨ is more convenient.) In other words, A(L) = KL (the vector space with basis L) as a vector space. If x, y ∈ L then we define xy = x ∨ y. Multiplication is extended to all of A(L) by bilinearity (or distributivity). Algebraists will recognize that A(L) is a finite-dimensional commutative algebra with a basis of idempotents, and hence is isomorphic to K^{#L} (as an algebra). We will show this by exhibiting an explicit isomorphism A(L) ≅ K^{#L}. For x ∈ L, define
(8)  π_x = Σ_{y≥x} µ(x, y) y ∈ A(L),
where µ denotes the Möbius function of L. Thus by the Möbius inversion formula,
(9)  x = Σ_{y≥x} π_y, for all x ∈ L.
Equation (9) shows that the π_x's span A(L). Since #{π_x : x ∈ L} = #L = dim A(L), it follows that the π_x's form a basis for A(L).
Theorem 2.3. Let x, y ∈ L. Then π_x π_y = δ_xy π_x, where δ_xy is the Kronecker delta. In other words, the π_x's are orthogonal idempotents. Hence
A(L) = ⊕_{x∈L} K · π_x (algebra direct sum).
Proof. Define a K-algebra A'(L) with basis {π'_x : x ∈ L} and multiplication π'_x π'_y = δ_xy π'_x. For x ∈ L set x' = Σ_{s≥x} π'_s. Then
x' y' = (Σ_{s≥x} π'_s)(Σ_{t≥y} π'_t) = Σ_{s≥x, s≥y} π'_s = Σ_{s≥x∨y} π'_s = (x ∨ y)'.
Hence the linear transformation θ : A(L) → A'(L) defined by θ(x) = x' is an algebra isomorphism. Since θ(π_x) = π'_x, it follows that π_x π_y = δ_xy π_x. □
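To make Theorem 2.3 concrete, here is a small Python check (ours, not from the notes) for the boolean algebra L = B_2: it computes µ by the defining recursion, forms the elements π_x of (8) as coefficient vectors, and verifies that they multiply as orthogonal idempotents. The helper names are ad hoc.

    from itertools import combinations
    from fractions import Fraction

    # The lattice L = B_2: subsets of {1,2} ordered by inclusion, join = union.
    L = [frozenset(S) for k in range(3) for S in combinations((1, 2), k)]
    leq = lambda x, y: x <= y
    join = lambda x, y: x | y

    # Moebius function by the recursion mu(x,x) = 1, mu(x,y) = -sum of mu(x,z), x <= z < y.
    mu = {}
    for x in L:
        for y in L:                      # L is listed by size, so every z < y comes first
            if leq(x, y):
                mu[(x, y)] = 1 if x == y else -sum(
                    mu[(x, z)] for z in L if leq(x, z) and leq(z, y) and z != y)

    def idempotent(x):
        """The element pi_x of equation (8), as a coefficient vector indexed by L."""
        return {y: Fraction(mu[(x, y)]) for y in L if leq(x, y)}

    def multiply(u, v):
        """Product in A(L), extending x * y = x v y bilinearly."""
        out = {z: Fraction(0) for z in L}
        for x, cx in u.items():
            for y, cy in v.items():
                out[join(x, y)] += cx * cy
        return out

    # pi_x * pi_y = delta_{xy} pi_x  (Theorem 2.3)
    for x in L:
        for y in L:
            prod = multiply(idempotent(x), idempotent(y))
            target = idempotent(x) if x == y else {}
            assert all(prod[z] == target.get(z, 0) for z in L)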
Note. The algebra A(L) has a multiplicative identity, viz., 1 = 0̂ = Σ_{x∈L} π_x.
Proof of Theorem 2.2. Let char(K) = 0, e.g., K = Q. For any x ∈ L, we have in A(L) that
0̂ − x = Σ_{y≥0̂} π_y − Σ_{y≥x} π_y = Σ_{y≱x} π_y.
Hence by the orthogonality of the π_y's we have
∏_{x∈X} (0̂ − x) = Σ_y π_y,
where y ranges over all elements of L satisfying y ≱ x for all x ∈ X. By hypothesis, the only such element is 0̂. Hence
∏_{x∈X} (0̂ − x) = π_{0̂}.
If we now expand both sides as linear combinations of elements of L and equate coefficients of 1̂, the result follows. □
Note. In a finite lattice L, an atom is an element covering 0̂. Let T be the set
of atoms of L. Then a set X ⊆ L − {0̂} satisfies the hypotheses of Theorem 2.2 if and only if T ⊆ X. Thus the simplest choice of X is just X = T.
Example 2.5. Let L = B_n, the boolean algebra of all subsets of [n]. Let X = T = {{i} : i ∈ [n]}. Then N_0 = N_1 = · · · = N_{n−1} = 0 and N_n = 1. Hence µ(0̂, 1̂) = (−1)^n, agreeing with Proposition 1.2.
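This computation is easy to confirm by brute force. The Python sketch below (the function name is ours) enumerates the k-subsets of X = T in B_n and checks that N_0 − N_1 + N_2 − · · · = (−1)^n for small n.

    from itertools import combinations

    def crosscut_sum(n):
        """N_0 - N_1 + N_2 - ... for L = B_n with X = {{1}, ..., {n}};
        the join of a set of atoms is their union, and 1-hat is {1, ..., n}."""
        atoms = [frozenset([i]) for i in range(1, n + 1)]
        top = frozenset(range(1, n + 1))
        total = 0
        for k in range(n + 1):
            N_k = sum(1 for S in combinations(atoms, k)
                      if frozenset().union(*S) == top)
            total += (-1) ** k * N_k
        return total

    for n in range(1, 7):
        assert crosscut_sum(n) == (-1) ** n      # mu(0-hat, 1-hat) = (-1)^n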
We will use the Crosscut Theorem to obtain a formula for the characteristic
polynomial of an arrangement A. Extending slightly the definition of a central
arrangement, call any subset B of A central if ⋂_{H∈B} H ≠ ∅. The following result
is due to Hassler Whitney for linear arrangements. Its easy extension to arbitrary
arrangements appears in [13, Lemma 2.3.8].
Theorem 2.4. (Whitney’s theorem) Let A be an arrangement in an n-dimensional
vector space. Then
(10)  ψ_A(t) = Σ_{B⊆A, B central} (−1)^{#B} t^{n−rank(B)}.
Example 2.6. Let A be the arrangement in R^2 shown below.

[Figure: four lines labeled a, b, c, d.]
The following table shows all central subsets B of A and the values of #B and
rank(B).
B      #B   rank(B)
∅       0      0
a       1      1
b       1      1
c       1      1
d       1      1
ac      2      2
ad      2      2
bc      2      2
bd      2      2
cd      2      2
acd     3      2

It follows that ψ_A(t) = t^2 − 4t + (5 − 1) = t^2 − 4t + 4.
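Formula (10) can also be evaluated mechanically. The Python sketch below reproduces this example; the coordinates chosen for the four lines (a: x = 0, b: x = 1, c: y = 0, d: x + y = 0) are a hypothetical choice consistent with the table above, not taken from the text.

    from itertools import combinations
    from sympy import Matrix, symbols, expand

    t = symbols('t')
    n = 2
    # each hyperplane is recorded as (normal vector, constant): normal . x = constant
    hyperplanes = {'a': ([1, 0], 0), 'b': ([1, 0], 1),
                   'c': ([0, 1], 0), 'd': ([1, 1], 0)}

    def central_and_rank(B):
        """Is the intersection over B nonempty, and what is rank(B)?"""
        if not B:
            return True, 0
        A = Matrix([hyperplanes[h][0] for h in B])
        b = Matrix([hyperplanes[h][1] for h in B])
        return A.row_join(b).rank() == A.rank(), A.rank()

    chi = 0
    for k in range(len(hyperplanes) + 1):
        for B in combinations(sorted(hyperplanes), k):
            central, rk = central_and_rank(B)
            if central:
                chi += (-1) ** len(B) * t ** (n - rk)

    print(expand(chi))        # t**2 - 4*t + 4, as computed above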
Proof of Theorem 2.4. Let z ∈ L(A). Let
Λ_z = {x ∈ L(A) : x ≤ z},
the principal order ideal generated by z. Recall the definition
A_z = {H ∈ A : H ≤ z (i.e., z ⊆ H)}.
By the Crosscut Theorem (Theorem 2.2), we have
µ(z) = Σ_k (−1)^k N_k(z),
where N_k(z) is the number of k-subsets of A_z with join z. In other words,
µ(z) = Σ_{B⊆A_z, ⋂_{H∈B} H = z} (−1)^{#B}.
Note that z = ⋂_{H∈B} H implies that rank(B) = n − dim z. Now multiply both sides by t^{dim(z)} and sum over z to obtain equation (10). □
We have now assembled all the machinery necessary to prove the Deletion-Restriction Lemma (Lemma 2.2) for ψ_A(t).
Proof of Lemma 2.2. Let H_0 ∈ A be the hyperplane defining the triple (A, A', A''). Split the sum on the right-hand side of (10) into two sums, depending on whether H_0 ∉ B or H_0 ∈ B. In the former case we get
Σ_{H_0∉B⊆A, B central} (−1)^{#B} t^{n−rank(B)} = ψ_{A'}(t).
In the latter case, set B_1 = (B − {H_0})^{H_0}, a central arrangement in H_0 ≅ K^{n−1} and a subarrangement of A^{H_0} = A''. Since #B_1 = #B − 1 and rank(B_1) = rank(B) − 1, we get
Σ_{H_0∈B⊆A, B central} (−1)^{#B} t^{n−rank(B)} = Σ_{B_1⊆A'', B_1 central} (−1)^{#B_1+1} t^{(n−1)−rank(B_1)} = −ψ_{A''}(t),
and the proof follows. □
2.2. The number of regions
The next result is perhaps the first major theorem in the subject of hyperplane
arrangements, due to Thomas Zaslavsky in 1975.
Theorem 2.5. Let A be an arrangement in an n-dimensional real vector space.
Then
(11)  r(A) = (−1)^n ψ_A(−1),
(12)  b(A) = (−1)^{rank(A)} ψ_A(1).
First proof. Equation (11) holds for A = ∅, since r(∅) = 1 and ψ_∅(t) = t^n. By Lemmas 2.1 and 2.2, both r(A) and (−1)^n ψ_A(−1) satisfy the same recurrence, so the proof follows.
Now consider equation (12). Again it holds for A = ∅ since b(∅) = 1. (Recall that b(A) is the number of relatively bounded regions. When A = ∅, the entire ambient space R^n is relatively bounded.) Now
ψ_A(1) = ψ_{A'}(1) − ψ_{A''}(1).
Let d(A) = (−1)^{rank(A)} ψ_A(1). If rank(A) = rank(A') = rank(A'') + 1, then d(A) = d(A') + d(A''). If rank(A) = rank(A') + 1, then b(A) = 0 [why?] and L(A') ≅ L(A'') [why?]. Thus from Lemma 2.2 we have d(A) = 0. Hence in all cases b(A) and d(A) satisfy the same recurrence, so b(A) = d(A). □
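For instance, the arrangement of Example 2.6 has ψ_A(t) = t^2 − 4t + 4 and rank 2, so Theorem 2.5 gives r(A) = (−1)^2 ψ_A(−1) = 1 + 4 + 4 = 9 and b(A) = (−1)^2 ψ_A(1) = 1 − 4 + 4 = 1.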
Second proof. Our second proof of Theorem 2.5 is based on Möbius inversion
and some instructive topological considerations. For this proof we assume basic
knowledge of the Euler characteristic ξ(Δ) of a topological space Δ. (Standard notation is χ(Δ), but we write ξ here to avoid confusion with the characteristic polynomial.) In particular, if Δ is suitably decomposed into cells with f_i i-dimensional cells, then
(13)  ξ(Δ) = f_0 − f_1 + f_2 − · · · .
We take (13) as the definition of ξ(Δ). For “nice” spaces and decompositions, it is independent of the decomposition. In particular, ξ(R^n) = (−1)^n. Write R̄ for the closure of a region R ∈ R(A).
Definition 2.4. A (closed) face of a real arrangement A is a set ∅ ≠ F = R̄ ∩ x, where R ∈ R(A) and x ∈ L(A).
If we regard R̄ as a convex polyhedron (possibly unbounded), then a face of A is just a face of some R̄ in the usual sense of the face of a polyhedron, i.e., the intersection of R̄ with a supporting hyperplane. In particular, each R̄ is a face of A. The dimension of a face F is defined by
dim(F ) = dim(aff(F )),
where aff(F ) denotes the affine span of F . A k-face is a k-dimensional face of A.
For instance, the arrangement below has three 0-faces (vertices), nine 1-faces, and
seven 2-faces (equivalently, seven regions). Hence ξ(R2 ) = 3 − 9 + 7 = 1.
[Figure: an arrangement of three lines in general position in R^2.]
Write F(A) for the set of faces of A, and let relint denote relative interior. Then
R^n = ⊔_{F∈F(A)} relint(F),
where ⊔ denotes disjoint union. If f_k(A) denotes the number of k-faces of A, it follows that
(−1)^n = ξ(R^n) = f_0(A) − f_1(A) + f_2(A) − · · · .
Every k-face is a region of exactly one A^y for y ∈ L(A). Hence
f_k(A) = Σ_{y∈L(A), dim(y)=k} r(A^y).
Multiply by (−1)^k and sum over k to get
(−1)^n = ξ(R^n) = Σ_{y∈L(A)} (−1)^{dim(y)} r(A^y).
Replacing R^n by x ∈ L(A) gives
(−1)^{dim(x)} = ξ(x) = Σ_{y∈L(A), y≥x} (−1)^{dim(y)} r(A^y).
Möbius inversion yields
(−1)^{dim(x)} r(A^x) = Σ_{y∈L(A), y≥x} (−1)^{dim(y)} µ(x, y).
Putting x = R^n gives
(−1)^n r(A) = Σ_{y∈L(A)} (−1)^{dim(y)} µ(y) = ψ_A(−1),
thereby proving (11).
The relatively bounded case (equation (12)) is similar, but with one technical complication. We may assume that A is essential, since b(A) = b(ess(A)) and
ψ_A(t) = t^{dim(A)−dim(ess(A))} ψ_{ess(A)}(t).
In this case, the relatively bounded regions are actually bounded. Let
F_b(A) = {F ∈ F(A) : F is relatively bounded},
Δ = ⋃_{F∈F_b(A)} F.
The difficulty lies in computing ξ(Δ). Zaslavsky conjectured in 1975 that Δ is star-shaped, i.e., there exists x ∈ Δ such that for every y ∈ Δ, the line segment joining x and y lies in Δ. This would imply that Δ is contractible, and hence (since Δ is compact when A is essential) ξ(Δ) = 1. A counterexample to Zaslavsky’s conjecture appears as an exercise in [5, Exer. 4.29], but nevertheless Björner and Ziegler showed that Δ is indeed contractible. (See [5, Thm. 4.5.7(b)] and Lecture 1, Exercise 7.) The argument just given for r(A) now carries over mutatis mutandis to b(A). There is also a direct argument that ξ(Δ) = 1, circumventing the need to show that Δ is contractible. We will omit proving here that ξ(Δ) = 1. □

[Figure 2. Two arrangements with the same intersection poset; the lines are labeled a, b, c, d.]
Corollary 2.1. Let A be a real arrangement. Then r(A) and b(A) depend only on
L(A).
Figure 2 shows two arrangements in R2 with different “face structure” but
the same L(A). The first arrangement has for instance one triangular and one
quadrilateral face, while the second has two triangular faces. Both arrangements,
however, have ten regions and two bounded regions.
We now give two basic examples of arrangements and the computation of their
characteristic polynomials.
Proposition 2.4. (general position) Let A be an n-dimensional arrangement of m
hyperplanes in general position. Then
ψ_A(t) = t^n − m t^{n−1} + \binom{m}{2} t^{n−2} − · · · + (−1)^n \binom{m}{n}.
In particular, if A is a real arrangement, then
r(A) = 1 + m + \binom{m}{2} + · · · + \binom{m}{n},
b(A) = (−1)^n ( 1 − m + \binom{m}{2} − · · · + (−1)^n \binom{m}{n} ) = \binom{m−1}{n}.
Proof. Every B ⊆ A with #B ≤ n defines an element x_B = ⋂_{H∈B} H of L(A). Hence L(A) is a truncated boolean algebra:
L(A) ≅ {S ⊆ [m] : #S ≤ n},
ordered by inclusion. Figure 3 shows the case n = 2 and m = 4, i.e., four lines in general position in R^2. If x ∈ L(A) and rk(x) = k, then [0̂, x] ≅ B_k, a boolean algebra of rank k. By equation (4) there follows µ(x) = (−1)^k. Hence
ψ_A(t) = Σ_{S⊆[m], #S≤n} (−1)^{#S} t^{n−#S} = t^n − m t^{n−1} + · · · + (−1)^n \binom{m}{n}. □

[Figure 3. The truncated boolean algebra of rank 2 with four atoms.]
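As a quick sanity check of Proposition 2.4 (combined with Theorem 2.5), the Python sketch below evaluates the characteristic polynomial displayed above at t = −1 and t = 1 and compares the results with the stated binomial formulas; the helper name is ours.

    from math import comb

    def general_position_r_b(m, n):
        """r(A) and b(A) for m hyperplanes in general position in R^n (m >= n)."""
        chi = lambda t: sum((-1) ** k * comb(m, k) * t ** (n - k) for k in range(n + 1))
        return (-1) ** n * chi(-1), (-1) ** n * chi(1)   # rank(A) = n here

    for m in range(1, 9):
        for n in range(1, m + 1):
            r, b = general_position_r_b(m, n)
            assert r == sum(comb(m, k) for k in range(n + 1))
            assert b == comb(m - 1, n)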
Note. Arrangements whose hyperplanes are in general position were formerly
called free arrangements. Now, however, free arrangements have another meaning
discussed in the note following Example 4.11.
Our second example concerns generic translations of the hyperplanes of a linear arrangement. Let L_1, . . . , L_m be linear forms, not necessarily distinct, in the variables v = (v_1, . . . , v_n) over the field K. Let A be defined by
L_1(v) = a_1, . . . , L_m(v) = a_m,
where a_1, . . . , a_m are generic elements of K. This means that if H_i = ker(L_i(v) − a_i), then
H_{i_1} ∩ · · · ∩ H_{i_k} ≠ ∅ ⟺ L_{i_1}, . . . , L_{i_k} are linearly independent.
For instance, if K = R and L1 , . . . , Lm are defined over Q, then a1 , . . . , am are
generic whenever they are linearly independent over Q.
[Figure: a nongeneric and a generic translation of a linear arrangement.]
It follows that if x = H_{i_1} ∩ · · · ∩ H_{i_k} ∈ L(A), then [0̂, x] ≅ B_k. Hence
ψ_A(t) = Σ_B (−1)^{#B} t^{n−#B},
where B ranges over all linearly independent subsets of A. (We say that a set of hyperplanes is linearly independent if their normals are linearly independent.) Thus ψ_A(t), or more precisely (−t)^n ψ_A(−1/t), is the generating function for linearly independent subsets of L_1, . . . , L_m according to their number of elements. For instance, if A is given by Figure 2 (either arrangement) then the linearly independent subsets of hyperplanes are ∅, a, b, c, d, ac, ad, bc, bd, cd, so ψ_A(t) = t^2 − 4t + 5.
Consider the more interesting example x_i − x_j = a_{ij}, 1 ≤ i < j ≤ n, where the a_{ij} are generic. We could call this arrangement the generic braid arrangement G_n. Identify the hyperplane x_i − x_j = a_{ij} with the edge ij on the vertex set [n]. Thus a subset B ⊆ G_n corresponds to a simple graph G_B on [n]. (“Simple” means that there is at most one edge between any two vertices, and no edge from a vertex to itself.) It is easy to see that B is linearly independent if and only if the graph G_B has no cycles, i.e., is a forest. Hence we obtain the interesting formula
(14)  ψ_{G_n}(t) = Σ_F (−1)^{e(F)} t^{n−e(F)},
where F ranges over all forests on [n] and e(F) denotes the number of edges of F. For instance, the isomorphism types of forests (with the number of distinct labelings written below the forest) on four vertices are given by Figure 4.

[Figure 4. The six isomorphism types of forests on four vertices, with 1, 6, 3, 12, 4, and 12 distinct labelings.]

Hence
ψ_{G_4}(t) = t^4 − 6t^3 + 15t^2 − 16t.
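Equation (14) is easy to verify by machine for n = 4. The following Python sketch (names such as is_forest are ours) enumerates the acyclic edge subsets on the vertex set [4] and sums their contributions.

    from itertools import combinations
    from sympy import symbols, expand

    t = symbols('t')

    def is_forest(n, edges):
        """Acyclicity test via union-find on the vertices 1, ..., n."""
        parent = list(range(n + 1))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for i, j in edges:
            ri, rj = find(i), find(j)
            if ri == rj:
                return False              # the edge ij would close a cycle
            parent[ri] = rj
        return True

    def chi_generic_braid(n):
        all_edges = list(combinations(range(1, n + 1), 2))
        chi = 0
        for k in range(len(all_edges) + 1):
            for F in combinations(all_edges, k):
                if is_forest(n, F):
                    chi += (-1) ** k * t ** (n - k)
        return expand(chi)

    print(chi_generic_braid(4))           # t**4 - 6*t**3 + 15*t**2 - 16*t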
Equation (11) can be rewritten as
r(A) = Σ_{x∈L(A)} (−1)^{rk(x)} µ(x).
(Theorem 3.10 will show that (−1)rk(x) µ(x) > 0, so we could also write |µ(x)| for
this quantity.) It is easy to extend this result to count faces of A of all dimensions,
not just the top dimension n. Let fk (A) denote the number of k-faces of the real
arrangement A.
Theorem 2.6. We have
(15)  f_k(A) = Σ_{x≤y in L(A), dim(x)=k} (−1)^{dim(x)−dim(y)} µ(x, y)
(16)        = Σ_{x≤y in L(A), dim(x)=k} |µ(x, y)|.
Proof. As mentioned above, every face F is a region of a unique A^x for x ∈ L(A), viz., x = aff(F). In particular, dim(F) = dim(x). Hence if dim(F) = k, then r(A^x) is the number of k-faces of A contained in x. By Theorem 2.5 and equation (7) we get
r(A^x) = Σ_{y≥x} (−1)^{dim(y)−dim(x)} µ(x, y),
where we are dealing with the poset L(A). Summing over all x ∈ L(A) of dimension k yields (15), and (16) then follows from Theorem 3.10 below. □
2.3. Graphical arrangements
There are close connections between certain invariants of a graph G and an associated arrangement A_G. Let G be a simple graph on the vertex set [n]. Let E(G)
denote the set of edges of G, regarded as two-element subsets of [n]. Write ij for
the edge {i, j}.
Definition 2.5. The graphical arrangement A_G in K^n is the arrangement
x_i − x_j = 0, ij ∈ E(G).
Thus a graphical arrangement is simply a subarrangement of the braid arrangement B_n. If G = K_n, the complete graph on [n] (with all possible edges ij), then A_{K_n} = B_n.
Definition 2.6. A coloring of a graph G on [n] is a map κ : [n] → P. The coloring κ is proper if κ(i) ≠ κ(j) whenever ij ∈ E(G). If q ∈ P then let ψ_G(q) denote the number of proper colorings κ : [n] → [q] of G, i.e., the number of proper colorings of G whose colors come from 1, 2, . . . , q. The function ψ_G is called the chromatic polynomial of G.
For instance, suppose that G is the complete graph K_n. A proper coloring κ : [n] → [q] is obtained by choosing a vertex, say 1, and coloring it in q ways. Then choose another vertex, say 2, and color it in q − 1 ways, etc., obtaining
(17)  ψ_{K_n}(q) = q(q − 1) · · · (q − n + 1).
A similar argument applies to the graph G of Figure 5. There are q ways to color vertex 1, then q − 1 to color vertex 2, then q − 1 to color vertex 3, etc., obtaining
ψ_G(q) = q(q − 1)(q − 1)(q − 2)(q − 1)(q − 1)(q − 2)(q − 2)(q − 3) = q(q − 1)^4 (q − 2)^3 (q − 3).
Unlike the case of the complete graph, in order to obtain this nice product formula one factor at a time, only certain orderings of the vertices are suitable. It is not always possible to evaluate the chromatic polynomial “one vertex at a time.” For instance, let H be the 4-cycle of Figure 5. If a proper coloring κ : [4] → [q] satisfies κ(1) = κ(3), then there are q choices for κ(1), then q − 1 choices each for κ(2) and κ(4). On the other hand, if κ(1) ≠ κ(3), then there are q choices for κ(1), then q − 1 choices for κ(3), and then q − 2 choices each for κ(2) and κ(4). Hence
ψ_H(q) = q(q − 1)^2 + q(q − 1)(q − 2)^2 = q(q − 1)(q^2 − 3q + 3).
For further information on graphs whose chromatic polynomial can be evaluated one vertex at a time, see Corollary 4.10 and the note following it.
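This computation can be confirmed by brute force. In the Python sketch below we read the 4-cycle H of Figure 5 as having edge set {12, 23, 34, 14} (an assumption consistent with the discussion above, in which vertices 1 and 3 are nonadjacent) and count proper colorings directly for small q.

    from itertools import product

    def count_proper_colorings(n, edges, q):
        """Count maps kappa: [n] -> [q] with kappa(i) != kappa(j) for every edge ij."""
        return sum(1 for kappa in product(range(q), repeat=n)
                   if all(kappa[i - 1] != kappa[j - 1] for i, j in edges))

    H = [(1, 2), (2, 3), (3, 4), (1, 4)]          # the 4-cycle (assumed edge set)
    for q in range(1, 7):
        assert count_proper_colorings(4, H, q) == q * (q - 1) * (q * q - 3 * q + 3)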
It is easy to see directly that ψ_G(q) is a polynomial function of q. Let e_i(G) denote the number of surjective proper colorings κ : [n] → [i] of G. We can choose an arbitrary proper coloring κ : [n] → [q] by first choosing the image κ([n]), say of size i, in \binom{q}{i} ways, and then choosing κ in e_i ways. Hence
(18)  ψ_G(q) = Σ_{i=0}^{n} e_i \binom{q}{i}.
[Figure 5. Two graphs: the graph G, on nine vertices, and the 4-cycle H, on four vertices.]
Since \binom{q}{i} = q(q − 1) · · · (q − i + 1)/i!, a polynomial in q (of degree i), we see that ψ_G(q) is a polynomial. We therefore write ψ_G(t), where t is an indeterminate. Moreover, any surjection (= bijection) κ : [n] → [n] is proper. Hence e_n = n!. It follows from equation (18) that ψ_G(t) is monic of degree n. Using more sophisticated methods we will later derive further properties of the coefficients of ψ_G(t).
Theorem 2.7. For any graph G, we have ψ_{A_G}(t) = ψ_G(t).
First proof. The first proof is based on deletion-restriction (which in the
context of graphs is called deletion-contraction). Let e = ij ∈ E(G). Let G − e
(also denoted G\e) denote the graph G with edge e deleted, and let G/e denote G
with the edge e contracted to a point and all multiple edges replaced by a single
edge (i.e., whenever there is more than one edge between two vertices, replace these
edges by a single edge). (In some contexts we want to keep track of multiple edges,
but they are irrelevant in regard to proper colorings.)
[Figure: a graph G with a distinguished edge e, together with G − e and G/e.]
Let H_0 ∈ A = A_G be the hyperplane x_i = x_j. It is clear that A − {H_0} = A_{G−e}. We claim that
(19)  A^{H_0} = A_{G/e},
so by Deletion-Restriction (Lemma 2.2) we have
ψ_{A_G}(t) = ψ_{A_{G−e}}(t) − ψ_{A_{G/e}}(t).
To prove (19), define an affine isomorphism φ : H_0 → R^{n−1} by
(x_1, x_2, . . . , x_n) ↦ (x_1, . . . , x_i, . . . , x̂_j, . . . , x_n),
where x̂_j denotes that the jth coordinate is omitted. (Hence the coordinates in R^{n−1} are 1, 2, . . . , ĵ, . . . , n.) Write H_{ab} for the hyperplane x_a = x_b of A. If neither of a, b is equal to i or j, then φ(H_{ab} ∩ H_0) is the hyperplane x_a = x_b in R^{n−1}. If a ≠ i, j then φ(H_{ia} ∩ H_0) = φ(H_{aj} ∩ H_0), the hyperplane x_a = x_i in R^{n−1}. Hence φ defines an isomorphism between A^{H_0} and the arrangement A_{G/e} in R^{n−1}, proving (19).
Let n• denote the graph with n vertices and no edges, and let ∅ denote the empty arrangement in R^n. The theorem will be proved by induction (using Lemma 2.2) if we show:
(a) Initialization: ψ_{n•}(t) = ψ_∅(t);
(b) Deletion-contraction:
(20)  ψ_G(t) = ψ_{G−e}(t) − ψ_{G/e}(t).
To prove (a), note that both sides are equal to t^n. To prove (b), observe that ψ_{G−e}(q) is the number of colorings κ : [n] → [q] of G that are proper except possibly κ(i) = κ(j), while ψ_{G/e}(q) is the number of colorings κ : [n] → [q] of G that are proper except that κ(i) = κ(j). □
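Equation (20), together with the initialization ψ_{n•}(t) = t^n, yields a simple recursive algorithm for ψ_G(t). The Python sketch below is one possible implementation (the graph representation and helper names are ours, not from the text): delete an edge, contract it while collapsing multiple edges, and recurse.

    from functools import lru_cache
    from sympy import symbols, expand, factor

    t = symbols('t')

    @lru_cache(maxsize=None)
    def _chrom(n, edges):
        if not edges:
            return t ** n                       # no edges: every coloring is proper
        e = next(iter(edges))
        i, j = sorted(e)
        deleted = edges - {e}                   # G - e, on the same vertex set
        def relabel(v):                         # contract e: merge j into i ...
            v = i if v == j else v
            return v - 1 if v > j else v        # ... and close the gap in the labels
        contracted = frozenset(frozenset(relabel(v) for v in f) for f in deleted)
        return _chrom(n, deleted) - _chrom(n - 1, contracted)

    def chromatic(n, edge_list):
        """psi_G(t) for the simple graph on vertices 1..n with the given edges."""
        return expand(_chrom(n, frozenset(frozenset(e) for e in edge_list)))

    # the 4-cycle again: t*(t - 1)*(t**2 - 3*t + 3), as computed earlier
    print(factor(chromatic(4, [(1, 2), (2, 3), (3, 4), (1, 4)])))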
Our second proof of Theorem 2.7 is based on Möbius inversion. We first obtain
a combinatorial description of the intersection lattice L(A_G). Let H_{ij} denote the hyperplane x_i = x_j as above, and let F ⊆ E(G). Consider the element X = ⋂_{ij∈F} H_{ij} of L(A_G). Thus
(x_1, . . . , x_n) ∈ X ⟺ x_i = x_j whenever ij ∈ F.
Let C_1, . . . , C_k be the connected components of the spanning subgraph G_F of G with edge set F. (A subgraph of G is spanning if it contains all the vertices of G. Thus if the edges of F do not span all of G, we need to include all remaining vertices as isolated vertices of G_F.) If i, j are vertices of some C_m, then there is a path from i to j whose edges all belong to F. Hence x_i = x_j for all (x_1, . . . , x_n) ∈ X. On the other hand, if i and j belong to different C_m's, then there is no such path. Let
F̄ = {e = ij ∈ E(G) : i, j ∈ V(C_m) for some m},
where V(C_m) denotes the vertex set of C_m. Figure 6 illustrates a graph G with a set F of edges indicated by thickening. The set F̄ is shown below G, with the additional edges of F̄ − F drawn as dashed lines.

[Figure 6. A graph G with edge subset F and closure F̄.]
A partition β of a finite set S is a collection {B1 , . . . , Bk } of subsets of S, called
blocks, that are nonempty, pairwise disjoint, and whose union is S. The set of all
partitions of S is denoted ΓS , and when S = [n] we write simply Γn for Γ[n] . It
follows from the above discussion that the elements X_β of L(A_G) correspond to the
connected partitions of V(G), i.e., the partitions β = {B_1, . . . , B_k} of V(G) = [n] such that the restriction of G to each block B_i is connected. Namely,
X_β = {(x_1, . . . , x_n) ∈ K^n : i, j ∈ B_m for some m ⟹ x_i = x_j}.
We have X_β ≤ X_π in L(A) if and only if every block of β is contained in a block of π. In other words, β is a refinement of π. This refinement order is the “standard” ordering on Γ_n, so L(A_G) is isomorphic to an induced subposet L_G of Γ_n, called the bond lattice or lattice of contractions of G. (“Induced” means that if β ≤ π in Γ_n and β, π ∈ L(A_G), then β ≤ π in L(A_G).) In particular, Γ_n ≅ L(A_{K_n}). Note that in general L_G is not a sublattice of Γ_n, but only a sub-join-semilattice of Γ_n [why?]. The bottom element 0̂ of L_G is the partition of [n] into n one-element blocks, while the top element 1̂ is the partition into one block. The case G = K_n shows that the intersection lattice L(B_n) of the braid arrangement B_n is isomorphic to the full partition lattice Γ_n. Figure 7 shows a graph G and its bond lattice L_G (singleton blocks are omitted from the labels of the elements of L_G).

[Figure 7. A graph G on the vertex set {a, b, c, d} and its bond lattice L_G, whose elements above 0̂ are labeled ab, ac, bc, bd, cd, abc, abd, acd, bcd, ab−cd, ac−bd, and abcd.]
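Reading the graph G of Figure 7 as the graph on {a, b, c, d} whose edges are the atoms shown there (ab, ac, bc, bd, cd), which is our reading of the figure rather than something stated in the text, the bond lattice can be enumerated directly. The Python sketch below lists the connected partitions and recovers 13 elements: the 12 labeled in Figure 7 plus the bottom element 0̂.

    V = ['a', 'b', 'c', 'd']
    E = [('a', 'b'), ('a', 'c'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w); adj[w].add(u)

    def partitions(s):
        """All set partitions of the list s, as lists of blocks."""
        if not s:
            yield []
            return
        first, rest = s[0], s[1:]
        for p in partitions(rest):
            for k in range(len(p)):
                yield p[:k] + [p[k] + [first]] + p[k + 1:]
            yield [[first]] + p

    def connected(block):
        """Is the subgraph of G induced on the block connected?"""
        block = set(block)
        seen, stack = set(), [next(iter(block))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend((adj[v] & block) - seen)
        return seen == block

    bond = [p for p in partitions(V) if all(connected(b) for b in p)]
    print(len(bond))          # 13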
Second proof of Theorem 2.7. Let β ∈ L_G. For q ∈ P define ψ_β(q) to be the number of colorings κ : [n] → [q] of G satisfying:
• If i, j are in the same block of β, then κ(i) = κ(j).
• If i, j are in different blocks of β and ij ∈ E(G), then κ(i) ≠ κ(j).
Given any κ : [n] → [q], there is a unique π ∈ L_G such that κ is enumerated by ψ_π(q). Moreover, κ will be constant on the blocks of some β ∈ L_G if and only if π ≥ β in L_G. Hence
q^{|β|} = Σ_{π≥β} ψ_π(q) for all β ∈ L_G,
where |β| denotes the number of blocks of β. By Möbius inversion,
ψ_β(q) = Σ_{π≥β} q^{|π|} µ(β, π),
where µ denotes the Möbius function of L_G. Let β = 0̂. We get
(21)  ψ_G(q) = ψ_{0̂}(q) = Σ_{π∈L_G} µ(π) q^{|π|}.
It is easily seen that |π| = dim X_π, so comparing equation (21) with Definition 1.3 shows that ψ_G(t) = ψ_{A_G}(t). □
Corollary 2.2. The characteristic polynomial of the braid arrangement B_n is given by
ψ_{B_n}(t) = t(t − 1) · · · (t − n + 1).
Proof. Since B_n = A_{K_n} (the graphical arrangement of the complete graph K_n), we have from Theorem 2.7 that ψ_{B_n}(t) = ψ_{K_n}(t). The proof follows from equation (17). □
There is a further invariant of a graph G that is closely connected with the
graphical arrangement A_G.
Definition 2.7. An orientation o of a graph G is an assignment of a direction i → j or j → i to each edge ij of G. A directed cycle of o is a sequence of vertices i_0, i_1, . . . , i_k of G such that i_0 → i_1 → i_2 → · · · → i_k → i_0 in o. An orientation o is acyclic if it contains no directed cycles.
A graph G with no loops (edges from a vertex to itself) thus has 2^{#E(G)} orientations. Let R ∈ R(A_G), and let (x_1, . . . , x_n) ∈ R. In choosing R, we have specified for all ij ∈ E(G) whether x_i < x_j or x_i > x_j. Indicate by an arrow i → j that x_i < x_j, and by j → i that x_i > x_j. In this way the region R defines an orientation o_R of G. Clearly if R ≠ R', then o_R ≠ o_{R'}. Which orientations can arise in this way?
Proposition 2.5. Let o be an orientation of G. Then o = o_R for some R ∈ R(A_G)
if and only if o is acyclic.
Proof. If o_R had a cycle i_1 → i_2 → · · · → i_k → i_1, then a point (x_1, . . . , x_n) ∈ R would satisfy x_{i_1} < x_{i_2} < · · · < x_{i_k} < x_{i_1}, which is absurd. Hence o_R is acyclic.
Conversely, let o be an acyclic orientation of G. First note that o must have a sink, i.e., a vertex with no arrows pointing out. To see this, walk along the edges of o by starting at any vertex and following arrows. Since o is acyclic, we can never return to a vertex, so the process will end in a sink. Let j_n be a sink vertex of o. When we remove j_n from o the remaining orientation is still acyclic, so it contains a sink j_{n−1}. Continuing in this manner, we obtain an ordering j_1, j_2, . . . , j_n of [n] such that j_i is a sink of the restriction of o to j_1, . . . , j_i. Hence if x_1, . . . , x_n ∈ R satisfy x_{j_1} < x_{j_2} < · · · < x_{j_n}, then the region R ∈ R(A_G) containing (x_1, . . . , x_n) satisfies o = o_R. □
Note. The transitive, reflexive closure ō of an acyclic orientation o is a partial
tial order. The construction of the ordering j1 , j2 , . . . , jn above is equivalent to
constructing a linear extension of o.
Let AO(G) denote the set of acyclic orientations of G. We have constructed a
bijection between AO(G) and R(A_G). Hence from Theorem 2.5 we conclude:
Corollary 2.3. For any graph G with n vertices, we have #AO(G) = (−1)^n ψ_G(−1).
Corollary 2.3 was first proved by Stanley in 1973 by a “direct” argument based
on deletion-contraction (see Exercise 7). The proof we have just given based on
arrangements is due to Greene and Zaslavsky in 1983.
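As a check of Corollary 2.3, the Python sketch below enumerates all 2^4 orientations of the 4-cycle H of Figure 5 (edges 12, 23, 34, 14, as before) and counts the acyclic ones by repeatedly deleting sinks, the device used in the proof of Proposition 2.5. The count agrees with (−1)^4 ψ_H(−1) = (−1)(−2)(1 + 3 + 3) = 14.

    from itertools import product

    edges = [(1, 2), (2, 3), (3, 4), (1, 4)]      # the 4-cycle H (assumed edge set)

    def has_directed_cycle(arcs):
        """Detect a directed cycle by repeatedly deleting sinks."""
        arcs = set(arcs)
        vertices = {v for a in arcs for v in a}
        while vertices:
            sinks = {v for v in vertices if not any(u == v for u, w in arcs)}
            if not sinks:
                return True               # every remaining vertex has an out-arc
            vertices -= sinks
            arcs = {(u, w) for u, w in arcs if u not in sinks and w not in sinks}
        return False

    acyclic = sum(1 for choice in product((False, True), repeat=len(edges))
                  if not has_directed_cycle(
                      [(i, j) if c else (j, i) for (i, j), c in zip(edges, choice)]))
    print(acyclic)                        # 14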
Note. Given a graph G on n vertices, let A^#_G be the arrangement defined by
x_i − x_j = a_{ij}, ij ∈ E(G),
where the a_{ij}'s are generic. Just as we obtained equation (14) (the case G = K_n), we have
ψ_{A^#_G}(t) = Σ_F (−1)^{e(F)} t^{n−e(F)},
where F ranges over all spanning forests of G.
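For instance, if G is the 4-cycle H of Figure 5, then every proper subset of the edge set of G is a spanning forest, so ψ_{A^#_H}(t) = t^4 − 4t^3 + 6t^2 − 4t.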