Chapter 21. Upper bounds using multilinear polynomials
For $I \subset [n]$ let $x_I = \prod_{i \in I} x_i \in \mathbb{R}^{\Omega}$; in particular, $x_{\emptyset} = 1$. Then, for $J \subset [n]$, it follows that
$$x_I(J) = \begin{cases} 1 & \text{if } I \subset J,\\ 0 & \text{otherwise.} \end{cases} \tag{21.1}$$
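The indicator behavior in (21.1) is easy to check directly; a minimal Python sketch (the function name `x_I` is ours, chosen to mirror the notation):

```python
def x_I(I, J):
    """x_I evaluated at a set J, per (21.1): the monomial prod_{i in I} x_i
    equals 1 at the characteristic vector of J exactly when I is a subset of J."""
    return 1 if set(I) <= set(J) else 0

assert x_I({1, 3}, {1, 2, 3}) == 1   # I is a subset of J
assert x_I({1, 4}, {1, 2, 3}) == 0   # 4 does not lie in J
assert x_I(set(), {2}) == 1          # x_emptyset is identically 1
```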
Lemma 21.2. Let $f \in \mathbb{R}^{\Omega}$ be such that $f(I) \neq 0$ for all $I \subset [n]$ with $|I| \leq r$. Then the set $\{x_I f : |I| \leq r\} \subset \mathbb{R}^{\Omega}$ is linearly independent.
Proof. Suppose that there is a nontrivial linear combination
$$\sum_{|I| \leq r} \lambda_I x_I f = 0.$$
Let $I_0$ be an inclusion-minimal subset such that $\lambda_{I_0} \neq 0$. Substituting $I_0$ into the above linear combination gives $\sum_{|I| \leq r} \lambda_I x_I(I_0) f(I_0) = 0$. If $I \subsetneq I_0$, then $\lambda_I = 0$ by the minimality. If $I \not\subset I_0$, then $x_I(I_0) = 0$ by (21.1). Consequently we get $\lambda_{I_0} x_{I_0}(I_0) f(I_0) = 0$. Since $x_{I_0}(I_0) f(I_0) \neq 0$ we have $\lambda_{I_0} = 0$, a contradiction. $\square$
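Lemma 21.2 can be verified numerically for small parameters: collect the value vectors of the functions $x_I f$ on $\Omega = 2^{[n]}$ and check that the matrix has full row rank. A sketch, with an arbitrary choice of $f$ that is nonzero on all small sets (the choice $f(J) = |J| + 1$ is ours):

```python
from fractions import Fraction
from itertools import combinations

n, r = 4, 2
# Omega = 2^[n], each set J indexing one coordinate of a value vector
Omega = [frozenset(J) for k in range(n + 1)
         for J in combinations(range(1, n + 1), k)]

def x_I(I, J):                       # the indicator from (21.1)
    return 1 if I <= J else 0

def f(J):                            # any f with f(I) != 0 for |I| <= r
    return len(J) + 1

Is = [frozenset(I) for k in range(r + 1)
      for I in combinations(range(1, n + 1), k)]
rows = [[Fraction(x_I(I, J) * f(J)) for J in Omega] for I in Is]

def rank(rows):                      # Gaussian elimination over the rationals
    rows, rk = [row[:] for row in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        rows[rk] = [v / rows[rk][col] for v in rows[rk]]
        for i in range(len(rows)):
            if i != rk and rows[i][col]:
                fac = rows[i][col]
                rows[i] = [a - fac * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

assert rank(rows) == len(Is)         # 1 + 4 + 6 = 11 independent functions
```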
Let $\mathcal{F} = \{F_1, \dots, F_m\}$ be an $(n, k, L)$-system. For $x, y \in \mathbb{R}^n$ let $x \cdot y = \sum_{i=1}^{n} x_i y_i$ denote the standard inner product. For $1 \leq i \leq m$ let $v_i \in \mathbb{R}^n$ be the characteristic vector of $F_i$, and define $f_i \in \mathbb{R}^{\Omega}$ by
$$f_i(x) = \prod_{l \in L} (v_i \cdot x - l).$$
Since $v_i \cdot v_j = |F_i \cap F_j|$,
$$f_i(v_j) \begin{cases} = 0 & \text{if } i \neq j,\\ \neq 0 & \text{if } i = j. \end{cases} \tag{21.2}$$
This shows that the $f_i$ are linearly independent. We claim that we can add $\sum_{i=0}^{s-1} \binom{n}{i}$ more functions while keeping the linear independence.
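The pattern (21.2) can be checked on a concrete system; the small $(6, 3, \{1\})$-system below is our own illustration, not an example from the text:

```python
# a small (n, k, L)-system: 3-sets in [6] with pairwise intersections of size 1
n, L = 6, [1]
F = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}, {2, 4, 6}]

def v(S):                                  # characteristic vector of S in [n]
    return [1 if i in S else 0 for i in range(1, n + 1)]

def f(vi, x):                              # f_i(x) = prod_{l in L} (v_i . x - l)
    dot = sum(a * b for a, b in zip(vi, x))
    out = 1
    for l in L:
        out *= dot - l
    return out

for i, Fi in enumerate(F):
    for j, Fj in enumerate(F):
        # (21.2): zero off the diagonal, nonzero on it
        assert (f(v(Fi), v(Fj)) != 0) == (i == j)
```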
Claim 21.3. Let $g = \sum_{j=1}^{n} x_j - k \in \mathbb{R}^{\Omega}$. The set of functions
$$\{f_i : 1 \leq i \leq m\} \cup \{x_I g : I \subset [n],\ |I| \leq s-1\} \tag{21.3}$$
in $\mathbb{R}^{\Omega}$ is linearly independent.
Proof. Consider a linear combination
$$\sum_{i=1}^{m} \lambda_i f_i + \sum_{|I| \leq s-1} \mu_I x_I g = 0. \tag{21.4}$$
Notice that the function $g = \sum_{j=1}^{n} x_j - k$ vanishes on $k$-element sets. So, for each $j$, substituting $v_j$ into (21.4) gives $\lambda_j = 0$ by (21.2). Thus only the second term in (21.4) remains, that is, $\sum_{|I| \leq s-1} \mu_I x_I g = 0$. Since $s \leq k$ and $g$ vanishes only on $k$-element sets, it follows that $g(I) \neq 0$ for all $I$ with $|I| \leq s-1$. Thus, by Lemma 21.2, the functions $x_I g$ ($|I| \leq s-1$) are linearly independent, and therefore $\mu_I = 0$ for all $I$. $\square$
In (21.3) there are $m + \sum_{i=0}^{s-1} \binom{n}{i}$ linearly independent polynomials of degree at most $s$. On the other hand, $\sum_{i=0}^{s} \binom{n}{i}$ is the dimension of the space of multilinear polynomials in $n$ variables of degree at most $s$. Thus it follows that $m + \sum_{i=0}^{s-1} \binom{n}{i} \leq \sum_{i=0}^{s} \binom{n}{i}$, that is, $m \leq \binom{n}{s}$. This completes the proof of Theorem 21.1.
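For tiny parameters the bound $m \leq \binom{n}{s}$ can also be confirmed by exhaustive search; a sketch, with parameters chosen small enough to enumerate (the choice $n = 5$, $k = 3$, $L = \{1\}$ is ours):

```python
from itertools import combinations
from math import comb

n, k, L = 5, 3, {1}
s = len(L)
ksets = [frozenset(c) for c in combinations(range(1, n + 1), k)]

def is_L_system(fam):
    # every pairwise intersection size must lie in L
    return all(len(A & B) in L for A, B in combinations(fam, 2))

bound = comb(n, s)                   # here C(5, 1) = 5
# no (5, 3, {1})-system has more than C(n, s) members
assert not any(is_L_system(fam) for fam in combinations(ksets, bound + 1))
```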
Frankl and Wilson proved the modular version of Theorem 21.1.
We write a ∈ L (mod p) if a ≡ l (mod p) for some l ∈ L.
Theorem 21.4 (Frankl–Wilson [67]). Let $n > k \geq s$ be positive integers, and let $p$ be a prime. Let $L \subset [0, p-1] = \{0, 1, \dots, p-1\}$ be a set of $s$ integers. Suppose that $\mathcal{F} \subset \binom{[n]}{k}$ satisfies the following conditions:
(i) $k \notin L \pmod{p}$.
(ii) If $F, F' \in \mathcal{F}$ with $F \neq F'$, then $|F \cap F'| \in L \pmod{p}$.
Then $|\mathcal{F}| \leq \binom{n}{s}$.
The condition that p is a prime cannot be dropped in general. We
will prove Theorem 21.4 in a slightly stronger form in Chapter 23; see
Theorem 23.3.
Exercise 21.5. Let $\mathcal{G} := \{G \in \binom{[m]}{11} : \{1,2,3\} \subset G\}$. Then we have $|G \cap G'| \in [3, 10]$ for distinct $G, G' \in \mathcal{G}$. Let $\mathcal{F} := \{\binom{G}{2} : G \in \mathcal{G}\}$ be a $k$-uniform family on $n := \binom{m}{2}$ vertices, where $k = \binom{11}{2} = 55 \equiv 1 \pmod{6}$. Verify that
$$|F \cap F'| \in \left\{ \binom{i}{2} : 3 \leq i \leq 10 \right\} \equiv \{0, 3, 4\} \pmod{6}$$
for distinct $F, F' \in \mathcal{F}$, and that $|\mathcal{F}| = |\mathcal{G}| = \binom{m-3}{8} = \Theta(n^4)$. Show that Theorem 21.4 fails if $p = 6$ and $s = 3$.
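The arithmetic behind this exercise is quick to confirm:

```python
from math import comb

k = comb(11, 2)
assert k == 55 and k % 6 == 1            # k = 55 is congruent to 1 mod 6

# possible intersection sizes C(i, 2) for 3 <= i <= 10, reduced mod 6
residues = {comb(i, 2) % 6 for i in range(3, 11)}
assert residues == {0, 3, 4}

# k mod 6 avoids every residue, as condition (i) of Theorem 21.4 requires
assert k % 6 not in residues
```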
Exercise 21.6. Prove Theorem 21.4 for the case s = 1 without
assuming that p is a prime. (Hint: Go through the proof of Theorem 21.1 using Z/pZ in place of R, where p is not necessarily a
prime.)
Exercise 21.7. Suppose that $\mathcal{F}$ satisfies the assumptions in Theorem 21.4 and, moreover, that
(iii) $k \notin [0, s-1] \pmod{p}$.
Then the proof of Theorem 21.1 still works almost verbatim. Examine the proof in this setting (using $\mathbb{F}_p$ in place of $\mathbb{R}$) and show that $|\mathcal{F}| \leq \binom{n}{s}$. (Hint: It follows from (iii) that $g = \sum_{j=1}^{n} x_j - k \in \mathbb{F}_p^{\Omega}$ satisfies $g(I) \neq 0$ for $|I| \leq s-1$. So we can apply Lemma 21.2.)
By taking p > n in the above exercise we can deduce Theorem 21.1.
21.2. Deza–Frankl–Singhi Theorem
Let $p$ be a prime and let $L \subset \mathbb{F}_p$. We say that a family of subsets $\mathcal{H} = \{C_1, \dots, C_m\} \subset 2^{[n]}$ is $(p, L)$-intersecting if it satisfies the following two conditions:
• $|C_i \cap C_j| \in L \pmod{p}$ for $1 \leq i < j \leq m$.
• $|C_i| \notin L \pmod{p}$ for $1 \leq i \leq m$.
Theorem 21.8 (Deza–Frankl–Singhi Theorem [24]). Let $s = |L|$. If a family $\mathcal{H} \subset 2^{[n]}$ is $(p, L)$-intersecting, then
$$|\mathcal{H}| \leq \binom{n}{0} + \binom{n}{1} + \cdots + \binom{n}{s}.$$
For the proof we need some preparation. As usual we identify $2^{[n]}$ and $\{0,1\}^n$ by the correspondence between a subset and its characteristic vector. A polynomial is said to be multilinear if it has degree at most 1 in each variable. Let $f$ be a polynomial in $n$ variables of degree $s$ over $\mathbb{F}_p$. Then there exists a unique multilinear polynomial $\bar{f}$ of degree at most $s$ such that
$$f(x) = \bar{f}(x)$$
for all $x \in \{0,1\}^n$. In fact, since $x^2 = x$ for $x \in \{0,1\}$ we get $\bar{f}$ just by replacing monomials in $f$ with the corresponding products of distinct variables. For example, if
$$f(x, y) = x^4 + 2x^3 y^2 + 3x y^5,$$
then we have
$$\bar{f}(x, y) = x + 5xy.$$
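The reduction $x^2 = x$ is mechanical: clamp every exponent to 1 and collect coefficients. A sketch that recovers the example above and confirms agreement on $\{0,1\}^2$ (the dictionary representation of polynomials, mapping exponent tuples to coefficients, is our own):

```python
import itertools

def multilinearize(poly):
    """Reduce a polynomial {exponent_tuple: coeff} using x^2 = x:
    every positive exponent is clamped to 1 and like terms are merged."""
    out = {}
    for exps, c in poly.items():
        key = tuple(min(e, 1) for e in exps)
        out[key] = out.get(key, 0) + c
    return {k: c for k, c in out.items() if c != 0}

# f(x, y) = x^4 + 2 x^3 y^2 + 3 x y^5, exponents stored as (deg_x, deg_y)
f = {(4, 0): 1, (3, 2): 2, (1, 5): 3}
fbar = multilinearize(f)             # x + 5xy

def evaluate(poly, pt):
    return sum(c * pt[0] ** e[0] * pt[1] ** e[1] for e, c in poly.items())

# f and its multilinearization agree at every 0/1 point
for pt in itertools.product([0, 1], repeat=2):
    assert evaluate(f, pt) == evaluate(fbar, pt)
```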
The set of multilinear polynomials in variables $x_1, \dots, x_n$ of degree at most $s$ constitutes a vector space $V$ over $\mathbb{F}_p$:
$$V = \operatorname{span}\left\{ \prod_{i \in I} x_i : |I| \leq s,\ I \subset [n] \right\}.$$
We have
$$\dim V = \sum_{i=0}^{s} \binom{n}{i}.$$
For example, if $n = 3$ and $s = 2$, then
$$V = \operatorname{span}\{1, x_1, x_2, x_3, x_1 x_2, x_1 x_3, x_2 x_3\}, \qquad \dim V = \binom{3}{0} + \binom{3}{1} + \binom{3}{2} = 7.$$
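The basis monomials and their count are easy to enumerate:

```python
from itertools import chain, combinations
from math import comb

n, s = 3, 2
# index sets I with |I| <= s, each standing for the monomial prod_{i in I} x_i
monomials = list(chain.from_iterable(
    combinations(range(1, n + 1), k) for k in range(s + 1)))
# (), (1,), (2,), (3,), (1,2), (1,3), (2,3)  --  the 7 basis monomials
assert len(monomials) == sum(comb(n, i) for i in range(s + 1)) == 7
```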
Proof of Theorem 21.8. Let $X = \{0,1\}^n$. To each subset $C_i$ ($1 \leq i \leq m$) we assign a polynomial $f_i : X \to \mathbb{F}_p$ by
$$f_i(x) := \prod_{l \in L} (x \cdot v_i - l),$$
where $v_i \in X$ denotes the characteristic vector of $C_i$. Then, by the $(p, L)$-intersecting property, we have
$$f_i(v_j) \begin{cases} = 0 & \text{if } i \neq j,\\ \neq 0 & \text{if } i = j. \end{cases}$$
Thus $f_1, \dots, f_m \in \mathbb{F}_p^X$ are independent over $\mathbb{F}_p$. Here is the plan for the proof. If we can find a vector space $V \subset \mathbb{F}_p^X$ satisfying
• $\operatorname{span}\{f_1, \dots, f_m\} \subset V$, and
• $\dim V = \sum_{i=0}^{s} \binom{n}{i}$,
then it follows that
$$m = \dim \operatorname{span}\{f_1, \dots, f_m\} \leq \dim V = \sum_{i=0}^{s} \binom{n}{i},$$
which will complete the proof.
By definition the polynomial $f_i$ can be written as
$$f_i(x) := (x \cdot v_i - l_1)(x \cdot v_i - l_2) \cdots (x \cdot v_i - l_s),$$
where $L = \{l_1, \dots, l_s\}$. Now let $x = (x_1, \dots, x_n)$. Then $f_i$ is a polynomial in $n$ variables $x_1, \dots, x_n$ of degree $s$, viewed as a map from $X = \{0,1\}^n$ to $\mathbb{F}_p$. Then we get the corresponding multilinear polynomial $\bar{f}_i$ of degree at most $s$. So $\bar{f}_i$ is an element of the vector space
$$V := \operatorname{span}\left\{ \prod_{i \in I} x_i : |I| \leq s,\ I \subset [n] \right\},$$
and the dimension is
$$\dim V = \sum_{i=0}^{s} \binom{n}{i}.$$
This is exactly what we needed. $\square$
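The independence argument can be replayed numerically: evaluate each $f_i$ on all of $X = \{0,1\}^n$ and compute the rank of the resulting matrix over $\mathbb{F}_p$. The tiny $(2, \{0\})$-intersecting family below (singletons: odd sizes, even pairwise intersections) is our own illustration:

```python
from itertools import product

def rank_mod_p(rows, p):
    """Row rank over F_p by Gaussian elimination (p must be prime)."""
    rows, rk = [[v % p for v in r] for r in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        inv = pow(rows[rk][col], p - 2, p)     # Fermat inverse
        rows[rk] = [v * inv % p for v in rows[rk]]
        for i in range(len(rows)):
            if i != rk and rows[i][col]:
                fac = rows[i][col]
                rows[i] = [(a - fac * b) % p for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

p, n, L = 2, 4, [0]
family = [{1}, {2}, {3}, {4}]          # (p, L)-intersecting for p = 2, L = {0}
vecs = [[1 if i in C else 0 for i in range(1, n + 1)] for C in family]

X = list(product([0, 1], repeat=n))
def f(v, x):                           # f_i as a function on X, values in F_p
    dot = sum(a * b for a, b in zip(v, x))
    out = 1
    for l in L:
        out = out * (dot - l) % p
    return out

rows = [[f(v, x) for x in X] for v in vecs]
assert rank_mod_p(rows, p) == len(family)   # the f_i are independent over F_p
```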
21.3. Snevily’s Theorem
We have seen some simple applications of multilinear polynomials. In this last section we present a more delicate usage of the method, due to Snevily, to show the following strong result, which includes some earlier results obtained by Fisher, Bose, de Bruijn and Erdős, Majumdar, Frankl and Wilson, Ramanan, and others.
Theorem 21.9 (Snevily's Theorem [102]). Let $L$ be a set of $s$ positive integers. If $\mathcal{F} \subset 2^{[n]}$ satisfies $|F \cap F'| \in L$ for all distinct $F, F' \in \mathcal{F}$, then $|\mathcal{F}| \leq \sum_{i=0}^{s} \binom{n-1}{i}$.
The upper bound for $|\mathcal{F}|$ is sharp. We give two examples. The first example has $L = [s]$ and $\mathcal{F} = \{F \subset [n] : 1 \in F,\ |F| \leq s+1\}$. The second one consists of $L = \{1\}$ and a Steiner system $S(2, 3, 7)$, that is, $\mathcal{F} \subset 2^{[7]}$ is defined by
$$\mathcal{F} = \{\{1,2,3\}, \{3,4,5\}, \{1,5,6\}, \{1,4,7\}, \{3,6,7\}, \{2,5,7\}, \{2,4,6\}\}.$$
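The second example can be verified directly:

```python
from itertools import combinations
from math import comb

F = [{1,2,3}, {3,4,5}, {1,5,6}, {1,4,7}, {3,6,7}, {2,5,7}, {2,4,6}]

# any two blocks of this S(2, 3, 7) meet in exactly one point, so L = {1}
assert all(len(A & B) == 1 for A, B in combinations(F, 2))

# |F| = 7 attains the bound sum_{i=0}^{s} C(n-1, i) with n = 7, s = 1
n, s = 7, 1
assert len(F) == sum(comb(n - 1, i) for i in range(s + 1))
```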
The idea of the proof is as follows. First we assign a polynomial to each member of the family $\mathcal{F}$ and show that these polynomials are linearly independent. The dimension of the space of the polynomials is $\sum_{i=0}^{s} \binom{n}{i}$. Then we show that we can still add $\sum_{i=0}^{s-1} \binom{n-1}{i}$ more polynomials to the same space without violating the linear independence. Thus we get
$$|\mathcal{F}| \leq \sum_{i=0}^{s} \binom{n}{i} - \sum_{i=0}^{s-1} \binom{n-1}{i} = \sum_{i=0}^{s} \binom{n-1}{i}.$$
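The last equality is Pascal's rule $\binom{n}{i} = \binom{n-1}{i} + \binom{n-1}{i-1}$ summed over $i$; a quick numerical confirmation:

```python
from math import comb

# check  sum_{i<=s} C(n,i) - sum_{i<=s-1} C(n-1,i) = sum_{i<=s} C(n-1,i)
for n in range(2, 12):
    for s in range(1, n):
        lhs = (sum(comb(n, i) for i in range(s + 1))
               - sum(comb(n - 1, i) for i in range(s)))
        rhs = sum(comb(n - 1, i) for i in range(s + 1))
        assert lhs == rhs
```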
Proof. To deal with multilinear polynomials we need some more notation. Let $X^* = \{x_1, x_2, \dots, x_n\}$ denote the set of $n$ variables, and let $\binom{X^*}{k}$ denote the set of multilinear monomials of degree $k$, that is,
$$\binom{X^*}{k} = \left\{ \prod_{i \in I} x_i : I \in \binom{[n]}{k} \right\},$$
where we understand $\binom{X^*}{0} = \{1\}$. Let $\sigma(X^*, k)$ denote the basic symmetric function of degree $k$ in $X^*$, that is, $\sum_{f \in \binom{X^*}{k}} f$. For example, if $X^* = \{x_1, x_2, x_3\}$ then $\sigma(X^*, 2) = x_1 x_2 + x_2 x_3 + x_1 x_3$. Finally, for a subset $F \subset [n]$ let $F^* = \{x_i : i \in F\}$ and let $v(F) = (v_1, \dots, v_n)$ be the characteristic vector of $F$, that is, $v_i = 1$ if $i \in F$ and $v_i = 0$ if $i \notin F$. Notice that if we substitute $v(F)$ into $\sigma(X^*, k)$ then we get the value $\binom{|F|}{k}$. Also, if we substitute $v(F')$, where $F' \subset [n]$, into $\sigma(F^*, k)$ then we get the value $\binom{|F \cap F'|}{k}$.
Let $L$ be a set of $s$ positive integers. We define a polynomial in $y$ of degree $s$ by
$$g(y) = \prod_{l \in L} (y - l).$$
We can uniquely determine $s+1$ real numbers $c_0, c_1, \dots, c_s$ by $g(y) = \sum_{i=0}^{s} c_i \binom{y}{i}$. Then for $x = (x_1, \dots, x_n)$ we define
$$g^*(x) = \sum_{i=0}^{s} c_i \sigma(X^*, i),$$
and for $F \subset [n]$ let
$$g_F^*(x) = \sum_{i=0}^{s} c_i \sigma(F^*, i),$$
where we understand $\sigma(F^*, i) = 0$ if $|F| < i$. By definition we get the following.
Claim 21.10. For $F, F' \subset [n]$ we have that
$$g_F^*(v(F')) = g^*(v(F \cap F')) = g(|F \cap F'|).$$
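Both the binomial-basis expansion of $g$ and Claim 21.10 can be tested numerically. The coefficients $c_i$ are the finite differences of $g$ at $0$; the concrete choices of $L$, $F$, and $F'$ below are our own:

```python
from math import comb
from itertools import combinations

L = [1, 3]                       # a set of s = 2 positive integers
s = len(L)

def g(y):
    out = 1
    for l in L:
        out *= (y - l)
    return out

# binomial-basis coefficients: c_i is the i-th finite difference of g at 0
c = [sum((-1) ** (i - j) * comb(i, j) * g(j) for j in range(i + 1))
     for i in range(s + 1)]

# sanity check: g(y) = sum_i c_i * C(y, i) at small integer points
for y in range(10):
    assert g(y) == sum(c[i] * comb(y, i) for i in range(s + 1))

def sigma_at(S, pt, k):          # sigma(S, k) evaluated at a 0/1 point
    return sum(all(pt[i - 1] for i in T) for T in combinations(sorted(S), k))

def v(F, n):
    return tuple(1 if i in F else 0 for i in range(1, n + 1))

# Claim 21.10: g*_F(v(F')) = g(|F intersect F'|)
n = 7
F, Fp = {1, 2, 4, 6}, {2, 3, 4, 5}
gF_at_vFp = sum(c[i] * sigma_at(F, v(Fp, n), i) for i in range(s + 1))
assert gF_at_vFp == g(len(F & Fp))
```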
Now suppose that $\mathcal{F} \subset 2^{[n]}$ satisfies $|F \cap F'| \in L$ for all distinct $F, F' \in \mathcal{F}$. Since $g(|F \cap F'|) = 0$ we get $g_F^*(v(F')) = 0$.
Claim 21.11. The polynomials in the set $\{g_F^* : F \in \mathcal{F}\}$ are linearly independent.
Proof. Consider a linear combination
$$\sum_{F \in \mathcal{F}} \alpha_F g_F^* = 0, \tag{21.5}$$
where $\alpha_F \in \mathbb{R}$. By substituting $v(F)$ we obtain $\alpha_F g_F^*(v(F)) = 0$. So if $g_F^*(v(F)) \neq 0$, or equivalently if $|F| \notin L$, then we get $\alpha_F = 0$. In particular, $\alpha_F = 0$ for all $F \in \mathcal{F}$ with $|F| > \max L$. But we need to show that all $\alpha_F$ are zero. We prove this by induction on $|L|$.
The initial step is $L = \{l\}$. Let $F \in \mathcal{F}$. If $|F| \neq l$ then $\alpha_F = 0$. If $|F| = l$, then $F$ is the only $l$-element subset in $\mathcal{F}$. Thus (21.5) reads $\alpha_F g_F^* = 0$. Since $g_F^*$ is not a zero polynomial, we have $\alpha_F = 0$.
We proceed to the induction step. Let $L = \{l_1, \dots, l_s\}$ with $l_1 < \cdots < l_s$, and let $L' = L \setminus \{l_s\}$. To apply the induction hypothesis let $\mathcal{F}' = \mathcal{F} \setminus \{F \in \mathcal{F} : |F| > l_s\}$. Then $|F \cap F'| \in L'$ for all distinct $F, F' \in \mathcal{F}'$. Since $\alpha_F = 0$ for all $F \in \mathcal{F}$ with $|F| > l_s$, (21.5) can be rewritten as $\sum_{F \in \mathcal{F}'} \alpha_F g_F^* = 0$. By the hypothesis applied to $\mathcal{F}'$ with $L'$ we obtain that $\alpha_F = 0$ for all $F \in \mathcal{F}'$. $\square$
Let $L = \{l_1, \dots, l_s\}$ with $1 \leq l_1 < \cdots < l_s$ and $\mathcal{F} = \{F_1, \dots, F_m\}$. If $|F_i| = l_1$ for some $i$, then $F_i$ is the only $l_1$-element subset in $\mathcal{F}$, and we may assume that $n \notin F_i$ by renaming the vertices if necessary. Let $r \geq 0$ be the number of edges in $\mathcal{F}$ not containing $n$. We may assume that $n \notin F_i$ for $1 \leq i \leq r$ and $n \in F_i$ for $r+1 \leq i \leq m$. We may also assume that $l_1 < |F_{r+1}| \leq |F_{r+2}| \leq \cdots \leq |F_m|$.
We are going to assign a polynomial $p_i$ to $F_i$ for $1 \leq i \leq m$. To this end let
$$f_{F_i}(x) = \prod_{l_j < |F_i|} (v(F_i) \cdot x - l_j),$$
and let $\bar{f}_{F_i}$ be the corresponding multilinear polynomial. Then we define
$$p_i = \begin{cases} g_{F_i}^* & \text{for } 1 \leq i \leq r,\\ \bar{f}_{F_i} & \text{for } r+1 \leq i \leq m. \end{cases}$$
We note that all the $p_i$ sit in the vector space $V$ of multilinear polynomials in $n$ variables of degree at most $s$, and $\dim V = \sum_{i=0}^{s} \binom{n}{i}$.
Claim 21.12. The polynomials $p_1, \dots, p_m$ are linearly independent.
Proof. By Claim 21.11, $p_1, \dots, p_r$ are linearly independent. For $r+1 \leq j \leq i \leq m$ it follows that
$$\bar{f}_{F_i}(v(F_j)) \begin{cases} \neq 0 & \text{if } i = j,\\ = 0 & \text{if } i > j, \end{cases}$$
where we used $|F_i| > l_1$ for the first case $i = j$. This means that $p_{r+1}, \dots, p_m$ are linearly independent.
Now consider a linear combination
$$\sum_{i=1}^{m} \alpha_i p_i = 0. \tag{21.6}$$
Suppose that there is a nonzero scalar. Then there are at least two of them: one is $\alpha_{i_0}$ with $i_0 \leq r$ and the other is $\alpha_{j_0}$ with $j_0 > r$. So we may assume that $\alpha_{j_0} \neq 0$ and $\alpha_j = 0$ for $r+1 \leq j < j_0$. Substituting $v(F_{j_0})$ into (21.6), we get $\alpha_{j_0} p_{j_0}(v(F_{j_0})) = 0$ and so $\alpha_{j_0} = 0$, a contradiction. $\square$
At this point we already have that $|\mathcal{F}| = m \leq \dim V = \sum_{i=0}^{s} \binom{n}{i}$. Next we will introduce $N = \sum_{i=0}^{s-1} \binom{n-1}{i}$ more polynomials. Let
$$\{B_1, B_2, \dots, B_N\} = \binom{[n-1]}{0} \cup \binom{[n-1]}{1} \cup \cdots \cup \binom{[n-1]}{s-1}.$$
We may assume that $|B_1| \leq |B_2| \leq \cdots \leq |B_N|$. Then for $1 \leq i \leq N$ we define
$$q_i(x) = (x_n - 1) \prod_{j \in B_i} x_j,$$
where we understand $q_1(x) = x_n - 1$. For $1 \leq j \leq i \leq N$ it follows that
$$q_i(v(B_j)) \begin{cases} \neq 0 & \text{if } i = j,\\ = 0 & \text{if } i > j. \end{cases}$$
This means that $q_1, \dots, q_N$ are linearly independent.
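The triangular pattern of the $q_i$ is easy to confirm for small parameters (the choice $n = 5$, $s = 2$ is ours):

```python
from itertools import chain, combinations

n, s = 5, 2
# B_1, ..., B_N: all subsets of [n-1] of size at most s-1, ordered by size
Bs = list(chain.from_iterable(
    combinations(range(1, n), k) for k in range(s)))
N = len(Bs)                          # 1 + 4 = 5

def v(F):                            # characteristic vector in {0,1}^n
    return tuple(1 if i in F else 0 for i in range(1, n + 1))

def q(B, x):                         # q_i(x) = (x_n - 1) * prod_{j in B} x_j
    out = x[n - 1] - 1
    for j in B:
        out *= x[j - 1]
    return out

for i in range(N):
    for j in range(i + 1):
        val = q(Bs[i], v(set(Bs[j])))
        if i == j:
            assert val != 0          # q_i(v(B_i)) = -1 since x_n = 0 there
        else:
            assert val == 0          # triangular: q_i(v(B_j)) = 0 for i > j
```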
Claim 21.13. The $m + N$ polynomials $p_1, \dots, p_m, q_1, \dots, q_N$ are linearly independent.
Proof. Consider a linear combination
$$\sum_{i=1}^{m} \alpha_i p_i + \sum_{j=1}^{N} \beta_j q_j = 0. \tag{21.7}$$
First suppose that $\alpha_i = 0$ for all $r+1 \leq i \leq m$. In this case let $j_0$ be such that $\beta_{j_0} \neq 0$ and $\beta_j = 0$ for $1 \leq j < j_0$, and let $t = |B_{j_0}|$. We substitute $y = (y_1, \dots, y_n)$ into (21.7), where $y_i = y$ if $i \in B_{j_0} \cup \{n\}$ and $y_i = 0$ otherwise. By the definition of $g_{F_i}^*$ it follows that $p_i(y)$ ($1 \leq i \leq r$) is a polynomial in $y$ of degree at most $t$. On the other hand, $q_{j_0}(y) = (y - 1) y^t$, and this is the only term containing $y^{t+1}$; that is, the LHS of (21.7) reads $\beta_{j_0} y^{t+1} + O(y^t)$. This must be a zero polynomial, so $\beta_{j_0} = 0$, and this is a contradiction.
Next suppose that $\alpha_{i_0} \neq 0$ for some $r+1 \leq i_0 \leq m$; we may take $i_0$ minimal with this property. Since $v(F_{i_0}) = (*, \dots, *, 1)$ we get $q_j(v(F_{i_0})) = 0$ for all $j$. Then, by substituting $v(F_{i_0})$ into (21.7), we have $\alpha_{i_0} p_{i_0}(v(F_{i_0})) = 0$ and hence $\alpha_{i_0} = 0$, a contradiction. $\square$
We have found $m + N$ linearly independent polynomials in $V$. Consequently we obtain
$$|\mathcal{F}| = m \leq \dim V - N = \sum_{i=0}^{s} \binom{n}{i} - \sum_{i=0}^{s-1} \binom{n-1}{i} = \sum_{i=0}^{s} \binom{n-1}{i},$$
as promised. $\square$
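As with Theorem 21.1, the bound can be confirmed exhaustively for tiny parameters; a sketch with $n = 4$ and $L = \{1\}$ (our own choice, small enough to enumerate every candidate family):

```python
from itertools import combinations
from math import comb

n, L = 4, {1}
s = len(L)
subsets = [frozenset(c) for k in range(n + 1)
           for c in combinations(range(1, n + 1), k)]

def ok(fam):
    # every pair of distinct members meets in a size belonging to L
    return all(len(A & B) in L for A, B in combinations(fam, 2))

bound = sum(comb(n - 1, i) for i in range(s + 1))   # here 1 + 3 = 4

# no family of bound + 1 subsets of [4] meets pairwise in exactly one element
assert not any(ok(fam) for fam in combinations(subsets, bound + 1))

# the bound is attained by a "star": {1}, {1,2}, {1,3}, {1,4}
star = [frozenset({1})] + [frozenset({1, i}) for i in range(2, n + 1)]
assert ok(star) and len(star) == bound
```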
Chapter 22. Application to discrete geometry
In some problems in discrete geometry one can express the geometric constraints in terms of intersections in hypergraphs. Then the problems may be solved by applying the corresponding results on $L$-systems. In this chapter we present some such examples.
22.1. Sylvester’s problem and Fisher’s inequality
The following result, known as Fisher's inequality, can be obtained from Theorem 21.9 by setting $s = 1$.
Theorem 22.1 ([41]). Let $C_1, C_2, \dots, C_m$ be distinct subsets of $[n]$, and let $\lambda \geq 1$. If $|C_i \cap C_j| = \lambda$ for all $i \neq j$, then $m \leq n$.
We present an application of Fisher's inequality to a collinearity problem in discrete geometry. We say that points are collinear if they are on the same line. Two points in the plane determine a line. So $m$ points in the plane determine $\binom{m}{2}$ lines if those points are in general position, that is, no three points are collinear. Of course, $m$ collinear points determine only one line. Now suppose that we have $m$ noncollinear points in the plane. Then at least how many lines are determined by those points? This problem was posed by Sylvester, solved by Gallai, and popularized by Erdős through his article in the American Mathematical Monthly [31].
Theorem 22.2. Any set of $m$ noncollinear points in the plane determines at least $m$ lines.
The idea of the following proof goes back to de Bruijn and Erdős [16].
Proof. Let $p_1, \dots, p_m$ be the given $m$ points. Suppose that they determine $n$ lines, and let $L = \{l_1, l_2, \dots, l_n\}$ be the set of the lines. Let $C_i \subset L$ be the subset of lines passing through $p_i$, and let $\mathcal{H} = \{C_1, \dots, C_m\} \subset 2^{L}$. Then, for all $i \neq j$, $C_i \cap C_j$ contains just one line, that is, the line connecting the two points $p_i$ and $p_j$. Thus $|C_i \cap C_j| = 1$, and by Fisher's inequality we get $|\mathcal{H}| = m \leq |L| = n$. $\square$
Chvátal posed a generalization of this problem in a metric space. We say that three points $x$, $y$, and $z$ in a metric space with metric $d$ are collinear¹ if they satisfy
$$d(x, z) = d(x, y) + d(y, z).$$
For example, a graph can be viewed as a metric space by the usual shortest-path metric. Then we can define a line in the graph in the sense above. Such lines behave rather differently from the usual lines in Euclidean space. Nevertheless, Chen and Chvátal conjectured the following.
Conjecture 22.3 (Chen–Chvátal [18]). In any metric space, $m$ noncollinear points determine at least $m$ lines.
22.2. Chromatic number of the unit-distance graph
A family of subsets $\mathcal{H} \subset 2^{[n]}$ is called $t$-avoiding if
$$|H \cap H'| \neq t$$
for all $H, H' \in \mathcal{H}$. Applying Theorem 21.8 to the case
$$n = 4p - 1, \qquad s = p - 1, \qquad L = \{0, \dots, p-2\},$$
we get the following.

¹This is called collinearity by betweenness.