OK, this is my personal list of the 10 most beautiful, useful, or interesting results in combinatorics. (Note: I'm leaving out complexity theory, since I'll hopefully cover that in a future post). These are, of course, heavily biased toward my interests and specialties, but I hope that anyone interested in the subject will be able to appreciate the list. So, in no particular order:

1. Erdos' lower bound for the Ramsey numbers R(k,k).

WHY? It's one of the best examples of the probabilistic method in action. Furthermore, it showcases the often-seen phenomenon in combinatorics where we can prove the existence of a certain object (a 2-coloring of the edges of the complete graph on 2^(k/2) vertices with no monochromatic k-clique), but can't come close to constructing one explicitly.
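The whole argument fits in a few lines. Here's a quick numerical check of the union bound behind it (my own sketch, with my own choice of k): with n = 2^(k/2), the expected number of monochromatic k-cliques in a uniformly random 2-coloring of K_n is C(n,k) * 2^(1-C(k,2)), which is less than 1, so some coloring has none.

```python
from math import comb

k = 20
n = 2 ** (k // 2)  # 1024 vertices

# Expected number of monochromatic k-cliques in a random 2-coloring:
# each of the C(n, k) k-sets is monochromatic with probability 2^(1 - C(k, 2)).
expected = comb(n, k) * 2.0 ** (1 - comb(k, 2))

# The expectation is below 1, so a coloring with no monochromatic
# K_20 must exist, i.e. R(20, 20) > 2^10.
assert expected < 1
```

Despite this one-line existence proof, nobody knows how to exhibit such a coloring explicitly, which is exactly the point.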

2. Szemeredi's regularity lemma.

WHY? It's a highly unintuitive proof of a highly unintuitive result, but both the proof and the lemma have turned out to be incredibly useful. If you're a graph theorist, you have no excuse for not knowing the statement of the Lemma; if you do additive combinatorics or theoretical CS, you have no reason not to know its proof. And if you're a mathematical aficionado, you should at least look at both.

3. Shelah's proof of the Hales-Jewett theorem.

WHY? Even though Hales and Jewett's original proof is shorter and cleaner, Shelah's strikes at the heart of "why" HJT is true. It manages to sidestep the Ackermann-type bounds that arose in virtually all earlier proofs of Ramsey-type theorems on the integers, and it does so in such an incredibly simple way that you'll be left dumbstruck.

4. Sum-product inequalities from the crossing number inequality.

WHY? The crossing number inequality, the Szemeredi-Trotter theorem, and good sum-product estimates are all examples of highly nontrivial, very interesting results in combinatorics. And yet all three can be derived in just a couple of pages from what's essentially high-school level mathematics. A great example of how interesting combinatorial results often have simple and beautiful "Book proofs."
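For reference, here are the three statements in play, as they're usually quoted (constants vary by source):

```latex
% Crossing number inequality (Ajtai--Chvatal--Newborn--Szemeredi; Leighton):
% for a graph $G$ with $v$ vertices and $e \ge 4v$ edges,
\operatorname{cr}(G) \ge \frac{e^3}{64\, v^2}.

% Szemeredi--Trotter: $m$ points and $n$ lines in the plane determine
% at most
I(m, n) = O\!\left( (mn)^{2/3} + m + n \right) \text{ incidences}.

% Elekes' sum-product bound for a finite set $A \subset \mathbb{R}$:
\max\bigl( |A+A|,\ |A \cdot A| \bigr) \ge c\, |A|^{5/4}.
```

Each statement follows from the one before it in about a page; that chain of implications is the "couple of pages" referred to above.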

5. The Marcus-Tardos proof of the Stanley-Wilf conjecture.

WHY? OK, it would be wrong not to give a shout-out to my semi-official mentor, Adam Marcus. But there's more to it than that: this is another Book proof of a long-standing and unobvious conjecture. Just more evidence that combinatorics is the most level of mathematical playing fields.

6. Thomassen's proof of the five-list-color theorem.

WHY? This is the closest thing list colorings have to the four-color theorem (in fact five is the right number here: Voigt showed that planar graphs need not be 4-list-colorable), except, well, it doesn't take fifty pages and 200 hours of computer time to lay out. More like half a page -- pretty much as simple as Kempe's proof of the regular 5CT. And a wonderful illustration of the general principle that, with induction, it's sometimes easier to prove a stronger, more specific statement.

7. Arrow's impossibility theorem.

WHY? OK, OK, this isn't usually considered to be part of combinatorics. But both the statement of the theorem and its proof are heavily combinatorial. The theorem has (gasp!) a real-world interpretation, in terms of (duh) voting methods, and the proof is a canonical combinatorial argument by contradiction.

8. Kuperberg's proof of the alternating sign matrix conjecture.

WHY? Zeilberger's original proof is a tour de force of combinatorial reasoning, but Kuperberg's argument is unsurpassed in its originality. Statistical mechanics and its offshoots (mathematical physics, ergodic theory) have surprisingly many applications in combinatorics, and Kuperberg's masterful use of the Yang-Baxter equation is a novel example of this trend.

9. The "necklace proof" of Fermat's Little Theorem.

WHY? First of all, the fact that FLT, often considered a "number-theoretic" result, admits a strictly combinatorial argument shows combinatorics' applicability to other branches of math. Second, the proof itself shows that symmetry groups are useful for more than just enumeration. (They're also good for enumeration mod p.)
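To see the counting argument concretely, here's a small Python sketch (the function name is mine): for prime p, group the a^p strings of length p over an a-letter alphabet into rotation classes ("necklaces"). Every non-constant class has size exactly p, and there are exactly a constant strings, so a^p - a is p times the number of non-constant classes.

```python
from itertools import product

def rotation_classes(a, p):
    """Partition all length-p strings over an a-letter alphabet
    into equivalence classes under cyclic rotation."""
    seen, classes = set(), []
    for s in product(range(a), repeat=p):
        if s in seen:
            continue
        orbit = {s[i:] + s[:i] for i in range(p)}
        seen |= orbit
        classes.append(orbit)
    return classes

# For prime p, a non-constant string has p distinct rotations,
# so a^p = a + p * (number of non-constant classes), i.e. p | a^p - a.
for a, p in [(2, 5), (3, 5), (2, 7)]:
    classes = rotation_classes(a, p)
    assert all(len(c) == p for c in classes if len(c) > 1)
    assert sum(1 for c in classes if len(c) == 1) == a  # constant strings
    assert (a ** p - a) % p == 0
```

Note where primality enters: for composite p a non-constant string can have a rotation orbit of size a proper divisor of p (e.g. 0101 has only 2 rotations), and the argument breaks down, just as FLT does.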

10. Add your own! What's one of the most original, informative, clever, or beautiful combinatorial results of all time? Comments are more than welcome.

## Friday, November 30, 2007

## Wednesday, November 21, 2007

### The chromatic number of n-space

Consider the infinite graph G defined as follows:

The vertex set of G is the set of points of n-dimensional Euclidean space R^n. Two vertices are adjacent if the corresponding points are unit distance apart. The question is, what is the chromatic number of G?

First note that a simple compactness argument (the de Bruijn-Erdos theorem) shows that this is equivalent to finding the maximum chromatic number of a finite unit-distance graph in R^n; i.e., if every finite subgraph of G is k-colorable, then G itself is k-colorable.

For n = 2, this is a fairly famous problem. But for higher dimensions, although a lot of work has been done, the known results are (as often happens in combinatorics!) sometimes woefully poor.
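For n = 2, the classical lower bound \Chi(R^2) \geq 4 is witnessed by a tiny finite subgraph: the 7-vertex Moser spindle, a unit-distance graph in the plane. A minimal brute-force sketch (edge list and function names are mine) checks its chromatic number:

```python
from itertools import product

# Edge list of the Moser spindle: two unit rhombi (each a pair of
# equilateral triangles) sharing the apex 0, with their far tips
# 3 and 6 placed at unit distance from each other.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6), (3, 6)]

def colorable(k, edges, n=7):
    """Brute force: does some assignment of k colors to the n vertices
    make every edge bichromatic?"""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

assert not colorable(3, EDGES)  # the spindle is not 3-colorable...
assert colorable(4, EDGES)      # ...but it is 4-colorable
```

By the compactness argument above, this single finite graph forces \Chi(R^2) \geq 4.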

Just as one example: The best known general upper bound for the chromatic number of R^n is (3+o(1))^n, due to Larman and Rogers. Compare: tile n-space by c x c x ... x c hypercubes, each subdivided into 3^n cells of edge length c/3, and color the cells within each big hypercube distinctly. A color class then consists of cells of diameter (c/3)\sqrt{n} at pairwise distance at least 2c/3, so whenever 3/2 < c < 3/\sqrt{n} -- which is possible for n \leq 3 -- no color class realizes a unit distance, and \Chi(R^n) \leq 3^n is practically trivial! (For larger n, the same trick with smaller cells and period roughly \sqrt{n} + 2 per coordinate still gives an easy upper bound of the form (\sqrt{n}+O(1))^n; the content of the Larman-Rogers theorem is getting the base of the exponent down to a constant.)

This is one of the many examples in combinatorics where the best known results are only marginally better than easy, or even trivial, results. (Another good one: upper bounds for the Ramsey numbers R(n,n).) The interesting thing here is that it derives from a geometric fact: namely, that the only regular polytope that tiles n-space for n \geq 5 is the hypercube. The best known bound (7) for 2-space derives from a hexagonal tiling; the best known bound for 3-space is 16 -- I don't know where that comes from. The best bound for 4-space is 49 -- does this arise directly from the tessellation of hyperspace by 24-cells? I'd be greatly indebted to anyone who can shed light on that.

Incidentally, the best known lower bound for \Chi(R^n) is something like (1.239+o(1))^n. I don't see any obvious way to construct even a unit-distance graph in R^n with chromatic number superlinear in n. Anyone know the details of the proof of that bound?


## Wednesday, November 14, 2007

### The (Complexity) Theory of NCLB

So, one of the biggest complaints we hear these days about the No Child Left Behind Act is the possibility, with "high-stakes testing," of teachers "teaching to the test." If students aren't actually learning the material, critics argue, then NCLB is hurting, not helping, these kids.

But the Atlanta Journal-Constitution yesterday wrote the following:

But why's that ["teaching to the test"] so terrible? If a test is an accurate measure of what should have been learned, teaching to the test is fine.

The AJC's editorial board makes a good point, but I thought I'd go ahead and state it a little more formally. So here we go:

A state Department of Education ("Dean") wishes to write a test satisfying NCLB requirements. Furthermore, Dean has to write the test in limited (i.e., polynomial) time; after all, we have to test new students at least every year! An overworked teacher ("Olivia") has to pass all her students, but wants to spend as little time teaching as possible. So if she can save time and effort by "teaching to the test," she'll do so.

Now, let's say that the time complexity for Olivia of actually teaching the material is O(f(n)) (where n is the length of the test, in problems). (She can talk as fast or as slowly as she likes, which is why the implicit constant doesn't make a difference in this case.) Then Dean's goal is as follows: he wants to write a test T such that, for any probabilistic "teaching algorithm" with o(f(n)) time complexity that Olivia might use and any constant $\epsilon$ > 0, there exists a constant N such that, whenever T has length n > N,

Pr[J. Random Student passes test T after being taught by Olivia] < $\epsilon$.

If this reminds you of the formal definition of a one-way function, well, you should probably look up the formal definition of a one-way function, 'cause there are some pretty key differences. But actually, the analogy isn't all that bad. We certainly have some good candidates for OWFs, and it's not unreasonable to think that similar methods exist for creating tests that can't be "taught to."

OK, so my analogy's pretty weak. But I don't think it's unsalvageable. Therefore, I'm willing to shell out $15 to anyone who can suitably formalize high-stakes testing in such a way that they can show that the existence of (trapdoor) one-way functions (perhaps relative to some sort of "curriculum oracle"?) implies the existence of tests that aren't "teachable to."

If, on the other hand, you can convince me that it's always possible to cheat the system in any good formalization of HST, I'll pay you $25 (yeah, I'm cheap. I'm a student, get over it) and write a letter to my Congressman.

What I'm more concerned about with NCLB -- and, by the way, I'm shocked that complexity theorists aren't already up in arms about this -- is the requirement that, by 2014, all children will test at the "proficient" level on state tests. Look, the "pencil drop" is a classic element of standardized testing, and while I'm happy that Congress is so convinced that P = BPP, I think mandating full derandomization by 2014 -- while simultaneously providing so little funding -- is just insane.

