Thursday, August 30, 2007

Things I Believe, Part 1: 13.7*10^9 B.C.E. - c. 1 B.C.E.

So, just for the hell of it, I've decided to attempt a complete inventory of all the things that I believe. All my beliefs, if you will. This'll be a three-part series: Part 1 will cover events from the Beginning of the Universe 'til around the time of some guy's birth in Judea (since that's how people tend to reckon their calendars these days); Part 2 will cover around that dude's birth until the present day; and Part 3 will cover what I believe to be Mathematical and Other Truths (and the Future?). Capitalized. So, without further ado, here is What Harrison Believes.

1. I believe that the Universe as we know it exploded out of a small, hot, dense region approximately 13.7 billion years ago, plus or minus 200 million;

2. That an infinitesimal fraction of a second later, space itself expanded exponentially (inflation), with distant regions separating faster than c; that this led to a "flattening" of local spacetime; and that quantum fluctuations were magnified to macroscopic scales;

3. That a fraction of a second later, the electromagnetic force separated from the weak force, and fundamental particles acquired mass;

4. That under a second after this, quarks were bound together by the strong nuclear force into hadrons and anti-hadrons (most of which annihilated each other immediately);

5. That about 100-300 seconds later, temperatures were right for the leftover protons and neutrons to fuse into deuterium nuclei and those of a few other light elements (mostly helium);

6. That hadron/anti-hadron and lepton/anti-lepton annihilation reactions created an abundance of photons, which scattered off the nuclei, leptons, and hadrons wandering the cosmos for the next ~400,000 years, keeping the Universe opaque;

7. That, as nuclei and electrons combined to form atoms (at around 400,000 years after the Big Bang), the Universe became transparent to radiation, and we see this radiation today as the cosmic microwave background;

8. That, around 200-500 million years later, the first stars (made almost entirely of lightweight elements) began to form, as did the first galaxies;

9. That, around this time, a very special galaxy began forming, one that certain sentient beings living in it would later call the "Milky Way"...


More to come - hold on tight.

Sunday, August 19, 2007

Stuff.

What with school starting and all, I unfortunately will not be able to post new and interesting (heh) problems at the rate I'd hoped. But don't worry, dear reader (and I'm not using the plural, since I know there's only one of you) - I haven't fallen off the face of the Tubes. I'll be back with interesting problems. Just, right now, I have to make easy interesting problems for a math tournament. Yeah.


Harrison

Friday, August 10, 2007

Random graphs, logic, and math, oh my!

So a random graph on n vertices is, (semi-)formally, a probability space on the set of all graphs on n vertices. Normally, we consider pretty "vanilla" probability spaces; for example, the one defined by declaring, independently for each pair of vertices u and v, that there is an edge between them (an event we'll denote by E(u,v) for the remainder of this blog post) with some constant probability 0 < c < 1. (Technically, yes, we can set c=0 or c=1, but those are really boring probability spaces.) However, as is well known, I'm not normal. So I'd like to investigate whether we can even construct probability spaces, in general, that have weird properties.
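For concreteness, this "vanilla" space is the standard Erdős–Rényi model, usually written G(n, p) (with p playing the role of the constant c above). Here's a minimal Python sketch of sampling from it; the function name and the set-of-pairs edge encoding are mine:

import itertools
import random

def sample_gnp(n, p, seed=None):
    """Sample from G(n, p): each of the C(n, 2) possible edges is
    present independently with probability p."""
    rng = random.Random(seed)
    return {(u, v) for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p}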


Here's what I want to know: given a sentence Q of first-order logic with two free variables, say u and v (the variables represent vertices of our graph, and we restrict ourselves to the single predicate E(w,x), which means "there is an edge between w and x"), and given a constant k > 1, does there exist a (nontrivial*) probability space on the set of graphs with n vertices which satisfies

P(E(u,v)|Q(u,v)) \geq k*P(E(u,v))

for all vertices u,v?

*"Nontrivial" in this context basically means "the probability of the empty graph isn't 1," or maybe "for any k, there's some n for which the probability of choosing a graph with at least k edges is positive." I'm not really sure.

By studying these distributions, we can look at "random" graphs which are nonetheless expected to have some preassigned structure. (E.g., by letting Q be "there exists w: E(u,w) and E(w,v)", we're looking at graphs that are in some sense "likelier" to contain triangles than a plain random graph. Alternatively, by setting k very large and constructing rather intricate FOL sentences, we might be able to get some insight into Ramsey-theoretic questions. This is definitely true if we move to SOL; I'm not quite as sure with this weaker model.) You know, just in case you're the kind of weird person who wants some sort of "motivation" for studying a problem.
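To show that such "structure-biased" spaces are at least easy to write down (no claim that this particular one achieves the inequality above for a given k), here's one standard-flavored construction, essentially an exponential random graph model: reweight G(n, p) by exp(beta * #triangles) and sample with a Metropolis chain. A Python sketch, with all names and parameters mine:

import itertools
import math
import random

def sample_triangle_biased(n, p, beta, steps=50000, seed=0):
    """Metropolis sampler targeting the distribution
    P(G) proportional to p^e(G) * (1-p)^(C(n,2)-e(G)) * exp(beta*t(G)),
    i.e., G(n, p) reweighted by the triangle count t(G)."""
    rng = random.Random(seed)
    pairs = [frozenset(e) for e in itertools.combinations(range(n), 2)]
    adj = {e for e in pairs if rng.random() < p}  # start from a G(n, p) draw
    log_odds = math.log(p / (1.0 - p))
    for _ in range(steps):
        e = rng.choice(pairs)  # propose toggling one edge
        u, v = tuple(e)
        # Toggling (u, v) changes t(G) by the number of common neighbors
        # of u and v.
        common = sum(
            1 for w in range(n)
            if w != u and w != v
            and frozenset((u, w)) in adj and frozenset((v, w)) in adj
        )
        sign = -1 if e in adj else 1
        log_ratio = sign * (log_odds + beta * common)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            adj ^= {e}  # accept: toggle the edge
    return adj

By construction, beta > 0 shifts probability mass toward triangle-rich graphs; whether some beta actually yields P(E(u,v)|Q(u,v)) \geq k*P(E(u,v)) for the "common neighbor" Q and a given k is exactly the question above.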


Apologies for the awkward wording of my problem; I'm just not sure how else to express it.

Wednesday, August 8, 2007

Problem #1 Redux

So, Problem #1 was maybe a bit too easy, as I "solved" it (cough, cough) while trying to fall asleep last night.

Heuristically: Choose about sqrt(n) elements from L, spaced more or less uniformly. Sort these elements in their places; this can be done in O(sqrt(n)log(n)) time. Then (heuristically!) the smallest of these elements will on average have rank about sqrt(n)/2 (think of L as a shuffled list of the numbers 1, ..., n), the next-smallest rank around 3sqrt(n)/2, and so on. Counting the largest possible number of out-of-order pairs involving these sorted elements, we have that there are at most:

sqrt(n) + 3sqrt(n) + 5sqrt(n) + ... + (2sqrt(n)-1)sqrt(n) = (1 + 3 + ... + (2sqrt(n)-1))*sqrt(n) = n^(3/2) pairs which are out of order.

But this is o(n^2), so for large enough n, this will, on average, produce a (1-\epsilon)-sorted list.
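Here's a quick empirical harness for the heuristic, in Python (partial_sort and sortedness are my names, and treating L as a shuffled list of distinct integers is my simplification). Note that pairs in which neither element was sampled are never touched by the sorting step, so checking the empirical c seems like a worthwhile sanity check before trying to make the argument rigorous:

import itertools
import math
import random

def partial_sort(lst):
    """Sort ~sqrt(n) roughly evenly spaced elements of lst in place,
    leaving all other elements where they are."""
    n = len(lst)
    step = max(1, math.isqrt(n))
    idx = range(0, n, step)             # ~sqrt(n) sampled positions
    vals = sorted(lst[i] for i in idx)  # O(sqrt(n)log(n)) comparisons
    for i, v in zip(idx, vals):
        lst[i] = v
    return lst

def sortedness(lst):
    """|S| / C(n, 2): the fraction of pairs in correct relative order
    (the c in "c-sorted"). Takes O(n^2) time; for measurement only."""
    n = len(lst)
    good = sum(1 for i, j in itertools.combinations(range(n), 2)
               if lst[i] < lst[j])
    return good / (n * (n - 1) // 2)

data = list(range(2000))
random.shuffle(data)
print(sortedness(data))                # ~0.5 for a random shuffle
print(sortedness(partial_sort(data)))  # the heuristic's empirical c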


We can easily extend this construction to an algorithm running in O(n^\epsilon) time for any positive \epsilon that gives (on average) arbitrarily good partial sorting: sample n^\delta elements instead of sqrt(n), for some \delta < \epsilon, and the same counting gives ~n^(1+\delta) out-of-order pairs, still o(n^2) (but the scaling gets worse and worse...)

So that part seems to be done (so long as the above argument can be made rigorous, which doesn't seem prohibitively difficult.) But the question remains: What about algorithms guaranteed to produce a c-sorting? My hunch: There do not exist algorithms with worst-case time complexity o(n) guaranteed to c-sort a list of size n. (Average-case complexity, I'm not so sure.) Anyone who can prove this gets my everlasting respect and admiration.

Tuesday, August 7, 2007

Problems #1

1. Given an unsorted list L with n elements, define the set S = {(a, b) : a, b \in L, a < b, and f(a) < f(b)} (where f(x) is x's position in the list). Call a list c-sorted (0 < c \leq 1) if |S|/\binom{n}{2} \geq c. For what 0 < c < 1 do there exist sublinear algorithms to c-sort any list L? (We can consider either algorithms that are guaranteed to produce a c-sorting of a list, or those for which the expected value of |S|/\binom{n}{2} is at least c.)

2. Define a candy-passing game on a graph G as follows: for the initial state s_0, we assign a number of "chips" to each vertex v. We go from state s_i to state s_(i+1) as follows: if a vertex v with degree d has at least d chips on it, then it "passes" one chip to each adjacent vertex. Denote the maximum degree of a vertex in G by \Delta(G); is it true that, for any initial configuration with at least (\Delta(G)+1)*|V(G)| total chips, there is some k for which s_k = s_(k+1)? Can we place an upper bound on k? (There's a simulation sketch right after this list.)

3. Anyone know how to do LaTeX (or any sort of mathematical notation stuff) in Blogger?
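In case anyone wants to experiment with #2 in the meantime, here's a minimal Python simulation. The adjacency-dict encoding, the function names, and the assumption that every ready vertex fires simultaneously within a step are my own choices, not part of the problem statement:

def candy_step(graph, chips):
    """One step of the candy-passing game: every vertex holding at
    least deg(v) chips passes one chip to each neighbor (all such
    vertices firing simultaneously). `graph` maps each vertex to a
    list of its neighbors; `chips` maps each vertex to a chip count."""
    new_chips = dict(chips)
    for v, nbrs in graph.items():
        if chips[v] >= len(nbrs):
            new_chips[v] -= len(nbrs)
            for w in nbrs:
                new_chips[w] += 1
    return new_chips

def find_fixed_point(graph, chips, max_steps=10**6):
    """Iterate until s_k = s_(k+1); return (k, s_k), or None if no
    fixed point appears within max_steps."""
    for k in range(max_steps):
        nxt = candy_step(graph, chips)
        if nxt == chips:
            return k, chips
        chips = nxt
    return None

# Example: a 4-cycle with (Delta(G)+1)*|V(G)| = 3*4 = 12 chips, all
# piled on one vertex.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(find_fixed_point(cycle, {0: 12, 1: 0, 2: 0, 3: 0}))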