Sunday, January 29, 2006
To teh Megist(r)on
In a number of recent posts, I've gone and totally geeked out over several theories of infinitely large and small numbers. In comparison, the topic of extremely large finite numbers just doesn't seem all that interesting. But I recently remembered an incident from when I was a small child, when a grade school teacher asked our class to name the largest number we could think of. The then-current Guinness Book of World Records insisted the "largest number" was something called "megiston", which was denoted by the number 10 in a circle. So that's what I answered, and I seem to recall that I either got a lower grade, or was publicly scolded by the teacher for making up numbers that didn't exist. Apparently the teacher didn't regard the Guinness Book as the final authority on these matters, even though I think I'd actually gotten my dog-eared copy at one of those Scholastic book fairs they used to have all the time. I remember the incident because it was one of those early inklings that either a.) grownups aren't always right, and/or b.) the stuff you read in books isn't always true. I also remember being frustrated because at the time I was absolutely sure I was right, but nobody would take me seriously, just because I was a little kid, and everybody knows kids don't know anything.
So I was thinking about this recently, and it occurred to me that I'd never seen the word "megiston", or any numbers in circles, in any context since that time. Which made me curious. Was this exotic-sounding number just something dreamed up in the Guinness-drenched offices of the official world record people? Was the Book wrong? Is it possible that I just imagined the whole thing, or misremembered it, or misunderstood what the Book was talking about? Thanks to the magic of Google, I now know that I was right, and my idols at the massive beer conglomerate had not betrayed me. What a relief that was, let me tell you. There is such a number as "megiston", though for some reason people occasionally call it "megistron" with an 'r', which is obviously incorrect. The ten-in-a-circle is one example of something called Steinhaus-Moser notation, one of several different ways of expressing really humongous numbers, ones that are so big they can't easily be expressed in terms of exponents. Others include Donald Knuth's up-arrows, John Conway's chained sideways arrows, hyper operators, and Ackermann functions. The basic idea in each case is that, since mere exponents aren't up to the job, we'll just define more powerful operations, generally by iterated exponentiation, with the first and simplest higher-level operator sometimes known as tetration. This is just the logical extension of the process where exponentiation is iterated multiplication, and multiplication, in turn, is just iterated addition. These various notations are attempts to define a symbolism general enough to express incredibly large numbers in a reasonable number of symbols. Although, since the integers are infinite and all, any finite symbolism you come up with is eventually going to run out of steam, no matter how many nested levels of recursion you use, and no matter what kind of clever notational tricks you come up with. In the end, the best you can really hope for is that your notation will be sufficient for any numbers you think you're likely to need, and won't be too clumsy to use for actual work. Despite the memorable, mysterious-sounding names and mystical-looking notation, Steinhaus-Moser notation seems like it would be awkward in "everyday" usage, in the unlikely event that you had to deal with quantities that big on a daily basis. Kind of a shame, really. And then, none of the notations seem to be entirely adequate for expressing certain really big numbers, like Graham's number. (Which, incidentally, I think has since replaced megiston in the Guinness Book.)
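To make the circled numbers a little more concrete, here's a quick Python sketch of the Steinhaus rules as I understand them: n in a triangle is n^n, n in a square is n nested inside n triangles, and n in a circle is n nested inside n squares (Moser's extension continues on through pentagons and beyond, which I won't attempt). The function names are just mine, and as you'd expect, almost none of these values can actually be computed:

```python
# Steinhaus circle/square/triangle notation, sketched in Python.
# triangle(n) = n^n; square(n) = n in n triangles; circle(n) = n in n squares.

def triangle(n):
    return n ** n

def square(n):
    for _ in range(n):   # n nested inside n triangles
        n = triangle(n)
    return n

def circle(n):
    for _ in range(n):   # n nested inside n squares
        n = square(n)
    return n

print(triangle(2))   # 4
print(square(2))     # triangle(triangle(2)) = 4**4 = 256
# circle(2), aka "mega", is square(256): 256 further triangle steps starting
# from 256, already hopelessly out of reach. Megiston is circle(10).
```

So megiston is circle(10), and you can see why it impressed the Guinness people.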
Another way to think about things like the Ackermann function and the like is that they're functions that increase much more rapidly than mundane ones like x^2, x!, or even x^x. Conversely, you've got things like the inverse Ackermann function (also discussed in the Wikipedia article linked to above) that increase far more slowly than, say, your garden-variety logarithms. And remarkably, the inverse Ackermann function a(m,n) has real-world applications, including some in the computer science world. Here's a report about a fancy minimum spanning tree algorithm that runs in O(m a(m,n)) time, which is pretty schweeet. If you can't find an O(n) or better algorithm, and usually you can't, this is pretty much the next best thing you can hope for. If you have no idea what I'm talking about, here's a primer on Big O notation from a CS standpoint, and another one given from a more pure mathematical perspective. My impression has been that algorithms coming from a simple, "naive" attempt to solve a problem tend to be O(n^2), bubble sort being the classic example, and it can take a fair bit of head-scratching to come up with something more efficient. O(n^2) isn't always unacceptable, depending on the values of n you're likely to run into, but as they drum into you as a first-year CS student, you shouldn't settle for it if you don't have to. However, back when I had the inevitable C coding assignment to write a little program that implements 4 or 5 sort algorithms, for fun I decided one of them would be bogosort, known as "the archetypal perversely awful algorithm", which runs in O(n*n!). That's pretty bad. I ended up putting in lots of printfs so the user could see it "working", because it's so unlikely they'd have enough patience to wait for it to complete. (There's a quick sketch of the idea below.) The Wikipedia article mentions an even worse variant known as "bozosort", although it probably still runs in some variant of factorial time. At the office a while back we got into a discussion about whether it's possible to write a sort that runs in worse than factorial time, without having the code take any pathological steps to deliberately avoid sorting the data properly. We couldn't come up with anything off the tops of our heads that might be worse than the repeated shuffle-and-test process. I think you'd have to be fairly ingenious to invent an O(n^n), or (even better) O(A(n)) sort. You probably wouldn't get a medal, but you'd certainly merit a writeup on Slashdot. And probably here as well, FWIW.
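Since I mentioned it, here's the bogosort idea in Python (the original assignment was in C, and this is just a from-memory sketch of the general approach, print statements included so you can watch it "work"):

```python
import random

def bogosort(items):
    """Shuffle until sorted. Expected running time is O(n * n!)."""
    attempts = 0
    while any(items[i] > items[i + 1] for i in range(len(items) - 1)):
        random.shuffle(items)
        attempts += 1
        print(f"shuffle #{attempts}: {items}")  # proof that it's "working"
    return items

print(bogosort([4, 2, 3, 1]))  # keep the list tiny, or prepare to wait
```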
Ahh, but it gets better, and weirder. A good article I came across is Scott Aaronson's "Who Can Name the Bigger Number?" Aaronson starts out discussing the answers you get when you ask kids this question -- all of whom gave answers vastly smaller than mine, I'm happy to say -- and then moves on to discuss things like our friend the Ackermann function, and then brings up a family of functions collectively known as Busy Beaver functions, one of the goofier names you're likely to run across in the math or CS worlds. The exact definition is fairly technical, involving various questions about Turing machines, but the really key point is that the functions are non-computable, meaning that although a function BB(n) is well-defined, no algorithm can calculate its exact value for arbitrary n. All you can do is build hardware, or write software, and try to determine particular values experimentally. The really bizarre part: I'm not sure if this is the cause of, or an effect of, the functions' non-computability, but it's been proven that BB functions increase more rapidly than any computable function (exponents, Ackermann, numbers-in-polygons, whatever). (As far as I can tell, the growth rate is what does it: if some computable function grew faster than BB, you could use it to bound how long an n-state Turing machine can run before halting, and that would let you solve the halting problem.) I find this extremely puzzling and counterintuitive, and I have a lot of trouble imagining any sort of upper bound on how rapidly a "normal" function can increase, no matter how many nestings and such that you do. But so be it, I guess. The Aaronson article mentions that once you've got BB functions, you can define even bigger functions in terms of them, but everything past BB is squarely in the realm of non-computability, continuing on up forever from there. So it seems that our familiar world of well-behaved, easily computed mathematical expressions is just the merest tip of the iceberg, and our difficulty in grasping what else is out there isn't just due to difficulties in notation. Wow. Freaky.
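To make the Turing machine angle slightly less abstract, here's a toy simulator in Python running the known two-state, two-symbol busy beaver champion. The transition table is the standard published one; the scaffolding around it is my own:

```python
from collections import defaultdict

# (state, symbol) -> (symbol to write, head move, next state); 'H' = halt.
# This table is the proven 2-state champion: S(2) = 6 steps, Sigma(2) = 4 ones.
champion = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

def run(machine, max_steps=10_000):
    tape, head, state, steps = defaultdict(int), 0, 'A', 0
    while state != 'H' and steps < max_steps:
        symbol, move, state = machine[(state, tape[head])]
        tape[head] = symbol
        head += move
        steps += 1
    return steps, sum(tape.values())

print(run(champion))  # (6, 4): six steps, four 1s written
```

Of course, this only works because somebody already found and verified the champion; the non-computability kicks in when you try to settle the question for all machines of a given size.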
Some related functions with far less astronomical values are discussed here. The author of that page is Australian, hence the names "placid platypus" and "weary wombat"; where BB functions look for the maximal values of various things relating to Turing machines, his family of Australian fauna functions look for the minimal values. And then of course you've got inverse Busy Beaver functions. The link is to the sole research paper I can find (via Google) that mentions the subject. If they need a flashier, sillier name, I'd propose "Sleepy Sloth", but then, I'm not a mathematician, so what do I know? :) Anyway, as the inverse of a BB, it seems like an SS would increase more slowly than any computable function. It would still tend to infinity, but at a nearly imperceptible rate.
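For what it's worth, the usual way to define the inverse of one of these monstrously fast-growing functions is "the largest k such that f(k) <= n", which is the sense in which inverse Ackermann -- and presumably a Sleepy Sloth -- would be meant. Here's a little sketch with a stand-in fast function, since I obviously can't compute BB itself:

```python
def inverse(f, n):
    """Largest k with f(k) <= n, for a strictly increasing f."""
    k = 0
    while f(k + 1) <= n:
        k += 1
    return k

f = lambda k: 2 ** (2 ** k)   # double-exponential stand-in for something huge
print([inverse(f, n) for n in [2, 4, 16, 256, 65536]])  # [0, 1, 2, 3, 4]
```

The faster f blows up, the more glacially its inverse crawls, which is the intuition behind an SS increasing more slowly than anything computable.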
Now, the main underlying reason I've been doing all this reading lately is that I've been trying to get a better handle on hyperreal numbers and related beasties. And it turns out that one way you can look at the hyperreals is to view them in a purely algebraic sense, as a field built out of real-valued functions. It's a little hard to get your brain around at first, but basically each real number can be thought of as a constant function with the value of that real number, while functions that increase or decrease without bound are infinite nonstandard numbers, and functions that converge to zero are infinitesimals. (A function that converges to some other real number r gives you a number infinitely close to r, i.e. r plus an infinitesimal.) And functions that oscillate get us into that confusing business with ultrafilters, so let's not go there for right now. Really exploring this will have to wait for another post some time, but it's interesting to consider how BB and related functions fit in. In the world of the hyperreals, BB(n) and SS(n) (or BB^-1(n), if you prefer) both define infinitely large numbers, while 1/BB(n) and 1/SS(n) both give us infinitesimals. 1/SS(n) would seem to be an especially "big" infinitesimal, with SS(n) being a particularly "small" infinite number, but neither is any kind of boundary case. No, in the hyperreal universe (as well as that of the surreal numbers), there's no such thing as a least infinite number, or a greatest infinitesimal. So the domains of the finite, the infinitely large, and the infinitesimal stretch out towards each other without bound, but they never actually touch. Weird, huh?
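Just to give the flavor of it (and this is emphatically not the real construction -- a genuine ultrafilter is non-constructive, and is exactly what sorts out those oscillating cases), you can model these function-numbers in a few lines of Python and compare them by sampling ever-larger n:

```python
import math

def eventually_less(a, b, probe=(10**2, 10**3, 10**4, 10**5)):
    """Naive stand-in for the ultrafilter comparison: a(n) < b(n) at large n."""
    return all(a(n) < b(n) for n in probe)

omega = lambda n: n          # an infinite number
eps   = lambda n: 1 / n      # an infinitesimal
seven = lambda n: 7          # the real number 7, as a constant function

print(eventually_less(seven, omega))                   # True: 7 < omega
print(eventually_less(eps, seven))                     # True: eps < 7
print(eventually_less(lambda n: math.log(n), omega))   # True: log n < omega
```

In this picture, something like BB(n) just sits unimaginably far out beyond omega, and 1/BB(n) correspondingly close to zero.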
And the rest I'll save for another post, when I get around to it. I've rambled enough for now.
1 comment:

Guinness described Megiston as "too great to have any physical meaning" but never called it the biggest number. See my web page here for some more thoughts...