Teaching Calculus to a Dog
Posted by JP Stormcrow, 19 Oct 2007 06:46 am
Pathetic earthlings. … If you had known anything about the true nature of the universe, anything at all, you would’ve hidden from it in terror.
- Emperor Ming
I have carried the title** of this post around in my head for a few years as the label for some musings on “knowability” and the limitations on our ability to really get our heads around “reality”. And since blogging means never having to say you’re sorry for inflicting your wandering sophomoric ramblings on your readers - here it is. (But not to worry. Judges say: “That’s OK! those who roll big joints have trouble focusing too!”)
My jumping-off point is a simple one: A dog will never learn calculus. Nor will dogs collectively ever master calculus. I hope all can agree on that. (And I exclude here some imaginable cyber-augmented dog, or intelligent critters that dogs might potentially evolve into over the next 50 million years.) They just don’t have it - in fact they don’t even know they don’t have it - their “wetware” is just not capable.
So, let’s move on to Canis lupus familiaris’s favorite fellow social mammalian companions - humankind. On understanding calculus we absolutely pwn the mangy mutts. Yay us!!! But in the grander scheme of things how big is the gap really? How far can our relatively superior mental powers take us?
Here we are as a species having only relatively recently passed a threshold where we became capable of symbolic representation and processing, where we can form and describe reflective models of the world. The growth in our capacity for posing and answering questions about the nature of reality has exploded - and at some level I think it is our conceit that our general purpose processing capabilities are sufficient for us to frame and potentially answer and understand any well-posed question about the world.
But then again, what the fuck do we really know about it? What would make us think that nudging across the threshold of symbolic processing would get us to any truly satisfactory understanding of the world? So let’s just stipulate that humans are capable of some “early intermediate” level of understanding of the world around them. So what? (At some level this is the most fundamentally dissatisfying aspect of life for me personally - the fact that I will die in ignorance … very cheery here today, you can see … maybe there is some way we can all go out suddenly without worrying about it too much … some kind of Big Blast or something. I’ll study on it a bit.)
So what to make of this not-very-profound point? How do we get past the limits of our own processing power? (Never mind whether we should.) I see three interrelated mechanisms, all of which are in evidence to various degrees in the world today. (But what would I know? Woof! Woof!)
1) Establish networks connecting ourselves.
2) Create entities that are not bound by our own particular hardware limitations. (Computers)
3) Augment our own capabilities in situ.
Groups, tribes and societies serve for the first. The Scientific Method is basically a social mechanism for capturing and solidifying knowledge. To me the interesting question for networks (and for the time being let’s just stick to networks of “unaugmented” humans) is to what extent a network can “know” more than its individual components.
To some extent the answer is clearly “quite a bit” (social insects come to mind) - but for humans specifically, can there be say a “scientific understanding” of something that no single human being (we’re leaving out the machines for now) can articulate or get a full grasp on, but which can be “used” by the network as a whole?
I know at some level this just looks like the argument in Wisdom of Crowds writ large - but I am trying for a slightly different point. Robert Pirsig has a great passage in Lila, an otherwise undistinguished follow-up to Zen and the Art of Motorcycle Maintenance, where he is walking through Manhattan and describes New York City as “the Giant” - a massive uncontrolled entity that has “used” its inhabitants to construct itself very successfully.
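(A side note, not from the post itself: the statistical kernel of the Wisdom-of-Crowds intuition is just that averaging many independent noisy guesses shrinks the error. A toy Python sketch, with entirely made-up numbers, of a crowd estimating some true quantity:)

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

TRUE_VALUE = 100.0  # the quantity the crowd is trying to estimate

# 1000 individuals each make an independent, noisy guess
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(1000)]

# the "crowd" estimate is just the average of all guesses
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# compare the crowd's error to the typical individual's error
individual_errors = sorted(abs(g - TRUE_VALUE) for g in guesses)
median_individual_error = individual_errors[len(individual_errors) // 2]

print(f"crowd error:             {crowd_error:.2f}")
print(f"median individual error: {median_individual_error:.2f}")
```

The crowd’s error comes out far smaller than the typical individual’s, even though no single guesser is any good - which is at least a degenerate sense in which the network “knows” something its components don’t.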
Let’s add the “machines” in and you get today’s world - an assemblage of billions of people and this Internet thingie with these relatively dumb things hanging off of it (and there are computers on it as well…) Despite my musings in the previous paragraph, to me this is in most respects still “just” a network of humans with external augmentation right now - still just the information processing equivalent of gaining mechanical advantage with a lever. But it is evident to me that the machines are poised (maybe not in their current form - but relatively soon nonetheless) to “escape” and be increasingly on their own. (But they shouldn’t gloat, they’ll just be cogs in their own unfathomable network.)
Bottom line on this scenario: humans become increasingly irrelevant cogs in vast networks where the real action is. Is that satisfying? Well, probably better than pissing it all away for ourselves and many other species by just flaming out. <Insert words here to the effect that this is not meant to disrespect the inevitability of the GNF and the Party. I can’t quite seem to construct the argument that shows this, but I’m sure there is one. Woof! Woof!>
This brings us to the third possibility (probability): augmentation of our intellectual capabilities directly, via as-yet-undiscovered mechanisms (but you can see the groundwork being laid). This one seems the most disquieting of them all, but if we figure out how to do it, does anyone think we won’t? Well, never mind, I just realized that the restraint people have shown with regard to performance-enhancing drugs in sports would indicate that we probably won’t get our heads turned by this possibility. All I can say is: Fasten your seatbelts, it’s going to be a bumpy century.
I had a point in here somewhere - but
suddenly I am run over by a truck … I’ve been consumed by a GNF … it’s time for my walk. (Woof! Woof!)
**I really should use some more explicit system of modeling the world like Quantum Physics to make the point rather than an abstract concept like Calculus - but screw it, I’ve been mentally using “Calculus” all along and I’m not going to change now. (Joseph Heller says that Catch-22 was originally Catch-18, but they changed it when Leon Uris published Mila 18 a short time before the book was to come out - for some reason I thought that was a relevant anecdote when I wrote it.)
Responses to “Teaching Calculus to a Dog”