Reading Yudkowsky, part 39

by Luke Muehlhauser on May 24, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 309th post is Zombies! Zombies?

Your “zombie”, in the philosophical usage of the term, is putatively a being that is exactly like you in every respect – identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion – except that your zombie is not conscious.

It is furthermore claimed that if zombies are “possible” (a term over which battles are still being fought), then, purely from our knowledge of this “possibility”, we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is “epiphenomenalism”.

The rest of the post is a very long reply to David Chalmers’s defense of property dualism on the basis of his famous zombie thought experiment.

Zombie Responses offers some replies to criticisms of the previous post by philosopher Richard Chappell. The Generalized Anti-Zombie Principle summarizes:

“Zombies” are putatively beings that are atom-by-atom identical to us, governed by all the same third-party-visible physical laws, except that they are not conscious.

Though the philosophy is complicated, the core argument against zombies is simple:  When you focus your inward awareness on your inward awareness, soon after your internal narrative (the little voice inside your head that speaks your thoughts) says “I am aware of being aware”, and then you say it out loud, and then you type it into a computer keyboard, and create a third-party visible blog post.

Consciousness, whatever it may be – a substance, a process, a name for a confusion – is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud.  The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences.

GAZP vs. GLUT continues this very technical discussion.

Belief in the Implied Invisible begins Eliezer’s discussion of quantum mechanics:

One generalized lesson not to learn from the Anti-Zombie Argument is, “Anything you can’t see doesn’t exist.”

It’s tempting to conclude the general rule.  It would make the Anti-Zombie Argument much simpler, on future occasions, if we could take this as a premise.  But unfortunately that’s just not Bayesian.

How ’bout an example?

Suppose I transmit a photon out toward infinity, not aimed at any stars, or any galaxies, pointing it toward one of the great voids between superclusters.  Based on standard physics, in other words, I don’t expect this photon to intercept anything on its way out.  The photon is moving at light speed, so I can’t chase after it and capture it again.

If the expansion of the universe is accelerating, as current cosmology holds, there will come a future point where I don’t expect to be able to interact with the photon even in principle – a future time beyond which I don’t expect the photon’s future light cone to intercept my world-line.  Even if an alien species captured the photon and rushed back to tell us, they couldn’t travel fast enough to make up for the accelerating expansion of the universe.

Should I believe that, in the moment where I can no longer interact with it even in principle, the photon disappears?

No.

It would violate Conservation of Energy.  And the second law of thermodynamics.  And just about every other law of physics…

But if you can believe in the continued existence of photons that have become experimentally undetectable to you, why doesn’t this imply a general license to believe in the invisible?

Why indeed?

…when it was first proposed that the Milky Way was our galaxy – that the hazy river of light in the night sky was made up of millions (or even billions) of stars – that Occam’s Razor was invoked against the new hypothesis.  Because, you see, the hypothesis vastly multiplied the number of “entities” in the believed universe…

That was Occam’s original formulation, the law of parsimony:  Entities should not be multiplied beyond necessity.

If you postulate billions of stars that no one has ever believed in before, you’re multiplying entities, aren’t you?

No.  There are two Bayesian formalizations of Occam’s Razor:  Solomonoff Induction, and Minimum Message Length.  Neither penalizes galaxies for being big.

In Solomonoff induction, the complexity of your model is the amount of code in the computer program you have to write to simulate your model.  The amount of code, not the amount of RAM it uses, or the number of cycles it takes to compute.  A model of the universe that contains billions of galaxies containing billions of stars, each star made of a billion trillion decillion quarks, will take a lot of RAM to run – but the code only has to describe the behavior of the quarks, and the stars and galaxies can be left to run themselves.  I am speaking semi-metaphorically here – there are things in the universe besides quarks – but the point is, postulating an extra billion galaxies doesn’t count against the size of your code, if you’ve already described one galaxy.  It just takes a bit more RAM, and Occam’s Razor doesn’t care about RAM.

Why not?  The Minimum Message Length formalism, which is nearly equivalent to Solomonoff Induction, may make the principle clearer:  If you have to tell someone how your model of the universe works, you don’t have to individually specify the location of each quark in each star in each galaxy.  You just have to write down some equations.  The amount of “stuff” that obeys the equation doesn’t affect how long it takes to write the equation down.  If you encode the equation into a file, and the file is 100 bits long, then there are 2^100 other models that would be around the same file size, and you’ll need roughly 100 bits of supporting evidence.  You’ve got a limited amount of probability mass; and a priori, you’ve got to divide that mass up among all the messages you could send; and so postulating a model from within a model space of 2^100 alternatives, means you’ve got to accept a 2^-100 prior probability penalty – but having more galaxies doesn’t add to this.

Postulating billions of stars in billions of galaxies doesn’t affect the length of your message describing the overall behavior of all those galaxies.  So you don’t take a probability hit from having the same equations describing more things.  (So long as your model’s predictive successes aren’t sensitive to the exact initial conditions.  If you’ve got to specify the exact positions of all the quarks for your model to predict as well as it does, the extra quarks do count as a hit.)

So what does this have to do with the photon?

If you suppose that the photon disappears when you are no longer looking at it, this is an additional law in your model of the universe.  It’s the laws that are “entities”, costly under the laws of parsimony.  Extra quarks are free.
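The asymmetry Yudkowsky describes – extra entities are free, extra laws are costly – can be sketched numerically. This is my own toy illustration, not code from the post, and the specific bit counts (a 100-bit set of laws, a 20-bit extra law) are made-up numbers chosen only to make the arithmetic concrete:

```python
def mml_prior(description_bits):
    """Under a minimum-message-length prior, a model whose laws take
    `description_bits` bits to write down gets prior probability
    2**-description_bits."""
    return 2.0 ** -description_bits

# The same 100-bit equations describe a small universe and a huge one,
# because the number of stars is generated by the laws rather than
# written into the message.  Extra entities cost RAM, not code.
small_universe_bits = 100  # laws only, few stars
big_universe_bits = 100    # same laws, billions more stars
assert mml_prior(small_universe_bits) == mml_prior(big_universe_bits)

# An extra *law* does cost.  "Photons vanish when unobserved" must be
# appended to the message; suppose (arbitrarily) it takes 20 more bits:
vanishing_photon_bits = 100 + 20
penalty = mml_prior(vanishing_photon_bits) / mml_prior(100)
print(penalty)  # exactly 2**-20, roughly 9.5e-07
```

Under this prior, the vanishing-photon model starts out about a million times less probable than standard physics, before any evidence is weighed – and no amount of additional stars or quarks changes either model’s score.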

So does it boil down to, “I believe the photon goes on existing as it wings off to nowhere, because my priors say it’s simpler for it to go on existing than to disappear”?

This is what I thought at first, but on reflection, it’s not quite right.

So what is the right answer, according to Yudkowsky?

I would boil it down to a distinction between belief in the implied invisible, and belief in the additional invisible.

When you believe that the photon goes on existing as it wings out to infinity, you’re not believing that as an additional fact.

What you believe (assign probability to) is a set of simple equations; you believe these equations describe the universe.  You believe these equations because they are the simplest equations you could find that describe the evidence.  These equations are highly experimentally testable; they explain huge mounds of evidence visible in the past, and predict the results of many observations in the future.

You believe these equations, and it is a logical implication of these equations that the photon goes on existing as it wings off to nowhere, so you believe that as well.

Your priors, or even your probabilities, don’t directly talk about the photon.  What you assign probability to is not the photon, but the general laws.  When you assign probability to the laws of physics as we know them, you automatically contribute that same probability to the photon continuing to exist on its way to nowhere – if you believe the logical implications of what you believe.

It’s not that you believe in the invisible as such, from reasoning about invisible things.  Rather the experimental evidence supports certain laws, and belief in those laws logically implies the existence of certain entities that you can’t interact with.  This is belief in the implied invisible.
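The point that probability flows to logical implications can be put as a one-line Bayesian calculation. This toy sketch is mine, not Yudkowsky’s, and the credence of 0.999 in the laws is an arbitrary made-up number:

```python
# Made-up credence in the standard physical laws:
p_laws = 0.999

# "The photon persists" is a logical implication of those laws, so
# conditional on the laws it holds with certainty:
p_persists_given_laws = 1.0

# Even granting the implication nothing in worlds where the laws fail,
# the implied invisible inherits the laws' full probability mass:
p_persists = p_laws * p_persists_given_laws + (1.0 - p_laws) * 0.0
assert p_persists >= p_laws * p_persists_given_laws
print(p_persists)  # 0.999
```

No separate act of belief is needed for the photon: its probability is carried along by the probability of the laws. An *additional* invisible – the dust speck, the Flying Spaghetti Monster – gets no such inheritance and must claim probability mass on its own.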

On the other hand, if you believe that the photon is eaten out of existence by the Flying Spaghetti Monster – maybe on this just one occasion – or even if you believed without reason that the photon hit a dust speck on its way out – then you would be believing in a specific extra invisible event, on its own.  If you thought that this sort of thing happened in general, you would believe in a specific extra invisible law.  This is belief in the additional invisible.



JS Allen May 24, 2011 at 3:47 pm

In his “GAZP vs. GLUT” post, I kept expecting him to conclude with, “Therefore, it’s clear that we were designed by a unitary consciousness”.

Jeff H May 26, 2011 at 12:12 pm

Luke, how many more of these posts are there? Yudkowsky has some interesting stuff, and I do appreciate the summaries, but after almost 40 posts about it, I’m wondering if we’re almost finished or if we’ve barely begun.

Luke Muehlhauser May 26, 2011 at 10:45 pm

Jeff H,

We’re about 2/3 of the way through. The reason you’re getting all these posts is because I don’t have time right now to write new content for CSA, and I wrote these Yudkowsky posts months ago. For a little while, the only non-Yudkowsky new content you’re likely to see on CSA are Alonzo’s posts and our podcast episodes.
