Reading Yudkowsky, part 53

by Luke Muehlhauser on July 12, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 462nd post argues that Harder Choices Matter Less (or at least they should, logically). Qualitative Strategies of Friendliness engages some of Eliezer’s work on Friendly AI, continued with Dreams of Friendliness.

After a news post, Eliezer offers six more quotes posts: 1, 2, 3, 4, 5, 6. My favorites are:

So often when one level of delusion goes away, another one more subtle comes in its place.

– Rational Buddhist

We take almost all of the decisive steps in our lives as a result of slight inner adjustments of which we are barely conscious.

– Austerlitz

The True Prisoner’s Dilemma offers a variation on the prisoner’s dilemma, followed up with The Truly Iterated Prisoner’s Dilemma.

After a news post, Eliezer warns again against treating humans as the Points of Departure when considering other possible minds.

In Excluding the Supernatural, Eliezer says:

By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s:  A “supernatural” explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

This is the difference, for example, between saying that water rolls downhill because it wants to be lower, and setting forth differential equations that claim to describe only motions, not desires.  It’s the difference between saying that a tree puts forth leaves because of a tree spirit, versus examining plant biochemistry.  Cognitive science takes the fight against supernaturalism into the realm of the mind.

Why is this an excellent definition of the supernatural?  I refer you to Richard Carrier for the full argument.  But consider:  Suppose that you discover what seems to be a spirit, inhabiting a tree: a dryad who can materialize outside or inside the tree, who speaks in English about the need to protect her tree, et cetera.  And then suppose that we turn a microscope on this tree spirit, and she turns out to be made of parts – not inherently spiritual and ineffable parts, like fabric of desireness and cloth of belief; but rather the same sort of parts as quarks and electrons, parts whose behavior is defined in motions rather than minds.  Wouldn’t the dryad immediately be demoted to the dull catalogue of common things?

Now, Eliezer considers a question I’ve pondered myself:

But if we accept Richard Carrier’s definition of the supernatural, then a dilemma arises: we want to give religious claims a fair shake, but it seems that we have very good grounds for excluding supernatural explanations a priori.

I mean, what would the universe look like if reductionism were false?

I previously defined the reductionist thesis as follows: human minds create multi-level models of reality in which high-level patterns and low-level patterns are separately and explicitly represented. A physicist knows Newton’s equation for gravity, Einstein’s equation for gravity, and the derivation of the former as a low-speed approximation of the latter.  But these three separate mental representations are only a convenience of human cognition.  It is not that reality itself has an Einstein equation that governs at high speeds, a Newton equation that governs at low speeds, and a “bridging law” that smooths the interface.  Reality itself has only a single level, Einsteinian gravity.  It is only the Mind Projection Fallacy that makes some people talk as if the higher levels could have a separate existence – different levels of organization can have separate representations in human maps, but the territory itself is a single unified low-level mathematical object.

Suppose this were wrong.

Suppose that the Mind Projection Fallacy was not a fallacy, but simply true.

Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747.

What experimental observations would you expect to make, if you found yourself in such a universe?
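(A brief aside on the physics Eliezer leans on here, not part of his post: the sense in which Newton’s gravity is a “low-speed approximation” of Einstein’s can be made concrete. In the weak-field, slow-motion limit of general relativity, the time-time component of the metric carries a Newtonian potential Φ, test particles accelerate down its gradient, and Einstein’s field equations reduce to Poisson’s equation, whose point-mass solution gives the familiar inverse-square law:

\[
g_{00} \approx -\Bigl(1 + \frac{2\Phi}{c^{2}}\Bigr), \qquad
\frac{d^{2}\mathbf{x}}{dt^{2}} \approx -\nabla\Phi, \qquad
\nabla^{2}\Phi = 4\pi G \rho \;\Rightarrow\; \Phi = -\frac{GM}{r}, \quad F = \frac{GMm}{r^{2}}.
\]

The “bridging law” between the two descriptions is just this limiting procedure; nothing extra has to be added to the territory.)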

The dilemma hits:

If you can’t come up with a good answer to that, it’s not observation that’s ruling out “non-reductionist” beliefs, but a priori logical incoherence.  If you can’t say what predictions the “non-reductionist” model makes, how can you say that experimental evidence rules it out?

My thesis is that non-reductionism is a confusion; and once you realize that an idea is a confusion, it becomes a tad difficult to envision what the universe would look like if the confusion were true. Maybe I’ve got some multi-level model of the world, and the multi-level model has a one-to-one direct correspondence with the causal elements of the physics?  But once all the rules are specified, why wouldn’t the model just flatten out into yet another list of fundamental things and their interactions?  Does everything I can see in the model, like a 747 or a human mind, have to become a separate real thing?  But what if I see a pattern in that new supersystem?

Supernaturalism is a special case of non-reductionism, where it is not 747s that are irreducible, but just (some) mental things.  Religion is a special case of supernaturalism, where the irreducible mental things are God(s) and souls; and perhaps also sins, angels, karma, etc.

But the conclusion is:

Ultimately, reductionism is just disbelief in fundamentally complicated things.  If “fundamentally complicated” sounds like an oxymoron… well, that’s why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren’t.  You would be wise to be wary, if you find yourself supposing such things.

But the ultimate rule of science is to look and see.  If ever a God appeared to thunder upon the mountains, it would be something that people looked at and saw.

But of course we can also consider Psychic Powers. Then, one more post on Optimization.

5 comments

Leon July 13, 2011 at 4:44 am

A few things always confused me in that post:

Reality itself has only a single level, Einsteinian gravity.

Surely Einsteinian gravity is a level of the map, not a level of the territory. Similarly, I feel like I’ve heard a few times here/on LessWrong that the universe is “made of maths”. This seems to be exactly the kind of map-territory confusion that non-reductionists typically object to.

Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747.

I feel like the meaning of the terms “fundamental”, “ontologically basic”, etc., is not entirely clear here.

For example, if mind uploading is possible — i.e., minds, like algorithms, can be implemented on many possible substrates with “personal identity preserved” [vague, I know] — then in what sense are minds “fundamentally” quarks, or “reducible” to quarks? In some sense, the “fundamental” thing about any particular mind is not what it’s made of: we can imagine transferring the same mind to a non-quark substrate, if we discover such a thing. Now that I think about it, perhaps the same is true of 747s …

Anyway, what I’m getting at is that perhaps a better way to capture supernaturalism would be “there are substrates for minds that are not accessible to humans”. This allows for e.g. one god to destroy another without simply “deleting” it, but also allows for gods to be “ineffable” to humans.

Zeb July 14, 2011 at 8:58 am

When all you have is a hammer, everything looks like a nail. Eliezer has a mind for math, and it seems that anything that can’t be cracked by math (or at least made to feel like it’s been cracked) is therefore logically incoherent, and therefore false, and therefore dismissible a priori.

hf July 14, 2011 at 12:12 pm

Surely Einsteinian gravity is a level of the map, not a level of the territory.

Einstein’s account of it counts as a map. If you want to go with solipsism (or Discordianism) then you could stop there. But while solipsism helps get at the fact of uncertainty, showing that we can always doubt our explanations of the world, it doesn’t work as a separate explanation for our experience. I feel compelled to say that, with a high degree of certainty, some other territory — resembling in part the Einsteinian math of gravity — generates our experience. This belief seems justified by the regularity or rule-based behavior that we observe in daily life, or in science using a more formal version of observation.

(I don’t know what happens if you somehow prevent an infant from observing any regularity, or keep breaking any rules that the child perceives. I imagine the brain would eventually stop trying to form explanations, effectively shutting down.)

–More–

hf July 14, 2011 at 2:17 pm

(If this continuation posts twice, please delete the first version. I removed the link to the earlier thread and edited slightly.)

Now we humans form explanations by comparing our observations to the output of various mostly-opaque boxes that we carry inside ourselves. We give these boxes labels such as ‘anger’ and ‘love’ and ‘computation’. One of these is not like the others. We can sometimes use ‘computation’ or math to make reliable statements about the other boxes, while the reverse does not seem to hold. And of course we can explain how a computation box might work, well enough to build one. If we could see the source of experience in the “other territory” directly, I would expect it to act more like math than like any other tool we could use to grasp it.

On a practical level, consider the rule that you should wear a seat-belt while driving at high speeds. If we look at this rule using the ‘love’ and ‘morality’ boxes inside our head, treating it as a product of something that acts like ‘love’ or what have you, then we’ll probably decide the world won’t really kill us for disobeying the rule. Death seems completely out of proportion. But if we look at it as a consequence of math, well then it seems obvious that applying too much force to your skull could kill you. Why not? Math doesn’t care. Even something more complex than the seat-belt rule, such as the ‘sexual attraction’ box, behaves more like complicated math we don’t necessarily understand than like a product of ‘morality’ or any other black box. And we have at least a vague mathematical argument for regarding purely mathematical explanations as simpler than reasoning which calls on many boxes. The other boxes themselves seem more complicated, not only in terms of what it physically takes to duplicate their output, but also in terms of what we need to imagine before they could apply and perhaps in terms of how well we can understand/predict their most basic operations.
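To put rough numbers on the seat-belt example (an illustrative back-of-the-envelope calculation with assumed figures: a highway speed of 30 m/s, about 10 cm of effective stopping distance for an unrestrained head against the interior, and roughly 1 m of combined crumple zone and belt stretch for a restrained occupant): constant deceleration from speed v over distance d is a = v²/(2d), so

\[
\frac{(30\ \text{m/s})^{2}}{2 \times 0.1\ \text{m}} = 4500\ \text{m/s}^{2} \approx 460\,g
\qquad\text{versus}\qquad
\frac{(30\ \text{m/s})^{2}}{2 \times 1\ \text{m}} = 450\ \text{m/s}^{2} \approx 46\,g.
\]

The math doesn’t care which of those your skull gets.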

Anyway, what I’m getting at is that perhaps a better way to capture supernaturalism would be “there are substrates for minds that are not accessible to humans”.

Perhaps, but supernaturalism didn’t start that way. It seems like a transparent attempt to justify conclusions previously reached by using non-mathy boxes in the way that Eliezer calls the Mind Projection Fallacy, treating those boxes as a simpler and better tool for predicting reality than math. Even today, while Rufus in RY thread#46 spoke of “potential answers” involving some new substrate, I don’t know that he had the first thought as to what an “answer” might look like. I think he still wants to say that the source of experience acts more like love than like math. His “potential answer” seems like a series of words thrown up to avoid changing one’s views after seeing evidence against them.

Brian July 15, 2011 at 2:13 pm

When all you have is a hammer, everything looks like a nail. Eliezer has a mind for math, and it seems that anything that can’t be cracked by math (or at least made to feel like it’s been cracked) is therefore logically incoherent, and therefore false, and therefore dismissible a priori.

So, that’s how you know his statements are false, even when you can’t see why? Presumably if you had reason to think this specific argument was wrong, you’d say how and point to the incorrect step. Instead, you’re saying we should fall back to general suspicion of arguments by people whose conclusions consistently resemble math? In that case, please say how this conclusion resembles math, as I don’t see it, and outline what a non-math-y solution would look like, in general terms.
