Reading Yudkowsky, part 35

by Luke Muehlhauser on May 8, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 278th post is Variable Question Fallacies, which re-covers much ground from earlier posts (which is why you can probably skip it). More fascinating is the accurately titled 37 Ways That Words Can Be Wrong. Here are just five of those ways:

  • You try to establish any sort of empirical proposition as being true “by definition”. Socrates is a human, and humans, by definition, are mortal.  So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock?  It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn’t keel over – where he’s immune to hemlock by a quirk of biochemistry, say.  Logical truths are true in all possible worlds, and so never tell you which possible world you live in – and anything you can establish “by definition” is a logical truth.  (The Parable of Hemlock.)
  • You ask whether something “is” or “is not” a category member but can’t name the question you really want answered. What is a “man”?  Is Barney the Baby Boy a “man”?  The “correct” answer may depend considerably on whether the query you really want answered is “Would hemlock be a good thing to feed Barney?” or “Will Barney make a good husband?”  (Disguised Queries.)
  • You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said “Socrates is a man”, not, “My brain perceptually classifies Socrates as a match against the ‘human’ concept”.  (How An Algorithm Feels From Inside.)
  • You allow an argument to slide into being about definitions, even though it isn’t what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a “sound”, you asked the two soon-to-be arguers whether they thought a “sound” should be defined as “acoustic vibrations” or “auditory experiences”, they’d probably tell you to flip a coin.  Only after the argument starts does the definition of a word become politically charged.  (Disputing Definitions.)
  • You get into arguments that you could avoid if you just didn’t use the word. If Albert and Barry aren’t allowed to use the word “sound”, then Albert will have to say “A tree falling in a deserted forest generates acoustic vibrations”, and Barry will say “A tree falling in a deserted forest generates no auditory experiences”.  When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.  (Taboo Your Words.)

That is, clearly, one of the central posts on Less Wrong, summarizing a massive and extremely useful discussion.

Gary Gygax Annihilated at 69 is a farewell to the inventor of Dungeons & Dragons.

Dissolving the Question describes what is, I think, one of the primary goals of good philosophy:

“If a tree falls in the forest, but no one hears it, does it make a sound?”

I didn’t answer that question.  I didn’t pick a position, “Yes!” or “No!”, and defend it.  Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network.  At the end, I hope, there was no question left – not even the feeling of a question.

Many philosophers – particularly amateur philosophers, and ancient philosophers – share a dangerous instinct:  If you give them a question, they try to answer it.

Like, say, “Do we have free will?”

The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude:  “Yes, we must have free will,” or “No, we cannot possibly have free will.”

The philosopher’s instinct is to find the most defensible position, publish it, and move on.  But the “naive” view, the instinctive view, is a fact about human psychology.  You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science:  If free will doesn’t exist, what goes on inside the head of a human being who thinks it does?  This is not a rhetorical question!

It is a fact about human psychology that people think they have free will.  Finding a more defensible philosophical position doesn’t change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.

For example, there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world.  This dangling unit is often useful as a shortcut in computation, which is why such units exist.  (Metaphorically speaking.  Human neurobiology is surely far more complex.)

This dangling unit feels like an unresolved question, even after every answerable query is answered.  No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you’re left wondering:  “But does the falling tree really make a sound, or not?”
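
Yudkowsky's dangling-unit metaphor can be sketched as a toy program (a minimal illustration of my own devising, not his actual diagram; all names here are invented). Every observable feature of the situation is answered, yet the central category node still holds an intermediate activation — the lingering "but does it really make a sound?" feeling:

```python
# Toy sketch of a "dangling unit": observable features feed a central
# category node.  The node is a computational shortcut; it corresponds
# to nothing in the world beyond the features themselves.

features = {
    "acoustic_vibrations": True,    # the tree does produce pressure waves
    "auditory_experience": False,   # no one is around to hear them
}

def central_node(features):
    # The central "sound?" node just averages the evidence
    # arriving from its observable feature nodes.
    votes = [1.0 if v else 0.0 for v in features.values()]
    return sum(votes) / len(votes)

activation = central_node(features)
# Every observable question is settled, yet the central node sits at an
# intermediate activation -- the unresolved-feeling "wrong question".
print(activation)  # 0.5
```

The point of the sketch is only that the leftover puzzlement lives in the extra node, not in any unanswered fact about the forest.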

The truth is that lots of questions over which gallons of ink have been spilled are simply Wrong Questions:

One good cue that you’re dealing with a “wrong question” is when you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question.  When it doesn’t even seem possible to answer the question.

Take the Standard Definitional Dispute, for example, about the tree falling in a deserted forest.  Is there any way-the-world-could-be – any state of affairs – that corresponds to the word “sound” really meaning only acoustic vibrations, or really meaning only auditory experiences?

(“Why, yes,” says the one, “it is the state of affairs where ‘sound’ means acoustic vibrations.”  So Taboo the word ‘means’, and ‘represents’, and all similar synonyms, and describe again:  How can the world be, what state of affairs, would make one side right, and the other side wrong?)

Or if that seems too easy, take free will:  What concrete state of affairs, whether in deterministic physics, or in physics with a dice-rolling random component, could ever correspond to having free will?

And if that seems too easy, then ask “Why does anything exist at all?”, and then tell me what a satisfactory answer to that question would even look like.

Such questions must be dissolved.  Bad things happen when you try to answer them.  It inevitably generates the worst sort of Mysterious Answer to a Mysterious Question:  The one where you come up with seemingly strong arguments for your Mysterious Answer, but the “answer” doesn’t let you make any new predictions even in retrospect, and the phenomenon still possesses the same sacred inexplicability that it had at the start.

I could guess, for example, that the answer to the puzzle of the First Cause is that nothing does exist – that the whole concept of “existence” is bogus.  But if you sincerely believed that, would you be any less confused?  Me neither.

So how do you handle wrong questions? Eliezer offers some advice in Righting a Wrong Question:

When you are faced with an unanswerable question – a question to which it seems impossible to even imagine an answer – there is a simple trick which can turn the question solvable.


  • “Why do I have free will?”
  • “Why do I think I have free will?”

The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will.  Asking “Why do I have free will?” or “Do I have free will?” sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn’t begin to see them with the naked eye.  And you’re asking “Why is X the case?” where X may not be coherent, let alone the case.

If you’ve already outgrown free will, choose one of these substitutes:

  • “Why does time move forward instead of backward?” versus “Why do I think time moves forward instead of backward?”
  • “Why was I born as myself rather than someone else?” versus “Why do I think I was born as myself rather than someone else?”
  • “Why am I conscious?” versus “Why do I think I’m conscious?”
  • “Why does reality exist?” versus “Why do I think reality exists?”

If your belief does derive from valid observation of a real phenomenon, we will eventually reach that fact, if we start tracing the causal chain backward from your belief.

If what you are really seeing is your own confusion, tracing back the chain of causality will find an algorithm that runs skew to reality.

Either way, the question is guaranteed to have an answer.  You even have a nice, concrete place to begin tracing – your belief, sitting there solidly in your mind.

Cognitive science may not seem so lofty and glorious as metaphysics.  But at least questions of cognitive science are solvable. Finding an answer may not be easy, but at least an answer exists.

Oh, and also: the idea that cognitive science is not so lofty and glorious as metaphysics is simply wrong.  Some readers are beginning to notice this, I hope.


3 comments:

Polymeron May 8, 2011 at 2:05 pm

“37 ways” was the first LW post I ever read. I wasn’t even sure whether it was Yudkowsky who wrote it, and didn’t bother checking. It’s not really important, I suppose.

It’s absolutely brilliant. If people are not going to read the LW sequences and/or strive to be rationalists, they can still massively benefit from this one post. It’s also pretty much proof positive of the philosophical usefulness of what Yudkowsky is doing.


Ben A May 8, 2011 at 7:02 pm

Hey Luke,

I know some of the posts on Less Wrong are interesting, and I know you said you would be cutting down on these posts, but I think readers are more interested in your material, not Yudkowsky’s.

If readers wanted to read Less Wrong, they would be going to that blog, not Common Sense Atheism.


Luke Muehlhauser May 8, 2011 at 11:29 pm

Ben A,

The thing is, I don’t have time to update this blog much anymore, and I have all these Yudkowsky posts already written just waiting to be posted.

