Reading Yudkowsky, part 32

by Luke Muehlhauser on April 28, 2011 in Eliezer Yudkowsky,General Atheism

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 252nd post, Disguised Queries, is one of his most central, and is worth quoting at length:

Imagine that you have a peculiar job in a peculiar factory:  Your task is to take objects from a mysterious conveyor belt, and sort the objects into two bins.  When you first arrive, Susan the Senior Sorter explains to you that blue egg-shaped objects are called “bleggs” and go in the “blegg bin”, while red cubes are called “rubes” and go in the “rube bin”.

Once you start working, you notice that bleggs and rubes differ in ways besides color and shape.  Bleggs have fur on their surface, while rubes are smooth.  Bleggs flex slightly to the touch; rubes are hard.  Bleggs are opaque; the rube’s surface is slightly translucent.

Soon after you begin working, you encounter a blegg shaded an unusually dark blue – in fact, on closer examination, the color proves to be purple, halfway between red and blue.

Yet wait!  Why are you calling this object a “blegg”?  A “blegg” was originally defined as blue and egg-shaped – the qualification of blueness appears in the very name “blegg”, in fact.  This object is not blue.  One of the necessary qualifications is missing; you should call this a “purple egg-shaped object”, not a “blegg”.

But it so happens that, in addition to being purple and egg-shaped, the object is also furred, flexible, and opaque.  So when you saw the object, you thought, “Oh, a strangely colored blegg.”  It certainly isn’t a rube… right?

Still, you aren’t quite sure what to do next.  So you call over Susan the Senior Sorter.

Susan says it’s a blegg for sure, and this leads to a discussion about why there is a blegg bin and a rube bin in the first place. The discussion of how to tell the difference between a rube and a blegg in uncertain cases goes on and on until, at the very end, Susan reveals that “bleggs contain small nuggets of vanadium ore, and rubes contain shreds of palladium, both of which are useful industrially.”

So now it seems we’ve discovered the heart and essence of bleggness: a blegg is an object that contains a nugget of vanadium ore.  Surface characteristics, like blue color and furredness, do not determine whether an object is a blegg; surface characteristics only matter because they help you infer whether an object is a blegg, that is, whether the object contains vanadium.

Containing vanadium looks like a necessary and sufficient condition: all bleggs contain vanadium, and everything that contains vanadium is a blegg. “Blegg” is just a shorthand way of saying “vanadium-containing object.”

But no. It’s not that simple:

Around 98% of bleggs contain vanadium, but 2% contain palladium instead.  To be precise (Susan continues) around 98% of blue egg-shaped furred flexible opaque objects contain vanadium.  For unusual bleggs, it may be a different percentage: 95% of purple bleggs contain vanadium, 92% of hard bleggs contain vanadium, etc.

Now suppose you find a blue egg-shaped furred flexible opaque object, an ordinary blegg in every visible way, and just for kicks you take it to the sorting scanner, and the scanner says “palladium” – this is one of the rare 2%.  Is it a blegg?

At first you might answer that, since you intend to throw this object in the rube bin, you might as well call it a “rube”.  However, it turns out that almost all bleggs, if you switch off the lights, glow faintly in the dark; while almost all rubes do not glow in the dark.  And the percentage of bleggs that glow in the dark is not significantly different for blue egg-shaped furred flexible opaque objects that contain palladium, instead of vanadium.  Thus, if you want to guess whether the object glows like a blegg, or remains dark like a rube, you should guess that it glows like a blegg.

Enough already! So is the object really a blegg, or is it really a rube?

On one hand, you’ll throw the object in the rube bin no matter what else you learn.  On the other hand, if there are any unknown characteristics of the object you need to infer, you’ll infer them as if the object were a blegg, not a rube – group it into the similarity cluster of blue egg-shaped furred flexible opaque things, and not the similarity cluster of red cube-shaped smooth hard translucent things.

The question “Is this object a blegg?” may stand in for different queries on different occasions.

If it weren’t standing in for some query [about physical realities], you’d have no reason to care.

Is atheism a “religion”?  Is transhumanism a “cult”?  People who argue that atheism is a religion “because it states beliefs about God” are really trying to argue (I think) that the reasoning methods used in atheism are on a par with the reasoning methods used in religion, or that atheism is no safer than religion in terms of the probability of causally engendering violence, etc…  What’s really at stake is an atheist’s claim of substantial difference and superiority relative to religion, which the religious person is trying to reject by denying the difference rather than the superiority(!)

But that’s not the a priori irrational part:  The a priori irrational part is where, in the course of the argument, someone pulls out a dictionary and looks up the definition of “atheism” or “religion”.  (And yes, it’s just as silly whether an atheist or religionist does it.)  How could a dictionary possibly decide whether an empirical cluster of atheists is really substantially different from an empirical cluster of theologians?  How can reality vary with the meaning of a word?  The points in thingspace don’t move around when we redraw a boundary.

But people often don’t realize that their argument about where to draw a definitional boundary, is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster…

Hence the phrase, “disguised query”.
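The blegg/rube moral lends itself to a toy sketch: the same question, “Is it a blegg?”, dispatches to different underlying queries with different answers. The sketch below is my own illustration of that idea, not code from the post; the function names and the crude feature-counting threshold are invented for the example, while the features and the 2%-palladium twist come from the quoted text.

```python
# Toy sketch of the "disguised query" dispatch from the blegg/rube example.
# Each query about the same object is answered by a different property.

def which_bin(obj):
    """The sorting query: bin by metal content, since the bins exist
    to separate vanadium from palladium for industrial use."""
    return "rube bin" if obj["metal"] == "palladium" else "blegg bin"

def will_it_glow(obj):
    """The prediction query: glowing correlates with the surface cluster
    (blue, egg-shaped, furred, ...), not with the metal inside."""
    blegg_features = {"blue", "egg-shaped", "furred", "flexible", "opaque"}
    # Crude similarity measure: how many blegg-cluster features it shares.
    blegg_like = len(blegg_features & obj["surface"])
    return blegg_like >= 3  # closer to the blegg cluster than the rube cluster

# The problem object: blegg-like on every surface feature,
# but containing palladium -- one of the rare 2%.
odd_object = {
    "surface": {"blue", "egg-shaped", "furred", "flexible", "opaque"},
    "metal": "palladium",
}

print(which_bin(odd_object))     # sorted as a rube
print(will_it_glow(odd_object))  # but predicted to glow like a blegg
```

Asking “Is it really a blegg?” after both functions have been evaluated is the residual confusion the next posts diagnose: every query the label stood in for has already been answered.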

Neural Categories recasts this reasoning in graphical form, mapping bleggs and rubes into a visual thingspace. How an Algorithm Feels From the Inside begins with a good summary of the discussion so far:

Getting into a heated dispute about whether, if a tree falls in a deserted forest, it makes a sound, is traditionally considered a mistake.

So what kind of mind design corresponds to that error?

In Disguised Queries I introduced the blegg/rube classification task, in which Susan the Senior Sorter explains that your job is to sort objects coming off a conveyor belt, putting the blue eggs or “bleggs” into one bin, and the red cubes or “rubes” into the rube bin.  This, it turns out, is because bleggs contain small nuggets of vanadium ore, and rubes contain small shreds of palladium, both of which are useful industrially.

Except that around 2% of blue egg-shaped objects contain palladium instead.  So if you find a blue egg-shaped thing that contains palladium, should you call it a “rube” instead?  You’re going to put it in the rube bin – why not call it a “rube”?

But when you switch off the light, nearly all bleggs glow faintly in the dark.  And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.

So if you find a blue egg-shaped object that contains palladium, and you ask “Is it a blegg?”, the answer depends on what you have to do with the answer:  If you ask “Which bin does the object go in?”, then you choose as if the object is a rube.  But if you ask “If I turn off the light, will it glow?”, you predict as if the object is a blegg.  In one case, the question “Is it a blegg?” stands in for the disguised query, “Which bin does it go in?” In the other case, the question “Is it a blegg?” stands in for the disguised query, “Will it glow in the dark?”

Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark.

This answers every query, observes every observable introduced.  There’s nothing left for a disguised query to stand for.

So why might someone feel an impulse to go on arguing whether the object is really a blegg?

It’s because our human brains work differently than rational analysis does:

We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass – but is it a planet?  [If your brain was organized differently] you wouldn’t say “It depends on how you define ‘planet’,” you would just say, “Given that we know Pluto’s orbit and shape and mass, there is no question left to ask”…

Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition – some cognitive algorithm, as seen from the inside – rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can’t see their intuitions as the way their cognitive algorithms happen to look from the inside.

And so everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are – and discarded as obviously wrong.

So how do you avoid descending into a debate over definitions? Eliezer offers some advice in Disputing Definitions:

Personally I’d say that if the issue arises, both sides should switch to describing the event in unambiguous lower-level constituents, like acoustic vibrations or auditory experiences.  Or each side could designate a new word, like ‘alberzle’ and ‘bargulum’, to use for what they respectively used to call ‘sound’; and then both sides could use the new words consistently.  That way neither side has to back down or lose face, but they can still communicate.  And of course you should try to keep track, at all times, of some testable proposition that the argument is actually about.

Feel the Meaning reminds us:

It feels like a word has a meaning, as a property of the word itself; just like how redness is a property of a red apple, or mysteriousness is a property of a mysterious phenomenon.

…You may not even notice that anything has gone astray, until you try to perform the rationalist ritual of stating a testable experiment whose result depends on the facts you’re so heatedly disputing…

One way to pick a definition for a disputed term is The Argument from Common Usage:

…surely there is a social imperative to use words in a commonly understood way?  Does not our human telepathy, our valuable power of language, rely on mutual coordination to work?  Perhaps we should voluntarily treat dictionary editors as supreme arbiters – even if they prefer to think of themselves as historians – in order to maintain the quiet cooperation on which all speech depends.

…agreement on language can also be a cooperatively established public good.  If you and I wish to undergo an exchange of thoughts via language, the human telepathy, then it is in our mutual interest that we use the same word for similar concepts – preferably, concepts similar to the limit of resolution in our brain’s representation thereof – even though we have no obvious mutual interest in using any particular word for a concept.

We have no obvious mutual interest in using the word “oto” to mean sound, or “sound” to mean oto; but we have a mutual interest in using the same word, whichever word it happens to be…

Albert’s appeal to the Argument from Common Usage assumes that agreement on language is a cooperatively established public good.  Yet Albert assumes this for the sole purpose of rhetorically accusing Barry of breaking the agreement, and endangering the public good.  Now the falling-tree argument has gone all the way from botany to semantics to politics; and so Barry responds by challenging Albert for the authority to define the word.

A rationalist, with the discipline of hugging the query active, would notice that the conversation had gone rather far astray…

If the question is how to cluster together similar things for purposes of inference, empirical predictions will depend on the answer; which means that definitions can be wrong.  A conflict of predictions cannot be settled by an opinion poll.

If you want to know whether atheism should be clustered with supernaturalist religions for purposes of some particular empirical inference, the dictionary can’t answer you.

If you want to know whether blacks are people, the dictionary can’t answer you.

If everyone believes that the red light in the sky is Mars the God of War, the dictionary will define “Mars” as the God of War.  If everyone believes that fire is the release of phlogiston, the dictionary will define “fire” as the release of phlogiston.

There is an art to using words; even when definitions are not literally true or false, they are often wiser or more foolish.  Dictionaries are mere histories of past usage; if you treat them as supreme arbiters of meaning, it binds you to the wisdom of the past, forbidding you to do better.

Though do take care to ensure (if you must depart from the wisdom of the past) that people can figure out what you’re trying to say.

Empty Labels concludes:

…if you’re going to have an Aristotelian proverb at all, the proverb should be, not “I can define a word any way I like,” nor even, “Defining a word never has any consequences,” but rather, “Definitions don’t need words.”

After another meetup post, the discussion of words continues with Taboo Your Words:

In the game Taboo (by Hasbro), the objective is for a player to have their partner guess a word written on a card, without using that word or five additional words listed on the card.  For example, you might have to get your partner to say “baseball” without using the words “sport”, “bat”, “hit”, “pitch”, “base” or of course “baseball”.

The existence of this game surprised me, when I discovered it.  Why wouldn’t you just say “An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions”?

But then, by the time I discovered the game, I’d already been practicing it for years – albeit with a different purpose.

Here’s how you can Taboo your words:

When you find yourself in philosophical difficulties, the first line of defense is not to define your problematic terms, but to see whether you can think without using those terms at all.  Or any of their short synonyms.  And be careful not to let yourself invent a new word to use instead.  Describe outward observables and interior mechanisms; don’t use a single handle, whatever that handle may be.

Albert says that people have “free will”.  Barry says that people don’t have “free will”.  Well, that will certainly generate an apparent conflict.  Most philosophers would advise Albert and Barry to try to define exactly what they mean by “free will”, on which topic they will certainly be able to discourse at great length.  I would advise Albert and Barry to describe what it is that they think people do, or do not have, without using the phrase “free will” at all.  (If you want to try this at home, you should also avoid the words “choose”, “act”, “decide”, “determined”, “responsible”, or any of their synonyms.)

This is one of the nonstandard tools in my toolbox, and in my humble opinion, it works way way better than the standard one.  It also requires more effort to use; you get what you pay for.

Another way of saying all this is to say that if confusion is lurking, just Replace the Symbol with the Substance.



drj April 28, 2011 at 7:39 am

That last bit is great advice… something that seems lost on so many people. It’s amazing to see all the arguments going on across the internet and elsewhere where people are clearly using the same terms but mean completely different things, and seem to have no idea why the other just can’t get what they are trying to say.
