Reading Yudkowsky, part 31

by Luke Muehlhauser on April 24, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 241st post is Something to Protect:

Historically speaking, the way humanity finally left the trap of authority and began paying attention to, y’know, the actual sky, was that beliefs based on experiment turned out to be much more useful than beliefs based on authority.  Curiosity has been around since the dawn of humanity, but the problem is that spinning campfire tales works just as well for satisfying curiosity.

Historically speaking, science won because it displayed greater raw strength in the form of technology, not because science sounded more reasonable.  To this very day, magic and scripture still sound more reasonable to untrained ears than science.  That is why there is continuous social tension between the belief systems.  If science not only worked better than magic, but also sounded more intuitively reasonable, it would have won entirely by now.

So does the rationalist pursue usefulness, or Truth?

…don’t oversimplify the relationship between loving truth and loving usefulness.  It’s not one or the other.  It’s complicated, which is not necessarily a defect in the moral aesthetics of single events.

Read the post for more details. The followup, Newcomb’s Problem and Regret of Rationality, opens with a nice summary of Newcomb’s problem:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game.  In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far – everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.  (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game.  Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

The dialectic continues:

One-boxer:  “I take only box B, of course.  I’d rather have a million than a thousand.”

Two-boxer:  “Omega has already left.  Either box B is already full or already empty.  If box B is already empty, then taking both boxes nets me $1000, taking only box B nets me $0.  If box B is already full, then taking both boxes nets $1,001,000, taking only box B nets $1,000,000.  In either case I do better by taking both boxes, and worse by leaving a thousand dollars on the table – so I will be rational, and take both boxes.”

One-boxer:  “If you’re so rational, why ain’cha rich?”

Two-boxer:  “It’s not my fault Omega chooses to reward only people with irrational dispositions, but it’s already too late for me to do anything about that.”

The post goes on to offer a counter-argument to the dominant academic answer, two-boxing as prescribed by “causal decision theory.”
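
To make the disagreement concrete, here is a minimal sketch (mine, not Yudkowsky’s) of the expected-value calculation each side is implicitly running. The 99% predictor accuracy is an assumption standing in for Omega’s perfect record so far.

```python
# Newcomb's problem: two ways of computing the expected payoff of each choice.
ACCURACY = 0.99  # assumed predictor accuracy, standing in for Omega's 100/100 record

def evidential_ev(action):
    """Expected payoff if your choice is treated as evidence about Omega's prediction."""
    if action == "one-box":
        # With probability ACCURACY, Omega predicted one-boxing and filled box B.
        return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
    # With probability ACCURACY, Omega predicted two-boxing and left box B empty.
    return ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

def causal_ev(action, prob_b_full):
    """Expected payoff if box B's contents are fixed independently of your choice."""
    return prob_b_full * 1_000_000 + (1_000 if action == "two-box" else 0)

print({a: evidential_ev(a) for a in ("one-box", "two-box")})
for p in (0.0, 0.5, 1.0):
    print(p, {a: causal_ev(a, p) for a in ("one-box", "two-box")})
```

On the first calculation, one-boxing wins by a huge margin; on the second, two-boxing gains $1,000 no matter what probability you assign to box B being full, which is exactly the two-boxer’s dominance argument above.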

The next post is about a meetup, and then comes The Parable of the Dagger, which presents a version of the liar paradox. The Parable of Hemlock tells a story about Socrates to illustrate the following point:

The problem with [valid] syllogisms is that they’re always valid.  “All humans are mortal; Socrates is human; therefore Socrates is mortal” is – if you treat it as a logical syllogism – logically valid within our own universe.  It’s also logically valid within neighboring Everett branches in which, due to a slightly different evolved biochemistry, hemlock is a delicious treat rather than a poison.  And it’s logically valid even in universes where Socrates never existed, or for that matter, where humans never existed.

The Bayesian definition of evidence favoring a hypothesis is evidence which we are more likely to see if the hypothesis is true than if it is false.  Observing that a syllogism is logically valid can never be evidence favoring any empirical proposition, because the syllogism will be logically valid whether that proposition is true or false.

Syllogisms are valid in all possible worlds, and therefore, observing their validity never tells us anything about which possible world we actually live in.
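
The quoted definition is just the likelihood-ratio form of Bayes’ theorem. A small worked sketch (my numbers, purely illustrative): an observation shifts your belief only insofar as it is more probable under the hypothesis than under its negation, and a valid syllogism is equally probable (probability 1) either way, so it shifts nothing.

```python
def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# Illustrative: the evidence is three times likelier if the hypothesis is true.
print(posterior_odds(prior_odds=1.0, p_e_given_h=0.9, p_e_given_not_h=0.3))  # 3.0

# A valid syllogism is valid whether the hypothesis is true or false,
# so the likelihood ratio is 1 and the odds don't move.
print(posterior_odds(prior_odds=1.0, p_e_given_h=1.0, p_e_given_not_h=1.0))  # 1.0
```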

Words as Hidden Inferences tells another parable:

Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand.  I reach in, and feel a small, curved object.  I pull the object out, and it’s blue – a bluish egg.  Next I reach in and feel something hard and flat, with edges – which, when I extract it, proves to be a red cube.  I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red.

Now I reach in and I feel another egg-shaped object.  Before I pull it out and look, I have to guess:  What will it look like?

The evidence doesn’t prove that every egg in the barrel is blue, and every cube is red.  The evidence doesn’t even argue this all that strongly: 19 is not a large sample size.  Nonetheless, I’ll guess that this egg-shaped object is blue – or as a runner-up guess, red.  If I guess anything else, there’s as many possibilities as distinguishable colors – and for that matter, who says the egg has to be a single shade?  Maybe it has a picture of a horse painted on.

…And if I name the egg-shaped objects “bleggs” (for blue eggs) and the red cubes “rubes”, then, when I reach in and feel another egg-shaped object, I may think:  Oh, it’s a blegg, rather than considering all that problem-of-induction stuff.
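
One way to put a number on that guess (my gloss, not Yudkowsky’s) is Laplace’s rule of succession: after seeing 11 blue eggs out of 11, estimate roughly (11+1)/(11+2) ≈ 0.92 that the next egg-shaped object is blue. Confident, but nowhere near proof, which is the point.

```python
def rule_of_succession(successes, trials):
    """Laplace's estimate that the next draw continues the observed pattern."""
    return (successes + 1) / (trials + 2)

# All 11 egg-shaped objects pulled from the barrel so far were blue.
print(rule_of_succession(11, 11))  # ~0.923
```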

It is a common misconception that you can define a word any way you like.

This would be true if the brain treated words as purely logical constructs, Aristotelian classes, and you never took out any more information than you put in.

…If you asked an Aristotelian philosopher whether Carol the grocer was mortal, they would say “Yes.”  If you asked them how they knew, they would say “All humans are mortal, Carol is human, therefore Carol is mortal.”  Ask them whether it was a guess or a certainty, and they would say it was a certainty (if you asked before the sixteenth century, at least).  Ask them how they knew that humans were mortal, and they would say it was established by definition.

…Your brain doesn’t treat words as logical definitions with no empirical consequences, and so neither should you.  The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity.  Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories.  Notice how I said “you” and “your brain” as if they were different things?

Making errors about the inside of your head doesn’t change what’s there; otherwise Aristotle would have died when he concluded that the brain was an organ for cooling the blood.  Philosophical mistakes usually don’t interfere with blink-of-an-eye perceptual inferences.

But philosophical mistakes can severely mess up the deliberate thinking processes that we use to try to correct our first impressions.  If you believe that you can “define a word any way you like”, without realizing that your brain goes on categorizing without your conscious oversight, then you won’t take the effort to choose your definitions wisely.

This is a hugely important point, and it’s problems like this that motivated much of the linguistic turn in 20th-century analytic philosophy. The story continues in Extensions and Intensions:

To give an “intensional definition” is to define a word or phrase in terms of other words, as a dictionary does.  To give an “extensional definition” is to point to examples, as adults do when teaching children.  The preceding sentence gives an intensional definition of “extensional definition”, which makes it an extensional example of “intensional definition”.

Intensional definitions don’t capture entire intensions; extensional definitions don’t capture entire extensions.  If I point to just one tiger and say the word “tiger”, the communication may fail if they think I mean “dangerous animal” or “male tiger” or “yellow thing”.  Similarly, if I say “dangerous yellow-black striped animal”, without pointing to anything, the listener may visualize giant hornets.

You can’t capture in words all the details of the cognitive concept – as it exists in your mind – that lets you recognize things as tigers or nontigers.  It’s too large.  And you can’t point to all the tigers you’ve ever seen, let alone everything you would call a tiger.

…So that’s another reason you can’t “define a word any way you like”:  You can’t directly program concepts into someone else’s brain.

…When you take into account the way the human mind actually, pragmatically works, the notion “I can define a word any way I like” soon becomes “I can believe anything I want about a fixed set of objects” or “I can move any object I want in or out of a fixed membership test”.  Just as you can’t usually convey a concept’s whole intension in words because it’s a big complicated neural membership test, you can’t control the concept’s entire intension because it’s applied sub-deliberately.  This is why arguing that XYZ is true “by definition” is so popular.  If definition changes behaved like the empirical nullops they’re supposed to be, no one would bother arguing them.  But abuse definitions just a little, and they turn into magic wands – in arguments, of course; not in reality.
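
A rough way to picture the distinction in code, on my own analogy rather than anything in the post: an intensional definition behaves like a membership test written in terms of other properties, while an extension is just the examples you have pointed to, and the verbal test can misfire exactly as the giant-hornet case suggests.

```python
# Toy illustration: "tiger" as a verbal intension versus a pointed-to extension.

def matches_verbal_intension(animal):
    """The crude membership test 'dangerous yellow-black striped animal'."""
    return animal["dangerous"] and animal["striped"] and animal["color"] == "yellow-black"

# Extension: the particular tigers actually pointed to so far.
tigers_pointed_at = ["the zoo tiger", "the tiger in the nature documentary"]

giant_hornet = {"dangerous": True, "striped": True, "color": "yellow-black"}
print(matches_verbal_intension(giant_hornet))       # True: the verbal test is too loose
print("a tiger met tomorrow" in tigers_pointed_at)  # False: the examples can't cover unseen tigers
```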

Buy Now or Forever Hold Your Peace discusses a money-betting prediction market on who would win the 2008 Democratic presidential nomination: Hillary or Barack.

The point is not that prediction markets are a good predictor but that they are the best predictor. If you think you can do better, why ain’cha rich?  Any person, group, or method that does better can pump money out of the prediction markets.
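
The “pump money out” claim is just an expected-value argument. A toy sketch with made-up numbers: if the market prices a Clinton win at $0.60 per $1-payout contract and your better method says 75%, each contract you buy has positive expected profit, and your buying pushes the price toward your estimate.

```python
def expected_profit_per_contract(market_price, your_probability):
    """Expected profit from buying one $1-payout contract at the market price."""
    return your_probability * 1.0 - market_price

# Hypothetical numbers: the market says 60%, your supposedly better method says 75%.
print(expected_profit_per_contract(0.60, 0.75))  # +0.15 dollars per contract, on average
# If you really can do better, repeating this trade makes you rich and corrects
# the market price, hence the "why ain'cha rich?" retort.
```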

Similarity Clusters argues:

A dictionary is best thought of, not as a book of Aristotelian class definitions, but a book of hints for matching verbal labels to similarity clusters, or matching labels to properties that are useful in distinguishing similarity clusters.

Typicality and Asymmetrical Similarity warns of some biases we exhibit, called “typicality effects” or “prototype effects.” The Cluster Structure of Thingspace continues the discussion with regard to configuration spaces.
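
As a minimal sketch of what a “similarity cluster” in a configuration space looks like (my illustration, not Yudkowsky’s): describe each object by a few measurable properties, and a label like “blegg” or “rube” then behaves like a nearest-cluster assignment rather than a necessary-and-sufficient definition.

```python
import math

# Each object as a point in a tiny "thingspace": (blueness, redness, roundness).
cluster_centers = {
    "blegg": (0.9, 0.1, 0.9),  # blue, roundish things
    "rube":  (0.1, 0.9, 0.1),  # red, cubish things
}

def nearest_cluster(point):
    """Label an object by its closest similarity cluster, not by a definition."""
    return min(cluster_centers, key=lambda name: math.dist(point, cluster_centers[name]))

print(nearest_cluster((0.8, 0.2, 0.85)))  # 'blegg'
print(nearest_cluster((0.2, 0.7, 0.3)))   # 'rube'
```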


3 comments

Garren April 26, 2011 at 9:03 am

‘So, too, if you ever find yourself keeping separate track of the “reasonable” belief, versus the belief that seems likely to be actually true. Either you have misunderstood reasonableness, or your second intuition is just wrong.’ – Alonzo’s Newcomb post

I really like the way Alonzo argues for goal achievement over reliance on hitherto reliable tools if you have good reason(!) to think they will fail in this case.

Still a one-boxer here.


Garren April 26, 2011 at 9:04 am

And I just mixed up Luke’s secondary authors in spectacular fashion. Sorry guys. =P


Christian Pascu April 26, 2011 at 2:01 pm

Historically speaking, science won because it displayed greater raw strength in the form of technology, not because science sounded more reasonable.

Science won a fight no one else was really fighting. Technology is not the real aim of “the war”.

If “science” wins the fight over the purpose of our existence, the very reason behind us being here, then we will all be its victims. We all lose then. Science does not fight anyone. Science is a method, and it leads to technology. Or to lower-grade truths, like H plus O makes water.
If science fights an ideology, like theism, then it’s actually another ideology behind it, like atheism. And it’s only when we step beyond death that we can tell which one won the fight. That’d be the moment of truth for each of us.

