Reading Yudkowsky, part 36

by Luke Muehlhauser on May 9, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 284th post is a classic, Mind Projection Fallacy:

In the dawn days of science fiction, alien invaders would occasionally kidnap a girl in a torn dress and carry her off for intended ravishing, as lovingly depicted on many ancient magazine covers.  Oddly enough, the aliens never go after men in torn shirts.

Would a non-humanoid alien, with a different evolutionary history and evolutionary psychology, sexually desire a human female?  It seems rather unlikely.  To put it mildly.

People don’t make mistakes like that by deliberately reasoning:  “All possible minds are likely to be wired pretty much the same way, therefore a bug-eyed monster will find human females attractive.”  Probably the artist did not even think to ask whether an alien perceives human females as attractive.  Instead, a human female in a torn dress is sexy – inherently so, as an intrinsic property.

They who went astray did not think about the alien’s evolutionary history; they focused on the woman’s torn dress.  If the dress were not torn, the woman would be less sexy; the alien monster doesn’t enter into it.

Apparently we instinctively represent Sexiness as a direct attribute of the Woman object, Woman.sexiness, like Woman.height or Woman.weight.

If your brain uses that data structure, or something metaphorically similar to it, then from the inside it feels like sexiness is an inherent property of the woman, not a property of the alien looking at the woman.  Since the woman is attractive, the alien monster will be attracted to her – isn’t that logical?

E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind’s properties into the external world…
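Yudkowsky’s object-oriented metaphor is easy to make concrete. Here is a minimal sketch of the two data structures in Python (my own illustration, not code from the post): the first encodes the fallacy by storing sexiness as an intrinsic attribute of the object; the second puts the judgment where it actually lives, in the mind of the observer.

class Woman:
    # The mind-projection version: an observer's judgment stored
    # as if it were an intrinsic property of the object.
    def __init__(self, height, weight, sexiness):
        self.height = height
        self.weight = weight
        self.sexiness = sexiness  # the bug lives here

class Human:
    # The corrected version: attraction is a two-place relation
    # between an observer and the observed.
    def finds_sexy(self, person):
        return True   # evolved human criteria, stubbed for illustration

class BugEyedMonster:
    def finds_sexy(self, person):
        return False  # different evolutionary history, different criteria

On the second representation, “will the alien find her attractive?” is transparently a question about the alien’s mind, not about the woman.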

Probability is in the Mind recounts the ancient battle between frequentists and Bayesians:

I cannot do justice to this ancient war in a few words – but the classic example of the argument runs thus:

You have a coin.

The coin is biased.

You don’t know which way it’s biased or how much it’s biased.  Someone just told you, “The coin is biased” and that’s all they said.

This is all the information you have, and the only information you have.

You draw the coin forth, flip it, and slap it down.

Now – before you remove your hand and look at the result – are you willing to say that you assign a 0.5 probability to the coin having come up heads?

The frequentist says, “No.  Saying ‘probability 0.5’ means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1.  But we know that the coin is biased, so it can have any probability of coming up heads except 0.5.”

The Bayesian says, “Uncertainty exists in the map, not in the territory.  In the real world, the coin has either come up heads, or come up tails.  Any talk of ‘probability’ must refer to the information that I have about the coin – my state of partial ignorance and partial knowledge – not just the coin itself.  Furthermore, I have all sorts of theorems showing that if I don’t treat my partial knowledge a certain way, I’ll make stupid bets.  If I’ve got to plan, I’ll plan for a 50/50 state of uncertainty, where I don’t weigh outcomes conditional on heads any more heavily in my mind than outcomes conditional on tails.  You can call that number whatever you like, but it has to obey the probability laws on pain of stupidity.  So I don’t have the slightest hesitation about calling my outcome-weighting a probability.”

I side with the Bayesians.  You may have noticed that about me.

Another thought experiment:

To make the coinflip experiment repeatable, as frequentists are wont to demand, we could build an automated coinflipper, and verify that the results were 50% heads and 50% tails. But maybe a robot with extra-sensitive eyes and a good grasp of physics, watching the autoflipper prepare to flip, could predict the coin’s fall in advance – not with certainty, but with 90% accuracy. Then what would the real probability be?

There is no “real probability”. The robot has one state of partial information. You have a different state of partial information. The coin itself has no mind, and doesn’t assign a probability to anything; it just flips into the air, rotates a few times, bounces off some air molecules, and lands either heads or tails.
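That point is easy to check in simulation. Here’s a toy version of the autoflipper (my sketch, not from the post): the same flips, two observers with different states of partial information, and each observer’s probability assignment is exactly right for what that observer knows.

import random

random.seed(0)
trials = 100_000
you_correct = robot_correct = 0

for _ in range(trials):
    # The territory: each flip lands one definite way.
    outcome = random.choice(["heads", "tails"])
    # You know nothing about this flip; any fixed call wins half the time.
    your_guess = "heads"
    # The robot's sharp eyes and grasp of physics let it call the
    # flip correctly 90% of the time.
    if random.random() < 0.9:
        robot_guess = outcome
    else:
        robot_guess = "tails" if outcome == "heads" else "heads"
    you_correct += (your_guess == outcome)
    robot_correct += (robot_guess == outcome)

print("your hit rate:   ", you_correct / trials)    # ~0.50
print("robot's hit rate:", robot_correct / trials)  # ~0.90

Both 0.5 and 0.9 are correct probability assignments. They are facts about two different maps, not two competing facts about the coin.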

Now, a few brain teasers that “derive their brain-teasing ability from the tendency to think of probabilities as inherent properties of objects”:

Let’s take the old classic:  You meet a mathematician on the street, and she happens to mention that she has given birth to two children on two separate occasions.  You ask:  “Is at least one of your children a boy?”  The mathematician says, “Yes, he is.”

What is the probability that she has two boys?  If you assume that the prior probability of a child being a boy is 1/2, then the probability that she has two boys, on the information given, is 1/3.  The prior probabilities were:  1/4 two boys, 1/2 one boy one girl, 1/4 two girls.  The mathematician’s “Yes” response has probability ~1 in the first two cases, and probability ~0 in the third.  Renormalizing leaves us with a 1/3 probability of two boys, and a 2/3 probability of one boy one girl.

But suppose that instead you had asked, “Is your eldest child a boy?” and the mathematician had answered “Yes.”  Then the probability of the mathematician having two boys would be 1/2.  Since the eldest child is a boy, and the younger child can be anything it pleases.

Likewise if you’d asked “Is your youngest child a boy?”  The probability of their being both boys would, again, be 1/2.

Now, if at least one child is a boy, it must be either the oldest child who is a boy, or the youngest child who is a boy.  So how can the answer in the first case be different from the answer in the latter two?
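A quick Monte Carlo check (mine, not from the post) confirms all three numbers:

import random

random.seed(0)
# Sample two-child families; each child is a boy or girl with probability 1/2.
families = [(random.choice("BG"), random.choice("BG"))
            for _ in range(1_000_000)]

def p_two_boys(group):
    return sum(f == ("B", "B") for f in group) / len(group)

print(p_two_boys([f for f in families if "B" in f]))     # ~0.333
print(p_two_boys([f for f in families if f[0] == "B"]))  # ~0.5
print(p_two_boys([f for f in families if f[1] == "B"]))  # ~0.5

The dissolution: “at least one is a boy” eliminates only the girl-girl quarter of the prior, leaving two boys as one case in three, while “the eldest is a boy” eliminates a full half of it. The answers differ because the questions extract different information, and the probability lives in your information, not in the children.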

Read the post for the other brain teasers. The Quotation is not the Referent ends with:

Similarly, the notion of truth is quite different from the notion of reality. Saying “true” compares a belief to reality.  Reality itself does not need to be compared to any beliefs in order to be real.  Remember this the next time someone claims that nothing is true.

Penguicon & Blook is a news update. Qualitatively Confused proposes:

I suggest that a primary cause of confusion about the distinction between “belief”, “truth”, and “reality” is qualitative thinking about beliefs.

Consider the archetypal postmodernist attempt to be clever:

“The Sun goes around the Earth” is true for Hunga Huntergatherer, but “The Earth goes around the Sun” is true for Amara Astronomer!  Different societies have different truths!

No, different societies have different beliefs. Belief is of a different type than truth; it’s like comparing apples and probabilities.

Ah, but there’s no difference between the way you use the word ‘belief’ and the way you use the word ‘truth’!  Whether you say, “I believe ‘snow is white’”, or you say, “‘Snow is white’ is true”, you’re expressing exactly the same opinion.

No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.

Oh, you claim to conceive it, but you never believe it.  As Wittgenstein said, “If there were a verb meaning ‘to believe falsely’, it would not have any significant first person, present indicative.”

And that’s what I mean by putting my finger on qualitative reasoning as the source of the problem.  The dichotomy between belief and disbelief, being binary, is confusingly similar to the dichotomy between truth and untruth.

What’s the solution?

So let’s use quantitative reasoning instead.  Suppose that I assign a 70% probability to the proposition that snow is white.  It follows that I think there’s around a 70% chance that the sentence “snow is white” will turn out to be true.  If the sentence “snow is white” is true, is my 70% probability assignment to the proposition, also “true”?  Well, it’s more true than it would have been if I’d assigned 60% probability, but not so true as if I’d assigned 80% probability.

When talking about the correspondence between a probability assignment and reality, a better word than “truth” would be “accuracy”.  “Accuracy” sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target?

To make a long story short, it turns out that there’s a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.
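That scoring rule is one line of code. A quick sketch (using the natural log; any base works the same way up to a constant factor):

import math

def log_score(p_assigned_to_what_happened):
    # Log of the probability you assigned to the actual state of affairs.
    # Always zero or negative; closer to zero is more accurate.
    return math.log(p_assigned_to_what_happened)

# Suppose "snow is white" turns out to be true:
for p in (0.6, 0.7, 0.8):
    print(f"assigned {p:.0%} -> score {log_score(p):.3f}")
# assigned 60% -> score -0.511
# assigned 70% -> score -0.357
# assigned 80% -> score -0.223

One consequence worth noticing: assigning probability 0 to something that then happens scores negative infinity, which is one reason a Bayesian never assigns probability exactly 0 or 1.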

Now, a new topic. Reductionism:

I… hold that “reductionism”, according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.

This seems like a strong statement, at least the first part of it.  General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?

On the other hand, we are never going back to Newtonian mechanics.  The ratchet of science turns, but it does not turn in reverse. There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.

Consider a Boeing 747:

Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider.  Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC.  A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.

So is the 747 made of something other than quarks?  No, you’re just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747.  The map is not the territory.

Why not model the 747 with a chromodynamic representation?  Because then it would take a gazillion years to get any answers out of the model.  Also we could not store the model on all the memory on all the computers in the world, as of 2008.

As the saying goes, “The map is not the territory, but you can’t fold up the territory and put it in your glove compartment.”  Sometimes you need a smaller map to fit in a more cramped glove compartment – but this does not change the territory.  The scale of a map is not a fact about the territory, it’s a fact about the map.

If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions.  Better predictions than the aerodynamic model, in fact.

This means that…

To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift.  There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings.  It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.

“What?” cries the antireductionist.  “Are you telling me the 747 doesn’t really have wings? I can see the wings right there!”

The notion here is a subtle one.  It’s not just the notion that an object can have different descriptions at different levels.

It’s the notion that “having different descriptions at different levels” is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.

It’s not that the airplane itself, the laws of physics themselves, use different descriptions at different levels – as yonder artillery gunner thought.  Rather we, for our convenience, use different simplified models at different levels.

The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.

This, as I see it, is the thesis of reductionism.  Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.  Understanding this on a gut level dissolves the question of “How can you say the airplane doesn’t really have wings, when I can see the wings right there?”  The critical words are really and see.
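To make the map/territory point concrete: the aerodynamic map really does contain a token for lift, as in this sketch (my illustration; the numbers are rough, not Boeing’s):

def lift(air_density, velocity, wing_area, lift_coefficient):
    # The standard lift equation, L = 1/2 * rho * v^2 * A * C_L.
    # "lift" is a token in this model; the territory contains no such
    # token, only quarks doing what quarks do.
    return 0.5 * air_density * velocity**2 * wing_area * lift_coefficient

# Very rough cruise-altitude numbers for a 747 (illustrative only):
print(lift(air_density=0.38, velocity=250.0,
           wing_area=511.0, lift_coefficient=0.5))  # ~3e6 newtons

The in-principle chromodynamic model of the same airplane would contain no lift variable at all, and it would also be the one that takes a gazillion years to run.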



JS Allen May 9, 2011 at 10:02 am

Actually, sexiness is a property of the woman, much like woman.weight or woman.height. It is ((woman.waistsize / woman.hipsize) == 0.8)


Luke Muehlhauser May 9, 2011 at 10:46 am

JS Allen,

Well, not quite. But it is a widely desired trait. :)


Moshe Zadka May 10, 2011 at 3:43 pm

JS Allen: There is a function, which we can call func_2787, which is defined thusly: func_2787(woman)=woman.waistsize/woman.hipsize==.8
If the alien evaluates that function, it will get the same result as you. The reason we say “sexiness is in the mind” is because *you* happen to evaluate that function when choosing a flirtation target. The alien happens not to know or care about this function: Glurb selects flirtation targets based on func_2899, whose description is irrelevant here except to say that it has little to negative correlation with func_2787. We can call func_2787 “JS Allen.sexiness_function”, and it probably correlates with Moshe.sexiness_function and Luke.sexiness_function, since we’re all (I assume) adult males from a similar cultural background.


JS Allen May 10, 2011 at 5:21 pm

Hi Moshe,

Obviously I was joking, but I will point out that the 0.8 ratio is not culture-specific; it’s a human universal.


mojo.rhythm May 11, 2011 at 7:34 pm

JS Allen,

Doesn’t Jessica Alba have the 0.8 ratio?


JS Allen May 11, 2011 at 8:35 pm

@mojo.rhythm – Google says she’s a 0.7, and it also turns out I was totally wrong — the ratio varies by culture.

