Reading Yudkowsky, part 21

by Luke Muehlhauser on March 18, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 148th post is Fake Justification:

Many Christians who’ve stopped really believing now insist that they revere the Bible as a source of ethical advice.  The standard atheist reply is given by Sam Harris:  “You and I both know that it would take us five minutes to produce a book that offers a more coherent and compassionate morality than the Bible does.”  Similarly, one may try to insist that the Bible is valuable as a literary work.  Then why not revere Lord of the Rings, a vastly superior literary work?  And despite the standard criticisms of Tolkien’s morality, Lord of the Rings is at least superior to the Bible as a source of ethics.  So why don’t people wear little rings around their neck, instead of crosses? …

“How can you justify buying a $1 million gem-studded laptop,” you ask your friend, “when so many people have no laptops at all?”  And your friend says, “But think of the employment that this will provide – to the laptop maker, the laptop maker’s advertising agency – and then they’ll buy meals and haircuts – it will stimulate the economy and eventually many people will get their own laptops.”  But it would be even more efficient to buy 5,000 OLPC laptops, thus providing employment to the OLPC manufacturers and giving out laptops directly.

Let me guess:  Yes, you admit that you originally decided you wanted to buy a million-dollar laptop by thinking, “Ooh, shiny”.  Yes, you concede that this isn’t a decision process consonant with your stated goals.  But since then, you’ve decided that you really ought to spend your money in such fashion as to provide laptops to as many laptopless wretches as possible.  And yet you just couldn’t find any more efficient way to do this than buying a million-dollar diamond-studded laptop – because, hey, you’re giving money to a laptop store and stimulating the economy!  Can’t beat that!

My friend, I am damned suspicious of this amazing coincidence.  I am damned suspicious that the best answer under this lovely, rational, altruistic criterion X, is also the idea that just happened to originally pop out of the unrelated indefensible process Y.  If you don’t think that rolling dice would have been likely to produce the correct answer, then how likely is it to pop out of any other irrational cognition?

It’s improbable that you used mistaken reasoning, yet made no mistakes.
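
The argument has a simple probabilistic core, and it’s worth making explicit. Here is a minimal Monte Carlo sketch in Python (the option count and everything else in it are my own illustrative assumptions, not figures from the post) showing that if process Y is uncorrelated with criterion X, Y’s favorite is no more likely to be X-optimal than a roll of the dice:

```python
import random

# Toy model of the "amazing coincidence" argument. All numbers are
# illustrative assumptions, not figures from Yudkowsky's post.

N_OPTIONS = 1000   # hypothetical candidate ways to spend the money
TRIALS = 100_000   # Monte Carlo samples

# Criterion X assigns each option a score; exactly one option is X-optimal.
x_scores = [random.random() for _ in range(N_OPTIONS)]
best_under_x = max(range(N_OPTIONS), key=lambda i: x_scores[i])

# Process Y ("Ooh, shiny") picks for reasons unrelated to X, modeled
# here as a uniform random draw over the same options.
hits = sum(random.randrange(N_OPTIONS) == best_under_x for _ in range(TRIALS))

print(f"P(Y's pick is X-optimal) ~ {hits / TRIALS:.4f} (exact: {1 / N_OPTIONS})")
```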

An Alien God opens with a familiar point:

In the days before Darwin, the cause of all this apparent purposefulness was a very great puzzle unto science.  The Goddists said “God did it”, because you get 50 bonus points each time you use the word “God” in a sentence.  Yet perhaps I’m being unfair.  In the days before Darwin, it seemed like a much more reasonable hypothesis.  Find a watch in the desert, said William Paley, and you can infer the existence of a watchmaker.

But when you look at all the apparent purposefulness in Nature, rather than picking and choosing your examples, you start to notice things that don’t fit the Judeo-Christian concept of one benevolent God. Foxes seem well-designed to catch rabbits.  Rabbits seem well-designed to evade foxes.  Was the Creator having trouble making up Its mind?

When I design a toaster oven, I don’t design one part that tries to get electricity to the coils and a second part that tries to prevent electricity from getting to the coils.  It would be a waste of effort.  Who designed the ecosystem, with its predators and prey, viruses and bacteria?  Even the cactus plant, which you might think well-designed to provide water fruit to desert animals, is covered with inconvenient spines.

The ecosystem would make much more sense if it wasn’t designed by a unitary Who, but, rather, created by a horde of deities – say from the Hindu or Shinto religions.  This handily explains both the ubiquitous purposefulnesses, and the ubiquitous conflicts:  More than one deity acted, often at cross-purposes.

However, it ends up being a superb introduction to some key points about evolution, as does The Wonder of Evolution. Evolutions Are Stupid (But Work Anyway) explains some limitations of evolution.

Natural Selection’s Speed Limit and Complexity Bound was later summarized at the Less Wrong Wiki, and that summary is recommended instead of the original post.

Beware of Stephen J. Gould explains how Gould wrote his 1996 book Full House as if the Williams revolution of the 1960s had never occurred.

The Tragedy of Group Selectionism argues against group selection.

Fake Selfishness opens:

Once upon a time, I met someone who proclaimed himself to be purely selfish, and told me that I should be purely selfish as well.  I was feeling mischievous that day, so I said, “I’ve observed that with most religious people, at least the ones I meet, it doesn’t matter much what their religion says, because whatever they want to do, they can find a religious reason for it.  Their religion says they should stone unbelievers, but they want to be nice to people, so they find a religious justification for that instead.  It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior, because whenever they want to be nice to people, they can rationalize it in selfish terms.”

After some debate, it ends:

[I asked] “Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do?  Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?”

And the one said, “You may be right about that last part,” so I marked him down as intelligent.

Fake Morality exposes divine command ethics:

Suppose Omega makes a credible threat that if you ever step inside a bathroom between 7AM and 10AM in the morning, he’ll kill you. Would you be panicked by the prospect of Omega withdrawing his threat?  Would you cower in existential terror and cry:  “If Omega withdraws his threat, then what’s to keep me from going to the bathroom?”  No; you’d probably be quite relieved at your increased opportunity to, ahem, relieve yourself.

Which is to say:  The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not.  If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.

[To religious readers: ] it may be that you will someday lose your faith: and on that day, you will not lose all sense of moral direction.  For if you fear the prospect of God not punishing some deed, that is a moral compass.  You can plug that compass directly into your decision system and steer by it.  You can simply not do whatever you are afraid God may not punish you for doing.  The fear of losing a moral compass is itself a moral compass.  Indeed, I suspect you are steering by that compass, and that you always have been.

Fake Optimization Criteria opens:

I’ve previously dwelt in considerable length upon forms of rationalization whereby our beliefs appear to match the evidence much more strongly than they actually do.  And I’m not overemphasizing the point, either.  If we could beat this fundamental metabias and see what every hypothesis really predicted, we would be able to recover from almost any other error of fact.

The mirror challenge for decision theory is seeing which option a choice criterion really endorses.  If your stated moral principles call for you to provide laptops to everyone, does that really endorse buying a $1 million gem-studded laptop for yourself, or spending the same money on shipping 5000 OLPCs?

We seem to have evolved a knack for arguing that practically any goal implies practically any action.  A phlogiston theorist explaining why magnesium gains weight when burned has nothing on an Inquisitor explaining why God’s infinite love for all His children requires burning some of them at the stake.

So if we humans are terrible at this kind of reasoning, what can we do?

Where is the standardized, open-source, generally intelligent, consequentialist optimization process into which we can feed a complete morality as an XML file, to find out what that morality really recommends when applied to our world?  Is there even a single real-world case where we can know exactly what a choice criterion recommends?  Where is the pure moral reasoner – of known utility function, purged of all other stray desires that might distort its optimization – whose trustworthy output we can contrast to human rationalizations of the same utility function?

Unfortunately, that’s where I lose his chain of thought.
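
Still, the shape of what he’s asking for is easy to caricature in code, even if building the real thing is not. Here is a toy sketch in Python (my own illustration, not anything Yudkowsky specifies; the options and figures just echo the laptop example above) of a reasoner whose only desire is an explicitly stated criterion:

```python
from dataclasses import dataclass

# A toy "pure moral reasoner": feed it an explicit criterion and a list
# of options, and it reports what the criterion actually endorses.
# Everything below is a made-up illustration, not Yudkowsky's design.

@dataclass
class Option:
    name: str
    cost: float             # dollars spent (same budget either way)
    laptops_delivered: int  # people who end up with laptops

def utility(o: Option) -> float:
    """The stated criterion: laptops delivered for the fixed budget."""
    return o.laptops_delivered

options = [
    Option("$1 million gem-studded laptop for me", 1_000_000, 1),
    Option("5,000 OLPCs shipped directly", 1_000_000, 5_000),
]

endorsed = max(options, key=utility)
print(f"The stated criterion endorses: {endorsed.name}")
```

Trivial as it is, the sketch shows why such a reasoner would be useful as a check on rationalization: once the criterion is written down and the optimizer has no stray desires, there is no room for the “Ooh, shiny” step to smuggle its answer back in as the “most efficient” option.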

Adaptation-Executers, Not Fitness-Maximizers clears up another confusion about evolution:

Fifty thousand years ago, the taste buds of Homo sapiens directed their bearers to the scarcest, most critical food resources – sugar and fat.  Calories, in a word.  Today, the context of a taste bud’s function has changed, but the taste buds themselves have not.  Calories, far from being scarce (in First World countries), are actively harmful.  Micronutrients that were reliably abundant in leaves and nuts are absent from bread, but our taste buds don’t complain.  A scoop of ice cream is a superstimulus, containing more sugar, fat, and salt than anything in the ancestral environment.

No human being with the deliberate goal of maximizing their alleles’ inclusive genetic fitness, would ever eat a cookie unless they were starving.  But individual organisms are best thought of as adaptation-executers, not fitness-maximizers.


{ 7 comments }

Adam March 18, 2011 at 9:13 am

Eliezer’s answer is Yudkowsky, and that’s what I lose his chain of thought.

This sentence confuses me. Copied and pasted the wrong phrase maybe?


Garren March 18, 2011 at 9:17 am

Yes, I’ve taken a similar stance about divine command ethics (and also categorical imperatives, which are a different concept).

Imagine there are no divine commands or divine attitudes; or imagine there are no categorical imperatives. What terrible consequences would follow? W.L. Craig would say — as he does in Reasonable Faith — something like, “People might have no qualms about performing vivisections on pregnant women.” Congratulations, we’ve just identified a justification for morality, i.e. giving people qualms about performing vivisections on pregnant women. From here, we can generalize out to broader justifications for morality.


Luke Muehlhauser March 18, 2011 at 12:37 pm

Adam,

Oops! Fixed.


Haukur March 18, 2011 at 2:25 pm

Eliezer’s answer is Yudkowsky, and that’s what I lose his chain of thought.

Unfortunately, that’s what I lose his chain of thought.

The first version actually made sense to me – I thought you were saying that EY’s idea of an answer to this difficult problem was, well, himself – and that you found it hard to follow him there. That seemed like perfectly plausible criticism!

(Both versions look like they’d be improved by changing ‘what’ to ‘when’.)


MarkD March 18, 2011 at 4:31 pm

A strong position against multilevel selection is premature, whether considering eusocial mammals or proposed heterozygous defect expurgation mechanisms (fragile males), and there are reasonable arguments in the social sphere as well. EY was mostly arguing against original formulations as the modern synthesis gelled, admittedly.


David Marshall March 18, 2011 at 5:25 pm

I don’t know, he seems flippant and rather glib.

Lord of the Rings is “vastly superior” to the Bible, both as a literary work, and for its morality? Tolkien didn’t think so. Yudkowsky should read Tolkien’s On Fairy-Stories.


cl March 22, 2011 at 3:10 am

David Marshall,

I don’t know, he seems flippant and rather glib.

Duh! It’s Eliezer “The Superior Rationalist” Yudkowsky, Luke’s “Jewish Messiah” [according to the wit of Haukur]. I mean, come on: this is the same guy who hated on smart people who played the lottery, as if playing the lottery somehow belied one’s understanding of probability.

