Reading Yudkowsky, part 11

by Luke Muehlhauser on January 28, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to improve their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 60th post is a must-read, and very entertaining. Focus Your Uncertainty begins:

Will bond yields go up, or down, or remain the same? If you’re a TV pundit and your job is to explain the outcome after the fact, then there’s no reason to worry. No matter which of the three possibilities comes true, you’ll be able to explain why the outcome perfectly fits your pet market theory. There’s no reason to think of these three possibilities as somehow opposed to one another, as exclusive, because you’ll get full marks for punditry no matter which outcome occurs.

But wait! Suppose you’re a novice TV pundit, and you aren’t experienced enough to make up plausible explanations on the spot. You need to prepare remarks in advance for tomorrow’s broadcast, and you have limited time to prepare. In this case, it would be helpful to know which outcome will actually occur – whether bond yields will go up, down, or remain the same – because then you would only need to prepare one set of excuses.
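
The post’s eventual moral is that limited preparation time forces the novice pundit to treat the outcomes as exclusive and divide his anticipation among them in proportion to how likely each seems. A minimal sketch of that allocation, with invented numbers:

    # Divide limited prep time across mutually exclusive outcomes in
    # proportion to subjective probability (all numbers are invented).
    prep_minutes = 90
    beliefs = {"yields rise": 0.5, "yields fall": 0.3, "yields unchanged": 0.2}
    for outcome, prob in beliefs.items():
        print(f"{outcome}: {prob * prep_minutes:.0f} minutes of prep")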

In The Proper Use of Doubt, Yudkowsky reminds us that we often profess doubt and wear it as attire for the sake of group identification, not because we actually want to doubt what is dear to us:

If you don’t really doubt something, why would you pretend that you do?

Because we all want to be seen as rational – and doubting is widely believed to be a virtue of a rationalist.  But it is not widely understood that you need a particular reason to doubt, or that an unresolved doubt is a null-op.  Instead people think it’s about modesty, a submissive demeanor, maintaining the tribal status hierarchy…  Making a great public display of doubt to convince yourself that you are a rationalist, will do around as much good as wearing a lab coat.

To avoid professing doubts, remember:

  • A rational doubt exists to destroy its target belief, and if it does not destroy its target it dies unfulfilled.
  • A rational doubt arises from some specific reason the belief might be wrong…
  • An uninvestigated doubt might as well not exist.
  • You should not be proud of mere doubting, although you can justly be proud when you have just finished tearing a cherished belief to shreds.
  • Though it may take courage to face your doubts, never forget that to an ideal mind doubt would not be scary in the first place.

The Virtue of Narrowness resists our tendency to try “to broaden a word as widely as possible, to cover as much territory as possible.” Sarcastically, Yudkowsky writes:

It is perfectly all right for modern evolutionary biologists to explain just the patterns of living creatures, and not the “evolution” of stars or the “evolution” of technology.  Alas, some unfortunate souls use the same word “evolution” to cover the naturally selected patterns of replicating life, and the strictly accidental structure of stars, and the intelligently configured structure of technology.  And as we all know, if people use the same word, it must all be the same thing.  You should automatically generalize anything you think you know about biological evolution to technology.  Anyone who tells you otherwise must be a mere pointless pedant.  It couldn’t possibly be that your abysmal ignorance of modern evolutionary theory is so total that you can’t tell the difference between a carburetor and a radiator.  That’s unthinkable.  No, the other guy – you know, the one who’s studied the math – is just too dumb to see the connections.

You Can Face Reality is a poem:

What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.
– Eugene Gendlin

The Apocalypse Bet takes up a peculiar problem with predicting global catastrophe: how do you structure a bet on the end of the world when the winner won’t be around to collect?

Your Strength as a Rationalist recounts a time when Eliezer “fell down” as a rationalist:

Your strength as a rationalist is your ability to be more confused by fiction than by reality.  If you are equally good at explaining any outcome, you have zero knowledge.

We are all weak, from time to time; the sad part is that I could have been stronger.  I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it.  My feeling of confusion was a Clue, and I threw my Clue away.

In I Defy the Data, Yudkowsky reminds us that experiments can be wrong:

One of the great weaknesses of Science is this mistaken idea that if an experiment contradicts the dominant theory, we should throw out the theory instead of the experiment.

Experiments can go awry.  They can contain design flaws. They can be deliberately corrupted.  They can be unconsciously corrupted.  They can be selectively reported.  Most of all, 1 time in 20 they can be “statistically significant” by sheer coincidence, and there are a lot of experiments out there.

Someone once presented me with a new study on the effects of intercessory prayer (that is, people praying for patients who are not told about the prayer), which showed 50% of the prayed-for patients achieving success at in-vitro fertilization, versus 25% of the control group.  I liked this claim.  It had a nice large effect size.  Claims of blatant impossible effects are much more pleasant to deal with than claims of small impossible effects that are “statistically significant”.

So I cheerfully said:  “I defy the data.”

…Soon enough we heard that [the prayer study] had been retracted and was probably fraudulent.  But I didn’t say fraud.  I didn’t speculate on how the results might have been obtained.  That would have been dismissive. I just stuck my neck out, and nakedly, boldly, without excuses, defied the data.

…You can defy the data on one experiment.  You can’t defy the data on multiple experiments.  At that point you either have to relinquish the theory or dismiss the data – point to a design flaw, or refer to an even larger body of experiments that failed to replicate the result, or accuse the researchers of a deliberate hoax, et cetera.
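
Yudkowsky’s “1 time in 20” remark is just the conventional p < 0.05 significance threshold, and a quick back-of-the-envelope calculation (mine, not his) shows why “a lot of experiments out there” matters so much:

    # Chance of at least one spurious "statistically significant" result
    # across n independent experiments, when every effect tested is in
    # fact nonexistent and the threshold is alpha = 0.05.
    alpha = 0.05
    for n in (1, 20, 100):
        print(f"{n:3d} experiments -> P(some false positive) = {1 - (1 - alpha) ** n:.2f}")
    # 1 -> 0.05, 20 -> 0.64, 100 -> 0.99

With a hundred null experiments, at least one coincidental “significant” result is a near-certainty.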

In another controversial post, Yudkowsky declares Absence of Evidence is Evidence of Absence. Specifically, he gives a Bayesian explanation for why Earl Warren was wrong:

Post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War.  When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time.  Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed… I believe we are just being lulled into a false sense of security.”
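
The Bayesian point can be made concrete with a toy calculation; the numbers below are illustrative assumptions of mine, not figures from the post. If a fifth column would make early sabotage at least somewhat likely, then observing no sabotage must lower the probability of a fifth column, not raise it:

    # Toy Bayes update: absence of evidence as evidence of absence.
    # All probabilities are invented for illustration.
    prior = 0.30              # P(fifth column)
    p_sab_if_fc = 0.80        # P(sabotage | fifth column)
    p_sab_if_none = 0.05      # P(sabotage | no fifth column)

    # Update on the actual observation: no sabotage occurred.
    p_quiet = (1 - p_sab_if_fc) * prior + (1 - p_sab_if_none) * (1 - prior)
    posterior = (1 - p_sab_if_fc) * prior / p_quiet
    print(f"P(fifth column | no sabotage) = {posterior:.3f}")  # ~0.083, down from 0.30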

Conservation of Expected Evidence explains:

Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis (‘prudence in criminal cases’) in which he bitingly described the decision tree for condemning accused witches:  If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away.

Spee acted as confessor to many witches; he was thus in a position to observe every branch of the accusation tree, that no matter what the accused witch said or did, it was held a proof against her.  In any individual case, you would only hear one branch of the dilemma.  It is for this reason that scientists write down their experimental predictions in advance.

But you can’t have it both ways – as a matter of probability theory, not mere fairness.  The rule that “absence of evidence is evidence of absence” is a special case of a more general law, which I would name Conservation of Expected Evidence:  The expectation of the posterior probability, after viewing the evidence, must equal the prior probability.

This means it’s okay if your theory takes a few hits, because “for every expectation of evidence, there is an equal and opposite expectation of counterevidence.” You just have to Update Yourself Incrementally.
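
A few more lines of Python, reusing the illustrative numbers from the sketch above, verify the conservation law numerically: the probability-weighted average of the two possible posteriors comes out equal to the prior.

    # Conservation of Expected Evidence: E[posterior] = prior.
    prior = 0.30
    p_e_if_h, p_e_if_not_h = 0.80, 0.05                   # P(E|H), P(E|~H)

    p_e = p_e_if_h * prior + p_e_if_not_h * (1 - prior)   # P(E)
    post_e = p_e_if_h * prior / p_e                       # P(H|E)
    post_not_e = (1 - p_e_if_h) * prior / (1 - p_e)       # P(H|~E)

    expected_posterior = post_e * p_e + post_not_e * (1 - p_e)
    print(f"{expected_posterior:.6f}")  # 0.300000 -- exactly the prior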

6 comments

Beelzebub January 29, 2011 at 3:04 am

Thanks for the listing. I loved Absence of Evidence is Evidence of Absence. It puts on a firm foundation exactly why failing to observe something should, all else being equal, give more weight to the proposition that it doesn’t exist than to the proposition that you simply haven’t observed it yet.

Beelzebub January 29, 2011 at 3:09 am

Explicitly, the probability that you haven’t observed something because it doesn’t exist will always be greater than the probability you haven’t observed it because “insert-reason-here.”

Polymeron January 29, 2011 at 4:31 am

I too like “Absence of Evidence…”.

As an aside to that post, I did follow the Japanese internment example and got some interesting conclusions.

Yudkowsky writes that either a Japanese insurrection would have fit a fifth column better than the alternative, or its absence would; it can’t be both. He’s right, of course. And yet, I can justify the governor’s reasoning by both adding in utility and breaking down the options a bit.

Consider the following options:
1. There is no Japanese fifth column, posing 0 danger.
2. There is a disorganized Japanese fifth column, posing some danger.
3. There is an organized Japanese fifth column, posing great danger, which increases with time.

In case #1, locking up the Japanese avoids no harm.
In case #2, locking up the Japanese avoids some harm.
In case #3, locking up the Japanese avoids great harm.
In all 3 cases, taking the action causes some harm – for simplicity we’ll assume it’s the same across cases. So now we only need to consider, for each case, its probability times the harm avoided – that is, the expected utility.

If there were a Japanese insurrection, as per the law of conservation of expected evidence, we would see the probability of #2 rise at the expense of the other two, mostly #1. This greatly raises the expected utility of locking the Japanese up. So far, mostly what we would expect.

If there is no Japanese insurrection, the probability of #2 goes down dramatically. Most of its probability goes to #1, but some goes to #3, which also rises in utility because the danger it poses increases with time. In the final tally, the expected utility of locking up the Japanese still rises – the rise in probability and severity of #3 counteracts the drop in probability of #2, and overall utility could in fact be greater than in the other scenario.

So, the governor’s reasoning could still make sense. He just needed to express it differently: “Yes, the probability of a fifth column is lower because of this evidence; but the expected danger, even at this lowered probability, is greater overall than otherwise.” If his position was indeed that a fifth column is more likely, then obviously that is wrong.

So, a Bayesian intelligence could still conclude that in the future internment would be justified under either outcome. Of course, a Bayesian intelligence would also try to devise a test that has good chances of falsifying the alternative hypotheses rather than rely on such crude methods.

(Putting the Japanese in internment camps was still unconscionable. I think I just weigh the harm caused by this action a lot more heavily than the governor did.)
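
Polymeron’s tally can be made concrete with a short sketch; every probability and harm figure below is a hypothetical assumption chosen to illustrate the argument, not a number from the comment:

    # Hypothetical sketch of Polymeron's expected-harm tally.
    # All numbers are invented for illustration.
    prior = [0.50, 0.35, 0.15]     # P(#1 none), P(#2 disorganized), P(#3 organized)
    p_quiet = [1.00, 0.10, 1.00]   # P(no insurrection | case); #3 deliberately lies low

    # Bayes update on observing no insurrection
    z = sum(p * q for p, q in zip(prior, p_quiet))
    posterior = [p * q / z for p, q in zip(prior, p_quiet)]

    harm_avoided = [0.0, 10.0, 50.0]        # harm internment would avoid, per case
    harm_avoided_later = [0.0, 10.0, 70.0]  # case #3's danger grows with time

    before = sum(p * h for p, h in zip(prior, harm_avoided))
    after = sum(p * h for p, h in zip(posterior, harm_avoided_later))
    print(f"expected harm avoided: before {before:.1f}, after {after:.1f}")
    # before 11.0, after ~15.8

Under these made-up numbers the total probability of a fifth column falls from 0.50 to about 0.27, yet the expected harm avoided by internment rises, which is exactly the structure of Polymeron’s argument.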

Beelzebub January 29, 2011 at 5:06 am

Yes, I follow what you’re saying. The example I had in mind was the likelihood of God, so applying your point to that seems to have a certain “Pascalianism” to it. The best choice, considering the wager, is to force yourself to believe, because that has the highest utility. However, utility has no relevance to truth value. For instance, high religious utility has no bearing on truth.

But yes, I see what you mean regarding the Warren example.

Paul Carter January 29, 2011 at 8:17 pm

I found your blog linked from the review you posted on the Blackwell Companion to Nat. Theol.

I appreciate the top-notch content produced herein; good job to you and your thoughtful readers.

I am curious whether you have ever posted your reading list. I would be interested, and I think others would be as well, to see the list of books you have read. I know that would probably be a time-consuming task, but your readers might be interested to review it.

Thanks and take care.

Luke Muehlhauser January 29, 2011 at 8:30 pm

Paul Carter,

Closest things I’ve done to a reading list are here and here.
