Reading Yudkowsky, part 58

by Luke Muehlhauser on July 25, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 537th post is a news post, and then comes some excellent advice in Back Up and Ask Whether, Not Why:

A recent conversation reminded me of this simple, important, and difficult method:

When someone asks you “Why are you doing X?”,
And you don’t remember an answer previously in mind,
Do not ask yourself “Why am I doing X?”.

For example, if someone asks you
“Why are you using a QWERTY keyboard?” or “Why haven’t you invested in stocks?”
and you don’t remember already considering this exact question and deciding it,
do not ask yourself “Why am I using a QWERTY keyboard?” or “Why aren’t I invested in stocks?”

Instead, try to blank your mind – maybe not a full-fledged crisis of faith, but at least try to prevent your mind from knowing the answer immediately – and ask yourself:

“Should I do X, or not?”

Should I use a QWERTY keyboard, or not?  Should I invest in stocks, or not?

When you finish considering this question, print out a traceback of the arguments that you yourself considered in order to arrive at your decision, whether that decision is to X, or not X.  Those are your only real reasons, nor is it possible to arrive at a real reason in any other way.

The series on AI that I don’t have time to read continues with Recognizing Intelligence and Lawful Creativity, Lawful Uncertainty, Worse Than Random, The Weighted Majority Algorithm, and Selling Nonapples. (There are also a few asides tucked in there: 1, 2, 3, 4.)

Of more universal interest is The Nature of Logic, though unfortunately it is written in a non-universal language, that of computer programming. Then, the AI thread continues with Logical or Connectionist AI, Failure by Analogy, and Failure by Affective Analogy.

Next comes a series of posts concerning “The Hanson-Yudkowsky AI-Foom Debate.”

Far more accessible than these is The Complete Idiot’s Guide to Ad Hominem, which quotes Stephen Bond:

In reality, ad hominem is unrelated to sarcasm or personal abuse.  Argumentum ad hominem is the logical fallacy of attempting to undermine a speaker’s argument by attacking the speaker instead of addressing the argument.  The mere presence of a personal attack does not indicate ad hominem: the attack must be used for the purpose of undermining the argument, or otherwise the logical fallacy isn’t there.

[...]

A: “All rodents are mammals, but a weasel isn’t a rodent, so it can’t be a mammal.”
B: “You evidently know nothing about logic. This does not logically follow.”

B’s argument is still not ad hominem.  B does not imply that A’s sentence does not logically follow because A knows nothing about logic.  B is still addressing the substance of A’s argument…
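
As an aside of my own: the reason A’s argument fails is the classic fallacy of denying the antecedent. “Rodent implies mammal” plus “not a rodent” licenses no conclusion about “mammal” at all. A quick brute-force check in Python (my own illustration, not anything from Bond or the post) makes the point:

from itertools import product

# Premises: every rodent is a mammal, and the weasel is not a rodent.
# Enumerate every truth assignment for (rodent, mammal) consistent with them.
consistent = [
    (rodent, mammal)
    for rodent, mammal in product([True, False], repeat=2)
    if (not rodent or mammal)   # "all rodents are mammals" (rodent -> mammal)
    and not rodent              # "a weasel isn't a rodent"
]

print(consistent)   # [(False, True), (False, False)]
# "mammal" can still come out either way, so "it can't be a mammal" does not
# follow from the premises, which is exactly the flaw B is pointing at.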

And then the Singularity and AI stuff that I’m not reading for now continues.

(There are also a couple asides tucked in: Thanksgiving Prayer and a meetup.)

Also tucked in there is The Mechanics of Disagreement:

Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem.  If two rationalist-wannabes have common knowledge of a disagreement between them, what could be going wrong?

The obvious interpretation of these theorems is that if you know that a cognitive machine is a rational processor of evidence, its beliefs become evidence themselves.

If you design an AI and the AI says “This fair coin came up heads with 80% probability”, then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads – because the AI only emits that statement under those circumstances.

It’s not a matter of charity; it’s just that this is how you think the other cognitive machine works.

And if you tell an ideal rationalist, “I think this fair coin came up heads with 80% probability”, and they reply, “I now think this fair coin came up heads with 25% probability”, and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood of 1:12 favoring tails.

But this assumes that the other mind also thinks that you’re processing evidence correctly, so that, by the time it says “I now think this fair coin came up heads, p=.25”, it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.

If, on the other hand, the other mind doesn’t trust your rationality, then it won’t accept your evidence at face value, and the estimate that it gives won’t integrate the full impact of the evidence you observed.

So does this mean that when two rationalists trust each other’s rationality less than completely, then they can agree to disagree?

It’s not that simple.  Rationalists should not trust themselves entirely, either.
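
To spell out the arithmetic in that coin example, here is a minimal sketch in Python (my own illustration, not code from the post): a fair coin starts at 1:1 odds, each independent piece of evidence multiplies the odds by its likelihood ratio, and the announced probabilities pin down what those ratios must have been.

def prob_to_odds(p):
    # Odds in favor of heads, expressed as the single number heads/tails.
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = prob_to_odds(0.5)               # fair coin: 1.0, i.e. 1:1

# You announce P(heads) = 0.8, so your evidence carried a 4:1 likelihood ratio.
your_ratio = prob_to_odds(0.8) / prior_odds  # 4.0

# The other mind, having already folded in your 4:1, replies P(heads) = 0.25,
# i.e. combined odds of 1:3 in favor of heads.
combined_odds = prob_to_odds(0.25)           # 0.333...

# So its own evidence must have been (1/3) / 4 = 1/12 in favor of heads,
# the 12:1 toward tails ("1:12") that the quote mentions.
their_ratio = combined_odds / (prior_odds * your_ratio)
print(their_ratio)                           # 0.0833... = 1/12

# Accepting the verdict means adopting the combined posterior:
print(odds_to_prob(combined_odds))           # 0.25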



Sniffnoy July 25, 2011 at 4:30 am

“Nature of Logic” links to Selling Nonapples.


Alain July 25, 2011 at 5:12 am

The reason for not asking “why” questions is obvious: they come from the reptile brain. This brain is unsuited to expressing itself lexically. People who confuse feelings with uncertainty make that mistake.
Instead, it is better to rephrase them as open (“how” and “what”) questions:
“What is the reason that…”
“How does … work?”


Luke Muehlhauser July 26, 2011 at 5:51 pm

Sniffnoy,

Thanks!


100 Days Sober July 27, 2011 at 12:25 am

It is true that sometimes an offhand query can trigger in me a deep, soul-searching self-assessment of goals I have been striving after passionately for years.
I remember this guy asking me, when I was 19, why I always sought other people’s approval – and it rocked me to my core. Looking back, I should have recognized his first-year psychology text for what it was. Nothing personal, just him trying to rattle me. And he did.
You have to be passionate enough not to let other people set limits on you.

