Reading Yudkowsky, part 1

by Luke Muehlhauser on November 18, 2010 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to improve their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

Yudkowsky’s first post is The Martial Art of Rationality:

Our minds respond less readily to [training] than our hands. Muscles are evolutionarily ancient subjects of neural control, while cognitive reflectivity is a comparatively more recent innovation. We shouldn’t be surprised that muscles are easier to use than brains. But it is not wise to neglect the latter training because it is more difficult. It is not by bigger muscles that the human species rose to prominence upon Earth.

He notes that in just the past few decades, we’ve learned a ton about human rationality. Much of this comes from the study of our cognitive biases in behavioral economics and experimental psychology. Other results have come from the Bayesian systematization of probability theory. More still has come from evolutionary psychology and from social psychology. Right now is the best time ever to be somebody who cares about becoming more rational.

These fields give us new focusing lenses through which to view the landscape of our own minds.  With their aid, we may be able to see more clearly the muscles of our brains, the fingers of thought as they move.
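(For readers who haven’t met it, the “Bayesian systematization” mentioned above rests on one short formula, Bayes’ theorem, which says how a piece of evidence E should shift your confidence in a hypothesis H:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Here $P(H)$ is your prior confidence in H, and $P(E \mid H)$ measures how strongly H predicts the evidence. Yudkowsky has written a much longer introduction to this formula elsewhere; this is just the one-line version.)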

Next is Why Truth? Yudkowsky suggests some reasons to pursue truth:

  • Curiosity.
  • To achieve a goal, such as building a video chat technology or curing AIDS or investing wisely for retirement.
  • Perhaps morality? Some knowledge can benefit society.

Much of getting at the truth has to do with overcoming biases. What’s a bias, again? A bias is “a certain kind of obstacle to our goal of obtaining truth.”

What kind of obstacle, specifically? Before we answer that we must remember that error is the norm, not the exception:

As the proverb goes, “There are forty kinds of lunacy but only one kind of common sense.” The truth is a narrow target, a small region of configuration space to hit. “She loves me, she loves me not” may be a binary question, but E = mc^2 is a tiny dot in the space of all equations, like a winning lottery ticket in the space of all lottery tickets. Error is not an exceptional condition; it is success which is a priori so improbable that it requires an explanation.

Okay, so what is a bias?

Perhaps we will find that we can’t really give any explanation better than pointing to a few extensional examples, and hoping the listener understands. If you are a scientist just beginning to investigate fire, it might be a lot wiser to point to a campfire and say “Fire is that orangey-bright hot stuff over there,” rather than saying “I define fire as an alchemical transmutation of substances which releases phlogiston.” …you should not ignore something just because you can’t define it.

With all that said, we seem to label as “biases” those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery.

Biases are obstacles to truth not caused by the cost of information, or by limited computing power, or by adopted beliefs or moral values, or by brain damage. Instead, “biases arise from machinery that is humanly universal.”

The next post is The Proper Use of Humility:

It is widely recognized that good science requires some kind of humility. What sort of humility is more controversial.

Consider the creationist who says: “But who can really know whether evolution is correct? It is just a theory. You should be more humble and open-minded.” Is this humility?

After surveying some examples of “good” and “bad” humility, Yudkowsky writes:

The vast majority of appeals that I witness to “rationalist’s humility” are excuses to shrug. The one who buys a lottery ticket, saying, “But you can’t know that I’ll lose.” The one who disbelieves in evolution, saying, “But you can’t prove to me that it’s true.” The one who refuses to confront a difficult-looking problem, saying, “It’s probably too hard to solve.” The problem is motivated skepticism aka disconfirmation bias – more heavily scrutinizing assertions that we don’t want to believe. Humility, in its most commonly misunderstood form, is a fully general excuse not to believe something; since, after all, you can’t be sure. Beware of fully general excuses!

He concludes:

To be humble is to take specific actions in anticipation of your own errors.  To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.

He then tackles a very hot topic in epistemology, The Modesty Argument:

When you disagree with someone, even after talking over your reasons, the Modesty Argument claims that you should each adjust your probability estimates toward the other’s, and keep doing this until you agree.  The Modesty Argument is inspired by Aumann’s Agreement Theorem, a very famous and oft-generalized result which shows that genuine Bayesians literally cannot agree to disagree; if genuine Bayesians have common knowledge of their individual probability estimates, they must all have the same probability estimate.
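For concreteness, here is a standard way to state Aumann’s 1976 result (my paraphrase, not Yudkowsky’s): if two agents share a common prior $P$ and receive different private information, and if it is common knowledge between them that agent 1’s posterior probability for some event $A$ is $q_1$ while agent 2’s is $q_2$, then

$$q_1 = q_2.$$

The “common knowledge” condition is doing the real work: each agent knows the other’s estimate, knows that the other knows their own, and so on without end.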

Yudkowsky disagrees with the Modesty Argument:

If I have five different accurate maps of a city, they will all be consistent with each other.  Some philosophers, inspired by this, have held that “rationality” consists of having beliefs that are consistent among themselves.  But, although accuracy necessarily implies consistency, consistency does not necessarily imply accuracy.  If I sit in my living room with the curtains drawn, and make up five maps that are consistent with each other, but I don’t actually walk around the city and make lines on paper that correspond to what I see, then my maps will be consistent but not accurate.  When genuine Bayesians agree in their probability estimates, it’s not because they’re trying to be consistent – Aumann’s Agreement Theorem doesn’t invoke any explicit drive on the Bayesians’ part to be consistent.  That’s what makes AAT surprising!  Bayesians only try to be accurate; in the course of seeking to be accurate, they end up consistent.  The Modesty Argument, that we can end up accurate in the course of seeking to be consistent, does not necessarily follow.

He gives other arguments, too.

Because I say “I don’t know” pretty often, and because I’m tempted to say it’s a morally good thing to do so in many circumstances, Yudkowsky’s post “I don’t know” is the earliest of his posts to really smack me upside the head and perhaps change my mind. The post records an online chat he had with somebody else (person X):

X: it still seems that saying “i don’t know” in some situations is better than giving your best guess…

Eliezer: in real life, you have to choose, and bet, at some betting odds

Eliezer: i.e., people who want to say “I don’t know” for cryonics still have to sign up or not sign up, and they’ll probably do the latter

Eliezer: “I don’t know” is usually just a screen that people think is defensible and unarguable before they go on to do whatever they feel like, and it’s usually the wrong thing because they refused to admit to themselves what their guess was, or examine their justifications, or even realize that they’re guessing

X: how many apples are in a tree outside?

X: i’ve never seen it and neither have you

Eliezer: 10 to 1000

Eliezer: if you offer to bet me a million dollars against one dollar that the tree outside has fewer than 20 apples, when neither of us have seen it, I will take your bet…

Eliezer: therefore I have assigned a nonzero and significant probability to apples < 20 whether I admit it or not

Eliezer: the first thing to decide is, are you trying to accomplish something for yourself (like not getting in trouble) or are you trying to improve someone else’s picture of reality

Eliezer: “I don’t know” is often a good way of not getting in trouble
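To unpack the arithmetic behind the apple bet (my reconstruction, reading the offered odds as: Eliezer stakes $1 and wins $1,000,000 if the tree has fewer than 20 apples), accepting is rational exactly when his probability $p$ for “fewer than 20 apples” satisfies

$$p \cdot 1{,}000{,}000 - (1 - p) \cdot 1 > 0, \qquad \text{i.e.} \qquad p > \frac{1}{1{,}000{,}001} \approx 10^{-6}.$$

So taking the bet reveals a probability assignment of at least about one in a million, whether or not one admits to having a guess.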

It’s interesting to see how Yudkowsky stubbornly defends the claim that “I don’t know” is never really a good answer. I’m not sure he’s right, but he presents a fair case.

It is odd that in the end, Person X asks him: “What about people who don’t know about ignorance priors?” (That is, 99.9% of all people living today, I would bet.)
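(For the unfamiliar: an “ignorance prior” is a probability distribution chosen to encode as little information as possible. Over a finite set of $n$ mutually exclusive possibilities, the standard choice is the uniform distribution,

$$P(X = x_i) = \frac{1}{n},$$

which is the maximum-entropy distribution on that set. That gloss is mine, not part of the original exchange.)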

Yudkowsky responds: “Sometimes you just can’t save people from themselves.”

Rather than take that view, I wonder if it’s more helpful to say that sometimes “I don’t know” is a good answer.

22 comments

Tshepang Lekhonkhobe November 18, 2010 at 3:40 pm

I’m glad you are covering this series here. Posts on Less Wrong tend to be very heavy (brains of writers too large) for me to bear, and making it more digestible to mere mortals like me is really a good favour. Thanks…

Luke Muehlhauser November 18, 2010 at 4:00 pm

Tshepang Lekhonkhobe,

That’s a bit of what I’m trying to do, but alas, I’m not – for example – going to rewrite Yudkowsky’s introduction to Bayes. That’s a different project! :)

Joel November 18, 2010 at 4:00 pm

Yudkowsky has a good post explaining Bayesian probability on Yudkowsky.net. I would recommend it to people unfamiliar with Bayes.

Also of note, Yudkowsky is the author of some popular fan fiction that explores, again, rationality. Word of warning: they aren’t terribly good, especially if you value the “show, don’t tell” rule of good literature; his stories tend to be author tracts à la Anthem or The Fountainhead.

Steven November 18, 2010 at 4:03 pm

It’s posts like these that make me wish I had more than a layman’s passing knowledge of Bayesian Probability and all of this other stuff. Still, it’s a pretty interesting read, even if I do have very limited understanding of it.

In consideration of Yudkowsky’s criticism of the “I don’t know” response, I have to (mostly) agree with him. Every single time I’ve uttered “I don’t know”, I was dodging a question and replying dishonestly, since I did have a guess and a sort of rationalization but refused to reveal them to the person questioning me. I think the best example of this was the day my father died: when he didn’t come home for the night, my mother kept asking me if I knew what had happened to him. I kept answering “I don’t know”, even though I suspected that he had been in a grave accident or something of a serious nature, and had legitimate reasons for this speculation. But it was much easier to just answer “I don’t know.”

My only question is – and I realize that it may come from a misunderstanding of what is being discussed – wouldn’t “I don’t know” be a legitimate response to a nonsensical question, such as “Do you know how to pronounce ‘tjkrtsddhdkk’?” Or to a question about something you have no prior knowledge of, say “What sort of organism is a platypus?” when you’ve never heard the name?

Sly November 19, 2010 at 12:26 am

I am STOKED for this series.

Sidenote: Luke, did you ever post the prezi from the talk you did recently?

Kutta November 19, 2010 at 1:19 am

I’m delighted by your endeavor, in great part because I personally read all of it chronologically back then, a bit before the launch of LW. It was almost a complete brain-rewiring experience that left me a great deal exhausted, but also with a distinct feeling of having leveled up.

Alexandros Marinos November 19, 2010 at 2:35 am

First, I should say I’m very happy to see this series. There has been recent discussion on LW of how to expose the sequences to a broader audience, and this is a great way to do that! Hopefully it will keep getting updated and not go into indefinite hiatus like the Kalam one :)

Also, on Eliezer’s Harry Potter fanfic, I’d like to say that it’s more of a love-it-or-hate-it thing, rather than ‘not very good’ as Joel says. It is now one of the 10 most reviewed fanfics on ff.net, it has gotten a great response on Hacker News, and it has been endorsed by people such as David Brin and Eric S. Raymond. That said, it’s been very divisive on DarkLordPotter.com, so it’s not all good either. I think everyone should try reading the first 5 chapters and see how they react. A great many people seem to love it, with ‘this is the best work of fiction I’ve ever read’ not being uncommon in the reviews.

a typo: “I’m not sure he’s”

Luke Muehlhauser November 19, 2010 at 7:29 am

Sly,

No, not yet. Working on it.

Luke Muehlhauser November 19, 2010 at 7:30 am

Alexandros,

Kalam is not in indefinite hiatus. I wrote the next few posts in that series a few weeks ago, they just haven’t been posted yet.

Luke Muehlhauser November 19, 2010 at 7:44 am

Correction: actually, it went up TODAY. Lol.

Charles November 19, 2010 at 12:45 pm

Luke, were you referring to this? Or something else.

[We might say] “In order for something to come into existence, there must be a time t such that the thing exists at t and there is no time t* earlier than t at which the thing exists,” or more simply, “In order for anything to come into existence, there has to be a first moment of its existence.”

Luke Muehlhauser November 19, 2010 at 2:02 pm

Charles,

Yes.

Eugine Nier November 19, 2010 at 4:33 pm

I’m not sure whether the “I don’t know” post reflects Eliezer’s current position, since he has subsequently answered questions with, effectively, “I don’t know”.

For example in this Q&A:

http://lesswrong.com/lw/1lq/less_wrong_qa_with_eliezer_yudkowsky_video_answers/

dan November 19, 2010 at 5:09 pm

What are this fellow’s religious beliefs? Is he an atheist? A Christian? Thanks

Charles November 19, 2010 at 6:36 pm

Why do you want to know?

Luke Muehlhauser November 19, 2010 at 9:47 pm

dan,

Eliezer? He definitely doesn’t believe in deities.

Sly November 20, 2010 at 12:26 am

“Sly, No, not yet. Working on it.”

Ah oh well, eagerly awaiting. =)

Alexandros Marinos November 20, 2010 at 9:15 am

“Correction: actually, it went up TODAY. Lol.”

Yeah, saw it when it went up, shortly after my comment. Reminds me of the guy who posted a thread on Hacker News complaining about the ‘beta’ tag on GMail about a day before it was removed :)

Alexander Kruel November 20, 2010 at 9:37 am

Here is an updated list of all articles from Less Wrong (in chronological order):

http://wiki.lesswrong.com/wiki/Less_Wrong/All_Articles

I mention it because the list linked in your post ends with ‘Visualizing Eutopia’, while there are many more posts by Yudkowsky.

You might also be interested in the following list, which I compiled:

http://lesswrong.com/lw/2un/references_resources_for_lesswrong/

Luke Muehlhauser November 20, 2010 at 9:51 am

Kruel,

Thanks!

Timothy Underwood November 20, 2010 at 8:48 pm

If you have a speculative conjecture in which you put very little confidence, then depending on the context it may be a bad idea to explain your conjecture.

Luke Muehlhauser November 21, 2010 at 12:28 am

Timothy Underwood,

I read your Introduction post. Your central concern is very much a central concern of mine too! The most important part of my talk in Colorado was on exactly that subject, for example. (Though you probably disagree with my conclusions!)
